Introduction

For many applications, ranking DMUs is an important and essential procedure for decision makers in DEA, especially when there are extremely efficient DMUs. In this regard, several methods have been proposed for ranking extremely efficient DMUs; see Andersen and Petersen (1993), Mehrabian et al. (1999) and Jahanshahloo et al. (2004). A DMU is called extremely efficient if it cannot be represented as a linear combination (with nonnegative coefficients) of the remaining DMUs (Charnes et al. 1991). Andersen and Petersen (AP) (Andersen and Petersen 1993) proposed a procedure that ranks an efficient DMU by removing it from the reference set and computing its super-efficiency score. However, the AP model may be infeasible in some cases; it has been shown that super-efficiency DEA models can be infeasible (see Thrall 1996; Seiford and Zhu 1999). Mehrabian et al. (1999) suggested the MAJ model for complete ranking of efficient DMUs, but their approach also fails to be feasible in some cases. To overcome the drawbacks of the AP (Andersen and Petersen 1993) and MAJ (Mehrabian et al. 1999) models, Jahanshahloo et al. (2004) presented a method for ranking extremely efficient DMUs in DEA models with constant and variable returns to scale using the \(L_1\)-norm. Their model is a nonlinear program, which is computationally more demanding to solve. In addition, Jahanshahloo et al. (2004) applied a rather involved procedure to convert the nonlinear model into a linear one, which yields only an approximately optimal solution. Wu and Yan (2010) also suggested an effective transformation to convert the nonlinear model of Jahanshahloo et al. (2004) into a linear model. Recently, Ziari and Raissi (2016) proposed an approach to rank efficient DMUs in DEA based on minimizing the distance of the DMU under evaluation to the efficiency frontier.

Following this trend, the present paper provides an alternative transformation that converts the nonlinear model of Jahanshahloo et al. (2004) into a linear model. The proposed model linearizes the model of Jahanshahloo et al. (2004) in a way that differs from the method of Wu and Yan (2010) and requires fewer auxiliary variables. Furthermore, it is shown that the model obtained with this transformation is equivalent to the nonlinear model and is easier to solve.

The rest of the paper is organized as follows. Section 2 reviews some ranking methods, in particular the model of Jahanshahloo et al. (2004). Section 3 proposes the alternative transformation. Section 4 re-executes the empirical example of Jahanshahloo et al. (2004) for illustration. The last section concludes the study.

Review of some ranking models

In this section, we review some ranking models in data envelopment analysis. In the following subsections, it is assumed that there are n DMUs and that each DMU\(_j\) \((j = 1,\ldots ,n)\) uses an input vector \(X_j = (x_{1j},x_{2j}, \ldots ,x_{mj})\) to produce an output vector \(Y_j = (y_{1j},y_{2j}, \ldots ,y_{sj})\). It is also assumed that \(X_j \ge 0,~~ Y_j \ge 0,~ X_j\ne 0,\) and \(Y_j \ne 0\) for every \(j = 1,\ldots ,n\).

AP model

The ranking method proposed by Andersen and Petersen (1993) is a super-efficiency model. In the AP model, the DMU under evaluation is excluded from the reference set, and its rank is obtained by evaluating it against the remaining units.

The input-oriented AP super-efficiency model under constant returns to scale (CRS) is as follows:

$$\begin{aligned}&\mathrm{min}\;\theta \nonumber \\&\mathrm{s.t.}~~\sum _{\begin{array}{c} j=1,~ j\ne k \end{array}}^{n}\lambda _{j}x_{ij}\le \theta x_{ik},\quad i=1,\ldots ,m\nonumber \\&\quad \;\; \,\, \sum _{\begin{array}{c} j=1,~ j\ne k \end{array}}^{n}\lambda _{j}y_{rj}\ge y_{rk},\quad r=1,\ldots ,s\nonumber \\&\quad \;\; \,\, \lambda _{j}\ge 0,\quad j=1,\ldots ,n,\,\,j\ne k, \end{aligned}$$
(1)

where \(\lambda _j,\ j=1,\ldots ,n,\ j\ne k,\) and \(\theta\) are the variables of the model.
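
Model (1) is a small linear program, and it can be helpful to see it assembled for a solver. The following minimal sketch computes the super-efficiency score \(\theta^*\) of a chosen DMU with scipy.optimize.linprog; the function name ap_super_efficiency and the convention that rows of the arrays X and Y index DMUs are illustrative choices, not part of the original formulation.

```python
import numpy as np
from scipy.optimize import linprog

def ap_super_efficiency(k, X, Y):
    """Input-oriented CRS super-efficiency score theta* of DMU k, model (1).

    X is an (n, m) array of inputs and Y an (n, s) array of outputs,
    with rows indexing DMUs. Returns None if the LP is infeasible,
    which can happen for the AP model.
    """
    n, m = X.shape
    s = Y.shape[1]
    others = [j for j in range(n) if j != k]

    # Decision variables: theta, then lambda_j for every j != k.
    c = np.zeros(1 + len(others))
    c[0] = 1.0                                   # minimize theta

    # sum_j lambda_j * x_ij - theta * x_ik <= 0       (i = 1..m)
    A_in = np.hstack([-X[k].reshape(m, 1), X[others].T])
    # -sum_j lambda_j * y_rj <= -y_rk                 (r = 1..s)
    A_out = np.hstack([np.zeros((s, 1)), -Y[others].T])

    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                  bounds=[(None, None)] + [(0, None)] * len(others),
                  method="highs")
    return res.fun if res.success else None
```

If the linear program is infeasible, which can happen for the AP model as noted below, the sketch simply returns None.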

In addition, the output-oriented AP super-efficiency model under CRS is as follows:

$$\begin{aligned}&\mathrm{max}\;\phi \nonumber \\&\mathrm{s.t.}\,\,\sum _{\begin{array}{c} j=1,\, j\ne k \end{array}}^{n}\lambda _{j}x_{ij}\le x_{ik},\quad i=1,\ldots ,m\nonumber \\&\quad \;\; \,\,\sum _{\begin{array}{c} j=1,\, j\ne k \end{array}}^{n}\lambda _{j}y_{rj}\ge \phi y_{rk},\quad r=1,\ldots ,s\nonumber \\&\quad \;\; \,\,\lambda _{j}\ge 0,\quad j=1,\ldots ,n,\quad j\ne k, \end{aligned}$$
(2)

where \(\lambda _j,\ j=1,\ldots ,n,\ j\ne k,\) and \(\phi\) are the variables of the model.

The main drawbacks of this model are infeasibility and instability for some DMUs. A model is said to be stable if an efficient DMU under evaluation remains efficient after small perturbations of the data.

MAJ model

This ranking model was proposed by Mehrabian et al. (1999) to resolve the infeasibility of the AP model in certain cases. The MAJ model can be expressed as follows:

$$\begin{aligned}&\mathrm{min}\;1+w\nonumber \\&\mathrm{s.t.}\,\,\sum _{\begin{array}{c} j=1,\, j\ne k \end{array}}^{n}\lambda _{j}x_{ij}\le x_{ik}+w,\quad \, i=1,\ldots ,m \nonumber \\&\quad \;\; \,\sum _{\begin{array}{c} j=1,\, j\ne k \end{array}}^{n}\lambda _{j}y_{rj}\ge y_{rk},\quad r=1,\ldots ,s\nonumber \\&\quad \;\; \,\lambda _{j}\ge 0,\quad \, j=1,\ldots ,n,\,j\ne k , \end{aligned}$$
(3)

where \(\lambda _j,\ j=1,\ldots ,n,\ j\ne k,\) and \(w\) are the variables of the model.
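
As with the AP model, model (3) is a linear program. A minimal sketch along the same lines as the previous one, again assuming scipy.optimize.linprog and a row-per-DMU data layout (the function name maj_score is our own), is given below.

```python
import numpy as np
from scipy.optimize import linprog

def maj_score(k, X, Y):
    """MAJ ranking score 1 + w* of DMU k, model (3).

    X is an (n, m) array of inputs and Y an (n, s) array of outputs,
    with rows indexing DMUs.
    """
    n, m = X.shape
    s = Y.shape[1]
    others = [j for j in range(n) if j != k]
    p = len(others)

    # Decision variables: w, then lambda_j for every j != k; minimize w.
    c = np.concatenate([[1.0], np.zeros(p)])

    # sum_j lambda_j * x_ij - w <= x_ik               (i = 1..m)
    A_in = np.hstack([-np.ones((m, 1)), X[others].T])
    # -sum_j lambda_j * y_rj <= -y_rk                 (r = 1..s)
    A_out = np.hstack([np.zeros((s, 1)), -Y[others].T])

    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([X[k], -Y[k]]),
                  bounds=[(None, None)] + [(0, None)] * p,  # w is free
                  method="highs")
    return 1.0 + res.fun if res.success else None
```

In model (3) the variable w is not sign-restricted, so it is declared free in the sketch.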

\(L_1\)-norm model

In this subsection, the model of Jahanshahloo et al. (2004) is reviewed. The production possibility set (PPS) with constant returns to scale, \(T_c\), and the PPS with variable returns to scale, \(T_v\), are defined as:

$$\begin{aligned} T_c=\left\{ (X,Y){\mid }\, X\ge \sum _{j=1}^{n}\lambda _{j}X_{j},\, Y\le \sum _{j=1}^{n}\lambda _{j}Y_{j},\,\lambda _j \ge 0,\,j=1,\ldots ,n \right\} , \end{aligned}$$
(4)

and

$$\begin{aligned} T_{v}=\left\{ (X,Y){\mid }\, X\ge \sum _{j=1}^{n}\lambda _{j}X_{j},\, Y\le \sum _{j=1}^{n}\lambda _{j}Y_{j},\,\sum _{j=1}^{n}\lambda _{j}=1,\,\lambda _j \ge 0,\,j=1,\ldots ,n \right\} , \end{aligned}$$
(5)

respectively.

DMU\(_k\) is assumed to be extremely efficient. By removing \((X_k, Y_k)\) from \(T_c\), the production possibility set \(T^\prime _c\) is defined as:

$$\begin{aligned} T^\prime _c=\left\{ (X,Y){\mid }\, X\ge \sum _{\begin{array}{c} j=1,\, j\ne k \end{array}}^{n}\lambda _{j}X_{j},\, Y\le \sum _{\begin{array}{c} j=1,\, j\ne k \end{array}}^{n}\lambda _{j}Y_{j},\,\lambda _j \ge 0,\,j=1,\ldots ,n,\, j\ne k \right\} . \end{aligned}$$
(6)

To obtain the ranking score of DMU\(_k\), the following model of Jahanshahloo et al. (2004) is considered:

$$\begin{aligned}&\mathrm{min}\;\Gamma ^c_k(X,Y)=\sum _{i=1}^{m} \left| x_i-x_{ik}\right| +\sum _{r=1}^{s} \left| y_r-y_{rk}\right| \nonumber \\&\mathrm{s.t.}\,\sum _{\begin{array}{c} j=1,\, j\ne k \end{array}}^{n}\lambda _{j}x_{ij}\le x_{i},\quad \, i=1,\ldots ,m\nonumber \\&\quad \;\; \,\,\sum _{\begin{array}{c} j=1,\, j\ne k \end{array}}^{n}\lambda _{j}y_{rj}\ge y_{r},\quad r=1,\ldots ,s\nonumber \\&\quad \;\; \,x_i\ge 0,\quad y_r\ge 0\quad \,i=1,\ldots ,m,\,\,r=1,\ldots ,s\nonumber \\&\quad \;\; \,\lambda _{j}\ge 0,\quad \,j=1,\ldots ,n, \quad j\ne k, \end{aligned}$$
(7)

where \({X}=(x_1,\ldots ,x_m)\), \({Y}=(y_1,\ldots ,y_s)\) and \(\lambda =(\lambda _1,\ldots ,\lambda _{k-1},\lambda _{k+1},\ldots ,\lambda _n)\) are the variables of model (7), and \(\Gamma ^c_k(X,Y)\) is the \(L_1\)-norm distance between \((X_k,Y_k)\) and \((X,Y)\).

To convert model (7) into a linear model, Jahanshahloo et al. (2004) define the set \(T^{\prime \prime }_c\) as follows:

$$\begin{aligned} T_c^{\prime \prime }=T_c^\prime \cap \Bigl \{ (X,Y){\mid }\, X\ge X_k \quad \mathrm{and} \quad Y\le Y_k\Bigr \} \end{aligned}$$
(8)

and they scale the input and output data by normalization. After these changes, an approximately optimal solution of model (7) is obtained by solving a related linear programming model.

In the next section, an alternative transformation of model (7) is considered; the optimal solution is obtained by solving an equivalent linear programming model.

An alternative transformation

To convert model (7) into a linear model, the following transformation is utilized: \(\left| x_i-x_{ik}\right| \le a_i,\ i=1,\ldots ,m,\) and \(\left| y_r-y_{rk}\right| \le b_r,\ r=1,\ldots ,s.\) Thus, we have:

$$\begin{aligned} \left\{ {\begin{array}{ll} {{x_i} - {x_{ik}} \le {a_i},} &{} {i = 1, \ldots ,m,}\\ {{x_i} - {x_{ik}} \ge - {a_i},} &{} {i = 1, \ldots ,m,} \end{array}} \right. \quad \mathrm{and} \quad \left\{ {\begin{array}{ll} {{y_r} - {y_{rk}} \le {b_r},} &{} {r = 1, \ldots ,s,}\\ {{y_r} - {y_{rk}} \ge - {b_r},} &{} {r = 1, \ldots ,s.} \end{array}} \right. \end{aligned}$$

Then Model (7) can be converted into the following linear programming problem:

$$\begin{aligned}&\mathrm{min} \;\Gamma ^c_k(X,Y)=\sum _{i=1}^{m} a_i+\sum _{r=1}^{s} b_r\nonumber \\&\mathrm{s.t.}\,\sum _{\begin{array}{c} j=1,\, j\ne k \end{array}}^{n}\lambda _{j}x_{ij}\le x_{i},\quad i=1,\ldots ,m \nonumber \\&\quad \;\; \,\sum _{\begin{array}{c} j=1\, j\ne k \end{array}}^{n}\lambda _{j}y_{rj}\ge y_{r},\quad r=1,\ldots ,s \nonumber \\&\quad \;\;~~~~~~ x_i-x_{ik}\le a_i,\quad i=1,\ldots ,m, \nonumber \\&\quad \;\;~~-x_i+x_{ik}\le a_i,\quad i=1,\ldots ,m, \nonumber \\&\quad \;\;~~~~~~ y_r-y_{rk}\le b_r,\quad r=1,\ldots ,s, \nonumber \\&\quad \;\;~~-y_r+y_{rk}\le b_r,\quad r=1,\ldots ,s, \nonumber \\&\quad \;\;~~~x_i\ge 0,\quad y_r\ge 0,\quad a_i\ge 0, \quad b_r\ge 0,\quad i=1,\ldots ,m,~~r=1,\ldots ,s, \nonumber \\&\quad \;\; ~~~ \lambda _{j}\ge 0,\quad ~~ j=1,\ldots ,n,\quad j\ne k, \end{aligned}$$
(9)

where \({X}=(x_1,...,x_m),~ {Y}=(y_1,...,y_s),~ a=(a_1,...,a_m)\), \(b=(b_1,...,b_s)\) and \(\lambda =(\lambda _1,...,\lambda _{k-1},\lambda _{k+1},...,\lambda _n)\) are the variables of model (9).

Model (9) is equivalent to model (7): since \(a_i\) and \(b_r\) enter the objective with positive coefficients and are bounded below only by the absolute-value constraints, any optimal solution of (9) satisfies \(a_i=\left| x_i-x_{ik}\right|\) and \(b_r=\left| y_r-y_{rk}\right|\), so the two optimal values coincide. Furthermore, model (9) is a linear programming problem, from which the optimal solution of model (7) is easily obtained. However, model (9) includes \(3(m+s)\) constraints, whereas the model of Wu and Yan (2010) has \(2(m+s)\) constraints, so the Wu and Yan (2010) model is more efficient in this respect. To overcome this drawback, we propose the following model, which is more efficient than the Wu and Yan (2010) model in that it requires fewer auxiliary variables:

$$\begin{aligned}&\mathrm{min}\;\Gamma ^c_k(X,Y)=\sum _{i=1}^{m} (x_i-x_{ik})+\sum _{r=1}^{s} (-y_r+y_{rk})\nonumber \\&\mathrm{s.t.}~~\sum _{\begin{array}{c} j=1,~ j\ne k \end{array}}^{n}\lambda _{j}x_{ij}\le x_{i},\quad ~ i=1,\ldots ,m\nonumber \\&\quad \;\; ~~\sum _{\begin{array}{c} j=1,~ j\ne k \end{array}}^{n}\lambda _{j}y_{rj}\ge y_{r},\quad r=1,\ldots ,s\nonumber \\&\quad \;\; ~~x_i\ge x_{ik},\quad i=1,\ldots ,m\nonumber \\&\quad \;\; ~~y_r\le y_{rk},\quad r=1,\ldots ,s\nonumber \\&\quad \;\; ~~ x_i\ge 0,\quad y_r\ge 0\quad ~~ i=1,\ldots ,m,~~r=1,\ldots ,s\nonumber \\&\quad \;\; ~~ \lambda _{j}\ge 0,\quad j=1,\ldots ,n,\,\,j\ne k, \end{aligned}$$
(10)

where \({X}=(x_1,...,x_m),~ {Y}=(y_1,...,y_s)\) and \(\lambda =(\lambda _1,...,\lambda _{k-1},\lambda _{k+1},...,\lambda _n)\) are the variables of model (10).
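
Since model (10) is a linear program in \((X,Y,\lambda )\), it can be assembled directly for an off-the-shelf LP solver. The sketch below computes \(\Gamma ^c_k\) with scipy.optimize.linprog under the same illustrative conventions as before (rows of the data arrays index DMUs; the function name l1_ranking_score is our own); the optional vrs flag anticipates the convexity constraint of model (11) introduced next.

```python
import numpy as np
from scipy.optimize import linprog

def l1_ranking_score(k, X, Y, vrs=False):
    """Ranking score Gamma_k of an extremely efficient DMU k via model (10);
    setting vrs=True adds the convexity constraint of model (11).

    X is an (n, m) array of inputs and Y an (n, s) array of outputs,
    with rows indexing DMUs.
    """
    n, m = X.shape
    s = Y.shape[1]
    others = [j for j in range(n) if j != k]
    p = len(others)

    # Decision variables: x_1..x_m, y_1..y_s, lambda_j for every j != k.
    # Objective sum_i (x_i - x_ik) + sum_r (y_rk - y_r)
    #         = sum_i x_i - sum_r y_r + constant.
    c = np.concatenate([np.ones(m), -np.ones(s), np.zeros(p)])
    const = Y[k].sum() - X[k].sum()

    # sum_j lambda_j * x_ij - x_i <= 0                (i = 1..m)
    A1 = np.hstack([-np.eye(m), np.zeros((m, s)), X[others].T])
    # y_r - sum_j lambda_j * y_rj <= 0                (r = 1..s)
    A2 = np.hstack([np.zeros((s, m)), np.eye(s), -Y[others].T])

    # x_i >= x_ik (which implies x_i >= 0), 0 <= y_r <= y_rk, lambda_j >= 0
    # are all expressed as simple variable bounds.
    bounds = ([(X[k, i], None) for i in range(m)]
              + [(0, Y[k, r]) for r in range(s)]
              + [(0, None)] * p)

    # Convexity constraint sum_j lambda_j = 1 for the VRS case, model (11).
    A_eq = b_eq = None
    if vrs:
        A_eq = np.concatenate([np.zeros(m + s), np.ones(p)]).reshape(1, -1)
        b_eq = np.array([1.0])

    res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.zeros(m + s),
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.fun + const if res.success else None
```

Note that the constraints \(x_i\ge x_{ik}\) and \(0\le y_r\le y_{rk}\) are passed as variable bounds rather than as explicit constraint rows, which keeps the constraint matrix at \(m+s\) rows.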

Next, model (10) is extended from the constant returns to scale case to the variable returns to scale case by imposing the convexity constraint \(\sum _{\begin{array}{c} j=1,~ j\ne k \end{array}}^{n}\lambda _{j}=1\). Then, to obtain the ranking score of the DMU under evaluation, the following linear programming problem is solved:

$$\begin{aligned}&\mathrm{min} \;\Gamma ^c_k(X,Y)=\sum _{i=1}^{m} (x_i-x_{ik})+\sum _{r=1}^{s}(-y_r+y_{rk})\nonumber \\&\mathrm{s.t.}~~\sum _{\begin{array}{c} j=1,~ j\ne k \end{array}}^{n}\lambda _{j}x_{ij}\le x_{i},\quad ~ i=1,\ldots ,m\nonumber \\&\quad \;\; ~~\sum _{\begin{array}{c} j=1,~ j\ne k \end{array}}^{n}\lambda _{j}y_{rj}\ge y_{r},\quad r=1,\ldots ,s\nonumber \\&\quad \;\;~~\sum _{\begin{array}{c} j=1,~ j\ne k \end{array}}^{n}\lambda _{j}=1 \nonumber \\&\quad \;\; ~~x_i\ge x_{ik},\quad i=1,\ldots ,m \nonumber \\&\quad \;\; ~~y_r\le y_{rk},\quad r=1,\ldots ,s \nonumber \\&\quad \;\; ~~ x_i\ge 0,\quad y_r\ge 0\quad i=1,\ldots ,m,\,\,r=1,\ldots ,s \nonumber \\&\quad \;\; ~~ \lambda _{j}\ge 0,\quad j=1,\ldots ,n,\quad j\ne k. \end{aligned}$$
(11)

Like model (10), model (11) is a linear programming problem that can be solved by any standard optimization software, such as GAMS.
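
As a small usage illustration of the l1_ranking_score sketch above, the score of model (11) for one DMU under variable returns to scale could be computed as follows; the data are purely hypothetical toy values.

```python
import numpy as np

# Hypothetical toy data: 5 DMUs, 2 inputs, 1 output (rows index DMUs).
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0], [5.0, 4.0], [2.5, 2.5]])
Y = np.array([[1.0], [1.0], [1.2], [0.8], [1.1]])

# Ranking score of DMU 3 (index 2) under variable returns to scale, model (11).
print(l1_ranking_score(2, X, Y, vrs=True))
```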

Empirical example

In this section, we apply model (11) to the data set used by Jahanshahloo et al. (2004) under the assumption of variable returns to scale. The data set consists of 28 DMUs with 3 inputs and 3 outputs. The data were originally reported by Charnes et al. (1989) and describe 28 Chinese cities (DMUs) in 1983. The inputs are labor, working funds, and investment; the outputs are gross industrial output value, profit and taxes, and retail sales. The data in Table 1 should be normalized before applying model (11). Table 2 reports the ranking results obtained from model (11) for the 10 extremely efficient DMUs \((\mathrm{DMU}_1, \mathrm{DMU}_2, \mathrm{DMU}_6, \mathrm{DMU}_8, \mathrm{DMU}_{21}, \mathrm{DMU}_{23}, \mathrm{DMU}_{24}, \mathrm{DMU}_{25},\mathrm{DMU}_{26}, \mathrm{DMU}_{27})\). These rankings coincide with those reported by Jahanshahloo et al. (2004) and Wu and Yan (2010). Thus, the method presented here converts the nonlinear programming model of Jahanshahloo et al. (2004) into a linear programming problem through an alternative transformation, and the resulting model is easily solved. It is worth noting that some of the input data in Table 3 of Jahanshahloo et al. (2004) were recorded incorrectly; they have been corrected in this section.
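
The normalization step mentioned above is not spelled out in this paper; one common choice is to scale every input and output by its maximum over all DMUs, as in the brief sketch below (the exact scaling used by Jahanshahloo et al. (2004) may differ).

```python
import numpy as np

def normalize_columns(Z):
    """Scale each column (one input or output) by its maximum over all DMUs.

    This is only one common normalization; the exact scaling used by
    Jahanshahloo et al. (2004) may differ.
    """
    Z = np.asarray(Z, dtype=float)
    return Z / Z.max(axis=0)

# X_norm = normalize_columns(X);  Y_norm = normalize_columns(Y)
```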

Table 1 The data for 28 Chinese cities
Table 2 The results of ranking by applying model (11)

Conclusion

Jahanshahloo et al. (2004) proposed a ranking method based on the super-efficiency technique and the \(L_1\)-norm, and showed that it eliminates problems, such as infeasibility, that arise in some earlier ranking methods. In this regard, this paper has provided an alternative transformation for converting the nonlinear programming model of Jahanshahloo et al. (2004) into an equivalent linear programming model. In addition, we have proposed a new, more efficient model that yields the same ranking results as the original model of Jahanshahloo et al. (2004). Given the computational complexity of the nonlinear model (7), the treatment proposed in this article is easier to apply.