
Are all firms inefficient?


Abstract

In the usual stochastic frontier model, all firms are inefficient, because inefficiency is non-negative and the probability that inefficiency is exactly zero equals zero. We modify this model by adding a parameter p which equals the probability that a firm is fully efficient. We can estimate this model by MLE and obtain estimates of the fraction of firms that are fully efficient and of the distribution of inefficiency for the inefficient firms. This model has also been considered by Kumbhakar et al. (J Econ 172:66–76, 2013). We extend their paper in several ways. We discuss some identification issues that arise if all firms are inefficient or no firms are inefficient. We show that results like those of Waldman (J Econ 18:275–279, 1982) hold for this model, that is, that the likelihood has a stationary point at parameters that indicate no inefficiency and that this point is a local maximum if the OLS residuals are positively skewed. Finally, we consider problems involved in testing the hypothesis that p = 0. We also provide some simulations and an empirical example.


References

  • Aigner D, Lovell CK, Schmidt P (1977) Formulation and estimation of stochastic frontier production function models. J Econ 6:21–37

  • Akaike H (1974) A new look at the statistical model identification. IEEE Trans Autom Control 19:716–723

  • Alvarez A, Amsler C, Orea L, Schmidt P (2006) Interpreting and testing the scaling property in models where inefficiency depends on firm characteristics. J Prod Anal 25:201–212

  • Amsler C, Prokhorov A, Schmidt P (2014) Using copulas to model time dependence in stochastic frontier models. Econ Rev (forthcoming)

  • Andrews DWK (2001) Testing when a parameter is on the boundary of the maintained hypothesis. Econometrica 69:683–734

  • Battese GE, Coelli TJ (1988) Prediction of firm-level technical efficiencies with a generalized frontier production function and panel data. J Econ 38:387–399

  • Battese GE, Coelli TJ (1995) A model for technical inefficiency effects in a stochastic frontier production function for panel data. Empir Econ 20:325–332

  • Berg SA, Førsund FR, Hjalmarsson L, Suominen M (1993) Banking efficiency in the Nordic countries. J Banking Financ 17:371–388

  • Caudill SB (2003) Estimating a mixture of stochastic frontier regression models via the EM algorithm: a multiproduct cost function application. Empir Econ 28:581–598

  • Caudill SB, Ford JM (1993) Biases in frontier estimation due to heteroskedasticity. Econ Lett 41:17–20

  • Caudill SB, Ford JM, Gropper DM (1995) Frontier estimation and firm-specific inefficiency measures in the presence of heteroscedasticity. J Bus Econ Stat 13:105–111

  • Chen Y, Liang K-Y (2010) On the asymptotic behaviour of the pseudolikelihood ratio test statistic with boundary problems. Biometrika 97:603–620

  • Coelli TJ, Rao DP, O’Donnell CJ, Battese GE (2005) An introduction to efficiency and productivity analysis. Springer, New York

  • Førsund FR, Hjalmarsson L (1974) On the measurement of productive efficiency. Swed J Econ 76:141–154

  • Gouriéroux C, Holly A, Monfort A (1982) Likelihood ratio test, Wald test, and Kuhn–Tucker test in linear models with inequality constraints on the regression parameters. Econometrica 50:63–80

  • Gouriéroux C, Monfort A (1995) Statistics and econometric models, vol 2: testing, confidence regions, model selection, and asymptotic theory. Cambridge University Press, New York

  • Grassetti L (2011) A novel mixture-based stochastic frontier model with application to hospital efficiency. Unpublished manuscript, University of Udine

  • Greene W (2005) Reconsidering heterogeneity in panel data estimators of the stochastic frontier model. J Econ 126:269–303

  • Greene WH (2012) Econometric analysis. Prentice Hall, New York

  • Hannan EJ, Quinn BG (1979) The determination of the order of an autoregression. J R Stat Soc Ser B (Methodological) 41:190–195

  • Hayashi F (2000) Econometrics. Princeton University Press, Princeton

  • Huang CJ, Liu J-T (1994) Estimation of a non-neutral stochastic frontier production function. J Prod Anal 5:171–180

  • Jondrow J, Lovell CK, Materov IS, Schmidt P (1982) On the estimation of technical inefficiency in the stochastic frontier production function model. J Econ 19:233–238

  • Kumbhakar SC, Ghosh S, McGuckin JT (1991) A generalized production frontier approach for estimating determinants of inefficiency in U.S. dairy farms. J Bus Econ Stat 9:279–286

  • Kumbhakar SC, Parmeter CF, Tsionas EG (2013) A zero inefficiency stochastic frontier model. J Econ 172:66–76

  • Meeusen W, van den Broeck J (1977) Efficiency estimation from Cobb–Douglas production functions with composed error. Int Econ Rev 18:435–444

  • Orea L, Kumbhakar SC (2004) Efficiency measurement using a latent class stochastic frontier model. Empir Econ 29:169–183

  • Reifschneider D, Stevenson R (1991) Systematic departures from the frontier: a framework for the analysis of firm inefficiency. Int Econ Rev 32:715–723

  • Rogers AJ (1986) Modified Lagrange multiplier tests for problems with one-sided alternatives. J Econ 31:341–361

  • Schwarz G (1978) Estimating the dimension of a model. Ann Stat 6:461–464

  • Self SG, Liang K-Y (1987) Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions. J Am Stat Assoc 82:605–610

  • Waldman DM (1982) A stationary point for the stochastic frontier likelihood. J Econ 18:275–279

  • Wang H-J (2002) Heteroscedasticity and non-monotonic efficiency effects of a stochastic frontier model. J Prod Anal 18:241–253

Author information

Correspondence to Peter Schmidt.

Appendix

We will use the following notation:

$$ f_{v}\left(\varepsilon_{i}\right)=\sqrt{{\frac{1+\lambda^{2}}{\sigma^{2}}}}\,\phi\left(\varepsilon_{i}\sqrt{{\frac{1+\lambda^{2}}{\sigma^{2}}}}\right),\quad f_{\varepsilon}\left(\varepsilon_{i}\right)={\frac{2}{\sigma}}\,\phi\left({\frac{\varepsilon_{i}}{\sigma}}\right)\left(1-\Upphi\left({\frac{\varepsilon_{i}\lambda}{\sigma}}\right)\right), $$

$$ f_{p}\left(\varepsilon_{i}\right)=pf_{v}\left(\varepsilon_{i}\right)+\left(1-p\right)f_{\varepsilon}\left(\varepsilon_{i}\right),\quad \ln L=\sum\limits_{i=1}^{n}\ln f_{p}\left(\varepsilon_{i}\right),\quad m_{i}={\frac{\phi\left({\frac{\varepsilon_{i}\lambda}{\sigma}}\right)}{1-\Upphi\left({\frac{\varepsilon_{i}\lambda}{\sigma}}\right)}}, $$

$$ \theta=\left(\beta^{\prime},\lambda,\sigma^{2},p\right)^{\prime}\ \hbox{with}\ \beta\ \hbox{a}\ k\times 1\ \hbox{vector},\quad \theta^{**}=\left(\hat{\beta}^{\prime},\hat{\lambda},\hat{\sigma}^{2},\hat{p}\right)^{\prime}, $$

where \(\hat{\beta}\) is the OLS estimate, \(\hat{\lambda}=0\), \(\hat{\sigma}^{2}=\frac{1}{n}\sum_{i=1}^{n}\hat{\varepsilon}_{i}^{2}\) with \(\hat{\varepsilon}_{i}=y_{i}-x_{i}^{\prime}\hat{\beta}\), and \(\hat{p}\in[0,1]\) is arbitrary. The symbol ∨ denotes the maximum of two quantities.

Result 1

θ** is a stationary point of the log likelihood function.

Proof

The first derivative of \(\ln{L}\) is:

$$ \begin{aligned} S(\theta)&= {\left(\begin{array}{c} {\frac{\partial\ln{L}}{\partial\beta}} \\ {\frac{\partial\ln{L}}{\partial\lambda}} \\ {\frac{\partial\ln{L}}{\partial\sigma^{2}}} \\ {\frac{\partial\ln{L}}{\partial p}} \\ \end{array}\right)} \\ &={\left( \begin{array}{l} \sum\limits_{i=1}^{n}\frac{pf_{v}\left(\varepsilon_{i}\right)\left(\frac{1+\lambda^{2}}{\sigma^{2}}\varepsilon_{i}{{\bf x}}_{i}\right)+\left(1-p\right)f_{\varepsilon}\left(\varepsilon_{i}\right)\left(\frac{\varepsilon_{i}{{\bf x}}_{i}}{\sigma^{2}}+\frac{m_{i}{{\bf x}}_{i}\lambda}{\sigma}\right)}{f_{p}\left(\varepsilon_{i}\right)}\\ \sum\limits_{i=1}^{n} \frac{pf_{v}\left(\varepsilon_{i}\right)\left(\frac{\lambda}{1+\lambda^{2}}-\frac{\lambda}{\sigma^{2}}\varepsilon_{i}^{2}\right)-\left(1-p\right)f_{\varepsilon}\left(\varepsilon_{i}\right)\left(\frac{1}{\sigma}m_{i}\varepsilon_{i}\right)}{f_{p}\left(\varepsilon_{i}\right)}\\ \sum\limits_{i=1}^{n} \frac{pf_{v}\left(\varepsilon_{i}\right)\left(-\frac{1}{2\sigma^{2}}+\frac{1+\lambda^{2}}{2\sigma^{4}}\varepsilon_{i}^{2}\right) +\left(1-p\right)f_{\varepsilon}\left(\varepsilon_{i}\right)\left(-\frac{1}{2\sigma^{2}}+\frac{1}{2\sigma^{4}}\varepsilon_{i}^{2}+\frac{\lambda}{2\sigma^{3}}m_{i}\varepsilon_{i}\right)}{f_{p}\left(\varepsilon_{i}\right)}\\ \sum\limits_{i=1}^{n}\frac{f_{v}\left(\varepsilon_{i}\right)- f_{\varepsilon}\left(\varepsilon_{i}\right)}{f_{p}\left(\varepsilon_{i}\right)}\\ \end{array} \right)}. \end{aligned} $$

When λ = 0, we have \(f_{v}\left(\varepsilon_{i}\right)=f_{\varepsilon}\left(\varepsilon_{i}\right)={\frac{1}{\sigma}}\phi\left({\frac{\varepsilon_{i}}{\sigma}}\right)\) and \(m_{i}=\phi\left(0\right)/\left(1-\Upphi\left(0\right)\right)=\sqrt{2/\pi}\), so the score simplifies to

$$ S\left(\theta\right)|_{\lambda=0}= \left(\begin{array}{c} {\frac{1}{\sigma^{2}}}\sum\limits_{i=1}^{n}\varepsilon_{i}{{\bf x}}_{i} \\ -\left(1-p\right)\sqrt{{\frac{2}{\pi}}}{\frac{1}{\sigma}} \sum\limits_{i=1}^{n}\varepsilon_{i} \\ -{\frac{n}{2\sigma^{2}}}+{\frac{1}{2\sigma^{4}}} \sum\limits_{i=1}^{n}\varepsilon_{i}^{2} \\ 0 \\ \end{array}\right). \\ $$

It is straightforward that \(S\left(\theta^{**}\right)=0\): with \(\varepsilon_{i}=\hat{\varepsilon}_{i}\), the OLS residuals satisfy \(\sum\nolimits_{i=1}^{n}\hat{\varepsilon}_{i}=0\) and \(\sum\nolimits_{i=1}^{n}\hat{\varepsilon}_{i}{{\bf x}}_{i}=0\) (recall that \({{\bf x}}_{i}\) contains a constant term). Therefore θ** is a stationary point. \(\square\)

Result 2

Evaluated at the stationary point θ**, the Hessian of the log likelihood is negative semi-definite with two zero eigenvalues.

Proof

The Hessian evaluated at the stationary point θ** is

$$ H\left(\theta^{**}\right)= \left( \begin{array}{cccc}-{\frac{1}{\hat{\sigma}^{2}}}\sum\nolimits_{i=1}^{n} {{\bf x}}_{i}{{\bf x}}_{i}^{\prime}& \left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}{\frac{1}{\hat{\sigma}}}\sum\nolimits_{i=1}^{n}{{\bf x}}_{i}& 0& 0 \\\left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}{\frac{1}{\hat{\sigma}}}\sum\nolimits_{i=1}^{n}{{\bf x}}_{i}^{\prime} &-\left(1-\hat{p}\right)^{2} {\frac{2n}{\pi}} & 0& 0 \\ 0&0&-{\frac{n}{2\hat{\sigma}^4}}& 0 \\ 0&0&0& 0 \\\end{array}\right). \\ $$

When \(\hat{p}=1,\)

$$ H\left(\theta^{**}\right)= \left( \begin{array}{cccc} -{\frac{1}{\hat{\sigma}^{2}}}\sum\nolimits_{i=1}^{n}{{\bf x}}_{i}{{\bf x}}_{i}^{\prime}& 0& 0& 0 \\ 0& 0& 0& 0 \\ 0&0& -{\frac{n}{2\hat{\sigma}^4}}& 0 \\ 0&0&0& 0 \\ \end{array}\right). \\ $$

Because \(-{\frac{1}{\hat{\sigma}^{2}}}\sum_{i=1}^{n}{{\bf x}}_{i}{{\bf x}}_{i}^{\prime}\) is negative definite and \(-{\frac{n}{2\hat{\sigma}^{4}}}<0\), \(H\left(\theta^{**}\right)\) is negative semi-definite with exactly two zero eigenvalues (the λ and p directions).

Now suppose that \(\hat{p}\neq 1\). Note that the first row of \(H\left(\theta^{**}\right)\), namely \(\left(-{\frac{1}{\hat{\sigma}^{2}}}\sum\nolimits_{i=1}^{n}{{\bf x}}_{i}^{\prime},\ \left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}{\frac{n}{\hat{\sigma}}},\ 0,\ 0\right)\) when the first regressor is the constant, and the \(\left(k+1\right)\)th row are linearly dependent: multiplying the first row by \(\left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\hat{\sigma}\) and adding it to the \(\left(k+1\right)\)th row yields a row of zeros. Hence,

$$ H\left(\theta^{**}\right)\sim\left(\begin{array}{cccc} -{\frac{1}{\hat{\sigma}^{2}}}\sum\nolimits_{i=1}^{n}{{\bf x}}_{i}{{\bf x}}_{i}^{\prime}& \left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}} {\frac{1}{\hat{\sigma}}}\sum\nolimits_{i=1}^{n}{{\bf x}}_{i}& 0& 0 \\ 0& 0& 0& 0 \\ 0&0& -{\frac{n}{2\hat{\sigma}^4}}& 0 \\ 0&0&0& 0 \\ \end{array}\right), $$

where ∼ denotes an elementary row (or column) operation. The first column and the \(\left(k+1\right)\)th column of the transformed matrix are likewise linearly dependent: multiplying the first column by \(\left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\hat{\sigma}\) and adding it to the \(\left(k+1\right)\)th column yields a column of zeros. In other words,

$$ H\left(\theta^{**}\right) \sim\left(\begin{array}{cccc} -{\frac{1}{\hat{\sigma}^{2}}}\sum\nolimits_{i=1}^{n} {{\bf x}}_{i} {{\bf x}}_{i}^{\prime}& \left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}{\frac{1}{\hat{\sigma}}} \sum\nolimits_{i=1}^{n}{{\bf x}}_{i}& 0& 0 \\ 0& 0& 0& 0 \\ 0&0& -{\frac{n}{2\hat{\sigma}^4}}& 0 \\ 0&0&0& 0 \\ \end{array}\right)\sim \left( \begin{array}{cccc} -{\frac{1}{\hat{\sigma}^{2}}}\sum\nolimits_{i=1}^{n} {{\bf x}}_{i}{{\bf x}}_{i}^{\prime}& 0& 0& 0 \\ 0& 0& 0& 0 \\ 0&0& -{\frac{n}{2\hat{\sigma}^4}}& 0 \\ 0&0&0& 0 \\ \end{array}\right). $$

Elementary operations preserve the rank of a matrix, so the rank of \(H\left(\theta^{**}\right)\) is k + 1. Since \(H\left(\theta^{**}\right)\) is a symmetric \(\left(k+3\right)\times\left(k+3\right)\) matrix, it has two zero eigenvalues.

Now we will show that \(H\left(\theta^{**}\right)\) is negative semi-definite. Let \(\alpha=\left(\alpha_{1}^{\prime},\alpha_{2},\alpha_{3},\alpha_{4}\right)^{\prime}\) be an arbitrary non-zero \(\left(k+3\right)\times 1\) vector, where α1 is a k × 1 vector and α2, α3, α4 are scalars. Then

$$ \begin{aligned} \alpha^{\prime} H\left(\theta^{**}\right)\alpha&= -\left({\frac{1}{\hat{\sigma}}}{\frac{1}{\sqrt{n}}}\alpha_{1}^{\prime}\sum\limits_{i=1}^{n}{{\bf x}}_{i}-\alpha_{2}\left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\sqrt{n}\right)^{2} \\ &\quad -{\frac{1}{\hat{\sigma}^{2}}}\alpha_{1}^{\prime}\left(\sum\limits_{i=1}^{n}{{\bf x}}_{i}{{\bf x}}_{i}^{\prime}-{\frac{1}{n}}\sum\limits_{i=1}^{n}{{\bf x}}_{i}\sum\limits_{i=1}^{n}{{\bf x}}_{i}^{\prime}\right)\alpha_{1}-{\frac{n}{2\hat{\sigma}^4}}\alpha_{3}^{2}\ \leq\ 0, \end{aligned} $$

because \(\sum\nolimits_{i=1}^{n}{{\bf x}}_{i}{{\bf x}}_{i}^{\prime}-{\frac{1}{n}}\sum\nolimits_{i=1}^{n}{{\bf x}}_{i}\sum\nolimits_{i=1}^{n}{{\bf x}}_{i}^{\prime}=\sum\nolimits_{i=1}^{n}\left({{\bf x}}_{i}-{\frac{1}{n}}\sum\nolimits_{j=1}^{n}{{\bf x}}_{j}\right)\left({{\bf x}}_{i}-{\frac{1}{n}}\sum\nolimits_{j=1}^{n}{{\bf x}}_{j}\right)^{\prime}\) is positive semi-definite.

Therefore \(H\left(\theta^{**}\right)\) is negative semi-definite. \(\square\)

Result 3

θ** with \(\hat p\in[0,1)\) is a local maximizer of the log likelihood function if and only if \(\sum\nolimits_{i=1}^{n}\hat{\varepsilon}_{i}^{3}>0\).

Proof

From Result 2, we know that the Hessian evaluated at θ** is negative semi-definite. Therefore, if the log likelihood decreases in the directions spanned by the two eigenvectors associated with the zero eigenvalues, θ** is a local maximizer of the log likelihood. These two eigenvectors, written in block form with entries corresponding to \(\beta_{0}\), the remaining slope coefficients, λ, \(\sigma^{2}\), and p, are

$$ \left( \begin{array}{c} \left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\hat{\sigma} \\ 0 \\ 1 \\ 0 \\ 0 \\ \end{array}\right) \quad\hbox{and}\quad \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ \end{array}\right).\\ $$

Let

$$ \Updelta\theta=\mu \left( \begin{array}{c} \left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\hat{\sigma} \\ 0 \\ 1 \\ 0 \\ 0 \\ \end{array}\right) +\phi\left( \begin{array}{c} 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ \end{array}\right) =\left( \begin{array}{c} \left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\hat{\sigma}\mu \\ 0 \\ \mu \\ 0 \\ \phi \\ \end{array}\right).\\ $$

Because λ ≥ 0, only directions with μ > 0 are admissible. \(\Updelta\theta\) has only three non-zero entries, so the relevant parameters are \(\beta_{0}\), λ, and p. By Taylor expansion,

$$ \begin{aligned} L\left(\theta^{**}+\Updelta\theta\right)-L\left(\theta^{**}\right) &={\frac{1}{6}}\left[L_{\beta_{0}\beta_{0}\beta_{0}} \left(\left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\hat{\sigma}\mu\right)^{3} +3L_{\beta_{0}\beta_{0}\lambda} \left(\left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\hat{\sigma}\mu\right)^{2}\mu \right. \\ &\quad \left. +3L_{\beta_{0}\lambda\lambda} \left(\left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\hat{\sigma}\mu\right)\mu^{2} +3L_{\beta_{0}\beta_{0}p} \left(\left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\hat{\sigma}\mu\right)^{2}\phi \right. \\ &\quad \left. +3L_{\beta_{0}pp} \left(\left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\hat{\sigma}\mu\right)\phi^{2} +L_{\lambda\lambda\lambda}\mu^{3} +3L_{\lambda\lambda p}\mu^{2}\phi +3L_{\lambda pp}\mu\phi^{2} +L_{ppp}\phi^{3} \right. \\ &\quad \left. +6L_{\beta_{0}p\lambda} \left(\left(1-\hat{p}\right)\sqrt{{\frac{2}{\pi}}}\hat{\sigma}\mu\right)\mu\phi \right] +O\left(\left(\mu\vee\phi\right)^{4}\right) \\ &=\left(1-\hat{p}\right){\frac{1}{6\pi}}\sqrt{{\frac{2}{\pi}}}{\frac{1}{\hat{\sigma}^{3}}} \left(-4\hat{p}^{2}+\hat{p}\left(8-3\pi\right)+\pi-4\right) \sum\limits_{i=1}^{n}\hat{\varepsilon}_{i}^{3}\,\mu^{3} +O\left(\left(\mu\vee\phi\right)^{4}\right). \end{aligned} $$

The first-order term is zero because θ** is a stationary point (Result 1), and the second-order term is zero because \(\Updelta\theta\) lies in the null space of the Hessian (Result 2). Note that \(\left(-4\hat{p}^{2}+\hat{p}\left(8-3\pi\right)+\pi-4\right)\) attains its maximum over \(\hat{p}\in[0,1)\) at \(\hat{p}=0\), where it equals π − 4 < 0, so it is negative throughout. Since μ > 0, it follows that \(L\left(\theta^{**}+\Updelta\theta\right)-L\left(\theta^{**}\right)<0\) if and only if \(\sum\hat{\varepsilon}_{i}^{3}>0\). Therefore, θ** with \(\hat{p}\in\left[0,1\right)\) is a local maximizer if and only if \(\sum\hat{\varepsilon}_{i}^{3}>0\). When \(\hat{p}=0\), the expression reduces to the one in Waldman (1982). \(\square\)

Result 4

θ** with \(\hat p=1\) is a local maximizer of the likelihood function if \(\sum\nolimits_{i=1}^{n}\hat{\varepsilon}_{i}^{3}>0\).

Proof

The two eigenvectors associated with the zero eigenvalues, written in block form with entries corresponding to β, λ, \(\sigma^{2}\), and p, are

$$ \left( \begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \\ \end{array}\right) \quad\hbox{and}\quad \left( \begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \\ \end{array}\right). $$

Let

$$ \Updelta\theta=\mu \left( \begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \\ \end{array}\right) +\phi\left( \begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \\ \end{array}\right) =\left( \begin{array}{c} 0 \\ \mu \\ 0 \\ \phi \\ \end{array}\right). $$

Because λ ≥ 0 and p ≤ 1, only directions with μ > 0 and ϕ < 0 are admissible. \(\Updelta\theta\) has only two non-zero entries, so the relevant parameters are λ and p. By Taylor expansion,

$$ \begin{aligned} L\left(\theta^{**}+\Updelta\theta\right)-L\left(\theta^{**}\right) &={\frac{1}{24}}\left[L_{\lambda\lambda\lambda\lambda}\mu^{4} +4L_{\lambda\lambda\lambda p}\mu^{3}\phi +6L_{\lambda\lambda pp}\mu^{2}\phi^{2} +4L_{\lambda ppp}\mu\phi^{3} +L_{pppp}\phi^{4}\right] +O\left(\left(\mu\vee\phi\right)^{5}\right) \\ &=-{\frac{1}{4}}\mu^{4}+{\frac{1}{3\hat{\sigma}^{3}}}\sqrt{{\frac{2}{\pi}}} \sum\limits_{i=1}^{n}\hat{\varepsilon}_{i}^{3}\,\mu^{3}\phi -{\frac{n}{\pi}}\mu^{2}\phi^{2}+O\left(\left(\mu\vee\phi\right)^{5}\right). \end{aligned} $$

The first-order term is zero because θ** is a stationary point (Result 1), and the second-order term is zero because \(\Updelta\theta\) lies in the null space of the Hessian (Result 2). The third-order term is zero because the third-order expression in Result 3 vanishes when \(\hat{p}=1\). Since ϕ < 0 and μ > 0, we have \({\frac{1}{3\hat{\sigma}^{3}}}\sqrt{{\frac{2}{\pi}}}\sum\nolimits_{i=1}^{n}\hat{\varepsilon}_{i}^{3}\mu^{3}\phi<0\) when \(\sum\hat{\varepsilon}_{i}^{3}>0\). Therefore, if \(\sum\hat{\varepsilon}_{i}^{3}>0\), then \(L\left(\theta^{**}+\Updelta\theta\right)-L\left(\theta^{**}\right)<0\) and θ** with \(\hat{p}=1\) is a local maximizer. \(\square\)

About this article

Cite this article

Rho, S., Schmidt, P. Are all firms inefficient? J Prod Anal 43, 327–349 (2015). https://doi.org/10.1007/s11123-013-0374-7
