Abstract
In this paper, we study the identification of a particular case of the 3PL model, namely when the discrimination parameters are all constant and equal to 1. We term this model, 1PL-G model. The identification analysis is performed under three different specifications. The first specification considers the abilities as unknown parameters. It is proved that the item parameters and the abilities are identified if a difficulty parameter and a guessing parameter are fixed at zero. The second specification assumes that the abilities are mutually independent and identically distributed according to a distribution known up to the scale parameter. It is shown that the item parameters and the scale parameter are identified if a guessing parameter is fixed at zero. The third specification corresponds to a semi-parametric 1PL-G model, where the distribution G generating the abilities is a parameter of interest. It is not only shown that, after fixing a difficulty parameter and a guessing parameter at zero, the item parameters are identified, but also that under those restrictions the distribution G is not identified. It is finally shown that, after introducing two identification restrictions, either on the distribution G or on the item parameters, the distribution G and the item parameters are identified provided an infinite quantity of items is available.
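As a concrete point of reference, the 1PL-G response function studied in this paper can be sketched as follows. This is a minimal illustration assuming the standard logistic link; the function name is ours:

```python
import math

def p_1plg(theta, beta, c):
    """P(Y_ij = 1 | theta) under the 1PL-G model: a guessing floor c_j
    plus a Rasch (1PL) term with the discrimination fixed at 1."""
    return c + (1.0 - c) / (1.0 + math.exp(-(theta - beta)))

# Under the anchoring restrictions of the first specification
# (a difficulty and a guessing parameter fixed at zero), the
# anchor item reduces to a pure Rasch item:
print(p_1plg(0.0, 0.0, 0.0))    # 0.5
print(p_1plg(1.0, 0.5, 0.25))   # guessing lifts the lower asymptote to 0.25
```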
Notes
This suggestion is due to Paul De Boeck.
References
Adams, R.J., Wilson, M.R., & Wang, W. (1997). The multidimensional random coefficients multinomial logit model. Applied Psychological Measurement, 21, 1–23.
Adams, R.J., & Wu, M.L. (2007). The mixed-coefficients multinomial logit model: a generalization form of the Rasch model. In M. von Davier & C.H. Carstensen (Eds.), Multivariate and mixture distribution Rasch models (pp. 57–75). Berlin: Springer.
Andersen, E.B. (1980). Discrete statistical models with social sciences applications. Amsterdam: North-Holland.
Béguin, A.A., & Glas, C.A.W. (2001). MCMC estimation and some model-fit analysis of multidimensional IRT models. Psychometrika, 66, 541–562.
Berti, P., Pratelli, L., & Rigo, P. (2008). Trivial intersection of σ-fields and Gibbs sampling. Annals of Probability, 36, 2215–2234.
Berti, P., Pratelli, L., & Rigo, P. (2010). Atomic intersection of σ-fields and some of its consequences. Probability Theory and Related Fields, 148, 269–283.
Birnbaum, A. (1968). Some latent trait models and their use in inferring any examinee’s ability. In F.M. Lord & M.R. Novick (Eds.), Statistical theories of mental test scores (pp. 395–479). Reading: Addison-Wesley.
Bock, R.D., & Aitkin, M. (1981). Marginal maximum likelihood estimation of item parameters: application of an EM algorithm. Psychometrika, 46, 443–459.
Bock, R.D., & Zimowski, M.F. (1997). Multiple group IRT. In W.J. van der Linden & R.K. Hambleton (Eds.), Handbook of modern item response theory (pp. 433–448). Berlin: Springer.
Carlin, B.P., & Louis, T.A. (2000). Bayes and empirical Bayes methods for data analysis (2nd ed.). London: Chapman & Hall/CRC.
De Boeck, P., & Wilson, M. (2004). Explanatory item response models. A generalized linear and nonlinear approach. Berlin: Springer.
Del Pino, G., San Martín, E., González, J., & De Boeck, P. (2008). On the relationships between sum score based estimation and joint maximum likelihood estimation. Psychometrika, 73, 145–151.
Eberly, L.E., & Carlin, B.P. (2000). Identifiability and convergence issues for Markov chain Monte Carlo fitting of spatial models. Statistics in Medicine, 19, 2279–2294.
Embretson, S.E., & Reise, S.P. (2000). Item response theory for psychologists. Mahwah: Lawrence Erlbaum Associates.
Florens, J.-P., & Mouchart, M. (1982). A note on noncausality. Econometrica, 50, 583–591.
Florens, J.-P., Mouchart, M., & Rolin, J.-M. (1990). Elements of Bayesian statistics. New York: Dekker.
Florens, J.-P., & Rolin, J.-M. (1984). Asymptotic sufficiency and exact estimability. In J.-P. Florens, M. Mouchart, J.-P. Raoult, & L. Simar (Eds.), Alternative approaches to time series analysis (pp. 121–142). Bruxelles: Publications des Facultés Universitaires Saint-Louis.
Gabrielsen, A. (1978). Consistency and identifiability. Journal of Econometrics, 8, 261–263.
Gelfand, A.E., & Sahu, S.K. (1999). Identifiability, improper priors, and Gibbs sampling for generalized linear models. Journal of the American Statistical Association, 94, 247–253.
Gelman, A., & Rubin, D.B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457–511.
Geweke, J. (1992). Evaluating the accuracy of sampling-based approaches to calculating posterior moments. In Bayesian statistics: Vol. 4. Oxford: Clarendon Press.
Ghosh, M., Ghosh, A., Chen, M.-H., & Agresti, A. (2000). Noninformative priors for one-parameter item response models. Journal of Statistical Planning and Inference, 88, 99–115.
Gustafson, P. (2005). On model expansion, model contraction, identifiability and prior information: two illustrative scenarios involving mismeasured variables (with discussion). Statistical Science, 20, 111–140.
Halmos, P. (1951). Introduction to Hilbert space, and the theory of spectral multiplicity. New York: Chelsea.
Hambleton, R.K., Swaminathan, H., & Rogers, H.J. (1991). Fundamentals of item response theory. Thousand Oaks: Sage.
Heidelberger, P., & Welch, P.D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31, 1109–1144.
Hutchinson, T.P. (1991). Ability, parameter information, guessing: statistical modelling applied to multiple-choice tests. Rundle Mall: Rumsby Scientific Publishing.
Karabatsos, G., & Walker, S. (2009). Coherent psychometric modelling with Bayesian nonparametrics. British Journal of Mathematical & Statistical Psychology, 62, 1–20.
Kass, R., Carlin, B., Gelman, A., & Neal, R. (1998). Markov chain Monte Carlo in practice: a roundtable discussion. American Statistician, 52, 93–100.
Koopmans, T.C., & Reiersøl, O. (1950). The identification of structural characteristics. The Annals of Mathematical Statistics, 21, 165–181.
Lancaster, T. (2000). The incidental parameter problem since 1948. Journal of Econometrics, 95, 391–413.
van der Linden, W., & Hambleton, R.K. (1997). Handbook of modern item response theory. Berlin: Springer.
Lindley, D.V. (1971). Bayesian statistics: a review. Philadelphia: Society for Industrial and Applied Mathematics.
Maris, G., & Bechger, T. (2009). On interpreting the model parameters for the three parameter logistic model. Measurement: Interdisciplinary Research and Perspectives, 7, 75–86.
McDonald, R.P. (1999). Test theory: a unified treatment. Hillsdale: Erlbaum.
Millsap, R., & Maydeu-Olivares, A. (2009). Quantitative methods in psychology. Thousand Oaks: Sage.
Miyazaki, K., & Hoshino, T. (2009). A Bayesian semiparametric item response model with Dirichlet process priors. Psychometrika, 74, 375–393.
Molenaar, I.W. (1995). Estimation of item parameters. In G.H. Fischer & I.W. Molenaar (Eds.), Rasch models. Foundations, recent developments and applications. New York: Springer (Chapter 3).
Mouchart, M. (1976). A note on Bayes theorem. Statistica, 36, 349–357.
Poirier, D.J. (1998). Revising beliefs in nonidentified models. Econometric Theory, 14, 483–509.
R Development Core Team (2006). R: a language and environment for statistical computing [Computer software manual]. Vienna, Austria. http://www.R-project.org (ISBN 3-900051-07-0).
Rao, M.M. (2005). Conditional measures and applications (2nd ed.). London: Chapman & Hall/CRC.
Rizopoulos, D. (2006). ltm: an R package for latent variable modelling and item response theory analyses. Journal of Statistical Software, 17(5), 1–25. http://www.jstatsoft.org/v17/i05/.
Roberts, G.O., & Rosenthal, J. (1998). Markov-chain Monte Carlo: some practical implications of theoretical results. Canadian Journal of Statistics, 26(1), 5–20.
San Martín, E., Del Pino, G., & De Boeck, P. (2006). IRT models for ability-based guessing. Applied Psychological Measurement, 30, 183–203.
San Martín, E., & González, J. (2010). Bayesian identifiability: contributions to an inconclusive debate. Chilean Journal of Statistics, 1, 69–91.
San Martín, E., González, J., & Tuerlinckx, F. (2009). Identified parameters, parameters of interest and their relationships. Measurement: Interdisciplinary Research and Perspectives, 7, 95–103.
San Martín, E., Jara, A., Rolin, J.-M., & Mouchart, M. (2011). On the Bayesian nonparametric generalization of IRT-type models. Psychometrika, 76, 385–409.
San Martín, E., Mouchart, M., & Rolin, J.M. (2005). Ignorable common information, null sets and Basu’s first theorem. Sankhyā, 67, 674–698.
San Martín, E., & Quintana, F. (2002). Consistency and identifiability revisited. Brazilian Journal of Probability and Statistics, 16, 99–106.
Shiryaev, A.N. (1995). Probability (2nd ed.). Berlin: Springer.
Spivak, M. (1965). Calculus on manifolds: a modern approach to classical theorems of advanced calculus. Cambridge: Perseus Book Publishing.
Swaminathan, H., & Gifford, J.A. (1986). Bayesian estimation in the three-parameter logistic model. Psychometrika, 51, 589–601.
Thissen, D. (2009). On interpreting the parameters for any item response model. Measurement: Interdisciplinary Research and Perspectives, 7, 104–108.
Thissen, D., & Wainer, H. (2001). Item response models for items scored in two categories. Berlin: Springer.
Woods, C.M. (2006). Ramsay-curve item response theory (RC-IRT) to detect and correct for nonnormal latent variables. Psychological Methods, 11, 253–270.
Woods, C.M. (2008). Ramsay-curve item response theory for the three-parameter logistic item response model. Applied Psychological Measurement, 32, 447–465.
Woods, C.M., & Thissen, D. (2006). Item response theory with estimation of the latent population distribution using spline-based densities. Psychometrika, 71, 281–301.
Xie, Y., & Carlin, B.P. (2006). Measures of Bayesian learning and identifiability in hierarchical models. Journal of Statistical Planning and Inference, 136, 3458–3477.
Acknowledgements
The work developed in this paper was presented in a Symposium on Identification Problems in Psychometrics at the International Meeting of the Psychometric Society IMPS 2009. The meeting was held in Cambridge (UK), in July 2009. The first author gratefully acknowledges the partial financial support from the ANILLO Project SOC1107 from the Chilean Government. The third author acknowledges the partial financial support from the Grant FONDECYT 11100076 from the Chilean Government. The authors gratefully acknowledge several discussions with Claudio Fernández (Faculty of Mathematics, Pontificia Universidad Católica de Chile) and Paul De Boeck (University of Amsterdam). This paper benefited from the helpful suggestions of three anonymous referees and the associate editor. In particular, one of the questions proposed by a referee led us to correct an error in a conclusion of Theorem 2.
Appendices
Appendix A. Identifiability of the Scale Parameter σ by ω₁₂, δ₁, δ₂, ω₁, and ω₂
To prove that the function ω₁₂ given by (3.11) is a strictly increasing continuous function of σ, we need to study the sign of its derivative with respect to σ. This requires not only the Implicit Function Theorem (Spivak 1965), but also regularity conditions that allow derivatives to be taken under the integral sign. We accordingly assume that the cumulative distribution function F has a continuous density function f strictly positive on ℝ. Furthermore, we need the derivatives under the integral sign of the function p(σ,β), as defined in (3.7), with respect to σ and to β. Consequently, it is assumed that, \(\forall\sigma\in\mathbb{R}_{0}^{+} \) and ∀β∈ℝ, there exist ϵ>0 and η>0, such that
Under these regularity conditions, the function p(σ,β) is continuously differentiable under the integral on \(\mathbb {R}_{0}^{+}\times\mathbb{R}\) and, therefore,
Thus, \(\overline{p} ( \sigma, \alpha) \) as defined by (3.9) is also continuously differentiable on \(\mathbb{R}_{0}^{+} \times( 0 , 1 ) \); and from (3.10), we obtain that
where
Combining (A.1) and (A.2), we obtain that
where
Thanks to the regularity conditions allowing derivatives of p(σ,β) to be taken under the integral sign, and to the fact that F≤1, it can be shown that ω₁₂ is continuously differentiable under the integral sign in σ, β₁, and β₂; therefore, the function φ(σ,δ₁,δ₂,ω₁,ω₂) is continuously differentiable under the integral sign with respect to σ. It remains to show that the derivative w.r.t. σ is strictly positive. Let us consider the sign of one of the two terms of the derivative of φ(σ,δ₁,δ₂,ω₁,ω₂). Using (A.3.ii), we obtain that
This second term can therefore be written as
Now, since \(F [\sigma\theta- \overline{p} ( \sigma, \omega_{2}/\delta_{2} ) ] \) is a strictly increasing function of θ, the covariance between θ and \(F [\sigma\theta- \overline{p} ( \sigma, \omega_{2}/\delta_{2} ) ] \) (with respect to \(G_{\sigma, \omega_{1}/\delta_{1}} \)) is strictly positive (if θ is not degenerate). Furthermore,
is clearly strictly positive. The two terms of the derivative of φ(σ,δ₁,δ₂,ω₁,ω₂) are, therefore, strictly positive. □
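The covariance step in the argument above lends itself to a quick numerical illustration: for any strictly increasing transform of θ, such as \(F[\sigma\theta- \overline{p} ( \sigma, \omega_{2}/\delta_{2} ) ]\), the covariance with θ is strictly positive when θ is not degenerate. A Monte Carlo sketch, where a logistic F and standard normal θ are illustrative choices and the constant `a` stands in for \(\overline{p}(\sigma,\omega_2/\delta_2)\):

```python
import math
import random

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
sigma, a, n = 1.3, 0.4, 200_000

# theta ~ N(0, 1); F(sigma*theta - a) is strictly increasing in theta,
# so Cov(theta, F(sigma*theta - a)) must be strictly positive.
thetas = [random.gauss(0.0, 1.0) for _ in range(n)]
fvals = [logistic(sigma * t - a) for t in thetas]
mean_t = sum(thetas) / n
mean_f = sum(fvals) / n
cov = sum((t - mean_t) * (f - mean_f)
          for t, f in zip(thetas, fvals)) / n
print(cov)  # strictly positive
```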
Appendix B. Identification of the Random-Effects 2PL Model
Random-effects 2PL-type models are specified under the same hypotheses as the random-effects 1PL-G model (see Section 3.1), but the conditional distribution of \(Y_{ij}\) given the person specific ability \(\theta_i\) is given by
where F is a strictly increasing distribution function with a continuous density function f strictly positive on ℝ. Let us suppose that the person specific abilities are distributed according to a known distribution G.
B.1 Identification of the Difficulty Parameters
Let
which is a continuous function, strictly decreasing in \(\beta_j\). Define
Since \(\overline{p} [ \alpha,p(\alpha,\beta) ]=\beta\), it follows that \(\beta_{j}=\overline{p} [ \alpha_{j},\gamma_{j} ]\) for each j=1,…,J. Thus, the item parameter \(\beta_j\) becomes identified once the discrimination parameter \(\alpha_j\) is identified.
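The inversion argument above admits a simple numerical sketch: with G and \(\alpha_j\) known, the marginal success probability is strictly decreasing in \(\beta_j\), so \(\beta_j\) can be recovered by bisection. Standard normal G and a logistic F are illustrative assumptions; the function names are ours.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Trapezoid grid on [-8, 8] with standard-normal weights; the tails
# beyond +-8 are negligible for this illustration.
N = 2000
GRID = [-8.0 + 16.0 * k / N for k in range(N + 1)]
W = [math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi) for t in GRID]
DT = 16.0 / N

def marginal_p(alpha, beta):
    """p(alpha, beta) = E_G[ F(alpha*theta - beta) ] with G = N(0, 1)."""
    return sum(w * logistic(alpha * t - beta) for t, w in zip(GRID, W)) * DT

def recover_beta(alpha, gamma, lo=-20.0, hi=20.0, tol=1e-10):
    """Invert the strictly decreasing map beta -> p(alpha, beta)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if marginal_p(alpha, mid) > gamma:
            lo = mid   # probability too high: the true beta lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha, beta_true = 1.0, 0.7
gamma = marginal_p(alpha, beta_true)   # the identified marginal P[Y_j = 1]
print(recover_beta(alpha, gamma))      # recovers beta_true = 0.7
```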
B.2 Monotonicity of \(P[Y_{ij}=1,Y_{ik}=1\mid\alpha_{1:J},\beta_{1:J}]\)
In order to identify the discrimination parameters, we need to study the monotonicity of \(P[Y_{ij}=1,Y_{ik}=1\mid\alpha_{1:J},\beta_{1:J}]\) as a function of the discrimination parameters. Using the equality \(\overline{p} [ \alpha, p(\alpha,\beta) ]=\beta\), it follows that
where
Thus,
where \(\frac{\partial}{\partial\alpha} \overline{p} [ \alpha ,\gamma ]=E_{\alpha,\overline{p}[ \alpha,\gamma]}[ X ]\).
Let
Using (B.2), it follows that
provided \(\alpha_{k}>0\), since in this case \(F[ \alpha_{k}X-\overline {p}(\alpha_{k},\gamma_{k}) ]\) is a strictly increasing function of X and, consequently, the covariance between X and \(F[ \alpha_{k}X-\overline{p}(\alpha_{k},\gamma_{k}) ]\) is strictly positive (if X is not degenerate). Thus, \(h(\alpha_{j},\gamma_{j},\alpha_{k},\gamma_{k})\) is a strictly increasing function in \(\alpha_{j}\). Similarly, it is concluded that \(h(\alpha_{j},\gamma_{j},\alpha_{k},\gamma_{k})\) is also a strictly increasing function in \(\alpha_{k}\) provided \(\alpha_{j}>0\). The inverse function of h can, therefore, be defined as
and consequently,
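The monotonicity established in B.2 can be checked numerically: holding both marginals \(\gamma_j\) and \(\gamma_k\) fixed (by re-solving \(\beta=\overline{p}(\alpha,\gamma)\) as α moves), the joint probability h increases in \(\alpha_j\). A sketch under illustrative assumptions (logistic F, standard normal G; the function names are ours):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Quadrature grid for G = N(0, 1) on [-8, 8].
N = 2000
GRID = [-8.0 + 16.0 * k / N for k in range(N + 1)]
W = [math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi) for t in GRID]
DT = 16.0 / N

def marginal_p(alpha, beta):
    return sum(w * logistic(alpha * t - beta) for t, w in zip(GRID, W)) * DT

def beta_for(alpha, gamma, lo=-20.0, hi=20.0, tol=1e-9):
    """beta = pbar(alpha, gamma): the beta giving marginal probability gamma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if marginal_p(alpha, mid) > gamma else (lo, mid)
    return 0.5 * (lo + hi)

def h(a_j, g_j, a_k, g_k):
    """Joint probability P[Y_j = 1, Y_k = 1] with both marginals held fixed."""
    b_j, b_k = beta_for(a_j, g_j), beta_for(a_k, g_k)
    return sum(w * logistic(a_j * t - b_j) * logistic(a_k * t - b_k)
               for t, w in zip(GRID, W)) * DT

low, high = h(0.8, 0.6, 1.0, 0.4), h(1.2, 0.6, 1.0, 0.4)
print(low, high)  # the joint probability increases with alpha_j
```

Note that both joint probabilities exceed the product of the marginals (0.24), reflecting the positive association induced by the common ability.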
B.3 Identification of the Discrimination Parameters
Let J≥3. Using (B.3), we have that
Therefore, by (B.5.i) it follows that \(\alpha_{2}=\overline{h}(\alpha_{1},\gamma_{1},\gamma_{12},\gamma_{2})\) and \(\alpha_{3}=\overline{h}(\alpha_{1},\gamma_{1},\gamma_{13},\gamma_{3})\). Thus,
The identification of α₁ follows because the function k is invertible. As a matter of fact,
But
Using (B.4), we conclude that \(\frac{\partial }{\partial\alpha_{1}}k(\alpha_{1},\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{12},\gamma_{13})<0\) and, therefore, k is invertible. Finally, by (B.5.i), the identification of the remaining discrimination parameters then follows.
The previous arguments have been established assuming that the distribution generating the person specific abilities is known. If such a distribution is known up to the scale parameter σ, the previous arguments apply for \(\widetilde{\alpha}_{j}=\alpha_{j}\sigma\). Thus, we obtain the following theorem.
Theorem B.1
Consider the statistical model induced by both the 2PL model (B.1) and the person specific abilities distributed according to a distribution G known up to the scale parameter σ, where F is a continuous, strictly increasing distribution function with a density function f strictly positive on ℝ. The parameters of interest \((\alpha_{1:J},\beta_{1:J},\sigma)\) are identified by one observation provided that
1. At least three items are available.
2. The discrimination parameter α₁ is fixed at 1.
If the distribution G is fully known, then the parameters of interest \((\alpha_{1:J},\beta_{1:J})\) are identified provided that at least three items are available.
It is worth remarking that the positivity of the discrimination parameters is established by the identification analysis itself.
Appendix C. Proof of Theorem 5
The identification analysis of the parameters of interest \((\boldsymbol{\beta}_{1:\infty},\boldsymbol{c}_{1:\infty},G)\) should be done in the asymptotic Bayesian model defined on \((Y_i,\boldsymbol{\beta}_{1:\infty},\boldsymbol{c}_{1:\infty},G)\), where \(Y_i\in\{0,1\}^{\mathbb{N}}\) corresponds to the response pattern of person i. According to Definition 2, the corresponding minimal sufficient parameter is given by the following σ-field:
where \([\sigma(Y_1)]^{+}\) denotes the set of positive functions f such that \(f=g(Y_1)\), with g a measurable function. The identification of the semi-parametric 1PL-G model amounts to proving, under identification restrictions if necessary, that \((\boldsymbol{\beta}_{1:\infty},\boldsymbol{c}_{1:\infty},G)\) is a measurable function of the parameter \(\mathcal{A}^{*}\). By the Doob–Dynkin lemma, this is equivalent to proving that, under identification restrictions if necessary,
This equality relies on the following steps:
Step 1:
By the same arguments used to establish identity (4.4), it follows that
$$\sigma(\boldsymbol{\beta}_{2:\infty}, \boldsymbol{\delta}_{2:\infty}) \doteq\sigma(\beta_j :j\ge2)\vee \sigma(\delta_j:j\geq2) \subset\mathcal{A}^*, $$

where \(\delta_j\doteq 1-c_j\).
Step 2:
Hypotheses H1, H2, H3, and H4 jointly imply that \(\{(u_j,v_j):2\leq j<\infty\}\) are iid conditionally on \((\beta_1,\delta_1,K,H)\). By the Strong Law of Large Numbers, it follows that
for \(B\in\mathcal{B}^{+}\times\mathcal{B}\). But Propositions 2 and 3 ensure that \(\{u_j:2\leq j<\infty\}\) and \(\{v_j:2\leq j<\infty\}\) are identified parameters. It follows that \(\{(u_j,v_j):2\leq j<\infty\}\) is measurable w.r.t. \(\mathcal{A}^{*}\) and, consequently, \(W^{\beta _{1},\delta_{1}}(B)\) is measurable w.r.t. \(\overline{\mathcal{A}^{*}}\) for all \(B\in\mathcal{B}^{+}\times\mathcal{B}\). The upper bar denotes a completed σ-field; for the definition, see Chapter 2 in Florens et al. (1990).
Step 3:
Using (4.4), it follows that
and
Step 3.a:
By the law of large deviations (see Shiryaev 1995, Chapter IV, Section 5), it follows that
$$\overline{Y}_{iJ}-\overline{p}_{iJ}\longrightarrow0\quad \mbox {a.s. conditionally on $(\beta_{1},\delta_{1}, \theta_{i})$ as $J\to\infty$.} $$But as J→∞
Therefore,
$$\overline{Y}_{iJ}\longrightarrow p(\beta_1, \delta_1,\theta_i)\quad \mbox{a.s. conditionally on}\ ( \beta_1,\delta_1,\theta_i)\ \mbox{as}\ J\to \infty. $$

Step 3.b:
It follows that, for all \(g\in C_b([0,1])\),
$$g (\overline{Y}_{iJ} )\longrightarrow g \bigl(p(\beta_1, \delta_1,\theta_i) \bigr)\quad\mbox{a.s. and in}\ L^1\ \mbox{conditionally on}\ (\beta_1,\delta_1, \theta_i)\ \mbox{as}\ J\to\infty. $$
Then, for all \(g\in C_b([0,1])\),
By definition of conditional expectation, \(E [g (\overline{Y}_{iJ} )\mid\boldsymbol{\beta}_{1:\infty },\boldsymbol{c}_{1:\infty},G ]\) is measurable w.r.t. \(\mathcal{A}^{*}\). Thus,
$$\int_{\mathbb{R}_+} g \biggl\{ 1-E \biggl[\frac{u_j}{v_j+\frac {1}{\delta_1}+\frac{1}{\delta_1\exp(\beta_1)}\exp(\theta_i)}\Bigm| \beta_1,\delta_1,\theta_i \biggr] \biggr\}G(d\theta) $$is measurable w.r.t. \(\overline{\mathcal{A}^{*}}\); the bar is added because such an integral is the a.s. limit of the sequence \(\{E [g (\overline{Y}_{iJ} )\mid\boldsymbol{\beta}_{1:\infty },\boldsymbol{c}_{1:\infty},G ]:J\in\mathbb{N}\}\).
Step 3.c:
Using the transformation (4.10), it follows that
$$\int_{\mathbb{R}_+}g \bigl[L(x) \bigr]G_{\beta_1,\delta_1}(dx) \quad \mbox{is}\ \overline{\mathcal{A}^*}\hbox{-measurable}, $$where
$$L(x)=\int_{\mathbb{R}_+\times\mathbb{R}} \biggl(1-\frac {u}{v+x} \biggr) W^{\beta_1,\delta_1}(du,dv). $$The function L(⋅) is a continuous, strictly increasing function from \((\delta^{-1},\infty)\) to (0,1) that is known because it is measurable w.r.t. \(\sigma(W^{\beta_{1},\delta_{1}})\). By Step 2, \(\sigma(W^{\beta_{1},\delta_{1}})\subset\overline{\mathcal{A}^{*}}\). In particular, for every function \(f\in C_b(\mathbb{R}_+)\), take \(g(y)=f [\overline{L}(y) ]\), where \(\overline{L}(\alpha )=\inf\{x:L(x)\geq\alpha\}\). It follows that
$$\int_{\mathbb{R}_+}f(x)G_{\beta_1,\delta_1}(dx) $$is measurable w.r.t. \(\overline{\mathcal{A}^{*}}\). Considering
as n→∞, the monotone convergence theorem implies that, for every x∈ℝ+, \(G_{\beta_{1},\delta_{1}}((0,x])\), and so \(G_{\beta_{1},\delta_{1}}\), is measurable w.r.t. \(\overline{\mathcal{A}^{*}}\).
Step 4:
From Steps 1 and 3c, it follows that \((\boldsymbol{\beta}_{2:\infty},\boldsymbol{\delta}_{2:\infty },G_{\beta_{1}, \delta_{1}})\) is measurable w.r.t. \(\overline{\mathcal{A}^{*}}\). By the Doob–Dynkin lemma, this is equivalent to
$$\sigma(\boldsymbol{\beta}_{2:\infty},\boldsymbol{\delta }_{2:\infty}) \vee\sigma(G_{\beta_1,\delta_1})\subset\overline {\mathcal{A}^*}. $$However, \(\sigma(G_{\beta_{1},\delta_{1}})\subset\sigma(G)\vee\sigma (\beta_{1},\delta_{1})\). Therefore, two restrictions should be introduced in order to obtain the equality \(\sigma(G_{\beta_{1},\delta_{1}})= \sigma(G)\vee\sigma(\beta_{1},\delta_{1})\). Two possibilities can be considered:
1. The first possibility consists in fixing two q-quantiles of G. In fact, let
$$x_1=\inf\bigl\{x:G_{\beta_1,\delta_1}(x)>q_1\bigr\},\qquad x_2=\inf\bigl\{ x:G_{\beta_1,\delta_1}(x)>q_2\bigr\}. $$Using (4.10), this is equivalent to
It follows that
$$\beta_1=\ln \biggl[\frac{x_1e^{y_2}-x_2e^{y_1}}{x_2-x_1} \biggr], \qquad \delta_1=\frac{e^{y_2}-e^{y_1}}{x_1e^{y_2}-x_2e^{y_1}}; $$that is, β₁ and δ₁ are identified since x₁, x₂, y₁, and y₂ depend on G, which is identified.
2. The second possibility consists in fixing the mean and the variance of the distribution of exp(θ), namely
$$E_G\bigl(e^{\theta}\bigr)=\mu,\qquad V_G \bigl(e^{\theta}\bigr)=\sigma^2. $$Using (4.10), this is equivalent to
$$m=E_{G_{\beta_1,\delta_1}}(X)=\frac{1}{\delta_1}+\frac{\mu }{\delta_1e^{\beta_1}},\qquad v^2=V_{G_{\beta_1,\delta_1}}(X)=\frac {\sigma^2}{\delta_1^2e^{2\beta_1}}. $$It follows that
$$\beta_1=\ln \biggl(\frac{m\sigma}{v}-\mu \biggr),\qquad \delta_1^{-1}=m-\frac{\mu v}{\sigma}. $$For instance, if μ=0 and σ=1, then
$$\beta_1=\ln \biggl(\frac{m}{v} \biggr),\qquad \delta_1=\frac{1}{m}; $$that is, β₁ and δ₁ are identified since m and v depend on G, which is identified.
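The second possibility can be verified by round-trip arithmetic. Assuming, as the displayed moments indicate, that the transformed variable in (4.10) has the form X = 1/δ₁ + e^θ/(δ₁e^{β₁}), the moment formulas invert exactly:

```python
import math

# Fix the moments of exp(theta) under G and pick true item parameters.
mu, sigma = 0.5, 2.0                 # E_G(e^theta), sqrt(V_G(e^theta))
beta1_true, delta1_true = 0.3, 0.8

# Identified moments of X = 1/delta_1 + e^theta / (delta_1 * e^{beta_1}).
scale = delta1_true * math.exp(beta1_true)
m = 1.0 / delta1_true + mu / scale   # E(X)
v = sigma / scale                    # sqrt(V(X))

# Invert: beta_1 = ln(m*sigma/v - mu),  1/delta_1 = m - mu*v/sigma.
beta1 = math.log(m * sigma / v - mu)
delta1 = 1.0 / (m - mu * v / sigma)
print(beta1, delta1)  # recovers (0.3, 0.8)
```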
San Martín, E., Rolin, JM. & Castro, L.M. Identification of the 1PL Model with Guessing Parameter: Parametric and Semi-parametric Results. Psychometrika 78, 341–379 (2013). https://doi.org/10.1007/s11336-013-9322-8