Introduction

Rating scales (also called assessment scales) are used to elicit data about quantitative entities (e.g., research collaboration, as in Bornmann et al. (2009)). The predictability of rating scales can often be improved. Rating scales commonly use values from 1 to 10, and some scales have over 100 items (questions) to rate. Other popular terms for rating scales are survey and questionnaire, although a questionnaire is a method of data collection, while a survey is not necessarily conducted by questionnaire; some surveys are conducted by interviews or by analyzing web pages. Rating itself is very popular on the Internet for customer reviews, which often use five stars (e.g., on Amazon.com) instead of ordinal numbers; such a rating can be regarded as a one-item rating scale. Surveys appear in Cinzia and Wolfgang (2016), Fig. 1 (captioned “Sketch of data integration in use for different purposes with interference points for standardisation”), as one of the main sources of data.

A questionnaire-based survey, answered by 1704 researchers from 86 countries, was conducted in a Scientometrics study (Buela-Casal and Zych 2012) on the impact factor, which is regarded as a controversial metric. Rating scales were also used in Prpic (2007) and Koczkodaj et al. (2014). In Kakiashvili et al. (2012) and Gan et al. (2013), a different type of rating scale improvement, based on pairwise comparisons, was used. Evidence that pairwise comparisons improve accuracy is given in Koczkodaj (1996) and Koczkodaj (1998).

According to Moigne and Ragouet (2012):

... the differentiation of sciences can be explained in a large part by the diffusion of generic instruments created by research-technologists moving in interstitial arenas between higher education, industry, statistics institutes or the military. We have applied this analysis to research on depression by making the hypothesis that psychiatric rating scales could have played a similar role in the development of this scientific field.

The absence of a well-established unit (e.g., the kilogram or the meter) for measuring science compels us to use rating scales. They have great application in scientometrics for measuring and analyzing performance based on subjective assessments. Even the granting of academic degrees is based on rating scales (in this case, several exams, often given to students as questionnaires). Evidently, we regard these rating scales as accurate; otherwise, our academic degrees would not have much value.

The importance of processing subjectivity was driven by the idea of bounded rationality, proposed by Herbert A. Simon (a Nobel Prize laureate) as an alternative basis for the mathematical modelling of decision making.

The data model

Data collected by a rating scale with a fixed number of items (questions) are stored in a table with one decision variable (in our case, binary). The parametrized classifier is usually created from the total score of all items. The outcome of such a rating scale is usually compared with external validation provided by assessing professionals (e.g., grant application committees).
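As a minimal sketch of this data model (all names here are hypothetical), assume a data frame `data` with item columns V1 to V21 and a binary decision variable `dx`:

```r
# Minimal sketch of the data model (hypothetical names): one row per
# respondent, item columns V1..V21 and a binary decision variable 'dx'.
total  <- rowSums(data[, paste0("V", 1:21)])  # total score: the usual classifier
cutoff <- 10                                  # an example cut-off value
pred   <- as.integer(total >= cutoff)         # predicted decision (0/1)
table(predicted = pred, actual = data$dx)     # compare with external validation
```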

Our approach not only reduces the number of items but also sequences them according to their contribution to predictability. It is based on the Receiver Operating Characteristic (ROC), which gives individual scores for all examined items (Table 1).

Predictability measures

The term “receiver operating characteristic” (ROC), or “ROC curve”, was coined for a graphical plot illustrating the performance of radar operators (hence “operating”). A binary classifier represented the absence or presence of an enemy aircraft and was used to plot the fraction of true positives out of the total actual positives (TPR, the true positive rate) versus the fraction of false positives out of the total actual negatives (FPR, the false positive rate). Positive instances (P) and negative instances (N) for some condition are counted and stored as four outcomes in a 2×2 contingency table, or confusion matrix, as follows:

Table 1 The confusion matrix

|                     | Predicted positive  | Predicted negative  |
|---------------------|---------------------|---------------------|
| Actual positive (P) | TP (true positive)  | FN (false negative) |
| Actual negative (N) | FP (false positive) | TN (true negative)  |

In assessment and evaluation research, the ROC curve is a representation of a “separator” (or decision) variable. The decision variable is usually “has a property” versus “does not have a property”, or some condition to meet (pass/fail). The frequencies of positive and negative cases of the diagnostic test vary with the “cut-off” value for positivity. By changing the “cut-off” value from 0 (all negatives) to a maximum value (all positives) and plotting TPR (the true positive rate, also called sensitivity) versus FPR (the false positive rate, equal to one minus specificity) across the varying cut-offs, we obtain a curve in the unit square called the ROC curve.
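A minimal sketch of this construction (assuming a numeric score vector `score` and binary labels `label`, with 1 for positive):

```r
# Sweep the cut-off from "all negatives" to "all positives" and record
# (FPR, TPR) at each step; plotting these points yields the ROC curve.
roc_points <- function(score, label) {
  cuts <- c(Inf, sort(unique(score), decreasing = TRUE))
  t(sapply(cuts, function(cut) {
    pred <- score >= cut
    c(FPR = sum(pred & label == 0) / sum(label == 0),
      TPR = sum(pred & label == 1) / sum(label == 1))
  }))
}
```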

According to Fawcett (2006), the area under the curve (the AUC or AUROC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming ‘positive’ ranks higher than ‘negative’).
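This interpretation can be verified numerically; a minimal sketch (same assumed `score` and `label` vectors as above):

```r
# AUC as the probability that a randomly chosen positive scores higher
# than a randomly chosen negative (ties counted as 1/2).
auc_prob <- function(score, label) {
  pos <- score[label == 1]
  neg <- score[label == 0]
  mean(outer(pos, neg, ">") + 0.5 * outer(pos, neg, "=="))
}
```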

The AUC is closely related to the Mann–Whitney U test, which tests whether positives are ranked higher than negatives, and is equivalent to the Wilcoxon rank-sum test. The AUC is related to the Gini coefficient, given by the formula

$$\begin{aligned} G_1 = 2 \cdot AUC - 1, \end{aligned}$$
(1)

where

$$\begin{aligned} G_1 = 1 - \sum _{k=1}^n (X_{k} - X_{k-1}) (Y_k + Y_{k-1}) \end{aligned}$$

In this way, it is possible to compute the AUC as a sum of trapezoidal approximations. Practically all advanced statistics can be questioned, and they often gain recognition only after intensive use. The number of publications with ROC listed by PubMed.com has exploded in the last decade, reaching 3588 in 2013. An excellent tutorial-type introduction to ROC is Fawcett (2006). ROC was introduced during World War II for evaluating the performance of radar operators. Its first use in health-related sciences, according to a Medline search, is traced to Carterette and Jones (1967).
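A sketch of the trapezoidal computation, using the (FPR, TPR) points from `roc_points` above as the \((X_k, Y_k)\) in the formula:

```r
# Trapezoidal AUC over ROC points (X = FPR, Y = TPR); the Gini
# coefficient then follows from Eq. (1).
auc_trapezoid <- function(pts) {
  x <- pts[, "FPR"]; y <- pts[, "TPR"]
  o <- order(x, y)                     # walk the curve left to right
  x <- x[o]; y <- y[o]
  sum(diff(x) * (head(y, -1) + tail(y, -1))) / 2
}
# gini <- 2 * auc_trapezoid(pts) - 1   # Eq. (1)
```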

Table 2 AUC of individual variables in the original data

Validation of the predictability improvement

Supervised learning is the process of inferring a decision (a classification) from labeled training data. However, supervised learning may also employ other techniques, including statistical methods that summarize and explain key features of the data. In unsupervised learning, clustering is the most popular method for analyzing data, and k-means clustering works well when the number of classes is given. In our case, there are two classes: 0 for a “negative” and 1 for a “positive” outcome of the diagnosis of depression.
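As an illustration, a two-class k-means run in R (a sketch; `data` and the item names are the hypothetical layout used above):

```r
# Two-class k-means on the item responses; the cluster labels can then
# be compared against the binary diagnosis 'dx'.
set.seed(1)                                   # reproducible start
km <- kmeans(data[, paste0("V", 1:21)], centers = 2)
table(cluster = km$cluster, diagnosis = data$dx)
```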

The area under the receiver operating characteristic curve (AUC) reflects the relationship between sensitivity and specificity for a given scale. An ideal scale has an AUC equal to 1, but this is not realistic in clinical practice. Cut-off values for positive and negative tests influence specificity and sensitivity, but they do not affect the AUC. The AUC is widely recognized as the performance measure of a diagnostic test’s discriminatory power (see Lasko et al. 2005; Zou et al. 2007). In our case, the input data have an AUC of 81.17%.

The following System R code was used to compute the AUC for all 21 individual items:

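(The published listing appears in the source only as a figure; the following is a minimal reconstruction sketch using the pROC package, with the hypothetical data layout from above.)

```r
# Compute the AUC of each individual item against the binary decision
# variable 'dx' (a sketch; pROC provides roc() and auc()).
library(pROC)

items <- paste0("V", 1:21)
aucs  <- sapply(items, function(v)
  as.numeric(auc(roc(data$dx, data[[v]], quiet = TRUE))))
round(sort(aucs), 3)   # individual AUC values, as reported in Table 2
```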

When AUC values have been computed for all individual variables, we arrange them in order. These variables are shown in bold in Table 3. Values in the row below are for the running total up to the current variable. Evidently, the first value, 0.725, is the same as in Table 2, since the running total of a single variable is the variable itself. However, the third value in the second row (0.795) is not for variable 7 alone but for the total of variables 1, 14, and 7. In particular, the last value (0.812) in Table 3 is for the total of all variables. Admittedly, these numbers are very close to each other, but their line plot (Fig. 1) demonstrates the usefulness of the ordering. The curve peaks at the sixth variable (variable 15), with a slight decline until variable 16.
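A sketch of the running-total computation (continuing the assumptions above; items are assumed to enter in order of their individual AUC, strongest first, which matches the running totals described for Table 3):

```r
# AUC of running totals: items enter one at a time and the AUC of the
# cumulative score is recomputed at each step (plotted in Fig. 1).
library(pROC)

ord     <- names(sort(aucs, decreasing = TRUE))   # e.g. V1, V14, V7, ...
running <- numeric(nrow(data))
run_auc <- sapply(ord, function(v) {
  running <<- running + data[[v]]                 # cumulative total score
  as.numeric(auc(roc(data$dx, running, quiet = TRUE)))
})
round(run_auc, 3)    # the running-total AUC values of Table 3
```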

Table 3 AUC of running variable totals
Fig. 1 AUC for the running total of all variables

Relating the results to graded response model

Let us examine how our results relate to the Graded Response Model (GRM). The GRM is the Item Response Theory (IRT) counterpart for ordinal, rather than binary, data; IRT is well covered by a Wikipedia article. The GRM is usually used to establish the usefulness of test items (Ayala et al. 1992).

The GRM is used in psychometric scales to determine the level of three characteristics of each item, namely: (a) the item’s difficulty, (b) the item’s discriminant power, and (c) the item’s guessing factor.

An item’s difficulty describes how difficult or easy it is for individuals to answer the item. A high positive value means that the item is very difficult; a high negative value means that the item is very easy.

An item’s discriminant power describes the ability of a specific item to distinguish between upper- and lower-ability individuals on a test.

An item’s guessing factor describes the probability that an individual with a low level of the measured trait (low depression) achieves a high score on the item.

The aim of our analysis was to establish whether or not the GRM indicates the same items as the proposed method based on the AUC. Two GRM models were built for the given rating scale:

Constrained (assuming equal discrimination parameters across items),

Unconstrained (assuming unequal discrimination parameters across items).

The System R ltm package (Rizopoulos 2006) was used in our analysis. Fig. 2 shows the System R code for the GRM models.

Fig. 2 System R code for GRM models
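(The published listing appears only as a figure; a minimal reconstruction sketch with the ltm package, using the hypothetical data layout from above, is:)

```r
# Constrained vs. unconstrained graded response models (ltm package).
library(ltm)

items <- data[, paste0("V", 1:21)]        # hypothetical item columns
fit1  <- grm(items, constrained = TRUE)   # equal discrimination across items
fit2  <- grm(items, constrained = FALSE)  # item-specific discrimination

anova(fit1, fit2)                    # likelihood ratio test (cf. Table 4)
coef(fit2)                           # item parameters, incl. discrimination (cf. Table 5)
information(fit2, range = c(-4, 4))  # test information (cf. Table 6)
```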

To check whether the unconstrained GRM provides a better fit than the constrained GRM, a likelihood ratio test was used. It revealed that the unconstrained GRM is preferable (fit2 in Table 4). The results of the likelihood ratio test are presented in Table 4.

Table 4 Likelihood ratio for the full GRM model

Table 5 shows the results of the unconstrained GRM model, including the discrimination power of each item.

Table 5 Unconstrained GRM model results for the full rating scale and the item discrimination power

Items selected by the AUC ROC method are shown in bold in Table 5. Evidently, they have large discrimination power (seen in the last column). All selected items discriminate between responses above the mean value, so on their basis we can distinguish between respondents with severe and moderate levels of depression. Discrimination power is a characteristic of the items in a scale; it assesses how respondents differ in their answers to rating scale items. The larger the discrimination power of an item, the better and more useful the item is in the scale (Anastasi and Urbina 1999). Items selected by the proposed (AUC ROC) method have good discrimination, as can be seen in Table 5 (for example, the value 1.799 means that item V1 has good discrimination power).

All items of the given rating scale together provide 56.21% of the total information about the latent trait (in our case, adolescent depression at school). The test information curve (see Table 6) shows that the six selected items provide 19.62% of the total information. The higher the items’ discrimination, the more information (precision) the scale provides.

The GRM indicates different items than our proposed method. AUC ROC is based on the true and false positive rates, while the GRM is based on maximum likelihood estimation. The proposed method has greater diagnostic power. Diagnostic power is the ability of a test to detect all subjects who have the characteristic measured by the test (in our case, depression); a test with maximum diagnostic power would detect all subjects suffering from depression. Unfortunately, most methods for selecting rating scale items do not compare their solutions with the diagnostic criterion. That is why the proposed method is so useful for the selection of items in different measurement tools (examinations, tests, sociometric scales, psychometric scales, and many others).

We used the GRM here to show that even such a powerful method as the GRM (used in psychometrics to indicate which items can discriminate between subjects) does not answer the question of the diagnostic accuracy of items. According to the GRM, items V2 and V3 (Table 5) have considerable discriminant power, but the proposed method shows which items better discriminate between subjects on the basis of the diagnostic criterion.

Table 6 Test information curve

Reduced scale psychometric properties

Confirmatory Factor Analysis (CFA) (Hair et al. 2006; Bartholomew et al. 2008) was used to verify the structure of our results. CFA is a factor analysis whose purpose is to verify structural validity: whether items belong to their scales and what their factor loadings are. A factor loading measures the relation between an observed variable (item) and a latent feature (scale); the higher the factor loading, the stronger the relation and the greater the importance of the item in the scale. More specifically, CFA was used to determine whether:

  • items indicated by AUC form a coherent scale that exhibits good reliability,

  • the reliability of the rating scale was not impaired by the scale item reduction.

Two CFA models were built: the first with all items and the second with a reduced number of items. Since the scale items are categorical, the robust estimator WLSMV (weighted least squares with mean- and variance-adjusted statistics; see Beauducel and Herzberg (2006)) was used, as it is designed for categorical scales and is robust to the lack of normal distributions. The analysis was conducted with the “lavaan” package of System R (Fig. 3).

Fig. 3 System R code for CFA
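(The published listing appears only as a figure; a minimal reconstruction sketch with lavaan, under the same hypothetical data layout, is shown below. The six reduced-scale items are assumed here for illustration only.)

```r
# CFA for the full and the reduced scale with the WLSMV estimator,
# designed for ordered (categorical) indicators.
library(lavaan)

full.model <- paste("depression =~", paste(paste0("V", 1:21), collapse = " + "))
red.items  <- c("V1", "V6", "V7", "V9", "V14", "V15")   # hypothetical selection
red.model  <- paste("depression =~", paste(red.items, collapse = " + "))

fit.full <- cfa(full.model, data = data, estimator = "WLSMV",
                ordered = paste0("V", 1:21))
fit.red  <- cfa(red.model, data = data, estimator = "WLSMV",
                ordered = red.items)

fitMeasures(fit.full, c("cfi", "rmsea"))   # cf. Table 9
fitMeasures(fit.red,  c("cfi", "rmsea"))
```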

Fig. 4 CFA model for the rating scale with all items, presented in AMOS graphics

Table 7 Parameter estimates of the full rating scale

The model for the full rating scale is presented in Fig. 4, and Table 7 presents its parameter estimates. The loadings of the items identified by the presented method as having the greatest predictive power are shown in bold in Table 7. The model with a reduced number of items is shown in Fig. 5, and Table 8 presents the parameter estimates for the reduced scale model.

Fig. 5 CFA model with a reduced number of items, presented in AMOS graphics

Table 8 Parameter estimates for the reduced scale

To check whether the models fit well, we used two fit indices: the CFI (comparative fit index) and the RMSEA (root mean square error of approximation). According to the criteria in Bartholomew et al. (2008) and Saris et al. (2009), both CFA models fit the data well, as illustrated by Table 9. The CFI values for both models exceed the required level of 0.9, and the RMSEA values (lower than 0.08) indicate a good fit of the proposed scale structure to the given data.

Table 9 Results of fit statistics for two rating scale models

For both CFA models, construct reliability (CR) and variance extracted (VE) were computed. CR was computed by the formula (given in Hair et al. (2006)):

$$\begin{aligned} CR = \frac{{\left( \sum _{i=1}^{n}{\lambda }_{i}\right) }^{2}}{{\left( \sum _{i=1}^{n}{\lambda }_{i}\right) }^{2}+\sum _{i=1}^{n}{\delta }_{i}} \end{aligned}$$
(2)

where n is the total number of items, \(\lambda _i\) is the factor loading of item i, and \(\delta _i\) is the error variance of item i, i.e., the amount of variability unexplained by the item.

The formula for computing variance extracted (VE) is based on Hair et al. (2006):

$$\begin{aligned} VE = \frac{\sum _{i=1}^{n}{\lambda }_{i}^{2}}{n} \end{aligned}$$
(3)

where \(\lambda _i\) is the factor loading of item i and n is the number of rating scale items.
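A minimal sketch of Eqs. (2) and (3) in R (assuming `lam` holds the standardized loadings of one factor, e.g., taken from lavaan’s standardizedSolution(); for standardized loadings the error variance is \(\delta _i = 1 - \lambda _i^2\)):

```r
# Construct reliability (CR), Eq. (2), and variance extracted (VE),
# Eq. (3), from a vector of standardized factor loadings 'lam'.
construct_reliability <- function(lam) {
  delta <- 1 - lam^2                        # error variance per item
  sum(lam)^2 / (sum(lam)^2 + sum(delta))
}
variance_extracted <- function(lam) sum(lam^2) / length(lam)
```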

Table 10 Results of CR and VE of two models

The results revealed that the reliability of the reduced model (CR = .822) is about 0.1 lower than that of the full model (CR = .929). Nevertheless, the reliability of the reduced scale remains above the acceptability level: removing 15 items has not impaired it, as Table 10 demonstrates.

For the reduced model, VE = .438, while for the full model, VE = .394. Evidently, the new model has a VE closer to the criterion of .500; the reduced rating scale model has a better VE than the full rating scale model, which means that it better explains the diversity of the results (see Table 10).

On the basis of factor loadings (\(\lambda\)), we are unable to determine which items have the most predictive power. Items V3 and V20 have among the highest factor loadings in the full rating scale, yet they do not have the most predictive power. Therefore, it is impossible to rank rating scale items by factor analysis, but it is possible by the proposed method and by the GRM. However, the GRM cannot compare its solution with a diagnostic criterion, while the proposed method can.

Discussion

The Beck Depression Inventory (BDI) was selected for our study since it is one of the best-known and most widely used self-rating scales for assessing the presence and severity of depressive symptoms (Beck et al. 1996; American Psychiatric Association 2000; World Health Organization 1994). Our data were collected in high schools (Araya et al. 2011, 2013). However, it needs to be stressed that our method is applicable to practically all rating scales.

In summary, both models fit the data well; both have good reliability and relatively good variance extracted. Reducing the number of items did not degrade the psychometric properties but simplified the whole structure (as indicated by the smaller number of degrees of freedom). According to Occam’s razor, the simpler the model, the better. Although it was not the main objective of this study, it is worth noticing that the six selected rating scale items have better predictive power in our study than all 21 items together. We have also demonstrated that our results have a domain (semantic) meaning.

Conclusions

The presented method has reduced the number of rating scale items (variables) to 28.57% (from 21 to 6), making over 70% of the collected data unnecessary. This is not only an essential budgetary saving, as data collection is usually expensive, but it also often reduces data collection errors: the more data are collected, the more errors are expected to occur. With the proposed AUC ROC reduction method, predictability increased by approximately 0.5%. This may seem insignificant, but for a large population it is not. In fact, http://www.who.int/mental_health/publications/action_plan/en/ states that: “Taken together, mental, neurological and substance use disorders exact a high toll, accounting for 13% of the total global burden.”

The proposed use of the AUC for reducing the number of rating scale items is innovative and applicable to practically all rating scales. The System R code is posted on the Internet for general use, and a package for System R is under development. Certainly, more validation cases would be helpful, and assistance will be provided to anyone who wishes to try this method on his/her data.

Supporting information

The source code will be deposited at the SourceForge.net hosting provider (see http://www.sourceforge.net/). According to http://www.sourceforge.net/, SourceForge “creates powerful software in over 400,000 open source projects and hosts over 3.7 million registered users”. It connects well over 40 million customers with more than 4,800,000 downloads a day.