The prospects of a quantitative measurement of agility: A validation study on an agile maturity model
Introduction
The study of agile development and management practices is a relatively new field of research. The term itself, “agile development”, was first coined in the area of software development but similar concepts preceded it in the literature on manufacturing. Today it has become a general project management concept/tool, and the word “agile” is frequently used in the general business and project management literature, e.g. Miles (2013), Poolton et al. (2006), Vinodh et al. (2010).
Agile methods in software engineering evolved during the 1990s, and in 2001 agility became a recognized concept due to “The manifesto for agile software development” written by a group of software developers (Fowler and Highsmith, 2001). According to Cobb (2011), the background to the agile ideas was that projects in crisis sometimes took on more flexible ways of thinking and working and then were more successful. This style was named “agile”, which literally means to be able to move quickly and easily (Fowler and Highsmith, 2001), and emerged in reaction to more traditional project management methods where detailed planning typically precedes any implementation work.
During the 1990s, the traditional sequence of procurement, requirements elicitation, contract negotiation, production and, finally, delivery (often termed the waterfall model in the software development literature) sometimes produced computer and software systems that were obsolete before they were delivered. To address these challenges, the agile community defined a set of values that they summarized in the agile manifesto (Fowler and Highsmith, 2001):
- Individuals and interactions over processes and tools.
- Working software over comprehensive documentation.
- Customer collaboration over contract negotiation.
- Responding to change over following a plan.
Laanti et al. (2011) claim that scientific, quantitative studies on agile methods were still rare in 2011, and they call for such studies since these can give more general advice about the practices involved. Overall, if an organization wants to transition to more agile ways of working, whether it is a software organization or not, its decision-makers will benefit from measuring agility before, during, and after such a transition. The question is whether this is possible, since agility is both a cultural change (described in the agile manifesto above) and a smorgasbord of practices to support it (Ranganath, 2011; Williams, 2012; Zieris and Salinger, 2013).
There is a diversity of agile measurement tools, both scientific and commercial, but almost none of them have been statistically validated. In order to measure agility and trust the resulting output, both researchers and practitioners need validated tools to guide their process. The problem is what to focus on and at what level, since the agile approach operates at many levels of an organization. This empirical study evaluates one of the agility maturity models found in research through a statistical validation process. This tool focuses more on behavior and does not only list a set of practices for the research subjects to tick yes or no depending on whether they are implemented. We also connect a Likert scale to the evaluation in order to capture more variance for each item. Section 2 outlines existing agile measurement tools found in the literature, Section 3 presents how our main statistical investigation was conducted and also describes a pretest conducted before the main study, including its findings, Section 4 presents the main study findings, Section 5 analyzes and discusses these overall results, and, finally, Section 6 presents conclusions and suggests future work.
This study aims to contribute with the following:
1. A test to evaluate whether the agile adoption framework can be used to measure current agility (instead of agile potential).
2. An assessment, through a case study pretest, of whether practitioners think such an evaluation is relevant.
3. An expansion of the agile adoption framework to include a Likert-scale evaluation survey filled out by all team members, not just by the assessor/researcher, with a confidence interval connected to the item results.
4. A partial validation of the agile adoption framework with statistical tests.
5. Suggested changes to the agile adoption framework and/or a highlighting of the issues connected to agility measurement.
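Connecting a confidence interval to per-item Likert results, as in the third contribution above, can be done with standard descriptive statistics. A minimal sketch, using hypothetical ratings rather than the study's data, and a normal approximation for the interval:

```python
from math import sqrt
from statistics import mean, stdev, NormalDist

# Hypothetical Likert ratings (1-5) from eight team members for one survey item.
ratings = [4, 5, 3, 4, 4, 5, 3, 4]

n = len(ratings)
m = mean(ratings)
se = stdev(ratings) / sqrt(n)  # standard error of the mean

# Normal approximation; for a sample this small a t-quantile
# (about 2.36 for df = 7) would give a wider, stricter interval.
z = NormalDist().inv_cdf(0.975)
ci = (m - z * se, m + z * se)

print(f"mean = {m:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

A wide interval on an item then signals that team members disagree about that practice, which a single assessor-filled score would hide.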
Related work
Some researchers suggest qualitative approaches such as interviewing as a method for assessing agility in teams (Boehm and Turner, 2003; Pikkarainen and Huomo; Sidky, Arthur, and Bohner, 2007). Hoda et al. (2012) even suggest the use of grounded theory, an even more iterative and domain-specific analysis method (Glaser and Strauss, 2006). Interviewing is a good way to deal with interviewee misinterpretations and other related biases. The work proposed by Lee and Xia (2010) compares a few agility
Hypothesis testing
In this study we want to see if empirical data from the agile adoption framework’s Level 1 survey for developers correspond to Sidky’s (2007) categorization of agile practices and are reliable and valid according to statistical analyses.
Hypothesis. The agile adoption framework is valid according to quantitative tests for internal consistency and construct validity.
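Construct validity of this kind is typically examined with exploratory factor analysis: if the framework's categories are real constructs, the items should group into a corresponding number of factors. A minimal sketch, on simulated (not the study's) data, of one common first step, the Kaiser criterion, which retains factors whose correlation-matrix eigenvalues exceed 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30  # hypothetical number of respondents

# Simulate two latent constructs: items 1-3 load on factor 1,
# items 4-6 on factor 2, each with added noise.
f1 = rng.normal(size=(n, 1))
f2 = rng.normal(size=(n, 1))
X = np.hstack([
    f1 + 0.3 * rng.normal(size=(n, 3)),
    f2 + 0.3 * rng.normal(size=(n, 3)),
])

corr = np.corrcoef(X, rowvar=False)            # 6x6 item correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_factors = int((eigvals > 1.0).sum())         # Kaiser criterion

print("eigenvalues:", np.round(eigvals, 2))
print("factors retained:", n_factors)
```

If the survey items really measured the framework's intended categories, the number of retained factors would match the number of categories; a mismatch is evidence against the hypothesized structure.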
Participants
The sample of the main study consisted of 45 employees from two large multinational US-based companies with 16,000 and 26,000
Results
In this section we will present the results of statistical tests for internal consistency and construct validity. The former will be tested with Cronbach’s α and the latter with exploratory principal factor analysis (PFA).
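For reference, Cronbach's α can be computed directly from its definition, α = k/(k−1) · (1 − Σ var(item) / var(total score)). A minimal sketch on hypothetical response data (not the study's sample):

```python
from statistics import variance

# Hypothetical responses: 5 respondents (rows) x 3 Likert items (columns).
responses = [
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 4],
    [2, 2, 3],
    [4, 4, 5],
]

k = len(responses[0])  # number of items in the scale
item_vars = [variance(col) for col in zip(*responses)]        # per-item variances
total_var = variance([sum(row) for row in responses])         # variance of total scores

# alpha rises when items covary strongly (total variance exceeds
# the sum of item variances), indicating internal consistency.
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the threshold is a rule of thumb rather than a formal test.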
However, before these statistical tests we would like to highlight a problem with using the agile adoption framework to measure agility. The terms “manager” and “Scrum Master/agile coach” could be a source of confusion. Two respondents gave the open-ended feedback of “we have
Discussion
In this study we first tested how practitioners rate the use of the agile adoption framework through a focus group. The result of this was positive. However, the statistical tests did not support the categorization of factors in the framework, which therefore cannot be considered to measure distinct constructs (i.e. it is not a valid measurement of agility, in this case).
The pretest showed that the teams found the categories of the agile adoption framework relevant and measured how the teams worked
Conclusions and future work
In conclusion, this study has shown that quantitative data do not support the categorization of a subset of items in the agile adoption framework. It is not a surprise that the categorization made in the agile adoption framework needs more work, since no quantitative validation had been conducted. As long as this is the case, researchers cannot correlate quantitative agile maturity measurements to other variables in software engineering research and be confident that the results are correct.
Acknowledgments
This study was conducted jointly with SAP AG, and we would especially like to thank Jan Musil at SAP America Inc. We would also like to thank the SAP customers who were willing to share information: Volvo Logistics, Pasi Moisander, Karin Scholes, and Kristin Boissonneau Gren (without your goodwill this work could not have been done).
Lucas Gren is a Ph.D. student in software engineering at Chalmers and the University of Gothenburg, Sweden. He has M.Sc. degrees in software engineering, psychology, business administration, and industrial engineering and management. His research focus is on decision-making, psychological aspects, agile development processes, and statistical methods (all in the context of empirical software engineering).
References (35)
- Investigating the applicability of agility assessment surveys: a case study. J. Syst. Softw. (2014).
- Laanti et al. (2011). Agile methods rapidly replacing traditional methods at Nokia: a survey of opinions on agile transformation. Inf. Softw. Technol.
- A framework to support the evaluation, adoption and improvement of agile methods in practice. J. Syst. Softw. (2008).
- The agile maturity model (AMM). Dr. Dobb’s J. (2010).
- Response rate in academic studies—a comparative analysis. Hum. Relat. (1999).
- Boehm and Turner (2003). Balancing Agility and Discipline: A Guide for the Perplexed.
- Cobb (2011). Making Sense of Agile Project Management: Balancing Control and Agility.
- Cronbach (1951). Coefficient alpha and the internal structure of tests. Psychometrika.
- Datta, S. (2009). Metrics and Techniques to Guide Software Development (Ph.D. thesis). Florida State University College...
- Measuring innovation culture in organizations: the development of a generalized innovation culture construct using exploratory factor analysis. Eur. J. Innov. Manage. (2008).
- Exploratory Factor Analysis. In: Advanced Research Methods in Psychology.
- Glaser and Strauss (2006). The Discovery of Grounded Theory: Strategies for Qualitative Research.
- Hoda et al. (2012). Developing a grounded theory to explain the practices of self-organizing agile teams. Empirical Software Engineering.
- Lee and Xia (2010). Toward agile: an integrated analysis of quantitative and qualitative field data on software development agility. MIS Q.
- A comparative analysis of agile maturity models. Information Systems Development.
Richard Torkar is a professor of software engineering at Chalmers and the University of Gothenburg, Sweden. His focus is on quantitative research methods in the field of software engineering. He received his Ph.D. in software engineering from Blekinge Institute of Technology, Sweden, in 2006.
Robert Feldt is a professor of software engineering at Blekinge Institute of Technology, Sweden and at Chalmers University of Technology, Sweden. He has also worked as an IT and software consultant for more than 20 years helping companies with strategic decisions and technical innovation. His research interests include human-centered software engineering, software testing, automated software engineering, requirements engineering and user experience. Most of his research is empirical and conducted in close collaboration with industry partners in Sweden and globally. He received a Ph.D. in computer engineering from Chalmers University in 2002.