Editorial
Cross-sectional research: A critical perspective, use cases, and recommendations for IS research

https://doi.org/10.1016/j.ijinfomgt.2023.102625

Highlights

  • Cross-sectional study designs are predominant in Information Systems research.

  • Cross-sectional research is efficient and inexpensive to execute.

  • Cross-sectional data limit statements related to temporal causality.

  • Use configurational or mixed-method designs when working with cross-sectional data.

  • We offer four recommendations for when and how to use cross-sectional data.

Abstract

Cross-sectional data is pervasive in information systems (IS) research. This editorial reviews cross-sectional studies, summarizes their strengths and limitations, and derives use cases of when cross-sectional data is and is not useful in answering research questions. We raise concerns about assertions of temporal causality using data collected employing cross-sectional methods with no temporal order, which makes cause and effect difficult to establish. Based on our discussion of research using cross-sectional data and its limitations, we offer four recommendations for when and how to use such data: (1) improve credibility by reporting research in detail and transparently, (2) ensure appropriate sampling, (3) take configurational perspectives, and (4) integrate cross-sectional data into mixed- or multi-method designs. By doing so, we help IS researchers position and use cross-sectional studies appropriately within their methodological repertoire.

Introduction

To prepare a tasty dish, a chef needs a clear and good recipe, carefully selected ingredients, and a well-equipped kitchen. The dish’s flavor and presentation depend on the chef attending to all three elements of its preparation – following all the proper steps, using high-quality ingredients with good flavor, and having all the tools needed to prepare the meal. If the chef skips a step, uses low-quality ingredients, or lacks an essential tool, the dish may well be a failure.

Good research is analogous to cooking. Studies are grounded in clear and solid theory, analyze meaningful data (sets), and apply research methods properly. Numerous information systems (IS) researchers provide careful guidance on how to develop theory (e.g., contextualization; Hong et al., 2014), advice on how to gather data (Straub, 1989), and when to apply research methods (e.g., qualitative comparative analysis; Mattke et al., 2022; Mattke et al., 2021; Pappas & Woodside, 2021) that can help scholars develop compelling explanations for real-world phenomena. However, if scholars fail to pay sufficient attention to how well the different aspects of a study align (e.g., whether the data collection method fits the research question), they risk overstating conclusions or drawing incorrect inferences.

Data sets are the least elaborated element in IS research. Scholars often fail to explain sufficiently what a data collection technique implies for theory testing, or to formally test whether their data meet the quality thresholds needed for analysis. As a result, scholars sometimes rely on sub-optimal data, overuse one form of data collection, or make greater inferences than the data support, a danger that is especially acute with cross-sectional data.

A casual review of research published in major IS journals reveals the pervasiveness of cross-sectional data. In the International Journal of Information Management (IJIM), for example, more than 85% of papers published from 2020 to 2022 employ a cross-sectional research design. Editors-in-chief of various journals across different subject disciplines have raised concerns about overusing any one form of data collection (Dwivedi, Hughes, Cheung, et al., 2022). One common mistake is using cross-sectional data inappropriately to answer research questions that implicitly or explicitly point to temporal causality (Bono & McNamara, 2011). Consider a research question exploring whether and how beliefs and intentions shape behaviors. Cross-sectional survey data cannot be analyzed to establish causal relationships, but only associations between concepts. Temporal causality between beliefs, intentions, and behavior, and how variance in beliefs and intentions relates to behavior, can be better established using multi-wave data (Kim, 2009; Kim & Malhotra, 2005).

Since the great majority of IS research attempts to establish temporal causality (i.e., focuses on cause-effect relationships) or assesses how such causality evolves over time, and since most recent IS research analyzes cross-sectional data, it is valuable to review the use of cross-sectional data in IS research. In this editorial, we assert the need for IS research to employ multiple research designs, such as collecting data from the same people in two (Shirish et al., 2021) or more (Adamovic, 2022) waves or using experiments to establish causality (Fink, 2022). While cross-sectional data can be analyzed to demonstrate associations between concepts, causality and cause-effect relationships are demonstrated more effectively using research designs that allow for the temporal ordering of variables.
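The difference between a single-wave association and a two-wave design can be sketched with simulated data. The snippet below is a minimal illustration, not a reanalysis of any study discussed here; the variable names, effect size, noise level, and sample size are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical two-wave process: intention is measured at wave 1 and
# partly drives behavior measured at wave 2 (assumed effect of 0.6).
intention_w1 = rng.normal(size=n)
behavior_w2 = 0.6 * intention_w1 + rng.normal(scale=0.8, size=n)

# The slope relates a temporally prior measurement to a later one;
# a single wave could only report that the two variables covary.
slope, intercept = np.polyfit(intention_w1, behavior_w2, 1)
print(f"lagged slope: {slope:.2f}")
```

With two measurement waves, the estimated slope connects a temporally prior variable to a later one, giving the temporal-precedence condition that a single snapshot cannot provide.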

This editorial provides an overview of cross-sectional studies, presents the strengths and weaknesses of using such data, and provides recommendations for conducting cross-sectional research, thus contributing to IS theory and practice. We hope that this critical review of the use of cross-sectional data in IS research will stimulate a broader conversation about data collection practices.

A brief overview of cross-sectional studies

A cross-sectional study, also known as a prevalence or transverse study, uses a snapshot of participants’ beliefs, behaviors, or other variables of interest of a study population (e.g., a group of individuals or organizations) at a specified point in time (Grimes & Schulz, 2002; Hua & David, 2008) to examine research questions (see Fig. 1).
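A descriptive analysis of such a snapshot often amounts to estimating the prevalence of a variable of interest at the time of data collection. The sketch below uses invented respondents and an invented survey item:

```python
# Hypothetical snapshot: whether each respondent in the study
# population used a given information system in the past week.
snapshot = [
    {"respondent": "r1", "uses_system": True},
    {"respondent": "r2", "uses_system": True},
    {"respondent": "r3", "uses_system": False},
    {"respondent": "r4", "uses_system": True},
    {"respondent": "r5", "uses_system": False},
]

# Prevalence at the single point in time the data were collected.
prevalence = sum(r["uses_system"] for r in snapshot) / len(snapshot)
print(f"prevalence: {prevalence:.0%}")  # → 60%
```

Because everything is measured at one point in time, the estimate characterizes the population only at that moment; nothing can be said about change or temporal order.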

Descriptive and analytical approaches are commonly used in cross-sectional studies (Kesmodel, 2018; Wang & Cheng, 2020). Descriptive cross-sectional

Strengths, limitations, and use cases of cross-sectional studies

Each cross-sectional approach has strengths and limitations (Fig. 2). We pair these strengths and limitations to offer guidance on when using cross-sectional data is or is not appropriate.

Recommendations for conducting rigorous cross-sectional research

We offer four recommendations to ensure that cross-sectional data is used appropriately (see Table 2).

The first recommendation points to the need for detailed and transparent research reporting. Such reporting includes disclosing the use of cross-sectional data (e.g., in the abstract), formulating appropriate objectives (e.g., referring to relevant practical or theoretical problems), and specifying methods and findings (e.g., how biases were reduced, how missing values were handled, providing descriptive
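As a minimal illustration of such transparent reporting (hypothetical responses on an invented 7-point item; listwise deletion is shown as one handling choice among several):

```python
# Hypothetical survey item on a 7-point scale; None marks a missing
# response. Listwise deletion is one handling choice among several.
responses = [5, 6, None, 4, 7, None, 5, 6, 3, 5]

n_total = len(responses)
complete = [r for r in responses if r is not None]
n_missing = n_total - len(complete)
mean = sum(complete) / len(complete)

# Disclose the sample size, the extent of missingness, and the
# treatment applied rather than leaving these implicit.
print(f"N={n_total}, missing={n_missing}, mean={mean:.2f}")
```

Reporting the missingness rate and the chosen treatment alongside the descriptive statistics lets readers judge whether the results could be sensitive to those decisions.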

Elusive causality and cross-sectional data

While causality is difficult to establish in purely cross-sectional studies, this does not imply these studies cannot contribute to making a case for causality. As mentioned above, causality requires temporal precedence, covariation, and control of third variables. While the first condition is not met, covariation can easily be established. Similarly, using covariates in multivariate analysis can control for third-variable effects. Even path modeling techniques, like SEM, offer structural
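Controlling for a measured third variable can be sketched via residualization, which yields a partial correlation. The data-generating process below is hypothetical and deliberately constructed so that the raw association is driven entirely by the third variable:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Hypothetical structure: a measured third variable drives both x and
# y; x has no causal effect on y (and vice versa).
third = rng.normal(size=n)
x = third + rng.normal(scale=0.5, size=n)
y = third + rng.normal(scale=0.5, size=n)

def residualize(v, covariate):
    # Remove the linear effect of the covariate from v.
    slope, intercept = np.polyfit(covariate, v, 1)
    return v - (slope * covariate + intercept)

r_raw = np.corrcoef(x, y)[0, 1]
r_partial = np.corrcoef(residualize(x, third), residualize(y, third))[0, 1]
print(f"raw r: {r_raw:.2f}, partial r: {r_partial:.2f}")
```

The raw correlation is sizeable while the partial correlation is close to zero, illustrating how covariates can address the third-variable condition even with cross-sectional data; what covariates cannot supply is temporal precedence.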

Concluding remarks

Many IS research papers use cross-sectional designs to gather data. While cross-sectional design’s benefits can explain its frequent use (e.g., inexpensive, efficient data collection), it has significant shortcomings, including, among others, the difficulty of deriving cause-effect relationships, the non-existence of a temporal order between the collected variables, and the limited ability to test temporal causality. To realize the benefits of cross-sectional design, we recommend that

References (70)

  • M.J. Mortenson et al.

    A computational literature review of the technology acceptance model

    International Journal of Information Management

    (2016)
  • I.O. Pappas et al.

    Fuzzy-set qualitative comparative analysis (fsQCA): Guidelines for research practice in information systems and marketing

    International Journal of Information Management

    (2021)
  • M.M. Queiroz et al.

    Blockchain adoption challenges in supply chain: An empirical investigation of the main drivers in India and the USA

    International Journal of Information Management

    (2019)
  • T. Ryan et al.

    Who uses Facebook? An investigation into the relationship between the Big Five, shyness, narcissism, loneliness, and Facebook usage

    Computers in Human Behavior

    (2011)
  • A. Shirish et al.

    Switching to online learning during COVID-19: Theorizing the role of IT mindfulness and techno eustress for facilitating productivity and creativity in student learning

    International Journal of Information Management

    (2021)
  • X. Wang et al.

    Cross-sectional studies: Strengths, weaknesses, and recommendations

    CHEST Journal

    (2020)
  • R. Aggarwal et al.

    Study designs: Part 2 - Descriptive studies

    Perspectives in Clinical Research

    (2019)
  • M. Aguirre-Urreta et al.

    Sample size determination and statistical power analysis in PLS using R: An annotated tutorial

    Communications of the Association for Information Systems

    (2015)
  • N. Asiamah et al.

    A spotlight on cross-sectional research: Addressing the issues of confounding and adjustment

    International Journal of Healthcare Management

    (2021)
  • E. Bogert et al.

    Preregistration of information systems research

    Communications of the Association for Information Systems

    (2021)
  • J.E. Bono et al.

    Publishing in AMJ —Part 2: Research design

    Academy of Management Journal

    (2011)
  • Editor's comments: Advancing research transparency at MIS quarterly: A pluralistic approach

  • D. Compeau et al.

    When constructs become obsolete: A systematic approach to evaluating and updating constructs for information systems research

    MIS Quarterly

    (2022)
  • D. Compeau et al.

    Research commentary—Generalizability of information systems research using student subjects - A reflection on our practices and recommendations for future research

    Information Systems Research

    (2012)
  • D.R. Cox et al.

    Multivariate dependencies

    (2014)
  • C. Cummings

    Cross-sectional design

  • F.D. Davis

    A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results. Doctoral dissertation

    (1986)
  • M. Delgado-Rodríguez et al.

    Bias

    Journal of Epidemiology and Community Health

    (2004)
  • E. Elm et al.

    The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies

    Annals of Internal Medicine

    (2007)
  • L. Fink

    Why and how online experiments can benefit information systems research

    Journal of the Association for Information Systems

    (2022)
  • M. Fishbein et al.

    Belief, attitude, intention and behavior: An introduction to theory and research

    (1975)
  • Gimpel, H., Lanzl, J., Manner-Romberg, T., & Nüske, N. (2018). Digitaler Stress in Deutschland: Eine Befragung von... [Digital stress in Germany: A survey of...]
  • D. Goodhue et al.

    PLS, small sample size, and statistical power in MIS research

    Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06)

    (2006)
  • V. Grover et al.

    The complexity of establishing causality in a digital environment: an eye to the future

    Journal of Information Technology Case and Application Research

    (2020)
  • G.P. Hammer et al.

    Avoiding bias in observational studies: Part 8 in a series of articles on evaluation of scientific publications

    Deutsches Arzteblatt International

    (2009)