Performance of dental students, orthodontic residents, and orthodontists for classification of midpalatal suture maturation stages on cone-beam computed tomography scans – a preliminary study

Abstract

Background

Assessment of midpalatal suture maturation on cone-beam computed tomography (CBCT) scans is performed by visual inspection and is therefore subjective. The extent to which the assessment of midpalatal suture maturation is affected by rater experience has not been adequately explored in the existing literature, thus limiting the availability of evidence-based findings. This study compared the outcomes of classification by dental students, orthodontic residents, and orthodontists.

Methods

Three groups of examiners (dental students, orthodontic residents, and orthodontists) evaluated midpalatal suture maturation on 10 CBCT scans chosen at random from a pool of 179 patients (98 female, 81 male) aged 8–40 years, whose scans had previously been classified by a single examiner using OsiriX Lite version 11.0 (Pixmeo SARL, Bernex, Switzerland); this classification served as the benchmark. To assess the intra-rater reliability of the examiners in each group, the randomly chosen subjects were reclassified for midpalatal suture maturation after a wash-out period of two weeks using the same software. Statistical analysis was performed to evaluate the intra- and inter-rater reliability of the three groups with differing experience levels.

Results

Groupwise intra-rater reliability between classification and reclassification was weak for the examiners with a low level of experience (k = 0.59). Orthodontists showed the highest degree of agreement with the benchmark classification, with an inter-rater reliability considered moderate (k = 0.68).

Conclusions

Assessment of midpalatal suture maturation on CBCT scans appears to be a subjective process and is considerably related to the experience level of the examiner. A high level of clinical experience seems to be favorable but does not necessarily ensure accurate results.

Background

Rapid maxillary expansion (RME) is a common orthodontic procedure used to treat transverse discrepancies. While conventional RME is typically performed in children and adolescents, surgically assisted RME, also known as surgically assisted rapid palatal expansion (SARPE), is more frequently employed in adults [1]. The need for SARPE is determined on the basis of distinct features. Skeletal involvement is indicated by the presence of more than two teeth in a crossbite [2], and ossification of at least 5% of the midpalatal suture indicates the need for SARPE, since ossification increases the transverse osseous resistance to conventional expansion [3]. Findings from finite element analysis indicate that the material properties of the midpalatal suture and the circummaxillary sutures have a significant impact on the pattern of expansion [4], suggesting that the characteristics of these sutures play a major role in maxillary expansion.

On the basis of visual examination of cone-beam computed tomography (CBCT) images, Angelieri et al. classified the sutural appearance of the midpalatal suture into five maturation stages (A–E). Angelieri et al. state that stages A and B were not anticipated to exhibit greater bone resistance, whereas identification of stages D and E led to a recommendation for surgically assisted therapy [5]. However, these recommendations were based on a qualitative assessment of the midpalatal suture, without measuring the outcome of surgical intervention or considering ossification as a measured parameter.

The reliability of diagnostic evaluations is crucial to minimize errors and ensure consistent outcomes, regardless of variables such as the examination environment, the time of assessment, and the examiner [6]. Reliable means of assessing midpalatal suture maturity could enhance the selection of optimal treatment strategies for the patient [7]. As the classification approach proposed by Angelieri et al. employs CBCTs, which result in a higher radiation dose, it is crucial to ensure the reliability of the assessment. This is particularly significant since the classification using CBCT has been suggested for patients aged between 14 and 18 years [8].

Thus, although classification based on the criteria of Angelieri et al. showed strong to moderate intra-examiner and inter-examiner agreement [9], the ratings themselves may be affected by the experience level of the rater. It has been shown that classification of midpalatal suture maturation by human visual inspection is not free of subjectivity and requires a high level of technical sensitivity [10]. This is supported by another study on the reliability of midpalatal suture maturation evaluation via CBCT imaging, which found inter-examiner reliability to be moderate to weak and therefore advised that this approach be applied with caution [11]. The assessment of midpalatal suture maturation necessitates extensive training for the examiner [12].

The aim of this study was therefore to evaluate the influence of raters’ experience on the classification of midpalatal suture maturation. For this we compared the performance for midpalatal suture maturation stage assessment, as described by Angelieri et al., of dental students, orthodontic residents, and orthodontists.

The null hypothesis was that the assessed maturation stage of the midpalatal suture would show no differences among the groups.

Methods

Ethical approval for this observational comparative study, which utilized retrospectively analyzed data, was provided by the Ethics Committee of the University of Witten/Herdecke (approval no. 291/2021). The original pool of CBCT images was obtained from 547 patients who were treated at the Dental Clinic of the University of Witten/Herdecke, Germany in the years 2015–2016. Sample size determination was not undertaken for this preliminary study.

All CBCT images were generated in Digital Imaging and Communications in Medicine (DICOM) format with GALILEOS Comfort (Sirona Dental Systems GmbH, Germany) at an X-ray exposure of 85 kV and 5–7 mA (14 s; field of view: 150 × 150 mm; 200 singular images), yielding a voxel volume of 0.027 mm³ (isotropic 0.3 mm voxels) and a slice thickness of 300 µm.

The inclusion criteria for this study were as follows: patients aged between 8 and 40 years who had not received prior orthodontic or surgical treatment and had a CBCT image of adequate quality.

Patients with a history of craniofacial anomalies such as cleft lip and palate, cysts or tumors in the maxillary region, and CBCT scans with subjectively insufficient image quality were excluded.

The patient data were pseudonymized at the source. All CBCT scans were analyzed using OsiriX Lite version 11.0 (Pixmeo SARL, Bernex, Switzerland) in a dark room with an X-ray reporting monitor under the same screen settings. The 179 CBCT images were initially assessed and classified by a single examiner, a dentist who had received training in CBCT diagnosis. This examiner utilized dynamic free scanning to classify the images based on the classification system developed by Angelieri et al. [5]. We refer to the existing literature for the exact definitions of each class.

A second opinion from an experienced and trained orthodontist in CBCT diagnosis was sought when there were uncertainties regarding the classification. Through collaboration, a mutually agreed upon classification was established.

Subsequently, for a sample of 60 CBCT images selected randomly, an additional measurement was conducted two weeks later in order to assess the intra-rater reliability. The examiner’s classification was set as the benchmark for the following assessments by the three examiner groups.

A total of ten CBCT scans were selected at random, while ensuring that every defined stage was represented at least once. These scans were assessed by three distinct groups of investigators: dental students, orthodontic residents, and practicing orthodontists (five examiners per group). The number of examiners per group was based on the study by Obuchowski, in which medical imaging studies were performed by 5–10 examiners [13]. The assessment was conducted using a dynamic free-screening procedure; the examiners evaluated the scans independently and were required to orient the CBCT slices themselves, as instructed in training, to analyze midpalatal suture maturation.

The examiners were provided with training materials and an evaluation scheme in the form of a handout describing the radiological features of each maturity stage, together with the figures and flowcharts described by Angelieri et al. for assessing the midpalatal suture maturation stage. Prior to the assessment of the ten selected CBCT scans, all examiners received five different CBCT scans for training purposes. Examiners without prior CBCT experience were instructed in the use of the software. Students were selected from higher semesters to ensure adequate anatomical knowledge. A repeat assessment was performed after two weeks to measure the intra-rater reliability of each examiner.

The software programs Medas (EDV Grundysteme, Margetshöchheim, Germany) and IBM SPSS Statistics 29 (IBM, Armonk, USA) were used for statistical analyses. Data distribution was analyzed using descriptive statistics and the Kolmogorov–Smirnov test. Weighted Cohen's kappa was employed to determine the intra-rater reliability of the benchmark classification as well as the inter- and intra-examiner agreement of the pooled performance of each group of dental students, orthodontic residents, and orthodontists.
Additionally, the agreement of each group with the benchmark classification was assessed using Cohen's kappa (k). Kappa values were interpreted in accordance with McHugh's classification system, which categorizes the level of agreement as follows: k ≤ 0.20 as none, k = 0.21–0.39 as minimal, k = 0.40–0.59 as weak, k = 0.60–0.79 as moderate, k = 0.80–0.90 as strong, and k > 0.90 as almost perfect [14].
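
The analysis itself was run in Medas and SPSS; purely as an illustration of the statistic, a linear-weighted Cohen's kappa for ordinal stages coded 0–4, together with McHugh's interpretation bands, can be sketched in Python as follows (the function names are ours, not part of the study's software):

```python
def weighted_kappa(rater1, rater2, n_cat=5):
    """Linear-weighted Cohen's kappa for ordinal ratings coded 0..n_cat-1."""
    n = len(rater1)
    # linear weights: full credit on the diagonal, partial credit for near-misses
    w = [[1 - abs(i - j) / (n_cat - 1) for j in range(n_cat)] for i in range(n_cat)]
    # observed joint distribution of the two ratings
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for a, b in zip(rater1, rater2):
        obs[a][b] += 1 / n
    # marginal distributions of each rater
    p1 = [sum(obs[i][j] for j in range(n_cat)) for i in range(n_cat)]
    p2 = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    po = sum(w[i][j] * obs[i][j] for i in range(n_cat) for j in range(n_cat))
    pe = sum(w[i][j] * p1[i] * p2[j] for i in range(n_cat) for j in range(n_cat))
    return (po - pe) / (1 - pe)

def mchugh_level(k):
    """Agreement label per McHugh's bands [14]."""
    if k <= 0.20:
        return "none"
    if k <= 0.39:
        return "minimal"
    if k <= 0.59:
        return "weak"
    if k <= 0.79:
        return "moderate"
    if k <= 0.90:
        return "strong"
    return "almost perfect"
```

With identical ratings the statistic reaches 1.0; the pooled group kappas reported below would be labeled with `mchugh_level`.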

The intra-rater reliability of the individual examiners was evaluated using Kendall's tau correlation analysis and assessed in accordance with the classification system established by Chan, in which r < 0.3 was deemed poor, r = 0.3–0.5 fair, r = 0.6–0.8 moderately strong, and r > 0.8 a very strong correlation [15].
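
For illustration only (this is not the study's analysis code), a tie-corrected Kendall's tau-b and Chan's interpretation bands might be sketched as follows; note that Chan's published cut-offs leave the interval 0.5–0.6 unassigned, which we treat here as fair by assumption:

```python
import math

def kendall_tau_b(x, y):
    """Tie-corrected Kendall's tau-b for two equal-length rating series."""
    conc = disc = ties_x = ties_y = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                continue          # tied in both series: excluded from every term
            elif dx == 0:
                ties_x += 1       # tied in x only
            elif dy == 0:
                ties_y += 1       # tied in y only
            elif dx * dy > 0:
                conc += 1         # concordant pair
            else:
                disc += 1         # discordant pair
    # denominator: sqrt of (pairs untied in x) * (pairs untied in y)
    denom = math.sqrt((conc + disc + ties_y) * (conc + disc + ties_x))
    return (conc - disc) / denom

def chan_level(r):
    """Correlation label per Chan [15]; the 0.5-0.6 gap is treated as fair (assumption)."""
    if r < 0.3:
        return "poor"
    if r < 0.6:
        return "fair"
    if r <= 0.8:
        return "moderately strong"
    return "very strong"
```

Applied to two identical stage sequences, tau-b is 1.0; the per-examiner tau values in the Results would be labeled with `chan_level`.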

To evaluate the deviation from the benchmark classification and compare the diagnostic performance of the groups, stages A–E were coded as the numbers 0–4. The coded values were summed for each examiner, and the sum of the benchmark classification was subtracted. Finally, the mean values for the respective groups were calculated, in analogy to the study of Barbosa et al., which used median values [16], and a Wilcoxon pairwise comparison analysis was performed to test for significant differences between the groups. The significance level for all statistical tests was set at p < 0.05.
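
The coding and deviation step described above can be sketched as follows (illustrative only; the stage letters and subtraction follow the description in the text, while the function names and sample data are ours):

```python
# Stage letters A-E coded as 0-4, as described in the Methods
STAGE_CODE = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}

def deviation_score(examiner_stages, benchmark_stages):
    """Sum of an examiner's coded stages minus the sum of the benchmark codes."""
    return (sum(STAGE_CODE[s] for s in examiner_stages)
            - sum(STAGE_CODE[s] for s in benchmark_stages))

def group_mean_deviation(group_stages, benchmark_stages):
    """Mean deviation score across the examiners of one group."""
    scores = [deviation_score(e, benchmark_stages) for e in group_stages]
    return sum(scores) / len(scores)
```

The resulting per-examiner scores would then enter the Wilcoxon pairwise comparisons between groups.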

Results

Intra-rater reliability for the benchmark classification

Strong intra-rater reliability (k = 0.85) was demonstrated for the benchmark classification, which involved the classification of 179 CBCT scans and the reclassification of 60 of them by one examiner.

Intra-rater and inter-rater reliability for ten randomly chosen CBCTs for group performance

Student 1 showed poor and non-significant intra-rater reliability (tau = 0.2973, p = 0.23). Student 5 showed a fair but likewise non-significant correlation (tau = 0.3244, p = 0.19), whereas student 2 showed a very strong correlation (tau = 1.000, p < 0.001). The other two students showed moderately strong correlations for intra-rater reliability (tau = 0.6494 and 0.6844; p < 0.001).

For the pooled group of students, the intra-rater reliability after the two-week wash-out period was weak (k = 0.59).

Four orthodontic residents showed very strong intra-rater reliability (tau > 0.8, p < 0.001), while the remaining resident showed moderately strong intra-rater reliability (tau = 0.78, p < 0.05).

Intra-rater reliability according to weighted Cohen's kappa for the group performance of the orthodontic residents showed a strong level of agreement (k = 0.81) after the wash-out period.

Three orthodontists showed very strong intra-rater reliability (tau > 0.8, p < 0.001) and two orthodontists showed moderately strong intra-rater reliability (0.7 < tau < 0.8, p < 0.05).

Weighted Cohen's kappa analysis showed a moderate level of intra-rater agreement for the group of orthodontists after two weeks (k = 0.74).

Inter-rater reliability among the three groups for the assessment of the ten CBCTs was minimal (Table 1).

Table 1 Inter-rater reliability between the groups regarding diagnostic performance for ten randomly chosen CBCTs

Agreement level of the groups with regard to the benchmark classification

The degree of agreement between the benchmark classification and the actual classification is illustrated in Table 2. This table demonstrates a maximum deviation of three stages for all groups and the highest percentage of exact agreement for the orthodontists.

Table 2 Deviation of agreement with the benchmark classification of the three groups with different experience level

Comparison of the groups' deviation from the benchmark classification and diagnostic precision showed that the assessments of the orthodontists were significantly closer to the benchmark classification than those of the students (p < 0.05). However, no significant differences in diagnostic performance were observed between orthodontists and orthodontic residents or between orthodontic residents and students (p > 0.05) (Table 3, Fig. 1). This is supported by the weighted Cohen's kappa analysis of inter-rater reliability with respect to the benchmark classification, which was minimal for the students (k = 0.34) and weak for the orthodontic residents (k = 0.46), whereas the orthodontists showed a moderate level of agreement (k = 0.68).

Table 3 Wilcoxon pairwise comparison of deviations from the benchmark classification
Fig. 1 Boxplot diagram of the three examiner groups and their respective deviations from the benchmark classification; p = p-value

Discussion

Visualization and classification of the appearance of the midpalatal suture on CBCT images, and its potential impact on the treatment modality, was proposed by Angelieri et al. [5]. This proposal is not without controversy, as resistance to maxillary expansion is influenced by multiple factors, including not only the intrinsic properties of the midpalatal suture but also constraints imposed by surrounding structures [17]. Additionally, no correlation could be found between ossification and the proposed classification stages, and the scientific rationale of the classification method has therefore been questioned [18]. It has also been stated that the classification still lacks validity and is influenced by image quality and examiner calibration, as interpretation of the midpalatal suture maturation stage is subjective [11]. This introduces potential variability in assessments among different examiners. The literature on the reproducibility of staging is inconclusive, and it has been suggested that future studies should include at least two examiners and a strict training protocol [8]. Our study comprises three groups of examiners with different experience levels, each consisting of five examiners, and a training protocol was instituted.

In our study, the assessment of midpalatal suture maturation on CBCT scans had to be performed dynamically by scrolling, to simulate a realistic clinical scenario and thereby enhance the practical applicability of our findings. Barbosa et al., in contrast, provided the examiners with prepared axial slices of the midpalatal suture as described by Angelieri et al. [5, 16]. This distinction is also important because examining all segments of the midpalatal suture for assessment of the maxillary anatomy can be difficult on a single slice.

The intra-rater reliability of the benchmark classification in this study was very strong. Given the 179 assessed CBCTs, the experience level of the benchmark examiner was likely high, which may have contributed to this high level of intra-rater agreement. Strong intra-examiner reliability for midpalatal suture classification on CBCT scans in adults was also reported by Angelieri et al. [19]. The use of multi-slice computed tomography (CT) scans to assess midpalatal suture maturation has been shown to be reliable and reproducible [16], though it should be noted that our study utilized CBCT scans.

In this study, the examiners of all three groups were calibrated before the evaluations, in accordance with the recommendations of a previous study [11]. However, unlike previous studies that used Cohen's kappa values to evaluate intra-rater reliability [11, 16], Kendall's tau correlation was additionally calculated in the present study. The use of Kendall's tau correlation for the assessment of intra-rater reliability has been reported in the literature [20], and this correlation analysis has been shown to be accurate for small datasets [21].

To our knowledge and on the basis of our review of the literature, only one study has compared the diagnostic performance of students and orthodontists for midpalatal suture classification using CBCT scans; it found no significant difference related to the experience level of the examiners. Its results are limited, however, as the number of examiners was restricted to one person per group [22]. Another study analyzed the performance of orthodontists and radiologists in classifying midpalatal suture maturation on CBCTs. Although the examiners were stated to have varying levels of experience, all were specialists in their field, so a high level of overall clinical experience can be assumed [16]. Our findings showed significant differences in the diagnostic performance of orthodontists and dental students, indicating that midpalatal suture maturation classification is related to the experience level of the examiner. These findings contrast with those obtained for volumetric cephalometric landmark identification, where inexperienced raters performed better than experienced raters in a comparison of dental students and orthodontic residents [23]. Nevertheless, significant differences between experienced practitioners and dental students have been reported for accurate measurement of the mandibular anatomy on CBCT scans [24]. These findings imply a major knowledge gap in the use of CBCT scans among dental professionals and indicate the need for adequate training [25].

In this study, none of the orthodontists showed either an almost perfect or a poor correlation for intra-rater reliability, whereas one dental student and one orthodontic resident showed an almost perfect correlation. This may imply either very good conformity or a very high bias owing to the previous classification. A notable aspect is that the classifications were performed unsupervised after calibration of the examiners. Nevertheless, since the median deviation from the benchmark classification for dental students was 1.0, the error for students was, in general, approximately one classification stage, while it was lower for orthodontic residents and orthodontists. The group of orthodontists showed the highest relative agreement (92%) and the smallest mean deviation from the benchmark classification, indicating that a high level of experience is beneficial for the assessment of midpalatal suture maturation stages. The inter-rater reliability between the experience-level groups was minimal, and agreement with the benchmark classification was at best moderate, indicating that the method is not free of subjectivity.

The prospective assessment design of this study allows for meaningful comparisons between examiner groups with varying levels of experience. As a preliminary study, however, it is subject to certain limitations. The within-group classification employed a relatively small sample, which may affect the precision of the results. To assess the performance for midpalatal suture maturation, a subsample of 10 CBCT images was randomly selected from the original dataset. This approach may have left certain stages underrepresented, potentially impacting the generalizability of our findings. To validate our findings, future studies with a more comprehensive sample representative of all stages would therefore be beneficial. While we followed the ALARA (As Low As Reasonably Achievable) principle for radiation safety, we acknowledge that the ALADAIP (As Low As Diagnostically Acceptable being Indication-oriented and Patient-specific) guideline would have been a more appropriate framework, especially for aligning radiation dose with diagnostic necessity in pediatric imaging [26].

Conclusions

The classification of midpalatal suture maturation on CBCT scans by visual inspection appears to be related to the experience level of the examiner. A high level of experience seems favorable but does not necessarily ensure reliable assessment.

Availability of data and materials

All data are available on request by Annahita Arman (arman.05@hotmail.de).

Abbreviations

CBCT:

Cone-beam computed tomography

CT:

Computed tomography

DICOM:

Digital Imaging and Communications in Medicine

RME:

Rapid maxillary expansion

SARPE:

Surgically assisted rapid palatal expansion

References

  1. Schwarz A, Watzke I. Kieferorthopädische Gaumennahterweiterung (RPE) versus chirurgisch unterstützte Gaumennahterweiterung (SARPE) – eine Gegenüberstellung. Inf Orthod Kieferorthop. 2009;41(04):253–8.

  2. Jacobs JD, Bell WH, Williams CE, Kennedy JW III. Control of the transverse dimension with surgery and orthodontics. Am J Orthod. 1980;77(3):284–306.

  3. Persson M, Thilander B. Palatal suture closure in man from 15 to 35 years of age. Am J Orthod. 1977;72(1):42–52.

  4. Chang C-J, Chen M-Y, Chang C-H, Chang H-C. Investigation of the role of midpalatal and circummaxillary sutures in bone-anchored rapid maxillary expansion using a verified finite-element model. Am J Orthod Dentofac Orthop. 2023;163(2):198–209.

  5. Angelieri F, Cevidanes LH, Franchi L, Goncalves JR, Benavides E, McNamara JA Jr. Midpalatal suture maturation: classification method for individual assessment before rapid maxillary expansion. Am J Orthod Dentofacial Orthop. 2013;144(5):759–69.

  6. Lee KM, Lee J, Chung CY, Ahn S, Sung KH, Kim TW, et al. Pitfalls and important issues in testing reliability using intraclass correlation coefficients in orthopaedic research. Clin Orthop Surg. 2012;4(2):149.

  7. Tonello DL, Ladewig VM, Guedes FP, Ferreira Conti ACC, Almeida-Pedrin RR, Capelozza-Filho L. Midpalatal suture maturation in 11- to 15-year-olds: a cone-beam computed tomographic study. Am J Orthod Dentofacial Orthop. 2017;152(1):42–8.

  8. Gonzálvez Moreno A, Garcovich D, Aiuto R, Dioguardi M, Re D, Paglia L. Cone beam computed tomography evaluation of midpalatal suture maturation according to age and sex: a systematic review. Eur J Paediatr Dent. 2022;23(1):44–50.

  9. Chanchala H, Nandlal B, Murthy N, Shanbhog R. Validation of cone beam computed tomography-based classification method for individual assessment of mid-palatal suture with respect to clinical application among Indian children. Indian J Dent Res. 2020;31(1):85.

  10. Gao L, Chen Z, Zang L, Sun Z, Wang Q, Yu G. Midpalatal suture CBCT image quantitive characteristics analysis based on machine learning algorithm construction and optimization. Bioengineering. 2022;9(7):316.

  11. Isfeld D, Flores-Mir C, Leon-Salazar V, Lagravere M. Evaluation of a novel palatal suture maturation classification as assessed by cone-beam computed tomography imaging of a pre- and postexpansion treatment cohort. Angle Orthod. 2019;89(2):252–61.

  12. Shayani A, Sandoval Vidal P, Garay Carrasco I, Merino GM. Midpalatal suture maturation method for the assessment of maturation before maxillary expansion: a systematic review. Diagnostics. 2022;12(11):2774.

  13. Obuchowski NA. How many observers are needed in clinical studies of medical imaging? AJR Am J Roentgenol. 2004;182(4):867–9.

  14. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med. 2012;22(3):276–82.

  15. Chan Y. Biostatistics 104: correlational analysis. Singapore Med J. 2003;44(12):614–9.

  16. Barbosa NMV, Castro ACd, Conti F, Capelozza-Filho L, Almeida-Pedrin RRd, Cardoso MdA. Reliability and reproducibility of the method of assessment of midpalatal suture maturation: a tomographic study. Angle Orthod. 2019;89(1):71–7.

  17. Savoldi F, Xu B, Tsoi JK, Paganelli C, Matinlinna JP. Anatomical and mechanical properties of swine midpalatal suture in the premaxillary, maxillary, and palatine region. Sci Rep. 2018;8(1):7073.

  18. Savoldi F, Wong KK, Yeung AW, Tsoi JK, Gu M, Bornstein MM. Midpalatal suture maturation staging using cone beam computed tomography in patients aged between 9 to 21 years. Sci Rep. 2022;12(1):4318.

  19. Angelieri F, Franchi L, Cevidanes LHS, Goncalves JR, Nieri M, Wolford LM, et al. Cone beam computed tomography evaluation of midpalatal suture maturation in adults. Int J Oral Maxillofac Surg. 2017;46(12):1557–61.

  20. Cho D. A study on ESL writing assessment: intra-rater reliability of ESL compositions. Melb Pap Lang Test. 1999;8(1):1–24.

  21. Akoglu H. User’s guide to correlation coefficients. Turk J Emerg Med. 2018;18(3):91–3.

  22. dos Santos LT, Lisboa CO, Christovam IO, Mattos CT. Midpalatal suture maturation assessment by individuals with different levels of academic degree using cone beam computed tomography. Rev Cientí CRO-RJ (Rio de Janeiro Dent J). 2020;4(3):32–6.

  23. Míguez-Contreras M, Jiménez-Trujillo I, Romero-Maroto M, López-de-Andrés A, Lagravère MO. Cephalometric landmark identification consistency between undergraduate dental students and orthodontic residents in 3-dimensional rendered cone-beam computed tomography images: a preliminary study. Am J Orthod Dentofac Orthop. 2017;151(1):157–66.

  24. Ludlow JB, Laster WS, See M, Bailey LTJ, Hershey HG. Accuracy of measurements of mandibular anatomy in cone beam computed tomography images. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2007;103(4):534–42.

  25. Hashim HA, ALSharbati R, Bader L, Kasouma E, Hashim MH, Al-Sayed N. Knowledge and attitude of orthodontists towards cone-beam computed tomography (CBCT). Eur J Med Health Sci. 2022;4(6):22–8.

  26. Oenning AC, Jacobs R, Salmon B. ALADAIP, beyond ALARA and towards personalized optimization for paediatric cone-beam CT. Int J Pediatr Dent. 2021;31(5):676–8.

Acknowledgements

Not applicable.

Funding

Open Access funding enabled and organized by Projekt DEAL. No funding was given for this study.

Author information

Contributions

SC wrote the manuscript, performed data analysis, revised the manuscript, and provided final approval for the submission. AA performed data acquisition, curated images, provided inputs for the manuscript, revised the manuscript, and provided final approval for the submission. SCM interpreted the data, provided inputs for the manuscript, revised the manuscript, and provided final approval for the submission. BL interpreted the data, provided inputs for the manuscript, revised the manuscript, and provided final approval for the submission. JJ supervised the study, revised the manuscript, provided inputs for the manuscript, and provided final approval for the submission. GD assisted in conception of the study and preparing the study design, provided inputs for the manuscript, revised the manuscript, and provided final approval for the submission.

Corresponding author

Correspondence to Sachin Chhatwani.

Ethics declarations

Ethics approval and consent to participate

Ethical approval (291/2021) was obtained from the Ethics Committee of the University of Witten/Herdecke, Germany.

The informed consent was waived by the Ethics committee of the University of Witten/Herdecke, Germany. All methods were performed in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Chhatwani, S., Arman, A., Möhlhenrich, S.C. et al. Performance of dental students, orthodontic residents, and orthodontists for classification of midpalatal suture maturation stages on cone-beam computed tomography scans – a preliminary study. BMC Oral Health 24, 373 (2024). https://doi.org/10.1186/s12903-024-04163-3
