
Evaluating quality of obstetric care in low-resource settings: Building on the literature to design tailor-made evaluation instruments - an illustration in Burkina Faso

Abstract

Background

There are many instruments available freely for evaluating obstetric care quality in low-resource settings. However, this profusion can be confusing; moreover, evaluation instruments need to be adapted to local issues. In this article, we present tools we developed to guide the choice of instruments and describe how we used them in Burkina Faso to facilitate the participative development of a locally adapted instrument.

Methods

Based on a literature review, we developed two tools: a conceptual framework and an analysis grid of existing evaluation instruments. Subsequently, we facilitated several sessions with evaluation stakeholders in Burkina Faso. They used the tools to develop a locally adapted evaluation instrument that was subsequently tested in six healthcare facilities.

Results

Three outputs emerged from this process:

  1. A comprehensive conceptual framework for the quality of obstetric care, each component of which is a potential criterion for evaluation.

  2. A grid analyzing 37 instruments for evaluating the quality of obstetric care in low-resource settings. We highlight their key characteristics and describe how the grid can be used to prepare a new evaluation.

  3. An evaluation instrument adapted to Burkina Faso. We describe the experience of the Burkinabé stakeholders in developing this instrument using the conceptual framework and the analysis grid, while taking into account local realities.

Conclusions

This experience demonstrates how drawing upon existing instruments can inspire and rationalize the process of developing a new, tailor-made instrument. Two tools that came out of this experience can be useful to other teams: a conceptual framework for the quality of obstetric care and an analysis grid of existing evaluation instruments. These provide an easily accessible synthesis of the literature and are useful in integrating it with the context-specific knowledge of local actors, resulting in evaluation instruments that have both scientific and local legitimacy.


Background

Nearly all of the 500 000 maternal deaths worldwide every year occur in low- and middle-income countries (LMICs). Efforts to achieve the 5th Millennium Development Goal have been largely ineffective in regions with the highest maternal mortality, notably sub-Saharan Africa [1]. One strongly recommended strategy for reducing maternal deaths is to improve women's healthcare, especially during pregnancy and delivery [2]. Access to good obstetric care (OC) would prevent 50% to 70% of maternal deaths, reduce neonatal mortality by 10% to 15%, and substantially reduce the number of women living with sequelae of obstetric complications [3–5]. Good quality is essential not only in emergency OC, but also in basic OC, to detect complications early [5, 6].

The first step in improving OC quality is evaluation, to identify problems. There are many freely available instruments for evaluating OC quality in LMICs. However, it is easy to lose one's way among these many instruments, whose evaluation approaches are quite diverse. Also, while healthcare may appear to be a well-defined field, there is nevertheless a certain amount of subjectivity in what is considered important in producing "quality" [7]. There is considerable variability in both the literature and the instruments, each of which studies OC quality from its own angle, focusing on specific elements: material resources, treatment protocols, women's satisfaction with services, etc. Each environment presents specific issues that may require evaluating some aspects of quality rather than others, such that ready-made instruments are not always appropriate. It is important to ensure that the instrument used in a given environment responds adequately to stakeholders' concerns, so that they will take ownership of the results [8, 9]. Existing evaluation instruments provide a well-established scientific base upon which to build, having been tested already in their original environments. However, to this scientific base should be added "colloquial evidence" [10], i.e., the informal knowledge considered important by the stakeholders.

This article presents the process we followed to prepare a national evaluation of OC quality in Burkina Faso. We began with a review of existing evaluation instruments, which we then used to develop with stakeholders a locally appropriate evaluation instrument.

Context

Burkina Faso is ranked next-to-last on the Human Development Index. Forty-six percent of its population lives under the poverty threshold, 75% of adults are illiterate, and life expectancy is only 51 years [11]. The maternal mortality ratio is 700 per 100 000 live births [2]. In rural areas, where 85% of the population lives, only 31% of deliveries occur in a healthcare facility [12], even though the geographic accessibility of OC has improved since the 1990s in the health districts--the healthcare system's first level and the designated point for managing OC. The government doubled the number of CSPSs (first-line health centres with at least one nurse, one auxiliary midwife, and one mobile health worker) and, beginning in 1994, implemented in each health district a medical centre with a surgical unit (CMA), staffed by approximately 20 professionals, to decentralize the management of obstetric and surgical emergencies.

In 2006 the Ministry of Health decided to subsidize OC and emergency neonatal services at 80% to encourage their utilization. Patients now pay only 20% of what they were previously required to pay, which had by far exceeded households' average health expenditures, to the point of being prohibitive for procedures such as caesareans [13–15]. This subsidy is an important step forward, but the Ministry of Health has cautioned that maternal mortality will be reduced only if parallel efforts are made to improve service availability and quality [16].

There appear to be numerous problems in this area. Several studies [17, 18] have shown that providing more health facilities since the 1990s has not increased service utilization, most likely because of service quality problems. OC quality has not yet been systematically evaluated at the national level. However, a 2006 evaluation of health facilities' functionality in two districts [19] confirmed problems reported elsewhere [16, 20]: shortages of qualified OC personnel in rural areas; lack of equipment, means of communication, and transport for evacuation referrals; very limited blood transfusion capacity, among others. As is often the case in LMICs, problems of quality are largely related to non-availability of resources; for this reason, quality of care is also measured in terms of availability. Because the population also perceives the quality to be poor, they may be dissuaded from using the healthcare system [18, 21, 22].

To document OC quality problems precisely and to support informed decision-making, the Burkinabé Public Health Association, working closely with the Ministry of Health, decided to evaluate OC availability and quality in the district health facilities (CMAs and CSPSs). They approached the University of Montreal/CRCHUM (Research Centre of the University of Montreal Hospital Centre) to help develop a context-adapted evaluation instrument.

Methods

We carried out this study in five stages. First, we reviewed the literature to identify frameworks and evaluation instruments referring to OC quality in LMICs. This was not meant to be a systematic review; nevertheless, we conducted a broad search of both the scientific and the grey literature, as we expected most evaluation instruments to be found in the latter. We began in January 2007 by searching in Ovid MEDLINE for articles published since 1996. Using the combination of MeSH terms "*obstetrics" and "*quality of health care", we identified 32 articles. After reviewing their abstracts, we selected five of them; the remainder were excluded either because they were not about LMICs, dealt with other topics (e.g. postnatal care, analysis of delivery outcomes statistics in a specific area, etc.), were in languages other than English or French, or were not accessible. At that stage, we added some well-known seminal documents on OC in LMICs [2325] as well as internal documents used by our team in ongoing projects in Africa [26]. Then we identified other relevant documents from the reference lists of this first set of documents. We continued with this snowball approach until reference lists of new documents only contained documents we had already identified. In total, we found 37 evaluation instruments, which are listed in Table 1. Nearly all of them were identified in the grey literature--twelve from EngenderHealth/AMDD [23, 27], seven from Jhpiego [28, 29], five from IMMPACT [24], four from the World Health Organization [25], four from Columbia University [30], and three from our own team [26]; only two were identified in scientific articles [31, 32]. We submitted the list of instruments to an expert on OC in developing countries, who did not find any major instrument missing; still, given the profusion of instruments for evaluating OC quality in LMICs, there is no guarantee that our survey was totally comprehensive.
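
To make the stopping rule of this snowball approach explicit, here is a minimal sketch in Python; get_references() is a hypothetical stand-in for manually reading a document's reference list, and the toy citation graph below is purely illustrative.

```python
# Minimal sketch of the snowball search: follow reference lists from a
# set of seed documents until no new documents appear.
def snowball(seeds, get_references):
    found = set(seeds)
    frontier = list(seeds)
    while frontier:  # stops when no reference list yields anything new
        doc = frontier.pop()
        for ref in get_references(doc):
            if ref not in found:
                found.add(ref)
                frontier.append(ref)
    return found

# Toy citation graph (hypothetical document names).
refs = {"seed1": ["docA"], "seed2": ["docA", "docB"],
        "docA": ["docC"], "docB": [], "docC": ["seed1"]}
print(sorted(snowball({"seed1", "seed2"}, lambda d: refs.get(d, []))))
# ['docA', 'docB', 'docC', 'seed1', 'seed2']
```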

Table 1 Summary of characteristics of instruments for evaluating quality of obstetric care in low-resource settings

Second, from the literature collected, we inventoried the different components of OC quality and organized them into an exhaustive conceptual framework, as a tool to guide evaluation. "OC quality" is a broad and multifaceted concept. Evaluating it in practical terms requires very precise criteria focused on specific components; hence the value of a conceptual framework that details all the components of OC quality, each of which can serve as an evaluation criterion.

Third, we studied the 37 OC quality evaluation instruments using the descriptive-analytical method: we applied the same analytical framework to all the instruments to collect standardized information and to facilitate comparisons [33]. We presented the instruments in an analysis grid that followed the structure of our conceptual framework and highlighted the evaluation strategies and criteria used by each.

Fourth, in February and March 2007, we led a deliberative process with stakeholders in Burkina Faso to develop a locally relevant evaluation instrument. Deliberative processes are participative mechanisms for eliciting and combining "scientific" evidence from the literature with "colloquial" evidence from local stakeholders' experience to increase the probability of taking sound and acceptable decisions in a given context [10]. Specifically, a deliberative process involves bringing together stakeholders, presenting them with the scientific evidence on the subject of interest and engaging them in a discussion of how it can be integrated with their knowledge of the situation for informed and context-appropriate decision-making. Our working group was made up of six representatives of the Burkinabé Public Health Association and the Ministry of Health. In developing a locally appropriate evaluation strategy and criteria, they used our conceptual framework for OC quality and considered a variety of situational factors. They operationalized the evaluation criteria in questions and response choices, using the analysis grid of existing instruments and their own expertise in OC in the CMAs and CSPSs.

Fifth, the evaluation instrument was finalized after being tested in two CMAs and four CSPSs, which allowed health professionals in the field to participate in the development process.

Results

The results consist of three outputs: 1) a conceptual framework for OC quality; 2) an analysis grid of instruments for evaluating OC quality in LMICs; and 3) an evaluation instrument for OC quality in Burkina Faso.

Conceptual framework for OC quality

Even if the decision is taken to evaluate only certain aspects of OC quality that are considered the most important in a given context, this choice should be explained in relation to the entire set of possible evaluation criteria. This transparency is even more necessary when evaluation choices are negotiated between different stakeholders. In these circumstances, it is important to use a conceptual framework that is sufficiently exhaustive and operational to guide the selection of evaluation criteria. There is considerable variability in the evaluation instruments and in the literature on OC quality. In the absence of any consensus-supported conceptual framework on OC quality, we refined Donabedian's classical model [34, 35], which considers three levels for evaluating the quality of care: structure (human, material, and organizational resources); process (the health services themselves); and outcome (the consequences of these services on patients). These levels follow a logical sequence: available resources, put into action, lead to activities that produce results. We inventoried the components of OC quality from the literature and organized them into these three categories, producing a comprehensive conceptual framework in which every item is a potential criterion for evaluating OC quality. Figure 1 presents this framework with brief explanations; detailed descriptions with references to the literature are in Additional File 1.
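
To make concrete how each component can serve as an evaluation criterion, here is a minimal sketch in Python, assuming a simple mapping from Donabedian's three levels to components; the component names are an illustrative subset, not the verbatim contents of Figure 1.

```python
# A minimal sketch: the framework as a mapping from Donabedian's three
# levels to components, each a potential evaluation criterion.
# Component names are illustrative placeholders.
FRAMEWORK = {
    "structure": ["human resources", "material resources",
                  "organizational resources"],
    "process": ["technical care", "interpersonal relations",
                "continuity of care"],
    "outcome": ["health status of mother and newborn",
                "patient satisfaction"],
}

def candidate_criteria(levels: list[str]) -> list[str]:
    """Every component at the chosen levels is a potential criterion."""
    return [comp for level in levels for comp in FRAMEWORK[level]]

# E.g. a group focusing on structure plus selected aspects of process:
print(candidate_criteria(["structure", "process"]))
```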

Figure 1

Conceptual framework for the quality of obstetric care. Notes:

  1. Number of human resources on staff and on duty 24 hrs/day, 7 days/week.
  2. Qualification is the fact, for example, of having a degree in medicine, midwifery, etc.; this is not to be confused with competence, which is expressed in the care process: qualification and competence are not automatically interrelated.
  3. A person's interest in pursuing the objectives of the organization for which he or she works.
  4. Should be available at all times, functional, and in sufficient quantity.
  5. Including buildings and support services (sterilization, laundry, etc.).
  6. E.g. team organization, job descriptions, regular payment of salaries, sanctions and rewards, etc.
  7. Should be in user-friendly formats and well maintained.
  8. E.g. review of cases having negative outcomes, collecting patients' opinions on services received, etc.
  9. Such that women are not required to pay anything before receiving obstetric services.
  10. Between the caregiver and the patient.
  11. Characteristics of the setting within which care is provided that help put the patient at ease (for example, not only are there curtains--a material resource--in the delivery room, but the caregivers actually take care to close them to protect the women's privacy).
  12. All of the single interactions, and how they are interconnected, from the beginning to the end of the patient's treatment. This looks at how services are organized.
  13. Within the health facility and, if the patient is referred, from one facility to another.
  14. All the services required are provided.
  15. Abusive fees charged by certain healthcare professionals, which are a flagrant sign of bad practices.

This conceptual framework does not judge the components' relative importance, which will, in any case, vary according to stakeholders' perspectives. Rather, it is a tool to support deliberation and the selection of the components to be evaluated in a given context.

The causal links between structure, process, and outcome are theoretical and not always verified in reality [34]. A good-quality structure has the potential to produce a good care process, but this potential may not be achieved. An evaluation focused only on outcomes, especially morbidity and mortality, does not discern to what extent these are due to quality of care rather than other factors, and therefore cannot guide decision-making for improving service quality. Thus, our review excluded instruments that measure only mortality or morbidity and nothing else, since they are not, strictly speaking, instruments for measuring OC quality. In short, if we use evaluation criteria from only one level, we cannot infer that the quality thus measured applies to the entire "chain of production" of OC.

An analysis grid of instruments to evaluate OC quality in LMICs

Given the large number of OC quality evaluation instruments freely available in English, and often also in French, the problem is not a lack of material, but rather, navigating through it. Thus, we developed an analysis grid to record each instrument's content and evaluation strategy and applied it to the 37 instruments. Additional File 2 contains the full grid; Table 2 presents an extract. With respect to content, the grid reproduces the structure of our conceptual framework: each line is devoted to one component of OC quality, with x's marking the instruments that use it as an evaluation criterion. From the distribution of x's we can see on what level(s)--structure, process, or outcome--each instrument is focused. The grid also presents each instrument's broad evaluation strategies: unit of observation (i.e., facility or case management level); information sources (interviews with staff, patients, or families; reviews of medical records and registers; observation); and type of data gathered (quantitative, qualitative, or combined). Even though most instruments are designed to evaluate emergency OC, those whose unit of observation is a facility can also, because of their configuration, be used to evaluate basic OC.

Table 2 Extract from the analysis grid of instruments to evaluate the quality of obstetric care in low- and middle-income countries.

The analysis grid can be used to prepare a new evaluation. For instance, if we want to assess quality at the level of OC process, we can easily locate in Table 2 the most comprehensive instrument for this purpose; if we want to evaluate the condition of buildings, a quick horizontal reading of the grid will show us two instruments for this. This allows us to go directly to the relevant instruments for detailed consultation and draw upon them as needed to produce a new instrument tailored to our specific context.
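
As an illustration of these two reading patterns, the following Python sketch represents the grid as a mapping from components (each tagged with its Donabedian level) to the set of instruments that use them as criteria; all component and instrument names here are hypothetical placeholders, not entries from Additional File 2.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    level: str  # "structure", "process", or "outcome"

# Hypothetical extract of the grid: each component maps to the set of
# instruments that use it as an evaluation criterion (an "x" in the grid).
GRID = {
    Component("Qualified staff on duty 24/7", "structure"): {"A", "B", "D"},
    Component("Condition of buildings", "structure"): {"C", "E"},
    Component("Adherence to treatment protocols", "process"): {"B", "D"},
    Component("Caregiver-patient communication", "process"): {"D"},
    Component("Maternal case fatality", "outcome"): {"B"},
}

def instruments_covering(component_name: str) -> set[str]:
    """Horizontal reading: which instruments evaluate this component?"""
    return next(insts for comp, insts in GRID.items()
                if comp.name == component_name)

def most_comprehensive(level: str) -> str:
    """Vertical reading: which instrument covers the most components
    at a given level?"""
    counts: dict[str, int] = {}
    for comp, insts in GRID.items():
        if comp.level == level:
            for inst in insts:
                counts[inst] = counts.get(inst, 0) + 1
    return max(counts, key=counts.get)

print(instruments_covering("Condition of buildings"))  # {'C', 'E'}
print(most_comprehensive("process"))                   # 'D'
```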

Table 1 summarizes the instruments, their evaluation strategies, and how many components of our conceptual framework they use as evaluation criteria; the specific components they use are noted in the full version of the analysis grid in Additional File 2. Table 1 reveals several trends. Instruments whose unit of observation is a facility essentially evaluate structural components; some also consider process components. All instruments whose unit of observation is a case focus on process. Of these, the majority also evaluate outcome, and half are very interested in structure (three components or more). Half of the instruments we surveyed use multiple sources of information. The choice of sources seems to be independent of the evaluation perspective and unit of observation. With respect to the type of data, instruments whose unit of observation is a case are more likely to collect unstructured responses that must then undergo qualitative analysis.

An instrument to evaluate OC quality in Burkina Faso

The instrument developed to evaluate OC quality in Burkina Faso is presented in its entirety, in French, in Additional File 3. Table 3 summarizes its main features. The working group, made up of Burkinabé Public Health Association and Ministry of Health representatives, chose facilities as the unit of observation because evaluating managed cases required too many resources (time, qualified evaluators) to cover enough cases. This choice allows the stakeholders to include a large number of facilities in the evaluation. Likewise, they preferred to collect quantitative data because these are easier to process. The group's discussions on evaluation criteria were based entirely on the conceptual framework for OC quality. They felt it was essential to evaluate quality at the structure level to identify weaknesses at the source that could affect process and outcome. They selected most of the evaluation criteria for human and material resources. They considered the organizational resources criteria to be interesting but too sophisticated, given the more basic quality problems in Burkina Faso. Understanding that structural quality is necessary but insufficient, the working group wanted the evaluation to also touch upon certain aspects of process quality. Other dimensions, as well as the evaluation of outcomes, were set aside because they would require more resources than were available for this evaluation.

Table 3 Characteristics of the instrument for evaluating quality of obstetric care in Burkina Faso

The group used the analysis grid to identify instruments that dealt with each evaluation criterion retained. They consulted these instruments to see how they measured these criteria (questions, information sources) and to assess whether they were replicable in Burkina Faso, taking into consideration how OC is organized in this country and the uncertain availability and reliability of data. They recognized that the Burkina Faso instrument could not be as detailed as others because its purpose was to produce a first status report of OC quality in many facilities. The group was rarely able to reuse the questions exactly as they were in the existing instruments. Still, consulting them helped inspire and rationalize the process of developing the new instrument because the group was compelled to consider how to adapt the questions for Burkina Faso. It was also through this consultation that they adopted the "room-by-room walk-through" strategy [31], in which the facility evaluation visit follows the route taken by an obstetric case, thus allowing the evaluation to touch upon certain aspects of process even as it focuses on structure.

Finally, the questionnaire was tested in two CMAs and four CSPSs. Some questions were then rewritten to incorporate service providers' suggestions, to compensate for gaps in information sources, or to control for possible biases in staff responses. Field testing allowed us to identify the most reliable source of information for each question. Certain realities led us to adjust the "room-by-room walk-through" approach. In the CSPSs, where teams and infrastructure are limited, following the patients' route provided no additional information and interrupted the flow of the interview. So there we favoured "classic" staff interviews followed by visits to the maternity ward. In CMAs, the walk-through remains relevant; however, it is not possible to respect the patients' route strictly, because of social sensitivities: starting the evaluation visit with the ambulance drivers, even if they are the patients' first contact with the CMA, is not acceptable to the health workers.

Discussion

The need to tailor evaluation instruments to local contexts

Our experience highlights the fact that, even with the abundance of OC quality evaluation instruments specially designed for LMICs, it is rare that an existing instrument will work perfectly, as is, for a new evaluation project, for several reasons:

  • Evaluation criteria: Our conceptual framework and our analysis grid highlight the multiplicity of possible criteria combinations. Chances are slim that an existing instrument's criteria set perfectly matches the issues under evaluation in a new context.

  • Evaluation perspective and resource constraints: Many instruments were developed for case studies such as facility supervision or case management review and are therefore very detailed. In Burkina Faso, the evaluation must cover many facilities, but with a limited budget that restricts the time and human resources available for data collection and analysis. The evaluation questions therefore had to be simplified.

  • Information sources: Documentary sources (registers, medical records) are less subject to desirability or memory biases than staff interviews. However, their availability and reliability vary from country to country, and an evaluation instrument that uses them may not be replicable elsewhere.

  • Organizational and sociocultural realities: The logical reasoning underlying some evaluation instruments occasionally collides with local realities (e.g. the "walk-through").

Still, our experience also demonstrates that the literature (including evaluation instruments), if appropriately presented, can inspire and rationalize the development of a new instrument.

Advantages of a synthesized presentation of the literature and evaluation instruments

The conceptual framework and the analysis grid of evaluation instruments proved useful as syntheses of the OC quality literature. The conceptual framework's components are not new; they come from the literature, where they are amply discussed. What is new, and what helps in rationalizing choices, as we saw with the Burkina Faso working group, is the framework's thoroughness and its structure based on Donabedian's three levels of evaluation of quality of care. Selecting criteria from a defined list involves justifying why the others are not retained.

Also, the visual representation of the relationships among the criteria and the levels (structure, process, or outcome) is a reminder that evaluation provides information on quality at the level evaluated, but not necessarily at the other levels. As for the analysis grid, our experience with the working group confirmed its ease of use for exploring the broad universe of existing instruments.

A key benefit of the conceptual framework and the analysis grid lies in their ability to present, in a synthesized, visual, and easily accessible way, the main elements from the scientific literature. This is especially important because the literature is still not readily available in many LMICs due to problems with Internet connections, cost of subscriptions to scientific journals [36], and language barriers for non-anglophones. Also, decision-makers and professionals (generally major stakeholders in the evaluations) are often unfamiliar with the literature and lack time to consult studies--hence the effectiveness of presenting them with syntheses [37, 38] tailored to their requirements [39].

The working group experience

We had two objectives in using a working group: to promote stakeholders' ownership of the evaluation instrument by involving them in its design, and to combine their informal knowledge of the evaluation context with scientific knowledge from the literature. The involvement of the Burkinabé Public Health Association, which launched the initiative, was a given. The Ministry of Health participated as the key decision-maker in matters of quality of care. Other stakeholders, notably funding agencies, also participate in these decisions, but to include them would have been cumbersome: an overly large working group is less effective [10], and developing an evaluation instrument is a detail-oriented project requiring members' active involvement. Service providers were not included in the working group for the same reasons, but some were consulted during field testing of the instrument.

The literature and the stakeholders' context-specific knowledge were easily integrated. Members of the working group appreciated the process, understood the lessons from the literature well, and took ownership of them--for example, the implications of evaluating OC quality at different levels, i.e., structure, process, or outcome. The type of literature used--evaluation instruments and literature on OC--remained concrete and close to the health professionals' experience, and it was presented concisely and visually in a way that supported its direct, practical application, all of which facilitated its positive reception by the group.

There were no major disagreements around the development of the instrument, probably due to affinities between the institutions represented; some Association members have worked or are currently working for the Ministry. Minor disagreements were resolved pragmatically in the field testing, based on the feasibility of the evaluation options. The only difficulty was related to the availability of the group's members--busy professionals and decision-makers--for this process that involved several meetings and field visits outside the capital. We overcame this by dividing the group into working subgroups and providing continuous feedback on activities to the whole group.

Conclusion

This experience of developing an instrument to evaluate OC quality and availability in Burkina Faso not only underscores the importance of tailoring instruments to the evaluation context, but also shows that existing instruments can inspire and rationalize the process. Two methodological tools produced during this experience could be useful to other evaluation teams: a conceptual framework for OC quality and an analysis grid of existing evaluation instruments. These tools synthesize the literature in a user-friendly format that supports integrating local stakeholders' informal knowledge with the literature to produce evaluation instruments that have both scientific and local legitimacy. In this case, using a deliberative process to integrate these two types of knowledge worked well. It will be important to follow the evaluation currently under way and how its results are used, to see how well this process fulfills its promise of promoting ownership by the local stakeholders.

Abbreviations

OC:

obstetric care

LMICs:

low- and middle-income countries

CSPS:

Centre de Santé et de Promotion Sociale (centre for health and social advancement)

CMA:

Centre Médical avec Antenne chirurgicale (medical centre with a surgical unit)

CRCHUM:

Centre de recherche du Centre hospitalier de l'Université de Montréal (Research Centre of the University of Montreal Hospital Centre).

References

  1. United Nations: The Millennium Development Goals Report 2007. 2007, New York: UN

  2. World Health Organization: Maternal mortality in 2005: estimates developed by WHO, UNICEF, UNFPA, and the World Bank. 2007, Geneva: WHO

  3. Bouvier-Colle M-H, Ouedraogo C, Dumont A, Vangeenderhuysen C, Salanave B, Decam C: Maternal mortality in West Africa. Rates, causes and substandard care from a prospective survey. Acta Obstetricia et Gynecologica Scandinavica. 2001, 80: 113-119.

  4. Murray S, Pearson S: Maternity referral systems in developing countries: Current knowledge and future research needs. Social Science and Medicine. 2006, 62: 2205-2215. 10.1016/j.socscimed.2005.10.025.

  5. World Health Organization: The world health report 2005 - make every mother and child count. 2005, Geneva: WHO

  6. Adeyi O, Morrow R: Concepts and methods for assessing the quality of essential obstetric care. International Journal of Health Planning and Management. 1996, 11: 119-134. 10.1002/(SICI)1099-1751(199604)11:2<119::AID-HPM424>3.0.CO;2-M.

  7. Haddad S, Roberge D, Pineault R: Comprendre la qualité: en reconnaître la complexité. Ruptures, revue transdisciplinaire en santé. 1997, 4: 59-78.

  8. Patton MQ: Utilization-focused evaluation: the new century text. 1997, Thousand Oaks: Sage Publications, 3rd edition

  9. Patton MQ, LaBossière F: Les évaluations axées sur l'utilisation. Concepts et pratiques en évaluation de programme. Edited by: Ridde V, Dagenais C. 2009, Montréal: Presses de l'Université de Montréal

  10. Lomas J, Culyer T, McCutcheon C, McAuley L, Law S: Conceptualizing and Combining Evidence for Health System Guidance. 2005, Ottawa: Canadian Health Services Research Foundation

  11. United Nations Development Programme: Human Development Report 2007/2008. 2007, New York: UNDP

  12. Institut National de la Statistique et de la Démographie, ORC Macro: Enquête Démographique et de Santé du Burkina Faso 2003. 2004, Calverton, MD: INSD & ORC Macro

  13. Bicaba A, Ouedraogo J, Ki-Ouedraogo S, Zida B: Accès aux urgences chirurgicales et équité. Rapport final. 2003, Ouagadougou: ABSP

  14. Richard F, Ouédraogo C, Compaoré J, Dubourg D, De Brouwere V: Reducing financial barriers to emergency obstetric care: experience of cost-sharing mechanism in a district hospital in Burkina Faso. Tropical Medicine and International Health. 2007, 12: 972-981.

  15. Storeng KT, Baggaley RF, Ganaba R, Ouattara F, Akoum MS, Filippi V: Paying the price: The cost and consequences of emergency obstetric care in Burkina Faso. Social Science and Medicine. 2008, 66: 545-557. 10.1016/j.socscimed.2007.10.001.

  16. Ministère de la Santé: Stratégie nationale de subvention des accouchements et des soins obstétricaux et néonatals d'urgence au Burkina Faso. 2006, Ouagadougou: Ministère de la Santé

  17. Bodart C, Servais G, Mohamed YL, Schmidt-Ehry B: The influence of health sector reform and external assistance in Burkina Faso. Health Policy and Planning. 2001, 16: 74-86. 10.1093/heapol/16.1.74.

  18. Haddad S, Nougtara A, Fournier P: Learning from health system reforms: lessons from Burkina Faso. Tropical Medicine and International Health. 2006, 11: 1889-1897. 10.1111/j.1365-3156.2006.01748.x.

  19. Hounton S, Chapman G, Menten J, Brouwere VD, Ensor T, Sombié I, Meda N, Ronsmans C: Accessibility and utilisation of delivery care within a Skilled Care Initiative in rural Burkina Faso. Tropical Medicine and International Health. 2008, 13: 44-52.

  20. Banque Mondiale: Santé et pauvreté au Burkina Faso: Progresser vers les objectifs internationaux dans le cadre de la stratégie de lutte contre la pauvreté. 2003, Washington: Banque Mondiale

  21. Baltussen R, Yé Y, Haddad S, Sauerborn RS: Perceived quality of care of primary health care services in Burkina Faso. Health Policy and Planning. 2002, 17: 42-48. 10.1093/heapol/17.1.42.

  22. Beninguisse G, Nikièma B, Fournier P, Haddad S: L'accessibilité culturelle: une exigence de la qualité des services et soins obstétricaux en Afrique. African Population Studies. 2005, 19: 243-266.

  23. EngenderHealth, AMDD-Mailman School of Public Health-Columbia University: Quality improvement for emergency obstetric care. Toolbook. 2003, New York: EngenderHealth

  24. IMMPACT: Immpact Toolkit: a guide and tools for maternal mortality programme assessment. 2007, Aberdeen: University of Aberdeen

  25. World Health Organization: Beyond the numbers: reviewing maternal deaths and complications to make pregnancy safer. 2004, Geneva: World Health Organization

  26. Université de Montréal, Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Direction Régionale de la Santé de Kayes - Mali: Audits de décès maternels. 2006, Montréal: Université de Montréal

  27. EngenderHealth, AMDD-Mailman School of Public Health-Columbia University: Quality improvement for emergency obstetric care. Leadership manual. 2003, New York: EngenderHealth

  28. JHPIEGO, USAID, Ministry of Public Health of Afghanistan: Standards-based management for improving infection prevention in hospitals - Afghanistan: Labor and delivery areas. Standards-based management and recognition facilitator's handbook. 2005, Baltimore: JHPIEGO

  29. JHPIEGO, USAID, Ministry of Public Health of Afghanistan: Standards-based management for improving quality in essential obstetric care in hospitals - Afghanistan. Standards-based management and recognition facilitator's handbook. 2005, Baltimore: JHPIEGO

  30. Maine D, Bailey P: Indicators for design, monitoring and evaluation of maternal mortality programs. AMDD Project workshop. 2001, Marrakech

  31. Gill Z, Bailey P, Waxman R, Smith JB: A tool for assessing 'readiness' in emergency obstetric care: The room-by-room 'walk-through'. International Journal of Gynecology and Obstetrics. 2005, 89: 191-199. 10.1016/j.ijgo.2004.12.043.

  32. Hussein J, Bell J, Nazzar A, Abbey M, Adjei S, Graham W: The Skilled Attendance Index: Proposal for a New Measure of Skilled Attendance at Delivery. Reproductive Health Matters. 2004, 12: 160-170. 10.1016/S0968-8080(04)24136-2.

  33. Arksey H, O'Malley L: Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology. 2005, 8: 19-32. 10.1080/1364557032000119616.

  34. Donabedian A: The quality of care. How can it be assessed?. JAMA. 1988, 260: 1743-1748. 10.1001/jama.260.12.1743.

  35. Donabedian A: Defining and measuring the quality of health care. Assessing quality health care - Perspectives for clinicians. Edited by: Wenzel R. 1992, Baltimore: Williams & Wilkins

  36. Forti S: Building a partnership for research in global health - Analytical framework. 2005, Ottawa: Canadian Coalition for Global Health Research

  37. Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J: How can research organizations more effectively transfer research knowledge to decision makers?. The Milbank Quarterly. 2003, 81: 221-248. 10.1111/1468-0009.t01-1-00052.

  38. Pang T, Sadana R, Hanney S, Bhutta ZA, Hyder AA, Simon J: Knowledge for better health - a conceptual framework and foundation for health research systems. Bulletin of the World Health Organization. 2003, 81: 815-820.

  39. International Development Research Centre, Coalition for Global Health Research - Canada, Institute of Population Health - University of Ottawa: Knowledge translation in health and development - Research to policy strategies. 2003, Ottawa: IDRC, CGHR, IPH

  40. Maine D, Akalin MZ, Ward VM, Kamara A: The Design and Evaluation of Maternal Mortality Programs. 1997, New York: Center for Population and Family Health, Columbia University


Acknowledgements

This work was made possible thanks to funding from the International Development Research Centre (Canada)--Governance, Equity, and Health Program, in the context of the program entitled "Politiques publiques et protection contre l'exclusion" [Public policy and protection against exclusion]. We thank the other members of the working group, in particular Dr. Yorba Soura, director (until February 2007) of the program for reduced-risk maternity at the Ministry of Health of Burkina Faso; Dr. Boureima Zida, of the Burkinabé Public Health Association; Moussa Kaboré, health attaché in epidemiology, Burkinabé Public Health Association; Ms. Salimata Ki-Ouedraogo, Department of Studies, Ministry of Health. We also thank the staff of the CMAs of Yako and Saponé, and of the CSPSs of Saponé-Marché, Ipelcé, Pelectenga, and Arbollé, for their participation in the testing of the questionnaire. Our thanks go to Dr. Slim Haddad of the University of Montreal/CRCHUM for his methodological advice on evaluating quality of care. We would also like to express our appreciation to the two reviewers of this article for their constructive and detailed comments that helped us greatly to improve its quality. Thanks to Donna Riley for editing and translation support.

Author information

Correspondence to Florence Morestin.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

FM, PF, and AB designed the study. FM reviewed the literature on quality of obstetric care, developed the conceptual framework and analyzed the existing evaluation instruments. PF then reviewed the conceptual framework and analysis grid of the evaluation instruments. FM, with assistance from AB, facilitated the process of developing a new evaluation instrument in Burkina Faso, including the pre-test. JdDS played an important role in the development and pre-testing of the instrument. FM wrote the first draft of the manuscript which was then critically revised by PF, AB, and JdDS. All authors read and approved the final manuscript.

Electronic supplementary material


Additional file 1: Literature review: Components of obstetric care quality. Detailed description of the components of obstetric care quality inventoried from the literature. (DOC 156 KB)


Additional file 2: Analysis grid of instruments to evaluate the quality of obstetric care in low- and middle-income countries. A grid recording the respective content and evaluation strategy of 37 instruments. (XLS 36 KB)


Additional file 3: Instrument to evaluate the availability and quality of obstetric care in Burkina Faso. The instrument developed to evaluate OC quality in Burkina Faso presented in its entirety, in French. (DOC 810 KB)


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Morestin, F., Bicaba, A., Sermé, J.d.D. et al. Evaluating quality of obstetric care in low-resource settings: Building on the literature to design tailor-made evaluation instruments - an illustration in Burkina Faso. BMC Health Serv Res 10, 20 (2010). https://doi.org/10.1186/1472-6963-10-20
