Systematic reviews of empirical bioethics


Downloaded from jme.bmj.com on 1 June 2008

Systematic reviews of empirical bioethics. D Strech, M Synofzik and G Marckmann. J Med Ethics 2008;34:472-477. doi:10.1136/jme.2007.021709



Ethics

Systematic reviews of empirical bioethics

D Strech,1,2 M Synofzik,1,3 G Marckmann1

1 Institute for Ethics and History in Medicine, University of Tübingen, Tübingen, Germany; 2 Department of Bioethics, National Institutes of Health, Bethesda, Maryland, USA; 3 Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany

Correspondence to: Dr med Dr phil Daniel Strech, Institut für Ethik und Geschichte der Medizin, Universität Tübingen, Schleichstraße 8, 72076 Tübingen, Germany; [email protected]

Received 28 May 2007; Revised 17 September 2007; Accepted 19 September 2007

ABSTRACT

Background: Publications and discussions of survey research in empirical bioethics have steadily increased over the past two decades. However, findings often differ among studies with similar research questions. As a consequence, ethical reasoning that considers only parts of the existing literature and does not apply systematic reviews tends to be biased. To date, we lack a systematic review (SR) methodology that takes into account the specific conceptual and practical challenges of empirical bioethics.

Methods: The steps of systematically reviewing empirical findings in bioethics are presented and critically discussed. In particular, (a) the limitations of traditional SR methodologies in the field of empirical bioethics are discussed, and (b) conceptual and practical recommendations for SRs in empirical bioethics, based on the authors' review experiences in healthcare ethics, are presented.

Results: A 7-step approach for SRs of empirical bioethics is proposed: (1) careful definition of the review question; (2) selection of relevant databases; (3) application of ancillary search strategies; (4) development of search algorithms; (5) relevance assessment of the retrieved references; (6) quality assessment of included studies; and (7) data analysis and presentation. Conceptual and practical challenges arise from various peculiarities of the empirical bioethics literature and can lead to biased results if they are not taken into account.

Conclusions: If suitably adapted to the peculiarities of the field, SRs of empirical bioethics provide transparent information for ethical reasoning and decision-making that is less biased than single studies.

Empirical bioethics, understood as the application of social science research methods to the examination of bioethical issues,1 has become a promising approach for providing empirical data that can inform ethical reasoning and theoretical analyses. Accordingly, the number of empirical studies published in bioethical journals has steadily increased over the past two decades.2 As a consequence, we often find several empirical studies that investigate similar research questions but use different qualitative research methodologies (focus groups, in-depth interviews) and different quantitative survey instruments. This methodological diversity might result in varying research findings and hence incoherent conclusions. For example, several qualitative and quantitative studies have focused on the topic of healthcare rationing (HCR). They provide findings on physicians' attitudes by interviewing physicians about the strategies they use for implicit or explicit bedside rationing, influencing factors in the process of cost containment, experiences of role conflicts and consequences for

the patient–physician relationship. Comparing these studies, we found that they are methodologically rather heterogeneous and do not provide a coherent pattern of conclusions. For instance, qualitative studies differ in the range of themes and concepts reported,3–6 while quantitative findings differ in the relative importance of preferences for certain prioritisation criteria or role conflicts with regard to HCR.7–10 As the methodological complexity and variability of interview research on HCR and other ethical issues increase, so does the need for a systematic and transparent presentation, synthesis and interpretation of the results. To date, however, no attempts have been made to outline the major challenges and opportunities in summarising empirical research findings of relevance to bioethics. The fact that McCullough and colleagues recently presented a methodology for systematic reviews of conceptual or argument-based ethical publications underlines the field's growing awareness of the need to synthesise the results of the growing body of bioethical publications.11 In general, SRs aim to summarise large bodies of evidence and help to explain different results among studies addressing the same research question. SRs require the transparent application of scientific strategies to limit the bias inherent in retrieving, critically appraising and summarising the relevant studies that address a specific empirical question. Because the review process itself is subject to bias, a sound review requires explicit reporting of information and the application of systematic methods.
During the past two decades, SRs of clinical trials have been increasingly used to inform medical decision-making, plan future research agendas and establish healthcare policies.12 However, traditional methods for systematically reviewing research findings (for example, most of the Cochrane reviews13) are limited in several ways regarding their application to survey research in empirical bioethics. For example, traditional SRs usually deal with issues (such as specific diseases and interventions) and study designs (such as randomised controlled trials) that correspond well to the controlled vocabulary of databases such as MEDLINE, EMBASE and others. Choosing search terms for an appropriate search algorithm, therefore, does not pose a big challenge. In contrast, within systematic reviews of empirical bioethics, which mostly deal with interview research and rather specific ethical issues, it is much more difficult to find adequate search terms that are represented by the databases' controlled vocabulary. Because of the heterogeneity of those search terms that are relevant for empirical bioethics and are used by different databases, search algorithms for SRs of empirical bioethics have to be adapted to the


databases' vocabulary to enhance the sensitivity and specificity of literature searches. Further limitations of traditional SRs will be discussed in more detail in the following sections.

A 7-STEP APPROACH FOR SYSTEMATIC REVIEWS OF EMPIRICAL STUDIES IN BIOETHICS

In this paper we present and evaluate a new stepwise approach for SRs in empirical bioethics, highlighting the major differences compared with traditional SRs. The underlying questions are: what are the specific requirements of SRs in empirical bioethics, and how can they be appropriately addressed? We illustrate our approach with a systematic review of interview research in the field of HCR. Our approach could serve as a reference work for the stepwise process of SRs of empirical bioethics literature and provide a starting point for future projects systematically reviewing other fields of empirical bioethics. Table 1 summarises the stepwise review process and the practical recommendations, which we will discuss in greater detail in the following sections.

Table 1 The 7-step approach to systematic reviews in empirical bioethics

1. Careful definition of review question: structured by the MIP (methodology, issues, participants) model
2. Selection of relevant databases: related to the specific context
3. Application of ancillary search strategies: review of bibliographies from relevant references; hand search of journals not or not completely captured by the databases used
4. Development of a search algorithm: use of database-specific vocabulary; index mapping; cluster modelling
5. Relevance assessment of the retrieved references: at least three reviewers; blinding of reviewers; predefined inclusion and exclusion criteria; predefined classification schemes; evaluation of inter-rater reliability; flow charts
6. Quality assessment of included studies: justification and explication of assessment tools
7. Data analysis and data presentation: justification and explication of methods

Careful definition of review question

The first important step for a systematic review is the definition of a precise review question. The anatomy of a good clinical question addressed by traditional SRs typically contains four aspects, also known as the PICO model: patient (or problem), intervention (or exposure), comparison and outcomes.14 However, most review questions in empirical bioethics deal with interview research and thus concentrate on different foci. In a review of research methodologies used in empirical bioethics, Borry and colleagues showed that the great majority (92%) of empirical studies published in bioethical journals applied non-experimental study designs and data collection methods, such as quantitative and qualitative interviews and survey research.2 These research paradigms do not focus on interventions or clinical outcomes. Because comparisons and clinical outcomes do not play a role in summarising interview research, the PICO model is not suited to defining a review question for current empirical studies in bioethics. The PICO model could be applied, however, if empirical bioethics used experimental methods that involve comparisons and focus on specific outcomes. One example is psychometric research in bioethics, such as [the validation and application of] questionnaires that try to measure patient decision-making competence (PDMC, or informed-consent research). Future empirical bioethics might also study social interventions that deal with comparisons and outcomes. The Campbell Collaboration, for instance, produces, maintains and disseminates systematic reviews of research evidence (randomised trials) on the effectiveness of social interventions (for further information see http://www.campbellcollaboration.org).

In addition, other factors, such as the methodology of the empirical study and the participants in the interview or survey research, are important determinants for reviews of empirical studies in bioethics. Therefore, we have developed an "MIP" (methodology, issues, participants) model that specifically takes into account the essential aspects of review questions within empirical bioethics: methodology (such as in-depth interviews or questionnaires), issues (such as HCR or end-of-life decision-making) and participants (for example, physicians or patients). Search terms for the systematic search in bibliographic databases should match these three aspects to ensure sensitivity and specificity in the retrieval of relevant literature (see also the section on the development of a search algorithm). We recommend limiting the range of methodologies, issues and participants included in one systematic review.
While the experiences and attitudes of different stakeholders are all relevant for deciding about bioethical issues, it is not feasible to summarise them in one review because of methodological constraints. The feasibility of SRs of survey research mainly depends on the scope of the review question. For reasons of comparability and practicability, it is necessary to focus on specific methodologies, issues and participants when systematically reviewing interview research. For example, in one of our SRs we focused on qualitative research methods (methodology), HCR and resource allocation (issues), and physicians (participants).15
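As a small illustration (the class and field names are ours, not part of the authors' method), the MIP model can be thought of as a three-field record that scopes a review question; the example instance mirrors the qualitative HCR review described above:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ReviewQuestion:
    """Scope of a review question in empirical bioethics, per the MIP model."""
    methodology: List[str]   # e.g. in-depth interviews, questionnaires
    issues: List[str]        # e.g. healthcare rationing, end-of-life decision-making
    participants: List[str]  # e.g. physicians, patients

# The scope of the qualitative HCR review described in the text
hcr_review = ReviewQuestion(
    methodology=["qualitative interview research"],
    issues=["healthcare rationing", "resource allocation"],
    participants=["physicians"],
)
```

Keeping each list short operationalises the recommendation to limit the range of methodologies, issues and participants in a single review.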

Selection of relevant databases


A variety of electronic bibliographic databases are available that include conceptual and empirical bioethics literature. For further information see the National Reference Center for Bioethics Literature (http://www.georgetown.edu/research/nrcbl/nrc) or the European Information Network (http://www.eureth.net). The selected databases determine which articles will be found.16 To reduce the danger of potential bias, authors of SRs should combine several resources to find all references relevant to the selected review question. Traditional SRs prefer databases that include a wide range of publications of clinical trials, such as MEDLINE and EMBASE. Interview research in empirical bioethics, however, has at least two peculiarities. (1) Interview research is often indexed in databases other than MEDLINE and EMBASE. For example, databases such as CINAHL that include psychological and sociological articles are also important. (2) Even more importantly, databases that specifically focus on ethics often provide further relevant references that might be neglected if the literature search is limited to, for example, MEDLINE or EMBASE, even though BIOETHICSLINE is included in


MEDLINE. While databases such as CINAHL or PsychInfo have proved to be effective in SRs of interview research, we still lack experience with specific bioethical databases, such as EUROETHICS. Therefore, we will present and compare our findings in the different databases mentioned above in the following sections, which will allow a preliminary assessment of the databases' specific relevance for SRs in empirical bioethics.

Ancillary search strategies

Traditional SRs sometimes use ancillary search strategies to improve the sensitivity of their search. Common ancillary search strategies include the review of bibliographies from relevant references or a manual search of journals that are not listed, or not completely listed, in the databases used.17 Since databases do not specialise in indexing interview research in bioethics, ancillary search strategies are especially important for SRs of empirical studies in bioethics.

Developing a search algorithm

Determining appropriate search terms for the area of interest is essential for the effective use of bibliographic databases. Search terms should match the controlled vocabulary used by the relevant databases for indexing references. MEDLINE, for example, uses terms from MeSH (medical subject headings). These headings are the keys that unlock the medical literature.18 Because search terms in traditional SRs typically include specific diseases, medical interventions and study designs (randomised controlled trials, observational studies) that correspond to the controlled vocabulary of most databases, developing an appropriate search algorithm is not a major challenge. For SRs of interview research in bioethics, however, the situation is quite different. First, ethical issues are sometimes not adequately represented in the databases' controlled vocabulary, or the databases use different terms for the same issue. For instance, while the controlled vocabularies of MEDLINE, EMBASE, CINAHL, PsychInfo and EUROETHICS all include the term "resource allocation", only MEDLINE (including BIOETHICSLINE) and EUROETHICS include the term "health care rationing". The same problems occur with terms that describe the participants and the paradigm in interview research. Second, even if we use search terms included in the database-specific controlled vocabulary, we face the practical problem that the databases sometimes do not use these terms to index the relevant references. In our SRs, we found that publications of qualitative or quantitative interview research are often not indexed by appropriate MeSH terms such as "qualitative research", "focus group", "survey" or "questionnaire", even if the database's controlled vocabulary includes these terms. With regard to these conceptual and practical challenges, we suggest three strategies that can help to identify the appropriate database-specific search terms and to use them adequately.
Each strategy has proven to be helpful in our SRs.

- Search terms have to be adapted to each database to develop a search algorithm with both good specificity and good sensitivity. Because of the differences in the databases' controlled vocabulary, the search terms that have been successful in one database might not be successful in another.
- To find adequate search terms, one first has to become acquainted with the underlying mapping patterns of each database. Which headings does the database use for indexing references that are of interest for a certain systematic review? Reviewers should look for the database-related headings used for indexing those relevant articles that are already known from prior non-systematic literature reviews. We call this strategy index mapping. Subsequently, reviewers should check the controlled vocabulary of each database to find further search terms relevant to the review question.
- Finally, one has to combine the database-related search terms into one search algorithm using the common Boolean operators "and", "or" and (if useful) "not". To balance the need for good sensitivity and specificity, we recommend building three clusters according to the MIP model. For instance, all database-specific search terms that deal with participants (for example, physician's role, physician attitudes) should be combined with the Boolean operator "or". Finally, the three clusters have to be combined with the Boolean operator "and". We call this strategy cluster modelling.

To illustrate this strategy, the database-related search algorithms and the numbers of retrieved references in our SR are presented in table 2. In our SR of qualitative interview research about HCR, the bibliographic search of five databases resulted in 614 references. Of the total number of references, 46% (283) were retrieved in MEDLINE, 31% (193) in EMBASE, 9.6% (59) in PsychInfo, 4.7% (29) in CINAHL, 3.4% (21) in EUROETHICS and 4.7% (29) by ancillary search strategies. There was an overlap of 6.4% (39 references): 18 were retrieved in both EMBASE and MEDLINE, 12 in both PsychInfo and MEDLINE and nine in both CINAHL and MEDLINE. After eliminating the overlapping results, the total number of references was 575.
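As a sketch of how index mapping and cluster modelling could be mechanised, the following Python fragment first tallies the indexing terms attached to already-known relevant records (index mapping) and then combines three illustrative MIP clusters into a Boolean search string (cluster modelling). The records and search terms here are invented for illustration, not the authors' actual algorithms:

```python
from collections import Counter

# Index mapping: count the database headings attached to references that are
# already known to be relevant (records below are invented for illustration).
known_relevant = [
    {"title": "Physicians' responses to resource constraints",
     "headings": ["Health Care Rationing", "Attitude of Health Personnel"]},
    {"title": "Decision-making about scarce resources in primary care",
     "headings": ["Resource Allocation", "Attitude of Health Personnel",
                  "Focus Groups"]},
]
heading_counts = Counter(h for ref in known_relevant for h in ref["headings"])

# Cluster modelling: OR the terms within each MIP cluster, AND across clusters.
mip_clusters = {
    "methodology": ["interviews", "focus groups", "qualitative research"],
    "issues": ["resource allocation", "health care rationing"],
    "participants": ["physician's role", "attitude of health personnel"],
}

def build_query(clusters):
    """Build a database-style Boolean query string from MIP clusters."""
    groups = []
    for terms in clusters.values():
        groups.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(groups)

query = build_query(mip_clusters)
```

The frequent headings surfaced by the tally are natural candidates for the cluster vocabulary of that database; in practice the clusters would be rebuilt per database, as the text recommends.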

Relevance assessment of the retrieved references

Traditional SRs typically search for studies that involve distinct outcomes of specified medical interventions in specific patient populations. Assessing the relevance of the retrieved references, therefore, is a rather formal task. Little interpretation is needed, for example, to decide whether a certain study really deals with SSRI drug treatment for major depression among patients aged between 18 and 65. By contrast, in reviews of interview research, the relevance assessment involves a good deal of interpretation and can therefore be the step that is most susceptible to bias in decisions about the inclusion or exclusion of information in the final summary. To deserve the label systematic, reviews in empirical bioethics have to follow the guiding principles of transparency and systematisation within the crucial steps of the relevance assessment. Based on our experience with SRs, we point out the following three strategies: (1) The relevance assessment has to be informed by a predefined list of inclusion and exclusion criteria. For example, inclusion criteria for references in our review were: (a) providing qualitative data through in-depth interviews, focus groups or surveys with open-ended questions; (b) being conducted in a developed or high-income country; (c) including practising physicians (general practitioners and specialists) as participants and (d) focusing on questions of rationing or resource allocation in healthcare but not allocation of organs or intensive care unit beds. (2) In assessing the relevance of the retrieved references, the reviewing authors should be blinded as thoroughly as possible. They should make their judgement based only on title and abstract. Potentially biasing information, such as the authors' names, the journal or the year of publication, has to be eliminated from the list.
(3) At least two experts in the field of inquiry should score the relevance of each reference in relation to the predefined inclusion criteria, using a classification scheme such as the following: (a) irrelevant: very poor in


Table 2 Database-specific search algorithms (search terms grouped by MIP cluster; within each cluster, terms were combined with "or", and the clusters were then combined with "and")

MEDLINE
Methodology: interviews [MeSH]; healthcare surveys [MeSH]; focus groups [MeSH]; narration [MeSH]; questionnaires [MeSH]; (perceptions or attitudes or opinions or experiences or expectations or views or survey or questionnaire or qualitative) [ti]
Issues: resource allocation [MeSH]; healthcare rationing [MeSH]
Participants: ethics [MeSH]; ethics [Subheading]; decision-making [MeSH]; physician's role [MeSH]; attitude of health personnel [MeSH]; physician's practice patterns [MeSH]
References retrieved (included): 283 (8)

EMBASE
Methodology: interview; semi-structured interview; unstructured interview; qualitative research; open-ended questionnaire; observational method; (perceptions or attitudes or views or expectations or opinions or experiences).m_titl.
Issues: resource allocation; *health care; *health care policy; *patient care
Participants: medical ethics; decision-making; physician attitude
References retrieved (included): 193 (5); duplicates with MEDLINE: 18

PsychInfo
Methodology: attitudes; interviews; qualitative research; (views or opinions or attitudes or perceptions or experiences or expectations).m_titl.
Issues: resource allocation or health resource allocation
Participants: management decision-making; ethics; physicians; attitudes
References retrieved (included): 59 (0); duplicates with MEDLINE: 12

CINAHL
Methodology: interviews; questionnaires; qualitative studies
Issues: resource allocation; health care utilisation; "costs and cost analysis"
Participants: ethics, medical; physicians; physician attitudes; physician's role; decision-making, clinical; physician–patient relations
References retrieved (included): 29 (2); duplicates with MEDLINE: 9

EUROETHICS
Methodology: qualitative research; evaluation studies; (views or opinions or attitudes or experiences)
Issues: resource allocation; health care rationing
References retrieved (included): 21 (0); duplicates with MEDLINE: 0

Ancillary search
Reference check and manual search
References retrieved (included): 29 (1); duplicates with MEDLINE: 0

*Subject headings were used in focused mode. [ti] and .m_titl. indicate search terms limited to the title only.

relation to the inclusion criteria; (b) slightly irrelevant: poor in relation to the inclusion criteria; (c) somewhat relevant: good in relation to the inclusion criteria; and (d) relevant: very good in relation to the inclusion criteria. Scores from the two experts should be compared to assess agreement and to evaluate the inter-rater reliability using coefficients such as Cohen's κ or Cronbach's α.19 In cases of discrepancy, a third expert should be consulted to determine the final relevance value for each reference. To ensure transparency, all decision-making points within the relevance assessment have to be explicitly documented. Common instruments for presenting the different steps of the relevance

assessment are flow charts (figure 1 shows the flow chart of our SR of qualitative studies). In our review of qualitative interview studies of HCR, a total of 1.7% of the references (10 of 575) were considered relevant to the SR question. Only MEDLINE (8 references), EMBASE (5), CINAHL (2) and the ancillary search strategies (1) provided relevant references. MEDLINE retrieved the highest number of relevant references; in contrast to EMBASE, CINAHL and the ancillary search strategies each retrieved one additional reference that was not found by MEDLINE. EUROETHICS and PsychInfo did not retrieve any of the studies that were finally included in the review. Even though


Figure 1 Flow chart for relevance assessment.

EUROETHICS might provide important conceptual bioethics literature, it was not helpful in our review for retrieving empirical studies in bioethics.
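The blinding and double-scoring procedure described in this section can be sketched in a few lines of Python. The record fields, the example scores and the plain Cohen's κ implementation below are illustrative assumptions, not the authors' actual tooling:

```python
from collections import Counter

def blind(reference):
    # Keep only title and abstract; drop author, journal and year so that
    # reviewers cannot be influenced by them (field names are illustrative).
    return {"title": reference.get("title"), "abstract": reference.get("abstract")}

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_expected = sum((c1[c] / n) * (c2[c] / n) for c in set(c1) | set(c2))
    if p_expected == 1.0:  # both raters used a single, identical category
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)

# Relevance scores on the 4-level scheme described above
# (1 = irrelevant ... 4 = relevant); values are invented.
rater_a = [4, 4, 1, 2, 3, 1]
rater_b = [4, 3, 1, 2, 3, 2]
kappa = cohens_kappa(rater_a, rater_b)
```

In cases of disagreement flagged by a low κ, a third expert would be consulted, as the text recommends.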

Quality assessment of the included studies

In traditional SRs, the quality assessment more or less replaces the relevance assessment as a method of influencing the principal source of bias. The quality of a study often determines its inclusion in, or exclusion from, the final outcome presentation or meta-analysis. Several more-or-less rigorous checklists, such as the CONSORT, Jadad or GRADE criteria, exist to assess the quality of clinical trials.20–22 By contrast, the systematic appraisal of the quality of interview research is characterised by great controversy rather than by a gold standard. For a more detailed analysis of the various pros and cons of using checklists in qualitative and quantitative interview research, see Walsh and Downe,23 Barbour24 and Giacomini.25 We suggest that if specific tools for quality assessment within SRs of interview research are applied, reviewers should explicitly justify their objective and provide a detailed description of the instrument.26 27 For instance, in our review of qualitative studies, quality assessment was performed to inform the reader about the characteristics of the included studies. We assessed the quality of the included studies based on a modification of the Critical Appraisal Skills Programme (CASP) tool for qualitative research studies developed by the Public Health Resource Unit (PHRU, http://www.phru.nhs.uk/Pages/PHD/CASP.htm). See also table 3 (additional online resource). A similar modification of the CASP tool has proved to be effective in previous SRs of qualitative studies.28 29
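A checklist-based appraisal of this kind can be represented as a simple tally. The items below paraphrase typical questions in the spirit of the CASP tool for qualitative research; they are not the official wording, and the scoring scheme is our illustrative assumption:

```python
# Paraphrased, illustrative appraisal questions in the spirit of the CASP
# tool for qualitative research (not the official wording).
appraisal_items = [
    "clear statement of research aims",
    "qualitative methodology appropriate",
    "recruitment strategy described",
    "data collection addressed the research issue",
    "data analysis sufficiently rigorous",
    "clear statement of findings",
]

def appraise(answers):
    """answers maps item -> True / False / None (yes / no / can't tell)."""
    met = sum(1 for v in answers.values() if v is True)
    return f"{met}/{len(answers)} criteria met"

# A hypothetical study meeting five items, with one 'can't tell'
example = appraise({**{item: True for item in appraisal_items[:5]},
                    appraisal_items[5]: None})
```

Reporting the per-item answers alongside the tally, rather than using the score as an exclusion threshold, matches the descriptive use of quality assessment described in the text.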

Data analysis and data presentation

Traditional SRs are mainly concerned with combining outcome data from interventional studies. They involve techniques, such as meta-analyses, that are concerned with assembling and pooling data, and they require a basic comparability between the phenomena studied so that the data can be aggregated for analysis. In contrast, SRs of interview research in empirical bioethics aim to summarise the qualitative concepts and quantitative data identified in the primary studies without providing a final score that might decide about the effectiveness or ineffectiveness of a

specific intervention. Interview studies are not interventional studies. In recent years, several articles have raised questions about how to summarise or synthesise qualitative research findings.30 31 Furthermore, the approaches differ in the way they analyse the data reported by the studies included in the review. Dixon-Woods and colleagues give an overview of possible methods, such as narrative summary, thematic analysis, grounded theory, meta-ethnography, content analysis, qualitative comparative analysis and others.32 The methods vary in their ability to deal with qualitative and quantitative forms of evidence and in the type of question for which they are most suitable. A more comprehensive description and discussion of these methods is beyond the scope of this paper. However, the simple fact that these various approaches exist underlines the need for clear reporting of the methods chosen for analysing and summarising study findings in empirical bioethics. For example, in our SR of qualitative interview research, we used the technique of thematic analysis to extract the qualitative data from the included studies. Although the main emphasis of the research question differed somewhat among those studies, all of them presented narrative accounts of physicians relating to the topic of HCR. The final result of the thematic analysis of these narrative accounts is a summary table that presents the wide range of themes and concepts that, according to the interviewed physicians, play a crucial role in bedside rationing. Like all qualitative research findings, the summary table is concerned with the generalisability of the range of themes and key issues emerging in the narrative accounts presented by the reviewed studies. Yet it will not provide any statistical or other kind of quantitative generalisability. To assess the relative importance of each single key issue captured by such a SR, one needs an additional SR of quantitative interview research.
The objective of such a SR of quantitative interview research is twofold. On the one hand, it provides the data for assessing the relative importance of key issues in a certain field of bioethics. On the other hand, we often face the situation that not all key issues have yet been studied for their relative importance. Data analysis within a SR of quantitative interview research, therefore, also aims to highlight the need for further research projects.
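The thematic-analysis summary table described above can be sketched as a tally of which studies reported which themes. The study labels and themes below are invented stand-ins for the narrative accounts extracted from the included interview studies:

```python
from collections import defaultdict

# Invented study labels and themes, standing in for the narrative accounts
# extracted from the included interview studies.
extracted = {
    "Study A": ["bedside rationing strategies", "role conflict"],
    "Study B": ["role conflict", "cost pressure on prescribing"],
    "Study C": ["bedside rationing strategies", "role conflict"],
}

themes = defaultdict(list)
for study, found in extracted.items():
    for theme in found:
        themes[theme].append(study)

# Summary table: theme -> which studies reported it
summary = {theme: sorted(studies) for theme, studies in themes.items()}
```

As the text stresses, such a table shows the range of themes and where they occurred; it deliberately carries no statistical weight about the relative importance of each theme.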

CONCLUSIONS

Due to the considerable methodological complexity and diversity in empirical bioethics (such as qualitative and quantitative interview research), there is a great need for the development of systematic and transparent techniques to synthesise and interpret their results. If suitably adapted to the peculiarities of the field, SRs of empirical bioethics provide a pertinent instrument that addresses this need by optimising transparency and systematisation in summarising empirical findings with impact on ethical reasoning and theory. The impact itself can take two different forms. Empirical findings can have a modificatory impact, if they give rise to important changes or transformations of ethical theory or reasoning. They can have a supportive impact, if they corroborate an ethical theory by analysing the factual practice to which the theory is meant to apply. The results of the SR on HCR, for instance, had a supportive impact on the normative ethical frameworks developed for HCR, especially for the requirements of consent, minimising conflicts of interest, and publicity.33 34 In contrast, the results had only a low modificatory impact on the frameworks' norms. In this article, we present a methodology for SRs that explicitly takes into account the specific features of interview


studies in empirical bioethics. We illustrated the application of this methodology by presenting the results of our SR of interview research on HCR. The major recommendations for SRs in empirical bioethics are as follows:

- As the PICO model of traditional SRs does not fit the specific aspects of interview research in bioethics, we developed the MIP model, which defines methodologies, issues and participants as the most important factors for developing a sound review question. For experimental, outcome-oriented studies, which to date are rare in empirical bioethics, the PICO model could also apply.
- To achieve higher specificity and sensitivity in searching for empirical studies relevant to bioethics, search terms have to be adapted to the specific keyword catalogues and indexing vocabulary of each bibliographic database. To become acquainted with the underlying mapping patterns of a database, we recommend looking for the database-related vocabulary used in indexing those relevant articles that were retrieved by prior non-systematic reviews or that are already known (index mapping).
- To avoid different sources of bias during the relevance assessment, the reviewers who receive the list of retrieved references should be blinded to the authors' names, the journal and the year of publication. In addition, at least two experts should score the relevance of each reference in relation to predefined inclusion criteria and classification schemes.
- Reliability coefficients should be reported to document the inter-rater reliability.
- Because there is no gold standard for assessing the quality of interview studies, reviewers should explicitly justify their choice of any of the existing assessment instruments and provide a detailed description of the instrument.
- Within the data analysis, the reasons for using a certain method for summarising qualitative or quantitative interview findings should be explicitly stated.
These recommendations can help to increase the systematic character and transparency of reviews in the field of empirical ethics and thereby reduce the influence of various sorts of bias, such as bias in the literature search or in the relevance assessment. Although the methodology for SRs in empirical bioethics presented in this article proved useful in our SR of interview research on HCR, the approach should not be considered definitive. Further experience with methodological variations and different issues could help to improve the interplay between empirical data and argument-based ethical reasoning. Finally, even if systematic reviews can reduce the risk of bias in the retrieval, interpretation, summary and communication of information from empirical studies in bioethics, we must not neglect the influence of other factors, such as sociocultural circumstances and subjective value judgements, on ethical decision-making.
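The blinded relevance assessment recommended above can be sketched as a small pre-processing step. This is a hypothetical illustration, not tooling from the paper: the field names ("authors", "journal", "year", "title", "abstract") and the function are invented, and real exports from a citation manager would need mapping to this shape.

```python
import random

def blind_references(references, seed=0):
    """Strip author, journal and year fields and shuffle the list, so
    reviewers score relevance from title and abstract alone."""
    blinded = [{"id": i, "title": r["title"], "abstract": r["abstract"]}
               for i, r in enumerate(references)]
    # A fixed seed gives every reviewer the same (but still arbitrary) order.
    random.Random(seed).shuffle(blinded)
    return blinded

# Invented example records; only title and abstract survive the blinding.
refs = [
    {"authors": "Doe J", "journal": "J Med Ethics", "year": 2005,
     "title": "Rationing at the bedside", "abstract": "Interview study ..."},
    {"authors": "Roe R", "journal": "BMJ", "year": 2003,
     "title": "GPs and scarce resources", "abstract": "Survey ..."},
]
print(sorted(blind_references(refs)[0]))  # → ['abstract', 'id', 'title']
```

The "id" field lets the review team map the blinded scores back to the full records once both reviewers have screened the list.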

Acknowledgements: We would like to thank Marion Danis and Jon Tilburt for their critical review of the manuscript.

Funding: This work was supported by grant 01GP0608 from the German Federal Ministry of Education and Research and by a grant from the German Academic Exchange Service.

Competing interests: None declared.

The views expressed by the authors do not necessarily reflect policies of the US National Institutes of Health or the US Department of Health and Human Services.

J Med Ethics 2008;34:472–477. doi:10.1136/jme.2007.021709

