Consequences of Deciding on Research Questions Without Conducting a Literature Review
Literature reviews are an integral part of the process and communication of scientific research. Whilst systematic reviews have come to be regarded as the gold standard of evidence synthesis, many literature reviews fall short of these standards and may end up presenting biased or incorrect conclusions. In this post, Neal Haddaway highlights eight common problems with literature review methods, provides examples of each, and offers practical solutions for mitigating them.
Researchers regularly review the literature – it's an integral part of day-to-day research: finding relevant research, reading and digesting the key findings, summarising across papers, and drawing conclusions about the evidence base as a whole. However, there is a fundamental difference between brief, narrative approaches to summarising a selection of studies and attempting to reliably and comprehensively summarise an evidence base to support decision-making in policy and practice.
So-called 'evidence-informed decision-making' (EIDM) relies on rigorous systematic approaches to synthesising the evidence. Systematic review has become the gold standard of evidence synthesis and is well established in the pipeline from research to practice in the field of health. Systematic reviews must include a suite of specifically designed methods for the conduct and reporting of all synthesis activities (planning, searching, screening, appraising, extracting data, qualitative/quantitative/mixed methods synthesis, writing; e.g. see the Cochrane Handbook). The method has been widely adapted into other fields, including the environment (the Collaboration for Environmental Evidence) and social policy (the Campbell Collaboration).
Despite the growing interest in systematic reviews, traditional approaches to reviewing the literature continue to persist in contemporary publications across disciplines. These reviews, some of which are incorrectly referred to as 'systematic' reviews, may be susceptible to bias and, as a result, may end up providing incorrect conclusions. This is of particular concern when reviews address key policy- and practice-relevant questions, such as the ongoing COVID-19 pandemic or climate change.
These limitations of traditional literature review approaches can be remedied relatively easily with a few key procedures, many of which are not prohibitively costly in terms of skill, time or resources.
In our recent paper in Nature Ecology and Evolution, we highlight eight common problems with traditional literature review methods, provide examples of each from the field of environmental management and ecology, and offer practical solutions for mitigating them.
Problem | Solution
---|---
Lack of relevance – limited stakeholder engagement can produce a review that is of limited practical use to decision-makers | Stakeholders can be identified, mapped and contacted for feedback and inclusion without the need for extensive budgets – check out best-practice guidance
Mission creep – reviews that don't publish their methods in an a priori protocol can suffer from shifting goals and inclusion criteria | Carefully design and publish an a priori protocol that outlines planned methods for searching, screening, data extraction, critical appraisal and synthesis in detail. Make use of existing organisations to support you (e.g. the Collaboration for Environmental Evidence).
A lack of transparency/replicability in the review methods may mean that the review cannot be replicated – a central tenet of the scientific method! | Be explicit, and make use of high-quality guidance and standards for review conduct (e.g. CEE Guidance) and reporting (PRISMA or ROSES)
Selection bias (where included studies are not representative of the evidence base) and a lack of comprehensiveness (an inappropriate search method) can mean that reviews end up with the wrong evidence for the question at hand | Carefully design a search strategy with an information specialist; trial the search strategy (against a benchmark list); use multiple bibliographic databases/languages/sources of grey literature; publish search methods in an a priori protocol for peer review
The exclusion of grey literature and failure to test for evidence of publication bias can result in incorrect or misleading conclusions | Include attempts to find grey literature, including both 'file-drawer' (unpublished academic) research and organisational reports. Test for possible evidence of publication bias.
Traditional reviews often lack appropriate critical appraisal of included study validity, treating all evidence as equally valid – we know some research is more valid and we need to account for this in the synthesis. | Carefully plan and trial a critical appraisal tool before starting the process in full, learning from existing robust critical appraisal tools.
Inappropriate synthesis (e.g. using vote-counting and inappropriate statistics) can negate all of the preceding systematic effort. Vote-counting (tallying studies based on their statistical significance) ignores study validity and magnitude of effect sizes. | Select the synthesis method carefully based on the data analysed. Vote-counting should never be used instead of meta-analysis. Formal methods for narrative synthesis should be used to summarise and describe the evidence base.
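The contrast between vote-counting and meta-analysis in the final row can be shown numerically. Below is a minimal Python sketch using invented effect sizes and variances (not data from the paper): several small studies individually fail to reach significance, so vote-counting suggests little evidence of an effect, while an inverse-variance weighted (fixed-effect) meta-analysis pools the studies and reveals a clear overall effect.

```python
import math

# Hypothetical study results as (effect size, variance) pairs.
# Most studies are small, with positive but imprecise estimates.
studies = [(0.30, 0.09), (0.25, 0.16), (0.40, 0.25), (0.20, 0.04), (0.35, 0.01)]

# Vote-counting: tally studies by statistical significance (|z| > 1.96),
# ignoring effect magnitude and study precision.
significant = sum(1 for d, v in studies if abs(d) / math.sqrt(v) > 1.96)
print(f"Vote count: {significant}/{len(studies)} studies significant")

# Fixed-effect meta-analysis: inverse-variance weighted mean effect,
# which pools information across studies instead of discarding it.
weights = [1 / v for _, v in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"Pooled effect: {pooled:.3f} (SE {se:.3f})")
```

With these numbers only one of five studies is individually significant, yet the pooled estimate (about 0.32, SE about 0.08) is clearly distinguishable from zero – exactly the information that vote-counting throws away.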
There is a lack of awareness and appreciation of the methods needed to ensure systematic reviews are as free from bias and as reliable as possible, demonstrated by recent, flawed, high-profile reviews. We call on review authors to conduct more rigorous reviews, on editors and peer reviewers to gate-keep more strictly, and on the community of methodologists to better support the broader research community. Only by working together can we build and maintain a strong system of rigorous, evidence-informed decision-making in conservation and environmental management.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
Image credit: Jaeyoung Geoffrey Kang via Unsplash
Source: https://blogs.lse.ac.uk/impactofsocialsciences/2020/10/19/8-common-problems-with-literature-reviews-and-how-to-fix-them/