Systematic review of the use of process evaluations in knowledge translation research
Shannon D. Scott
1Faculty of Nursing, University of Alberta, Edmonton, Alberta, Canada
Thomas Rotter
2School of Nursing, Queen's University, Kingston, Ontario, Canada
Rachel Flynn
1Faculty of Nursing, University of Alberta, Edmonton, Alberta, Canada
Hannah K. Brooks
1Faculty of Nursing, University of Alberta, Edmonton, Alberta, Canada
Tabatha Plesuk
1Faculty of Nursing, University of Alberta, Edmonton, Alberta, Canada
Katherine H. Bannar-Martin
1Faculty of Nursing, University of Alberta, Edmonton, Alberta, Canada
Thane Chambers
3University of Alberta Libraries, Edmonton, Alberta, Canada
Lisa Hartling
4Department of Pediatrics, University of Alberta, Edmonton, Alberta, Canada
Received 2019 Jun 21; Accepted 2019 Sep 13.
Data Availability Statement
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Abstract
Background
Experimental designs for evaluating knowledge translation (KT) interventions can provide strong estimates of effectiveness but offer limited insight into how the intervention worked. Consequently, process evaluations have been used to explore the causal mechanisms at work; however, there are limited standards to guide this work. This study synthesizes current evidence on KT process evaluations to provide future methodological recommendations.
Methods
Peer-reviewed search strategies were developed by a health research librarian. Studies had to be in the English language, published since 1996, and were not excluded based on design. Studies had to (1) be a process evaluation of a KT intervention study in primary health, (2) be a primary research study, and (3) include a licensed healthcare professional delivering or receiving the intervention. A two-step, two-person hybrid screening approach was used for study inclusion, with inter-rater reliability ranging from 94 to 95%. Data on study design, data collection, theoretical influences, and approaches used to evaluate the KT intervention, analysis, and outcomes were extracted by two reviewers. Methodological quality was assessed with the Mixed Methods Appraisal Tool (MMAT).
Results
Of the 20,968 articles screened, 226 studies fit our inclusion criteria. The majority of process evaluations used qualitative forms of data collection (43.4%) and individual interviews as the predominant data collection method. 72.1% of studies evaluated barriers and/or facilitators to implementation. 59.7% of process evaluations were stand-alone evaluations. The timing of data collection varied widely, with post-intervention data collection being the most frequent (46.0%). Only 38.1% of the studies were informed by theory. Furthermore, 38.9% of studies had MMAT scores of 50 or less, indicating poor methodological quality.
Conclusions
There is widespread acceptance that the generalizability of quantitative trials of KT interventions would be significantly enhanced through complementary process evaluations. However, this systematic review found that process evaluations are of mixed quality and lack theoretical guidance. Most process evaluation data collection occurred post-intervention, undermining the ability to evaluate the process of implementation. Strong science and methodological guidance is needed to underpin and guide the design and execution of process evaluations in KT science.
Registration
This study is not registered with PROSPERO.
Keywords: Process evaluation, Knowledge translation, Research use, Health interventions, KT interventions, Systematic review
Background
The implementation of research into healthcare practice is complex [1], with multiple levels to consider such as the patient, healthcare provider, multidisciplinary team, healthcare institution, and local and national healthcare systems. The implementation of evidence-based treatments to achieve healthcare system improvement that is robust, efficient, and sustainable is crucially important. However, it is well established that improving the availability of research is not enough for successful implementation [2]; rather, active knowledge translation (KT) interventions are essential to facilitate the implementation of research into practice. Determining the success of KT interventions and the implementation process itself relies on evaluation studies.
In the KT field, experimental designs such as randomized trials, cluster randomized trials, and stepped wedge designs are widely used for evaluating the effectiveness of KT interventions. Rigorous experimental designs can provide strong estimates of KT intervention effectiveness, but offer limited insight into how the intervention worked or not [1], as well as how KT interventions are mediated by different facilitators and barriers and how they lead to implementation or not [3–5]. KT interventions contain several interacting components, such as the degree of flexibility or tailoring of the intervention, the number of interacting components within the interventions, and the number and difficulty of behaviors required by those delivering or receiving the intervention [3]. This complexity makes it particularly challenging to evaluate KT intervention effectiveness [3–5]. The effectiveness of KT interventions is a result of the interactions between many factors such as context and mechanisms of change. A lack of intervention effect may be due to implementation failure rather than the ineffectiveness of the intervention itself. KT interventions pose methodological challenges and require augmentations to the standard experimental designs [6] to understand how they do or do not work.
As a result of these limitations, researchers have started to conduct process evaluations alongside experimental designs for evaluating KT interventions. The broad purpose of a process evaluation is to explore aspects of the implementation process [7]. Process evaluations can be used to assess the fidelity, dose, adaptation, reach, and quality of implementation [8, 9] and to identify the causal mechanisms [10, 11], mechanisms of impact [12], and contextual factors associated with variation in outcomes across sites [6, 13]. Furthermore, process evaluations can assist in interpreting the outcome results [7], the barriers and facilitators to implementation [14, 15] and sustainability [16], as well as examining the participants' views [17] and understandings of components of the intervention [18, 19]. Process evaluations are vital in identifying the success or failure of implementation, which is critical in understanding intervention effectiveness.
Despite the work of Moore and colleagues [12], there have been scant methodological recommendations to guide KT process evaluations. This deficit has made designing process evaluations in KT research challenging and has hindered the potential for meaningful comparisons across process evaluation studies. In 2000, the Medical Research Council released an evaluation framework for designing and evaluating complex interventions; this guidance was subsequently revised in 2008 [4, 20]. Of note, the earlier guidance for evaluating complex interventions focused exclusively on randomized designs with no mention of process evaluations. The revisions mentioned process evaluations and the role that they can have with complex interventions, yet did not provide specific recommendations for evaluation designs, data collection types, time points, and standardized evaluation approaches for complex interventions. This level of specificity is imperative for research comparisons across KT intervention process evaluations and to understand how change is mediated by specific factors.
Recently, the Medical Research Council commissioned an update of this guidance to be published in 2019 [21, 22]. The update re-emphasizes some of the previous messages related to complex intervention development and evaluation; however, it provides a more flexible and less linear model of the process, with added emphasis on the development, implementation, and evaluation phases, as well as providing a variety of successful case examples that employ a range of methods (from natural experiments to clinical trials). Early reports of the update to the MRC framework highlight the importance of process and economic evaluations as good investments and a move away from experimental methods as the only or best option for evaluation.
In 2013, a framework for process evaluations for cluster-randomized trials of complex interventions was proposed by Grant and colleagues [20]; however, these recommendations were not based upon a comprehensive, systematic review of all approaches used by others. One study found that only 30% of randomized controlled trials had associated qualitative investigations [23]. Moreover, a large proportion of those qualitative evaluations were completed before the trial, with smaller numbers of qualitative evaluations completed during the trial or following it. Given the limitations of the process evaluation work to date, it is critical to systematically review all existing process evaluations of KT outcome assessments. Doing so will aid in the development of rigorous methodological guidance for process evaluation research of KT interventions moving forward.
The aim of our systematic review is to synthesize the existing evidence on process evaluation studies assessing KT interventions. The purpose of our review is to make explicit the current state of methodological guidance for process evaluation research, with the aim of providing recommendations for multiple end-user groups. This knowledge is critically important for healthcare providers, health quality consultants, decision and policy makers, non-governmental organizations, governmental departments, and health services researchers to evaluate the effectiveness of their KT efforts, in order to ensure scarce healthcare resources are effectively utilized and enhanced knowledge is properly generalized to benefit others.
Objectives and key questions
As per our study protocol [24], available openly via 10.1186/2046-4053-3-149, the objectives of this systematic review were to (1) systematically locate, assess, and report on published studies in healthcare that are a stand-alone process evaluation of a KT intervention or have a process evaluation component, and (2) offer guidance for researchers in terms of the development and design of process evaluations of KT interventions. The key research question guiding this systematic review was: what is the "state-of-the-science" of separate (stand-alone) or integrated process evaluations conducted alongside KT intervention studies?
Methods
Search strategy
This systematic review followed a comprehensive methodology using rigorous guidelines to synthesize diverse forms of research evidence [25], as outlined in our published protocol [24]. A peer-reviewed literature search was conducted by a health research librarian for English language articles published between 1996 and 2018 in six databases (Ovid MEDLINE/Ovid MEDLINE (R) In-Process & Other Non-Indexed Citations, Ovid EMBASE, Ovid PsycINFO, EBSCOhost CINAHL, ISI Web of Science, and ProQuest Dissertations and Theses). Full search details can be found in Additional file 1. See Additional file 2 for the completed PRISMA checklist.
Inclusion/exclusion criteria
Studies were not excluded based upon research design and had to comply with three inclusion criteria (Table 1). A two-person hybrid approach was used for screening article titles and abstracts, with inter-rater reliability ranging from 94 to 95%. Full-text articles were independently screened by two reviewers, and a two-person hybrid approach was used for data extraction.
Table 1

Study design | Must be a primary research study. Research studies including all designs, e.g., experimental, quasi-experimental, and non-experimental designs (e.g., case study). Opinion pieces, commentaries, methodological papers, book chapters, books, dissertations, conference abstracts, protocols, and reviews will not be included. |
Study criteria | The study is or includes a process evaluation of a health implementation study/project that has a primary purpose of translating research into action/practice.1 The health (research) information disseminated must therefore be evidence-based. Studies must have clearly defined knowledge translation strategies or interventions to implement the health innovation. A registered/licensed healthcare professional or allied healthcare professional in medicine (physician, dentist), nursing, rehabilitation medicine (physiotherapy, occupational therapy, speech-language pathology), dietetics, or pharmacy must either deliver or receive the intervention (sensu Scott et al. 2011). A trainee healthcare professional (not yet licensed/registered) either delivering or receiving the intervention will be excluded if: a. The intervention is mandatory curricula for finishing their degree/gaining licensing. b. The intervention has no licensed healthcare professional involved. Process evaluations may be separate (stand-alone) or integrated (embedded) and must evaluate the knowledge translation strategies or interventions used to implement the evidence-based innovation (the process of implementation). |
Outcome(s) | The process evaluation must be distinct from the primary outcomes of the KT/research implementation component. Where the paper is only reporting the process evaluation, this will be considered a distinct outcome. |
1Health is defined according to the WHO (1946) conceptualization of a state of complete physical and mental well-being and not merely the absence of disease or infirmity, including prevention components and mental health but not "social health"
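For readers who want to see the arithmetic behind the screening agreement reported above, the following is a minimal sketch in Python (not the authors' code; the screening decisions are hypothetical) of how simple percent agreement between two screeners can be computed.

```python
# Hypothetical screening decisions for ten abstracts (1 = include, 0 = exclude);
# illustrative only, not data from this review.
reviewer_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
reviewer_b = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

# Percent agreement: the proportion of abstracts on which both screeners agree.
agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)
print(f"Percent agreement: {agreement:.0%}")  # -> 90% for this toy sample
```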
Quality assessment
The methodological quality of all included studies was assessed using the Mixed Methods Appraisal Tool (MMAT) [26, 27] for quantitative, qualitative, and mixed methods research designs. The tool results in a methodological rating of 0, 25, 50, 75, or 100 (with 100 being the highest quality) for each study based on the evaluation of study selection bias, study design, data collection methods, sample size, intervention integrity, and analysis. We adapted the MMAT for multi-method studies (studies where more than one research approach was utilized, but the data were not integrated) by assessing the methods in the study individually and then choosing the lowest quality rating assigned. For studies where the process evaluation was integrated into the study design, the quality of the entire study was assessed.
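As a concrete illustration of the adaptation just described, here is a minimal sketch (assuming each per-method rating is already on the MMAT scale; this is an illustration, not the authors' tooling).

```python
# Sketch of the MMAT adaptation for multi-method studies described above:
# rate each method in the study individually, then assign the lowest rating.
def adapted_mmat_score(component_ratings):
    """Return the overall MMAT rating (0/25/50/75/100) for a multi-method study."""
    valid = {0, 25, 50, 75, 100}
    if not component_ratings or any(r not in valid for r in component_ratings):
        raise ValueError("ratings must be drawn from {0, 25, 50, 75, 100}")
    return min(component_ratings)  # choose the lowest quality rating assigned

# Example: qualitative component rated 75, quantitative descriptive component
# rated 50 -> the study is assigned 50 overall.
print(adapted_mmat_score([75, 50]))  # -> 50
```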
Data extraction, analysis, and synthesis
Study data were extracted using standardized Excel forms. Only data reported in included studies were extracted. Variables extracted included the following: (1) study design, (2) process evaluation type (integrated vs. separate), (3) process evaluation terms used, (4) timing of data collection (e.g., pre- and post-implementation of intervention), (5) KT intervention type, (6) KT intervention recipient, (7) target behavior, and (8) theory. Studies were grouped and synthesized according to each of the above variables. Evidence tables were created to summarize and describe the studies included in this review.
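To make the extraction form concrete, here is one hypothetical rendering of the eight variables as a structured record; the authors used standardized Excel forms, so the field names and example values below are assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical schema mirroring the eight extraction variables listed above;
# field names and the example values are illustrative assumptions.
@dataclass
class ExtractionRecord:
    study_design: str          # (1) e.g., "qualitative", "mixed methods"
    pe_type: str               # (2) "integrated" or "separate"
    pe_terms: list             # (3) terms used to describe the process evaluation
    timing: str                # (4) e.g., "pre- and post-implementation"
    kt_intervention_type: str  # (5) EPOC classification, e.g., "professional"
    kt_recipient: str          # (6) e.g., "HCP", "HCP and patients"
    target_behavior: str       # (7) e.g., "general management of a problem"
    theories: list             # (8) theories/frameworks guiding the evaluation

record = ExtractionRecord(
    study_design="qualitative",
    pe_type="separate",
    pe_terms=["barriers and facilitators"],
    timing="post-intervention",
    kt_intervention_type="professional",
    kt_recipient="HCP",
    target_behavior="general management of a problem",
    theories=["Normalization Process Theory"],
)
print(record.pe_type)  # -> "separate"
```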
Theoretical guidance
We extracted and analyzed data on any theoretical guidance that was identified and discussed for the process evaluation phase of the included studies. For the purpose of our systematic review, included studies were considered theoretically informed if the process evaluation used theory to (a) assist in the identification of appropriate outcomes, measures, and variables; (b) guide the evaluation of the KT process; (c) identify potential predictors or mediators; or (d) serve as a framework for data analysis.
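One hypothetical way to encode this decision rule (assuming, as the wording suggests, that any one of roles (a)–(d) suffices; this is our reading, not the authors' code):

```python
# Flag a study as theoretically informed if theory played any of roles (a)-(d)
# in its process evaluation; the role labels are illustrative shorthand.
THEORY_ROLES = {
    "outcomes",    # (a) identified appropriate outcomes, measures, and variables
    "evaluation",  # (b) guided the evaluation of the KT process
    "predictors",  # (c) identified potential predictors or mediators
    "analysis",    # (d) served as a framework for data analysis
}

def theoretically_informed(roles_played):
    """roles_played: set of role labels drawn from THEORY_ROLES."""
    return bool(roles_played & THEORY_ROLES)

print(theoretically_informed({"analysis"}))  # -> True
print(theoretically_informed(set()))         # -> False
```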
Results
Study design
Of the 20,968 articles screened, 226 full-text articles were included in our review (Fig. 1). See Additional file 3 for a full citation list of included studies.
Among these included articles, the following research designs were used: qualitative (n = 85, 37.6%), multi-methods (n = 55, 24.3%), quantitative descriptive (n = 44, 19.5%), mixed methods (n = 25, 11.1%), quantitative RCT (n = 14, 6.2%), and quantitative non-randomized (n = 3, 1.3%). See Table 2.
Table 2
Study design | Number of studies (%) | MMAT score distribution | | | |
---|---|---|---|---|---|---
 | | 0 | 25 | 50 | 75 | 100
Mixed methods | 25 (11.1) | 2 | 1 | 8 | 11 | 3
Multi-methods | 55 (24.3) | 5 | 15 | 21 | 10 | 4
Qualitative | 85 (37.6) | – | 1 | 11 | 42 | 31
Quantitative descriptive | 44 (19.5) | – | 8 | 11 | 14 | 11
Quantitative non-randomized | 3 (1.3) | – | 1 | – | 1 | 1
Quantitative RCT | 14 (6.2) | – | 3 | – | 4 | 7
RCT randomized controlled trial
Process evaluation type and terms
A total of 136 (60.2%) of the included studies were separate (stand-alone) process evaluations, while the process evaluations of the remaining studies (n = 90, 39.8%) were integrated into the KT intervention evaluation. Process evaluation research designs included the following: qualitative (n = 98, 43.4%), multi-methods (n = 56, 24.8%), quantitative descriptive (n = 51, 22.6%), and mixed methods (n = 21, 9.3%). See Table 3.
Table 3
Process evaluation design | Number of studies (%)
---|---
Mixed methods | 21 (9.3)
Multi-methods | 56 (24.8)
Qualitative | 98 (43.4)
Quantitative descriptive | 51 (22.6)
The way in which each of the included studies described the purpose and focus of their process evaluation was synthesized and categorized thematically. Barriers and/or facilitators to implementation was the most widely reported term used to describe the purpose and focus of the process evaluation (Table 4).
Table 4
Process evaluation terms* | Number of studies
---|---
Acceptability | 46
Adherence and fidelity | 65
Attitudes | 17
Barriers and facilitators | 113
Barriers only | 43
Contextual factors | 25
Experiences and perceptions | 87
Facilitators only | 7
Feasibility | 39
Feedback | 16
Satisfaction | 30
Sustainability and effectiveness | 31
*Some studies used multiple terms to describe the process evaluation and its focus
Methods and timing of data collection
Process evaluations had widespread variations in the methods of data collection, with individual interviews (n = 123) and surveys or questionnaires (n = 100) being the predominant methods (Table 5).
Table 5
Data collection methods* | Number of studies
---|---
Qualitative methods |
Individual interviews | 123
Group interviews | 15
Focus groups | 51
Open-ended survey or questionnaires | 14
Other | 35
Quantitative methods |
Survey or questionnaire | 100
Record review | 14
Other | 37
*Some studies had more than one method of data collection
The majority of process evaluations collected data post-intervention (n = 104, 46.0%). The remaining studies collected data pre- and post-intervention (n = 40, 17.7%); during and post-intervention (n = 29, 12.8%); during intervention (n = 25, 11.1%); pre-, during, and post-intervention (n = 18, 7.9%); pre- and during intervention (n = 5, 2.2%); or pre-intervention (n = 3, 1.3%). In two studies (0.9%), the timing of data collection was unclear. See Table 6.
Table 6
Time of data collection | Number of studies (%) |
---|---|
Pre-intervention | 3 (1.3)
Pre- and during intervention | 5 (2.2)
Pre- and post-intervention | 40 (17.7)
Pre-, during, and post-intervention | 18 (7.9)
During and post-intervention | 29 (12.8)
During intervention | 25 (11.1)
Post-intervention | 104 (46.0)
Unclear | 2 (0.9)
Total | 226 (100) |
Intervention details (type, recipient, and target behavior)
Most of the studies (n = 154, 68.1%) identified healthcare professionals (HCPs) as the exclusive KT intervention recipient, while the remaining studies had combined intervention recipients including HCPs and others (n = 59, 26.1%) and HCPs and patients (n = 13, 5.8%). Utilizing the Cochrane Effective Practice and Organisation of Care (EPOC) intervention classification schema [28], 218 (96.5%) studies had professional type interventions, 5 (2.2%) studies had professional type and organizational type interventions, and 3 (1.3%) studies had professional type and financial type interventions. The most common KT intervention target behaviors were "General management of a problem" (n = 132), "Clinical prevention services" (n = 45), "Patient outcome" (n = 35), "Procedures" (n = 33), and "Patient education/communication" (n = 32). See Table 7.
Table 7
KT intervention type | Number of studies (%)
---|---
Professional | 218 (96.5)
Professional and organizational | 5 (2.2)
Professional and financial | 3 (1.3)
Total | 226 (100)
KT intervention recipient type |
HCP | 154 (68.1)
HCP and patients | 13 (5.8)
HCP and others | 59 (26.1)
Total | 226 (100)
Target behavior of KT intervention* |
General management of a problem | 132
Clinical prevention services | 45
Patient outcome | 35
Procedures | 33
Patient education/communication | 32
Prescribing | 20
Test ordering | 13
Diagnosis | 11
Referrals | 5
Record keeping | 2
Professional-patient communication | 1
Total | 226 |
*Some studies had multiple targeted behaviors
Theoretical guidance
Of the 226 studies, 38.1% (n = 86) were informed by theory (Tabular arrayeight). The most oftentimes reported theories were as follows: (a) Roger's Improvidence of Innovation Theory (n = 13), (b) Normalization Process Theory (n = x), (c) Promoting Action on Research Implementation in Health Services Framework (north = 9), (d) Theory of Planned Behavior (n = 9), (e) Plan-Do-Study-Act Framework (n = 7), and (f) the Consolidated Framework for Implementation Research (north = half-dozen).
Table 8
Applied theories* | Number of studies |
---|---|
Roger's theory/diffusion of innovation | 13
Normalization process theory | 10
Promoting Action on Research Implementation in Health Services framework | 9
Theory of planned behavior | 9
Plan-Do-Study-Act Framework | 7
Theoretical Domains Framework | 6
Consolidated Framework for Implementation Research | 5
Reach, Effectiveness, Adoption, Implementation, and Maintenance Framework | 4
Behavior Change Theory | 3
Carroll et al. Framework for Intervention Fidelity | 3
Grol and Wensing Theoretical Framework | 3
Hulscher et al. Process Evaluation Framework | 3
Medical Research Council Framework | 3
Braun and Clarke Thematic Analysis in Psychology | 2
Kirkpatrick and Kirkpatrick Training Program Evaluation Model | 2
Ottawa Model of Research Use | 2
Precede/Proceed Implementation Model | 2
Prochaska and DiClemente Stages of Change Model | 2 |
Other | 19 |
Total | 86 |
*Some studies had multiple theories guiding the process evaluation
Quality assessment
The distribution of MMAT scores varied with study design (Table 2). The lowest scoring study design was multi-method, with 74.5% (n = 41) of multi-method studies scoring 50 or lower. Overall, many of the studies (n = 88, 38.9%) had an MMAT score of 50 or lower, with 29 (12.8%) studies scoring 25 and 7 (3.1%) studies scoring 0. Eighty-one studies (35.8%) scored 75, and 57 studies (25.2%) scored 100 (high quality). See Table 9.
Table 9
MMAT score distribution | Number of studies (%) |
---|---|
0 | 7 (3.1)
25 | 29 (12.8)
50 | 52 (23.1)
75 | 81 (35.8) |
100 | 57 (25.2) |
Total | 226 (100) |
Discussion
Our findings provided many insights into the current practices of KT researchers conducting integrated or separate process evaluations, the focus of these process evaluations, the data collection considerations, and the poor methodological quality and lack of theoretical guidance informing these process evaluations.
The majority of included studies (60.2%) conducted a separate (stand-alone) rather than integrated process evaluation. As Moore and colleagues suggest, there are advantages and disadvantages to either (separate or integrated) approach [12]. Arguments for separate process evaluations focus on analyzing process data without knowledge of the outcome analysis, to prevent biasing interpretations of results. Arguments for integration include ensuring implementation data are integrated into the outcome analysis and using the process evaluation to identify intermediate outcome data and causal processes while informing the integration of new measures into outcome data collection. Our findings highlight that there is no clear preference for separate or integrated process evaluations. The decision for separation or integration of the process evaluation should be carefully considered by study teams to ensure it is the best option for their study objectives.
Our findings draw attention to a broad variety of terms and foci used within process evaluations. We identified a lack of clear and consistent concepts for process evaluations and their multifaceted components, as well as an absence of standard recommendations on how process evaluations should be developed and conducted. This finding is supported by a literature overview on process evaluations in public health published by Linnan and Steckler in 2002 [29]. We would encourage researchers to employ terms that are utilized by other researchers, to facilitate meaningful comparisons across studies in the future, and to be mindful of comprehensively including the key components of a process evaluation: context, implementation, and mechanisms of impact [12].
Our findings highlight two important aspects about process evaluation data collection in relation to the timing and type of data collected. In terms of data collection timing, almost half of the investigators collected their process evaluation data post-intervention (46%) without any pre-intervention or during-intervention data collection. Surprisingly, only 17.7% of the included studies collected data pre- and post-intervention, and only 18 studies collected data pre-, during, and post-intervention. Process evaluations can provide useful information about intervention delivery and whether the interventions were delivered as planned (fidelity), the intervention dose, as well as useful information about intervention reach and how the context shaped the implementation process. Our findings suggest a current propensity to collect data after intervention delivery (as compared to before and/or during). It is unclear if our findings are the result of a lack of forethought to use data collection pre- and during implementation, a lack of resources, or a reliance on data collection approaches post-intervention. This aside, based upon our findings, we recommend that KT researchers planning process evaluations consider data collection earlier in the implementation process to prevent challenges with retrospective data collection and to maximize the potential power of process evaluations. Consideration of the key components of process evaluations (context, implementation, and mechanisms of impact) is critically important to prevent inference-observation confusion arising from an exclusive reliance on outcome evaluations [12]. An intervention can have positive outcomes even when it was not delivered as intended, as other events or influences can be shaping a context [30]. Conversely, an intervention may have limited or no effects for a number of reasons that extend beyond the ineffectiveness of the intervention itself, including a weak research design or improper implementation of the intervention [31]. Implicitly, the process evaluation framework by Moore and colleagues suggests that process evaluation data ideally need to be collected before and throughout the implementation process in order to capture all aspects of implementation [12].
In terms of data collection type, just over half (54.4%) of the studies utilized qualitative interviews as one form of data collection. Reflecting on the key components of process evaluations (context, implementation, and mechanisms of impact), the frequency of qualitative data collection approaches is lower than anticipated. Qualitative approaches such as interviewing are ideal for uncovering rich and detailed aspects of the implementation context, nuanced participant perspectives on the implementation processes, and the potential mediators of implementation impact. When considering the key components of a process evaluation (context, implementation, and mechanisms of impact), by default, it is suggestive of multi-method work. Consequently, we urge researchers to consider integrating qualitative and quantitative data into their process evaluation study designs to richly capture various perspectives. In addition to individual interviews, surveys, participant observation, focus groups, and document analysis could be used.
A major finding from this systematic review is the lack of methodological rigor in many of the process evaluations. Almost 40% of the studies included in this review had an MMAT score of 50 or less, but the scores varied significantly in terms of study designs used by the investigators. Moreover, the frequency of low MMAT scores for multi-method and mixed method studies suggests a tendency for lower methodological quality, which could point to the challenging nature of these research designs [32] or a lack of reporting guidelines.
Our findings identified a lack of theoretical guidance employed and reported in the included process evaluation studies. It is important to note that the role of theory within evaluation is considered contentious by some [33, 34]; conversely, there are increasing calls for the use of theory in the literature. While there is this tension between using or not using theory in evaluations, there are many reported advantages to theory-driven evaluations [29, 33, 34], yet more than 60% of the included studies were not informed by theory. Current research evidence suggests that using theory can help to design studies that increase KT and enable better interpretation and replication of findings of implementation studies [35]. In alignment with Moore and colleagues, we encourage researchers to consider utilizing theory when designing process evaluations. There is no shortage of KT theories available. Recently, Strifler and colleagues identified 159 KT theories, models, and frameworks in the literature [36]. In the words of Moore and colleagues, who were citing the revised MRC guidance (2008), "an understanding of the causal assumptions underpinning the intervention and use of evaluation to understand how interventions work in practice are vital in building an evidence base that informs policy and practice" [9].
Limitations
As with all reviews, there is the possibility of incomplete retrieval of identified research; however, this review entailed a comprehensive search of published literature and rigorous review methods. Limitations include the eligibility restrictions (only published studies in the English language were included, for example), and data collection did not extend beyond data reported in included studies.
Conclusions
The current state of the quality of the evidence base of process evaluations in KT is weak. Policy makers and funding organizations should call for theory-based multi or mixed method designs with a complementary process evaluation component. Mixed method designs, with an integrated process evaluation component, would help to inform decision makers about effective process evaluation approaches, and research funding organizations could further promote theory-based designs to guide the development and conduct of implementation studies with a rigorous process evaluation component. Achieving this goal may require well-assembled implementation teams including clinical experts, as well as strong researchers with methodological expertise.
We recommend that future investigators employ rigorous theory-guided multi or mixed method approaches to evaluate the processes of implementation of KT interventions. Our findings highlighted that to date, qualitative study designs in the form of separate (stand-alone) process evaluations are the most often reported approaches. The predominant data collection method of using qualitative interviews helps to better understand process evaluations and to answer questions about why the implementation processes work or not, but does not provide an answer about the effectiveness of the implementation processes used. In light of the work of Moore and colleagues [12], we advocate that future process evaluation investigators should use both qualitative and quantitative methods (mixed methods) with an integrated process evaluation component to evaluate implementation processes in KT research.
We identified the timing of data collection as another methodological weakness in this systematic review. It remains unclear why almost half of the included process evaluation studies collected data only post-implementation. To provide high-certainty evidence for process evaluations, we advocate for the collection of pre-, during, and post-implementation measures and the use of statistical uncertainty measures (e.g., standard deviation, standard error, p values, and confidence intervals). This would permit a rigorous assessment of the implementation processes and sound recommendations supported by statistical measures. The timing of pre-evaluations also helps to address issues before implementation occurs. There is widespread acceptance that the generalizability of quantitative trials of KT interventions would be significantly enhanced through complementary process evaluations. Most data collection occurred post-intervention, undermining the ability to evaluate the process of implementation.
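As a minimal sketch of the uncertainty measures recommended above, the following computes a standard deviation, standard error, and a normal-approximation 95% confidence interval for a hypothetical post-implementation fidelity measure (the scores are invented for illustration).

```python
import statistics

# Hypothetical post-implementation fidelity scores from eight sites.
post_scores = [62, 70, 68, 75, 71, 66, 73, 69]

n = len(post_scores)
mean = statistics.mean(post_scores)
sd = statistics.stdev(post_scores)                    # sample standard deviation
se = sd / n ** 0.5                                    # standard error of the mean
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se  # approximate 95% CI

print(f"mean={mean:.1f}, SD={sd:.2f}, SE={se:.2f}, "
      f"95% CI=({ci_low:.1f}, {ci_high:.1f})")
```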
Strong science and methodological guidance is needed to underpin and guide the design and execution of process evaluations in KT science. A theory-based approach to inform process evaluations of KT interventions would allow investigators to reach conclusions, not just about the processes by which interventions were implemented and the outcomes they have generated, but also about the reliability of the causal assumptions that link intervention processes and outcomes. Future research is needed that could provide state-of-the-art recommendations on how to design, conduct, and report rigorous process evaluations as part of a theory-based mixed methods evaluation of KT projects. Intervention theory should be used to inform the design of implementation studies to investigate the success or failure of the strategies used. This could lead to more generalizable findings to inform researchers and knowledge users about effective implementation strategies.
Supplementary information
Acknowledgements
We would like to thank CIHR for providing the funding for the systematic review. We would like to thank our Knowledge User Advisory Panel members for providing guidance and feedback, including Dr. Thomas Rotter, Brenda VandenBeld, Lisa Halma, Christine Jensen-Ross, Gayle Knapik, and Klaas VandenBeld. We would lastly like to acknowledge the contributions of Xuan Wu in data analysis.
Abbreviations
EPOC | Effective Practice and Organisation of Care
HCP | Healthcare professional
KT | Knowledge translation |
MMAT | Mixed Methods Appraisal Tool |
RCT | Randomized controlled trial |
Authors' contributions
SDS conceptualized and designed the study and secured the study funding from CIHR. She led all aspects of the study process. TC conducted the search. TR, KHBM, RF, TP, and HMB contributed to the data collection. KHBM, RF, TP, and HMB contributed to the data analysis. TR and LH contributed to the data interpretation. All authors contributed to the manuscript drafts and reviewed the final manuscript. All authors read and approved the final manuscript.
Authors' information
SDS holds a Canada Research Chair for Knowledge Translation in Child Health. LH holds a Canada Research Chair in Knowledge Synthesis and Translation.
Funding
Canadian Institutes of Health Research (CIHR) Knowledge Synthesis Grant #305365.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Shannon D. Scott, Email: shannon.scott@ualberta.ca.
Thomas Rotter, Email: thomas.rotter@queensu.ca.
Rachel Flynn, Email: rmflynn@ualberta.ca.
Hannah K. Brooks, Email: hannah.brooks@ualberta.ca.
Tabatha Plesuk, Email: plesuk@ualberta.ca.
Katherine H. Bannar-Martin, Email: kbannarm@gmail.com.
Thane Chambers, Email: thane@ualberta.ca.
Lisa Hartling, Email: lisa.hartling@ualberta.ca.
Supplementary information
Supplementary information accompanies this paper at 10.1186/s13643-019-1161-y.
References
1. Bhattacharyya OK, Estey EA, Zwarenstein M. Methodologies to evaluate the effectiveness of knowledge translation interventions: a primer for researchers and health care managers. J Clin Epidemiol. 2011;64(1):32–40. doi: 10.1016/j.jclinepi.2010.02.022. [PubMed] [CrossRef] [Google Scholar]
3. Kotaska A. Inappropriate use of randomised trials to evaluate complex phenomena: case study of vaginal breech delivery. BMJ. 2004;329(7473):1039–1042. doi: 10.1136/bmj.329.7473.1039. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
4. Seers K. Evaluating complex interventions. Worldviews Evid-Based Nurs. 2007;4(2):67–68. doi: 10.1111/j.1741-6787.2007.00083.x. [PubMed] [CrossRef] [Google Scholar]
5. Hawe P, Shiell A, Riley T. Complex interventions: how "out of control" can a randomised controlled trial be? BMJ. 2004;328(7455):1561–1563. doi: 10.1136/bmj.328.7455.1561. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
6. Wolff N. Using randomized controlled trials to evaluate socially complex services: problems, challenges and recommendations. J Ment Health Policy Econ. 2000;3(2):97–109. doi: 10.1002/1099-176X(200006)3:2<97::AID-MHP77>3.0.CO;2-S. [PubMed] [CrossRef] [Google Scholar]
7. Patton MQ. Developmental evaluation: applying complexity concepts to enhance innovation and use. New York: Guilford Press; 2010.
8. Campbell M, Fitzpatrick R, Haines A, Kinmonth AL, Sandercock P, Spiegelhalter D, et al. Framework for design and evaluation of complex interventions to improve health. BMJ. 2000;321(7262):694–696. doi: 10.1136/bmj.321.7262.694. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
9. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655. doi: 10.1136/bmj.a1655. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
10. Oakley A, Strange V, Bonell C, Allen E, Stephenson J. Process evaluation in randomised controlled trials of complex interventions. BMJ. 2006;332(7538):413–416. doi: 10.1136/bmj.332.7538.413. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
11. Anderson P, Benford M, Harris N, Karavali M, Piercy J. Real-world physician and patient behaviour across countries: disease-specific programmes – a means to understand. Curr Med Res Opin. 2008;24(11):3063–3072. doi: 10.1185/03007990802457040. [PubMed] [CrossRef] [Google Scholar]
12. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258. doi: 10.1136/bmj.h1258. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
13. Wallin L. Knowledge translation and implementation research in nursing. Int J Nurs Stud. 2009;46(4):576–587. doi: 10.1016/j.ijnurstu.2008.05.006. [PubMed] [CrossRef] [Google Scholar]
14. Hasson H. Systematic evaluation of implementation fidelity of complex interventions in health and social care. Implement Sci. 2010;5(1):67. doi: 10.1186/1748-5908-5-67. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
15. Hasson H, Blomberg S, Dunér A. Fidelity and moderating factors in complex interventions: a case study of a continuum of care program for frail elderly people in health and social care. Implement Sci. 2012;7(1):23. doi: 10.1186/1748-5908-7-23. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
16. Grimshaw JM, Zwarenstein M, Tetroe JM, Godin G, Graham ID, Lemyre L, et al. Looking inside the black box: a theory-based process evaluation alongside a randomised controlled trial of printed educational materials (the Ontario printed educational message, OPEM) to improve referral and prescribing practices in primary care in Ontario, Canada. Implement Sci. 2007;2(1):38. doi: 10.1186/1748-5908-2-38. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
17. Ramsay CR, Thomas RE, Croal BL, Grimshaw JM, Eccles MP. Using the theory of planned behaviour as a process evaluation tool in randomised trials of knowledge translation strategies: a case study from UK primary care. Implement Sci. 2010;5(1):71. doi: 10.1186/1748-5908-5-71. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
18. Herr K, Titler M, Fine PG, Sanders S, Cavanaugh JE, Swegle J, et al. The effect of a translating research into practice (TRIP)-cancer intervention on cancer pain management in older adults in hospice. Pain Med. 2012;13(8):1004–1017. doi: 10.1111/j.1526-4637.2012.01405.x. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
19. Fretheim A, Aaserud M, Oxman AD. Rational prescribing in primary care (RaPP): economic evaluation of an intervention to improve professional practice. PLoS Med. 2006;3(6):e216. doi: 10.1371/journal.pmed.0030216. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
21. Skivington K, Matthews L, Craig P, Simpson S, Moore L. Developing and evaluating complex interventions: updating Medical Research Council guidance to take account of new methodological and theoretical approaches. Lancet. 2018;392:S2. doi: 10.1016/S0140-6736(18)32865-4. [CrossRef] [Google Scholar]
23. Lewin S, Glenton C, Oxman AD. Use of qualitative methods alongside randomised controlled trials of complex healthcare interventions: methodological study. BMJ. 2009;339:b3496. doi: 10.1136/bmj.b3496. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
24. Scott SD, Rotter T, Hartling L, Chambers T, Bannar-Martin KH. A protocol for a systematic review of the use of process evaluations in knowledge translation research. Syst Rev. 2014;3(1):149. doi: 10.1186/2046-4053-3-149. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
25. Franx G, Oud M, de Lange J, Wensing M, Grol R. Implementing a stepped-care approach in primary care: results of a qualitative study. 2012. [PMC free article] [PubMed] [Google Scholar]
26. Pluye P, Robert E, Cargo M, Bartlett G, O'Cathain A, Griffiths F, Boardman F, Gagnon MP, Rousseau MC. Proposal: a mixed methods appraisal tool for systematic mixed studies reviews. 2011. [Google Scholar]
27. Pace R, Pluye P, Bartlett G, Macaulay AC, Salsberg J, Jagosh J, et al. Testing the reliability and efficiency of the pilot Mixed Methods Appraisal Tool (MMAT) for systematic mixed studies review. Int J Nurs Stud. 2012;49(1):47–53. doi: 10.1016/j.ijnurstu.2011.07.002. [PubMed] [CrossRef] [Google Scholar]
29. Steckler A, Linnan L. Process evaluation for public health interventions and research. San Francisco: Jossey-Bass, A Wiley Imprint; 2002. [Google Scholar]
30. Moore GF, Raisanen L, Ud Din N, Murphy S, Moore L. Mixed-method process evaluation of the Welsh National Exercise Referral Scheme. Health Educ. 2013;113(6):476–501.
31. Steckler AB, Linnan L. Process evaluation for public health interventions and research. 1st ed. Jossey-Bass; 2002. [Google Scholar]
32. Weiss CH. Evaluation. 2nd ed. Upper Saddle River: Prentice Hall, Inc; 1998. [Google Scholar]
33. De Silva MJ, Breuer E, Lee L, Asher L, Chowdhary N, Lund C, et al. Theory of Change: a theory-driven approach to enhance the Medical Research Council's framework for complex interventions. Trials. 2014;15(1):267. doi: 10.1186/1745-6215-15-267. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
34. Fretheim A, Flottorp S, Oxman AD. It is a capital mistake to theorize before one has data: a response to Eccles' criticism of the OFF theory of research utilization. J Clin Epidemiol. 2005;58(2):119–120. doi: 10.1016/j.jclinepi.2004.10.003. [CrossRef] [Google Scholar]
35. Rycroft-Malone J. Theory and knowledge translation: setting some coordinates. Nurs Res. 2007;56(4 Suppl):S78–S85. doi: 10.1097/01.NNR.0000280631.48407.9b. [PubMed] [CrossRef] [Google Scholar]
36. Strifler L, Cardoso R, McGowan J, Cogo E, Nincic V, Khan PA, et al. Scoping review identifies significant number of knowledge translation theories, models, and frameworks with limited use. J Clin Epidemiol. 2018;100:92–102. doi: 10.1016/j.jclinepi.2018.04.008. [PubMed] [CrossRef] [Google Scholar]
Articles from Systematic Reviews are provided here courtesy of BioMed Central