Evaluation Development and Implementation
Through its network of implementation science (IS) experts and organizations, the ISC works collaboratively to evaluate and improve global health policy and programs through research, policy analysis, and evaluation.
Evaluation studies are essential for understanding the effectiveness, efficiency, and impact of programs and policies. Increasingly, evaluations are used to generate knowledge about ongoing implementation and to understand how and why programs are functioning, making them a valuable tool in implementation science. Supporting this implementation learning, and shifting to systems of continuous learning by engaging in-country, regional, and global learning partners, are both the objective and the challenge of an implementation science approach to program evaluation. Driven by technical experts, and together with country and implementing partners, we study the implementation of global health programs and policies using collaborative, country-driven approaches. This work is crucial for improving health and development in low- and middle-income countries.
Learn more about Evaluation Development and Implementation:
IS Approach to Evaluations
If there is a gap in existing data, or a need to understand how to improve practices, experts in our network support research studies, policy analyses, and evaluations to improve the implementation of global health programs. From systematic literature reviews to economic and process evaluations, we focus on data liberation and the sustainability of health service delivery.
The use of evidence to improve global health delivery and diplomacy requires the active engagement of collaborators representing an extensive array of skill sets. Current partners include governments, regional health bodies, policy advocacy groups, civil society, research organizations, and academic institutions.
By engaging the right mix of partners at the right time with the right evaluation tools and methods, we can determine the most relevant priorities and questions while minimizing the “stalling” of evidence in the research-to-use pathway. Whenever possible, we encourage the co-creation of evaluation designs and implementation.
The HEARD Project emphasizes rapid and robust scoping of proposed evaluations, which quickly refines the evaluation's objectives and identifies and frames opportunities for the best approach and value. Scoping is participatory, involving the client/requestor (often a Mission), implementing partners, and ideally additional in-country and global/regional learning and leverage partners, to understand learning needs from multiple perspectives and how the evaluation can address them.
Independent Evaluation Team Leads
Evaluations undertaken by the HEARD Project are generally conducted by independent Evaluation Team Leads (i.e., not employed by or representing the HEARD Project implementing partners or USAID) to ensure the independence and objectivity of the evaluations. Team Leads are experienced, senior-level individuals with expertise in the relevant subject matter and/or evaluation methods. They are responsible for overseeing the design and implementation of the evaluation; maintaining regular communication with the HEARD Management Team and USAID focal points; and ensuring the quality and timely completion of the evaluation and its reporting.
Evaluations undertaken by the HEARD Project draw on HEARD Partners to form the evaluation teams, supported by the HEARD Core Team:
- HEARD Global Technical Anchors lead the evaluation design and methodology, participating in scoping activities as needed to understand the request and best support the design. In collaboration with the Evaluation Team Lead, the Design Lead is responsible for leading the design of the evaluation, developing the protocol and tools, and managing the data analysis process.
- The Evaluation Implementation Team is composed of individuals from the HEARD Project Anchor partners, the HEARD Core Team, and HEARD Sub-Regional Anchors for evaluations in their respective regions. Other HEARD Technical Resource Partners can be brought in through a competitive process, as needed. The size and composition of the team will be determined for each evaluation.
A Strategy Reference Group (SRG) is established for evaluations that would benefit from the consideration of a broader expert group. The SRG reviews the evaluation findings and takes a consensus-building approach to developing recommendations in areas of interest to the client/requestor.