Online workshops, June 17

Evaluation in a changing world

June 19 to 21, 2023; workshops on June 17 and 18; international sharing on June 22

All workshops on Saturday, June 17, are offered online. Anyone can register for them, including those not attending the conference. Attendance is capped at 20 participants. Workshops will be delivered in the language of their title and abstract.

Half-day workshops, the morning of June 17

Salah Eddine Bouyousfi: The emergence of artificial intelligence and big data: a threat to the supremacy of social scientists or an opportunity for evaluators to depict a social transformative change? INTERMEDIATE LEVEL

Prerequisites: No specific prerequisite knowledge.

This presentation addresses the emerging use of big data and artificial intelligence (AI) in the evaluation field and is aligned with sub-theme 1. The proliferation of big data and AI has significantly transformed the landscape of evaluation research and practice. The nature, utility, and ownership of big data are inherently linked to methodological concerns and ethical considerations for evaluation research. Despite its richness, big data is still underused in evaluation research. Given the potential of big data and AI, and the growing demand for more real-time data, this presentation explores the challenges of using big data and AI to enhance evaluation research and practice. It sheds light on issues that arise when evaluation draws on big data and AI, such as representativeness and sampling biases, causality, unclear data access processes, the operationalization of variables, confounding variables that are not necessarily taken into account, and ethical challenges.

I am a researcher and evaluator and an associate professor at the Faculté des Sciences Juridiques, Economiques et Sociales, Mohamed V University in Rabat, Morocco. I am an engineer and worked for twenty years in different sectors. During my doctoral studies, I developed a passion for evaluation research. I focus on enhancing evaluation use, with particular emphasis on realist evaluation. I have conducted five evaluations funded by national and international organizations.

Coumba Toure: Evaluating funders and grantmakers fostering learning BEGINNER LEVEL

Prerequisite: A basic understanding of decolonization trends in the development sector would be helpful, as would some experience in evaluating funders. Because we are working for change and for a rebalancing of power, people who are interested in preserving existing evaluation processes and methodologies and maintaining the status quo might feel uncomfortable in the workshop.

The world is changing, and so should the philanthropy sector and its learning and evaluation philosophies and practices, to better serve communities struggling for change toward social justice. For the longest time, the burden of evaluation, learning, and capacity building has been placed on the recipients of funds, particularly small organizations on the African continent. Those who receive grants are the subjects of evaluations. This presentation revisits a five-year opportunity to evaluate the Hewlett Foundation's international reproductive health strategy to support local advocacy in sub-Saharan Africa, launched in 2016. We will share the findings and discuss how to evaluate and foster learning among funders.

Born and raised between Mali and Senegal, Coumba Toure is a writer of children's books and a storyteller. She publishes children's stories and organizes art events through the Kuumbati.com production house and through Association Falia, a collective of educators and artists. She also designs or evaluates programs focusing on women and children. Coumba is the chair of the board of the TrustAfrica foundation and the Baobab center.

Jean Serge Quesnel: A global panorama of evaluation in a changing world INTERMEDIATE LEVEL

Prerequisite: A general knowledge of the environment for evaluating public development interventions.

Over the past thirty years, the evaluation function has grown at an exponential rate. Evaluation takes place in highly diverse socio-political contexts. The cultural factor influences how effectively evaluation is used, since language is the expression of social paradigms rooted in historical developments. The evaluation universe comprises several levels of evaluation networks, from global to local. Each level has its own characteristics. Nevertheless, there are many influences between these levels, both vertically and horizontally. Examining these diverse situations reveals mega-trends toward harmonizing the professionalization of evaluation, while respecting diversity.

Associate professor at ENAP. Former Director of Evaluation at CIDA, the IDB, and UNICEF. Former Chair of the OECD Expert Group on Evaluation, Chair of the Evaluation Cooperation Group of the international financial institutions, Chair of the United Nations Evaluation Norms Group, and founding Coordinator of the Réseau francophone d'évaluation. Participated in the creation of IDEAS and the OICE. Has taught at the United Nations College. Is president of the Société québécoise d'évaluation de programme.

Brian Hoessler: Creating FUSE-ion: A Practical Tutorial For Evaluators Working In Urban Contexts

Bustling with potential for change, urban contexts - shaped by density, diversity, and interlocking systems - exacerbate ongoing social, economic, and environmental challenges while nourishing opportunities and encouraging experimentation towards thriving. Evaluation can play a key role in supporting positive change in cities by evolving our practice to integrate approaches and methods that account for the urban context. This expert tutorial, led by an evaluator with extensive knowledge and applied experience at the intersection of evaluation and urban community practice, will offer key principles from FUSE: A Framework for Urban Systems Evaluation for use in this context alongside insights on methods, the evaluator role, equity considerations, and other implications for practice.

Brian Hoessler is the Founder and Principal Consultant of Strong Roots Consulting, a Saskatoon-based firm that catalyzes learning and growth through strategic planning, capacity building, and program evaluation. As a consultant, Brian has supported over 40 non-profit organizations, government agencies, and multi-stakeholder initiatives, with a focus on understanding community contexts, identifying common purpose, and centring participant and community voices.

Half-day workshops, the afternoon of June 17

Mike Trevisan, Tamara Walser: The Transformative Power and Potential of Evaluability Assessment INTERMEDIATE LEVEL

Prerequisite: Participants should have a foundational knowledge of evaluation, including evaluation standards. In addition, exposure to current concepts in evaluation, including program complexity, evaluation capacity building, and culturally responsive evaluation, is also expected.

The purpose of this workshop is to explore and engage the transformative use and potential of evaluability assessment (EA). EA was developed in the 1970s as a pre-evaluation activity for determining whether a program was ready for outcome evaluation, with management as the primary intended users. EA theory and practice have evolved to address the complex needs of programs and their communities. No longer tied exclusively to management decisions about outcome evaluation, EA can be used as a collaborative evaluation approach at any point in a program's lifecycle. Transforming our understanding and application of EA unlocks its potential to engage program and organization communities in evaluation, address program complexity, support culturally responsive and equity-focused evaluation, and build evaluation capacity. Using our four-component EA model, and through examples, case scenarios, group activities, and discussion, workshop participants will consider and apply a transformative EA approach.

Mike Trevisan has been conducting educational research and evaluation for 35 years. He is a professor of Educational Psychology at Washington State University. Prior to his current position, Mike worked as an evaluator for a for-profit organization. Mike has conducted numerous evaluations and is widely published in the field of evaluation.

Tamara Walser has worked as an evaluator for more than 25 years. She is a professor at the University of North Carolina Wilmington where she coordinates the Evaluation and Organizational Learning M.S. and Evaluation Certificate programs. Tamara previously worked in non-profit and for-profit organizations as a program evaluator.

Jean Serge Quesnel: Systemic diversity and complementarity of approaches to evaluating public interventions INTERMEDIATE LEVEL

Prerequisite: A general knowledge of the environment for evaluating public interventions.

All public interventions are carried out in interrelated and multi-dimensional ways. To discern their diversity and complementarity, we will use a stratification of six levels, each of which requires the application of evaluative concepts specific and appropriate to the objects of evaluation. The first level is that of administrative activities carried out through a procedure with a precise target. The second level is that of projects, which involve several stakeholders interacting according to a set scenario. The third level is that of multidimensional programs carried out in partnership. The fourth level concerns the institutional effectiveness of the actors involved. The fifth level is that of interactive intervention strategies, and the sixth, that of the impacts of public policies. During the workshop, we will examine the tools needed at each level and the links between levels for better use.

Associate professor at ENAP. Former Director of Evaluation at CIDA, the IDB, and UNICEF. Former Chair of the OECD Expert Group on Evaluation, founding Chair of the Evaluation Cooperation Group of the international financial institutions, Chair of the United Nations Evaluation Norms Group, and founding Coordinator of the Réseau francophone d'évaluation. Has taught at the United Nations System Staff College. Has served as a federal and international civil servant.

Full-day workshops, June 17

Thomas Archibald: Evaluative Thinking to Enhance Evaluation Capacity and Quality BEGINNER LEVEL

Prerequisite: None. Some experience with evaluation practice and theory is helpful.

How does one 'think like an evaluator'? How can program implementers learn to think like evaluators? Recent years have witnessed an increased use of the term 'evaluative thinking,' yet this particular way of thinking, reflecting, and reasoning is not always well understood. Patton warns that as attention to evaluative thinking has increased, we face the danger that the term 'will become vacuous through sheer repetition and lip service' (2010, p. 162). This workshop can help avoid that pitfall. Drawing from our research and practice in evaluation capacity building, in this workshop we use discussion and hands-on activities to address: (1) What evaluative thinking (ET) is and how it pertains to your context; (2) How to promote and strengthen ET among individuals and organizations with whom you work; and (3) How to use ET to identify assumptions, articulate program theory, and conduct evaluation with an emphasis on learning and adaptive management.

Thomas Archibald is the Executive Director of the Center for International Research, Education and Development at Virginia Tech, where he is also an affiliated faculty member in the Department of Agricultural, Leadership, and Community Education. He serves on the Board of Directors of the American Evaluation Association and is an Associate Editor of the journal Evaluation and Program Planning. Tom is passionate about unleashing the power of inquiry to support a more just and sustainable world.

Jérôme Leblanc: Training in outcome harvesting INTERMEDIATE LEVEL

Prerequisite: A general knowledge of the evaluation field is useful for understanding why this approach is innovative: it seeks to adapt to complexity, does not use indicators a priori, and does not directly measure the achievement of specific objectives, but can incorporate a wide range of existing or innovative data collection methods. Highly motivated beginners are nevertheless welcome.

Outcome harvesting (OH) is an innovative, participatory, qualitative method for evaluating outcomes, suited to complex contexts and to analyzing trends in the changes and transformations to which a development project or program has contributed. It does not aim to measure predefined indicators or SMART objectives; instead, OH proceeds in reverse, first collecting outcomes within a field of intervention and then establishing plausible contribution links back to the project or program being evaluated. It can be adapted in various ways and can include varied data collection methods. Created by Wilson-Grau and colleagues, its first manual was published in 2012. It is inspired mainly by outcome mapping and utilization-focused evaluation. It has been implemented in more than 140 countries by a variety of organizations: NGOs, governments, and foundations.

Jérôme Leblanc is Program Officer for evaluation, learning, and innovation at SUCO. He holds a master's degree in political science and a bachelor's degree in sociology from UQAM. He has 25 years of experience in the design, management, and evaluation of projects and programs, mainly in West Africa and Quebec. He has held positions at the Maison de l'innovation sociale, Avenir d'enfants, Fondation BDA, Uniterra, Pimiento, CUSO, research groups at UQAM, and elsewhere.