Developing Evaluations That Are Used
Wellington, New Zealand | February 17, 2017
9540-145 Street Edmonton, Alberta, CA T5N 2W8 P: 780-451-8984 F: 780-447-4246 E: Mark@here2there.ca
A Little Context
Agenda: Six Conversations
• Getting Grounded
• Evaluation 101
• User-Oriented Design
• The Design Box
• Troika Consulting
• Emerging Issues
Getting Grounded (Conversation #1)
TRIADS: What questions are you bringing to today’s session?
The Challenge of Use: Opening Comments
The 1960s and 1970s
• 1969: “A dearth of examples of where evaluation informed policy.”
• 1970: “A general failure” to use evaluation.
• 1975: The influence of social science research and evaluation on program decisions has been, “with few exceptions, nil.”
• 1976: A celebrated educational researcher published an article, “Evaluation: Who Needs It? Who Cares?”
The 21st Century
• 2003: Quebec researchers found few government officials made regular use of social science research and evaluation.
• 2004: Two senior administrators and policy advisors wrote “Why Measure: Nonprofits use metrics to show that they are efficient, but what if donors don’t care?”
• 2005: The International Evaluation Gap Working Group found few rigorous evaluation studies of the thirty-four billion dollars spent on foreign assistance that year.
• 2006: Researchers estimated that ten percent of evaluations helped shape the decisions of their intended users.
When decisions are made using evaluative judgments, evaluation results are combined with other considerations to support decision making. Politics, values, competing priorities, the state of knowledge about a problem, the scope of the problem, the history of the program, the availability of resources, public support, and managerial competence all come into play in program and policy decision processes. Evaluation findings, if used at all, are usually one piece of the decision-making pie, not the whole pie. Rhetoric about "data-based decision making" and "evidence-based practice" can give the impression that one simply looks at evaluation results and a straightforward decision follows. Erase that image from your mind. That is seldom, if ever, the case.
Five Types of Use (from GOOD to BAD)
• Instrumental Use: informs decisions
• Process Use: encourages evaluative thinking
• Conceptual Use: offers new insights
• Non-Use: findings are ignored
• Mis-Use: findings are selectively or improperly used
In pairs: share an example of each.
Evaluation 101 (Conversation #2)
Key Points
1. We are all capable of evaluative thinking, though formal evaluation is a living multi-disciplinary art & science.
2. There are no universal recipes for methods and indicators in evaluation: there are only steps, principles & standards, and emerging practices that help create them.
3. Though social innovators without experience, training and practice in evaluation are very unlikely to master evaluation design, as primary users of evaluation they must productively participate in all the key steps of an evaluation.
#1: We are all capable of evaluative thinking, though formal evaluation is a living multi-disciplinary art & science.
The Genesis of Evaluation In the beginning, God created the heaven and the earth. And God saw everything that he made, “Behold,” God said, “it is very good.” And the evening and the morning were the sixth day. And on the seventh day, God rested from all His work. His archangel came then unto Him asking, “God, how do you know that what you have created is “very good”? What are your criteria? On what data do you base your judgment? Just exactly what results were you expecting to attain? And, aren’t you a little close to the situation to make a fair and unbiased evaluation?” God thought about these questions all that day and His rest was greatly disturbed. On the eighth day God said, “Lucifer, go to hell.” Thus was evaluation born in a blaze of glory. From Halcolm’s The Real Story of Paradise Lost.
Blink Versus Think: “Blink” (aka Rapid Cognition) versus “Think” (aka Evaluative Thinking)
Some Important Parts of Evaluative Thinking

The Core Challenge
• How will we ‘measure’ change?
• How will we know if our efforts contributed to the change?
• How will we ‘judge’ the outcomes?

Key Tasks
1. Clarifying intervention & unit of analysis
2. Developing purpose & questions
3. Developing a design
4. Gathering and analyzing data
5. Interpreting data
6. Drawing conclusions
7. Making recommendations
8. Informing decisions
A Brief History of Evaluation
• First 12,000 years: trial & error (e.g. brewing beer & making bread)
• Renaissance: empiricism (e.g. sun around earth or vice versa)
• Enlightenment: experimentalism (e.g. scurvy)
• Industrial era: measurement (e.g. factories) & action learning (e.g. gangs)
• Early 20th c.: program evaluation (e.g. agriculture, education, propaganda)
• 1960s: centralized experimentalism 2.0 (e.g. War on Poverty)
• 1970s: diversity of approaches & paradigms; low levels of use
• 1980s: shift to accountability purpose, downloading of responsibility
• 1990s: bottom-up efforts (e.g. logic models), business ideas (e.g. social return on investment) and new managerialism
• 2000s: renewed emphasis on complexity, social innovation, learning and evaluation culture
• 2010s: sophisticated field, culture wars, unclear expectations
Evaluation Today

The Evaluation Field
• An evolving, diverse and ‘contested’ multi-disciplinary practice.
• Focus on (at least) six purposes: e.g. monitoring, developmental, accountability.
• Professionalization of the field (aka credentialization).
• Increasingly agreed upon principles, standards and competencies, with variations.

Social Innovators
• Desire (and pressure) to use evaluation for multiple purposes.
• Limited in-house expertise and/or resources for evaluation.
• Funders and social innovators unclear about what is reasonable to expect re: role and quality of evaluation.
• Low level of evaluation use.
Evaluation Purposes
There are steps!
• Utilization-Focused Evaluation (book)
• Better Evaluation (website)
• American Evaluation Society
Evaluation Standards

Canadian
1. Utility – does the evaluation generate useful data for the users?
2. Feasibility – is the evaluation design efficient and doable within the financial, technical and timing constraints?
3. Propriety – is the evaluation done in a way that is proper, fair, legal, right and just?
4. Accuracy – to what extent are the evaluation representations, propositions, and findings, especially those that support interpretations and judgments about quality, truthful and dependable?
5. Accountability – to what extent are evaluators and evaluation users focusing on documenting and improving the evaluation processes?

Aotearoa New Zealand
• Respectful, meaningful relationships
• Ethic of care
• Responsive methodologies and trustworthy results
• Competence and usefulness
Evaluator Competencies
#2: There are no universal recipes for methods and indicators in evaluation: there are only steps, principles & standards, and emerging practices that help create them.
The Challenge: Forget standardized recipes. Think “Chopped Canada.”
Examples
• Toronto Region Immigrant Employment Council
• Native Counseling Services of Alberta
• The Energy Futures Lab & Circular Economy Lab
The Design Box
• User Purpose, Questions, Preferences
• Time & Resource Constraints
• Evaluation Expertise & Experience
• Evaluation Standards
From One of the World’s Best
“[Evaluation] isn’t some particular method of recipe-like steps to follow. It doesn’t offer a template of standard questions. It’s a mindset of inquiry into how to bring data to bear on what’s unfolding so as to guide and develop the unfolding. What that means and the timing of the inquiry will depend on the situation, context, people involved, and the fundamental principle of doing what makes sense for program development.”
Michael Quinn Patton, Developmental Evaluation, 2010, pp. 75-76.
#3: Social innovators without experience, training and practice in evaluation are very unlikely to master evaluation design, but as primary users of evaluation, they must productively participate in all the phases of an evaluation.
An Evaluative Culture
An organization with a strong evaluative culture:

Engages in self-reflection and self-examination:
• deliberately seeks evidence on what it is achieving, such as through monitoring and evaluation,
• uses results information to challenge and support what it is doing, and
• values candor, challenge and genuine dialogue;

Embraces evidence-based learning:
• makes time to learn in a structured fashion,
• learns from mistakes and weak performance, and
• encourages knowledge sharing;

Encourages experimentation and change:
• supports deliberate risk taking, and
• seeks out new ways of doing business.