Anti-Racist #Eval (etc.) Work for White Folks Like Me

The recent murders of George Floyd, Breonna Taylor, and Ahmaud Arbery—which, while highly visible and in quick succession under especially horrendous circumstances, are just three in a long line of countless lynchings and killings of black and brown people by violent white supremacists (often from within the ranks of the police)—in juxtaposition with Amy Cooper (a white liberal) trying to weaponize the police to kill Christian Cooper, a black birdwatcher in New York’s Central Park, plus the fresh police brutality inflicted in the response to protests and demonstrations in cities all around the U.S., plus the disproportionate infection and death rate from COVID-19 among people of color, all has me reflecting on my whiteness, my white privilege, and whether my espoused values (diversity, inclusion, justice, etc.) are actually manifest and evident in the work I do in the world.

As someone involved in higher education (i.e., teaching and mentoring graduate students), evaluation, including evaluation capacity building (ECB) and research on evaluation (RoE), and so-called international development in Senegal, I’m in all sorts of spaces and places where on the surface I’m ‘doing good,’ to ‘make the world a better place,’ but the starkness of the suffering, injustice, and terror unleashed on black and brown people—among them students, colleagues, workshop participants, and community partners with whom I have the honor of interacting—each and every day in this country demands a response.

From my position of extreme privilege, I knew that ‘silence is violence.’ Prompted by an essay by Nylah Burton in The Independent (available here), I was reminded of Martin Luther King, Jr.’s scathing indictment of white moderate liberals in his ‘Letter from Birmingham Jail’:

“Lukewarm acceptance is much more bewildering than outright rejection.”

This reminded me of Dr. Ibram X. Kendi’s point:

“… there’s no such thing as being ‘not racist.’ We are either being racist or antiracist. And in order to be antiracist, we must, first and foremost, be willing to admit the times we are being racist.”

Which had me thinking: I am that white moderate whom MLK so rightly despised. Where’s my anti-racist work? I can espouse values of equity and inclusion, but where’s my praxis? And if I can ask this of myself, I can also ask my fellow white people: where is yours? This is not white guilt, just awareness and responsibility.

Reflecting further on my beloved field of #evaluation, I wondered what am I doing to challenge the reality of #EvalSoWhite (the topic of a recent Eval Central UnWebinar with Dr. Vidhya Shanker)? This, along with another UnWebinar with Jara Dean-Coffey on being more of her true self in her work and using her power to speak truth, made me stop and ask, What am I doing to live out, promote, and advocate for #EquitableEval?

I have been trying to learn, and will hopefully continue learning, thanks in large part to the guidance, patience, and generosity of Black, Indigenous, and People of Color (BIPOC) who, through their (usually unpaid) labor, teach white folks like me how to do better. This all prompted a Twitter thread, in which Jara wrote, “Looking forward to your answers – note there is not a singular response.” My tentative response …

Here’s what I am trying to and will increasingly do. It is inadequate, incomplete, and probably wrong in some ways:

  1. Educate other white people in #eval or other contexts about racism, white privilege, etc. by talking to them and sharing reading material. Sadly, this means I should post on EvalTalk in reply to the periodic older white male snowflake posts, which arrive with disturbing frequency
  2. Educate myself, dry my own #WhiteTears, when racist things happen, so I don’t add to the emotional burden of BIPOC already suffering from the situation
  3. Speak up in all white rooms and anywhere to demand substantial BIPOC representation. Substantial = equal, fair pay; decision-making power
  4. Speak up quickly and clearly in any #eval, academic, or other context where I witness microaggression, overt racism, institutional racism, etc.
  5. Continue to recruit, fund, and mentor graduate students of color in #eval, using guidance on the nuance of how to do so well from @AyeshaBoyce and also @MiChicana4ever’s Collectors, Nightlights, and Allies article
  6. Mentor #YEEs from Senegal and other VOPEs serving the global community of young evaluators @EvalYouth
  7. Cite BIPOC, especially women, and add BIPOC, especially BIWOC, to my syllabi
  8. Use editorial and conference organizing opportunities to promote and feature writing and presentations from BIPOC, especially women
  9. Find ways to implement the three @EquitableEval principles in my #eval, #ECB, and #RoE work
  10. Critically reflect on my own #WhitePrivilege and mistakes I’ve made so anti-racist values become more thoroughly part of my praxis and life

And lastly, provide direct financial and material support to BIPOC-led anti-racist causes (e.g., the bail-out fund for protestors in Minneapolis).

Thank you Jara, Vidhya, Leah, Andrea, Nicky, Geri, Dominica, and so many more for helping me learn. My learning journey will continue. Most recently, it continued when I was able to join Libby, Tiffany, and Deven with their Radical (Re)imagining (@RadReImagining) initiative, for a conversation on being more human and bringing our values into our work, here.

Note: Writing about what I am doing or am trying to do feels self-aggrandizing or self-serving. But one thing I’ve learned from these amazing BIPOC guides is that we white people need to start learning from each other, too! If we stay quiet out of guilt, shame, fear of saying the wrong thing, or fear of whatever else, we’re not helping anyone! We need to get over that. So that’s what I’m trying to do here.



Resources for #eval in a time of crisis

Some colleagues and I have been working to rapidly help our organizations and community partners use evaluative thinking and other evaluation approaches to promote learning and adaptive management in this difficult moment of crisis brought about by COVID-19. I had noticed some good ideas on this topic flying around, so I decided to pull a few of them together into one spot. The result is below: 

Evaluation During Crisis – COVID 19 (Infographic) – Tips on evaluating during a crisis. From the UNDP Independent Evaluation Office (IEO) (@UNDP_Evaluation)

Evaluation Implications of the Coronavirus Global Health Pandemic Emergency (Blog post) – How Michael Quinn Patton is making sense of the global health emergency and what he thinks the implications may be for evaluation. By Michael Quinn Patton (@MQuinnP) at Blue Marble Evaluation (@BMEvaluation)

Discussion on Challenges and Strategies for M&E in the Time of COVID-19 (Online conversation, April 1; notes to be shared following the event) – The M&E community is adjusting its practices to support program needs during the COVID-19 pandemic. This online discussion will provide an opportunity for practitioners to:

  • Discuss M&E challenges during the COVID-19 pandemic with fellow practitioners
  • Hear how other organizations are addressing these challenges
  • Connect with M&E colleagues to brainstorm strategies for prioritizing and adapting M&E within current activities

Hosted by the USAID Implementer-Led Design, Evidence, Analysis and Learning (IDEAL) activity.

Reflecting on the Role of Evaluator During This Global Pandemic (Blog post) – Tips, tools, and ideas from internal evaluators who have been scrambling to adapt to the ever-shifting and urgent demands placed upon non-profits by the COVID-19 pandemic. By Miranda Yates on aea365 (@aeaweb)

The Evaluation Mindset: Evaluation in a Crisis (Blog post) – Cartoons and thoughts on evaluator roles during a crisis: how we can use our evaluation expertise and skills to support our society in an unprecedented time. By Chris Lysy (@clysy) on freshspectrum

Navigating Together: Learning, Evaluation, and COVID-19 (Facebook Group) – The evaluation and social change world as we know it is rapidly changing in ways we can’t predict. This group is for people facilitating learning and evaluation in a COVID-19 world. Let’s navigate this together, sharing questions, concerns, resources, support, and inspiration. Hosted by Inspire to Change (@inspiretochang8)

Living (and Working Virtually) in Uncertainty (Blog post) – Several principles and practices to support community as the Interaction Institute for Social Change, like so many other organizations, moves to largely virtual work. By Cynthia Silva Parker (@CynthiaSParker) at the Interaction Institute for Social Change (@IISCBlog) via @InnoNet_Eval

Developmental Evaluation Resources (Resource compilation) – Guidance on an evaluation approach that can help social innovators develop social change initiatives in complex or uncertain environments (useful in a crisis). From BetterEvaluation (@BetterEval)

Complexity-Aware Monitoring Discussion Note (Brief paper) – A discussion paper intended for those seeking cutting-edge solutions to monitoring complex aspects of strategies and projects. By Heather Britt and @USAIDlearning

A Quick Primer on Running Online Events and Meetings (Resource compilation) – A set of resources on online meetings. By Emma Smith at BetterEvaluation (@BetterEval)

“Cultivating a Life of the Mind for Practice”

Frequent readers of this infrequently updated blog may have noticed that most if not all of what is posted here pertains to “evaluative thinking.” Keeping with that tendency, yet taking a slightly different form, I’d like to share some of my thoughts on an excellent recent book by Tom Schwandt, Evaluation Foundations Revisited: Cultivating a Life of the Mind for Practice (Stanford University Press, 2015). The book covers many topics which are salient to discussions of evaluative thinking, grounding them within a broader and deeper overview of evaluation’s foundations.

The text presented below is a preprint version of a book review published in the American Journal of Evaluation. The full published version is available online here and can be cited as:

Archibald, T. (2016). Review of Evaluation Foundations Revisited: Cultivating a Life of the Mind for Practice, by Thomas Schwandt. American Journal of Evaluation, 37(3), 448-452. doi:10.1177/1098214016648794.


What is evaluation—as a professional practice and, more generally, as an endeavor? How is it done well? In Evaluation Foundations Revisited: Cultivating a Life of the Mind for Practice, Thomas Schwandt offers a thoughtful response to these questions in a way that is both timely and potentially timeless. The book is timely because it presents a nuanced discussion of some of the hottest topics in evaluation, e.g., what counts as credible evidence; how evaluation can, should, and does influence society at large; and the professionalization of the field. One reason the book may prove timeless is that it so clearly and accessibly presents an overview of evaluation, making it an excellent reading assignment for an introductory evaluation course. Another reason is that it engages with some of the most fundamental theoretical and philosophical questions at the heart of evaluation. Thus, the book is appropriate for evaluation beginners and experts alike.

On the more profound side of the spectrum, Schwandt provides a theoretically rich exploration of two essential issues in evaluation, which unfortunately tend to be inadequately taught and discussed. One is the intersection of theory and practice in evaluation—a topic that has benefited from increased attention via the Eastern Evaluation Research Society’s Chelimsky Forum on Evaluation Theory and Practice, for which Schwandt was an inaugural speaker in 2013 (Schwandt, 2014). The other is the value judgment question: how should and how do evaluators render evaluative judgments?

Across all of these issues, the book’s most noteworthy contribution—its thesis—is aptly summed up in its subtitle: “cultivating a life of the mind for practice.” As Schwandt describes in the prologue, this phrase came to his attention via a Carnegie Foundation for the Advancement of Teaching seminar. To help explain the notion, Schwandt refers to Argyris’s (2004) triple-loop learning. Single-loop learning pertains to “knowing how” and “doing things right.” Double-loop learning asks the question “Are we doing the right things?” Triple-loop learning goes further to question underlying assumptions and mental models, asking “What makes this the right thing to do?” For Schwandt, the concept of “a life of the mind for practice” incorporates all three types of learning.

Schwandt advocates for treating evaluation and evaluation training as more than purely technical endeavors. This aim reflects his “longstanding concern that training in technique in evaluation must be wedded to education in both the disposition and the capacity to engage in moral, ethical, and political reflection on the aim of one’s professional undertaking” (p. 9). Sullivan and Rosin (2008), organizers of the Carnegie Foundation seminar, frame a life of the mind for practice in terms of practical reason, which “values embodied responsibility as the resourceful blending of critical intelligence with moral commitment” (p. xvi). In essence, Schwandt’s book is a primer on imbuing practical reason into evaluation.

The book parallels another recent philosophically oriented text that foregrounds practical reason in evaluation: House’s (2015) Evaluating: Values, Biases and Practical Wisdom. Practical wisdom is manifest when skilled evaluators use clinical expertise to “recognise patterns, perceive and frame situations, draw on intuition, deliberate on available courses of action, empathise, balance conflicting aims, improvise, make judgments and act in ways appropriate to the time and circumstances” (Astbury, 2016, p. 64). Additionally, Schwandt’s book aligns with Scriven’s work on the “logic of evaluation” (Scriven, 2016). All three—Schwandt, House, and Scriven—call into question the dubious ‘value-free doctrine’ of the social sciences, a vestige of positivism, to emphasize the obvious yet frequently ignored primacy of values and valuing in evaluation.

Credible Evidence Debates

What counts as credible evidence and how does such evidence provide warrant for evaluative conclusions? In Schwandt’s words, “the professional evaluator needs to be familiar with the nature, types, and properties of evidence as well as several controversies surrounding what constitutes the best evidence for evaluative judgments” (p. 70). Those controversies, summarized in Donaldson, Christie, and Mark (2015), pertain to the establishment of hierarchies for the quality of evidence. For example, some (though by no means all) argue that experimental designs produce the strongest kind of evidence, followed respectively by quasi-experimental designs, case control studies, and observational studies.

One problem with these hierarchies is that they ignore the wide variety of questions that an evaluation may address. Schwandt (like Scriven) reminds us that the practice of evaluation is much broader than the commonly understood notion of “program evaluation.” For example, there are also product evaluation, personnel evaluation, metaevaluation, and so on, all of which require different varieties of questions and evidence. Then, even within program evaluation, there are explanatory, normative, and descriptive questions, such as ‘How many?’ and ‘What does this program look like?’ (p. 72), though elsewhere in the book, Schwandt questions whether these descriptive questions alone are really evaluation: “Evaluation is a judgment-oriented practice—it does not aim simply to describe some state of affairs but to offer a considered and reasoned judgment about the value of that state of affairs” (p. 47).

Schwandt helps us think about the evidence debates in conceptual and philosophical terms, not just technical or procedural ones. He unpacks the argument structure for an evaluative judgment, in which the pathway from evidence to conclusion is mediated by warrants (i.e., the principles or chains of reasoning that connect evidence to claims). Warrants themselves are contextually-mediated and must appeal to some authority, such as legislative authority or a community of professional inquirers. Schwandt reminds us that discussions of evidentiary quality are meaningless without consideration of how that evidence is marshalled in evaluative arguments. And based on the fallibility of evidence, plus the many rhetorical, political, and otherwise unsystematic considerations that often influence policy making, he writes, “…the idea that evidence literally forms a base or foundation for policy or practice decisions is problematic” (p. 78, emphasis in original). In brief, questions of evidence and argument have implications for how evaluation can, should, and does influence society at large.

Questions of Use and Influence

Schwandt is well-placed to discuss use and influence, especially at the level of policy and governance—he was an editor of a National Research Council (2012) report on the use of scientific evidence to inform policy. As Carol Weiss, Michael Patton, and others have written, there are many types of evaluation use, such as instrumental, conceptual, process, and symbolic use (which can be a kind of misuse). Especially in recent years, evaluation use has been related to efforts such as data-driven decision making, evidence-based practice, translational research, and the diffusion, dissemination, extension, transfer, and translation of knowledge.

In relation to a life of the mind for practice, Schwandt connects research and evaluation use to the broader role of inquiry in society. Here, there are linkages to both Dahler-Larsen’s (2011) “evaluation society” and Campbell’s (1991) “experimenting society,” both of which provide a vision of how evaluation could and should contribute to shaping the contours of society by guiding decision making, on both large and small scales. Citing Chelimsky, Schwandt discusses how, ideally, “evaluation is considered necessary to the effective functioning of democratic societies” (p. 95). However, for these ideals to be realized, policy makers and the general public alike must have an “intelligent belief in evaluation,” which Schwandt describes as “a particular attitude and outlook on self and society … demonstrated in a thorough understanding of what is involved in evaluative reasoning as well as a robustly held, warranted conviction that such reasoning is vital to our well-being” (Schwandt, 2008, p. 139).

Without such belief, evaluation can become ritualistic, a type of impression management used as a source of legitimization (p. 110). Or, as with New Public Management (whereby efforts to improve the efficiency and performance of public sector entities are derived from private sector techniques focused on benchmarking and performance management), it can become a tool for institutional control (p. 95). This can yield an “evaluative state” in which “evaluation functions less like a critical voice weighing in on the value (or lack thereof) of public programs and policies and more like a technology that operates with well-defined procedures and indicators” (p. 96) for constant checking and verification—an audit society. Accounting for the complex nature of the “evaluation-politics-policymaking nexus” (p. 102), in which professional evaluation is not simply the application of evaluation science to public problems, the need to carefully consider the current trend toward professionalization of evaluation is clear.


Professionalization

The topic of professionalization in evaluation is not new, but in recent years it has gained renewed interest. Ironically, some discussions of certification and accreditation risk reinforcing the trend that Schwandt aims to interrupt: the rendering of evaluation practice as “a tool for quality assurance, performance management, and for assessing conformance to standards or benchmarks … the province of the technician who principally relies on following procedures or scripts and correctly applying methods” (p. 144).

Schwandt draws from Schön’s (1983) studies of practitioners in action, noting, “the idea that dominates most thinking about knowledge for the professions is that practice is the site where this theoretical knowledge is applied to solutions to problems of instrumental choice … a matter of applying a toolkit or following a pre-approved set of procedures or practice guidelines” (p. 32, emphasis in original). However, faced with “wicked problems”—problems for which “goals, means, and constraints are not particularly clear; there is the possibility of multiple solutions; there is uncertainty about which concepts, principles, or rules are necessary to address the problem; and the problems are continuous” (p. 32)—practitioners more often engage in “reflection-in-action, a kind of ongoing experimentation, as a means to finding a viable solution to such problems” leading to “a particular kind of craft knowledge (or the wisdom of practice)” (pp. 32-33).

With so much variability in evaluation practice—a point Schwandt illustrates in the first chapter—there is not a “uniform definition of who is an ‘evaluator’” (p. 124), let alone a definition of a ‘good evaluator.’ However, “absent a credentialing or certification process, is it possible to provide the kind of assurance that funders and clients of evaluation seek while preserving the diversity in evaluation approaches, aims, and methods that currently characterize the field?” (p. 130). The credentialing question remains unresolved.

Theory-Practice Integration and Making Value Judgments

So professional evaluators are not just atheoretical technicians, but neither is practice just a place where theories are applied. Many evaluators, especially independent consultants, find theory irrelevant to their work, perhaps because the way it has been presented to them is unclear or misleading. According to Schwandt and others, to redress this, one must first distinguish between social science theories in general, evaluation theories, and program theories. Second, one must give “theory” a more everyday, practical meaning.

To this end, Schwandt discusses how practitioners “theorize” for every case, subjecting “the beliefs, ideas, and the justifications they give for their ongoing practical activities to rational criticism” (p. 33). Here, conceptual or theoretical knowledge serves “as heuristics, ‘tools to think with’” (p. 33). This relates to what has been called reflective thought, reflective practice, critical thinking, and evaluative thinking (p. 67). In this light, hopefully all evaluators can see theory as an essential part of their practice. As Schwandt writes, paraphrasing Kant, “experience without theory is blind, but theory without experience is mere intellectual play” (p. 34). In essence, every page of Schwandt’s book is about connecting theory and practice, since such praxis is a prerequisite for practical wisdom.

For me, one major purpose of Schwandt’s thesis on theory-practice integration for practical wisdom—or a life of the mind for practice—is to illuminate how evaluators should engage with values and valuing. Scriven (2016, p. 29) has lamented the ironic “phenomenon of valuephobia” in evaluation. Despite the ubiquity of his longstanding definition of evaluation based on the determination of merit, worth, and significance, Scriven’s analysis of the major schools of thought in evaluation—such as those championed by Alkin, Rossi and Freeman, Stake, Cronbach, and others—finds that almost all of them “can be seen as a series of attempts to avoid direct statements about the merit or worth of things” (Scriven, 1993, p. 8). Schwandt, on the other hand, does not suffer from valuephobia.

Contradicting those schools of thought in evaluation that do not position the valuing of merit, worth, and significance as their central purpose, and against a backdrop of increased attention within the field on how and when evaluators make value judgments, Schwandt unabashedly proclaims his position on valuing. The prologue opens with Scriven’s classic definition of evaluation as “the act of judging the value, merit, worth, or significance of things” (p. 1). Later Schwandt specifies, “This book is based on the objectivist premise that judgments of policy or program value can be and should be made on the basis of evidence and argument” (p. 46). His writing on this topic is strong because he offers a more nuanced discussion than most. He considers how criteria are established, how competing criteria can be juggled, and how evaluative syntheses unfold. This begins to address the irony that “for a practice concerned with making warranted judgments of value, the lack of explicit justification for a synthesis procedure continues to be regarded as the Achilles’ heel of the practice” (p. 59).

I find it interesting that Schwandt takes as self-evident his objectivist “strong decision-support” view (whereby the role of the evaluator is to make value judgments to support decisions, rather than, for instance, reporting on participants’ experiences with the program or the extent to which the program met its goals); this potentially leaves behind those evaluators who either do not try to make value judgments, or who subscribe to Scriven’s “determination of merit, worth, and significance” definition but never quite arrive at actual value judgments in their reports. Schwandt might respond that neither of these two groups is actually doing evaluation.

Perhaps the book’s only flaw is that it ends too abruptly. In the closing pages, Schwandt offers some thoughts on what types of formal education and training are needed to cultivate a life of the mind for practice. He briefly touches on the importance of a holistic university education in addition to specialized technical training, but stops short of offering specifics. Then again, the entire book is a suggestion of how to incorporate practical wisdom into evaluation; it is thus up to us to put Schwandt’s suggestions into practice.



Argyris, C. (2004). Reasons and rationalizations: The limits to organizational knowledge. Oxford, UK: Oxford University Press.

Astbury, B. (2016). Reframing how evaluators think and act: New insights from Ernest House. Evaluation, 22(1), 58-71.

Campbell, D. T. (1991). Methods for the experimenting society. Evaluation Practice, 12(3), 223-260.

Dahler-Larsen, P. (2011). The evaluation society. Stanford, CA: Stanford University Press.

Donaldson, S. I., Christie, C. A., & Mark, M. M. (Eds.) (2015). Credible and actionable evidence: The foundation for rigorous and influential evaluations (2nd ed.). Los Angeles: Sage.

House, E. R. (2015). Evaluating: Values, biases, and practical wisdom. Charlotte, NC: Information Age Publishing.

National Research Council. (2012). Using science as evidence in public policy. Committee on the Use of Social Science Knowledge in Public Policy. K. Prewitt, T. A. Schwandt, & M. L. Straf (Eds.). Division of Behavioral and Social Sciences and Education. Washington, DC: National Academies Press.

Schön, D. A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.

Schwandt, T. A. (2008). Educating for intelligent belief in evaluation. American Journal of Evaluation, 29(2), 139-150.

Schwandt, T. A. (2014). On the mutually informing relationship between practice and theory in evaluation. American Journal of Evaluation, 35(2), 231-236.

Scriven, M. (1993). Hard-won lessons in program evaluation. New Directions for Program Evaluation, 58.

Scriven, M. (2016). Roadblocks to recognition and revolution. American Journal of Evaluation, 37(1), 27-44.

Sullivan, W. M., & Rosin, M. S. (2008). A new agenda for higher education: Shaping a life of the mind for practice. San Francisco: Jossey-Bass.


Fostering Evaluative Thinking (part 2)

Many months ago, I promised more details on how my colleagues and I have been working to promote evaluative thinking. Now, inspired by an excellent Stanford Social Innovation Review blog post mentioning the importance of evaluative thinking, I’m finally getting to it. The post, “How Evaluation Can Strengthen Communities,” is by Kien Lee and David Chavis, principal associates with Community Science.

They describe how—in their organization’s efforts to build healthy, just, and equitable communities—supporting evaluative thinking can provide “the opportunity for establishing shared understanding, developing relationships, transforming disagreements and conflicts, engaging in mutual learning, and working together toward a common goal—all ingredients for creating a sense of community.” In our work with Jane Buckley and Guy Sharrock to promote evaluative thinking at Catholic Relief Services and other community development organizations, we have definitely seen this happen as well.

But how does one support evaluative thinking? On aea365 and in an earlier post here, we share some guiding principles we have developed for promoting evaluative thinking. Below, I briefly introduce a few practices and activities we have found to be successful in supporting evaluative thinking (ET). Before I do that, though, I must first give thanks and credit to both the Cornell Office of Research on Evaluation, whose Systems Evaluation Protocol guides our approach to articulating theories of change and has been instrumental in our ET work, and Stephen Brookfield, whose work on critical reflection and teaching for critical thinking has opened up new worlds of ET potential for us and the organizations with which we work! Now, on to the practices and activities:

  • Create an intentional ET learning environment
    • Display logic models or other theory of change diagrams in the workplace—in meeting rooms, within newsletters, etc.
    • Create public spaces to record and display questions and assumptions.
    • Post inspirational questions, such as, “How do we know what we think we know?” (as suggested by Michael Patton here).
    • Highlight the learning that comes from successful programs and evaluations and also from “failures” or dead ends.
  • Establish a habit of scheduling meeting time focused on ET practice
    • Have participants “mine” their logic model for information about assumptions and how to focus evaluation work (for example, by categorizing outcomes according to stakeholder priorities) (Trochim et al., 2012).
    • Use “opening questions” to start an ET discussion, such as, “How can we check these assumptions out for accuracy and validity?” (Brookfield, 2012, p. 195); “What ‘plausible alternative explanations’ are there for this finding?” (see Shadish, Cook, & Campbell, 2002, p. 6).
    • Engage in critical debate on a neutral topic.
    • Conduct a media critique (critically review and identify assumptions in a published article, advertisement, etc.) (an activity introduced to us by evaluation capacity building pioneer Ellen Taylor-Powell).
  • Use role-play when planning evaluation work
    • Conduct a scenario analysis (have individuals or groups analyze and identify assumptions embedded in a written description of a fictional scenario) (Brookfield, 2012).
    • Take on various stakeholder perspectives using the “thinking hats” method in which participants are asked to role play as a particular stakeholder (De Bono, 1999).
    • Conduct an evaluation simulation (simulate data collection and analysis for your intended evaluation strategy).
  • Diagram or illustrate thinking with colleagues
    • Have teams or groups create logic and pathway models (theory of change diagrams or causal loop diagrams) together (Trochim et al., 2012).
    • Diagram the program’s history.
    • Create a system, context and/or organization diagram.
  • Engage in supportive, critical peer review
    • Review peer logic models (help identify leaps in logic, assumptions, strengths in their theory of change, etc.).
    • Use the Critical Conversation Protocol (a structured approach to critically reviewing a peer’s work through discussion) (Brookfield, 2012).
    • Take an appreciative pause (stop to point out the positive contributions, and have individuals thank each other for specific ideas, perspectives or helpful support) (Brookfield, 2012).
  • Engage in evaluation
    • Ensure that all evaluation work is participatory and that members of the organization at all levels are offered the opportunity to contribute their perspectives.
    • Encourage members of the organization to engage in informal, self-guided evaluation work.
    • Access tools and resources necessary to support all formal and informal evaluation efforts (including the support of external evaluators, ECB professionals, data analysts, etc.).

What other techniques and practices have you used to promote and support evaluative thinking?


A theory of change ‘pathway model’ from CRS Zambia, helping practitioners to identify and critically reflect on assumptions.

Note: The ideas above are presented in greater detail in a recent article in the American Journal of Evaluation:

Buckley, J., Archibald, T., Hargraves, M., & Trochim, W. M. (2015). Defining and teaching evaluative thinking: Insights from research on critical thinking. American Journal of Evaluation. Advance online publication. doi:10.1177/1098214015581706



Brookfield, S. (2012). Teaching for critical thinking: Tools and techniques to help students question their assumptions. San Francisco, CA: Jossey-Bass.

De Bono, E. (1999). Six thinking hats. London: Penguin.

Shadish, W., Cook, T., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

Trochim, W., Urban, J. B., Hargraves, M., Hebbard, C., Buckley, J., Archibald, T., Johnson, M., & Burgermaster, M. (2012). The Guide to the Systems Evaluation Protocol (V2.2). Ithaca, NY.

Fostering Evaluative Thinking (part 1)

Linda Keuntje recently launched an excellent discussion on LinkedIn, in the group Monitoring & Evaluation Professionals, with the question: “Does anyone have any experiences they can share increasing the amount of evaluative thinking in their organization?” Anyone who knows me could imagine how excited I was to jump into the conversation! (And thanks, Sheila Robinson, for alerting me to the discussion!)

One thing I noticed is that, in order to discuss Linda’s original question, those of us posting to the discussion were all, implicitly or explicitly, talking about what we think evaluative thinking even IS.

From my perspective, I think of evaluative thinking as critical thinking applied in the context of evaluation, motivated by an attitude of inquisitiveness and a belief in the value of evidence, that involves identifying assumptions, posing thoughtful questions, pursuing deeper understanding through reflection and perspective taking, and informing decisions in preparation for action. And when I say, “in the context of evaluation,” I mean evaluation in a very broad sense, such that it incorporates any M&E, learning, accountability, and even general organizational development and organizational management functions.

As far as promoting or increasing evaluative thinking, this is something my colleagues and I have been working on a great deal lately. For instance, in addition to a really lively and productive professional development workshop at AEA a couple of weeks ago, I have some experience working with Guy Sharrock of Catholic Relief Services (CRS) on this topic in Ethiopia and Zambia. We recently facilitated 5-day workshops in both countries focused specifically on promoting evaluative thinking among CRS and partner organization staff (including staff who do and do not have formal responsibilities for M&E).

Without going into the specific activities and experiences we facilitated (I’ll save those for part 2), let me share some of the principles that guided our approach:

  1. Promoters of evaluative thinking should be opportunistic about engaging learners in evaluative thinking processes in a way that builds on and maximizes intrinsic motivation.
  2. Promoting evaluative thinking should incorporate incremental experiences, following the developmental process of “scaffolding.”
  3. Evaluative thinking is not an inborn skill, nor does it depend on any particular educational background; therefore, promoters should offer opportunities for it to be intentionally practiced by all who wish to develop as evaluative thinkers.
  4. Evaluative thinkers must be aware of—and work to overcome—assumptions and belief preservation.
  5. To best learn to think evaluatively, learners should apply and practice the skill in multiple contexts and alongside peers and colleagues.

Based on requests from people who have participated in evaluative thinking workshops with me, I have recently created an online community to share ideas and resources around evaluative thinking. If you are interested in joining, please email me or comment with your email and I will add you.

Evaluative Thinking at AEA2014

The annual American Evaluation Association (AEA) conference in Denver is right around the corner! I get excited to attend this great event every year—it is a wonderful opportunity to learn, to make new friends, and to visit with old friends—and this year I am especially excited to be facilitating a preconference professional development workshop entitled “Evaluative Thinking: Principles and Practices to Enhance Evaluation Capacity and Quality” on Wednesday, October 15.

As discussed in a previous post (long, long ago …), evaluative thinking (ET) is an emergent topic in the field of program evaluation. It has been defined in many ways, yet in brief, ET has to do with thinking critically, valuing evidence, questioning assumptions, taking multiple perspectives, posing thoughtful questions, and pursuing deeper understanding in preparation for informed action. It has much in common with reflective practice. It is key to evaluation, yet also has a place in all of an organization’s processes. In a more complexity-aware world, ET is a way to instill rapid learning and feedback cycles in the ongoing management of programs and organizations.

In addition to my session, I have identified some ET-related sessions at the conference that I would like to check out:

Thinking about Thinking: Using Metacognition to Improve Program Evaluation, with Rhonda Jones, Thursday, 1:00 PM – 1:45 PM in 106 (Organizational Learning & Evaluation Capacity Building TIG)

Infusing Evaluative Thinking into Programs and Organizations, with Jane Kwon, David Shellard, & Boris Volkov, Friday, 4:30 PM – 5:15 PM in Mineral E (Internal Evaluation TIG)

Integrating Systems and Tools into the Grantmaking Process to Promote Evaluative Learning, with PeiYao Chen & Kelly Gannon, Thursday, 2:00 PM – 2:45 PM in Room 103 (Nonprofit and Foundations TIG)

Striking a Balance: Walking the Fine Line between the Complexities of Evaluation Practice and Systems to Facilitate Evaluative Thinking, with Paola Babos, Matthew J. Birnbaum, Jasmin Rocha, Joanna Kocsis, & Angela R. Moore, Friday, 8:00 AM – 9:30 AM in Capitol 5 (Organizational Learning & Evaluation Capacity Building TIG)

Rethinking Evaluative Reflection: Promoting Creativity and Critical Thinking in Youth, with Janet Fox & Melissa Cater, Thursday, 1:00 PM – 1:45 PM in Mineral B (Youth Focused Evaluation TIG)

Are you presenting on something related to ET that I have missed? Please let me know!

Stay tuned for a post following the conference in which I will summarize the new lessons I learn about ET. Also, stay tuned for another upcoming post on principles and practices that can be used to promote ET among non-evaluators (based on my AEA session and some recent ET workshops with Catholic Relief Services in Ethiopia and Zambia).

In addition to my perennial interest in all things ET, at this year’s conference I’m also hoping to learn new ideas and approaches regarding some other hot topics I’m excited about: data viz, culturally responsive evaluation, gold standard debates, new directions in research on evaluation, and more.

I hope to see you in Denver!

Evaluative Thinking Lit Review

As Ann Emery’s comment suggests, evaluative thinking (ET) is an important part of the work that many of us do, and we know it is mentioned in the evaluation literature, but few of us have the time to dig through that literature to see what people are saying about it. As an ET nerd, I’ve done that for you.

Below, in an example of Chris Lysy’s Evaluation/Research Blog Style #4, I offer you something like an annotated bibliography of ET. Disclaimer: This lit review is neither systematic nor comprehensive; however, it does provide a fairly good review of some of the more substantive engagements with the idea of evaluative thinking that have appeared in evaluation journals, books, and reports over the past few years.


In her 2007 presidential address to the American Evaluation Association (AEA), Hallie Preskill brought ET to the big stage and highlighted the construct’s importance, asking, “How do we build the capacity of individuals, teams, and organizations to think evaluatively and engage in evaluation practice?” (p. 129). But what did she mean by “to think evaluatively,” and how should we answer that “how” question? The quotations below offer some answers to those questions.

Michael Quinn Patton, in an interview with Lisa Waldick of the International Development Research Center (IDRC), describes ET as including “a willingness to do reality testing, to ask the question: how do we know what we think we know? … Evaluative thinking is not just limited to evaluation projects…It’s an analytical way of thinking that infuses everything that goes on.”

In fact, the IDRC has been at the forefront of working with and writing about ET. For instance, Fred Carden and Sarah Earl describe how ET was infused into the culture of the IDRC, primarily through changes made to the organization’s reporting structure. They echo Patton’s description of ET:

“Evaluative thinking shifts the view of evaluation from only the study of completed projects and programs toward an analytical way of thinking that infuses and informs everything the center does. Evaluative thinking is being clear and specific about what results are sought and what means are used to achieve them. It ensures systematic use of evidence to report on progress and achievements. Thus information informs action and decision making.” (p. 72, n. 2)

Tricia Wind (also of the IDRC) and Carden write that “Evaluative thinking involves being results oriented, reflective, questioning, and using evidence to test assumptions” (p. 31).

Gavin Bennett and Nasreen Jessani—editors of an IDRC publication on knowledge translation—agree. They define ET as “questioning, reflecting, learning, and modifying … conducted all the time. It is a constant state-of-mind within an organization’s culture and all its systems” (p. 24).

In addition to the IDRC, the Bruner Foundation is another organization that has led the way in working with ET. They also emphasize the necessity that ET (ideally) be ubiquitous within an organization, not limited solely to evaluation tasks. In their report on Integrating Evaluative Capacity into Organizational Practice, Anita Baker and Beth Bruner describe ET as “a type of reflective practice” that integrates the same skills that characterize good evaluation—“asking questions of substance, determining what data are required to answer specific questions, collecting data using appropriate strategies, analyzing collected data and summarizing findings, and using the findings”—throughout all of an organization’s work practices (p. 1).

All of these statements about how ET should be pervasive throughout an organization evoke the Jean King quote that gave this blog its title: “The concept of free-range evaluation captures the ultimate outcome of ECB: evaluative thinking that lives unfettered in an organization” (p. 46).

Along these lines, Boris Volkov discusses evaluative thinking as an important component of the work of internal evaluators. He proposes the notion of “the evaluation meme” to help conceptualize how ideas, behaviors, and styles of evaluation can spread through an organization via the work of an internal evaluator:

“Modern internal evaluators will understand how to integrate evaluation into programs and staff development in a way that reinforces the importance of evaluation, contributes to its habituation, but at the same time prevents its harmful routinization (senseless, repetitive use of the same techniques or instruments). Evaluative thinking is not only a process, but also a mind-set and capacity, in other words, a person’s or organization’s ability, willingness, and readiness to look at things evaluatively and to strive to utilize the results of such observations. A challenging role for the internal evaluators will be to promulgate such a mind-set throughout the entire organization.” (2011, p. 38)

Jane Davidson, Michael Howe, and Michael Scriven, in their chapter in Foundations and Evaluation: Contexts and Practices for Effective Philanthropy, also articulate the multidimensionality of this construct (as do Preskill, Sandy Taut, and others). They define ET as “a combination of commitment and expertise, involving an understanding of the performance gap [between the current level of performance and a desired level of performance] and knowing how to gauge it” (pp. 260-261). Essentially, they focus on two distinct components of ET: evaluative know-how and passion for improvement (or, an evaluative attitude).

In Utilization-Focused Evaluation, Patton represents many of the sentiments contained within the quotations above, in cartoon form (p. 192):

[Image: Patton’s “way of thinking” cartoon]

Again, these are just some of the ways in which ET is discussed in the evaluation literature. If ET is part of your work, do you think these quotations represent your experiences and perceptions well? What additional perspectives on what ET is and why it’s important would you add?