“Cultivating a Life of the Mind for Practice”

Frequent readers of this infrequently updated blog may have noticed that most if not all of what is posted here pertains to “evaluative thinking.” Keeping with that tendency, yet taking a slightly different form, I’d like to share some of my thoughts on an excellent recent book by Tom Schwandt, Evaluation Foundations Revisited: Cultivating a Life of the Mind for Practice (Stanford University Press, 2015). The book covers many topics that are salient to discussions of evaluative thinking, grounding them within a broader and deeper overview of evaluation’s foundations.

The text presented below is a preprint version of a book review published in the American Journal of Evaluation. The full published version is available online here and can be cited as:

Archibald, T. (2016). Review of Evaluation Foundations Revisited: Cultivating a Life of the Mind for Practice, by Thomas Schwandt. American Journal of Evaluation, 37(3), 448-452. doi:10.1177/1098214016648794.

Schwandt cover

What is evaluation—as a professional practice and, more generally, as an endeavor? How is it done well? In Evaluation Foundations Revisited: Cultivating a Life of the Mind for Practice, Thomas Schwandt offers a thoughtful response to these questions in a way that is both timely and potentially timeless. The book is timely because it presents a nuanced discussion of some of the hottest topics in evaluation, e.g., what counts as credible evidence; how evaluation can, should, and does influence society at large; and the professionalization of the field. One reason the book may prove timeless is that it so clearly and accessibly presents an overview of evaluation, making it an excellent reading assignment for an introductory evaluation course. Another reason is that it engages with some of the most fundamental theoretical and philosophical questions at the heart of evaluation. Thus, the book is appropriate for evaluation beginners and experts alike.

On the more profound side of the spectrum, Schwandt provides a theoretically rich exploration of two essential issues in evaluation, which unfortunately tend to be inadequately taught and discussed. One is the intersection of theory and practice in evaluation—a topic that has benefited from increased attention via the Eastern Evaluation Research Society’s Chelimsky Forum on Evaluation Theory and Practice, for which Schwandt was an inaugural speaker in 2013 (Schwandt, 2014). The other is the value judgment question: how should and how do evaluators render evaluative judgments?

Across all of these issues, the book’s most noteworthy contribution—its thesis—is aptly summed up in its subtitle: “cultivating a life of the mind for practice.” As Schwandt describes in the prologue, this phrase came to his attention via a Carnegie Foundation for the Advancement of Teaching seminar. To help explain the notion, Schwandt refers to Argyris’s (2004) triple-loop learning. Single-loop learning pertains to “knowing how” and “doing things right.” Double-loop learning asks the question “Are we doing the right things?” Triple-loop learning goes further to question underlying assumptions and mental models, asking “What makes this the right thing to do?” For Schwandt, the concept of “a life of the mind for practice” incorporates all three types of learning.

Schwandt advocates for treating evaluation and evaluation training as more than purely technical endeavors. This aim reflects his “longstanding concern that training in technique in evaluation must be wedded to education in both the disposition and the capacity to engage in moral, ethical, and political reflection on the aim of one’s professional undertaking” (p. 9). Sullivan and Rosin (2008), organizers of the Carnegie Foundation seminar, frame a life of the mind for practice in terms of practical reason, which “values embodied responsibility as the resourceful blending of critical intelligence with moral commitment” (p. xvi). In essence, Schwandt’s book is a primer on imbuing practical reason into evaluation.

The book parallels another recent philosophically oriented text that foregrounds practical reason in evaluation: House’s (2015) Evaluating: Values, Biases and Practical Wisdom. Practical wisdom is manifest when skilled evaluators use clinical expertise to “recognise patterns, perceive and frame situations, draw on intuition, deliberate on available courses of action, empathise, balance conflicting aims, improvise, make judgments and act in ways appropriate to the time and circumstances” (Astbury, 2016, p. 64). Additionally, Schwandt’s book aligns with Scriven’s work on the “logic of evaluation” (Scriven, 2016). All three—Schwandt, House, and Scriven—call into question the dubious ‘value-free doctrine’ of the social sciences, a vestige of positivism, to emphasize the obvious yet frequently ignored primacy of values and valuing in evaluation.

Credible Evidence Debates

What counts as credible evidence and how does such evidence provide warrant for evaluative conclusions? In Schwandt’s words, “the professional evaluator needs to be familiar with the nature, types, and properties of evidence as well as several controversies surrounding what constitutes the best evidence for evaluative judgments” (p. 70). Those controversies, summarized in Donaldson, Christie, and Mark (2015), pertain to the establishment of hierarchies for the quality of evidence. For example, some (though by no means all) argue that experimental designs produce the strongest kind of evidence, followed in turn by quasi-experimental designs, case-control studies, and observational studies.

One problem with these hierarchies is that they ignore the wide variety of questions that an evaluation may address. Schwandt (like Scriven) reminds us that the practice of evaluation is much broader than the commonly understood notion of “program evaluation.” For example, there are also product evaluation, personnel evaluation, metaevaluation, and so on, all of which require different varieties of questions and evidence. Then, even within program evaluation, there are explanatory, normative, and descriptive questions, such as ‘How many?’ and ‘What does this program look like?’ (p. 72), though elsewhere in the book, Schwandt questions whether these descriptive questions alone are really evaluation: “Evaluation is a judgment-oriented practice—it does not aim simply to describe some state of affairs but to offer a considered and reasoned judgment about the value of that state of affairs” (p. 47).

Schwandt helps us think about the evidence debates in conceptual and philosophical terms, not just technical or procedural ones. He unpacks the argument structure for an evaluative judgment, in which the pathway from evidence to conclusion is mediated by warrants (i.e., the principles or chains of reasoning that connect evidence to claims). Warrants themselves are contextually mediated and must appeal to some authority, such as legislative authority or a community of professional inquirers. Schwandt reminds us that discussions of evidentiary quality are meaningless without consideration of how that evidence is marshalled in evaluative arguments. And based on the fallibility of evidence, plus the many rhetorical, political, and otherwise unsystematic considerations that often influence policy making, he writes, “…the idea that evidence literally forms a base or foundation for policy or practice decisions is problematic” (p. 78, emphasis in original). In brief, questions of evidence and argument have implications for how evaluation can, should, and does influence society at large.

Questions of Use and Influence

Schwandt is well-placed to discuss use and influence, especially at the level of policy and governance—he was an editor of a National Research Council (2012) report on the use of scientific evidence to inform policy. As Carol Weiss, Michael Patton, and others have written, there are many types of evaluation use, such as instrumental, conceptual, process, and symbolic use (which can be a kind of misuse). Especially in recent years, evaluation use has been related to efforts such as data-driven decision making, evidence-based practice, translational research, and the diffusion, dissemination, extension, transfer, and translation of knowledge.

In relation to a life of the mind for practice, Schwandt connects research and evaluation use to the broader role of inquiry in society. Here, there are linkages to both Dahler-Larsen’s (2011) “evaluation society” and Campbell’s (1991) “experimenting society,” both of which provide a vision of how evaluation could and should contribute to shaping the contours of society by guiding decision making, on both large and small scales. Citing Chelimsky, Schwandt discusses how, ideally, “evaluation is considered necessary to the effective functioning of democratic societies” (p. 95). However, for these ideals to be realized, policy makers and the general public alike must have an “intelligent belief in evaluation,” which Schwandt describes as “a particular attitude and outlook on self and society … demonstrated in a thorough understanding of what is involved in evaluative reasoning as well as a robustly held, warranted conviction that such reasoning is vital to our well-being” (Schwandt, 2008, p. 139).

Without such belief, evaluation can become ritualistic, a type of impression management used as a source of legitimization (p. 110). Or, as with New Public Management (whereby efforts to improve the efficiency and performance of public sector entities are derived from private sector techniques focused on benchmarking and performance management), it can become a tool for institutional control (p. 95). This can yield an “evaluative state” in which “evaluation functions less like a critical voice weighing in on the value (or lack thereof) of public programs and policies and more like a technology that operates with well-defined procedures and indicators” (p. 96) for constant checking and verification—an audit society. Given the complex nature of the “evaluation-politics-policymaking nexus” (p. 102), in which professional evaluation is not simply the application of evaluation science to public problems, the current trend toward the professionalization of evaluation clearly demands careful consideration.

Professionalization

The topic of professionalization in evaluation is not new, but in recent years it has gained renewed interest. Ironically, some discussions of certification and accreditation risk reinforcing the trend that Schwandt aims to interrupt: the rendering of evaluation practice as “a tool for quality assurance, performance management, and for assessing conformance to standards or benchmarks … the province of the technician who principally relies on following procedures or scripts and correctly applying methods” (p. 144).

Schwandt draws from Schön’s (1983) studies of practitioners in action, noting, “the idea that dominates most thinking about knowledge for the professions is that practice is the site where this theoretical knowledge is applied to solutions to problems of instrumental choice … a matter of applying a toolkit or following a pre-approved set of procedures or practice guidelines” (p. 32, emphasis in original). However, faced with “wicked problems”—problems for which “goals, means, and constraints are not particularly clear; there is the possibility of multiple solutions; there is uncertainty about which concepts, principles, or rules are necessary to address the problem; and the problems are continuous” (p. 32)—practitioners more often engage in “reflection-in-action, a kind of ongoing experimentation, as a means to finding a viable solution to such problems” leading to “a particular kind of craft knowledge (or the wisdom of practice)” (pp. 32-33).

With so much variability in evaluation practice—a point Schwandt illustrates in the first chapter—there is not a “uniform definition of who is an ‘evaluator’” (p. 124), let alone a definition of a ‘good evaluator.’ However, “absent a credentialing or certification process, is it possible to provide the kind of assurance that funders and clients of evaluation seek while preserving the diversity in evaluation approaches, aims, and methods that currently characterize the field?” (p. 130). The credentialing question remains unresolved.

Theory-Practice Integration and Making Value Judgments

So professional evaluators are not just atheoretical technicians, but neither is practice just a place where theories are applied. Many evaluators, especially independent consultants, find theory irrelevant to their work, perhaps because the way it has been presented to them is unclear or misleading. According to Schwandt and others, to redress this, one must first distinguish between social science theories in general, evaluation theories, and program theories. Second, one must give “theory” a more everyday, practical meaning.

To this end, Schwandt discusses how practitioners “theorize” for every case, subjecting “the beliefs, ideas, and the justifications they give for their ongoing practical activities to rational criticism” (p. 33). Here, conceptual or theoretical knowledge serves “as heuristics, ‘tools to think with’” (p. 33). This relates to what has been called reflective thought, reflective practice, critical thinking, and evaluative thinking (p. 67). In this light, hopefully all evaluators can see theory as an essential part of their practice. As Schwandt writes, paraphrasing Kant, “experience without theory is blind, but theory without experience is mere intellectual play” (p. 34). In essence, every page of Schwandt’s book is about connecting theory and practice, since such praxis is a prerequisite for practical wisdom.

For me, one major purpose of Schwandt’s thesis on theory-practice integration for practical wisdom—or a life of the mind for practice—is to illuminate how evaluators should engage with values and valuing. Scriven (2016, p. 29) has lamented the ironic “phenomenon of valuephobia” in evaluation. Despite the ubiquity of his longstanding definition of evaluation based on the determination of merit, worth, and significance, Scriven’s analysis of the major schools of thought in evaluation—such as those championed by Alkin, Rossi and Freeman, Stake, Cronbach, and others—finds that almost all of them “can be seen as a series of attempts to avoid direct statements about the merit or worth of things” (Scriven, 1993, p. 8). Schwandt, on the other hand, does not suffer from valuephobia.

Contradicting those schools of thought in evaluation that do not position the valuing of merit, worth, and significance as their central purpose, and against a backdrop of increased attention within the field on how and when evaluators make value judgments, Schwandt unabashedly proclaims his position on valuing. The prologue opens with Scriven’s classic definition of evaluation as “the act of judging the value, merit, worth, or significance of things” (p. 1). Later Schwandt specifies, “This book is based on the objectivist premise that judgments of policy or program value can be and should be made on the basis of evidence and argument” (p. 46). His writing on this topic is strong because he offers a more nuanced discussion than most. He considers how criteria are established, how competing criteria can be juggled, and how evaluative syntheses unfold. This begins to address the irony that “for a practice concerned with making warranted judgments of value, the lack of explicit justification for a synthesis procedure continues to be regarded as the Achilles’ heel of the practice” (p. 59).

I find it interesting that Schwandt takes as self-evident his objectivist “strong decision-support” view (whereby the role of the evaluator is to make value judgments to support decisions, rather than, for instance, reporting on participants’ experiences with the program or the extent to which the program met its goals); this potentially leaves behind those evaluators who either do not try to make value judgments, or who subscribe to Scriven’s “determination of merit, worth, and significance” definition but never quite arrive at actual value judgments in their reports. Schwandt might respond that neither of these two groups is actually doing evaluation.

Perhaps the book’s only flaw is that it ends too abruptly. In the closing pages, Schwandt offers some thoughts on what types of formal education and training are needed to cultivate a life of the mind for practice. He briefly touches on the importance of a holistic university education in addition to specialized technical training, but stops short of offering specifics. Then again, the entire book is a suggestion of how to incorporate practical wisdom into evaluation; it is thus up to us to put Schwandt’s suggestions into practice.

 

References

Argyris, C. (2004). Reasons and rationalizations: The limits to organizational knowledge. Oxford, UK: Oxford University Press.

Astbury, B. (2016). Reframing how evaluators think and act: New insights from Ernest House. Evaluation, 22(1), 58-71.

Campbell, D. T. (1991). Methods for the experimenting society. Evaluation Practice, 12(3), 223-260.

Dahler-Larsen, P. (2011). The evaluation society. Stanford, CA: Stanford University Press.

Donaldson, S. I., Christie, C. A., & Mark, M. M. (Eds.) (2015). Credible and actionable evidence: The foundation for rigorous and influential evaluations (2nd ed.). Los Angeles: Sage.

House, E. R. (2015). Evaluating: Values, biases, and practical wisdom. Charlotte, NC: Information Age Publishing.

National Research Council. (2012). Using science as evidence in public policy. Committee on the Use of Social Science Knowledge in Public Policy. K. Prewitt, T. A. Schwandt, & M. L. Straf (Eds.). Division of Behavioral and Social Sciences and Education. Washington, DC: National Academies Press.

Schön, D. A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.

Schwandt, T. A. (2008). Educating for intelligent belief in evaluation. American Journal of Evaluation, 29(2), 139-150.

Schwandt, T. A. (2014). On the mutually informing relationship between practice and theory in evaluation. American Journal of Evaluation, 35(2), 231-236.

Scriven, M. (1993). Hard-won lessons in program evaluation. New Directions for Program Evaluation, 58.

Scriven, M. (2016). Roadblocks to recognition and revolution. American Journal of Evaluation, 37(1), 27-44.

Sullivan, W. M., & Rosin, M. S. (2008). A new agenda for higher education: Shaping a life of the mind for practice. San Francisco: Jossey-Bass.

 


Fostering Evaluative Thinking (part 2)

Many months ago, I promised more details on how my colleagues and I have been working to promote evaluative thinking. Now, inspired by an excellent Stanford Social Innovation Review blog post mentioning the importance of evaluative thinking, I’m finally getting to it. The post, “How Evaluation Can Strengthen Communities,” is by Kien Lee and David Chavis, principal associates with Community Science.

They describe how—in their organization’s efforts to build healthy, just, and equitable communities—supporting evaluative thinking can provide “the opportunity for establishing shared understanding, developing relationships, transforming disagreements and conflicts, engaging in mutual learning, and working together toward a common goal—all ingredients for creating a sense of community.” Along with Jane Buckley and Guy Sharrock, in our work to promote evaluative thinking in Catholic Relief Services and other community development organizations, we have definitely seen this happen as well.

But how does one support evaluative thinking? On aea365 and in an earlier post here, we share some guiding principles we have developed for promoting evaluative thinking. Below, I briefly introduce a few practices and activities we have found to be successful in supporting evaluative thinking (ET). Before I do that, though, I must first give thanks and credit to both the Cornell Office for Research on Evaluation, whose Systems Evaluation Protocol guides our approach to articulating theories of change and has been instrumental in our ET work, and to Stephen Brookfield, whose work on critical reflection and teaching for critical thinking has opened up new worlds of ET potential for us and the organizations with which we work! Now, on to the practices and activities:

  • Create an intentional ET learning environment
    • Display logic models or other theory of change diagrams in the workplace—in meeting rooms, within newsletters, etc.
    • Create public spaces to record and display questions and assumptions.
    • Post inspirational questions, such as, “How do we know what we think we know?” (as suggested by Michael Patton here).
    • Highlight the learning that comes from successful programs and evaluations and also from “failures” or dead ends.
  • Establish a habit of scheduling meeting time focused on ET practice
    • Have participants “mine” their logic model for information about assumptions and how to focus evaluation work (for example, by categorizing outcomes according to stakeholder priorities) (Trochim et al., 2012).
    • Use “opening questions” to start an ET discussion, such as, “How can we check these assumptions out for accuracy and validity?” (Brookfield, 2012, p. 195); “What ‘plausible alternative explanations’ are there for this finding?” (see Shadish, Cook, & Campbell, 2002, p. 6).
    • Engage in critical debate on a neutral topic.
    • Conduct a media critique (critically review and identify assumptions in a published article, advertisement, etc.) (an activity introduced to us by evaluation capacity building pioneer Ellen Taylor-Powell).
  • Use role-play when planning evaluation work
    • Conduct a scenario analysis (have individuals or groups analyze and identify assumptions embedded in a written description of a fictional scenario) (Brookfield, 2012).
    • Take on various stakeholder perspectives using the “thinking hats” method in which participants are asked to role play as a particular stakeholder (De Bono, 1999).
    • Conduct an evaluation simulation (simulate data collection and analysis for your intended evaluation strategy).
  • Diagram or illustrate thinking with colleagues
    • Have teams or groups create logic and pathway models (theory of change diagrams or causal loop diagrams) together (Trochim et al., 2012).
    • Diagram the program’s history.
    • Create a system, context and/or organization diagram.
  • Engage in supportive, critical peer review
    • Review peer logic models (help identify leaps in logic, assumptions, strengths in their theory of change, etc.).
    • Use the Critical Conversation Protocol (a structured approach to critically reviewing a peer’s work through discussion) (Brookfield, 2012).
    • Take an appreciative pause (stop to point out the positive contributions, and have individuals thank each other for specific ideas, perspectives or helpful support) (Brookfield, 2012).
  • Engage in evaluation
    • Ensure that all evaluation work is participatory and that members of the organization at all levels are offered the opportunity to contribute their perspectives.
    • Encourage members of the organization to engage in informal, self-guided evaluation work.
    • Ensure access to the tools and resources necessary to support all formal and informal evaluation efforts (including support from external evaluators, ECB professionals, data analysts, etc.).

What other techniques and practices have you used to promote and support evaluative thinking?


A theory of change ‘pathway model’ from CRS Zambia, helping practitioners to identify and critically reflect on assumptions.

Note: The ideas above are presented in greater detail in a recent article in the American Journal of Evaluation:

Buckley, J., Archibald, T., Hargraves, M., & Trochim, W. M. (2015). Defining and teaching evaluative thinking: Insights from research on critical thinking. American Journal of Evaluation. Advance online publication. doi:10.1177/1098214015581706

—————————-

References:

Brookfield, S. (2012). Teaching for critical thinking: Tools and techniques to help students question their assumptions. San Francisco, CA: Jossey-Bass.

De Bono, E. (1999). Six thinking hats. London: Penguin.

Shadish, W., Cook, T., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

Trochim, W., Urban, J. B., Hargraves, M., Hebbard, C., Buckley, J., Archibald, T., Johnson, M., & Burgermaster, M. (2012). The Guide to the Systems Evaluation Protocol (V2.2). Ithaca, NY. Retrieved from https://core.human.cornell.edu/research/systems/protocol/index.cfm

Fostering Evaluative Thinking (part 1)

Linda Keuntje recently launched an excellent discussion on LinkedIn, in the group Monitoring & Evaluation Professionals, with the question: “Does anyone have any experiences they can share increasing the amount of evaluative thinking in their organization?” Anyone who knows me could imagine I was excited to jump into the conversation! (And thanks, Sheila Robinson, for alerting me to the discussion!)

One thing I noticed is that, in order to discuss Linda’s original question, those of us posting to the discussion were all implicitly or explicitly talking about what we think evaluative thinking even IS.

I think of evaluative thinking as critical thinking applied in the context of evaluation: motivated by an attitude of inquisitiveness and a belief in the value of evidence, it involves identifying assumptions, posing thoughtful questions, pursuing deeper understanding through reflection and perspective taking, and informing decisions in preparation for action. And when I say “in the context of evaluation,” I mean evaluation in a very broad sense, such that it incorporates any M&E, learning, accountability, and even general organizational development and organizational management functions.

Promoting or increasing evaluative thinking is something my colleagues and I have been working on a great deal lately. For instance, in addition to a really lively and productive professional development workshop at AEA a couple of weeks ago, I have some experience working with Guy Sharrock of Catholic Relief Services (CRS) on this topic in Ethiopia and Zambia. We recently facilitated 5-day workshops in both countries focused specifically on promoting evaluative thinking among CRS and partner organization staff (including both staff who do and who do not have formal responsibilities for M&E).

Without going into the specific activities and experiences we facilitated (I’ll save those for part 2), let me share some of the principles that guided our approach:

  1. Promoters of evaluative thinking should be opportunistic about engaging learners in evaluative thinking processes in a way that builds on and maximizes intrinsic motivation.
  2. Promoting evaluative thinking should incorporate incremental experiences, following the developmental process of “scaffolding.”
  3. Evaluative thinking is not an inborn skill, nor does it depend on any particular educational background; therefore, promoters should offer opportunities for it to be intentionally practiced by all who wish to develop as evaluative thinkers.
  4. Evaluative thinkers must be aware of—and work to overcome—assumptions and belief preservation.
  5. To best learn to think evaluatively, learners should apply and practice the skill in multiple contexts and alongside peers and colleagues.

Based on requests from people who have participated in evaluative thinking workshops with me, I have recently created an online community to share ideas and resources around evaluative thinking. If you are interested in joining, please email me or comment with your email and I will add you.

Evaluative Thinking at AEA2014

The annual American Evaluation Association (AEA) conference in Denver is right around the corner! I get excited to attend this great event every year—it is a wonderful opportunity to learn, to make new friends, and to visit with old friends—and this year I am especially excited to be facilitating a preconference professional development workshop entitled “Evaluative Thinking: Principles and Practices to Enhance Evaluation Capacity and Quality” on Wednesday, October 15.

As discussed in a previous post (long, long ago …), evaluative thinking (ET) is an emergent topic in the field of program evaluation. It has been defined in many ways, yet in brief, ET has to do with thinking critically, valuing evidence, questioning assumptions, taking multiple perspectives, posing thoughtful questions, and pursuing deeper understanding in preparation for informed action. It has much in common with reflective practice. It is key to evaluation, yet also has a place in all of an organization’s processes. In a more complexity-aware world, ET is a way to instill rapid learning and feedback cycles in the ongoing management of programs and organizations.

In addition to my session, I have identified some ET-related sessions at the conference that I would like to check out:

Thinking about Thinking: Using Metacognition to Improve Program Evaluation, with Rhonda Jones, Thursday, 1:00 PM – 1:45 PM in 106 (Organizational Learning & Evaluation Capacity Building TIG)

Infusing Evaluative Thinking into Programs and Organizations, with Jane Kwon, David Shellard, & Boris Volkov, Friday, 4:30 PM – 5:15 PM in Mineral E (Internal Evaluation TIG)

Integrating Systems and Tools into the Grantmaking Process to Promote Evaluative Learning, with PeiYao Chen & Kelly Gannon, Thursday, 2:00 PM – 2:45 PM in Room 103 (Nonprofit and Foundations TIG)

Striking a Balance: Walking the Fine Line between the Complexities of Evaluation Practice and Systems to Facilitate Evaluative Thinking, with Paola Babos, Matthew J. Birnbaum, Jasmin Rocha, Joanna Kocsis, & Angela R. Moore, Friday, 8:00 AM – 9:30 AM in Capitol 5 (Organizational Learning & Evaluation Capacity Building TIG)

Rethinking Evaluative Reflection: Promoting Creativity and Critical Thinking in Youth, with Janet Fox & Melissa Cater, Thursday, 1:00 PM – 1:45 PM in Mineral B (Youth Focused Evaluation TIG)

Are you presenting on something related to ET that I have missed? Please let me know!

Stay tuned for a post following the conference in which I will summarize the new lessons I learn about ET. Also, stay tuned for another upcoming post on principles and practices that can be used to promote ET among non-evaluators (based on my AEA session and some recent ET workshops with Catholic Relief Services in Ethiopia and Zambia).

In addition to my perennial interest in all things ET, at this year’s conference I’m also hoping to learn new ideas and approaches regarding some other hot topics I’m excited about: data viz, culturally responsive evaluation, gold standard debates, new directions in research on evaluation, and more.

I hope to see you in Denver!

Evaluative Thinking Lit Review

As Ann Emery’s comment suggests, evaluative thinking (ET) is an important part of the work that many of us do, and we know it is mentioned in the evaluation literature, but few of us have the time to dig through that literature to see what people are saying about it. As an ET nerd, I’ve done that for you.

Below, in an example of Chris Lysy’s Evaluation/Research Blog Style #4, I offer you something like an annotated bibliography of ET. Disclaimer: This lit review is neither systematic nor comprehensive; however, it does provide a fairly good review of some of the more substantive engagements with the idea of evaluative thinking that have appeared in evaluation journals, books, and reports over the past few years.

Enjoy!

In her 2007 presidential address to the American Evaluation Association (AEA), Hallie Preskill brought ET to the big stage and highlighted the construct’s importance, asking, “How do we build the capacity of individuals, teams, and organizations to think evaluatively and engage in evaluation practice?” (p. 129). But what does she mean by “to think evaluatively,” and how should we answer that “how” question? The quotations below offer some answers to those questions.

Michael Quinn Patton, in an interview with Lisa Waldick of the International Development Research Center (IDRC), describes ET as including “a willingness to do reality testing, to ask the question: how do we know what we think we know? … Evaluative thinking is not just limited to evaluation projects…It’s an analytical way of thinking that infuses everything that goes on.”

In fact, the IDRC has been at the forefront of working with and writing about ET. For instance, Fred Carden and Sarah Earl describe how ET was infused into the culture of the IDRC, primarily through changes made to the organization’s reporting structure. They echo Patton’s description of ET:

“Evaluative thinking shifts the view of evaluation from only the study of completed projects and programs toward an analytical way of thinking that infuses and informs everything the center does. Evaluative thinking is being clear and specific about what results are sought and what means are used to achieve them. It ensures systematic use of evidence to report on progress and achievements. Thus information informs action and decision making.” (p. 72, n. 2)

Tricia Wind (also of the IDRC) and Carden write that “Evaluative thinking involves being results oriented, reflective, questioning, and using evidence to test assumptions” (p. 31).

Gavin Bennett and Nasreen Jessani—editors of an IDRC publication on knowledge translation—agree. They define ET as a “questioning, reflecting, learning, and modifying … conducted all the time. It is a constant state-of-mind within an organization’s culture and all its systems” (p. 24).

In addition to the IDRC, the Bruner Foundation is another organization that has led the way in working with ET. They also emphasize the necessity that ET (ideally) be ubiquitous within an organization, not limited solely to evaluation tasks. In their report on Integrating Evaluative Capacity into Organizational Practice, Anita Baker and Beth Bruner describe ET as “a type of reflective practice” that integrates the same skills that characterize good evaluation—“asking questions of substance, determining what data are required to answer specific questions, collecting data using appropriate strategies, analyzing collected data and summarizing findings, and using the findings”—throughout all of an organization’s work practices (p. 1).

All of these statements about how ET should be pervasive throughout an organization evoke the Jean King quote that gave this blog its title: “The concept of free-range evaluation captures the ultimate outcome of ECB: evaluative thinking that lives unfettered in an organization” (p. 46).

Along these lines, Boris Volkov discusses evaluative thinking as an important component of the work of internal evaluators. He proposes the notion of “the evaluation meme” to help conceptualize how ideas, behaviors, and styles of evaluation can spread through an organization via the work of an internal evaluator:

“Modern internal evaluators will understand how to integrate evaluation into programs and staff development in a way that reinforces the importance of evaluation, contributes to its habituation, but at the same time prevents its harmful routinization (senseless, repetitive use of the same techniques or instruments). Evaluative thinking is not only a process, but also a mind-set and capacity, in other words, a person’s or organization’s ability, willingness, and readiness to look at things evaluatively and to strive to utilize the results of such observations. A challenging role for the internal evaluators will be to promulgate such a mind-set throughout the entire organization.” (2011, p. 38)

Jane Davidson, Michael Howe, and Michael Scriven, in their chapter in Foundations and Evaluation: Contexts and Practices for Effective Philanthropy, also articulate the multidimensionality of this construct (as do Preskill, Sandy Taut, and others). They define ET as “a combination of commitment and expertise, involving an understanding of the performance gap [between the current level of performance and a desired level of performance] and knowing how to gauge it” (pp. 260-261). Essentially, they focus on two distinct components of ET: evaluative know-how and passion for improvement (or, an evaluative attitude).

In Utilization-Focused Evaluation, Patton represents many of the sentiments contained within the quotations above, in cartoon form (p. 192):

way of thinking

Again, these are just some of the ways in which ET is discussed in the evaluation literature. If ET is part of your work, do you think these quotations represent your experiences and perceptions well? What additional perspectives on what ET is and why it’s important would you add?

Evaluative Thinking?

As I mentioned previously, this blog’s primary focus (for now) is on “evaluative thinking.” That raises the questions: What is evaluative thinking, and why is it worth talking about? To begin answering those questions, I’d like to first share how I became so interested in this topic.

Jane Buckley—my friend and colleague at the Cornell Office for Research on Evaluation—began exploring the idea of evaluative thinking (ET) in 2010 and I quickly joined her. We were working together to build the evaluation capacity of non-formal educators. We found, time and time again, that after participating in trainings and workshops on various topics related to evaluation planning and implementation, some folks seemed to have “Aha! moments” and really get it, while other folks didn’t.

We began asking ourselves what the difference was between those two groups of people. What was the je ne sais quoi that seemed to be so crucial for people who would go on to do meaningful, sustainable, quality evaluation?

The difference was evaluative thinking. That is, we found that for evaluation capacity building (ECB) to really work, evaluation know-how and skills were not enough. People really needed to tap into their ability to think evaluatively about their programs for ECB to be successful.

Reflecting on our work and drawing from various literatures (from evaluation but also from education research, cognitive science, and critical thinking), we pulled together a few conference presentations and began to frame ET this way:

Evaluative Thinking is a cognitive process in the context of evaluation, motivated by an attitude of inquisitiveness and a belief in the value of evidence, that involves skills such as identifying assumptions, posing thoughtful questions, pursuing deeper understanding through reflection and perspective taking, and making informed decisions in preparation for action.

Since we initially became interested in ET, we’ve discovered that it is an idea on the rise. The term is used more and more frequently in the evaluation literature and at evaluation conferences. For a couple of years now, we’ve been discussing this idea with our friend and colleague Anne Vo, Associate Director of the University of California Educational Evaluation Center at UCLA, who worked on explicating the construct in her dissertation study. An ET community of practice is emerging; stay tuned for guest posts from fellow members of this ET community.

Stay tuned also for my next post that will focus on the many (and multiple) ways the term “evaluative thinking” is used in the evaluation literature.

On the origins of this blog

After returning home from AEA 2013 in Washington, D.C. (the annual conference of the American Evaluation Association), I decided to try to start blogging. I hear there was a great session on blogging, which I unfortunately did not attend. Regardless, I have now been adequately influenced by some of the excellent blogging I see going on in the AEA community for me to give it a try.

Specifically, I think of people like Stephanie Evergreen, Ann Emery, and others who I’ve known for a few years. Their blogs allow me to continue learning from them and to stay partially in touch during the year (i.e., the long months in-between AEA conferences). I also think of folks like Sheila Robinson and Chi Yan Lam, both of whom I connected with at the conference and have been able to follow up with via their blogs. Over the years, I’ve also enjoyed Patricia Rogers & Jane Davidson’s Genuine Evaluation (especially the “Friday Funny” installments).

Chris Lysy has been instrumental in convincing me (or pushing me over the edge) to give this a try. His lists, such as “12 blogging mistakes made by researchers and evaluators,” and “16 Blogging styles for researchers and evaluators,” help me think about the bigger picture of this type of blogging. I see a number of styles and purposes in his list of 16 that make sense to me; I will probably create some sort of hybrid between a few of those. His Eval Central site is also really cool, and helpful to people like me who are new to evaluation blogging. And then there are his cartoons. This one in particular is salient to my decision to blog:

Handling Vulnerability

So, I offer my thanks to all of those folks for helping me get this started. If I do anything of worth here, they deserve some credit; all blame for worthlessness herein should fall solely on me.

The blog’s title, “Free-Range Evaluation,” is from a Jean King phrase that I love. In an article on evaluation capacity building (ECB) and process use, she writes, “The concept of free-range evaluation captures the ultimate outcome of ECB: evaluative thinking that lives unfettered in an organization.” I love that. I have a wide array of interests in evaluation, but probably strongest among those is my interest in ECB. Specifically, along with Jane Buckley and other colleagues at the Cornell Office for Research on Evaluation, I have been focusing on the notion of evaluative thinking (ET). Anyone who has seen Jane and me present on ET has heard us make reference to free-range evaluation. Now, this blog will hopefully be one place where people interested in ET and related issues can get together and exchange ideas.