Wednesday, August 23, 2006

Why Do We Use Logic Models?

We often use logic models when we do evaluations, and I must admit, I don't often wonder why I do it. The value has always seemed self-evident to me. On the AEA listserv, Sharon Stout put together a summary of what logic models are good for, and I agree with all of it. She writes:

“Below is my synopsis of Jonathan Morell's synopsis (plus later additions by Patricia Rogers), with additional text taken from a post of Doug Fraser's, and a short bit credited earlier to Joseph Wholey.

The logic model serves four key purposes:

-- Exploring what is valued (values clarification) – e.g., as in building consensus in developing a logic model, how elements interact in theory, and how this program compares;

-- Providing a conceptual tool to aid in designing an evaluation, research project, or experiment to use in supporting – to the extent possible – or falsifying a hypothesized causal chain;

-- Describing what is, making gaps between what was supposed to happen and what actually happened more obvious, and more likely to be observed, measured, or investigated in future research or programming; and

-- Finally, developing a logic model may make evaluation unnecessary, as sometimes the logic model shows that the program is so ill-conceived that more work needs to be done before the program can be implemented – or, if implemented, before it is evaluated.”


Michael Scriven then took the discussion further, and again I absolutely agree with everything he says:

“Good job collecting the arguments for logic models together. Of course, they do look pretty pathetic when stripped down a bit--to the sceptical eye, at least. It might not be a bad idea to gather some alternative views of ways to achieve the claimed payoffs, if you’re after an overview. Here’s an overcompressed effort:

“Key purposes of logic models,” as you've extracted them from the extended discussion:
1. Values clarification. Alternative approach: identify assumed or quoted values, and clarify them as values, mainly by identifying the standards on their dimensions that you will need in order to generate evaluative conclusions; a considerably more direct procedure.

2. An aid in designing the evaluation. Alternative approach: Do it as you would do it for a black box, since you need to have that skill anyway, and it’s simpler and less likely to get you off track (i.e., look for how the impact and process of the program score on the scales you have worked up for needs and other values).

3. Describing what ‘is’. The program theory isn’t part of what is, so do the program description directly and avoid getting embroiled in theory fights.

4. Possibly avoid doing the evaluation. None of your business whether they’re great thinkers; they have a program, they want it evaluated; OK, do your job.

Then there are other relevant considerations, like:

5. Reasons for NOT working out the logic model.
Reason A: you don’t need it; see 1 above.
Reason B: your job is evaluating, not explaining, so you shouldn’t be doing it.
Reason C: doing it takes a lot of time and money in many cases, so it often cuts into the time for doing the real evaluation; if you budgeted that time, you’ll get underbid, and if you didn’t, you’ll go broke.
Reason D: in many cases, the logic model doesn’t make sense but the program works, so what’s the payoff from finding that the model is no good or improving it--payoff meaning payoff for the application field that wants something that works and doesn’t care whether it’s based on the power of prayer, the invocation of demons, good science, or simple witchcraft. [NOW THIS HAD ME IN STITCHES!] Think about helping the people first, adding to science later, on a different contract.

In general, the obsession with logic models is running a serious risk of giving a bad reputation to evaluators, and to evaluation, since evaluators are not expert program designers, nor the top experts in the subject-matter field who are often the only ones who can produce better programs. You want to be a field guru, get a PhD and some other creds in the field and be a field guru; you want to find out if the field gurus can produce a program that works, be an evaluator. Just don’t get lost in the woods because you can’t get the two jobs distinguished (and try not to lure too many other innocents with you into the forest).

Olive branches:
(i) of course, the logic theory approach doesn’t always fail; it's just (mostly) a way of wasting time that sometimes produces a good idea, like doodling or concept mapping (when used out of place);
(ii) of course, it makes sense to listen to the logic theory of the client, since that's part of getting a grip on the context, and asking questions when you hear it may turn up some problems they should sort out. Fine, a bonus service from you. Just don't take fixing the logic model as one of your duties. After all:
(iii) Some of the most valuable additions to science come about from practices like primitive herbal medicine (e.g., the use of quinine, aspirin, curare) or the power of faith (hypnosis, faith-healing) that work although there's no good theory why they work; finding more of those is probably the best way that evaluators can contribute to science. If you require a good theory before you look at whether the program works, you'll never find these gold mines. So, though it may sound preachy, I think your first duty is to evaluate the program, even if your scientific training or recreational interests incline you to try for explanations first.”

My conclusion is that next time, before I go about writing up a logic model “just because that is the way we do evaluations,” I’ll be a bit more critical and consider whether this is an instance where I can do without one.
