Wednesday, August 23, 2006

Why do we Use Logic Models?

We often use logic models when we do evaluations, and I must admit, I seldom stop to wonder why. The value just seems implicit to me. On the AEA listserv, Sharon Stout put together a summary of what logic models are good for, and I agree with all of it. She writes:

“Below is my synopsis of Jonathan Morell's synopsis (plus later additions by Patricia Rogers), with additional text taken from a post of Doug Fraser's thrown in, and a short bit credited earlier to Joseph Wholey.
See below ...

The logic model serves four key purposes:

-- Exploring what is valued (values clarification) – e.g., as in building consensus in developing a logic model, how elements interact in theory, and how this program compares;

-- Providing a conceptual tool to aid in designing an evaluation, research project, or experiment to use in supporting -- to the extent possible – or falsifying a hypothesized causal chain;

-- Describing what is, making gaps between what was supposed to happen and what actually happened more obvious, and more likely to be observed, measured, or investigated in future research or programming; and

-- Finally, developing a logic model may make evaluation unnecessary, as sometimes the logic model shows that the program is so ill-conceived that more work needs to be done before the program can be implemented – or if implemented, before the program is evaluated.”


Michael Scriven then took the discussion further, and again I absolutely agree with everything he says:

“Good job collecting the arguments for logic models together. Of course, they do look pretty pathetic when stripped down a bit--to the sceptical eye, at least. It might not be a bad idea to gather some alternative views of ways to achieve the claimed payoffs, if you’re after an overview. Here’s an overcompressed effort:

“Key purposes of logic models,” as you've extracted them from the extended discussion:
1. Values clarification. Alternative approach: identify assumed or quoted values, and clarify them as values, mainly by identifying the standards on their dimensions that you will need in order to generate evaluative conclusions; a considerably more direct procedure.

2. An aid in designing the evaluation. Alternative approach: Do it as you would do it for a black box, since you need to have that skill anyway, and it’s simpler and less likely to get you off track (i.e., look for how the impact and process of the program score on the scales you have worked up for needs and other values).

3. Describing what ‘is’. The program theory isn’t part of what is, so do the program description directly and avoid getting embroiled in theory fights.

4. Possibly avoid doing the evaluation. None of your business whether they’re great thinkers; they have a program, they want it evaluated; OK, do your job.

Then there are other relevant considerations, like:

5. Reasons for NOT working out the logic model.
Reason A: you don’t need it, see 1 above.
Reason B: your job is evaluating not explaining, so you shouldn’t be doing it.
Reason C: doing it takes a lot of time and money in many cases, so it often cuts into the time for doing the real evaluation; if you budgeted that time, you’ll get underbid, and if you didn’t, you’ll go broke.
Reason D: in many cases, the logic model doesn’t make sense but the program works, so what’s the payoff from finding that the model is no good or improving it--payoff meaning payoff for the application field that wants something that works and doesn’t care whether it’s based on the power of prayer, the invocation of demons, good science, or simple witchcraft. [NOW THIS HAD ME IN STITCHES!] Think about helping the people first, adding to science later, on a different contract.

In general, the obsession with logic models is running a serious risk of giving evaluators, and evaluation, a bad reputation, since evaluators are not expert program designers, nor the top experts in the subject matter field who are often the only ones able to produce better programs. You want to be a field guru, get a PhD and some other creds in the field and be a field guru; you want to find out if the field gurus can produce a program that works, be an evaluator. Just don’t get lost in the woods because you can’t get the two jobs distinguished (and try not to lure too many other innocents with you into the forest).

Olive branches:
(i) of course, the logic theory approach doesn’t always fail; it's just (mostly) a way of wasting time that sometimes produces a good idea, like doodling or concept mapping (when used out of place);
(ii) of course, it makes sense to listen to the logic theory of the client, since that's part of getting a grip on the context, and asking questions when you hear it may turn up some problems they should sort out. Fine, a bonus service from you. Just don't take fixing the logic model as one of your duties. After all:
(iii) Some of the most valuable additions to science come about from practices like primitive herbal medicine (e.g. the use of quinine, aspirin, curare) or the power of faith (hypnosis, faith-healing) that work although there's no good theory why they work; finding more of those is probably the best way that evaluators can contribute to science. If you require a good theory before you look at whether the program works, you'll never find these gold mines. So, though it may sound preachy, I think your first duty is to evaluate the program, even if your scientific training or recreational interests incline you to try for explanations first.”

My conclusion is that next time, before I go about writing up a logic model “just because that is the way we do evaluations”, I’ll be a bit more critical and consider whether this is perhaps an instance where I don’t actually need a logic model.

Thursday, August 17, 2006

Cultural Competence of Evaluators

Hazel Symonette from the University of Wisconsin recently visited South Africa and presented M&E workshops in collaboration with the South African Monitoring and Evaluation Association. Unfortunately my diary did not allow me to attend any of the workshops, but I was lucky enough to have some informal interaction with her. This made me think about the cultural competence required of evaluators. Look, we are long past the positivistic view in which the evaluator was believed to be the expert, able to look at people's behaviour and responses and categorise them objectively. What Hazel’s visit reinforced for me is that cultural competence, and identifying the lenses through which we look, is extremely important if we want to do a good job as evaluators.

This morning I read an article in the paper about learners in Mpumalanga schools:

'Teachers are bewitching us' 2006-08-16 19:07:56 http://www.mweb.co.za/news/?p=top_article&i=224129

Nelspruit - There's a growing tendency among Mpumalanga school pupils to accuse their teachers of witchcraft and then start a riot or boycott class. Pupils at four schools have rioted in separate incidents since March, said provincial education spokesperson Hlahla Ngwenya on Wednesday. The latest incident happened on Monday when pupils at Mambane secondary school in Nkomazi, south of Malelane, refused to attend classes after allegations that teachers were bewitching them. The pupils returned to class on Tuesday. "Our preliminary reports indicate that the pupils protested after some of their peers died in succession over a short period," said Ngwenya. "They seem to believe this was the doing of their teachers." He said the department was investigating the incident and that pupils found guilty of instigating the boycott faced expulsion.


Imagine I was an evaluator in that community, working with the schools on the evaluation of some whole-school development initiative. From my Westernised perspective witchcraft is just silly, and people who believe in witchcraft are obviously mistaking one issue for another. Do I have the competence to be the evaluator in such a situation? How valid would my conclusions be if I were in that situation?

I would probably have searched for alternative explanations, or more culturally acceptable explanations – i.e., there is obviously a problem in the relationship between the educators and the learners, and there seems to be a range of very unfortunate circumstances (possibly a problem with HIV/AIDS?) in that community that needs attention. Just because I don’t accept their explanation and choose to come up with other explanations that are more culturally acceptable in my frame of reference (and probably in the frame of reference from which the programme donors come), does that mean mine is the correct answer? Isn’t there maybe something beyond my perspective?

In my time as an evaluator I have come across a couple of other similarly absurd sets of behaviours – teachers who toyi-toyi about catering while attending a government-sponsored training session; project beneficiaries who refuse to disclose their names during interviews about an NGO’s performance; clients too scared to say anything for fear of negative consequences. Maybe these “absurdities”, when I recognise them, are a cue that I am out of my league?

Wednesday, August 02, 2006

Public Sector Accountability and Performance Measurement

"I have been working now for about 20 years in the area of evaluation and performance measurement, and I am so discouraged about performance measurment and results reporting and its supposed impact on accountability that I am just about ready to throw in the towel. So I have had to go right back to the basics of reporting and democracy to try to trace a line from what was intended to what we have ended up with." (Karen Hicks on 28 July 06 on the AEA Evaltalk listserv).

This made me think: in our government, at least in the departments I work with, this is also quite a prominent issue. We do so much reporting and performance measurement, but does it help us to be more accountable? Why do we do all of this reporting, and to whom do we report?

A national department's strategic planning and performance reporting manual explains the intention behind government M&E:

"Every five years the citizens of South Africa vote in national and provincial elections in order to choose the political party they want to govern the country or the province for the next five years. In essence the voters give the winning political party a mandate to implement over the next five years the policies and plans it spelt out in its election manifesto.
Following such elections the majority party (or majority coalition) in the National Assembly elects a President, who then selects a new Cabinet. The President and the Cabinet have the responsibility (mandate) of implementing the majority party’s election manifesto nationally. While at the provincial sphere, the majority party (or majority coalition) in each provincial legislature elects a Premier, who selects a new Executive Committee. The Premier and the Executive Committee have the responsibility (mandate) of implementing the majority party’s election manifesto within the province".

The governing party's election manifesto gets translated into policy and plans, and the strategic plans and annual performance plans are particularly key in this regard. The strategic plans spell out, for a five-year period, what the department's goals, objectives and priorities will be. Since the idea that "what gets measured, gets managed" has taken firm root in the South African Government, Government Departments are also encouraged to set Measurable Objectives and Performance Indicators for all of the goals and objectives in the strategic plan. These Measurable Objectives and Indicators are then used to reflect annually on the performance of a Department.

A common problem with this approach is that Departments want to set indicators that measure the outcomes of all their activities at activity level, rather than at programme level. This leads to the unfortunate result of a million and ten indicators that are too unwieldy to communicate and analyse effectively. Other common problems include misalignment between indicators and the objectives they are supposed to measure, and objectives with no indicators at all because the available data does not allow for effective measurement.

Besides all of these difficulties, though, the biggest drawback of this type of reporting for accountability is that it comes down to government reflecting on its own performance against its own plans. For the sake of democracy it is important that reporting goes beyond this and places information in the hands of the public that allows them not only to reflect critically on government's success in implementing its plans, but also on the appropriateness of those plans and the prioritisation of objectives in the first place.

South Africa has come up with some sort of solution to this challenge by instituting the Public Service Commission (PSC) with the mandate to evaluate the public service annually against nine constitutionally enshrined principles. The result of this evaluation is the PSC's annual State of the Public Service Report. The 2006 report is available at:
http://www.psc.gov.za/docs/reports/2006/designed%20report%20220506.pdf