Wednesday, January 31, 2007

Monitoring without Indicators - Most Significant Change

On the Pelican list today, someone sent through a handy reference to something that I think is immensely useful for gathering proof and evidence when you don't have indicators and stacks of pre-developed evaluation mechanisms.

Check it out at:
http://www.mande.co.uk/docs/MSCGuide.htm and http://www.mande.co.uk/MSC.htm


The guide (prepared by Rick Davies and Jess Dart) explains the MSC technique as follows:

"The most significant change (MSC) technique is a form of participatory monitoring and evaluation. It is participatory because many project stakeholders are involved both in deciding the sorts of change to be recorded and in analysing the data. It is a form of monitoring because it occurs throughout the program cycle and provides information to help people manage the program. It contributes to evaluation because it provides data on impact and outcomes that can be used to help assess the performance of the program as a whole.
Essentially, the process involves the collection of significant change (SC) stories emanating from the field level, and the systematic selection of the most significant of these stories by panels of designated stakeholders or staff. The designated staff and stakeholders are initially involved by ‘searching’ for project impact. Once changes have been captured, various people sit down together, read the stories aloud and have regular and often in-depth discussions about the value of these reported changes. When the technique is implemented successfully, whole teams of people begin to focus their attention on program impact."

Certainly this looks like a very promising technique!

Tuesday, January 30, 2007

Report Back: Making Evaluation Our Own

A special stream on 'Making Evaluation Our Own' was held at the AfrEA conference. After the conference, a small committee of African volunteers worked to capture some of the key points of the discussion. Thanks to Mine Pabari from Kenya for forwarding a copy!

What do you think of this?


Making Evaluation Our Own: Strengthening the Foundations for Africa-Rooted and Africa Led M&E

Overview & Recommendations to AfrEA

Niamey, 18th January, 2007

Discussion Overview

On 18 January 2007, a special stream was held to discuss the topic:

Making Evaluation our Own: Strengthening the Foundations for Africa-Rooted and Africa-Led M&E. It was designed to bring together African and international experiences in evaluation and development evaluation to help stimulate debate on how M&E, which has generally been imposed from outside, can become Africa-led and Africa-owned.

The introductory session aimed to set the scene for the discussion by considering i) What the African Evaluation Challenges Are (Zenda Ofir); ii) The Trends Shaping M&E in the Developing World (Robert Picciotto); and iii) The African Mosaic and Global Interactions: The Multiple Roles of and Approaches to Evaluation (Michael Patton & Donna Mertens). The last presentations explained, among other things, the theoretical underpinnings of evaluation as it is practiced in the world today.

The next session briefly touched on some of the current evaluation methodologies used internationally, in order to highlight the variety of methods that exist. It also stimulated debate over the controversial initiative on impact evaluation launched by the Center for Global Development in Washington. The discussion then moved on to consider some of the international approaches that are currently useful, or likely to become prominent, in finding evidence about development in Africa (Jim Rugh, Bill Savedoff, Rob van den Berg, Fred Carden, Nancy MacPherson & Ross Conner).

The final session considered some possibilities for developing an evaluation culture rooted in Africa (Bagele Chilisa). It gave some examples of how African culture lends itself to evaluation, and demonstrated that currently used evaluation methodologies could be enriched by considering an African world view.

Key issues emerging from the presentations and discussion formed the basis for the motions presented below:

  • Currently, much of the evaluation practice in Africa is based on external values and contexts and is donor-driven, and accountability mechanisms tend to be directed towards recipients of aid rather than both recipients and providers of aid.
  • For evaluation to make a greater contribution to development in Africa, it needs to address challenges including those related to country ownership; the macro-micro disconnect; attribution; ethics and values; and power relations.
  • A variety of methods and approaches are available and valuable for framing our questions and our ways of collecting evidence. However, before we can select any particular methodology or approach, we first need to re-examine our own preconceived assumptions; our underpinning values and paradigms (e.g. transformative vs. pragmatic); and what is acknowledged as evidence, and by whom.

The lively discussion that ensued led to the appointment of a small group of African evaluators to note down suggested actions that AfrEA could spearhead in order to fill the gap related to Africa-rooted and Africa-led M&E.

The stream acknowledges and extends its gratitude to the presenters for contributing their time to share their experiences and wealth of knowledge. Many thanks also to NORAD for its contribution to the stream, and for its generous offer to support an evaluation that may be used as a test case for an African-rooted approach – an important opportunity to contribute to evaluation in Africa.

In particular, the stream extends much gratitude to Zenda Ofir and Dr. Sully Gariba for their enormous effort and dedication in ensuring that AfrEA had the opportunity to discuss this important topic with the support of highly skilled and knowledgeable evaluation professionals.


Motions

In order for evaluation to contribute more meaningfully to development in Africa, there is a need to re-examine the paradigms that guide evaluation practice on the continent. Africa-rooted and Africa-led M&E requires ensuring that African values and ways of constructing knowledge are recognised as valid. This, in turn, implies that:

  • African evaluation standards and practices should be based on African values & world views.

  • The existing body of knowledge on African values & world views should be central to guiding and shaping evaluation in Africa.

  • There is a need to foster and develop the intellectual leadership and capacity within Africa and ensure that it plays a greater role in guiding and developing evaluation theories and practices.

We therefore recommend the following for consideration by AfrEA:

  • AfrEA guides and supports the development of African guidelines to operationalize the African evaluation standards and, in doing so, ensures that both the standards and the operational guidelines are based on the existing body of knowledge on African values & world views.

  • AfrEA works with its networks to support and develop institutions, such as universities, to enable them to establish evaluation as a profession and meta-discipline within Africa.

  • AfrEA identifies mechanisms by which African evaluation practitioners can be mentored and supported by experienced African evaluation professionals.

  • AfrEA engages with funding agencies to explore opportunities for developing and adopting evaluation methodologies and practices that are based on African values & world views, and advocates for their inclusion in future evaluations.

  • AfrEA encourages and supports the publication and profiling, in scholarly publications, of knowledge generated from evaluation practice within Africa. This may include:

      ◦ Supporting the inclusion of peer-reviewed publications on African evaluation in international journals on evaluation (for example, the publication of a special issue on African evaluation)

      ◦ The development of scholarly publications specifically related to evaluation theories and practices in Africa (e.g. a journal of AfrEA)

Contributors

  • Benita van Wyk – South Africa
  • Bagele Chilisa – Botswana
  • Abigail Abandoh-Sam – Ghana
  • Albert Eneas Gakusi – AfDB
  • Ngegne Mbao – Senegal
  • Mine Pabari – Kenya

More Evaluation Checklists

Last week I put a reference to the UFE checklist on my blog, and today I received a very useful link on the AfrEA listserv with all kinds of other evaluation checklists. Try it out at:

http://www.wmich.edu/evalctr/checklists/checklistmenu.htm

It has checklists for:

* Evaluation Management

* Evaluation Models

* Evaluation Values & Criteria

* Checklists are useful for practitioners because they help you to develop and test your methodology with a view to improving it in the future.
* They are useful for those who commission evaluations because they remind you of what should be taken into account at all stages of the evaluation process.
* I think, however, that checklists like these can be particularly powerful if they become institutionalised in practice – if an organisation requires the checklist to be considered as part of a day-to-day business process.

Monday, January 29, 2007

Making Evaluation our Own

At the AfrEA conference, there was a special stream on 'Making Evaluation our Own'. It aimed to investigate where we are in terms of having Africa-rooted, Africa-led evaluations.

I found it particularly useful because it became patently obvious that there are African world views and African ways of knowing that are not yet exploited for evaluation in Africa. This, of course, brings the whole debate about "African" evaluation theories to bear, and asks which kinds of evaluation theories are currently influencing our practice as evaluators in Africa.

Marvin C. Alkin and Christina A. Christie developed what they call the Evaluation Theory Tree. It splits the prominent (North American) evaluation theorists into three big branches: theories that focus on the use of evaluation, theories that focus on the methods of evaluation, and theories that focus on how we value when evaluating. You can find more information about this at http://www.sagepub.com/upm-data/5074_Alkin_Chapter_2.pdf




A second, slightly updated version of the tree also exists. It was interesting to note that most of my reading about evaluation has been on "Methods" and "Use".

I think that if we are serious about developing our own African evaluation theories, we might need to develop our own African tree. Bob Picciotto mentioned that the African tree might use the branches of the tree above as its roots, and grow its own unique branches.


A small commission from the conference put together a call for action that outlines some key steps to be taken if we hope to make progress soon. Hopefully I can post it at a later stage.

Keep well!

Thursday, January 25, 2007

UFE & The difference between Evaluation and Research

At the recent AfrEA conference I was again reminded of what we are supposed to be doing in evaluation. Consider the word evaluation: it is about valuing something – valuing for the purposes of accountability, and for learning and improvement.

It is not just research, and although some people have indicated that they get irritated with our attempts at distinguishing evaluation from research, I think the distinction is critically important.

Depending on which paradigm you come from, one might argue that research can be the same as evaluation. I don’t argue with that. What I do have a problem with is people approaching evaluations like research projects where the focus is all on “How do we collect evidence?” The methodology is critically important, agreed, and nothing grates on me more than seeing poorly designed evaluation methodologies used to collect “evidence”.

But evaluation is not just about how we collect information. Evaluation is supposed to take it a step further and make evaluative judgments based on the data that were collected. Just describing your evaluation findings without saying what they mean is senseless.

It is all well and good if you find information about the level of maths capacity in rural schools interesting, but an evaluation will go further and indicate whether the project is relevant, effective and efficient, has an impact, and is sustainable or creates sustainable results. Without these additional “valuing” judgments, an evaluation is only a research project that may increase our knowledge but doesn’t help us make decisions.

Something that may help more evaluations be true evaluations is the Utilization-Focused Evaluation approach of Michael Quinn Patton. It is all about ensuring that an evaluation serves its intended purpose for its intended users. Go ahead – google Utilization-Focused Evaluation and see how many hits come up. It is arguably the biggest thing to have hit the evaluation community in the past 30 years, yet many people are blissfully ignorant of it.

For those who commission evaluations, Patton created a checklist that may be of value in making sure that evaluations are useful: www.wmich.edu/evalctr/checklists/ufe.pdf. It might need to be adapted for use in your specific setting, but it definitely asks a couple of pretty critical questions about our evaluations.


Go ahead… I dare you to read up more about UFE (Utilization-Focused Evaluation) and not be excited about the possibilities that evaluation holds!


Have a good day!

PS. I hope to post some more of my thoughts on the AfrEA conference over the next month or so!

Thursday, January 04, 2007

IOCE

The IOCE (International Organisation for Cooperation in Evaluation) has a couple of neat resources on its website:

http://www.ioce.net/resources/reports.shtml

  • The World Bank's Independent Evaluation Group Finds Progress on Growth, but Stronger Actions Needed for Sustainable Poverty Reduction – the World Bank's Independent Evaluation Group (IEG) is releasing its 2006 Annual Report on Operations Evaluation (AROE)
  • Joint UNICEF/IPEN Evaluation Working Paper on "New trends in development evaluation"
  • Resources for Evaluation and Social Research Methods
  • What Constitutes Credible Evidence in Evaluation and Applied Research?
  • When Will We Ever Learn: Recommendations to Improve Social Development through Enhanced Impact Evaluation

Very, Very Usable Evaluation Journal – AJE

I’ve just paged through the December 2006 issue of the American Journal of Evaluation, and once again I am impressed.

It is such a usable journal for practitioners like myself, whilst still meeting the academic requirements that a journal should have. They do this by including:

  • Articles – dealing with topics applicable to the broad field of program evaluation
  • Forum Pieces – a section where people get to present opinions and professional judgments relating to the philosophical, ethical and practical dilemmas of our profession
  • Exemplars – interviews with practitioners whose work demonstrates, in a specific evaluation study, the application of different models, theories and principles described in the evaluation literature
  • Historical Record – analyses of important turning points within the profession, or discussions of historically significant evaluation works
  • Method Notes – shorter papers describing methods and techniques that can improve evaluation practice
  • Book Reviews – reviews of recent books applicable to the broad field of program evaluation

I receive this journal as part of my membership of the American Evaluation Association – at a fraction of what it would cost to buy the publication on its own.


Go ahead – try it out. Here is a link to its archive:

http://aje.sagepub.com/archive/