This is probably the best excuse we have heard for not doing an evaluation. The report goes on to say that a year and a half after the quake, Chemonics hired one full-time M&E officer to monitor this $53 million contract. Needless to say, hiring one person was not sufficient to generate any type of meaningful evaluation. Chemonics’ implementation plans also did not have enough information for OTI to judge whether or not the program was on track:
"Chemonics officials said they decided not to prepare a formal [monitoring and evaluation] plan because the activities were so different from each other that monitoring and evaluation plans needed to be tailored to the activity level. At the time of the audit, HRI-II had 141 activities in progress. Because each activity had its own individual monitoring and evaluation plan, Chemonics was tracking 141 different plans, but had no single plan for the entire program. After an activity was completed, the Chemonics staff performed a final evaluation. OTI and Chemonics officials said they would use final evaluations, comparisons of “clusters” of similar activities, and “thematic reviews to provide lessons learned and a feedback loop” to guide future activities." (4)
"… the plan for providing Haiti's Parliament with temporary offices and meeting space consisted of "The subcontractor will be responsible for the following: 1) Assembling and installation of steel-framed structures; 2) Connection of utilities." No dates or estimated timelines were included." (5)

How can one evaluate progress when there isn't even a timeline for completion?

Lack of clearly defined indicators

The report raises another issue that is a crucial first step for evaluations: defining unambiguous indicators that accurately measure the goals of the program, rather than intermediate inputs such as expenditures.

"Some of the performance indicators Chemonics developed were not well-defined. One activity provided computer equipment, chalkboards, benches, chairs, desks, and school kits consisting of backpacks and school supplies to two public schools in target communities. Yet a performance indicator for that activity measured the number of students who returned to school." (4)

An engineering study to improve roads in one town was measured by the "number of reconstructed national governing institutions and systems that receive USG assistance to incorporate principles that support democracy and government legitimacy." (4) There is no clear connection between point A (the actual project) and point B (the effect Chemonics measured); in fact, the second example is simply confusing. Accurate evaluations cannot be conducted with such a dramatic mismatch between program inputs and intended accomplishments.

Lack of local involvement

We have written before that many Haitians felt excluded from the relief effort and were often not consulted about reconstruction. Beltway contractors came in with their own supplies and labor, often displacing local firms with similar subcontracting capabilities. The audit finds this same theme with Chemonics:

"Chemonics used contractors from Port-au-Prince to implement a number of activities in Cap-Haitien and Saint-Marc; these contractors brought their own people to do the jobs instead of hiring locals. As a result, residents saw jobs in their neighborhoods being done by outsiders, and without an understanding of the activities, they did not see how anyone local benefitted." (6)

Moving Forward

USAID's auditors have provided detailed recommendations for better monitoring and evaluation; it is now up to program staff to implement the findings of the report. We still believe that publishing data through IATI and piloting competitive bidding would help track both the money and the progress in Haiti. To do this, USAID will likely need to invest more resources (including staff time) in ensuring that NGOs, private contractors, and other recipients of taxpayer funds actually deliver the services they have promised. The people of Haiti and the United States deserve better than what they have been getting so far.
CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.