We’re getting closer to knowing how the USG spends its foreign assistance dollars. Recently, the State Department announced its first release of foreign assistance data on the ForeignAssistance.gov website (also known as “The Dashboard”). This may not sound terribly glamorous, but it’s actually important news. Since State’s spending makes up over a third of all US foreign assistance spending, the absence of its data has been a huge gap. With this recent State Department move, spending data for agencies responsible for 96 percent of US foreign assistance are now online. It’s great to see the Dashboard—now in its fourth year—slowly coming together. As it does, here are a few thoughts on why it’s still a good investment, the big challenges it faces, and how it can be improved.
Why We Should Cheer for the Dashboard
If well implemented, the Dashboard, an online resource for US foreign assistance spending data (and, potentially, other data), can:
Increase accountability and transparency: One of the Dashboard’s main goals is to enable easier access to information about US foreign assistance investments by US citizens, Congress, and other US agencies, as well as by citizens and governments in recipient countries.
Ease agencies’ reporting burden (eventually): Behind the Dashboard lies a massive database that will eventually contain all of the underlying information necessary not just to populate the online interface but also to fulfill the USG’s other regular reporting obligations, like IATI, the Greenbook, and the OECD-DAC’s Creditor Reporting System. Once the Dashboard/IATI process is automated within the agencies, complying with all this reporting should become much more streamlined and, importantly, more institutionalized.
Create incentives for improved data quality: Publishing data can change the dynamic around data quality. The prospect of increased scrutiny can create an incentive for agencies to reinforce internal systems to produce cleaner, better organized data which can, in turn, bolster an agency’s own understanding of its internal operations.
Why It’s Taking So Long
The Dashboard was announced in 2010. The effort is led by State’s F Bureau, which coordinates with the (over 20!) USG agencies that deliver some form of foreign assistance, and collects, codes, and publishes their data submissions. Some agencies, however, are far more capable of reporting to the Dashboard than others. What’s so hard about data reporting, you may ask? Quite a few things, it turns out, including:
Existing information systems’ incompatibility with Dashboard requirements. Different agencies have different financial and project management information systems. In fact, individual agencies often have multiple, separate systems. Most of them long predate any notion of “open data” and are simply not designed to compile information in the way the Dashboard needs it. Changing IT systems is a massive, costly undertaking.
Foreign assistance funds must be parsed out from a broader portfolio. For agencies whose core mission isn’t foreign aid, internal systems weren’t set up to differentiate between foreign assistance and domestic spending. This makes it difficult to identify what’s right for the Dashboard and what’s not. MCC has it easy in this respect (foreign aid only); the Department of Health and Human Services, for example, does not (mostly domestic).
At this point, the Dashboard team over at State is focused principally on providing data (i.e., getting more agencies on board) as well as pushing for improved data quality. The team is pursuing a phased approach to populating the web portal, publishing agencies’ data as they have it ready. It’s a courageous move for the USG to publicly release information knowing that it’s incomplete (and highly imperfect). Yet, they recognize that an incremental approach maintains pressure for continued implementation and fosters competition among agencies. It may also help ease the culture shift towards transparency by gradually demonstrating that openness doesn’t have to be threatening.
This incremental approach also creates risks for users since:
A user can’t easily tell if data are complete—and often they’re not. As an illustration, this graphic shows agency-by-agency reporting to the Dashboard. You’ll see that not a single year (2006 to current) contains information from all agencies, and that most agencies have reporting gaps. It’s great that the Dashboard is frank about this, but the problem is that this isn’t clearly indicated where it needs to be. For instance, if you wanted to find out about aid to Tanzania from 2008 to 2012, you would probably go directly to the Tanzania page and assume that what you pulled for “all agencies” means just that. You’d be wrong. Only MCC and Treasury have 2008 data on the Dashboard, so “all agencies” means just those two for that year. More broadly, it’s hard for a user to tell whether data that don’t show up are absent because they don’t exist (e.g., DOD didn’t spend foreign assistance money in Country X in a given year) or because they’re missing (e.g., DOD did spend foreign assistance money in Country X that year but hasn’t reported it). The Dashboard does include caveats about data limitations, but they’re scattered across far too many locations, none of them near where users actually look at the data. So they’re only helpful if a user suspects there may be a question about data quality or comprehensiveness and actively seeks this information.
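The “all agencies” trap above is a general pitfall of aggregating over partial coverage: a plain sum silently conflates agencies that reported nothing with agencies that spent nothing. A minimal sketch of the safer approach (the agency names and dollar figures here are purely illustrative, not actual Dashboard values) is to qualify every total with which agencies it actually covers:

```python
# Hypothetical disbursement records (agency, year, amount in USD).
# Illustrative numbers only -- not real Dashboard data.
records = [
    ("MCC", 2008, 110_000_000),
    ("Treasury", 2008, 40_000_000),
    ("MCC", 2012, 95_000_000),
    ("USAID", 2012, 480_000_000),
    ("State", 2012, 210_000_000),
]

# Agencies known to deliver foreign assistance (a subset, for illustration).
all_agencies = {"MCC", "Treasury", "USAID", "State", "DOD"}

def total_with_coverage(records, year, all_agencies):
    """Sum the year's disbursements, but also report which agencies
    actually have data, so an 'all agencies' figure can be qualified."""
    rows = [(agency, amt) for agency, y, amt in records if y == year]
    reporting = {agency for agency, _ in rows}
    total = sum(amt for _, amt in rows)
    no_data = all_agencies - reporting  # absent or missing? can't tell
    return total, sorted(reporting), sorted(no_data)

total, reporting, no_data = total_with_coverage(records, 2008, all_agencies)
print(f"2008 'all agencies' total: ${total:,}")
print(f"  reporting: {reporting}")
print(f"  no data (absent or missing?): {no_data}")
```

Surfacing the `no_data` list next to the total is exactly the kind of qualification a user would need to avoid reading a two-agency sum as government-wide spending.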
Transaction-level data are incomplete (and sometimes unintelligible). Some important fields are missing from most agencies’ submissions. For example, State is uniformly missing project titles and descriptions, making it nearly impossible for a user to tell what he or she is looking at. MCC has titles, but not descriptions. USAID has descriptions for most of its transactions, but many of these merely replicate the title, are unintuitive to outsiders, refer to supporting documents that are unavailable, and/or cut off mid-description. Start and end dates are also complicated. For USDA, they’re missing. USAID provides only the year; MCC provides only the start date. State’s date reporting is spotty and contains apparently inconsistent information, like disbursements that occur before start dates.
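Gaps like these are easy to make visible with a field-level completeness check over each agency’s transaction records. The sketch below assumes hypothetical field names and sample rows (this is not the Dashboard’s actual schema) and reports what share of records populate each key field:

```python
# Hypothetical transaction records; None marks a field the agency
# did not report. Field names and rows are illustrative only.
transactions = [
    {"agency": "State", "title": None, "description": None,
     "start_date": "2012-03-01"},
    {"agency": "MCC", "title": "Roads activity", "description": None,
     "start_date": "2011-06-15"},
    {"agency": "USAID", "title": "Health program",
     "description": "Health program",  # merely replicates the title
     "start_date": "2012"},
]

KEY_FIELDS = ("title", "description", "start_date")

def completeness(transactions, fields=KEY_FIELDS):
    """Return the fraction of records with each field populated."""
    n = len(transactions)
    return {f: sum(1 for t in transactions if t.get(f)) / n
            for f in fields}

for field, share in completeness(transactions).items():
    print(f"{field}: {share:.0%} populated")
```

Publishing a simple report like this alongside the data would let users see at a glance which fields they can rely on for a given agency.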
Getting the data out there is important, and it’s the right thing to do. But doing so while simultaneously improving coverage and quality gives me two related (though opposite) concerns. I’m worried that:
1) People Will Use the Data and draw incorrect conclusions due to missing or poor-quality data; and/or
2) People Won’t Use the Data because they are aware of the data’s current limitations and will write off the Dashboard as an unreliable source, regardless of whether coverage and quality improve later. In a bit of a chicken-and-egg conundrum, lack of use could in turn slow Dashboard progress, since, to some extent, agencies need to know people will use the data before they invest scarce resources to provide it and improve its quality.
Ideas to Increase the Dashboard’s Potential
State’s Dashboard team and the 20+ agencies with foreign assistance spending are working hard to make the Dashboard a useful, relevant tool. It’s a big undertaking. Here are four things I hope they are considering:
1) Help users better understand the data: The main risks to the Dashboard come from incomplete and thus unreliable data. Breadth and reliability are key requirements for data to be truly useful. Therefore, the Dashboard should be abundantly clear when users are looking at complete versus partial information, or preliminary versus final data. Users should not have to dig through multiple, separate “additional information” pages to find this out.
2) Improve transaction data: Agencies should strive to fill the gaps in their transaction data (especially critical things like titles that facilitate rolling up transactions to the project level), as well as improve the comprehensibility of the information (for example, make descriptions descriptive).
3) Don’t forget about usability: The current priority of the Dashboard is to publish as much data as possible in a manipulable format and let users work with it as they wish. However, a single user interface is never going to be able to meet the needs of all stakeholders, so the USG should reinforce its efforts to: (i) define who its priority audiences are; and (ii) understand how these different groups want to use the data and tailor the interface accordingly. The Dashboard team is already taking steps in this direction with outreach to country missions and US-based stakeholders.
4) Publish agency-specific implementation schedules: The Dashboard website does explain where each agency is in the implementation process. But it should also include agency-by-agency schedules for reporting compliance (and not just with Dashboard requirements, but with IATI requirements, too). This would not only provide an accountability structure that helps motivate continued momentum but also serve as an important signal of commitment.