Following the money: preliminary remarks on IATI Traceability

[Summary: Exploring the social and technical dynamics of aid traceability: let’s learn what we can from distributed ledgers, without thinking that all the solutions are to be found in the blockchain.]

My colleagues at Open Data Services are working at the moment on a project for UN Habitat around traceability of aid flows. With an increasing number of organisations publishing data using the International Aid Transparency Initiative data standard, and increasing amounts of government contracting and spending data available online, the theory is that it should be possible to track funding flows.

In this blog post I’ll try to think aloud about some of the opportunities and challenges for traceability.

Why follow funds?

I can envisage a number of hypothetical use cases for traceability of aid.

Firstly, donors want to be able to understand where their money has gone. This is important for at least three reasons:

  1. Effectiveness & impact: knowing which projects and programmes have been the most effective;
  2. Understanding and communication: being able to see more information about the projects funded, and to present information on projects and their impacts to the public to build support for development;
  3. Addressing fraud and corruption: identifying leakage and mis-use of funds.

Traceability is important because the relationship between donor and delivery is often indirect. A grant may pass through a number of intermediary organisations before it reaches the ultimate beneficiaries. For example, a country donor may fund a multi-lateral fund, which in turn commissions an international organisation to deliver a programme, and they in turn contract with country partners, who in turn buy in provision from local providers.

Secondly, communities where projects are funded, or where funds should have been received, may want to trace funding upwards: understanding the actors and policy agendas affecting their communities, and identifying when funds they are entitled to have not arrived (see the investigative work of Follow The Money Nigeria for a good example of this latter use case).

Short-circuiting social systems

It is important to consider the ways in which work on the traceability of funds potentially bypasses, ‘routes around’ or disrupts* (*choose your own framing) existing funding and reporting relationships – allowing donors or communities to reach past intermediaries and exert whatever authority and power over outcomes they can.

Take the example given above. We can represent the funding flows in a diagram as below:

[Figure: funding flowing downwards through the chain, from donor to local providers]

But there is more than a one-way flow going on here. Most of the parties involved will have some sort of reporting responsibility to those giving them funds, and so we also have reporting flowing back up the chain:

[Figure: reporting flowing upwards through the chain, from local providers back to the donor]

By the time reporting gets to the donor, it is unlikely to include much detail on the work of the local partners or providers (indeed, the multilateral, for example, may not report specifically on this project, just on its development co-operation in general). The INGO may even have very limited information about what happens just a few steps down the chain on the ground, having to trust intermediary reports.

In cases where there isn’t complete trust in this network of reporting, and where there are no clear mechanisms to ensure each party is exercising its responsibility for the effective, corruption-free use of resources by the next party down, being able to see through this chain – tracing funds, and gaining a direct ability to assess impacts and risks – is clearly desirable.

Yet – it also needs to be approached carefully. Each of the relationships in this funding chain is about more than just passing on some clearly defined packet of money. Each party may bring specific contextual knowledge, skills and experience. Enabling those at the top of a funding chain to leap over intermediaries doesn’t inevitably have a positive impact: particularly given what the history of development co-operation has to teach about how power dynamics and the imposition of top-down solutions can lead to substantial harms.

None of this is a case against traceability – but it is a call for consideration of the social dynamics of traceability infrastructures, and of how to ensure contextual knowledge remains accessible when it becomes possible to traverse the links of a funding chain.

The co-ordination challenge of traceability

Right now, the IATI data standard has support for traceability at the project and transaction level.

  • At the project level the related-activity field can be used to indicate parent, child and co-funded activities.
  • At the transaction level, data on incoming funds can specify the activity-id used by the upstream organisation to identify the project the funds come from, and data on outgoing funds can specify the activity-id used by the downstream organisation.

This supports both upwards and downwards linking (e.g. a funder can publish the identifier of the funded project, or a recipient can publish the identifier of the donor project that is providing funds), but is based on explicit co-ordination and the capture of additional data.
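To illustrate, here is a minimal sketch of how these fields appear in IATI XML, and how they might be read programmatically. The activity identifiers are invented for illustration, and real IATI files carry many more elements:

```python
import xml.etree.ElementTree as ET

# A cut-down, hypothetical IATI 2.x activity. Identifiers are invented.
IATI_SNIPPET = """
<iati-activity>
  <iati-identifier>XM-EXAMPLE-9-CHILD</iati-identifier>
  <!-- Project-level traceability: type 1 = parent activity -->
  <related-activity ref="GB-EXAMPLE-1-PARENT" type="1"/>
  <!-- Transaction-level traceability: incoming funds carrying the
       upstream organisation's own activity identifier -->
  <transaction>
    <transaction-type code="1"/>
    <value currency="GBP">250000</value>
    <provider-org ref="GB-EXAMPLE-1"
                  provider-activity-id="GB-EXAMPLE-1-PARENT"/>
  </transaction>
</iati-activity>
"""

activity = ET.fromstring(IATI_SNIPPET)
for rel in activity.findall("related-activity"):
    print("Related activity:", rel.get("ref"), "type", rel.get("type"))
for tx in activity.findall("transaction"):
    provider = tx.find("provider-org")
    print("Funds received from upstream activity:",
          provider.get("provider-activity-id"))
```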

Because IATI takes a distributed approach to the publication of open data, there are no consistency checks to ensure that providers and recipients agree on identifiers, and there can often be practical challenges in capturing this data, not least that:

  • A) Many of the accounting systems in which transaction data is captured have no fields for upstream or downstream project identifier, nor any way of conceptually linking transactions to these externally defined projects;
  • B) Some parties in the funding chain may not publish IATI data, or may do so in forms that do not support traceability, breaking the chain;
  • C) The identifier of a downstream project may not be created at the time an upstream project assigns funds – exchanging identifiers can create a substantial administrative burden;

At the last IATI TAG meeting in Ottawa, this led to some discussion of other technologies that might be explored to address issues of traceability.

Technical utopias and practical traceability

Let’s start with a number of assorted observations:

  • UPS can track a package right around the world, giving me regular updates on where it is. The package has a barcode on it, and is being transferred by a single company.
  • I can make a Faster Payments bank transfer in the UK with a reference number that appears in both my bank statement and the recipient’s statement, travelling between banks in seconds. Banks leverage their trust, and use centralised third-party providers as part of data exchange and reconciling funding transfers.
  • When making some international transfers, the money can effectively disappear from view for quite a while, with lots of time spent on the phone to sender, recipient and intermediary banks to track down the funds. Trust, digital systems and reconciliation services function less well across international borders.
  • Transactions on the BitCoin blockchain are, to some extent, traceable. BitCoin is a distributed system. (Given any BitCoin ‘address’ it’s possible to go back into the public ledger and see which addresses have transferred an amount of bitcoins there, and to follow the chain onwards. If you can match an address to an identity, the currency, far from being anonymous, is fairly transparent. This is the reason for BitCoin mixer services, designed to remove the trackability of coins.)
  • There are reported experiments with using blockchain technologies in a range of different settings, including for land registries.
  • There’s a lot of investment going into FinTech right now, exploring ways to update financial services.

All of this can lead to some excitement about the potential of new technologies to render funding flows traceable. If we can trace parcels and BitCoins, the argument goes, why can’t we have traceability of public funds and development assistance?

Although I think such an argument falls down in a number of key areas (which I’ll get to in a moment), it does point towards a key component missing from the current aid transparency landscape – in the form of a shared ledger.

One of the reasons IATI is based on a distributed data publishing model, without any internal consistency checks between publishers, is the sector’s prior experience of submitting data to centralised aid databases. However, peer-to-peer and blockchain-like technologies now offer a way to separate out co-ordination, and the creation of consensus on the state of the world, from the centralisation of data in a single database.

It is at least theoretically possible to imagine a world in which the data a government publishes about its transactions is only considered part of the story, and in which the recipient needs to confirm receipt in a public ledger to complete the transactional record. Transactions ultimately have two parts (sending and receipt), and open (distributed) ledger systems could offer the ability to layer an auditable record on top of the actual transfer of funds.
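To make the two-part idea concrete, here is a minimal sketch – hypothetical, not a design for any actual ledger, with invented organisation identifiers – in which a transfer only counts as complete once both parties have attested to it:

```python
from dataclasses import dataclass

@dataclass
class TwoPartRecord:
    """A hypothetical shared-ledger entry: only complete once both the
    sending and receiving organisations have confirmed it."""
    sender: str
    recipient: str
    amount: int
    currency: str
    sender_confirmed: bool = False
    recipient_confirmed: bool = False

    def confirm(self, party: str) -> None:
        # Each party can only attest to its own side of the transaction.
        if party == self.sender:
            self.sender_confirmed = True
        elif party == self.recipient:
            self.recipient_confirmed = True
        else:
            raise ValueError(f"{party} is not a party to this transaction")

    @property
    def complete(self) -> bool:
        return self.sender_confirmed and self.recipient_confirmed

record = TwoPartRecord("GB-EXAMPLE-1", "XM-EXAMPLE-9", 250000, "GBP")
record.confirm("GB-EXAMPLE-1")   # the government publishes its side
print(record.complete)           # False: receipt not yet confirmed
record.confirm("XM-EXAMPLE-9")   # the recipient confirms receipt
print(record.complete)           # True: the record is now complete
```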

However (as I said, there are some serious limitations here), such a system would only be an account of the funding flows, not the flows themselves (unlike BitCoin), which still leaves space for corruption through maintaining false information in the ledger. Although trusted financial intermediaries (banks and others) could be brought into the picture as parties responsible for confirming transactions, it’s hard to envisage how adoption of such a system could be brought about over the short and medium term (particularly globally). Secondly, although transactions between organisations might be made more visible and traceable in this way, the transactions inside an organisation remain opaque. Working out which funds relate to which internal and external projects is still a matter of the internal business processes of the organisations involved in the aid delivery chain.

There may be other traceability systems we should be exploring as inspirations for aid and public money traceability. What my brief look at BitCoin leads me to reflect on is the potential role, over the short term, of reconciliation services that can, at the very least, report on the extent to which different IATI publishers are mutually confirming each other’s information. Over the long term, a move towards more real-time transparency infrastructures, rather than periodic data publication, might open up new opportunities – although with all sorts of associated challenges.
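As a sketch of what such a reconciliation service might check, using invented and heavily simplified records (real IATI data would need matching on dates, currencies and partial disbursements):

```python
def mutual_confirmation_report(outgoing, incoming):
    """For each outgoing transaction one publisher reports, check whether
    the downstream publisher reports a matching incoming transaction."""
    for out in outgoing:
        confirmed = any(
            inc["provider_activity_id"] == out["activity_id"]
            and inc["amount"] == out["amount"]
            for inc in incoming
        )
        yield out["activity_id"], "confirmed" if confirmed else "unconfirmed"

# Hypothetical records drawn from two publishers' IATI files
donor_outgoing = [
    {"activity_id": "GB-EXAMPLE-1-PARENT", "amount": 250000},
    {"activity_id": "GB-EXAMPLE-1-OTHER", "amount": 100000},
]
recipient_incoming = [
    {"provider_activity_id": "GB-EXAMPLE-1-PARENT", "amount": 250000},
]

for activity_id, status in mutual_confirmation_report(donor_outgoing,
                                                      recipient_incoming):
    print(activity_id, status)
# GB-EXAMPLE-1-PARENT confirmed
# GB-EXAMPLE-1-OTHER unconfirmed
```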

Ultimately – creating traceable aid still requires labour to generate shared conceptual understandings of how particular transactions and projects relate.

How much is enough?

Let’s loop back round. In this post (as in many of the conversations I’ve had about traceability), we started with some use cases for traceability; we saw some of the challenges; we got briefly excited about what new technologies could do to provide traceability; we saw the opportunities, but also the many limitations. Where do we end up, then?

I think the important thing is to loop back to our use cases, and to consider how technology can help, but not completely solve, the problems set out. Knowing which provider organisations might have been funded through a particular donor’s money could be enough to help donors target investigations in cases of fraud. Or knowing all the funders who have a stake in projects in a particular country, sector and locality can be enough for communities on the ground to do further research to identify the funders they need to talk to.

Rather than searching after a traceability data panopticon, can we focus traceability-enabling practices on breaking down the barriers to specific investigatory processes?

Ultimately, in the IATI case, getting traceability to work at the project level alone could be a big boost. But doing this will require a lot of social coordination, as much as technical innovation. As we think about tools for traceability, thinking about tools that support this social process may be an important area to focus on.

Where next

Steven Flower and the rest of the Open Data Services team will be working in the coming weeks on a deeper investigation of traceability issues – with the goal of producing a report and toolkit later this year. They’ve already been digging into IATI data to look for the links that exist so far, building on past work testing the concept of traceability against real data.

Drop in comments below, or drop Steven a line, if you have ideas to share.

Three cross-cutting issues that UK data sharing proposals should address

[Summary: an extended discussion of issues arising from today’s UK data sharing open policymaking discussions]

I spend a lot of time thinking and writing about open data. But, as has often been said, not all of the data that government holds should be published as open data.

Certain registers and datasets managed by the state may contain, or be used to reveal, personally identifying and private information – justifying strong restrictions on how they are accessed and used. Many of the datasets governments collect, from tax records to detailed survey data collected for policy making and monitoring, fall into this category. However, the principle that data collected for one purpose might have a legitimate use in another context still applies to this data: one government department may be able to pursue its public task with data from another, and there are cases where public benefit is to be found in sharing data with academic and private sector researchers and innovators.

However, in the UK, the picture of which departments, agencies and levels of government can share which data with others (or outside of the state) is complex to say the least. When it comes to sharing personally identifying datasets, agencies need to rely on specific ‘legal gateways’, with certain major data holders such as HM Revenue and Customs bound by restrictive rules that may require explicit legislation to pass through parliament before specific data shares are permitted.

That’s ostensibly why the UK Government has been working for a number of years now on bringing forward new data sharing proposals – creating ‘permissive powers’ for cross-departmental and cross-agency data sharing, increasing the ease of data flows between national and local government, whilst increasing the clarity of safeguards against data mis-use. Up until just before the last election, an Open Policy Making process, modelled broadly on the UK Open Government Partnership process was taking place – resulting in a refined set of potential proposals relating to identifiable data sharing, data sharing for fraud reduction, and use of data for targeted public services. Today that process was re-started, with a view to a public consultation on updated proposals in the coming months.

However, although much progress has been made in refining proposals based on private sector and civil society feedback, judging from the range of specific and somewhat disjointed proposals presented for new arrangements in today’s workshop, the process is still some way off providing the kinds of clarification of the current regime that might be desirable. Missing from today’s discussions were clear cross-cutting mechanisms to build trust in government data sharing, and to establish the kind of secure data infrastructures that are needed for handling personal data sharing.

I want to suggest three areas that need to be more clearly addressed – all of which were raised in the 2014/15 Open Policymaking process, but which have been somewhat lost in the latest iterations of discussion.

1. Maximising impact, minimising the data shared

One of the most compelling cases for data sharing presented in today’s workshop was work to address fuel poverty by automatically giving low-income pensioners rebates on their fuel bills. Discussions suggested that since the automatic rebate was introduced, 50% more eligible recipients are getting the rebates – with the most vulnerable, who were far less likely to apply for the rebates they were entitled to, being the biggest beneficiaries. With every degree drop in the temperature of a pensioner’s home correlating with increased hospital admissions, the argument for allowing the data share, and indeed for establishing a framework for the current arrangements to be extended to others in fuel poverty (the current powers are in some way specific to pensioners’ data), is clear.

However, this case is also one where the impact is accompanied by a process that results in minimal data actually being shared from government to the private companies who apply the rebates to individuals’ energy bills. All that is shared, in response to energy companies’ queries about each candidate on their customer list, is a flag for whether the individual is eligible for the rebate or not.

This kind of approach does not require the sharing of a bulk dataset of personally identifying information – it requires a transactional service that can provide the minimum certification required to indicate, with some reasonable level of confidence, that an individual has some relevant credentials. The idea of privacy protecting identity services which operate in this way is not new – yet the framing of the current data sharing discussion has tended to focus on ‘sharing datasets’ instead of constructing processes and technical systems which can be well governed, and still meet the vast majority of use-cases where data shares may be required.
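A minimal sketch of that pattern follows, with entirely invented records, thresholds and field names (the real scheme’s interface will differ): the service answers a yes/no eligibility query, and the underlying personal data never leaves the department.

```python
# Hypothetical records held by the department; these never leave it.
_BENEFIT_RECORDS = {
    "AB123456C": {"age": 78, "weekly_income": 148.00},
}

INCOME_THRESHOLD = 155.00  # invented eligibility rule, for illustration

def eligible_for_rebate(national_insurance_number: str) -> bool:
    """Return a single flag in answer to the energy company's query.

    The caller learns only whether this customer qualifies; age, income
    and everything else stay inside the department.
    """
    record = _BENEFIT_RECORDS.get(national_insurance_number)
    if record is None:
        return False  # unknown individuals are simply not flagged
    return record["weekly_income"] <= INCOME_THRESHOLD

print(eligible_for_rebate("AB123456C"))  # True: rebate applied to the bill
print(eligible_for_rebate("ZZ999999Z"))  # False: nothing else disclosed
```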

For example, when the General Records Office representative today posed the question of “In what circumstances would it be appropriate to share civil registration (e.g. birth, adoption, marriage and death) information?”, the use-cases that surfaced were all to do with verification of identity: something that could be achieved much more safely by providing a digital service than by handing over datasets in bulk.

Indeed, approached as a question of systems design, rather than data sharing, the fight against fraud may in practice be better served by allowing citizens to digitally access their own civil registration information and to submit that as evidence in their transactions with government, helping narrow the number of cases where fraud may be occurring – and focussing investigative efforts more tightly, instead of chasing after problematic big data analysis approaches.

(Aside #1: As one participant in today’s workshop insightfully noted, there are thousands of valid marriages in the UK which are not civil marriages and so may not be present in civil registers. A big data approach that seeks to match records of who is married to records of households who have declared they are married, to identify fraudulent claims, is likely to flag these households wrongly, creating new forms of discrimination. By contrast, an approach that helps individuals submit their evidence to government allows such ‘edge cases’ to be factored in – recognising that many ‘facts’ about citizens are not easily reduced to simple database fields, and that giving an account of oneself to the state is a performative act which should not be too readily sidelined.)

(Aside #2: The case of civil registers also illustrates an interesting and significant qualitative difference between public records and a bulk public dataset. Births, marriages and deaths are all ‘public events’: there is no right to keep them private, and they have long been recorded in registers which are open to inspection. However, when the model of access to these registers switches from focussed inspection, looking for a particular individual, to bulk access, they become possible to use in new ways – for example, creating a ‘primary key’ of individuals to which other data can be attached, eroding privacy in ways which were not possible when each record needed to be explored individually. The balance of benefits and harms from this qualitative change will vary from dataset to dataset. For example, I would strongly advocate the open sharing of company registers, including details of beneficial owners, both because of the public benefit of this data, and because registering a company is a public act involving a certain social contract. By contrast, I would be more cautious about the full disclosure of all civil registers, due to the different nature of the social contract involved, and the greater risk of vulnerable individuals being targeted through intentional or unintentional misuse of the data.)

All of which is a long way to say:

  • Where the cross-agency or cross-departmental use-cases for access to a particular dataset can be reduced to sharing assertions about individuals, rather than bulk datasets, this route should be explored first.

This does not remove the need for governance of both access and data use. However, it does ease the governance of access, and audit logs of access to a service are easier to manage than audit logs of what users in possession of a dataset have done.

Even the sharing of a ‘flag’ that can be applied to an individual’s data record needs careful thought: and those in receipt of such flags need to ensure they govern the use of that data carefully. For example, as one participant today noted, pensioners have raised fears that energy companies may use a ‘fuel poverty’ flag in their records to target them with advertising. Ensuring that analysts in the company do not later stumble upon the rebate figures in invoices, and feed these into profiling of customers, will require very careful data governance – and it is not clear that companies’ practices are robust enough to protect against this right now.

2. Algorithmic transparency

Last year the Detroit Digital Justice Coalition produced a great little zine called ‘Opening Data’ which takes a practical look at some of the opportunities and challenges of open data use. They look at how data is used to profile communities, and how the classifications and clustering approaches applied to data can create categories that may be skewed and biased against particular groups, or that reinforce rather than challenge social divides (see pg 30 onwards). The same issues apply to data sharing.

Whilst current data protection legislation gives citizens a right to access and correct information about themselves, the algorithms used to process that data, and to derive analysis from it, are rarely shared or open to adequate scrutiny.

In the process of establishing new frameworks for data sharing, the algorithms used to process that data should be brought into view as much as the datasets themselves.

If, for example, someone is offered a targeted public service, or targeted in a fraud investigation, there is a question to be explored of whether they should be told which datasets, and which algorithms, led to them being selected. This, and associated transparency, could help to surface otherwise unseen biases which might lead to particular groups being unfairly targeted (or missed) by analysis. Transparency is no panacea, but it plays an important role as a safeguard.

3. Systematic transparency of sharing arrangements

On the theme of transparency, many of the proposals discussed today mentioned oversight groups, Privacy Impact Assessments, and publication of information on either those in receipt of shared data, or those refused access to datasets – yet across the piece no systematic framework for this was put forward.

This is an issue Reuben Binns and I wrote about in 2014, putting forward a proposal for a common standard for disclosure of data sharing arrangements that, in its strongest form, would require the following (a rough sketch of such a notice follows the list):

  • Structured data on origin, destination, purpose, legal framework and timescales for sharing;
  • Publication of Privacy Impact Assessments and other associated documents;
  • Notices published through a common venue (such as the Gazette) in a timely fashion;
  • Consultation windows where relevant before a power comes into force;
  • Sharing to only be legally valid when the notice has been published.
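To make that concrete, here is a rough sketch of what a structured disclosure notice could contain. All field names, dates and citations are invented for illustration; this is not a proposed standard.

```python
import json

# A hypothetical structured data-sharing notice. Publishing notices like
# this through a common venue would let anyone aggregate and monitor them.
notice = {
    "origin": "Department A (illustrative)",
    "destination": "Department B (illustrative)",
    "purpose": "Identifying households eligible for a fuel poverty rebate",
    "legal_gateway": "Example Act 2016, s.1 (invented citation)",
    "share_starts": "2016-04-01",
    "share_ends": "2018-03-31",
    "privacy_impact_assessment": "https://example.gov.uk/pia/2016-001",
    "published_in": "The Gazette",
    "consultation_closes": "2016-02-01",
}

# In the strongest form of the proposal, the share would only become
# legally valid once this notice had been published.
print(json.dumps(notice, indent=2))
```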

Without such a framework, we are likely to be left with the current confused system, in which no-one knows which shares are in place, how they are being used, or which legal gateways are functioning well. With a scattered set of spreadsheets and web pages listing approved sharing, citizens have no hope of understanding how their data is being used.

If only one of the above issues can be addressed in the upcoming consultation on data sharing, I hope it is this one: the missing piece of a robust common framework for putting the transparency principles of data sharing into practice.

Towards a well governed infrastructure?

Ultimately, the discussion of data sharing is a discussion about one aspect of our national data infrastructure. There has been a lot of smart work going on, both inside and outside government, on issues such as identity assurance, differential privacy, and identifying core derived datasets which should be available as open data to bypass the need for sharing gateways. A truly effective data sharing agenda needs to link with these to ensure it is neither creating over-broad powers which are open to abuse, nor establishing a new web of complex and hard-to-operate gateways.

Further reading

My thinking on these issues has been shaped in part by inputs from the following:

Data & Discrimination – Collected Essays

White House Report on Big Data, and associated papers/notes from The Social, Cultural & Ethical Dimensions of “Big Data.” conference

A distributed approach to co-operative data

[Summary: rough notes from a workshop on cooperative sector data.]

Principle 6 of the International Co-operative Alliance calls for ‘co-operation amongst co-operatives’. Yet, for many co-ops, finding other worker-owned businesses to work with can be challenging. Although there are over 7,000 co-operatives in the UK, and many more worldwide, it can be hard to find out much about them.

This was one of the key drivers behind a convening at the Old Music Hall in Oxford just before Christmas where cooperators from the Institute for Solidarity Economics, Open Data Services Co-operative, Coops UK and Transformap gathered to explore opportunities for ‘Principle 6 Data’: open data to build up a clearer picture of the co-operative economy.

We started out articulating different challenges to be explored through the day, including:

  • Helping researchers better understand the co-operative sector. With co-operatives employing thousands of people and adding £37bn to the UK economy last year, having a clearer picture of where they operate, what they do and how they work is vital. Yet information is scarce. For researchers at the Institute for Solidarity Economics, there is a need to dig beyond headline organisation types to understand how the activities of organisations contribute to worker-owned, social-impact enterprise.

  • Supporting trade between co-operatives. For example, earlier this year, when we were planning a face-to-face gathering of Open Data Services Co-op, we tried to find co-operatively run venues to use, and we’ve been trying to understand where else we could support co-ops in our supply chain. Whilst Coops UK provides a directory of co-operatives, it is focussed on business-to-consumer, not business-to-business, information.

  • Enabling distributed information management on co-ops. Right now, the best dataset we have for the UK comes from Coops UK, the membership body for the UK sector, who hold information on 7000 or so co-operatives, built up over the years from various sources. They have recently released some of this as open data, and are keen to build on the dataset in future. Yet if it can only be updated via Coops UK this creates a bottleneck to the creation of richer data resources.

My Open Data Services colleague, Edafe Onerhime, did some great work looking at the existing Coops UK dataset, which is written up here, and Dan from ISE explored ways of getting microformat markup into the Institute for Solidarity Economics website to expose more structured data about the organisation, including the gender profile of the workforce. We also took a look at whether data from the .coop domain registry might provide insights into the sector, and set about exploring whether microformats were already in use on any of the websites of UK co-operatives.

Building on these experiments, we came to an exploration of the potential social, organisational and technical challenges ahead if we want to see a distributed approach to greater data publication in the co-op sector. Ultimately, this boiled down to a few key questions:

  • How can co-operatives be encouraged to share more structured data on their activities?

  • How can the different data needs of different users be met?

  • How can that data be fed into different data-driven projects for research, or cooperative collaboration?

There are various elements to addressing these questions.

There is a standards element: identifying the different kinds of things about co-operatives that different users may want to know about, and looking for standards to model these. For example, alongside the basic details of registered organisations and their turnover collected for the co-operative economic report, business-to-business use cases may be interested in branch locations and product/service offerings from co-ops, and solidarity economics research may be interested in the different value commitments a co-operative has, and details of its democratic governance. We looked at how certifications, from formal Fairtrade certifications for products of a co-op, to social certifications where someone a user trusts vouches for an organisation, might be an important part of the picture also.

For many of the features of a co-operative that are of interest, common data standards already exist, such as those provided by schema.org. Although these need to be approached critically, they provide a pragmatic starting point for standardisation; an example using the Coops UK Co-operative Economy dataset can be seen here.
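As an illustration, one co-operative’s basic details might be expressed against schema.org types roughly as follows. The property choices here are mine, not an agreed mapping, and co-op-specific features (member ownership, turnover for the economic report) would likely need extension vocabularies:

```python
import json

# A hypothetical JSON-LD description of a co-operative using schema.org
# vocabulary. Property choices are illustrative, not an agreed standard.
coop = {
    "@context": "http://schema.org",
    "@type": "Organization",
    "name": "Example Wholefoods Co-operative",
    "legalName": "Example Wholefoods Co-operative Ltd",
    "url": "http://example-wholefoods.coop",
    "location": {
        "@type": "Place",
        "address": {"@type": "PostalAddress", "addressLocality": "Oxford"},
    },
    "numberOfEmployees": 12,
}

# Embedded in a <script type="application/ld+json"> tag on the co-op's
# own website, a description like this could be harvested by aggregators.
print(json.dumps(coop, indent=2))
```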

There is a process element: working out how data that co-operatives might publish using common standards will be validated, aggregated and made into shared datasets. Here, we looked at how an annual process of data collection, such as the UK Co-operative Economy report, might bootstrap a process of distributed data publishing.

Imagine a platform where co-operatives are offered three options to provide their data into the annual co-operative economy report:

  1. Fill in a form manually;

  2. Publish a spreadsheet of key data to your own website;

  3. Embed JSON-LD / microformat data in your own website;

Although 2 and 3 are more technically complex, they can provide richer, more open and more re-usable data, and a platform can explain the advantages of taking these extra steps. Moving co-operatives from step 1 to step 2 can be bootstrapped by the form-driven process generating a spreadsheet for co-ops to publish on their own sites at a set location, and then encouraging them to update those sheets in year 2 (a rough sketch of this fetch-and-validate step follows).
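Here is a minimal sketch of the platform’s side of option 2 under these assumptions: the URL convention, file name and column names are all invented for illustration.

```python
import csv
import io
import urllib.request

# Invented convention: each co-op publishes key data as a CSV at a fixed
# path on its own website. The required columns are illustrative only.
REQUIRED_COLUMNS = {"name", "registered_number", "turnover_gbp", "members"}

def fetch_coop_data(domain: str) -> list:
    """Fetch and minimally validate a co-op's self-published spreadsheet."""
    url = f"http://{domain}/coop-economy-data.csv"
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(text)))
    if not rows:
        raise ValueError(f"{url} contains no data rows")
    missing = REQUIRED_COLUMNS - set(rows[0].keys())
    if missing:
        raise ValueError(f"{url} is missing columns: {missing}")
    return rows

# The annual report process would aggregate across all known co-ops,
# feeding validation errors back to publishers to improve next year's data.
```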

With good interaction design, and a feedback loop that helps validate data quality and show the collective benefits of providing additional data, such a platform could provide the practical component of a campaign for open data publication and use by co-ops.

This points to the crucial **social element**: building community around the open data process, and making sure it is not just a technical exercise, but one that meets real needs.

Did we end the day with a clear picture of where next for co-op sector data? Not yet. But it was clear that all the groups participating will continue to explore this space into 2016, and we’ll be looking for more opportunities to collaborate.