Over the last year I’ve had the immense pleasure of getting to work with a fantastic group of colleagues creating ‘Open Data Services Co-operative’. It was created in recognition of the fact that creating and using distributed open data requires ongoing labour to develop robust platforms, support publishers to create quality data, and help users access data in the forms they need.
Over the last year we’ve set up ongoing support systems for the Open Contracting Data Standard and 360Giving, and have worked on projects with NCVO, NRGI and the Financial Transparency Coalition, amongst others – focussing on places where open data can make a real difference to governance, accountability and participation. We’ve been doing that with a multidisciplinary team, combining the capacity to build and maintain technical tools, such as CoVE, which drives an accessible validation and data conversion tool, with a responsive analysis team – able to give bespoke support to data publishers and users.
And we’ve done this as a workers co-operative, meaning that staff who joined the team back in October last year are now co-owners of the company, sharing in setting its direction and making decisions over how we use co-op resources to provide a good working environment, and to further our social goals. A few weeks back we were able to vote on our first profit distributions, committing to become corporate sponsors of a number of software projects and social causes we support.
The difference that being organised as a co-op makes was particularly brought home to me at a recent reunion of my MSc course, where it seemed many others had graduated into a start-up economy which is all about burning through staff, with people spending months rather than years in jobs, and constantly dealing with stressful workloads. Operating as a workers co-op challenges us to create good and sustainable jobs.
And that’s what we’re trying to do again now: recruiting two new people to join the team.
We’re looking for a developer to join us, particularly someone with experience of managing technical roadmaps for projects; and we’re looking for someone to work with us as an analyst – combining a focus on policy and technology, and ready to work on outreach and engagement with potential users of the open data standards we support.
Last autumn the International Open Data Charter was launched, putting forward six key principles for governments to adopt to pursue an ‘open by default’ approach to key data.
However, for the Charter to have the greatest impact requires more than just high-level principles. As the International Open Data Conference explored last year, we need to focus on the application of open data to particular sectors to secure the greatest impact. That’s why a stream of work has been emerging to develop ‘Sector Packages’ as companion resources to the International Open Data Charter.
The first of these is focussing on anti-corruption. I’ve been supporting the Technical Working Group of the Charter to sketch a possible outline for this in this consultation document, which was shared at the G20 meeting last year.
To build on that we’ve just launched a call for a consultant to act as co-ordinating author for the package (closing date 28th Jan – please do share!), and a few weeks back I had the chance to drop into a mini-workshop at DFID to share an update on the Charter, and talk with staff from across the organisation about potential areas that the anti-corruption package should focus on.
Slides from the talk are below, and I’ve jotted down some brief notes from the discussions as well.
In the session we posed the question: “What one dataset would you like to see countries publish as open data to address corruption?”
The answers highlight a range of key areas for exploration as the anti-corruption sector package is developed further.
1) Repository of registered NGOs and their downstream partners – including details of their bank accounts, board, constitution and rules etc.
This kind of data is clearly useful to a donor wanting to understand who they are working with, or considering whether to work with potential partners. But it is also a very challenging dataset to collate and open. Firstly, many countries either lack comprehensive systems of NGO registration, or have thresholds that mean many community-level groups will be non-constituted community associations rather than formally registered organisations. Secondly, there can be risks associated with NGO registration, particularly in countries with shrinking civil society space, and where lists of organisations could be used to increase political control or restrictions on NGO activity.
Working these issues through will require thought about where to draw the lines between open and shared data, and how organisations can pool their self-collected intelligence about partner organisations, whilst avoiding harms, and avoiding the creation of error-prone datasets where funding isn’t approved because ‘computer says no’.
2) Data on the whole contracting chain – particularly for large infrastructure projects.
Whilst isolated pockets of data on public contracts often exist, effort is needed to join these up, giving a view of the whole contracting chain. The Open Contracting Data Standard has been developing the technical foundations for this to happen, and work is now beginning to explore how it might be used to track the implementation of infrastructure projects. In the UK, civil society is calling for the next Open Government National Action Plan to include a commitment to model contract clauses that encourage contractors to disclose key information on subcontracting arrangements, implementation milestones and companies’ beneficial owners.
3) Identifying organisations and the people involved.
The challenge of identifying the organisations who are counterparty to a funding transaction or a contract is not limited to NGOs. Identifying government agencies, departments, and the key actors within them, is also important.
Government entity identifiers are a challenge the International Aid Transparency Initiative has been grappling with for a few years now. Could the Open Data Charter process finally move forward some agreement on the core data infrastructure describing the state that is needed as a foundation for accountability and anti-corruption open data action?
4) Beneficial ownership.
Beneficial ownership data reveals who is ultimately in control of, and reaping the profits from, a company. The UK is due to publish an open beneficial ownership register for the first time later this year – but there is still much to do to develop common standards for joined-up data on beneficial ownership. For example, the UK register will capture ownership information in bands at 25%, 50% and 75%, where other countries are exploring either detailed ownership percentage publication, or publication using other, non-overlapping bands. Without co-ordination on interoperability, the potential impacts of beneficial ownership open data may be much harder to secure.
5) Localised datasets and public expenditure tracking data.
In thinking about the ‘national datasets’ that governments could publish as part of a sector package for anti-corruption, it is also important to not lose sight of data being generated and shared at the local level. There are lots of lessons to learn from existing work on Public Expenditure Tracking which traces the disbursement of funds from national budgets, through layers of administration, down to local services like schools. With the funding flows posted on posters on the side of school buildings there is a clearer answer to the question: “What does this mean to me?”, and data is more clearly connected with local citizen empowerment.
Look out for updates about the anti-corruption sector package on the Open Data Charter website over the first part of 2016.
[Summary: Exploring the social and technical dynamics of aid traceability: let’s learn what we can from distributed ledgers, without thinking that all the solutions are to be found in the blockchain.]
My colleagues at Open Data Services are working at the moment on a project for UN Habitat around traceability of aid flows. With an increasing number of organisations publishing data using the International Aid Transparency Initiative data standard, and increasing amounts of government contracting and spending data available online, the theory is that it should be possible to track funding flows.
In this blog post I’ll try and think aloud about some of the opportunities and challenges for traceability.
Why follow funds?
I can envisage a number of hypothetical use cases for the traceability of aid.
Firstly, donors want to be able to understand where their money has gone. This is important for at least three reasons:
Effectiveness & impact: knowing which projects and programmes have been the most effective;
Understanding and communication: being able to see more information about the projects funded, and to present information on projects and their impacts to the public to build support for development;
Addressing fraud and corruption: identifying leakage and mis-use of funds.
Traceability is important because the relationship between donor and delivery is often indirect. A grant may pass through a number of intermediary organisations before it reaches the ultimate beneficiaries. For example, a country donor may fund a multi-lateral fund, which in turn commissions an international organisation to deliver a programme, and they in turn contract with country partners, who in turn buy in provision from local providers.
Secondly, communities where projects are funded, or where funds should have been received, may want to trace funding upwards: understanding the actors and policy agendas affecting their communities, and identifying when funds they are entitled to have not arrived (see the investigative work of Follow The Money Nigeria for a good example of this latter use case).
Short-circuiting social systems
It is important to consider the ways in which work on the traceability of funds potentially bypasses, ‘routes around’ or disrupts* (*choose your own framing) existing funding and reporting relationships – allowing donors or communities to reach beyond intermediaries to exert such authority and power over outcomes as they can exercise.
Take the example given above. We can represent the funding flows in a diagram as below:
But there is more than a one-way flow going on here. Most of the parties involved will have some sort of reporting responsibility to those giving them funds, and so reporting also flows back up the chain.
By the time reporting gets to the donor, it is unlikely to include much detail on the work of the local partners or providers (indeed, the multilateral, for example, may not report specifically on this project, just on the development co-operation in general). The INGO may even have very limited information about what happens just a few steps down the chain on the ground, having to trust intermediary reports.
In cases where there isn’t complete trust in this network of reporting, and where there are no clear mechanisms to ensure each party is exercising its responsibility for the most effective, corruption-free use of resources by the next party down, there is a clear case for being able to see through this chain: tracing funds and having a direct ability to assess impacts and risks.
Yet – it also needs to be approached carefully. Each of the relationships in this funding chain is about more than just passing on some clearly defined packet of money. Each party may bring specific contextual knowledge, skills and experience. Enabling those at the top of a funding chain to leap over intermediaries doesn’t inevitably have a positive impact: particularly given what the history of development co-operation has to teach about how power dynamics and the imposition of top-down solutions can lead to substantial harms.
None of this is a case against traceability – but it is a call for consideration of the social dynamics of traceability infrastructures, and for thinking about how to ensure contextual knowledge remains accessible when it becomes possible to traverse the links of a funding chain.
The co-ordination challenge of traceability
Right now, the IATI data standard has support for traceability at the project and transaction level.
At the project level the related-activity field can be used to indicate parent, child and co-funded activities.
At the transaction level, data on incoming funds can specify the activity-id used by the upstream organisation to identify the project the funds come from, and data on outgoing funds can specify the activity-id used by the downstream organisation.
This supports both upwards and downwards linking (e.g. a funder can publish the identifier of the funded project, or a recipient can publish the identifier of the donor project that is providing funds), but is based on explicit co-ordination and the capture of additional data.
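To make this concrete, here is a minimal sketch of what those traceability links look like in practice, and how they might be extracted. The XML fragment is an illustrative, simplified activity record (the identifiers are invented), though the `related-activity` and `provider-activity-id` fields follow the IATI standard described above.

```python
# Sketch: extracting traceability links from a simplified IATI-style
# activity record. Identifiers are invented for illustration.
import xml.etree.ElementTree as ET

IATI_XML = """
<iati-activity>
  <iati-identifier>XM-EX-NGO-PROJECT-42</iati-identifier>
  <related-activity ref="XM-EX-DONOR-PROG-1" type="1"/>
  <transaction>
    <transaction-type code="1"/>
    <provider-org ref="XM-EX-DONOR" provider-activity-id="XM-EX-DONOR-PROG-1"/>
  </transaction>
</iati-activity>
"""

def traceability_links(xml_text):
    """Return the upstream activity identifiers an activity declares."""
    activity = ET.fromstring(xml_text)
    links = []
    # Project-level link: parent/child/co-funded activities.
    for rel in activity.findall("related-activity"):
        links.append(("related-activity", rel.get("ref")))
    # Transaction-level link: the upstream publisher's activity id.
    for tx in activity.findall("transaction"):
        provider = tx.find("provider-org")
        if provider is not None and provider.get("provider-activity-id"):
            links.append(("incoming-funds", provider.get("provider-activity-id")))
    return links

print(traceability_links(IATI_XML))
# Both links here point at the same upstream activity identifier.
```

The chain only holds, of course, if the upstream publisher actually uses `XM-EX-DONOR-PROG-1` as its identifier – which is exactly the co-ordination problem the list below describes.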
As a distributed approach to the publication of open data, there are no consistency checks in IATI to ensure that providers and recipients agree on identifiers, and often there can be practical challenges to capture this data, not least that:
A) Many of the accounting systems in which transaction data is captured have no fields for upstream or downstream project identifier, nor any way of conceptually linking transactions to these externally defined projects;
B) Some parties in the funding chain may not publish IATI data, or may do so in forms that do not support traceability, breaking the chain;
C) The identifier of a downstream project may not be created at the time an upstream project assigns funds – exchanging identifiers can create a substantial administrative burden;
At the last IATI TAG meeting in Ottawa, this led to some discussion of other technologies that might be explored to address issues of traceability.
Technical utopias and practical traceability
Let’s start with a number of assorted observations:
UPS can track a package right around the world, giving me regular updates on where it is. The package has a barcode on, and is being transferred by a single company.
I can make a faster-payments bank transfer in the UK with a reference number that appears in both my bank statement and the recipient’s statement, travelling between banks in seconds. Banks leverage their trust, and use centralised third-party providers as part of data exchange and reconciling funding transfers.
When making some international transfers, I’ve had money effectively disappear from view for quite a while, with lots of time spent on the phone to sender, recipient and intermediary banks to track down the funds. Trust, digital systems and reconciliation services function less well across international borders.
Transactions on the BitCoin Blockchain are, to some extent, traceable. BitCoin is a distributed system. (Given any BitCoin ‘address’ it’s possible to go back into the public ledger and see which addresses have transferred an amount of bitcoins there, and to follow the chain onwards. If you can match an address to an identity, the currency, far from being anonymous, is fairly transparent. This is the reason for BitCoin mixer services, designed to remove the trackability of coins.)
There are reported experiments with using BlockChain technologies in a range of different settings, including for land registries.
There’s a lot of investment going into FinTech right now – exploring ways to update financial services.
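The traceability of a public ledger can be illustrated with a toy example: given any address, walk backwards through the recorded transfers. (A real Bitcoin ledger links transaction outputs rather than addresses, so this is a deliberate simplification; the addresses and amounts are invented.)

```python
# Toy public ledger: a list of (from_address, to_address, amount) records.
# Everything below is invented for illustration.
LEDGER = [
    ("addr-A", "addr-B", 5.0),
    ("addr-B", "addr-C", 3.0),
    ("addr-C", "addr-D", 2.5),
]

def upstream_addresses(address, ledger):
    """Follow transfers backwards from an address through the ledger."""
    chain = []
    current = address
    while True:
        senders = [frm for frm, to, _ in ledger if to == current]
        if not senders:
            return chain
        current = senders[0]  # toy case: one sender per address
        chain.append(current)

print(upstream_addresses("addr-D", LEDGER))
# → ['addr-C', 'addr-B', 'addr-A']
```

Once any one of those addresses is matched to a real-world identity, the whole chain is exposed – which is the point being made above.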
All of this can lead to some excitement about the potential of new technologies to render funding flows traceable. If we can trace parcels and BitCoins, the argument goes, why can’t we have traceability of public funds and development assistance?
Although I think such an argument falls down in a number of key areas (which I’ll get to in a moment), it does point towards a key component missing from the current aid transparency landscape – in the form of a shared ledger.
One of the reasons IATI is based on a distributed data publishing model, without any internal consistency checks between publishers, is prior experience in the sector of submitting data to centralised aid databases. However, peer-to-peer and blockchain-like technologies now offer a way to separate out co-ordination, and the creation of consensus on the state of the world, from the centralisation of data in a single database.
It is at least theoretically possible to imagine a world in which the data a government publishes about its transactions is only considered part of the story, and in which the recipient needs to confirm receipt in a public ledger to complete the transactional record. Transactions ultimately have two parts (sending and receipt), and open (distributed) ledger systems could offer the ability to layer an auditable record on top of the actual transfer of funds.
However (as I said, there are some serious limitations here), such a system would only be an account of the funding flows, not the flows themselves (unlike BitCoin), which still leaves space for corruption through maintaining false information in the ledger. Although trusted financial intermediaries (banks and others) could be brought into the picture, as other parties responsible for confirming transactions, it’s hard to envisage how adoption of such a system could be brought about over the short and medium term (particularly globally). Secondly, although transactions between organisations might be made more visible and traceable in this way, the transactions inside an organisation remain opaque. Working out which funds relate to which internal and external projects is still a matter of the internal business processes of the organisations involved in the aid delivery chain.
There may be other traceability systems we should be exploring as inspirations for aid and public money traceability. What my brief look at BitCoin leads me to reflect on is the potential role, over the short term, of reconciliation services that can, at the very least, report on the extent to which different IATI publishers are mutually confirming each other’s information. Over the long term, a move towards more real-time transparency infrastructures, rather than periodic data publication, might open up new opportunities – although with all sorts of associated challenges.
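The core of such a reconciliation check is simple enough to sketch. The data structures below are invented for illustration (they are not an existing IATI tool’s API): each publisher lists its transactions with the counterpart’s activity identifier, and we ask whether a donor’s outgoing record and a recipient’s incoming record confirm each other.

```python
# Sketch of a mutual-confirmation check between two publishers.
# Data shapes are invented for illustration: (type, counterpart_activity_id, amount)
donor_transactions = [("outgoing", "NGO-PROJ-7", 10000)]
ngo_transactions = [("incoming", "DONOR-PROG-2", 10000)]

def mutually_confirmed(donor_tx, ngo_tx, donor_activity_id, ngo_activity_id):
    """True if the donor's outgoing record names the recipient's activity,
    the recipient's incoming record names the donor's activity, and the
    amounts match."""
    outgoing = {(ref, amt) for typ, ref, amt in donor_tx if typ == "outgoing"}
    incoming = {(ref, amt) for typ, ref, amt in ngo_tx if typ == "incoming"}
    return any(
        (ngo_activity_id, amt) in outgoing and (donor_activity_id, amt) in incoming
        for _, _, amt in donor_tx
    )

print(mutually_confirmed(donor_transactions, ngo_transactions,
                         "DONOR-PROG-2", "NGO-PROJ-7"))  # → True
```

A reconciliation service could run checks like this across whole datasets and simply report the proportion of flows that are confirmed at both ends – useful information even where the chain is incomplete.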
Ultimately – creating traceable aid still requires labour to generate shared conceptual understandings of how particular transactions and projects relate.
How much is enough?
Let’s loop back round. In this post (as in many of the conversations I’ve had about traceability), we started with some use cases for traceability; we saw some of the challenges; we got briefly excited about what new technologies could do to provide traceability; we saw the opportunities, but also the many limitations. Where do we end up then?
I think the important thing is to loop back to our use cases, and to consider how technology can help with, but not completely solve, the problems set out. Knowing which provider organisations might have been funded through a particular donor’s money could be enough to help target investigations in cases of fraud. Or knowing all the funders who have a stake in projects in a particular country, sector and locality can be enough for communities on the ground to do further research to identify the funders they need to talk to.
Rather than searching after a traceability data panopticon, can we focus traceability-enabling practices on breaking down the barriers to specific investigatory processes?
Ultimately, in the IATI case, getting traceability to work at the project level alone could be a big boost. But doing this will require a lot of social coordination, as much as technical innovation. As we think about tools for traceability, thinking about tools that support this social process may be an important area to focus on.
In just over two weeks’ time the Open Data Institute will be convening their second ‘ODI Summit’ conference, under the banner ‘Celebrating Generation Open’.
The framing is broad, and rich in ideals:
“Global citizens who embrace network thinking
We are innovators and entrepreneurs, customers and citizens, students and parents who embrace network thinking. We are not bound by age, income or borders. We exist online and in every country, company, school and community.
Our attitudes are built on open culture. We expect everything to be accessible: an open web, open source, open cities, open government, open data. We believe in freedom to connect, freedom to travel, freedom to share and freedom to trade. Anyone can publish, anyone can broadcast, anyone can sell things, anyone can learn and everyone can share.
With this open mindset we transform sectors around the world, from business to art, by promoting transparency, accessibility, innovation and collaboration.”
But, it’s not just idealistic language. Right across the programme are projects which are putting those ideals into action in concrete ways. I’m fortunate to get to spend some of my time working with a number of the projects and people who will be presenting their work, including:
Plus, my fellow co-founder at Open Data Services Co-operative, Ben Webb, will be speaking on some of the work we’ve been doing to support Open Contracting, 360Giving and projects with the Natural Resource Governance Institute.
Across the rest of the Summit there are also presentations on open data in arts, transport, biomedical research, journalism and safer surfing, to name just a few.
What is striking about this line-up is that very few of these projects will be presenting one-off demonstrations; instead they will be sharing increasingly mature projects: projects which are increasingly diverse, recognising that data is only one element of a theory of change, and that being embedded in specific sectoral debates and action is just as important.
In some ways, it raises the question of how much a conference on open data in general can hold together: with so many different domains represented, is open data a strong enough thread to bind them together? On this question, I’m looking forward to Becky Hogge’s reflections when she launches a new piece of research at the Summit, five years on from her widely cited Open Data Study. In a preview of her new report, Becky argues that “It’s time for the open data community to stop playing nice” – moving away from trying to tie together divergent economic and political agendas, and putting full focus into securing and using data for specific change.
With ‘generation open’ announced: the question for us then is how does generation open cope with growing up. As the projects showcased at the summit move beyond the rhetoric, and we see that whilst in theory ‘anyone can do anything’ with data – in practice, access and ability is unequally distributed – how will debates over the ends to which we use the freedoms brought by ‘open’ play out?
Respondents to the paper have pointed to the way in which, in situations of unequal power, and in complex global markets, greater accessibility of data can have substantial downsides for farmers. For example, commodity speculation based on open weather data can drive up food prices, or open data on soil profiles can be used to extract greater margins from farmers when selling fertilizers. A number of responses to the GODAN paper have noted that much of the information that feeds into emerging models of data-driven agriculture is coming from small-scale farmers themselves: whether through statistical collection by governments, or hoovered up by providers of farming technology, all aggregated into big datasets that are practically inaccessible to local communities and farmers.
This has led some, in response, to focus on the concept of data ownership: asserting that more emphasis should be placed on community ownership of the data generated at a local level. Equally, it has led to the argument that “opening data without enabling effective, equitable use can be considered a form of piracy”, making direct allusions to the biopiracy debate and the consequent responses to such concerns in the form of interventions such as the International Treaty on Plant Genetic Resources.
There are valid concerns here. Efforts to open up data must be interrogated to understand which actors stand to benefit, and to identify whether the configuration of openness sought is one that will promote the outcomes claimed. However, claims of data ownership and data sovereignty need to be taken as a starting point for designing better configurations of openness, rather than as a blocking counter-claim to ideas of open data.
Community ownership and openness
My thinking on this topic is shaped, albeit not to a set conclusion, by a debate that took place last year at a Berkman Centre Fellows Hour based on a presentation by Pushpa Kumar Lakshmanan on the Nagoya Protocol which sets out a framework for community ownership and control over genetic resources.
The debate raised the tension between the rights of communities to gain benefits from the resources and knowledge that they have stewarded, potentially over centuries, with an open knowledge approach that argues social progress is better served when knowledge is freely shared.
It also raised important questions of how communities can be demarcated (a long-standing and challenging issue in the philosophy of community rights) – and whether drawing a boundary to protect a community from external exploitation risks leaving internal patterns of power and exploitation within the community unexplored. For example, might community ownership of data simply lead to certain elites within the community controlling it?
Ultimately, the debate taps into a conflict between those who see the greatest risk as being the exploitation of local communities by powerful economic actors, and those who see the greater risk as a conservative hoarding of knowledge in local communities in ways that inhibit important collective progress.
Exploring ownership claims
It is useful to note that much of the work on the Nagoya Protocol that Pushpa described was centred on controlling borders to regulate the physical transfer of plant genetic material. Thinking about rights over intangible data raises a whole new set of issues: ownership cannot just be filtered through a lens of possession and physical control.
Much data is relational. That is to say that it represents a relationship between two parties, or represents objects that may stand in ownership relationships with different parties. For example, in his response to the GODAN paper, Ajit Maru reports how “John Deere now considers its tractors and other equipment as legally ‘software’ and not a machine… [and] claims [this] gives them the right to use data generated as ‘feedback’ from their machinery”. Yet, this data about a tractor’s operation is also data about the farmer’s land, crops and work. The same kinds of ‘trade data for service’ concerns that have long been discussed with reference to social media websites are becoming an increasing part of the agriculture world. The concern here is with a kind of corporate data-grab, in which firms extract data, asserting their absolute ownership over something which is primarily generated by the farmer, and which is at best a co-production of farmer and firm.
It is in response to this kind of situation that grassroots data ownership claims are made.
These ownership claims can vary in strength. For example:
The first runs that ‘this is my data’: I should have ultimate control over how it is used, and the ability to treat it as a personally held asset;
The second runs that ‘I have a stake in this data’, and as a consequence, I should have access to it, and a say in how it is used;
Which claim is relevant depends very much on the nature of the data. For example, we might allow ownership claims over data about the self (personal data), and the direct property of an individual. For datasets that are more clearly relational, or collectively owned (for example, local statistics collected by agricultural extension workers, or weather data funded by taxation), the stakeholding claim is the more relevant.
It is important at this point to note that not all (perhaps even not many) concerns about the potential misuse of data can be dealt with effectively through a property right regime. Uses of data to abuse privacy, or to speculate and manipulate markets may be much better dealt with by regulations and prohibitions on those activities, rather than attempts to restrict the flow of data through assertions of data ownership.
Openness as a strategy
Once we know whether we are dealing with ownership claims, or stakeholding claims, in data, we can start thinking about different strategic configurations of openness, that take into account power relationships, and that seek to balance protection against exploitation, with the benefits that can come from collaboration and sharing.
For example, each farmer on their own has limited power vis-a-vis a high-tech tractor maker like John Deere. Even if they can assert a right to access their own data, John Deere will most likely retain the power to aggregate data from 1000s of farmers, maintaining an inequality of access to data vis-a-vis the farmer. If the farmer seeks to deny John Deere the right to aggregate their data with that of others, chances are that (a) they will be unsuccessful, as making an absolute ownership claim here is difficult – using the tractor was a choice after all; and (b) they will potentially inhibit useful research and uses of data that could improve cropping (even if some of the other uses of the data may run counter to the farmer’s interest). Some have suggested that creating a market in the data, where the data aggregator would pay the farmers for the ability to use their data, offers an alternative path here: but it is not clear that the price would compensate the farmer adequately, or lead to an efficient re-use of data.
However, in this setting openness potentially offers an alternative strategy. If farmers argue that they will only give data to John Deere if John Deere makes the aggregated data open, then they have the chance to challenge the asymmetry of power that otherwise develops. A range of actors and intermediaries can then use this data to provide services in the interests of the farmers. Both the technology provider, and the farmer, get access to the data in which they are both stakeholders.
This strategy (“I’ll give you data only if you make the aggregate set of data you gather open”), may require collective action from farmers. This may be the kind of arrangement GODAN can play a role in brokering, particularly as it may also turn out to be in the interest of the firm as well. Information economics has demonstrated how firms often under-share information which, if open, could lead to an expansion of the overall market and better equilibria in which, rather than a zero-sum game, there are benefits to be shared amongst market actors.
There will, however, be cases in which the power imbalances between data providers and those who could exploit the data are too large. For example, the above discussion assumes intermediaries will emerge who can help make effective use of aggregated data in the interests of farmers. Sometimes (a) the greatest use will need to be based on analysis of disaggregated data, which cannot be released openly; and (b) data providers need to find ways to work together to make use of data. In these cases, there may be a lot to learn from the history of commons and co-operative structures in the agricultural realm.
Co-operative and commons based strategies
Many discussions of openness conflate the concept of openness, and the concept of the commons. Yet there is an important distinction. Put crudely:
Open = anyone is free to use/re-use a resource;
Commons = mutual rights and responsibilities towards the resource;
In the context of digital works, Creative Commons provides a suite of licenses for content, some of which are ‘open’ (they place no responsibilities on users of a resource, but grant broad rights), and others of which adopt a more regulated commons approach, placing certain obligations on re-users of a document, photo or dataset, such as the responsibility to attribute the source and to share any derivative work under the same terms.
Creative Commons draws its imagery from the physical commons. These commons often took the form of land over which farmers held certain rights to graze cattle, or fisheries in which each fisher took shared responsibility for avoiding overfishing. Such commons are, in practice, highly regulated spaces – but ones that pursue an approach based on sharing and stakeholding in resources, rather than absolute ownership claims. As we think about data resources in agriculture, reflecting more on lessons from the commons is likely to prove fruitful. Of course, data, unlike land, is not finite in the same ways, nor does it have the same properties of excludability and rivalrousness.
In thinking about how to manage data commons, we might look towards another feature prevalent in agricultural production: that of the cooperative. The core idea of a data cooperative is that data can be held in trust by a body collectively owned by those who contribute the data. Such data cooperatives could help manage the boundary between data that is made open at some suitable level of aggregation, and data that is analysed and used to generate products of use to those contributing the data.
With Open Data Services Co-operative I’ve just started to dig deeper into the cooperative movement, co-founding a workers cooperative that supports open data projects. We’ve also been thinking about how data cooperatives might work – and I’m certain there is scope for a lot more work in this area, helping to address some of the critical questions for open data raised in the GODAN discussion paper.
Since the conference, we’ve been hard at work on a synthesis of the conference discussions, drawing on over 30 hours of video coverage, hundreds of slide decks and blog posts, and thousands of tweets, to capture some of the key issues discussed, and to put together a roadmap of priority areas for action.
The report was only made possible through the work of a team of volunteers – acting as rapporteurs for each session and blogging their reflections – and session organisers, preparing provocation blog posts in advance. That meant that in working to produce a synthesis of the different conferences I not only had video recordings and tweets from most sessions, but I also had diverse views and take-away insights written up by different participants, ensuring that the report was not just about what I took from the conference materials – but that it was shaped by different delegates’ views. In the Fold version of the report I’ve tried to link out to the recordings and blog posts to provide extra context in many sections – particularly in the ‘Data Plus’ section, which covers open data in a range of contexts, from agriculture, to fiscal transparency and indigenous rights.
One of the most interesting, and challenging, sections of the report to compile has been the Roadmap for Action. The preparation for this began long in advance of the International Open Data Conference. Based on submissions to the conference open call, a set of action areas were identified. We then recruited a team of ‘action anchors’ to help shape inputs, provocations and conference workshops that could build upon the debates and case studies shared at the conference and its pre-events, and then look forward to set out an agenda for future collaboration and action in these areas. This process surfaced ideas for action at many different levels: from big-picture programmes, to small and focussed collaborative projects. In some areas, the conference could focus on socialising existing concrete proposals. In other areas, the need has been for moving towards a shared vision, even if the exact next steps on the path there are not yet clear.
The agenda for action
Ultimately, in the report, the eight action areas explored at IODC2015 are boiled down to five headline categories in the final chapter, each with a couple of detailed actions underneath:
Shared principles for open data: “Common, fundamental principles are vital in order to unlock a sustainable supply of high quality open data, and to create the foundations for inclusive and effective open data use. The International Open Data Charter will provide principles for open data policy, relevant to governments at all levels of development and supported by implementation resources and working groups.”
Good practices and open standards for data publication: “Standards groups must work together for joined up, interoperable data, and must focus on priority practices rooted in user needs. Data publishers must work to identify and adopt shared standards and remove the technology and policy barriers that are frequently preventing data reuse.”
Building capacity to produce and use open data effectively: “Government open data leaders need increased opportunities for networking and peer-learning. Models are needed to support private sector and civil society open data champions in working to unlock the economic and social potential of open data. Work is needed to identify and embed core competencies for working with open data within existing organizational training, formal education, and informal learning programs.”
Strengthening open data innovation networks: “Investment, support, and strategic action is needed to scale social and economic open data innovations that work. Organizations should commit to using open data strategies in addressing key sectoral challenges. Open data innovation networks and thematic collaborations in areas such as health, agriculture, and parliamentary openness will facilitate the spread of ideas, tools, and skills— supporting context-aware and high-impact innovation exchange.”
Adopting common measurement and evaluation tools: “Researchers should work together to avoid duplication, to increase the rigour of open data assessments, and to build a shared, contextualized, evidence base on what works. Reusable methodological tools that measure the supply, use, and outcomes of open data are vital. To ensure the data revolution delivers open data, open data assessment methods must also be embedded within domain-specific surveys, including assessments of national statistical data. All stakeholders should work to monitor and evaluate their open data activities, contributing to research and shared learning on securing the greatest social impact for an open data revolution.”
In the full report, more detailed actions are presented in each of these categories. The true test of the roadmap will come with the 2016 International Open Data Conference, where we will be able to look at progress made in each of these areas, and to see whether action on open data is meeting the challenge of securing increased impact, sustainability and inclusiveness.
[Summary: Brief notes exploring a strategic and service-based approach to improve IATI data quality]
Filed under: rough ideas
At the International Aid Transparency Initiative (IATI) Technical Advisory Group meeting (#tag2015) in Ottawa last week I took part in two sessions exploring the need for Application Programming Interfaces (APIs) onto IATI data. It quickly became clear that there were two challenges to address:
(1) Many of the questions people around the table were asking were complex queries, not the simple data retrieval kinds of questions that an API is well suited to;
(2) ‘Out of the box’ IATI data is often not able to answer the kinds of questions being asked, either because
(a) the quality and consistency of data from distributed sources means that there are a range of special cases to handle when performing cross-donor analysis;
(b) the questions asked require additional data preparation, such as currency conversion, or identifying the block of codes that relate to a particular sector (e.g. identifying all the Water and Sanitation related codes).
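To make this kind of preparation concrete, here is a minimal sketch of the two steps mentioned above. The exchange rates, function names and sector-code grouping are my own assumptions for illustration; real analysis would use dated exchange rates and the official DAC sector code lists.

```python
# Hypothetical data preparation steps for cross-donor IATI analysis.
# Rates and the sector grouping below are illustrative assumptions only.
RATES_TO_USD = {"GBP": 1.5, "EUR": 1.1, "USD": 1.0}

def to_usd(amount: float, currency: str) -> float:
    """Normalise a transaction value to USD using a (static, assumed) rate."""
    return amount * RATES_TO_USD[currency]

def is_water_sanitation(sector_code: str) -> bool:
    """Assumed grouping: 5-digit DAC codes beginning '140' cover
    Water Supply & Sanitation."""
    return sector_code.startswith("140")

print(to_usd(200.0, "GBP"))          # 300.0
print(is_water_sanitation("14030"))  # True
```

Even these two small steps hide judgement calls (which rate date to use, which codes belong to a sector), which is part of why ‘out of the box’ data struggles to answer real questions.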
These challenges also underlie the wider issue explored at TAG2015: that even though five years of effort have gone into data supply, few people are actually using IATI data day-to-day.
If the goal of the International Aid Transparency Initiative as a whole, as distinct from the specific goal of securing data supply, is more informed decision making in the sector, then this got me thinking about the extent to which what we need right now is a primary focus on services, rather than data and tools – and, from that, about whether intelligent funding of such services could create the right kinds of pressures for improving data quality.
Improving data through enquiries
Using any dataset to answer complex questions takes both domain knowledge and knowledge of the data. Development agencies might have lots of one-off and ongoing questions, from “Which donors are spending on Agriculture and Nutrition in East Africa?”, to “What pipeline projects are planned in the next six months affecting women and children in Least Developed Countries?”. Against a suitably cleaned up IATI dataset, reasonable answers to questions like these could be generated with carefully written queries. Authoritative answers might require further cleaning and analysis of the data retrieved.
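As an illustration of how short such a query can be once the data is cleaned, here is a sketch against a hypothetical flattened view of activity data. The schema, donor names and country grouping are invented for the example; real IATI data is XML with many more fields and edge cases.

```python
# Illustrative only: a hypothetical flattened, pre-cleaned activity table.
# Assumes currencies have already been normalised to USD and sector
# codes mapped to readable labels.
activities = [
    {"donor": "DonorA", "sector": "Agriculture", "country": "KE", "usd": 1_200_000},
    {"donor": "DonorB", "sector": "Health",      "country": "KE", "usd": 800_000},
    {"donor": "DonorC", "sector": "Agriculture", "country": "TZ", "usd": 500_000},
]

# Assumed country grouping for the example question.
EAST_AFRICA = {"KE", "TZ", "UG", "RW", "BI"}

def donors_by_sector(rows, sector, countries):
    """Total spend per donor for one sector within a set of countries."""
    totals = {}
    for row in rows:
        if row["sector"] == sector and row["country"] in countries:
            totals[row["donor"]] = totals.get(row["donor"], 0) + row["usd"]
    return totals

print(donors_by_sector(activities, "Agriculture", EAST_AFRICA))
# {'DonorA': 1200000, 'DonorC': 500000}
```

The query itself is trivial; the hard-won knowledge is in the cleaning and the mappings it assumes, which is exactly the expertise an enquiries service would accumulate.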
For someone working with a dataset every day, such queries might take anything from a few minutes to a few hours to develop and execute. Cleaning data to provide authoritative answers might take a bit longer.
For a programme officer, who has the question but not the knowledge of the data structures, working out how to answer these questions might take days. In fact, the learning curve means such questions are often simply not asked. Yet having the answers could save months of work and millions of dollars.
So – what if key donors sponsored an enquiries service that could answer these kinds of queries on demand? With the right funding structure, it could have incentives not only to provide better data on request, but also to put resources into improving data quality and tooling. For example: if there is a set price paid per enquiry successfully answered, and the cost of answering that enquiry is increased by poor data quality from publishers, then there is an incentive on the service to invest some of its time in improving incoming data quality. How to prioritise such investments would be directly connected to user demand: if all the questions are made trickier to answer because of a particular donor’s data, then focussing on improving that data first makes most sense. This helps escape the current situation in which the goal is to seek perfection for all data. Beyond a certain point, the political pressures to publish may cease to work to increase data quality, whereas requests to improve data that are directly connected to user demand and questions may have greater traction.
Of course, the incentive structures here are subtle: the quickest solution for an enquiry service might be to clean up data as it comes into its own data store, rather than trying to improve data at source. There remains a desire in open data projects to avoid creating single centralised databases, and to increase the resilience of the ecosystem by improving the original open data, which cuts against this strategy. This would need to be worked through in any full proposal.
I’m not sure what appetite there would be for a service like this – but I’m certain that in what are ultimately niche open data ecosystems, like IATI, strategic interventions will be needed to build the markets, services and feedback loops that will lead to their survival.
(These notes are numbered for each of the key frames in the slide deck. You can move horizontally through the deck with the right arrow, or through each section with the down arrow. Hit escape when viewing the deck to get an overview. Or just hit space bar to go through as I did when presenting…)
(1) I’m Tim. I’ve been following the open data field as both a practitioner and a social researcher over the last five years. Much of this work as part of my PhD studies, and through my time as a fellow and affiliate at the Berkman Centre.
(2) First, let’s get out of the way the ‘trend’ that often gets talked about somewhat breathlessly: the rapid growth of open data from niche idea to part of the policy mainstream. I want to look at five more critical trends, emerging now, and to look at their future.
(3) First trend: the move from engagement with open data to solve problems, to a focus on infrastructure building – and the need to complete a cyclical move back again. Most people I know got interested in open data because of a practical issue, often a political issue, where they wanted data. The data wasn’t there, so they joined action to make it available. This can cycle into ongoing work on building the infrastructure of data needed to solve a problem – but there is a risk that the original problems get lost, and energy goes into infrastructure alone. There is a growing discourse about reconnecting to action. The key is to recognise data problem solving and data infrastructure building as two distinct forms of open data action: complementary, but also in creative tension.
(4) Second trend: there are many forms of open data initiative, and growing data divides. For more on this, see the Open Data Barometer 2015 report, and this comparison of policies across six countries. Canada was up one place in the rankings from the first to the second edition of the ODB. But that mainly looks at a standard model of doing open data. Too often we’re exporting an idea of open data based on ‘Data Portal + License + Developers & Apps = Open Data Initiative’ – but we need to recognise that there are many different ways to grow open data initiatives and activity, and to open up space for a new wave of innovation, rather than embedding the results of our first years’ experimentation as best practice.
(5) Third trend: the Open Data Barometer hints that impact is strongest where there are local initiatives. Urban initiatives? How do we ensure that we’re not designing initiatives that can only achieve impact with a critical mass of developers, community activists and supporting infrastructures?
(6) Fourth trend: There is a growing focus on data standards. We’ve moved beyond ‘Raw Data Now’ to see data publishers thinking about standards on everything from public budgets, to public transit, public contracts and public toilets. But when we recognise that our data is being sliced, diced and cooked, are we thinking about who it is being prepared for? Who is included, and who is excluded? (Remember, Raw Data is an Oxymoron.) Even some of the basics of how to do diverse open data are not well resolved right now. How do we do multilingual data, for example? Or how do we find measurement standards to assess open data in federal systems? Canada has a role as a well-resourced multilingual country in finding good solutions here.
(7) Fifth trend: There are bigger agendas on the policy scene right now than open data. But open data is still a big idea. Open data has been overtaken in many settings by talk of big data, smart cities, data revolutions and the possibility of data-driven governance. In the recent African Data Consensus process, 15 different ‘data communities’ were identified, from land data, and geo-data communities, to health data and conflict data communities. Open data was framed as another ‘data community’. Should we be seeing it this way? Or as an ethic and approach to be brought into all these different thematic areas: a different way of doing data – not another data domain. We need to look to the ideas of commons, and the power to create and collaborate that treating our data as a common resource can unlock. We need to reclaim the politics of open data as an idea that challenges secrecy, and that promotes a foundation for transparency, collaboration and participation. Only with this can we critique these bigger trends with the open data idea – and struggle for a context in which we are not database objects in the systems of the state, but are collaborating, self-determining, sovereign citizens.
(8) Recap & take-aways:
Embed open data in wider change
Innovate and experiment with different open data practices
Build community to unlock the impact of open data
Include users in shaping open data standards
Combine problem solving and infrastructure building
As open data becomes firmly cemented in the policy mainstream, there is a pressing need to dig deeper into the dynamics of how open data operates in practice, and the theoretical roots of open data activities. Researchers across the world have been looking at these issues, and this workshop offers an opportunity to bring together and have shared dialogue around completed studies and work-in-progress.
Submissions are invited on themes including:
Theoretical framing of open data as a concept and a movement;
Use and impacts of open data in specific countries or specific sectors, including, but not limited to: government agencies, cities, rural areas, legislatures, judiciaries, and the domains of health, education, transport, finance, environment, and energy;
The making, implementation and institutionalisation of open data policy;
Capacity building for wider availability and use of open data;
Conceptualising open data ecosystems and intermediaries;
Entrepreneurial usage and open data economies in developing countries;
Linkages between transparency, freedom of information and open data communities;
Measurement of open data policy and practices;
Critical challenges for open data: privacy, exclusion and abuse;
Situating open data in global governance and developmental context;
Development and adoption of technical standards for open data;
Submissions are invited from all disciplines, though with an emphasis on empirical social research. PhD students, independent and early career researchers are particularly encouraged to submit abstracts. Panels will provide an opportunity to share completed or in-progress research and receive constructive feedback.
Extended abstracts, in French, English, Spanish or Portuguese, of up to two pages, detailing the question addressed by the research, methods employed and findings should be submitted by February 28th 2015. Notifications will be provided by March 31st. Full papers will be due by May 1st.
Authors of accepted abstracts will be invited to submit full papers. These should be a maximum of 20 single-spaced pages, exclusive of bibliography and appendixes. As an interdisciplinary and international workshop we welcome papers in a variety of formats and languages: French, English, Spanish and Portuguese. However, abstracts and paper presentations will need to be given in English.
Full papers should be provided in .odt, .doc, .rtf, or .html format. Where relevant, we encourage authors to also share in a repository, and link to, data collected as part of their research.
We are working to identify a journal special issue or other opportunity for publication of selected papers.
The Open Data Research Network was established in 2012 as part of the Exploring the Emerging Impacts of Open Data in Developing Countries (ODDC) project. It maintains an active newsletter, website and LinkedIn group, providing a space for researchers, policy makers and practitioners to interact.
This workshop will also include an opportunity to find out how to get involved in the Network as it transitions to a future model, open to new members and partners, and with a new governance structure.