Category Archives: Open Data

Data, openness, community ownership and the commons

[Summary: reflections on responses to the GODAN discussion paper on agricultural open data, ownership and the commons – posted ahead of Africa Open Data Conference GODAN sessions]

Photo Credit - CC-BY - South Africa Tourism


Key points

  • We need to distinguish between claims to data ownership, and claims to be a stakeholder in a dataset;
  • Ownership is a relevant concept for a limited range of datasets;
  • Openness can be a positive strategy, empowering farmers vis-a-vis large corporate interests;
  • Openness is not universally good: can also be used as a ‘data grab’ strategy;
  • We need to think critically about the configurations of openness we are promoting;
  • Commons and cooperative based strategies for managing data and open data are a key area for further exploration;

Open or owned data?

Following the publication of a discussion paper by the ODI for the Global Open Data for Agriculture and Nutrition initiative, putting forward a case for how open data can help improve agriculture, food and nutrition, debate has been growing about how open data should be approached in the context of smallholder agriculture. In this post, I explore some provisional reflections on that debate.

Respondents to the paper have pointed to the way in which, in situations of unequal power, and in complex global markets, greater accessibility of data can have substantial downsides for farmers. For example, commodity speculation based on open weather data can drive up food prices, or open data on soil profiles can be used to extract greater margins from farmers when selling fertilizers. A number of responses to the ODI paper have noted that much of the information that feeds into emerging models of data-driven agriculture comes from small-scale farmers themselves: whether collected through government statistics, or hoovered up by providers of farming technology, it is aggregated into big datasets that are practically inaccessible to local communities and farmers.

This has led some respondents to focus on the concept of data ownership: asserting that more emphasis should be placed on community ownership of the data generated at a local level. Equally, it has led to the argument that “opening data without enabling effective, equitable use can be considered a form of piracy”, making direct allusion to the biopiracy debate, and to the interventions that debate produced, such as the International Treaty on Plant Genetic Resources.

There are valid concerns here. Efforts to open up data must be interrogated to understand which actors stand to benefit, and to identify whether the configuration of openness sought is one that will promote the outcomes claimed. However, claims of data ownership and data sovereignty need to be taken as a starting point for designing better configurations of openness, rather than as a blocking counter-claim to ideas of open data.

Community ownership and openness

My thinking on this topic is shaped, albeit not to a set conclusion, by a debate that took place last year at a Berkman Centre Fellows Hour based on a presentation by Pushpa Kumar Lakshmanan on the Nagoya Protocol which sets out a framework for community ownership and control over genetic resources.

The debate raised the tension between the rights of communities to gain benefits from the resources and knowledge that they have stewarded, potentially over centuries, and an open knowledge approach that argues social progress is better served when knowledge is freely shared.

It also raised important questions about how communities can be demarcated (a long-standing and challenging issue in the philosophy of community rights) – and whether drawing a boundary to protect a community from external exploitation risks leaving internal patterns of power and exploitation within the community unexamined. For example, does community ownership of data really just lead to certain elites in the community controlling it?

Ultimately, the debate taps into a conflict between those who see the greatest risk as being the exploitation of local communities by powerful economic actors, and those who see the greater risk as a conservative hoarding of knowledge in local communities in ways that inhibit important collective progress.

Exploring ownership claims

It is useful to note that much of the work on the Nagoya Protocol that Pushpa described was centred on controlling borders to regulate the physical transfer of plant genetic material. Thinking about rights over intangible data raises a whole new set of issues: ownership cannot just be filtered through a lens of possession and physical control.

Much data is relational. That is to say that it represents a relationship between two parties, or represents objects that may stand in ownership relationships with different parties. For example, in his response to the GODAN paper, Ajit Maru reports how “John Deere now considers its tractors and other equipment as legally ‘software’ and not a machine… [and] claims [this] gives them the right to use data generated as ‘feedback’ from their machinery”. Yet this data about a tractor’s operation is also data about the farmer’s land, crops and work. The same kinds of ‘trade data for service’ concerns that have long been discussed with reference to social media websites are becoming an increasing part of the agriculture world. The concern here is with a kind of corporate data-grab, in which firms extract data, asserting absolute ownership over something which is primarily generated by the farmer, and which is at best a co-production of farmer and firm.

It is in response to this kind of situation that grassroots data ownership claims are made.

These ownership claims can vary in strength. For example:

  • The first runs that ‘this is my data’: I should have ultimate control over how it is used, and the ability to treat it as a personally held asset;

  • The second runs that ‘I have a stake in this data’: as a consequence, I should have access to it, and a say in how it is used;

Which claim is relevant depends very much on the nature of the data. For example, we might allow ownership claims over data about the self (personal data), and the direct property of an individual. For datasets that are more clearly relational, or collectively owned (for example, local statistics collected by agricultural extension workers, or weather data funded by taxation), the stakeholding claim is the more relevant.

It is important at this point to note that not all (perhaps even not many) concerns about the potential misuse of data can be dealt with effectively through a property right regime. Uses of data to abuse privacy, or to speculate and manipulate markets may be much better dealt with by regulations and prohibitions on those activities, rather than attempts to restrict the flow of data through assertions of data ownership.

Openness as a strategy

Once we know whether we are dealing with ownership claims, or stakeholding claims, in data, we can start thinking about different strategic configurations of openness, that take into account power relationships, and that seek to balance protection against exploitation, with the benefits that can come from collaboration and sharing.

For example, each farmer on their own has limited power vis-a-vis a high-tech tractor maker like John Deere. Even if they can assert a right to access their own data, John Deere will most likely retain the power to aggregate data from thousands of farmers, maintaining an inequality of access to data vis-a-vis the farmer. If the farmer seeks to deny John Deere the right to aggregate their data with that of others, chances are that (a) they will be unsuccessful, as making an absolute ownership claim here is difficult – using the tractor was a choice after all; and (b) they will potentially inhibit useful research and use of data that could improve cropping (even if some of the other uses of the data may run counter to the farmer’s interest). Some have suggested that creating a market in the data, where the data aggregator would pay farmers for the ability to use their data, offers an alternative path here: but it is not clear that the price would compensate the farmer adequately, or lead to an efficient re-use of data.

However, in this setting openness potentially offers an alternative strategy. If farmers argue that they will only give data to John Deere if John Deere makes the aggregated data open, then they have the chance to challenge the asymmetry of power that otherwise develops. A range of actors and intermediaries can then use this data to provide services in the interests of the farmers. Both the technology provider, and the farmer, get access to the data in which they are both stakeholders.

This strategy (“I’ll give you data only if you make the aggregate set of data you gather open”), may require collective action from farmers. This may be the kind of arrangement GODAN can play a role in brokering, particularly as it may also turn out to be in the interest of the firm as well. Information economics has demonstrated how firms often under-share information which, if open, could lead to an expansion of the overall market and better equilibria in which, rather than a zero-sum game, there are benefits to be shared amongst market actors.

There will, however, be cases in which the power imbalances between data providers and those who could exploit the data are too large. For example, the above discussion assumes intermediaries will emerge who can help make effective use of aggregated data in the interests of farmers. Sometimes (a) the greatest use will need to be based on analysis of disaggregated data, which cannot be released openly; and (b) data providers need to find ways to work together to make use of data. In these cases, there may be a lot to learn from the history of commons and co-operative structures in the agricultural realm.

Co-operative and commons based strategies

Many discussions of openness conflate the concept of openness, and the concept of the commons. Yet there is an important distinction. Put crudely:

  • Open = anyone is free to use/re-use a resource;
  • Commons = mutual rights and responsibilities towards the resource;

In the context of digital works, Creative Commons provides a suite of licenses for content, some of which are ‘open’ (they place no responsibilities on users of a resource, but grant broad rights), and others of which adopt a more regulated commons approach, placing certain obligations on re-users of a document, photo or dataset, such as the responsibility to attribute the source, or to share any derivative work under the same terms.

Creative Commons draws upon imagery from the physical commons: often land over which farmers held certain rights to graze cattle, or fisheries in which each fisher took shared responsibility for avoiding overfishing. Such commons are, in practice, highly regulated spaces – but spaces that pursue an approach based on sharing and stakeholding in resources, rather than absolute ownership claims. As we think about data resources in agriculture, reflecting more on lessons from the commons is likely to prove fruitful. Of course, data, unlike land, is not finite in the same ways, nor does it have the same properties of excludability and rivalrousness.

In thinking about how to manage data commons, we might look towards another feature prevalent in agricultural production: that of the cooperative. The core idea of a data cooperative is that data can be held in trust by a body collectively owned by those who contribute the data. Such data cooperatives could help manage the boundary between data that is made open at some suitable level of aggregation, and data that is analysed and used to generate products of use to those contributing the data.
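To make the cooperative idea more concrete, here is a minimal sketch of how a data cooperative might publish only suitably aggregated figures openly, while member-level records stay inside the co-op. The field names, suppression threshold and figures are all illustrative assumptions, not drawn from any real cooperative:

```python
# Sketch: a data cooperative publishing only aggregated figures openly,
# while member-level records stay inside the co-op. Field names and the
# suppression threshold are illustrative assumptions.

from collections import defaultdict

MIN_MEMBERS = 3  # suppress aggregates that cover too few farms

member_records = [
    {"region": "North", "yield_t_per_ha": 2.0},
    {"region": "North", "yield_t_per_ha": 3.0},
    {"region": "North", "yield_t_per_ha": 4.0},
    {"region": "South", "yield_t_per_ha": 5.0},
]

# Group member-level records by region (internal view only)
by_region = defaultdict(list)
for record in member_records:
    by_region[record["region"]].append(record["yield_t_per_ha"])

# Only regions with enough contributing members make it into the open release
open_release = {
    region: sum(values) / len(values)
    for region, values in by_region.items()
    if len(values) >= MIN_MEMBERS
}
```

The suppression threshold is the kind of boundary rule a cooperative's members could set collectively: detailed enough aggregates to be useful openly, without exposing any individual member's data.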

With Open Data Services Co-operative I’ve just started to dig deeper into the cooperative movement: co-founding a workers’ cooperative that supports open data projects. However, we’ve also been thinking about how data cooperatives might work – and I’m certain there is scope for a lot more work in this area, helping to address some of the critical questions that the GODAN discussion paper has raised for open data.

Enabling the Data Revolution: IODC 2015 Conference Report

The International Open Data Conference in Ottawa in May this year brought together over 200 speakers and close to 1000 in-person attendees to explore the open data landscape. I had the great privilege of working with the conference team to co-ordinate a series of sessions designed to weave together discussions from across the conference into a series of proposals for action, supporting shared action to take forward a progressive open data agenda. From the Open Data Research Symposium and Data Standards Day and other pre-events, to the impact presentations, panel discussions and individual action track sessions, a wealth of ideas were introduced and explored.

Since the conference, we’ve been hard at work on a synthesis of the conference discussions, drawing on over 30 hours of video coverage, hundreds of slide decks and blog posts, and thousands of tweets, to capture some of the key issues discussed, and to put together a roadmap of priority areas for action.

The result has just been published in English and French as a report for download, and as an interactive copy on Fold: embedding video and links alongside the report section by section.

Weaving it together

The report was only made possible through the work of a team of volunteers – acting as rapporteurs for each session and blogging their reflections – and session organisers, preparing provocation blog posts in advance. That meant that in working to produce a synthesis of the different conferences I not only had video recordings and tweets from most sessions, but I also had diverse views and take-away insights written up by different participants, ensuring that the report was not just about what I took from the conference materials – but that it was shaped by different delegates’ views. In the Fold version of the report I’ve tried to link out to the recordings and blog posts to provide extra context in many sections – particularly in the ‘Data Plus’ section which covers open data in a range of contexts, from agriculture, to fiscal transparency and indigenous rights.

One of the most interesting, and challenging, sections of the report to compile has been the Roadmap for Action. The preparation for this began long in advance of the International Open Data Conference. Based on submissions to the conference open call, a set of action areas were identified. We then recruited a team of ‘action anchors’ to help shape inputs, provocations and conference workshops that could build upon the debates and case studies shared at the conference and its pre-events, and then look forward to set out an agenda for future collaboration and action in these areas. This process surfaced ideas for action at many different levels: from big-picture programmes, to small and focussed collaborative projects. In some areas, the conference could focus on socialising existing concrete proposals. In other areas, the need has been for moving towards shared vision, even if the exact next steps on the path there are not yet clear.

The agenda for action

Ultimately, in the report, the eight action areas explored at IODC2015 are boiled down to five headline categories in the final chapter, each with a couple of detailed actions underneath:

  • Shared principles for open data: “Common, fundamental principles are vital in order to unlock a sustainable supply of high quality open data, and to create the foundations for inclusive and effective open data use. The International Open Data Charter will provide principles for open data policy, relevant to governments at all levels of development and supported by implementation resources and working groups.”
  • Good practices and open standards for data publication: “Standards groups must work together for joined up, interoperable data, and must focus on priority practices rooted in user needs. Data publishers must work to identify and adopt shared standards and remove the technology and policy barriers that are frequently preventing data reuse.”
  • Building capacity to produce and use open data effectively: “Government open data leaders need increased opportunities for networking and peer-learning. Models are needed to support private sector and civil society open data champions in working to unlock the economic and social potential of open data. Work is needed to identify and embed core competencies for working with open data within existing organizational training, formal education, and informal learning programs.”
  • Strengthening open data innovation networks: “Investment, support, and strategic action is needed to scale social and economic open data innovations that work. Organizations should commit to using open data strategies in addressing key sectoral challenges. Open data innovation networks and thematic collaborations in areas such as health, agriculture, and parliamentary openness will facilitate the spread of ideas, tools, and skills— supporting context-aware and high-impact innovation exchange.”
  • Adopting common measurement and evaluation tools: “Researchers should work together to avoid duplication, to increase the rigour of open data assessments, and to build a shared, contextualized, evidence base on what works. Reusable methodological tools that measure the supply, use, and outcomes of open data are vital. To ensure the data revolution delivers open data, open data assessment methods must also be embedded within domain-specific surveys, including assessments of national statistical data. All stakeholders should work to monitor and evaluate their open data activities, contributing to research and shared learning on securing the greatest social impact for an open data revolution.”

In the full report, more detailed actions are presented in each of these categories. The true test of the roadmap will come with the 2016 International Open Data Conference, where we will be able to look at progress made in each of these areas, and to see whether action on open data is meeting the challenge of securing increased impact, sustainability and inclusiveness.

Getting the incentives right: an IATI enquiry service?

[Summary: Brief notes exploring a strategic and service-based approach to improve IATI data quality]

Filed under: rough ideas

At the International Aid Transparency Initiative (IATI) Technical Advisory Group meeting (#tag2015) in Ottawa last week I took part in two sessions exploring the need for Application Programming Interfaces (APIs) onto IATI data. It quickly became clear that there were two challenges to address:

(1) Many of the questions people around the table were asking were complex queries, not the simple data retrieval kinds of questions that an API is well suited to;

(2) ‘Out of the box’ IATI data is often not able to answer the kinds of questions being asked, either because

  • (a) the quality and consistency of data from distributed sources means that there are a range of special cases to handle when performing cross-donor analysis;
  • (b) the questions asked invite additional data preparation, such as currency conversion, or identifying a block of codes that relate to a particular sector (e.g. identifying all the Water and Sanitation related codes)
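As an illustration of point (b), the kind of preparation involved can be sketched as below. The sector prefix, exchange rates and record structure are all simplified assumptions – a real analysis would use the full DAC codelist and dated exchange rates:

```python
# Sketch: data preparation steps often needed before IATI analysis.
# Assumes transactions already parsed out of IATI XML into dicts;
# exchange rates here are placeholders, not real dated rates.

WATSAN_PREFIX = "140"  # DAC sector category for Water Supply & Sanitation

RATES_TO_USD = {"USD": 1.0, "EUR": 1.1, "GBP": 1.5}  # illustrative only

def is_watsan(sector_code: str) -> bool:
    """Group every purpose code in the 140xx block as Water & Sanitation."""
    return sector_code.startswith(WATSAN_PREFIX)

def to_usd(value: float, currency: str) -> float:
    """Convert a transaction value into a single common currency."""
    return value * RATES_TO_USD[currency]

transactions = [
    {"sector": "14010", "value": 100.0, "currency": "EUR"},
    {"sector": "11110", "value": 200.0, "currency": "USD"},
    {"sector": "14030", "value": 50.0, "currency": "GBP"},
]

# Total Water & Sanitation spend, expressed in one currency
watsan_total = sum(
    to_usd(t["value"], t["currency"])
    for t in transactions
    if is_watsan(t["sector"])
)
```

Neither step is conceptually difficult, but each is a place where inconsistent source data forces special-case handling – which is exactly the cost an enquiry service would absorb.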

These challenges also underlie the wider issue explored at TAG2015: that even though five years of effort have gone into data supply, few people are actually using IATI data day-to-day.

If the goal of the International Aid Transparency Initiative as a whole, distinct from the specific goal of securing data, is more informed decision making in the sector, then this got me thinking about the extent to which what we need right now is a primary focus on services rather than data and tools. And from that, thinking about whether intelligent funding of such services could lead to the right kinds of pressures for improving data quality.

Improving data through enquiries

Using any dataset to answer complex questions takes both domain knowledge, and knowledge of the data. Development agencies might have lots of one-off and ongoing questions, from “Which donors are spending on Agriculture and Nutrition in East Africa?”, to “What pipeline projects are planned in the next six months affecting women and children in Least Developed Countries?”. Against a suitably cleaned up IATI dataset, reasonable answers to questions like these could be generated with carefully written queries. Authoritative answers might require further cleaning and analysis of the data retrieved.
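A sketch of the kind of query an analyst familiar with the data might write, against a hypothetical cleaned and flattened activity table – all field names, country codes and sector prefixes here are illustrative assumptions:

```python
# Sketch: "Which donors are spending on Agriculture and Nutrition
# in East Africa?" against a hypothetical cleaned activity table.
# Country codes, sector prefixes and records are illustrative.

EAST_AFRICA = {"KE", "TZ", "UG", "ET", "RW"}   # assumed scope of "East Africa"
AG_NUTRITION_PREFIXES = ("311", "122")          # assumed sector code blocks

activities = [
    {"donor": "Donor A", "country": "KE", "sector": "31110", "usd": 1_000_000},
    {"donor": "Donor B", "country": "TZ", "sector": "12240", "usd": 500_000},
    {"donor": "Donor A", "country": "FR", "sector": "31110", "usd": 750_000},
    {"donor": "Donor C", "country": "UG", "sector": "11110", "usd": 300_000},
]

# Sum spend per donor for matching activities only
spend_by_donor = {}
for a in activities:
    if a["country"] in EAST_AFRICA and a["sector"].startswith(AG_NUTRITION_PREFIXES):
        spend_by_donor[a["donor"]] = spend_by_donor.get(a["donor"], 0) + a["usd"]
```

The query itself is a few lines; the expertise lies in knowing which country groupings and sector code blocks to use, and which publishers' data needs special handling first.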

For someone working with a dataset every day, such queries might take anything from a few minutes to a few hours to develop and execute. Cleaning data to provide authoritative answers might take a bit longer.

For a programme officer, who has the question, but not the knowledge of the data structures, working out how to answer these questions might take days. In fact, the learning curve will often mean these questions are simply not asked. Yet having the answers could save months, and millions of dollars.

So – what if key donors sponsored an enquiries service that could answer these kinds of queries on demand? With the right funding structure, it could have incentives not only to provide better data on request, but also to put resources into improving data quality and tooling. For example: if there is a set price paid per enquiry successfully answered, and the cost of answering that enquiry is increased by poor data quality from publishers, then there can be an incentive on the service to invest some of its time in improving incoming data quality. How to prioritise such investments would be directly connected to user demand: if all the questions are made trickier to answer because of a particular donor’s data, then focussing on improving that data first makes most sense. This helps escape the current situation in which the goal is to seek perfection for all data. Beyond a certain point, the political pressures to publish may cease to work to increase data quality, whereas requests to improve data that are directly connected to user demand and questions may have greater traction.

Of course, the incentive structures here are subtle: the quickest solution for an enquiry service might be to clean up data as it comes into its own data store rather than trying to improve data at source – and there remains a desire in open data projects to avoid creating single centralised databases, and to increase the resiliency of the ecosystem by improving original open data, which would oppose this strategy. This would need to be worked through in any full proposal.

I’m not sure what appetite there would be for a service like this – but I’m certain that in what are ultimately niche open data ecosystems, like IATI, strategic interventions will be needed to build the markets, services and feedback loops that lead to their survival.

Comments and reflection welcome

#CODS15: Trends and attitudes in open data

[Summary: sharing slides from talk at Canadian Open Data Summit]

The lovely folks at Open North were kind enough to invite me to give some opening remarks at the Canadian Open Data Summit in Ottawa today. The subject I was set was ‘trends and attitudes in the global open data community’ – and so I tried to pick up on five themes I’ve been observing and reflecting on recently. The slides from my talk are below (or here), and I’ve jotted down a few fragmentary notes that go along with them (and represent some of what I said, and some of what I meant to say [check against delivery etc.]). There’s also a great take on some of the themes I explored, and that developed in the subsequent panel, in the Open Government Podcast recap here.

(These notes are numbered for each of the key frames in the slide deck. You can move horizontally through the deck with the right arrow, or through each section with the down arrow. Hit escape when viewing the deck to get an overview. Or just hit space bar to go through as I did when presenting…)

(1) I’m Tim. I’ve been following the open data field as both a practitioner and a social researcher over the last five years – much of that work as part of my PhD studies, and through my time as a fellow and affiliate at the Berkman Centre.

(2) First let’s get out the way the ‘trends’ that often get talked about somewhat breathlessly: the rapid growth of open data from niche idea, to part of the policy mainstream. I want to look at five more critical trends, emerging now, and to look at their future.

(3) First trend: the move from engagement with open data to solve problems, to a focus on infrastructure building – and the need to complete a cyclical move back again. Most people I know got interested in open data because of a practical issue, often a political issue, where they wanted data. The data wasn’t there, so they joined action to make it available. This can cycle into ongoing work on building the infrastructure of data needed to solve a problem – but there is a risk that the original problems get lost – and energy goes into infrastructure alone. There is a growing discourse about reconnecting to action. Key is to recognise data as problem solving, and data infrastructure building, as two distinct forms of open data action, complementary, but also in creative tension.

(4) Second trend: there are many forms of open data initiative, and growing data divides. For more on this, see the Open Data Barometer 2015 report, and this comparison of policies across six countries. Canada was up one place in the rankings from the first to second editions of the ODB. But that mainly looks at a standard model of doing open data. Too often we’re exporting an idea of open data based on ‘Data Portal + License + Developers & Apps = Open Data Initiative’ – but we need to recognise that there are many different ways to grow an open data initiative and its activity, and to be opening up space for a new wave of innovation, rather than embedding the results of our first years’ experimentation as best practice.

(5) Third trend: the Open Data Barometer hints that impact is strongest where there are local initiatives. Urban initiatives? How do we ensure that we’re not designing initiatives that can only achieve impact with a critical mass of developers, community activists and supporting infrastructures?

(6) Fourth trend: There is a growing focus on data standards. We’ve moved beyond ‘Raw Data Now’ to see data publishers thinking about standards on everything from public budgets, to public transit, public contracts and public toilets. But when we recognise that our data is being sliced, diced and cooked, are we thinking about who it is being prepared for? Who is included, and who is excluded? (Remember, Raw Data is an Oxymoron). Even some of the basics of how to do diverse open data are not well resolved right now. How do we do multilingual data for example? Or how do we find measurement standards to assess open data in federal systems? Canada has a role as a well-resourced multi-lingual country in finding good solutions here.

(7) Fifth trend: There are bigger agendas on the policy scene right now than open data. But open data is still a big idea. Open data has been overtaken in many settings by talk of big data, smart cities, data revolutions and the possibility of data-driven governance. In the recent African Data Consensus process, 15 different ‘data communities’ were identified, from land data, and geo-data communities, to health data and conflict data communities. Open data was framed as another ‘data community’. Should we be seeing it this way? Or as an ethic and approach to be brought into all these different thematic areas: a different way of doing data – not another data domain. We need to look to the ideas of commons, and the power to create and collaborate that treating our data as a common resource can unlock. We need to reclaim the politics of open data as an idea that challenges secrecy, and that promotes a foundation for transparency, collaboration and participation. Only with this can we critique these bigger trends with the open data idea – and struggle for a context in which we are not database objects in the systems of the state, but are collaborating, self-determining, sovereign citizens.

(8) Recap & take-aways:

  • Embed open data in wider change
  • Innovate and experiment with different open data practices
  • Build community to unlock the impact of open data
  • Include users in shaping open data standards
  • Combine problem solving and infrastructure building

2015 Open Data Research Symposium – Ottawa

There are a few days left to submit abstracts for the 2015 Open Data Research Symposium, due to take place alongside the 3rd International Open Government Data Conference in Ottawa, on May 27th 2015.

Registration is also now open for participants as well as presenters.

Call for Abstracts: (Deadline 28th Feb 2015; submission portal)

As open data becomes firmly cemented in the policy mainstream, there is a pressing need to dig deeper into the dynamics of how open data operates in practice, and the theoretical roots of open data activities. Researchers across the world have been looking at these issues, and this workshop offers an opportunity to bring together and have shared dialogue around completed studies and work-in-progress.

Submissions are invited on themes including:

  • Theoretical framing of open data as a concept and a movement;
  • Use and impacts of open data in specific countries or specific sectors, including, but not limited to: government agencies, cities, rural areas, legislatures, judiciaries, and the domains of health, education, transport, finance, environment, and energy;
  • The making, implementation and institutionalisation of open data policy;
  • Capacity building for wider availability and use of open data;
  • Conceptualising open data ecosystems and intermediaries;
  • Entrepreneurial usage and open data economies in developing countries;
  • Linkages between transparency, freedom of information and open data communities;
  • Measurement of open data policy and practices;
  • Critical challenges for open data: privacy, exclusion and abuse;
  • Situating open data in global governance and developmental context;
  • Development and adoption of technical standards for open data;

Submissions are invited from all disciplines, though with an emphasis on empirical social research. PhD students, independent and early career researchers are particularly encouraged to submit abstracts. Panels will provide an opportunity to share completed or in-progress research and receive constructive feedback.

Submission details

Extended abstracts, in French, English, Spanish or Portuguese, of up to two pages, detailing the question addressed by the research, methods employed and findings should be submitted by February 28th 2015. Notifications will be provided by March 31st. Full papers will be due by May 1st. 

Registration for the symposium will open shortly after registration for the main International Open Government Data Conference.

Abstracts should be submitted via Easy Chair

Paper format

Authors of accepted abstracts will be invited to submit full papers. These should be a maximum of 20 pages single spaced, exclusive of bibliography and appendixes. As an interdisciplinary and international workshop we welcome papers in a variety of formats and languages: French, English, Spanish and Portuguese. However, abstracts and paper presentations will need to be given in English. 

Full papers should be provided in .odt, .doc, .rtf or .html formats. Where relevant, we encourage authors to share the data collected as part of their research in a repository, and to link to it.

We are working to identify a journal special issue or other opportunity for publication of selected papers.


Contact us for more details.

Programme committee

About the Open Data Research Network

The Open Data Research Network was established in 2012 as part of the Exploring the Emerging Impacts of Open Data in Developing Countries (ODDC) project. It maintains an active newsletter, website and LinkedIn group, providing a space for researchers, policy makers and practitioners to interact. 

This workshop will also include an opportunity to find out how to get involved in the Network as it transitions to a future model, open to new members and partners, and with a new governance structure. 

Exploring the Open Data Barometer

[Summary: ODI Lunchtime lecture about the Open Data Barometer]



Just over a month ago, the World Wide Web Foundation launched the second edition of the Open Data Barometer to coincide with BBC Democracy Day. This was one of the projects I worked on at the Web Foundation before completing my projects there at the end of last year. So, on Friday I had the opportunity to join with my successor at the Web Foundation, Savita Bailur, to give an ODI Friday lunchtime talk about the methods and findings of the study.

A recording of the talk and slides are embedded below:

Friday lunchtime lecture: Exploring the Open Data Barometer: the challenges ahead for an open data revoluti…

And, as the talk mentions – all the data from the Open Data Barometer is available in the interactive report at

Unpacking open data: power, politics and the influence of infrastructures

[Summary: recording of Berkman Centre Lunch Talk on open data]

Much belatedly, below you will find the video from the Berkman Centre Talk I gave late last year on ‘Unpacking open data: power, politics and the influence of infrastructures’.

You can find a live-blog of the talk from Matt Stempeck and Erhardt Graeff over on the MIT Media Lab blog, and Willow Brugh drew the fantastic visual record of themes in the talk shown below:


The slides are also up on Slideshare here.

I’m now in the midst of trying to make more sense of the themes in this talk whilst in the writing up stage for my PhD… and much of the feedback I had from the talk has been incredibly valuable in that – so comments are always welcome.

20 ways to connect open data and local democracy

[Summary: notes for a workshop on local democracy and open data]

At the Local Democracy for Everyone (#notInWestminster) workshop in Huddersfield today I led a session titled ‘20 ways to connect open data and local democracy‘. Below is the list of ideas we started the workshop with.

In the workshop we explored how these, and other approaches, could be used to respond to priority local issues, from investing funds in environmental projects, to shaping local planning processes, and dealing with nuisance pigeons.

Graphic recording from break-out session by @Jargonautical

There is more to do to re-imagine how local open data should work, but the conversations today offered an interesting start.

1. Practice open data engagement

Data portals can be very impersonal things. But behind every dataset is a council officer or a team working to collect, manage and use the data. Putting a human face on datasets, linking them to the policy areas they affect, and referencing datasets from reports that draw upon them can all help put data in context and make it more engaging.

The Five Stars of Open Data Engagement provides a model for stepping up engagement activities, from providing better and more social meta-data, through to hosting regular office-hours and drop-in sessions to help the local community understand and use data better.

2. Showing the council contribution

A lot of the datasets required by the Local Government Transparency Code are about the cost of services. What information and data is needed to complete the picture and to show the impact of services and spending?

The Caring for my Neighbourhood project in Sao Paulo looked to geocode government budget and spending data, to understand where funds were flowing, and opened up a conversation with government about how to collect data in ways that make connecting budget data and its impacts easier in future.

Local government in the UK has access to a rich set of service taxonomies which could be used to link together data on staff salaries, contracts and spending, with stats and stories on the services they provide and their performance. Finding ways to make this full picture accessible and easy to digest can provide the foundation for more informed local dialogue.
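A minimal sketch of the geocoding idea above: joining spending records to the areas they relate to, so budget lines can be put on a map. The ward names, suppliers, amounts and coordinates below are all invented for illustration; in practice the inputs would be the council's published spending CSV and a gazetteer of ward centroids.

```python
import csv
import io
from collections import defaultdict

# Hypothetical spending extract, standing in for a published CSV
spending_csv = """ward,supplier,amount
Central,Acme Paving Ltd,12000
Central,Greenspace Co,4500
Riverside,Acme Paving Ltd,8000
"""

# Illustrative ward centroids, not real coordinates
ward_locations = {
    "Central": (53.645, -1.785),
    "Riverside": (53.652, -1.770),
}

# Total spending per ward, ready to plot alongside outcome statistics
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(spending_csv)):
    totals[row["ward"]] += float(row["amount"])

for ward, total in sorted(totals.items()):
    lat, lon = ward_locations[ward]
    print(f"{ward}: £{total:,.0f} at ({lat}, {lon})")
```

Even a join this simple starts the conversation about where money flows; the harder work, as the Sao Paulo project found, is getting the location field collected consistently in the first place.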

3. Open Data Discourses

In Massachusetts the Open Data Discourse project has been developing the idea of data challenges: based not just on app-building, but also on using data to create policy ideas that can address an identified local challenge.

For Cambridge, Mass., the focus for the first challenge in fall 2014 was on pedestrian, bicycle, and car accidents in the city. Data on accidents was provided, and accessed over 2,000 times in a six-week challenge period. The challenge resulted in eight submissions “that addressed policy-relevant issues such as how to format traffic accident data to enable trend analysis across the river into Boston, or how to reduce accidents and encourage cycling by having a parked car buffer.”

The challenge process culminated in a Friday evening meeting that brought together community members who had worked on challenge ideas with councillors and representatives of the local authority, to showcase the solutions and provide an award for a winning idea.

4. Focus on small data

There’s a lot of talk out there about ‘big data’ and how big data analytics can revolutionise government. But many of the datasets that matter are small data: spreadsheets created by an officer, or records held by community groups in various structures and formats.

Rahul Bhargava defines small data as:

“the thing that community groups have always used to do their work better in a few ways:

  • Evaluate: Groups use Small Data to evaluate programs so they can improve them
  • Communicate: Groups use Small Data to communicate about their programs and topics with the public and the communities they serve
  • Advocate: Groups use Small Data to make evidence-based arguments to those in power”

Simple steps to share and work with small data can make a big difference: and keep citizens, rather than algorithms, in control.

5. Tactile data and data murals

The Data Therapy project has been exploring a range of ways to make data more tactile: from laser-cutting food security information into vegetables to running ‘low tech data’ workshops that use pipe-cleaners, lego and crayons to explore representations of data about a local community.

Turning complex comparisons and numbers into physical artefacts, and finding the stories inside the statistics, can offer communities a way into data-informed dialogue, without introducing lots of alienating graphs and numbers.

The Data Therapy project’s data murals connect discussions of data with traditional community arts practice: painting large scale artworks that represent a community interpretation of local data and information.

6. Data-driven art

The Open Data Institute’s Data as Culture project has run a series of data art commissions: leading to a number of data-driven art works that bring real-time data flows into the physical environment. In 2011 Bristol City Council commissioned a set of art works, ‘Invisible Airs‘, that included a device stabbing books in response to library cuts, and a spud gun triggered by spending records.

Alongside these political art works that add an explicit emotional dimension to public data, low-cost network connected devices can also be used to make art that passively informs – introducing indicators that show the state of local data into public space.

7. Citizen science

Not all the data that matters to local decision making comes from government. Citizens can create their own data, via crowdsourcing and via citizen-science approaches to data collection.

The Public Lab describes itself as a ‘DIY Environmental Science Community’ and provides How To information on how citizens groups can build their own sensors or tools for everything from aerial mapping to water quality monitoring. Rather than ‘smart cities’ that centralise data from sensor networks, citizen science offers space for a collaboration between government and communities – creating smart citizens who can collect and make sense of data alongside local officials.

In China, citizens started their own home water quality testing to call for government to recognise and address clean water problems.

8. Data dives & hackathons

DataKind works to bring together expert analysts with social-sector organisations that have data in order to look for trends and insights. Modelled on a hackathon, where activity takes place over an intense day or weekend of work, DataDives can generate new findings, new ideas about how to use data, and new networks for the local authority to draw upon.

Unlike a hackathon, where the focus is often on developing a technical app or innovation and where programming skill is often a prerequisite, a Data Dive might be based around answering a particular question, or around finding what data means to multi-disciplinary teams.

It is possible to design inclusive hackathons which connect up the lived experience of communities with digital skills from inside and outside the community. The Hackathon FAQ explores some of the common pitfalls of holding civic hackathons: encouraging critical thought about whether prizes and other common features are likely to incentivise contributions, or distort the kinds of team building and collaboration wanted in a civic setting.

9. Contextualised consultation

Too often local consultations ask questions without providing citizens with the information they might need to explore and form their opinions. For example, an online consultation on green spaces, simply by asking for the ward or postcode of a respondent, could provide tailored information (and questions) about the current green spaces nearby.

Live open data feedback on the demographics and diversity of consultation respondents could also play a role in incentivising people to take part to ensure their views are represented.

It’s important though not to make too many assumptions when providing contextualised data: a respondent might care about the context near where their parents or children live, as much as their own for example – and so interfaces should offer the ability to look at data around areas other than your home.

10. Adopt a dataset

When it snows in America, fire hydrants on the street can get frozen under the ice, so it’s important to dig them out after snowfall. However, the authorities don’t always have the resources to get to all the hydrants in time. Code for America found an ingenious solution, taking an open dataset of fire hydrants and creating a campaign for people to ‘Adopt a Hydrant‘, committing to dig it out when the blizzards come. They combined data with a social layer.

The same approach could work for many other community assets, but it could also work for datasets. Which dataset could be co-created with the community? Could walkers help adopt footpath data and help keep it updated? Could the local bus user group adopt data on accessibility of public transport routes, helping keep it updated?

The relationships created around a data quality feedback loop might also become important relationships for improving the services that the data describes.

11. Data-rich press releases

Local authorities are used to putting out press releases, often with selected statistics in. But how can those releases also contain links to key datasets, and even interactive assets that journalists and the public can draw upon to dig deeper into the data?

Data visualisation expert David McCandless has argued that interactivity plays an important role in allowing people to explore structured data and information, and to turn it into knowledge. The Guardian Data Blog has shown how engaging information can be created from datasets. Whilst the Data Journalism Handbook offers some pointers for journalists (and local bloggers) to get started with data, many local newspapers don’t have the dedicated data-desks of big media houses – so the more the authority can do to provide data in ready-to-reuse forms, the more it can be turned into a resource to support local debate.

12. URLs for everything – with a call to action

Which is more likely to turn up on Twitter and get clicked on:

“What do you think of the new cycle track policy? Look on page 23, paragraph 2 of the report at the bottom of this page: …”? or

“What do you think of the new cycle track policy? …”

Far too often the important information citizens want may be online, but is buried away in documents or provided in ways that are impossible to link to.

When any proposal, policy, decision or transaction gets a permanent URL (web address) it can become a social object: something people can talk about on Twitter, Facebook and in other spaces.

For Linked Data advocates, giving everything in a dataset its own URL plays an important role in machine-to-machine communication, but it also plays a really important role in human communication. Think about how visitors to a data item might also be offered a ‘call to action’, whether it’s to report concerns about a spending transaction, or volunteer to get involved in events at a park represented by a data item.
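One way to picture the pattern: a stable, predictable URL scheme for every record, with each page pairing the item with a way to respond. This is a sketch only; the domain, scheme and actions below are invented, not a real council site.

```python
def permalink(record_type: str, record_id: str,
              base: str = "https://example-council.gov.uk") -> str:
    """A stable, shareable address for a single decision, transaction or asset."""
    return f"{base}/{record_type}/{record_id}"


def call_to_action(record_type: str, record_id: str) -> str:
    """Pair each item with a relevant way for citizens to act on it."""
    actions = {
        "spending": "report-concern",  # flag a transaction for scrutiny
        "parks": "volunteer",          # get involved in events at this park
        "decisions": "comment",        # respond to a proposal
    }
    return f"{permalink(record_type, record_id)}/{actions.get(record_type, 'feedback')}"


print(permalink("spending", "2015-0042"))
print(call_to_action("parks", "florence-park"))
```

The design choice that matters is that identifiers never change: a tweet linking to a transaction today should still resolve in five years, which is what lets the URL work as a social object.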

13. Participatory budgeting – with real data

What can £5000 buy you? How much does it cost to run a local carnival? Or a swimming pool? Or to provide improved social care? Or cycle lanes? Answers to these questions might exist inside spending data – but often when participatory budgeting activities take place the information needed to work out what kinds of options may be affordable only comes into the picture late in the process.

Open Spending, the World Bank, NESTA and the Finnish Institute have all explored how open data could change the participatory budgeting process – although as yet there have been few experiments to really explore the possibilities.

14. Who owns it?

Kirklees Council have put together the ‘Who Owns My Neighbourhood?’ site to let residents explore land holdings and to “help take responsibility for land, buildings and activities in your neighbourhood”. Similar sites, with the goal of improving how land is used and addressing the problem of vacant lots, are cropping up across American cities.

These tools can enable citizens to identify land and government assets that could be better used by the community: but unchecked they may also risk giving more power to wealthy property speculators as a widely cited case study from Bangalore has warned.

15. Social audits

In many parts of the developing world, particularly across India, the Social Audit is an important process, focussed on “reviewing official records and determining whether state reported expenditures reflect the actual monies spent on the ground” (Aiyar & Samji, 2009).

Social Audits involve citizens groups trained up to look at records and ‘ground truth’ whether or not resources have been used in the way authorities say. Crucially, Social Audits culminate in public hearings: meetings where the findings are presented and discussed.

Models of citizen-led investigation, followed by formal public meetings, are also a feature of the London Citizens community organising approach, where citizens assemblies put community views to people in power. How could key local datasets form part of an evidence gathering audit process, whether facilitated by local government or led by independent community organisations?

16. Geofenced bylaws, licenses and regulations: building the data layer of the local authority

After seeing some of the projects to open up the legal codes of US cities I started to wonder where I would find out about the byelaws in my home town of Oxford. As the page on the City Council website that hosts them explains: “Byelaws generally require something to be done – or not done – in a particular location.” Unfortunately, in Oxford, what is required to be done, and where, is locked up inside scanned PDFs of typewritten minutes.

There are all sorts of local rules and regulations, licenses and other information that authorities issue which are tied to a particular geographic location: yet this is rarely a layer in the Geographic Information Systems that authorities use. How might geocoding this data, or even making it available through geofencing apps, help citizens to navigate, explore and debate the rules that shape their local places?
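A minimal sketch of ‘byelaws as a data layer’, assuming an invented rule and a toy polygon; a real system would load geometries from the authority’s GIS rather than hard-coding them, and would use a proper geospatial library. The point-in-polygon test here is plain ray-casting, with no dependencies.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon (a list of (x, y) vertices)?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Toggle 'inside' each time a horizontal ray from the point crosses an edge
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside


# Each byelaw is tied to the area where it applies (coordinates are illustrative)
byelaws = [
    {"rule": "No model aircraft flying",
     "area": [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]},
]


def rules_at(x, y):
    """Which byelaws apply at this location?"""
    return [b["rule"] for b in byelaws if point_in_polygon(x, y, b["area"])]


print(rules_at(0.5, 0.5))  # ['No model aircraft flying']
```

Once the rules exist as geometry-plus-text rather than scanned PDFs, a phone app can answer “what applies here?” wherever the citizen happens to be standing.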

17. Conversations around the contracts pipeline?

The Open Contracting project is calling for transparency and participation in public contracting. As part of the UK Local Government Transparency Code authorities have to publish the contracts they have entered into – but publishing the contract pipeline and planned procurement offers an important opportunity to work out if there are fresh ideas or important insights that could shape how funds are spent.

The Open Contracting Data Standard provides a way of sharing a flow of data about the early stages of a contracting process. Combine that information with a call to action, and a space for conversation, and there are ways to get citizens shaping tenders and the selection of suppliers.

18. Participatory planning: visualising the impacts of decisions

What data should a local authority ask developers submitting planning applications to provide?

For many developments there might be detailed CAD models available which could be shared and explored in mapping software to support a more informed conversation about proposed building projects.

19. Stats that matter

Local authorities often conduct one-off surveys and data collection exercises. These are a vital opportunity to build up an understanding of the local area. What opportunities are there to work in partnership with local community groups to identify the important questions that they want to ask? How can local government and community groups collaborate to collect actionable stats that matter: pooling needs, and even resources, to get the best sample and the best depth of insight?

20. Spreadsheet scorecards and dashboards

Dig deep enough in most local organisations and you will find one or more ‘super spreadsheets’ that capture and analyse key statistics and performance indicators. Many more people can easily pick up the skills to create a spreadsheet scorecard than can become overnight app developers.

Google Docs spreadsheets can pick up data live from the web. What dashboards might a local councillor want? Or a local residents association? What information would make them better able to do their job?
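A spreadsheet scorecard is, at heart, a handful of indicators compared against targets. A rough sketch of the idea, with invented indicators, figures and thresholds, using nothing beyond the standard library (a Google Docs version would do the same with formulas like IMPORTDATA):

```python
import csv
import io

# A hypothetical performance extract, of the kind a council officer's
# 'super spreadsheet' might hold. All figures are invented.
data = """indicator,target,actual
Potholes fixed within 28 days (%),90,84
Library visits per 1000 residents,450,470
FOI requests answered on time (%),95,97
"""


def rag(actual, target):
    """Simple red/amber/green scoring against target."""
    ratio = actual / target
    return "green" if ratio >= 1 else "amber" if ratio >= 0.9 else "red"


scorecard = {
    row["indicator"]: rag(float(row["actual"]), float(row["target"]))
    for row in csv.DictReader(io.StringIO(data))
}

for indicator, status in scorecard.items():
    print(f"{status:>6}  {indicator}")
```

The point is how low the barrier is: anyone comfortable with a spreadsheet can build the same thing, and pointing it at a live published CSV turns it into a dashboard a councillor or residents association could actually use.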

Five reflections for an open data hackathon

I was asked to provide a short talk at the start of the Future Food Hackathon that kicked off in Wageningen, NL today, linked to the Global Open Data on Agriculture and Nutrition workshop taking place over the next few days.

Below are the speaker notes I jotted down for the talk.

On open data and impact

I want to start with an admission. I’m a sceptic about open data.

In the last five years we’ve seen literally millions of datasets placed online as part of a broad open data movement – with grand promises made about the way this will revolutionise politics, governance and economies.

But, when you look for impact, with the exception of a few specific domains such as transport, the broad society wide impact of that open data is hard to find. Hundreds of hack-days have showcased what could be possible with data, but few have delivered truly transformative innovations that have made it to scale.

And many of the innovations that result often seem to focus on #FirstWorldProblems – if not purely ‘empowering the already empowered’, then at least not really engaging with social issues in ways that are set to tip the balance in favour of those with least advantage.

I’m sceptical, but I’m not pessimistic. In fact, understood as part of a critique of the closed way we’ve been doing aid, policy making, production and development – open data is an incredibly exciting idea.

However, far too much open data thinking has stopped at the critique, without moving on to propose something new and substantive. It offers a negation (data which is not proprietary; not in PDF; not kept from public view), without talking enough about how new open datasets should be constructed. Because opening data is not just about taking a dataset from inside the government or company and putting it online, in practice it involves the creation of new datasets: selecting and standardising fields and deciding how to model data. This ultimately involves the construction of new systems of data.

And this links to a second blind spot of current open data thinking: the emphasis on the dataset, to the exclusion of the social relationships around it.

Datasets do not stand alone. They are produced by someone, or some group, for some purpose. They get meaning from their relationship to other data, and from the uses to which they are put. As Lisa Gitelman and colleagues have put it in ‘Raw Data is an Oxymoron’, datasets have histories, and we need to understand these to reshape their futures.

Matthew Smith and colleagues at the IDRC have spent a number of years exploring the idea of openness in development. They distinguish between openness defined in ‘universal legal and technical terms’, and openness as a practice – and argue that we need to put open practices at the centre of our theory of openness. These practices are, to some extent, enabled by the formalities of Creative Commons licenses, or open data formats, but they are something more, and draw upon the cultures of peer-to-peer production and open source, not just the legal and technical devices.

Ultimately, then, I’m optimistic about the potential of open data if we can to think about the work of projects like GODAN not just as a case of gaining permission to work with a few datasets, but as about building new open and collaborative infrastructures, through which we can use data to communicate, collaborate and reshape our world.

I’m also hopeful about the potential of colliding cultures from open source and open data, with current cultures in the agriculture and nutrition communities. Can we bring these into a dialogue that builds shared understanding of how to solve problems, and lets us rethink both openness, and agriculture, to be more effective, inclusive and just?

Five observations on hacking with open data

Ok: so let me pause. I recognise that the last few minutes might have been a bit abstract and theoretical for 9am on a Monday morning. Let me try, then, to offer five somewhat more practical thoughts about approaching an open data hackathon:

1. Hacking is learning.

A common experience of the hackathon is frustration at the data not being ready to use. Yet the process of struggling with data is a process of learning about the world it represents – and sometimes one of the most important outcomes of a hack is the induction of a new community of people, from different backgrounds, into a shared understanding of some data and domain.

One of the most fascinating things about the open government data processes I’ve been tracking in the UK has been the way in which it has supported civic learning amongst technology communities – coming to understand more how the state works by coming to understand its data.

So – at an interdisciplinary hack like this, there is the opportunity to see peculiarities of the data as opportunities to understand the process and politics of the agriculture and nutrition field, and to be better equipped to propose new approaches that don’t try to make perfect data out of problematic situations – but that try and engage with the real challenges and problems of the field.

2. Hacking is political.

I’ve had the pleasure over the last few years of working an number of times with the team at the iHub in Nairobi, and of following the development of [Kenya’s open data initiative]. In their study of an ‘incubator’ project to encourage developers to use Kenyan open government data, Leo Mutuku and her team made an interesting discovery.

Some developers did not understand their apps as products to be taken to scale – but instead saw them as rhetorical acts: a demonstration to government of how ICTs could be used, and a call on government to rethink its own ICTs, rather than an attempt by outside developers to replace those ICTs for government.

Norfolk-based developer Rupert Reddington once referred to this as ‘digital pamphleteering’, in which the application is a provocation in a debate – rather than primarily, or at all, a tool for everyday use.

Think about how you present an openness-oriented provocation to the status quo when you pitch your ideas and creations.

3. You are building infrastructure.

Apps created with open data are just one part of the change process. Even a transport app that lets people know when the next bus is due only has an impact if it becomes part of people’s everyday practice, and they rely on it in ways that change their behaviour.

Infrastructure is something which fades into the background: when it becomes established and works well, we don’t see it. It is only when it is disrupted that it becomes notable (as I learned trying to cross the channel yesterday – when the Channel Tunnel became a very visible piece of infrastructure exactly because it was blocked and not working).

One of the questions I’m increasingly asking in my research work, is how we can build ‘inclusive infrastructures’, and what steps we need to take to ensure that the data infrastructures we have are tipped in favour of the least advantaged rather than the most powerful. Sometimes the best innovations are ones that complement and extend an existing infrastructure, bringing hitherto unheard voices into the debate, or surfacing hitherto unseen assumptions.

Sustainability is also important to infrastructure. What you create today may just be a prototype – but if you are proposing it as part of a new infrastructure of action – consider if you can how it might be made sustainable. Would building for sustainability change the concept or idea?

4. Look at the whole value chain.

There is a tendency in hackathons to focus on the ‘end user’ – building consumer-oriented apps and platforms. Often that approach makes sense: disintermediation can make many systems work better. But it’s not always the way to make the most difference.

When I worked with CABI and the Institute for Development Studies in 2013 to host a ‘Research to Impact’ hackathon at the iHub in Nairobi, we brought together people involved in improving the quality of agriculture and the lives of smallholder farmers. After a lot of discussion, it became clear that between ‘research’ and the ‘farm’ were all sorts of important intermediaries, from seed-sellers, to agricultural extension workers. Instead of building direct-to-farmer information systems, teams explored the kinds of tools that could help an agriculture extension worker deliver better support, or that could help a seed-seller to improve their product range.

Apps with 10s or 100s of back-office users may be much more powerful than apps with 1000s of ‘end users’.

When the two Open Data in Developing Countries project research partners in Kenya launched their research in the middle of last year, an interesting argument broke out between advocates of ‘disintermediation’ and advocates of ‘empowering intermediaries’. On the one hand, intermediaries contextualise information, and may be trusted: helping communities adopt information as actionable insights, when they may not understand or trust the information direct from the source. On the other hand, intermediaries are often seen as a problem: middle-men using their position for self-interest, and limiting the freedoms of those they serve.

Open approaches can offer an important ‘pressure valve’ in these contexts: focussing on creating platforms for intermediaries, but not restricting information to intermediaries only.

5. Evolution can be as powerful as revolution.

The UN Secretary General has led the call for a ‘data revolution for development’, with the Independent Expert Group he appointed proposing a major update in practices of data use.

This revolution narrative often implies that organisations need to shift direction: completely transforming data practices, throwing out existing report-writing and paper-based approaches in favour of new ‘digital by default’, technology-driven processes. But what happens if we think differently and start from the existing strengths of organisations:

  • What is going well when it comes to data in the international potato trade?
  • Who are the organisations with promising practice in localising climate-change relevant information for farmers?
  • What have been the stories of progress in tracking food-borne disease?

How can we extend these successes? What innovations have made their first iteration, but are just waiting for the next?

One of the big challenges of ‘data revolution’ is the organisational change curve it demands, and the complex relationship between data supply and demand. Often the data available right now is not great. For example, if you are currently running a crop monitoring project with documents and meetings, but a new open dataset becomes available that is relevant to your work, starting a ‘data revolution’ tomorrow will involve lots of time working with bad data and finding new ways to work around the peculiarities of the new system: the investment this year to do the same work you were doing with ‘inefficient’ analogue approaches last year might be double, as you scale the learning curve.

Of course, in year 3 or 4, the more efficient way of working may start to pay off: but often projects never get there. And because use of the new open dataset dropped away in year 2, when early adopters realised they could not afford to transform their practices to work with it, government publishers get discouraged, and by year 3 and 4 the data might not be there.

An evolution approach works out how to change practices year-by-year: iterating and negotiating the place of data in the future of food.

(See Open Data in Developing Countries – Insights from Phase I for more on this point)

In conclusion

Ok. Still a bit abstract for 9.15am on a Monday morning: but I hope the general point is clear.

Ultimately, the most important thing about the creations at a hackathon is their ‘theory of change’: how does the time spent hacking show the way towards real change? I’m certainly very optimistic that when it comes to the pitch back tomorrow, the ideas and energy in this room will offer some key pointers for us all.

OCDS – Notes on a standard

Today sees the launch of the first release of the Open Contracting Data Standard (OCDS). The standard, as I’ve written before, brings together concrete guidance on the kinds of documents and data that are needed for increased transparency in processes of public contracting, with a technical specification describing how to represent contract data and meta-data in common ways.

The video below provides a brief overview of how it works (or you can read the briefing note), and you can find full documentation at

When I first jotted down a few notes on how to go forward from the rapid prototype I worked on with Sarah Bird in 2012, I didn’t realise we would actually end up with the opportunity to put some of those ideas into practice. However: we did – and so in this post I wanted to reflect on some aspects of the standard we’ve arrived at, some of the learning from the process, and a few of the ideas that have guided at least my inputs into the development process.

As, hopefully, others pick up and draw upon the initial work we’ve done (in addition to the great inputs we’ve had already), I’m certain there will be much more learning to capture.

(1) Foundations for ‘open by default’

Early open data advocacy called for ‘raw data now‘, asking governments essentially to export and dump existing datasets online, with issues of structure and regular publishing processes to be sorted out later. Yet, as open data matures, the discussion is shifting to the idea of ‘open by default’. Taken seriously, this means more than just openly licensing whatever data dumps are created: it should mean that data is released from government systems as a matter of course, as part of their day-to-day operation.

The full OCDS model is designed to support this kind of ‘open by default’, allowing publishers to provide small releases of data every time some event occurs in the lifetime of a contracting process. A new tender is a release. An amendment to that tender is a release. The contract being awarded, or then signed, are each releases. These data releases are tied together by a common identifier, and can be combined into a summary record, providing a snapshot view of the state of a contracting process, and a history of how it has developed over time.

This releases and records model seeks to combine different user needs: from the firm seeking information about tender opportunities, to the civil society organisation wishing to analyse across a wide range of contracting processes. And by allowing core stages in the business process of contracting to be published as they happen, and then joined up later, it is oriented towards the development of contracting systems that default to timely openness.
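To make the releases-and-records idea concrete, here is a toy sketch in Python of compiling a set of releases, tied together by a shared `ocid`, into a summary record. This is a deliberately simplified, latest-wins merge for illustration only: the field names and the `ocds-abc123` prefix are invented, and the real OCDS merge rules are more involved.

```python
# Combine releases sharing an 'ocid' into a summary record.
# Later releases overwrite earlier values (a simplified merge).

def compile_record(releases):
    releases = sorted(releases, key=lambda r: r["date"])
    compiled = {}
    for release in releases:
        for key, value in release.items():
            if key != "id":  # each release keeps its own id
                compiled[key] = value
    return {
        "ocid": compiled["ocid"],
        "compiledRelease": compiled,  # snapshot of current state
        "releases": releases,         # full history is preserved
    }

releases = [
    {"id": "1", "ocid": "ocds-abc123-0001", "date": "2014-01-01",
     "tag": ["tender"], "tender": {"value": 1000}},
    {"id": "2", "ocid": "ocds-abc123-0001", "date": "2014-03-01",
     "tag": ["award"], "award": {"supplier": "ACME Ltd"}},
]

record = compile_record(releases)
```

The snapshot answers “where is this process now?”, while the list of releases answers “how did it get here?”.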

As I’ll be exploring in my talk at the Berkman Centre next week, the challenge ahead for open data is not just to find standards to make existing datasets line-up when they get dumped online, but is to envisage and co-design new infrastructures for everyday transparent, effective and accountable processes of government and governance.

(2) Not your minimum viable product

Different models of standard

Many open data standard projects adopt either a ‘Minimum Viable Product‘ approach, looking to capture only the few most common fields between publishers, or are developed by focussing on the concerns of a single publisher or user. Whilst MVP models may make sense for small building blocks designed to fit into other standardisation efforts, when it came to OCDS there was a clear user demand to link up data along the contracting process, and this required an overarching framework into which simple components could be placed, or from which they could be extracted, rather than the creation of ad-hoc components, with the attempt to join them up made later on.

Whilst we didn’t quite achieve the full abstract model + idiomatic serialisations proposed in the initial technical architecture sketch, we have ended up with a core schema, and then suggested ways to represent this data in both structured and flat formats. This is already proving useful for example in exploring how data published as part of the UK Local Government Transparency Code might be mapped to OCDS from existing CSV schemas.
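A minimal sketch of what mapping from a flat CSV to a nested OCDS-style structure might look like, in Python. The column names and target paths here are invented for illustration, not taken from the actual Transparency Code or OCDS schemas:

```python
import csv
import io

# Map flat CSV columns onto nested release fields.
# Column names and paths are illustrative only.
COLUMN_MAP = {
    "Reference": ("ocid",),
    "Supplier Name": ("award", "supplierName"),
    "Contract Value": ("award", "value"),
}

def row_to_release(row):
    """Build a nested release dict from one flat CSV row."""
    release = {}
    for column, path in COLUMN_MAP.items():
        target = release
        for key in path[:-1]:
            target = target.setdefault(key, {})
        target[path[-1]] = row[column]
    return release

flat = io.StringIO(
    "Reference,Supplier Name,Contract Value\n"
    "ocds-abc123-42,ACME Ltd,5000\n"
)
releases = [row_to_release(r) for r in csv.DictReader(flat)]
```

The point is that a single declarative column map can bridge the flat serialisation publishers already produce and the structured core schema.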

(3) The interop balancing act & keeping flex in the framework

OCDS is, ultimately, not a small standard. It seeks to describe the whole of a contracting process, from planning, through tender, to contract award, signed contract, and project implementation. And at each stage it provides space for capturing detailed information, linking to documents, and tracking milestones, values and line-items.

This shape of the specification is a direct consequence of the method adopted to develop it: looking at a diverse set of existing data, and spending time exploring the data that different users wanted, as well as looking at other existing standards and data specifications.

However, OCDS by no means covers all the things that publishers might want to state about contracting, nor all the things users may want to know. Instead, it focusses on achieving interoperability of data in a number of key areas, and then provides a framework into which extensions can be linked as the needs of different sub-communities of open data users arise.

We’re only in the early stages of thinking about how extensions to the standard will work, but I suspect they will turn out to be an important aspect: allowing different groups to come together to agree (or contest) the extra elements that are important to share in a particular country, sector or context. Over time, some may move into the core of the standard, and potentially elements that appear core right now might move into the realm of extensions, each able to have their own governance processes if appropriate.

As Urs Gasser and John Palfrey note in their work on Interop, the key in building towards interoperability is not to make everything standardised and interoperable, but is to work out the ways in which things should be made compatible, and the ways in which they should not. Forcing everything into a common mould removes the diversity of the real world, yet leaving everything underspecified means no possibility to connect data up. This is both a question of the standards, and the pressures that shape how they are adopted.

(4) Avoiding identity crisis

Data describes things. To be described, those things need to be identified. When describing data on the web, it helps if those things can be unambiguously identified and distinguished from other things which might have the same names or identification numbers. This generally requires the use of globally unique identifiers (guid): some value which, in a universe of all available contracting data, for example, picks out a unique contracting process; or, in the universe of all organizations, uniquely identifies a specific organization. However, providing these identifiers can turn out to be both a politically and technically challenging process.

The Open Data Institute have recently published a report that underlines how important identifiers are to processes of opening data. Yet, consistent identifiers often have key properties of public goods: everyone benefits from having them, but providing and maintaining them has some costs attached, which no individual identifier user has an incentive to cover. In some cases, such as goods and service identifiers, projects have emerged which take a proprietary approach to fund the maintenance of those identifiers, selling access to the lookup lists which match the codes for describing goods and services to their descriptions. This clearly raises challenges for an open standard, as when proprietary identifiers are incorporated into data, users may face extra costs to interpret and make sense of that data.

In OCDS we’ve sought to take as distributed an approach to identifiers as possible, only requiring globally unique identifiers where absolutely necessary (identifying contracts, organizations and goods and services), and deferring to existing registration agencies and identity providers, with OCDS maintaining, at most, code lists for referring to each identity ‘scheme’.

In some cases, we’ve split the ‘scheme’ out into a separate field: for example, an organization identifier consists of a scheme field with a value like ‘GB-COH’ to stand for UK Companies House, and then the identifier given in that scheme, like ‘5381958’. This approach allows people to store those identifiers in their existing systems without change (existing databases might hold national company numbers, with the field assumed to come from a particular register), whilst making explicit the scheme they come from in the OCDS. In other cases, however, we look to create new composite string identifiers, combining a prefix, and some identifier drawn from an organization’s internal system. This is particularly the case for the Open Contracting ID (ocid). By doing this, the identifier can travel between systems more easily as a guid – and could even be incorporated in unstructured data as a key for locating documents and resources related to a given contracting process.
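The two patterns can be sketched in a few lines of Python. The ‘GB-COH’ scheme and company number come from the example above; the registered prefix ‘ocds-abc123’ and the internal reference are invented for illustration:

```python
def organisation_identifier(scheme, identifier):
    """Keep the identifier scheme explicit, rather than
    baked implicitly into the identifier itself."""
    return {"scheme": scheme, "id": identifier}

def make_ocid(prefix, internal_id):
    """Composite string identifier: a registered prefix plus an
    identifier drawn from the publisher's internal system."""
    return "{0}-{1}".format(prefix, internal_id)

# Scheme split out into a separate field:
buyer = organisation_identifier("GB-COH", "5381958")

# Composite guid that can travel between systems:
ocid = make_ocid("ocds-abc123", "PROC-2014-0001")
```

The first pattern leaves existing databases untouched; the second only needs to be applied at the point where data is exposed to the world.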

However, recent learning from the project is showing that many organisations are hesitant about the introduction of new IDs, and that adoption of an identifier schema may require as much advocacy as adoption of a standard. At a policy level, bringing some external convention for identifying things into a dataset appears to be seen as affecting the, for want of a better word, sovereignty of a specific dataset: even if in practice the prefix approach of the ocid means it only needs to be hard-coded in the systems that expose data to the world, not necessarily stored inside organizations’ databases. However, this is an area I suspect we will need to explore more, and keep tracking, as OCDS adoption moves forward.

(5) Bridging communities of practice

If you look closely you might in fact notice that the specification just launched in Costa Rica is actually labelled as a ‘release candidate‘. This points to another key element of learning in the project, concerning the different processes and timelines of policy and technical standardisation. In the world of funded projects and policy processes, deadlines are often fixed, and the project plan has to work backwards from there. In a technical standardisation process, there is no ‘standard’ until a specification is in use, and has been robustly tested. The processes for adopting a policy standard, and setting a technical one, differ – and whilst perhaps we should have spoken from the start of the project of an overall standard, embedding within it a technical specification, we were too far down the path towards the policy launch before this point. As a result, the Release Candidate designation is intended to suggest the specification is ready to draw upon, but that there is still a process to go (and future governance arrangements to be defined) before it can be adopted as a standard per se.

(6) The schema is just the start of it

This leads to the most important point: that launching the schemas and specification is just one part of delivering the standard.

In a recent e-mail conversation with Greg Bloom about elements of standardisation, linked to the development of the Open Referral standard, Greg put forward a list of components that may be involved in delivering a sustainable standards project, including:

  • The specification (with its various components and subcomponents);
  • Tools that assess compliance according to the spec (e.g. validation tools, and more advanced assessment tools);
  • Some means of visualizing a given set of data’s level of compliance;
  • Incentives of some kind (whether positive or negative) for attaining various levels of compliance;
  • Processes for governing all of the above;
  • and, of course, the community through which all of this emerges and is sustained.

To this we might also add elements like documentation and tutorials, support for publishers, catalysing work with tool builders, guidance for users, and so-on.
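A compliance-checking tool from that list can be as simple as reporting which required fields a release is missing. A minimal sketch in Python, where the list of required fields is invented for illustration and far simpler than the real OCDS schema:

```python
# Toy compliance check: report missing required fields.
# The field list is illustrative, not the actual OCDS schema.
REQUIRED_FIELDS = ["ocid", "id", "date", "tag"]

def compliance_report(release):
    """Return the required fields missing from a release;
    an empty list means the release passes this check."""
    return [f for f in REQUIRED_FIELDS if f not in release]

problems = compliance_report({"ocid": "ocds-abc123-0001", "id": "1"})
```

Real validation tools would work against the full JSON schema, but even a report like this gives publishers something concrete to act on, and gives a visualisation layer something to display.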

Open government standards are not something to be published once, and then left, but require labour to develop and sustain, and involve many social processes as much as technical ones.

In many ways, although we’ve spent a year of small development iterations working towards this OCDS release, the work now is only just getting started, and there are many technical, community and capacity-building challenges ahead for the Open Contracting Partnership and others in the open contracting movement.