Category Archives: Open Data

2015 Open Data Research Symposium – Ottawa

There are a few days left to submit abstracts for the 2015 Open Data Research Symposium, due to take place alongside the 3rd International Open Government Data Conference in Ottawa on May 27th 2015.

Registration is also now open for participants as well as presenters.

Call for Abstracts (deadline 28th February 2015; submission portal)

As open data becomes firmly cemented in the policy mainstream, there is a pressing need to dig deeper into the dynamics of how open data operates in practice, and into the theoretical roots of open data activities. Researchers across the world have been looking at these issues, and this workshop offers an opportunity to bring them together for shared dialogue around completed studies and work-in-progress.

Submissions are invited on themes including:

  • Theoretical framing of open data as a concept and a movement;
  • Use and impacts of open data in specific countries or specific sectors, including, but not limited to: government agencies, cities, rural areas, legislatures, judiciaries, and the domains of health, education, transport, finance, environment, and energy;
  • The making, implementation and institutionalisation of open data policy;
  • Capacity building for wider availability and use of open data;
  • Conceptualising open data ecosystems and intermediaries;
  • Entrepreneurial usage and open data economies in developing countries;
  • Linkages between transparency, freedom of information and open data communities;
  • Measurement of open data policy and practices;
  • Critical challenges for open data: privacy, exclusion and abuse;
  • Situating open data in global governance and developmental context;
  • Development and adoption of technical standards for open data;

Submissions are invited from all disciplines, though with an emphasis on empirical social research. PhD students, independent and early career researchers are particularly encouraged to submit abstracts. Panels will provide an opportunity to share completed or in-progress research and receive constructive feedback.

Submission details

Extended abstracts, in French, English, Spanish or Portuguese, of up to two pages, detailing the question addressed by the research, methods employed and findings should be submitted by February 28th 2015. Notifications will be provided by March 31st. Full papers will be due by May 1st. 

Registration for the symposium will open shortly after registration for the main International Open Government Data Conference.

Abstracts should be submitted via EasyChair.

Paper format

Authors of accepted abstracts will be invited to submit full papers. These should be a maximum of 20 pages, single spaced, exclusive of bibliography and appendixes. As an interdisciplinary and international workshop we welcome papers in a variety of formats and languages: French, English, Spanish and Portuguese. However, presentations will need to be given in English.

Full papers should be provided in .odt, .doc, or .rtf or as .html. Where relevant, we encourage authors to also share in a repository, and link to, data collected as part of their research. 

We are working to identify a journal special issue or other opportunity for publication of selected papers.

Contact

Contact savita.bailur@webfoundation.org or tim.davies@soton.ac.uk for more details.

Programme committee

About the Open Data Research Network

The Open Data Research Network was established in 2012 as part of the Exploring the Emerging Impacts of Open Data in Developing Countries (ODDC) project. It maintains an active newsletter, website and LinkedIn group, providing a space for researchers, policy makers and practitioners to interact. 

This workshop will also include an opportunity to find out how to get involved in the Network as it transitions to a future model, open to new members and partners, and with a new governance structure. 

Exploring the Open Data Barometer

[Summary: ODI Lunchtime lecture about the Open Data Barometer]


Just over a month ago, the World Wide Web Foundation launched the second edition of the Open Data Barometer to coincide with BBC Democracy Day. This was one of the projects I worked on at the Web Foundation before I completed my projects there at the end of last year. So, on Friday I had the opportunity to join my successor at the Web Foundation, Savita Bailur, to give an ODI Friday lunchtime talk about the methods and findings of the study.

A recording of the talk and slides are embedded below:

Friday lunchtime lecture: Exploring the Open Data Barometer: the challenges ahead for an open data revoluti…

And, as the talk mentions – all the data from the Open Data Barometer is available in the interactive report at http://opendatabarometer.org/

Unpacking open data: power, politics and the influence of infrastructures

[Summary: recording of Berkman Centre Lunch Talk on open data]

Somewhat belatedly, below you will find the video from the Berkman Centre talk I gave late last year on ‘Unpacking open data: power, politics and the influence of infrastructures’.

You can find a live-blog of the talk from Matt Stempeck and Erhardt Graeff over on the MIT Media Lab blog, and Willow Brugh drew the fantastic visual record of themes in the talk shown below:

[Visual record of ‘Unpacking open data’ by Willow Brugh]

The slides are also up on Slideshare here.

I’m now in the midst of trying to make more sense of the themes in this talk whilst in the writing-up stage of my PhD… and much of the feedback I had from the talk has been incredibly valuable in that process – so comments are always welcome.

20 ways to connect open data and local democracy

[Summary: notes for a workshop on local democracy and open data]

At the Local Democracy for Everyone (#notInWestminster) workshop in Huddersfield today I led a session titled ‘20 ways to connect open data and local democracy‘. Below is the list of ideas we started the workshop with.

In the workshop we explored how these, and other approaches, could be used to respond to priority local issues, from investing funds in environmental projects, to shaping local planning processes, and dealing with nuisance pigeons.

Graphic recording from break-out session by [@Jargonautical](http://www.twitter.com/jargonautical)

There is more to do to re-imagine how local open data should work, but the conversations today offered an interesting start.

1. Practice open data engagement

Data portals can be very impersonal things. But behind every dataset is a council officer or a team working to collect, manage and use the data. Putting a human face on datasets, linking them to the policy areas they affect, and referencing datasets from reports that draw upon them can all help put data in context and make it more engaging.

The Five Stars of Open Data Engagement provides a model for stepping up engagement activities, from providing better and more social meta-data, through to hosting regular office-hours and drop-in sessions to help the local community understand and use data better.

2. Showing the council contribution

A lot of the datasets required by the Local Government Transparency Code are about the cost of services. What information and data is needed to complete the picture and to show the impact of services and spending?

The Caring for my Neighbourhood project in Sao Paulo looked to geocode government budget and spending data to understand where funds were flowing, and has opened up a conversation with government about how to collect data in ways that make connecting budget data and its impacts easier in future.

Local government in the UK has access to a rich set of service taxonomies which could be used to link together data on staff salaries, contracts and spending with stats and stories on the services they provide and their performance. Finding ways to make this full picture accessible and easy to digest can provide the foundation for more informed local dialogue.

3. Open Data Discourses

In Massachusetts the Open Data Discourse project has been developing the idea of data challenges: based not just on app-building, but also on using data to create policy ideas that can address an identified local challenge.

For Cambridge, Mass, the focus for the first challenge in fall 2014 was on pedestrian, bicycle, and car accidents in the City. Data on accidents was provided, and accessed over 2,000 times in a six-week challenge period. The challenge resulted in eight submissions “that addressed policy-relevant issues such as how to format traffic accident data to enable trend analysis across the river into Boston, or how to reduce accidents and encourage cycling by having a parked car buffer.”

The challenge process culminated in a Friday evening meeting that brought together community members who had worked on challenge ideas with councillors and representatives of the local authority, to showcase the solutions and provide an award for a winning idea.

4. Focus on small data

There’s a lot of talk out there about ‘big data’ and how big data analytics can revolutionise government. But many of the datasets that matter are small data: spreadsheets created by an officer, or records held by community groups in various structures and formats.

Rahul Bhargava defines small data as:

“the thing that community groups have always used to do their work better in a few ways:

  • Evaluate: Groups use Small Data to evaluate programs so they can improve them
  • Communicate: Groups use Small Data to communicate about their programs and topics with the public and the communities they serve
  • Advocate: Groups use Small Data to make evidence-based arguments to those in power”

Simple steps to share and work with small data can make a big difference: and keep citizens, rather than algorithms, in control.

5. Tactile data and data murals

The Data Therapy project has been exploring a range of ways to make data more tactile: from laser-cutting food security information into vegetables to running ‘low tech data’ workshops that use pipe-cleaners, lego and crayons to explore representations of data about a local community.

Turning complex comparisons and numbers into physical artefacts, and finding the stories inside the statistics, can offer communities a way into data-informed dialogue, without introducing lots of alienating graphs and numbers.

The Data Therapy project’s data murals connect discussions of data with traditional community arts practice: painting large scale artworks that represent a community interpretation of local data and information.

6. Data-driven art

The Open Data Institute’s Data as Culture project has run a series of data art commissions: leading to a number of data-driven art works that bring real-time data flows into the physical environment. In 2011 Bristol City Council commissioned a set of art works, ‘Invisible Airs’, that included a device stabbing books in response to library cuts, and a spud gun triggered by spending records.

Alongside these political art works that add an explicit emotional dimension to public data, low-cost network connected devices can also be used to make art that passively informs – introducing indicators that show the state of local data into public space.

7. Citizen science

Not all the data that matters to local decision making comes from government. Citizens can create their own data, via crowdsourcing and via citizen-science approaches to data collection.

The Public Lab describes itself as a ‘DIY Environmental Science Community’ and provides how-to information on how citizen groups can build their own sensors or tools for everything from aerial mapping to water quality monitoring. Rather than ‘smart cities’ that centralise data from sensor networks, citizen science offers space for a collaboration between government and communities – creating smart citizens who can collect and make sense of data alongside local officials.

In China, citizens started their own home water quality testing to call for government to recognise and address clean water problems.

8. Data dives & hackathons

DataKind works to bring together expert analysts with social-sector organisations that have data, in order to look for trends and insights. Modelled on a hackathon, where activity takes place over an intense day or weekend of work, DataDives can generate new findings, new ideas about how to use data, and new networks for the local authority to draw upon.

Unlike a hackathon, where the focus is often on developing a technical app or innovation and where programming skill is often a pre-requisite, a Data Dive might be based around answering a particular question, or around finding what data means to multi-disciplinary teams.

It is possible to design inclusive hackathons which connect up the lived experience of communities with digital skills from inside and outside the community. The Hackathon FAQ explores some of the common pitfalls of holding civic hackathons: encouraging critical thought about whether prizes and other common features are likely to incentivise contributions, or distort the kinds of team building and collaboration wanted in a civic setting.

9. Contextualised consultation

Too often local consultations ask questions without providing citizens with the information they might need to explore and form their opinions. For example, an online consultation on green spaces, simply by asking for the Ward or Postcode of a respondent, could provide tailored information (and questions) about the current green spaces nearby.

Live open data feedback on the demographics and diversity of consultation respondents could also play a role in incentivising people to take part to ensure their views are represented.

It’s important though not to make too many assumptions when providing contextualised data: a respondent might care about the context near where their parents or children live as much as their own, for example – and so interfaces should offer the ability to look at data around areas other than the respondent’s home.

10. Adopt a dataset

When it snows in America, fire hydrants on the street can get frozen under the ice, so it’s important to dig them out after snowfall. However, the city doesn’t always have the resources to get to all the hydrants in time. Code for America found an ingenious solution, taking an open dataset of fire hydrants and creating a campaign for people to ‘Adopt a Hydrant‘, committing to dig it out when the blizzards come. They combined data with a social layer.

The same approach could work for many other community assets, but it could also work for datasets. Which datasets could be co-created with the community? Could walkers adopt footpath data and help keep it updated? Could the local bus user group adopt data on the accessibility of public transport routes, helping keep it updated?

The relationships created around a data quality feedback loop might also become important relationships for improving the services that the data describes.

11. Data-rich press releases

Local authorities are used to putting out press releases, often with selected statistics in them. But how can those releases also contain links to key datasets, and even interactive assets that journalists and the public can draw upon to dig deeper into the data?

Data visualisation expert David McCandless has argued that interactivity plays an important role in allowing people to explore structured data and information, and to turn it into knowledge. The Guardian Data Blog has shown how engaging information can be created from datasets. Whilst the Data Journalism Handbook offers some pointers for journalists (and local bloggers) to get started with data, many local newspapers don’t have the dedicated data-desks of big media houses – so the more the authority can do to provide data in ready-to-reuse forms, the more it can be turned into a resource to support local debate.

12. URLs for everything – with a call to action

Which is more likely to turn up on Twitter and get clicked on:

“What do you think of the new cycle track policy? Look at page 23, paragraph 2 of the report at the bottom of this page: http://localcouncil.gov/reports/1234” or

“What do you think of the new cycle track policy? http://localcouncil.gov/policy/ab12”

Far too often the important information citizens might want is online, but is buried away in documents or provided in ways that are impossible to link to.

When any proposal, policy, decision or transaction gets a permanent URL (web address) it can become a social object: something people can talk about on Twitter and Facebook and in other spaces.

For Linked Data advocates, giving everything in a dataset its own URL plays an important role in machine-to-machine communication, but it also plays a really important role in human communication. Think about how visitors to a data item might also be offered a ‘call to action’, whether it’s to report concerns about a spending transaction, or volunteer to get involved in events at a park represented by a data item.

13. Participatory budgeting – with real data

What can £5000 buy you? How much does it cost to run a local carnival? Or a swimming pool? Or to provide improved social care? Or cycle lanes? Answers to these questions might exist inside spending data – but often when participatory budgeting activities take place the information needed to work out what kinds of options may be affordable only comes into the picture late in the process.

Open Spending, the World Bank, NESTA and the Finnish Institute have all explored how open data could change the participatory budgeting process – although as yet there have been few experiments to really explore the possibilities.

14. Who owns it?

Kirklees Council have put together the ‘Who Owns My Neighbourhood?’ site to let residents explore land holdings and to “help take responsibility for land, buildings and activities in your neighbourhood”. Similar sites, with the goal of improving how land is used and addressing the problem of vacant lots, are cropping up across American cities.

These tools can enable citizens to identify land and government assets that could be better used by the community: but unchecked they may also risk giving more power to wealthy property speculators as a widely cited case study from Bangalore has warned.

15. Social audits

In many parts of the developing world, particularly across India, the Social Audit is an important process, focussed on “reviewing official records and determining whether state reported expenditures reflect the actual monies spent on the ground” (Aiyar & Samji, 2009).

Social Audits involve citizen groups trained up to look at records and ‘ground truth’ whether or not resources have been used in the way authorities say. Crucially, Social Audits culminate in public hearings: meetings where the findings are presented and discussed.

Models of citizen-led investigation, followed by formal public meetings, are also a feature of the London Citizens community organising approach, where citizens assemblies put community views to people in power. How could key local datasets form part of an evidence gathering audit process, whether facilitated by local government or led by independent community organisations?

16. Geofenced bylaws, licenses and regulations: building the data layer of the local authority

After seeing some of the projects to open up the legal codes of US cities, I started wondering where I would find out about the byelaws in my home town of Oxford. As the page on the City Council website that hosts them explains: “Byelaws generally require something to be done – or not done – in a particular location.” Unfortunately, in Oxford, what is required to be done, and where, is locked up inside scanned PDFs of typewritten minutes.

There are all sorts of local rules and regulations, licenses and other information that authorities issue which are tied to a particular geographic location: yet this is rarely a layer in the Geographic Information Systems that authorities use. How might geocoding this data, or even making it available through geofencing apps, help citizens to navigate, explore and debate the rules that shape their local places?

17. Conversations around the contracts pipeline

The Open Contracting project is calling for transparency and participation in public contracting. As part of the UK Local Government Transparency Code authorities have to publish the contracts they have entered into – but publishing the contract pipeline and planned procurement offers an important opportunity to work out if there are fresh ideas or important insights that could shape how funds are spent.

The Open Contracting Data Standard provides a way of sharing a flow of data about the early stages of a contracting process. Combine that information with a call to action, and a space for conversation, and there are ways to get citizens shaping tenders and the selection of suppliers.

18. Participatory planning: visualising the impacts of decisions

What data should a local authority ask developers submitting planning applications to provide?

For many developments there might be detailed CAD models available which could be shared and explored in mapping software to support a more informed conversation about proposed building projects.

19. Stats that matter

Local authorities often conduct one-off surveys and data collection exercises. These are a vital opportunity to build up an understanding of the local area. What opportunities are there to work in partnership with local community groups to identify the important questions that they want to ask? How can local government and community groups collaborate to collect actionable stats that matter: pooling needs, and even resources, to get the best sample and the best depth of insight?

20. Spreadsheet scorecards and dashboards

Dig deep enough in most local organisations and you will find one or more ‘super spreadsheets’ that capture and analyse key statistics and performance indicators. Many more people can easily pick up the skills to create a spreadsheet scorecard than can become overnight app developers.

Google Docs spreadsheets can pick up data live from the web. What dashboards might a local councillor want? Or a local residents association? What information would make them better able to do their job?
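Google Sheets offers built-in functions such as IMPORTDATA for pulling live data from the web; and for anyone who outgrows the spreadsheet, the same scorecard idea is only a few lines of Python. A minimal sketch, in which the URL and the ‘Amount’ column are hypothetical placeholders:

```python
# Minimal live-data scorecard: fetch a published CSV and compute a headline
# figure. The URL and the 'Amount' column are hypothetical placeholders.
import csv
import io
import urllib.request

URL = "http://localcouncil.gov/data/spending.csv"

with urllib.request.urlopen(URL) as response:
    rows = list(csv.DictReader(io.TextIOWrapper(response, encoding="utf-8")))

total = sum(float(row["Amount"]) for row in rows)
print("{} transactions, totalling £{:,.2f}".format(len(rows), total))
```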

Five reflections for an open data hackathon

I was asked to provide a short talk at the start of the Future Food Hackathon that kicked off in Wageningen, NL today, linked to the Global Open Data on Agriculture and Nutrition workshop taking place over the next few days.

Below are the speaker notes I jotted down for the talk.

On open data and impact

I want to start with an admission. I’m a sceptic about open data.

In the last five years we’ve seen literally millions of datasets placed online as part of a broad open data movement – with grand promises made about the way this will revolutionise politics, governance and economies.

But, when you look for impact, with the exception of a few specific domains such as transport, the broad society wide impact of that open data is hard to find. Hundreds of hack-days have showcased what could be possible with data, but few have delivered truly transformative innovations that have made it to scale.

And many of the innovations that result often seem to focus on #FirstWorldProblems – if not purely ‘empowering the already empowered’, then at least not really engaging with social issues in ways that are set to tip the balance in favour of those with least advantage.

I’m sceptical, but I’m not pessimistic. In fact, understood as part of a critique of the closed way we’ve been doing aid, policy making, production and development – open data is an incredibly exciting idea.

However, far too much open data thinking has stopped at the critique, without moving on to propose something new and substantive. It offers a negation (data which is not proprietary; not in PDF; not kept from public view), without talking enough about how new open datasets should be constructed. Because opening data is not just about taking a dataset from inside the government or company and putting it online: in practice it involves the creation of new datasets, selecting and standardising fields and deciding how to model data. This ultimately involves the construction of new systems of data.

And this links to a second blind spot of current open data thinking: the emphasis on the dataset, to the exclusion of the social relationships around it.

Datasets do not stand alone. They are produced by someone, or some group, for some purpose. They get meaning from their relationship to other data, and from the uses to which they are put. As Lisa Gitelman and colleagues have put it in ‘Raw Data is an Oxymoron’, datasets have histories, and we need to understand these to reshape their futures.

Matthew Smith and colleagues at the IDRC have spent a number of years exploring the idea of openness in development. They distinguish between openness defined in ‘universal legal and technical terms’, and openness as a practice – and argue that we need to put open practices at the centre of our theory of openness. These practices are, to some extent, enabled by the formalities of Creative Commons licenses, or open data formats, but they are something more, and draw upon the cultures of peer-to-peer production and open source, not just the legal and technical devices.

Ultimately, then, I’m optimistic about the potential of open data if we can think about the work of projects like GODAN not just as a case of gaining permission to work with a few datasets, but as being about building new open and collaborative infrastructures, through which we can use data to communicate, collaborate and reshape our world.

I’m also hopeful about the potential of colliding cultures from open source and open data, with current cultures in the agriculture and nutrition communities. Can we bring these into a dialogue that builds shared understanding of how to solve problems, and lets us rethink both openness, and agriculture, to be more effective, inclusive and just?

Five observations on hacking with open data

Ok: so let me pause. I recognise that the last few minutes might have been a bit abstract and theoretical for 9am on a Monday morning. Let me try, then, to offer five somewhat more practical thoughts about approaching an open data hackathon:

1. Hacking is learning.

A common experience of the hackathon is frustration at the data not being ready to use. Yet the process of struggling with data is a process of learning about the world it represents – and sometimes one of the most important outcomes of a hack is the induction of a new community of people, from different backgrounds, into a shared understanding of some data and domain.

One of the most fascinating things about the open government data processes I’ve been tracking in the UK has been the way in which it has supported civic learning amongst technology communities – coming to understand more how the state works by coming to understand its data.

So – at an interdisciplinary hack like this, there is the chance to treat the peculiarities of the data as opportunities to understand the process and politics of the agriculture and nutrition field, and to be better equipped to propose new approaches that don’t try to make perfect data out of problematic situations – but that try and engage with the real challenges and problems of the field.

2. Hacking is political.

I’ve had the pleasure over the last few years of working a number of times with the team at the iHub in Nairobi, and of following the development of Kenya’s open data initiative. In their study of an ‘incubator’ project to encourage developers to use Kenyan open government data, Leo Mutuku and her team made an interesting discovery.

Some developers did not understand their apps as products to be taken to scale – but instead saw them as rhetorical acts: a demonstration to government of how ICTs could be used, and a call on government to rethink its own ICTs, rather than an attempt by outside developers to replace those ICTs for government.

Norfolk-based developer Rupert Reddington once referred to this as ‘digital pamphleteering’, in which the application is a provocation in a debate – rather than primarily, or at all, a tool for everyday use.

Think about how you present an openness-oriented provocation to the status quo when you pitch your ideas and creations.

3. You are building infrastructure.

Apps created with open data are just one part of the change process. Even a transport app that lets people know when the next bus is due only has an impact if it becomes part of people’s everyday practice, and they rely on it in ways that change their behaviour.

Infrastructure is something which fades into the background: when it becomes established and works well, we don’t see it. It is only when it is disrupted that it becomes notable (as I learned trying to cross the channel yesterday – when the Channel Tunnel became a very visible piece of infrastructure exactly because it was blocked and not working).

One of the questions I’m increasingly asking in my research work, is how we can build ‘inclusive infrastructures’, and what steps we need to take to ensure that the data infrastructures we have are tipped in favour of the least advantaged rather than the most powerful. Sometimes the best innovations are ones that complement and extend an existing infrastructure, bringing hitherto unheard voices into the debate, or surfacing hitherto unseen assumptions.

Sustainability is also important to infrastructure. What you create today may just be a prototype – but if you are proposing it as part of a new infrastructure of action – consider if you can how it might be made sustainable. Would building for sustainability change the concept or idea?

4. Look at the whole value chain.

There is a tendency in hackathons to focus on the ‘end user’ – building consumer-oriented apps and platforms. Often that approach makes sense: disintermediation can make many systems work better. But it’s not always the way to make the most difference.

When I worked with CABI and the Institute for Development Studies in 2013 to host a ‘Research to Impact’ hackathon at the iHub in Nairobi, we brought together people involved in improving the quality of agriculture and the lives of smallholder farmers. After a lot of discussion, it became clear that between ‘research’ and the ‘farm’ were all sorts of important intermediaries, from seed-sellers, to agricultural extension workers. Instead of building direct-to-farmer information systems, teams explored the kinds of tools that could help an agriculture extension worker deliver better support, or that could help a seed-seller to improve their product range.

Apps with 10s or 100s of back-office users may be much more powerful than apps with 1000s of ‘end users’.

When the two Open Data in Developing Countries project research partners in Kenya launched their research in the middle of last year, an interesting argument broke out between advocates of ‘disintermediation’ and of ‘empowering intermediaries’. On the one hand, intermediaries contextualise information, and may be trusted: helping communities adopt information as actionable insights, when they may not understand or trust the information direct from source. On the other hand, intermediaries are often seen as a problem: middle-men using their position for self-interest, and limiting the freedoms of those they are the intermediary to.

Open approaches can offer an important ‘pressure valve’ in these contexts: focussing on creating platforms for intermediaries, but not restricting information to intermediaries only.

5. Evolution can be as powerful as revolution.

The UN Secretary General has led the call for a ‘data revolution for development’, with the Independent Expert Group he appointed proposing a major update in practices of data use and production.

This revolution narrative often implies that organisations need to shift direction: completely transforming data practices, and throwing out existing report-writing and paper-based approaches in favour of new ‘digital by default’ technology-driven processes. But what happens if we think differently and start from the existing strengths of organisations:

  • What is going well when it comes to data in the international potato trade?
  • Who are the organisations with promising practice in localising climate-change relevant information for farmers?
  • What have been the stories of progress in tracking food-borne disease?

How can we extend these successes? What innovations have made their first iteration, but are just waiting for the next?

One of the big challenges of ‘data revolution’ is the organisational change curve it demands, and the complex relationship between data supply and demand. Often the data available right now is not great. For example, if you are currently running a crop monitoring project with documents and meetings, but a new open dataset becomes available that is relevant to your work, starting a ‘data revolution’ tomorrow will involve lots of time working with bad data and finding new ways to work around the peculiarities of the new system: the investment this year to do the same work you were doing with ‘inefficient’ analogue approaches last year might be double, as you scale the learning curve.

Of course, in year 3 or 4, the more efficient way of working may start to pay off: but often projects never get there. And because use of the new open dataset dropped away in year 2, when early adopters realised they could not afford to transform their practices to work with it, government publishers get discouraged, and by year 3 and 4 the data might not be there.

An evolution approach works out how to change practices year-by-year: iterating and negotiating the place of data in the future of food.

(See Open Data in Developing Countries – Insights from Phase I for more on this point)

In conclusion

Ok. Still a bit abstract for 9.15am on a Monday morning: but I hope the general point is clear.

Ultimately, the most important thing about the creations at a hackathon is their ‘theory of change’: how does the time spent hacking show the way towards real change? I’m certainly very optimistic that when it comes to the pitch back tomorrow, the ideas and energy in this room will offer some key pointers for us all.

OCDS – Notes on a standard

Today sees the launch of the first release of the Open Contracting Data Standard (OCDS). The standard, as I’ve written before, brings together concrete guidance on the kinds of documents and data that are needed for increased transparency in processes of public contracting, with a technical specification describing how to represent contract data and meta-data in common ways.

The video below provides a brief overview of how it works (or you can read the briefing note), and you can find full documentation at http://standard.open-contracting.org.

When I first jotted down a few notes on how to go forward from the rapid prototype I worked on with Sarah Bird in 2012, I didn’t realise we would actually end up with the opportunity to put some of those ideas into practice. However: we did – and so in this post I wanted to reflect on some aspects of the standard we’ve arrived at, some of the learning from the process, and a few of the ideas that have guided at least my inputs into the development process.

As, hopefully, others pick up and draw upon the initial work we’ve done (in addition to the great inputs we’ve had already), I’m certain there will be much more learning to capture.

(1) Foundations for ‘open by default’

Early open data advocacy called for ‘raw data now‘, asking governments essentially to export and dump online existing datasets, with issues of structure and regular publishing processes to be sorted out later. Yet, as open data matures, the discussion is shifting to the idea of ‘open by default’ – and taken seriously this means more than openly licensing whatever data dumps get created: it should mean that data is released from government systems as a matter of course, as part of their day-to-day operation.

The full OCDS model is designed to support this kind of ‘open by default’, allowing publishers to provide small releases of data every time some event occurs in the lifetime of a contracting process. A new tender is a release. An amendment to that tender is a release. The contract being awarded, or then signed, are each releases. These data releases are tied together by a common identifier, and can be combined into a summary record, providing a snapshot view of the state of a contracting process, and a history of how it has developed over time.

This releases and records model seeks to combine together different user needs: from the firm seeking information about tender opportunities, to the civil society organisation wishing to analyse across a wide range of contracting processes. And by allowing core stages in the business process of contracting to be published as they happen, and then joined up later, it is oriented towards the development of contracting systems that default to timely openness.
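To make that concrete, here is a minimal Python sketch of the releases-and-records idea. The field names are simplified from the OCDS documentation, and the standard’s own merging rules are richer than the crude field-by-field overwrite used here:

```python
# A minimal sketch of OCDS releases being compiled into a record.
# Field names are simplified; real merging rules are more sophisticated.

tender_release = {
    "ocid": "ocds-xxxxxx-000-00001",  # common identifier for the contracting process
    "date": "2014-11-01T09:00:00Z",
    "tag": ["tender"],
    "tender": {"title": "Stationery supplies",
               "value": {"amount": 10000, "currency": "GBP"}},
}

award_release = {
    "ocid": "ocds-xxxxxx-000-00001",  # same process, later event
    "date": "2014-12-15T14:30:00Z",
    "tag": ["award"],
    "award": {"suppliers": [{"name": "ACME Supplies Ltd"}]},
}

def compile_record(releases):
    """Combine releases sharing an ocid into a snapshot record."""
    ordered = sorted(releases, key=lambda r: r["date"])
    compiled = {}
    for release in ordered:
        for field, value in release.items():
            if field not in ("date", "tag"):
                compiled[field] = value  # later releases overwrite earlier ones
    return {"releases": ordered, "compiledRelease": compiled}

record = compile_record([tender_release, award_release])
print(record["compiledRelease"]["award"]["suppliers"][0]["name"])
```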

As I’ll be exploring in my talk at the Berkman Centre next week, the challenge ahead for open data is not just to find standards to make existing datasets line-up when they get dumped online, but is to envisage and co-design new infrastructures for everyday transparent, effective and accountable processes of government and governance.

(2) Not your minimum viable product

[Image: different models of standard]

Many open data standard projects adopt either a ‘Minimum Viable Product‘ approach, looking to capture only the few most common fields between publishers, or are developed through focussing on the concerns of a single publisher or user. Whilst MVP models may make sense for small building blocks designed to fit into other standardisation efforts, when it came to OCDS there was a clear user demand to link up data along the contracting process, and this required an overarching framework into which simple components could be placed, or from which they could be extracted, rather than the creation of ad-hoc components, with the attempt to join them up made later on.

Whilst we didn’t quite achieve the full abstract model + idiomatic serialisations proposed in the initial technical architecture sketch, we have ended up with a core schema, and then suggested ways to represent this data in both structured and flat formats. This is already proving useful for example in exploring how data published as part of the UK Local Government Transparency Code might be mapped to OCDS from existing CSV schemas.

(3) The interop balancing act & keeping flex in the framework

OCDS is, ultimately, not a small standard. It seeks to describe the whole of a contracting process, from planning, through tender, to contract award, signed contract, and project implementation. And at each stage it provides space for capturing detailed information, linking to documents, tracking milestones and tracking values and line-items.

This shape of the specification is a direct consequence of the method adopted to develop it: looking at a diverse set of existing data, and spending time exploring the data that different users wanted, as well as looking at other existing standards and data specifications.

However, OCDS by no means covers all the things that publishers might want to state about contracting, nor all the things users may want to know. Instead, it focusses on achieving interoperability of data in a number of key areas, and then providing a framework into which extensions can be linked as the needs of different sub-communities of open data users arise.

We’re only in the early stages of thinking about how extensions to the standard will work, but I suspect they will turn out to be an important aspect: allowing different groups to come together to agree (or contest) the extra elements that are important to share in a particular country, sector or context. Over time, some may move into the core of the standard, and potentially elements that appear core right now might move into the realm of extensions, each able to have their own governance processes if appropriate.

As Urs Gasser and John Palfrey note in their work on Interop, the key in building towards interoperability is not to make everything standardised and interoperable, but is to work out the ways in which things should be made compatible, and the ways in which they should not. Forcing everything into a common mould removes the diversity of the real world, yet leaving everything underspecified means no possibility to connect data up. This is both a question of the standards, and the pressures that shape how they are adopted.

(4) Avoiding identity crisis

Data describes things. To be described, those things need to be identified. When describing data on the web, it helps if those things can be unambiguously identified and distinguished from other things which might have the same names or identification numbers. This generally requires the use of globally unique identifiers (GUIDs): some value which, in the universe of all available contracting data, for example, picks out a unique contracting process; or, in the universe of all organizations, uniquely identifies a specific organization. However, providing these identifiers can turn out to be both a politically and technically challenging process.

The Open Data Institute have recently published a report on the importance of identifiers that underlines how important identifiers are to processes of opening data. Yet, consistent identifiers often have key properties of public goods: everyone benefits from having them, but providing and maintaining them has some costs attached, which no individual identifier user has an incentive to cover. In some cases, such as goods and service identifiers, projects have emerged which take a proprietary approach to fund the maintenance of those identifiers, selling access to the lookup lists which match the codes for describing goods and services to their descriptions. This clearly raises challenges for an open standard, as when proprietary identifiers are incorporated into data, then users may face extra costs to interpret and make sense of data.

In OCDS we’ve sought to take as distributed an approach to identifiers as possible, only requiring globally unique identifiers where absolutely necessary (identifying contracts, organizations and goods and services), and deferring to existing registration agencies and identity providers, with OCDS maintaining, at most, code lists for referring to each identity ‘scheme’.

In some cases, we’ve split the ‘scheme’ out into a separate field: for example, an organization identifier consists of a scheme field with a value like ‘GB-COH’ to stand for UK Companies House, and then the identifier given in that scheme, like ‘5381958’. This approach allows people to store those identifiers in their existing systems without change (existing databases might hold national company numbers, with the field assumed to come from a particular register), whilst making explicit in OCDS the scheme they come from. In other cases, however, we look to create new composite string identifiers, combining a prefix and some identifier drawn from an organization’s internal system. This is particularly the case for the Open Contracting ID (ocid). By doing this, the identifier can travel between systems more easily as a GUID – and could even be incorporated in unstructured data as a key for locating documents and resources related to a given contracting process.
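A small illustrative sketch of the two patterns, in Python (the ‘GB-COH’ scheme code comes from the example above; the ocid prefix and internal reference are hypothetical):

```python
# Illustrative only: 'GB-COH' is the scheme code from the example above;
# the ocid prefix and internal reference are hypothetical.

def org_identifier(scheme, identifier):
    """An organization identifier with its scheme made explicit."""
    return {"scheme": scheme, "id": identifier}

def make_ocid(prefix, internal_ref):
    """A composite Open Contracting ID: a registered prefix plus an
    identifier drawn from the publisher's own contracting system."""
    return "{}-{}".format(prefix, internal_ref)

buyer_id = org_identifier("GB-COH", "5381958")       # UK Companies House number
ocid = make_ocid("ocds-a1b2c3", "TENDER-2014-0042")  # hypothetical prefix and reference
print(buyer_id, ocid)
```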

However, recent learning from the project is showing that many organisations are hesitant about the introduction of new IDs, and that adoption of an identifier schema may require as much advocacy as adoption of a standard. At a policy level, bringing some external convention for identifying things into a dataset appears to be seen as affecting the, for want of a better word, sovereignty of a specific dataset: even if in practice the prefix approach of the ocid means it only needs to be hard-coded in the systems that expose data to the world, not necessarily stored inside organizations’ databases. However, this is an area I suspect we will need to explore more, and keep tracking, as OCDS adoption moves forward.

(5) Bridging communities of practice

If you look closely you might in fact notice that the specification just launched in Costa Rica is actually labelled as a ‘release candidate’. This points to another key element of learning in the project, concerning the different processes and timelines of policy and technical standardisation. In the world of funded projects and policy processes, deadlines are often fixed, and the project plan has to work backwards from there. In a technical standardisation process, there is no ‘standard’ until a specification is in use: and has been robustly tested. The processes for adopting a policy standard, and setting a technical one, differ – and whilst perhaps we should have spoken from the start of the project of an overall standard, embedding within it a technical specification, we were too far down the path towards the policy launch before this point. As a result, the Release Candidate designation is intended to suggest the specification is ready to draw upon, but that there is still a process to go (and future governance arrangements to be defined) before it can be adopted as a standard per se.

(6) The schema is just the start of it

This leads to the most important point: that launching the schemas and specification is just one part of delivering the standard.

In a recent e-mail conversation with Greg Bloom about elements of standardisation, linked to the development of the Open Referral standard, Greg put forward a list of components that may be involved in delivering a sustainable standards project, including:

  • The specification – with its various components and subcomponents;
  • Tools that assess compliance according to the spec (e.g. validation tools, and more advanced assessment tools);
  • Some means of visualizing a given set of data’s level of compliance;
  • Incentives of some kind (whether positive or negative) for attaining various levels of compliance;
  • Processes for governing all of the above;
  • and of course the community through which all of this emerges and is sustained.

To this we might also add elements like documentation and tutorials, support for publishers, catalysing work with tool builders, guidance for users, and so on.

Open government standards are not something to be published once, and then left, but require labour to develop and sustain, and involve many social processes as much as technical ones.

In many ways, although we’ve spent a year of small development iterations working towards this OCDS release, the work now is only just getting started, and there are many technical, community and capacity-building challenges ahead for the Open Contracting Partnership and others in the open contracting movement.

Creating the capacity building game…

[Summary: crowdsourcing contributions to a workshop at Open Development Camp]

There is a lot of talk of ‘capacity building’ in the open data world. As the first phase of the ODDC project found, there are many gaps between the potential of open data and its realisation: and many of these gaps can be described as capacity gaps – whether on the side of data suppliers, or potential data users.

But how does sustainable capacity for working with open data develop? At the Open Development Camp in a few weeks’ time I’ll be facilitating a workshop to explore this question, and to support participants to share learning about how different capacity building approaches fit in different settings.

The basic idea is that we’ll use a simple ‘cards and scenarios’ game (modelled, as ever, on the Social Media Game), where we identify a set of scenarios with capacity building needs, and then work in teams to design responses, based on combining a selection of different approaches, each of which will be listed on one of the game cards.

But, rather than just work from the cards, I’m hoping that for many of these approaches there will be ‘champions’ on hand, able to make the case for that particular approach, and to provide expert insights to the team. So:

  • (1) I’ve put together a list of 24+ different capacity building approaches I’ve seen in the open data world – but I need your help to fill in the details of their strengths, weaknesses and examples of them in action.
  • (2) I’m looking for ‘champions’ for these approaches, either who will be at the Open Development Camp, or who could prepare a short video input in advance to make the case for their preferred capacity building approach;

If you could help with either, get in touch, or dive in direct on this Google Doc.

If all goes well, I’ll prepare a toolkit after the Open Development Camp for anyone to run their own version of the Capacity Building Game.

The list so far

Click each one to jump direct to the draft document

Exploring Wikidata

[Summary: thinking aloud – brief notes on learning about the Wikidata project, and how it might help in addressing the organisational identifiers problem]

I’ve spent a fascinating day today at the Wikimania Conference at the Barbican in London, mostly following the programme’s ‘data’ track in order to understand in more depth the Wikidata project. This post shares some thinking aloud to capture some learning, reflections and exploration from the day.

As the Wikidata project manager, Lydia Pintscher, framed it, right now access to knowledge on Wikipedia is highly skewed by language. The topics of articles you have access to, the depth of meta-data about them (such as the locations they describe), the detail of those articles, and their likelihood of being up to date, are all greatly affected by the language you speak. Italian or Greek Wikipedia may have great coverage of places in Italy or Greece, but go wider and their coverage drops off. In terms of seeking more equal access to knowledge, this is a problem. However, whilst the encyclopedic narrative of a French, Spanish or Catalan page about the Barbican Center in London will need to be written by someone in command of that language, many of the basic facts that go into an article are language-neutral, or translatable as small units of content, rather than sentences and paragraphs. The date the building was built, the name of the architect, the current capacity of the building – all the kinds of things which might appear in infoboxes – are all things that could be made available to bootstrap new articles, or that, when changed, could have their changes cascaded across all the different language pages that draw upon them.

That is one of the motivating cases for Wikidata: separating out ‘items’ and their ‘properties’ that might belong in Wikipedia from the pages, making this data re-usable, and using it to build a better encyclopedia.
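As a rough sketch of what this looks like from the outside, the snippet below uses the wbsearchentities and wbgetentities actions of the Wikidata API to pull the language-neutral statements about an item (the search term is just an example, and error handling is omitted):

```python
# Fetch language-neutral facts about an item from the Wikidata API.
import requests

API = "https://www.wikidata.org/w/api.php"

# 1. Look up the item ID (Q-number) for a label
search = requests.get(API, params={
    "action": "wbsearchentities",
    "search": "Barbican Centre",
    "language": "en",
    "format": "json",
}).json()
qid = search["search"][0]["id"]

# 2. Fetch the item's claims: property -> statements, each of which can
#    carry qualifiers (e.g. valid-from dates) and source references
entity = requests.get(API, params={
    "action": "wbgetentities",
    "ids": qid,
    "props": "claims",
    "format": "json",
}).json()

claims = entity["entities"][qid]["claims"]
print(qid, sorted(claims))  # property IDs like 'P31' ('instance of')
```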

However, wikidata is also generating much wider interest – not least because it is taking on a number of problems that many people want to see addressed. These include:

  • Somewhere ‘institutional’ and well governed on the web to put data – and where each data item also gains the advantage of a discussion page.
  • The long-term preservation, and versioning, of data;
  • Providing common identifiers on the web for arbitrary things – and providing URIs for these things that can be looked up (building on the idea of DBpedia as a crystallisation point for the web of linked data);
  • Providing a data model that can cope with change over time, and with data from heterogeneous sources – all of the properties in Wikidata can have qualifiers, such as the dates from or until which a statement is true, source information, and other provenance data.

Wikidata could help address these issues on two levels:

  • By allowing anyone to add items and properties to the central wikidata instance, and making these available for re-use;
  • By providing an open source software platform for anyone to use in managing their own corpus of wikified, versioned data*;

A particular use case I’m interested in is whether it might help in addressing the perennial Organisational Identifiers problem faced by data standards such as IATI and Open Contracting, where it turns out that having shared identifiers for government agencies, and for lots of existing, but non-registered, entities like charities and associations that give and receive funds, is really difficult. Others at Wikimania spoke of potential use cases around maintaining national statistics, and archiving the datasets underlying scientific publications.

However, in thinking about the use cases Wikidata might have, it’s important to keep in mind its current scope:

  • It is a store of ‘items’ and then ‘statements’ about them (essentially a graph store). This is different from being a place to store datasets (as you might want to do with the archival of the dataset used in a scientific paper), and it means that, once created, items are the first class entities of Wikidata, able to exist in multiple collections.
  • It currently inherits Wikipedia’s notability criteria for items. That is, the basic building blocks of wikidata – the items that can be identified and described, such as the Barbican, Cheese or Government of Grenada – can only be included in the main wikidata instance if they have a corresponding wikipedia page in some language wikipedia (or similar: this requirement is a little more complex).
  • It can be edited by anyone, at any time. That is, systems that rely on the data need to consider what levels of consistency they need. Of course, as Wikipedia has shown, editability is often a great strength – and as Rufus Pollock noted in the ‘data roundtable’ session, updating and versioning of open data are currently big missing parts of our data infrastructures.

Unlike the entirely distributed open world assumption on the web of data, where the AAA assumption holds (Anyone can say Anything about Anything), wikidata brings both a layer of regulation to the statements that can be made, and the potential of community driven editorial control. It sits somewhere between the controlled description sets of Schema.org, and an entirely open proliferation of items and ontologies to describe them.

Can it help the organisational identifiers problem?

I’ve started to carry out some quick tests to see how far wikidata might be a resource to help with the aforementioned organisational identifiers problem.

Using Kasper Brandt‘s fantastically useful linked data rendering of IATI, I queried for the names of a selection of government and non-government organisations occurring in the International Aid Transparency Initiative data. I then used Open Refine to look up a selection of these on the DBPedia endpoint (which it seems now incorporates wikidata info as well). This was very rough-and-ready (just searching for full name matches), but by cross-checking negative results (where there were no matches) by searching wikipedia manually, it’s possible to get a sense of how many organisations might be identifiable within Wikipedia.

So far I’ve only tested the method, and haven’t run a large scale test – but I found around half the organisations I checked had a Wikipedia entry of some form, and thus would currently be eligible to be Wikidata items right away. For others, Wikipedia pages would need to be created, and whether or not all the small voluntary organisations that might occur in an IATI or Open Contracting dataset would be notable enough for inclusion is something that would need to be explored more.
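For anyone wanting to replicate the rough-and-ready matching step, the sketch below asks the DBpedia SPARQL endpoint whether any resource carries a given English label (the organisation name here is an example; exact-label matching will, as noted, miss plenty):

```python
# Ask the DBpedia SPARQL endpoint whether a resource exists with exactly
# this English label. A crude stand-in for the Open Refine lookup above.
import requests

def has_dbpedia_entry(name):
    query = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        ASK { ?thing rdfs:label "%s"@en }
    """ % name
    response = requests.get("http://dbpedia.org/sparql", params={
        "query": query,
        "format": "application/sparql-results+json",
    })
    return response.json()["boolean"]

print(has_dbpedia_entry("World Health Organization"))  # True if a match exists
```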

Exploring the Wikidata pages for some of the organisations I did find threw up some interesting additional possibilities to help with organisation identifiers. A number of pages were linked to identifiers from Library Authority Files, including VIAF identifiers such as this set of examples returned for a search on Malawi Ministry of Finance. Library Authority Files would tend to only include entries when a government agency has a publication of some form in that library, but at a quick glance coverage seems pretty good.

Now, as Chris Taggart would be quick to point out, neither Wikipedia pages nor library authority file identifiers act as a registry of legal entities. They pick out everyday concepts of an organisation, rather than the legally accountable body which enters into contracts. Yet, as they become increasingly backed by data, these identifiers do provide access to look up lots of contextual information that might help in understanding issues like organisational change over time. For example, the Wikipedia page for the UK’s Department for Education includes details on the departments that preceded it. In Wikidata form, a statement like this could even be qualified to say if that relationship of being a preceding department is one that passes legal obligations from one to the other.

I’ve still got to think about this a lot more, but it seems that:

  • There are many things it might be useful to know about organisations which are not going to be captured in official registries anytime soon. Some of these things will need to be the subject of discussion, and open to agreement through dialogue. Wikidata, as a trusted shared space with good community governance practices, might be a good place to keep them – albeit recognising that in its current phase it has no goal of being a comprehensive repository of records about all organisations in the world (and other spaces, such as OpenCorporates, are already solving the comprehensive coverage problem for particular classes of organisation).

  • There are some organisations for which, in many countries, no official registry exists – particularly government departments and agencies. Many of these are clearly notable, so even if no Wikipedia entry yet exists, one could and should. A project to manage and maintain government agency records and identifiers in Wikidata may be worth exploring.

Whether the open government community would embrace a shift away from seeking authorities to provide master lists, and towards a distributed, best-efforts community approach to some aspects of the organisational identifiers problem, is something yet to be explored.

Notes

*I should acknowledge here SJ Klein’s counsel that this (encouraging multiple domain-specific instances of a Wikidata platform) is potentially a very bad idea, as the ‘forking’ of wiki-projects has rarely been a successful journey, particularly with respect to the sustainability of forked content. As SJ outlined, even though there may be technical and social challenges to a mega graph store, these could be compared to the apparent challenges of making the first encyclopedias (the idea of a 50,000-page book must have seemed crazy at first), or the social challenges envisioned for Wikipedia at its genesis (‘how could non-experts possibly edit an encyclopedia?’). On this view, it is only by setting the ambition of a comprehensive shared store of the world’s propositional data (with the qualifiers that Wikidata supports to make this possible without a closed world assumption) that such limits might be overcome. Perhaps with data there is a greater possibility to support forking, and re-merging, of Wikidata instances, permitting short-term pragmatic creation of datasets outside the core Wikidata project which can later be brought back in if they are considered, as a set, notable (although this still carries the risk that forked projects diverge in their values, governance and structure so far that re-connecting later becomes prohibitively difficult).

Fifteen open data insights

[Summary: blogging the three-page version of the Open Data in Developing Countries – Emerging Insights from Phase I paper, with some preamble]

I’m back living in Oxford after my almost-year in the USA at the Berkman Center. Before we returned, Rachel and I took a month to travel around the US – by Amtrak. The delightfully ponderous pace of US trains gave me plenty of time for reading, which was just as well, given June was the month when most of the partners in the Open Data in Developing Countries project I coordinate were producing their final reports. So, in between staring at the stunning scenery as we climbed through the Rockies, or watching amazing lightning storms from the viewing car, I was digging through in-depth reports into open data in the global south and trying to pick out common themes and issues. A combination of post-it notes and Scrivener index cards later, and finally back at my desk in Oxford, the result was a report, released alongside the ODDC Research Sharing Event in Berlin last week, that seeks to snapshot 15 insights or provocations for policy-makers and practitioners, drawn from the ODDC case study reports.

These are just the first stage of the synthesis work to be carried out in the ODDC project. In the network meeting, also hosted in Berlin last week, we worked on mapping these and other findings onto the project’s original conceptual framework, and identified further cross-cutting write-ups that are needed. But, for now, below are the 15 points from the three-page briefing version; a full write-up of these points is available for download. You can also find reports from all the individual project partners, including a collection of quick-read research posters, over on the Open Data Research Network website.

15 insights into open data supply, use and impacts

(1) There are many gaps to overcome before open data availability can lead to widespread effective use and impact. Open data can lead to change through a ‘domino effect’, or by creating ripples of change that gradually spread out. However, often many of the key ‘domino pieces’ are missing, and local political contexts limit the reach of ripples. Poor data quality, low connectivity, scarce technical skills, weak legal frameworks and political barriers may all prevent open data from triggering sustainable change. Attentiveness to all the components of open data impact is needed when designing interventions.

(2) There is a frequent mismatch between open data supply and demand in developing countries. Counting datasets is a poor way of assessing the quality of an open data initiative. The datasets published on portals are often the datasets that are easiest to publish, not the datasets most in demand. Politically sensitive datasets are particularly unlikely to be published without civil society pressure. Sometimes the gap is on the demand side – as potential open data users often do not articulate demands for key datasets.

(3) Open data initiatives can create new spaces for civil society to pursue government accountability and effectiveness. In some developing countries, the conversation around transparency and accountability that ideas of open data can support is as important as the datasets themselves.

(4) Working on open data projects can change how government creates, prepares and uses its own data. The motivations behind an open data initiative shape how government uses the data itself. Civil society and entrepreneurs interacting with government through open data projects can help shape government data practices. This makes it important to consider which intermediaries gain insider roles shaping data supply.

(5) Intermediaries are vital to both the supply and the use of open data. Not all data needed for governance in developing countries comes from government. Intermediaries can create data, articulate demands for data, and help translate open data visions from political leaders into effective implementations. Traditional local intermediaries are an important source of information, in particular because they are trusted parties.

(6) Digital divides create data divides in both the supply and use of data. In some developing countries key data is not digitised, or a lack of technical staff has left data management patchy and inconsistent. Where Internet access is scarce, few citizens can have direct access to data or services built with it. Full access is needed for full empowerment, but offline intermediaries, including journalists and community radio stations, also play a vital role in bridging the gaps between data and citizens.

(7) Where information is already available and used, the shift to open data involves data evolution rather than data revolution. Many NGOs and intermediaries already access the information which is now becoming available as data. Capacity building should start from existing information and data practices in organisations, and should look for the step-by-step gains to be made from a data-driven approach.

(8) Officials’ fears about the integrity of data are a barrier to more machine-readable data being made available. The publication of data as PDFs or scanned copies is often down to a misunderstanding of how open data works: only copies of published data can be changed, while the original can be kept authoritative. Helping officials understand this may help increase the supply of data.

(9) Very few datasets are clearly openly licensed, and there is low understanding of what open licenses entail. There are mixed opinions on the importance of a focus on licensing in different contexts. Clear licenses are important to building a global commons of interoperable data, but may be less relevant to particular uses of data on the ground. In many countries, wider conversations about licensing are yet to take place.

(10) Privacy issues are not on the radar of most developing country open data projects, although commercial confidentiality does arise as a reason preventing greater data transparency. Much state-held data is collected either from citizens or from companies. Many countries in the ODDC study have weak or absent privacy laws and frameworks, yet participants in the studies raised few personal privacy considerations. By contrast, a lack of clarity, and officials’ concerns, about potential breaches of commercial confidentiality when sharing data gathered from firms were a barrier to opening data.

(11) There is more to open data than policies and portals. Whilst central open data portals act as a visible symbol of open data initiatives, a focus on portal building can distract attention from wider reforms. Open data elements can also be built on existing data sharing practices, with data made available through the locations where citizens, NGOs and businesses already go to access information.

(12) Open data advocacy should be aware of, and build upon, existing policy foundations in specific countries and sectors. Sectoral transparency policies for local government, budget and energy industry regulation, amongst others, could all have open data requirements and standards attached, drawing on existing mechanisms to secure sustainable supplies of relevant open data in developing countries. In addition, open data conversations could help make existing data collection and disclosure requirements fit better with the information and data demands of citizens.

(13) Open data is not just a central government issue: local government data, city data, and data from the judicial and legislative branches are all important. Many open data projects focus on the national level, and only on the executive branch. However, local government is closer to citizens, urban areas bring together many of the key ingredients for successful open data initiatives, and transparency in other branches of government is important to securing citizens’ democratic rights.

(14) Flexibility is needed in applying definitions of open data, to allow locally relevant and effective open data debates and advocacy to emerge. Open data is made up of various elements, including proactive publication, machine-readability and permissions to re-use. Countries at different stages of open data development may choose to focus on one or more of these, recognising that insisting on all elements at once could hinder progress. It is important to find ways both to define open data clearly and to avoid a reductive debate that does not recognise progressive steps towards greater openness.

(15) There are many different models for an open data initiative: including top-down, bottom-up and sector-specific. Initiatives may also be state-led, civil society-led and entrepreneur-led in their goals and how they are implemented – with consequences for the resources and models required to make them sustainable. There is no one-size-fits-all approach to open data. More experimentation, evaluation and shared learning on the components, partners and processes for putting open data ideas into practice must be a priority for all who want to see a world where open-by-default data drives real social, political and economic change.

You can read more about each of these points in the full report.

New Paper – Mixed incentives: Adopting ICT innovations for transparency, accountability, and anti-corruption


[Summary: critical questions to ask when planning, funding or working on ICTs for transparency and accountability]

Last year I posted some drafts of a paper I’ve been writing with Silvana Fumega at the invitation of the U4 Anti-Corruption Center, looking at the incentives for, and dynamics of, adoption of ICTs as anti-corruption tools. Last week the final paper was published in the U4 Issue series, and you can find it for download here.

In the final iteration of the paper we have sought to capture the core of the analysis in the form of a series of critical questions that funders, planners and implementers of anti-corruption ICTs can ask. These are included in the executive summary below, and elaborated more in the full paper.

Adopting ICT innovations for transparency, accountability, and anti-corruption – Executive Summary

Initiatives facilitated by information and communication technology (ICT) are playing an increasingly central role in discourses of transparency, accountability, and anti-corruption. Both advocacy and funding are being mobilised to encourage governments to adopt new technologies aimed at combating corruption. Advocates and funders need to ask critical questions about how innovations from one setting might be transferred to another, assessing how ICTs affect the flow of information, how incentives for their adoption shape implementation, and how citizen engagement and the local context affect the potential impacts of their use.

ICTs can be applied to anti-corruption efforts in many different ways. These technologies change the flow of information between governments and citizens, as well as between different actors within governments and within civil society. E?government ICTs often seek to address corruption by automating processes and restricting discretion of officials. However, many contemporary uses of ICTs place more emphasis on the concept of transparency as a key mechanism to address corruption. Here, a distinction can be made between technologies that support “upward transparency,” where the state gains greater ability to observe and hear from its citizens, or higher-up actors in the state gain greater ability to observe their subordinates, and “downward transparency,” in which “the ‘ruled’ can observe the conduct, behaviour, and/or ‘results’ of their ‘rulers’” (Heald 2006). Streamlined systems that citizens can use to report issues to government fall into the former category, while transparency portals and open data portals are examples of the latter. Transparency alone can only be a starting point for addressing corruption, however: change requires individuals, groups, and institutions who can access and respond to the information.

ICTs can be applied to anti-corruption efforts in many different ways. These technologies change the flow of information between governments and citizens, as well as between different actors within governments and within civil society. E-government ICTs often seek to address corruption by automating processes and restricting discretion of officials. However, many contemporary uses of ICTs place more emphasis on the concept of transparency as a key mechanism to address corruption. Here, a distinction can be made between technologies that support “upward transparency,” where the state gains greater ability to observe and hear from its citizens, or higher-up actors in the state gain greater ability to observe their subordinates, and “downward transparency,” in which “the ‘ruled’ can observe the conduct, behaviour, and/or ‘results’ of their ‘rulers’” (Heald 2006). Streamlined systems that citizens can use to report issues to government fall into the former category, while transparency portals and open data portals are examples of the latter. Transparency alone can only be a starting point for addressing corruption, however: change requires individuals, groups, and institutions who can access and respond to the information.

In any particular application of technology with anti-corruption potential, it is important to ask:

  • What is the direction of the information flow: from whom and to whom?
  • Who controls the flow of information, and at what stages?
  • Who needs to act on the information in order to address corruption?

Different incentives can drive government adoption of ICTs. The current wave of interest in ICT for anti-corruption is relatively new, and limited evidence exists to quantify the benefits that particular technologies can bring in a given context. However, this has not dampened enthusiasm for the idea that governments, particularly developing country governments, can adopt new technologies as part of open government and anti-corruption efforts. Many technologies are “sold” on the basis of multiple promised benefits, and governments respond to a range of different incentives. For example, governments may use ICTs to:

  • Improve information flow and government efficiency, creating more responsive public institutions and supporting coordination.
  • Provide open access to data to enable innovation and economic growth, responding to claims about the economic value of open data and its role as a resource for private enterprise.
  • Address principal-agent problems, allowing progressive and reformist actors within the state to better manage and regulate other parts of the state by detecting and addressing corruption through upward and downward transparency.
  • Respond to international pressure, following the trends in global conversations and pressure from donors and businesses, as well as the availability of funding for pilots and projects.
  • Respond to bottom-up pressure, both from established civil society and from an emerging global network of technology-focussed civil society actors. Governments may do this either as genuine engagement or to “domesticate” what might otherwise be seen as disruptive innovations.

In supporting ICTs for anti-corruption, advocates and donors should consider several key questions related to incentives:

  • What are the stated motivations of government for engaging with this ICT?
  • What other incentives and motivations may be underlying interest in this ICT?
  • Which incentives are strongest? Are any of the incentives in conflict?
  • Which incentives are important to securing anti-corruption outcomes from this ICT?
  • Who may be motivated to oppose or inhibit the anti-corruption applications of this ICT?

The impact of ICTs for anti-corruption is shaped by citizen engagement in a local context. Whether aimed at upward or downward transparency, the successful anti-corruption application of an ICT relies upon citizen engagement. Many factors affect which citizens can engage through technology to share reports with government or act upon information provided by government. ICTs that worked in one context might not achieve the same results in a different setting (McGee and Gaventa 2010). The following questions draw attention to key aspects of context:

  • Who has access to the relevant technologies? What barriers of connectivity, literacy, language, or culture might prevent a certain part of the population from engaging with an ICT innovation?
  • What alternative channels (SMS, offline outreach) might be required to increase the reach of this innovation?
  • How will the initiative close the feedback loop? Will citizens see visible outcomes over the short or long term that build rather than undermine trust?
  • Who are the potential intermediary groups and centralised users for ICTs that provide upward or downward transparency? Are both technical and social intermediaries present? Are they able to work together?

Towards sustainable and effective anti-corruption use of ICTs. As Strand (2010) argues, “While ICT is not a magic bullet when it comes to ensuring greater transparency and less corruption . . . it has a significant role to play as a tool in a number of important areas.” Although taking advantage of the multiple potential benefits of open data, transparency portals, or digitised communication with government can make it easier to start a project, funders and advocates should consider the incentives for ICT adoption and their likely impact on how the technology will be applied in practice. Each of the questions above is important to understanding the role a particular technology might play and the factors that affect how it is implemented and utilised in a particular country.


You can read the full paper here.