Category Archives: Open Government

A workshop on open data for anti-corruption

Last autumn the International Open Data Charter was launched, putting forward six key principles for governments to adopt in pursuit of an ‘open by default’ approach to key data.

However, for the Charter to have the greatest impact, more than just high-level principles are needed. As the International Open Data Conference explored last year, we need to focus on the application of open data to particular sectors to secure the greatest impact. That’s why a stream of work has been emerging to develop ‘Sector Packages’ as companion resources to the International Open Data Charter.

The first of these is focussing on anti-corruption. I’ve been supporting the Technical Working Group of the Charter to sketch a possible outline for the package in this consultation document, which was shared at the G20 meeting last year.

To build on that we’ve just launched a call for a consultant to act as co-ordinating author for the package (closing date 28th Jan – please do share!), and a few weeks back I had the chance to drop into a mini-workshop at DFID to share an update on the Charter, and talk with staff from across the organisation about potential areas that the anti-corruption package should focus on. 

Slides from the talk are below, and I’ve jotted down some brief notes from the discussions as well. 

Datasets of interest

In the session we posed the question: “What one dataset would you like to see countries publish as open data to address corruption?”

The answers highlight a range of key areas for exploration as the anti-corruption sector package is developed further. 

1) Repository of registered NGOs and their downstream partners – including details of their bank accounts, board, constitution and rules etc.

This kind of data is clearly useful to a donor wanting to understand who they are working with, or considering whether to work with potential partners. But it is also a very challenging dataset to collate and open. Firstly, many countries either lack comprehensive systems of NGO registration, or have thresholds that mean many community-level groups will be non-constituted community associations rather than formally registered organisations. Secondly, there can be risks associated with NGO registration, particularly in countries with shrinking civil society space, and where lists of organisations could be used to increase political control or restrictions on NGO activity. 

Working these issues through will require thought about where to draw the lines between open and shared data, and how organisations can pool their self-collected intelligence about partner organisations, whilst avoiding harms, and avoiding the creation of error-prone datasets where funding isn’t approved because ‘computer says no’.

2) Data on the whole contracting chain – particularly for large infrastructure projects.

Whilst isolated pockets of data on public contracts often exist, effort is needed to join these up, giving a view of the whole contracting chain. The Open Contracting Data Standard has been developing the technical foundations for this to happen, and work is now beginning to explore how it might be used to track the implementation of infrastructure projects. In the UK, civil society are calling for the next Open Government National Action Plan to include a commitment to model contract clauses that encourage contractors to disclose key information on subcontracting arrangements, implementation milestones and the company’s beneficial owners.

3) Identifying organisations and the people involved

The challenge of identifying the organisations who are counterparty to a funding transaction or a contract is not limited to NGOs. Identifying government agencies, departments, and the key actors within them, is also important. 

Identifying government entities is a challenge the International Aid Transparency Initiative has been grappling with for a few years now. Could the Open Data Charter process finally move forward some agreement on the core data infrastructure describing the state that is needed as a foundation for accountability and anti-corruption open data action?

4) Beneficial ownership

Beneficial ownership data reveals who is ultimately in control of, and reaping the profits from, a company. The UK is due to publish an open beneficial ownership register for the first time later this year – but there is still much to do to develop common standards for joined-up data on beneficial ownership. For example, the UK register will capture ownership information in bands at 25%, 50% and 75%, where other countries are exploring either publication of detailed ownership percentages, or publication using other, non-overlapping bands. Without co-ordination on interoperability, the potential impacts of beneficial ownership open data may be much harder to secure.

5) Localised datasets and public expenditure tracking data

In thinking about the ‘national datasets’ that governments could publish as part of a sector package for anti-corruption, it is also important not to lose sight of data being generated and shared at the local level. There are lots of lessons to learn from existing work on Public Expenditure Tracking, which traces the disbursement of funds from national budgets, through layers of administration, down to local services like schools. With funding flows displayed on posters on the side of school buildings, there is a clearer answer to the question: “What does this mean to me?”, and data is more clearly connected with local citizen empowerment.

Where next

Look out for updates about the anti-corruption sector package on the Open Data Charter website over the first part of 2016.

Following the money: preliminary remarks on IATI Traceability

[Summary: Exploring the social and technical dynamics of aid traceability: let’s learn what we can from distributed ledgers, without thinking that all the solutions are to be found in the blockchain.]

My colleagues at Open Data Services are working at the moment on a project for UN Habitat around traceability of aid flows. With an increasing number of organisations publishing data using the International Aid Transparency Initiative data standard, and increasing amounts of government contracting and spending data available online, the theory is that it should be possible to track funding flows.

In this blog post I’ll try and think aloud about some of the opportunities and challenges for traceability.

Why follow funds?

I can envisage a number of hypothetical use cases for traceability of aid.

Firstly, donors want to be able to understand where their money has gone. This is important for at least three reasons:

  1. Effectiveness & impact: knowing which projects and programmes have been the most effective;
  2. Understanding and communication: being able to see more information about the projects funded, and to present information on projects and their impacts to the public to build support for development;
  3. Addressing fraud and corruption: identifying leakage and mis-use of funds.

Traceability is important because the relationship between donor and delivery is often indirect. A grant may pass through a number of intermediary organisations before it reaches the ultimate beneficiaries. For example, a country donor may fund a multilateral fund, which in turn commissions an international organisation to deliver a programme, and they in turn contract with country partners, who in turn buy in provision from local providers.

Secondly, communities where projects are funded, or where funds should have been received, may want to trace funding upwards: understanding the actors and policy agendas affecting their communities, and identifying when funds they are entitled to have not arrived (see the investigative work of Follow The Money Nigeria for a good example of this latter use case).

Short-circuiting social systems

It is important to consider the ways in which work on the traceability of funds potentially bypasses, ‘routes around’ or disrupts* (*choose your own framing) existing funding and reporting relationships – allowing donors or communities to reach beyond intermediaries to exert such authority and power over outcomes as they can exercise.

Take the example given above. We can represent the funding flows in a diagram as below:

[Figure: funding flows downwards]

But there are more than one-way flows going on here. Most of the parties involved will have some sort of reporting responsibility to those giving them funds, and so we also have reporting flowing upwards:

[Figure: reporting flows upwards]

By the time reporting gets to the donor, it is unlikely to include much detail on the work of the local partners or providers (indeed the multilateral, for example, may not report specifically on this project, just on its development co-operation in general). The INGO may even have very limited information about what happens just a few steps down the chain on the ground, having to trust intermediary reports.

In cases where there isn’t complete trust in this network of reporting, and clear mechanisms to ensure each party is exercising its responsibility to ensure the most effective, and corruption-free, use of resources by the next party down, being able to see through this chain, tracing funds and having a direct ability to assess impacts and risks, is clearly desirable.

Yet – it also needs to be approached carefully. Each of the relationships in this funding chain is about more than just passing on some clearly defined packet of money. Each party may bring specific contextual knowledge, skills and experience. Enabling those at the top of a funding chain to leap over intermediaries doesn’t inevitably have a positive impact: particularly given what the history of development co-operation has to teach about how power dynamics and the imposition of top-down solutions can lead to substantial harms.

None of this is a case against traceability – but it is a call for consideration of the social dynamics of traceability infrastructures, and of how to ensure contextual knowledge is kept accessible when it becomes possible to traverse the links of a funding chain.

The co-ordination challenge of traceability

Right now, the IATI data standard has support for traceability at the project and transaction level.

  • At the project level the related-activity field can be used to indicate parent, child and co-funded activities.
  • At the transaction level, data on incoming funds can specify the activity-id used by the upstream organisation to identify the project the funds come from, and data on outgoing funds can specify the activity-id used by the downstream organisation.

This supports both upwards and downwards linking (e.g. a funder can publish the identifier of the funded project, or a recipient can publish the identifier of the donor project that is providing funds), but is based on explicit co-ordination and the capture of additional data.
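To make these two mechanisms concrete, the sketch below shows how such links might be extracted from IATI XML. The element and attribute names follow the IATI activity standard, but the sample activity and all of its identifiers are invented for illustration.

```python
# Sketch: extracting traceability links from IATI XML.
# Element and attribute names follow the IATI activity standard;
# the sample data and identifiers are invented for illustration.
import xml.etree.ElementTree as ET

SAMPLE = """<iati-activities version="2.01">
  <iati-activity>
    <iati-identifier>GB-EXAMPLE-123</iati-identifier>
    <related-activity ref="XM-DAC-41120-PARENT-1" type="1"/>
    <transaction>
      <transaction-type code="1"/>
      <provider-org ref="XM-DAC-41120" provider-activity-id="XM-DAC-41120-PARENT-1"/>
    </transaction>
    <transaction>
      <transaction-type code="3"/>
      <receiver-org ref="GB-COH-5381958" receiver-activity-id="GB-COH-5381958-PROJ-7"/>
    </transaction>
  </iati-activity>
</iati-activities>"""

root = ET.fromstring(SAMPLE)
for activity in root.findall("iati-activity"):
    own_id = activity.findtext("iati-identifier")
    # Project-level links (type 1 = parent, 2 = child, 4 = co-funded).
    for rel in activity.findall("related-activity"):
        print(own_id, "is related to", rel.get("ref"), "- type", rel.get("type"))
    # Transaction-level links: upstream and downstream activity identifiers.
    for tx in activity.findall("transaction"):
        provider = tx.find("provider-org")
        if provider is not None and provider.get("provider-activity-id"):
            print(own_id, "received funds from activity", provider.get("provider-activity-id"))
        receiver = tx.find("receiver-org")
        if receiver is not None and receiver.get("receiver-activity-id"):
            print(own_id, "disbursed funds to activity", receiver.get("receiver-activity-id"))
```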

Because IATI takes a distributed approach to the publication of open data, there are no consistency checks to ensure that providers and recipients agree on identifiers, and often there can be practical challenges in capturing this data, not least that:

  • A) Many of the accounting systems in which transaction data is captured have no fields for upstream or downstream project identifiers, nor any way of conceptually linking transactions to these externally defined projects;
  • B) Some parties in the funding chain may not publish IATI data, or may do so in forms that do not support traceability, breaking the chain;
  • C) The identifier of a downstream project may not have been created at the time an upstream project assigns funds – and exchanging identifiers can create a substantial administrative burden.

At the last IATI TAG meeting in Ottawa, this led to some discussion of other technologies that might be explored to address issues of traceability.

Technical utopias and practical traceability

Let’s start with a number of assorted observations:

  • UPS can track a package right around the world, giving me regular updates on where it is. The package has a barcode on, and is being transferred by a single company.
  • I can make a faster-payments bank transfer in the UK with a reference number that appears in both my bank statements and the recipient’s statements, travelling between banks in seconds. Banks leverage their trust, and use centralised third-party providers as part of data exchange and reconciling funding transfers.
  • When making some international transfers, the money has effectively disappeared from view for quite a while, with lots of time spent on the phone to sender, recipient and intermediary banks to track down the funds. Trust, digital systems and reconciliation services function less well across international borders.
  • Transactions on the BitCoin blockchain are, to some extent, traceable. BitCoin is a distributed system. (Given any BitCoin ‘address’ it’s possible to go back into the public ledger and see which addresses have transferred an amount of bitcoins there, and to follow the chain onwards. If you can match an address to an identity, the currency, far from being anonymous, is fairly transparent. This is the reason for BitCoin mixer services, designed to remove the trackability of coins.)
  • There are reported experiments with using blockchain technologies in a range of different settings, including for land registries.
  • There’s a lot of investment going into FinTech right now – exploring ways to update financial services.

All of this can lead to some excitement about the potential of new technologies to render funding flows traceable. If we can trace parcels and BitCoins, the argument goes, why can’t we have traceability of public funds and development assistance?

Although I think such an argument falls down in a number of key areas (which I’ll get to in a moment), it does point towards a key component missing from the current aid transparency landscape – in the form of a shared ledger.

One of the reasons IATI is based on a distributed data publishing model, without any internal consistency checks between publishers, is prior experience in the sector of submitting data to centralised aid databases. However, peer-to-peer and blockchain-like technologies now offer a way to separate out co-ordination, and the creation of consensus on the state of the world, from the centralisation of data in a single database.

It is at least theoretically possible to imagine a world in which the data a government publishes about its transactions is only considered part of the story, and in which the recipient needs to confirm receipt in a public ledger to complete the transactional record. Transactions ultimately have two parts (sending and receipt), and open (distributed) ledger systems could offer the ability to layer an auditable record on top of the actual transfer of funds.

However (as I said, there are some serious limitations here), such a system is only an account of the funding flows, not the flows themselves (unlike BitCoin), which still leaves space for corruption through maintaining false information in the ledger. Although trusted financial intermediaries (banks and others) could be brought into the picture as parties responsible for confirming transactions, it’s hard to envisage how adoption of such a system could be brought about over the short and medium term (particularly globally). Secondly, although transactions between organisations might be made more visible and traceable in this way, the transactions inside an organisation remain opaque. Working out which funds relate to which internal and external projects is still a matter of the internal business processes in organisations involved in the aid delivery chain.

There may be other traceability systems we should be exploring as inspirations for aid and public money traceability. What my brief look at BitCoin leads me to reflect on is the potential role, over the short term, of reconciliation services that can, at the very least, report on the extent to which different IATI publishers are mutually confirming each other’s information. Over the long term, a move towards more real-time transparency infrastructures, rather than periodic data publication, might open up new opportunities – although with all sorts of associated challenges.
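To illustrate what the very simplest reconciliation check might look like: take the outgoing transactions one publisher reports, the incoming transactions a recipient reports, and flag anything unconfirmed. The data structures below are invented simplifications; real IATI data would need far more tolerant matching on dates, values and currencies.

```python
# Sketch of a naive mutual-confirmation check between two publishers.
# Transactions are reduced to invented dictionaries; real matching would
# need tolerance for dates, currency conversion and rounding.
funder_outgoing = [
    {"from_activity": "AA-1-PROG", "to_activity": "BB-2-PROJ", "value": 100000},
    {"from_activity": "AA-1-PROG", "to_activity": "CC-3-PROJ", "value": 50000},
]
recipient_incoming = [
    {"from_activity": "AA-1-PROG", "to_activity": "BB-2-PROJ", "value": 100000},
]

def key(tx):
    # A transaction is 'confirmed' if both sides report the same triple.
    return (tx["from_activity"], tx["to_activity"], tx["value"])

confirmed = {key(t) for t in funder_outgoing} & {key(t) for t in recipient_incoming}
print(f"{len(confirmed)} of {len(funder_outgoing)} outgoing transactions mutually confirmed")
for t in funder_outgoing:
    if key(t) not in confirmed:
        print("No confirmation of receipt for:", t)
```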

Ultimately – creating traceable aid still requires labour to generate shared conceptual understandings of how particular transactions and projects relate.

How much is enough?

Let’s loop back round. In this post (as in many of the conversations I’ve had about traceability), we started with some use cases for traceability; we saw some of the challenges; we got briefly excited about what new technologies could do to provide traceability; and we saw the opportunities, but also the many limitations. Where do we end up then?

What is important, I think, is to loop back to our use cases, and to consider how technology can help with, but not completely solve, the problems set out. Knowing which provider organisations might have been funded through a particular donor’s money could be enough to help target investigations in cases of fraud. Or knowing all the funders who have a stake in projects in a particular country, sector and locality can be enough for communities on the ground to do further research to identify the funders they need to talk to.

Rather than searching after a traceability data panopticon, can we focus traceability-enabling practices on breaking down the barriers to specific investigatory processes?

Ultimately, in the IATI case, getting traceability to work at the project level alone could be a big boost. But doing this will require a lot of social coordination, as much as technical innovation. As we think about tools for traceability, thinking about tools that support this social process may be an important area to focus on.

Where next

Steven Flower and the rest of the Open Data Services team will be working over the coming weeks on a deeper investigation of traceability issues – with the goal of producing a report and toolkit later this year. They’ve already been digging into IATI data to look for the links that exist so far, and building on past work testing the concept of traceability against real data.

Drop in comments below, or drop Steven a line, if you have ideas to share.

Three cross-cutting issues that UK data sharing proposals should address

[Summary: an extended discussion of issues arising from today’s open policymaking discussions on UK data sharing]

I spend a lot of time thinking and writing about open data. But, as has often been said, not all of the data that government holds should be published as open data.

Certain registers and datasets managed by the state may contain, or be used to reveal, personally identifying and private information – justifying strong restrictions on how they are accessed and used. Many of the datasets governments collect, from tax records to detailed survey data collected for policy making and monitoring, fall into this category. However, the principle that data collected for one purpose might have a legitimate use in another context still applies to this data: one government department may be able to pursue its public task with data from another, and there are cases where public benefit is to be found from sharing data with academic and private sector researchers and innovators.

However, in the UK, the picture of which departments, agencies and levels of government can share which data with others (or outside of the state) is complex to say the least. When it comes to sharing personally identifying datasets, agencies need to rely on specific ‘legal gateways’, with certain major data holders such as HM Revenue and Customs bound by restrictive rules that may require explicit legislation to pass through parliament before specific data shares are permitted.

That’s ostensibly why the UK Government has been working for a number of years now on bringing forward new data sharing proposals – creating ‘permissive powers’ for cross-departmental and cross-agency data sharing, increasing the ease of data flows between national and local government, whilst increasing the clarity of safeguards against data mis-use. Up until just before the last election, an Open Policy Making process, modelled broadly on the UK Open Government Partnership process, was taking place – resulting in a refined set of potential proposals relating to identifiable data sharing, data sharing for fraud reduction, and use of data for targeted public services. Today that process was re-started, with a view to a public consultation on updated proposals in the coming months.

However, although much progress has been made in refining proposals based on private sector and civil society feedback, from the range of specific and somewhat disjointed proposals presented for new arrangements in today’s workshop, it appears the process is a way off from providing the kinds of clarification of the current regime that might be desirable. Missing from today’s discussions were clear cross-cutting mechanisms to build trust in government data sharing, and establish the kind of secure data infrastructures that are needed for handling personal data sharing.

I want to suggest three areas that need to be more clearly addressed – all of which were raised in the 2014/15 Open Policymaking process, but which have been somewhat lost in the latest iterations of discussion.

1. Maximising impact, minimising the data shared

One of the most compelling cases for data sharing presented in today’s workshop was work to address fuel poverty by automatically giving low-income pensioners rebates on their fuel bills. Discussions suggested that since the automatic rebate was introduced, 50% more eligible recipients are getting the rebates – with the biggest beneficiaries being the most vulnerable, who were far less likely to apply for the rebates they were entitled to. With every degree drop in the temperature of a pensioner’s home correlating with increased hospital admissions, the argument is clear for allowing the data share, and indeed for establishing a framework for the current arrangements to be extended to others in fuel poverty (the current powers are in some ways specific to pensioners’ data).

However, this case is also one where the impact is accompanied by a process that results in minimal data actually being shared from government to the private companies who apply the rebates to individuals’ energy bills. All that is shared, in response to energy companies’ queries about each candidate on their customer list, is a flag for whether the individual is eligible for the rebate or not.

This kind of approach does not require the sharing of a bulk dataset of personally identifying information – it requires a transactional service that can provide the minimum certification required to indicate, with some reasonable level of confidence, that an individual has some relevant credentials. The idea of privacy protecting identity services which operate in this way is not new – yet the framing of the current data sharing discussion has tended to focus on ‘sharing datasets’ instead of constructing processes and technical systems which can be well governed, and still meet the vast majority of use-cases where data shares may be required.
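A minimal sketch of what such a transactional service might look like is below. Everything in it – the records, the eligibility rule, the function name – is hypothetical; the point is simply that the caller receives a single yes/no flag, and never the underlying data.

```python
# Sketch of a minimal-disclosure eligibility service: the data holder
# answers a yes/no question rather than sharing records in bulk.
# All records, thresholds and names here are hypothetical.

PENSION_RECORDS = {  # held by the data-owning department; never shared
    "customer-ref-001": {"age": 79, "annual_income": 8200},
    "customer-ref-002": {"age": 66, "annual_income": 31000},
}

def eligible_for_rebate(customer_ref: str) -> bool:
    """Return only a flag; the caller never sees age or income."""
    record = PENSION_RECORDS.get(customer_ref)
    if record is None:
        return False  # unknown individuals are simply not flagged
    return record["age"] >= 65 and record["annual_income"] < 10000

# An energy company checks its own customer list one reference at a time,
# learning nothing beyond the single eligibility flag (and each query can
# itself be logged for audit):
for ref in ["customer-ref-001", "customer-ref-002"]:
    print(ref, "rebate flag:", eligible_for_rebate(ref))
```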

For example, when the General Register Office representative today posed the question of “In what circumstances would it be appropriate to share civil registration data (e.g. birth, adoption, marriage and death) information?”, the use-cases that surfaced were all to do with verification of identity: something that could be achieved much more safely by providing a digital service than by handing over datasets in bulk.

Indeed, approached as a question of systems design, rather than data sharing, the fight against fraud may in practice be better served by allowing citizens to digitally access their own civil registration information and to submit that as evidence in their transactions with government, helping narrow the number of cases where fraud may be occurring – and focussing investigative efforts more tightly, instead of chasing after problematic big data analysis approaches.

(Aside #1: As one participant in today’s workshop insightfully noted, there are thousands of valid marriages in the UK which are not civil marriages and so may not be present in Civil Registers. A big data approach that seeks to match records of who is married to records of households who have declared they are married, to identify fraudulent claims, is likely to flag these households wrongly, creating new forms of discrimination. By contrast, an approach that helps individuals submit their evidence to government allows such ‘edge cases’ to be factored in – recognising that many ‘facts’ about citizens are not easily reduced to simple database fields, and that giving account of oneself to the state is a performative act which should not be too readily sidelined.)

(Aside #2: The case of civil registers also illustrates an interesting and significant qualitative difference between public records, and a bulk public dataset. Births, marriages and deaths are all ‘public events’: there is no right to keep them private, and they have long been recorded in registers which are open to inspection. However, when the model of access to these registers switches from the focussed inspection, looking for a particular individual, to bulk access, they become possible to use in new ways – for example, creating a ‘primary key’ of individuals to which other data can be attached, eroding privacy in ways which were not possible when each record needed to be explored individually. The balance of benefits and harms from this qualitative change will vary from dataset to dataset. For example, I would strongly advocate the open sharing of company registers, including details of beneficial owners, both because of the public benefit of this data, and because registering a company is a public act involving a certain social contract. By contrast, I would be more cautious about the full disclosure of all civil registers, due to the different nature of the social contract involved, and the greater risk of vulnerable individuals being targeted through intentional or unintentional misuse of the data.)

All of which is a long way to say:

  • Where the cross-agency or cross-departmental use-cases for access to a particular dataset can be reduced to sharing assertions about individuals, rather than bulk datasets, this route should be explored first.

This does not remove the need for governance of both access and data use. However, it does ease the governance of access, and audit logs of access to a service are easier to manage than audit logs of what users in possession of a dataset have done.

Even the sharing of a ‘flag’ that can be applied to an individual’s data record needs careful thought: and those in receipt of such flags need to ensure they govern the use of that data carefully. For example, as one participant today noted, pensioners have raised fears that energy companies may use a ‘fuel poverty’ flag in their records to target them with advertising. Ensuring that later analysts in the company do not stumble upon the rebate figures in invoices, and feed this into profiling of customers, for example, will require very careful data governance – and it is not clear that companies’ practices are robust enough to protect against this right now.

2. Algorithmic transparency

Last year the Detroit Digital Justice Coalition produced a great little zine called ‘Opening Data’ which takes a practical look at some of the opportunities and challenges of open data use. They look at how data is used to profile communities, and how the classifications and clustering approaches applied to data can create categories that may be skewed and biased against particular groups, or that reinforce rather than challenge social divides (see pg 30 onwards). The same issues apply to data sharing.

Whilst current data protection legislation gives citizens a right to access and correct information about themselves, the algorithms used to process that data, and derive analysis from it are rarely shared or open to adequate scrutiny.

In the process of establishing new frameworks for data sharing, the algorithms used to process that data should be brought into view as much as the datasets themselves.

If, for example, someone is offered a targeted public service, or targeted in a fraud investigation, there is a question to be explored of whether they should be told which datasets, and which algorithms, led to them being selected. This, and associated transparency, could help to surface otherwise unseen biases which might lead to particular groups being unfairly targeted (or missed) by analysis. Transparency is no panacea, but it plays an important role as a safeguard.

3. Systematic transparency of sharing arrangements

On the theme of transparency, many of the proposals discussed today mentioned oversight groups, Privacy Impact Assessments, and publication of information on either those in receipt of shared data, or those refused access to datasets – yet across the piece no systematic framework for this was put forward.

This is an issue Reuben Binns and I wrote about in 2014, putting forward a proposal for a common standard for disclosure of data sharing arrangements that, in its strongest form, would require the following (sketched as a simple data record below):

  • Structured data on origin, destination, purpose, legal framework and timescales for sharing;
  • Publication of Privacy Impact Assessments and other associated documents;
  • Notices published through a common venue (such as the Gazette) in a timely fashion;
  • Consultation windows where relevant before a power comes into force;
  • Sharing to only be legally valid when the notice has been published.
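To make the first of those elements concrete, a disclosure notice could be captured in a structure as simple as the sketch below; the field names are illustrative only, and not drawn from any agreed schema.

```python
# Hypothetical structure for a data-sharing disclosure notice.
# Field names are illustrative; no agreed schema exists.
from dataclasses import dataclass, field

@dataclass
class DataSharingNotice:
    origin: str                 # department or agency sharing the data
    destination: str            # recipient of the shared data
    purpose: str                # stated purpose of the share
    legal_framework: str        # the legal gateway relied upon
    start_date: str             # ISO 8601 dates for the sharing window
    end_date: str
    documents: list = field(default_factory=list)  # e.g. Privacy Impact Assessments

notice = DataSharingNotice(
    origin="Department A",
    destination="Agency B",
    purpose="Fraud reduction pilot",
    legal_framework="Hypothetical Data Sharing Act, s.1",
    start_date="2016-04-01",
    end_date="2017-03-31",
    documents=["https://example.org/pia/notice-001.pdf"],
)
print(notice)
```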

Without such a framework, we are likely to continue with the current confused system, in which no-one knows which shares are in place, how they are being used, and which legal gateways are functioning well or not. With a scattered set of spreadsheets and web pages listing approved sharing, citizens have no hope of understanding how their data is being used.

If only one of the above issues could be addressed in the upcoming consultation on data sharing, then I certainly hope progress is made on this missing piece: a robust common framework through which the transparency principles of data sharing can be put into practice.

Towards a well governed infrastructure?

Ultimately, the discussion of data sharing is a discussion about one aspect of our national data infrastructure. There has been a lot of smart work going on, both inside and outside government, on issues such as identity assurance, differential privacy, and identifying core derived datasets which should be available as open data to bypass need for sharing gateways. A truly effective data sharing agenda needs to link with these to ensure it is neither creating over-broad powers which are open to abuse, nor establishing a new web of complex and hard to operate gateways.

Further reading

My thinking on these issues has been shaped in part by inputs from the following:

Data & Discrimination – Collected Essays

White House Report on Big Data, and associated papers/notes from The Social, Cultural & Ethical Dimensions of “Big Data.” conference

Unpacking open data: power, politics and the influence of infrastructures

[Summary: recording of Berkman Centre Lunch Talk on open data]

Somewhat belatedly, below you will find the video from the Berkman Centre talk I gave late last year on ‘Unpacking open data: power, politics and the influence of infrastructures’.

You can find a live-blog of the talk from Matt Stempeck and Erhardt Graff over on the MIT Media Lab blog, and Willow Brugh drew the fantastic visual record of themes in the talk shown below:

[Visual record: Unpacking open data]

The slides are also up on Slideshare here.

I’m now in the midst of trying to make more sense of the themes in this talk whilst in the writing up stage for my PhD… and much of the feedback I had from the talk has been incredibly valuable in that – so comments are always welcome.

Do we need eligibility criteria for private sector involvement in OGP?

I’ve been in Costa Rica for the Open Government Partnership (OGP) Latin America Regional Meeting (where we were launching the Open Contracting Data Standard), and on Tuesday attended a session around private sector involvement in the OGP.

The OGP was always envisaged as a ‘multi-stakeholder forum’ – not only for civil society and governments, but also to include the private sector. But, as Martin Tisne noted in opening the session, private sector involvement has so far been limited – although an OGP Private Sector Council is currently developing.

In his remarks (building on notes from 2013), Martin outlined six different roles for the private sector in open government, including:

  1. Firms as mediators of open government data – making governance-related public data more accessible;
  2. Firms as beneficiaries and users of open data – building businesses off data releases, and fostering demand for, and sustainable supply of, open data;
  3. Firms as anti-corruption advocates – particularly rating agencies, whose judgements on the risk of investing in a country with a poor governance environment can strongly incentivise governments to institute reforms;
  4. Firms practising corporate accountability – including by being transparent about their own activities;
  5. Technology firms providing platforms for citizen-state interaction – from large platforms like Facebook which have played a role in democracy movements, to specifically civic private-sector provided platforms like change.org or SeeClickFix;
  6. Companies providing technical assistance and advice to governments on their OGP action plans.

The discussion panel then went on to look at a number of examples of private sector involvement in open government, ranging from Chambers of Commerce acting as advocates for anti-corruption and governance reforms, to large firms like IBM providing software and staff time to efforts to meet the challenge of Ebola through data-driven projects. A clear theme in the discussion was the need to recognise that, like government and civil society, the private sector is not monolithic. Indeed, I have to remember that I’ve participated in the UK OGP process as a result of being able to subsidise my time via Practical Participation Ltd.

Reflecting on public and private interests

Regardless of the positive contributions and points made by all the panelists in the session, I do find myself approaching the general concept of private sector engagement with OGP with a constructive scepticism, and one that I hope supports wider reflections about the role and accountability of all stakeholders in the process. Many of these reflections are driven by a concern about the relative power of different stakeholders in these processes, and the fact that, in a world where the state is often in retreat, civil society spread increasingly thin, and wealth accumulated in vastly uneven ways, ensuring a fair process of multi-stakeholder dialogue requires careful institutional design. In light of the uneven flow of resources in our world, these reflections also draw on an important distinction between public and private interest.

Whilst there are institutional mechanisms in place (albeit flawed in many cases) that mean both government and non-profits should operate in the public interest, the essential logic of the private sector is to act in private interest. Of course, the extent of this logic varies by type of firm, but large multi-nationals have legal obligations to their shareholders which can, at least when shareholders are focussed on short-term returns, create direct tensions with responsible corporate behaviour. This is relevant for OGP in at least two ways:

Firstly, when private firms are active contributors to open government activities, whether mediating public data, providing humanitarian interventions, offering platforms for citizen interaction, or providing technical assistance, mechanisms are needed in a public interest forum such as the OGP to ensure that such private sector interventions provide a net gain to the public good.

Take for example a private firm that offers hardware or software to a government for free, to support it in implementing an open government project. If the project has a reasonable chance of success, this can be a positive contribution to the public good. However, if the motivation for the project comes from a private rather than a public interest, and leads to a government being locked into future use of a proprietary software platform, or to an ongoing relationship with the company, who have gained special access as a result of their ‘CSR’ support for the open government project – then it is possible for the net result to be against the public interest.

It should be possible to establish governance mechanisms that address these concerns, and allow the genuine public interest, and win-win contributions of the private sector to open government and development to be facilitated, whilst establishing checks against abuse of the power imbalance, whether due to relative wealth, scale or technical know-how, that can exist between firms and states.

Secondly, corporate contributions to aspects of the OGP agenda should not distract from a focus on key issues of large-scale corporate behaviour that undermine the capacity and effectiveness of governments, such as the use of complex tax avoidance schemes, or the exploitation of workforces and suppression of wages such that citizens have little time or energy left after achieving the essentials of daily living to give to civic engagement.

A proposal

In Tuesday’s session these reflections led me towards thinking about whether the Open Government Partnership should have some form of eligibility criteria for corporate participants, as a partial parallel to those that exist for states. To keep this practical and relevant, the criteria could relate to the existence of key disclosures by the firm across all the settings it operates in: such as disclosure of the amount of tax paid, the beneficial owners of the firm, and the amount of funding the firm is putting towards engagement in the OGP process.

Such requirements need not necessarily operate in an entirely gatekeeping fashion (i.e. it should not be that participants cannot engage at all without such disclosures), but could be instituted initially as a recommended transparency practice, creating space for social pressures to encourage compliance, and giving extra information to those considering the legitimacy of, and weight to give to, the contributions of corporate participants within the OGP process.

As noted earlier, these critical reflections might also be extended to civil society participants: there can also be legitimate concerns about the interests being represented through the work of CSOs. The Who Funds You campaign is a useful point of reference here: CSO participants could be encouraged to disclose information on who is funding their work, and again, how much resource they are dedicating to OGP work.

Conclusions

This post provides some initial reflections as a discussion starter. The purpose is not to argue against private sector involvement in OGP – but, in engaging proactively with a multi-stakeholder model, to raise the need for critical thinking in the open government debate not only about the transparency and accountability of governments, but also about the transparency and accountability of the other parties engaged.

OCDS – Notes on a standard

Today sees the launch of the first release of the Open Contracting Data Standard (OCDS). The standard, as I’ve written before, brings together concrete guidance on the kinds of documents and data that are needed for increased transparency in processes of public contracting, with a technical specification describing how to represent contract data and meta-data in common ways.

The video below provides a brief overview of how it works (or you can read the briefing note), and you can find full documentation at http://standard.open-contracting.org.

When I first jotted down a few notes on how to go forward from the rapid prototype I worked on with Sarah Bird in 2012, I didn’t realise we would actually end up with the opportunity to put some of those ideas into practice. However: we did – and so in this post I wanted to reflect on some aspects of the standard we’ve arrived at, some of the learning from the process, and a few of the ideas that have guided at least my inputs into the development process.

As, hopefully, others pick up and draw upon the initial work we’ve done (in addition to the great inputs we’ve had already), I’m certain there will be much more learning to capture.

(1) Foundations for ‘open by default’

Early open data advocacy called for ‘raw data now‘, asking governments essentially to export and dump online existing datasets, with issues of structure and regular publishing processes to be sorted out later. Yet, as open data matures, the discussion is shifting to the idea of ‘open by default’, and taken seriously this means more than just openly licensing whatever data dumps are created: it should mean that data is released from government systems as a matter of course, as part of their day-to-day operation.

The full OCDS model is designed to support this kind of ‘open by default’, allowing publishers to provide small releases of data every time some event occurs in the lifetime of a contracting process. A new tender is a release. An amendment to that tender is a release. The contract being awarded, or then signed, are each releases. These data releases are tied together by a common identifier, and can be combined into a summary record, providing a snapshot view of the state of a contracting process, and a history of how it has developed over time.

This releases and records model seeks to bring together different user needs: from the firm seeking information about tender opportunities, to the civil society organisation wishing to analyse across a wide range of contracting processes. And by allowing core stages in the business process of contracting to be published as they happen, and then joined up later, it is oriented towards the development of contracting systems that default to timely openness.
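As a rough illustration of the pattern (the values below are invented, and this is a simplified sketch rather than a definitive rendering of the schema):

```python
# Sketch of the OCDS releases-and-records model: small releases emitted
# as events happen, tied together by a common ocid, and later compiled
# into a record. Values are invented; field names loosely follow OCDS.
releases = [
    {
        "ocid": "ocds-a1b2c3-000-00001",
        "id": "000-00001-tender-2014-01",
        "date": "2014-01-10T09:30:00Z",
        "tag": ["tender"],
        "tender": {"title": "Example works contract",
                   "value": {"amount": 100000, "currency": "GBP"}},
    },
    {
        "ocid": "ocds-a1b2c3-000-00001",
        "id": "000-00001-award-2014-03",
        "date": "2014-03-02T14:00:00Z",
        "tag": ["award"],
        "awards": [{"suppliers": [{"name": "Example Supplier Ltd"}],
                    "value": {"amount": 95000, "currency": "GBP"}}],
    },
]

# A (very naive) compiled record: later releases overwrite earlier fields,
# giving a snapshot of the process alongside its full release history.
record = {"ocid": releases[0]["ocid"], "releases": releases, "compiledRelease": {}}
for release in sorted(releases, key=lambda r: r["date"]):
    record["compiledRelease"].update(release)

print(record["compiledRelease"]["tag"])  # ['award'] - the latest state
```

A real OCDS record follows defined merge rules rather than this naive field-by-field overwrite, but the shape is the same: one ocid tying together a list of releases, plus a compiled snapshot.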

As I’ll be exploring in my talk at the Berkman Centre next week, the challenge ahead for open data is not just to find standards to make existing datasets line-up when they get dumped online, but is to envisage and co-design new infrastructures for everyday transparent, effective and accountable processes of government and governance.

(2) Not your minimum viable product

Different models of standard

Many open data standard projects adopt either a ‘Minimum Viable Product‘ approach, looking to capture only the few most common fields between publishers, or are developed by focussing on the concerns of a single publisher or user. Whilst MVP models may make sense for small building blocks designed to fit into other standardisation efforts, when it came to OCDS there was a clear user demand to link up data along the contracting process, and this required an overarching framework into which simple components could be placed, or from which they could be extracted, rather than the creation of ad-hoc components, with the attempt to join them up made later on.

Whilst we didn’t quite achieve the full abstract model + idiomatic serialisations proposed in the initial technical architecture sketch, we have ended up with a core schema, and then suggested ways to represent this data in both structured and flat formats. This is already proving useful, for example, in exploring how data published as part of the UK Local Government Transparency Code might be mapped to OCDS from existing CSV schemas.
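For instance, a single row of a flat CSV might be lifted into a minimal release along the following lines; the column names here are hypothetical, not those of the Transparency Code schemas themselves.

```python
# Sketch: lifting a flat CSV row into a minimal OCDS-style release.
# Column names are hypothetical; a real mapping would follow the
# published CSV schema and OCDS's guidance on flat serialisations.
import csv
import io

CSV_DATA = """ocid,tender_title,amount,currency
ocds-a1b2c3-000-00002,Grounds maintenance,25000,GBP
"""

for row in csv.DictReader(io.StringIO(CSV_DATA)):
    release = {
        "ocid": row["ocid"],
        "tag": ["tender"],
        "tender": {
            "title": row["tender_title"],
            "value": {"amount": float(row["amount"]), "currency": row["currency"]},
        },
    }
    print(release)
```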

(3) The interop balancing act & keeping flex in the framework

OCDS is, ultimately, not a small standard. It seeks to describe the whole of a contracting process, from planning, through tender, to contract award, signed contract, and project implementation. And at each stage it provides space for capturing detailed information, linking to documents, tracking milestones and tracking values and line-items.

This shape of the specification is a direct consequence of the method adopted to develop it: looking at a diverse set of existing data, and spending time exploring the data that different users wanted, as well as looking at other existing standards and data specifications.

However, OCDS by no means covers all the things that publishers might want to state about contracting, nor all the things users may want to know. Instead, it focusses on achieving interoperability of data in a number of key areas, and then providing a framework into which extensions can be linked as the needs of different sub-communities of open data users arise.

We’re only in the early stages of thinking about how extensions to the standard will work, but I suspect they will turn out to be an important aspect: allowing different groups to come together to agree (or contest) the extra elements that are important to share in a particular country, sector or context. Over time, some may move into the core of the standard, and potentially elements that appear core right now might move into the realm of extensions, each able to have their own governance processes if appropriate.

As Urs Gasser and John Palfrey note in their work on Interop, the key in building towards interoperability is not to make everything standardised and interoperable, but is to work out the ways in which things should be made compatible, and the ways in which they should not. Forcing everything into a common mould removes the diversity of the real world, yet leaving everything underspecified means no possibility to connect data up. This is both a question of the standards, and the pressures that shape how they are adopted.

(4) Avoiding identity crisis

Data describes things. To be described, those things need to be identified. When describing data on the web, it helps if those things can be unambiguously identified and distinguished from other things which might have the same names or identification numbers. This generally requires the use of globally unique identifiers (guids): some value which, in a universe of all available contracting data, for example, picks out a unique contracting process; or, in the universe of all organizations, uniquely identifies a specific organization. However, providing these identifiers can turn out to be both a politically and technically challenging process.

The Open Data Institute have recently published a report underlining how important identifiers are to processes of opening data. Yet consistent identifiers often have the key properties of public goods: everyone benefits from having them, but providing and maintaining them has some costs attached, which no individual identifier user has an incentive to cover. In some cases, such as goods and service identifiers, projects have emerged which take a proprietary approach, funding the maintenance of identifiers by selling access to the lookup lists which match the codes for describing goods and services to their descriptions. This clearly raises challenges for an open standard: when proprietary identifiers are incorporated into data, users may face extra costs to interpret and make sense of that data.

In OCDS we’ve sought to take as distributed an approach to identifiers as possible, only requiring globally unique identifiers where absolutely necessary (identifying contracts, organizations and goods and services), and deferring to existing registration agencies and identity providers, with OCDS maintaining, at most, code lists for referring to each identity ‘scheme’.

In some cases, we’ve split the ‘scheme’ out into a separate field: for example, an organization identifier consists of a scheme field with a value like ‘GB-COH’ to stand for UK Companies House, and then the identifier given in that scheme, like ‘5381958’. This approach allows people to store those identifiers in their existing systems without change (existing databases might hold national company numbers, with the field assumed to come from a particular register), whilst making explicit the scheme they come from in the OCDS. In other cases, however, we look to create new composite string identifiers, combining a prefix and some identifier drawn from an organization’s internal system. This is particularly the case for the Open Contracting ID (ocid). By doing this, the identifier can travel between systems more easily as a guid – and could even be incorporated in unstructured data as a key for locating documents and resources related to a given contracting process.
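A quick sketch of the two approaches side by side (‘GB-COH’ is the scheme code described above for UK Companies House; the helper function and values are invented):

```python
# Sketch of the two identifier approaches described above.

# (a) Split fields: the scheme is kept separate, so existing systems can
# go on storing the bare national company number.
org_identifier = {"scheme": "GB-COH", "id": "5381958"}

# (b) Composite string: a registered prefix plus an internal identifier,
# as with the Open Contracting ID (ocid). The prefix only needs to be
# hard-coded in the system that exposes data to the world.
def make_ocid(prefix: str, internal_id: str) -> str:
    return f"{prefix}-{internal_id}"

ocid = make_ocid("ocds-a1b2c3", "000-00001")
print(org_identifier, ocid)  # the ocid travels between systems as a guid
```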

However, recent learning from the project is showing that many organisations are hesitant about the introduction of new IDs, and that adoption of an identifier scheme may require as much advocacy as adoption of a standard. At a policy level, bringing some external convention for identifying things into a dataset appears to be seen as affecting the, for want of a better word, sovereignty of a specific dataset: even if in practice the prefix approach of the ocid means it only needs to be hard-coded in the systems that expose data to the world, not necessarily stored inside organizations’ databases. However, this is an area I suspect we will need to explore more, and keep tracking, as OCDS adoption moves forward.

(5) Bridging communities of practice

If you look closely you might in fact notice that the specification just launched in Costa Rica is actually labelled as a ‘release candidate‘. This points to another key element of learning in the project, concerning the different processes and timelines of policy and technical standardisation. In the world of funded projects and policy processes, deadlines are often fixed, and the project plan has to work backwards from there. In a technical standardisation process, there is no ‘standard’ until a specification is in use and has been robustly tested. The processes for adopting a policy standard, and setting a technical one, differ – and whilst perhaps we should have spoken from the start of the project of an overall standard, embedding within it a technical specification, by the time this became clear we were too far down the path towards the policy launch. As a result, the Release Candidate designation is intended to suggest the specification is ready to draw upon, but that there is still a process to go (and future governance arrangements to be defined) before it can be adopted as a standard per se.

(6) The schema is just the start of it

This leads to the most important point: that launching the schemas and specification is just one part of delivering the standard.

In a recent e-mail conversation with Greg Bloom about elements of standardisation, linked to the development of the Open Referral standard, Greg put forward a list of components that may be involved in delivering a sustainable standards project, including:

  • The specification, with its various components and subcomponents;
  • Tools that assess compliance with the spec (e.g. validation tools, and more advanced assessment tools);
  • Some means of visualizing a given set of data’s level of compliance;
  • Incentives of some kind (whether positive or negative) for attaining various levels of compliance;
  • Processes for governing all of the above;
  • and of course the community through which all of this emerges and is sustained.

To this we might also add elements like documentation and tutorials, support for publishers, catalysing work with tool builders, guidance for users, and so-on.

Open government standards are not something to be published once, and then left, but require labour to develop and sustain, and involve many social processes as much as technical ones.

In many ways, although we’ve spent a year of small development iterations working towards this OCDS release, the work now is only just getting started, and there are many technical, community and capacity-building challenges ahead for the Open Contracting Partnership and others in the open contracting movement.

Exploring Wikidata

[Summary: thinking aloud – brief notes on learning about the Wikidata project, and how it might help address the organisational identifiers problem]

I’ve spent a fascinating day today at the Wikimania Conference at the Barbican in London, mostly following the programme’s ‘data’ track in order to understand the Wikidata project in more depth. This post shares some thinking aloud to capture some learning, reflections and exploration from the day.

As the Wikidata project manager, Lydia Pintscher, framed it, right now access to knowledge on Wikipedia is highly skewed by language. The topics of articles you have access to, the depth of meta-data about them (such as the locations they describe), the detail of those articles, and their likelihood of being up to date, are all greatly affected by the language you speak. Italian or Greek Wikipedia may have great coverage of places in Italy or Greece, but go wider and their coverage drops off. In terms of seeking more equal access to knowledge, this is a problem. However, whilst the encyclopedic narrative of a French, Spanish or Catalan page about the Barbican Centre in London will need to be written by someone in command of that language, many of the basic facts that go into an article are language-neutral, or translatable as small units of content rather than sentences and paragraphs. The date the building was built, the name of the architect, the current capacity of the building – all the kinds of things which might appear in infoboxes – are all things that could be made available to bootstrap new articles, or that, when changed, could have their changes cascaded across all the different language pages that draw upon them.

That is one of the motivating cases for Wikidata: separating out ‘items’ and their ‘properties’ that might belong in Wikipedia from the pages, making this data re-usable, and using it to build a better encyclopedia.

However, wikidata is also generating much wider interest – not least because it is taking on a number of problems that many people want to see addressed. These include:

  • Somewhere ‘institutional’ and well governed on the web to put data – and where each data item also gains the advantage of a discussion page.
  • The long-term preservation, and versioning, of data;
  • Providing common identifiers on the web for arbitrary things – and providing URIs for these things that can be looked up (building on the idea of DBPedia as a crystallisation point for the web of linked data);
  • Providing a data model that can cope with change over time, and with data from heterogeneous sources – all of the properties in wikidata can have qualifiers, such as the dates a statement is true from or until, source information, and other provenance data.

Wikidata could help address these issues on two levels:

  • By allowing anyone to add items and properties to the central wikidata instance, and making these available for re-use;
  • By providing an open source software platform for anyone to use in managing their own corpus of wikified, versioned data.

A particular use case I’m interested in is whether it might help in addressing the perennial Organisational Identifiers problem faced by data standards such as IATI and Open Contracting, where it turns out that having shared identifiers for government agencies, and for lots of existing but non-registered entities like charities and associations that give and receive funds, is really difficult. Others at Wikimania spoke of potential use cases around maintaining national statistics, and archiving the datasets underlying scientific publications.

However, in thinking about the use cases wikidata might have, it’s important to keep in mind its current scope:

  • It is a store of ‘items’ and then ‘statements’ about them (essentially a graph store). This is different from being a place to store datasets (as you might want to do with the archival of the dataset used in a scientific paper), and it means that, once created, items are the first-class entities of wikidata, able to exist in multiple collections.
  • It currently inherits Wikipedia’s notability criteria for items. That is, the basic building blocks of wikidata – the items that can be identified and described, such as the Barbican, Cheese or Government of Grenada – can only be included in the main wikidata instance if they have a corresponding wikipedia page in some language wikipedia (or similar: this requirement is a little more complex).
  • It can be edited by anyone, at any time. That is, systems that rely on the data need to consider what levels of consistency they need. Of course, as Wikipedia has shown, editability is often a great strength – and as Rufus Pollock noted in the ‘data roundtable’ session, updating and versioning of open data are currently big missing parts of our data infrastructures.

Unlike the entirely distributed open world assumption on the web of data, where the AAA assumption holds (Anyone can say Anything about Anything), Wikidata brings both a layer of regulation to the statements that can be made, and the potential of community-driven editorial control. It sits somewhere between the controlled description sets of Schema.org and an entirely open proliferation of items and ontologies to describe them.

Can it help the organisational identifiers problem?

I’ve started to carry out some quick tests to see how far Wikidata might be a resource to help with the aforementioned organisational identifiers problem.

Using Kasper Brandt‘s fantastically useful linked data rendering of IATI, I queried for the names of a selection of government and non-government organisations occurring in the International Aid Transparency Initiative data. I then used Open Refine to look up a selection of these against the DBpedia endpoint (which, it seems, now incorporates Wikidata information as well). This was very rough-and-ready (just searching for full name matches), but by cross-checking negative results (where there were no matches) against a manual Wikipedia search, it’s possible to get a sense of how many organisations might be identifiable within Wikipedia.
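For anyone wanting to reproduce this kind of test, the sketch below shows the rough-and-ready core of the approach in Python: an exact-label query against the public DBpedia SPARQL endpoint. The endpoint and the rdfs:label property are real; the organisation names are placeholders, and a serious attempt would need normalisation and fuzzy matching rather than full-name matches.

```python
import requests

DBPEDIA_SPARQL = "https://dbpedia.org/sparql"

def lookup_exact(name, lang="en"):
    """Return DBpedia resources whose rdfs:label exactly matches `name`.

    Exact matching only - mirroring the rough-and-ready test described
    above, where negative results were then cross-checked by hand.
    """
    query = f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT DISTINCT ?resource WHERE {{
            ?resource rdfs:label "{name}"@{lang} .
        }} LIMIT 5
    """
    resp = requests.get(
        DBPEDIA_SPARQL,
        params={"query": query, "format": "application/sparql-results+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bindings = resp.json()["results"]["bindings"]
    return [row["resource"]["value"] for row in bindings]

# Placeholder names - in the test these came from the IATI linked data.
for org in ["Department for Education", "World Health Organization"]:
    matches = lookup_exact(org)
    print(org, "->", matches or "no match: check Wikipedia manually")
```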

So far I’ve only tested the method, and haven’t run a large-scale test – but I found around half the organisations I checked had a Wikipedia entry of some form, and thus would currently be eligible to be Wikidata items right away. For others, Wikipedia pages would need to be created, and whether all the small voluntary organisations that might occur in an IATI or Open Contracting dataset would be notable enough for inclusion is something that would need to be explored further.

Exploring the Wikidata pages for some of the organisations I did find threw up some interesting additional possibilities to help with organisation identifiers. A number of pages were linked to identifiers from Library Authority Files, including VIAF identifiers such as this set of examples returned for a search on Malawi Ministry of Finance. Library Authority Files would tend to only include entries when a government agency has a publication of some form in that library, but at a quick glance coverage seems pretty good.

Now, as Chris Taggart would be quick to point out, neither Wikipedia pages nor library authority file identifiers act as a registry of legal entities. They pick out everyday concepts of an organisation, rather than the legally accountable body which enters into contracts. Yet, as they become increasingly backed by data, these identifiers do provide access to look up lots of contextual information that might help in understanding issues like organisational change over time. For example, the Wikipedia page for the UK’s Department for Education includes details of the departments that preceded it. In Wikidata form, a statement like this could even be qualified to say whether that relationship of being a preceding department is one that passes legal obligations from one to the other.

I’ve still got to think about this a lot more, but it seems that:

  • There are many things it might be useful to know about organisations which are not going to be captured in official registries anytime soon. Some of these things will need to be the subject of discussion, and open to agreement through dialogue. Wikidata, as a trusted shared space with good community governance practices, might be a good place to keep them – albeit recognising that in its current phase it has no goal of being a comprehensive repository of records about all organisations in the world (and other spaces, such as Open Corporates, are already solving the comprehensive coverage problem for particular classes of organisation).

  • There are some organisations for which, in many countries, no official registry exists (particularly government departments and agencies). Many of these are notable, and so even if no Wikipedia entry yet exists, one could and should. A project to manage and maintain government agency records and identifiers in Wikidata may be worth exploring.

Whether it would make sense for the open government community to shift from seeking master lists from some authority, towards a distributed, best-efforts community approach to parts of the organisational identifiers problem, is something yet to be explored.

Notes

*I should acknowledge SJ Klein’s counsel that this (encouraging multiple domain-specific instances of a Wikidata platform) is potentially a very bad idea, as the ‘forking’ of wiki-projects has rarely been a successful journey, particularly with respect to the sustainability of forked content. As SJ outlined, even though there may be technical and social challenges to a mega graph store, these could be compared to the apparent challenges of making the first encyclopedias (the idea of a 50,000 page book must have seemed crazy at first), or the social challenges envisioned for Wikipedia at its genesis (‘how could non-experts possibly edit an encyclopedia?’). On this view, it is only by setting the ambition of a comprehensive shared store of the world’s propositional data (with the qualifiers that Wikidata supports to make this possible without a closed world assumption) that such limits might be overcome. Perhaps with data there is a greater possibility of supporting the forking, and re-merging, of Wikidata instances, permitting short-term pragmatic creation of datasets outside the core Wikidata project, which can later be brought back in if they are considered, as a set, notable (although this still carries the risk that forked projects diverge in their values, governance and structure so far that re-connecting later becomes prohibitively difficult).

A Data Sharing Disclosure Standard?

[Summary: Iterations on a proposal for a public register of government data sharing arrangements, setting out options for a Data Sharing Disclosure Standard to be used whenever government shares personal data. Draft for interactive comments here (and PDF for those in govt without access to Google Docs).]

At the instigation of the UK Cabinet Office, an open policy making process is currently underway to propose new arrangements for data sharing in government. Data sharing arrangements are distinct from open data, as they may involve the limited exchange of personal and private data between government departments, or outside of government, with a specific purpose of data use in mind.

The idea that new measures are needed is based on a perception that many opportunities to make better use of data for research, addressing debt and fraud, or tailoring the design of public services, are missed, either because of legal or practical barriers to data being exchanged or joined up between government departments. Some departments, such as HMRC, require explicit legal permissions to share data, whereas in other departments and public bodies a range of existing ‘legal gateways’ and powers support the exchange of data.

I’ve been following the process from afar, but on Monday last week I had the chance to attend one of the open full-day workshops that Involve are facilitating as part of the open policy making process. This brought together representatives of a range of public bodies, including central government departments and local authorities, with members of the Cabinet Office team leading on data sharing reforms, and a small number of civil society organisations and individuals. Monday’s discussions centred on the introduction of new ‘permissive powers’ for data sharing to support tailored public services. For example, powers that would make it easier for local government to request and obtain HMRC data on 16–19 year olds, in order to identify which young people in their area were already in employment or training, and so to target their resources on contacting those young people outside employment or training whom they have a statutory obligation to support.

The exact wording of such a power, and the safeguards that need to be in place to ensure it is neither too broad, nor open to abuse, are being developed through the open policy making process. One safeguard I believe is important comes from introducing greater transparency into government data sharing arrangements.

A few months back, working with Reuben Binns, I put together a short note on a possible model for an ‘Open Register of Data Sharing‘. In Monday’s open policy making meeting, the topic of transparency as an important aspect of tailored public service data sharing came up, and provided an opportunity to discuss many of the ideas that the draft proposal had contained. Through the discussions, however, it became clear that there were a number of extra considerations needed to develop the proposal further, in particular:

  • Noting that public disclosure of planned data sharing was not only beneficial for transparency and scrutiny, but also for efficiency, coordination and consistency of data sharing: by allowing public bodies to pool data sharing arrangements, and to easily replicate approved shares, rather than starting from scratch with every plan and business case.
  • Recognising the concerns of local authorities and other public bodies about a centralised register, and the need to accommodate shares that might take place between public bodies at a local level only, without involvement of central government.
  • Recognising the need for both human and machine-readable information on data sharing arrangements, so that groups with a specific interest in particular data (e.g. associations looking out for the rights of homeless people) could track proposed or enacted arrangements without needing substantial technical know-how.
  • Recognising the importance of documents like Privacy Impact Assessments and Business Cases, but also noting that mandatory publication of these during their drafting could distort the drafting process (with the risk they become more PR documents making the case for a share, than genuine critical assessments), suggesting a mix of proactive and reactive transparency may be needed in practice.

As a result of the discussions with local authorities, government departments and others, I took away a number of ideas about how the proposal could be refined, and so this Friday, at the University of Southampton Web and Internet Science group annual gathering and weekend of projects (known locally as WAISFest), I worked in a stream on personal data, and spent a morning updating the proposals. The result is a reframed draft that, rather than focusing on the Register, focuses on a Data Sharing Disclosure Standard: emphasising the key information that needs to be disclosed about each data share, and discussing when disclosure should take place, whilst leaving open a range of options for how this might be technically implemented.

You can find the updated document here, as a Google Doc open to comments. I would really welcome comments and suggestions on how this could be refined further over the coming weeks. If you do leave a comment and want to be credited / want to join in future discussion of this proposal, please also include your name / contact details.

The Gazette provides semantically enriched public notices: readable by humans and machines.

A couple of things of particular note in the draft:

  • It is useful to identify (a) data controllers; (b) datasets; (c) the legislation authorising data shares. Right now the Register of Data Controllers seems to provide a good resource for (a), and thanks to recent efforts at building out the digital information infrastructure of the UK, it turns out there are often good URLs that can be used as identifiers for datasets (data.gov.uk lists unpublished datasets from many central government departments) and legislation (through the data-all-the-way-down approach of legislation.gov.uk). A sketch of how these identifiers might combine in a single disclosure record follows this list.
  • It considers how the Gazette might be used as a publication route for Data Sharing Disclosures. The Gazette is an official paper of record, published since 1665 but recently re-envisioned with a semantic publishing platform. Using such a route to publish notices of data sharing has the advantage that it combines the long-term archival of information in a robust source with making enriched, openly licensed data available for re-use. This potentially offers a more robust route to disclosures, in which the data version is a progressive enhancement on top of an information disclosure.
  • Based on feedback from Javier Ruiz, it highlights the importance of flagging when shared data is going to be processed using algorithms that will determine individuals’ eligibility for services or trigger interventions affecting citizens, and raises the question of whether the algorithms themselves should be disclosed as a matter of course.
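To illustrate how the pieces could fit together, below is a minimal sketch of a single machine-readable disclosure record. The field names are my own invention for illustration, not part of any agreed standard; the identifier values are made up, though they follow the data.gov.uk and legislation.gov.uk URL patterns discussed above.

```python
import json

# Illustrative sketch only: the field names are invented, not part of
# any agreed Data Sharing Disclosure Standard, and the identifiers are
# made-up examples following the URL patterns discussed above.
disclosure = {
    "title": "Sharing of records on 16-19 year olds with local authorities",
    "data_controllers": [
        # (a) drawn from the Register of Data Controllers (id invented)
        {"name": "HM Revenue & Customs", "registration_number": "Z1234567"},
    ],
    "datasets": [
        # (b) dataset identified by an illustrative data.gov.uk URL
        "https://data.gov.uk/dataset/example-16-19-records",
    ],
    "legal_gateway": [
        # (c) authorising legislation via an illustrative legislation.gov.uk URL
        "http://www.legislation.gov.uk/ukpga/2009/22",
    ],
    "purpose": "Identify young people not in employment or training",
    "status": "proposed",  # e.g. proposed / approved / enacted
}

print(json.dumps(disclosure, indent=2))
```

Publishing records in some form like this, alongside human-readable notices, would let interest groups track proposed or enacted shares programmatically, in line with the machine-readability point above.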

I’ll be sharing a copy of the draft with the Data Sharing open policy process mailing list, and with the Cabinet Office team working on the data sharing brief. They are working to draft an updated paper on policy options by early September, with a view to a possible White Paper – so comments over the next few weeks are particularly valued.

Data, information, knowledge and power – exploring Open Knowledge’s new core purpose

[Summary: a contribution to debate about the development of open knowledge movements]

New ‘Open Knowledge Foundation’ name and ‘data earth’ branding.

The Open Knowledge Foundation (re-named as ‘Open Knowledge’) are soft-launching a new brand over the coming months.

Alongside the new logo, and details of how the new brand was developed, posted on the OK Wiki, appears a set of statements about the motivations, core purpose and tag-line of the organisation. In this post I want to offer an initial critical reading of this process and, more importantly, the text itself.

Preliminary notes

Before going further, I want to offer a number of background points that frame the spirit in which the critique is offered.

  1. I have nothing but respect for the work of the leaders, staff team, volunteers and wider community of the Open Knowledge Foundation – and have been greatly inspired by the dedication I’ve seen to changing defaults and practices around how we handle data, information and knowledge. There are so many great projects, and so much political progress on openness, which OKFN as a whole can rightly take credit for.
  2. I recognise that there are massive challenges involved in founding, running and scaling up organisations. These challenges are magnified many times in community based and open organisations.
  3. Organisations with a commitment to openness or democracy – whether the co-operative movement, open source communities like Mozilla, communities such as Creative Commons, or indeed the Open Knowledge Foundation – are generally held to much higher standards, and face much more complex pressures from engaging their communities in what they do, than closed and conventional organisations. And, as the other examples show, the path is not always an easy one. There are inevitably growing pains and challenges.
  4. It is generally better to raise concerns and critiques and talk about them, than leave things unsaid. A critique is about getting into the details. Details matter.
  5. See (1).

(Disclosure: I have previously worked as a voluntary coordinator for the open-development working group of OKF (with support from AidInfo), and have participated in many community activities. I have never carried out paid work for OKF, and have no current formal affiliation.)

The text

Here are the three statements in the OK branding notes that caught my attention and sparked some reflections:

About our brand and what motivates us:
A revolution in technology is happening and it’s changing everything we do. Never before has so much data been collected and analysed. Never before have so many people had the ability to freely, easily and quickly share information across the globe. Governments and corporations are using this data to create knowledge about our world, and make decisions about our future. But who should control this data and the ability to find insights and make decisions? The many, or the few? This is a choice that we get to make. The future is up for grabs. Do we want to live in a world where access to knowledge is “closed”, and the power and understanding it brings is controlled by the few? Or, do we choose a world where knowledge is “open” and we are all empowered to make informed choices about our future? We believe that knowledge should be open, and that everyone – from citizens to scientists, from enterprises to entrepreneurs, – should have access to the information they need to understand and shape the world around them.

Our core purpose:

  • A world where knowledge creates power for the many, not the few.
  • A world where data frees us – to make informed choices about how we live, what we buy and who gets our vote.
  • A world where information and insights are accessible – and apparent – to everyone.
  • This is the world we choose.

Our tagline:
See how data can change the world

The critique

My concerns are not about the new logo or name. I understand (all too well) the way that having ‘Foundation’ in a non-profit’s name can mean different things in different contexts (not least people expecting you to have an endowment and funds to distribute), and so the move to Open Knowledge as a name has a good rationale. Rather, I wanted to raise four concerns:

(1) Process and representativeness

Tag Cloud from Open Knowledge Foundation Survey. See http://blog.okfn.org/2014/02/12/who-are-you-community-survey-results-part-1/ for details.

The message introducing the new brand to OKF-Discuss notes that “The network has been involved in the brand development process especially in the early stages as we explored what open knowledge meant to us all”, referring primarily to the Community Survey run at the end of 2013 and written up here and here. However, the later parts of developing the brand appear to have been outsourced to a commercial brand consultancy, consulting with a limited set of staff and stakeholders, and what is now presented appears to be offered as a given, rather than for consultation. The result has been a narrow focus on the ‘data’ aspects of OKF.

Looking back over the feedback from the 2013 survey, that data-centricity fails to represent the breadth of interests in the OKF community (particularly when looking beyond the quantitative survey questions which had an in-built bias towards data in the original survey design). Qualitative responses to the Survey talk of addressing specific global challenges, holding governments accountable, seeking diversity, and going beyond open data to develop broader critiques around intellectual property regimes. Yet none of this surfaces in the motivation statement, or visibly in the core purpose.

OKF has not yet grappled in full with the idea of internal democracy and governance – yet for a network made up of many working groups, local chapters and more, for a ‘core purpose’ statement to emerge without wider consultation seems problematic. There is a big missed opportunity here for deeper discussion about ideas and ideals, and for the conceptualisation of a much richer vision of open knowledge. The result is, I think, a core purpose statement that fails to represent the diversity of the community OKF has been able to bring together, and that may threaten its ability to bring those communities together in shared space in future.

Process points aside however (see growing pains point above), there are three more substantive issues to be raised.

(2) Data and tech-centricity

A selection of OKF Working Groups

The Open Knowledge movement I’ve met at OKFestival and other events, and that is evident through the pages of the working groups, is one committed to many forms of openness – education, hardware, sustainability, economics, political processes and development amongst others. It is a community that has been discussing diversity and building a global movement. Data may be an element of varying importance across the working groups and interest areas of OKF. And technology may be an enabler of action for each. But many are not fundamentally about data, or even technology, as their core focus. As we found when we explored how different members of the Open Development working group understood the concept of open development in 2012, many members focussed more upon open processes than on data and tech. Yet, for all this diversity of focus, the new OK tagline emphasises data alone.

I work on issues of open data everyday. I think it’s an important area. But it’s not the only element of open knowledge that should matter in the broad movement.

Whilst the Open Knowledge Foundation has rarely articulated the kinds of broad political critique of intellectual property regimes that might be found in prior Access to Knowledge movements, developing a concrete motivation and purpose statement gave OKF a chance to deepen its vision rather than narrow it. The risk Jo Bates has written about, of the ‘open’ movement being co-opted into dominant narratives of neoliberalism, appears to be a very real one. In the motivation statement above, government and big corporates are cast as the problem, and technology and data in the hands of ‘citizens’, ‘scientists’, ‘entrepreneurs’ and (perhaps contradictorily) ‘enterprises’, as the solution. Alternative approaches to improving processes of government and governance through opening more spaces for participation are off the table here, as are any specific normative goals for opening knowledge. Data-centricity displaces all of these.

Now – it might be argued that although the motivation statement takes data as a starting point, it is really at its core about the balance of power: asking who should control data, information and knowledge. Yet the analysis appears to entirely conflate the terms ‘data’, ‘information’ and ‘knowledge’ – which clouds this substantially.

(3) Data, Information and Knowledge

Data, Information, Knowledge, Wisdom

The DIKW pyramid offers a useful way of thinking about the relationship between Data, Information, Knowledge (and Wisdom). This has sometimes been described as a hierarchy running from ‘know nothing’ (data is symbols and signs encoding things about the world, but useless without interpretation), through ‘know what’ and ‘know how’, to ‘know why’.

Data is not the same as information, nor the same as knowledge. Converting data into information requires the addition of context. Converting information into knowledge requires skill and experience, obtained through practice and dialogue.

Data and information can be treated as artefacts/things. I can e-mail you some data or some information. But knowledge involves a process – sharing it involves more than just sending a file.

OKF has historically worked very much on the transition from data to information, and information to knowledge, through providing training, tools and capacity building, yet this is not captured at all in the core purpose. Knowledge, not data, has the potential to free, bringing greater autonomy. And it is arguably proprietary control of data and information that is at the basis of the power of the few, not any superior access to knowledge that they possess. And if we recognise that turning data into information and into knowledge involves contextualisation and subjectivity, then ‘information and insights’ cannot be simultaneously ‘apparent’ to everyone – if this is taken to represent some consensus on ‘truths’, rather than recognising that insights are generated, and contested, through processes of dialogue.

It feels like there is a strong implicit positivism within the current core purpose: which stands to raise particular problems for broadening the diversity of Open Knowledge beyond a few countries and communities.

(4) Power, individualism and collective action

I’ve already touched upon issues of power. Addressing “global challenges like justice, climate changes, cultural matters” (from survey responses) will not come from empowering individuals alone – it will have to involve new forms of co-ordination and collective action. Yet power in the ‘core purpose’ statement appears to be conceptualised primarily in terms of individual “informed choices about how we live, what we buy and who gets our vote”, suggesting change is purely the result of aggregating ‘choice’, and failing to explore how knowledge also needs to be used to challenge the frameworks within which choices are presented to us.

The ideas that ‘everyone’ can be empowered, and that when “knowledge is ‘open’ […] we are all empowered to make informed choices about our future”, fail to take account of the wider constraints on action and choice that many around the world face, and of the fact that some of the global struggles motivating the pursuit of greater openness are not win-win situations. Those constraints and wider contexts might not be directly within the power of an open knowledge movement to address, or the core preserve of open knowledge, but they need to be recognised and taken into account in the theories of change developed.

In summary

I’ve tried to deal with the Motivation, Core Purpose and Tagline statements as carefully as limited free time allows – but inevitably there is much more to dig into, and there will be other ways of reading these statements. More optimistic readings are possible – and I certainly hope they turn out to be more realistic – but in the interest of dialogue I hope that a critical reading is a more useful contribution to the debate, and I would re-iterate my preliminary notes 1–5 above.

To recap the critique:

  • Developing a brand and statement of core purpose is an opportunity for dialogue and discussion, yet right now this opportunity appears to have been mostly missed;
  • The motivation, core purpose and tagline are more tech-centric and data-centric than the OKF community, risking sidelining other aspects of the open knowledge community;
  • There needs to be a recognition of the distinction between data, information and knowledge, in order to develop a coherent theory of change and purpose;
  • There appears to be an implicit libertarian individualism in current theories of change, and it is not clear that this is compatible with working to address the shared global challenges that have brought many people into the open knowledge community.

Updates:

There is some discussion of these issues taking place on the OKFN-Discuss list, and the Wiki page has been updated from the version I was initially writing about, to re-frame what was termed ‘core purpose’ as ‘brand core purpose’.

ICTs and Anti-Corruption: theory and examples

[Summary: draft section from U4 paper on exploring the incentives for adopting ICT innovation in the fight against corruption]

As mentioned a few days ago, I’ve currently got a paper online for comment which I’m working on with Silvana Fumega for the U4 anti-corruption centre. I’ll be blogging each of the sections here, and if you’ve comments on any element of it, please do drop in comments to the Google Doc draft. 

ICTS AND ANTI-CORRUPTION

Corruption involves the abuse of entrusted power for personal gain (Transparency International, 2009). Grönlund has identified a wide range of actions that can be taken with ICTs to try to combat corruption, from service automation and the creation of online and mobile-phone-based corruption-reporting channels, to the online publication of government transparency information (Grönlund, 2010). In the diagram below we offer eight broad categories of ICT interventions with a potential role in fighting corruption.

[Diagram: eight broad categories of ICT interventions with a potential role in fighting corruption]

These different ICT interventions can be divided between transactional reforms and transparency reforms. Transactional reforms seek to reduce the space for corrupt activity by controlling and automating processes inside government, or to increase the detection of corruption by increasing the flow of information into existing government oversight and accountability mechanisms. Often these developments are framed as part of e-government. Transparency reforms, by contrast, focus on increasing external rather than internal control over government actors, by making the actions of the state and its agents more visible to citizens, civil society and the private sector. In the diagram, categories of ICT intervention and related examples are positioned along a horizontal axis to indicate, in general, whether these initiatives have emerged as ‘citizen led’ or ‘government led’ projects, and along the vertical axis to indicate whether the focus of these activities is primarily on transactional reforms or on transparency. In practice, where any actual ICT intervention falls depends as much on the details of implementation as on the technology, although we find these archetypes useful to highlight the different emphases and origins of different ICT-based approaches.

Many ICT innovations for transparency and accountability[1] have emerged from within civil society and the private sector, and only later been adopted by governments. In this paper our focus is specifically upon government adoption of innovations: where government takes the lead role in implementing some technology with anti-corruption potential, albeit a technology that may have originally been developed elsewhere, and where similar instances of such technologies may still be deployed by groups outside government. For example, civil society groups in a number of jurisdictions have deployed the Alaveteli open source software[2], which brokers the filing of Right to Information requests online, logging and making public the requests to, and replies from, government. Some government agencies have responded by building their own direct portals for filing requests, which co-exist with the civil society-run Alaveteli implementations. The question of concern for this paper is why governments have chosen to adopt the innovation and provide their own RTI portals.

Although there are different theories of change underlying ICT enabled transactional and transparency reforms, the actual technologies involved can be highly inter-related. For example, digitising information about a public service as part of an e-government management process means that there is data about its performance that can be released through a data portal and subjected to public pressure and scrutiny. Without the back-office systems, no digital records are available to open (Thurston, 2012).

The connection between transactional e-government and anti-corruption has only relatively recently been explored. As Bhatnagar notes, most e-government reforms did not begin as anti-corruption measures. Instead, they were adopted for their promise to modernise government and make it more efficient (Bhatnagar, 2003). Bhatnagar explains that “…reduction of corruption opportunities has often been an incidental benefit, rather than an explicit objective of e-government”. A focus on the connection between e-government and transparency is more recent still. Kim et al. (2009) note that “E-government’s potential to increase transparency and combat corruption in government administration is gaining popularity in communities of e-government practitioners and researchers…”, arguably as a result of increased Internet diffusion meaning that, for the first time, data and information from within government can, in theory, be made directly accessible to citizens through computers and mobile phones, without passing through intermediaries.

In any use of ICTs for anti-corruption, the technology itself is only one part of the picture. Legal frameworks, organisational processes, leadership and campaign strategies may all be necessary complements of digital tools in order to secure effective change. ICTs for accountability and anti-corruption have developed in a range of different sectors and in response to many different global trends. In the following paragraphs we survey in more depth the emergence and evolution of three kinds of ICTs with anti-corruption potential, looking at both the technologies and the contexts they are embedded within. 

2.1 TRANSPARENCY PORTALS

A transparency portal is a website where government agencies routinely publish defined sets of information. These are often concerned with financial information, and might include details of laws and regulations alongside more dynamic information such as government debt, departmental budget allocations and government spending (Solana, 2004). They tend to have a specific focus, and are often backed by a legal mandate, or regulatory requirement, that information is published to them on an ongoing basis. National transparency portals have existed across Latin America since the early 2000s, developed by finance ministries following over 15 years’ investment in financial management capacity building in the region. Procurement portals have also become common, linked to efforts to make public procurement more efficient and to comply with regulations and good practice on public tenders.

More recently, a number of governments have mandated the creation of local government transparency portals, or the creation of dedicated transparency pages on local government websites. For example, in the United Kingdom the Prime Minister requested that local governments publish all public spending over £500 on their websites, whilst in the Philippines the Department of Interior and Local Government (DILG) has pushed the implementation of a Full Disclosure Policy, requiring Local Government Units to post a summary of revenues collected, funds received, appropriations and disbursement of funds, and procurement-related documents on their websites. The Government of the Philippines has also created an online portal to support local government units in publishing the documents demanded by the policy[3].

In focus: Peru Financial Transparency Portal

A transparency portal is a website where government agencies routinely publish defined sets of information. They are often concerned with financial information and might include details of laws and regulations alongside more dynamic information such as government debt, departmental budget allocations and government spending.

Country: Peru

Responsible: Government of Peru – Ministry of Economic and Financial Affairs

Brief description: The Peruvian Government implemented a comprehensive transparency strategy in early 2000. That strategy comprised several initiatives (a law on access to financial information, and promotion of citizen involvement in transparency processes, among others). The Financial Transparency Portal was launched as one element of that strategy. In that regard, Solana (2004) suggests that the success of the portal is related to the existence of a comprehensive transparency strategy, in which the portal serves as a central element. The Portal (http://www.mef.gob.pe/) started to operate in 2001 and, at the time, was praised as the most advanced in the region. Several substantial upgrades to the portal have taken place since the launch.

Current situation:

The portal has changed considerably since its early days. In the beginning, it provided access to documents on economic and financial information. After more than a decade, it now publishes datasets on several economic and financial topics, which are provided by each of the agencies in charge of producing or collecting the information. Those datasets are divided into four main modules: budget performance monitoring, implementation of investment projects, inquiries on transfers to national, local and regional governments, and domestic and external debt. The portal also includes links to request information under the Peruvian FOI law, as well as to track the status of requests.

Sources:

http://www.politikaperu.org/directorio/ficha.asp?id=355

http://www.egov4dev.org/transparency/case/laportals.shtml

http://www.worldbank.org/socialaccountability_sourcebook/Regional%20database/Case%20studies/Latin%20America%20&%20Caribbean/TOL-V.pdf#page=71

In general, financial transparency portals have focussed on making government records available: often hosting image-file versions of printed, signed and scanned documents, which means that anyone wanting to analyse the information from across multiple reports must re-type it into spreadsheets or other software. Although a number of aid and budget transparency portals are linked directly to financial management systems, it is only recently that a small number of portals have started to add features giving direct access to datasets on budget and spending.

Some of the most data-centric transparency portals can be found in the international aid field, where Aid Transparency Portals have been built on top of the Aid Management Platforms used by aid-recipient governments to track their donor-funded projects and budgets. Built with funding and support from international donors, aid transparency portals such as those in Timor-Leste and Nepal offer search features across a database of projects. In Nepal, donors have funded the geocoding of project information, allowing a visual map of where funding flows are going to be displayed.

Central to the hypothesis underlying the role of transparency portals in anti-corruption is the idea that citizens and civil society will demand and access information from the portals, and will use it to hold authorities to account (Solana, 2004). In many contexts, whilst transparency portals have become well established, direct demand from citizens and civil society for the information they contain remains, as Alves and Heller put it in relation to Brazil’s fiscal transparency, “frustratingly low” (in Khagram, Fung, & Renzio, 2013). However, transparency portals may also be used by the media and other intermediaries, providing an alternative, more indirect, theory of change in which coverage of episodes of corruption creates electoral pressure (in functioning democracies at least) against corruption. Power and Taylor’s work on democracy and corruption in Brazil suggests, though, that whilst such mechanisms can have impacts, they are often confounded in practice by non-corruption-related factors that influence voters’ preferences, and by a wide range of contingencies, from electoral cycles to political party structures and electoral math (Power & Taylor, 2011).

2.2 OPEN DATA PORTALS

Where transparency portals focus on the publication of specific kinds of information (financial; aid; government projects etc.), open data portals act as a hub for bringing together diverse datasets published by different government departments.

Open data involves the publication of structured machine-readable data files online, with explicit permission granted for anyone to re-use the data in any way. This can be contrasted with cases where transparency portals publish scanned documents that cannot be loaded into data analysis software, or publish under copyright restrictions that deny citizens or businesses the right to re-use the data. Open data has risen to prominence over the last five years, spurred on by the 2009 Memorandum on Transparency and Open Government from US President Obama (Obama, 2010), which led to the creation of the data.gov portal, bringing together US government datasets. This built on principles of Open Government Data elaborated in 2007 by a group of activists meeting in Sebastopol, California, calling for government to provide data online that was complete, primary (i.e. not edited or interpreted by government before publication), timely, machine-readable, standardised and openly licensed (Malamud & O’Reilly, 2007).
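In practice, portals like data.gov.uk are built on catalogue platforms such as the open source CKAN, which expose the dataset metadata itself in machine-readable form through an API. As a brief sketch, assuming a portal running a standard CKAN API – the search term and printed fields here are just examples:

```python
import requests

# Sketch: querying a CKAN-based open data portal for datasets.
# package_search is part of CKAN's standard action API; the portal
# URL, search term and printed fields are illustrative.
PORTAL = "https://data.gov.uk"

resp = requests.get(
    f"{PORTAL}/api/3/action/package_search",
    params={"q": "spending", "rows": 5},
    timeout=30,
)
resp.raise_for_status()

for dataset in resp.json()["result"]["results"]:
    # Each record carries a licence field, so the 'explicit permission
    # to re-use' part of the open data definition can be checked in code.
    print(dataset["name"], "-", dataset.get("license_id", "no licence stated"))
```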

In focus: Kenya Open Data Initiative (KODI)

Open data involves the publication of structured machine-readable data files online with explicit permission granted for anyone to re-use the data in any way. Open data portals act as a hub for bringing together diverse datasets published by different government departments. One such platform is the Kenya Open Data Initiative (opendata.go.ke).

Country: Kenya

Responsible: Government of Kenya

Brief description:

Around 2008, projects from Ushahidi to M-PESA put Kenya on the map of ICT innovation. The Kenyan government – in particular, then-Permanent Secretary Ndemo of the Ministry of Information and Communications – eager to promote and encourage that market, started to explore the idea of publishing government datasets for this community of ICT experts to use. In that quest, he received support from actors outside of government such as the World Bank, Google and Ushahidi. Adding to that context, in 2010 a new constitution recognising citizens’ right of access to information was enacted in Kenya (however, an FOI law is still a pending task for the Kenyan government). On July 8th 2011, President Mwai Kibaki launched the Kenya Open Data Initiative, making government datasets available to the public through a web portal: opendata.go.ke

Current situation:

Several activists and analysts have started to write about the lack of updates and out-of-date information on the Kenya Open Data Initiative. The portal has not been updated in several months, and its traffic has slowed significantly.

Sources:

http://www.scribd.com/doc/75642393/Open-Data-Kenya-Long-Version

http://blog.openingparliament.org/post/63629369190/why-kenyas-open-data-portal-is-failing-and-why-it

http://www.code4kenya.org/?p=469

http://www.ict.go.ke/index.php/hot-topic/416-kenya-open-data

http://www.theguardian.com/global-development/poverty-matters/2011/jul/13/kenya-open-data-initiative

Open data portals have caught on as a policy intervention, with hundreds now online across the world, including an increasing number in developing countries. Brazil, India and Kenya all have national open government data portals, and Edo State in Nigeria recently launched one of the first sub-national open data portals on the continent, expressing a hope that it would “become a platform for improving transparency, catalyzing innovation, and enabling social and economic development”[4]. However, a number of open data portals have already turned out to be short-lived: the Thai government’s open data portal, launched[5] in 2011, was already defunct and offline at the time of writing.

The data hosted on open data portals varies widely: ranging from information on the locations of public services, and government service performance statistics, to public transport timetables, government budgets, and environmental monitoring data gathered by government research institutions. Not all of this data is useful for anti-corruption work, although the availability of information as structured data makes it far easier for third parties to analyse a wide range of government datasets not traditionally associated with anti-corruption work, looking for patterns and issues that might point to causes for concern. In general, theories of change around open data for anti-corruption assume that skilled intermediaries will access, interpret and work with the datasets published, as portals are generally designed with a technical audience in mind.

Data portals can act as both a catalyst of data publication, providing a focal point that encourages departments to publish data that was not otherwise available, and as an entry-point helping actors outside government to locate datasets that are available. At their best they provide a space for engagement between government and citizens, although few currently incorporate strong community features (De Cindio, 2012).

Recently, transparency and open data efforts have also started to focus on the importance of cross-cutting data standards that can be used to link up data published in different data portals, and to solicit the publication of sectoral data. Again the aid sector has provided a lead here, with the development of the International Aid Transparency Initiative (IATI) data standard, and a data portal collating all the information on aid projects published by donors to this standard[6]. New efforts are seeking to build on experiences from IATI with data standards for contracts information in the Open Contracting initiative, which not only targets information from governments, but also potentially the disclosure of contract information in the private sector[7].
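To give a feel for what a cross-cutting standard looks like in practice, here is a sketch of reading IATI-style activity data in Python. The element names (iati-activity, iati-identifier, participating-org) are taken from the IATI standard, but the XML fragment itself is invented for illustration:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment only: hand-written to be shaped like IATI
# activity data, not taken from any real published file.
IATI_SAMPLE = """
<iati-activities version="1.03">
  <iati-activity>
    <iati-identifier>GB-1-EXAMPLE-123</iati-identifier>
    <title>Example education programme</title>
    <participating-org role="Funding" ref="GB-1">Example Donor</participating-org>
  </iati-activity>
</iati-activities>
"""

root = ET.fromstring(IATI_SAMPLE)
for activity in root.findall("iati-activity"):
    identifier = activity.findtext("iati-identifier")
    # Organisation references like these are exactly where the shared
    # organisational identifiers discussed earlier in this archive matter.
    orgs = [org.get("ref") for org in activity.findall("participating-org")]
    print(identifier, "- participating orgs:", orgs)
```

Because every publisher uses the same element names and identifier conventions, data from many donors can be joined up in one analysis, which is what gives a cross-cutting standard its value.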

2.3 CITIZEN REPORTING CHANNELS

Transparency and open data portals primarily focus on the flow of information from government to citizen. Many efforts to challenge corruption require a flow of information the other way: citizens reporting instances of corruption, or providing the information that agents of government need to identify and address corrupt behaviour. When reports are filed on paper, or to local officials, it can be hard for central governments to ensure reports are adequately addressed. By contrast, with platforms like the E-Grievance Portal in the Indian State of Orissa[8], reports can be tracked once submitted, meaning that, where there is the will to challenge corruption, citizen reports can be better handled.

Many online channels for citizen reporting have in fact grown up outside of government. Platforms like FixMyStreet in the UK, and the many similar platforms across the world, have been launched by civil society groups frustrated at having to deal with government through seemingly antiquated paper processes. FixMyStreet allows citizens to point out on a map where civil infrastructure requires fixing and forward the citizen reports to the relevant level of government. Government agents are invited to report back to the site when the issue is fixed, giving a trackable and transparent record of government responsiveness. In some areas, governments have responded to these platforms by building their own alternative citizen reporting channels, though often without the transparency of the civil society platforms (reports simply go to the public authority; no open tracking is provided), or, in other cases, by working to integrate the civil society provided solution with their own systems.

In focus: I Paid a Bribe

Many online channels for citizen reporting have been developed outside of government. One of these platforms is ‘I Paid a Bribe’, an Indian website aimed at collating bribery stories and prices from citizens across the country, which then uses them to present a snapshot of trends in bribery.

Country: India

Responsible: Janaagraha (www.janaagraha.org), a Bangalore-based not-for-profit organization

Brief description:

The initiative was first launched on August 15, 2010 (India’s Independence Day), and the website became fully functional a month later. I Paid a Bribe aims to understand the role of bribery in public service delivery by transforming the data collected from reports into knowledge that informs the government about gaps in public transactions, and by strengthening citizen engagement to improve the quality of service delivery. For example, in Bangalore, Bhaskar Rao, the Transport Commissioner for the state of Karnataka, used the data collected on I Paid a Bribe to push through reforms in the motor vehicle department. As a result, and in order to avoid bribes, licenses can now be applied for online (Strom, 2012).

Current situation: Trying to reach a greater audience, ipaidabribe.com launched, in mid-2013, “Maine Rishwat Di”, the Hindi-language version of the website: http://hindi.ipaidabribe.com/ At the same time, they launched mobile apps and SMS services in order to make bribe reporting easier and more accessible to citizens all across India. “I Paid a Bribe” has also been replicated with partners in a number of other countries, including Pakistan, Kenya, Morocco and Greece.

Sources: https://www.ipaidabribe.com/about-us

http://southasia.oneworld.net/Files/ict_facilitated_access_to_information_innovations.pdf/at_download/file

http://www.firstpost.com/india/after-reporting-bribes-now-report-rishwats-hindi-version-of-i-paid-a-bribe-launched-1022627.html

http://www.ipaidabribe.com/comment-pieces/“maine-rishwat-di”-hindi-language-version-ipaidabribecom-launched-shankar-mahadevan

Strom, Stephanie (2012) Web Sites Shine Light on Petty Bribery Worldwide. The New York Times. March 6th. Available:  http://www.nytimes.com/2012/03/07/business/web-sites-shine-light-on-petty-bribery-worldwide.html

References

Bhatnagar, S. (2003). Transparency and Corruption: Does E-Government Help? 1–9.

De Cindio, F. (2012, April 4). Guidelines for Designing Deliberative Digital Habitats: Learning from e-Participation for Open Data Initiatives. The Journal of Community Informatics.

Fox, J. (2007). The uncertain relationship between transparency and accountability. Development in Practice, 17(4-5), 663–671. doi:10.1080/09614520701469955

Grönlund, Å. (2010). Using ICT to combat corruption – tools, methods and results. In C. Strand (Ed.), Increasing transparency and fighting corruption through ICT: empowering people and communities (pp. 7–26). SPIDER.

Khagram, S., Fung, A., & Renzio, P. de. (2013). Open Budgets: The Political Economy of Transparency, Participation, and Accountability (p. 264). Brookings Institution Press.

Kim, S., Kim, H. J., & Lee, H. (2009). An institutional analysis of an e-government system for anti-corruption: The case of OPEN. Government Information Quarterly, 26(1), 42–50. doi:10.1016/j.giq.2008.09.002

Malamud, C., & O’Reilly, T. (2007, December). 8 Principles of Open Government Data. Retrieved June 01, 2010, from http://resource.org/8_principles.html

Obama, B. (2010). Memo from President Obama on Transparency and Open Government. In D. Lathrop & L. Ruma (Eds.), Open Government: Collaboration, Transparency and Participation in Practice. O’Reilly Media.

Power, T. J., & Taylor, M. M. (2011). Corruption and Democracy in Brazil: The struggle for accountability. University of Notre Dame.

Solana, M. (2004). Transparency Portals: Delivering public financial information to Citizens in Latin America. In K. Bain, I. Franka Braun, N. John-Abraham, & M. Peñuela (Eds.), Thinking Out Loud V: Innovative Case Studies on Participatory Instruments (pp. 71–80). World Bank.

Thurston, A. C. (2012). Trustworthy Records and Open Data. The Journal of Community Informatics, 8(2).

Transparency International. (2009). The Anti-Corruption Plain Language Guide.


[1] It is important to clarify that transparency does not necessarily lead to accountability. Transparency, understood as the disclosure of information that sheds light on institutional behavior, can also be defined as answerability. However, accountability (or “hard accountability” according to Fox, 2007) implies not only answerability but also the possibility of sanctions (Fox, 2007).

[2] http://www.alaveteli.org/about/where-has-alaveteli-been-installed/

[4] http://data.edostate.gov.ng/ Accessed 10th October 2013

[8] http://cmgcorissa.gov.in