
UK Open Contracting goes local in Gloucestershire?

[Summary: Exploring the arguments for Gloucestershire County Council to support Open Contracting on Weds 7th December]

This Wednesday, on the eve of the Open Government Partnership summit in Paris, where I expect we’ll be hearing updates from Open Contracting projects across the world, my local County Council in Gloucestershire will be voting on an Open Contracting motion.


If you’re not familiar with Open Contracting, it’s a simple idea, described by the Open Contracting Partnership here.

The motion, introduced by our local Green Party councillor Sarah Lunnon, calls on the Council to:

“commit to the Open Contracting Global Principles and take action to ensure that:

  • By the end of 2017, complete information for all contracting processes over £1m – including details of the tender, award and contract process, the full text of contracts and amendments, and performance information – is proactively published;

  • By the end of 2018, complete information for all contracting processes over £500 is proactively published.”

It goes on to state that:

“The presumption should be that the text of all contracts is open by default. Redactions should only be permitted: (a) at the explicit written request of the parties to the contract; (b) subject to the public interest tests of the Freedom of Information act; (c) with the minimum possible redactions; and (d) with full justifications for any redaction given.”

Passing this motion would, as far as I’m aware, make Gloucestershire the first local authority in the UK to explicitly commit to the Open Contracting Global Principles. Although the Gloucestershire motion may be, at least in part, a response to the opaqueness of one particular contract, by being framed in terms of the Global Open Contracting Principles it offers something of a win-win for local business (greater access to opportunities), citizens (greater understanding of how funds are spent) and the authority (better deals, and better scrutiny of spending).


If you’re a Gloucestershire resident, consider contacting your Councillor to ask them to support the motion.

If you’re not – perhaps there might be an opportunity to bring forward an open contracting initiative in your own area? As it turns out, all the national policy foundations are already in place in the UK; it just needs commitment from local areas to put them into practice.

The framework exists for local open contracting in the UK

Below are a few of the resources I found to address common questions raised about Open Contracting, and disclosure of contracting documents, when I was researching a short letter to local councillors on the motion.

National procurement policy already establishes a presumption in favour of disclosure

The UK’s Public Sector Procurement Policy incorporates a set of Transparency Principles that state that:

There should be a presumption in favour of disclosing information, with exemptions following the provisions of the Freedom of Information Act – for example, on national security or commercial confidentiality grounds. The presumption in favour of disclosure should apply to the vast majority of commercial information about government contracts, with commercial confidentiality being the exception rather than the rule.

The principles go on to note that exemptions may be available for pricing information, but that “This means the way the supplier has arrived at the price they are charging government in a contract, *but should not usually be grounds for withholding the price itself*.” (emphasis added)

Commercial confidentiality concerns can be addressed through good planning

In 2014, the Centre for Global Development facilitated a multi-stakeholder working group on Publishing Government Contracts, including public and private sector representatives. This group concluded that:

While there are legitimate commercial, national-security, and privacy concerns, they involve a small minority of contracts and can be addressed using a principles-based redaction policy. (From issue brief. Full report here)

The findings of the CGD working group show that a principle of ‘open by default’ is viable in practice.

Data protection concerns rarely justify non-disclosure of contracts

The Local Government Transparency Code 2015 provides guidance on managing any data protection concerns that may arise from contract publication, stating that:

The Data Protection Act 1998 also does not automatically prohibit information being published naming the suppliers with whom the authority has contracts, including sole traders, because of the public interest in accountability and transparency in the spending of public money.

Section 20 of the code addresses commercial confidentiality, stating that:

The Government has not seen any evidence that publishing details about contracts entered into by local authorities would prejudice procurement exercises or the interests of commercial organisations, or breach commercial confidentiality unless specific confidentiality clauses are included in contracts. Local authorities should expect to publish details of contracts newly entered into – commercial confidentiality should not, in itself, be a reason for local authorities to not follow the provisions of this Code. Therefore, local authorities should consider inserting clauses in new contracts allowing for the disclosure of data in compliance with this Code.

Model transparency clauses can be used to manage disclosure of structured performance information

Initial advocacy for a Model Transparency Clause in the UK was led by the Institute for Government, and the national transparency clause, now included in the Model Services Contract, was developed with input from the National Council for Voluntary Organisations, the Open Data Institute, and major private sector contractors. The extensive work that has taken place to develop models of disclosure that balance commercial practicalities against government and public interests in clear information on contract performance provides a tried-and-tested template for local authorities to build upon.

Contracts Finder and the Open Contracting Data Standard provide ready-made tools to implement disclosure

Contracts Finder is the government’s national platform for publication of contracting opportunities and awards. Local authorities are mandated to submit above-threshold procurements through the platform, but, as far as I understand, can also submit data on all procurement opportunities via Contracts Finder if they choose.

This means there is little extra cost for a local authority to make structured information on all its procurement processes available.

Some planning may be required to manage contract document publication effectively, but the technical complexity involved should be no more than making space available on a local council website for the documents, and linking to these from submissions to Contracts Finder.

Contracts Finder has recently launched an Open Contracting Data Standard API, giving access to structured information about contracting processes, and a discovery phase is currently underway to improve the platform – an opportunity for local authorities to feed in ideas about how they might get their data back, so that they can display it locally for contracting transparency.
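For a local authority wanting to re-display its own contracting data, the consuming side can be lightweight. The sketch below is illustrative only: it assumes an OCDS-style release package has already been fetched from the Contracts Finder API and parsed from JSON (the sample records and council name are invented), and simply flattens each release into a row suitable for a local transparency page.

```python
# Illustrative only: this package is hand-made sample data in the shape of an
# OCDS release package; a real one would be fetched from the Contracts Finder API.
SAMPLE_PACKAGE = {
    "uri": "https://example.org/releases.json",  # hypothetical
    "releases": [
        {
            "ocid": "ocds-0000-000001",
            "date": "2016-12-01T00:00:00Z",
            "tag": ["tender"],
            "buyer": {"name": "Example County Council"},
            "tender": {
                "title": "Highways maintenance",
                "value": {"amount": 1500000, "currency": "GBP"},
            },
        },
        {
            "ocid": "ocds-0000-000002",
            "date": "2016-12-02T00:00:00Z",
            "tag": ["award"],
            "buyer": {"name": "Example County Council"},
            "tender": {
                "title": "School catering",
                "value": {"amount": 250000, "currency": "GBP"},
            },
        },
    ],
}

def summarise(package):
    """Flatten OCDS releases into simple rows for a local transparency page."""
    rows = []
    for release in package.get("releases", []):
        tender = release.get("tender", {})
        value = tender.get("value", {})
        rows.append({
            "ocid": release["ocid"],
            "stage": ", ".join(release.get("tag", [])),
            "buyer": release.get("buyer", {}).get("name"),
            "title": tender.get("title"),
            "value": f"{value.get('currency', '')} {value.get('amount', '')}".strip(),
        })
    return rows

rows = summarise(SAMPLE_PACKAGE)
for row in rows:
    print(row["ocid"], row["stage"], row["title"], row["value"])
```

The point is not the particular fields chosen, but that once data arrives in a common standard, this kind of re-display becomes a small scripting task rather than a bespoke integration project.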

Looking ahead

I hope I’ll be able to update this post with news of a successful motion on Thursday. In any case, I’ve been quite struck, while working on the research above, by the potential to really develop a local open contracting agenda in the UK.

Interested in getting involved too? Drop me a line and let’s explore (tim@timdavies.org.uk)

ConDatos Talk notes: Open data as strategy

[Summary: Notes from a conference talk]

Last week I was in Colombia for AbreLatam and ConDatos, Latin America’s open data conference. Thanks to a kind invitation from [Fabrizio Scrollini](http://www.twitter.com/Fscrollini), I had the opportunity to share a few thoughts in one of the closing keynotes. Here is a lightly edited version of my speaker notes, and full slides are available here.

Open Data as strategy

In a few months, Barack Obama will leave the White House. One of his first acts as US President was to issue a memorandum on Transparency and Open Government, framed in terms of transparency, participation and collaboration.

This memo has often been cited as the starting gun for a substantial open data movement around the world, although the roots of the open data movement go deeper, and in many of the countries adopting open data policies it has blended with long-standing political priorities and agendas.

For myself, I started studying the open data field in 2009: exploring the interaction between open data and democracy, and I’ve been interested ever since in exploring the opportunities and challenges of using open data as a tool for social change.

So, it seems like a good time to be looking back and asking: where have we got to, eight years on from Obama’s memo, and nearly ten years since the Sebastopol Open Government Data Principles?

We’ve got an increasing number of datasets published by governments. Data portals abound. And there are many people now working in roles that involve creating, mediating, or using open data. But we’ve still got an impact gap. Many of the anticipated gains from open data, in terms of both innovation and accountability, appear not to have been realised. And as studies such as the Open Data Barometer have shown, many open data policies have a narrow scope, trying to focus on data for innovation, without touching upon data for transparency, participation or collaboration.

Eight years on – many are questioning the open data hype. We increasingly hear the question: with millions of datasets out there, who is using the data?

My argument is that we’ve spent too much time thinking about open data as an object, and not enough thinking about it as an approach: a strategy for problem solving.

Open data as an approach

What do I mean by this?

Well, if you think of open data as an object, it has a technical definition. It is a dataset which is machine-readable, published online, and free to re-use.

The trouble with thinking of open data in this way is that it ignores the contents of the data. It imagines that geodata maps of an entire country, when published as open data, are the same sort of thing as hospital statistics, or meta-data on public contracts. It strips these datasets from their context, and centralises them as static resources uploaded onto open data portals.

But, if we think of open data as an approach, we can get towards a much clearer picture of the kinds of strategies needed to secure an impact from each and every dataset.

What is the open data approach?

Well – it is about transparency, participation and collaboration.

So many of the policy or business problems we face today need access to data. A closed approach goes out and gathers that data inside the organisation. It uses the data to address the local problem, and all other potential uses of that data are ignored.

An open approach considers the opportunities to build shared open data infrastructure. An infrastructure is not just something technical: it involves processes of governance, of data quality assurance, and community building.

Building an open infrastructure involves thinking about your needs, and then also considering the needs of other potential data users – and working together to create and maintain datasets that meet shared goals.

Ultimately, it recognises open data as a public good.

Let me give an example

In the United States, ‘211’ community directory services play an important role in helping refer people to sources of support for health or social welfare issues. Local 211 providers need to gather and keep up-to-date information on the services available in their area. This can be expensive and time consuming, and often involves groups collecting overlapping information – duplicating efforts.

The Open Referral initiative is working to encourage directory providers to publish their directories as open data, and to adopt a common standard for publishing the data. The lead organiser of the initiative, Greg Bloom, has invested time in working with existing system vendors and information providers, to understand how an open approach can strengthen, rather than undermine their business models.

In the early stages, and over the short-term, getting involved in a collaborative open data effort might involve more costs than benefits for any individual referral provider. But the more data that is provided, the more network effects kick in, and the greater the public good, and private value, that is generated.

This demonstrates open data as an approach. There isn’t an open referral dataset to begin with: just isolated proprietary directories. But through participation and collaboration, groups can come together to build shared open data that enables them all to do their job better.

It’s not just about governments publishing data

It is important to note that an open data approach is not just about government data. It can also be about data from the voluntary sector and the private sector.

With targeted transparency policies, governments can mandate private sector data sharing to support consumer choice, and create a level playing field amongst firms.

As in the Open Referral example, voluntary sector and government organisations can share data together to enable better cross-organisation collaboration.

One of the most interesting findings from work of the Open Data in Developing Countries research network in Brazil, was that work on open data created a space for conversations between government and civil society about processes of data collection and use. The impact of an open data approach was not just on the datasets made available, but also on the business processes inside government. By engaging with external data re-users, government had the opportunity to rethink the data it collected, with potential impacts on the data available for internal decision making, as well as external re-use. We are seeing the same thing happen in our work on Open Contracting, which I will discuss more shortly.

The fallacy of more data now

Before I move on, however, I want to caution against the common ‘fallacy of more data now’.

There are many people who got into working with open data because they care about a particular problem: from local transport or environmental sustainability, to accountable politics, or better education. In exploring those problems, they have identified a need for data and have allied with the open data movement to get hold of datasets. But it is easy at this point to lose sight of the original problem – and to focus on getting access to more data. Just like research papers that conclude by calling for more research, an open data approach can get stuck in always looking for more data.

It is important to regularly loop back to problem solving: using the data we do have available to address problems, checking what role data really plays in the solution, and thinking about the other elements it sits alongside. Only with a practical understanding of the remaining gaps, developed from trying to use the data, should we iterate back to further advocacy and action to improve data supply.

Being strategic

So, if open data is, as I’ve argued, an approach, how do we approach it strategically? And how do we get beyond local pilots, to impacts at scale?

Firstly, ‘open by default’ is a good starting point, strategically speaking. If the default when a dataset is created is to share it – restricting access only when there is a privacy case, or a strong business case, for doing so – then it is much easier for initiatives that might use data for problem solving to get started.

But ‘open by default’ is not enough. We need to think about standards, governance, and the ecosystem of different actors involved in creating, using, providing access to, and adding value on top of open data. And we need to recognise that each dataset involves its own politics and power dynamics.

Let’s use a case study of Open Contracting to explore this more. Colombia has been an Open Contracting leader, one of the founder members of the Open Contracting Partnership, and part of the C5 along with Mexico, France, Ukraine and the UK. In fact, it’s worth noting that Latin America has been a real leader in Open Contracting – with leading work also in Paraguay, and emerging activities in Argentina.

Open Contracting in focus

Public contracting is a major problem space. $9.5tn a year is spent through public contracts, yet some estimates suggest as much as 30% of that might leak out of the system without leading to public benefit. Not only can poorly managed contracts lead to the loss of taxpayers’ money, but corruption and mismanagement can be a source of conflict and instability. For countries experiencing political change, or engaged in post-conflict reconstruction, this issue is in even sharper relief. In part this explains why Ukraine has been such an Open Contracting leader, seeking to challenge a political history of corruption through new transparent systems.

Open Contracting aims to bring about better public contracting through transparency and participation.

Standards

To support implementation of the open contracting principles, the Open Contracting Partnership (OCP) led the development of the Open Contracting Data Standard (OCDS). When working with the Web Foundation I was involved in the design of the standard, and now my team at Open Data Services Co-operative continues to develop and support the standard for OCP.

OCDS sets out a common technical specification for providing data on all stages of a contracting process, and for linking out to contracting documents. It describes what to publish, and how to publish it.

The standard helps Open Contracting scale in two ways:

  • Firstly, it makes it easier for data publishers to follow good practices in making their data re-usable. The standard itself is produced through an open and collaborative process, and so someone adopting the standard can take advantage of all the thinking that has gone into how to model contracting processes, and manage complex issues like changes over time, or uniquely identifying organisations.

  • Secondly, the standard is built around the needs of a number of different users: from the SME looking for business opportunities, to the government official looking to understand their own spend, and the civil society procurement monitor tracking contract delivery. By acting as a resource for all these different stakeholders, the standard lets them jointly advocate for OCDS data, rather than working separately to secure access to the particular data points they individually care about.

Importantly though, the standard is responsive to local needs. In Mexico, where both the federal government and Mexico City have been leading adopters, work has taken place to translate the standard, and then to look at how it can be extended and localised to fit with national law, whilst also maintaining connections with data shared in other countries.

Governance & dialogue

When it comes to enabling collaboration through open data, governance becomes vitally important. No-one is going to build their business or organisation on top of a government dataset if they don’t trust that the dataset will be available next year, and the year after that.

And governments are not going to be able to guarantee that they will provide a dataset year after year unless they have good information governance in place. We’ve seen a number of cases where data publishers have had to withdraw datasets because they did not think carefully about privacy issues when preparing the data for release.

For Open Contracting, the standard itself has an open governance process. And in the Open Contracting Partnership’s ‘Showcase and Learning Projects’ there is a strong emphasis on building local partnerships, making sure there is dialogue between data publishers and users – creating the feedback loops needed to build a data infrastructure that can be relied upon.

In the UK, adoption of OCDS will soon give the government a better view of how far different departments and agencies are meeting their obligations to publish contracting information. By being transparent with the data, and transparent about data quality, civil society and the private sector can get more involved in pushing for policies to be properly implemented: combining top-down and bottom-up pressure for change.
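Being transparent about data quality can start very simply: once OCDS data is published, anyone can run basic completeness checks over it. The sketch below is a hypothetical example (the releases are invented, and a real check would be driven by the OCDS schema), flagging award releases where no value has been disclosed.

```python
# Invented sample releases; a real check would load published OCDS data.
releases = [
    {"ocid": "ocds-0000-1", "tag": ["award"],
     "awards": [{"id": "a1", "value": {"amount": 90000}}]},
    {"ocid": "ocds-0000-2", "tag": ["award"],
     "awards": [{"id": "a2"}]},            # award published without a value
    {"ocid": "ocds-0000-3", "tag": ["tender"]},
]

def awards_missing_value(releases):
    """List (ocid, award id) pairs where an award carries no value field –
    a simple completeness measure publishers and users can both inspect."""
    missing = []
    for release in releases:
        for award in release.get("awards", []):
            if "value" not in award:
                missing.append((release["ocid"], award["id"]))
    return missing

missing = awards_missing_value(releases)
print(f"{len(missing)} award(s) missing a value:", missing)
```

Publishing the results of checks like this alongside the data itself is one way to make data quality part of the public conversation, rather than a hidden internal concern.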

Support and community

One of the most important lessons for us from Open Contracting has been that scaling up open data initiatives is not just about standards and technical specs, but it is also about relationships, community and providing the right support at the right time.

The Open Contracting Partnership invests in bringing together champions of open contracting from across the world to get inspired and to share learning. Because they are working with common standards, ideas and tools are more easily transferable. And as I mentioned earlier, thinking about how to improve their open data also creates opportunities for groups to think about improving their internal systems and processes.

In addition, my team at Open Data Services Co-operative provide the ‘technical helpdesk’ for OCDS. We offer e-mail, phone and workshop support to governments working to publish their data, and to groups seeking to use open contracting data. Our goal is to make sure that when data is published, it is easy-to-use, and that all the small barriers to data re-use that exist for so many other datasets are not there when you come to an open contracting dataset.

We do this because data standards are only as strong as their early implementations. But we’re not aiming to be the only support provider for OCDS. In fact, we’re aiming to stimulate an ecosystem of support and data re-use.

Ecosystem

A strategic approach to problem solving with open data needs us to recognise the different roles in a data value chain. And to think about what elements need to be kept open for a vibrant ecosystem, and where to create space for proprietary business models.

If governments need consultancy support to improve their systems to produce OCDS data, and a marketplace of expert consultants develops, this is a good thing for scaling adoption. If private firms build value-added data analysis tools on top of contracting data, this is something to welcome that can scale use.

But if the originally published data is messy, and firms have to spend lots of money cleaning up the raw data before they use it, then barriers to market entry are created. This stifles innovation, and leads to services accessible only to the wealthy private sector, excluding civil society data users.

That’s why there is a need for a range of different actors – public, civil society and private – involved in a data ecosystem, and space for a range of business models.

Business models

I’ve been asked to touch in particular on business models in this talk – not least because the company I’m part of, Open Data Services Co-operative, has been exploring a different model to scale up support for open data.

We’re set up as a workers’ co-operative: a model with a strong history in Latin America. In a workers’ co-op, the staff own the business and make decisions about its future. This might not sound that significant, but it has a number of distinctions from other models:

(1) It’s a business, not a charity. This can give us the flexibility to innovate, and the drive to find sustainable models for our work. Right now, we work through a mix of contracts for technology research and development, and through providing ongoing support for data standards, often ultimately funded by donors who believe in investing in public good open data infrastructure.

(2) Organic instead of investment growth. A lot of the examples used when talking about tech businesses are born out of massive-scale Silicon Valley investments. Most co-operative models are based on growing through member contributions and revenue, rather than selling equity. Although we are set up as a workers’ co-op, there are growing discussions around ‘platform co-operatives’ and ‘data co-operatives’, in which those who could benefit from shared data infrastructure collectively support its development through a co-op model.

(3) Social mission focus. We want to provide jobs for people – particularly growing the community of professionals working on open data, as we recognise there are limited opportunities for stable jobs focussed on open data and social change. But we also want to have an impact on the world, through enabling open data approaches to problem solving. As a worker-owned business, we’re not focussed on profit for shareholders or an external owner, but on developing effective projects, and contributing to the wider community and issues we care about.

When it comes to scale, for a co-operative the question is about reaching the right scale, not about unlimited growth. That’s why as demand has been growing for support on the Open Contracting Data Standard in Latin America, we’ve been working with the Open Contracting Partnership to put out a call for a new partner organisation to take on that role – co-operating alongside Open Data Services to provide services across the region.

If anyone would like to find out more about that opportunity – please do check out the details online here.

I’m not here to argue that co-operatives are the only or the best business model for working with open data – but I do encourage you to think about the different models: from supporting individual entrepreneurs, to building open data activities into existing organisations, and supporting the emergence of co-operative groups that can catalyse a wider re-use ecosystem.

Recap

So let’s recap.

  • Open data is an approach, not an object

  • Open approaches win out over closed approaches when it comes to creating both social value, and opportunities for innovation and growth

  • But, we need to be strategic about open data: using it for problem solving, and making sure data quality and reliability is good enough for ongoing re-use

  • And we need sector-specific approaches, with a mix of different organisations involved

I’ve shared case studies of Open Referral and Open Contracting. But in other sectors the right approaches may be different.

My challenge to you is to think about how you can apply the values of transparency, participation and collaboration to your open data projects, and how you can act strategically – making use of standards, community engagement, and support for data publishers and users – to build vibrant ecosystems.

Open Government – Gouvernement Ouvert: Same same but different

[Summary: preliminary notes for a roundtable discussion on open government research]

I’m talking tomorrow at a workshop in Paris to explore the research agenda on Open Government. The first panel, under the title “Open Government – Gouvernement Ouvert: Same same but different?” aims to dig into the question of whether open government takes different forms in different countries and contexts.

To what extent is open government about countries moving towards some set of universal principles and practices for modern accountable and participatory governance? And to what extent is it about diverse culturally specific reforms to existing national systems of governance? We’ll be getting into these questions by looking at the open government situation in a number of different (European) countries, followed by a roundtable discussion.

The organisers have set three questions to get discussions going. I’ve jotted down the thoughts below by way of my preparation (and sharing here in the spirit of blogging things before I try and edit them too much, which means they never make it out).

Question 1: What has been done lately in the UK that could qualify as open government?

(1) Brexit and open government

It’s hard to answer this question without first considering the recent EU Referendum, and subsequent political fall-out of the Brexit vote. How does this fit into the open government landscape?

In general, democratic process, elections and referenda have fallen outside the scope of an open government agenda. These votes might be the means by which we choose the legislative branch of government, and through which mass civic input is solicited, but when it comes to open government discourse, focus has been placed firmly on the executive and administrative branch. Whether this is sustainable in future is an open question (indeed, the OGP is moving towards a greater engagement with legislatures and legislative process).

An analysis of the EU Referendum – even though it engaged more voters than the last general election in directly addressing a substantive policy issue – would find it far from a model of open government. The abuse of statistics by all sides during the campaign, and the lack of attention given to substantive debate, represent both a failure of political integrity from campaigners and a failure of effective scrutiny from the media.

The subsequent position of the new administration, interpreting the referendum vote without any process of dialogue with the parliament, let alone the wider public, demonstrates a retreat from ideas of open government, rather than an engagement with them. Rather than addressing social divisions through dialogue, government appears to be pursuing policies that deepen them.

At the same time, it is worth noting how the successes and failures of open government may have contributed to the referendum result. The Open Government Declaration talks of harnessing new “technologies to make more information public in ways that enable people to both understand what their governments do and to influence decisions”. When it comes to British citizens understanding the EU, it is clear that much of the information that was available was not making it through, and few felt able to influence decisions at this supranational level. However, it is also clear that where data was available – on budgets, spending, regulation and more – information alone was not enough to produce better informed citizens: simply adding data does not make for more informed debate, or more open governance.

This raises some big questions for open governance advocacy in the UK: whether future action should engage with bigger political questions of rights, representation, media ethics and political integrity? Or whether these issues are part of a separate set of agendas to revisit, rethink and revitalise our democratic systems: whilst open government should remain focussed on administrative reforms for a more efficient, effective and responsive state?

(To explore answers to these questions I would argue there is much UK open government advocates can gain from approaching the Open Government Partnership as a space to learn from countries where the rights, freedoms and values we have often taken for granted are only recently won, or are still being fought for.)

(2) From open data to data infrastructures

When we look at the explicit open government commitments of the UK in the most recent OGP National Action Plan, it is clear that the focus is firmly on the administrative side of open government. And very much on data and technology.

Of the 13 commitments, 8 are explicitly about data – representing the strong bias towards open data that has been present throughout the UK’s engagement in the Open Government Partnership. Because the most recent National Action Plan was published at the UK Anti-Corruption Summit in May, there is also a strong emphasis on data for anti-corruption. Aside from a process commitment to ongoing dialogue with civil society in developing the action plan itself, and a focussed set of engagement plans around how private sector and civil society actors should be involved in shaping the data that government publishes, there is little in the latest NAP on participation.

What is interesting to note, however, is the move away from general commitments on open data, towards a much more explicit focus on specific datasets, and the idea of building data infrastructures. The commitments cover publishing data on beneficial ownership for companies bidding on UK government contracts or owning property in the UK, gathering more structured extractives industry reporting, adopting the Open Contracting Data Standard for procurement data, publishing government grants data using the 360Giving standard, and working towards standardised elections data. I’ll return shortly to the global nature of these commitments, and the infrastructure being constructed.

Effectively implemented, disclosure of this data will qualify as open government on the ‘output side’. However, the challenge remains to articulate in future versions of the National Action Plan the ‘input’ side of these initiatives. We have yet to set out in the NAP the feedback loops through which, for example, a commitment to Open Contracting can be made not just about publishing data on contracts, but also about creating more opportunities and mechanisms for citizen engagement and oversight of contracting.

(3) Process and product

In a somewhat meta-step, the OGP National Action Plan itself is often considered an interesting act of open government. Since the second NAP, there has been close engagement between officials and a civil society network to shape the plan. The plan itself was published with joint forewords from the Minister for Cabinet Office and the Civil Society Network. This kind of ‘open policy making’ process has been explored as a template for a number of other policy areas, although with less concrete joint outputs.

Increasingly I’m reflecting on whether to date this process has found the right balance between government and civil society collaboration on core reforms, and the risk of civil society being co-opted: securing formal practices of transparency, but doing little to translate that into accountability.

When asked ‘What has been done lately in the UK that could qualify as open government?’, I would like to be able to answer with stories of civil society actions that use transparency to call government more to account – yet I’m struggling to identify such stories to share.

Question 2: What are the main open government issues in the UK and what are their political impacts?

There are three main trends I want to focus on in addressing this second question: privatisation and private sector accountability, anti-corruption and devolution.

Privatisation and private sector accountability

Firstly, privatisation. More and more we are seeing public services contracted out to the private sector. Instead of lines of management accountability from elected officials, through administrators, to front-line staff, the relationship between governments and front-line services has become a contractual one. This changes structures of accountability and governance. If a service is not being delivered adequately, the levers which government can pull to fix it may be constrained by the terms of a contract. That makes opening up contracts a particularly important area of open government right now.

On a related note, it is worth noting that many reforms, such as open contracting, extractives industry transparency, beneficial ownership transparency, and the emerging area of tax transparency, are not solely about holding governments to account for the use of public funds, but also extend to scrutinising the actions of the private sector, and trying to rebalance power between citizens, state and private sector.

Anti-corruption

Secondly, as noted above, under Prime Minister Cameron, the UK Government placed a strong emphasis on the anti-corruption agenda, and on open government as a key mechanism in the fight against corruption. Whether the political will to prioritise this will continue under the new administration remains to be seen. However, a number of components of an anti-corruption data infrastructure are being put in place – albeit with major gaps when it comes to lobbying transparency, or structured data on interest and asset declarations.

Devolution

Thirdly, devolution. Although it might not be evident from the current UK OGP National Action Plan, many areas of the open government agenda are devolved responsibilities. By the end of the year we hope to see separate commitments in the NAP from Scotland, Wales and Northern Ireland. Scotland is one of the sub-national OGP pilot regions. And as more UK cities and regions get elected mayors, there is scope to build on the sub-national OGP model in future. However, with the regions of the UK controlled by different political parties, this raises interesting challenges for the open government agenda: whether it will lead to a politicisation of the agenda, or a further focus on depoliticised technical reforms is yet to be seen.

Question 3: Is there a specific UK perspective on open government?

Reading both forewords to the latest UK OGP National Action Plan, it appears to me that within the UK there are multiple perspectives on open government. Whilst the then Minister for Cabinet Office placed the emphasis on using “data to make decisions, and where a free society, free markets and the free flow of information all combine to drive our success in the 21st century”, the Civil Society foreword talks of open government as a “building block for a more democratic, equal and sustainable society.”

However, when looked at alongside other OGP member nations, we can make a number of observations about UK angles on open government:

  1. The UK appears to be part of a cluster of technically advanced countries, which are making strong links between agendas for open government and agendas for technical reform inside the state.

  2. Civil Society advocacy on open government in the UK has been strongly influenced by international NGOs based in London/the UK, with a dual focus on domestic reform, and on the role of the UK as a key actor in global initiatives, such as IATI and the EITI. The government has also placed strong emphasis on international initiatives, such as beneficial ownership transparency and open contracting.

  3. This emphasis on international initiatives, and the recent link between the OGP National Action Plan and the May anti-corruption summit, has led to a particular focus on data standards and interoperability. This highlights the global component of open government: building data infrastructures that can be used to secure accountability in an era of highly mobile global finance, and in which sovereign states cannot fight corruption within their borders alone.

How this compares to the emphasis of initiatives in France, and the other countries to be considered on tomorrow’s panel, is something I’m looking forward to exploring.

Join the Open Data Services Co-operative team…


[Summary: Developer and analysts jobs with a workers co-operative committed to using open data for social change]

Over the last year I’ve had the immense pleasure of getting to work with a fantastic group of colleagues creating ‘Open Data Services Co-operative‘. It was created in recognition of the fact that creating and using distributed open data requires ongoing labour to develop robust platforms, support publishers to create quality data, and help users access data in the forms they need.

Over the last year we’ve set up ongoing support systems for the Open Contracting Data Standard and 360Giving, and have worked on projects with NCVO, NRGI and the Financial Transparency Coalition, amongst others – focussing on places where open data can make a real difference to governance, accountability and participation. We’ve been doing that with a multidisciplinary team, combining the capacity to build and maintain technical tools, such as CoVE, which drives an accessible validation and data conversion tool, with a responsive analysis team – able to give bespoke support to data publishers and users.

And we’ve done this as a workers co-operative, meaning that staff who joined the team back in October last year are now co-owners of the company, sharing in setting its direction and in deciding how we use co-op resources to provide a good working environment and further our social goals. A few weeks back we were able to vote on our first profit distributions, committing to become corporate sponsors of a number of software projects and social causes we support.

The difference that being organised as a co-op makes was particularly brought home to me at a recent reunion of my MSc course, where it seemed many others have graduated into a start-up economy which is all about burning through staff, with people spending months rather than years in jobs, and constantly dealing with stressful workloads. Operating as a workers co-op challenges us to create good and sustainable jobs.

And that’s what we’re trying to do again now: recruiting two new people to join the team.

We’re looking for a developer to join us, particularly someone with experience of managing technical roadmaps for projects; and we’re looking for someone to work with us as an analyst – combining a focus on policy and technology, and ready to work on outreach and engagement with potential users of the open data standards we support.

You can find more details of both roles over on the Open Data Services website, and applications are open until 14th March.

Practical Participation – 2016 update

Although this year my primary focus is on PhD write-up, I’m still keeping active with the two companies I’ve co-founded. So, a couple of updates – firstly, the annual Practical Participation newsletter, compiled by Jennie Fleming.

Practical Participation 2016 – looking back and looking ahead

We wanted to get in contact with you with our annual update of what we are doing at Practical Participation. Tim, Bill and Jennie are a team with complementary skills, backgrounds and interests, and have extensive experience in a range of areas. If you are interested in working with any of the team, do please contact them personally to discuss how we can work together.

Over the last year, Tim has been working on incubating and spinning out a couple of open data and engagement projects. 2015 started with Practical Participation acting as host to newly formed technical help-desk services for the Open Contracting Data Standard, and 360 Giving standard for philanthropy data. Those are now transferred over to a new workers co-operative, Open Data Services, where a growing team is supporting work to open up data for public good across the world. Tim also spent a large part of 2015 working on the International Open Data Conference (IODC) in Ottawa. His main role was facilitating an ‘action track’ – running a participatory process to bring together threads of discussion at the conference into a global roadmap for open data collaboration. The result is available online here. He’s continued to support the Global Open Data for Agriculture and Nutrition  (GODAN) network, working with the team on inputs to the Open Government Partnership (OGP) Summit in Mexico last October, and on a range of other research projects. 

Also at the OGP Summit, Tim co-hosted a workshop on the development resources to support the implementation of the recently launched International Open Data Charter. Over 2016 he’ll be working with the Open Data Charter network to support the creation of ‘Sector Packages’, showing key ways open data can make a difference in anti-corruption, amongst other places. You can contact Tim at tim@practicalparticipation.co.uk.

Jennie’s been continuing her evaluation work with the Children’s Society Young Carers in Focus project and Enthusiasm youth projects. She also undertook a review of the work of the Youth Team at Trafford Housing Trust. The review considered the activity and impact of the youth team to learn from the previous years’ work and to inform proposals for the future. With Practical Participation associate Sarah Hargreaves and young advisor Ruth Taylor she undertook research for Heritage Lottery Fund about youth involvement in decision making about a new grant programme they are establishing. The report reviewed current good practice in the area and set out models for how young people could be meaningfully involved in the decision making processes for the grants.

Jennie is also providing non-line managerial support to the Youth and Community team at Valley House and the youth worker at The Nottingham Refugee Forum. With CRAE’s merger with Just for Kids Law she is now a Trustee of Just for Kids Law and the Chair of the Policy and Strategic Litigation sub-committee. If you think Jennie’s skills and expertise could be useful to you – do get in contact with her jennie@practicalparticipation.co.uk

Bill’s main focus is supporting four local communities as part of the resident-led, Lottery-funded Big Local programme of £1m over ten years in 150 neighbourhoods in England. Each area has built a dynamic community conversation as the foundation for their plans. Each is seeing great outcomes for residents across a range of priorities they themselves have set. It’s an exciting and replicable model of community empowerment and control.

Work relating to Children and Young People Improving Access to Psychological Services has taken Bill back to Rotherham, with a focused piece of work scoping children and young people’s voice and influence in mental health services and offering a practical model to help map and plan improvement. 

Bill remains involved with a number of youth services and especially with youth work within the Housing sector, facilitated by Joe Rich of Affinity Sutton. Youth services continue their freefall with occasional glimmers of hope as in Brighton where the worst of cuts were averted in part we hope through our support to young people’s voices being heard.

Work with young carers has continued through partnership with The Children’s Society and Carers Trust in the Making a Step Change programme. Working across a number of local authorities is a reminder of the power of the voice of experience, coupled with vital leadership and management.

And finally, Bill continues as a practice educator with three social work students this last year, helping retain the vital focus on quality of direct inter-personal practice.

A workshop on open data for anti-corruption

Last autumn the International Open Data Charter was launched, putting forward six key principles for governments to adopt to pursue an ‘open by default’ approach to key data.

However, for the Charter to have the greatest impacts requires more than just high-level principles. As the International Open Data Conference explored last year, we need to focus on the application of open data to particular sectors to secure the greatest impact. That’s why a stream of work has been emerging to develop ‘Sector Packages’ as companion resources to the International Open Data Charter.

The first of these is focussing on anti-corruption. I’ve been supporting the Technical Working Group of the Charter to sketch a possible outline for this in this consultation document, which was shared at the G20 meeting last year. 

To build on that we’ve just launched a call for a consultant to act as co-ordinating author for the package (closing date 28th Jan – please do share!), and a few weeks back I had the chance to drop into a mini-workshop at DFID to share an update on the Charter, and talk with staff from across the organisation about potential areas that the anti-corruption package should focus on. 

Slides from the talk are below, and I’ve jotted down some brief notes from the discussions as well. 

Datasets of interest

In the session we posed the question: “What one dataset would you like to see countries publish as open data to address corruption?”

The answers highlight a range of key areas for exploration as the anti-corruption sector package is developed further. 

1) Repository of registered NGOs and their downstream partners – including details of their bank accounts, board, constitution and rules etc.

This kind of data is clearly useful to a donor wanting to understand who they are working with, or considering whether to work with potential partners. But it is also a very challenging dataset to collate and open. Firstly, many countries either lack comprehensive systems of NGO registration, or have thresholds that mean many community-level groups will be non-constituted community associations rather than formally registered organisations. Secondly, there can be risks associated with NGO registration, particularly in countries with shrinking civil society space, and where lists of organisations could be used to increase political control or restrictions on NGO activity. 

Working these issues through will require thought about where to draw the lines between open and shared data, and how organisations can pool their self-collected intelligence about partner organisations, whilst avoiding harms, and avoiding the creation of error-prone datasets where funding isn’t approved because ‘computer says no’.

2) Data on the whole contracting chain – particularly for large infrastructure projects.

Whilst isolated pockets of data on public contracts often exist, effort is needed to join these up, giving a view of the whole contracting chain. The Open Contracting Data Standard has been developing the technical foundations for this to happen, and work is now beginning to explore how it might be used to track the implementation of infrastructure projects. In the UK, civil society are calling for the next Open Government National Action Plan to include a commitment to model contract clauses that encourage contractors to disclose key information on subcontracting arrangements, implementation milestones and the company’s beneficial owners.

3) Identifying organisations and the people involved

The challenge of identifying the organisations who are counterparty to a funding transaction or a contract is not limited to NGOs. Identifying government agencies, departments, and the key actors within them, is also important. 

Government entity identifiers are a challenge the International Aid Transparency Initiative has been grappling with for a few years now. Could the Open Data Charter process finally move forward some agreement on the core data infrastructure describing the state that is needed as a foundation for accountability and anti-corruption open data action?

4) Beneficial ownership

Beneficial ownership data reveals who is ultimately in control of, and reaping the profits from, a company. The UK is due to publish an open beneficial ownership register for the first time later this year – but there is still much to do to develop common standards for joined-up data on beneficial ownership. For example, the UK register will capture ownership information in bands at 25%, 50% and 75%, where other countries are exploring either publication of detailed ownership percentages, or publication using other, non-overlapping bands. Without co-ordination on interoperability, the potential impacts of beneficial ownership open data may be much harder to secure.
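By way of illustration, here is a small sketch of the interoperability problem (the band labels and function name are my own invention, an approximation rather than the register’s official wording): once exact ownership percentages are collapsed into broad bands, quite different stakes become indistinguishable, and registers using different bands cannot easily be compared.

```python
def uk_style_band(ownership_pct):
    """Map an exact ownership percentage to a broad band, loosely modelled
    on the UK register's 25%/50%/75% thresholds (labels are illustrative)."""
    if ownership_pct > 75:
        return "more than 75%"
    if ownership_pct > 50:
        return "more than 50% but not more than 75%"
    if ownership_pct > 25:
        return "more than 25% but not more than 50%"
    return "not recorded"  # below the disclosure threshold

# Two quite different ownership stakes land in the same band,
# so the banded register cannot distinguish them:
print(uk_style_band(26.0))  # more than 25% but not more than 50%
print(uk_style_band(49.9))  # more than 25% but not more than 50%
```

A register publishing exact percentages, or one using different band boundaries, would record these two cases differently – which is exactly why co-ordination on interoperability matters.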

5) Localised datasets and public expenditure tracking data

In thinking about the ‘national datasets’ that governments could publish as part of a sector package for anti-corruption, it is also important to not lose sight of data being generated and shared at the local level. There are lots of lessons to learn from existing work on Public Expenditure Tracking which traces the disbursement of funds from national budgets, through layers of administration, down to local services like schools. With the funding flows posted on posters on the side of school buildings there is a clearer answer to the question: “What does this mean to me?”, and data is more clearly connected with local citizen empowerment. 

Where next

Look out for updates about the anti-corruption sector package on the Open Data Charter website over the first part of 2016.

Following the money: preliminary remarks on IATI Traceability

[Summary: Exploring the social and technical dynamics of aid traceability: let’s learn what we can from distributed ledgers, without thinking that all the solutions are to be found in the blockchain.]

My colleagues at Open Data Services are working at the moment on a project for UN Habitat around traceability of aid flows. With an increasing number of organisations publishing data using the International Aid Transparency Initiative data standard, and increasing amounts of government contracting and spending data available online, the theory is that it should be possible to track funding flows.

In this blog post I’ll try and think aloud about some of the opportunities and challenges for traceability.

Why follow funds?

I can envisage a number of hypothetical use cases for the traceability of aid.

Firstly, donors want to be able to understand where their money has gone. This is important for at least three reasons:

  1. Effectiveness & impact: knowing which projects and programmes have been the most effective;
  2. Understanding and communication: being able to see more information about the projects funded, and to present information on projects and their impacts to the public to build support for development;
  3. Addressing fraud and corruption: identifying leakage and mis-use of funds.

Traceability is important because the relationship between donor and delivery is often indirect. A grant may pass through a number of intermediary organisations before it reaches the ultimate beneficiaries. For example, a country donor may fund a multi-lateral fund, which in turn commissions an international organisation to deliver a programme, and they in turn contract with country partners, who in turn buy in provision from local providers.

Secondly, communities where projects are funded, or where funds should have been received, may want to trace funding upwards: understanding the actors and policy agendas affecting their communities, and identifying when funds they are entitled to have not arrived (see the investigative work of Follow The Money Nigeria for a good example of this latter use case).

Short-circuiting social systems

It is important to consider the ways in which work on the traceability of funds potentially bypasses, ‘routes around’ or disrupts* (*choose your own framing) existing funding and reporting relationships – allowing donors or communities to reach beyond intermediaries to exert such authority and power over outcomes as they can exercise.

Take the example given above. We can represent the funding flows in a diagram as below:

[Diagram: funding flows downwards through the chain]

But there are more than one-way flows going on here. Most of the parties involved will have some sort of reporting responsibility to those giving them funds, and so we also have reporting flows in the opposite direction:

[Diagram: reporting flows upwards through the chain]

By the time reporting gets to the donor, it is unlikely to include much detail on the work of the local partners or providers (indeed, the multilateral, for example, may not report specifically on this project, just on the development co-operation in general). The INGO may even have very limited information about what happens just a few steps down the chain on the ground, having to trust intermediary reports.

In cases where there isn’t complete trust in this network of reporting, and there are no clear mechanisms to ensure each party is exercising its responsibility for the effective, corruption-free use of resources by the next party down, the ability to see through this chain – tracing funds, and directly assessing impacts and risks – is clearly desirable.

Yet – it also needs to be approached carefully. Each of the relationships in this funding chain is about more than just passing on some clearly defined packet of money. Each party may bring specific contextual knowledge, skills and experience. Enabling those at the top of a funding chain to leap over intermediaries doesn’t inevitably have a positive impact: particularly given what the history of development co-operation has to teach about how power dynamics and the imposition of top-down solutions can lead to substantial harms.

None of this is a case against traceability – but it is a call for consideration of the social dynamics of traceability infrastructures, and of how to ensure contextual knowledge is kept accessible when it becomes possible to traverse the links of a funding chain.

The co-ordination challenge of traceability

Right now, the IATI data standard has support for traceability at the project and transaction level.

  • At the project level the related-activity field can be used to indicate parent, child and co-funded activities.
  • At the transaction level, data on incoming funds can specify the activity-id used by the upstream organisation to identify the project the funds come from, and data on outgoing funds can specify the activity-id used by the downstream organisation.

This supports both upwards and downwards linking (e.g. a funder can publish the identifier of the funded project, or a recipient can publish the identifier of the donor project that is providing funds), but is based on explicit co-ordination and the capture of additional data.
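As a rough sketch of how the transaction-level link looks in practice (the element and attribute names follow the IATI activity standard, but the identifiers here are invented for illustration), a recipient’s record of incoming funds can cite the upstream publisher’s activity identifier:

```python
import xml.etree.ElementTree as ET

# An invented IATI-style fragment: the recipient organisation records
# an incoming transaction and cites the provider's activity identifier,
# creating the upwards link in the traceability chain.
fragment = """
<iati-activity>
  <iati-identifier>GB-NGO-1-PROJECT-42</iati-identifier>
  <transaction>
    <transaction-type code="1"/><!-- incoming funds -->
    <provider-org provider-activity-id="GB-GOV-1-GRANT-7">
      <narrative>Example Donor</narrative>
    </provider-org>
  </transaction>
</iati-activity>
"""

activity = ET.fromstring(fragment)
upstream = activity.find(".//provider-org").get("provider-activity-id")
print(upstream)  # GB-GOV-1-GRANT-7
```

A traceability tool would then look for an activity published elsewhere with the identifier `GB-GOV-1-GRANT-7` – and, as the next section notes, nothing in the distributed model guarantees that both sides agree.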

As a distributed approach to the publication of open data, there are no consistency checks in IATI to ensure that providers and recipients agree on identifiers, and often there can be practical challenges to capture this data, not least that:

  • A) Many of the accounting systems in which transaction data is captured have no fields for upstream or downstream project identifier, nor any way of conceptually linking transactions to these externally defined projects;
  • B) Some parties in the funding chain may not publish IATI data, or may do so in forms that do not support traceability, breaking the chain;
  • C) The identifier of a downstream project may not be created at the time an upstream project assigns funds – exchanging identifiers can create a substantial administrative burden;

At the last IATI TAG meeting in Ottawa, this led to some discussion of other technologies that might be explored to address issues of traceability.

Technical utopias and practical traceability

Let’s start with a number of assorted observations:

  • UPS can track a package right around the world, giving me regular updates on where it is. The package has a barcode on, and is being transferred by a single company.
  • I can make a faster-payments bank transfer in the UK with a reference number that appears in both my bank statements and the recipient’s statements, travelling between banks in seconds. Banks leverage their trust, and use centralised third-party providers as part of exchanging data and reconciling funding transfers.
  • When making some international transfers, the money has effectively disappeared from view for quite a while, with lots of time spent on the phone to sender, recipient and intermediary banks to track down the funds. Trust, digital systems and reconciliation services function less well across international borders.
  • Transactions on the BitCoin Blockchain are, to some extent, traceable. BitCoin is a distributed system. (Given any BitCoin ‘address’ it’s possible to go back into the public ledger and see which addresses have transferred an amount of bitcoins there, and to follow the chain onwards. If you can match an address to an identity, the currency, far from being anonymous, is fairly transparent*. This is the reason for BitCoin mixer services, designed to remove the trackability of coins.)
  • There are reported experiments with using blockchain technologies in a range of different settings, including for land registries.
  • There’s a lot of investment going into FinTech right now – exploring ways to update financial services.
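The BitCoin-style idea of following a chain of transfers through a public ledger can be sketched with a toy example (the ledger entries here are entirely invented, and real ledgers deal in addresses rather than named parties):

```python
# Toy public ledger: each entry records a transfer between parties.
ledger = [
    {"from": "donor", "to": "multilateral", "amount": 100},
    {"from": "multilateral", "to": "ingo", "amount": 60},
    {"from": "ingo", "to": "local-partner", "amount": 40},
]

def trace_forward(party):
    """Follow funds onwards from a party, hop by hop, through the ledger."""
    chain = []
    current = party
    while True:
        # Find the next onward transfer from the current party (if any).
        nxt = next((t for t in ledger if t["from"] == current), None)
        if nxt is None:
            break
        chain.append(nxt)
        current = nxt["to"]
    return chain

for step in trace_forward("donor"):
    print(f'{step["from"]} -> {step["to"]}: {step["amount"]}')
```

If every transfer were recorded in one shared ledger, this kind of traversal would be trivial – the interesting question, picked up below, is what breaks when records are distributed across publishers instead.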

All of this can lead to some excitement about the potential of new technologies to render funding flows traceable. If we can trace parcels and BitCoins, the argument goes, why can’t we have traceability of public funds and development assistance?

Although I think such an argument falls down in a number of key areas (which I’ll get to in a moment), it does point towards a key component missing from the current aid transparency landscape – in the form of a shared ledger.

One of the reasons IATI is based on a distributed data publishing model, without any internal consistency checks between publishers, is prior experience in the sector of submitting data to centralised aid databases. However, peer-to-peer and block-chain like technologies now offer a way to separate out co-ordination and the creation of consensus on the state of the world, from the centralisation of data in a single database.

It is at least theoretically possible to imagine a world in which the data a government publishes about its transactions is only considered part of the story, and in which the recipient needs to confirm receipt in a public ledger to complete the transactional record. Transactions ultimately have two parts (sending and receipt), and open (distributed) ledger systems could offer the ability to layer an auditable record on top of the actual transfer of funds.

However (as I said, there are some serious limitations here), such a system is only an account of the funding flows, not the flows themselves (unlike BitCoin), which still leaves space for corruption through maintaining false information in the ledger. Although trusted financial intermediaries (banks and others) could be brought into the picture as parties responsible for confirming transactions, it’s hard to envisage how adoption of such a system could be brought about over the short and medium term (particularly globally). Secondly, although transactions between organisations might be made more visible and traceable in this way, the transactions inside an organisation remain opaque. Working out which funds relate to which internal and external projects is still a matter of the internal business processes of the organisations involved in the aid delivery chain.

There may be other traceability systems we should be exploring as inspiration for making aid and public money traceable. What my brief look at BitCoin leads me to reflect on is the potential role, over the short term, of reconciliation services that can, at the very least, report on the extent to which different IATI publishers are mutually confirming each other’s information. Over the long term, a move towards more real-time transparency infrastructures, rather than periodic data publication, might open up new opportunities – although with all sorts of associated challenges.
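In its simplest form, a reconciliation service of this kind might check whether each outgoing transaction reported by one publisher has a matching incoming transaction reported by its counterpart. The sketch below uses invented, simplified records (real IATI reconciliation would have to handle currencies, dates, and fuzzy identifier matching):

```python
# Invented, simplified transaction records from two publishers.
donor_outgoing = [
    {"to_activity": "NGO-A-1", "amount": 500},
    {"to_activity": "NGO-B-9", "amount": 250},
]
recipient_incoming = [
    {"activity": "NGO-A-1", "amount": 500},
]

def reconcile(outgoing, incoming):
    """Split outgoing transactions into those confirmed by a matching
    incoming record, and those left unconfirmed."""
    confirmed, unconfirmed = [], []
    for tx in outgoing:
        match = any(
            r["activity"] == tx["to_activity"] and r["amount"] == tx["amount"]
            for r in incoming
        )
        (confirmed if match else unconfirmed).append(tx)
    return confirmed, unconfirmed

ok, missing = reconcile(donor_outgoing, recipient_incoming)
print(len(ok), len(missing))  # 1 1
```

Even a crude report like this – “1 of 2 reported disbursements confirmed by the recipient” – would give users a sense of how far a pair of publishers’ data can be trusted to join up.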

Ultimately – creating traceable aid still requires labour to generate shared conceptual understandings of how particular transactions and projects relate.

How much is enough?

Let’s loop back round. In this post (as in many of the conversations I’ve had about traceability), we started with some use cases for traceability; we saw some of the challenges; we got briefly excited about what new technologies could do to provide traceability; and we saw the opportunities, but also the many limitations. Where do we end up then?

I think the important thing is to loop back to our use cases, and to consider how technology can help with, but not completely solve, the problems set out. Knowing which provider organisations might have been funded through a particular donor’s money could be enough to help target investigations in cases of fraud. And knowing all the funders who have a stake in projects in a particular country, sector and locality can be enough for communities on the ground to do further research to identify the funders they need to talk to.

Rather than searching after a traceability data panopticon, can we focus traceability-enabling practices on breaking down the barriers to specific investigatory processes?

Ultimately, in the IATI case, getting traceability to work at the project level alone could be a big boost. But doing this will require a lot of social coordination, as much as technical innovation. As we think about tools for traceability, thinking about tools that support this social process may be an important area to focus on.

Where next

Steven Flower and the rest of the Open Data Services team will be working over the coming weeks on a deeper investigation of traceability issues – with the goal of producing a report and toolkit later this year. They’ve already been digging into IATI data to look for the links that exist so far, and building on past work testing the concept of traceability against real data.

Drop in comments below, or drop Steven a line, if you have ideas to share.

Three cross-cutting issues that UK data sharing proposals should address

[Summary: an extended discussion of issues arising from today’s UK data sharing open policymaking workshop]

I spend a lot of time thinking and writing about open data. But, as has often been said, not all of the data that government holds should be published as open data.

Certain registers and datasets managed by the state may contain, or be used to reveal, personally identifying and private information – justifying strong restrictions on how they are accessed and used. Many of the datasets governments collect, from tax records to detailed survey data collected for policy making and monitoring, fall into this category. However, the principle that data collected for one purpose might have a legitimate use in another context still applies to this data: one government department may be able to pursue its public task with data from another, and there are cases where public benefit is to be found in sharing data with academic and private sector researchers and innovators.

However, in the UK, the picture of which departments, agencies and levels of government can share which data with others (or outside of the state) is complex to say the least. When it comes to sharing personally identifying datasets, agencies need to rely on specific ‘legal gateways’, with certain major data holders such as HM Revenue and Customs bound by restrictive rules that may require explicit legislation to pass through parliament before specific data shares are permitted.

That’s ostensibly why the UK Government has been working for a number of years now on bringing forward new data sharing proposals – creating ‘permissive powers’ for cross-departmental and cross-agency data sharing, increasing the ease of data flows between national and local government, whilst increasing the clarity of safeguards against data mis-use. Up until just before the last election, an Open Policy Making process, modelled broadly on the UK Open Government Partnership process was taking place – resulting in a refined set of potential proposals relating to identifiable data sharing, data sharing for fraud reduction, and use of data for targeted public services. Today that process was re-started, with a view to a public consultation on updated proposals in the coming months.

However, although much progress has been made in refining proposals based on private sector and civil society feedback, judging from the range of specific and somewhat disjointed proposals for new arrangements presented in today’s workshop, the process is a way off from providing the kinds of clarification of the current regime that might be desirable. Missing from today’s discussions were clear cross-cutting mechanisms to build trust in government data sharing, and to establish the kind of secure data infrastructures that are needed for handling personal data sharing.

I want to suggest three areas that need to be more clearly addressed – all of which were raised in the 2014/15 Open Policymaking process, but which have been somewhat lost in the latest iterations of discussion.

1. Maximising impact, minimising the data shared

One of the most compelling cases for data sharing presented in today’s workshop was work to address fuel poverty by automatically giving low-income pensioners rebates on their fuel bills. Discussions suggested that since the automatic rebate was introduced, 50% more eligible recipients are getting the rebates – with the biggest beneficiaries being the most vulnerable, who were far less likely to apply to receive the rebates they were entitled to. With every degree drop in the temperature of a pensioner’s home correlating with increased hospital admissions, the argument for allowing the data share – and indeed for establishing a framework by which the current arrangements could be extended to others in fuel poverty (the current powers are, in some way, specific to pensioners’ data) – is clear.

However, this case is also one where the impact is accompanied by a process that results in minimal data actually being shared from government to the private companies who apply the rebates to individuals’ energy bills. All that is shared, in response to energy companies’ queries for each candidate on their customer list, is a flag for whether the individual is eligible for the rebate or not.

This kind of approach does not require the sharing of a bulk dataset of personally identifying information – it requires a transactional service that can provide the minimum certification required to indicate, with some reasonable level of confidence, that an individual has some relevant credentials. The idea of privacy protecting identity services which operate in this way is not new – yet the framing of the current data sharing discussion has tended to focus on ‘sharing datasets’ instead of constructing processes and technical systems which can be well governed, and still meet the vast majority of use-cases where data shares may be required.
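As a sketch of this transactional pattern (all names and fields here are hypothetical, not the actual service): the energy company submits a candidate it already knows from its own customer list, and the service answers only yes or no – the underlying government dataset never leaves the department.

```python
# Hypothetical in-government dataset of eligible pensioners.
# This never leaves the department; only flags are returned.
_eligible_pensioners = {
    ("SMITH", "1948-03-02", "GL1 1AA"),
    ("JONES", "1945-11-17", "GL2 2BB"),
}

def rebate_flag(surname: str, dob: str, postcode: str) -> bool:
    """Answer a single eligibility query with a boolean flag.

    The caller learns only whether this one individual (whom they
    already hold details of) is eligible - no bulk data is shared.
    """
    return (surname.upper(), dob, postcode.upper()) in _eligible_pensioners

# An energy company checks its own customers, one record at a time:
print(rebate_flag("Smith", "1948-03-02", "GL1 1AA"))  # True
print(rebate_flag("Brown", "1950-01-01", "GL3 3CC"))  # False
```

A real service would of course need rate limiting, authentication and audit logging to prevent the query interface itself being used to reconstruct the dataset – but the point stands that the governance problem is far smaller than for a bulk transfer.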

For example, when the General Records Office representative today posed the question of “In what circumstances would it be appropriate to share civil registration data (e.g. Birth, Adoption, Marriage and Death) information?”, the use-cases that surfaced were all to do with verification of identity: something that could be achieved much more safely by providing a digital service than by handing over datasets in bulk.

Indeed, approached as a question of systems design, rather than data sharing, the fight against fraud may in practice be better served by allowing citizens to digitally access their own civil registration information and to submit that as evidence in their transactions with government, helping narrow the number of cases where fraud may be occurring – and focussing investigative efforts more tightly, instead of chasing after problematic big data analysis approaches.

(Aside #1: As one participant in today’s workshop insightfully noted, there are thousands of valid marriages in the UK which are not civil marriages and so may not be present in Civil Registers. A big data approach that seeks to match records of who is married to records of households who have declared they are married, to identify fraudulent claims, is likely to flag these households wrongly, creating new forms of discrimination. By contrast, an approach that helps individuals submit their evidence to government allows such ‘edge cases’ to be factored in – recognising that many ‘facts’ about citizens are not easily reduced to simple database fields, and that giving an account of oneself to the state is a performative act which should not be too readily sidelined.)

(Aside #2: The case of civil registers also illustrates an interesting and significant qualitative difference between public records, and a bulk public dataset. Births, marriages and deaths are all ‘public events’: there is no right to keep them private, and they have long been recorded in registers which are open to inspection. However, when the model of access to these registers switches from focussed inspection, looking for a particular individual, to bulk access, they become possible to use in new ways – for example, creating a ‘primary key’ of individuals to which other data can be attached, eroding privacy in ways which were not possible when each record needed to be explored individually. The balance of benefits and harms from this qualitative change will vary from dataset to dataset. For example, I would strongly advocate the open sharing of company registers, including details of beneficial owners, both because of the public benefit of this data, and because registering a company is a public act involving a certain social contract. By contrast, I would be more cautious about the full disclosure of all civil registers, due to the different nature of the social contract involved, and the greater risk of vulnerable individuals being targeted through intentional or unintentional misuse of the data.)

All of which is a long way to say:

  • Where the cross-agency or cross-departmental use-cases for access to a particular can be reduced to sharing assertions about individuals, rather than bulk datasets, this route should be explored first.

This does not remove the need for governance of both access and data use. However, it does ease the governance of access, and audit logs of access to a service are easier to manage than audit logs of what users in possession of a dataset have done.

Even the sharing of a ‘flag’ that can be applied to an individual’s data record needs careful thought: those in receipt of such flags need to ensure they govern the use of that data carefully. For example, as one participant today noted, pensioners have raised fears that energy companies may use a ‘fuel poverty’ flag in their records to target them with advertising. Ensuring that analysts in the company do not later stumble upon the rebate figures in invoices, and feed them into profiling of customers, will require very careful data governance – and it is not clear that companies’ practices are robust enough to protect against this right now.

2. Algorithmic transparency

Last year the Detroit Digital Justice Coalition produced a great little zine called ‘Opening Data’ which takes a practical look at some of the opportunities and challenges of open data use. They look at how data is used to profile communities, and how the classifications and clustering approaches applied to data can create categories that may be skewed and biased against particular groups, or that reinforce rather than challenge social divides (see pg 30 onwards). The same issues apply to data sharing.

Whilst current data protection legislation gives citizens a right to access and correct information about themselves, the algorithms used to process that data, and to derive analysis from it, are rarely shared or open to adequate scrutiny.

In the process of establishing new frameworks for data sharing, the algorithms used to process data should be brought into view as much as the datasets themselves.

If, for example, someone is offered a targeted public service, or targeted in a fraud investigation, there is a question to be explored of whether they should be told which datasets, and which algorithms, led to them being selected. This, and associated transparency, could help to surface otherwise unseen biases which might lead to particular groups being unfairly targeted (or missed) by analysis. Transparency is no panacea, but it plays an important role as a safeguard.

3. Systematic transparency of sharing arrangements

On the theme of transparency, many of the proposals discussed today mentioned oversight groups, Privacy Impact Assessments, and publication of information on either those in receipt of shared data, or those refused access to datasets – yet across the piece no systematic framework for this was put forward.

This is an issue Reuben Binns and I wrote about in 2014, putting forward a proposal for a common standard for disclosure of data sharing arrangements that, in its strongest form, would require:

  • Structured data on origin, destination, purpose, legal framework and timescales for sharing;
  • Publication of Privacy Impact Assessments and other associated documents;
  • Notices published through a common venue (such as the Gazette) in a timely fashion;
  • Consultation windows where relevant before a power comes into force;
  • Sharing to only be legally valid when the notice has been published.
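As an illustration of the first requirement, a machine-readable disclosure notice under such a standard might look something like the record below. All field names and values here are invented for the example; an actual standard would need to define these fields through consultation.

```python
import json

# A hypothetical structured disclosure record for one data sharing
# arrangement, covering the elements listed above.
notice = {
    "origin": "HM Revenue and Customs",
    "destination": "Department for Work and Pensions",
    "purpose": "Verification of income for benefit eligibility",
    "legal_framework": "Hypothetical Data Sharing Act 2016, s.1",
    "start_date": "2016-06-01",
    "end_date": "2018-06-01",
    "privacy_impact_assessment": "https://example.org/pia/1234",
}

# Published as JSON through a common venue, such records could be
# aggregated into a single searchable register of all active shares.
print(json.dumps(notice, indent=2))
```

The value of the structure is less in any single notice than in aggregation: with every share published to one venue in one format, it becomes trivial to ask which shares involve a given department, which have lapsed, and which were never consulted on.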

Without such a framework, we are likely to end up with the current confused system in which no-one knows which shares are in place, how they are being used, and which legal gateways are functioning well or not. With a scattered set of spreadsheets and web pages listing approved sharing, citizens have no hope of understanding how their data is being used.

If only one of the above issues could be addressed in the upcoming consultation on data sharing, then I certainly hope it would be this one: the missing piece of a robust common framework for putting the transparency principles of data sharing into practice.

Towards a well governed infrastructure?

Ultimately, the discussion of data sharing is a discussion about one aspect of our national data infrastructure. There has been a lot of smart work going on, both inside and outside government, on issues such as identity assurance, differential privacy, and identifying core derived datasets which should be available as open data to bypass need for sharing gateways. A truly effective data sharing agenda needs to link with these to ensure it is neither creating over-broad powers which are open to abuse, nor establishing a new web of complex and hard to operate gateways.

Further reading

My thinking on these issues has been shaped in part by inputs from the following:

Data & Discrimination – Collected Essays

White House Report on Big Data, and associated papers/notes from The Social, Cultural & Ethical Dimensions of “Big Data.” conference

A distributed approach to co-operative data

[Summary: rough notes from a workshop on cooperative sector data.]

Principle 6 of the International Co-operative Alliance calls for ‘co-operation amongst co-operatives’. Yet, for many co-ops, finding other worker-owned businesses to work with can be challenging. Although there are over 7,000 co-operatives in the UK, and many more worldwide, it can be hard to find out much about them.

This was one of the key drivers behind a convening at the Old Music Hall in Oxford just before Christmas where cooperators from the Institute for Solidarity Economics, Open Data Services Co-operative, Coops UK and Transformap gathered to explore opportunities for ‘Principle 6 Data’: open data to build up a clearer picture of the co-operative economy.

We started out articulating different challenges to be explored through the day, including:

  • Helping researchers better understand the co-operative sector. With co-ops employing thousands of people, and co-operatives adding £37bn to the UK economy last year, having a clearer picture of where they operate, what they do and how they work is vital. Yet information is scarce. For researchers at the Institute for Solidarity Economics, there is a need to dig beyond headline organisation types to understand how the activities of organisations contribute to worker owned, social impact enterprise.

  • Support trade between co-operatives. For example, earlier this year when we were planning a face-to-face gathering of Open Data Services Co-op we tried to find co-operatively run venues to use, and we’ve been trying to understand where else we could support co-ops in our supply chain. Whilst Coops UK provide a directory of co-operatives, it is focussed on business-to-consumer, not business-to-business information.

  • Enabling distributed information management on co-ops. Right now, the best dataset we have for the UK comes from Coops UK, the membership body for the UK sector, who hold information on 7000 or so co-operatives, built up over the years from various sources. They have recently released some of this as open data, and are keen to build on the dataset in future. Yet if it can only be updated via Coops UK this creates a bottleneck to the creation of richer data resources.

My Open Data Services colleague, Edafe Onerhime, did some great work looking at the existing Coops UK dataset, which is written up here, and Dan from ISE explored ways of getting microformat markup into the Institute for Solidarity Economics website to expose more structured data about the organisation, including the gender profile of the workforce. We also took a look at whether data from the .coop domain registry might provide insights into the sector, and set about exploring whether microformats were already in use on any of the websites of UK co-operatives.

Building on these experiments, we came to an exploration of potential social, organisational and technical challenges ahead if we want to see a distributed approach to greater data publication on the co-op sector. Ultimately, this boiled down to a few key questions:

  • How can co-operatives be encouraged to share more structured data on their activities?

  • How can the different data needs of different users be met?

  • How can that data be fed into different data-driven projects for research, or cooperative collaboration?

There are various elements to addressing these questions.

There is a standards element: identifying the different kinds of things about co-operatives that different users may want to know about, and looking for standards to model these. For example, alongside the basic details of registered organisations and their turnover collected for the co-operative economic report, business-to-business use cases may be interested in branch locations and product/service offerings from co-ops, and solidarity economics research may be interested in the different value commitments a co-operative has, and details of its democratic governance. We looked at how certifications, from formal Fairtrade certifications for products of a co-op, to social certifications where someone a user trusts vouches for an organisation, might be an important part of the picture also.

For many of the features of a co-operative that are of interest, common data standards already exist, such as those provided by schema.org. Although these need to be approached critically, they provide a pragmatic starting point for standardisation; an example using the Coops UK Co-operative Economy dataset can be seen here.
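To make this concrete, a co-operative could describe itself using schema.org’s Organization vocabulary and embed the result as JSON-LD in its website. The sketch below (organisation details invented; a real page would place the JSON inside a `<script type="application/ld+json">` tag) builds such a description as a Python dict:

```python
import json

# Hypothetical JSON-LD description of a co-operative, using the
# schema.org Organization vocabulary. "@context" and "@type" are
# JSON-LD keywords; the remaining properties are schema.org terms.
coop = {
    "@context": "http://schema.org",
    "@type": "Organization",
    "name": "Example Workers Co-operative",
    "url": "http://example.coop",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Oxford",
        "addressCountry": "GB",
    },
    "numberOfEmployees": 12,
}

jsonld = json.dumps(coop, indent=2)  # ready to embed in a web page
```

Features specific to co-operatives – democratic governance, value commitments, certifications – are not well covered by schema.org, which is exactly where the critical approach and possible extension of the vocabulary comes in.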

There is a process element of working out how data that co-operatives might publish using common standards will be validated, aggregated and made into shared datasets. Here, we looked at how an annual process of data collection, such as the UK Co-operative Economy report might bootstrap a process of distributed data publishing.

Imagine a platform where co-operatives are offered three options to provide their data into the annual co-operative economy report:

  1. Fill in a form manually;

  2. Publish a spreadsheet of key data to your own website;

  3. Embed JSON-LD / microformat data in your own website.

Although options 2 and 3 are more technically complex, they can provide richer, more open and re-usable data, and a platform can explain the advantages of taking these extra steps. Moving co-operatives from option 1 to option 2 can be bootstrapped by having the form-driven process generate a spreadsheet for co-ops to publish on their own sites at a set location, and then encouraging them to update those sheets in year 2.
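The bootstrapping step from option 1 to option 2 could be as simple as the sketch below: the platform takes a manually submitted form response and turns it into a CSV file the co-op can host on its own site. The field names and hosting location are invented for illustration.

```python
import csv
import io

# A manually submitted form response (option 1). Field names invented.
form_response = {
    "name": "Example Workers Co-operative",
    "turnover_gbp": 250000,
    "members": 12,
}

# Generate the CSV the co-op is asked to host at a set location on its
# own website, e.g. http://example.coop/open-data/coop-economy.csv
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=list(form_response.keys()))
writer.writeheader()
writer.writerow(form_response)
csv_text = buffer.getvalue()
```

In year 2, the aggregator simply fetches the file from the known location, so a co-op that keeps its own sheet up to date no longer needs to fill in the form at all.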

With good interaction design, and a feedback loop that helps validate data quality and show the collective benefits of providing additional data, such a platform could provide the practical component of a campaign for open data publication and use by co-ops.

This points to the crucial **social element:** building community around the open data process, and making sure it is not a technical exercise, but one that meets real needs.

Did we end the day with a clear picture of where next for co-op sector data? Not yet. But it was clear that all the groups participating will continue to explore this space into 2016, and we’ll be looking for more opportunities to collaborate together.

Principles for responding to mass surveillance and the draft Investigatory Powers Bill

[Summary: notes written up on the train back from Paris & London, and following a meeting with Open Rights Group focussing on the draft Investigatory Powers Bill]

It can be hard to navigate the surveillance debate. On the one hand, whistleblower revelations, notably those from Edward Snowden, have revealed the way in which states are accumulating mass communications data, creating new regimes of deeply intrusive algorithmic surveillance, and unsettling the balance of power between citizens, officials and overseers in politics and the media. On the other, as recent events in Paris, London, the US and right across the world have brought into sharp focus, there are very real threats to life and liberty posed by non-state terrorist actors – and meeting the risks posed must surely involve the security services.

Fortunately, rather than following the common pattern of rushed legislative proposals after terrorist attacks, the UK has, after the attacks in Paris, kept to the planned timetable for debate of the proposed Investigatory Powers Bill.

The Bill primarily works to put on a legal footing many of the actions that surveillance agencies have already been engaged in when it comes to bulk data collection and bulk hacking of services (equipment interference, and obtaining data). But the Bill also proposes a number of further extensions of powers, including provisions to mandate storage of ‘Internet Connection Records’ – branded as creating a ‘snoopers charter’ in media debates because of the potential for law enforcement and other government agencies to gain access to the detailed information in individuals’ web browsing histories.

Page 33 of the draft includes a handy ‘Investigatory Powers at a Glance’ table, setting out who will have access to Communications Data, powers of Interception and Bulk Datasets – and what the access and oversight processes might be.

PowersAtAGlance

Reading through the case for new powers put in the preamble to the Bill, it is important to critically unpack the claims made. For example, point 47 notes that “From a sample of 6025 referrals to the Child Exploitation and Online Protection Command (CEOP) of the NCA, 862 (14%) cannot be progressed”. The document extrapolates from this “a minimum of 862 suspected paedophiles, involved in the distribution of indecent imagery of children, who cannot be identified without this legislation”, yet this is premised on the proposed storage of Internet Connection Records being a ‘magic bullet’ that would secure the investigation of all these suspects. In reality, the number may be much lower.

Yet, getting drawn into a calculus of costs and benefits, trading off the benefits of protecting one group against the harms of surveillance to another, is a tricky business, and unlikely to create a well-reasoned surveillance debate. As a society, we are generally not very good at such calculations where risks are involved. And there will always be polarisation between those who weight one of the apparently opposing goods (security/liberty?) particularly highly.

The alternative to this cost/benefit calculus is to develop a response based on principles. Principles we can check against evidence, but clear guiding principles nonetheless.

Here’s my first attempt at four principles to consider in exploring how to respond to the Investigatory Powers Bill:

(1) Data minimisation without suspicion. We should collect and store the minimum possible amount of data about individuals where there is no reason to suspect the threat of harm to others, or of serious crime.

This point builds upon both principles and pragmatism. Individuals should be innocent until proven guilty, and space for individual freedom of thought and action respected. Equally, surveillance services need more signal, not more noise.

When it comes to addressing terrorism, creating an environment in which whole communities feel subject to mass surveillance is an entirely counterproductive strategy: undermining rather than promoting the liberal values we must work to protect.

(2) Data maximisation with suspicion. Where there is suspicion of individuals posing a threat, or of serious crime, then proportionate surveillance is justified, and should be pursued.

As far as I understand, few disagree with targeted surveillance. Unlike mass surveillance, targeted approaches can be intelligence-led rather than algorithmically led, and more tightly connect information collection, analysis and consideration of the actions that can be taken against those who pose threats to society.

(3) Strong scrutiny. Sustained independent oversight of secret services is hard to achieve – but is vital to ensure targeted surveillance capabilities are used responsibly, and to balance the power this gives to those who wield them.

The current Investigatory Powers Bill includes notable scrutiny loopholes, in which once issued, a Warrant can be modified to include new targets without new review and oversight.

(4) A safe Internet. Bulk efforts to undermine encryption and Internet security are extremely risky. Our societies rely upon a robust Internet, and it is important for governments to be working to make the network stronger for all.


Of course, putting principles into practice involves trade-offs. But identifying principles is an important starting point for a deeper debate.

Do these principles work for you? I’ll be reflecting more on whether they capture enough to provide a route through the debate, and what their implications are for responding to the Investigatory Powers Bill in the coming months.

(P.S. If you care about the future of the Investigatory Powers Bill in the UK, and you are not already a member of the Open Rights Group – do consider joining to support their work as one of very few dedicated groups focussing on promoting digital freedoms in this debate.

Disclosure: I’m a member of the ORG Advisory Council)