Reflections on two reunions

[Cross-posted from my Connected by Data weeknotes]

Are we capable of governing ourselves?

Asking this of the emerging online world was one of the driving questions that led Charlie Nesson (and colleagues) to establish the Berkman-Klein Center (BKC) for Internet and Society at Harvard University 25 years ago. It was a question that was posed again on Thursday morning to the gathered community of current and former fellows, affiliates and faculty in Cambridge, MA to celebrate the 25th anniversary of BKC.

While we gathered in celebration, and with much joy at reconnecting with old colleagues and meeting new ones, we also gathered, I hazard to say, with a degree of weariness carried in from our respective work. For a community that has, collectively, been studying, building, litigating, organising and advocating in the hope that we might develop, deploy and govern technologies for the public good, 2023 offers a tough reality check.

Online discourse feels more fractious. Digital power more concentrated. Our climate in crisis. And our diagnosis of priorities for action less unified. Openness, conventionally the go-to tool of the place that gave birth to Creative Commons, feels less effective, and even counterproductive, in confronting and challenging the power dynamics at play. And as much as BKC has operated for over two decades as an impressive institutional hack, seeking to share out some of the privilege embodied in Harvard to unusual suspects, the institutional dynamics and legacy relationships can at times feel in the way of, rather than in service of, transformational scholarship and action.

I could tell perhaps a similar story of the start of my week spent with open government advocates in Tallinn as we hosted a fringe workshop on the sides of the Open Government Partnership. Although I didn’t get the chance to stay for the full OGP Summit, I had the opportunity to reconnect with old colleagues, and meet some new ones. Amongst the energy from meeting together, we had little illusion that the stakes, and the challenges, have rarely been so big. And there was ample recognition that ad-hoc tools of openness alone do not deliver the kinds of accountability, reallocations of power and social justice interventions so desperately needed.

Are we capable of governing ourselves?

Perhaps we first have to (re-)learn how to facilitate…

Our workshop venue in Tallinn was a little unconventional. Often used as a yoga studio rather than a meeting space, the room had a mix of armchairs, sofas and futons in place of the usual tables and chairs. As I was starting to unpack flip-charts and post-it notes, Veronica Cretu, who had kindly arrived early to help us set up, took one look at the arrangement, with two rows of chairs and obstructed sight-lines, and set about re-arranging to help our conversations flow. Into this re-arranged room we brought in findings from background interviews, expressed, thanks to the insight of Helena Hollis, through large-printed mind-maps that invited discussion, addition and elaboration. And we tried to structure the day to flow from building shared understanding of a problem, to considering solutions, and sketching potential actions. By the end of the day we had sketched out some promising policy proposals.

In Cambridge, within the imposing setting of Harvard Law School, the BKC team demonstrated characteristic thoughtful action to disrupt the formality, and bring the whimsy and warmth for which the center has a quiet reputation (a lot more costumes than I see at most academic gatherings for starters…). But as the programme itself progressed through a series of conventional panel sessions, I started to wonder if we were missing the critical role of facilitation in fully unlocking the wisdom, ideas and energy of the assembled group? And why?

This year is also ten years since my own fellowship year at BKC, and so the ‘class of 2013’ were well represented, recalling our months of weekly ‘fellows hours’ (two hours in duration), working groups and collaborations. I was reminded by Amy Johnson that one of the particular interventions we made as a group was to create a small ‘facilitation team’ that would hold a weekly standing meeting to help the rotating host of the next week’s fellows hour to think of a creative and engaging way to lead their topic (not for us a weekly seminar: think instead hands-on workshops, learning games and curated discussions). There was an important recognition here of content presentation and facilitation as distinct roles, and of chances to gather not just as moments for a transfer of thinking, but as generative moments of deeper exchange.

Over drinks in Tallinn, I had the chance to briefly reflect with Alex Howard on OGP Summits past. One notable feature of early summits was the national or regional sessions: slots on the agenda to share what had made it onto the open government National Action Plans of different states, and, crucially, where governments and civil society shared the room and stage in talking about them. These have dropped from the agenda in recent years, and with that has gone a critical moment around which to structure other conversations in the run up to, and follow up from, a summit. Formal panels have their place.

For many years, BKC had an active unmoderated discussion e-mail list, linking current and former fellows, affiliates and faculty. In the last few years, the list was paused, after a number of heated discussions and clashes, and it has not (yet?) returned. I don’t think it’s an exaggeration to say that, at least for those not in residence in Cambridge, the listserv felt like the heart of the BKC community – albeit not a perfect forum. Yet, its role was not addressed from the stage over two days of discussion about the past, future and present of the Center, and while I heard some side discussions exploring what kinds of moderation a renewed list might need, I heard little on the question of facilitation: an active and justice-oriented process to build conversations across the community.

Effective facilitation frequently also involves thinking about the who as well as the how of discussion. Part of this is about boundaries and curation. The Open Government Partnership has a set of criteria for which countries can, or cannot, be formal members, although civil society from non-members are not excluded. One of the reasons I suspect a self-organised listserv has not emerged in place of BKC’s discussion list is that the gatekeeping function of a BKC affiliation helps draw the boundaries of an official list, in a way that a self-organised list could not easily achieve. Equally, facilitation also needs to consider invitation, space-making, and sharing of power. In our workshop in Tallinn, we tried to get a balance of countries, sectors and disciplines represented, while looking for enough shared interest and background to enable productive conversation.

Are we capable of governing ourselves?

The BKC reunion did not just start with the question. The opening panel also pointed towards one possible answer. For too long, it was argued, we’ve been looking at how to build governance top-down. Instead, one panellist urged, we need to start from the bottom up. I heard this as an educator’s answer: to start from cultivating the virtue and capability of the individual student, and to build out from this to collectives, organisations and states that can be self-governing. It is a good answer. Yet, bringing in the lens from our work at Connected by Data, I wonder if we also need to start from community and collective. When we focus on governing from the group up, then the facilitation of sense-making and position-taking both within, and between, groups becomes central to the question.

I suspect my own approach remains deeply informed by early engagement with informal education, youth work and group work. We learn in groups. Not open, unbounded groups. But intentional groups, with the right mix of support and challenge. The groups we belong to also confer privileges, interests, burdens and oppressions: and the extra labour that some collectives and individuals face in joining dialogue, particularly racialised or majority world citizens, needs recognition[1].

This is something I’ll be reflecting on more as we come to think about resources for deliberative engagement on data and AI governance: thinking about how inputs for dialogue might be received (and responded to) differently in groups brought together by sortition, or by those connected through solidarity and shared experience.

Are we capable of governing ourselves?

Given the challenges and crisis we face as both local and global communities, we can only keep trying to work that out.

[1] I must acknowledge here my own imperfect journey with being attentive enough to the impacts of privilege in my work and practice, or being clear and direct enough as an ally of communities facing inequity and injustice.

Fostering open ecosystems around data: The role of data standards, infrastructure and institutions

[Summary: an introduction to data standards, their role in development projects, and critical perspectives for thinking about effective standardisation and its social impacts]

I was recently invited to give a presentation as part of a GIZ ICT for Agriculture talk series, focussing on the topic of data standards. It was a useful prompt to try and pull together various threads I’ve been working on around concepts of standardisation, infrastructure, ecosystem and institution – particularly building on recent collaboration with the Open Data Institute. As I wrote out the talk fairly verbatim, I’ve reproduced it in blog form here, with images from the slides. The slides with speaker notes are also shared here. Thanks to Lars Kahnert for the invite and opportunity to share these thoughts.

Introduction

In this talk I will explore some of the ways in which development programmes can think about shaping the role of data and digital platforms as tools of economic, social and political change. In particular, I want to draw attention to the often dry-sounding world of data standards, and to highlight the importance of engaging with open standardisation in order to avoid investing in new data silos, to tackle the increasing capture and enclosure of data of public value, and to make sure social and developmental needs are represented in modern data infrastructures.

Mind map of Tim's work, and logos of organisations worked with, including: Open Data Services, Open Contracting Partnership, Open Ownership, IATI, 360 Giving, Open Data in Development Countries, Open Data Barometer, Land Portal and UK Open Government Civil Society Network.

By way of introduction to myself: I’ve worked in various parts of data standard development and adoption – from looking at the political and organisational policies and commitments that generate demand for standards in the first place, through designing technical schema and digging into the minutiae of how particular data fields should be defined and represented, to supporting standard adoption and use – including supporting the creation of developer and user ecosystems around standardised data. 

I also approach this from a background in civic participation, and with a significant debt to work in Information Infrastructure Studies, and currently unfolding work on Data Feminism, Indigenous Data Sovereignty, and other critical responses to the role of data in society.

This talk also draws particularly on some work in progress developed through a residency at the Rockefeller Bellagio Centre looking at the intersection of standards and Artificial Intelligence: a point I won’t labour – as I fear a focus on ‘AI’ – in inverted commas – can distract us from looking at many more ‘mundane’ (also in inverted commas) uses of data: but I will say at this point that when we think about the datasets and data infrastructures our work might create, we need to keep in mind that these will likely end up being used to feed machine learning models in the future, and so what gets encoded, and what gets excluded from shared datasets is a powerful driver of bias before we even get to the scope of the training sets, or the design of the AI algorithms themselves. 

Standards work is AI work.

Enough preamble. To give a brief outline: I’m going to start with a brief introduction to what I’m talking about when I say ‘data standards’, before turning to look at the twin ideas of data ecosystems and data infrastructures. We’ll then touch on the important role of data institutions, before asking why we often struggle to build open infrastructures for data.

An introduction to standards

Each line in the image above is a date. In fact – they are all the same date. Or, at least, they could be. 

Showing that 12/10/2021 can be read in the US as 10th December, or in Europe as 12th October.

Whilst you might be able to conclusively work out most of them are the same date, and we could even write computer rules to convert them, because the way we write dates in the world is so varied, some remain ambiguous.

Fortunately, this is (more or less) a solved problem. We have the ISO8601 standard for representing dates. Generally, developers present ‘ISO Dates’ in a string like this:

2021-10-12T00:00:00+00:00

This has some useful properties. You can use simple sorting to get things in date order, you can include the time or leave it out, you can provide timezone offsets for different countries, and so-on. 
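To make the sorting property concrete, here is a minimal sketch (the timestamps below are invented for illustration): because the most significant parts of the date come first, ISO strings with a common offset sort chronologically even when treated as plain text.

```python
# Minimal sketch: ISO 8601 strings with the same offset sort chronologically
# even under plain text sorting, because year comes before month before day.
timestamps = [
    "2021-10-12T09:30:00+00:00",
    "2020-01-05T23:15:00+00:00",
    "2021-02-28T07:00:00+00:00",
]
print(sorted(timestamps))
# ['2020-01-05T23:15:00+00:00', '2021-02-28T07:00:00+00:00', '2021-10-12T09:30:00+00:00']
```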

If everyone exchanging data converts their dates into this standard form, the risk of confusion is reduced, and a lot less time has to be spent cleaning up the data for analysis.

It’s also a good example of a building block of standardisation for a few other reasons:

  • The ISO in the name stands for ‘International Organization for Standardization’: the vast international governance and maintenance effort behind this apparently simple standard, which was first released in 1988, and last revised just two years ago.
  • The ‘8601’ is the standard number. There are a lot of standards (though not all ISO standards are data standards)
  • Use of this one standard relies on lots of other standards: such as the way the individual numbers and characters are encoded when sent over the Internet or other media, and even standards for the reliable measurement of time itself.
  • And, like many standards, ISO 8601 is, in practice, rarely fully implemented. For example, whilst developers talk of using the ISO standard, what they actually rely on is the profile defined in RFC3339, which leaves out lots of things in the ISO standard, such as imprecise dates. As a rule of thumb: people copy implementations rather than read specifications.

Diagram showing ISO8601 as an interchange standard

ISO8601 is what is called an interchange standard – that is, most systems don’t internally store data in ISO8601 when they want to process it, and it would be a cumbersome form to make everyone write dates in for every purpose. Instead, the standard sits in the middle – limiting the need to understand the specific quirks of each origin of data, and allowing receivers to streamline the import of data into their own models and methods.
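To sketch that ‘standard in the middle’ idea in code (the source names and their local conventions below are invented for the example), each origin only has to know how to turn its own convention into ISO 8601, and every receiver only has to understand one format:

```python
from datetime import datetime

# Illustrative only: each data source declares its local date convention once,
# and converts to the ISO 8601 interchange form at the point of exchange.
SOURCE_FORMATS = {
    "us_partner": "%m/%d/%Y",  # here 12/10/2021 means 10 December 2021
    "eu_partner": "%d/%m/%Y",  # here 12/10/2021 means 12 October 2021
}

def to_iso(source: str, raw_date: str) -> str:
    """Convert a source-specific date string into the ISO 8601 interchange form."""
    return datetime.strptime(raw_date, SOURCE_FORMATS[source]).date().isoformat()

print(to_iso("us_partner", "12/10/2021"))  # 2021-12-10
print(to_iso("eu_partner", "12/10/2021"))  # 2021-10-12
```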

And to introduce the first critical issue of standardisation – as actually implemented – it constrains what can be expressed: sometimes for good, and sometimes problematically.

Worked example: (a) The event took place in early November ; (b) 2021-11 fails validation; (c) To enter data, user adds arbitrary day: 2021-11-01; (d) Data from multiple sources can be analysed (all dates standardised) but the data might mislead: “The 1st of the Month is the best day to run events.”

For example, RFC3339 omits imprecise dates. That is, if you know that something happened in November 2021, but not on which day, your data will fail validation if you leave out the day. So to exchange data using the standard you are forced to make up a day – often the 1st of the month. A paper form would have no such constraint: users would just leave the ‘day’ box blank. The impact may be nothing, or, if you are trying to exchange data from certain places where, for legacy reasons, the day of the month of an event is not easily known, that data could end up distorting later analysis.
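A small sketch of how that plays out (assuming a validator that insists on a full YYYY-MM-DD date; the code is illustrative rather than taken from any particular implementation):

```python
from datetime import datetime

# Illustrative only: a strict day-level date rule rejects a month-only value,
# so the publisher is pushed into inventing a day just to pass validation.
def parse_full_date(value: str):
    return datetime.strptime(value, "%Y-%m-%d").date()

try:
    parse_full_date("2021-11")               # 'early November' - the day is genuinely unknown
except ValueError:
    made_up = parse_full_date("2021-11-01")  # arbitrary day added to satisfy the schema
    print(made_up)  # 2021-11-01 - downstream analysis can no longer tell it was invented
```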

So – even these small building blocks of standardisation can have significant uses – and significant implications. But, when we think of standardisation in, for example, a data project to better understand supply chains, we might also be considering standards at the level of schema – the agreed way to combine lots of small atoms of data to build up a picture of some actions or phenomena we care about.

Diagrams showing table-based and tree-structured data.

A standardised data schema can take many forms. In data exchange, we’re often not dealing with relational database schemas, but with schemas that allow us to share an extract from a database or system.

That extract might take the form of tabular data, where our schema can be thought of as the common header row in a spreadsheet accompanied by definitions for what should go in each column.

Or it might be a schema for JSON or XML data: where the schema may also describe hierarchical relationships between, say a company, its products, and their locations. 
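By way of a quick illustration (the field names and values here are invented), the same extract might look like this as flat table rows, or as a tree-structured document:

```python
# Illustrative only: one extract, two shapes. A tabular schema describes columns;
# a JSON schema can also describe nesting (company -> products -> sites).
tabular_rows = [
    {"company_name": "Acme Farms", "product_name": "Apples", "site_code": "SITE-001"},
    {"company_name": "Acme Farms", "product_name": "Pears",  "site_code": "SITE-002"},
]

tree_document = {
    "company": {
        "name": "Acme Farms",
        "products": [
            {"name": "Apples", "sites": [{"code": "SITE-001"}]},
            {"name": "Pears",  "sites": [{"code": "SITE-002"}]},
        ],
    }
}
```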

At a very simplified and practical level, a schema usually needs three things:

  • Field names (or paths)
  • Definition
  • Validation rules

Empty table with column headings: site_code, commodity_name, commodity_code, quantity and unit

For example, we might agree that whenever we are recording the commodities produced at a given site we use the column names site_code, commodity_name, commodity_code, quantity and unit.

We then need human readable definitions for what counts as each. E.g:

  • site_code – (string) a unique identifier for the site.
  • commodity_name – (string) the given name for a primary agricultural product that can be bought and sold. Note: this is used for labelling only, and might be over-ridden by values taken from the commodity_code reference list.
  • commodity_code – (string; enum) a value from the approved codelist that uniquely identifies the specific commodity.
  • quantity – (number) the number of units of the commodity.
  • unit – (string; enum) the unit the quantity is measured in, from the list kg for kilograms or tonne for metric tonne. If quantities were collected in an unlisted unit, they should be converted before storage.

And we need validation rules to say that if we find, for example, a non-number character in the quantity column – the data should be rejected – as it will be tricky for systems to process.
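As a hedged sketch of what those validation rules might look like in code (the field names follow the hypothetical schema above; the checks and messages are illustrative, not drawn from any real standard):

```python
# Illustrative row-level validation for the hypothetical commodity schema.
ALLOWED_UNITS = {"kg", "tonne"}

def validate_row(row: dict) -> list:
    """Return a list of validation errors (an empty list means the row passes)."""
    errors = []
    if not row.get("site_code"):
        errors.append("site_code is required")
    if not row.get("commodity_code"):
        errors.append("commodity_code must come from the approved codelist")
    try:
        float(row.get("quantity", ""))
    except (TypeError, ValueError):
        errors.append("quantity must be a number")
    if row.get("unit") not in ALLOWED_UNITS:
        errors.append("unit must be one of: " + ", ".join(sorted(ALLOWED_UNITS)))
    return errors

print(validate_row({"site_code": "SITE-001", "commodity_code": "080810",
                    "quantity": "12.5", "unit": "kg"}))      # []
print(validate_row({"site_code": "SITE-001", "commodity_code": "080810",
                    "quantity": "12 bags", "unit": "kg"}))   # ['quantity must be a number']
```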

Note that three of these potential columns point us to another aspect of standardisation: codelists.

A codelist is a restricted set of reference values. You can see a number of varieties here:

The commodity code is a conceptual codelist. In designing this hypothetical standard we need to decide how to be sure apples are apples, and oranges are oranges. We could invent our own list of commodities, but often we would look to find an existing source.

We could, for example, use ‘080810’, taken from the World Customs Organisation’s Harmonised System codes.

Or we could use c_541, taken from AGROVOC – the FAO’s agricultural vocabulary.

The choice has a big impact: aligning us with export applications of the data standard, or perhaps more with agricultural science uses of the data.
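A minimal sketch of that choice (the two codes are the ones mentioned above; the mapping structure and helper function are invented for illustration):

```python
# Illustrative only: the same commodity concept expressed against two codelists.
# Which list a standard adopts shapes which existing systems can reuse the data most easily.
COMMODITY_CODELISTS = {
    "hs": {"apples": "080810"},      # WCO Harmonised System (trade and export uses)
    "agrovoc": {"apples": "c_541"},  # FAO AGROVOC (agricultural science uses)
}

def to_commodity_code(name: str, scheme: str = "hs") -> str:
    return COMMODITY_CODELISTS[scheme][name.lower()]

print(to_commodity_code("Apples"))             # 080810
print(to_commodity_code("Apples", "agrovoc"))  # c_541
```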

site_code, by contrast, is not about concepts – but about agreed identifiers for real-world entities, locations or institutions. 

Without agreement across different datasets on how to refer to a farm, factory or other site, integrating data and data exchange can be complex: but maintaining these reference lists is also a big job, a potential point of centralisation and power, and an often neglected piece of data infrastructure.

For example: The Open Apparel Registry has developed unique production site identifiers by combining different existing datasets

Now – I could spend the rest of this talk digging deeper into just this small example – but let’s surface to make the broader points.

  1. Data standards are technical specifications of how to exchange data

The data should be collected in the way that makes the most sense at a local level (subject to the ability to then fit into a standard) – and should be presented in the way that meets users’ needs. But when data is coming from many sources, and going to many destinations, a standard is a key tool of collaboration.

The standard is not the form.

The standard is not the report.

But, well designed, the standard is the powerful bit in the middle that ties together many different forms and sources of data, with many different reports, applications and data uses.

  2. Designing standards is a technical task

It needs an understanding of the systems you need to interface with.

  3. Designing standards goes beyond the technical tasks

It needs an awareness of the impact of each choice – each alignment – each definition and each validation rule.

There are a couple of critical dimensions to this: thinking about whose knowledge is captured and shared through a standard, and about who gains positions of power by being selected as the source of definitions and codelists.

At a more practical level, I often use the following simple economic model to consider who bears the costs of making data interoperable:

Diagram showing 'Creator-->Intermediary-->User'

In any data exchange there is a creator, there may be an intermediary, and there is a data user.

The real value of standards comes when you have multiple creators, multiple users, and potentially, multiple intermediaries.

If data isn’t standardised, either an intermediary needs to do lots of work cleaning up the data….

Diagram showing different 'colours' of data from three creators, made interoperable by the work of an intermediary, to support three users.

…or each user has all that work to do before they can spend any time on data use and analysis.

The design choices made in a standard distribute the labour of making data interoperable across the parties in a data ecosystem. 

There may still be some work for users and intermediaries to do even after standardisation. 

For example, if you make it too burdensome for creators to map their data to a standard, they may stop providing it altogether. 

Diagram showing three creators, two of which have decided not to provide standardised data.

Or if you rely too much on intermediaries to clean up data, they may end up creating paywalls that limit use.

Or, in some cases, you may be relying on an exploitative ‘click-work’ market to clean data, that could have been better standardised at source.

So – to work out where the labour of making data interoperable should and can be located involves more than technical design.

You need to think about the political incentives and levers to motivate data creators to go to the effort of standardising their data.

You need to think about the business or funding model of intermediaries.

And you need to understand the wide range of potential data users: considering carefully whose needs are served simply, and who will end up still having to carry out lots of additional data collection and cleaning before their use-cases are realised. 

But, hoping I’ve not put you off with some of the complexity here, my fourth and final general point on standards is that: 

4. Data standards could be a powerful tool for the projects you are working on.

And to talk about this, let’s turn to the twin concepts of ecosystem and infrastructure.

Data ecosystems

Diagram showing: Decentralised and centralised networks. Supporting text: Standards can support decentralisation and innovation.

When the Internet and World Wide Web emerged, they were designed as distributed systems. Over recent decades, we’ve come to experience a web that is much more based around dominant platforms and applications. 

This ‘application thinking’ can also pervade development programmes – where, when thinking about problems that would benefit from better data exchange, funders might leap to building a centralised platform or application to gather data.

Diagram showing closed and open networks. Supporting text: Approaches open to decentralisation can support greater generativity, freedom and resilience

But – this ‘build it and they will come’ approach has some significant problems: not least that it only scales so far, it creates single points of failure, and risks commercial capture of data flows. 

By contrast, an open standard can describe the data we want to see better shared, and then encourage autonomous parties to create applications, tools, support and analysis on top of the standard. 

It can also give data creators and users much greater freedom to build data processes around their needs, rather than being constrained by the features of a central platform.

In early open data thinking, this was referred to as the model of ‘let many flowers bloom’, and as ‘publish once, use everywhere’ – but over time we’ve seen that, just like natural ecosystems – creating a thriving data ecosystem can require more intentional and ongoing action.

Diagrams showing natural and human-built 'ecosystems'.

Just like their natural counterparts, data ecosystems are complex and dynamic, and equilibrium can be hard to reach and fragile. Keystone species can support an ecosystem’s growth, whilst a local resource drought harming some key actors could lead to a cascading ecosystem collapse.

To give a more concrete example – a data ecosystem around the International Aid Transparency Initiative has grown up over the last decade – with hundreds of aid donors big and small sharing data on their projects in a common data format: the IATI Standard. ‘Keystone species’ such as D-Portal, which visualises the collected data, have helped create an instant feedback loop for data publishers to ‘see’ their data, whilst behind the scenes, a Datastore and API layer feeds all sorts of specific research and analysis projects, and operational systems – which on their own would have had little leverage to convince data publishers to send them standardised data.

However, elements of this ecosystem are fragile: much data enters through a single tool – AidStream – which over time has come to be the tool of choice for many NGOs, and which, if unavailable, would diminish the freshness of data. Many users accessing data rely on ‘the datastore’, which aggregates published IATI files and acts as an intermediary so users don’t need to download data from hundreds of different publishers. If the datastore is down, many usage applications may fail. Recently, when new versions of the official datastore hit technical trouble, an older open source version was brought back, initially by volunteers.

Ultimately, this data ecosystem is more resilient than it would otherwise be because it’s built on an open standard. Even if the data entry tool or datastore become inaccessible, new tools can be rapidly plugged in. But that cannot just be taken for granted: data ecosystems need careful management just as natural ones do.

Support standards over apps

The biggest point I want to highlight here is a design one. Instead of designing platforms and applications, it’s possible to design for, and work towards, thriving data ecosystems. 

That can require a different approach: building partnerships with all those who might have a shared interest in the kind of data you are dealing with, building political commitments to share data, investing in the technical work of standard development, and fostering ecosystem players through grants, engagement and support.

Building data ecosystems through standardisation can crowd-in investment: having a significant multiplier effect. 

Screenshot of Open Contracting Partnership worldwide map from https://www.open-contracting.org

For example, most of the implementations of the Open Contracting Data Standard have not been funded by the Open Contracting Partnership which stewards its design – yet they incorporate particular ideas the standard encodes, such as providing linked information on tender and contract award, and providing clear identifiers for the companies involved in procurement.

For the low millions of dollars invested in maintaining OCDS since its first half-million dollar, year-long development cycle, many, many more millions from a myriad of sources have gone into building bespoke and re-usable tools, supported by for-profit and non-profit business models right across the world.

And innovation has come from the edges, such as the adoption of the standard in Ukraine’s corruption-busting e-procurement system, or the creation of tools using the standard to analyse paper-based procurement systems in Nigeria.

As a side note here – I’m not necessarily saying ‘build a standard’: often, the standards you need might almost be there already. Investing in standardisation can be a process of adaptation and engagement to improve what already exists.

And whilst I’m about to talk a bit more about some of the technical components that make standards for open data work well, my own experience helping to develop the ecosystem around public procurement transparency with the Open Contracting Data Standard has underscored for me the vitally important human community building element of data ecosystem building. This includes supporting governments and building their confidence to map their data into a common standard: walking the narrow line between making data interoperable at a global level, and responding to the diverse situations that different countries found themselves in, in terms of legacy data systems, relevant regulation and political will.

Infrastructures

Icon for concept of Infrastructure

I said earlier that it is productive to pair the concept of an ecosystem with that of an infrastructure. If ecosystems contain components adapted to each niche, an infrastructure, in data terms, is something shared across all sites of action. We’re familiar with physical infrastructures like the road and rail networks, or the energy grid. These can provide useful analogies for thinking about data infrastructure. Well managed, data infrastructures are the underlying public good that enables an ecosystem to thrive.

Some components of data infrastructure: Schema & documentation; Validation tools; Reference implementations & code; Reference data; Data registries; Aggregators and APIs

In practice, the data infrastructure of a standard can involve a number of components:

  • There’s the schema itself and its documentation.
  • There might be validation tools that tell you if a dataset conforms with the standard or not.
  • There might be reference implementations and examples to work from.
  • And there might be data registries, or data aggregators and APIs that make it easier to get a corpus of standardised data.

Just like our physical infrastructure, there are choices to make in a data infrastructure over whose needs it will be designed around, how it will be funded, and how it will be owned and maintained.

For example, if the data ecosystem you are working with involves sensitive data, you may find you need to pair an open standard with a more centralised infrastructure for data exchange, in which standardised data is available through a clearing-house which manages who does or does not have access to the data, or which assures that anonymisation and privacy-protecting practices have taken place.

By contrast, a data standard to support bi-lateral data exchange along a supply chain may need a good data validator tool to be maintained and provided for public access, but may have little need for a central datastore.

There’s a lot more written on the concept of data infrastructures: both drawing on technology literatures, and some rich work on the economics of infrastructure. 

But before sharing some closing thoughts, I want to turn briefly to thinking about ‘data institutions’ – the organisational arrangements that can make infrastructures and ecosystems more stable and effective – and that can support cooperation where cooperation works best, and create the foundations for competition where competition is beneficial.

Institutions

Standards adoption requires trust. Ownership, stewardship and institutions matter

A data standard is only a standard if it achieves a level of adoption and use. And securing that requires trust.

It requires those who might be building systems that will work with the standard to be able to trust that the standard is well managed, and has robust governance. It requires users of schema and codelists to trust that they will remain open and free for use – rather than starting open and later being enclosed, like the bait-and-switch that we’ve seen with many online platforms. And it requires leadership committing to adopting a standard to trust that the promises made for what it will do can be delivered.

Behind many standards and their associated infrastructures – you will find carefully designed organisational structures and institutions.

Image showing governance structure of the Global Legal Entity Identifier Foundation

For example, the Global Legal Entity Identifier – designed to identify counter-parties in financial transactions, and avoid the kind of contagion seen in the 2008 financial crash – has layers of international governance to support a core infrastructure and policies for data standards and management, paired with licensed ‘Local Operating Units’ who can take a more entrepreneurial approach to registering companies for identifiers and verifying their identities.

The LEI standard itself has been taken through ISO committees to deliver a standard that is likely to secure adoption from enterprise users. 

Image showing the revision process described at https://standard.open-contracting.org/latest/en/governance/#revision-process

By contrast, the Open Contracting Data Standard I mentioned earlier is stewarded by an independent non-profit.

However, OCDS – seeking, as I would argue it should, to disrupt some of the current procurement technology landscape – has not taken a formal standards body route, where there is a risk that well-resourced incumbents would water down the vision within the standard. Instead, the OCDS team have developed a set of open governance processes for changes to the standard that aim to make sure it retains trust from government, civil society and private sector stakeholders, whilst also retaining some degree of agility.

We’ve seen over the last decade that standards and software tools alone are not enough: they need institutional homes, and multi-disciplinary teams who can pair ongoing technical maintenance work with stakeholder engagement, and a strategic focus on the kinds of change the standards were developed to deliver.

If you are sponsoring data standards development work that’s aiming for scale, are you thinking about the long-term institutional home that will sustain it?

If you’re not directly developing standards, but the data that matters to you is shaped by existing data standardisation, I’d also encourage you to ask: who is standing up for the public interest and the needs of our stakeholders in the standardisation process?

For example, over the last few weeks we’ve heard how a key mechanism to meet the goals of the COP26 Climate Conference will be in the actions of finance and investment – and a raft of new standards, many likely backed by technical data standards, are emerging for corporate Environmental, Social and Governance reporting.

There are relatively few groups out there like ECOS, the NGO network engaging in technical standards committees to champion sustainability interests. I’ve struggled to locate any working specifically on data standards. Yet, in the current global system of ‘voluntary standardisation’, standards get defined by those who can afford to follow the meetings and discussions and turn up. Too often, that restricts those shaping standards to corporate and developed-country government interests alone.

If we are to have a world of technical and data standards that supports social change, we need more support for the social change voices in the room.

Closing reflections & challenges

As I was preparing this talk, I looked back at The State of Open Data – the book I worked on with IDRC a few years ago to review how open data ecosystems had developed across a wide range of sectors. One of the things that struck me when editing the collection was the significant difference between how far different sectors have developed effective ecosystems for standardised, generative, and solution-focussed data sharing.

Whilst there are some great examples out there of data standards impacting policy, supporting smart data sharing and analysis, and fostering innovation – there are many places where we see dominant private platforms and data ecosystems developing that do not seem to serve the public interest – including, I’d suggest (although this is not my field of expertise) in a number of areas of agriculture.

So I asked myself why? What stops us building effective open standards? I have six closing observations – and, within them, challenges – to share.

(1) Underinvestment

Data standards are not, in the scheme of things, expensive to develop: but neither is the cost zero – particularly if we want to take an inclusive path to standard development.

We consistently underinvest in public infrastructure, digital public infrastructure even more so. Supporting standards doesn’t deliver shiny apps straight away – but it can prepare the ground for them to flourish.

(2) Stopping short of version 2

How many of you are using Version 1.0 of a software package? Chances are most of the software you are using today has been almost entirely rewritten many times: each time building on the learning from the last version, and introducing new ideas to make it fit better with a changing digital world. But, many standards get stuck at 1.0. Funders and policy champions are reluctant to invest in iterations beyond a standard’s launch.

Managing versioning of data schema can involve some added complexity over versioning software – but so many opportunities are lost by a funding tendency to see standards work as done when the first version is released, rather than to plan for the ‘second curve’ of development.

(3) Tailoring to the dominant use case and (4) Trying to meet all use cases

Standards are often developed because their author or sponsor has a particular problem or use-case in mind. For GTFS, the General Transit Feed Specification that drives many of the public transport directions you find in apps like Google Maps, that problem was ‘finding the next bus’ in Portland, Oregon. That might be the question that 90% of data users come to the data with; but there are also people asking: “Is the bus stop and bus accessible to me as a wheelchair user?” or “Can we use this standard to describe the informal Matatu routes in Nairobi where we don’t have fixed bus stop locations?”

A brittle standard that only focusses on the dominant use case will likely crowd out space for these other questions to be answered. But a standard that was designed to try and cater for every single user need would likely collapse under its own weight. In the GTFS case, this has been handled by an open governance process that has allowed disability information to become part of the standard over time, and an openness to local extensions.

There is an art and an ethics of standardisation here – and it needs interdisciplinary teams. Which brings me to recap my final learnings on where things can go wrong, and what we need to do to design standards well.

(5) Treating standards as a technical problem and (6) Neglecting the technical details

I suspect many here would not self-identify as ‘data experts’: yet everyone here could have a role to play in data standard development and adoption. Data standards are, at their core, a solution to coordination and collaboration problems, and making sure they work as such requires all sorts of skills, from policy, programme and institution design, to stakeholder engagement and communications. 

But – at the same time – data standards face real technical constraints, and require creative technical problem solving.

Showing that 12/10/2021 can be read in the US as 10th December, or in Europe as 12th October.

Without technical expertise in the room after all, we may well end up turning up a month late. 

Coda

I called this talk ‘Fostering open ecosystems’ because we face some very real risks that our digital futures will not be open and generative without work to create open standards. As the perceived value of data becomes ever higher, Silicon Valley capital may seek to capture the data spaces essential to solving development challenges. Or we may simply end up with development data locked in silos created by our failures to coordinate and collaborate. A focus on data standards is no magic bullet, but it is part of the pathway to create the future we need.

Tackling the climate crisis with data: what the built-environment sector can do

One of my first assignments now that I’m back working as an independent consultant was to support the Open Data Institute to develop a working paper on data, the built environment and climate action.

The draft paper was released on Monday, to coincide with the start of COP26, and ahead of the Open Data Institute’s Summit – and it will be open for comments and further input until 19th October.

Over on Twitter I’ve put together a brief thread of some of the key messages in the report.

 

A dirty cloud hangs over Javelin Park as auditors fail to establish legality of £600m contract

In what I think is now the longest running series of posts on this blog, I’m back reviewing documents relating to Gloucestershire’s Javelin Park Incinerator saga (see here). This is a long and particularly wonky post, put together quickly ahead of tomorrow’s Audit Committee Meeting. I’ll try and draw out some of these ad-hoc observations into something more accessible soon.

A brief history

To put the latest developments in context:

The auditor’s response stops short of issuing a ‘Report in the Public Interest’, and triggering the associated public meetings. This appears to be at odds with the outcome that the objectors, Community R4C, had expected based on their access to earlier drafts of the report from the auditor, which concluded that a Public Interest Report was required [Addition: 29/09/21].

This post is my notes from reading the auditor’s letter and report.

What does the report find?

In short, the auditors conclude that they cannot draw a definitive conclusion with respect to audit objections:

  • Because of the length of time passed since the contract was signed;
  • Because of the complexity of the contract’s financial model, and the assumed cost of assessing whether its renegotiation in 2016 shifted the economic balance in favour of the operator (UBB);
  • Because other Energy from Waste contracts are not transparent, making comparison to other authority contracts difficult.

They write that:

“our main objective is to explain why we have not reached a definitive conclusion, rather than to express such a conclusion,”

although they then state that:

“we do not consider that anything further by way of public reporting and consideration by a Committee of the Council is required”

with perhaps an implicit suggestion, it seems, that the council wanted to avoid public reporting of this at all.

However, on the substantive matters, the report finds (page 12):

  • The council conclusively did not consider whether the 2016 renegotiation shifted the economic balance in favour of UBB
  • The auditors consider it would have been appropriate to conduct such an assessment and to keep records of it;
  • The auditor does not agree with the council’s legal opinion that it was not required to produce such an assessment, but accepts that the council was acting on its own legal advice.

They go on to say:

“From an audit perspective, a decision making process followed by a council which accorded with its legal view at the time is not in itself necessarily a cause for concern simply because that legal view may have been erroneous. Such a process does not necessarily indicate that the council lacks appropriate internal procedures for ensuring the sound management of its resources.”

So, whilst the council relying upon faulty legal advice for a 25-year contract appears not to be grounds for a negative independent audit conclusion – it should surely be a significant matter of serious concern for the Audit and Governance Committee.

Put another way, the auditors conclude that:

“Our view, in line with the advice we have received from independent Counsel, is that the material we have so far considered is insufficient to enable us to reach a firm conclusion as to the lawfulness under procurement law of the modifications.”

Which, although it appears nothing can now be done to exit the Javelin Park contract, leaves the 25-year, £600m commitment by Gloucestershire taxpayers under a significant cloud.

Establishing the legality of their actions is surely the least we should expect from our local authorities, let alone establishing that they operate in the best-interests of local residents and the wider environment.

It is also notable that, had the authority not fought against disclosure of contract details until late 2018, more contemporary examination of the case may have been possible, lessening the auditor’s objection that too much time has passed to allow them to conduct a proper investigation. The auditor however studiously avoids this point by stating:

“It is not our function to ‘regulate’ the Council in terms of whether it is sufficiently committed to transparency, or whether it has unjustifiably refused to release information in response to Freedom of Information Act requests.”

Yet, transparency is a central part of our public governance arrangements (not least in supporting meaningful public engagement with audit objections), and for it to fall entirely outside the scope of the auditor’s comments about whether processes were robust is notable.

Observations to explore further

As I’ve read through, a few different things have struck me – often in connection with past documents I’ve reviewed. I’ve made brief notes on each of these below.

Wider procurement issues

Page 12 of the letter states “We have not seen evidence that suggests that there may be a pattern of non-compliance with procurement law by the Council.” but does not detail whether any evidence was sought, or what steps were taken to be satisfied as to this conclusion. Notably, public news reports covering the period 2015–2019 highlight other governance failings related to procurement (though not necessarily procurement law), and at least from a public perspective raise some red flags about whether appropriate controls and oversight have been in place at GCC.

Recycling incentives

On page 19, considering the impact of the contract on incentives to recycle, the report states that:

“While the average cost per tonne does clearly reduce as the level of waste increases, which may be as a result of lower recycling rates, the Council does not have direct influence over recycling rates.”

This appears at odds with the fact that the authority provides Waste Incentive Payments to Waste Collection Authorities designed to influence recycling rates, and that these rates have been altered since the Incinerator became operational.

What’s a fair comparison?

A considerable part of the case the Council rely upon to prove Value for Money of the incinerator is the report produced by EY that compares the cost of the renegotiated 2016 contract with the cost of terminating the contract and relying on landfill for the next 25 years.

The auditors note that:

“the Council was professionally advised during the negotiation process, including by EY on the VfM of the RPP in comparison to terminating the contract and relying on landfill.”

However, the scope of the EY report, which compares to the “Council’s internal Landfill Comparator” (see covering letter) was set not on the expert advice of EY, but at the instruction of the Council’s procurement lead, Ian Mawdsley. As I established in a 2019 FOI, when I asked for:

the written instructions (as Terms of Reference, e-mail confirmation or other documentary evidence) of the work that was requested from Ernst and Young. Depending on the process of commissioning this work, I would anticipate it would form a written document, part of a work order, or an e-mail from GCC to E&Y.

and the council replied:

“We have completed our investigation into the points you raise and can confirm that the council do not hold any separate written terms of reference as these were initially verbal and recorded in the document itself.”

It seems reasonable to me that an expert advisor, given scope to properly consider alternatives, may have been able to, for example, compare termination against short-term landfill, followed by re-procurement. This should have been informed by the outcome of the Residual Waste Working Group Fallback Strategy that considered alternatives in 2013, but appears to have been entirely ignored by the Council administration.

If the council is to rely on ‘expert advice’ to establish that it, in good faith, sought to secure value for money on the project – then the processes to commission that advice, and the extent to which consultants were given a brief that allowed them to properly consider alternatives, should surely be considered?

Cancellation costs are a range: where are the error bars?

The auditor briefly considers whether councillors were given accurate information when, in meetings in 2015, they were debating contract cancellation costs of £60m – £100m.

My reading of the EY report is that, on the very last page, it gives a range of possible termination costs for force majeure planning permission-related termination, with the lowest being £35.4m (against a high of £69.8m). Higher up, it reports a single figure of £59.8m. The figure of £100m is quoted as relating to ‘Authority Voluntary Termination’, but EY note they have not calculated this figure in detail. It therefore seems surprising to me for the auditors to conclude that a meeting in 2015 considering contract cancellation – a meeting that was not provided with an officer report explaining either figure, but was instead told that cancellation costs were in the range £60m to £100m – was:

“not distorted by inaccurate information.”

As surely the accurate information that should have been presented would have simply been:

  • EY have produced estimated costs in a range from £35.4m – £69.8m if we cancel due to passing the planning long-stop date. Their best estimate for a single figure in this range is £59.8m.
  • EY have produced a rough estimate (but no calculations) of a cost of £100m if the authority cancels for other reasons outside the planning delay.
  • The council estimates that sticking with landfill for a period of time, and carrying out another procurement exercise could add up to X to the cost.

Eversheds advice

Re-reading the EY report, I note that it refers to separate advice provided by Eversheds on the issues of State Aid, Documentation Changes and Procurement risks including risks of challenge.

To my knowledge this advice has never been put in the public domain. It may be notable, however, that the auditor does not reference this advice in their reply on the objectors’ allegation that the contract could have constituted illegal state aid.

Perhaps another FOI request if someone else wants to pick up the baton on that?

We should have recorded meetings!

I was present at the March 2019 meeting when the chief executive admitted that the council were in a poor negotiating position in relation to the contract. My partner, Cllr Smith, raised the failure of the minutes to include this point at the subsequent meeting, but it appears the administration were already attempting to remove this admission from the record.

Whilst the auditor states:

“In our view, even assuming that such a statement was made by the Chief Executive (and we make no finding as to whether it was: we note that the Council does not accept that your record of the meeting is accurate), it would not in itself justify our making a finding that the contract modifications shifted the balance of the contract in UBB’s favour.”

That this point is addressed, and that the Council administration have attempted to keep admissions in a public meeting of their weak negotiation position from the record, is of note.

With hindsight, given the Council chose to hold this meeting in a room that was not webcast, we should have arranged independent citizen led recording of the meeting.

A problem with facts?

The final line of the auditor’s letter, in their reasons for not seeking to make an application to the court for a declaration that council acts may have been against the law, is rather curious:

“the issues underlying these matters are very fact specific such that there would be limited wider public interest in a court declaration.”

An argument for or against Open Contracting?

The report appears to make a strong case for wholesale Open Contracting when it comes to large EfW projects. They state:

“We accept that the comparisons included in the WRAP report do have significant limitations, mainly because they are, as the Council notes, quoted at a point in time and in isolation from the underlying contractual terms such as length of contract, risk share etc. Without access to such information on the contracts in place elsewhere, it is impossible to do a conclusive comparison, and even with full information on the various contracts, there would still be a good many judgements and assumptions involved in making a comparison because of, for example, variations in the ‘values’ associated with particular risks.”

In other words – the lack of transparency in Energy from Waste projects makes it nearly impossible to verify that the waste ‘market’ (which, because of geographical constraints and other factors is a relatively inflexible market in any case), has generated value for money for the public.

I’m also not sure why ‘values’ gets scare quotes in the extract above…

However, it appears to me that, rather than calling for greater publication of contracts, the auditors want to go the other way, and argue that contracting transparency could be bad for local authorities:

“Procurement law pursues objectives that are wider than promoting the efficient use of public resources. In particular, procurement law, as applicable at the relevant time, sought to pursue EU internal market objectives and to ensure the compliance of EU member states with obligations under the World Trade Organisation Global Procurement Agreement, by ensuring that contract opportunities were opened up to competition and that public procurement procedures were non-discriminatory and transparent. In some circumstances, public procurement law could potentially operate to preclude an authority from selecting an approach which could reasonably be regarded by the authority as the most economically efficient option available to it in the circumstances.”

This critique of the law ‘as applicable at the relevant time’ (i.e. during EU membership) also raises a potential red flag about arguments we may increasingly see in post-Brexit Britain.

Is Local Audit Independent and Effective?

I recall hearing some critique of Grant Thornton’s audit quality – and, struck by some of my concerns about how this objection letter reads, I did a brief bit of digging into the regulator’s opinion.

In 2019/20, the Financial Reporting Council reviewed six local audits by Grant Thornton. None were fully accepted, with the regulator concluding that:

The audit quality results for our inspection of the six audits are unacceptable, with five audits assessed as requiring improvement, although no audits were assessed as requiring significant improvement.

going on to note that:

“At least two key findings were identified on all audits requiring improvement and therefore areas of focus are the audit of property valuation, assessment and subsequent testing of fraud risks, audit procedures over the completeness and accuracy of expenditure and EQC review procedures.”

Whilst this does not cover assessment of the quality of reports in relation to audit objections, it is notable that in their response to the report Grant Thornton state:

“We consider that VfM audit is at the centre of local audit. We take VfM work seriously, invest time and resources in getting it right, and give difficult messages where warranted. In the last year, we have issued a Report in the Public Interest at a major audit, Statutory Recommendations and Adverse VfM Conclusions.”

Yet, in the Gloucestershire case, the auditors have studiously avoided asking any substantive Value for Money questions about the largest contract the local authority has ever signed, either at the time the contract was negotiated, or following concerns raised by objectors.

In their response to objectors, Grant Thornton rely a number of times on the time elapsed since the contract was signed as a reason that they cannot conduct any VfM analysis. Yet they were the auditors at the time significant multi-million pound capital sums were committed to the project: which surely should have triggered contemporaneous VfM questions?

It’s notable that local electors are being asked to trust that Grant Thornton have very robust processes in place to protect against conflicts of interest: not only because a finding that VfM was not secured would surely call into question the comprehensiveness of Grant Thornton’s past audit work (none of which is referenced in the report), but also because, as we learnt from the £1m Ernst and Young report relied upon to assert that the council had sought suitable independent advice, the financial models of incinerator operator UBB were written by none other than Grant Thornton.

Scope of the EY report

(Oh, and today’s news on Grant Thornton doesn’t add to public confidence either.)

Effective objection, and the need for dialogue

One thing that has come across in years of reading the documents on this process, from Information Tribunal rulings, to Court rulings, to the Auditors’ letter, is the ‘frustration’ of the authorities (e.g. judges, auditors) asked to ‘adjudicate’ in this case with one or other of the parties. At times, the Council has come in for thinly veiled or outright criticism for lack of co-operation, and Community R4C appear at times to have undermined their case by making what the auditors view as excessive or out-of-scope objections.

A few takeaways from this:

  • There is a high bar for citizen-objectors to clear in making effective objections, and little support for doing so. Community R4C have drawn on extensive pro-bono legal advice, crowd-funding and other resources – and yet their core case, that the project is neither Value for Money, nor in line with the waste hierarchy, has never been properly heard: it has always been ruled out of consideration on ‘technicalities’.
  • Objection processes need to be made more user-friendly: and at the same time, objectors need to be supported with advice, and even with intermediaries who can help them make filtered and strategic use of public scrutiny powers.
  • The lack of openness to dialogue from Gloucestershire County Council has been perhaps the biggest cause of this saga running on: leading to frustrating, fractious and costly interactions through courts and auditors – rather than public discussion of constructive ways forward for waste management in the County.

Where next?

I’ll be interested to see the outcome of tomorrow’s meeting of the audit committee, where, even though there were only a few hours between the report’s publication and the question deadline, I understand a substantial number of public questions will be asked.

My sense is that there remains a strong case for an independent process: to learn lessons from what looks, to my mind, like a significant set of governance failures at Gloucestershire County Council, and to ensure future waste management is properly aligned with the goals of increased recycling and waste reduction.

Rhodes must fall

17 years ago I was an undergraduate at Oriel College, Oxford. I lived for my first year in the ‘Rhodes building’ – not many metres from the statue of Cecil Rhodes that adorns the front of the building.

The only narrative of Rhodes I recall from that time was one of the college’s proud connection to its alumnus and benefactor. To my shame, whilst with student campaigners I was active against contemporary donations to the University that appeared to buy naming rights and launder the reputations of questionable modern-day donors – I left unexplored how the ongoing honouring of past donors had allowed them to ‘buy’ a ‘controversial’ reputation instead of the condemnation their actions deserve. Nor did I consider then how the memorialisation of Rhodes plays a part (even if small compared to other factors) in perpetuating the continued exclusion of marginalised communities from Oxford, and in reinforcing barriers to people from (Oxford) minorities taking greater ownership of the institutions of the University.

The college has a (belated) opportunity to make the right statement by removing the Rhodes statue. Leave it there, and Rhodes remains a ‘controversial figure’ and the college an institution concerned only with reproducing “an educated ruling class” (to quote from the college’s essay on Rhodes). Move it to a museum where it belongs, and the conversation with every undergraduate can be about the importance of questioning and learning from history – using education as a means of creating a more just future. The teachable moment will be all the stronger when the statue’s niche stands empty.

Rhodes must fall.

Open Contracting & Inclusion – notes from an online discussion

[Summary: Exploring inclusion impacts of data and standards in response to a paper on Open Contracting & inclusion]

Yesterday I had the pleasure of joining a call hosted by HIVOS, and chaired by ILDA’s Ana Sofia Ruiz, to discuss a recent paper from Michael Canares and François van Schalkwyk on “Open Contracting and Inclusion”. The paper is well worth a read, and includes a review of five cases against a theoretical framework looking across data flows, opportunities for action, infomediary presence, and through to inclusion outcomes (see the table below for an example of how these play out in a few of the cases reviewed).

Table 2: Summary of conditions met by the cases with regard to open contracting and social inclusion

After the discussion, we were asked to summarise some of our inputs – hopefully feeding into a wider write-up. However, in case what I’ve written doesn’t really fit the format of that, I’m posting a cleaned-up and slightly expanded version of the remarks I made below:

This paper, and the discussion around it, raises a number of valuable questions – drawing on a rich theoretical landscape to pose them.

Firstly, it asks us “How are data flows being disrupted?”. This question is important because, in many ‘open contracting’ projects, it is rarely explicitly asked. We’re living in a time of mass disruption, yet open contracting is often ‘sold’ as a kind of reform. One of the most widely used success stories for work on open contracting data comes from Ukraine, where there was a true disruption in data flows – using the moment of revolution to reconfigure patterns of procurement, and to create data infrastructures that enabled those new, more open practices.

Secondly, this paper calls on us to question “what is the value of data in bringing about inclusion?” In the past we’ve talked about whether open data is either necessary or sufficient to create change. The answer I take from this paper is that increased accessibility of information and data is ‘a very useful, but nowhere near sufficient’ condition for inclusive change. 

Thirdly, the use of Castells’ framework from Communication Power of ‘network power’ (shaping the information that can be transmitted), ‘networking power’ (gatekeeping which information is transmitted), and ‘networked power’ (control by one node in the network of others), and ideas of ‘programming the network’ and ‘reprogramming the network’, raises some critical questions about the role of data standards. Often treated as neutral artefacts, standards are in fact sites of power, and of the negotiation of network and networking power. A standard defines what can be expressed, and its implementation involves choosing what will be expressed. Standards can be at once tools that cross contexts, taking with them the potential of inclusion and exclusion (network power), while at the same time having that potential left inert if localised networking power decides not to take up inclusion-oriented features.

To put this more concretely (if still a little complex, I fear), the Open Contracting Data Standard was explicitly designed with a technical architecture that permits data about any given contracting process to be published by any actor, not only the ‘official’ information provider, and with a mechanism for extensions, supporting new fields of data to be attached to a contracting process. The ‘protocol’ sought to be inclusive. However, most tools have not been built to exploit this feature – meaning that, in practice, the ‘platforms’ that exist don’t support the inclusion of alternative perspectives on the state of a contracting process. This highlights that even technical infrastructures are not made once: they have to be constantly remade, and their inclusive potential reinforced.
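To make that architecture a little more tangible, here is a minimal, illustrative sketch (in Python) of the idea: two different publishers issue releases about the same contracting process, identified by a shared ocid, and an extension, declared by URL, carries extra fields. The top-level field names follow OCDS conventions, but the ‘communityMonitoring’ extension, the URLs, and all the values are hypothetical – this is a sketch of the principle, not a schema-validated example.

```python
# Illustrative only: simplified OCDS-style releases from two publishers
# describing the same contracting process (same ocid). The
# "communityMonitoring" block stands in for a hypothetical extension.

official_release = {
    "ocid": "ocds-213czf-000-00001",   # shared identifier for the process
    "id": "buyer-release-01",
    "date": "2020-01-15T09:00:00Z",
    "tag": ["contract"],
    "initiationType": "tender",
    "buyer": {"name": "Example County Council"},
    "contracts": [
        {"id": "1", "status": "active",
         "value": {"amount": 100000000, "currency": "GBP"}}
    ],
}

monitor_release = {
    "ocid": "ocds-213czf-000-00001",   # same process, different publisher
    "id": "monitor-release-01",
    "date": "2020-02-01T12:00:00Z",
    "tag": ["contractUpdate"],
    "initiationType": "tender",
    "contracts": [
        {"id": "1",
         # Field supplied by the hypothetical extension:
         "communityMonitoring": {"status": "concerns-raised",
                                 "summary": "Questions over value for money."}}
    ],
}

monitor_package = {
    "uri": "https://example.org/monitor/releases.json",
    "publisher": {"name": "Community Monitoring Group"},
    "publishedDate": "2020-02-01T12:00:00Z",
    # Extensions are declared by URL, so consumers know which extra fields
    # to expect (this URL is made up for the example).
    "extensions": [
        "https://example.org/ocds-community-monitoring/extension.json"
    ],
    "releases": [monitor_release],
}

# A consuming tool that groups releases by process (ocid), rather than by
# publisher, can show both perspectives on the same process side by side.
by_process = {}
for release in (official_release, monitor_release):
    by_process.setdefault(release["ocid"], []).append(release)

for ocid, releases in by_process.items():
    print(ocid, "->", len(releases), "releases from different publishers")
```

The design choice this tries to illustrate: the standard’s data model leaves room for plural perspectives on the same process, but whether consuming tools actually group releases by ocid (as the final loop does), or only ever surface the ‘official’ publisher, is exactly where the networking power described above comes back into play.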

Fourth, the paper calls for a renewed focus both on governance context and on intermediaries. Whilst technical artefacts can cross contexts, intermediary capacity building needs significant investment, setting by setting. Equally, the discussion brought into view that this cannot be a short-term process. Intermediaries need not only skills, but also stocks of trust, in order to broker connections and communication. One member of the evaluation team who had worked on a case covered by the paper described how it was individuals’ ability to maintain trusted relationships across different stakeholder groups that was critical to connecting information and empowerment. The importance of this cannot be overstated.

Fifth, and finally, in his opening statement Michael Canares challenged us to consider whether Open Contracting is different from other public sector reforms. After all, there have been decades of procurement reform. To this, I’m prepared to advance an answer: there is a meaningful qualitative difference with government reforms that start from the premise of openness. When a commitment to being open by default is put into practice, the configuration of actors involved in creating change is different, and conventional patterns of bureaucratic reform can be disrupted. Whether they are disrupted or not still depends on individual internal and external actors, and on whether the culture, as well as the practice, of openness has been brought into play. Nevertheless, Open Contracting has a potential that is simply absent from past procurement reforms – and that is something to continue to build on.

The challenge ahead now is to work out what to do with these questions. We’re starting to unpack the complexity of open contracting practice – and the nuances of each individual setting. But if all we have are critical questions, we risk inaction rather than advances in inclusion. During the early development of the Open Contracting Data Standard we often turned to the mantra that we should not let the perfect be the enemy of the good. That carries forward here: we should not let the perfect be the enemy of making things better. I’d contend that we need to continue turning our learning into tooling – whether technical tools, evaluation frameworks, or simple planning tools for new initiatives. Only then can we be part of taking on the large-scale reforms that this time of disruption needs.

2019 in Review

I started writing this just before the Christmas break, but got interrupted by both festivities and flu. So, below, a slightly belated look back at 2019 – a year in which, yet again, my blogging has been far too sporadic.

January – FOI & Javelin Park Protests

Last Christmas Eve, I was poring over the newly released details of a £100m+ cost increase in the contract for the Javelin Park incinerator I’ve written about before. Over Christmas, we put together calls for an Independent Inquiry into the project, and come January, I was outside the plant, taking part in protests at the price rise.

Since then, the County Council have been taken to court over the contract, putting the calls for an inquiry on hold (although questions were finally put to the Chief Executive of the Council in March, with updates on the court case expected in early 2020).

My other FOI adventures of 2019 have been less conclusive:

  • Gloucestershire’s refusal to provide the prices and buyers of the public land they have sold off means the only way to piece this together would be by spending £100s on Land Registry records: something I’ve not had space to pursue. Promises that this information would be published proactively from September have been broken by Cabinet – and our experiment in using the Local Audit and Accountability Act in June to look at relevant documents didn’t appear to provide a full overview. It seems profoundly odd that there is so little transparency over how public assets are being disposed of.

February – Exploring Arts and Data

At the start of the year, I kicked off a part-time role as ‘Data Catalyst’ with Create Gloucestershire, working on a number of fronts to support their internal data practices, but also to scope out ways to connect artists with debates around data. I shared some initial research back in February, and in September had great fun co-facilitating a ‘Creative Lab’ at Atelier in Stroud, where we co-created a range of data-informed art works – from VR Design Teachers, to fabric chromatography creations that visualised data on school subject choice.

March – TicTec & The State of Open Data

Much of March was spent working on final editing of chapters for The State of Open Data, and then, late in the month, heading to Paris for The Impacts of Civic Technology (TicTec) conference to present initial findings with my co-editor, Mor. An evening reception, and hearing about digital democracy and participation projects at the French National Assembly, was particularly inspiring.

April – Printing and Driving

2019 was supposed to be a bit of a sabbatical year (learning point: I’m not very good at sabbaticals), but in late March and April I did finally get round to my two main goals of: (a) learning a bit about printmaking; (b) passing my driving test.

A wonderful two-day workshop with Rod Nelson had me creating woodcut designs exploring field patterns and the Stroud landscape.

And Bob Waters got me through my test first time.

I’ve promptly failed to do any more printing or driving this year, but at least I now know a bit more about how to!

First ‘field patterns’ print drying in Rod Nelson’s studio

May – State of Open Data Book Tour and OGP

May took me to the US, for a few weeks of #slowTravel by train around the East Coast, and then up to Canada for the full launch of The State of Open Data book. It was a real pleasure to catch up with old friends, and to take part in some really stimulating workshops, including a fascinating Belfer Center session on ‘Data as Development’, which gave rise to this note on the idea of a ‘data extraction transparency initiative’.

Getting hold of physical copies of The State of Open Data was a great moment: at times the project had felt almost beyond delivery. I’m really pleased with how it turned out – with contributions from 60+ authors, and many more reviewers and contributors.

I’ve still got a few hard copies that can go free to University or organisational libraries, so if you’ve read this far, and you would like one – do drop me a note.

At IDRC for book talk on State of Open Data
With the editors of State of Open Data sharing findings from the book at IDRC HQ.

June – Facilitation fun with IATI

In June I took another #slowTravel trip – heading to Copenhagen by train to facilitate a workshop for the International Aid Transparency Initiative’s technical community on the draft strategy.

This followed some online facilitation work for strategy dialogues earlier in the year. I’ve also had the chance this year to co-facilitate an online dialogue for Land Portal: reminding me how much I enjoy this kind of blended online and offline facilitation work. Perhaps something to explore more in 2020.

July – Coast to Coast

In July, Rachel and I set out walking across the UK on Wainwright’s Coast to Coast path – raising funds for Footsteps Counselling and Care.

The weather and walk were stunning – and a real chance for reflection.

Photos from the coast to coast walk


August – Impact Bonds and Waste Management

Besides the annual August pilgrimage to Greenbelt, it was a month of interesting UK projects – including work with the Government Outcomes Lab at the University of Oxford to scope out ways to improve transparency and data sharing around Social Impact Bonds, and contributing to a (sadly unsuccessful) pitch by Open Data Manchester and Dsposal to secure innovation funding to build on their prototype KnoWaste standard.

September – Civic Media Observatory

In September, I had my first opportunity to work in-depth on a project with the fantastic Global Voices team – using AirTable to rapidly prototype a database and workflow for tracking and analysing mainstream media, social media, and offline events through a local lens, and for understanding the context and subtext of the media that platform moderators may be asked to make snap judgements over.

A three-day workshop in Skopje, North Macedonia, looking at coverage of the EU Accession talks, put the prototype to the test (and introduced me to some quite remarkable monumental architecture…).

October – AI at Bellagio

I spent all of October in Italy: first as a residential fellow at the Rockefeller Bellagio Center, and then with a brief vacation in Verona and a quick trip to Rome to work with Land Portal.

Taking part in the Bellagio Center’s thematic month on Artificial Intelligence was quite simply a once in a lifetime opportunity. I didn’t write much about it at the time (as I was busy trying to pull together the outline of a new book proposal), and, with an election called in the UK just as we were heading home, I haven’t had the space to follow up. Hopefully at some point next year I’ll publish a few outputs from the month.

However, I can’t leave my fellow residents’ work un-shared, so if I’ve not already signposted the below to you, do take time to:

I should also mention one of the other highlights of the residency: enjoying two shows, numerous tricks, and sage advice from ‘Magician in residence’ Brad Barton, Reality Thief – go see him if you are ever in the Bay Area!

November – Elections!

I returned from Italy right into the middle of the biggest General Election campaign Stroud District Green Party have ever run, for the fantastic Molly Scott Cato. It was a month spent both on the doorstep and juggling spreadsheets – exploring the reality of values-based, volunteer-driven political campaigning in an era of data.

December – Global Data Barometer

Over November and December I was also working on the scoping for a potential new project – the Global Data Barometer – a successor to the Open Data Barometer study I helped create at the Web Foundation back in 2013. The goal is to explore how a 100+ country study could provide insight into patterns of ‘responsible re-use’ of data around the world – capturing both the use of data as a resource for sustainable development, and efforts to manage the risks that the unregulated collection and processing of ever-increasing quantities of data might create. I published the initial draft research framework just before Christmas, and will be exploring the project more in a workshop in Washington next week.

2020 plans

Over 2020 I’m looking forward to more work on the Global Data Barometer, and with the Open Ownership team, as well as some further facilitation projects, and, hopefully, a bit more writing time! We’ll see.