Notes from a RightsCon panel on AI, Open Data and Privacy

[Summary: Preliminary notes on open data, privacy and AI]

At the heart of open data is the idea that when information is provided in a structured form, freely accessible, and with permission granted for anyone to re-use it, the latent social and economic value within it can be unlocked.

Privacy positions assert the right of individuals to control their own information, and data about them, and to be protected from harms that might occur through exploitation of their data.

Artificial intelligence is a field of computing concerned with equipping machines with the ability to perform tasks that have previously required human intelligence, including recognising patterns, making judgements, and extracting and analysing semi-structured information.

Around each of these concepts vibrant (and broad-based) communities exist: advocating respectively for policy to focus on openness, privacy and the transformative use of AI. At first glance, there seem to be some tensions here: openness may be cast as the opposite of privacy; or the control sought in privacy as starving artificial intelligence models of the data they could use for social good. The possibility within AI of extracting signals from messy records might appear to negate the need to construct structured public data, and as data-hungry AI draws increasingly upon proprietary data sources, the openness of the data on which decisions are made may be undermined. At some points these tensions are real. But if we dig beneath surface-level oppositions, we may find arguments that unite progressive segments of each distinct community – and that can add up to a more coherent contemporary narrative around data in society.

This was the focus of a panel I took part in at RightsCon in Toronto last week, curated by Laura Bacon of Omidyar Network, in discussion with Carlos Affonso Souza (ITS Rio) and Frederike Kaltheuner (Privacy International) – the first in a series of panels due to take place at a number of events over this year. In this post I’ll reflect on five themes that emerged both from our panel discussion, and more widely from discussions I had at RightsCon. These remarks are early fragments, rather than complete notes, and I’m hoping that a number of them may be unpacked further in the upcoming panels.

The historic connection of open data and AI

The current ‘age of artificial intelligence’ is only the latest in a series of waves of attention the concept has had over the years. In this wave, the emphasis is firmly upon the analysis of large collections of data, predominantly proprietary data flows. But it is notable that a key thread in advocacy for open government data in the late 2000s came from Artificial Intelligence and semantic web researchers such as Prof. Nigel Shadbolt, whose Advanced Knowledge Technologies (AKT) programme was involved in many early re-use projects with UK public data, and Prof. Jim Hendler at TWC. Whilst I’m not aware of any empirical work that explores the extent to which open government data has gone on to feed into machine-learning models, there is a connection to be explored here in terms of bootstrapping data-hungry research.

There is also an argument to be made that open data advocacy, implementation and experience over the last ten years have played an important role in contributing to growing public understanding of data, and in embedding cultural norms around seeking access to the raw data underlying decisions. Without the last decade of action on open data, we might be encountering public sector AI based purely on proprietary models, as opposed to now navigating a mixed ecology of public and private AI.

(Some) open data is getting personal

It’s not uncommon to hear open data advocates state that open data only covers ‘non-personal data’. It’s certainly true that many of the datasets sought through open data policy, such as bus timetables, school rankings, national maps, weather reports and farming statistics, don’t contain any personally identifying information (PII). Yet, whilst we should be able to mark a sizable territory of the open data landscape as free from privacy concerns, there are increasingly blurred lines at points where ‘public data’ is also ‘personal data’.

In some cases, this may be due to mosaic effects: where the combination of multiple open datasets could be personally identifying. In other cases, the power of AI to extract structured data from public records about people raises interesting questions about how far permissive regimes of access and re-use around those documents should also apply to datasets derived from them. However, there are also cases where open data strategies are being applied to the creation of new datasets that directly contain personally identifying information.
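The mosaic effect can be made concrete with a toy sketch (all records and field names here are invented for illustration): neither an anonymised dataset nor an open register is identifying on its own, but linking them on shared quasi-identifiers can single out an individual.

```python
# Toy illustration of a mosaic effect: two datasets, each arguably
# "non-personal" on its own, combine to re-identify an individual.
# All records below are invented for illustration.

health = [  # anonymised: no names, just quasi-identifiers
    {"postcode": "AB1 2CD", "birth_year": 1975, "condition": "diabetes"},
    {"postcode": "EF3 4GH", "birth_year": 1982, "condition": "asthma"},
]

register = [  # open public record: names attached to quasi-identifiers
    {"name": "J. Smith", "postcode": "AB1 2CD", "birth_year": 1975},
    {"name": "K. Jones", "postcode": "EF3 4GH", "birth_year": 1990},
]

def mosaic_join(a, b, keys):
    """Link records from a and b that share values on the given keys."""
    index = {tuple(r[k] for k in keys): r for r in b}
    return [
        {**r, **index[tuple(r[k] for k in keys)]}
        for r in a
        if tuple(r[k] for k in keys) in index
    ]

linked = mosaic_join(health, register, keys=("postcode", "birth_year"))
print(linked)
# Only J. Smith matches on both keys, so the "anonymous"
# diabetes record now carries a name.
```

The point of the sketch is simply that re-identification requires no sophisticated AI: a join on a couple of shared fields is enough when quasi-identifiers overlap across datasets.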

In the RightsCon panel I gave the example of Beneficial Ownership data: information about the ultimate owners of companies that can be used to detect illicit use of shell companies for money laundering or tax evasion, or that can support better due diligence on supply chains. Transparency campaigners have called for beneficial ownership registers to be public and available as open data, citing the risk that restricted registers will be underused and much less effective than open registers, and drawing on the idea of a social contract under which the limited liability conferred by a company comes with the responsibility to be identified as a party to that company. We end up then with data that is both public (part of the public record) and personal (containing information about identified individuals).

Privacy is not secrecy: but consent remains key

Frederike Kaltheuner kicked off our discussions of privacy on the panel by reminding us that privacy and secrecy are not the same thing. Rather, privacy is related to control: and the ability of individuals and communities to exercise rights over the presentation and use of their data. The beneficial ownership example highlights that not all personal data can or should be kept secret, as taking an ownership role in a company comes with a consequent publicity requirement. However, as Ann Cavoukian forcefully put the point in our discussions, the principle of consent remains vitally important. Individuals need to be informed enough about when and how their personal information may be shared in order to make an informed choice about entering into any relationship which requests or requires information disclosure.

When we reject a framing of privacy as secrecy, and engage with ideas of active consent, we can see, as the GDPR does, that privacy is not a binary choice, but instead involves a set of choices in granting permissions for data use and re-use. Where, as in the case of company ownership, the choice is effectively between being named in the public record vs. not taking on company ownership, it is important for us to think more widely about the factors that might make that choice trickier for some individuals or groups. For example, as Kendra Albert explained to me, for trans people a business process that requires current and former names to be on the public record may have substantial social consequences. This highlights the need for careful thinking about data infrastructures that involve personal data, such that they can best balance social benefits and individual rights, giving a key place to mechanisms of active consent, and avoiding the creation of circumstances in which individuals may find themselves choosing uncomfortably between ‘the lesser of two harms’.

Is all data relational?

One of the most challenging aspects of the recent Cambridge Analytica scandal is the fact that even if individuals did not consent at any point to the use of their data by Facebook apps, there is a chance they were profiled as a result of data shared by people in their wider network. Whereas it might be relatively easy to identify the subject of a photo, and to give that individual rights of control over the use and distribution of their image, an individual ownership and rights framework can be difficult to apply to many modern datasets. Much of the data of value to AI analysis, for example, concerns the relationships between individuals, or between individuals and the state. When there are multiple parties to a dataset, each with legitimate interests in the collection and use of the data, who holds the rights to govern its re-use?

Strategies of regulation

What unites the progressive parts of the open data, privacy and AI communities? I’d argue that each has a clear recognition of the power of data, and a concern with minimising harm (albeit with a primary focus on individual harm in privacy contexts, and with the emphasis placed on wider social harms from corruption or poor service delivery by open data communities)*. As Martin Tisné has suggested, in a context where harmful abuses of data power are all around us, this common ground is worth building on. But in charting a way forward, we need to more fully unpack where there are differences of emphasis, and different preferences for regulatory strategies – produced in part by the different professional backgrounds of those playing leadership roles in each community.

(*I was going to add ‘scepticism about centralised power’ (of companies and states) to the list of common attributes across progressive privacy, open data and AI communities, but I don’t have a strong enough sense of whether this could apply in an AI context.)

In our RightsCon panel I jotted down and shared five distinct strategies that may be invoked:

  • Reshaping inputs – for example, where an AI system is generating biased outputs, work can take place to make sure the inputs it receives are more representative. This strategy essentially responds to negative outcomes from data by adding more, corrective, data.
  • Regulating ownership – for example, asserting that individuals have ownership of their data, and can use ownership rights to make claims of control over that data. Ownership plays an important role in open data licensing arrangements, but runs up against the ‘relational data problem’ in many cases, where it’s not clear who has ownership rights.
  • Regulating access – for example, creating a dataset of company ownership only available to approved actors, or keeping potentially disclosive AI training datasets from being released.
  • Regulating use – for example, allowing that a beneficial ownership register is public, but ensuring that uses of the data to target individuals are strictly prohibited, and that prohibitions are enforced.
  • Remediating consequences – for example, recognising that harm is caused to some groups by the publicity of certain data, but judging that the net public benefit is such that the data should remain public, but the harm should be redressed by some other aspect of policy.

By digging deeper into questions of motivations, goals and strategies, my sense is we will be better able to find the points where AI, privacy and open data intersect in a joint critical engagement with today’s data environment.

Where next?

I’m looking forward to exploring these themes more, both attending the next panel in this series at the Open Government Partnership meeting in Tbilisi in July, and through the State of Open Data project.

Publishing with purpose? Reflections on designing with standards and locating user engagement

[Summary: Thinking aloud about open data and data standards as governance tools]

There are interesting shifts in the narratives of open data taking place right now.

Earlier this year, the Open Data Charter launched their new strategy: “Publishing with purpose”, situating it as a move on from the ‘raw data now’ days, in which governments have taken an open data initiative to mean just publishing easy-to-open datasets online and linking to them from data catalogues.

The Open Contracting Partnership, which has encouraged governments to purposely prioritise publication of procurement data for a number of years now, has increasingly been exploring questions of how to design interventions so that they can most effectively move from publication to use. The idea here is that we should spend more time with governments focussing on their use cases for data disclosure.

The shifts are welcome: and move closer to understanding open data as strategy. However, there are also risks at play, and we need to take a critical look at the way these approaches could or should play out.

In this post, I introduce a few initial thoughts, though recognising these are as yet underdeveloped. This post is heavily influenced by a recent conversation convened by Alan Hudson of Global Integrity at the OpenGovHub, where we looked at the interaction of ‘(governance) measurement, data, standards, use and impact ‘.

(1) Whose purpose?

The call for ‘raw data now‘ was not without purpose: but it was the purpose of particular groups of actors – not least semantic web researchers looking for a large corpus of data to test their methods on. This call configured open data towards the needs and preferences of a particular set of (technical) actors, based on the theory that they would then act as intermediaries, creating a range of products and platforms that would serve the purposes of other groups. That theory hasn’t delivered in practice, with lots of datasets languishing unused, and governments puzzled as to why the promised flowering of re-use has not occurred.

Purpose itself then needs unpacking. Just as early research into the open data agenda questioned how different actors interests may have been co-opted or subverted – we need to keep the question of ‘whose purpose’ central to the publish-with-purpose debate.

(2) Designing around users

Sunlight Foundation recently published a write-up of their engagement with Glendale, Arizona on open data for public procurement. They describe a process that started with a purpose (“get better bids on contract opportunities”), and then engaged with vendors to discuss and test out datasets that were useful to them. The resulting recommendations emphasise particular data elements that could be prioritised by the city administration.

Would Glendale have the same list of required fields if they had started asking citizens about better contract delivery? Or if they had worked with government officials to explore the problems they face when identifying how well a vendor will deliver? For example, the Glendale report doesn’t mention including supplier information and identifiers: central to many contract analysis or anti-corruption use cases.

If we see ‘data as infrastructure’, then we need to consider the appropriate design methods for user engagement. My general sense is that we’re currently applying user-centred design methods that were developed to deliver consumer products to questions of public infrastructure: and that this has some risks. Infrastructures differ from applications in their iterability, durability, embeddedness and reach. Premature optimisation for particular data users’ needs may make it much harder to meet the needs of other users in future.

I also have the concern (though, I should note, not in any way based on the Glendale case) that user-centred design done badly can be worse than no user-centred design at all. User engagement and research is a profession with its own deep skill set, just as work on technical architecture is, even if it looks at first glance easier to pick up and replicate. Learning from the successes, and failures, of integrating user-centred design approaches into bureaucratic contexts and government incentive structures needs to be taken seriously. A lot of this is about mapping the moments and mechanisms for user engagement (and remembering that whilst it might help the design process to talk ‘user’ rather than ‘citizen’, sometimes decisions of purpose should be made at the level of the citizenry, not their user stand-ins).

(3) International standards, local adoption

(Open) data standards are a tool for data infrastructure building. They can represent a wide range of user needs to a data publisher, embedding requirements distilled from broad research, and can support interoperability of data between publishers – unlocking cross-cutting use-cases and creating the economic conditions for a marketplace of solutions that build on data. (They can, of course, also do none of these things: acting as interventions to configure data to the needs of a particular small user group.)

But in seeking to be generally usable, standards are generally not tailored to particular combinations of local capacity and need. (This pairing is important: if resource and capacity were no object, and each of the requirements of a standard were relevant to at least one user need, then there would be a case to just implement the complete standard. This resource-unconstrained world is not one we often find ourselves in.)

How then do we secure the benefits of standards whilst adopting a sequenced publication of data given the resources available in a given context? This isn’t a solved problem: but in the mix are issues of measurement, indicators and incentive structures, as well as designing some degree of flexibility and defined implementation levels into standards themselves. Validation tools, guidance and templated processes all help in making sure data can deliver the direct outcomes that might motivate an implementer, whilst not cutting off indirect or alternative outcomes that have wider social value.
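As a sketch of what ‘implementation levels’ might look like in a validation tool (the field names and tiers here are invented for illustration, not drawn from any real standard), a validator can require a basic tier of fields while only warning about an extended tier, letting publishers sequence disclosure as capacity allows:

```python
# Sketch of tiered validation for a hypothetical data standard.
# "basic" fields are required; missing "extended" fields produce
# warnings only, so publication can be sequenced over time.

LEVELS = {
    "basic": ["contract_id", "buyer", "amount"],
    "extended": ["supplier_id", "award_date", "documents"],
}

def validate(record, levels=LEVELS):
    """Return (errors, warnings) for a record against the tiers."""
    errors = [f"missing required field: {f}"
              for f in levels["basic"] if f not in record]
    warnings = [f"extended field not yet published: {f}"
                for f in levels["extended"] if f not in record]
    return errors, warnings

record = {"contract_id": "C-001", "buyer": "Glendale", "amount": 12500}
errors, warnings = validate(record)
print(errors)    # [] – the record meets the basic level
print(warnings)  # three extended fields still to publish
```

The design choice being illustrated is simply that a standard can encode both a floor (what every implementation must publish) and a direction of travel (what a fuller implementation adds), rather than an all-or-nothing requirement.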

(I’m aware that I write this from a position of influence over a number of different data standards. So I have to also introspect on whether I’m just optimising for my own interests in placing the focus on standard design. I’m certainly concerned with the need to develop a clearer articulation of the interaction of policy and technical artefacts in this element of standard setting and implementation, in order to invite both more critique, and more creative problem solving, from a wider community. This somewhat densely written blog post clearly does not get there yet.)

Some preliminary conclusions

In thinking about open data as strategy, we can’t set rules for the relative influence that ‘global’ or ‘local’ factors should have in any decision making. However, the following propositions might act as starting point for decision making at different stages of an open data intervention:

  • Purpose should govern the choice of dataset to focus on
  • Standards should be the primary guide to the design of the datasets
  • User engagement should influence engagement activities ‘on top of’ published data to secure prioritised outcomes
  • New user needs should feed into standard extension and development
  • User engagement should shape the initiatives built on top of data

Some open questions

  • Are there existing theoretical frameworks that could help make more sense of this space?
  • Which metaphors and stories could make this more tangible?
  • Does it matter?