Three cross-cutting issues that UK data sharing proposals should address

[Summary: an extended discussion of issues arising from today’s UK data sharing open policymaking workshop]

I spend a lot of time thinking and writing about open data. But, as has often been said, not all of the data that government holds should be published as open data.

Certain registers and datasets managed by the state may contain, or be used to reveal, personally identifying and private information – justifying strong restrictions on how they are accessed and used. Many of the datasets governments collect, from tax records to detailed survey data collected for policy making and monitoring, fall into this category. However, the principle that data collected for one purpose might have a legitimate use in another context still applies to this data: one government department may be able to pursue its public task with data from another, and there are cases where public benefit is to be found from sharing data with academic and private sector researchers and innovators.

However, in the UK, the picture of which departments, agencies and levels of government can share which data with others (or outside of the state) is complex to say the least. When it comes to sharing personally identifying datasets, agencies need to rely on specific ‘legal gateways’, with certain major data holders such as HM Revenue and Customs bound by restrictive rules that may require explicit legislation to pass through parliament before specific data shares are permitted.

That’s ostensibly why the UK Government has been working for a number of years now on bringing forward new data sharing proposals – creating ‘permissive powers’ for cross-departmental and cross-agency data sharing, increasing the ease of data flows between national and local government, whilst increasing the clarity of safeguards against data misuse. Up until just before the last election, an Open Policy Making process, modelled broadly on the UK Open Government Partnership process, was taking place – resulting in a refined set of potential proposals relating to identifiable data sharing, data sharing for fraud reduction, and use of data for targeted public services. Today that process was re-started, with a view to a public consultation on updated proposals in the coming months.

However, although much progress has been made in refining proposals based on private sector and civil society feedback, the range of specific and somewhat disjointed proposals for new arrangements presented in today’s workshop suggests the process is still some way off from providing the kinds of clarification of the current regime that might be desirable. Missing from today’s discussions were clear cross-cutting mechanisms to build trust in government data sharing, and to establish the kind of secure data infrastructures that are needed for handling personal data sharing.

I want to suggest three areas that need to be more clearly addressed – all of which were raised in the 2014/15 Open Policymaking process, but which have been somewhat lost in the latest iterations of discussion.

1. Maximising impact, minimising the data shared

One of the most compelling cases for data sharing presented in today’s workshop was work to address fuel poverty by automatically giving low-income pensioners rebates on their fuel bills. Discussions suggested that since the automatic rebate was introduced, 50% more eligible recipients are getting the rebates – with the biggest beneficiaries being the most vulnerable, who were far less likely to apply for the rebates they were entitled to. With every degree drop in the temperature of a pensioner’s home correlating with increased hospital admissions, the argument for allowing the data share – and indeed for establishing a framework that lets the current arrangements be extended to others in fuel poverty (the current powers are in some way specific to pensioners’ data) – is clear.

However, this case is also one where the impact is accompanied by a process that results in minimal data actually being shared from government to the private companies who apply the rebates to individuals’ energy bills. All that is shared, in response to energy companies’ queries about each candidate on their customer list, is a flag for whether the individual is eligible for the rebate or not.

This kind of approach does not require the sharing of a bulk dataset of personally identifying information – it requires a transactional service that can provide the minimum certification required to indicate, with some reasonable level of confidence, that an individual has some relevant credentials. The idea of privacy protecting identity services which operate in this way is not new – yet the framing of the current data sharing discussion has tended to focus on ‘sharing datasets’ instead of constructing processes and technical systems which can be well governed, and still meet the vast majority of use-cases where data shares may be required.
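To make the ‘transactional service’ idea concrete, here is a minimal sketch (in Python) of what an eligibility-flag check might look like. Everything here – the register, the matching rule, the field names – is an illustrative assumption, not a description of the real rebate scheme’s systems; the point is simply that only a yes/no assertion crosses the boundary, and that every query to the service can be logged and audited while the underlying register never leaves the department.

```python
# Minimal sketch of an eligibility-flag service. The register, fields and
# matching rule are illustrative assumptions, not the real rebate process.

from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    name: str
    date_of_birth: str   # ISO 8601, e.g. "1947-03-12"
    postcode: str

# The department's own register stays private; only membership is tested.
_ELIGIBLE_REGISTER = {
    Candidate("A N Example", "1947-03-12", "OX4 1AA"),
}

def eligibility_flag(candidate: Candidate) -> bool:
    """Return a single yes/no flag for the supplier's query.

    No personal details, income data or benefit history are returned -
    just the assertion the energy company needs to apply the rebate.
    """
    return candidate in _ELIGIBLE_REGISTER

# An energy company submits its customer list and gets back flags only.
if __name__ == "__main__":
    customers = [
        Candidate("A N Example", "1947-03-12", "OX4 1AA"),
        Candidate("B Other", "1982-07-01", "SW1A 2AA"),
    ]
    for c in customers:
        print(c.name, "->", eligibility_flag(c))
```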

For example, when the General Records Office representative today posed the question of “In what circumstances would it be appropriate to share civil registration data (e.g. Birth, Adoption, Marriage and Death) information?”, the use-cases that surfaced were all to do with verification of identity: something that could be achieved much more safely by providing a digital service than by handing over datasets in bulk.

Indeed, approached as a question of systems design, rather than data sharing, the fight against fraud may in practice be better served by allowing citizens to digitally access their own civil registration information and to submit that as evidence in their transactions with government, helping narrow the number of cases where fraud may be occurring – and focussing investigative efforts more tightly, instead of chasing after problematic big data analysis approaches.

(Aside #1: As one participant in today’s workshop insightfully noted, there are thousands of valid marriages in the UK which are not civil marriages and so may not be present in Civil Registers. A big data approach that seeks to match records of who is married to records of households who have declared they are married, to identify fraudulent claims, is likely to flag these households wrongly, creating new forms of discrimination. By contrast, an approach that helps individuals submit their evidence to government allows such ‘edge cases’ to be factored in – recognising that many ‘facts’ about citizens are not easily reduced to simple database fields, and that giving an account of one’s self to the state is a performative act which should not be too readily sidelined.)

(Aside #2: The case of civil registers also illustrates an interesting and significant qualitative difference between public records and a bulk public dataset. Births, marriages and deaths are all ‘public events’: there is no right to keep them private, and they have long been recorded in registers which are open to inspection. However, when the model of access to these registers switches from focussed inspection, looking for a particular individual, to bulk access, they become possible to use in new ways – for example, creating a ‘primary key’ of individuals to which other data can be attached, eroding privacy in ways which were not possible when each record needed to be explored individually. The balance of benefits and harms from this qualitative change will vary from dataset to dataset. For example, I would strongly advocate the open sharing of company registers, including details of beneficial owners, both because of the public benefit of this data, and because registering a company is a public act involving a certain social contract. By contrast, I would be more cautious about the full disclosure of all civil registers, due to the different nature of the social contract involved, and the greater risk of vulnerable individuals being targeted through intentional or unintentional misuse of the data.)

All of which is a long way to say:

  • Where the cross-agency or cross-departmental use-cases for access to a particular dataset can be reduced to sharing assertions about individuals, rather than bulk datasets, this route should be explored first.

This does not remove the need for governance of both access and data use. However, it does ease the governance of access, and audit logs of access to a service are easier to manage than audit logs of what users in possession of a dataset have done.

Even the sharing of a ‘flag’ that can be applied to an individual’s data record needs careful thought: and those in receipt of such flags need to ensure they govern the use of that data carefully. For example, as one participant today noted, pensioners have raised fears that energy companies may use a ‘fuel poverty’ flag in their records to target them with advertising. Ensuring that later analysts in the company do not stumble upon the rebate figures in invoices, and feed this into profiling of customers, for example, will require very careful data governance – and it is not clear that companies’ practices are robust enough to protect against this right now.

2. Algorithmic transparency

Last year the Detroit Digital Justice Coalition produced a great little zine called ‘Opening Data’ which takes a practical look at some of the opportunities and challenges of open data use. They look at how data is used to profile communities, and how the classifications and clustering approaches applied to data can create categories that may be skewed and biased against particular groups, or that reinforce rather than challenge social divides (see pg 30 onwards). The same issues apply to data sharing.

Whilst current data protection legislation gives citizens a right to access and correct information about themselves, the algorithms used to process that data, and derive analysis from it are rarely shared or open to adequate scrutiny.

In the process of establishing new frameworks for data sharing, the algorithms used to process that data should be brought into view as much as the datasets themselves.

If, for example, someone is offered a targeted public service, or targeted in a fraud investigation, there is a question to be explored of whether they should be told which datasets, and which algorithms, led to them being selected. This, and associated transparency, could help to surface unseen biases which might otherwise lead to particular groups being unfairly targeted (or missed) by analysis. Transparency is no panacea, but it plays an important role as a safeguard.

3. Systematic transparency of sharing arrangements

On the theme of transparency, many of the proposals discussed today mentioned oversight groups, Privacy Impact Assessments, and publication of information on either those in receipt of shared data, or those refused access to datasets – yet across the piece no systematic framework for this was put forward.

This is an issue Reuben Binns and I wrote about in 2014, putting forward a proposal for a common standard for disclosure of data sharing arrangements that, in its strongest form, would require (a rough sketch of what such a notice might look like follows this list):

  • Structured data on origin, destination, purpose, legal framework and timescales for sharing;
  • Publication of Privacy Impact Assessments and other associated documents;
  • Notices published through a common venue (such as the Gazette) in a timely fashion;
  • Consultation windows where relevant before a power comes into force;
  • Sharing to only be legally valid when the notice has been published.
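To illustrate, here is a rough sketch of the kind of structured notice this could imply, expressed as a simple Python dictionary. The field names, values and URLs are all hypothetical – no schema of this kind has been agreed – but they show how origin, destination, purpose, legal gateway, timescales and supporting documents might be captured in a single machine-readable record.

```python
# Illustrative sketch only: a structured notice for a single data sharing
# arrangement. Field names and values are hypothetical, not an agreed schema.

import json

sharing_notice = {
    "origin": "HM Revenue and Customs",
    "destination": "Department for Work and Pensions",
    "purpose": "Identifying pensioners eligible for fuel poverty rebates",
    "legal_gateway": "Specific power under [relevant Act and section]",
    "date_from": "2016-04-01",
    "date_to": "2018-03-31",
    "privacy_impact_assessment_url": "https://example.gov.uk/pia/1234",
    "consultation_window_closes": "2016-02-15",
    "notice_published": "2016-01-20",
}

print(json.dumps(sharing_notice, indent=2))
```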

Without such a framework, we are likely to end up with the current confused system in which no-one knows which shares are in place, how they are being used, and which legal gateways are functioning well or not. With a scattered set of spreadsheets and web pages listing approved sharing, citizens have no hope of understanding how their data is being used.

If only one of the above issues can be addressed in the upcoming consultation on data sharing, I hope it is this one: putting in place the missing piece of a robust common framework through which the transparency principles of data sharing can be put into practice.

Towards a well governed infrastructure?

Ultimately, the discussion of data sharing is a discussion about one aspect of our national data infrastructure. There has been a lot of smart work going on, both inside and outside government, on issues such as identity assurance, differential privacy, and identifying core derived datasets which should be available as open data to bypass the need for sharing gateways. A truly effective data sharing agenda needs to link with these to ensure it is neither creating over-broad powers which are open to abuse, nor establishing a new web of complex and hard to operate gateways.

Further reading

My thinking on these issues has been shaped in part by inputs from the following:

Data & Discrimination – Collected Essays

White House Report on Big Data, and associated papers/notes from The Social, Cultural & Ethical Dimensions of “Big Data.” conference

A distributed approach to co-operative data

[Summary: rough notes from a workshop on cooperative sector data.]

Principle 6 of the International Co-operative Alliance calls for ‘co-operation amongst co-operatives’. Yet, for many co-ops, finding other worker owned businesses to work with can be challenging. Although there are over 7,000 co-operatives in the UK, and many more worldwide, it can be hard to find out much about them.

This was one of the key drivers behind a convening at the Old Music Hall in Oxford just before Christmas where cooperators from the Institute for Solidarity Economics, Open Data Services Co-operative, Coops UK and Transformap gathered to explore opportunities for ‘Principle 6 Data’: open data to build up a clearer picture of the co-operative economy.

We started out articulating different challenges to be explored through the day, including:

  • Helping researchers better understand the co-operative sector. With co-ops employing thousands of people, and co-operatives adding £37bn to the UK economy last year, having a clearer picture of where they operate, what they do and how they work is vital. Yet information is scarce. For researchers at the Institute for Solidarity Economics, there is a need to dig beyond headline organisation types to understand how the activities of organisations contribute to worker owned, social impact enterprise.

  • Support trade between co-operatives. For example, earlier this year when we were planning a face-to-face gathering of Open Data Services Co-op we tried to find co-operatively run venues to use, and we’ve been trying to understand where else we could support co-ops in our supply chain. Whilst Coops UK provide a directory of co-operatives, it is focussed on business-to-consumer, not business-to-business information.

  • Enabling distributed information management on co-ops. Right now, the best dataset we have for the UK comes from Coops UK, the membership body for the UK sector, who hold information on 7000 or so co-operatives, built up over the years from various sources. They have recently released some of this as open data, and are keen to build on the dataset in future. Yet if it can only be updated via Coops UK this creates a bottleneck to the creation of richer data resources.

My Open Data Services colleague, Edafe Onerhime, did some great work looking at the existing Coops UK dataset, which is written up here, and Dan from ISE explored ways of getting microformat markup into the Institute for Solidarity Economics website to expose more structured data about the organisation, including the gender profile of the workforce. We also took a look at whether data from the .coop domain registry might provide insights into the sector, and set about exploring whether microformats were already in use on any of the websites of UK co-operatives.

Building on these experiments, we came to an exploration of potential social, organisational and technical challenges ahead if we want to see a distributed approach to greater data publication on the co-op sector. Ultimately, this boiled down to a few key questions:

  • How can co-operatives be encouraged to share more structured data on their activities?

  • How can the different data needs of different users be met?

  • How can that data be fed into different data-driven projects for research, or cooperative collaboration?

There are various elements to addressing these questions.

There is a standards element: identifying the different kinds of things about co-operatives that different users may want to know about, and looking for standards to model these. For example, alongside the basic details of registered organisations and their turnover collected for the co-operative economic report, business-to-business use cases may be interested in branch locations and product/service offerings from co-ops, and solidarity economics research may be interested in the different value commitments a co-operative has, and details of its democratic governance. We looked at how certifications, from formal Fairtrade certifications for products of a co-op, to social certifications where someone a user trusts vouches for an organisation, might be an important part of the picture also.

For many of the features of a cooperative that are of interest, common data standards already exist, such as those provided by schema.org. Although these need to be approached critically, they provide a pragmatic starting point for standardisation; an example using the Coops UK Co-operative Economy dataset can be seen here.
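As an illustration, the sketch below shows how a single co-operative might describe itself with schema.org terms, serialised as JSON-LD (here built as a Python dictionary). The organisation and all of its details are invented, and the choice of properties is just one plausible mapping rather than an agreed sector standard.

```python
# A minimal sketch of how a co-operative might describe itself using
# schema.org terms, serialised as JSON-LD for embedding in its own website.
# The organisation and values are invented; the property choices are one
# plausible mapping, not an agreed standard for the sector.

import json

coop_description = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Workers Co-operative Ltd",
    "url": "https://example.coop",
    "description": "A worker-owned co-operative providing data services.",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Oxford",
        "addressCountry": "GB",
    },
    "numberOfEmployees": 6,
    "memberOf": {
        "@type": "Organization",
        "name": "Co-operatives UK",
    },
}

# The resulting block can be placed in a <script type="application/ld+json">
# tag on the co-op's own site, where aggregators can pick it up.
print(json.dumps(coop_description, indent=2))
```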

There is a process element of working out how data that co-operatives might publish using common standards will be validated, aggregated and made into shared datasets. Here, we looked at how an annual process of data collection, such as the UK Co-operative Economy report might bootstrap a process of distributed data publishing.

Imagine a platform where co-operatives are offered three options to provide their data into the annual co-operative economy report:

  1. Fill in a form manually;

  2. Publish a spreadsheet of key data to your own website;

  3. Embed JSON-LD / microformat data in your own website;

Although options 2 and 3 are more technically complex, they can provide richer, more open and re-usable data, and a platform can explain the advantages of taking the extra steps involved. Moving co-operatives from option 1 to option 2 can be bootstrapped by the form-driven process generating a spreadsheet for co-ops to publish on their own sites at a set location, and then encouraging them to update those sheets in year 2.
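As a sketch of that bootstrapping step, the snippet below turns a set of form answers into a small CSV file a co-op could host on its own site. The column names and the suggested publishing location are illustrative assumptions, not part of any existing Coops UK process.

```python
# Sketch of the bootstrapping step described above: turning a co-op's form
# answers into a small CSV it can host on its own site, so that next year's
# data collection can read it back from a known location. Column names and
# the suggested path are illustrative assumptions.

import csv

form_answers = {
    "registered_name": "Example Workers Co-operative Ltd",
    "registration_number": "IP012345",
    "turnover_gbp": 250000,
    "members": 6,
    "website": "https://example.coop",
}

# Suggested convention: publish at https://<coop-site>/open-data/coop-economy.csv
with open("coop-economy.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(form_answers.keys())
    writer.writerow(form_answers.values())
```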

With good interaction design, and a feedback loop that helps validate data quality and show the collective benefits of providing additional data, such a platform could provide the practical component of a campaign for open data publication and use by co-ops.

This points to the crucial **social element**: building community around the open data process, and making sure it is not a technical exercise, but one that meets real needs.

Did we end the day with a clear picture of where next for co-op sector data? Not yet. But it was clear that all the groups participating will continue to explore this space into 2016, and we’ll be looking for more opportunities to collaborate together.

Principles for responding to mass surveillance and the draft Investigatory Powers Bill

[Summary: notes written up on the train back from Paris & London, and following a meeting with Open Rights Group focussing on the draft Investigatory Powers Bill]

It can be hard to navigate the surveillance debate. On the one hand, whistleblower revelations, notably those from Edward Snowden, have revealed the way in which states are amassing collections of mass communications data, creating new regimes of deeply intrusive algorithmic surveillance, and unsettling the balance of power between citizens, officials and overseers in politics and the media. On the other, as recent events in Paris, London, the US and right across the world have brought into sharp focus, there are very real threats to life and liberty posed by non-state terrorist actors – and meeting the risks posed must surely involve the security services.

Fortunately, rather than following the common pattern of rushed legislative proposals in the wake of terrorist attacks, the UK has, since the attacks in Paris, kept to the planned timetable for debate of the proposed Investigatory Powers Bill.

The Bill primarily works to put on a legal footing many of the actions that surveillance agencies have already been engaged in when it comes to bulk data collection and bulk hacking of services (equipment interference, and obtaining data). But the Bill also proposes a number of further extensions of powers, including provisions to mandate storage of ‘Internet Connection Records’ – branded as creating a ‘snoopers charter’ in media debates because of the potential for law enforcement and other government agencies to gain access to this detailed information on individuals’ web browsing histories.

Page 33 of the draft includes a handy ‘Investigatory Powers at a Glance’ table, setting out who will have access to Communications Data, powers of Interception and Bulk Datasets – and what the access and oversight processes might be.

[Image: ‘Investigatory Powers at a Glance’ table from the draft Bill]

Reading through the case for new powers put in the preamble to the Bill, it is important to critically unpack the claims made. For example, point 47 notes that “From a sample of 6025 referrals to the Child Exploitation and Online Protection Command (CEOP) of the NCA, 862 (14%) cannot be progressed”. The document extrapolates from this “a minimum of 862 suspected paedophiles, involved in the distribution of indecent imagery of children, who cannot be identified without this legislation.”, yet this is premised on the proposed storage of Internet Connection Records being a ‘magic bullet’ that would secure investigation of all these suspects. In reality, the number may be much lower.

Yet getting drawn into a calculus of costs and benefits, trading off the benefits of protection for one group against the harms of surveillance to another, is a tricky business, and unlikely to create a well reasoned surveillance debate. As a society, we’re generally not very good at reasoning where risks are involved. And there will always be polarisation between those who weight apparently opposing goods (security/liberty?) particularly highly.

The alternative to this cost/benefit calculus is to develop a response based on principles. Principles we can check against evidence, but clear guiding principles nonetheless.

Here’s my first attempt at four principles to consider in exploring how to respond to the Investigatory Powers Bill:

(1) Data minimisation without suspicion. We should collect and store the minimum possible amount of data about individuals where there is no reason to suspect the threat of harm to others, or of serious crime.

This point builds upon both principles and pragmatism. Individuals should be innocent until proven guilty, and space for individual freedom of thought and action respected. Equally, surveillance services need more signal, not more noise.

When it comes to addressing terrorism, creating an environment in which whole communities feel subject to mass surveillance is an entirely counterproductive strategy: undermining rather than promoting the liberal values we must work to protect.

(2) Data maximisation with suspicion. Where there is suspicion of individuals posing a threat, or of serious crime, then proportionate surveillance is justified, and should be pursued.

As far as I understand, few disagree with targeted surveillance. Unlike mass surveillance, targeted approaches can be intelligence-led rather than algorithmically led, and more tightly connect information collection, analysis and consideration of the actions that can be taken against those who pose threats to society.

(3) Strong scrutiny. Sustained independent oversight of secret services is hard to achieve – but is vital to ensure targeted surveillance capabilities are used responsibly, and to balance the power this gives to those who wield them.

The current Investigatory Powers Bill includes notable scrutiny loopholes, in which once issued, a Warrant can be modified to include new targets without new review and oversight.

(4) A safe Internet. Bulk efforts to undermine encryption and Internet security are extremely risky. Our societies rely upon a robust Internet, and it is important for governments to be working to make the network stronger for all.


Of course, putting principles into practice involves trade-offs. But identifying principles is an important starting point for a deeper debate.

Do these principles work for you? I’ll be reflecting more on whether they capture enough to provide a route through the debate, and what their implications are for responding to the Investigatory Powers Bill in the coming months.

(P.S. If you care about the future of the Investigatory Powers Bill in the UK, and you are not already a member of the Open Rights Group – do consider joining to support their work as one of very few dedicated groups focussing on promoting digital freedoms in this debate.

Disclosure: I’m a member of the ORG Advisory Council)

Is Generation Open Growing Up? ODI Summit 2015

[Summary: previewing the upcoming Open Data Institute Summit (discount registration link)]


In just over two weeks’ time the Open Data Institute will be convening their second ‘ODI Summit‘ conference, under the banner ‘Celebrating Generation Open’.

The framing is broad, and rich in ideals:

“Global citizens who embrace network thinking

We are innovators and entrepreneurs, customers and citizens, students and parents who embrace network thinking. We are not bound by age, income or borders. We exist online and in every country, company, school and community.

Our attitudes are built on open culture. We expect everything to be accessible: an open web, open source, open cities, open government, open data. We believe in freedom to connect, freedom to travel, freedom to share and freedom to trade. Anyone can publish, anyone can broadcast, anyone can sell things, anyone can learn and everyone can share.

With this open mindset we transform sectors around the world, from business to art, by promoting transparency, accessibility, innovation and collaboration.”

But it’s not just idealistic language. Right across the programme are projects which are putting those ideals into action in concrete ways. I’m fortunate to get to spend some of my time working with a number of the projects and people who will be presenting their work.

Plus, my fellow co-founder at Open Data Services Co-operative, Ben Webb, will be speaking on some of the work we’ve been doing to support Open Contracting, 360Giving and projects with the Natural Resource Governance Institute.

Across the rest of the Summit there are also presentations on open data in arts, transport, biomedical research, journalism and safer surfing, to name just a few.

What is striking about this line up is that very few of these presentations will be one-off demonstrations: most will be sharing increasingly mature projects – and increasingly diverse ones, which recognise that data is just one element of a theory of change, and that being embedded in specific sectoral debates and action is just as important.

In some ways, it raises the question of how much a conference on open data in general can hold together: with so many different domains represented, is open data a strong enough thread to bind them together? On this question, I’m looking forward to Becky Hogge’s reflections when she launches a new piece of research at the Summit, five years on from her widely cited Open Data Study. In a preview of her new report, Becky argues that “It’s time for the open data community to stop playing nice” – moving away from trying to tie together divergent economic and political agendas, and putting full focus into securing and using data for specific change.

With ‘generation open’ announced, the question for us is how generation open copes with growing up. As the projects showcased at the summit move beyond the rhetoric, and we see that whilst in theory ‘anyone can do anything’ with data, in practice access and ability are unequally distributed – how will debates over the ends to which we use the freedoms brought by ‘open’ play out?

Let’s see.


I’ll be blogging on the ideas and debates at the summit, as the folk at ODI have kindly invited Open Data Services as a media supporter. As a result they’ve also given me this link to share, which will get anyone still to book 20% off their tickets. Perhaps see you there.

Data, openness, community ownership and the commons

[Summary: reflections on responses to the GODAN discussion paper on agricultural open data, ownership and the commons – posted ahead of Africa Open Data Conference GODAN sessions]

Photo Credit: CC-BY South Africa Tourism

Key points

  • We need to distinguish between claims to data ownership, and claims to be a stakeholder in a dataset;
  • Ownership is a relevant concept for a limited range of datasets;
  • Openness can be a positive strategy, empowering farmers vis-a-vis large corporate interests;
  • Openness is not universally good: can also be used as a ‘data grab’ strategy;
  • We need to think critically about the configurations of openness we are promoting;
  • Commons and cooperative based strategies for managing data and open data are a key area for further exploration;

Open or owned data?

Following the publication of a discussion paper by the ODI for the Global Open Data for Agriculture and Nutrition initiative, putting forward a case for how open data can help improve agriculture, food and nutrition, debate has been growing about how open data should be approached in the context of smallholder agriculture. In this post, I explore some provisional reflections on that debate.

Respondents to the paper have pointed to the way in which, in situations of unequal power, and in complex global markets, greater accessibility of data can have substantial downsides for farmers. For example, commodity speculation based on open weather data can drive up food prices, or open data on soil profiles can be used to extract greater margins from farmers when selling fertilizers. A number of responses to the ODI paper have noted that much of the information that feeds into emerging models of data-driven agriculture is coming from small-scale farmers themselves: whether through statistical collection by governments, or hoovered up by providers of farming technology, all aggregated into big datasets that are practically inaccessible to local communities and farmers.

This has led some, in response, to focus on the concept of data ownership: asserting that more emphasis should be placed on community ownership of the data generated at a local level. Equally, it has led to the argument that “opening data without enabling effective, equitable use can be considered a form of piracy”, making direct allusions to the biopiracy debate and the consequent responses to such concerns in the form of interventions such as the International Treaty on Plant Genetic Resources.

There are valid concerns here. Efforts to open up data must be interrogated to understand which actors stand to benefit, and to identify whether the configuration of openness sought is one that will promote the outcomes claimed. However, claims of data ownership and data sovereignty need to be taken as a starting point for designing better configurations of openness, rather than as a blocking counter-claim to ideas of open data.

Community ownership and openness

My thinking on this topic is shaped, albeit not to a set conclusion, by a debate that took place last year at a Berkman Centre Fellows Hour based on a presentation by Pushpa Kumar Lakshmanan on the Nagoya Protocol which sets out a framework for community ownership and control over genetic resources.

The debate raised the tension between the rights of communities to gain benefits from the resources and knowledge that they have stewarded, potentially over centuries, with an open knowledge approach that argues social progress is better served when knowledge is freely shared.

It also raised important questions of how communities can be demarcated (a long-standing and challenging issue in the philosophy of community rights) – and whether drawing a boundary to protect a community from external exploitation risks leaving internal patterns of power and exploitation within the community unexplored. For example, might community ownership of data in practice lead to certain elites in the community controlling it?

Ultimately, the debate taps into a conflict between those who see the greatest risk as being the exploitation of local communities by powerful economic actors, and those who see the greater risk as a conservative hoarding of knowledge in local communities in ways that inhibit important collective progress.

Exploring ownership claims

It is useful to note that much of the work on the Nagoya Protocol that Pushpa described was centred on controlling borders to regulate the physical transfer of plant genetic material. Thinking about rights over intangible data raises a whole new set of issues: ownership cannot just be filtered through a lens of possession and physical control.

Much data is relational. That is to say that it represents a relationship between two parties, or represents objects that may stand in ownership relationships with different parties. For example, in his response to the GODAN paper, Ajit Maru reports how “John Deere now considers its tractors and other equipment as legally ‘software’ and not a machine… [and] claims [this] gives them the right to use data generated as ‘feedback’ from their machinery”. Yet this data about a tractor’s operation is also data about the farmer’s land, crops and work. The same kinds of ‘trade data for service’ concerns that have long been discussed with reference to social media websites are becoming an increasing part of the agriculture world. The concern here is with a kind of corporate data-grab, in which firms extract data, asserting their absolute ownership over something which is primarily generated by the farmer, and which is at best a co-production of farmer and firm.

It is in response to this kind of situation that grassroots data ownership claims are made.

These ownership claims can vary in strength. For example:

  • The first runs that ‘this is my data’, and I should have ultimate control over how it is used, and the ability to treat it as a personally held asset;

  • The second runs that ‘I have a stake in this data’, and as a consequence, I should have access to it, and a say in how it is used;

Which claim is relevant depends very much on the nature of the data. For example, we might allow ownership claims over data about the self (personal data), and the direct property of an individual. For datasets that are more clearly relational, or collectively owned (for example, local statistics collected by agricultural extension workers, or weather data funded by taxation), the stakeholding claim is the more relevant.

It is important at this point to note that not all (perhaps even not many) concerns about the potential misuse of data can be dealt with effectively through a property right regime. Uses of data to abuse privacy, or to speculate and manipulate markets may be much better dealt with by regulations and prohibitions on those activities, rather than attempts to restrict the flow of data through assertions of data ownership.

Openness as a strategy

Once we know whether we are dealing with ownership claims, or stakeholding claims, in data, we can start thinking about different strategic configurations of openness, that take into account power relationships, and that seek to balance protection against exploitation, with the benefits that can come from collaboration and sharing.

For example, each farmer on their own has limited power vis-a-vis a high-tech tractor maker like John Deere. Even if they can assert a right to access their own data, John Deere will most likely retain the power to aggregate data from thousands of farmers, maintaining an inequality of access to data vis-a-vis the farmer. If the farmer seeks to deny John Deere the right to aggregate their data with that of others, chances are that (a) they will be unsuccessful, as making an absolute ownership claim here is difficult – using the tractor was a choice after all; and (b) they will potentially inhibit useful research and use of data that could improve cropping (even if some of the other uses of the data may run counter to the farmer’s interests). Some have suggested that creating a market in the data, where the data aggregator would pay farmers for the ability to use their data, offers an alternative path here: but it is not clear that the price would compensate the farmer adequately, or lead to an efficient re-use of data.

However, in this setting openness potentially offers an alternative strategy. If farmers argue that they will only give data to John Deere if John Deere makes the aggregated data open, then they have the chance to challenge the asymmetry of power that otherwise develops. A range of actors and intermediaries can then use this data to provide services in the interests of the farmers. Both the technology provider, and the farmer, get access to the data in which they are both stakeholders.

This strategy (“I’ll give you data only if you make the aggregate set of data you gather open”), may require collective action from farmers. This may be the kind of arrangement GODAN can play a role in brokering, particularly as it may also turn out to be in the interest of the firm as well. Information economics has demonstrated how firms often under-share information which, if open, could lead to an expansion of the overall market and better equilibria in which, rather than a zero-sum game, there are benefits to be shared amongst market actors.

There will, however, be cases in which the power imbalances between data providers and those who could exploit the data are too large. For example, the above discussion assumes intermediaries will emerge who can help make effective use of aggregated data in the interests of farmers. Sometimes (a) the greatest use will need to be based on analysis of disaggregated data, which cannot be released openly; and (b) data providers need to find ways to work together to make use of data. In these cases, there may be a lot to learn from the history of commons and co-operative structures in the agricultural realm.

Co-operative and commons based strategies

Many discussions of openness conflate the concept of openness, and the concept of the commons. Yet there is an important distinction. Put crudely:

  • Open = anyone is free to use/re-use a resource;
  • Commons = mutual rights and responsibilities towards the resource;

In the context of digital works, Creative Commons provide a suite of licenses for content, some of which are ‘open’ (they place no responsibilities on users of a resource, but grant broad rights), and others of which adopt a more regulated commons approach, placing certain obligations on re-users of a document, photo or dataset, such as the responsibility to attribute the source, and share any derivative work under the same terms.

The Creative Commons draws upon imagery from the physical commons. These commons were often in the form of land over which farmers held certain rights to graze cattle, or of fisheries in which each fisher took shared responsibility for avoiding overfishing. Such commons are, in practice, highly regulated spaces – but ones that seek to pursue an approach based on sharing and stakeholding in resources, rather than absolute ownership claims. As we think about data resources in agriculture, reflecting more on learning from the commons is likely to prove fruitful. Of course, data, unlike land, is not finite in the same ways, nor does it have the same properties of excludability and rivalrousness.

In thinking about how to manage data commons, we might look towards another feature prevalent in agricultural production: that of the cooperative. The core idea of a data cooperative is that data can be held in trust by a body collectively owned by those who contribute the data. Such data cooperatives could help manage the boundary between data that is made open at some suitable level of aggregation, and data that is analysed and used to generate products of use to those contributing the data.

With Open Data Services Co-operative I’ve just started to dig more into learning about the cooperative movement: co-founding a workers cooperative that supports open data projects. However, we’ve also been thinking about how data cooperatives might work – and I’m certain there is scope for a lot more work in this area, helping deal with some of the critical questions that have come up for open data from the GODAN discussion paper.

Enabling the Data Revolution: IODC 2015 Conference Report

The International Open Data Conference in Ottawa in May this year brought together over 200 speakers and close to 1000 in-person attendees to explore the open data landscape. I had the great privilege of working with the conference team to co-ordinate a series of sessions designed to weave together discussions from across the conference into a series of proposals for action, supporting shared action to take forward a progressive open data agenda. From the Open Data Research Symposium and Data Standards Day and other pre-events, to the impact presentations, panel discussions and individual action track sessions, a wealth of ideas were introduced and explored.

Since the conference, we’ve been hard at work on a synthesis of the conference discussions, drawing on over 30 hours of video coverage, hundreds of slide decks and blog posts, and thousands of tweets, to capture some of the key issues discussed, and to put together a roadmap of priority areas for action.

The result has just been published in English and French as a report for download, and as an interactive copy on Fold: embedding video and links alongside the report section by section.

Weaving it together

The report was only made possible through the work of a team of volunteers – acting as rapporteurs for each session and blogging their reflections – and session organisers, preparing provocation blog posts in advance. That meant that in working to produce a synthesis of the different conference discussions I not only had video recordings and tweets from most sessions, but I also had diverse views and take-away insights written up by different participants, ensuring that the report was not just about what I took from the conference materials – but that it was shaped by different delegates’ views. In the Fold version of the report I’ve tried to link out to the recordings and blog posts to provide extra context in many sections – particularly in the ‘Data Plus’ section, which covers open data in a range of contexts, from agriculture, to fiscal transparency and indigenous rights.

One of the most interesting, and challenging, sections of the report to compile has been the Roadmap for Action. The preparation for this began long in advance of the International Open Data Conference. Based on submissions to the conference open call, a set of action areas were identified. We then recruited a team of ‘action anchors’ to help shape inputs, provocations and conference workshops that could build upon the debates and case studies shared at the conference and its pre-events, and then look forward to set out an agenda for future collaboration and action in these areas. This process surfaced ideas for action at many different levels: from big-picture programmes, to small and focussed collaborative projects. In some areas, the conference could focus on socialising existing concrete proposals. In other areas, the need has been for moving towards shared vision, even if the exact next steps on the path there are not yet clear.

The agenda for action

Ultimately, in the report, the eight action areas explored at IODC2015 are boiled down to five headline categories in the final chapter, each with a couple of detailed actions underneath:

  • Shared principles for open data: “Common, fundamental principles are vital in order to unlock a sustainable supply of high quality open data, and to create the foundations for inclusive and effective open data use. The International Open Data Charter will provide principles for open data policy, relevant to governments at all levels of development and supported by implementation resources and working groups.”
  • Good practices and open standards for data publication: “Standards groups must work together for joined up, interoperable data, and must focus on priority practices rooted in user needs. Data publishers must work to identify and adopt shared standards and remove the technology and policy barriers that are frequently preventing data reuse.”
  • Building capacity to produce and use open data effectively: “Government open data leaders need increased opportunities for networking and peer-learning. Models are needed to support private sector and civil society open data champions in working to unlock the economic and social potential of open data. Work is needed to identify and embed core competencies for working with open data within existing organizational training, formal education, and informal learning programs.”
  • Strengthening open data innovation networks: “Investment, support, and strategic action is needed to scale social and economic open data innovations that work. Organizations should commit to using open data strategies in addressing key sectoral challenges. Open data innovation networks and thematic collaborations in areas such as health, agriculture, and parliamentary openness will facilitate the spread of ideas, tools, and skills— supporting context-aware and high-impact innovation exchange.”
  • Adopting common measurement and evaluation tools: “Researchers should work together to avoid duplication, to increase the rigour of open data assessments, and to build a shared, contextualized, evidence base on what works. Reusable methodological tools that measure the supply, use, and outcomes of open data are vital. To ensure the data revolution delivers open data, open data assessment methods must also be embedded within domain-specific surveys, including assessments of national statistical data. All stakeholders should work to monitor and evaluate their open data activities, contributing to research and shared learning on securing the greatest social impact for an open data revolution.”

In the full report, more detailed actions are presented in each of these categories. The true test of the roadmap will come with the 2016 International Open Data Conference, where we will be able to look at progress made in each of these areas, and to see whether action on open data is meeting the challenge of securing increased impact, sustainability and inclusiveness.

Getting the incentives right: an IATI enquiry service?

[Summary: Brief notes exploring a strategic and service-based approach to improve IATI data quality]

Filed under: rough ideas

At the International Aid Transparency Initiative (IATI) Technical Advisory Group meeting (#tag2015) in Ottawa last week I took part in two sessions exploring the need for Application Programming Interfaces (APIs) onto IATI data. It quickly became clear that there were two challenges to address:

(1) Many of the questions people around the table were asking were complex queries, not the simple data retrieval kinds of questions that an API is well suited to;

(2) ‘Out of the box’ IATI data is often not able to answer the kinds of questions being asked, either because

  • (a) the quality and consistency of data from distributed sources means that there are a range of special cases to handle when performing cross-donor analysis;
  • (b) the questions asked invite additional data preparation, such as currency conversion, or identifying a block of codes that relate to a particular sector (e.g. identifying all the Water and Sanitation related codes)

These challenges also underlie the wider issue explored at TAG2015: that even though five years of effort have gone into data supply, few people are actually using IATI data day-to-day.

If the goal of the International Aid Transparency Initiative as a whole, distinct from the specific goal of securing data, is more informed decision making in the sector, then this got me thinking about the extent to which what we need right now is a primary focus on services rather than data and tools. And from that, thinking about whether intelligent funding of such services could lead to the right kinds of pressures for improving data quality.

Improving data through enquiries

Using any dataset to answer complex questions takes both domain knowledge, and knowledge of the data. Development agencies might have lots of one-off and ongoing questions, from “Which donors are spending on Agriculture and Nutrition in East Africa?”, to “What pipeline projects are planned in the next six months affecting women and children in Least Developed Countries?”. Against a suitably cleaned up IATI dataset, reasonable answers to questions like these could be generated with carefully written queries. Authoritative answers might require further cleaning and analysis of the data retrieved.
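To make the contrast concrete, here is an illustrative sketch of the kind of query an enquiry service might run once the data has been cleaned and flattened into a single table. The column names, sector groupings and figures are invented for the example – real IATI data would first need currency conversion and careful handling of the sector code lists – but the query itself is only a few lines.

```python
# Illustrative only: the kind of query an enquiry service might run against
# a cleaned-up, flattened table of IATI activities. Column names, sector
# groupings and figures are assumptions for the sketch, not the IATI schema.

import pandas as pd

activities = pd.DataFrame([
    {"reporting_org": "Donor A", "recipient_region": "East Africa",
     "sector_group": "Agriculture", "commitment_usd": 1_200_000},
    {"reporting_org": "Donor B", "recipient_region": "East Africa",
     "sector_group": "Nutrition", "commitment_usd": 450_000},
    {"reporting_org": "Donor A", "recipient_region": "West Africa",
     "sector_group": "Health", "commitment_usd": 900_000},
])

# "Which donors are spending on Agriculture and Nutrition in East Africa?"
answer = (
    activities[
        activities["recipient_region"].eq("East Africa")
        & activities["sector_group"].isin(["Agriculture", "Nutrition"])
    ]
    .groupby("reporting_org")["commitment_usd"]
    .sum()
)
print(answer)
```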

For someone working with a dataset every day, such queries might take anything from a few minutes to a few hours to develop and execute. Cleaning data to provide authoritative answers might take a bit longer.

For a programme officer, who has the question, but not the knowledge of the data structures, working out how to answer these questions might take days. In fact, the learning curve will mean often these questions are simply not asked. Yet, having the answers could save months, and $millions.

So – what if key donors sponsored an enquiries service that could answer these kinds of queries on demand? With the right funding structure, it could have incentives not only to provide better data on request, but also to put resources into improving data quality and tooling. For example: if there is a set price paid per enquiry successfully answered, and the cost of answering that enquiry is increased by poor data quality from publishers, then there can be an incentive on the service to invest some of its time in improving incoming data quality. How to prioritise such investments would be directly connected to user demand: if all the questions are made trickier to answer because of a particular donor’s data, then focussing on improving that data first makes most sense. This helps escape the current situation in which the goal is to seek perfection for all data. Beyond a certain point, the political pressures to publish may cease to work to increase data quality, whereas requests to improve data that are directly connected to user demand and questions may have greater traction.

Of course, the incentive structures here are subtle: the quickest solution for an enquiry service might be to clean up data as it comes into its own data store, rather than trying to improve data at source – and there remains a desire in open data projects to avoid creating single centralised databases, and to increase the resiliency of the ecosystem by improving original open data, which would oppose this strategy. This would need to be worked through in any full proposal.

I’m not sure what appetite there would be for a service like this – but I’m certain that in, what are ultimately niche open data ecosystems like IATI, strategic interventions will be needed to build the markets, services and feedback loops that lead to their survival.

Comments and reflection welcome

#CODS15: Trends and attitudes in open data

[Summary: sharing slides from talk at Canadian Open Data Summit]

The lovely folks at Open North were kind enough to invite me to give some opening remarks at the Canadian Open Data Summit in Ottawa today. The subject I was set was ‘trends and attitudes in the global open data community’ – and so I tried to pick up on five themes I’ve been observing and reflecting on recently. The slides from my talk are below (or here), and I’ve jotted down a few fragmentary notes that go along with them (and represent some of what I said, and some of what I meant to say [check against delivery etc.]). There’s also a great take on some of the themes I explored, and that developed in the subsequent panel, in the Open Government Podcast recap here.

(These notes are numbered for each of the key frames in the slide deck. You can move horizontally through the deck with the right arrow, or through each section with the down arrow. Hit escape when viewing the deck to get an overview. Or just hit space bar to go through as I did when presenting…)

(1) I’m Tim. I’ve been following the open data field as both a practitioner and a social researcher over the last five years. Much of this work as part of my PhD studies, and through my time as a fellow and affiliate at the Berkman Centre.

(2) First let’s get out the way the ‘trends’ that often get talked about somewhat breathlessly: the rapid growth of open data from niche idea, to part of the policy mainstream. I want to look at five more critical trends, emerging now, and to look at their future.

(3) First trend: the move from engagement with open data to solve problems, to a focus on infrastructure building – and the need to complete a cyclical move back again. Most people I know got interested in open data because of a practical issue, often a political issue, where they wanted data. The data wasn’t there, so they joined action to make it available. This can cycle into ongoing work on building the infrastructure of data needed to solve a problem – but there is a risk that the original problems get lost – and energy goes into infrastructure alone. There is a growing discourse about reconnecting to action. Key is to recognise data as problem solving, and data infrastructure building, as two distinct forms of open data action, complementary, but also in creative tension.

(4) Second trend: there are many forms of open data initiative, and growing data divides. For more on this, see the Open Data Barometer 2015 report, and this comparison of policies across six countries. Canada was up 1 place in the rankings from the first to second editions of the ODB. But that mainly looks at a standard model of doing open data. Too often we’re exporting an idea of open data based on ‘Data Portal + License + Developers & Apps = Open Data Initiative’ – but we need to recognise that there are many different ways to grow an open data initiative, and activity – and to be opening up space for a new wave of innovation, rather than embedding the results of our first years experimentation as the best practice.

(5) Third trend: the Open Data Barometer hints that impact is strongest where there are local initiatives. Urban initiatives? How do we ensure that we’re not designing initiatives that can only achieve impact with a critical mass of developers, community activists and supporting infrastructures.

(6) Fourth trend: There is a growing focus on data standards. We’ve moved beyond ‘Raw Data Now’ to see data publishers thinking about standards on everything from public budgets, to public transit, public contracts and public toilets. But when we recognise that our data is being sliced, diced and cooked, are we thinking about who it is being prepared for? Who is included, and who is excluded? (Remember, Raw Data is an Oxymoron). Even some of the basics of how to do diverse open data are not well resolved right now. How do we do multilingual data for example? Or how do we find measurement standards to assess open data in federal systems? Canada has a role as a well-resourced multi-lingual country in finding good solutions here.

(7) Fifth trend: There are bigger agendas on the policy scene right now than open data. But open data is still a big idea. Open data has been overtaken in many settings by talk of big data, smart cities, data revolutions and the possibility of data-driven governance. In the recent African Data Consensus process, 15 different ‘data communities’ were identified, from land data, and geo-data communities, to health data and conflict data communities. Open data was framed as another ‘data community’. Should we be seeing it this way? Or as an ethic and approach to be brought into all these different thematic areas: a different way of doing data – not another data domain. We need to look to the ideas of commons, and the power to create and collaborate that treating our data as a common resource can unlock. We need to reclaim the politics of open data as an idea that challenges secrecy, and that promotes a foundation for transparency, collaboration and participation. Only with this can we critique these bigger trends with the open data idea – and struggle for a context in which we are not database objects in the systems of the state, but are collaborating, self-determining, sovereign citizens.

(8) Recap & take-aways:

  • Embed open data in wider change
  • Innovate and experiment with different open data practices
  • Build community to unlock the impact of open data
  • Include users in shaping open data standards
  • Combine problem solving and infrastructure building

Slow down with the standards talk: it’s interoperability & information quality we should focus on

[Summary: cross-posting a contribution to the discussions on the International Open Data Conference blog]

There is a lot of focus on standards in the run-up to the International Open Data Conference in Ottawa next week. Two of the Action Area workshops on Friday are framed in terms of standards – at the level of data publication best practices, and of collaboration between the standards projects working on thematic content standards at the global level.

It’s also a conversation of great relevance to local initiatives, with CTIC writing on the increasing tendency of national open data regulations to focus on specific datasets that should be published, and to prescribe the data standards to be used. This trend is mirrored in the UK Local Government Transparency Code, accompanied by schema guidance from the Local Government Association. And even where governments are not mandating standards, community efforts have emerged in the US and Australia to develop common schemas for the publication of local data – covering topics from budgets to public toilet locations.

But – is all this work on standards heading in the right direction? In his inimitable style, Friedrich Lindenberg has offered a powerful provocation, challenging those working on standards to consider whether the lofty goal of creating common ways of describing the world so that all our tools just seamlessly work together is really a coherent or sensible one to be aiming for.

As Friedrich notes, there are many different meanings of the word ‘standard’, and often multiple senses of the word are in play in our discussions and our actions. Data standards like the General Transit Feed Specification, the International Aid Transparency Initiative schema, or the Open Contracting Data Standard are not just technical descriptions of how to publish data: they are also rhetorical and disciplinary interventions, setting out priorities about what should be published, and how it should be represented. The long history of (failed) attempts to find general logical languages to describe the world across different contexts should tell us that data standards will always encode all sorts of social and cultural assumptions – and that the complexity of our real-world relationships, and all that we want to know about the different overlapping institutional domains that affect our lives, will never be easily rendered into a single set of schemas.

This is not to say we should not pursue standardisation: standards are an important tool. But I want to suggest that we should embed our talk of standards within a wider discussion about interoperability, and information quality.

An interop approach

I had the chance to take a few minutes out of IODC conference preparations last week to catch up with Urs Gasser, co-author of Interop: The Promise and Perils of Highly Interconnected Systems, and one of the leaders of the ongoing interop research effort. As Urs explained, an interoperability lens provides another way of thinking about the problem that standards are working to address.

Where a focus on standards leads us to concentrate on getting all data represented in a common format, and on using technical specifications to pursue policy goals, an interoperability focus allows us to draw on a wider range of strategies: from allowing translation and brokering layers between different datasets, to focussing directly on policy problems in order to secure the collection and disclosure of important information.

And even more importantly, an interop approach allows us to discuss what the right level of interoperability to aim for is in any given situation: recognising, for example, that as standards become embedded, and sunk into our information infrastructures, they can shift from being a platform for innovation to a source of inertia and a constraint on progress. Getting the interoperability level right in global standards also matters from a power perspective: too much interoperability can constrain the ability of countries and localities to adapt how they express data to meet their own needs.

For example, looked at through a standards lens, the existence of different data schemas for describing the location of public toilets in Sydney, Chennai and London is a problem. From the standards perspective we want everyone to converge on the same schema and to use the same file formats. For that we’re going to need a committee to manage a global standard, and an in-depth process of enrolling people in it. And the result will almost undoubtedly be just one more standard out there, rather than one standard to rule them all, as the obligatory XKCD cartoon contends.

But through an interoperability lens, the first questions are: what level of interoperability do we really need? And what are the consequences of the level we are striving for? It invites us to think about the different users of data, and how interoperability affects them. For example, a common data schema used by all cities might allow a firm providing a loo-location app in Ottawa to use the same technical framework in Chennai – but is this really the ideal outcome? The consequence could be to crowd out local developers who could build something much more culturally contextualised. And there is generally nothing to stop the Ottawa firm from building a translation layer between the schema used in their app and the data disclosed in other cities (as sketched below) – as long as the disclosures in each context include certain key elements, and are internally consistent.
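To make the translation-layer idea concrete, here is a minimal sketch in Python. Both city schemas and all of the field names are invented for illustration (they are not the schemas these cities actually use); the point is simply that an app can broker between different disclosures without a single global standard.

```python
# Hypothetical sketch: two cities publish toilet locations under different,
# invented schemas; a small translation layer maps each into the app's own model.

from dataclasses import dataclass


@dataclass
class Toilet:
    name: str
    lat: float
    lon: float
    accessible: bool


def from_city_a(record: dict) -> Toilet:
    # Imagined "city A" schema: facility_name, latitude, longitude, wheelchair
    return Toilet(
        name=record["facility_name"],
        lat=float(record["latitude"]),
        lon=float(record["longitude"]),
        accessible=record.get("wheelchair", "no") == "yes",
    )


def from_city_b(record: dict) -> Toilet:
    # Imagined "city B" schema: title, geo as "lat,lon", access_level
    lat, lon = (float(x) for x in record["geo"].split(","))
    return Toilet(
        name=record["title"],
        lat=lat,
        lon=lon,
        accessible=record.get("access_level") == "step-free",
    )


if __name__ == "__main__":
    row_a = {"facility_name": "City Hall WC", "latitude": "45.42",
             "longitude": "-75.69", "wheelchair": "yes"}
    row_b = {"title": "Beachfront toilet", "geo": "13.05,80.28",
             "access_level": "step-free"}
    print(from_city_a(row_a))
    print(from_city_b(row_b))
```

The sketch only works because each imagined disclosure carries the same key elements (a name, a location, an accessibility flag) and is internally consistent: the translation layer, rather than a global schema, does the work of convergence.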

Secondly, an interoperability lens encourages us to consider a whole range of strategies: from regulations that call for consistent disclosure of certain information without going as far as specifying a schema, to programmes to develop common identification infrastructures, the development and co-funding of tools that bridge between data captured in different countries and contexts, and the fostering of collaborations between organisations to work together on aggregating heterogeneous data.

As conversations develop around how to enable collaboration between groups working on open aid data, public contracts, budgets, extractives and so on, it is important to keep the full range of tools on the table for enabling users to find connections between data, and for securing the interoperability of different data sources: from building tools and platforms, to working together on identifiers and the small building blocks of common infrastructure, to advocating for specific disclosure policies and, of course, discussing standards.

Information quality

When it comes down to it, for many initiatives standards and interoperability are only a means to another end. The International Aid Transparency Initiative cares about giving aid-receiving governments a clear picture of the resources available to them. The Open Contracting Partnership wants citizens to have the data they need to be more engaged in contracting, and for corruption in procurement to be identified and stopped. And the architects of public loo data standards don’t want you to get caught short.

Yet our information quality goals can often get lost as we focus on assessing and measuring the compliance of data with schema specifications. Interoperability and quality are distinct concepts, although they are closely linked: having standardised, or at least interoperable, data makes it easier, for example, to build tools that go some of the way towards assessing information quality.

[Figure: interop-and-quality]
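To illustrate the distinction, here is a rough sketch (entirely hypothetical checks and field names, not a method proposed here) separating a schema-compliance test, which asks whether the right fields are present and parseable, from information-quality checks that ask whether the data is current, plausible and free of duplicates from a user’s point of view.

```python
# Hypothetical sketch: schema compliance vs. information quality for a small
# toilet-location dataset. Field names and thresholds are invented for illustration.

from datetime import date

REQUIRED_FIELDS = {"name", "lat", "lon", "last_verified"}


def is_schema_valid(record: dict) -> bool:
    """Compliance check: the expected fields are present and parseable."""
    if not REQUIRED_FIELDS <= record.keys():
        return False
    try:
        float(record["lat"]), float(record["lon"])
        date.fromisoformat(record["last_verified"])
    except (TypeError, ValueError):
        return False
    return True


def quality_issues(records: list[dict],
                   city_bbox: tuple[float, float, float, float]) -> list[str]:
    """Quality checks from a user's perspective: current, plausible, non-duplicated."""
    min_lat, min_lon, max_lat, max_lon = city_bbox
    issues, seen = [], set()
    for r in records:
        if not is_schema_valid(r):
            issues.append(f"{r.get('name', '?')}: fails schema check")
            continue
        lat, lon = float(r["lat"]), float(r["lon"])
        if not (min_lat <= lat <= max_lat and min_lon <= lon <= max_lon):
            issues.append(f"{r['name']}: coordinates fall outside the city")
        if (date.today() - date.fromisoformat(r["last_verified"])).days > 365:
            issues.append(f"{r['name']}: not verified in over a year")
        if (lat, lon) in seen:
            issues.append(f"{r['name']}: duplicate location")
        seen.add((lat, lon))
    return issues
```

Passing the first check says nothing about passing the second: a dataset can be perfectly schema-valid and still be stale, duplicated or geographically implausible.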

But assessing information quality goes beyond this. Assessments need to take place from the perspective of real use-cases. Whilst standardisation often aims at abstraction, our work on promoting the quality, relevance and utility of data sharing – at both the local and global levels – has to be rooted in very grounded problems and projects. Some of the work Johanna Walker and Mark Frank have started on user-centred methods for open data assessment, and Global Integrity’s bottom-up Follow The Money work, starts us down this path, but we have much more work to do to make sure our discussions of data quality are substantive as well as technical.

Thinking about information quality as distinct from interoperability can also help us to critically analyse the interoperability ecosystems that are being developed. We can look at whether an interoperability approach is delivering information quality for a suitably diverse range of stakeholders, whether the costs of getting information to the quality required for use are falling disproportionately on one group rather than another, or whether certain use-cases for data are being left unrealised.

Re-framing the debate

I’m not calling for us to abandon a focus on standards. Indeed, much of the work I’m committed to in the coming year involves rolling out data standards. But I do want to invite us to frame our work on standards within a broader debate on interoperability and information quality (and ideally to embed this conversation within the even broader context of thinking on Information Justice, an awareness of critical information infrastructure studies, and work on humanistic approaches to data).

Exactly what shape that debate takes: I don’t know yet… but I’m keen to see where it could take us…

2015 Open Data Research Symposium – Ottawa

There are a few days left to submit abstracts for the 2015 Open Data Research Symposium, due to take place alongside the 3rd International Open Government Data Conference in Ottawa on May 27th 2015.

Registration is also now open for participants as well as presenters.

Call for Abstracts: (Deadline 28th Feb 2015; submission portal)

As open data becomes firmly cemented in the policy mainstream, there is a pressing need to dig deeper into the dynamics of how open data operates in practice, and into the theoretical roots of open data activities. Researchers across the world have been looking at these issues, and this workshop offers an opportunity to bring together, and have a shared dialogue around, completed studies and work in progress.

Submissions are invited on themes including:

  • Theoretical framing of open data as a concept and a movement;
  • Use and impacts of open data in specific countries or specific sectors, including, but not limited to: government agencies, cities, rural areas, legislatures, judiciaries, and the domains of health, education, transport, finance, environment, and energy;
  • The making, implementation and institutionalisation of open data policy;
  • Capacity building for wider availability and use of open data;
  • Conceptualising open data ecosystems and intermediaries;
  • Entrepreneurial usage and open data economies in developing countries;
  • Linkages between transparency, freedom of information and open data communities;
  • Measurement of open data policy and practices;
  • Critical challenges for open data: privacy, exclusion and abuse;
  • Situating open data in global governance and developmental context;
  • Development and adoption of technical standards for open data.

Submissions are invited from all disciplines, though with an emphasis on empirical social research. PhD students, independent and early career researchers are particularly encouraged to submit abstracts. Panels will provide an opportunity to share completed or in-progress research and receive constructive feedback.

Submission details

Extended abstracts of up to two pages, in French, English, Spanish or Portuguese, detailing the question addressed by the research, the methods employed and the findings, should be submitted by February 28th 2015. Notifications will be provided by March 31st. Full papers will be due by May 1st.

Registration for the symposium will open shortly after registration for the main International Open Government Data Conference.

Abstracts should be submitted via EasyChair.

Paper format

Authors of accepted abstracts will be invited to submit full papers. These should be a maximum of 20 pages, single spaced, exclusive of bibliography and appendices. As an interdisciplinary and international workshop we welcome papers in a variety of formats and languages: French, English, Spanish and Portuguese. However, abstracts and paper presentations will need to be given in English.

Full papers should be provided in .odt, .doc or .rtf format, or as .html. Where relevant, we encourage authors also to deposit in a repository, and link to, data collected as part of their research.

We are working to identify a journal special issue or other opportunity for publication of selected papers.

Contact

Contact savita.bailur@webfoundation.org or tim.davies@soton.ac.uk for more details.

Programme committee

About the Open Data Research Network

The Open Data Research Network was established in 2012 as part of the Exploring the Emerging Impacts of Open Data in Developing Countries (ODDC) project. It maintains an active newsletter, website and LinkedIn group, providing a space for researchers, policy makers and practitioners to interact. 

This workshop will also include an opportunity to find out how to get involved in the Network as it transitions to a future model, open to new members and partners, and with a new governance structure.