Intelligent Impact: Evaluating an open data capacity building pilot with voluntary sector organisations

[Summary: sharing the evaluation report (9 pages, PDF) of an open data skills workshop for voluntary sector organisations]

Late last year, through the CSO network on the Open Government Partnership, I got talking with Deirdre McGrath of the Your Voice, Your City project about ways of building voluntary sector capacity to engage with open data. We talked about the possibility of a hack-day, but realised the focus at this stage needed to be on building skills, rather than building tools. It also needed to be on discovering what was possible with open data in the voluntary sector, rather than teaching people a limited set of skills. And as the Your Voice, Your City project was hosted within the London Voluntary Services Council (LVSC), an infrastructure organisation with a policy and research team, we had the possibility of thinking about the different roles needed to make the most of open data, and how a capacity building pilot could work both with frontline Voluntary and Community Sector (VCS) organisations and with an infrastructure organisation. A chance meeting with Nick Booth of podnosh gave form to a theme in our conversations: the need to focus on both ‘stats’ and ‘stories’, ensuring that capacity building worked with both quantitative and qualitative data and information. The result: plans for a short project, centred on a one-day workshop on ‘Intelligent Impact’, exploring the use of social media and open data for VCS organisations.

The day involved staff from VCS organisations coming along with questions or issues they wanted to explore, and then splitting into groups with a team of open data and social media mentors (Nick Booth, Caroline Beavon, Steven Flower, Paul Bradshaw and Stuart Harrison) to look at how existing online resources, or self-created data and media, could help respond to those questions and issues. Alex Farrow captured the story of the day for us using Storify, and I’ve just completed a short evaluation report telling the story in more depth, capturing key learning from the event, and setting out possible next steps (PDF).

Following on from the event, the LVSC team have been exploring how a combination of free online tools for curating open data, collating questions, and sharing findings can be assembled into a low-cost and effective ‘intelligence hub’, where data, analysis and presentation layers are all made accessible to VCS organisations in London.

Linked-Development: notes from Research to Impact at the iHub

[Summary: notes from a hackathon in Nairobi built around linked open data]

I’ve just got back from an energising week exploring open data and impact in Kenya, working with R4D and IDS at Nairobi’s iHub to run a three-day hackathon titled ‘Research to Impact’. You can read Pete Cranston’s blog posts on the event here (update: and iHub’s here). In this post, after a quick preamble, I reflect particularly on working with linked data as part of the event.

The idea behind the event was fairly simple: lots of researchers are producing reports and publications related to international development, and these are logged in catalogues like R4D and ELDIS, but often it stops there, and research doesn’t make it into the hands of those who can use it to bring about economic and social change. By opening up the data held on these resources, and then working with subject experts and developers, we were interested to see whether new ideas would emerge for taking research to where it is needed.

The Research to Impact hack focused on ‘agriculture and nutrition’ research so that we could spend the first day working with a set of subject experts to identify the challenges research could help meet, and to map out the different actors who might be served by new digital tools. We were hosted for the whole event at the inspiring iHub and mLab venue by iHub Research. iHub provides a space for the growing Kenyan tech community, acting as a meeting space, incubator and workspace for developers and designers. With over 10,000 members in its network, iHub also helped us to recruit around 20 developers, who worked over the remaining two days of the hackathon to build prototype applications responding to the challenges identified on day one, and to the data available from R4D and IDS.

A big focus of the hackathon development turned out to be on mobile applications, since in Kenya mobile phones are the primary digital tool for accessing information. On day four, our developers met again with the subject experts and pitched their creations to a judging panel, who awarded first, second and third prizes. Many of the apps created zeroed in on a number of key issues: working through intermediaries (in this case, the agricultural extension worker), rather than trying to use tech to entirely disintermediate information flows; embedding research information into useful tools, rather than providing it through standalone portals (for example, a number of teams built apps which allowed extension workers to keep track of the farmers they were interacting with, and that could then use this information to suggest relevant research); and, most challengingly, the need for research abstracts and descriptions to be translated into easy-to-understand language that can fit into SMS-size packages. Over the coming weeks IDS and R4D are going to be exploring ways to work with some of the hackathon teams to take their ideas further.

Linked-development: exploring the potential of linked data

The event also provided us with an opportunity to take forward explorations of how linked data might be a useful technology in supporting research knowledge sharing. I recently wrote a paper with Duncan Edwards of IDS on the potential of linked data for development communication, and I’ve been exploring linked data in development for a while. However, this time we were running a hackathon directly from a linked data source, which was a new experience.

Ahead of the event I set up linked-development.org as a way to integrate R4D data (already available in RDF) and ELDIS data (for which I wrote a quick scraper), both modelled using the FAO’s AGRIS model. In order to avoid having to teach SPARQL for access to the data, I also (after quite a steep learning curve) put together a very basic Puelia Linked Data API implementation over the top of the data. To allow for a common set of subject terms between the R4D and ELDIS data, I made use of the Maui NLP indexer to tag ELDIS agriculture and nutrition documents against the FAO’s Agrovoc (R4D already had editor-assigned terms against this vocabulary), giving us a means of accessing the documents from the two datasets alongside each other.
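
As a rough illustration of what this stack made possible, the sketch below shows a developer fetching documents on a topic with a plain HTTP call and no SPARQL. It is a sketch only: the endpoint path, parameter names and Agrovoc concept URI are my assumptions for illustration, not the documented linked-development.org routes (though `_format` and `_pageSize` follow the Linked Data API conventions that Puelia implements).

```python
import requests

# Hypothetical call to the Linked Data API layer: Puelia-style deployments
# expose the underlying RDF at RESTful URLs and can return plain JSON.
# The base URL, endpoint path and concept URI below are illustrative
# assumptions, not the documented linked-development.org routes.
BASE_URL = "http://linked-development.org/api"

response = requests.get(
    BASE_URL + "/research",
    params={
        "subject": "http://aims.fao.org/aos/agrovoc/c_203",  # hypothetical Agrovoc concept URI
        "_format": "json",   # Linked Data API convention for choosing a serialisation
        "_pageSize": 10,     # Linked Data API convention for paging
    },
    timeout=30,
)
response.raise_for_status()

# The Linked Data API wraps results in a 'result' object holding an 'items' list.
for item in response.json().get("result", {}).get("items", []):
    print(item.get("title"))
```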

The potential value of this approach became clear on the first day of the event, when one of the subject experts showed us their own repository of Kenyan-focussed agricultural research publications and resources, which was already modelled and theoretically accessible as RDF using the AGRIS model. Our attempts to integrate this into our available dataset failed because the Drupal site serving the data hit memory limits (linked data still tends to need a lot of server power thrown at it, which can have significant impacts where the relative cost of hosting and tech capacity is high). Even so, the potential to bring more local content into linked-development.org alongside data from R4D and ELDIS was noted by many of the developers taking part as something likely to make their applications far more successful and useful: ensuring that the available information is built around users’ needs, not around organisational or project boundaries.

At the start of the developer days, we offered a range of ways for developers to access the research meta-data on offer. We highlighted the linked data API, the ELDIS API (although it only provided access to one of the datasets, I found it would be possible for us to create a compatible API speaking to the linked data in future), and SPARQL as means of working with the data. Feedback forms from the event suggest that formats like JSON were new to many of our participants, and linked data was a new concept to all. However, in the end, most teams chose to use some of the prepared SPARQL queries to access the data, returning results as JSON into PHP or Python. In practice, over the two days this did not end up realising the full value of linked data, as teams generally appeared to use code samples to pull SPARQL ‘SELECT’ result sets into relational databases, and then to build their applications from there (a common issue I’ve noted at hack days, where developers’ first step is to take data into the platform they use most). However, a number of teams were starting to think about how they could use more advanced queries or direct access to the linked data through code libraries in future, and, most strikingly, were talking about how they might be able to write data back to the linked-development.org data store.
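
For a sense of the pattern most teams followed, here is a minimal Python sketch of running a prepared SELECT query and reading the results back as JSON. The endpoint URL is an assumption for illustration, and the query assumes the Dublin Core terms that AGRIS-style metadata builds on, with documents tagged against Agrovoc concept URIs (the concept below is hypothetical).

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Endpoint URL is an assumption for illustration; the predicates assume
# Dublin Core terms, with subjects pointing at Agrovoc concept URIs.
sparql = SPARQLWrapper("http://linked-development.org/sparql")
sparql.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?doc ?title WHERE {
      ?doc dct:title ?title ;
           dct:subject <http://aims.fao.org/aos/agrovoc/c_203> .
    }
    LIMIT 20
""")
sparql.setReturnFormat(JSON)

# SELECT results come back in the standard SPARQL JSON results format.
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["doc"]["value"], "-", binding["title"]["value"])
```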

This struck me as particularly interesting. A lot of the problems teams faced in creating their applications stemmed from the research meta-data not being customised to agricultural extension workers or farmers. Abstracts would need to be re-written and translated. Good quality information needed to be tagged. New classifications of the resources were needed, such as tagging research that is useful in the planting season. Social features on mobile apps could help discover who likes what, and could be used to rate research. However, without a means to write back to the shared data store, all this added value will only ever exist in the local and fragmented ecosystems around particular applications. Getting feedback to researchers about whether their research was useful was also high on the priority list of our developers: yet without somewhere to put this feedback, and a commitment from upstream intermediaries like R4D and ELDIS to play a role feeding back to authors, this would be very difficult to do effectively.

This links to one of the points that came out in our early IKM Emergent work on linked data: the relatively high costs and complexity of the technology, and the way in which servers and services are constructed, may lead to an information environment dominated by those with the capacity to publish; but the technology also has the potential, with the right platforms, configurations and outreach, to bring about a more pluralistic space, where annotations from local users of information can be linked with, and made as accessible as, the research meta-data coming from government funded projects. I wish we had thought about this more in advance of the hackathon, and provided each team with a way to write data back to the linked-development.org triple store (e.g. giving them named graphs to write to, and providing some simple code samples or APIs), as I suspect this would have opened up a whole new range of spaces for innovation.
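
To make that concrete, a write-back along those lines might have looked something like the following sketch, assuming a SPARQL 1.1 Update endpoint and a named graph allocated to each team. Nothing here existed at the time: the endpoint, graph URI, document URI and choice of predicate are all hypothetical.

```python
import requests

# Hypothetical write-back: a team annotates a research document with a
# plain-language, SMS-length summary in its own named graph. The endpoint,
# graph URI, document URI and predicate are illustrative assumptions.
UPDATE_ENDPOINT = "http://linked-development.org/update"
TEAM_GRAPH = "http://linked-development.org/graphs/team-one"

update_query = """
PREFIX dct: <http://purl.org/dc/terms/>
INSERT DATA {
  GRAPH <%s> {
    <http://linked-development.org/research/example-document>
        dct:abstract "Plain-language, SMS-length summary of the research."@en .
  }
}
""" % TEAM_GRAPH

# The SPARQL 1.1 protocol accepts updates as a form-encoded 'update' parameter.
response = requests.post(UPDATE_ENDPOINT, data={"update": update_query}, timeout=30)
response.raise_for_status()
```

Giving each team its own named graph would keep community annotations cleanly separated from the upstream R4D and ELDIS data, while still letting queries draw on both together.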

Overall though, the linked-development.org prototype appears to have done some useful work, not least providing a layer that connects two DFID-funded projects working on mobilising research. I hope it is something we can build upon in future.

Complexity and complementarity – why more raw material alone won’t necessarily bring open data driven growth

[Summary: reflections on an open data hack day, complexity, and complements to open data for economic and social impact. Cross posted from Open Data Impacts blog.]

“Data is the raw material of the 21st Century”.

It’s a claim that has been made in various forms by former US CIO Vivek Kundra (PDF), by large consultancies and tech commentators, and that is regularly repeated in speeches by UK Cabinet Office Minister Francis Maude, mostly in relation to the drive to open up government data. This raw material, it is hoped, will bring about new forms of economic activity and growth. There is certainly evidence to suggest that for some forms of government data, particularly ‘infrastructural’ data, moving to free and open access can stimulate economic activity. But, for many open data advocates, the evidence is not showing the sorts of returns on investment they had expected, nor the ‘gold rush’ of developers picking over data catalogues to exploit newly available data.

At a hack-event held at the soon-to-be-launched Open Data Institute in London this week, a number of speakers highlighted the challenge of getting open data used: the portals are built, but the users do not necessarily come. Data quality, poor meta-data, inaccessible language, and the difficulty of finding wheat amongst the chaff of data were all diagnosed as part of the problem, with some interesting interfaces and tools developed to try and improve data description and discovery. Yet these diagnoses and solutions are still based on linear thinking: when a dataset is truly accessible, then it will be used, and economic benefits will flow.

Owen Barder identifies the same sort of linear thinking in much macro-economic international development policy of the 70s and 80s in his recent Development Drums podcast lecture on complexity and development. The lecture explores the question of how countries with similar levels of ‘raw materials’, in terms of human and physical capital, could have had such different growth rates over the last half century. The answer, it suggests, lies in the complexity of economic development: we need not just raw materials, but diverse sets of skills and supply chains, frameworks, cultures and practices. Making the raw materials available is rarely enough for economic growth, and this is something that open data advocates focussed on the economic returns of data need to grapple with.

Thinking about open data use as part of a complex system involves paying attention to many different dimensions of the environment around data. Jose Alonso highlights “the political, legal, organisation, social, technical and economic” as all being important areas to focus on. One way of grounding notions of complexity in thinking about open data use, which I was introduced to when working on a paper with George Kuk last year, is through the concept of ‘complementarity’. Essentially, A complements B if A and B together are more than the sum of their parts. For example, a mobile phone application and an app store are complements: the software in one needs the business model and delivery mechanisms in the other in order to be used.
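
One minimal way to state the idea formally (a sketch only, where V stands for whatever measure of value is at stake):

```latex
% A and B are complements when their joint value exceeds the sum of their
% standalone values:
V(A \text{ and } B) \;>\; V(A) + V(B)
```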

The challenge then is to identify all the things that may complement open data for a particular use; or, more importantly, to identify all those processes already out there in the economy to which certain open data sets are a complement. Whilst the example of complements above appears at first glance technological (apps and app stores), behind it are economic, social and legal complementarities, amongst others. Investors, payment processing services, app store business models, remittances to developers and, often, stable jobs for developers in an existing buoyant IT industry that allow them either to work on apps for fun in their spare time, or to leave work with enough capital to take a risk on building their own applications, are all part of the economic background. Developer meet-ups, online fora, clear licensing of data, no fear of state censorship of the applications built, and so on contribute to the social and legal background. These parts of the complex landscape generally cannot be centrally planned or controlled, but equally they cannot be ignored when we are asking why the provision of a raw material has not brought about anticipated use.

As I start work on the ‘Exploring the Emerging Impacts of Open Data in the South’ project with the Web Foundation and IDRC, understanding the possible complements of open data for economic, political and social use may provide one route to explore which countries and contexts are likely to see strong returns from open data policy, and to see what sorts of strategies states, donors and communities can adopt to increase their opportunity to gain the potential benefits, and avoid the possible pitfalls, of greater access to open data. Perhaps for future Open Data Institute hack days, it can also encourage more action to address the complex landscape in which open data sits, rather than just linear extensions of the data platforms that exist, in the hope that the users will eventually come.