Legacies of social reporting: an IGF09 example

[Summary: aggregating content from the Internet Governance Forum & exploring ways to develop the legacy of social reporting at events…]

Introducing social reporting to an event can bring many immediate benefits: new skills for those doing the social reporting, more opportunities for conversation at the event, and bridges between those present and those interested in the topic but unable to take part in person.

However, the wealth of content gathered through social reporting can also act as a resource ‘after the event’ – offering insights and narratives covering event themes, and contrasting and complementary perspectives to any ‘official’ event records that may exist.

Many of the tools I use when social reporting at an event have a certain ‘presentism’ about them. Newer content is prioritised over older content and, in the case of dashboard aggregators like NetVibes or services such as Twitter, good content can quickly disappear from the front page, or even altogether.

So, as we got towards the end of a frantic four days of social reporting at the Internet Governance Forum in Egypt earlier this year, I started thinking about how to make the most of the potential legacy impacts of the social reporting that was going on – both in the event-wide Twitterstream, and in the work of the young social reporters I was specifically working with.

Part of that legacy was about the skills and contacts gathered by the social reporters – so we quickly put together this handout for participants – but another part of that legacy was in the content. And gathering that together turned out to be trickier than I expected.

However, I now have a micro-site set up at http://igf2009.practicalparticipation.co.uk/ where you can find all the blog posts and blips created by our social reporters, as well as all the tagged tweets we could collect together. Over the coming weeks colleagues at Diplo will be tagging core content to make it easy to navigate and potentially use as part of online learning around Internet Governance. I’ve run the 3500+ Twitter messages I managed to (eventually) aggregate through the Open Calais auto-tagging service, as an experiment to see if this provides ways to identify insights within them – and I’ve been exploring different ways to present the information found in the site.
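For the curious, the Calais step boils down to POSTing batches of text at the tagging endpoint and reading back the entities it finds. A minimal Python sketch, assuming the 2009-era Open Calais REST endpoint and header names (worth checking against the current documentation) and a placeholder API key:

```python
import json
import urllib.request

CALAIS_URL = "http://api.opencalais.com/tag/rs/enrich"  # 2009-era endpoint (assumption)
API_KEY = "YOUR-CALAIS-KEY"  # placeholder licence key

def calais_tags(text):
    """POST a chunk of text to Open Calais and return the entity names it finds."""
    request = urllib.request.Request(CALAIS_URL, data=text.encode("utf-8"))
    request.add_header("x-calais-licenseID", API_KEY)  # header name per the old API docs
    request.add_header("Content-Type", "text/raw")
    request.add_header("Accept", "application/json")
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read().decode("utf-8"))
    # Everything except the 'doc' metadata block is an extracted entity or topic
    return [value["name"] for key, value in result.items()
            if key != "doc" and isinstance(value, dict) and "name" in value]

# Batching a few hundred tweets per request keeps well under the daily quota
print(calais_tags("Tweets from the Internet Governance Forum 2009 in Sharm el-Sheikh"))
```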

Learning: Next time set up the aggregator in advance
I didn’t start putting together the site (a quick bit of Drupal + FeedAPI, with the later addition of Views, Panels, Autotagging, Timeline and other handy modules) until the final day of IGF09, by which time over 50 blog posts had been added to our Ning website, and over 3000 Twitter messages tagged #igf09.

Frustratingly, Ning only provides the last 20 items in any RSS feed and, as far as I can tell, no way to page through past items; the Twitter search API, meanwhile, is limited to fetching just 1500 tweets.

Fortunately, when it came to Twitter, I had captured all the tweets in Google Reader – but I still had to scrape the Twitter message IDs back out of there, and set up a slow script to spend a couple of days fetching the original tweets (given, again, the rate limiting on the Twitter API).
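The ‘slow script’ was nothing clever – just a polite loop with long pauses. A rough Python reconstruction, assuming a hypothetical file of scraped tweet IDs and the 2009-era statuses/show endpoint (long since retired; today’s Twitter API requires OAuth):

```python
import json
import time
import urllib.request

# Input: one tweet ID per line, scraped out of the Google Reader archive (hypothetical file)
with open("tweet_ids.txt") as f:
    tweet_ids = [line.strip() for line in f if line.strip()]

def fetch_tweet(tweet_id):
    """Fetch a single status by ID via the 2009-era REST endpoint (assumption)."""
    url = "http://api.twitter.com/1/statuses/show/%s.json" % tweet_id
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

archive = []
for tweet_id in tweet_ids:
    try:
        archive.append(fetch_tweet(tweet_id))
    except Exception as err:  # keep going past deleted or protected tweets
        print("Skipping %s: %s" % (tweet_id, err))
    time.sleep(30)  # ~120 requests/hour, inside the old unauthenticated rate limit

with open("igf09_tweets.json", "w") as out:
    json.dump(archive, out)
```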

For Ning, I ended up having to go through and find all the authors who had written on IGF09, fetch the feeds of their posts, and run them through a Yahoo Pipe to create an aggregate feed of only those items posted during the IGF.
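The Pipe itself was wired together graphically, but the filtering it performed is easy to sketch in code. An equivalent in Python using the third-party feedparser library – the feed URLs and event dates below are placeholders:

```python
from datetime import datetime, timezone
import feedparser  # third-party: pip install feedparser

# Hypothetical per-author Ning feeds, gathered by hand
AUTHOR_FEEDS = [
    "http://example.ning.com/profiles/blog/feed?user=alice",
    "http://example.ning.com/profiles/blog/feed?user=bob",
]

# Keep only items published during the event window (dates are placeholders)
START = datetime(2009, 11, 15, tzinfo=timezone.utc)
END = datetime(2009, 11, 18, 23, 59, tzinfo=timezone.utc)

items = []
for url in AUTHOR_FEEDS:
    for entry in feedparser.parse(url).entries:
        if not getattr(entry, "published_parsed", None):
            continue
        published = datetime(*entry.published_parsed[:6], tzinfo=timezone.utc)
        if START <= published <= END:
            items.append((published, entry.title, entry.link))

# Newest first, ready to hand to the Drupal aggregator (or re-publish as a feed)
for published, title, link in sorted(items, reverse=True):
    print(published.date(), title, link)
```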

It would have been a lot easier if I had set up the Drupal + FeedAPI aggregator beforehand, and added new feeds to it whenever I found them.

Discoveries: Language and noise
I’ve spent most of my time just getting the content into this aggregator, and setting up a basic interface for exploring it. I’ve not yet had the chance to dive in and really explore the content itself. However, two things I noticed:

1) There is mention of a francophone hash-tag for IGF2009 in some of the tweets. Searching on that hash-tag now, over a month later, doesn’t turn up any results – but it’s quite possible that there were active conversations this aggregator fails to capture because we weren’t looking at the right tags.

[Image: social network map of tweets – mapping Twitter @s with R and iplot]

2) A lot of the Twitter messages aggregated appear to be about the ‘censorship incident’ that dominated external coverage of IGF09, but which was only a small part of all the goings-on at IGF. Repeated tweeting and re-tweeting on one theme can drown out conversations on other themes unless there are effective ways to navigate and filter the content archives.

I’ve started to explore how @ messages, and RTs within Tweets could be used to visualise the structure, as well as content, of conversations – but have run up against the limitations of my meagre current skill set with R and iplot.

I’m now on the lookout for good ways of building some more intelligent analysis of tweets into future attempts to aggregate with Drupal – possibly by extracting information on @s and RTs at the time of import, using the promising FeedAPI Scraper module from the great folk at Youth Agora. A rough sketch of that extraction step is below.
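Whether the extraction happens at import time in Drupal or in a one-off script, the heart of it is just pattern-matching on the tweet text. A small Python sketch, over made-up tweets, of building the kind of weighted edge list a network visualisation tool could consume:

```python
import re
from collections import Counter

# Toy tweets standing in for the aggregated #igf09 archive: (author, text)
tweets = [
    ("alice", "RT @bob: interesting session on openness #igf09"),
    ("carol", "@alice agreed - though one story is drowning out the rest #igf09"),
]

MENTION = re.compile(r"@(\w+)")
RETWEET = re.compile(r"^RT\s+@(\w+)", re.IGNORECASE)

edges = Counter()
for author, text in tweets:
    retweet = RETWEET.match(text)
    if retweet:
        edges[(author, retweet.group(1).lower(), "retweet")] += 1
    for mention in MENTION.findall(text):  # note: an RT also counts as a mention
        edges[(author, mention.lower(), "mention")] += 1

# Weighted edge list, exportable to Gephi, R/igraph or similar for visualisation
for (source, target, kind), weight in edges.items():
    print("%s -> %s (%s, x%d)" % (source, target, kind, weight))
```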

Questions: Developing social reporting legacies
There is still a lot more to reflect upon when it comes to making the most of content from a socially reported event, not least:

1) How long should information be kept?

I’ve just been reading Delete, which very sensibly suggests that not all content should be online forever – and particularly with conversational Twitter messages or video clips, there may be a case for ensuring a social reporting archive only keeps content public for as long as there is a clear value in doing so.

2) Licensing issues

Aggregation on the model I’ve explored assumes a licence to collect and share tweets and other content. Is this a fair assumption?

3) Repository or advocacy?

How actively should the legacy content from social reporting be used? Should managing the legacy of an event also involve setting up search and blog alerts, and pro-actively spreading content to other online spaces? If so – who should be responsible for that and how?


If you are interested in more exploration of social reporting, you may find the Social by Social network, and the Social Reporters group there, useful.

Hear by Right 2008 launched at last


It's been a long time coming, and it's still got a long way to go – but finally this afternoon I've been able to set the new version of the Hear by Right website live, with the newly launched 2008 Hear by Right resources and a brand new design and CMS back-end (Drupal). I won't write too much about it now… as I'm heading off to the pub to celebrate… but here's a bit of background for you:

Hear by Right is a standards framework for the involvement of young people, used by hundreds of organisations from local authorities to small voluntary sector organisations. It's designed to help organisations change to embed the voice and influence of youth into their everyday fabric. And, as I mentioned in response to a post by David Wilcox last year, Hear by Right has a lot to offer work on user engagement and participation in all organisations – not just those that work with young people.

Through the Hear by Right website we've been trying to:

  • Create a space to share learning from the many hundreds of authorities and organisations using Hear by Right to map and plan for change
  • Curate and share some of the best resources to support the participation of young people in decision making
  • Encourage organisations to be more open about the challenges and successes in engaging young people in decision making
  • Make clear the necessary link between participation in decision making and real change in the lives of young people

The site's new design and back-end should help us do that just a little better – and puts in some more solid foundations for us to build on than those that were provided by the somewhat hacked-together CMS I wrote back in 1999 whilst learning ASP (those days are fortunately long behind me…).

And now that the site is launched… I can finally start pulling together some plans to explore different ways of visualising the data it holds… expect more on that soon.

A knowledge jam session

I've just spent a very interesting two days taking part in an online Global Knowledge Jam around 'Collaborative Technology Requirements for Social Change'.

What's a jam?

The online knowledge jam formula (the name parallels a musician's 'jam session') is something along the lines of:

Bring together a group of interested / relevant participants and set aside a time-window during which participants regularly drop into an online 'jamming space' to contribute to discussion on a specified topic.

In this case, the Jam consisted of around 150 participants from 40 countries, with a diverse range of backgrounds but shared interests in online community and its role in social change. The time window was 48 hours, and the discussion space was a Moodle forum. The outcome was fascinating. I'm not sure that discussions stuck that closely to the core questions posed, and it was at times a little tricky to keep track of what was going on (the version of Moodle we were using seems to lack a way of just looking at posts updated since you last visited) – but the content was both challenging and inspiring.

To jam again?

The time-limit and narrow nominal focus of the jam (coupled with the fantastic participant list it recruited) meant it was able to generate an impressive breadth and depth of content in a short time. The jam model is certainly one I'm going to explore more – particularly for bringing together practitioners and other actors in developing further thematic spaces on the Participation Works youth involvement portal.

Arising from the Jam?

That breadth of content generated means I'm going to need more time to fully digest all I've seen in the discussions – but a few headlines arising:

  • Shifting frames of reference
    • Users: My 'frame of reference' for thinking about users of online community and interaction is someone at a desk in an office / home office using their own computer on a broadband internet connection. I'm not thinking about the internet cafe user in Accra in Ghana, or the library user in the UK, the shared computer in a family home, the NGO office where there is just one computer, or the remote village where the mobile phone is the communication tool, not the computer.
    • Tools: Following on from thinking about the role of mobile phones – near the end of the Jam someone commented on how 'keyboard and screen' focussed the discussions had been. Yet we need to think beyond interaction with the online world happening via a QWERTY keyboard and an 800×600 screen or above.
  • Drupal for Communities of Practice: There was interest in the Jam in exploring how Drupal Installation Profiles could be used to support Communities of Practice. I'm hoping there will be some opportunities to explore this further, with some action towards a flexible toolset that Jam members and others may feel comfortable using.
  • Requirements for online community: I was surprised by how few of the tools I'm coming to think of as essential were on the radar. I found that perhaps I am coming at online community with a stronger focus on the 'data' than I found in others… which leads me to be really interested in tools for managing the data (visualisation tools / re-mixable content feeds / adding as much semantic data to posts as possible without burdening users). I need to focus some more reflection on whether a focus on the data can be fully compatible with a focus on the relationships mediated through online community – and whether there is a map for making sure they don't come into tension.
  • Technology stewardship: All of which leads me to needing to reflect more on how my work is taking me more and more into the role of (or the 'cult of'…) technology stewardship.


More reflections will no doubt unfold soon. But for now I have to head off and make a cheese sauce for a lasagne for visiting parents… (hmm… not sure I'm so good at these informal notes in blogging… but perhaps something else I need to explore more…)

UK Geocoding for Drupal

Interactive maps can be a really effective way of visualising information with a geographic component. For example, if you want to find participation workers near you making use of Hear by Right, the Local Network Map lets you see who is around far more easily than any listing would.

Up until now, finding out the co-ordinates of a UK address so as to plot address-linked information onto a map meant either paying a lot to commercial providers for access to a postcode or address database that could tell you – or putting up with (as the HbR Local Network Map does) being accurate only to within one or two miles by using free postcode databases and geocoders.

No more.

The lovely people at Google have somehow managed to get around the prohibitive costs to provide free UK geocoding giving street-level accuracy.
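In practice this means Google's plain HTTP geocoder. A hedged Python sketch of calling it – the URL, parameters and CSV response format below are the 2008-era interface as I understand it, since superseded by Google's Geocoding API:

```python
import urllib.parse
import urllib.request

def geocode_uk(address, api_key="YOUR-MAPS-KEY"):
    """Geocode a UK address via Google's 2008-era HTTP geocoder.
    The URL, parameters and CSV response format here are the historical
    interface (assumption), long since replaced by the Geocoding API."""
    url = "http://maps.google.com/maps/geo?" + urllib.parse.urlencode({
        "q": address, "output": "csv", "key": api_key,
    })
    with urllib.request.urlopen(url) as response:
        status, accuracy, lat, lng = response.read().decode("utf-8").strip().split(",")
    if status != "200":
        raise ValueError("Geocoding failed with status %s" % status)
    # Higher accuracy codes meant street/address-level rather than postcode-district
    return float(lat), float(lng), int(accuracy)

print(geocode_uk("10 Downing Street, London"))
```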

This makes a big difference to the sort of non-commercial geo-information services it could now be possible to provide for the UK. I'll certainly be looking again at whether we can finally explore getting a good UK Fairtrade Map system working for local Fairtrade Town campaigns.

Geolocating the UK in Drupal

Given my firm belief that the best possible Content Management System for almost all forms of web application has to be Drupal, I've created an update for the Drupal Location module's UK include files here, which makes use of the new UK Google geocoding.

Combined with my generic fallback patch which uses the fantastic geonames service to make sure Drupal returns a location wherever possible, this means worldwide geolocation coverage can be very workable indeed.
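The fallback logic amounts to: try the precise geocoder first, and when it fails ask GeoNames for a best-guess match. A Python illustration under those assumptions (GeoNames now requires a registered username – 'demo' below is a placeholder):

```python
import json
import urllib.parse
import urllib.request

def geonames_lookup(place, country="GB", username="demo"):
    """Best-guess placename lookup via the GeoNames search web service.
    ('demo' is a placeholder - register a GeoNames username for real use.)"""
    url = "http://api.geonames.org/searchJSON?" + urllib.parse.urlencode({
        "q": place, "country": country, "maxRows": 1, "username": username,
    })
    with urllib.request.urlopen(url) as response:
        results = json.loads(response.read().decode("utf-8")).get("geonames", [])
    if not results:
        return None
    return float(results[0]["lat"]), float(results[0]["lng"])

def locate(address, precise_geocoder):
    """Try a precise geocoder first (e.g. the Google sketch above);
    fall back to GeoNames when it fails or returns nothing."""
    try:
        result = precise_geocoder(address)
        if result:
            return result
    except Exception:
        pass
    return geonames_lookup(address)
```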

Still a way to go

Even though Google have made UK geocoding available to the masses, there is still a long way to go before we can really get the full benefit of UK geodata without being saddled with prohibitive costs. Even with the fantastic efforts of OpenStreetMap, we still have no country-wide mapping data that anyone can freely print off to create a local community map, or use for creative applications. Citizens continue to be charged stifling fees for accessing public data. For more on the need to bring in big changes to the way public geodata and other data is managed – keep an eye on the Free Our Data blog here.