Expectations and Evidence: youth participation and open data

[Summary: Exploring ways to use data as part of a youth participation process.]

Over the last year and a bit I’ve been doing less work on youth engagement and civic engagement processes than I would ideally like. I’m fascinated by processes of participation, and by how to design activities and frameworks within which people can actively influence change on issues that affect them – getting beyond simply asking different groups ‘what do you want?’ and then struggling to reconcile conflicting answers (or, oftentimes, simply ignoring the input), to create spaces in which the different factors and views affecting a decision are materialised, and in which those affected by decisions get to engage with the real decision-making process. I’ve had varying levels of success doing that – but the more time I’ve spent with public data, the more I’ve struggled to work out how to bring it into participative discussions in ways that are accessible and empowering to participants.

Generally data is about aggregates: about trends and patterns rather than the specific details of individual cases. Yet in participation, the goal is often to allow people to bring their own specific experience into discussions and to engage with issues and decisions based upon their unique perspectives. How can open datasets complement that process?

The approach I started to explore in a workshop this evening was linking ‘expectations and evidence’: asking a group to draw upon their experience to write down a list of expectations, based on the questions asked in a survey they had carried out amongst their peers – and then helping them to use IBM Many Eyes to visualise and explore the survey evidence that might support or challenge those expectations (I’ve written up the process of using the free Many Eyes tool over in the Open Data Cook Book). It was a short session, and not all of the group were familiar with the survey questions, so I would be hard pushed to call it a great success, but it did generate some useful learning about introducing data into participation processes.

1) Stats are scary (and/or boring; and/or confusing)
Even with a fairly interactive data visualisation tool like IBM Many Eyes, statistics and data are, for many people, pretty alien things. Multi-variate analysis (looking at more than one variable at once, and at the relationships between variables) is not something most people spend much time on at school or college – and trying to introduce three-variable analysis in a short youth participation workshop is tricky to do without generating quite a bit of confusion.
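As an aside for anyone preparing a similar session: a two-variable cross-tabulation is probably the gentlest way into multi-variate analysis, and is worth having to hand before the workshop starts. A minimal sketch in Python with pandas – the file name and column names here are invented for illustration, not taken from the actual survey:

```python
import pandas as pd

# Hypothetical survey export: one row per respondent, one column per question
responses = pd.read_csv("youth_survey.csv")

# Two-variable analysis: how do answers to one question vary by age group?
# normalize="index" turns counts into row proportions, which read more easily
two_way = pd.crosstab(responses["age_group"],
                      responses["feels_listened_to"],
                      normalize="index")
print(two_way.round(2))

# The three-variable version - exactly the step that caused confusion in the
# workshop - just adds a second row variable:
three_way = pd.crosstab([responses["age_group"], responses["gender"]],
                        responses["feels_listened_to"],
                        normalize="index")
print(three_way.round(2))
```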

One participant in this evening’s workshop suggested: “It would be useful to have a reminder of how to read all these charts. What does all this mean?”. Next time I run a similar session (and I’m keen to develop the idea further) I’ll look into finding or preparing a cheat-sheet for reading whatever data visualisations get created…

2) ‘Expectations and Evidence’ can provide a good framework to start engaging with data
In this evening’s workshop, after looking at data, we turned to talk about interview questions the group might ask delegates at an upcoming conference. A number of the question ideas threw up new ‘expectations’ the group held (for example, that youth services were being cut in different ways in different places across the country), for which there might be ‘evidence’ available to support or challenge them. Whilst we didn’t have time to go and seek out the relevant data, there was potential here to search data catalogues and use a range of visualisation and exploration approaches to test those bigger expectations (our first expectations work had focussed on some fairly localised survey data).

3) The questions and processes matter
When I started to think about how data and participation might fit together I sketched out different sorts of questions that participation processes might work with. Different questions link to different processes of decision making…

  • (a) What was your experience of…? (share your story…we’ll analyse)
  • (b) What do you think of…? (give your opinion … we’ll decide what to do with it)
  • (c) What should we do about…? (give us your proposals…)
  • (d) Share this decision with us… (we need to work from shared understanding…)

To introduce data into (a) and (b) is tricky. If the ‘trend’ contradicts an individual’s own view or experience, it can be very demanding to ask them to reconcile that contradiction. Of course, creating opportunities for people with experience of a situation to reconcile tensions between stats and stories is better than leaving it up to distant decision makers to choose whether to trust what the data says or what people are saying when the two don’t seem to concur – but finding empowering participative processes for this seems tough.

It seems that data can feature in participation more easily when we shift from opinion gathering to decision sharing; but building shared understanding around narratives and around data is not something that can happen quickly in short sessions.

I’m not sure this post gets me towards any great answers on how to link data into participative processes. But, in the interests of thinking aloud (and in an effort to reclaim my blogging as reflective practice, getting away from the rather news- and reporting-driven turn it has taken of late) I’ll let it make it onto the blog, with all reflections and comments very much welcomed…

Digital Futures – Trends in Technology, Youth and Policy

[Summary: What technologies will affect services for young people in 2011? Presentation, worksheet and reflections on a workshop]

I’ve read a lot of blog posts and watched a lot of presentations about technology trends and future technologies that everyone needs to be aware of – but they can often feel pretty distant from the reality of frontline public services trying to make sense of how new technologies affect their work. So when I was offered the chance to run a workshop on ‘digital futures’ at the children’s services conference of a national children’s charity, right at the start of 2011, I thought it would provide an interesting opportunity to explore different ways of talking about and making sense of technology trends.


Reflections on Oxford Open Data Day

[Summary: creations and learning from Oxford Open Data Day]

Yesterday around 30 people got together in Oxford to take part in the first international Open Data Day, an initiative sparked off by David Eaves to get groups around the world exploring what they could create with public data. For many of the assembled Oxford crowd it was their first experience both of exploring public data and of taking part in a hack-day event – so, having started at 10am, it was fantastic to see what had been created by 4.30pm.

Thanks to everyone who took part in the day, and particularly to Ed, Kevin, Ed & Dave at White October for hosting the event, and to Incuna for sponsoring the lunch. Many thanks also to Sywia for blogging the event: you can find photos and video clips sharing the story here.

Quick Learning Notes

Skill building: I also took advantage of the Open Data Day to start exploring some of the ideas that might go into an Open Data Cook Book of ‘recipes’ for creating and working with open data. There are big challenges when it comes to building the capacity of technical developers and non-developers alike to discover, and then work with, open data.

I’ve been reflecting on the discovery and design processes we could use at the start of any open-data-focussed workshop – whether with developers, civil servants, community groups or campaigners – to provide the right level of context on what open data is and the potential and limitations of different datasets, and to give a general awareness of where data can be discovered. At Open Data Day in Oxford we perhaps struggled to generate ideas for projects in the first half of the day – but understandably so, given it takes a while to get familiar with the datasets available.

I wonder whether, for hack-day style events with people new to open data, some sort of training & team-building exercise in the first hour might be useful?

Data-led or problem-led: Most of the groups were broadly data-led: they found some data of interest, and then explored what could be done with it. One group (creating visualisations of the impacts of tax changes for the Robin Hood Tax campaign) was more ‘problem-led’ – starting with an issue to explore and then seeking data to work with. Both have their challenges: with the former, projects can struggle to find a focus; with the latter, it’s easy to get stuck because the data you imagined might be available turns out not to be. Discovering that the data you need isn’t available can provide a good spark for more open data campaigning (why, for example, are the details of prices in the Retail Price Index basket of goods not published, with FOI requests for them being turned down on the basis of ‘personal information’ exemptions?), but when you can’t get that campaigning to produce results during the course of a single day, it can be pretty frustrating as well.

On the day or in advance?:
We held a pre-meeting for the Oxford Open Data Day – and it was useful in getting people to know each other and in surfacing some ideas and sources of data – but we perhaps didn’t carry the ideas from that meeting through into the hack-day very strongly. Encouraging a few more people to act as project leaders in advance might have been useful for enabling those who came wanting to help on projects, rather than create their own, to get involved.

Data not just for developers:
My mantra. Yet still hard to plan for and make work. Perhaps building a greater training element into a hack day would help here, or encouraging some technically-inclined folk to take on the role of data-facilitators – helping non-developers get data into the shape they need for working with it in non-technical ways. Hopefully some of the open data cook book recipes might be useful here.

Sharing learning rather than simply products:
David Eaves set out three shared goals for the Open Data Day events:

1. Have fun

2. Help foster local supportive and diverse communities of people who advocate for open data

3. Help raise awareness of open data, why it matters, by building sites and applications

emphasising the importance of producing tangible things to demonstrate the potential of open data. This is definitely important – but I think we probably missed a trick by focussing on the products of the hack-day in presentations at the end of the day, rather than the learning and new skills people had picked up and could tell others about.

“Why should someone profit from me not living with my family?”

[Summary: More thoughts from the CROA conference on the details of a changing state]

“Why should someone profit from me not living with my family?”

That was the powerful question put by a young woman member of Manchester’s Care to Change Council to the Parliamentary Under-Secretary of State for Children and Families, Tim Loughton, at the CROA Conference last week, questioning the role of private providers in building and running children’s homes.

As she explains in this video, recorded after the panel discussion where the question was put (approx 1m 30s in), Tim Loughton suggested the government was not concerned with who runs children’s homes, as long as the quality of care is good. However, the discussion did get me thinking about how, regardless of the quality of the service, some services – such as providing a caring environment for someone to grow up in when they can’t live with their family – could be (or could feel) intrinsically different when provided through the private sector rather than the public sector.

How does moving into the market do more than just change the incentive structures for efficiency around specific public services?

Content analysis, tagging, linked data and digital objectivities

I’ve tried to keep musings on research methodology & epistemology mostly off this blog (they are mostly to be found over on my just-out-of-stealth-mode ‘Open Data Impacts’ research blog), however, for want of somewhere better to park the following brief(ish) reflections:

  • Content analysis is a social science method that takes ‘texts’ and seeks to analyse them: usually by ‘coding’ topics, people, places or other elements of interest in the texts, and seeking to identify the themes emerging from them.
  • One of the challenges of any content analysis is developing a coding structure, and defending that coding structure as reasonable. In most cases, the coding structure will be driven by the research interest, and codes applied on the basis of subjective judgements by the researcher. In research based within more ‘objective’ epistemic frameworks, or at least trying to establish conclusions as valid independently of the particular researcher, multiple people may be asked to code a text, and then tests of ‘inter-coder reliability’ (how much the coders agreed or disagreed) may be applied (see the sketch after this list).
  • With the rise of social bookmarking sites such as Delicious, and the growth of conventions of tagging and folksonomy, much online content already has at least some set of ‘codes’ attached. For example, here you can see the tags people have applied to this blog on Delicious.
  • Looking up any tags that have been applied to an element of digital content could be useful for researchers as part of their reflective practice, helping to ensure they have understood a piece of content from a wide range of angles – beyond the angle primarily driving their research.
  • (With many caveats) It could also support some form of ‘extra-coder reliability’ providing a check of coding against ‘folk’ assessments of content’s meaning.
  • The growth of the semantic web also means that many of the objects which codes refer to (e.g. people, organizations, concepts) have referenceable URIs, and if not, the researcher can easily create them. Services such as Open Calais and Open Amplify also draw on vast ‘knowledge bases’ to machine-classify and code elements of text – identifying, with re-usable concept labels, people, places, organizations and even emotions. (The implications of machine classification for content analysis are not, however, the primary topic of this point or post).
  • Researchers could choose to code their content using semantic web URIs and conventions – contributing their meta-data annotations of texts to either local or global hypertexts. For example, if I’m coding a paragraph of text about the launch of data.gov.uk, instead of just adding my own arbitrary tags to it, I could mark up the paragraph following some convention (RDFa?) and reference shared concepts. From a brief search of Subj3ct for ‘data’, I quickly find I have to make some fairly specific choices about which concepts of data I might be coding against – although, hopefully, if these have suitable relationships attached I may be able to query my coded data in more flexible ways in the future.
  • All of this raises a mass of interesting epistemic issues, none of which I can do justice to in these brief notes, but which include:
    • Changing the relationship of the researcher to concept-creation – and encouraging both the re-use of concepts, and the shaping of shared semantic web concepts in line with the research;
    • The appropriateness, or not, of using concepts from the semantic web in social scientific research, where the relatively objectivist and context-free framing of most current semantic web projects runs counter to the often subjectivist and interpretivist leanings within social science;
    • The role of key elements of the current web of concepts on the semantic web (for many social scientific concepts, primarily Wikipedia via the dbpedia project) where the choice of what concepts are easily referenceable or not depends on a complex social context involving both crowd-sourcing and centralised control (ref the policies of Wikipedia or other taxonomy / knowledge base providers).
  • The actual use of existing online tagging and semantic web URIs as part of the content analysis coding process (or any other social scientific coding process, for that matter) may remain, at present, both methodologically challenging and impractical given the available tools – but it is worth further reflection and exploration.
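To make the inter-coder reliability point above concrete, here is a minimal sketch of Cohen’s kappa – a common statistic for two-coder agreement – in Python. The codes and the coded segments are invented for the example:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for the
    agreement you'd expect by chance alone."""
    assert len(coder_a) == len(coder_b), "coders must code the same segments"
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: the probability that both coders independently pick
    # the same code, given each coder's own code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[code] * freq_b[code] for code in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two coders labelling the same ten text segments (invented data)
coder_a = ["policy", "story", "story", "data", "policy",
           "data", "story", "policy", "data", "story"]
coder_b = ["policy", "story", "data", "data", "policy",
           "data", "story", "story", "data", "story"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # ~0.7: substantial agreement
```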

Reflections; points to literatures that are already exploring this; questions etc. all welcome…

Legacies of social reporting: an IGF09 example

[Summary: aggregating content from the Internet Governance Forum & exploring ways to develop the legacy of social reporting at events…]

Introducing social reporting to an event can bring many immediate benefits: from new skills for those participating in the social reporting, to increased opportunities for conversation at the event, and bridges built between those present and those interested in the topic but unable to take part in person.

However, the wealth of content gathered through social reporting can also act as a resource ‘after the event’ – offering insights and narratives covering event themes, and offering contrasting and complementary perspectives to any ‘official’ event records that may exist.

Many of the tools I use when social reporting at an event have a certain ‘presentism’ about them. Newer content is prioritised over older content, and, in the case of dashboard aggregators like NetVibes, or services such as Twitter, good content can quickly disappear from the front page, or even altogether.

So, as we got towards the end of a frantic four days of social reporting at the Internet Governance Forum in Egypt earlier this year, I started thinking about how to make the most of the potential legacy impacts of the social reporting that was going on – both in the event-wide Twitterstream, and in the work of the young social reporters I was specifically working with.

Part of that legacy was about the skills and contacts gathered by the social reporters – so we quickly put together this handout for participants – but another part of that legacy was in the content. And gathering that together turned out to be trickier than I expected.

However, I now have a micro-site set up at http://igf2009.practicalparticipation.co.uk/ where you can find all the blog posts and blips created by our social reporters, as well as all the tagged tweets we could collect together. Over the coming weeks colleagues at Diplo will be tagging core content to make it easy to navigate and potentially use as part of online learning around Internet Governance. I’ve run the 3500+ twitter messages I managed to (eventually) aggregate through the Open Calais auto-tagging service as an experiment, to see whether this provides ways to identify insights within them – and I’ve been exploring different ways to present the information found in the site.
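For anyone curious about the mechanics of that auto-tagging experiment, the shape of the batching loop is sketched below. To be clear, the endpoint URL and parameter names are placeholders standing in for whatever the Calais documentation actually specifies, not a faithful rendering of its API:

```python
import time
import urllib.parse
import urllib.request

# Placeholder endpoint and parameter names - substitute the real values
# from the Open Calais documentation before using this.
CALAIS_ENDPOINT = "http://api.opencalais.com/enlighten/rest/"
LICENCE_KEY = "your-licence-key-here"

def tag_text(text):
    """Send one chunk of text to the tagging service; return the raw response."""
    data = urllib.parse.urlencode({"licenseID": LICENCE_KEY,
                                   "content": text}).encode()
    with urllib.request.urlopen(CALAIS_ENDPOINT, data) as response:
        return response.read().decode()

def tag_tweets(tweets, batch_size=50, pause=1.0):
    """Tag thousands of tweets in batches, pausing between calls to stay polite."""
    results = []
    for i in range(0, len(tweets), batch_size):
        chunk = "\n".join(tweets[i:i + batch_size])
        results.append(tag_text(chunk))
        time.sleep(pause)
    return results
```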

Learning: Next time set up the aggregator in advance
I didn’t start putting together the site (a quick bit of Drupal + FeedAPI, with the later addition of Views, Panels, Autotagging, Timeline and other handy modules) till the final day of IGF09, by which time over 50 blog posts had been added to our Ning website, and over 3000 twitter messages tagged #igf09.

Frustratingly, Ning only provides the last 20 items in any RSS feed with, as far as I can tell, no way to page through past items; and the Twitter search API is limited to fetching just 1500 tweets.

Fortunately, when it came to Twitter, I had captured all the tweets in Google Reader – but I still had to scrape the Twitter message IDs back out of there, and set up a slow script to spend a couple of days fetching the original tweets (given, again, the rate limiting on the Twitter API).
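For the curious, that slow script looked roughly like the sketch below – the status URL pattern and the thirty-second pause are assumptions for illustration, modelled on how the API behaved at the time:

```python
import json
import re
import time
import urllib.error
import urllib.request

# Status links captured in Google Reader look something like
# http://twitter.com/username/statuses/1234567890 - pull out the numeric ID.
STATUS_ID = re.compile(r"twitter\.com/\w+/status(?:es)?/(\d+)")

def extract_ids(links):
    """Scrape tweet IDs back out of a list of captured status URLs."""
    ids = []
    for link in links:
        match = STATUS_ID.search(link)
        if match:
            ids.append(match.group(1))
    return ids

def fetch_tweets(ids, pause_seconds=30):
    """Refetch original tweets one at a time, sleeping between requests to
    stay under the rate limit - slow, but it gets there over a couple of days."""
    tweets = []
    for tweet_id in ids:
        # Assumed URL pattern, modelled on the era's unauthenticated API -
        # adjust to whatever the service actually exposes.
        url = "http://twitter.com/statuses/show/%s.json" % tweet_id
        try:
            with urllib.request.urlopen(url) as response:
                tweets.append(json.load(response))
        except urllib.error.HTTPError:
            pass  # deleted or protected tweets just get skipped
        time.sleep(pause_seconds)
    return tweets
```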

For Ning, I ended up having to go through and find all the authors who had written on IGF09, fetch the feeds of their posts, and run them through a Yahoo Pipe to create an aggregate feed of only those items posted during the time of the IGF.
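The Yahoo Pipe was essentially doing a merge-and-date-filter. For anyone wanting to replicate that step without Pipes, the same logic in Python with the feedparser library might look like this (the feed URLs are hypothetical stand-ins for the real Ning author feeds):

```python
from datetime import datetime, timezone

import feedparser  # third-party: pip install feedparser

# Hypothetical author feeds - the real list came from trawling the Ning site
# for everyone who had written about IGF09.
AUTHOR_FEEDS = [
    "http://example-ning-network.com/profiles/blog/feed?user=alice",
    "http://example-ning-network.com/profiles/blog/feed?user=bob",
]
# IGF 2009 ran 15-18 November in Sharm el-Sheikh
EVENT_START = datetime(2009, 11, 15, tzinfo=timezone.utc)
EVENT_END = datetime(2009, 11, 18, 23, 59, tzinfo=timezone.utc)

def posts_during_event(feed_urls):
    """Merge several author feeds, keeping only items dated within the event."""
    kept = []
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            if not getattr(entry, "published_parsed", None):
                continue  # skip items with no usable date
            published = datetime(*entry.published_parsed[:6], tzinfo=timezone.utc)
            if EVENT_START <= published <= EVENT_END:
                kept.append(entry)
    return sorted(kept, key=lambda e: e.published_parsed)
```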

It would have been a lot easier if I had set up the Drupal + FeedAPI aggregator beforehand, and added new feeds to it whenever I found them.

Discoveries: Language and noise
I’ve spent most of my time just getting the content into this aggregator and setting up a basic interface for exploring it; I’ve not yet had a chance to dive in and really explore the content itself. However, two things I noticed:

1) There is mention of a francophone hash-tag for IGF2009 in some of the tweets. Searching on that hash-tag now, over a month later, doesn’t turn up any results – but it’s quite possible that there were active conversations this aggregator fails to capture because we weren’t looking at the right tags.

[Image: Social network map of tweets – mapping Twitter @s with R and iplot]

2) A lot of the Twitter messages aggregated appear to be about the ‘censorship incident’ that dominated external coverage of IGF09, but which was only a small part of all the goings-on at IGF. Repeated tweeting and re-tweeting on one theme can drown out conversations on other themes unless there are effective ways to navigate and filter the content archives.

I’ve started to explore how the @ messages and RTs within tweets could be used to visualise the structure, as well as the content, of conversations – but have run up against the limitations of my meagre current skill set with R and iplot.
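The extraction step, at least, is straightforward even where the visualisation isn’t. Here is a minimal sketch of pulling an edge list of @-mentions, and a tally of retweeted users, out of a pile of tweets – ready to hand over to R/iplot, Gephi or similar (the sample data is invented):

```python
import re
from collections import Counter

MENTION = re.compile(r"@(\w+)")
RETWEET = re.compile(r"\bRT\s+@(\w+)", re.IGNORECASE)

def conversation_edges(tweets):
    """Build (sender, mentioned-user) edges from tweets, where each tweet is
    a dict with 'user' and 'text' keys - an edge list for any graph tool."""
    edges = Counter()
    for tweet in tweets:
        sender = tweet["user"].lower()
        for mentioned in MENTION.findall(tweet["text"]):
            edges[(sender, mentioned.lower())] += 1
    return edges

def retweet_sources(tweets):
    """Count whose messages get re-amplified the most."""
    return Counter(m.lower() for t in tweets for m in RETWEET.findall(t["text"]))

# Invented example data
sample = [
    {"user": "reporter1", "text": "RT @panelist: access is the real issue #igf09"},
    {"user": "reporter2", "text": "@reporter1 agreed - see the workshop notes #igf09"},
]
print(conversation_edges(sample))
print(retweet_sources(sample))
```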

I’m now on the lookout for good ways of building some more intelligent analysis of tweets into future attempts to aggregate with Drupal – possibly by extracting information on @s and RTs at the time of import, using the promising FeedAPI Scraper module from the great folk at Youth Agora.

Questions: Developing social reporting legacies
There is still a lot more to reflect upon when it comes to making the most of content from a socially reported event, not least:

1) How long should information be kept?

I’ve just been reading Delete, which very sensibly suggests that not all content should be online for ever – and particularly with conversational twitter messages or video clips, there may be a case for ensuring a social reporting archive only keeps content public for as long as there is a clear value in doing so.

2) Licensing issues

Aggregation on the model I’ve explored assumes a licence to collect and share tweets and other content. Is this a fair assumption?

3) Repository or advocacy?

How actively should the legacy content from social reporting be used? Should managing the legacy of an event also involve setting up search and blog alerts, and pro-actively spreading content to other online spaces? If so – who should be responsible for that and how?


If you are interested in more exploration of Social Reporting, you may find the Social by Social network, and Social Reporters group there useful.

Social Reporting the Internet Governance Forum: Multiple Knowledges

[Image: Social reporting in the Youth Corner]

I’ve just come back from a fascinating five days working with a team of young Egyptians and fellows of the Diplo Internet Governance Capacity Building Programme at the 2009 Internet Governance Forum (IGF). Amongst other things, one of the key things I was up to, working with Pete Cranston and Dejan Dincic, was training and supporting the youth team and Diplo fellows to use various digital online tools to ‘social report’ the IGF. The work was funded by IKM Emergent – a project focussed on new perspectives on Knowledge Management (KM), particularly looking at ‘multiple knowledges’.

In the process of working with a diverse international group at an incredibly diverse and complex event, we gained many insights into social reporting for multiple knowledges – and I’ve tried to unpack some of my reflections and learning below:

Social Reporting for multiple knowledges
One of the great transformations brought about by online digital media is that just about anyone can now create and share rich media offering their own view of events or issues – and this media can be published where much of the world’s Internet-connected population will be able to see it. As Deirdre from St Lucia pointed out, it’s not long ago that getting more than one news channel’s coverage of even major events was near impossible.

The main sessions of IGF09 were well recorded, with UN webcasting in video or audio from every session or workshop, and live transcripts of many sessions available. Formal write-ups of each session will be published in due course. With social reporting, however, our goal was not to duplicate these formal records of the event, but to offer each participant – and particularly the youth team and Diplo fellows (henceforth ‘the social reporting team’) – the chance to report on the elements of the event of interest to them. And to do that, we were using simple, near-instant, online social media tools.

The idea of multiple knowledges is of course a complex one, and has many layers – but at IGF09 our core focus was on just one element – supporting the capture and sharing of different perspectives on the event from different actors in the event.

Reflection One: Train in techniques, as well as tools

Few of the social reporting team we were working with had used Twitter, online video or blogging as reporting tools before. Before the IGF got started, Pete & Dejan ran a short afternoon’s training with some of the social reporting team – explaining how tools like Twitter work, encouraging team members to sign up for accounts, and getting participants to practise using Flip digital video cameras. They also introduced the team to the Social Reporting at the IGF handbook we had prepared.

However, whilst the handbook does offer a short introduction to the concept of social reporting itself, and mentions a few practical techniques for video interviews, it was only later in the week that we started to do more to demonstrate different techniques and to talk about ‘conceptual tools’ for creating social reporting content.

For example, the ‘five interesting things about…’ approach can be a very good technique to help new bloggers move away from replicating a ‘list of things that happened’ in a session, towards capturing a ‘list of interesting elements’ or a ‘list of controversies’ for social reporting.

It would be worth exploring in more depth the range of different techniques (and templates) that can help new (and experienced) social reporters to capture multiple knowledges in their reporting – and to explore how best to train and equip social reporters to choose and use these approaches.

Reflection Two: Let reporters choose their tools – and then build up multi-tool use

A social reporter who is comfortable with many different digital tools, and who is covering a particular conference theme, may start by sharing some insight or quotes from a session via Twitter. They may follow up by catching the panelist the quote came from, and asking them to share more of their views in a short video interview. They may then upload that video interview, keeping a copy on their computer to edit into a later remix; when the video is available to view online, they would use Twitter again to alert others to the fact it has been published, actively alerting (by using the @username convention) anyone who expressed an interest in the earlier twitter messages on this topic. Later in the day, when things are quieter, they may embed a screen-shot of the original tweet, and a copy of the video, into a blog post in which they draw out a key message from the video, and link to other blog posts and websites which relate to the topic under discussion.

But getting from no use of social media tools and no experience of social reporting to that sort of platform-hopping, mixed-media reporting in just a few days is a tall order. In fact, rather than trying to get new social reporters to platform-hop from the start, a quick show-and-tell or hands-on demo of the different tools available, followed by an invitation to each member of the social reporting team to choose which tools they wanted to explore first, or felt most comfortable with, seemed to generate far better results.

Reflection Three: It helps to know your audience

It’s tricky to write when you don’t know who you are writing for. It’s a lot easier to carry out a video interview when you have a sense of who might watch it. And it’s often easier to allow yourself to be present in your own reporting when you know your main audience will be a community you are part of. We all present ourselves differently to different audiences, and so to capture multiple knowledges, it can be useful for a social reporting team to think about multiple audiences.

We didn’t get much time to explore with our social reporting teams who they saw as the audience for the content they were creating, nor to think about the different spaces the content could be published or aggregated to in order to reach different audiences – but I have a sense this could be a valuable additional part of training and preparation for social reporting. At first we found all the reporting was taking place in English, but we encouraged our social reporters to create content in whatever language they felt most comfortable with, or felt was most appropriate for the content in question.

There were a number of ‘remote hubs’ following the IGF via the webcast and participating in discussions through Skype and Webex, and in our debrief we’ve reflected on how it may be possible to pair social reporters up with geographical or thematic remote hubs – giving each reporter a strong connection with a specific audience.

Reflection Four: Quick clips cannot capture all knowledges

[Image: Quick Clips]

The Internet Governance Forum is a complex event. Not only does it deal with some complex issues (socially, technically and culturally), it is also made up of a vast array of actors, from governments and industry to individuals and civil society. As a non-decision-making body the spirit is neither one of consensus nor of conflict – and black and white statements of position are rare. The presence of all different shades of opinion, and of the experience of actors from many different countries and contexts, appears to make the IGF the ideal place to explore multiple knowledges. Yet at the same time, the complexity of context and content makes capturing the multiple perspectives on IGF in ‘social media snippets’ a challenge.

In video reporting, social reporters need to have reasonable domain knowledge in order to ask questions that elicit insights from interviewees. In quick Twitter-based reporting, capturing the most relevant points without reducing them to soundbites can be tricky – or can lead to only the most ‘tweetable’, and not necessarily the most interesting or important, ideas being shared. In blogging, the lack of definitive positions to ‘side with’ in writing up a session or theme can mean the social reporter needs to pick a path through many subtly different perspectives and express them in text.

Reflection Five: When the event ends, then things are just getting started…

[Image: Screen capture of NetVibes aggregator]

On the last day of the IGF I hastily put together this ‘Social Reporting after IGF’ handout for our teams – as we realised it was important to make sure that, for the social reporters, the end of IGF09 was not necessarily the end of their use of social media tools to capture and share ideas. (I’ve also created a ‘Social Reporting’ group over on the Diplo Internet Governance network.) Having invited many of the youth team, and the fellows from Diplo, to sign up with various online spaces, including Twitter, for the first time, we also had a responsibility to make sure they were aware of the implications of continued use of these tools.

But ensuring new social reporters know how they can continue to use social media tools to capture content and create networks is only part of the legacy of social reporting at an event. With the creation of a significant amount of content, there is some obligation upon us to do something with it.

During the IGF we were using a public NetVibes page as an aggregator of all the content being published, but this does not act as a longer-term archive of the content, nor does it allow us much flexibility to curate and contextualise the content gathered.

So, over the coming weeks we’ll be thinking about ways to aggregate, archive and curate the content we gathered – and thinking about whether any content can continue to be used in useful ways over the coming year.

There is little point in equipping people with the skills to capture multiple knowledges, and doing some of that capture, if the skills are left unused in future, and the content captured, and the knowledges it expresses, disappears entirely into Internet obscurity.


I am sure there are many more reflections and learning points from other members of the team – which they will undoubtedly share in due course.

To find out more about Diplo and the Internet Governance Forum visit: http://www.diplomacy.edu/ig/ and http://www.diplointernetgovernance.org/

To find out more about Practical Participation – my work focussing on Youth Participation and Social Technologies visit http://www.practicalparticipation.co.uk

And to explore social reporting more in the context of the IGF – please join the Diplo Internet Governance community’s Social Reporting group here: http://www.diplointernetgovernance.org/group/socialreporting

Do stop me if…

I’ve just made it to the end of the first week of term on the MSc course I’m taking full time for the next year.

I’m aware that my writing style is already heading off towards the academic and even-more-verbose-than-usual, and that the topics I’m exploring day-to-day are getting relatively specialised.

I’ve been wondering whether I should start another blog for study-related content, but I’ve decided, for now, to stick to writing here.

But – this blog is for its readers as well as its writer. I want to make sure I can get a sense of whether readers think the blog is getting too technical. Or indeed, not technical enough.

When speaking at conferences I’ve previously used the jargon-busting red card system, where everyone in the audience has a red card to hold up should the speaker end up off-topic or using too much jargon. Scary for the person on the podium. But it encourages great discipline in the speaker to maintain clarity and focus. Seeing a sea of red cards start to shuffle in the audience certainly helps me get back on track if I’ve misjudged how to pitch a presentation.

So consider this post to be the gift of a virtual red card, and an open invitation and encouragement to give your feedback and help keep this blog useful and practical.

One Interesting Thing about Five Interesting Things

(and a bonus reflection on the network society)

If you write a title of the form ‘Five interesting things about X’ at the top of a page or blog post – where X is an event, a workshop you’ve run, or a paper you’ve just read – chances are you can fairly quickly distil a page or post full of, well, five interesting things…

And chances are that other people will find it useful.

If you ask me, that’s fairly interesting. And it’s got interesting implications.

I’ve just spent three days with various academics, managers and practitioners from the world of human services (including quite a few folk with experience of youth work), where we’ve been exploring the rise of the ‘Network Society‘ (if the term is not familiar, at least look at the Wikipedia page) and its impact on the whole field of human services.

We encountered a lot of challenges. Many of the practitioners and managers can perceive the need to think about and adapt to the network society – but they struggle to find the time to engage with both the literature on the network society and the practice dilemmas and tensions that need to be resolved in responding to the rise of a digitally connected world. Many of the academics are doing in-depth research and writing great articles (sometimes about the fact that practitioners are ever more time-pressured) – but are struggling to get that research to influence practice. And academics and practitioners alike with teaching responsibilities talked about the challenges of using old lecture- and essay-based formats of education to equip a new generation of youth workers, social workers and probation officers.

Talking about “Five interesting things” may not offer a resolution to all those challenges, but over lunch today we explored how it might have something to offer. What if:

  • Tutors encouraged students to read recent research and summarise the ‘Five Interesting Things’ from key articles.
  • We combined the lists of ‘interesting things’ produced by different students through an online tool like IdeaScale, and encouraged students and practitioners to vote the most and least significant ‘interesting things’ up and down.
  • We took the top five ‘most interesting things’ and used them as abstracts alongside all the articles being input into big knowledge management systems for practitioners.

A new teaching approach. Turning research into practical nuggets of information & knowledge. And helping practitioners engage with contemporary learning about major social shifts and developments.

Interesting?

(BTW: Do check out the Connected Practice Ning if the idea of ‘human services in the network society’ is one that resonates with you)

TV, Channels and Social Networks – a day with DigiTV

I was speaking yesterday at the DigiTV Stakeholder Event alongside Steven Flower of Substance/Plings – exploring how information providers can ensure their information and services are ready not just to feed into the ‘channels’ young people use, but also into the networks and social networks through which young people and the wider population are increasingly accessing information.

Presentation slides
If you were at the event and are looking for the slides I promised to share, you can find them as a PDF download here, or on Slideshare here.

The future of TV
The last time I lived in a household with a TV was 2003, and that was an old analogue set, so I’ve not yet fully got my head around the current versions of digital TV. Which meant that hearing Ian Valentine from Miniweb (and formerly of the R&D team at Sky) speak about both the present and the future of digital TV was a bit of an eye-opener for me. The convergence of the TV set and an always-on broadband internet connection looks set to have some really interesting implications.

Below are a few quick reflections on some of the content of yesterday:

  • An interactive platform for the household? – The mobile phone, laptops, and even family computers are set up as private screens: one user at a time. The TV still appears to operate in most settings as a shared screen. With digital messaging (TV e-mail / RSS feeds to the TV?) and social interaction features (share with a friend etc.) built directly into the television watching experience – not as separate applications that require a move away from TV watching to access – is there a potential for digital messaging and social-networking features based less around the individual, and more around the household?
  • Digital TV services are not just for access at home – Continuing the theme of the shared screen: one presenter talked about how they have installed digital TV in some of the community venues they work in, in order to provide access to the services they have developed digital TV interfaces for. At first this seems odd – surely those venues already had internet access and computers which could be used to access the very same services? But the digital TV interface was, perhaps precisely because of the constraints of the platform, much easier for the service’s target group to use. The idea of simple interfaces to interactive tools on a shared screen is really quite appealing in a lot of contexts (e.g. a youth group working on a consultation without other digital distractions).
  • Service delivery via digital TV will no longer just be a way of reaching the 30% or so of internet non-adopters. As the TV becomes a broadband internet access device to parallel other screens such as the phone, it is reasonable to assume it will become increasingly important to create TV-ready websites. Ian from MiniWeb spoke a bit about the wTVML markup gateways they have been developing as a way of translating standard CMS-driven websites into TV-ready interfaces with the addition of a little XML to the website templates. This makes it more important than ever to develop standards-compliant sites from the start.
  • Social Networking comes to the TV. Some of the features of next-generation digital TV shown at the event highlight the potential for rich social networking tools and platforms to be built into the TV. This is one to keep an eye on when it comes to Youth Work & Social Networking – and thinking about safer social networking.

Even so, I’m sticking to the Radio.