
Open Data Training at the Open Knowledge Foundation

Laura James - September 26, 2013 in Business, CKAN, Featured, Open Data, Open Government Data, Open Knowledge Foundation, Our Work, School of Data, Technical, Training

We’re delighted to announce today the launch of a new portfolio of open data training programs.

For many years the Open Knowledge Foundation has been working — both formally and informally — with governments, civil society organisations and others to provide this kind of advice and training. Today marks the first time we’ve brought it all together in one place with a clear structure.

These training programs are designed for two main groups of people interested in open data:

  1. Those within government and other organisations seeking a short introduction to open data – what it is, why to “do” open data, what the challenges are, and how to get started with an open data project or policy.

  2. The growing group of those specialising in open data, perhaps as policy experts, open data program managers, technology specialists, and so on, generally within government or other organisations. Here we offer more in-depth training including detailed material on how to run an open data program or project, and also a technical course for those deploying or maintaining open data portals.

Our training programs are designed and delivered by our team of open data experts with many years of experience creating, maintaining and supporting open data projects around the world.

Please contact us for details on any of these courses, or if you’d be interested in discussing a custom program tailored to your needs.

Our Open Data Training Programs

Open Data Introduction

Who is this for?

This course is a short introduction to open data for anyone and is perfectly suited to teams from diverse functions across organisations who are thinking about or adopting open data for the first time.

Topics covered

Everything you need to understand and start working in this exciting new area: what open data is, why institutions should open up their data, what the benefits and opportunities of doing so are, and of course how you can get started with an open data policy or project.

This is a one day course to help you and your team get started with open data.

Photo by Victor1558

Administrative Open Data Management

Who is this for?

Those specialising in open data, whether as policy experts, open data program managers, or in similar roles in government, the civil service, and other organisations. This course is specifically for non-technical staff who are responsible for managing Open Data programs in their organisation. Such activities typically include implementing an Open Data strategy, designing and launching an Open Data portal, coordinating publication processes, preparing data for publication, and fostering data re-use.

Topics covered

Basics of Open Data (legal, managerial, technical); Success factors for the design and execution of an Open Data program; Overview of the technology landscape; Success factors for community re-use.

Open Data Portal Technology

Who is this for?

Those specialising in open data, whether as software or data experts, open data delivery managers, or in similar roles in government, the civil service, and other organisations. This course is for technical staff who are responsible for maintaining or running an enterprise Open Data portal. Such activities typically include deployment, system administration and hosting, site theming, development of custom extensions and applications, ETL procedures, data conversions, and data life-cycle management.

Topics covered

Basics of Open Data, publication process, and technology landscape; architecture and core functionality of a modern Open Data Management System (CKAN used as example). Deployment, administration and customisation; deploying extensions; integration; geospatial and other special capabilities; engaging with the CKAN community.

Photo by Victor1558

Custom training

We can offer training programs tailored to your specific needs, for your organisation, data domain, or locale. Get in touch today to discuss your requirements!

Working with data

We also run the School of Data, which helps civil society organisations, journalists and citizens learn the skills they need to use data effectively, through both online and in-person “learning through doing” workshops. The School of Data runs data-driven investigations and explorations, and data clinics and workshops from “What is Data” up to advanced visualisation and data handling. As well as general training and materials, we offer topic-specific and custom courses and workshops. Please contact us to find out more.

As with all of our work, all relevant materials will be openly licensed, and we encourage others (in the global Open Knowledge Foundation network and beyond) to use and build on them.

Network Summit

Naomi Lillie - July 19, 2013 in Network, Open GLAM, Open Government Data, Open Humanities, Open Knowledge Foundation, Open Knowledge international Local Groups, Open Science, Our Work, Talks, Working Groups

Twice a year the whole community of the Open Knowledge Foundation gathers to share with, learn from and support one another. The Summer Summit 2013 took place in Cambridge (UK) last week (10th-14th July), with staff updates on the Thursday and network representatives joining on the Friday, Saturday and Sunday.

It was so inspiring to hear what our network has been doing to further the Open movement recently and over the last 6 months!

We heard from Local Groups about how these groups have been effecting change in all our locations around the world:

  • Alberto for OKFN Spain has been promoting open transparency in budgets, including their own, and using the power of events to gather people;
  • OKFN Taiwan, represented by TH (who we believe travelled the furthest to be with us in person), has also been investing in many large events, including one event for developers and others attracting 2,000 people! They have also been supporting local and central governments on open data regulation;
  • Charalampos of OKFN Greece highlighted the recent support of their work by Neelie Kroes, and took us through a project which maps accidents using data from police departments and census data, along with crowd-sourced data;
  • Pierre at OKF France reported that they have been helping redesign the national open data portal, as well as developing an open data portal for children and young people, which may align well with School of Data;
  • The Swiss Chapter of the Open Knowledge Foundation is of course hosting OKCon in September, and Hannes updated us on exciting developments here. He also reported on work to lobby and support government by developing visualisations of budget proposals, developing a federal-level open data strategy and policy, and promoting a national open data portal. Thanks to their efforts, a new law on open weather data was accepted, with geodata next up;
  • David updated on OKFN Australia where there is support from government to further the strong mandate for open scientific data. The newspaper the Age has been a firm ally, making data available for expenses and submissions to political parties, and a project to map Melbourne bicycle routes was very successful;
  • Francesca of OKF Italy has been working alongside Open Streetmap and Wikimedia Italy, as well as with parliament on the Open Transport manifesto. They have also been opening up ecological data, from “spaghetti open data”;
  • OKFN Netherlands was represented by Kersti, who reported a shared sense of strength in open government data and open development, as well as in the movement Open for Change (where OKCon is listed as the top ‘Open Development Event’!);
  • Dennis, for OKF Ireland, has been pushing the local events and gathering high-profile ‘rock stars’ of the open data world as well as senior government representatives. He has also presented on open data in parliament;
  • OKF Scotland is a growing grassroots community, as conveyed by Ewan – an Open Data Day asserted the importance of connecting to established grassroots communities who are already doing interesting things with data. They are also working closely with government to release data and have organised local hackdays with children and young people;
  • Bill joined us remotely to update on OKF Hong Kong, where regular meet-ups and hackdays are providing a great platform for people to gather around open knowledge. Although not able to join us in person (like Everton / Tom from OKF Brasil) Bill was keen to report that OKF Hong Kong will be represented at OKCon!
  • OKF Austria’s update was given by Walter, who informed us that transport data is now properly openly licensed and that several local instances of the international Working Groups have been set up. Which segues nicely, as…

It wasn’t just during the planned sessions where community-building and networking occurred: despite the scorching 30°C (86°F) heat – somewhat warmer than the Winter Summit in January! – people made the most of lunchtimes and breaks to share ideas and plan.

We also heard from Working Groups about how crossing international boundaries is making a difference to Open for all of us:

  • Open Sustainability was represented by Jack who explained Cleanweb (an initiative to use clean technologies for good, engaging with ESPA to open up data) and has set up @opensusty on Twitter as a communication route for anyone wanting to connect;
  • Ben, newly involved with Open Development, explained about the group’s plans to make IATI‘s released data useful, and bringing together existing initiatives to create a data revolution;
  • Open Science, represented by Ross, has been very active with lobbying and events, with the mailing list constantly buzzing with discussions on open data, licensing and convincing others;
  • Daniel explained that Open Government Data, one of the largest groups with 924 mailing list members, has played an important role at the heart of the Open Government Data movement, as a place for people to go with questions and – hopefully! – find answers. Daniel will be stepping down, so get in touch if you would like to help lead this group; in the meantime, the Steering Committee will be helping support the group;
  • OpenGLAM has also developed an Advisory Board, said Joris. There is good global reach for Open GLAM advocacy, and people are meeting every month. Documents, case studies, slide-decks and debates are available to new joiners to get started, and the Austrian instance of the Working Group demonstrated the process works. (Joris has now sadly left Open Knowledge Foundation ‘Central’, but we are delighted he will stay on as volunteer Coordinator for this group!);
  • Public Domain, with Primavera reporting, has been working on Public Domain Calculators in partnership with the government. PD Remix launched in France in May, and Culture de l’Europe will present at OKCon;
  • Primavera also updated on Open Design, where future planning has taken priority. The Open Design Definition has been a highlight but funding would help further activity and there are plans to seek this proactively. Chuff, the Open Knowledge Foundation Mascot, was pleased to get a mention…

It should be noted that these activities and updates are brief highlights only – distilling the activities of our groups into one or two sentences each cannot do justice to everything we could talk about here!

We also made time for socialising at the Summit, and much fun was had with Scrabble, playing frisbee and punting – not to mention celebrating Nigel’s birthday!

As an aside, I was going to state that “we only need an Antarctic representative and the Open Knowledge Foundation will have all seven continents in our network”; however, it appears there is no definitive number of continents or agreed land-masses! An amalgamated list is Africa (Africa/Middle East and North Africa), America (Central/North/South), Antarctica, Australia (Australia/Oceania) and Eurasia (Europe/Asia)… but, however you wish to define the global divisions (and isn’t it pleasing that it’s difficult to do so?), Antarctica is the only area in which the Open Knowledge Foundation is not represented! Are you reading this from an outstation at the South Pole, or do you know someone there, and want to contribute to open knowledge? Apply to become an Ambassador and be the person who cements the Open Knowledge Foundation as a fully global demonstration of the Open movement.

If you’re in an unrepresented area – geographic or topic – we’d love to hear from you, and if you’re in a represented area we’d love to put you in touch with others. Get Involved and connect with the Open Knowledge Foundation Network – and maybe we’ll see you at the next Summit!

Images 1, 4-7 and front page: Velichka Dimitrova. Images 2 and 3: Marieke Guy, CC-BY-NC-ND

Open Knowledge: much more than open data

Laura James - May 1, 2013 in Featured, Ideas and musings, Join us, Open Data, Open Knowledge Foundation, Our Work

Book, Ball and Chain

We’ve often used “open knowledge” simply as a broad term to cover any kind of open data or content from statistics to sonnets, and more. However, there is another deeper, and far more important, reason why we are the “Open Knowledge” Foundation and not, for example, the “Open Data” Foundation. It’s because knowledge is something much more than data.

Open knowledge is what open data becomes when it’s useful, usable and used. At the Open Knowledge Foundation we believe in open knowledge: not just that data is open and can be freely used, but that it is made useful – accessible, understandable, meaningful, and able to help someone solve a real problem. Open knowledge should be empowering – it should enable citizens and organizations to understand the world, create insight and effect positive change.

It’s because open knowledge is much more than just raw data that we work both to have raw data and information opened up (by advocating and campaigning) and to create the tools that turn that raw material into knowledge people can act upon. For example, we build technical tools – open source software to help people work with data – and we create handbooks which help people acquire the skills they need to do so. This combination, that we are both evangelists and makers, is extremely powerful in helping us change the world.

Achieving our vision of a world transformed through open knowledge, a world where a vibrant open knowledge commons empowers citizens and enables fair and sustainable societies, is a big challenge. We firmly believe it can be done, with a global network of amazing people and organisations fighting for openness and making tools and more to support the open knowledge ecosystem, although it’s going to take a while!

We at the Open Knowledge Foundation are committed to this vision of a global movement building an open knowledge ecosystem, and we are here for the long term. We’d love you to join us in improving the world through open knowledge; there will be many different ways you can help coming up during the months ahead, so get started now by keeping in touch – by signing up to receive our Newsletter, or finding a local group or meetup near you.

Opening up the wisdom of crowds for science

Francois Grey - April 22, 2013 in Featured, News, Open Data, Open Science, Our Work, PyBossa, Releases

We are excited to announce the official launch of Crowdcrafting, an open source software platform – powered by our PyBossa technology – for developing and sharing projects that rely on the help of thousands of online volunteers.

crowdcrafting logo

At a workshop on Citizen Cyberscience held this week at University of Geneva, a novel open source software platform called Crowdcrafting was officially launched. This platform, which already has attracted thousands of participants during several months of testing, enables the rapid development of online citizen science applications, by both amateur and professional scientists.

Applications already running on Crowdcrafting range from classifying images of magnetic molecules to analyzing tweets about natural disasters. During the testing phase, some 50 new applications have been created, with over 50 more under development. The Crowdcrafting platform is hosted by University of Geneva, and is a joint initiative between the Open Knowledge Foundation and the Citizen Cyberscience Centre, a Geneva-based partnership co-founded by University of Geneva. The Sloan Foundation has recently awarded a grant to this joint initiative for the further development of the Crowdcrafting platform.

Crowdcrafting fills a valuable niche in the broad spectrum of online citizen science. There are already many citizen science projects that use online volunteers to achieve breakthrough results, in fields as diverse as proteomics and astronomy. These projects often involve hundreds of thousands of dedicated volunteers over many years. The objective of Crowdcrafting is to make it quick and easy for professional scientists as well as amateurs to design and launch their own online citizen science projects. This enables even relatively small projects to get started, which may require the effort of just a hundred volunteers for only a few weeks. Such initiatives may be small on the scale of most online social networks, but they still correspond to many man-years of scientific effort achieved in a short time and at low cost.
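The core pattern behind platforms like this is simple: each task is shown to several volunteers and their answers are aggregated. The sketch below illustrates that idea with a plain majority vote; the task names, labels, and aggregation rule are illustrative assumptions, not Crowdcrafting's actual internals.

```python
# Illustrative sketch of crowdsourced task aggregation: several volunteers
# answer each task, and the most common answer is taken as the result.
# (Real projects may use more sophisticated aggregation than majority vote.)
from collections import Counter

def aggregate(answers):
    """Pick the most common volunteer answer for a task."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical tasks, e.g. classifying tweets after a natural disaster.
task_answers = {
    "tweet-17": ["damage", "damage", "no-damage"],
    "tweet-18": ["no-damage", "no-damage", "no-damage"],
}

results = {task: aggregate(ans) for task, ans in task_answers.items()}
assert results == {"tweet-17": "damage", "tweet-18": "no-damage"}
```

Redundancy is the point: individual volunteers make mistakes, but agreement across several of them is a cheap, effective quality check.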

“By emphasizing openness and simplicity, Crowdcrafting is lowering the threshold in investment and expertise needed to develop online citizen science projects”, says Guillemette Bolens, Deputy Rector for Research at the University of Geneva. “As a result, dozens of projects are under development, many of them in the digital humanities and data journalism, some of them created by university students, others still by people outside of academia.”

An example occurred after the tropical storm that wreaked havoc in the Philippines late last year. A volunteer initiative called Digital Humanitarian Network used Crowdcrafting to launch a project called Philippines Typhoon. This enabled online volunteers to classify thousands of tweets about the impact of the storm, in order to more rapidly filter information that could be vital to first responders. “We are excited about how Crowdcrafting is assisting the digital volunteer community worldwide in responding to natural disasters,” says Francesco Pisano, Director of Research at UNITAR.

“Crowdcrafting is also enabling the general public to contribute in a direct way to fundamental science,” says Gabriel Aeppli, Director of the London Centre for Nanotechnology (LCN), a joint venture between UCL and Imperial College. A case in point is the project Feynman’s Flowers, set up by researchers at LCN. In this project, volunteers use Crowdcrafting to measure the orientation of magnetic molecules on a crystalline surface. This is part of a fundamental research effort aimed at creating novel nanoscale storage systems for the emerging field of quantum computing.

Commenting on the underlying technology, Rufus Pollock, founder of the Open Knowledge Foundation, said, “Crowdcrafting is powered by the open-source PyBossa software, developed by ourselves in collaboration with the Citizen Cyberscience Centre. Its aim is to make it quick and easy to do ‘crowdsourcing for good’ – getting volunteers to help out with tasks such as image classification, transcription and geocoding in relation to scientific and humanitarian projects.” The Shuttleworth Foundation and the Open Society Foundations funded much of the early development work for this technology.

Francois Grey, coordinator of the Citizen Cyberscience Centre, says, “Our goal now, with support from the Sloan Foundation, is to integrate other apps for data collection, processing and storage, to make Crowdcrafting an open-source ecosystem for building a new generation of browser-based citizen science projects.”

For further information, see the Crowdcrafting website.

Announcing the Open Knowledge Conference 2013: Open Data – Broad, Deep, Connected

Rufus Pollock - March 21, 2013 in Events, Featured, News, OKCon, OKFest, Open Knowledge Foundation, Our Work

The Open Knowledge Foundation is pleased to announce that the 2013 Open Knowledge Conference (OKCon) will take place in Geneva, Switzerland on 17th–18th September. The theme of this year’s edition will be Open Data – Broad, Deep, Connected.

The world’s leading open data and open knowledge event, OKCon is the latest in an annual series run since 2005. Last year’s installment in Helsinki had more than 1000 participants from over 50 countries and was the largest event of its kind to date. Previous speakers have included inventor of the World Wide Web Sir Tim Berners-Lee, Hans Rosling of Gapminder, Brewster Kahle of the Internet Archive, and Ellen Miller of the Sunlight Foundation.

Located in Geneva, a major site for the United Nations and many other international institutions, this year’s event will focus on coordinating and strengthening public policy around the world to support a truly global and interconnected ecosystem of open data.

Open Data – Broad, Deep, Connected

In the last few years we’ve seen government open data initiatives grow from a handful to hundreds, and we’ve seen open data become important in areas such as research, culture and international development. This event will explore how open data is not only expanding geographically but also touching new sectors and new areas. How should governments and international institutions such as the UN react to these changes? How should business take advantage of new opportunities and contribute to the open data economy? How do citizens and civil society organizations turn data into accountability and into change?

This year’s OKCon will focus on the following questions:

  • How do we broaden open data – not only geographically across countries and regions, but also across domains and institutions? For example, whilst open data is now firmly on the agenda for government, in business its potential is only just starting to be explored. Similarly, though “open” is prominent in some areas of research, such as genomics, in others it is still barely known.

  • How do we deepen open data – ensuring a commitment not only for today but for the long term, and ensuring that open data is fully embedded into processes and policies? For example, though many governments have now signed up to the Open Government Partnership and announced open government data initiatives, in many cases the amount of data actually released remains limited.

  • How do we ensure the open data ecosystem is connected? Much of the value of open data will be lost if open data ends up locked into isolated silos – whether these are legal, technical or social. In today’s globalized world it makes no sense if open data ‘stops at the border’: we need data that extends across countries and institutions, and is easy to interconnect thanks to common standards and interoperable infrastructure.


The event is jointly organized by the Open Knowledge Foundation and Open Knowledge Foundation Switzerland, with the support of Federal Councillor Alain Berset and the Canton of Geneva, and with Lift Events as an organizing partner.


Will there be other events in town during the Conference week?

Yes, we’re planning satellite workshops on Monday 16th September and Thursday 19th September. Please consider this when booking your travel!

When will the Call for Proposals be launched?

We will launch a Call for Proposals inviting you to send us your ideas for talks, panels and workshops in April. We can’t wait to make this happen together with you!

I’d like to offer my support as a volunteer. How can I apply?

We expect to welcome around 30 stewards in our team. Applications for these positions will be opening shortly, with preference given to those already in the Open Knowledge Foundation Task Force. Stewards will receive a free ticket.

Tips or support for travel and accommodation?

We’re planning to provide a travel bursary programme, and details of recommended hotels and hostels with good connections to the OKCon venue will be announced in the coming weeks.

What’s Happening with OKFestival?

Last year our annual Open Knowledge Conference expanded into the inaugural Open Knowledge Festival (OKFestival) which took place in Helsinki in September. This was a great event with a broad structure and festival atmosphere, and we look forward to future Open Knowledge Festivals. With their expanded format we’ll likely be running these in alternate years, giving plenty of time to plan and bring the community together.

Releasing the Automated Game Play Datasets

Velichka Dimitrova - March 7, 2013 in Open Economics, Our Work, WG Economics


This blog post is cross-posted from the Open Economics Blog.

We are very happy to announce that the Open Economics Working Group is releasing the datasets of the research project “Small Artificial Human Agents for Virtual Economies”, implemented by Professor David Levine and Professor Yixin Chen at Washington University in St. Louis and funded by the National Science Foundation [see dedicated webpage].

The authors of the study have given their permission to publish their data online. We hope that through making this data available online we will aid researchers working in this field. This initiative is motivated by our belief that in order for economic research to be reliable and trusted, it should be possible to reproduce research findings – which is difficult or even impossible without the availability of the data and code. Making material openly available reduces to a minimum the barriers for doing reproducible research.

If you are interested in finding out more, or would like help releasing research data in your field, please contact us at: economics [at]

Project Background

An important requirement for developing better economic policy recommendations is improving the way we validate theories. Originally economics depended on field data from surveys and laboratory experiments. An alternative method of validating theories is through the use of artificial or virtual economies. If a virtual world is an adequate description of a real economy, then a good economic theory ought to be able to predict outcomes in that setting.

An artificial environment offers enormous advantages over the field and laboratory: complete control – for example, over risk aversion and social preferences – and great speed in creating economies and validating theories. In economics, the use of virtual economies can potentially enable us to deal with heterogeneity, with small frictions, and with expectations that are backward looking rather than determined in equilibrium. These are difficult or impractical to combine in existing calibrations or Monte Carlo simulations.

The goal of this project is to build artificial agents by developing computer programs that act like human beings in the laboratory. We focus on the simplest type of problem of interest to economists: the simple one-shot two-player simultaneous move games. The most well-known form of these are “Prisoner’s Dilemmas” – a much studied scenario in game theory which explores the circumstances of cooperation between two people. In its classic form, the model suggests that two agents who are fully self-interested and rational would always betray each other, even though the best outcome overall would be if they cooperated. However, laboratory humans show a tendency towards cooperation. Our challenge is therefore developing artificial agents who share this bias to the same degree as their human counterparts.
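The dominant-strategy logic described above can be sketched in a few lines. The payoff values below are the standard illustrative ones for a Prisoner's Dilemma, not numbers taken from the project's data.

```python
# Minimal one-shot Prisoner's Dilemma sketch with conventional payoffs.
# payoffs[(my_move, their_move)] = my payoff; "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect (the "sucker's payoff")
    ("D", "C"): 5,  # temptation: I defect against a cooperator
    ("D", "D"): 1,  # mutual defection
}

def best_response(their_move):
    """Return the move maximising my payoff against a fixed opponent move."""
    return max(("C", "D"), key=lambda mine: PAYOFFS[(mine, their_move)])

# Defection is the best response to either opponent move, so two fully
# self-interested, rational agents end up at mutual defection (1 each),
# even though mutual cooperation would give each of them 3.
assert best_response("C") == "D" and best_response("D") == "D"
```

The empirical puzzle the project addresses is precisely that laboratory humans often deviate from this prediction and cooperate anyway.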

There is a wide variety of existing published data on laboratory behavior that will be the primary testing ground for the computer programs. As the project progresses, the programs will be challenged to see if they adapt themselves to changes in the rules in the same ways as human agents: for example, if payments are changed in a certain way, the computer programs will play differently: do people do the same? In some cases we may be able to answer these questions with data from existing studies; in others we will need to conduct our own experimental studies.

Find the full list of available datasets here

The Open Data Census – Tracking the State of Open Data Around the World

Rufus Pollock - February 20, 2013 in Events, Featured, Featured Project, Open Data, Open Government Data, Our Work, WG Open Government Data

Recent years have seen a huge expansion in open data activity around the world. This is very welcome, but at the same time it is now increasingly difficult to assess if, and where, progress is being made.

To address this, we started the Open Data Census in order to track the state of open data globally. The results so far, covering more than 35 countries and 200 datasets, are now available online. We’ll be building this up even more during Open Data Day this weekend.

This post explains why we started the census and why this matters now. This includes the importance of quality (not just quantity) of data, the state of the census so far, and some immediate next steps – such as expanding the census to the city level and developing an “open data index” to give a single measure of open data progress.

Why the Census?

In the last few years there has been an explosion of activity around open data and especially open government data. Following initiatives like and, numerous local, regional and national bodies have started open government data initiatives and created open data portals (from a handful 3 years ago there are now more than 250 open data catalogs worldwide).

But simply putting a few spreadsheets online under an open license is obviously not enough. Doing open government data well depends on releasing key datasets in the right way. Moreover, with the proliferation of sites it has become increasingly hard to track what is happening.

Which countries, or municipalities, are actually releasing open data and which aren’t? Which countries are making progress on releasing the data that matters, in the right way?

Quality not (just) Quantity

Progress in open government data is not (just) about the number of datasets being released. The quality of the datasets being released matters at least as much – and often more – than the quantity of these datasets.

We want to know whether governments around the world are releasing key datasets, for example critical information about public finances, locations and public transport, rather than less critical information such as the location of park benches or the number of streetlights per capita.

Similarly, is the data being released in a form that is comparable and interoperable, or is it being released as randomly structured spreadsheets (or, worse, non-machine-readable PDFs)?

Tables like this are easy for humans, but difficult for machines.

This example of a table from the US Bureau of Labor Statistics is easy for humans to interpret but very difficult for machines (though at least it’s in plain text, not PDF).

The essential point here is that it is about quality as much as quantity. Datasets aren’t all the same, whether in size, importance or format.
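To make the machine-readability point concrete, here is a tiny sketch showing why a tidy, record-per-row file parses directly, while a human-oriented layout with merged headers and footnotes would need bespoke, fragile code. The column names and values are illustrative assumptions, not the census's actual schema.

```python
# A tidy CSV parses straight into records with the standard library;
# no custom parsing logic is needed.
import csv
import io

tidy = """country,year,dataset,openly_licensed
UK,2013,national-budget,yes
DE,2013,national-budget,no
"""

rows = list(csv.DictReader(io.StringIO(tidy)))
assert rows[0]["dataset"] == "national-budget"
assert rows[1]["openly_licensed"] == "no"
```

A layout-style table with multi-row headers, footnote markers, and summary rows mixed in carries the same information, but every consumer has to write (and maintain) their own parser for it.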

Enter the Census

And so was born the Open Knowledge Foundation’s Open Data Census – a community-driven effort to map and evaluate the progress of open data and open data initiatives around the world.

We launched the first round of data collection last April at the meeting of the Open Government Partnership in Brazil. Since then members of the Open Knowledge Foundation’s Open Government Data Working Group have been continuing to collect the data and our Labs team have been developing a site to host the census and present its results.

ogd census table

The central part of the census is an assessment based on 10 key datasets.

These were selected through a process of discussion and consultation with the Open Government Data Working Group and will likely be expanded in future (see some great suggestions from David Eaves last year). We’ll also be considering additional criteria: for example whether data is being released in a standard format that facilitates integration and reuse.

We focused on a specific list of core datasets (rather than e.g. counting numbers of open datasets) for a few important reasons:

  • Comparability: by assessing against the same datasets we would be able to compare across countries
  • Importance: Some datasets are more important than others and by specifically selecting a small set of key datasets we could make that explicit
  • Ranking: we want, ultimately, to be able to rank countries in an “Open Data Index”. This is much easier if we have a good list of cross-country comparable data.[3]

Today, thanks to submissions from more than thirty contributors, the census includes information on more than 190 datasets from more than 35 countries around the world, and we hope to get close to full coverage for more than 50 countries in the next couple of months.

ogd census map

The Open Data Index: a Scoreboard for Open Government Data

Having the census allows us to evaluate general progress on open data. But having a lot of information alone is not enough. We need to ensure the information is presented in a simple and understandable way especially if we want it to help drive improvements in the state of open government data around the world.

Inspired by work such as the Open Budget Index from the International Budget Partnership, the Aid Transparency Index from Publish What You Fund, the Corruption Perception Index from Transparency International and many more, we felt it was key to distill the results into a single overall ranking and present this clearly. (We’ve also been talking here with the great folks at the Web Foundation, who are also thinking about an Open Data Index connected with their work on the Web Index).

obp screenshot

As part of our first work on the Census dashboard last September for OKFestival we did some work on an “open data index”, which provided an overall ranking for countries. However, during that work, it became clear that building a proper index requires some careful thought. In particular, we probably wanted to incorporate other factors than just the pure census results, for example:

  • Some measure of the number of open datasets (appropriately calibrated!)
  • Whether the country has an open government data initiative and open data portal
  • Whether the country has joined the OGP
  • Existence (and quality) of an FoI law

In addition, there is the challenging question of weightings – not only between these additional factors and census scores but also for scoring the census. Should, for example, Belarus be scoring 5 or 6 out of 7 on the census despite it not being clear whether any data is actually openly licensed? How should we weight total number of datasets against the census score?
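To illustrate why the weighting question is hard, here is a hypothetical sketch of a census-style score. The criteria, weights and figures are invented for illustration, not the census’s actual scheme: each dataset is scored against weighted yes/no criteria, and a country’s score averages across its key datasets. Changing the weight on open licensing changes the answer to the Belarus question above.

```python
# Illustrative per-dataset criteria and weights (not the real census scheme).
# Should a missing open licence cost more than a missing machine-readable format?
CRITERIA_WEIGHTS = {
    "exists": 1,
    "machine_readable": 2,
    "openly_licensed": 3,
}

def dataset_score(answers):
    """Weighted score for one dataset, normalised to the range 0..1."""
    total = sum(CRITERIA_WEIGHTS.values())
    return sum(CRITERIA_WEIGHTS[c] for c, yes in answers.items() if yes) / total

def country_score(datasets):
    """Average the per-dataset scores across a country's key datasets."""
    return sum(dataset_score(a) for a in datasets.values()) / len(datasets)

# Invented example: data exists and is partly machine-readable,
# but nothing is clearly openly licensed.
example_country = {
    "budget":    {"exists": 1, "machine_readable": 1, "openly_licensed": 0},
    "transport": {"exists": 1, "machine_readable": 0, "openly_licensed": 0},
}
print(round(country_score(example_country), 2))  # 0.33
```

Even in this toy version, the ranking a country receives depends as much on the chosen weights as on its submissions, which is why the weighting scheme needs community scrutiny before an index is published.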

Nevertheless, we’re continuing to work on putting together an “open data index” and we hope to have an “alpha” version ready for the open government data community to use and critique within the next few months. (If you’re interested in contributing check out the details at the end of this post).

The City Census

The first version of the census was country oriented. But much of the action around open data happens at the city and regional level, and information about the area around us tends to be the most meaningful and important.

We’re happy to say plans are afoot to make this happen!

Specifically, we’ll be kicking off the city census with an Open Data Census Challenge this Saturday as part of Open Data Day.

If the Open Data Census has caught your interest, you are invited to become an Open Data Detective for a day and help locate open (and closed) datasets in cities around the world. Find out more and sign up here:

Get Involved

Interested in the Open Data Census? Want to contribute? There are a variety of ways:


  1. For example, we’ve seen several open data initiatives releasing data under non-open licenses that restrict, for example, derivative works, redistribution or commercial use. 

  2. This isn’t to say that less critical information isn’t important – one of the key reasons for releasing material openly is that you never know who may derive benefit from it, and the “long tail of data” may yield plenty of unexpected riches. 

  3. Other metrics, such as the number of datasets, are very difficult to compare – what is a single dataset in one country can easily become 100 or more in another (for example, unemployment could be published as a single dataset, or split into many datasets, one for each month and region). 

Open Research Data Handbook Sprint

Velichka Dimitrova - February 15, 2013 in Open Access, Open Content, Open Data, Open Economics, Open Science, Open Standards, Our Work, WG Economics

On February 15-16 we are updating the Open Research Data Handbook to include more detail on sharing research data from scientific work, and to remix the book for different disciplines and settings. We’re doing this through an open book sprint. The sprint will happen at the Open Data Institute, 65 Clifton Street, London EC2A 4JE.

The Friday lunch seminar will be streamed through the Open Economics Bambuser channel. If you would like to participate, please see the Online Participation Hub for links to documents and programme updates. You can follow this event at the IRC channel #okfn-rbook and on Twitter with the hashtags #openresearch and #okfnrbook.

The Open Research Data Handbook aims to provide an introduction to the processes, tools and other areas that researchers need to consider to make their research data openly available.

Join us for a book sprint to develop the current draft, and explore ways to remix it for different disciplines and contexts.

Who it is for:

  • Researchers interested in carrying out their work in more open ways
  • Experts on sharing research and research data
  • Writers and copy editors
  • Web developers and designers to help present the handbook online
  • Anyone else interested in taking part in an intense and collaborative weekend of action

What will happen:

The main sprint will take place on Friday and Saturday. After initial discussions we’ll divide into open space groups to focus on research, writing and editing for different chapters of the handbook, developing a range of content including How To guidance, stories of impact, collections of links and decision tools.

A group will also look at digital tools for presenting the handbook online, including ways to easily tag content for different audiences and remix the guide for different contexts.


Where: 65 Clifton Street, EC2A 4JE (3rd floor – the Open Data Institute)

Friday, February 15th

  • 13:00 – 13:30: Arrival and sushi lunch
  • 13:30 – 14:30: Open research data seminar with Steven Hill, Head of Open Data Dialogue at RCUK.
  • 14:30 – 17:30: Working in teams

Saturday, February 16th

  • 10:00 – 10:30: Arrival and coffee
  • 10:30 – 11:30: Introducing open research lightning talks (your space to present your project on research data)
  • 11:30 – 13:30: Working in teams
  • 13:30 – 14:30: Lunch
  • 14:30 – 17:30: Working in teams
  • 17:30 – 18:30: Reporting back

As many have already registered for online participation, we will broadcast the lunch seminar through the Open Economics Bambuser channel. Please drop by the IRC channel #okfn-rbook.


  • OKF Open Science Working Group – creators of the current Open Research Data Handbook
  • OKF Open Economics Working Group – exploring economic aspects of open research
  • Open Data Research Network – exploring a remix of the handbook to support open social science research in a new global research network, focussed on research in the Global South
  • Open Data Institute – hosting the event

First Open Economics International Workshop Recap

Velichka Dimitrova - January 28, 2013 in Access to Information, Events, Featured, Open Access, Open Data, Open Economics, Open Standards, Our Work, WG Economics, Workshop

The first Open Economics International Workshop gathered 40 academic economists, data publishers and funders of economics research, researchers and practitioners to a two-day event at Emmanuel College in Cambridge, UK. The aim of the workshop was to build an understanding around the value of open data and open tools for the Economics profession and the obstacles to opening up information, as well as the role of greater openness of the academy. This event was organised by the Open Knowledge Foundation and the Centre for Intellectual Property and Information Law and was supported by the Alfred P. Sloan Foundation. Audio and slides are available at the event’s webpage.

Open Economics Workshop

Setting the Scene

The Setting the Scene session was about giving a bit of context to “Open Economics” in the knowledge society, seeing also examples from outside of the discipline and discussing reproducible research. Rufus Pollock (Open Knowledge Foundation) emphasised that there is necessary change and substantial potential for economics: 1) open “core” economic data outside the academy, 2) open as default for data in the academy, 3) a real growth in citizen economics and outside participation. Daniel Goroff (Alfred P. Sloan Foundation) drew attention to the work of the Alfred P. Sloan Foundation in emphasising the importance of knowledge and its use for making decisions and data and knowledge as a non-rival, non-excludable public good. Tim Hubbard (Wellcome Trust Sanger Institute) spoke about the potential of large-scale data collection around individuals for improving healthcare and how centralised global repositories work in the field of bioinformatics. Victoria Stodden (Columbia University / RunMyCode) stressed the importance of reproducibility for economic research and as an essential part of scientific methodology and presented the RunMyCode project.

Open Data in Economics

The Open Data in Economics session was chaired by Christian Zimmermann (Federal Reserve Bank of St. Louis / RePEc) and was about several projects and ideas from various institutions. The session examined examples of open data in Economics and sought to discover whether these examples are sustainable and can be implemented in other contexts: whether the right incentives exist. Paul David (Stanford University / SIEPR) characterised the open science system as a system which is better than any other in the rapid accumulation of reliable knowledge, whereas the proprietary systems are very good at extracting rent from existing knowledge. A balance between these two systems should be established so that they can work within the same organisational system, since separately they are distinctly suboptimal. Johannes Kiess (World Bank) underlined that having the data available is often not enough: “It is really important to teach people how to understand these datasets: data journalists, NGOs, citizens, coders, etc.”. The World Bank has implemented projects to incentivise the use of the data and is helping countries to open up their data. For economists, he mentioned, having a valuable dataset to publish on is an important asset; there are therefore insufficient incentives for sharing.

Eustáquio J. Reis (Institute of Applied Economic Research – Ipea) related his experience of establishing the Ipea statistical database and other projects for historical data series and data digitisation in Brazil. He shared that the culture of the economics community is not a culture of collaboration where people willingly share or support and encourage data curation. Sven Vlaeminck (ZBW – Leibniz Information Centre for Economics) spoke about the EDaWaX project, which conducted a study of the data availability of economics journals and will establish a publication-related data archive for an economics journal in Germany.

Legal, Cultural and other Barriers to Information Sharing in Economics

The session presented different impediments to the disclosure of data in economics from the perspective of two lawyers and two economists. Lionel Bently (University of Cambridge / CIPIL) drew attention to the fact that there is a whole range of different legal mechanisms which operate to restrict the dissemination of information, yet on the other hand there is also a range of mechanisms which help to make information available. Lionel questioned whether the open data standard would always be the optimal way to produce high-quality economic research, or whether there is also a place for modulated/intermediate positions where data is available only on conditions, only in part, or only for certain forms of use. Mireille van Eechoud (Institute for Information Law) described the EU Public Sector Information Directive – the most generic document related to open government data – and progress made in opening up information published by the government. Mireille also pointed out that legal norms have only limited value if you don’t have the internalised, cultural attitudes and structures in place that really make more access to information work.

David Newbery (University of Cambridge) presented an example from the electricity markets and insisted that for a good supply of data, informed demand is needed, coming from regulators who are charged to monitor markets, detect abuse, uphold fair competition and defend consumers. John Rust (Georgetown University) said that the government is an important provider of data which is otherwise too costly to collect, yet a number of issues exist, including confidentiality, excessive bureaucratic caution and the public finance crisis. There are a lot of opportunities for research also in the private sector, where some of the data can be made available (redacting confidential information), and the public non-profit sector can also have a tremendous role as a force to organise markets for the better, set standards and focus on targeted domains.

Current Data Deposits and Releases – Mandating Open Data?

The session was chaired by Daniel Goroff (Alfred P. Sloan Foundation) and brought together funders and publishers to discuss their role in requiring data from economic research to be publicly available and the importance of dissemination for publishing.

Albert Bravo-Biosca (NESTA) emphasised that mandating open data begins much earlier in the process, where funders can encourage the collection of particular data by the government, which is the basis for research, and can also act as an intermediary for the release of open data by the private sector. Open data is interesting, but it is even more interesting when it is appropriately linked and combined with other data, and there is value in examples and case studies for demonstrating benefits. There should, however, be caution, as opening up some data might result in less data being collected.

Toby Green (OECD Publishing) made a point of the difference between posting and publishing, where making content available does not always mean that it will be accessible, discoverable, usable and understandable. In his view, the challenge is to build up an audience by putting content where people will find it, which is very costly, as proper dissemination is expensive. Nancy Lutz (National Science Foundation) explained the scope and workings of the NSF and the data management plans required from all economists who are applying for funding. Creating and maintaining data infrastructure and compliance with the data management policy might eventually mean that there would be less funding for other economic research.

Trends of Greater Participation and Growing Horizons in Economics

Chris Taggart (OpenCorporates) chaired the session which introduced different ways of participating and using data, different audiences and contributors. He stressed that data is being collected in new ways and by different communities, that access to data can be an enormous privilege and can generate data gravities with very unequal access and power to make use of and to generate more data and sometimes analysis is being done in new and unexpected ways and by unexpected contributors. Michael McDonald (George Mason University) related how the highly politicised process of drawing up district lines in the U.S. (also called Gerrymandering) could be done in a much more transparent way through an open-source re-districting process with meaningful participation allowing for an open conversation about public policy. Michael also underlined the importance of common data formats and told a cautionary tale about a group of academics misusing open data with a political agenda to encourage a storyline that a candidate would win a particular state.

Hans-Peter Brunner (Asian Development Bank) shared a vision of how open data and open analysis can aid decision-making about investments in infrastructure, connectivity and policy. Simulated models of investments can demonstrate different scenarios according to investment priorities and crowd-sourced ideas. Hans-Peter asked for feedback and input on how to make data and code available. Perry Walker (new economics foundation) spoke about conversation, and how a good conversation has to be designed, as it usually doesn’t happen by accident. Rufus Pollock (Open Knowledge Foundation) concluded with examples of citizen economics and the growth of contributions from the wider public, particularly through volunteer computing and volunteer thinking as ways of getting engaged in research.

During two sessions, the workshop participants also worked on a Statement of Open Economics Principles, which will be revised with further input from the community and made public at the second Open Economics workshop, taking place on 11-12 June in Cambridge, MA.

Help Us to Cultivate the Digital Commons!

Jonathan Gray - January 24, 2013 in Featured, Network, Open Knowledge international Local Groups, Our Work, Policy, Working Groups

At the Open Knowledge Foundation we work to cultivate a global commons of digital material that everyone is free to use and enjoy.

This digital commons includes everything from open data about carbon emissions or spending from governments around the world; to open access research in the sciences, the humanities, and many other disciplines; to public domain works from galleries, libraries, archives and museums.

We want to change institutional policies so that public information, publicly funded research and public domain cultural works are common public goods that everyone can benefit from.

We want to change sociocultural norms and individual behaviour so that more people voluntarily open up and are willing to collaborate around the knowledge they create.

And finally we want to increase the impact of the commons on the world by encouraging more people to use open material to change the world for the better. We want to help more people to translate digital bits and bytes into knowledge, and knowledge into action.

In order to make progress towards these things we need a proactive global community to promote open knowledge around the world, across different domains, disciplines, fields and institutions.

We Need You!

In the last few months we’ve been looking at how we can better support local and domain specific affinity groups around the world. If you share our vision and want to work with us to realise it, then you can now:

What Can You Do?

We’re always looking for energetic and talented people to help us to promote the idea of open knowledge, and to think of new ways of putting it to work to improve the world. Regardless of your background or expertise there are many different things that you can do to help. For example, you could:

Get In Touch

Whether you want to help build a useful website, help to run a campaign, or connect with other people interested in the digital commons in your field or in your region, please join and introduce yourself on the relevant local group or working group mailing list, or join the taskforce (or drop us a line if you’d like to help out with anything else).

Many of our key working group and local group coordinators will be convening in Cambridge next week to discuss and plot how we can continue to build a stronger and better connected global network to support the digital commons. More on this very soon!

Get Updates