
Open Knowledge: much more than open data

Laura James - May 1, 2013 in Featured, Ideas and musings, Join us, Open Data, Open Knowledge Foundation, Our Work


Book, Ball and Chain

We’ve often used “open knowledge” simply as a broad term to cover any kind of open data or content from statistics to sonnets, and more. However, there is another deeper, and far more important, reason why we are the “Open Knowledge” Foundation and not, for example, the “Open Data” Foundation. It’s because knowledge is something much more than data.

Open knowledge is what open data becomes when it’s useful, usable and used. At the Open Knowledge Foundation we believe in open knowledge: not just that data is open and can be freely used, but that it is made useful – accessible, understandable, meaningful, and able to help someone solve a real problem. Open knowledge should be empowering: it should enable citizens and organizations to understand the world, create insight and effect positive change.

It’s because open knowledge is much more than just raw data that we work both to have raw data and information opened up (by advocating and campaigning) and to create the tools that turn that raw material into knowledge people can act upon. For example, we build open source software to help people work with data, and we create handbooks which help people acquire the skills they need to do so. This combination – being both evangelists and makers – is extremely powerful in helping us change the world.

Achieving our vision of a world transformed through open knowledge – a world where a vibrant open knowledge commons empowers citizens and enables fair and sustainable societies – is a big challenge. We firmly believe it can be done, with a global network of amazing people and organisations fighting for openness and building tools and more to support the open knowledge ecosystem, although it’s going to take a while!

We at the Open Knowledge Foundation are committed to this vision of a global movement building an open knowledge ecosystem, and we are here for the long term. We’d love you to join us in improving the world through open knowledge; there will be many different ways to help in the months ahead, so get started now by keeping in touch – sign up to receive our Newsletter, or find a local group or meetup near you.


Opening up the wisdom of crowds for science

Francois Grey - April 22, 2013 in Featured, News, Open Data, Open Science, Our Work, PyBossa, Releases

We are excited to announce the official launch of Crowdcrafting.org, an open source software platform – powered by our PyBossa technology – for developing and sharing projects that rely on the help of thousands of online volunteers.



At a workshop on Citizen Cyberscience held this week at the University of Geneva, a novel open source software platform called Crowdcrafting was officially launched. The platform, which has already attracted thousands of participants during several months of testing, enables the rapid development of online citizen science applications by both amateur and professional scientists.


Applications already running on Crowdcrafting range from classifying images of magnetic molecules to analyzing tweets about natural disasters. During the testing phase, some 50 new applications have been created, with over 50 more under development. The Crowdcrafting platform is hosted by the University of Geneva and is a joint initiative between the Open Knowledge Foundation and the Citizen Cyberscience Centre, a Geneva-based partnership co-founded by the University of Geneva. The Sloan Foundation has recently awarded a grant to this joint initiative for the further development of the Crowdcrafting platform.

Crowdcrafting fills a valuable niche in the broad spectrum of online citizen science. There are already many citizen science projects that use online volunteers to achieve breakthrough results, in fields as diverse as proteomics and astronomy. These projects often involve hundreds of thousands of dedicated volunteers over many years. The objective of Crowdcrafting is to make it quick and easy for professional scientists as well as amateurs to design and launch their own online citizen science projects. This enables even relatively small projects to get started – ones that may require the effort of just a hundred volunteers for only a few weeks. Such initiatives may be small on the scale of most online social networks, but they still correspond to many person-years of scientific effort achieved in a short time and at low cost.

“By emphasizing openness and simplicity, Crowdcrafting is lowering the threshold in investment and expertise needed to develop online citizen science projects,” says Guillemette Bolens, Deputy Rector for Research at the University of Geneva. “As a result, dozens of projects are under development, many of them in the digital humanities and data journalism, some of them created by university students, others still by people outside of academia.”


An example occurred after the tropical storm that wreaked havoc in the Philippines late last year. A volunteer initiative called Digital Humanitarian Network used Crowdcrafting to launch a project called Philippines Typhoon, which enabled online volunteers to classify thousands of tweets about the impact of the storm, in order to more rapidly filter information that could be vital to first responders. “We are excited about how Crowdcrafting is assisting the digital volunteer community worldwide in responding to natural disasters,” says Francesco Pisano, Director of Research at UNITAR.
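The pattern behind projects like Philippines Typhoon – hand the same item to several independent volunteers, then aggregate their answers – can be sketched in a few lines of Python. This is an illustrative majority-vote aggregator, not PyBossa’s actual API; all names here are hypothetical.

```python
from collections import Counter

def aggregate_classifications(answers_by_item, min_answers=3):
    """Majority-vote aggregation of volunteer labels.

    answers_by_item maps an item id (e.g. a tweet id) to the list of
    labels submitted by independent volunteers for that item.
    """
    results = {}
    for item_id, labels in answers_by_item.items():
        if len(labels) < min_answers:
            continue  # not enough redundancy yet: keep the task open
        label, votes = Counter(labels).most_common(1)[0]
        results[item_id] = {"label": label, "agreement": votes / len(labels)}
    return results

answers = {
    "tweet-1": ["damage", "damage", "no_damage"],
    "tweet-2": ["request_for_help"],  # still waiting for more volunteers
}
print(aggregate_classifications(answers))
```

Requiring a minimum number of answers per item is what lets a small project trade volunteer effort for confidence: more redundancy raises the agreement score but slows completion.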

“Crowdcrafting is also enabling the general public to contribute in a direct way to fundamental science,” says Gabriel Aeppli, Director of the London Centre for Nanotechnology (LCN), a joint venture between UCL and Imperial College. A case in point is the project Feynman’s Flowers, set up by researchers at LCN, in which volunteers use Crowdcrafting to measure the orientation of magnetic molecules on a crystalline surface. This is part of a fundamental research effort aimed at creating novel nanoscale storage systems for the emerging field of quantum computing.

Commenting on the underlying technology, Rufus Pollock, founder of the Open Knowledge Foundation, said: “Crowdcrafting is powered by the open-source PyBossa software, developed by ourselves in collaboration with the Citizen Cyberscience Centre. Its aim is to make it quick and easy to do ‘crowdsourcing for good’ – getting volunteers to help out with tasks such as image classification, transcription and geocoding in relation to scientific and humanitarian projects.” The Shuttleworth Foundation and the Open Society Foundations funded much of the early development work for this technology.

Francois Grey, coordinator of the Citizen Cyberscience Centre, says: “Our goal now, with support from the Sloan Foundation, is to integrate other apps for data collection, processing and storage, to make Crowdcrafting an open-source ecosystem for building a new generation of browser-based citizen science projects.”

For further information about Crowdcrafting, see Crowdcrafting.org.

Announcing the Open Knowledge Conference 2013: Open Data – Broad, Deep, Connected

Rufus Pollock - March 21, 2013 in Events, Featured, News, OKCon, OKFest, Open Knowledge Foundation, Our Work

The Open Knowledge Foundation is pleased to announce that the 2013 Open Knowledge Conference (OKCon) will take place in Geneva, Switzerland on 17th–18th September. The theme of this year’s edition will be Open Data – Broad, Deep, Connected.

The world’s leading open data and open knowledge event, OKCon is the latest in an annual series run since 2005. Last year’s installment in Helsinki had more than 1000 participants from over 50 countries and was the largest event of its kind to date. Previous speakers have included inventor of the World Wide Web Sir Tim Berners-Lee, Hans Rosling of Gapminder, Brewster Kahle of the Internet Archive, and Ellen Miller of the Sunlight Foundation.

Located in Geneva, a major site for the United Nations and many other international institutions, this year’s event will focus on coordinating and strengthening public policy around the world to support a truly global and interconnected ecosystem of open data.

Open Data – Broad, Deep, Connected

In the last few years we’ve seen government open data initiatives grow from a handful to hundreds, and we’ve seen open data become important in areas such as research, culture and international development. This event will explore how open data is not only expanding geographically but also touching new sectors and new areas. How should governments and international institutions such as the UN react to these changes? How should business take advantage of new opportunities and contribute to the open data economy? How do citizens and civil society organizations turn data into accountability and into change?

This year’s OKCon will focus on the following questions:

  • How do we broaden open data – not only geographically across countries and regions, but also across domains and institutions? For example, whilst open data is now firmly on the agenda for government, in business its potential is only just starting to be explored. Similarly, though “open” is prominent in some areas of research, such as genomics, in others it is still barely known.

  • How do we deepen open data – ensuring a commitment not only for today but for the long term, and ensuring that open data is fully embedded into processes and policies? For example, though many governments have now signed up to the Open Government Partnership and announced open government data initiatives, in many cases the amount of data actually released remains limited.

  • How do we ensure the open data ecosystem is connected? Much of the value of open data will be lost if open data ends up locked into isolated silos – whether these are legal, technical or social. In today’s globalized world it makes no sense if open data ‘stops at the border’: we need data that extends across countries and institutions, and is easy to interconnect thanks to common standards and interoperable infrastructure.

Organizers

The event is jointly organized by the Open Knowledge Foundation and Open Knowledge Foundation Switzerland, with the support of Federal Councillor Alain Berset and the Canton of Geneva, and with Lift Events as an organizing partner.

FAQs

Will there be other events in town during the Conference week?

Yes, we’re planning satellite workshops on Monday 16th September and Thursday 19th September. Please consider this when booking your travel!

When will the Call for Proposals be launched?

We will launch a Call for Proposals inviting you to send us your ideas for talks, panels and workshops in April. We can’t wait to make this happen together with you!

I’d like to offer my support as a volunteer. How can I apply?

We expect to welcome around 30 stewards in our team. Applications for these positions will be opening shortly, with preference given to those already in the Open Knowledge Foundation Task Force. Stewards will receive a free ticket.

Tips or support for travel and accommodation?

We’re planning to provide a travel bursary programme, and details of recommended hotels and hostels with good connections to the OKCon venue will be announced in the coming weeks.

What’s Happening with OKFestival?

Last year our annual Open Knowledge Conference expanded into the inaugural Open Knowledge Festival (OKFestival) which took place in Helsinki in September. This was a great event with a broad structure and festival atmosphere, and we look forward to future Open Knowledge Festivals. With their expanded format we’ll likely be running these in alternate years, giving plenty of time to plan and bring the community together.

Releasing the Automated Game Play Datasets

Velichka Dimitrova - March 7, 2013 in Open Economics, Our Work, WG Economics

 

This blog post is cross-posted from the Open Economics Blog.

We are very happy to announce that the Open Economics Working Group is releasing the datasets of the research project “Small Artificial Human Agents for Virtual Economies”, implemented by Professor David Levine and Professor Yixin Chen at Washington University in St. Louis and funded by the National Science Foundation [see dedicated webpage].

The authors of the study have given their permission to publish their data online, and we hope that making this data available will aid researchers working in this field. This initiative is motivated by our belief that for economic research to be reliable and trusted, it should be possible to reproduce research findings – which is difficult or even impossible without the availability of the data and code. Making material openly available reduces the barriers to doing reproducible research to a minimum.

If you are interested in knowing more, or would like help releasing research data in your field, please contact us at: economics [at] okfn.org

Project Background

An important requirement for developing better economic policy recommendations is improving the way we validate theories. Originally economics depended on field data from surveys and laboratory experiments. An alternative method of validating theories is through the use of artificial or virtual economies. If a virtual world is an adequate description of a real economy, then a good economic theory ought to be able to predict outcomes in that setting.

An artificial environment offers enormous advantages over the field and laboratory: complete control – for example, over risk aversion and social preferences – and great speed in creating economies and validating theories. In economics, the use of virtual economies can potentially enable us to deal with heterogeneity, with small frictions, and with expectations that are backward looking rather than determined in equilibrium. These are difficult or impractical to combine in existing calibrations or Monte Carlo simulations.

The goal of this project is to build artificial agents by developing computer programs that act like human beings in the laboratory. We focus on the simplest type of problem of interest to economists: simple one-shot two-player simultaneous move games. The best known of these is the “Prisoner’s Dilemma” – a much studied scenario in game theory which explores the circumstances of cooperation between two people. In its classic form, the model suggests that two agents who are fully self-interested and rational would always betray each other, even though the best outcome overall would be if they cooperated. However, laboratory humans show a tendency towards cooperation. Our challenge is therefore to develop artificial agents who share this bias to the same degree as their human counterparts.
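The dilemma can be written down concretely. The short sketch below uses the standard textbook payoff matrix – not the parameters from the study itself – and shows why two purely self-interested players end up at mutual defection:

```python
# One-shot Prisoner's Dilemma: each player chooses to cooperate ("C")
# or defect ("D"). These are the standard textbook payoffs, not the
# parameters used in the study.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: the one-shot equilibrium
}

def best_response(opponent_move):
    """Best reply of a fully self-interested player to a fixed opponent move."""
    return max(["C", "D"], key=lambda move: PAYOFFS[(move, opponent_move)][0])

# Defection is the best response to either move, so two "rational" agents
# end up at (D, D), even though (C, C) would pay both of them more.
assert best_response("C") == "D" and best_response("D") == "D"
assert PAYOFFS[("C", "C")][0] > PAYOFFS[("D", "D")][0]
```

An artificial agent that matched laboratory humans would have to deviate from `best_response` some of the time, cooperating at roughly the rates observed in experiments.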

There is a wide variety of existing published data on laboratory behavior that will be the primary testing ground for the computer programs. As the project progresses, the programs will be challenged to see if they adapt themselves to changes in the rules in the same ways as human agents: for example, if payments are changed in a certain way, the computer programs will play differently: do people do the same? In some cases we may be able to answer these questions with data from existing studies; in others we will need to conduct our own experimental studies.

Find the full list of available datasets here

The Open Data Census – Tracking the State of Open Data Around the World

Rufus Pollock - February 20, 2013 in Events, Featured, Featured Project, Open Data, Open Government Data, Our Work, WG Open Government Data

Recent years have seen a huge expansion in open data activity around the world. This is very welcome, but at the same time it is now increasingly difficult to assess if, and where, progress is being made.

To address this, we started the Open Data Census in order to track the state of open data globally. The results so far, covering more than 35 countries and 200 datasets, are now available online at http://census.okfn.org/. We’ll be building this up even more during Open Data Day this weekend.

This post explains why we started the census and why this matters now. This includes the importance of quality (not just quantity) of data, the state of the census so far, and some immediate next steps – such as expanding the census to the city level and developing an “open data index” to give a single measure of open data progress.

Why the Census?

In the last few years there has been an explosion of activity around open data, and especially open government data. Following initiatives like data.gov and data.gov.uk, numerous local, regional and national bodies have started open government data initiatives and created open data portals (from a handful 3 years ago to more than 250 worldwide today).

But simply putting a few spreadsheets online under an open license is obviously not enough. Doing open government data well depends on releasing key datasets in the right way. Moreover, with the proliferation of sites it has become increasingly hard to track what is happening.

Which countries, or municipalities, are actually releasing open data and which aren’t?[1] Which countries are making progress on releasing data on stuff that matters in the right way?

Quality not (just) Quantity

Progress in open government data is not (just) about the number of datasets being released. The quality of the datasets being released matters at least as much as – and often more than – the quantity.

We want to know whether governments around the world are releasing key datasets – for example, critical information about public finances, locations and public transport – rather than less critical information such as the location of park benches or the number of streetlights per capita.[2]

Similarly, is the data being released in a form that is comparable and interoperable, or is it being released as randomly structured spreadsheets (or, worse, non-machine-readable PDFs)?

Tables like this are easy for humans, but difficult for machines: this example of a table from the US Bureau of Labor Statistics is easy for humans to interpret but very difficult for machines to parse (though at least it’s plain text, not PDF).

The essential point here is that it is about quality as much as quantity. Datasets aren’t all the same, whether in size, importance or format.
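To see why format matters in practice, compare how little code a “tidy” one-observation-per-row CSV needs before it becomes usable. The figures below are invented for illustration:

```python
import csv
import io

# A machine-friendly, "tidy" layout: one observation per row,
# explicit column headers, one value per cell. (Figures invented.)
tidy = """country,year,unemployment_rate
UK,2012,7.9
UK,2013,7.6
DE,2012,5.4
"""

rows = list(csv.DictReader(io.StringIO(tidy)))
uk_2013 = next(r for r in rows
               if r["country"] == "UK" and r["year"] == "2013")
print(uk_2013["unemployment_rate"])  # -> 7.6
```

A layout-oriented table with merged headers, footnote rows and per-page subtotals would instead need bespoke parsing code for every publisher, which is exactly the interoperability cost the census tries to measure.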

Enter the Census

And so was born the Open Knowledge Foundation’s Open Data Census – a community-driven effort to map and evaluate the progress of open data and open data initiatives around the world.

We launched the first round of data collection last April at the meeting of the Open Government Partnership in Brazil. Since then members of the Open Knowledge Foundation’s Open Government Data Working Group have been continuing to collect the data and our Labs team have been developing a site to host the census and present its results.


The central part of the census is an assessment based on 10 key datasets.

These were selected through a process of discussion and consultation with the Open Government Data Working Group and will likely be expanded in future (see some great suggestions from David Eaves last year). We’ll also be considering additional criteria: for example whether data is being released in a standard format that facilitates integration and reuse.

We focused on a specific list of core datasets (rather than e.g. counting numbers of open datasets) for a few important reasons:

  • Comparability: by assessing against the same datasets we would be able to compare across countries
  • Importance: Some datasets are more important than others and by specifically selecting a small set of key datasets we could make that explicit
  • Ranking: we want, ultimately, to be able to rank countries in an “Open Data Index”. This is much easier if we have a good list of cross-country comparable data.[3]

Today, thanks to submissions from more than thirty contributors, the census includes information on more than 190 datasets from more than 35 countries around the world, and we hope to get close to full coverage for more than 50 countries in the next couple of months.


The Open Data Index: a Scoreboard for Open Government Data

Having the census allows us to evaluate general progress on open data. But a lot of information alone is not enough: we need to ensure the information is presented in a simple and understandable way, especially if we want it to help drive improvements in the state of open government data around the world.

Inspired by work such as the Open Budget Index from the International Budget Partnership, the Aid Transparency Index from Publish What You Fund, the Corruption Perception Index from Transparency International and many more, we felt it was key to distill the results into a single overall ranking and present this clearly. (We’ve also been talking here with the great folks at the Web Foundation, who are also thinking about an Open Data Index connected with their work on the Web Index.)


As part of our first work on the Census dashboard last September for OKFestival we did some work on an “open data index”, which provided an overall ranking for countries. However, during that work, it became clear that building a proper index requires some careful thought. In particular, we probably want to incorporate factors other than just the pure census results, for example:

  • Some measure of the number of open datasets (appropriately calibrated!)
  • Whether the country has an open government data initiative and open data portal
  • Whether the country has joined the OGP
  • Existence (and quality) of an FoI law

In addition, there is the challenging question of weightings – not only between these additional factors and census scores but also for scoring the census. Should, for example, Belarus be scoring 5 or 6 out of 7 on the census despite it not being clear whether any data is actually openly licensed? How should we weight total number of datasets against the census score?
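To make the weighting question concrete, here is a toy scoring function. The factor list mirrors the bullets above, but the weights, the log-dampening of dataset counts and the 0–7 census-score normalisation are purely illustrative assumptions, not the method the index will actually use:

```python
import math

def index_score(census_score, n_datasets, has_portal, has_foi_law,
                weights=(0.6, 0.2, 0.1, 0.1)):
    """Toy open data index: a weighted sum of normalised factors.

    The weights are illustrative; choosing them is exactly the hard
    part discussed in the post.
    """
    w_census, w_count, w_portal, w_foi = weights
    return (
        w_census * (census_score / 7)                          # 0-7 census score
        + w_count * min(1.0, math.log10(1 + n_datasets) / 3)   # dampened count
        + w_portal * (1.0 if has_portal else 0.0)
        + w_foi * (1.0 if has_foi_law else 0.0)
    )

# Dampening the raw dataset count means a country publishing thousands of
# low-value datasets cannot outrank one scoring well on the key datasets.
print(round(index_score(6, 40, True, True), 3))
```

Even this toy version surfaces the Belarus-style question from above: whether a census score of 5 or 6 out of 7 should count at all when the licensing status of the underlying data is unclear.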

Nevertheless, we’re continuing to work on putting together an “open data index” and we hope to have an “alpha” version ready for the open government data community to use and critique within the next few months. (If you’re interested in contributing check out the details at the end of this post).

The City Census

The first version of the census was country oriented. But much of the action around open data happens at the city and regional level, and information about the area around us tends to be the most meaningful and important.

We’re happy to say plans are afoot to make this happen!

Specifically, we’ll be kicking off the city census with an Open Data Census Challenge this Saturday as part of Open Data Day.

If the Open Data Census has caught your interest, you are invited to become an Open Data Detective for a day and help locate open (and closed) datasets in cities around the world. Find out more and sign up here: http://okfn.org/events/open-data-day-2013/census/

Get Involved

Interested in the Open Data Census? Want to contribute? There are a variety of ways:

Notes


  1. For example, we’ve seen several open data initiatives releasing data under non-open licenses that restrict, for example, derivative works, redistribution or commercial use. 

  2. This isn’t to say that less critical information isn’t important – one of the key reasons for releasing material openly is that you never know who may derive benefit from it, and the “long tail of data” may yield plenty of unexpected riches. 

  3. Other metrics, such as numbers of datasets, are very difficult to compare – what is a single dataset in one country can easily become 100 or more in another (for example, unemployment could be in a single dataset or split into many datasets, one for each month and region). 

Open Research Data Handbook Sprint

Velichka Dimitrova - February 15, 2013 in Open Access, Open Content, Open Data, Open Economics, Open Science, Open Standards, Our Work, WG Economics

On February 15-16 we are updating the Open Research Data Handbook to include more detail on sharing research data from scientific work, and to remix the book for different disciplines and settings. We’re doing this through an open book sprint. The sprint will happen at the Open Data Institute, 65 Clifton Street, London EC2A 4JE.

The Friday lunch seminar will be streamed through the Open Economics Bambuser channel. If you would like to participate, please see the Online Participation Hub for links to documents and programme updates. You can follow this event in the IRC channel #okfn-rbook and on Twitter with the hashtags #openresearch and #okfnrbook.

The Open Research Data Handbook aims to provide an introduction to the processes, tools and other areas that researchers need to consider to make their research data openly available.

Join us for a book sprint to develop the current draft, and explore ways to remix it for different disciplines and contexts.

Who it is for:

  • Researchers interested in carrying out their work in more open ways
  • Experts on sharing research and research data
  • Writers and copy editors
  • Web developers and designers to help present the handbook online
  • Anyone else interested in taking part in an intense and collaborative weekend of action

What will happen:

The main sprint will take place on Friday and Saturday. After initial discussions we’ll divide into open space groups to focus on research, writing and editing for different chapters of the handbook, developing a range of content including How To guidance, stories of impact, collections of links and decision tools.

A group will also look at digital tools for presenting the handbook online, including ways to easily tag content for different audiences and remix the guide for different contexts.

Agenda:

Where: 65 Clifton Street, EC2A 4JE (3rd floor – the Open Data Institute)

Friday, February 15th

  • 13:00 – 13:30: Arrival and sushi lunch
  • 13:30 – 14:30: Open research data seminar with Steven Hill, Head of Open Data Dialogue at RCUK.
  • 14:30 – 17:30: Working in teams

Saturday, February 16th

  • 10:00 – 10:30: Arrival and coffee
  • 10:30 – 11:30: Introducing open research lightning talks (your space to present your project on research data)
  • 11:30 – 13:30: Working in teams
  • 13:30 – 14:30: Lunch
  • 14:30 – 17:30: Working in teams
  • 17:30 – 18:30: Reporting back

As many have already registered for online participation, we will broadcast the lunch seminar through the Open Economics Bambuser channel. Please drop by in the IRC channel #okfn-rbook.

Partners:

OKF Open Science Working Group – creators of the current Open Research Data Handbook
OKF Open Economics Working Group – exploring economics aspects of open research
Open Data Research Network – exploring a remix of the handbook to support open social science research in a new global research network, focussed on research in the Global South
Open Data Institute – hosting the event

First Open Economics International Workshop Recap

Velichka Dimitrova - January 28, 2013 in Access to Information, Events, Featured, Open Access, Open Data, Open Economics, Open Standards, Our Work, WG Economics, Workshop

The first Open Economics International Workshop gathered 40 academic economists, data publishers, funders of economics research, researchers and practitioners for a two-day event at Emmanuel College in Cambridge, UK. The aim of the workshop was to build an understanding of the value of open data and open tools for the economics profession, the obstacles to opening up information, and the role of greater openness in the academy. The event was organised by the Open Knowledge Foundation and the Centre for Intellectual Property and Information Law and was supported by the Alfred P. Sloan Foundation. Audio and slides are available at the event’s webpage.

Open Economics Workshop

Setting the Scene

The Setting the Scene session was about giving a bit of context to “Open Economics” in the knowledge society, looking also at examples from outside the discipline and discussing reproducible research. Rufus Pollock (Open Knowledge Foundation) emphasised that change is necessary and the potential for economics substantial: 1) open “core” economic data outside the academy, 2) open as default for data in the academy, 3) real growth in citizen economics and outside participation. Daniel Goroff (Alfred P. Sloan Foundation) drew attention to the work of the Alfred P. Sloan Foundation in emphasising the importance of knowledge and its use for making decisions, and of data and knowledge as a non-rival, non-excludable public good. Tim Hubbard (Wellcome Trust Sanger Institute) spoke about the potential of large-scale data collection around individuals for improving healthcare, and how centralised global repositories work in the field of bioinformatics. Victoria Stodden (Columbia University / RunMyCode) stressed the importance of reproducibility for economic research as an essential part of scientific methodology, and presented the RunMyCode project.

Open Data in Economics

The Open Data in Economics session was chaired by Christian Zimmermann (Federal Reserve Bank of St. Louis / RePEc) and covered several projects and ideas from various institutions. The session examined examples of open data in Economics and sought to discover whether these examples are sustainable and can be implemented in other contexts: whether the right incentives exist. Paul David (Stanford University / SIEPR) characterised the open science system as one which is better than any other at the rapid accumulation of reliable knowledge, whereas proprietary systems are very good at extracting rent from existing knowledge. A balance between these two systems should be established so that they can work within the same organisational system, since separately they are distinctly suboptimal. Johannes Kiess (World Bank) underlined that having the data available is often not enough: “It is really important to teach people how to understand these datasets: data journalists, NGOs, citizens, coders, etc.”. The World Bank has implemented projects to incentivise the use of the data and is helping countries to open up their data. For economists, he mentioned, having a valuable dataset to publish on is an important asset; there are therefore insufficient incentives for sharing.

Eustáquio J. Reis (Institute of Applied Economic Research – Ipea) related his experience establishing the Ipea statistical database and other projects for historical data series and data digitalisation in Brazil. He shared that the culture of the economics community is not one of collaboration, where people willingly share or support and encourage data curation. Sven Vlaeminck (ZBW – Leibniz Information Centre for Economics) spoke about the EDaWaX project, which conducted a study of the data availability of economics journals and will establish a publication-related data archive for an economics journal in Germany.

Legal, Cultural and other Barriers to Information Sharing in Economics

The session presented different impediments to the disclosure of data in economics from the perspective of two lawyers and two economists. Lionel Bently (University of Cambridge / CIPIL) drew attention to the fact that a whole range of different legal mechanisms operate to restrict the dissemination of information, while on the other hand a range of mechanisms help to make information available. Lionel questioned whether the open data standard would always be the optimal way to produce high quality economic research, or whether there is also a place for modulated/intermediate positions where data is available only on conditions, only in certain parts, or only for certain forms of use. Mireille van Eechoud (Institute for Information Law) described the EU Public Sector Information Directive – the most generic document related to open government data – and progress made in opening up information published by the government. Mireille also pointed out that legal norms have only limited value if you don’t have the internalised cultural attitudes and structures in place that really make more access to information work.

David Newbery (University of Cambridge) presented an example from the electricity markets and insisted that a good supply of data requires informed demand, coming from regulators who are charged to monitor markets, detect abuse, uphold fair competition and defend consumers. John Rust (Georgetown University) said that the government is an important provider of data which is otherwise too costly to collect, yet a number of issues exist, including confidentiality, excessive bureaucratic caution and the public finance crisis. There are many opportunities for research in the private sector, where some of the data can be made available (redacting confidential information), and the public non-profit sector can also have a tremendous role as a force to organise markets for the better, set standards and focus on targeted domains.

Current Data Deposits and Releases – Mandating Open Data?

The session was chaired by Daniel Goroff (Alfred P. Sloan Foundation) and brought together funders and publishers to discuss their role in requiring data from economic research to be publicly available and the importance of dissemination for publishing.

Albert Bravo-Biosca (NESTA) emphasised that mandating open data begins much earlier in the process, where funders can encourage the collection by government of particular data which is the basis for research, and can also act as an intermediary for the release of open data by the private sector. Open data is interesting, but it is even more interesting when it is appropriately linked and combined with other data, and there is value in examples and case studies for demonstrating benefits. Caution is needed, however, as opening up some data might result in less data being collected.

Toby Green (OECD Publishing) drew a distinction between posting and publishing: making content available does not always mean it will be accessible, discoverable, usable and understandable. In his view, the challenge is to build up an audience by putting content where people will find it, which is very costly, as proper dissemination is expensive. Nancy Lutz (National Science Foundation) explained the scope and workings of the NSF and the data management plans required from all economists applying for funding. Creating and maintaining data infrastructure and complying with the data management policy might eventually mean that there would be less funding for other economic research.

Trends of Greater Participation and Growing Horizons in Economics

Chris Taggart (OpenCorporates) chaired the session, which introduced different ways of participating in and using data, and different audiences and contributors. He stressed that data is being collected in new ways and by different communities, that access to data can be an enormous privilege and can generate data gravities with very unequal access and power to make use of and to generate more data, and that analysis is sometimes being done in new and unexpected ways and by unexpected contributors. Michael McDonald (George Mason University) related how the highly politicised process of drawing up district lines in the U.S. (also called gerrymandering) could be made much more transparent through an open-source redistricting process with meaningful participation, allowing for an open conversation about public policy. Michael also underlined the importance of common data formats and told a cautionary tale about a group of academics misusing open data with a political agenda to encourage a storyline that a candidate would win a particular state.

Hans-Peter Brunner (Asian Development Bank) shared a vision of how open data and open analysis can aid decision-making about investments in infrastructure, connectivity and policy. Simulated models of investments can demonstrate different scenarios according to investment priorities and crowd-sourced ideas. Hans-Peter asked for feedback and input on how to make data and code available. Perry Walker (new economics foundation) spoke about conversation, noting that a good conversation has to be designed, as it usually doesn’t happen by accident. Rufus Pollock (Open Knowledge Foundation) concluded with examples of citizen economics and the growth of contributions from the wider public, particularly through volunteer computing and volunteer thinking as ways of getting engaged in research.

During two sessions, the workshop participants also worked on a Statement of the Open Economics Principles, which will be revised with further input from the community and made public at the second Open Economics workshop, taking place on 11-12 June in Cambridge, MA.

Help Us to Cultivate the Digital Commons!

Jonathan Gray - January 24, 2013 in Featured, Network, Open Knowledge Foundation Local Groups, Our Work, Policy, Working Groups

At the Open Knowledge Foundation we work to cultivate a global commons of digital material that everyone is free to use and enjoy.

This digital commons includes everything from open data about carbon emissions or spending from governments around the world; to open access research in the sciences, the humanities, and many other disciplines; to public domain works from galleries, libraries, archives and museums.

We want to change institutional policies so that public information, publicly funded research and public domain cultural works are common public goods that everyone can benefit from.

We want to change sociocultural norms and individual behaviour so that more people voluntarily open up and are willing to collaborate around the knowledge they create.

And finally we want to increase the impact of the commons on the world by encouraging more people to use open material to change the world for the better. We want to help more people to translate digital bits and bytes into knowledge, and knowledge into action.

In order to make progress towards these things we need a proactive global community to promote open knowledge around the world, across different domains, disciplines, fields and institutions.

We Need You!

In the last few months we’ve been looking at how we can better support local and domain specific affinity groups around the world. If you share our vision and want to work with us to realise it, then you can now:

What Can You Do?

We’re always looking for energetic and talented people to help us to promote the idea of open knowledge, and to think of new ways of putting it to work to improve the world. Regardless of your background or expertise there are many different things that you can do to help. For example, you could:

Get In Touch

Whether you want to help build a useful website, help to run a campaign, or connect with other people interested in the digital commons in your field or in your region, please join and introduce yourself on the relevant local group or working group mailing list, or join the taskforce (or drop us a line if you’d like to help out with anything else).

Many of our key working group and local group coordinators will be convening in Cambridge next week to discuss and plot how we can continue to build a stronger and better connected global network to support the digital commons. More on this very soon!

The Year in Review: Top Stories from 2012

Theodora Middleton - January 8, 2013 in Featured, Our Work

So it came and it went, and we all seem to have survived the End of the World. It’s been a big year, so as we bid it farewell and head full throttle into the very futuristic-sounding 2013, here’s a little review of the 5 most popular stories from the blog in the last twelve months:


So, the fifth most-read piece from 2012 was our Spring launch of the AnnotateIt and Annotator projects, enabling full web annotation and storage. As well as being fabulous developments in themselves, these have provided the foundations for our TEXTUS platform for sharing and collaborating around online texts. Look out for the first instance – OpenPhilosophy.org – in the coming year!

And it seems you were as excited as we were when the Open Data Handbook saw the light of day. Along with the School of Data, the handbook has formed part of our push to put the power of data into the hands of the many, and we’re looking forward to continuing iterations.

We standardised the shipping container; can we standardise data interoperation?

In third place, you were much taken by Francis Irving and Rufus Pollock’s joint piece, “From CMS to DMS: C is for Content, D is for Data”. What data management systems will eventually look like remains a question for the future, but as everyone scrambles to develop the ultimate product, Francis and Rufus had some key suggestions about what they need to do.

Second most-read was the launch of the School of Data, our collaborative project to make open data really valuable, through arming citizens with the skills required to make it work for them. The School has done amazing stuff already, including taking part in the Data Bootcamps in Africa, and we’re looking forward to its growing importance within our activities.

And at number 1, the announcement that got you the most excited last year was the public release of Recline.JS, our simple but powerful open-source library for building open data applications in pure JavaScript. The project is going strong – if you’ve missed it so far, go and have a look!

First Open Economics International Workshop

Velichka Dimitrova - December 17, 2012 in Access to Information, Events, Featured, Open Access, Open Data, Our Work, WG Economics, Workshop

You can follow all the goings-on today and tomorrow through the live stream.

On 17-18 December, economics and law professors, data publishers, practitioners and representatives from international institutions will gather at Emmanuel College, Cambridge for the First Open Economics International Workshop. From showcasing examples of success in collaborative economic research and open data, to reviewing the legal, cultural and other barriers to information sharing, this event aims to build an understanding of the value of open data and open tools for the economics profession, and of the obstacles to opening up information in economics. The workshop will also explore the role of greater openness in broadening understanding of and engagement with economics among the wider community, including policy-makers and society.

This event is part of the Open Economics project, funded by the Alfred P. Sloan Foundation and is a key step in identifying best practice as well as legal, regulatory and technical barriers and opportunities for open economic data. A statement on the Open Economics Principles will be produced as a result of the workshop.

Session: “Open Data in Economics – Reasons, Examples, Potential”:
Examples of open data in economics so far and its potential benefits
Session host: Christian Zimmermann (Federal Reserve Bank of St. Louis / RePEc). Panelists: Paul David (Stanford University / SIEPR), Eustáquio J. Reis (Institute of Applied Economic Research – Ipea), Johannes Kiess (World Bank), Sven Vlaeminck (ZBW – Leibniz Information Centre for Economics).
Session: “Legal, Cultural and other Barriers to Information Sharing in Economics”: Introduction and overview of challenges faced in information sharing in Economics
Session host: Lionel Bently (University of Cambridge / CIPIL). Panelists: Mireille van Eechoud (Institute for Information Law), David Newbery (University of Cambridge), John Rust (Georgetown University).
Session: “Current Data Deposits and Releases – Mandating Open Data?”: Round-table discussion with stakeholders: representatives of funders, academic publishing and academics.
Session host: Daniel L. Goroff (Alfred P. Sloan Foundation). Panelists: Albert Bravo-Biosca (NESTA), Toby Green (OECD Publishing), Nancy Lutz (National Science Foundation).
Session: “Trends of Greater Participation and Growing Horizons in Economics”: Opening up research and the academy to wider engagement and understanding with the general public, policy-makers and others.
Session host: Chris Taggart (OpenCorporates). Panelists: Michael P. McDonald (George Mason University), Hans-Peter Brunner (Asian Development Bank), Perry Walker (new economics foundation).

The workshop is designed to be a small invite-only event with a round-table format allowing participants to share and develop ideas together. For a complete description and a detailed programme, visit the event website. Podcasts and slides will be available on the webpage after the event.

The event is being organised by the Centre for Intellectual Property and Information Law (CIPIL) at the University of Cambridge and the Open Economics Working Group of the Open Knowledge Foundation, and is funded by the Alfred P. Sloan Foundation. More information about the Working Group can be found online.

Interested in getting updates about this project and getting involved? Join the Open Economics mailing list:

Get Updates