This Index is yours!

Heather Leson - October 9, 2014 in Community, Open Data, Open Data Census, Open Data Index

How is your country doing with open data? You can make a difference in 5 easy steps to track 10 different datasets. Or, you can help us spread the word on how to contribute to the Open Data Index. This includes the very important translation of some key items into your local language. We’ll keep providing you week-by-week updates on the status of the community-driven project.

We’ve got a demo and some shareable slides to help you on your Index path.

Priority country help wanted

The amazing community provided content for over 70 countries last year. This year we set the bar higher with a goal of 100 countries. If you added details for your country last year, please be sure to add any updates this year. Also, we need some help. Are you from one of these countries? Do you have someone in your network who could potentially help? Please do put them in touch with the index team – index at okfn dot org.

DATASETS WANTED: Armenia, Bolivia, Georgia, Guyana, Haiti, Kosovo, Moldova, Morocco, Nicaragua, Ukraine, and Yemen.

Video: Demo and Tips for contributing to the Open Data Index

This is a 40-minute video with details about the Open Data Index, including a demo showing how to add datasets.

Text: Tutorial on How to help build the Open Data Index

We encourage you to download this, make changes (add country-specific details), translate and share back. Simply share it on the Open Data Census mailing list or tweet us @okfn.

Thanks again for sharing widely!

Open Definition v2.0 Released – Major Update of Essential Standard for Open Data and Open Content

Rufus Pollock - October 7, 2014 in Featured, News, Open Content, Open Data, Open Definition

Today Open Knowledge and the Open Definition Advisory Council are pleased to announce the release of version 2.0 of the Open Definition. The Definition “sets out principles that define openness in relation to data and content” and plays a key role in supporting the growing open data ecosystem.

Recent years have seen an explosion in the release of open data by dozens of governments, including the G8. Recent estimates by McKinsey put the potential benefits of open data at over $1 trillion, and other estimates put the benefits at more than 1% of global GDP.

However, these benefits are at significant risk both from quality problems such as “open-washing” (non-open data being passed off as open) and from fragmentation of the open data ecosystem due to incompatibility between the growing number of “open” licenses.

The Open Definition eliminates these risks and ensures we realize the full benefits of open by guaranteeing quality and preventing incompatibility. See this recent post for more about why the Open Definition is so important.

The Open Definition was published in 2005 by Open Knowledge and is maintained today by an expert Advisory Council. This new version of the Open Definition is the most significant revision in the Definition’s nearly ten-year history.

It reflects more than a year of discussion and consultation with the community including input from experts involved in open data, open access, open culture, open education, open government, and open source. Whilst there are no changes to the core principles, the Definition has been completely reworked with a new structure and new text as well as a new process for reviewing licenses (which has been trialled with governments including the UK).

Herb Lainchbury, Chair of the Open Definition Advisory Council, said:

“The Open Definition describes the principles that define “openness” in relation to data and content, and is used to assess whether a particular licence meets that standard. A key goal of this new version is to make it easier to assess whether the growing number of open licenses actually make the grade. The more we can increase everyone’s confidence in their use of open works, the more they will be able to focus on creating value with open works.”

Rufus Pollock, President and Founder of Open Knowledge said:

“Since we created the Open Definition in 2005 it has played a key role in the growing open data and open content communities. It acts as the “gold standard” for open data and content guaranteeing quality and preventing incompatibility. As a standard, the Open Definition plays a key role in underpinning the “open knowledge economy” with a potential value that runs into the hundreds of billions – or even trillions – worldwide.”

What’s New

In process for more than a year, the new version was collaboratively and openly developed with input from experts involved in open access, open culture, open data, open education, open government, open source and wiki communities. The new version of the definition:

  • Has a complete rewrite of the core principles – preserving their meaning but using simpler language and clarifying key aspects.
  • Introduces a clear separation of the definition of an open license from an open work (with the latter depending on the former). This not only simplifies the conceptual structure but provides a proper definition of open license and makes it easier to “self-assess” licenses for conformance with the Open Definition.
  • The definition of an Open Work within the Open Definition is now a set of three key principles:
    • Open License: The work must be available under an open license (as defined in the following section; this includes the freedom to use, build on, modify and share).
    • Access: The work shall be available as a whole and at no more than a reasonable one-time reproduction cost, preferably downloadable via the Internet without charge.
    • Open Format: The work must be provided in a convenient and modifiable form such that there are no unnecessary technological obstacles to the performance of the licensed rights. Specifically, data should be machine-readable, available in bulk, and provided in an open format or, at the very least, be processable with at least one free/libre/open-source software tool.
  • Includes an improved license approval process to make it easier for license creators to check the conformance of their license with the Open Definition, and to encourage reuse of existing open licenses.
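The conceptual separation described above (an open work depends on an open license plus access and format conditions) can be sketched as a simple conformance check. This is purely illustrative: the class and predicate names are ours, not part of the Definition, and the real conformance process involves human review of licence text.

```python
from dataclasses import dataclass

@dataclass
class License:
    allows_use: bool            # freedom to use the work
    allows_modification: bool   # freedom to build on / modify it
    allows_sharing: bool        # freedom to share it

@dataclass
class Work:
    license: License
    accessible_as_whole: bool   # whole work, reasonable one-time cost
    open_format: bool           # machine-readable, no technological obstacles

def is_open_license(lic: License) -> bool:
    # An open license must grant the freedom to use, build on, modify and share.
    return lic.allows_use and lic.allows_modification and lic.allows_sharing

def is_open_work(work: Work) -> bool:
    # An open work requires an open license AND the access and format conditions.
    return (is_open_license(work.license)
            and work.accessible_as_whole
            and work.open_format)
```

Note how the structure mirrors the Definition: `is_open_work` cannot be true unless `is_open_license` is, which is exactly the dependency the new version makes explicit.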

More Information

  • For more information about the Open Definition including the updated version visit: http://opendefinition.org/
  • For background on why the Open Definition matters, read the recent article ‘Why the Open Definition Matters’

Authors

This post was written by Herb Lainchbury, Chair of the Open Definition Advisory Council and Rufus Pollock, President and Founder of Open Knowledge

Brazilian Government Develops Toolkit to Guide Institutions in both Planning and Carrying Out Open Data Initiatives

Guest - October 7, 2014 in Open Data, Open Government Data

This is a guest post by Nitai Silva of the Brazilian government’s open data team and was originally published on the Open Knowledge Brazil blog here.

Recently, the Brazilian government released the Kit de Dados Abertos (open data toolkit). The toolkit is made up of documents describing the process, methods and techniques for implementing an open data policy within an institution. Its goal is both to demystify the logic of opening up data and to share with public employees the best practices that have emerged from a number of Brazilian government initiatives.

The toolkit focuses on the Plano de Dados Abertos – PDA (Open Data Plan) as the guiding instrument where commitments, agendas and policy implementation cycles in the institution are registered. We believe that making each public agency build its own PDA is a way to perpetuate the open data policy, making it a state policy and not just a transitory governmental action.

It is organized to facilitate the implementation of the main activity cycles that must be observed in an institution and provides links and manuals to assist in these activities. Emphasis is given to the actors/roles involved in each step and their responsibilities. It also helps to define a central person to monitor and maintain the PDA. The following diagram summarizes the macro steps of implementing an open data policy in an institution:

 

Processo Sistêmico de um PDA (Systemic Process of an Open Data Plan)

 

Open data has been part of the Brazilian government’s agenda for over three years. Over this period, we have accomplished a number of important achievements, including passing the Lei de Acesso à Informação – LAI (Access to Information Law, Brazil’s FOIA), making commitments as part of our Open Government Partnership Action Plan, and developing the Infraestrutura Nacional de Dados Abertos (INDA) (Open Data National Infrastructure). However, despite these accomplishments, for many public managers open data activities remain the exclusive responsibility of the Information Technology department of their respective institution. This gap is, in many ways, the cultural heritage of the hierarchical, departmental model of carrying out public policy and is observed in many institutions.

The launch of the toolkit is the first of a series of actions prepared by the Ministry of Planning to leverage open data initiatives in federal agencies, as defined in the Brazilian commitments in the Open Government Partnership (OGP). The next step is to conduct several tailor-made workshops designed to support major agencies in the federal government in the implementation of open data.

Despite it having been built with the aim of expanding the quality and quantity of open data made available by federal executive branch agencies, we also made a conscious effort to make the toolkit generic enough for other branches and levels of government.

About the toolkit development:

It is also noteworthy that the toolkit was developed on GitHub. Although GitHub is known as an online, distributed environment for developing software, it has long been used for the co-creation of text documents, even by governments. The toolkit is still hosted there, which allows anyone to make changes and propose improvements. The invitation is open; we welcome and encourage your collaboration.

Finally I would like to thank Augusto Herrmann, Christian Miranda, Caroline Burle and Jamila Venturini for participating in the drafting of this post!

Why the Open Definition Matters for Open Data: Quality, Compatibility and Simplicity

Rufus Pollock - September 30, 2014 in Featured, Open Data, Open Definition, Policy

The Open Definition performs an essential function as a “standard”, ensuring that when you say “open data” and I say “open data” we both mean the same thing. This standardization, in turn, ensures the quality, compatibility and simplicity essential to realizing one of the main practical benefits of “openness”: the greatly increased ability to combine different datasets together to drive innovation, insight and change.

Recent years have seen an explosion in the release of open data by dozens of governments including the G8. Recent estimates by McKinsey put the potential benefits of open data at over $100bn and others estimate benefits at more than 1% of global GDP.

However, these benefits are at significant risk both from quality-dilution and “open-washing” (non-open data being passed off as open), as well as from fragmentation of the ecosystem, as the proliferation of open licenses, each with their own slightly different terms and conditions, leads to incompatibility.

The Open Definition helps eliminate these risks and ensures we realize the full benefits of open. It acts as the “gold standard” for open content and data, guaranteeing quality and preventing incompatibility.

This post explores in more detail why it’s important to have the Open Definition and the clear standard it provides for what “open” means in open data and open content.

Three Reasons

There are three main reasons why the Open Definition matters for open data:

Quality: open data should mean the freedom for anyone to access, modify and share that data. However, without a well-defined standard detailing what that means we could quickly see “open” being diluted as lots of people claim their data is “open” without actually providing the essential freedoms (for example, claiming data is open but actually requiring payment for commercial use). In this sense the Open Definition is about “quality control”.

Compatibility: without an agreed definition it becomes impossible to know if your “open” is the same as my “open”. This means we cannot know whether it’s OK to connect your open data and my open data together since the terms of use may, in fact, be incompatible (at the very least I’ll have to start consulting lawyers just to find out!). The Open Definition helps guarantee compatibility and thus the free ability to mix and combine different open datasets which is one of the key benefits that open data offers.

Simplicity: a big promise of open data is simplicity and ease of use. This is not just in the sense of not having to pay for the data itself; it’s about not having to hire a lawyer to read the license or contract, not having to think about what you can and can’t do and what it means for, say, your business or your research. A clear, agreed definition ensures that you do not have to worry about complex limitations on how you can use and share open data.

Let’s flesh these out in a bit more detail:

Quality Control (avoiding “open-washing” and “dilution” of open)

A key promise of open data is that it can be freely accessed and used. Without a clear definition of what exactly that means (e.g. used by whom, for what purpose) there is a risk of dilution, especially as open data is attractive to data users. For example, you could quickly find people putting out what they call “open data” that only non-commercial organizations can access freely.

Thus, without good quality control we risk devaluing open data as a term and concept, as well as excluding key participants and fracturing the community (as we end up with competing and incompatible sets of “open” data).

Compatibility

A single piece of data on its own is rarely useful. Instead data becomes useful when connected or intermixed with other data. If I want to know about the risk of my home getting flooded I need to have geographic data about where my house is located relative to the river and I need to know how often the river floods (and how much).

That’s why “open data”, as defined by the Open Definition, isn’t just about the freedom to access a piece of data, but also about the freedom to connect or intermix that dataset with others.

Unfortunately, we cannot take compatibility for granted. Without a standard like the Open Definition it becomes impossible to know if your “open” is the same as my “open”. This means, in turn, that we cannot know whether it’s OK to connect (or mix) your open data and my open data together (without consulting lawyers!) – and it may turn out that we can’t because your open data license is incompatible with my open data license.

Think of power sockets around the world. Imagine if every electrical device had a different plug and needed a different power socket. When I came over to your house I’d need to bring an adapter! Thanks to standardization, at least within a given country, power sockets are almost always the same, so I can bring my laptop over to your house without a problem. However, when you travel abroad you may have to take an adapter with you. What drives this is standardization (or its lack): within your own country everyone has standardized on the same socket type, but different countries may not share a standard, and hence you need an adapter (or run out of power!).

Whilst for power sockets incompatibility may be a minor inconvenience, easily solved by buying a $10 adapter, for data it is a huge issue. If the licenses for two datasets are incompatible, it may be close to impossible to combine them. For example, if each dataset has many contributors, which is common for “open” projects where hundreds or thousands of volunteers have helped build the dataset, then one would need the agreement of all of those individual contributors: a huge task. Even where it is easier, where there are just one or a few “owners” of a dataset, resolving incompatibility may be very expensive, both in legal fees and in payments.

For open data, the risk of incompatibility is growing as more open data is released and more and more open data publishers such as governments write their own “open data licenses” (with the potential for these different licenses to be mutually incompatible).
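The incompatibility problem above can be made concrete with a toy check. This is illustrative only: real license compatibility is a legal question, the rule below is a drastic simplification, and the license identifiers are just examples, not an authoritative list.

```python
# Licenses with a "share-alike" clause require derived works to be
# released under the same license (example identifiers, not a full list).
SHARE_ALIKE = {"ODbL-1.0", "CC-BY-SA-4.0"}

def can_combine(license_a: str, license_b: str) -> bool:
    """Toy rule: can datasets under these licenses be mixed into one work?"""
    if license_a == license_b:
        return True
    # Two *different* share-alike licenses each demand that the combined
    # work be released under themselves, which is a contradiction.
    return not (license_a in SHARE_ALIKE and license_b in SHARE_ALIKE)
```

Even this crude model shows the failure mode: two datasets that are each individually “open” under different share-alike terms cannot be mixed, which is exactly the fragmentation the Open Definition’s shared standard works against.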

The Open Definition helps prevent this incompatibility.

Join the Global Open Data Index 2014 Sprint

Mor Rubinstein - September 29, 2014 in Community, Featured, Open Data

In 2012 Open Knowledge launched the Global Open Data Index to help track the state of open data around the world. We’re now collecting submissions for the 2014 Open Data Index and we want your help!

Global Open Data Census: Survey

How can you contribute?

The main thing you can do is become a Contributor and add information about the state of open data in your country to the Open Data Index Survey. More details and quickstart guide to contributing here »

We also have other ways you can help:

Become a Mentor: Mentors support the Index in a variety of ways, from engaging and mentoring new contributors to promoting the Index in their community. Activities can include running short virtual “office hours” to support and advise other contributors and promoting the Index with civil society organizations (blogging, tweeting, etc.). To apply to be a Mentor, please fill in this form.

Become a Reviewer: Reviewers are specially selected experts who review submissions and check them to ensure information is accurate and up-to-date and that the Index is generally of high-quality. To apply to be a Reviewer, fill in this form.

Mailing Lists and Twitter

The Open Data Index mailing list is the main communication channel for folks who have questions or want to get in touch: https://lists.okfn.org/mailman/listinfo/open-data-census

On Twitter, keep an eye on updates via #openindex14

Key dates for your calendar

We will kick off on September 30th, in Mexico City with a virtual and in-situ event at Abre LATAM and ConDatos (including LATAM regional skillshare meeting!). Keep an eye on Twitter to find out more details at #openindex14. Sprints will be taking place throughout October, with a global sprint taking place on 30 October!

More on this to follow shortly, keep an eye on this space.

Why the Open Data Index?

The last few years have seen an explosion of activity around open data, and especially open government data. Following initiatives like data.gov and data.gov.uk, numerous local, regional and national bodies have started open government data initiatives and created open data portals (from a handful three years ago, there are now nearly 400 open data portals worldwide).

But simply putting a few spreadsheets online under an open license is obviously not enough. Doing open government data well depends on releasing key datasets in the right way.

Moreover, with the proliferation of sites it has become increasingly hard to track what is happening: which countries, or municipalities, are actually releasing open data and which aren’t? Which countries are releasing data that matters? Which countries are releasing data in the right way and in a timely way?

The Global Open Data Index was created to answer these sorts of questions, providing an up-to-date and reliable guide to the state of global open data for policy-makers, researchers, journalists, activists and citizens.

The first initiative of its kind, the Global Open Data Index is regularly updated and provides the most comprehensive snapshot available of the global state of open data. The Index is underpinned by a detailed annual survey of the state of open data run by Open Knowledge in collaboration with open data experts and communities around the world.

Global Open Data Index: survey

A Data Revolution that Works for All of Us

Rufus Pollock - September 24, 2014 in Featured, Open Data, Open Development, Open Government Data, Our Work, Policy

Many of today’s global challenges are not new. Economic inequality, the unfettered power of corporations and markets, the need to cooperate to address global problems and the unsatisfactory levels of accountability in democratic governance – these were as much problems a century ago as they remain today.

What has changed, however – and most markedly – is the role that new forms of information and information technology could potentially play in responding to these challenges.

What’s going on?

The incredible advances in digital technology mean we have an unprecedented ability to create, share and access information. Furthermore, these technologies are increasingly not just the preserve of the rich, but are available to everyone – including the world’s poorest. As a result, we are living in a (veritable) data revolution – never before has so much data – public and personal – been collected, analysed and shared.

However, the benefits of this revolution are far from being shared equally.

On the one hand, some governments and corporations are already using this data to greatly increase their ability to understand – and shape – the world around them. Others, however, including much of civil society, lack the necessary access and capabilities to truly take advantage of this opportunity. Faced with this information inequality, what can we do?

How can we enable people to hold governments and corporations to account for the decisions they make, the money they spend and the contracts they sign? How can we unleash the potential for this information to be used for good – from accelerating research to tackling climate change? And, finally, how can we make sure that personal data collected by governments and corporations is used to empower rather than exploit us?

So how should we respond?

Fundamentally, we need to make sure that the data revolution works for all of us. We believe that key to achieving this is to put “open” at the heart of the digital age. We need an open data revolution.

We must ensure that essential public-interest data is open, freely available to everyone. Conversely, we must ensure that data about me – whether collected by governments, corporations or others – is controlled by and accessible to me. And finally, we have to empower individuals and communities – especially the most disadvantaged – with the capabilities to turn data into the knowledge and insight that can drive the change they seek.

In this rapidly changing information age – where the rules of the game are still up for grabs – we must be active, seizing the opportunities we have, if we are to ensure that the knowledge society we create is an open knowledge society, benefiting the many not the few, built on principles of collaboration not control, sharing not monopoly, and empowerment not exploitation.

Launching a new collaboration in Macedonia with Metamorphosis and the UK Foreign & Commonwealth Office

Guest - September 18, 2014 in Open Data


As part of the The Open Data Civil Society Network Project, School of Data Fellow, Dona Djambaska, who works with the local independent nonprofit, Metamorphosis, explains the value of the programme and what we hope to achieve over the next 24 months.

“The concept of Open Data is still very fresh among Macedonians. Citizens, CSOs and activists are just beginning to realise the meaning and power hidden in data. They are beginning to sense that there is some potential for them to use open data to support their causes, but in many cases they still don’t understand the value of open data, how to advocate for it, how to find it and most importantly – how to use it!

Metamorphosis was really pleased to get this incredible opportunity to work with the UK Foreign Office and our colleagues at Open Knowledge, to help support the open data movement in Macedonia. We know that an active open data ecosystem in Macedonia, and throughout the Balkan region, will support Metamorphosis’s core objectives of improving democracy and increasing quality of life for our citizens.

It’s great to help all these wonderful minds join together and co-build a community where everyone gets to teach and share. This collaboration with Open Knowledge and the UK Foreign Office is a really amazing stepping-stone for us.

We are starting the programme with meet-ups and then moving to more intense (online and offline) communications and awareness raising events. We hope our tailored workshops will increase the skills of local CSOs, journalists, students, activists or curious citizens to use open data in their work – whether they are trying to expose corruption or find new efficiencies in the delivery of government services.

We can already see the community being built, and the network spreading among Macedonian CSOs and hope that this first project will be part of a more regional strategy to support democratic processes across the Balkan region.”

Read our full report on the project: Improving governance and higher quality delivery of government services in Macedonia through open data


Dona Djambaska, Macedonia.

Dona graduated in the field of Environmental Engineering and has been working with the Metamorphosis foundation in Skopje for the past six years assisting on projects in the field of information society.

There she has focused on organising trainings for computer skills, social media, online promotion, photo and video activism. Dona is also an active contributor and member of the Global Voices Online community. She dedicates her spare time to artistic and activism photography.

Open data for Development Training Starts Tomorrow!

Katelyn Rogers - September 16, 2014 in Open Data, Open Government Data

This is a guest post written by Justyna Krol of the UNDP and originally posted on the UNDP blog.

>> Is data literacy the key to citizen engagement in anti-corruption efforts?

Access to open data is transforming the way we live our lives, and the conversation in our region is just beginning.

Governments are opening their data, joining the Open Government Partnership, and trying to work together with civil society organizations and the private sector to build an open data ecosystem in their countries.

This Wednesday, public officials from fifteen countries in the region will meet in Istanbul for the Open Data for Social and Economic Development Training.

 

In two days of intensive sessions, we will be discussing a number of pressing topics.

On day one, the focus will be on the arguments for and against implementation of the open data agenda in the region.

We will look at how best to build an open data ecosystem in the country. Three sessions will provide space to discuss country-specific experiences with opening the data, alongside some of the challenges governments in the region might face.

The second day will be devoted to the technical aspects. We will analyze what it means to really open the data, where to start, and how much it costs. We will test a few useful tools and discuss the follow-up to the event for individual countries.

For those of you who are interested in joining the event online, we are going to live-stream the first session, delivered by the World Bank, on Wednesday (9:00 AM EEST).

A presentation by Oleg Petrov and Andrew Stott will be followed by a panel discussion with experts from Moldova, fYR Macedonia, and Kosovo* sharing their experiences in opening governmental data.

To watch this session, join the hangout on air or simply play this video:

And of course, we’ll be tweeting!

I am proud to say that the event is co-sponsored by the Partnership for Open Data, which also means that we will have with us fantastic experts and trainers from the World Bank, the Open Data Institute and the Open Knowledge Foundation. What a treat!

Join us online this Wednesday and Thursday and stay tuned for post-event blog posts and presentations!

 

Code for Germany launched!

Guest - August 6, 2014 in OKF Germany, Open Data

This is a guest blog post by Fiona Krakenbürger, research associate at Open Knowledge Foundation DE and Community Manager at Code for Germany


In July 2014, the Open Knowledge Foundation Germany launched its program “Code for Germany”. Prior to the OK Festival in Berlin, we presented the project to the media, international partners, city representatives, members of our Advisory Board and friends from far and wide. It was an honour for us to welcome partners, supporters and members of the program to the stage. Among them were Lynn Fine from Code for America, Gabriella Goméz-Mont from the Laboratorio para la Ciudad, Prof. Dr. Gesche Joost (Digital Champion Germany) and Nicolas Zimmer (Technologiestiftung Berlin).

An essential focus of the launch and of the project was directed towards the community of Civic Tech pioneers and Open Data enthusiasts. We wanted developers and designers who are interested and active in the field of Open Data to get involved and inspired to start Open Knowledge Labs in their city. We started Code for Germany.


The feedback so far has been amazing. In the past few months, fourteen Labs have sprouted up all across the country, bringing together more than 150 people on a regular basis to work on civic tech, use open data, and make the most of their skills to better their cities. This has all added up to more than 4000 hours of civic hacking and has resulted in multiple apps and projects.

The different OK Labs have been the source of a great variety of projects, tackling different topics and social challenges. For example, the OK Lab in Hamburg has a strong focus on urban development and has created a map which shows the distribution of playgrounds in the city. An app from the OK Lab Heilbronn depicts the quality of tap water by region, and another from the OK Lab Cologne helps users find the closest defibrillator in their area. One more of our favourite developments is called “Kleiner Spatz”, which translates to “Little Sparrow” and helps parents find available child care spaces in their city.

We could go on and on listing our favourite projects, prototypes and ideas emerging from the OK Labs but why not check out the list for yourself to see what amazing things can be built with technology?


Still, this is just the beginning. We are now going into the next phase: In the coming months we want to strengthen the various communities and establish ties with officials, governments and administrations. We believe that the government of the 21st Century should be open, transparent and accountable. Therefore we want to foster innovation in the field of Open Data, Civic Innovation and Public Services and create fertile collaborations between citizens and governments. Numerous useful visualizations and apps created by the OK Labs have now laid the foundation for these developments.

We are so excited about the upcoming events, projects, partners and inspiring people we have yet to meet. So far, Code for Germany has been a blast! And last (but certainly not least) we would like to express our most heartfelt gratitude towards the community of developers and designers who have contributed so much already. You rock & stay awesome!


The Business Case for Open Data

Martin Tisne - June 23, 2014 in Business, Open Data

Martin Tisné, director of policy (UK) at Omidyar Network, and Nicholas Gruen, economist and CEO of Lateral Economics, last week unveiled the report Open for Business in Canberra. It is the first study to quantify and illustrate the potential of open data to help achieve the G20’s economic growth target. Martin makes the economic case for open data below.


The G20 and Open Data: Open for Business

Open data cuts across a number of this year’s G20 priorities and could achieve more than half of the G20’s 2% growth target.

The business case for open data

Economic analysis confirms that an open data agenda can make a significant contribution to economic growth and productivity. Governments, the private sector, individuals and communities all stand to benefit from the innovation and information that will inform investment, drive the creation of new industries, and support decision making and research. To mark a step change in the way valuable information is created and reused, the G20 should release information as open data.

In May 2014, Omidyar Network commissioned Lateral Economics to undertake economic analysis of the potential of open data to support the G20’s 2% growth target and to illustrate how an open data agenda can make a significant contribution to economic growth and productivity. Across all G20 economies combined, output could increase by USD 13 trillion cumulatively over the next five years. Implementing open data policies would thus boost cumulative G20 GDP by around 1.1 percentage points, roughly 55% of the G20’s 2% growth target over five years.
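As a quick sanity check on these headline figures, the 1.1-percentage-point boost can be expressed as a share of the 2% target. A minimal sketch, using only the numbers quoted above:

```python
# Figures cited in the Lateral Economics analysis above.
growth_target_pp = 2.0    # G20 growth target, in percentage points over five years
open_data_boost_pp = 1.1  # estimated GDP boost from open data policies

share_of_target = open_data_boost_pp / growth_target_pp
print(f"Open data could deliver {share_of_target:.0%} of the G20 growth target")
# prints "Open data could deliver 55% of the G20 growth target"
```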

Recommendations

Importantly, open data cuts across a number of this year’s G20 priorities: attracting private infrastructure investment, creating jobs and lifting participation, strengthening tax systems and fighting corruption. This memo suggests an open data thread that runs across all G20 priorities. The more data is opened, the more it can be used, reused, repurposed and built on—in combination with other data—for everyone’s benefit.

We call on G20 economies to sign up to the Open Data Charter.

The G20 should ensure that data released under G20 working groups and themes is in line with agreed open data standards. This will lead to more accountable, efficient and effective governments that go further to expose inadequacy, fight corruption and spur innovation.

Data is a national resource and open data is a ‘win-win’ policy: it is about making more of existing resources. We know that the cost of opening data is smaller than the economic returns, which can be significant. Privacy concerns must be respected; if they are, there will be increasing positive returns as the shared pool of public and private sector information grows.

The G20 opportunity

This November, leaders of the G20 Member States will meet in Australia to drive forward commitments made in the St Petersburg G20 Leaders Declaration last September and to make firm progress on stimulating growth. Actions across the G20 will include increasing investment, lifting employment and participation, enhancing trade and promoting competition.

The resulting ‘Brisbane Action Plan’ will encapsulate all of these commitments with the aim of raising the level of G20 output by at least 2% above the currently projected level over the next five years. There are major opportunities for cooperative and collective action by G20 governments.

Governments should intensify the release of existing public sector data – both government and publicly funded research data. But much more can be done to promote open data than simply releasing more government data. In appropriate circumstances, governments can mandate public disclosure of private sector data (e.g. in corporate financial reporting).

Recommendations for action

  • G20 governments should adopt the principles of the Open Data Charter to encourage the building of stronger, more interconnected societies that better meet the needs of our citizens and allow innovation and prosperity to flourish.
  • G20 governments should adopt specific open data targets under each G20 theme, as illustrated below, such as releasing open data related to beneficial owners of companies, as well as revenues from extractive industries.
  • G20 governments should consider harmonizing licensing regimes across the G20.
  • G20 governments should adopt metrics for measuring the quantity and quality of open data publication, e.g. using the Open Data Institute’s Open Data Certificates as a bottom-up mechanism for driving the adoption of common standards.

Illustrative G20 examples

Fiscal and monetary policy

Governments possess rich real-time data that is neither open nor used by government macro-economic managers. G20 governments should:

  • Open up models that lie behind economic forecasts and help assess alternative policy settings;
  • Publish spending and contract data to enable government to comparison-shop among its suppliers.

Anti-corruption

Open data may directly contribute to reducing corruption by increasing the likelihood that corruption will be detected. G20 governments should:

  • Release open data related to beneficial owners of companies, as well as revenues from extractive industries.
  • Collaborate on harmonised technical standards that permit the tracing of international money flows, including the tracing of beneficial owners of commercial entities and the comparison and reconciliation of transactions across borders.

Trade

Obtaining and using trade data from multiple jurisdictions is difficult. Access fees, specific licenses, and non-machine-readable formats all involve large transaction costs. G20 governments should:

  • Harmonise open data policies related to trade data.
  • Use standard trade schema and formats.

Employment

Higher quality information on employment conditions would facilitate better matching of employees to organizations, producing greater job-satisfaction and improved productivity. G20 governments should:

  • Open up centralised job vacancy registers to provide new mechanisms for people to find jobs.
  • Provide open statistical information about the demand for skills in particular areas to help those supporting training and education to hone their offerings.

Energy

Open data will help reduce the cost of energy supply and improve energy efficiency. G20 governments should:

  • Provide incentives for energy companies to publish open data from consumers and suppliers, enabling cost savings through optimized energy plans.
  • Release energy performance certificates for buildings.
  • Publish real-time energy consumption data for government buildings.

Infrastructure

Current infrastructure asset information is fragmented and inefficient. Exposing current asset data would be a significant first step in understanding gaps and providing new insights. G20 governments should:

  • Publish open data on governments’ infrastructure assets and plans to better understand infrastructure gaps, enable greater efficiency and insight in infrastructure development and use, and analyse costs and benefits.
  • Publish open infrastructure data, including contracts via Open Contracting Partnership, in a consistent and harmonised way across G20 countries.

Other examples of value to date

  • In the United States, the National Oceanic and Atmospheric Administration’s decision nearly three decades ago to release its data sets to the public resulted in a burst of innovation, including forecasts, mobile applications, websites and research, and a multi-billion dollar weather industry.
  • Open government data in the EU would increase business activity by €40Bn. Indirect benefits (people using data-driven services) total up to €140Bn a year[1].
  • McKinsey research suggests that seven sectors alone could generate more than $3 trillion a year in additional value as a result of open data.
  • Releasing Danish address data as open data in 2002 produced €62m in benefits over 2005-2009 against €2m in costs; the 2010 ROI was €14m in benefits against €0.2m in costs[2].
  • Open data exposed C$3.2Bn in misuse of charitable status under Canada’s tax code[3].
  • Over £200m a year could have been saved by the NHS through the publication of open data on just one class of prescription drugs[4].
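Taken at face value, the Danish address data figures above imply striking benefit-to-cost ratios. A minimal sketch, using only the numbers cited:

```python
# Benefit and cost figures for Danish open address data (in millions of euros),
# as cited in the list above.
benefit_2005_2009 = 62.0
cost_2005_2009 = 2.0
benefit_2010 = 14.0
cost_2010 = 0.2

print(f"2005-2009 benefit/cost ratio: {benefit_2005_2009 / cost_2005_2009:.0f}:1")
print(f"2010 benefit/cost ratio: {benefit_2010 / cost_2010:.0f}:1")
# prints "2005-2009 benefit/cost ratio: 31:1"
# prints "2010 benefit/cost ratio: 70:1"
```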

[1] https://www.ereg-association.eu/actualities/archive.php?action=show_article&news_id=167

[2] http://www.adresse-info.dk/Portals/2/Benefit/Value_Assessment_Danish_Address_Data_UK_2010-07-07b.pdf

[3] http://eaves.ca/2010/04/14/case-study-open-data-and-the-public-purse/

[4] http://theodi.org/news/prescription-savings-worth-millions-identified-odi-incubated-company
