
Open Humanities Hack: 28 November 2014, London

Lieke Ploeger - October 10, 2014 in Events, Meetups, Open Humanities

This is a cross-post from the DM2E blog; see the original here

On Friday 28 November 2014 the second Open Humanities Hack event will take place at King’s College, London. It is the second in a series of events organised jointly by the King’s College London Department of Digital Humanities, the Digitised Manuscripts to Europeana (DM2E) project, the Open Knowledge Foundation and the Open Humanities Working Group.

The event is focused on digital humanists and intended to target research-driven experimentation with existing humanities data sets. One of the most exciting recent developments in digital humanities is the investigation and analysis of complex data sets that require close collaboration between humanities and computing researchers. The aim of the hack day is not to produce complete applications but to experiment with methods and technologies for investigating these data sets, so that by the end we have an understanding of the types of novel techniques that are emerging.

Possible themes include, but are not limited to:

  • Research in textual annotation has been a particular strength of digital humanities. Where are the next frontiers? How can we bring together insights from other fields and digital humanities?

  • How do we provide linking and sharing of humanities data in a way that makes sense of its complex structure, with many internal relationships, both structural and semantic? In particular, distributed humanities research data often combines objects in multiple media, and there is a diversity of standards for describing the data.

  • Visualisation. How do we develop reasonable visualisations that are practical and help build an overall intuition for the underlying humanities data set?

  • How can we advance the novel humanities technique of network analysis to describe complex relationships of ‘things’ in social-historical systems: people, places, etc.?

With this hack day we seek to form groups of computing and humanities researchers that will work together to come up with small-scale prototypes that showcase novel ways of working with humanities data.

Date: Friday 28 November 2014
Time: 9.00 – 21.00
Location: King’s College, Strand, London
Sign up: Attendance is free but places are limited: please fill in the sign-up form to register.

For an impression of the first Humanities Hack event, please check this blog report.

This Index is yours!

Heather Leson - October 9, 2014 in Community, Open Data, Open Data Census, Open Data Index

How is your country doing with open data? You can make a difference in 5 easy steps by tracking 10 different datasets. Or you can help us spread the word on how to contribute to the Open Data Index. This includes the very important translation of some key items into your local language. We’ll keep providing you with week-by-week updates on the status of this community-driven project.

We’ve got a demo and some shareable slides to help you on your Index path.

Priority country help wanted

The amazing community provided content for over 70 countries last year. This year we set the bar higher with a goal of 100 countries. If you added details for your country last year, please be sure to add any updates this year. Also, we need some help. Are you from one of these countries? Do you have someone in your network who could potentially help? Please do put them in touch with the index team – index at okfn dot org.

DATASETS WANTED: Armenia, Bolivia, Georgia, Guyana, Haiti, Kosovo, Moldova, Morocco, Nicaragua, Ukraine, and Yemen.

Video: Demo and Tips for contributing to the Open Data Index

This is a 40-minute video all about the Open Data Index, including a demo showing you how to add datasets.

Text: Tutorial on How to help build the Open Data Index

We encourage you to download this, make changes (add country-specific details), translate, and share back. Simply share on the Open Data Census Mailing List or tweet us @okfn.

Thanks again for sharing widely!

Open Definition v2.0 Released – Major Update of Essential Standard for Open Data and Open Content

Rufus Pollock - October 7, 2014 in Featured, News, Open Content, Open Data, Open Definition

Today Open Knowledge and the Open Definition Advisory Council are pleased to announce the release of version 2.0 of the Open Definition. The Definition “sets out principles that define openness in relation to data and content” and plays a key role in supporting the growing open data ecosystem.

Recent years have seen an explosion in the release of open data by dozens of governments including the G8. Recent estimates by McKinsey put the potential benefits of open data at over $1 trillion, and other estimates put the benefits at more than 1% of global GDP.

However, these benefits are at significant risk both from quality problems such as “open-washing” (non-open data being passed off as open) and from fragmentation of the open data ecosystem due to incompatibility between the growing number of “open” licenses.

The Open Definition eliminates these risks and ensures we realize the full benefits of open by guaranteeing quality and preventing incompatibility. See this recent post for more about why the Open Definition is so important.

The Open Definition was published in 2005 by Open Knowledge and is maintained today by an expert Advisory Council. This new version of the Open Definition is the most significant revision in the Definition’s nearly ten-year history.

It reflects more than a year of discussion and consultation with the community including input from experts involved in open data, open access, open culture, open education, open government, and open source. Whilst there are no changes to the core principles, the Definition has been completely reworked with a new structure and new text as well as a new process for reviewing licenses (which has been trialled with governments including the UK).

Herb Lainchbury, Chair of the Open Definition Advisory Council, said:

“The Open Definition describes the principles that define “openness” in relation to data and content, and is used to assess whether a particular licence meets that standard. A key goal of this new version is to make it easier to assess whether the growing number of open licenses actually make the grade. The more we can increase everyone’s confidence in their use of open works, the more they will be able to focus on creating value with open works.”

Rufus Pollock, President and Founder of Open Knowledge, said:

“Since we created the Open Definition in 2005 it has played a key role in the growing open data and open content communities. It acts as the “gold standard” for open data and content guaranteeing quality and preventing incompatibility. As a standard, the Open Definition plays a key role in underpinning the “open knowledge economy” with a potential value that runs into the hundreds of billions – or even trillions – worldwide.”

What’s New

In development for more than a year, the new version was collaboratively and openly developed with input from experts involved in the open access, open culture, open data, open education, open government, open source and wiki communities. The new version of the definition:

  • Completely rewrites the core principles – preserving their meaning but using simpler language and clarifying key aspects.
  • Introduces a clear separation between the definition of an open license and that of an open work (with the latter depending on the former). This not only simplifies the conceptual structure but also provides a proper definition of an open license and makes it easier to “self-assess” licenses for conformance with the Open Definition.
  • Defines an Open Work in terms of three key principles (sketched in code after this list):
    • Open License: The work must be available under an open license (as defined in the following section; this includes the freedom to use, build on, modify and share).
    • Access: The work shall be available as a whole and at no more than a reasonable one-time reproduction cost, preferably downloadable via the Internet without charge.
    • Open Format: The work must be provided in a convenient and modifiable form such that there are no unnecessary technological obstacles to the performance of the licensed rights. Specifically, data should be machine-readable, available in bulk, and provided in an open format or, at the very least, be processable with at least one free/libre/open-source software tool.
  • Includes an improved license approval process to make it easier for license creators to check the conformance of their license with the Open Definition, and to encourage reuse of existing open licenses.
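To make the new structure concrete, here is a minimal sketch of how a publisher might self-assess a dataset against these three principles. It is illustrative only: the field names, the license list and the format list below are assumptions, not part of the Definition; the authoritative text and the list of conformant licenses live at opendefinition.org.

```python
# Illustrative sketch only: self-assessing a work against the three key
# principles of an Open Work. Field names, licenses and formats below are
# hypothetical examples; the authoritative text is at opendefinition.org.

# Hypothetical subset of licenses already approved as conformant.
APPROVED_OPEN_LICENSES = {"CC-BY-4.0", "CC0-1.0", "ODbL-1.0"}

# Hypothetical subset of open, machine-readable formats.
OPEN_FORMATS = {"CSV", "JSON", "XML"}

def is_open_work(work: dict) -> bool:
    """Check a work against the three principles of Open Definition 2.0."""
    # 1. Open License: available under a conformant open license.
    open_license = work.get("license") in APPROVED_OPEN_LICENSES
    # 2. Access: available as a whole, preferably a free download.
    access = work.get("download_url") is not None and work.get("cost", 0) == 0
    # 3. Open Format: machine-readable and in an open format.
    open_format = work.get("format") in OPEN_FORMATS
    return open_license and access and open_format

dataset = {"license": "CC-BY-4.0", "format": "CSV", "cost": 0,
           "download_url": "http://example.org/budget.csv"}
print(is_open_work(dataset))  # True
```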

More Information

  • For more information about the Open Definition including the updated version visit: http://opendefinition.org/
  • For background on why the Open Definition matters, read the recent article ‘Why the Open Definition Matters’

Authors

This post was written by Herb Lainchbury, Chair of the Open Definition Advisory Council, and Rufus Pollock, President and Founder of Open Knowledge.

Brazilian Government Develops Toolkit to Guide Institutions in both Planning and Carrying Out Open Data Initiatives

Guest - October 7, 2014 in Open Data, Open Government Data

This is a guest post by Nitai Silva of the Brazilian government’s open data team and was originally published on the Open Knowledge Brazil blog here.

Recently the Brazilian government released the Kit de Dados Abertos (open data toolkit). The toolkit is made up of documents describing the process, methods and techniques for implementing an open data policy within an institution. Its goal is both to demystify the logic of opening up data and to share with public employees the best practices that have emerged from a number of Brazilian government initiatives.

The toolkit focuses on the Plano de Dados Abertos – PDA (Open Data Plan) as the guiding instrument in which commitments, agendas and policy implementation cycles in the institution are registered. We believe that making each public agency build its own PDA is a way to perpetuate the open data policy, making it a state policy and not just a transitory governmental action.

It is organized to facilitate the implementation of the main activity cycles that must be observed in an institution and provides links and manuals to assist in these activities. Emphasis is given to the actors/roles involved in each step and their responsibilities. It also helps to define a central person to monitor and maintain the PDA. The following diagram summarizes the macro steps of implementing an open data policy in an institution:

 

Processo Sistêmico de um PDA (Systemic Process of an Open Data Plan)

 

Open data has been part of the Brazilian government’s agenda for over three years. Over this period, we have accomplished a number of important achievements, including passing the Lei de Acesso à Informação – LAI (Access to Information Law, Brazil’s FOIA), making commitments as part of our Open Government Partnership Action Plan and developing the Infraestrutura Nacional de Dados Abertos (INDA) (Open Data National Infrastructure). However, despite these accomplishments, for many public managers open data activities remain the exclusive responsibility of the Information Technology department of their respective institution. This gap is, in many ways, the cultural heritage of the hierarchical, departmental model of carrying out public policy, and is observed in many institutions.

The launch of the toolkit is the first of a series of actions prepared by the Ministry of Planning to leverage open data initiatives in federal agencies, as defined in the Brazilian commitments in the Open Government Partnership (OGP). The next step is to conduct several tailor-made workshops designed to support major agencies in the federal government in the implementation of open data.

Although it was built with the aim of expanding the quality and quantity of open data made available by federal executive branch agencies, we also made a conscious effort to make the toolkit generic enough for other branches and levels of government.

About the toolkit development:

It is also noteworthy that the toolkit was developed on GitHub. Although GitHub is known as an online, distributed environment for developing software, it has long been used for the co-creation of text documents, even by governments. The toolkit is still hosted there, which allows anyone to make changes and propose improvements. The invitation is open; we welcome and encourage your collaboration.

Finally, I would like to thank Augusto Herrmann, Christian Miranda, Caroline Burle and Jamila Venturini for participating in the drafting of this post!

Streamlining the Local Groups network structure

Christian Villum - October 3, 2014 in Community, Open Knowledge Foundation Local Groups

We are now a little over a year into the Local Groups scheme that was launched in early 2013. Since then we have received hundreds of applications from great community members wanting to start Local Groups in their countries and become Ambassadors and community leaders. From this great body of amazing talent, Local Groups in over 50 countries have been established, and frankly we’ve been overwhelmed by the interest this program has received!

Over the course of this time we have learned a lot. We have seen that open knowledge first and foremost develops locally, and that global peer support is a great driver for making a change in local environments. We’re humbled and proud to be able to help facilitate the great work that is being done in all these countries.

We have also learned, however, of things in the application process and the general network structure that can be improved. After collecting feedback from the community earlier in the year, we learned that the structure of the network and its different labels (Local Group, Ambassador, Initiative and Chapter) were hard to comprehend. We also learned that the waiting time faced by applicants wanting to become Ambassadors and start Local Groups was frustrating: people applying are eager to get started, and having to wait weeks or even longer (because of the number of applications that came in) was an obvious source of friction.

Presenting a more streamlined structure and way of getting involved

We have now thoroughly discussed the feedback with our great Local Groups community and as a result we are excited to present a more streamlined structure and a much easier way of getting involved. The updated structure is written up entirely on the Open Knowledge wiki, and includes the following major headlines:

1. Ambassador and Initiative level merge into “Local Groups”

As mentioned, applying to become an Ambassador and applying to set up an Initiative were the two entry-level ways to engage: “Ambassador” implying that the applicant was – to begin with – just one person, and “Initiative” being the way for an existing group to join the network. These were then jointly labelled “Local Groups”, which was – admittedly – a lot of labels to describe pretty much the same thing: people wanting to start a Local Group and collaborate. Therefore we are removing the Initiative label altogether, and from now on everyone will simply apply through one channel to start a Local Group. If you are just one person doing that (even if more people will join later), you are granted the opportunity to take the title of Ambassador. If you are a group applying collectively to start a Local Group, then everyone in that group can choose to take the title of Local Group Lead, which is a more shared way to lead a new group (as compared to an Ambassador). Applying still happens through a webform, which has been revamped to reflect these changes.

2. Local Group applications will be processed twice per year instead of on a rolling basis

All the hundreds of applications that have come in over the last year have been peer-reviewed by a volunteer committee of existing community members (and they have been doing a stellar job!). One of the other major things we’ve learned is that the work pressure the sheer number of applications put on this hard-working group simply wasn’t sustainable in the long term. That is why, as of now, we are replacing the rolling processing and review of applications with two annual sprints in October and April. This may appear to make the waiting time for applicants even longer, but that is not the case! In fact, we are implementing a measure that ensures no waiting at all! Keep reading.

3. Introducing a new easy “get-started-right-away” entry level: “Local Organiser”

This is the new thing we are most excited to introduce! Seeing how setting up a formal Local Group takes time (regardless of how many applications come in), it was clear that we needed a way for people to get involved in the network right away, without having to wait for weeks on formalities and practicalities. This has led to the new concept of “Local Organiser”:

Anyone can pick up this title immediately and start to organise Open Knowledge activities locally in their own name, under the title of Local Organiser. This can include organising meetups, contributing on discussion lists, advocating the use of open knowledge, building community and gathering more people to join – or any other relevant activity aligned with the values of Open Knowledge.

Local Organisers need to register by setting up a profile page on the Open Knowledge wiki as well as filling in this short form. Shortly thereafter the Local Organiser will be greeted officially into the community with an email from the Open Knowledge Local Group Team containing a link to the Local Organiser Code of Conduct, which the person automatically agrees to adhere to when picking up the title.

Local Organisers use existing, public tools such as Meetup.com, Tumblr, Twitter etc. – but can also request that Open Knowledge set up a public discussion list for their country (if needed – otherwise they can use other existing public discussion lists). Additionally, they can use the Open Knowledge wiki as a place to put information and organise as needed. Local Organisers are encouraged to publicly document their activities on their Open Knowledge wiki profile in order to become eligible to apply to start an official Open Knowledge Local Group later down the road.

A rapidly growing global network

What about Chapters, you might wonder? Their status remains unchanged: they continue to be the expert-level entity that Local Groups can apply to become when reaching a certain level of prowess.

All in all, it’s fantastic to see how Open Knowledge folks are organising locally in all corners of the world. We look forward to continuing to support you all!

If you have any questions, ideas or comments, feel free to get in touch!

Connect and Help Build the Global Open Data Index

Heather Leson - October 1, 2014 in Community, Events, Open Data Census, Open Data Index

Earlier this week we announced that October is Global Open Data Index month. Already people have added details about open data in Argentina, Colombia, and Chile! You can see all the collaborative work in our change tracker. Each of you can make a difference by holding governments accountable for open data commitments and by creating an easy way for civic technologies to analyze the state of open data around the world, hopefully with some shiny new data viz. Our goal at Open Knowledge is to help you shape the story of open data. We are hosting a number of community activities this month to help you learn and connect with each other. Most of all, it is our hope that you can help spread the word in your local language.

Open Data Index @ OkFest 14

Choose your own adventure for the Global Open Data Index

We’ve added a number of ways you can get involved to the OKFN Wiki. But here are some more ways to learn and share:

Community Sessions – Let’s Learn Together

Join the Open Knowledge Team and Open Data Index Mentors for a session all about the Global Open Data Index. Our goal is to show the state of open data around the world, and we need your help to add data from your region and to reach new people who can add details about their country.

We will share some best practices for finding and adding open dataset content to the Open Data Index, and we’ll answer questions about the use of the Index. There are timeslots to help people connect globally.

These will be recorded, but we encourage you to join us on G+/YouTube and bring your ideas and questions. Stay tuned, as we may add more online sessions.

Community Office Hours

Searching for datasets and using the Global Open Data Index tool is all the better with a little help from mentors and fellow community members. If you are a mentor, it would be great if you could join us for a Community Session or host some local office hours. Simply add your name and schedule here.

Mailing Lists and Twitter

The Open Data Index mailing list is the main communication channel for folks who have questions or want to get in touch: https://lists.okfn.org/mailman/listinfo/open-data-census
For Twitter, keep an eye on updates via #openindex14

Translation Help

What better way to help others get involved than to share in your own language? We could use your help. We have some folks translating content into Spanish; other priority languages are Arabic, Portuguese, French, Swahili – and yours! Here are some ways to help translate:

Learn on your own

We know that you have limited time to contribute. We’ve created some FAQs and tips to help you add datasets on your own time. I personally like to think of it as a data expedition to check the quality of open data in many countries. Last year I had fun reviewing data from around the world – happy hunting and gathering! But what matters is that you have the local context to review the language and data for your country. Here’s a quick screenshot of how to contribute:

Steps to track Open Data

Thanks again for making Open Data Matter in your part of the world!


(Photo by Marieke Guy, CC BY license, cropped)

Support Diego Gomez, Join the Global Open Access Movement

Christian Villum - October 1, 2014 in Open Access

This is a post put together based on great contributions on the blogs of the Electronic Frontier Foundation (Adi Kamdar & Maira Sutton), Creative Commons (Timothy Vollmer) and the Open Access Button project (David Carroll).

Join the global Open Access movement!

In July the Electronic Frontier Foundation (EFF) wrote about the predicament that Colombian student Diego Gomez found himself in after he shared a research article online. Gomez is a graduate student in conservation and wildlife management at a small university. He has generally poor access to many of the resources and databases that would help him conduct his research. Paltry access to useful materials, combined with a natural culture of sharing amongst researchers, prompted Gomez to share a paper on Scribd so that he and others could access it for their work. This practice of learning and sharing under less-than-ideal circumstances could land Diego in prison.

Facing 4-8 years in prison for sharing an article

The EFF reports that upon learning of this unauthorized sharing, the author of the research article filed a criminal complaint against Gomez. The charges lodged against Diego could put him in prison for 4-8 years. The trial has started, and the court will need to take into account several factors, including whether there was any malicious intent behind the action and whether there was any actual harm to the economic rights of the author.

Academics and students send and post articles online like this every day—it is simply the norm in scholarly communication. And yet inflexible digital policies, paired with senseless and outdated practices, have led to extreme cases like Diego’s. People who face massive access barriers to existing research—most often hefty paywalls—often have no choice but to find and share relevant papers through colleagues in their network. The Internet has certainly enabled this kind of information sharing at an unprecedented speed and scale, but we are still far from reaching its full capacity.

If open access were the default for scholarly communication, cases like Diego’s would become obsolete.

Let’s stand together to support Diego Gomez and promote Open Access worldwide.

Help Diego Gomez and join academics and users in fighting outdated laws and practices that keep valuable research locked up for no good reason. With open access as the default for scholarly communication, academic research would be free to access and available under an open license that would legally enable the kind of sharing that is so crucial for scientific progress.

We at Open Knowledge have signed the petition in support of Diego alongside prominent organisations such as the Electronic Frontier Foundation, Creative Commons, Open Access Button, Internet Archive, Public Knowledge, and the Right to Research Coalition. Sign your support for Diego to express your support for open access as the default for scientific and scholarly publishing, so that researchers like Diego don’t risk severe penalties for helping colleagues access the research they need:

[Click here to sign the petition]

Sign-on statement: “Scientific and scholarly progress relies upon the exchange of ideas and research. We all benefit when research is shared widely, freely, and openly. I support an Open Access system for academic publishing that makes research free for anyone to read and re-use; one that is inclusive of all and doesn’t force researchers like Diego Gomez to risk severe penalties for helping colleagues access the research they need.”

Why the Open Definition Matters for Open Data: Quality, Compatibility and Simplicity

Rufus Pollock - September 30, 2014 in Featured, Open Data, Open Definition, Policy

The Open Definition performs an essential function as a “standard”, ensuring that when you say “open data” and I say “open data” we both mean the same thing. This standardization, in turn, ensures the quality, compatibility and simplicity essential to realizing one of the main practical benefits of “openness”: the greatly increased ability to combine different datasets together to drive innovation, insight and change.

Recent years have seen an explosion in the release of open data by dozens of governments including the G8. Recent estimates by McKinsey put the potential benefits of open data at over $100bn and others estimate benefits at more than 1% of global GDP.

However, these benefits are at significant risk both from quality dilution and “open-washing” (non-open data being passed off as open), and from fragmentation of the ecosystem, as the proliferation of open licenses, each with their own slightly different terms and conditions, leads to incompatibility.

The Open Definition helps eliminate these risks and ensures we realize the full benefits of open. It acts as the “gold standard” for open content and data, guaranteeing quality and preventing incompatibility.

This post explores in more detail why it’s important to have the Open Definition and the clear standard it provides for what “open” means in open data and open content.

Three Reasons

There are three main reasons why the Open Definition matters for open data:

Quality: open data should mean the freedom for anyone to access, modify and share that data. However, without a well-defined standard detailing what that means we could quickly see “open” being diluted as lots of people claim their data is “open” without actually providing the essential freedoms (for example, claiming data is open but actually requiring payment for commercial use). In this sense the Open Definition is about “quality control”.

Compatibility: without an agreed definition it becomes impossible to know if your “open” is the same as my “open”. This means we cannot know whether it’s OK to connect your open data and my open data together since the terms of use may, in fact, be incompatible (at the very least I’ll have to start consulting lawyers just to find out!). The Open Definition helps guarantee compatibility and thus the free ability to mix and combine different open datasets which is one of the key benefits that open data offers.

Simplicity: a big promise of open data is simplicity and ease of use. This is not just about not having to pay for the data itself; it’s about not having to hire a lawyer to read the license or contract, and not having to think about what you can and can’t do and what it means for, say, your business or your research. A clear, agreed definition ensures that you do not have to worry about complex limitations on how you can use and share open data.

Let’s flesh these out in a bit more detail:

Quality Control (avoiding “open-washing” and “dilution” of open)

A key promise of open data is that it can be freely accessed and used. Without a clear definition of what exactly that means (e.g. used by whom, for what purpose) there is a risk of dilution, especially as open data is attractive to data users. For example, you could quickly find people putting out what they call “open data” that only non-commercial organizations can access freely.

Thus, without good quality control we risk devaluing open data as a term and concept, as well as excluding key participants and fracturing the community (as we end up with competing and incompatible sets of “open” data).

Compatibility

A single piece of data on its own is rarely useful. Instead data becomes useful when connected or intermixed with other data. If I want to know about the risk of my home getting flooded I need to have geographic data about where my house is located relative to the river and I need to know how often the river floods (and how much).

That’s why “open data”, as defined by the Open Definition, isn’t just about the freedom to access a piece of data, but also about the freedom to connect or intermix that dataset with others.

Unfortunately, we cannot take compatibility for granted. Without a standard like the Open Definition it becomes impossible to know if your “open” is the same as my “open”. This means, in turn, that we cannot know whether it’s OK to connect (or mix) your open data and my open data together (without consulting lawyers!) – and it may turn out that we can’t because your open data license is incompatible with my open data license.

Think of power sockets around the world. Imagine if every electrical device had a different plug and needed a different power socket: when I came over to your house I’d need to bring an adapter! Thanks to standardization, at least within a given country, power sockets are almost always the same – so I can bring my laptop over to your house without a problem. However, when you travel abroad you may have to take an adapter with you. What drives this is standardization (or its lack): within your own country everyone has standardized on the same socket type, but different countries may not share a standard, and hence you need an adapter (or run out of power!).

For open data, the risk of incompatibility is growing as more open data is released and more and more open data publishers such as governments write their own “open data licenses” (with the potential for these different licenses to be mutually incompatible).

The Open Definition helps prevent this incompatibility by providing a single, clear standard against which licenses can be assessed and approved, so that data released under any conformant license can be freely combined – as the sketch below illustrates.
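To see the compatibility problem concretely, consider mixing datasets released under different licenses. The sketch below is a hypothetical illustration, not legal advice: the license identifiers and the compatibility table are assumptions, and real compatibility depends on the actual license texts. The point is the intersection logic – without a shared standard, every new pairing of licenses requires a fresh legal analysis.

```python
# Illustrative sketch of license compatibility when combining open datasets.
# The identifiers and table below are hypothetical examples, not legal advice.

# For each input license, the licenses a derived (mixed) dataset could carry.
COMPATIBLE_WITH = {
    "CC0-1.0":   {"CC0-1.0", "CC-BY-4.0", "ODbL-1.0"},  # public domain mixes freely
    "CC-BY-4.0": {"CC-BY-4.0", "ODbL-1.0"},             # attribution must be kept
    "ODbL-1.0":  {"ODbL-1.0"},                          # share-alike: stays ODbL
}

def can_combine(licenses):
    """Return the licenses a combined dataset could be released under:
    the intersection of what every input license allows (empty = incompatible)."""
    options = None
    for lic in licenses:
        allowed = COMPATIBLE_WITH.get(lic, set())  # unknown license: allow nothing
        options = allowed if options is None else options & allowed
    return options or set()

print(can_combine(["CC0-1.0", "CC-BY-4.0"]))   # {'CC-BY-4.0', 'ODbL-1.0'}
print(can_combine(["CC-BY-4.0", "ODbL-1.0"]))  # {'ODbL-1.0'}
print(can_combine(["CC-BY-4.0", "CustomGovLicense"]))  # set() – cannot combine
```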

Join the Global Open Data Index 2014 Sprint

Mor Rubinstein - September 29, 2014 in Community, Featured, Open Data

In 2012 Open Knowledge launched the Global Open Data Index to help track the state of open data around the world. We’re now in the process of collecting submissions for the 2014 Open Data Index and we want your help!

Global Open Data Census: Survey

How can you contribute?

The main thing you can do is become a Contributor and add information about the state of open data in your country to the Open Data Index Survey. More details and a quickstart guide to contributing here »

We also have other ways you can help:

Become a Mentor: Mentors support the Index in a variety of ways, from engaging new contributors and mentoring them to generally promoting the Index in their community. Activities can include running short virtual “office hours” to support and advise other contributors and promoting the Index with civil society organizations – blogging, tweeting etc. To apply to be a Mentor, please fill in this form.

Become a Reviewer: Reviewers are specially selected experts who review submissions and check them to ensure the information is accurate and up-to-date and that the Index is generally of high quality. To apply to be a Reviewer, fill in this form.

Mailing Lists and Twitter

The Open Data Index mailing list is the main communication channel for folks who have questions or want to get in touch: https://lists.okfn.org/mailman/listinfo/open-data-census

For twitter, keep an eye on updates via #openindex14

Key dates for your calendar

We will kick off on September 30th in Mexico City, with a virtual and in-situ event at Abre LATAM and ConDatos (including a LATAM regional skillshare meeting!). Keep an eye on Twitter to find out more details at #openindex14. Sprints will be taking place throughout October, with a global sprint on 30 October!

More on this to follow shortly, keep an eye on this space.

Why the Open Data Index?

The last few years have seen an explosion of activity around open data, and especially open government data. Following initiatives like data.gov and data.gov.uk, numerous local, regional and national bodies have started open government data initiatives and created open data portals (from a handful three years ago, there are now nearly 400 open data portals worldwide).

But simply putting a few spreadsheets online under an open license is obviously not enough. Doing open government data well depends on releasing key datasets in the right way.

Moreover, with the proliferation of sites it has become increasingly hard to track what is happening: which countries, or municipalities, are actually releasing open data and which aren’t? Which countries are releasing data that matters? Which countries are releasing data in the right way and in a timely way?

The Global Open Data Index was created to answer these sorts of questions, providing an up-to-date and reliable guide to the state of global open data for policy-makers, researchers, journalists, activists and citizens.

The first initiative of its kind, the Global Open Data Index is regularly updated and provides the most comprehensive snapshot available of the global state of open data. The Index is underpinned by a detailed annual survey of the state of open data run by Open Knowledge in collaboration with open data experts and communities around the world.

Global Open Data Index: survey

A Data Revolution that Works for All of Us

Rufus Pollock - September 24, 2014 in Featured, Open Data, Open Development, Open Government Data, Our Work, Policy

Many of today’s global challenges are not new. Economic inequality, the unfettered power of corporations and markets, the need to cooperate to address global problems and the unsatisfactory levels of accountability in democratic governance – these were as much problems a century ago as they remain today.

What has changed, however – and most markedly – is the role that new forms of information and information technology could potentially play in responding to these challenges.

What’s going on?

The incredible advances in digital technology mean we have an unprecedented ability to create, share and access information. Furthermore, these technologies are increasingly not just the preserve of the rich, but are available to everyone – including the world’s poorest. As a result, we are living in a (veritable) data revolution – never before has so much data – public and personal – been collected, analysed and shared.

However, the benefits of this revolution are far from being shared equally.

On the one hand, some governments and corporations are already using this data to greatly increase their ability to understand – and shape – the world around them. Others, however, including much of civil society, lack the necessary access and capabilities to truly take advantage of this opportunity. Faced with this information inequality, what can we do?

How can we enable people to hold governments and corporations to account for the decisions they make, the money they spend and the contracts they sign? How can we unleash the potential for this information to be used for good – from accelerating research to tackling climate change? And, finally, how can we make sure that personal data collected by governments and corporations is used to empower rather than exploit us?

So how should we respond?

Fundamentally, we need to make sure that the data revolution works for all of us. We believe that key to achieving this is to put “open” at the heart of the digital age. We need an open data revolution.

We must ensure that essential public-interest data is open, freely available to everyone. Conversely, we must ensure that data about me – whether collected by governments, corporations or others – is controlled by and accessible to me. And finally, we have to empower individuals and communities – especially the most disadvantaged – with the capabilities to turn data into the knowledge and insight that can drive the change they seek.

In this rapidly changing information age – where the rules of the game are still up for grabs – we must be active, seizing the opportunities we have, if we are to ensure that the knowledge society we create is an open knowledge society, benefiting the many not the few, built on principles of collaboration not control, sharing not monopoly, and empowerment not exploitation.
