
Why the Open Definition Matters for Open Data: Quality, Compatibility and Simplicity

Rufus Pollock - September 30, 2014 in Featured, Open Data, Open Definition, Policy

The Open Definition performs an essential function as a “standard”, ensuring that when you say “open data” and I say “open data” we both mean the same thing. This standardization, in turn, ensures the quality, compatibility and simplicity essential to realizing one of the main practical benefits of “openness”: the greatly increased ability to combine different datasets together to drive innovation, insight and change.

This post explores in more detail why it’s important to have a clear standard in the form of the Open Definition for what open means for data.

Three Reasons

There are three main reasons why the Open Definition matters for open data:

Quality: open data should mean the freedom for anyone to access, modify and share that data. However, without a well-defined standard detailing what that means we could quickly see “open” being diluted as lots of people claim their data is “open” without actually providing the essential freedoms (for example, claiming data is open but actually requiring payment for commercial use). In this sense the Open Definition is about “quality control”.

Compatibility: without an agreed definition it becomes impossible to know if your “open” is the same as my “open”. This means we cannot know whether it’s OK to connect your open data and my open data together since the terms of use may, in fact, be incompatible (at the very least I’ll have to start consulting lawyers just to find out!). The Open Definition helps guarantee compatibility and thus the free ability to mix and combine different open datasets which is one of the key benefits that open data offers.

Simplicity: a big promise of open data is simplicity and ease of use. This is not just in the sense of not having to pay for the data itself; it’s about not having to hire a lawyer to read the license or contract, not having to think about what you can and can’t do and what it means for, say, your business or for your research. A clear, agreed definition ensures that you do not have to worry about complex limitations on how you can use and share open data.

Let’s flesh these out in a bit more detail:

Quality Control (avoiding “open-washing” and “dilution” of open)

A key promise of open data is that it can be freely accessed and used. Without a clear definition of what exactly that means (e.g. used by whom, for what purpose) there is a risk of dilution, especially as open data is attractive for data users. For example, you could quickly find people putting out what they call “open data” but only non-commercial organizations can access the data freely.

Thus, without good quality control we risk devaluing open data as a term and concept, as well as excluding key participants and fracturing the community (as we end up with competing and incompatible sets of “open” data).

Compatibility

A single piece of data on its own is rarely useful. Instead data becomes useful when connected or intermixed with other data. If I want to know about the risk of my home getting flooded I need to have geographic data about where my house is located relative to the river and I need to know how often the river floods (and how much).

That’s why “open data”, as defined by the Open Definition, isn’t just about the freedom to access a piece of data, but also about the freedom to connect or intermix that dataset with others.
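To make the flood example concrete, here is a minimal Python sketch of combining two datasets on a shared key; the town names, figures and the `at_risk` rule are all invented for illustration:

```python
# Combine two small, hypothetical open datasets on a shared key ("town")
# to derive a new insight: which homes sit near a river that actually floods.

homes = [
    {"id": 1, "town": "Riverton", "distance_to_river_m": 120},
    {"id": 2, "town": "Hillview", "distance_to_river_m": 900},
]

# A second dataset, from a different (hypothetical) publisher:
# number of floods recorded per town in the last decade.
flood_history = {"Riverton": 4, "Hillview": 0}

def at_risk(home, floods_by_town, max_distance_m=200, min_floods=1):
    """A home is 'at risk' if it is close to a river in a town that floods."""
    floods = floods_by_town.get(home["town"], 0)
    return home["distance_to_river_m"] <= max_distance_m and floods >= min_floods

risky = [h["id"] for h in homes if at_risk(h, flood_history)]
print(risky)  # -> [1]
```

Neither dataset answers the question on its own; the insight only appears when the two can legally and technically be joined.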

Unfortunately, we cannot take compatibility for granted. Without a standard like the Open Definition it becomes impossible to know if your “open” is the same as my “open”. This means, in turn, that we cannot know whether it’s OK to connect (or mix) your open data and my open data together (without consulting lawyers!) – and it may turn out that we can’t because your open data license is incompatible with my open data license.

Think of power sockets around the world. Imagine if every electrical device had a different plug and needed a different power socket: when I came over to your house I’d need to bring an adapter! Thanks to standardization, power sockets within a given country are almost always the same – so I can bring my laptop over to your house without a problem. However, when you travel abroad you may have to take an adapter with you. What drives this is standardization (or its lack): within your own country everyone has standardized on the same socket type, but different countries may not share a standard, and hence you need an adapter (or run out of power!).

For open data, the risk of incompatibility is growing as more open data is released and more and more open data publishers such as governments write their own “open data licenses” (with the potential for these different licenses to be mutually incompatible).

The Open Definition helps prevent this kind of incompatibility by providing a single, shared standard against which data licenses can be checked.

Creative Commons 4.0 BY and BY-SA licenses approved conformant with the Open Definition

Guest - January 8, 2014 in Open Definition


This post by Timothy Vollmer, Manager of Policy and Data at Creative Commons, originally appeared on the creativecommons.org website.

In November we released version 4.0 of the Creative Commons license suite, and today the Open Definition Advisory Council approved the CC 4.0 Attribution (BY) and Attribution-ShareAlike (BY-SA) International licenses as conformant with the Open Definition.

The Open Definition sets out principles that define “openness” in relation to data and content… It can be summed up in the statement that: “A piece of data or content is open if anyone is free to use, reuse, and redistribute it — subject only, at most, to the requirement to attribute and/or share-alike.”

Prior versions of Creative Commons BY and BY-SA licenses (1.0 – 3.0, including jurisdiction ports) are also aligned with the Open Definition, as is the CC0 Public Domain Dedication. Here’s the complete list of conformant licenses. None of the Creative Commons NonCommercial or NoDerivatives licenses comply with the Definition.

The Open Definition is an important marker that communicates the fundamental legal conditions that make content and data open, and CC is working on ways to better display which of our licenses conform to the Definition. We appreciate the open and participatory process conducted by the Open Definition Advisory Council in evaluating licenses and providing expert assistance and advice to license stewards. Individuals interested in participating in the Open Definition license review process may join the OD-discuss email list.

The Open Definition in context: putting open into practice

Laura James - October 16, 2013 in Featured, Linked Open Data, Open Data, Open Definition, Open Knowledge Definition, Open Standards

We’ve seen how the Open Definition can apply to data and content of many types published by many different kinds of organisation. Here we set out how the Definition relates to specific principles of openness, and to definitions and guidelines for different kinds of open data.

Why we need more than a Definition

The Open Definition does only one thing: as clearly and concisely as possible it defines the conditions for a piece of information to be considered ‘open’.

The Definition is broad and universal: it is a key unifying concept which provides a common understanding across the diverse groups and projects in the open knowledge movement.

At the same time, the Open Definition doesn’t provide in-depth guidance for those publishing information in specific areas, so detailed advice and principles for opening specific types of information – from government data, to scientific research, to the digital holdings of cultural heritage institutions – is needed alongside it.

For example, the Open Definition doesn’t specify whether data should be timely; and yet this is a great idea for many data types. It doesn’t make sense to ask whether census data from a century ago is “timely” or not though!

Guidelines for how to open up information in one domain can’t always be straightforwardly reapplied in another, so principles and guidelines for openness targeted at particular kinds of data, written specifically for the types of organisation that might be publishing them, are important. These sit alongside the Open Definition and help people in all kinds of data fields to appreciate and share open information, and we explain some examples here.

Principles for Open Government Data

In 2007 a group of open government advocates met to develop a set of principles for open government data, which became the “8 Principles of Open Government Data”.

In 2010, the Sunlight Foundation revised and built upon this initial set with their Ten Principles for Opening up Government Information, which have set the standard for open government information around the world. These principles may apply to other kinds of data publisher too, but they are specifically designed for open government, and implementation guidance and support is focused on this domain. The principles share many of the key aspects of the Open Definition, but include additional requirements and guidance specific to government information and the ways it is published and used. The Sunlight principles cover the following areas: completeness, primacy, timeliness, ease of physical and electronic access, machine readability, non-discrimination, use of commonly owned standards, licensing, permanence, and usage costs.

Tim Berners-Lee’s 5 Stars for Linked Data

In 2010, Web Inventor Tim Berners-Lee created his 5 Stars for Linked Data, which aims to encourage more people to publish as Linked Data – that is using a particular set of technical standards and technologies for making information interoperable and interlinked.

The first three stars (legal openness, machine readability, and non-proprietary format) are covered by the Open Definition, and the two additional stars add the Linked Data components (in the form of RDF, a technical specification).

The 5 stars have been influential in various parts of the open data community, especially those interested in the semantic web and the vision of a web of data, although there are many other ways to connect data together.

Principles for specific kinds of information

At the Open Knowledge Foundation many of our Working Groups have been involved with others in creating principles for various types of open data and fields of work with an open element. Such principles frame the work of their communities, set out best practice as well as legal, regulatory and technical standards for openness and data, and have been endorsed by many leading people and organisations in each field.

These include principles for many of the fields mentioned above, from scientific research to the holdings of cultural heritage institutions.

The Open Definition: the key principle powering the Global Open Knowledge Movement

All kinds of individuals and organisations can open up information: government, public sector bodies, researchers, corporations, universities, NGOs, startups, charities, community groups, individuals and more. That information can be in many formats – it may be spreadsheets, databases, images, texts, linked data, and more; and it can be information from any field imaginable – such as transport, science, products, education, sustainability, maps, legislation, libraries, economics, culture, development, business, design, finance and more.

Each of these organisations, kinds of information, and the people who are involved in preparing and publishing the information, has its own unique requirements, challenges, and questions. Principles and guidelines (plus training materials, technical standards and so on!) to support open data activities in each area are essential, so those involved can understand and respond to the specific obstacles, challenges and opportunities for opening up information. Creating and maintaining these is a major activity for many of the Open Knowledge Foundation’s Working Groups as well as other groups and communities.

At the same time, those working on openness in many different areas – whether open government, open access, open science, open design, or open culture – have shared interests and goals, and the principles and guidelines for some different data types can and do share many common elements, whilst being tailored to the specific requirements of their communities. The Open Definition provides the key principle which connects all these groups in the global open knowledge movement.

More about openness coming soon

Don’t miss our other posts about Defining Open Data and exploring the Open Definition, why having a shared and agreed definition of open data is so important, and how one can go about “doing open data”.

Exploring openness and the Open Definition

Laura James - October 7, 2013 in Featured, Open Data, Open Definition, Open Knowledge Definition

We’ve set out the basics of what open data means, so here we explore the Open Definition in more detail, including the importance of bulk access to open information, commercial use of open data, machine-readability, and what conditions can be imposed by a data provider.

Commercial Use

A key element of the definition is that commercial use of open data is allowed – there should be no restrictions on commercial or for-profit use of open data.

In the full Open Definition, this is included as “No Discrimination Against Fields of Endeavor — The license must not restrict anyone from making use of the work in a specific field of endeavor. For example, it may not restrict the work from being used in a business, or from being used for genetic research.”

The major intention of this clause is to prohibit license traps that prevent open material from being used commercially; we want commercial users to join our community, not feel excluded from it.

Examples of commercial open data business models

It may seem odd that companies can make money from open data. Business models in this area are still being invented and explored but here are a couple of options to help illustrate why commercial use is a vital aspect of openness.


You can use an open data set to create a high capacity, reliable API which others can access and build apps and websites with, and to charge for access to that API – as long as a free bulk download is also available. (An API is a way for different pieces of software or different computers to connect and exchange information; most applications and apps use APIs to access data via the internet, such as the latest news or maps or prices for products.)

Businesses can also offer services around data improvement and cleaning; for example, taking several sets of open data, combining them and enhancing them (by creating consistent naming for items within the data, say, or connecting two different datasets to generate new insights).

(Note that charging for data licensing is not an option here – charging for access to the data means it is not open data! This business model is often talked about in the context of personal information or datasets which have been compiled by a business. These are perfectly fine business models for data but they aren’t open data.)

Attribution, “Integrity” and Share-alike

Whilst the Open Definition permits very few conditions to be placed on how someone can use open data it does allow a few specific exceptions:

  • Attribution: an open data provider may require attribution (i.e. that you credit them in an appropriate way). This can be important in allowing open data providers to receive credit for their work, and for downstream users to know where data came from.
  • Integrity: an open data provider may require that a user of the data makes it clear if the data has been changed. This can be very relevant for governments, for example, who wish to ensure that people do not claim data is official if it has been modified.
  • Share-alike: an open data provider may impose a share-alike licence, requiring that any new datasets created using their data are also shared as open data.

Machine-readability and bulk access

Data can be provided in many ways, and this can have a significant impact on how easy it is to use. The Open Definition requires that data be both machine-readable and available in “bulk” to help ensure it is not too difficult to put to use.

Data is machine-readable if it can be easily processed by a computer. This does not just mean that it’s digital, but that it is in a digital structure that is appropriate for the relevant processing. For example, consider a PDF document containing tables of data. These are digital, but computers will struggle to extract the information from the PDF (even though it is very human readable!). The equivalent tables in a format such as a spreadsheet would be machine-readable. Read more about machine-readability in the open data glossary.
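As a rough illustration (with made-up figures), the same small table is trivial to process as CSV but needs fragile pattern matching when it only survives as laid-out text, as with text extracted from a PDF:

```python
import csv
import io
import re

# Machine-readable: a CSV parses directly into structured records.
csv_text = "town,population\nRiverton,1200\nHillview,3400\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
assert rows[0] == {"town": "Riverton", "population": "1200"}

# Human-oriented: the same table as laid-out text (like text extracted
# from a PDF) needs ad-hoc, easily broken pattern matching to recover.
pdf_like = "Town        Population\nRiverton        1,200\nHillview        3,400"
recovered = {
    name: int(count.replace(",", ""))
    for name, count in re.findall(r"(\w+)\s+([\d,]+)", pdf_like)
}
print(recovered)  # -> {'Riverton': 1200, 'Hillview': 3400}
```

The regular expression works here only because this particular layout is simple; a real PDF extraction adds page breaks, wrapped cells and merged columns that quickly defeat this kind of scraping, which is exactly why machine-readable formats matter.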


Data is available in bulk if you can download or access the whole dataset easily. It is not available in bulk if you are limited to getting only parts of the dataset – for example, if you are restricted to getting just a few elements of the data at a time. Imagine trying to access a dataset of all the towns in the world one country at a time.

APIs versus Bulk

Providing data through an API is great – for many of the things one might want to do with data, such as presenting some useful information in a mobile app, it is often more convenient than bulk access.

However, the Open Definition requires bulk access rather than an API. There are two main reasons for this:

  • Bulk access allows you to build an API (if you want to!). If you need all the data, using an API to get it can be difficult or inefficient. For example, think about Twitter: using their API to download all the tweets would be very hard and slow. Thus, bulk access is the only way to guarantee full access to the data for everyone. Once bulk access is available, anyone else can build an API which will help others use the data. You can also use bulk data to create interesting new things such as search indexes and complex visualisations.
  • Bulk access is significantly cheaper than providing an API. Today you can store gigabytes of data for less than a dollar a month; but running even a basic API can cost much more, and running a proper API that supports high demand can be very expensive.
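The first point can be sketched in a few lines of Python: given a bulk download, anyone can build their own query layer on top. The dataset and the endpoint name are invented for illustration:

```python
# With a bulk download in hand, anyone can layer their own query "API"
# on top of the data. The records below stand in for a bulk file.

bulk = [
    {"country": "FR", "town": "Lyon"},
    {"country": "FR", "town": "Nice"},
    {"country": "DE", "town": "Bonn"},
]

# Build an index once from the bulk data, then answer cheap per-country
# queries -- exactly what a hosted API does behind the scenes.
index = {}
for record in bulk:
    index.setdefault(record["country"], []).append(record["town"])

def towns_endpoint(country):
    """Simulates answering GET /towns?country=XX from the bulk data."""
    return index.get(country, [])

print(towns_endpoint("FR"))  # -> ['Lyon', 'Nice']
```

A real service would wrap `towns_endpoint` in an HTTP server, but the economics are as described above: the index is built once from the bulk file, and the reverse direction (reassembling the bulk file from per-query API calls) is what gets slow and expensive.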

So having an API is not a requirement for data to be open – although of course it is great if one is available.

Moreover, it is perfectly fine for someone to charge for access to open data through an API – as long as they also provide the data for free in bulk. (Strictly speaking, the requirement isn’t that the bulk data is available for free but that the charge is no more than the extra cost of reproduction. For online downloads, that’s very close to free!) This makes sense: open data must be free but open data services (such as an API) can be charged for.

(It’s worth considering what this means for real-time data, where new information is being generated all the time, such as live traffic information. The answer here depends somewhat on the situation, but for open real-time data one would imagine a combination of bulk download access, and some way to get rapid or regular updates. For example, you might provide a stream of the latest updates which is available all the time, and a bulk download of a complete day’s data every night.)
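One hypothetical shape for that stream-plus-nightly-dump pattern, sketched with invented traffic data:

```python
# Sketch of open real-time data publishing: each live update is served
# as it arrives AND rolled into a per-day bulk archive. Data is invented.
from collections import defaultdict

stream = [
    {"time": "2014-09-29T23:50", "road": "A1", "speed_kmh": 40},
    {"time": "2014-09-30T08:10", "road": "A1", "speed_kmh": 15},
    {"time": "2014-09-30T08:12", "road": "B2", "speed_kmh": 60},
]

daily_bulk = defaultdict(list)
for update in stream:          # each update goes out live, and is also...
    day = update["time"][:10]  # ...archived under its calendar day
    daily_bulk[day].append(update)

# The nightly bulk download for 2014-09-30 would then contain:
print(len(daily_bulk["2014-09-30"]))  # -> 2
```

Consumers who need freshness follow the stream; consumers who need everything fetch the daily dumps, and either group can reconstruct the other view.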

Licensing and the public domain

Generally, when we want to know whether a dataset is legally open, we check to see whether it is available under an open licence (or that it’s in the public domain by means of a “dedication”).

However, it is important to note that it is not always clear whether there are any exclusive, intellectual-property-style rights in the data such as copyright or sui-generis database rights (for example, this may depend on your jurisdiction). You can read more about this complex issue in the Open Definition legal overview of rights in data. If there aren’t exclusive rights in the data, then it would automatically be in the public domain, and putting it online would be sufficient to make it open.

However, since this is an area where things are not very clear, it is generally recommended to apply an appropriate open license – that way, if there are exclusive rights you’ve licensed them, and if there aren’t any rights you’ve done no harm (the data was already in the public domain!).

More about openness coming soon

In coming days we’ll post more on the theme of explaining openness, including the relationship of the Open Definition to specific sets of principles for openness – such as the Sunlight Foundation’s 10 principles and Tim Berners-Lee’s 5 star system, why having a shared and agreed definition of open data is so important, and how one can go about “doing open data”.

Defining Open Data

Laura James - October 3, 2013 in Featured, Open Data, Open Definition, Open Knowledge Definition

Open data is data that can be freely used, shared and built on by anyone, anywhere, for any purpose. This is the summary of the full Open Definition which the Open Knowledge Foundation created in 2005 to provide both a succinct explanation and a detailed definition of open data.

As the open data movement grows, and even more governments and organisations sign up to open data, it becomes ever more important that there is a clear and agreed definition for what “open data” means if we are to realise the full benefits of openness, and avoid the risks of creating incompatibility between projects and splintering the community.

Open can apply to information from any source and about any topic. Anyone can release their data under an open licence for free use by and benefit to the public. Although we may think mostly about government and public sector bodies releasing public information such as budgets or maps, or researchers sharing their results data and publications, any organisation can open information (corporations, universities, NGOs, startups, charities, community groups and individuals).

Read more about different kinds of data in our one page introduction to open data

There is open information in transport, science, products, education, sustainability, maps, legislation, libraries, economics, culture, development, business, design, finance …. So the explanation of what open means applies to all of these information sources and types. Open may also apply both to data – big data and small data – or to content, like images, text and music!

So here we set out clearly what open means, and why this agreed definition is vital for us to collaborate, share and scale as open data and open content grow and reach new communities.

What is Open?

The full Open Definition provides a precise definition of what open data is. There are 2 important elements to openness:

  • Legal openness: you must be allowed to get the data legally, to build on it, and to share it. Legal openness is usually provided by applying an appropriate (open) license which allows for free access to and reuse of the data, or by placing data into the public domain.
  • Technical openness: there should be no technical barriers to using that data. For example, providing data as printouts on paper (or as tables in PDF documents) makes the information extremely difficult to work with. So the Open Definition has various requirements for “technical openness,” such as requiring that data be machine readable and available in bulk.

There are a few key aspects of open which the Open Definition explains in detail. Open Data is useable by anyone, regardless of who they are, where they are, or what they want to do with the data; there must be no restriction on who can use it, and commercial use is fine too.

Open data must be available in bulk (so it’s easy to work with) and it should be available free of charge, or at least at no more than a reasonable reproduction cost. The information should be digital, preferably available by downloading through the internet, and easily processed by a computer too (otherwise users can’t fully exploit the power of data – that it can be combined together to create new insights).

Open Data must permit people to use it, re-use it, and redistribute it, including intermixing with other datasets and distributing the results.

The Open Definition generally doesn’t allow conditions to be placed on how people can use Open Data, but it does permit a data provider to require that data users credit them in some appropriate way, make it clear if the data has been changed, or that any new datasets created using their data are also shared as open data.

There are 3 important principles behind this definition of open, which are why Open Data is so powerful:

  • Availability and Access: that people can get the data
  • Re-use and Redistribution: that people can reuse and share the data
  • Universal Participation: that anyone can use the data

Governance of the Open Definition

Since 2007, the Open Definition has been governed by an Advisory Council. This is the group formally responsible for maintaining and developing the Definition and associated material. Its mission is to take forward Open Definition work for the general benefit of the open knowledge community, and it has specific responsibility for deciding on what licences comply with the Open Definition.

The Council is a community-run body. New members of the Council can be appointed at any time by agreement of the existing members of the Advisory Council, and are selected for demonstrated knowledge and competence in the areas of work of the Council.

The Advisory Council operates in the open and anyone can join the mailing list.

About the Open Definition

The Open Definition was created in 2005 by the Open Knowledge Foundation with input from many people. The Definition was based directly on the Open Source Definition from the Open Source Initiative and we were able to reuse most of these well-established principles and practices that the free and open source community had developed for software, and apply them to data and content.

Thanks to the efforts of many translators in the community, the Open Definition is available in 30+ languages.

More about openness coming soon

In coming days we’ll post more on the theme of explaining openness, including a more detailed exploration of the Open Definition, the relationship of the Open Definition to specific sets of principles for openness – such as the Sunlight Foundation’s 10 principles and Tim Berners-Lee’s 5 star system, why having a shared and agreed definition of open data is so important, and how one can go about “doing open data”.

UK Open Government Licence is now compliant with the Open Definition

Jonathan Gray - July 1, 2013 in Open Data, Open Definition, Policy

On Friday the UK National Archives launched a new version of the Open Government Licence, which is now the default licence used by the UK government to publish the lion’s share of its public sector information.

While the announcement hardly made headlines, there is one small addition to the text of the licence that we were very pleased to see, namely:

The OGLv2.0 is Open Definition compliant.

The new version of the licence is now officially conformant with the Open Definition.

As you may know, the Open Definition gives principles for what we mean by ‘open’ in ‘open data’ or ‘open content’. This means that open material can be used and shared by anyone for any purpose, and – crucially – that open material can be freely combined without legal issues. This relatively short bit of text helps to keep the digital commons interoperable, serving as a green light for reuse and remixing.

The new release of the Open Government Licence is the culmination of months of consultation and feedback from the open data community – including members of the Open Definition Advisory Council – which resulted in several important changes.

The fact that the UK government’s new default licence is now compliant with the definition formally makes good on official commitments to make open the new default for public sector data.

Jo Ellis, Information Policy Manager at the National Archives commented on the release:

With the Open Government Licence v2.0 we see the next step in the evolution of Open Government Licensing. The refinements provide improved clarity for users as does the launch of the new OGL symbol. We are also delighted that OGLv2.0 is now officially conformant with the Open Definition.

Protecting the foundations of Open Knowledge

Mike Linksvayer - February 13, 2013 in Open Definition, Open Knowledge Definition, Open Knowledge Foundation, Open Standards

The foundations of the Foundation

The Open Knowledge Definition (OKD) was one of the Open Knowledge Foundation’s very first projects: drafted in 2005, 1.0 in 2006. By stipulating what Open means, the OKD has been foundational to the OKF’s work, as illustrated by this several-years-old diagram of the Open Knowledge “stack”.

Knowing your foundations seems a must in any field, but even more so in an explosively growing and cross-disciplinary one. The OKD has kept the OKF itself on-track, as it has started and facilitated dozens of projects over the last years.

Burgeoning movements for open access, culture, data, education, government, and more have also benefited from a shared understanding of Open in the face of “openwashing” on one hand, and lack of understanding on the other. In either case, when works and projects claimed or intended as Open are actually closed, society loses: closed doesn’t create an interoperable commons.

A selection of OKF blog posts from the past few years illustrates how the OKD plays a low-profile but essential role in setting the standard for Open in a variety of fields.

Recent developments

In 2008 an Advisory Council was inaugurated to steward the OKD and related definitions. I joined the council later in 2008, and recently agreed to serve as its chair for a year.

Since then we’ve discussed and provided feedback on intended-open licenses, in particular an Open Government License Canada proposal, iterated on an ongoing discussion about refinements needed in the next version of the OKD, and made our processes for approving licenses – as well as new council members – slightly more rigorous.

We’ve also taken the crucial step of adding new council members with deep expertise in Public Sector Information/Open Government Data, where we expect much of the “action” in Open and intended-open licenses in the next years to be. I’m very happy to welcome:

  • Baden Appleyard, National Programme Director at AusGOAL
  • Tariq Khokhar, Open Data Evangelist at the World Bank
  • Herb Lainchbury, Citizen, Developer and Founder of OpenDataBC.ca
  • Federico Morando, Managing Director at the Nexa Center
  • Andrew Stott, Former Director for Transparency and Digital Engagement and Co-Chair of the Open Government Data Working Group at the Open Knowledge Foundation.

While many of them will be well known to many of our readers, you may find their brief bios and websites on the Advisory Council page.

It is also time to thank three former council members for their service in years past:

  • Paul Jacobson
  • Rob Styles
  • John Wilbanks

Open movements will continue to grow rapidly (unless we fail miserably). You can help ensure we succeed splendidly! We could always use more help reviewing and providing feedback on licenses, but there are also roles for designers, programmers, translators, writers, and people committed to sound open strategy. See a recent get involved update for more.

Most of all, make sure your open access / culture / education / government / science project is truly open — OpenDefinition.org is a good place for you and your colleagues to start!

Making a Real Commons: Creative Commons should Drop the Non-Commercial and No-Derivatives Licenses

Rufus Pollock - October 4, 2012 in Featured, Free Culture, Open Content, Open Data, Open Definition, Open Standards, Open/Closed, WG Open Licensing

Students for Free Culture recently published two excellent pieces about why Creative Commons should drop their Non-Commercial and No-Derivatives license variants:

As the first post says:

Over the past several years, Creative Commons has increasingly recommended free culture licenses over non-free ones. Now that the drafting process for version 4.0 of their license set is in full gear, this is “a once-in-a-decade-or-more opportunity” to deprecate the proprietary NonCommercial and NoDerivatives clauses. This is the best chance we have to dramatically shift the direction of Creative Commons to be fully aligned with the definition of free cultural works by preventing the inheritance of these proprietary clauses in CC 4.0’s final release.

After reiterating some of the most common criticisms and objections against the NC and ND restrictions (if you are not familiar with these then they are worth reading up on), the post continues:

Most importantly, though, is that both clauses do not actually contribute to a shared commons. They oppose it.

This is a crucial point and one that I and others at the Open Knowledge Foundation have made time and time again. Simply: the Creative Commons licenses do not make a commons.

As I wrote on my personal blog last year:

Ironically, despite its name, Creative Commons, or more precisely its licenses, do not produce a commons. The CC licenses are not mutually compatible: for example, material with a CC Attribution-Sharealike (by-sa) license cannot be intermixed with material licensed with any of the CC NonCommercial licenses (e.g. Attribution-NonCommercial, Attribution-Sharealike-Noncommercial).

Given that a) the majority of CC licenses in use are ‘non-commercial’ and b) there is also large usage of ShareAlike (e.g. Wikipedia), this is an issue that affects a large set of ‘Creative Commons’ material.

Unfortunately, the presence of the word ‘Commons’ in CC’s name and the prominence of ‘remix’ in the advocacy around CC tends to make people think, falsely, that all CC licenses are in some way similar or substitutable.

The NC and ND licenses prevent CC licensed works forming a unified open digital commons that everyone is free to use, reuse and redistribute.

Perhaps if Creative Commons were instead called ‘Creative Choice’ and it were clearer that only a subset of the licenses (namely CC0, CC-BY and CC-BY-SA) contribute to the development of a genuine, unified, interoperable commons, then this would not be so problematic. But the fact that CC appears to promote such a commons (which in fact it does not) ultimately has a detrimental effect on the growth and development of the open digital commons.

As the Free Culture blog puts it:

Creative Commons could have moved towards being a highly-flexible modular licensing platform that enabled rightsholders to fine-tune the exact rights they wished to grant on their works, but there’s a reason that didn’t happen. We would be left with a plethora of incompatible puddles of culture. Copyright already gives rightsholders all of the power. Creative Commons tries to offer a few simple options not merely to make the lives of rightsholders easier, but to do so towards the ends of creating a commons.

Whilst Free Culture is focused on “content”, the situation is, if anything, more serious for data, where combination and reuse are central and interoperability (and the resulting open commons) is therefore especially important.

We therefore believe this is the time for Creative Commons to either retire the NC and ND license variants, or spin them off into a separate entity which does not purport to promote or advance a digital commons (e.g. ‘Creative Choice’).

Please consider joining us and Students for a Free Culture in the call to Creative Commons to make the necessary changes:

Announcing the Open Definition Licenses Service

Rufus Pollock - February 16, 2012 in Open Content, Open Data, Open Definition, Open Knowledge Definition, Open Standards, Our Work, WG Open Licensing

We’re pleased to announce a simple new service from the Open Knowledge Foundation as part of the Open Definition Project: the (Open) Licenses Service.


The service is ultra simple in purpose and function. It provides:

  • Information on licenses for open data, open content, and open-source software in machine readable form (JSON)
  • A simple web API that allows you to retrieve this information over the web, including from JavaScript in a browser via JSONP

In addition to the service there’s also:

What’s Included

There’s data on more than 100 open (and a few closed) licenses, including all OSI-approved open source licenses and all Open Definition conformant open data and content licenses. Also included are ‘generics’: licenses representing a category (useful when a user does not know the exact license but knows, for example, that the material only requires attribution).

View all the licenses available »

In addition various generic groups are provided that are useful when constructing license choice lists, including non-commercial options, generic Public Domain and more. Pre-packaged groups include:

The source for all this material is a git licenses repo on GitHub. Not only does it provide another way to get the data, but it also means that if you spot an error, or have a suggestion for an improvement, you can file an issue on the GitHub repo or fork, patch and submit a pull request.

Why this Service?

The first reason is the most obvious: having a place to record license data in a machine-readable way, especially for open licenses (i.e. for content and data, those conforming to the Open Definition, and for software, the Open Source Definition).

The second reason is to make it easier for other people to include license info in their own apps and services. Literally daily, new sites and services are being created that allow users to share or create content and data. If there’s any intention for that data to be used and reused by others, it’s essential that the material be licensed, and preferably openly licensed.

By providing license data in a simple machine-usable, web-friendly format we hope to make it easier for people to integrate license choosers, and good license defaults, into their sites. This will provide not only greater clarity but also more open content and data: remember, no license usually means defaulting to the most restrictive, all-rights-reserved condition.
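A minimal sketch of calling such an API from Python follows. The host and URL layout (`licenses.opendefinition.org/licenses/<id>.json`) are assumptions for illustration, not a documented contract:

```python
import json
from urllib.request import urlopen

# Assumed base URL for the service; check the current documentation
# before relying on this layout.
BASE = "http://licenses.opendefinition.org/licenses"

def license_url(license_id):
    """Build the (assumed) JSON endpoint for a single license record."""
    return f"{BASE}/{license_id}.json"

def fetch_license(license_id):
    """Retrieve one license record as a dict; requires network access."""
    with urlopen(license_url(license_id)) as resp:
        return json.load(resp)

# Example usage (network required):
# info = fetch_license("cc-by")
# print(info["title"])
```

Since the payload is plain JSON, the same records can equally be served via JSONP for in-browser use, as the post describes.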

Open Knowledge Definition translated into Telugu (తెలుగు)

Theodora Middleton - November 29, 2011 in Open Definition, Open Knowledge Definition

The following post is by Theodora Middleton, the OKF blog editor.

We are pleased to announce that the Open Knowledge Definition has now been translated into Telugu (తెలుగు), thanks to the hard work of Sridhar Gutam. You can find this at:

The definition has now been translated into 27 languages. If you’d like to translate the Definition into another language, or if you’ve already done so, please get in touch on our discuss list, or on info at okfn dot org.
