

The Open Definition in context: putting open into practice

Laura James - October 16, 2013 in Featured, Linked Open Data, Open Data, Open Definition, Open Knowledge Definition, Open Standards

We’ve seen how the Open Definition can apply to data and content of many types published by many different kinds of organisation. Here we set out how the Definition relates to specific principles of openness, and to definitions and guidelines for different kinds of open data.

Why we need more than a Definition

The Open Definition does only one thing: as clearly and concisely as possible it defines the conditions for a piece of information to be considered ‘open’.

The Definition is broad and universal: it is a key unifying concept which provides a common understanding across the diverse groups and projects in the open knowledge movement.

At the same time, the Open Definition doesn’t provide in-depth guidance for those publishing information in specific areas, so detailed advice and principles for opening specific types of information – from government data, to scientific research, to the digital holdings of cultural heritage institutions – is needed alongside it.

For example, the Open Definition doesn’t specify whether data should be timely, even though timeliness is a great idea for many data types. (It wouldn’t make sense, though, to ask whether census data from a century ago is “timely” or not!)

Guidelines for how to open up information in one domain can’t always be straightforwardly reapplied in another, so principles and guidelines for openness targeted at particular kinds of data, written specifically for the types of organisation that might be publishing them, are important. These sit alongside the Open Definition and help people in all kinds of data fields to appreciate and share open information, and we explain some examples here.

Principles for Open Government Data

In 2007 a group of open government advocates met to develop a set of principles for open government data, which became the “8 Principles of Open Government Data”.

In 2010, the Sunlight Foundation revised and built upon this initial set with their Ten Principles for Opening up Government Information, which have set the standard for open government information around the world. These principles may apply to other kinds of data publisher too, but they are specifically designed for open government, and implementation guidance and support is focused on this domain. The principles share many of the key aspects of the Open Definition, but include additional requirements and guidance specific to government information and the ways it is published and used. The Sunlight principles cover the following areas: completeness, primacy, timeliness, ease of physical and electronic access, machine readability, non-discrimination, use of commonly owned standards, licensing, permanence, and usage costs.

Tim Berners-Lee’s 5 Stars for Linked Data

In 2010, web inventor Tim Berners-Lee created his 5 Stars for Linked Data, a scheme which aims to encourage more people to publish Linked Data – that is, data using a particular set of technical standards and technologies for making information interoperable and interlinked.

The first three stars (legal openness, machine readability, and non-proprietary format) are covered by the Open Definition, and the two additional stars add the Linked Data components (in the form of RDF, a technical specification).

The 5 stars have been influential in various parts of the open data community, especially those interested in the semantic web and the vision of a web of data, although there are many other ways to connect data together.

Principles for specific kinds of information

At the Open Knowledge Foundation many of our Working Groups have been involved with others in creating principles for various types of open data and fields of work with an open element. Such principles frame the work of their communities, set out best practice as well as legal, regulatory and technical standards for openness and data, and have been endorsed by many leading people and organisations in each field.

These include:

The Open Definition: the key principle powering the Global Open Knowledge Movement

All kinds of individuals and organisations can open up information: government, public sector bodies, researchers, corporations, universities, NGOs, startups, charities, community groups, individuals and more. That information can be in many formats – it may be spreadsheets, databases, images, texts, linked data, and more; and it can be information from any field imaginable – such as transport, science, products, education, sustainability, maps, legislation, libraries, economics, culture, development, business, design, finance and more.

Each of these organisations and kinds of information – and the people involved in preparing and publishing the information – has its own requirements, challenges, and questions. Principles and guidelines (plus training materials, technical standards and so on!) to support open data activities in each area are essential, so those involved can understand and respond to the specific obstacles, challenges and opportunities for opening up information. Creating and maintaining these is a major activity for many of the Open Knowledge Foundation’s Working Groups as well as other groups and communities.

At the same time, those working on openness in many different areas – whether open government, open access, open science, open design, or open culture – have shared interests and goals, and the principles and guidelines for some different data types can and do share many common elements, whilst being tailored to the specific requirements of their communities. The Open Definition provides the key principle which connects all these groups in the global open knowledge movement.

More about openness coming soon

Don’t miss our other posts about Defining Open Data and exploring the Open Definition, about why having a shared and agreed definition of open data is so important, and about how one can go about “doing open data”.

Exploring openness and the Open Definition

Laura James - October 7, 2013 in Featured, Open Data, Open Definition, Open Knowledge Definition

We’ve set out the basics of what open data means, so here we explore the Open Definition in more detail, including the importance of bulk access to open information, commercial use of open data, machine-readability, and what conditions can be imposed by a data provider.

Commercial Use

A key element of the definition is that commercial use of open data is allowed – there should be no restrictions on commercial or for-profit use of open data.

In the full Open Definition, this is included as “No Discrimination Against Fields of Endeavor — The license must not restrict anyone from making use of the work in a specific field of endeavor. For example, it may not restrict the work from being used in a business, or from being used for genetic research.”

The major intention of this clause is to prohibit license traps that prevent open material from being used commercially; we want commercial users to join our community, not feel excluded from it.

Examples of commercial open data business models

It may seem odd that companies can make money from open data. Business models in this area are still being invented and explored but here are a couple of options to help illustrate why commercial use is a vital aspect of openness.


You can use an open dataset to create a high-capacity, reliable API which others can use to build apps and websites, and charge for access to that API – as long as a free bulk download is also available. (An API is a way for different pieces of software or different computers to connect and exchange information; most applications and apps use APIs to access data via the internet, such as the latest news, maps, or prices for products.)

Businesses can also offer services around data improvement and cleaning; for example, taking several sets of open data, combining them and enhancing them (by creating consistent naming for items within the data, say, or connecting two different datasets to generate new insights).
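
To make this concrete, here is a minimal sketch (in Python, using the pandas library) of the kind of cleaning and combining step such a business might offer as a service. The file names, column names and datasets are hypothetical examples invented for illustration, not real open datasets:

```python
# A minimal sketch of combining two open datasets to add value.
# The file names and column names are hypothetical examples.
import pandas as pd

# One dataset of schools, one of air-quality readings, both assumed to be open data.
schools = pd.read_csv("schools.csv")          # columns: school_id, name, postcode
air_quality = pd.read_csv("air_quality.csv")  # columns: postcode, pm25_annual_mean

# Clean: normalise the join key so the two sources match.
for df in (schools, air_quality):
    df["postcode"] = df["postcode"].str.upper().str.replace(" ", "", regex=False)

# Enhance: attach an air-quality figure to every school record.
enriched = schools.merge(air_quality, on="postcode", how="left")
enriched.to_csv("schools_with_air_quality.csv", index=False)
```

The business here charges for the cleaning and enrichment work, not for the underlying data, which remains open.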

(Note that charging for data licensing is not an option here – charging for access to the data means it is not open data! This business model is often talked about in the context of personal information or datasets which have been compiled by a business. These are perfectly fine business models for data but they aren’t open data.)

Attribution, “Integrity” and Share-alike

Whilst the Open Definition permits very few conditions to be placed on how someone can use open data, it does allow a few specific exceptions:

  • Attribution: an open data provider may require attribution (i.e. that you credit them in an appropriate way). This can be important in allowing open data providers to receive credit for their work, and for downstream users to know where data came from.
  • Integrity: an open data provider may require that a user of the data makes it clear if the data has been changed. This can be very relevant for governments, for example, who wish to ensure that people do not claim data is official if it has been modified.
  • Share-alike: an open data provider may impose a share-alike licence, requiring that any new datasets created using their data are also shared as open data.

Machine-readability and bulk access

Data can be provided in many ways, and this can have a significant impact on how easy it is to use it. The Open Definition requires that data be both machine-readable and available in “bulk” to help make sure it’s not too difficult to make useful.

Data is machine-readable if it can be easily processed by a computer. This does not just mean that it’s digital, but that it is in a digital structure that is appropriate for the relevant processing. For example, consider a PDF document containing tables of data. These are digital, but computers will struggle to extract the information from the PDF (even though it is very human readable!). The equivalent tables in a format such as a spreadsheet would be machine-readable. Read more about machine-readability in the open data glossary.
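
As a small illustration, here is a sketch of processing a machine-readable table with a few lines of standard-library Python. The file and column names (a hypothetical towns.csv with name and population columns) are made up for the example; the point is that the same table locked inside a PDF would first need fragile text extraction before any of this could happen:

```python
# A sketch of why machine-readable formats matter: a CSV table can be
# processed directly, whereas a table inside a PDF would first need
# error-prone text extraction. File and column names are hypothetical.
import csv

with open("towns.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Simple processing that is trivial on structured data: total population.
total_population = sum(int(row["population"]) for row in rows)
print(f"{len(rows)} towns, total population {total_population}")
```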

[Image: some machine-readable data being read by a machine]

Data is available in bulk if you can download or access the whole dataset easily. It is not available in bulk if you are limited to just getting parts of the dataset – for example, if you are restricted to getting just a few elements of the data at a time. Imagine, for example, trying to access a dataset of all the towns in the world one country at a time.

APIs versus Bulk

Providing data through an API is great – for many of the things one might want to do with data, such as presenting some useful information in a mobile app, it is often more convenient than bulk access.

However, the Open Definition requires bulk access rather than an API. There are two main reasons for this:

  • Bulk access allows you to build an API (if you want to!). If you need all the data, using an API to get it can be difficult or inefficient. For example, think about Twitter: using their API to download all the tweets would be very hard and slow. Thus, bulk access is the only way to guarantee full access to the data for everyone. Once bulk access is available, anyone else can build an API which will help others use the data (see the sketch after this list). You can also use bulk data to create interesting new things such as search indexes and complex visualisations.
  • Bulk access is significantly cheaper than providing an API. Today you can store gigabytes of data for less than a dollar a month; but running even a basic API can cost much more, and running a proper API that supports high demand can be very expensive.
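
To make the first point above concrete, here is a minimal sketch of serving a bulk download through a small API of your own. It assumes the Flask web framework is installed and that a hypothetical bulk file towns.csv (with country and town columns) has already been downloaded; none of these specifics are prescribed by the Open Definition itself:

```python
# A minimal sketch: once a bulk download exists, anyone can serve it
# through their own API. Assumes Flask is installed and that a
# hypothetical bulk file "towns.csv" (columns: country, town) exists.
import csv
from collections import defaultdict

from flask import Flask, jsonify

app = Flask(__name__)

# Load the bulk data once at startup and index it by country.
towns_by_country = defaultdict(list)
with open("towns.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        towns_by_country[row["country"]].append(row["town"])

@app.route("/towns/<country>")
def towns(country):
    # Return the towns for one country as JSON.
    return jsonify({"country": country, "towns": towns_by_country.get(country, [])})

if __name__ == "__main__":
    app.run()
```

Anyone with the bulk file can run something like this and offer their own lookup endpoint – which is exactly why bulk access, rather than an API, is the baseline requirement.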

So having an API is not a requirement for data to be open – although of course it is great if one is available.

Moreover, it is perfectly fine for someone to charge for access to open data through an API – as long as they also provide the data for free in bulk. (Strictly speaking, the requirement isn’t that the bulk data is available for free but that the charge is no more than the extra cost of reproduction. For online downloads, that’s very close to free!) This makes sense: open data must be free but open data services (such as an API) can be charged for.

(It’s worth considering what this means for real-time data, where new information is being generated all the time, such as live traffic information. The answer here depends somewhat on the situation, but for open real-time data one would imagine a combination of bulk download access, and some way to get rapid or regular updates. For example, you might provide a stream of the latest updates which is available all the time, and a bulk download of a complete day’s data every night.)

Licensing and the public domain

Generally, when we want to know whether a dataset is legally open, we check to see whether it is available under an open licence (or that it’s in the public domain by means of a “dedication”).

However, it is important to note that it is not always clear whether there are any exclusive, intellectual-property-style rights in the data such as copyright or sui-generis database rights (for example, this may depend on your jurisdiction). You can read more about this complex issue in the Open Definition legal overview of rights in data. If there aren’t exclusive rights in the data, then it would automatically be in the public domain, and putting it online would be sufficient to make it open.

However, since this is an area where things are not very clear, it is generally recommended to apply an appropriate open licence – that way, if there are exclusive rights you’ve licensed them, and if there aren’t any rights you’ve not done any harm (the data was already in the public domain!).

More about openness coming soon

In coming days we’ll post more on the theme of explaining openness, including the relationship of the Open Definition to specific sets of principles for openness – such as the Sunlight Foundation’s 10 principles and Tim Berners-Lee’s 5 star system, why having a shared and agreed definition of open data is so important, and how one can go about “doing open data”.

Defining Open Data

Laura James - October 3, 2013 in Featured, Open Data, Open Definition, Open Knowledge Definition

Open data is data that can be freely used, shared and built-on by anyone, anywhere, for any purpose. This is the summary of the full Open Definition which the Open Knowledge Foundation created in 2005 to provide both a succinct explanation and a detailed definition of open data.

As the open data movement grows, and even more governments and organisations sign up to open data, it becomes ever more important that there is a clear and agreed definition for what “open data” means if we are to realise the full benefits of openness, and avoid the risks of creating incompatibility between projects and splintering the community.

Open can apply to information from any source and about any topic. Anyone can release their data under an open licence for free use by and benefit to the public. Although we may think mostly about government and public sector bodies releasing public information such as budgets or maps, or researchers sharing their results data and publications, any organisation can open information (corporations, universities, NGOs, startups, charities, community groups and individuals).

Read more about different kinds of data in our one-page introduction to open data.

There is open information in transport, science, products, education, sustainability, maps, legislation, libraries, economics, culture, development, business, design, finance …. So the explanation of what open means applies to all of these information sources and types. Open may apply both to data – big data and small data – and to content, like images, text and music!

So here we set out clearly what open means, and why this agreed definition is vital for us to collaborate, share and scale as open data and open content grow and reach new communities.

What is Open?

The full Open Definition provides a precise definition of what open data is. There are 2 important elements to openness:

  • Legal openness: you must be allowed to get the data legally, to build on it, and to share it. Legal openness is usually provided by applying an appropriate (open) license which allows for free access to and reuse of the data, or by placing data into the public domain.
  • Technical openness: there should be no technical barriers to using that data. For example, providing data as printouts on paper (or as tables in PDF documents) makes the information extremely difficult to work with. So the Open Definition has various requirements for “technical openness,” such as requiring that data be machine readable and available in bulk.

There are a few key aspects of open which the Open Definition explains in detail. Open Data is useable by anyone, regardless of who they are, where they are, or what they want to do with the data; there must be no restriction on who can use it, and commercial use is fine too.

Open data must be available in bulk (so it’s easy to work with) and it should be available free of charge, or at least at no more than a reasonable reproduction cost. The information should be digital, preferably available by downloading through the internet, and easily processed by a computer too (otherwise users can’t fully exploit the power of data – that it can be combined together to create new insights).

Open Data must permit people to use it, re-use it, and redistribute it, including intermixing with other datasets and distributing the results.

The Open Definition generally doesn’t allow conditions to be placed on how people can use Open Data, but it does permit a data provider to require that data users credit them in some appropriate way, make it clear if the data has been changed, or that any new datasets created using their data are also shared as open data.

There are 3 important principles behind this definition of open, which are why Open Data is so powerful:

  • Availability and Access: that people can get the data
  • Re-use and Redistribution: that people can reuse and share the data
  • Universal Participation: that anyone can use the data

Governance of the Open Definition

Since 2007, the Open Definition has been governed by an Advisory Council. This is the group formally responsible for maintaining and developing the Definition and associated material. Its mission is to take forward Open Definition work for the general benefit of the open knowledge community, and it has specific responsibility for deciding on what licences comply with the Open Definition.

The Council is a community-run body. New members of the Council can be appointed at any time by agreement of the existing members of the Advisory Council, and are selected for demonstrated knowledge and competence in the areas of work of the Council.

The Advisory Council operates in the open and anyone can join the mailing list.

About the Open Definition

The Open Definition was created in 2005 by the Open Knowledge Foundation with input from many people. The Definition was based directly on the Open Source Definition from the Open Source Initiative: we were able to reuse most of the well-established principles and practices that the free and open source community had developed for software, and apply them to data and content.

Thanks to the efforts of many translators in the community, the Open Definition is available in 30+ languages.

More about openness coming soon

In coming days we’ll post more on the theme of explaining openness, including a more detailed exploration of the Open Definition, the relationship of the Open Definition to specific sets of principles for openness – such as the Sunlight Foundation’s 10 principles and Tim Berners-Lee’s 5 star system, why having a shared and agreed definition of open data is so important, and how one can go about “doing open data”.

Protecting the foundations of Open Knowledge

Mike Linksvayer - February 13, 2013 in Open Definition, Open Knowledge Definition, Open Knowledge Foundation, Open Standards

The foundations of the Foundation

The Open Knowledge Definition (OKD) was one of the Open Knowledge Foundation’s very first projects: drafted in 2005, 1.0 in 2006. By stipulating what Open means, the OKD has been foundational to the OKF’s work, as illustrated by this several-years-old diagram of the Open Knowledge “stack”.

Knowing your foundations seems a must in any field, but even more so in an explosively growing and cross-disciplinary one. The OKD has kept the OKF itself on track, as it has started and facilitated dozens of projects over the last few years.

Burgeoning movements for open access, culture, data, education, government, and more have also benefited from a shared understanding of Open in the face of “openwashing” on the one hand, and lack of understanding on the other. In either case, when works and projects claimed or intended as Open are actually closed, society loses: closed doesn’t create an interoperable commons.

A selection of OKF blog posts from the past few years illustrates how the OKD plays a low-profile but essential role in setting the standard for Open in a variety of fields:

Recent developments

In 2008 an Advisory Council was inaugurated to steward the OKD and related definitions. I joined the council later in 2008, and recently agreed to serve as its chair for a year.

Since then we’ve discussed and provided feedback on intended-open licenses, in particular an Open Government License Canada proposal, iterated on an ongoing discussion about refinements needed in the next version of the OKD, and made our processes for approving licenses – as well as new council members – slightly more rigorous.

We’ve also taken the crucial step of adding new council members with deep expertise in Public Sector Information/Open Government Data, where we expect much of the “action” in Open and intended-open licenses in the next years to be. I’m very happy to welcome:

  • Baden Appleyard, National Programme Director at AusGOAL
  • Tariq Khokhar, Open Data Evangelist at the World Bank
  • Herb Lainchbury, Citizen, Developer and Founder of OpenDataBC.ca
  • Federico Morando, Managing Director at the Nexa Center
  • Andrew Stott, Former Director for Transparency and Digital Engagement and Co-Chair of the Open Government Data Working Group at the Open Knowledge Foundation.

While many of them will be well known to our readers, you can find their brief bios and websites on the Advisory Council page.

It is also time to thank three former council members for their service in years past:

  • Paul Jacobson
  • Rob Styles
  • John Wilbanks

Open movements will continue to grow rapidly (unless we fail miserably). You can help ensure we succeed splendidly! We could always use more help reviewing and providing feedback on licenses, but there are also roles for designers, programmers, translators, writers, and people committed to sound open strategy. See a recent get involved update for more.

Most of all, make sure your open access / culture / education / government / science project is truly open — OpenDefinition.org is a good place for you and your colleagues to start!

New open access recommendations ten years on from Budapest Open Access Initiative

Jonathan Gray - September 12, 2012 in Open Access, Open Data, Open Knowledge Definition, Open Science, Open Standards, Policy

The notion of open access – or making research freely usable by all, without cost or legal barriers – has been in the news quite a bit this year.

It received significant media coverage on the back of the so-called Academic Spring, and subsequent high profile activities and announcements in the UK, the US and the EU.

One of the most significant milestones for open access advocates in the recent past is the Budapest Open Access Initiative, an international conference which convened experts from around the world to build consensus around a shared definition of ‘open access’. It is widely referred to as one of the defining events in the history of open access advocacy.

Ten years after this event, a diverse group of academics, advocates, librarians, and legal and policy experts met in Budapest. Today the group has issued a series of new recommendations for the next ten years of open access.

Some of the prefatory remarks to the recommendations are worth quoting in full:

Today we’re no longer at the beginning of this worldwide campaign, and not yet at the end. We’re solidly in the middle, and draw upon a decade of experience in order to make new recommendations for the next ten years.

We reaffirm the BOAI “statement of principle,…statement of strategy, and…statement of commitment.” We reaffirm the aspiration to achieve this “unprecedented public good” and to “accelerate research, enrich education, share the learning of the rich with the poor and the poor with the rich, make this literature as useful as it can be, and lay the foundation for uniting humanity in a common intellectual conversation and quest for knowledge.”

We reaffirm our confidence that “the goal is attainable and not merely preferable or utopian.” Nothing from the last ten years has made the goal less attainable. On the contrary, OA is well-established and growing in every field. We have more than a decade’s worth of practical wisdom on how to implement OA. The technical, economic, and legal feasibility of OA are well-tested and well-documented.

Nothing in the last ten years makes OA less necessary or less opportune. On the contrary, it remains the case that “scientists and scholars…publish the fruits of their research in scholarly journals without payment” and “without expectation of payment.” In addition, scholars typically participate in peer review as referees and editors without expectation of payment. Yet more often than not, access barriers to peer-reviewed research literature remain firmly in place, for the benefit of intermediaries rather than authors, referees, or editors, and at the expense of research, researchers, and research institutions.

Finally, nothing from the last ten years suggests that the goal is less valuable or worth attaining. On the contrary, the imperative to make knowledge available to everyone who can make use of it, apply it, or build on it is more pressing than ever.

If you believe in open access, the following four sections are worth reading in detail – and contain lots of ideas on policy, licensing, infrastructure, sustainability, and advocacy.

Following are a couple of excerpts that might be of particular interest to readers of the OKFN’s blog.

Firstly, while there has been no shortage of debates about the legal and practical meaning of ‘open access’ and associated questions of licensing and strategy (resulting in various inflections: strong/weak, libre/gratis, green/gold, etc), the recommendations contain a clear endorsement of a strong conception of open access which only requires attribution with the CC-BY license (which is compliant with the Open Knowledge Foundation’s Open Definition):

2.1. We recommend CC-BY or an equivalent license as the optimal license for the publication, distribution, use, and reuse of scholarly work.

  • OA repositories typically depend on permissions from others, such as authors or publishers, and are rarely in a position to require open licenses. However, policy makers in a position to direct deposits into repositories should require open licenses, preferably CC-BY, when they can.

  • OA journals are always in a position to require open licenses, yet most of them do not yet take advantage of the opportunity. We recommend CC-BY for all OA journals.

  • In developing strategy and setting priorities, we recognize that gratis access is better than priced access, libre access is better than gratis access, and libre under CC-BY or the equivalent is better than libre under more restrictive open licenses. We should achieve what we can when we can. We should not delay achieving gratis in order to achieve libre, and we should not stop with gratis when we can achieve libre.

Secondly, they explicitly suggest that open access advocates should more closely coordinate with advocacy for other forms of openness:

The worldwide campaign for OA to research articles should work more closely with the worldwide campaigns for OA to books, theses and dissertations, research data, government data, educational resources, and source code.

If you’re interested in finding out more about the Open Knowledge Foundation’s open access activities you can join our open-access mailing list.

Open Data – Chennai

Lucy Chambers - August 23, 2012 in Events, OKF India, Open Knowledge Definition, Open Knowledge Foundation Local Groups, Open Spending, School of Data, Workshop

This is part 2 of 5 of the Open Data India Series. You can read the first post ‘Open Data – Bangalore’ on the OKFN blog.

Chennai, formerly Madras, is only a short train ride away from Bangalore. Laura and I hadn’t been intending to travel to Chennai on this trip, but a mail from Nithya Raman of Transparent Chennai, on learning that we would be in India at the time of their Open Data Camp, promised that ‘the enthusiasm of my team to learn would make you glad you came’. That sounded like a tempting offer, so Laura and I packed our bags and headed down the hill to the coast to lead a workshop on open data, and on what we had learnt from the previous two weeks in Bangalore…

Transparent Chennai & the Workshop

Transparent Chennai collects, creates, and disseminates maps, data, and research to support citizen advocacy, largely focussing on issues related to the urban poor. They were the first NGO on the trip to ask us how to open up data which they had originally obtained from governments through right-to-information requests and added value to, so that other people could benefit from their work. Via their website, you can build your own maps of Chennai with layers ranging from flyovers and special road projects, census data by ward, and slum information from the Slum Clearance Board, to the location of public toilets – data which they have meticulously compiled from various sources with their tiny, 6-person team. (More information on the data and the map layers.)

The Transparent Chennai team had put together a lively workshop with topics ranging from What is data? through Open Data and picking the correct visualisation for your data, to live mapping sessions. Sessions were delivered to an audience made up largely of NGOs, many of whom had travelled from far and wide to be there.

Questions and debate flowed about where the boundaries should be drawn with what should be made open, about licensing, and even about how and when to use specific services, such as OpenCorporates. We hope these discussions will continue.

For the benefit of those in the workshop, here is our presentation and some of the links we mentioned in answer to the questions:

  • OpenDefinition.org – the Open Definition, the underlying principle behind everything that we do.
  • DataHub.io – which we mentioned when explaining how we ourselves show the steps when working with data, to ensure that anyone can track and replicate our working.
  • Licensing questions. We were delighted to hear that some of the NGOs present in the workshop were considering openly licensing some of the data they had collected themselves and wanted to know which licence to pick. There are still lots of grey areas to iron out where derivative works from government data are concerned; for example, Transparent Chennai were not sure whether they could release government datasets to which they had added geographic markers under an open licence. For this type of question, our recommendation would be to drop the community of experts and lawyers a message via the Open Definition discussion list.

Oh, and yes, we were glad we came (very!). Thank you Nithya for the invite, and we look forward to hearing a lot more from Transparent Chennai!

Next stop in the Open Data in India series – Mumbai.

Announcing the Open Definition Licenses Service

Rufus Pollock - February 16, 2012 in Open Content, Open Data, Open Definition, Open Knowledge Definition, Open Standards, Our Work, WG Open Licensing

We’re pleased to announce a simple new service from the Open Knowledge Foundation as part of the Open Definition Project: the (Open) Licenses Service.


The service is ultra simple in purpose and function. It provides:

  • Information on licenses for open data, open content, and open-source software in machine readable form (JSON)
  • A simple web API that allows you to retrieve this information over the web — including using javascript in a browser via JSONP (see the sketch below)
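
As an illustration, here is a minimal sketch of consuming the service from Python. The endpoint path and the shape of the JSON (an object keyed by licence id, with a "title" field) are assumptions made for the example – check the service’s own documentation for the exact URLs and schema:

```python
# A sketch of reading licence metadata from the Licenses Service.
# The endpoint path and the JSON field names below are assumptions
# for illustration; consult the service documentation for the real ones.
import json
import urllib.request

URL = "http://licenses.opendefinition.org/licenses/groups/all.json"  # assumed path

with urllib.request.urlopen(URL) as response:
    licenses = json.load(response)

# Assuming the response is an object keyed by licence id, print each
# licence's human-readable title.
for license_id, info in licenses.items():
    print(license_id, "-", info.get("title"))
```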

In addition to the service there’s also:

What’s Included

There’s data on more than 100 open (and a few closed) licenses, including all OSI-approved open source licenses and all Open Definition conformant open data and content licenses. Also included are ‘generics’ — licenses representing a category (useful where a user does not know the exact license but knows, for example, that the material only requires attribution).

View all the licenses available »

In addition various generic groups are provided that are useful when constructing license choice lists, including non-commercial options, generic Public Domain and more. Pre-packaged groups include:

The source for all this material is a git licenses repo on GitHub. Not only does it provide another way to get the data, but it also means that if you spot an error, or have a suggestion for an improvement, you can file an issue on the GitHub repo or fork, patch and submit a pull request.

Why this Service?

The first reason is the most obvious: having a place to record license data in a machine readable way, especially for open licenses (i.e., for content and data, those conforming to the Open Definition; and for software, the Open Source Definition).

The second reason is to make it easier for other people to include license info in their own apps and services. Literally daily, new sites and services are being created that allow users to share or create content and data. But if there’s any intention for that data to get used and reused by others, it’s essential that the material gets licensed — and preferably, openly licensed.

By providing license data in a simple machine-usable, web friendly format we hope to make it easier for people to integrate license choosers — and good license defaults — into their sites. This will provide not only greater clarity, but also more open content and data — remember, no license usually means defaulting to the most restrictive, all rights reserved, condition.

Open Knowledge Definition translated into Telugu (తెలుగు)

Theodora Middleton - November 29, 2011 in Open Definition, Open Knowledge Definition

The following post is by Theodora Middleton, the OKF blog editor.

We are pleased to announce that the Open Knowledge Definition has now been translated into Telugu (తెలుగు), thanks to the hard work of Sridhar Gutam. You can find this at:

The definition has now been translated into 27 languages. If you’d like to translate the Definition into another language, or if you’ve already done so, please get in touch on our discuss list, or on info at okfn dot org.

Public Data Consultations: Making Open Data a Reality

Lucy Chambers - August 9, 2011 in Legal, News, Open Government Data, Open Knowledge Definition, Policy

This post is from Lucy Chambers, Community Coordinator at the Open Knowledge Foundation.

Earlier this month, the UK Government published the ‘Open Data Consultation Paper’. Its aim is to establish a “culture of openness and transparency in public services” and the Government is turning to the general public for their preferences on how this should be achieved.

This is an incredibly important opportunity to influence government policy on open data. So if you care about open data – make sure to make your voice heard!

From the Cabinet Office’s Website

> “We want to hear from everyone – citizens, businesses, public services themselves,
> and other interest groups – on how we can best embed a culture of openness and
> transparency in our public services.”

 

Francis Maude, quoted from the paper

> Our proposed approach is, fundamentally, about creating both ‘pull’ (a right to data) and
> ‘push’ (a presumption of publication). With these forces, we will begin to embed openness
> and transparency in how we run government. This consultation seeks your views on these
> ideas.

 

Participants from the general public are invited to voice their opinions on the following topics:

  • how we might enhance a ‘right to data’, establishing stronger rights for individuals, businesses and other actors to obtain data from public service providers
  • how to set transparency standards that enforce this right to data
  • how public service providers might be held to account for delivering open data
  • how we might ensure collection and publication of the most useful data
  • how we might make the internal workings of government and the public sector more open
  • how far there is a role for government to stimulate enterprise and market making in the use of open data.

More details on how to respond can be found below:

Send a written response to:

Open Data Consultation,
Transparency Team,
Efficiency and Reform Group,
Cabinet Office,
1 Horse Guards Road,
London SW1A 2HQ

Closing date for submissions is 27th October 2011

See also

Bulgarian translation of the Open Knowledge Definition (OKD)

Jonathan Gray - June 21, 2011 in OKF Projects, Open Definition, Open Knowledge Definition, WG Open Licensing

The following post is from Jonathan Gray, Community Coordinator at the Open Knowledge Foundation.

We are pleased to now have a Bulgarian translation of the Open Knowledge Definition thanks to Peio Popov. You can find this at:

If you’d like to translate the Definition into another language, or if you’ve already done so, please get in touch on our discuss list, or on info at okfn dot org.
