You are browsing the archive for Technical.

ORDF – the OKFN RDF Library

Rufus Pollock - July 2, 2010 in Bibliographica, Technical

Some months ago we started looking at how we might use an RDF store instead of a SQL database behind data-driven websites — of which OKF has several. The reasons have to do with making the data reusable in a better way than ad-hoc JSON APIs.

As we tend to program in Python and use the Pylons framework, we first considered alternatives like RDFAlchemy and SuRF. Both build on top of RDFLib and try to present a programming interface reminiscent of SQL-ORM middleware like SQLObject and SQLAlchemy. They assume a single database-like storage for the RDF data, and in some cases make assumptions about the form of the data itself.

One important thing that they do not directly handle is customised indexing — and triplestores vary widely in terms of how well certain types of queries will perform, if they are supported at all. Overall, using RDFAlchemy or SuRF didn’t seem like much of a gain over using RDFLib directly. So we started writing our own middleware which we’ve named ORDF (OKFN RDF Library).

Code and documentation are at:

ORDF Features and Structure

Key features of ORDF:

  • Open source and Python-based (builds on RDFLib)
  • Clean separation of functionality such as storage, indexing and the web frontend
  • Easy pluggability of different storage and indexing engines (all those supported by RDFLib, 4store, simple on-disk storage using pairtree, etc.)
  • Extensibility via messaging (we use RabbitMQ)
  • Built-in RDF “revisioning”: every set of changes to the RDF store is kept in a “changeset”. This enables provenance, roll-back and change reporting “out-of-the-box”

To illustrate how this works, here’s a diagram showing a write operation in ORDF using most of the features described above. Below we go into detail describing how it all works.

Write operations in ORDF diagram

Forward Compatibility with RDFLib

The ORDF middleware solves several problems. The first, and most mundane, is to paper over the significant API changes between versions 2.4.2 and 3.0.0 of RDFLib. RDFLib moved a number of modules around, which breaks code: statements like from rdflib import Graph need to be changed to from rdflib.graph import Graph. So the first thing ORDF does is let you write from ordf.graph import Graph, which will work no matter which version of RDFLib you have installed. This matters because the changes in 3.0.0 run deeper than module renaming: some software, such as the FuXi reasoner and anything that uses the SPARQL query language, does not yet work well with the new version. ORDF thus provides a forward compatibility layer, so that software developed with it should continue to work once the newer RDFLib stabilises.
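The shim can be sketched generically: a helper that tries a list of import paths in order and returns the first that resolves. This is a sketch of the idea only; ordf.graph's actual implementation may differ.

```python
import importlib

def import_first(*candidates):
    """Return the first attribute importable from a list of dotted
    paths, tried in order. A sketch of the compatibility-shim idea;
    ordf.graph's real code may differ."""
    for path in candidates:
        module_name, _, attr = path.rpartition(".")
        try:
            module = importlib.import_module(module_name)
            return getattr(module, attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError("none of %r could be imported" % (candidates,))

# With RDFLib installed, a Graph class working under both the 2.4.x
# and 3.0.0 layouts could then be obtained as:
# Graph = import_first("rdflib.graph.Graph", "rdflib.Graph")
```

The same pattern works for any module that gets reshuffled between releases, which is why it makes a serviceable forward compatibility layer.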

Pylons Support

Only slightly less mundane: ORDF includes code, common to web applications built with the Pylons framework, for accessing the ORDF facilities. This means controllers for obtaining copies of graphs in various serialisations and for implementing a SPARQL endpoint.

Indices and Message Queues

Then we have indexes and queueing. Named graphs, the moral equivalent of objects in the SQL-ORM world, are stored in more than one place to facilitate different kinds of queries:

  • The pairtree filesystem index, which is good for retrieving a graph if you know its name and simply stores it as a file in a specialised directory hierarchy on the disk. This is not good for querying but is pretty much future-proof — at least as long as it is possible to read a file from the disk.
  • An RDFLib-supported storage, suitable for small to medium-sized datasets: it does not depend on any external software and allows SPARQL queries over the data for graph traversal operations.
  • The 4store quad-store, which fulfils a similar role for larger datasets: it allows SPARQL queries, but requires an additional piece of software running (possibly on a cluster for very large datasets) and is somewhat harder to set up.
  • A Xapian full-text search index, which allows free-form queries over text strings, something that no triplestore does very well.
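The pairtree layout is easy to picture: the graph name is split into short segments that become nested directories. A minimal sketch of the idea follows; real pairtree libraries also escape characters that are unsafe in filenames.

```python
from pathlib import Path

def pairtree_path(root, identifier):
    """Map an identifier to a nested directory path by splitting it
    into two-character pairs, in the style of pairtree storage.
    Simplified: no escaping of filesystem-unsafe characters."""
    pairs = [identifier[i:i + 2] for i in range(0, len(identifier), 2)]
    return Path(root).joinpath(*pairs)

path = pairtree_path("store", "abcde")  # store/ab/cd/e (POSIX form)
```

Because retrieval is just a path lookup, a graph can always be recovered as long as the filesystem itself can be read, which is the future-proofing claim above.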

There are plans for further storage back-ends, specifically using Solr as well as other triplestores such as Jena and Virtuoso.

A key element of this indexing architecture is that it is distributed. Whilst you can configure all of these index types into a single running program — and it is common to do so for development — in reality some indexing operations are expensive and you don’t necessarily want the client program sitting and waiting while they are done synchronously. So there is also a pseudo-index that sends graphs to a RabbitMQ messaging server, and for each index a daemon is run that listens to a queue on a fan-out exchange.

Introducing a layer of message queueing also makes it possible to support inferencing, the process of deriving new statements from the given data. This is considerably more computationally expensive than mere indexing. It is accomplished using two queues. When a graph is saved, it is first put on a queue conventionally called reason. The FuXi reasoner listens to that queue, computes some new statements (this is known in the literature as a production rule or forward-chaining system), and then puts the resulting, augmented graph onto a queue called index, and thence to the indexers.
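The reason/index hand-off can be illustrated in-process, with ordinary queues standing in for the AMQP exchanges. The names and the toy inference rule below are illustrative only, not ORDF's or FuXi's actual API.

```python
from queue import Queue

# Toy stand-ins for the RabbitMQ "reason" and "index" queues.
reason_q, index_q = Queue(), Queue()

def reasoner(graph):
    """Stand-in for FuXi: forward-chain one toy rule over a graph
    modelled as a set of triples. If X subClassOf Y and s type X,
    infer s type Y."""
    subclass = {(s, o) for s, p, o in graph if p == "subClassOf"}
    inferred = {(s, "type", sup)
                for s, p, cls in graph if p == "type"
                for sub, sup in subclass if sub == cls}
    return graph | inferred

def save(graph):
    # Writers only ever touch the reason queue...
    reason_q.put(graph)

save({("book1", "type", "Book"), ("Book", "subClassOf", "Work")})

# ...while daemons drain it, reason, and forward to the index queue.
while not reason_q.empty():
    index_q.put(reasoner(reason_q.get()))

augmented = index_q.get()
```

The point of the two-queue shape is that the expensive step (reasoning) happens exactly once, and every indexer downstream sees the already-augmented graph.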

Ontology Logic

Until recently there was only one piece of ontology-specific behaviour coded into ORDF: the ChangeSet. It is still important. It provides low-level, per-statement provenance and change-history information, built into the system. A save operation on a graph is accomplished by obtaining a change context, adding one or more graphs to it, then committing the changes. Before the graphs are sent out for indexing, reasoning or queueing, a copy of the previous version of each graph is obtained (usually from pairtree storage) and the differences are calculated. These differences, along with some metadata, make up the ChangeSet, which is saved and indexed along with the graphs themselves. This accomplishes what we call Syntactic Provenance, because it operates at the level of individual statements.
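In miniature, with graphs modelled as sets of triples, the diff step looks like this. This is a sketch of the idea only, not ORDF's ChangeSet API, which stores the result as RDF with its own vocabulary.

```python
def changeset(previous, current, **meta):
    """Statement-level diff between two versions of a graph: the
    core of the ChangeSet idea. Sketch only; real changesets carry
    provenance metadata and are themselves RDF."""
    return {
        "removals": previous - current,
        "additions": current - previous,
        "meta": meta,
    }

old = {("book1", "title", "Pride & Prejudice")}
new = {("book1", "title", "Pride and Prejudice"),
       ("book1", "creator", "Austen")}

cs = changeset(old, new, reason="fix title", who="editor")
```

Because the diff is symmetric, replaying removals and additions in reverse gives roll-back for free, which is how per-statement revisioning falls out of the design.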

Lately several more modules have been added to support other vocabularies. The work on the Bibliographica project led to the introduction of the OPMV vocabulary for Semantic Provenance. This is used to describe the way a data record from an external source (in this case MARC data) is transformed by a Process into a collection of statements or graph, and the way other graphs are derived from this first one. This is a distinct problem from Syntactic Provenance since it deals with the relationships between entities or objects and not simply add/remove operations on their attributes.

Another addition is the ORE Aggregation vocabulary, which is also used in Bibliographica. Since, in our system, distinct entities or objects are stored as named graphs, we want to avoid duplicating data in places where it should not be. For example, a book might have an author, and users ultimately want to see the author’s name when they are looking at data about the book. But we do not want to store the author’s details in the book’s graph, because then, if someone noticed and corrected an error, it would have to be corrected both in the author’s graph and in each of their books’ graphs. Better to keep such changes in one place. So what we actually do is create an aggregation. The aggregation contains (points at, aggregates) the book and author graphs, and also includes a pointer to some information on how to display it.
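The shape of this can be sketched with plain data structures. The graph identifiers, the `display` pointer and the `resolve` helper are all invented for illustration; they are not ORDF's actual API.

```python
# Each entity lives in exactly one named graph (modelled as a set of triples).
graphs = {
    "author/austen": {("author/austen", "name", "Jane Austen")},
    "book/pride":    {("book/pride", "creator", "author/austen")},
}

# The aggregation stores no copies, only pointers, plus a display hint.
aggregation = {
    "aggregates": ["book/pride", "author/austen"],
    "display": "templates/book.html",  # hypothetical display pointer
}

def resolve(agg):
    """Union the aggregated graphs for display. A correction made in
    the author's graph is picked up everywhere it is aggregated."""
    merged = set()
    for name in agg["aggregates"]:
        merged |= graphs[name]
    return merged
```

Displaying a book then means resolving its aggregation, while edits still land in a single authoritative graph.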

More to come on the concrete implementation of ontology-specific behaviour, MARC processing and Aggregations in a follow-up post on Bibliographica.

Next Steps

There is much more ontology-specific work to be done. First on the list is an implementation in Python of the Fresnel vocabulary that is used to describe how to display RDF data in HTML. It is more a set of instructions than a templating language and we have already written an implementation in JavaScript. It is crucial, however, that websites built with ORDF do not rely on JavaScript for presentation and we should rely on custom templates as little as possible.

ORDF is now stable enough to start using in other projects, at least within the OKF family. A first and fairly easy case will be updating the RDF interface to CKAN to use it — fitting as ORDF actually started out as a refactor of that very codebase.

CKAN v1.0 Released

Rufus Pollock - May 18, 2010 in CKAN, OKI Projects, Releases, Technical

We are pleased to announce the availability of version 1.0 of the CKAN software, our open source registry system for datasets (and other resources). After three years of development, twelve point releases and several successful production deployments around the world, CKAN has come of age!

CKAN around the world

As well as powering our own registry, CKAN is now helping run seven data catalogues around the world, including ones in Canada, Germany and Norway. The registry has also continued to grow steadily and now has over 940 registered packages:


This is our largest release so far (56 tickets) with lots of new features and improvements. Main highlights (for a full listing of tickets please see the trac milestone):

  • Package edit form: new pluggable architecture for custom forms (#281, #286)
  • Package revisions: diffs now include tag, license and resource changes (#303)
  • Web interface: visual overhaul (#182, #206, #214-#227, #260) including a tag cloud (#89)
  • i18n: completion in Web UI – now covers package edit form (#248)
  • API extended: revisions (#251, #265), feeds per package (#266)
  • Developer documentation expanded (#289, #290)
  • Performance improved and CKAN stress-tested (#201)
  • Package relationships (Read-Write in API, Read-Only in Web UI) (#253-257)
  • Statistics page (#184)
  • Group edit: add multiple packages at once (#295)
  • Package view: RDF and JSON formatted metadata linked to from package page (#247)


  • Resources revision history (#292)
  • Extra fields now work with spaces in the name (#278, #280) and international characters (#288)
  • Updating resources in the REST API (#293)


  • Licenses: now uses external License Service (‘licenses’ Python module)
  • Changesets introduced to support distributed revisioning of CKAN data – see doc/distributed.rst for more information.


Lastly a big thank-you to everyone who has contributed to this release and especially to the folks at!

Public Domain Calculators at Europeana

Guest - May 12, 2010 in COMMUNIA, External, OKI Projects, Open Knowledge Foundation, Public Domain, Public Domain Works, Technical, WG Public Domain, Working Groups

The following guest post is from Christina Angelopoulos at the Institute for Information Law (IViR) and Maarten Zeinstra at Nederland Kennisland who are working on building a series of Public Domain Calculators as part of the Europeana project. Both are also members of the Open Knowledge Foundation’s Working Group on the Public Domain.

Europeana Logo

Over the past few months the Institute for Information Law (IViR) of the University of Amsterdam and Nederland Kennisland have been collaborating on the preparation of a set of six Public Domain Helper Tools as part of the EuropeanaConnect project. The Tools are intended to assist Europeana data providers in determining whether or not a certain work or other subject matter vested with copyright or neighbouring rights (related rights) has fallen into the public domain and can therefore be freely copied or re-used. They do this by functioning as a simple interface between the user and the often complex set of national rules governing the term of protection. The issue is significant for Europeana, as contributing organisations will be expected to clearly mark the material in their collections as being in the public domain, through the attachment of a Europeana Public Domain Licence, whenever possible.

The Tools are based on six National Flowcharts (Decision Trees) built by IViR on the basis of research into the duration of protection of subject matter in which copyright or neighbouring rights subsist, in six European jurisdictions (the Czech Republic, France, Italy, the Netherlands, Spain and the United Kingdom). By means of a series of simple yes-or-no questions, the Flowcharts are intended to guide the user through all the issues relevant to determining the public domain status of a given item.

Researching Copyright Law

The first step in the construction of the flowcharts was the careful study of the EU Term Directive, which attempts to harmonise the rules on the term of protection of copyright and neighbouring rights across EU Member States. The rules of the Directive were integrated by IViR into a set of Generic Skeleton European Flowcharts. Given the essential role that the Term Directive has played in shaping national laws on the duration of protection, these generic charts functioned as the prototype for the six National Flowcharts. An initial version of the Generic European Flowchart, as well as the National Flowcharts for the Netherlands and the United Kingdom, was put together with the help of the Open Knowledge Foundation at a Communia workshop in November 2009.

Further information necessary for the refinement of these charts as well as the assembly of the remaining four National Flowcharts was collected either through the collaboration of National Legal Experts contacted by IViR (Czech Republic, Italy and Spain) or independently through IViR’s in-house expertise (EU, France, the Netherlands and the UK).

Both the Generic European Flowcharts and the National Flowcharts have been split into two categories: one dedicated to the rules governing the duration of copyright and the sui generis database right and one dedicated to the rules governing neighbouring rights. Although this division was made for the sake of usability and in accordance with the different subject matter of these categories of rights (works of copyright and unoriginal databases on the one hand and performances, phonograms, films and broadcasts on the other), the two types of flowcharts are intended to be viewed as connected and should be applied jointly if a comprehensive conclusion as to the public domain status of an examined item is to be reached (in fact the final conclusion in each directs the user to the application of the other). This is due to the fact that, although the protected subject matter of these two categories of rights differs, they may not be entirely unrelated. For example, it does not suffice to examine whether the rights of the author of a musical work have expired; it may also be necessary to investigate whether the rights of the performer of the work or of the producer of the phonogram onto which the work has been fixated have also expired, in order to reach an accurate conclusion as to whether or not a certain item in a collection may be copied or re-used.

Legal Complexities

A variety of legal complexities surfaced during the research into the topic. Condensing the complex rules that govern the term of protection in the examined jurisdictions into a user-friendly tool presented a substantial challenge. One of the most perplexing issues was that of the first question to be asked. Rather than engage in complicated descriptions of the scope of the subject matter protected by copyright and related rights, IViR decided to avoid this can of worms. Instead, the flowchart’s starting point is provided by the question “is the work an unoriginal database?” However, this solution seems unsatisfactory and further thought is being put into an alternative approach.

Other difficult legal issues encountered include the following:

  • Term of protection vis-à-vis third countries
  • Term of protection of works of joint authorship and collective works
  • The term of protection (or lack thereof) for moral rights
  • Application of new terms and transitional provisions
  • Copyright protection of critical and scientific publications and of non-original photographs
  • Copyright protection of official acts of public authorities and other works of public origins (e.g. legislative texts, political speeches, works of traditional folklore)
  • Copyright protection of translations, adaptations and typographical arrangements
  • Copyright protection of computer-generated works

On the national level, areas of uncertainty related to such matters as the British provisions on the protection of films (no distinction is made under British law between the audiovisual or cinematographic work and its first fixation, contrary to the system applied on the EU level) or exceptional extensions to the term of protection, such as that granted in France due to World Wars I and II or in the UK to J.M. Barrie’s “Peter Pan”.

Web based Public Domain Calculators

Once the Flowcharts had been prepared they were translated into code by IViR’s colleagues at Kennisland, thus resulting in the creation of the current set of six web-based Public Domain Helper Tools.

Technically, the flowcharts needed to be translated into formats that computers can read. For this project Kennisland chose an Extensible Markup Language (XML) approach for describing the questions in the flowcharts and the relations between them. The resulting XML documents are both human- and computer-readable. Using XML also allowed Kennisland to keep the decision structure separate from the actual programming language, which makes maintenance of both content and code easier.

Kennisland then needed to build an XML reader that could translate the structures and questions of these XML files into a questionnaire, or apply a set of data to the available questions so as to make automatic calculation over large datasets possible. For the EuropeanaConnect project Kennisland developed two such XML readers. The first translates the XML schemes into a graphical user interface tool (this can be found at EuropeanaLabs); the second can automatically determine the status of a work, and resides in the Public Domain Works project’s Mercurial repository on KnowledgeForge. Both applications are open source and we encourage people to download, modify and work on these tools.
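The idea of such a reader can be sketched with the standard library. The element and attribute names below are invented for illustration and will certainly differ from the actual EuropeanaConnect schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical flowchart fragment: questions with yes/no branches that
# either lead to another question or terminate in an outcome.
FLOWCHART = """
<flowchart start="q1">
  <question id="q1" text="Is the work an unoriginal database?">
    <yes next="q2"/>
    <no outcome="apply copyright flowchart"/>
  </question>
  <question id="q2" text="Substantial investment in the last 15 years?">
    <yes outcome="sui generis database right may subsist"/>
    <no outcome="public domain"/>
  </question>
</flowchart>
"""

def run(xml_text, answers):
    """Walk the yes/no tree using a dict of answers keyed by question id.
    The same tree could equally drive an interactive questionnaire."""
    root = ET.fromstring(xml_text)
    questions = {q.get("id"): q for q in root.findall("question")}
    node = questions[root.get("start")]
    while True:
        branch = node.find("yes" if answers[node.get("id")] else "no")
        if branch.get("outcome"):
            return branch.get("outcome")
        node = questions[branch.get("next")]

result = run(FLOWCHART, {"q1": True, "q2": False})  # "public domain"
```

Keeping the tree in XML means legal experts can amend questions without touching the reader, which is the maintenance benefit described above.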

It should be noted that, as part of Kennisland’s collaboration with the Open Knowledge Foundation, Kennisland is currently assisting in the development of a base XML schema for automatically determining the rights status of a work using bibliographic information. Unfortunately, however, this information alone is usually not enough for automatic identification at a European level. This is due to the many international treaties that have accumulated over the years; the rules change depending, for example, on whether an author was born in a country party to the Berne Convention, an EU Member State or a third country.

It should of course also be noted that there is a limit to the extent to which an electronic tool can replace a case-by-case assessment of the public domain status of a copyrighted work or other protected subject matter in complicated legal situations. The Tools are accordingly accompanied by a disclaimer indicating that they cannot offer an absolute guarantee of legal certainty.

Further fine-tuning is necessary before the Helper Tools are ready to be deployed. For the moment test versions of the electronic Tools can be found here. We invite readers to try these beta tools and give us feedback on the pd-discuss list!

Note from the authors: if the whole construction process for the Flowcharts has highlighted one thing, it is the bewildering complexity of the current rules governing the term of protection for copyright and related rights. Despite the Term Directive’s attempts at creating a level playing field, national legislative idiosyncrasies are still going strong in the post-harmonisation era; a single European term of protection remains very much a chimera. Nor are the relevant rules simple at the level of individual Member States. In countries such as the UK and France in particular, the term of protection is currently governed by confusing entanglements of rules and exceptions that make confident calculation of the term almost impossible for a copyright layperson, and difficult even for experts.

PD Calculators

Generic copyright flowchart by Christina Angelopoulos. PDF version available from Public Domain Calculators wiki page

CKAN 0.11 Released

Rufus Pollock - February 12, 2010 in CKAN, OKI Projects, Open Knowledge Foundation, Releases, Technical

We are pleased to announce the release of version 0.11 of the CKAN software, our open source registry of open data.

CKAN tag cloud

This is our biggest release so far (55 tickets) with lots of new features and improvements. This release also saw a major new production deployment, with the CKAN software powering a new data catalogue that had its public launch on Jan 21st!

Main highlights (for a full listing of tickets please see the trac milestone):

  • Package Resource object (multiple download urls per package): each package can have multiple ‘resources’ (urls) with each resource having additional metadata such as format, description and hash (#88, #89, #229)
  • “Full-text” searching of packages (#187)
  • Semantic web integration: RDFization of all data plus integration with an online RDF store (e.g. a Talis store) (#90, #163)
  • Package ratings (#77 #194)
  • i18n: we now have translations into German and French, with corresponding deployments (#202)
  • Package diffs available in package history (#173)
  • Minor:
    • Package undelete (#21, #126)
    • Automated CKAN deployment via Fabric (#213)
    • Listings are sorted alphabetically (#195)
    • Add extras to rest api and to ckanclient (#158 #166)
  • Infrastructural:
    • Change to UUIDs for revisions and all domain objects
    • Improved search performance and better pagination
    • Significantly improved performance in API and WUI via judicious caching

New report on sharing aid information is now open for comments

Jonathan Gray - September 21, 2009 in News, OKI Projects, Open Data, Open Knowledge Definition, Open Knowledge Foundation, Open Standards, Technical, WG Development

We’re pleased to announce the publication of a new report, Unlocking the potential of aid information. The report, by the Open Knowledge Foundation and Aidinfo, looks at how to make information related to international development (i) legally open, (ii) technically open and (iii) easy to find.

The report and relevant background information can be found at:

It aims to inform the development of a new platform for publishing and sharing aid information:

The International Aid Transparency Initiative (IATI) aims to improve the availability and accessibility of aid information by designing common standards for the publication of information about aid. It is not about creating another database of aid activities, but about creating a platform that will enable existing databases, and potential new services, to access this aid information and create compelling applications providing more detailed, timely and accessible information about aid.

The idea of openness is crucial to creating this platform and achieving transparency. Information must be openly available, with as few restrictions as possible on how it is accessed and used. To this end, we need to design a technical architecture that enables information to be published and accessed in an open way.

There are three main recommendations in the report, which are as follows:

  • Recommendation 1 – Aid information should be legally open. The standard should specify a core set of standard licenses under which aid information may be published. It should require that either:
    • (i) information is published under one of a small number of recommended options:
      • Licenses for content: Creative Commons Attribution or Attribution Sharealike license
      • Legal tools for data: Open Data Commons Public Domain Dedication and License (PDDL), Open Data Commons Open Database License (ODbL) or Creative Commons CC0
    • or that (ii) information is published using a license/legal tool that is compliant with a standard such as the Open Knowledge Definition.
  • Recommendation 2 – Aid information should be technically open. The standard should require that raw data is made available in bulk (not just via an API or web interface) with any relevant schema information and either:
    • (i) in one of a small number of recommended formats:
      • Text: HTML, ODF, TXT, XML
      • Data: CSV, XML, RDF/XML
    • or (ii) in a format:
      • (a) which is machine readable and
      • (b) for which the specification is publicly and freely available and usable
  • Recommendation 3 – Aid information should be easily findable. The standard should require that aid organisations add their knowledge assets to a registry with some basic metadata describing the information.

We are now welcoming comments on the report until Sunday 1st November 2009. To submit comments you can:

  1. Directly annotate the documents with your comments:
  2. Submit your comments for discussion on the open development mailing list.
  3. Email your comments to info at okfn dot org.

CKAN 0.9 Released

Guest - August 13, 2009 in CKAN, News, OKI Projects, Open Knowledge Foundation, Releases, Technical

We are pleased to announce the release of CKAN version 0.9! CKAN is the Comprehensive Knowledge Archive Network, a registry of open knowledge packages and projects.

Changes include:

  • Add version attribute for package
  • Fix purge to use new version of Versioned Domain Model (vdm) (0.4)
  • Link to changed packages when listing revision
  • Show most recently registered or updated packages on front page
  • Bookmarklet to enable easy package registration on CKAN
  • Usability improvements (package search and creation on front page)
  • Use external list of licenses from license repository
  • Convert from py.test to nosetests

There are now over 560 packages in the registry – which means that on average we’ve been adding a package a day since version 0.8 was released in May!

Open Data and the Semantic Web Workshop, London, 13th November 2009

Jonathan Gray - August 5, 2009 in Events, Open Data, Open Knowledge Foundation, Technical

Linking Open Data cloud

We’re currently organising a workshop on ‘open data and the semantic web’, which will take place in London this autumn. Details are as follows:

  • When: Friday 13th November 2009, 1000-1800
  • Where: London Knowledge Lab, 23-29 Emerald Street, London, WC1N 3QS. (See map)
  • Wiki:
  • Participation: Attendance is free. If you are planning to come along please add your name to the wiki.
  • Microbloggers: See notices on and Twitter

Further details:

Semantic web technologists and advocates are increasingly beginning to see the value of ‘open data’ for the data web. Tim Berners-Lee has spoken about the importance of open data, and being able to access raw data in easy to use formats, and the Linking Open Data project demonstrates what can be done by linking together a rich variety of publicly re-usable datasets.

This informal, hands-on workshop will bring together researchers, technologists, and people interested in open data and the semantic web from both public and private sector organisations for a day of talks and discussions.

Themes will include:

  • Linking Open Data
  • Legal tools for open data
  • Finding open data

Data.gov goes live!

Jonathan Gray - May 22, 2009 in News, Open Data, Policy, Technical

The US government’s new site, Data.gov (which we blogged about last month), is now live!

There is currently a selection of core datasets available, from information about World Copper Smelters to results from the Residential Energy Consumption Survey. Raw data is available in XML, Text/CSV, KML/KMZ, Feeds, XLS or ESRI Shapefile formats. As well as exploring and downloading the data, you can also suggest datasets that you’d like to see added!

From the site:

As a priority Open Government Initiative for President Obama’s administration, Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Data.gov provides descriptions of the Federal datasets (metadata), information about how to access the datasets, and tools that leverage government datasets. The data catalogs will continue to grow as datasets are added. Federal, Executive Branch data are included in the first version of Data.gov.

The launch is a major milestone in the Obama administration’s Open Government Initiative. To mark the occasion, Sunlight Labs, Google, O’Reilly Media, and TechWeb have launched Apps for America 2 – inviting proposals for open source mashups, visualisations or other innovative re-uses of material from Data.gov.

You can watch a video of Vivek Kundra, the US’s CIO, talking about Data.gov on YouTube.

Great news for open government data – and open data in general!

Launch of Open Data Grid

Jonathan Gray - May 13, 2009 in News, OKI Projects, Open Data, Open Knowledge Foundation, Technical

storage facility 37 by sevensixfive

In the last couple of months we’ve had several threads on the okfn-discuss list about distributed storage for open data (see here and here).

Last month we started a distributed storage project, aiming to provide distributed storage infrastructure for OKF and other open knowledge projects.

After researching various technical options, we’ve launched an Open Data Grid based on Allmydata’s open-source “Tahoe” system at:

Anyone can store open data on the grid, or start running a storage node. For more details see the readme. If you’d like to comment on the service feel free to post on the okfn-discuss list!
