Published by the European Union on 26 June 2019, the revised directive on open data and the re-use of public sector information – or PSI Directive – sets out updated rules relating to public sector documents, publicly funded research data and “high-value” datasets, which should be made available for free via application programming interfaces (APIs).

EU member states have until July 2021 to transpose the directive into national law.

While the Open Knowledge Foundation is encouraged by some of the new provisions, we have concerns – many of which we laid out in a 2018 blogpost – about missed opportunities for further progress towards a fair, free and open future across the EU.

[Image: open data stickers]

Lack of public input

Firstly, the revised directive hands responsibility for choosing which high-value datasets to publish to member states, but establishes no mechanisms for the public to provide input into those decisions.

Broad thematic categories – geospatial; earth observation and environment; meteorological; statistics; companies and company ownership; and mobility – are set out for these datasets but the specifics will be determined over the next two years via a series of further implementing acts.

Datasets eventually deemed to be high-value shall be made “available free of charge … machine readable, provided via APIs and provided as a bulk download, where relevant”.
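To make that requirement concrete, here is a minimal sketch of how a re-user might consume such a dataset, assuming a hypothetical portal that exposes both a JSON API and a bulk CSV download. The URLs, endpoint paths and field names below are illustrative, not taken from the directive or any real portal:

```python
import requests

# Hypothetical endpoints for a high-value dataset; real portals will differ.
API_URL = "https://data.example-portal.eu/api/v1/companies"
BULK_URL = "https://data.example-portal.eu/downloads/companies.csv"

# API access: fetch a filtered slice of the data as machine-readable JSON.
response = requests.get(API_URL, params={"country": "FR", "page": 1}, timeout=30)
response.raise_for_status()
records = response.json()

# Bulk download: stream the full dataset to disk for offline analysis.
with requests.get(BULK_URL, stream=True, timeout=30) as download:
    download.raise_for_status()
    with open("companies.csv", "wb") as outfile:
        for chunk in download.iter_content(chunk_size=65536):
            outfile.write(chunk)
```

The two modes serve different re-users: APIs suit applications that need fresh, filtered slices, while bulk downloads suit researchers and businesses analysing the whole dataset – which is why the directive asks for both where relevant.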

Although the European Commission drew on our Global Open Data Index to generate a preliminary list of high-value datasets, this decision flies in the face of years of findings from the Index showing how important it is for governments to engage with the public as much and as early as possible to raise awareness and increase reuse of open data.

We fear that this could lead to a further loss of public trust by opening the door for special interests, lobbyists and companies to make private arguments against the release of valuable datasets – such as spending records or beneficial ownership data, which is often highly disaggregated and allows monetary transactions to be linked to individuals.

Partial definition of high-value data

Secondly, defining the value of data is not straightforward. Papers from the University of Oxford, Open Data Watch and the Global Partnership for Sustainable Development Data demonstrate ongoing disagreement about what data’s “value” is. What counts as high-value data should not be based only on quantitative indicators such as potential income generation, breadth of business applications or numbers of beneficiaries – as the revised directive sets out – but also on qualitative assessments and expert judgment from multiple disciplines.

According to the latest Open Data Barometer report from our colleagues at the World Wide Web Foundation, less than a quarter of the data with the biggest potential for social impact is currently available as truly open data, even in countries seen as open data leaders. Why? Because “governments are not engaging enough with groups beyond the open data and open government communities”.

Lack of clarity on recommended licences

Thirdly, in line with the directive’s stated principle of being “open by design and by default”, we hope to see countries avoid future interoperability problems by complying with the requirement to use open standard licences when publishing these high-value datasets.

It’s good to see that the European Commission itself has recently adopted Creative Commons licences when publishing its own documents and data.

But we feel – in line with our friends at Communia – that the Commission should have made clear exactly which open licences it endorses under the updated directive, by explicitly recommending that member states adopt Open Definition-compliant licences from Creative Commons or Open Data Commons.

The directive also missed the opportunity to give preference to public domain dedication and attribution licences in accordance with the EU’s own LAPSI 2.0 licensing guidelines, as we recommended.

The European Data Portal indicates that up to 90 different licences may currently be in use by national, regional or municipal governments. Its quality assurance report also shows that the licences attached to the vast majority of datasets on EU countries’ open data portals cannot be detected automatically.

If the European Data Portal can’t work this out, the public definitely won’t be able to, meaning that efforts to use newly released data will be constrained by unnecessarily onerous reuse conditions. As our research has shown, the more complicated or bespoke the licensing, the more likely data will end up unused in silos.
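The detection problem is easy to reproduce. As a minimal sketch: many open data portals run CKAN, whose metadata API exposes a `license_id` field per dataset. The script below – with an illustrative portal URL, not a real one – counts how many datasets declare no machine-readable licence identifier at all:

```python
import requests

# Illustrative CKAN-style portal; substitute a real portal's base URL.
PORTAL = "https://data.example-portal.eu"

# CKAN's package_search action returns dataset metadata, including license_id.
resp = requests.get(
    f"{PORTAL}/api/3/action/package_search",
    params={"rows": 100},
    timeout=30,
)
resp.raise_for_status()
datasets = resp.json()["result"]["results"]

# Count datasets whose licence cannot be identified from the metadata alone.
missing = [d["name"] for d in datasets if not d.get("license_id")]
print(f"{len(missing)} of {len(datasets)} datasets declare no licence identifier")
```

When the licence field is empty or filled with free text rather than a standard identifier, neither aggregators nor re-users can determine their rights without reading each portal’s terms by hand.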

27 of the 28 EU member states may now have national open data policies and portals, but once datasets are discovered, it is likely that – in addition to confusing licensing – they will lack interoperability. While the EU has substantial programmes of work on interoperability under the European Interoperability Framework, these are not yet having a major impact on the interoperability of open datasets.

[Report: “Avoiding data use silos”, Open Knowledge Foundation]

More FAIR data

Finally, we welcome the provisions in the directive obliging member states to “[make] publicly funded research data openly available following the principle of open by default and compatible with FAIR principles” – that is, findable, accessible, interoperable and reusable. We know there is much work to be done, but we hope to see wide adoption of these rules and that the provisions allowing publicly funded data to be withheld on grounds of “confidentiality” or “legitimate commercial interests” will not be abused.

The next two years will be a crucial period to engage with these debates across Europe and to make sure that EU countries embrace the directive’s principle of openness by default to release more, better information and datasets to help citizens strive towards a fair, free and open future.


Stephen Abbott Pugh was content development manager for the Open Knowledge Foundation.