
Cease and desist by the German government for publishing a document received under FOI law

Stefan Wehrmeyer - January 24, 2014 in Access to Information, Featured

FragDenStaat.de screenshot

The German Federal Ministry of the Interior has sent a cease and desist order to the Freedom of Information (FOI) portal FragDenStaat.de for publishing a document received under the German federal FOI law. The document – a five-page study written by government staff – analyses a ruling by the German constitutional court in November 2011 which declared the 5% party quota for the European Parliament elections unconstitutional. The study concludes that setting any such quota would be unconstitutional according to the ruling. Despite this, a recent change in election law set the party quota at 3%.

When the study in question was received from the Ministry of the Interior through an FOI request on FragDenStaat.de, the ministry prohibited publication of the document by claiming copyright. FragDenStaat.de has decided to publish the document anyway to take a stand against this blatant misuse of copyright. The government sent a cease and desist letter shortly after. The Open Knowledge Foundation Germany as the legal entity behind FragDenStaat.de is refusing to comply with the cease and desist order, and is looking forward to a court decision that will strengthen freedom of speech, freedom of the press and freedom of information rights in Germany.

The German campaign site has all the documents and the press release.

We want to fight this case in court and need financial support. The organisation behind FragDenStaat.de is the Open Knowledge Foundation Germany, a German non-profit charitable organisation. Please donate via BetterPlace.org or with the following details:

Recipient: Open Knowledge Foundation Deutschland e.V.
IBAN: DE89830944950003009670
BIC: GENODEF1ETK

If you can’t spare money but can spare time, tell the European Commission to reform copyright and make government documents exempt from copyright.

Open Data Empowers Us to Answer Questions that Matter

Rufus Pollock - December 9, 2013 in Access to Information, Featured, Open Data, Open Knowledge Foundation

This article by Rufus Pollock, Founder and Director of the Open Knowledge Foundation, is cross-posted from “Telefonica Digital Hub” released on 5 December 2013.

Every day we face challenges – from personal ones, such as the quickest way to get to work or what we should eat, to global ones like climate change and how to sustainably feed and educate seven billion people. Here at the Open Knowledge Foundation we believe that opening up data – and turning that data into insight – can be crucial to addressing these challenges, and to building a society in which everyone – not just the few – is empowered with the knowledge they need to understand and effect change.


Open data and open knowledge are fundamentally about empowerment, about giving people – citizens, journalists, NGOs, companies and policy-makers – access to the information they need to understand and shape the world around them.

Through openness, we can ensure that technology and data improve science, governance, and society. Without it, we may see the increasing centralisation of knowledge – and therefore power – in the hands of the few, and a huge loss in our potential, individually and collectively, to innovate, understand, and improve the world around us.

Open Data is data that can be freely accessed, used, built upon and shared by anyone, for any purpose. With digital technology – from mobiles to the internet – increasingly everywhere, we’re seeing a data revolution. It’s a revolution both in the amount of data available and in our ability to use and share that data. And it’s changing everything we do – from how we travel home from work, to how scientists do research, to how governments set policy.

Now much of that data is personal: data about you and what you do – what you buy (your loyalty card, your bank statements), where you go (your mobile location or the apps you’ve installed) or who you interact with online (Facebook, Twitter etc). That data should never be “open”, freely accessible to anyone – it’s your data, and you should control who has access to it and how it is used.

But there’s a lot of data that isn’t personal. Data like the government’s budget, or road maps, or train times, or what’s in that candy bar, or where those jeans were made, or how much carbon dioxide was produced last year … Data like this could and should be open if the governments and corporations who control it can be persuaded to unlock it.

And that’s what we’ve been doing at the Open Knowledge Foundation for the last decade: working to get governments and corporations to unlock their data and make it open.

We’re doing this because of the power of open data to unleash innovation, creativity and insight. It has potential to empower anyone – whether it is an entrepreneur, an activist or a researcher – to get access to information and use it as they see fit. For example, citizens in Ghana using data on mining to ensure they get their fair share of tax revenues to pay for local schools and hospitals, or a startup like Open Healthcare UK using drug prescription data released by the UK government to identify hundreds of millions of pounds of savings for the health services.

It’s key to remember here that real impact doesn’t come directly from open data itself – no one’s life is immediately improved by a new open data initiative or an additional open dataset. Data has to be turned into knowledge, information into insight – and someone has to act on that knowledge.

To do that takes tools and skills – tools for processing, analysing and presenting data, and the skills to use them. That is why this is another key area of the Open Knowledge Foundation’s work. With projects like the School of Data we’re working to teach data skills to those who need them most, and in Open Knowledge Foundation Labs we’re creating lightweight tools to help people use data more easily and effectively.

Finally, it’s about people: the people who use data, and the people who use the insights from that data to drive change. We need to create a culture of “open data makers”, people able and ready to make apps and insights with open data. We need to connect open data with those who have the best questions and the biggest needs – a healthcare worker in Zambia, the London commuter travelling home – and go beyond the data geeks and the tech savvy.

Image “Neon Sign Open” by Justin Cormack, CC-BY

New petition to fix the EU lobby register

Jonathan Gray - November 8, 2013 in Access to Information, Open Data, Policy

The Alliance for Lobbying Transparency and Ethics Regulation (ALTER-EU), a coalition of over 200 civil society groups concerned about the effects of corporate lobbying on the EU (including the Open Knowledge Foundation), has recently launched a petition to fix the EU’s official register of lobbyists.

The current register is voluntary, incomplete and unreliable – giving only a small glimpse of the activities of big lobbyists in Brussels. ALTER-EU produced a detailed report earlier this year looking at what is wrong with the current register, and how to fix it.

This is an excellent opportunity for the EU to demonstrate its commitment to the principle that official information can be used to strengthen democracy and the public accountability of European institutions (which I wrote about on the EU digital agenda blog a couple of weeks ago).

If you want to see the EU increasing lobbying transparency and fixing the register, we strongly encourage you to sign and share the petition!

If you’re interested in pushing for greater lobbying transparency in your country, you can also join our recently launched global working group on lobbying transparency, which we’re co-hosting with the Sunlight Foundation.

What’s the point of open data?

Martin Tisne - September 17, 2013 in Access to Information, Open Data, Open Government Data

I’ve been puzzling for a while how the open data community can help the many great groups that have been fighting for transparency of key money flows for the past decade and more. I think one answer may be that open data helps us go beyond simply making information available. If done well, it can help us make it accessible and relevant to people, which has been the holy grail for transparency advocates for a long time.

The transparency community has focused too much on just getting information out there (making information available). But what’s the point of having information available if it’s not accessible? What’s the use of public reports that are only nominally ‘public’ because they languish in filing cabinets or ‘PDF deserts’ hidden within an obscure website?

If we can make this information more accessible, we can then work to increase participation and help people use it. This, for me, is what open data people are talking about when they talk about open formats. Machine readability and open formats matter because they are tools to increase access. I’ve seen too many techies talk about ‘open formats’ while activists’ eyes glaze over. But I think we’re both talking about the same thing we hold dear: improving access to vital data for all.

Likewise, it’s the connections between datasets that are powerful and interesting. You may not care much where most people under 15 years old live in your country, but if you’re told that those who live close to a nuclear waste disposal site happen to have the highest cancer rates, then it becomes seriously relevant. As above, techies often talk about technical data standards and get quizzical – at best sceptical – looks in exchange. But technical data standards are the fuel that allows policy wonks to compare datasets, which is what makes data relevant. Connecting the dots makes it policy relevant – without comparable data, you can’t make informed policy.

[availability of data] => [accessibility of data] => [comparability of data]

[availability of data] => [open formats] => [data standards]
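
The chain above can be made concrete with a toy example in Python (all region codes and numbers are invented): when two datasets are published with the same identifiers, comparing them is trivial; without a shared standard, that join is where most of the work would go.

```python
# Two hypothetical open datasets keyed by the same region codes.
population_u15 = {"R01": 52000, "R02": 8000, "R03": 31000}   # under-15 population
cancer_rate = {"R01": 0.8, "R02": 4.1, "R03": 0.9}           # cases per 1,000 people

# Because both datasets use the same codes, combining them is a one-liner.
combined = {
    code: {"under15": population_u15[code], "rate": cancer_rate[code]}
    for code in population_u15
}

# The policy-relevant question: which region stands out?
highest = max(combined, key=lambda code: combined[code]["rate"])
print(highest)  # prints R02
```

If the two publishers had used different, incompatible region names, every line of analysis would first need a hand-built mapping between them – which is exactly the cost that shared data standards remove.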

Follow the Money groups do amazing work: extractives’ transparency advocates campaigning for vital releases of information on oil, gas, mining revenues into the hundreds of millions of dollars. Groups looking at curbing illicit flows of funds out of desperately poor countries via shell companies and phantom firms. Activists who scrutinize budgets, everything from big ticket national budget allocations, all the way down to very local issues like your local school spending on basic reading materials. And many more.

Together, these groups share one big thing in common – they are all seeking to follow the money. In other words, they are all trying to understand how money either gets in to government coffers, or how it fails to get there, and then how and whether it is spent for the good of the many, rather than the few lining their pockets.

To succeed, we all need data that’s not only public (e.g. public registries of beneficial ownership) but also accessible (in open formats) and comparable to other money flows.

Let’s work together to make it happen.

The following guest post from Martin Tisné was first published on his personal blog.

If you’re at OKCon 2013 and interested in joining the Open Knowledge Foundation and ONE to follow the money, you can come to our session on this topic at OKCon 2013 in Geneva, on Wednesday 18th September, 10:30-11:30 in Room 8, Floor 2, at the Centre International de Conférences Genève (CICG). Due to limited space, if you’re interested in joining us please email followthemoney@okcon.org.

An Open Letter on the UK’s Proposed Lobbying Bill

Jonathan Gray - September 9, 2013 in Access to Information, Featured, Open Data, Open Government Data, Policy

The following is an open letter to the Prime Minister and Deputy Prime Minister about the UK’s proposed Lobbying Bill, initiated by the Open Knowledge Foundation and signed by organisations working for greater government transparency and openness in the UK and around the world. A version of the letter was printed in today’s edition of The Independent newspaper.

For more about our position on this topic, you can read our recent blog post on the importance of lobbyist registers. For press enquiries please contact press@okfn.org.

The Lobbying Bill will be a missed opportunity for government openness unless crucial changes are made


Rt Hon David Cameron MP
Rt Hon Nick Clegg MP
Houses of Parliament
London
SW1A 0AA

Cc: Andrew Lansley CBE MP (Leader of the House of Commons),
Francis Maude MP (Minister for the Cabinet Office),
Chloe Smith MP (Minister for Political and Constitutional Reform),
Graham Allen MP (Chair of Political and Constitutional Reform Committee).

6th September 2013

Dear Prime Minister and Deputy Prime Minister,

We, the undersigned, strongly urge government to pause and redraft the proposed Lobbying Bill so that it will provide citizens with a genuine opportunity to scrutinise the activities of lobbyists in the UK.

The current version of the lobbyist register would only cover a small fraction of active lobbyists, leaving the public in the dark about the rest of the UK’s £2 billion lobbying industry. Nor would it reveal any meaningful information about their activities.

We think a decent lobbyist register – which says who is lobbying whom, what they are lobbying for and how much they are spending – should be an essential part of the UK government’s openness agenda, and a key measure to ensure that lobbying is transparent and effectively regulated.

Crucially it should not just be restricted to consultant lobbyists, but should also include in-house lobbyists, big consultancies who offer a range of services, and other entities which offer lobbying services such as think tanks.

Furthermore we think it is essential the UK’s lobbyist register is published as machine-readable open data so that its contents can be analysed, connected with other information sources, and republished.

The UK has been a pioneer in opening up its public data and has a major opportunity to be a world leader in government openness at the Open Government Partnership Summit in the UK this autumn, following on from its success in putting open data at the top of the agenda at the G8 with the Open Data Charter.

However, if the Lobbying Bill goes ahead as it is without further changes, then it will be a significant missed opportunity for government openness in the UK, and a major blow to the government’s aspiration to be – in the words of the Prime Minister – “the most open and transparent government in the world”.

Signed,

EC Consultation on open research data

Sander van der Waal - July 16, 2013 in Access to Information, Open Access, Open Data

The European Commission held a public consultation on open access to research data on July 2 in Brussels, inviting statements from researchers, industry, funders, IT and data centre professionals, publishers and libraries. The input of these stakeholders will play a role in revising the Commission’s policy and is particularly important for the ongoing negotiations on the next big EU research programme, Horizon 2020, under which about 25-30 billion euros would be available for academic research. Five questions formed the basis of the discussion:

  • How can we define research data, and what types of research data should be open?
  • When and how does openness need to be limited?
  • How should the issue of data re-use be addressed?
  • Where should research data be stored and made accessible?
  • How can we enhance “data awareness” and a “culture of sharing”?

Here is how the Open Knowledge Foundation responded to the questions:

How can we define research data and what types of research data should be open?

Research data is extremely heterogeneous, and would include (although not be limited to) numerical data, textual records, images, audio and visual data, as well as custom-written software, other code underlying the research, and pre-analysis plans. Research data would also include metadata – data about the research data itself – including uncertainties and methodology, versioned software, standards and other tools. Metadata standards are discipline-specific, but to be considered ‘open’, at a bare minimum it would be expected to provide sufficient information that a fellow researcher in the same discipline would be able to interpret and reuse the data, as well as be itself openly available and machine-readable. Here, we are specifically concerned with data that is being produced, and therefore can be controlled by the researcher, as opposed to data the researcher may use that has been produced by others.

When we talk about open research data, we are mostly concerned with data that is digital, or the digital representation of non-digital data. While primary research artifacts, such as fossils, have obvious and substantial value, the extent to which they can be ‘opened’ is not clear. However, 3D scanning techniques can and should be used to capture many physical features of such objects as images, enabling broad access to the artifact. This would benefit both researchers who are unable to travel to visit a physical object and interested citizens who would typically be unable to access such an item.

By default there should be an expectation that all types of research data that can be made public, including all metadata, should be made available in machine-readable form and open as per the Open Definition. This means the data resulting from public work is free for anyone to use, reuse and redistribute, with at most a requirement to attribute the original author(s) and/or share derivative works. It should be publicly available and licensed with this open license.
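
To make “machine-readable and open as per the Open Definition” concrete, here is a minimal sketch in Python of a metadata record for a deposited dataset. The field names loosely follow the Open Knowledge Foundation’s Data Package convention (datapackage.json), and all values are invented; a real deposit should follow the metadata standard of its discipline.

```python
import json

# A minimal, hypothetical metadata record for an open research dataset.
# Field names loosely follow the Data Package (datapackage.json) convention;
# all values are invented for illustration.
metadata = {
    "name": "example-survey-2013",
    "title": "Example survey dataset (2013)",
    # An explicit open license is what makes the data open per the Open Definition.
    "licenses": [{"id": "odc-pddl", "url": "http://opendatacommons.org/licenses/pddl/"}],
    "resources": [
        {
            "path": "data.csv",   # machine-readable, non-proprietary format
            "format": "csv",
            "schema": {           # enough structure for a peer to interpret and reuse the data
                "fields": [
                    {"name": "respondent_id", "type": "integer"},
                    {"name": "answer", "type": "string"},
                ]
            },
        }
    ],
}

# Serialising to JSON keeps the record readable by both humans and machines.
print(json.dumps(metadata, indent=2))
```

Because the record is plain JSON, it can be harvested and indexed by repositories without discipline-specific tooling.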

When and how does openness need to be limited?

The default position should be that research data should be made open in accordance with the Open Definition, as defined above. However, while access to research data is fundamentally democratising, there will be situations where the full data cannot be released; for instance for reasons of privacy.

In these cases, researchers should share analyses under the least restrictive terms consistent with legal requirements, abiding by the research ethics dictated by the terms of the research grant. This should include opening up non-sensitive data, summary data, metadata and code, and providing access to the original data to those who can ensure that appropriate measures are in place to mitigate any risks.

Access to research data should not be limited by the introduction of embargo periods, and arguments in support of embargo periods should be considered a reflection of inherent conservatism among some members of the academic community. Instead, the expectation should be that data is to be released before the project that funds the data production has been completed; and certainly no later than the publication of any research output resulting from it.

How should the issue of data re-use be addressed?

Data is only meaningfully open when it is available in a format, and under an open license, that allows re-use by others. But simply making data available is often not sufficient for reusing it: accompanying metadata must provide sufficient documentation to enable other researchers to replicate empirical results.

There is a role here for data publishers and repository managers to endeavour to make the data usable and discoverable by others – by providing further documentation, using standard code lists, and so on, as these all help make data more interoperable and reusable. Submission of the data to standard registries and use of common metadata also enable greater discoverability. Interoperability and the availability of data in machine-readable form are crucial to ensure that data-mining and text-mining can be performed, a form of re-use that must not be restricted.

Arguments are sometimes made that we should monitor levels of data reuse, to allow us to dynamically determine which datasets should be retained. We reject this suggestion. There is a moral responsibility to preserve data created with taxpayer funds, including data that represents negative results or that is not obviously linked to publications. It is impossible to predict all future uses, and reuse opportunities may exist now that are not immediately obvious. It is also crucial to note that research interests change over time.

Where should research data be stored and made accessible?

Each discipline needs different options available to store data and open it up to their community and the world; there is no one-size-fits-all solution. The research data infrastructure should be based on open source software and interoperable based on open standards. With these provisions we would encourage researchers to use the data repository that best fits their needs and expectations, for example an institutional or subject repository. It is crucial that appropriate metadata about the data deposited is stored as well, to ensure this data is discoverable and can be re-used more easily.

Both the data and the metadata should be openly licensed. They should be deposited in machine-readable and open formats, similar to how the US government mandates this in its Executive Order on Government Information. This makes it possible to link repositories and data across various portals and makes the data easier to find. For example, the Open Knowledge Foundation has developed the open source data portal CKAN, which enables the depositing of data and metadata and makes data easy to find and re-use. Various universities, such as the Universities of Bristol and Lincoln, already use CKAN for these purposes.
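
As a sketch of what discoverability means in practice: CKAN exposes its catalogue through a JSON “Action API”, whose `package_search` action queries deposited datasets. The portal address below is hypothetical, and the snippet only builds the request URL rather than contacting a live server.

```python
from urllib.parse import urlencode

# Hypothetical CKAN portal address (for illustration only).
PORTAL = "http://data.example.org"

def package_search_url(query, rows=10):
    """Build a CKAN Action API request for datasets matching `query`."""
    params = urlencode({"q": query, "rows": rows})
    return "{}/api/3/action/package_search?{}".format(PORTAL, params)

# Find deposited datasets mentioning "survey".
url = package_search_url("survey")
print(url)
```

The JSON response from a live CKAN instance includes each matching dataset’s metadata, so search results can be fed straight into further analysis or harvested by other portals.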

How can we enhance data awareness and a culture of sharing?

Academics, research institutions, funders, and learned societies all have significant responsibilities in developing a culture of data sharing. Funding agencies and organisations disbursing public funds have a central role to play and must ensure research institutions, including publicly supported universities, have access to appropriate funds for longer-term data management. Furthermore, they should establish policies and mandates that support these principles.

Publication and, more generally, sharing of research data should be ingrained in academic culture, and should be seen as a fundamental part of scholarly communication. However, it is often seen as detrimental to a career, partly as a result of the current incentive system set up by universities and funders, and partly as a result of widespread misunderstanding of the issues.

Educational and promotional activities should be set up to promote awareness of open access to research data amongst researchers, to help disentangle the many myths, and to encourage them to self-identify as supporting open access. These activities should be set up in recognition of the fact that different disciplines are at different stages in developing a culture of sharing. Simultaneously, universities and funders should explore options for creating incentives that encourage researchers to publish their research data openly. Acknowledgements of research funding, traditionally limited to publications, could be extended to research data, and the contribution of data curators should be recognised.


Shakespeare review: analysis

Laura James - May 15, 2013 in Access to Information, News, Open Data, Open Government Data

We welcome the Shakespeare review as an occasion to reflect, coming as it does at a time of great growth in open data in government and the public sector.

The UK has led the way, with government taking a pioneering stance on open data policy in recent years, and this report sets out key recommendations for how best to take this work forward.

It is particularly good to see acknowledgement that there is a “difference between a commitment to transparency and a true National Data Strategy for economic growth” as it is clear that many of the benefits of open public sector information will go beyond the economic.

As the Open Knowledge Foundation has long emphasized:

The best thing to be done with your data will be thought of by someone else

Shakespeare recognises this with the comment that “we cannot always predict where the greatest value lies but know there are huge opportunities across the whole spectrum of PSI.”

Getting more data released quickly, without agonising over quality concerns, is an excellent recommendation and we look forward to seeing this in practice. Alongside this we welcome the demand for high quality information in the National Core Reference Data plan, including key entity data; such reference data, following clear open standards, will transform what can be done with UK data. The request that Trading Funds should remove restrictive PSI licensing and work towards releasing all raw data for use and reuse is particularly warmly welcomed.

We are pleased to see consideration being given to privacy and confidentiality issues; our definition of open data has always excluded personally-identifiable information, but with greater data collection than ever before, we acknowledge the challenges this can bring for data publishers. The demand for realistic and pragmatic consideration of privacy and confidentiality is welcomed, and best practice guidelines will be very helpful in assisting data publishers here. In addition we hope to see key security and privacy sector experts engaged in this as there are tough technical challenges around anonymisation, aggregation and sandbox use, and deep technical understanding is needed to fully appreciate the risks and limits of such systems, and to create sensible guidelines.

We are also delighted to see open access mentioned in the report; open access to publicly-funded research data and papers has been a long-standing tenet of the Open Knowledge Foundation’s work. Shakespeare notes that “even today, access to academic research that has been paid for by the public is deliberately denied to the public, and to many researchers, by commercial publishers, aided by university lethargy, and government reluctance to apply penalties; thereby obstructing scientific progress.” We can, and must, do better here.

We applaud the call for more data scientists and greater statistical skills at all levels; stronger data awareness and skills are critical for all the benefits of open data to be realised. In particular, the recognition that interactive and workshop methods can be most effective at teaching data skills is well aligned with our own School of Data and long standing culture of hackathons and developer engagement. The more teaching and training around data, alongside other key STEM areas including maths and technology, the better.

Finally, it is great to see that the economic value of open data will be assessed through research and audit, but at the same time it is vital to be realistic about the timescales for significant change and impact in this field. The full benefits and effects of the new open approaches to creating, sharing and reusing knowledge will take decades to emerge, and government and others must be realistic about what will be achieved and how quickly, to avoid disappointment.

Open data is valuable to us socially and culturally as well as commercially, but it is only one part of the solution. We also need to work on the other key elements – institutional change, tools, skills and awareness – which are necessary conditions for realising the full benefits of openness. These other elements may be harder, and more expensive, than the release of data. We should still release more open data – and we are glad to see this report affirming this and encouraging data skills alongside – but the journey is far from over.

As Shakespeare puts it:

“It is now time to build on the very positive start we have made on open data with a more directed, more predictable engineering of usable information. Obstacles must be cleared, structures defined, and progress audited, so that we have a purposeful, progressive strategy that we can trust to deliver the full benefits to the nation.”

If you’re interested in open data and you’d like to join our global community of open government data advocates, you can join our open-government mailing list:

We need open carbon emissions data now!

Jonathan Gray - May 13, 2013 in Access to Information, Campaigning, Featured, Featured Project, Open Data, Policy, WG Sustainability, Working Groups

Last week the average concentration of carbon dioxide in the atmosphere reached 400 parts per million, a level which is said to be unprecedented in human history.

Leading scientists and policy makers say that we should be aiming for no more than 350 parts per million to avoid catastrophic runaway climate change.

But what’s in a number? Why is the increase from 399 to 400 significant?

While the actual change is mainly symbolic (and some commentators have questioned whether we’re hovering above or just below 400), the real story is that we are badly failing to cut emissions fast enough.

Given the importance of this number, which represents humanity’s progress in tackling one of the biggest challenges we currently face, the fact that it has been making the news around the world is very welcome indeed.

Why don’t we hear about the levels of carbon dioxide in the atmosphere from politicians or the press more often? While there are regularly headlines about inflation, interest and unemployment, numbers about carbon emissions rarely receive the level of attention that they deserve.

We want this to change. And we think that having more timely and more detailed information about carbon emissions is essential if we are to keep up pressure on the world’s governments and companies to make the cuts that the world needs.

As our Advisory Board member Hans Rosling puts it, carbon emissions should be on the world’s dashboard.

Over the coming months we are going to be planning and undertaking activities to advocate for the release of more timely and granular carbon emissions data. We are also going to be working with our global network to catalyse projects which use it to communicate the state of the world’s carbon emissions to the public.

If you’d like to join us, you can follow #OpenCO2 on Twitter or sign up to our open-sustainability mailing list:


Image credit: Match smoke by AMagill on Flickr. Released under Creative Commons Attribution license.

The new PSI Directive – as good as it seems?

Ton Zijlstra - April 19, 2013 in Access to Information, External, Open Data

A closer look at the new PSI Directive by Ton Zijlstra and Katleen Janssen

Image by the European People’s Party, CC-BY-2.0, via Wikimedia Commons

On 10 April, the European Commission’s Vice-President Neelie Kroes, responsible for the Digital Agenda for Europe, announced that the European Union (EU) Member States have approved a text for the new PSI Directive. The PSI Directive governs the re-use of public sector information, otherwise known as Open Government Data.

In this post we take a closer look at the progress the EC press release claims, and make a comparison with the current PSI Directive. We base this comparison on the (not officially published) text produced by the final trialogue of 25 March and apparently accepted by the Member States last week.

The final step now, after this acceptance by the Member States, is the adoption of the same text by the European Parliament, which has been part of the trialogue and is thus likely to be in agreement. The vote in the ITRE Committee is planned for 25 April, and the plenary Parliament vote for 11 June. Member States will then have 24 months to transpose the new directive into national law, which means it should be in force across the EU towards the end of 2015.

The Open Data yardstick

The existing PSI Directive was adopted in 2003, well before the emergence of the Open Data movement, and written with mostly ‘traditional’ and existing re-users of government information in mind. Within the wider Open Data community this new PSI Directive will largely be judged by a) how well it moves towards embracing Open Data as the norm, in the sense of the Open Definition, and b) to what extent it makes this mandatory for EU Member States.

This means that scope and access rights, redress options where those rights are denied, charging and licensing practices, and standards and formats are of interest here. We will go through these in turn:

Access rights and scope

  • The new PSI Directive brings museums, libraries and archives within its scope; however a range of exceptions and less strict rules apply to these data holders;
  • The Directive builds, as before, on existing national legislation concerning freedom of information and privacy and data protection. This means it only looks at re-use in the context of what is already legally public, and it does not make pro-active publishing mandatory in any way;
  • The general principle for re-use has been revised. Where the old directive describes cases where re-use has been allowed (making it dependent on that approval and thus leaving the choice to the Member States or the public bodies), the new directive says all documents within scope (i.e. legally public) shall be re-usable for commercial or non-commercial purposes. This is the source of the statement by Commissioner Neelie Kroes that a “genuine right to re-use public information, not present in the original 2003 Directive” has been created. For documents of museums, libraries, and archives the old rule applies: re-use needs to be allowed first (except for cultural resources that are opened up after exclusive agreements for their digitisation have ended – see below).

Asking for documents to re-use, and redress mechanisms if denied

  • The way in which citizens can ask to be provided with documents for re-use, or the way government bodies can respond, has not changed;
  • The redress mechanisms available to citizens are specified in slightly more detail. One of the avenues of redress should be an “impartial review body with appropriate expertise” that is “swift” and has binding authority, “such as the national competition authority, the national access to documents authority or the national judicial authority”. Although more specific than before, this is not the creation of the specific, speedy and independent redress procedure many had hoped for.

Charging practices

  • When charges apply, they shall be limited to the “marginal costs of reproduction, provision and dissemination”, which is left open to interpretation. Marginal costing is an important principle, as in the case of digital material it would normally mean no charges apply;
  • The PSI Directive leaves room for exceptions to the stated norm of marginal costing, for public sector bodies who are required to generate revenue and for specifically excepted documents: firstly, the exceptions rely once more on the concept of the public task, which raised so much discussion under the previous version of the directive; secondly, a distinction is made between institutions that have to generate revenue to cover a substantial part of all their costs and those that may generally be fully funded by the State (except for particular datasets whose collection, production, reproduction and dissemination has to be covered for a substantial part by revenue). Could this be a way to cover economic or even commercial activities, by defining them as a ‘public task’, thereby avoiding the non-discrimination rules requiring equal treatment of possible competitors?
  • The exceptions remain bound to an upper limit, that of the old PSI directive for the exceptions relating to institutions having to generate revenue. For cultural institutions, the upper limit of the total income includes the costs of collection, production, preservation and rights clearance, reproduction and dissemination, together with a reasonable return on investment;
  • How costs are structured and determined, and used to motivate standard charges, needs to be pre-established and published. In the case of the mentioned exceptions, charges and criteria applied need to be pre-established and published, with the calculation used being made transparent on request (as was the general rule before);
  • This requirement for standard charges to be fully transparent up-front, meaning before any request for re-use is submitted, might prove to have an interesting impact: it is unlikely that public sector bodies will establish marginal costs and the underlying calculations for every dataset they hold, yet charges that have not been pre-established, motivated and published can no longer be applied.

Licensing

  • The new PSI Directive contains no changes concerning licensing, so no explicit move towards open licenses;
  • Where Member States attach conditions to re-use, a standard license should be available, and public sector bodies should be encouraged to use it;
  • Conditions to re-use should not unnecessarily restrict re-use, nor restrict competition;
  • The Commission is asked to assist the Member States by creating guidelines, particularly relating to licensing.

Non-discrimination and Exclusive agreements

  • The existing rules ensuring non-discrimination in how conditions for re-use are applied, including for commercial activities by the public sector itself, are continued;
  • As before, exclusive arrangements are not allowed, except for ensuring public interest services or for digitisation projects by museums, libraries and archives. For the former, reviews are mandated every 3 years; for the latter, after 10 years and then every 7 years. However, only the duration of an arrangement has to be reviewed, not its existence. In return for the exclusivity, the public body receives a free copy of the digitised cultural resources, which must be made available for re-use once the exclusive agreement ends. Here the cultural institutions no longer have a choice whether to allow re-use, but it may be several years before a resource actually becomes available.

Formats and standards

  • Open standards and machine readable formats should be used for both documents and their metadata, where easily possible, but otherwise any pre-existing format and language is acceptable.

In summary, the new PSI Directive does not seem to take the bold steps the open data movement has been clamoring for over the past five years. At the same time, real progress has been made. Member States with a constructive approach will feel encouraged to do more. Also, the effort of transparency in charging may dissuade public sector bodies from applying charges. But the new PSI Directive will not serve as a tool for citizens aiming for more openness by default and by design. Even with the new redress mechanisms, getting your rights acknowledged and acted upon will remain a long and arduous path as before.

It will be interesting to see the European Parliament, as the representative body, debate this in plenary.

About the authors


Katleen Janssen is a postdoctoral researcher in information law at the Interdisciplinary Centre for Law and ICT of KU Leuven, coordinator of the LAPSI 2.0 thematic network (www.lapsi-project.eu) and a board member of OKFN Belgium. She specialises in re-use of PSI, open data, access to information and spatial data infrastructures, and is currently working on open data licensing for the Flemish Region.


Ton Zijlstra has been involved in open government data since 2008. He is working for local, national and international public sector bodies to help them ‘do open data well’, both as an activist and consultant. Ton wrote the first plans for the Dutch national data portal, did a stint as project lead for the European Commission at http://epsiplatform.eu, and is now partner at The Green Land, a Netherlands based open data consultancy. He is a regular keynote speaker on open data, open government, and disruptive change / complexity across Europe.

Announcing v3.0 of Froide – the Open-Source Python-Based Freedom of Information Platform

Stefan Wehrmeyer - March 15, 2013 in Access to Information, OKF Germany, Open Government Data

I’m happy to announce the version 3.0 release of Froide, the Open Source, Python-based platform for running Freedom of Information portals. Froide has been in development for nearly two years. It has powered the FOI portal in Germany for over a year and a half and was recently used to launch an Austrian FOI site.

Full instructions for getting started with Froide can be found here, and the source code is online on GitHub here. This latest release comes with the latest version of the Python web framework, Django 1.5, and Bootstrap 2.3. All other dependencies have also been upgraded.

Some of the major features include:

FragDenStaat.de – Ask the State

Froide got started back in spring 2011 when OKF Germany decided to create an FOI site. Unfortunately, at that time the code of WhatDoTheyKnow was not ready to be used elsewhere (Alaveteli didn’t exist at all – plus, it must be said, I’m a Pythonista and it was a Ruby app!). I therefore started building an FOI platform for Germany based on Python/Django, internationalized from the ground up. After four months of coding and preparations we launched FragDenStaat.de – the German FOI portal – in August 2011.
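Being internationalized from the ground up means a deployer can localize the platform through Django’s standard i18n machinery. As a rough illustration, a settings fragment for a German deployment might look like this (the setting names are real Django settings, but the values and the fragment itself are illustrative, not Froide’s actual configuration):

```python
# settings.py (illustrative fragment, not Froide's real settings file)

# Enable Django's translation system
USE_I18N = True

# Default language of the site
LANGUAGE_CODE = "de"

# Languages offered to users; translations live in the locale/ directory
LANGUAGES = [
    ("de", "German"),
    ("en", "English"),
]
LOCALE_PATHS = ["locale/"]
```

With this in place, all user-facing strings marked for translation in the code can be swapped per language, which is what made launching FragDenStaat.at with the same codebase straightforward.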

Since then the software has seen continuous improvements and new features. Several of these were motivated by specific requirements in Germany, such as tracking the cost of a request, uploading postal replies from authorities, hiding requester names from the public and redacting PDFs online. Froide also leverages the Django admin, which allows community moderators to help with administration tasks and guide requesters on their FOI journey.
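To give a flavour of the name-hiding feature mentioned above, here is a minimal, self-contained sketch of that kind of redaction. The `redact_name` helper is hypothetical and much simpler than Froide’s actual logic, which also deals with greetings, e-mail headers and PDF content:

```python
import re


def redact_name(text, first_name, last_name, replacement="<< Name removed >>"):
    """Replace occurrences of a requester's name with a placeholder.

    A simplified illustration of the kind of redaction an FOI platform
    applies before publishing correspondence publicly.
    """
    # Match "First Last" or just "Last", case-insensitively, on word boundaries
    pattern = re.compile(
        r"\b(?:%s\s+)?%s\b" % (re.escape(first_name), re.escape(last_name)),
        re.IGNORECASE,
    )
    return pattern.sub(replacement, text)


letter = "Dear Jane Doe,\n\nPlease find attached the requested report."
print(redact_name(letter, "Jane", "Doe"))
```

The real feature has to be more careful than a regex, of course, but the principle is the same: correspondence is cleaned of personal identifiers before it is shown to the public.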

Just recently FragDenStaat.de got a little brother: the Austrian FOI portal FragDenStaat.at got off the ground and will track the development of the upcoming FOI legislation in Austria.

Challenges Overcome

Over the last two years, the German FOI community has struggled with – and overcome – many FOI oddities: baseless cost threats, a lot of anti-digital behaviour, and very creative excuses for why information cannot be released. FragDenStaat.de has sent out more than 3000 requests, and the Federal FOI statistics for 2012 are at an all-time high, with more than a third of requests delivered and tracked via FragDenStaat.de.

One of the most interesting stories was a ban on publishing documents received through FOI: the German parliament had sent over a report on MP corruption but denied the right to publish it on the grounds of copyright. Any citizen could get and read the report by requesting it, but nobody was allowed to share it freely! This Kafkaesque situation made it difficult to spread the word and limited public debate on the topic. But we quickly came up with a solution to this problem: one-click requests for that specific document in your name. We quickly got hundreds of people to make this request and sparked a debate about the topic. Even though the documents have been leaked on the net, the German parliament still refuses to publish them. The matter will soon be resolved in front of a judge, but until then we continue to provide an easy means to request the documents and take a stand for FOI.

Colophon

Froide and FragDenStaat.de are civic coding projects of the Open Knowledge Foundation Germany. Check out their other projects.

This article would also be incomplete without a shout out here to Alaveteli – the excellent Open Source Ruby on Rails FOI software built by the great folks at MySociety – and to WhatDoTheyKnow, the original FOI site built by MySociety for the UK, which inspired both FragDenStaat and many other sites around the world.
