

Dispatch: Crisismappers Community needs Data Makers

Heather Leson - November 25, 2013 in Data Journalism, Events, Open Data and My Data, Open Data Partnership For Development, WG Open Government Data, Workshop

What does open data / open knowledge have to do with Crisismapping? Everything. In times of crisis, we live in an open data / open government ecosystem. We seek, build and make it happen in real time – talk converts to action quickly.

On Tuesday, November 19th, the School of Data hosted a full-day pre-conference training session as part of the International Conference of Crisis Mappers (ICCM) in Nairobi, Kenya. The event hosted over 110 attendees from around the world for training across three tracks: Knowledge/Research, Maps to Data and Mobile/Security. The Crisismappers community brings humanitarians, governmental staff, civil society practitioners, researchers and technologists together in a common, equal space. Participants work on projects ranging from human rights and anti-corruption to humanitarian response and economic development in post-conflict zones. The brilliance of a cross-sector community focused on using data in its work highlights the importance of the Open Knowledge Foundation's role as a member of the greater network. Building a global network of data makers is a one-by-one task. Our goal is to have leaders train their colleagues, thus widening the circle of sharing and collaboration.

Some recent examples of our communities connecting include the Open Spending Tree Map by Donor: Foreign Aid Transparency – Faith (Philippines) and Early Results – Micromappers Yolanda (which uses Crowdcrafting, incubated at OKFN Labs).

Baking Soda with Crisis Mappers

Steve and School of Data

(Steve Kenei, Development Initiatives)

Data is just a word until we activate it. I like to call the School of Data the “Baking Soda” team. Together with key ingredients (community, problem/issue description, data sets and tool menus), they work with others to make data usable and actionable.

School of Data in session at iHub for ICCM

The data track workshop sessions covered using spreadsheets, cleaning data, data visualization and geocoding. The track started with a spreadsheet training delivered by Steve Kenei from Development Initiatives, continued with an introduction to OpenRefine and an introduction to data visualization by Agnes Rube of Internews Kenya, and finished with School of Data mentor Ketty Adoch. The workshop was designed to address the issues civil society organizations face when using data. One of the exciting results was the sheer concentration and intent of participants: some stayed in the track all day, skipped breaks, and even brought their own datasets to guide their learning.

Communities and ideas connecting:

Ketty Adoch, Fruits of Thought

The ICCM conference, including the pre-conference events, was a jam-packed week of maps, data, research and technology. Most of the ignite talks and panels touched on open data needs at some stage, or on issues ranging from data ethics and data quality to data collection methodology. Ketty Adoch – one of this year's ICCM fellows – shared her experience of building a community mapping project in Uganda using OpenStreetMap at Fruits of Thought.

Next Steps

During the self-organized sessions, together with Luis Capelo of UN OCHA, I hosted a discussion about Open Data Opportunities and Challenges. It was an exercise for the attendees to discuss Open Data and Crisismapping.

We determined a few concrete actions for the community:

  • A common data sharing space for Crisismappers interested in Humanitarian data.
  • A Crisismappers Open Data Working Group to help share impact and build momentum.
  • Training and mentorship programmes to help build skills and leadership in the field.

The Crisismappers community is over 5,000 members strong, with a mailing list, webinars and a NING site. Do consider joining this vibrant community of map and data makers, who are at the leading edge of uniting policy with determined action. Also see our various Working Groups and the Open Data Partnership for Development programme.


How can open data lead to better data quality?

Jonathan Gray - September 3, 2013 in Featured, Open Data, Open Government Data, Policy, WG Open Government Data

Open data can be freely used by anyone – which means that data users can help to fix, enrich or flag problems with the data, leading to improvements in its quality.

The Open Knowledge Foundation is currently looking to collect the best examples and stories we can find about how open data can lead to better data.

We’re particularly interested in hearing stories about how open government data has been checked, corrected and enhanced by citizens, civil society groups and others.

So far we have some really great examples – including:

  • Russian open data advocates trawling for errors in over 20 million procurement documents leading to fixes from the Treasury
  • OpenStreetMap volunteers correcting the locations of 18,000 bus stops in the UK and over 1,800 street names in Denmark
  • Data quality reports from the OpenSpending project leading to rapid improvements in the quality of UK government expenditure data

You can see the full list in progress at: http://bit.ly/opendata-betterdata

If you know of any more good examples, please send them our way and we’ll add them to the list.

We hope this will become a powerful piece of evidence that we can use to encourage public bodies and other data publishers to open up.

Map showing the history of edits to OpenStreetMap in London, by Mapbox.

This initiative started life on the Open Knowledge Foundation’s open-government mailing list, which we encourage you to join if you are interested in open government data and how it can be used to increase accountability around the world.

Open tax data, or just VAT ‘open wash’

Chris Taggart - July 30, 2013 in Featured, Open Data, Open Economics, Open Government Data, Public Money, WG Open Government Data

This post is by Chris Taggart, the co-founder and CEO of OpenCorporates, the largest open database of companies in the world, and a member of the Open Government working group.

[Disclosure: I am on the UK Tax Transparency Board, which has not yet discussed these proposals, but will be doing so at the next meeting in early September]

A little over a week ago, Her Majesty’s Revenue & Customs (HMRC) published a consultation on publishing its data more widely, and in it stated its intention to join the open-data movement.

The UK helped secure the G8’s Open Data Charter, which presumes that the data held by Governments will be publicly available unless there is good reason to withhold it. It is important that HMRC plays a full part. HMRC’s relationship with businesses and individuals is unique, and this is reflected in the scope and depth of the information HMRC collects, creates and protects on behalf of taxpayers.

Great. Well, no.

The problem is that, despite what the above says, this consultation and the proposals within have little to do with open data or widening access, but instead are primarily about passing data, much of it personal data relating to ordinary individuals, to the anointed few. It also exposes some worrying data-related problems within HMRC that should be ringing alarm bells within government.

So what exactly is being suggested? There are two parts:

  1. Proposals to do with sharing HMRC’s data, particularly aggregated and anonymised data. At the moment HMRC can, in general, only share such data if it relates to HMRC’s functions, even when sharing would be in the wider public benefit.
  2. Proposals to do with the VAT Register. The VAT Register is currently private, even though to a large extent the information is already ‘out there’ – on till receipts, on invoices, on websites, and in various private datasets – and in fact in many countries it’s already public.

Both have their issues, but for the moment we’ll concentrate on the second.

Now there has been no great clamour for the VAT Register from open-data activists (unlike, say, the postcode address file, the company register, or Ordnance Survey data), so why is it being opened up? Well, why not? As the consultation says:

An underlying principle in developing the proposals in this chapter is brought out in the Shakespeare Review. Data belong to citizens and the presumption of government should be towards openness, unless this causes harm. It is not for government to dictate the nature of the opportunity. The corollary is that the Government will not always be aware of the range or scale of potential benefits, as the quotation below shows – this consultation will help to establish these.

So the proposal is to publish the VAT Register as open data, so that the wider community can do cool stuff with it? No. The consultation neatly slides from this lofty aim to something rather more grubby.

There has been public interest for some time, for example from credit reference agencies (CRAs), in the publication of VAT registration data as a resource to generate benefits.

Don’t the three big credit reference agencies (Experian, Equifax and Callcredit) already know a lot about companies? Surely they know the VAT numbers of many of them, and in any case know a lot more about most companies, especially active, trading companies (the sort that are registered for VAT)?

What they don’t have, however, is much information about sole traders, small partnerships and individuals trading on their own account – without the shield of limited liability, and without the information-publishing responsibilities that come with it. That’s why the VAT register is so important to them, and that’s what this consultation is proposing to give them.

Of course they could just ask people for that information. But people might refuse, particularly if they don’t need to borrow money, and that would be a problem for building a monetisable dataset of them. If they could only get the government to give them access to that data – have the government act as their own data-collection arm, with the force of law to compel provision of the information – that would be great. For them. For individuals, and for the wider world, it’s not good at all.

First, because what we’re talking about here are individuals, who have privacy and data protection rights, not companies, and there need to be compelling reasons for making that data public in the first place – just because the big three credit reference agencies think they can make money from it isn’t good enough.

Second, because if open data is about one thing, it is about democratising access to data, about reversing the traditional position where, to use the words of the Chancellor, George Osborne, “Access to the world’s information – and the ability to communicate it – was controlled by an elite few”. And if there’s one thing that’s certain it’s that the CRAs have a lot of power.

But wait, doesn’t the consultation also propose that some of the VAT register be published as open data, specifically “a very selective extract covering just three data fields – VAT registration number (VRN), trading name, and Standard Industry Code (SIC) classification number”?

At first sight this might be seen as good, or at least better than nothing. In fact it shows that HMRC either doesn’t get data, or that this is just ‘openwash’ – an open-data fig leaf to obscure the wholesale passing of personal and private data to the CRAs, and one that could potentially lead to greater fraud. Here’s why:

  • The three fields (VAT number, trading name, SIC code) together make up an orphan dataset, i.e. one that’s unconnected with any other data, and therefore fundamentally useless… unless you want to fraudulently write an invoice calling yourself ‘AAA Plumbing’, charging VAT on it, and pocketing the 20%, knowing either that you will never be caught, or that the real AAA Plumbing will be the first place HMRC comes looking.
    Fraud is fundamentally about asymmetries of information (the fraudster knows more about you than you know about them). If, for example, you know that the real AAA Plumbing is a company with a registered address in Kirkcaldy, Scotland, or that BBB Services is dissolved or has a website showing it works in the aircraft business, then you have a much greater chance of avoiding fraud.
  • Trading names are very problematic; in general they are not registered anywhere, so they are of little help. They also need have no relationship to the legal name, either of the person or the company. So if you want to find the company behind ZZZ Financial Experts, if indeed there is one, you’re out of luck. It’s puzzling that HMRC would even consider publishing the VAT Register without the legal form and, in the case of companies, the company number.
  • One of the stated reasons for publishing the register is that “VAT registration data could also provide a foundation for private sector business registers”. Really? In this world of open data and the importance of core reference data, HMRC wants a private, proprietary identifier set to be created, with all the problems that would entail? In fact, HMRC was supposed to be working with the Department for Business, Innovation & Skills to build such a public dataset. Has it decided that it doesn’t understand data well enough to do this? Or would it rather shackle not just the government but the business sector as a whole to some such dataset?
  • Finally, it’s also rather surprising to discover that the VAT register appears to contain fields such as the company’s incorporation date and SIC codes. In the geek world we call this a denormalised dataset, meaning it duplicates data that rightfully belongs in another table or dataset. There are sometimes good reasons for doing this, but there are risks, such as the data getting out of sync (which is the correct SIC code – the one on the VAT Register or the one on the Companies House record?) – see the sketch after this list.
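
To make the out-of-sync risk concrete, here is a minimal sketch in Python – purely illustrative, with an invented company number and invented records rather than real HMRC or Companies House data – of how a duplicated SIC code can silently diverge:

```python
# Illustrative records only – invented company number, not real HMRC or
# Companies House data. The SIC code rightfully lives in the company
# register; the VAT register keeps a duplicate copy of it.

companies_house = {
    "01234567": {"name": "AAA Plumbing Ltd", "sic_code": "43220"},
}

vat_register = {
    "GB123456789": {
        "trading_name": "AAA Plumbing",
        "company_number": "01234567",
        "sic_code": "43220",  # duplicated from Companies House
    },
}

# The company later re-classifies its activity at Companies House...
companies_house["01234567"]["sic_code"] = "43210"

# ...and the duplicate is now stale, with no way to tell which is right.
for vrn, entry in vat_register.items():
    official = companies_house[entry["company_number"]]["sic_code"]
    if entry["sic_code"] != official:
        print(f"{vrn}: VAT register says SIC {entry['sic_code']}, "
              f"Companies House says {official}")
```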

So what should HMRC be doing? First, it should abandon any plans to act as the Credit Reference Agencies’ data collectors, and publish the VAT register or part of the VAT register as a single open dataset, equal to all under the same terms. This would be a genuine spur for innovation, and may even result in increased competition and transparency.

Second, it should realise that there’s a fundamental difference between an individual – a living, breathing person with human rights – and a company. As well as human rights, individuals have data protection rights, privacy rights and don’t exist on a public register; companies on the other hand are artificial entities given a distinct legal personality by the state for the good of society, and in return exist in public (on the public Register of Companies). In the case of the VAT register, the pragmatic approach would be to publish the register as open data, but only that part that relates to companies.

Third, it needs to realise that it is fundamentally in the data business, like it or not, and it needs to quickly get to grips with the modern data world, including the power of data, for good, and for bad. The UK has probably the leading organisations in the world in this area, including OpenCorporates, the Open Knowledge Foundation and the Open Data Institute.

From PSI to open data – LAPSI is ready for a new round of legal questions

Katleen Janssen - June 10, 2013 in Open Government Data, WG EU Open Data, WG Open Government Data

In February, 23 partners kicked off the LAPSI 2.0 thematic network on the legal aspects of public sector information in Leuven, Belgium. The network, consisting of academic institutions and stakeholders from 15 countries, will continue where the previous LAPSI network left off, and look at the remaining legal barriers hindering the full and open availability of public sector information in Europe. The network will enable knowledge exchange between stakeholders; showcase good practice on how Member States and public bodies deal with PSI issues; and provide policy recommendations on how the European legal framework can support open data.

This European legal framework is currently being challenged by the emerging open data ecosystem. PSI is gradually being replaced by open data in people’s minds, throwing up a lot of new questions. For instance, over the years, many efforts have been made by national policy makers and public authorities to create more transparency in licensing procedures and to develop standard licences (although more transparency would still be very welcome!). However, this has led – somewhat counter-productively – to a proliferation of licence models, even among the open licences. Therefore, the LAPSI 2.0 network is focusing its attention in the first year of activities on the ‘legal interoperability’ of licences. What strategies can help to prevent conflicting (open) standardised licensing models from arising, and how can existing problems due to a lack of interoperability be addressed?

Another layer of complication with licences comes from the shift from the provision of data via bulk downloads to the creation of web services, requiring the combination of a data approach with what are traditionally known as terms of service or service level agreements. Moreover, the one-source, one-way delivery of information from the public sector to users is increasingly being replaced by participatory data sharing, the introduction of feedback loops and the integration of PSI with user-generated content. It is questionable whether the current legal framework is ready for this.

The LAPSI 2.0 network will also be working hard to embed PSI and open data in the institutional culture of the public sector, and – if this does not work – on the enforcement of the rules on PSI and open data through efficient and effective redress mechanisms. While many public bodies have embraced open data, there are still many more that need to be convinced about the benefits for economic growth, participation and accountability.

Whatever LAPSI 2.0 recommends, it will have to function against the background of the new Directive on the re-use of PSI, which is due this summer. While the new directive is definitely a step in the right direction, its exact impact can currently only be guessed at from the rumours seeping out of the trialogue process. We anxiously await the final version of the directive, and look forward to playing a role in the translation of the text into Member States’ domestic law.

Over the next two years, LAPSI 2.0, in cooperation with other projects and initiatives, will organise two conferences and a number of workshops on the legal aspects of PSI and open data. Our first conference is already planned: on October 24th, we hope to see you in Ljubljana for a great day on “The new PSI directive: what’s next?”. We are also planning workshops at the Samos Summit in July and you can find us at all the important open data events, including the OKCon in Geneva.

If you are interested in knowing more about the network and our activities, check out our website or register for the stakeholders newsletter.

European Union launches CKAN data portal

Mark Wainwright - February 25, 2013 in CKAN, Open Data, WG EU Open Data, WG Open Government Data

On Friday, to coincide with Saturday’s International Open Data Day, the European Commission (EC) unveiled a new data portal, which will be used to publish data from the EC and other bodies of the European Union.

This major project was announced last year, and it went live in December for testing before today’s announcement. The portal includes extensive CKAN customisation and development work by the Open Knowledge Foundation, including a multilingual extension enabling data descriptions (metadata) to be made available in different languages: at present the metadata is offered in English, French, German, Italian and Polish. The portal was originally planned for EC data, but it will now also hold data from the European Environment Agency, and hopefully in time a number of other EU bodies as well.
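
For readers who want to explore such a portal programmatically, CKAN ships with a JSON action API. Below is a minimal sketch, assuming the portal exposes CKAN’s standard /api/3/action/ endpoints; the base URL is a placeholder, so check the portal’s documentation for the real address:

```python
# Minimal sketch of querying a CKAN portal's JSON action API.
# BASE is a placeholder; substitute the portal's real address.
import requests

BASE = "https://example-portal.europa.eu"  # assumed base URL

# Full-text search across dataset (package) metadata.
resp = requests.get(
    f"{BASE}/api/3/action/package_search",
    params={"q": "environment", "rows": 5},
)
resp.raise_for_status()
result = resp.json()["result"]

print(f"{result['count']} matching datasets")
for package in result["results"]:
    print("-", package["title"])
```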

The EU has been a key mover in driving the Open Data agenda in member states, so it is fitting that it is now promoting transparency and re-use of its own data holdings by making them available in one place. It has for some years been encouraging member states to publish data via dedicated portals, and it also supports the OKF’s work on publicdata.eu, a prototype of a pan-European data portal harvesting data from catalogues across the Union, via the LOD2 research project.

The portal currently makes 5,885 datasets available, most of which come from Eurostat. In their blog post announcing the launch the European Commission say they are “confident that it will be a catalyst for change in the way data is handled inside the Commission as well as beyond”, and promise more to come:

More data will become available as the Commission’s services adapt their data management and licensing policies and make machine-readable formats the rule. Our ambition is to make an open licence applicable across the board for all datasets in the portal.

Furthermore, in 2013, an overarching pan-European aggregator for open data should federate the content of more than 70 existing open data portal initiatives in the Member States at national, regional or local level.

We’re looking forward to helping make it happen.

European Commission Vice President Neelie Kroes praises work of Open Knowledge Foundation Greece

Theodora Middleton - February 21, 2013 in OKF Greece, Open Government Data, Open Knowledge Foundation Local Groups, WG EU Open Data, WG Open Government Data

Great News! Neelie Kroes, the Vice President of the European Commission, has sent her personal best wishes to the OKF team in Greece who launched their brand new open data portal last week! She said:

“Open data is a very powerful lever for both a better economy and society. Open data is fuel for innovation, it is a tool for transparency, for better government and policy. At a time when many Greeks are looking for new sources of inspiration and hope, I am pleased to say that the Open Knowledge Foundation is one of those sources. I encourage all public bodies to support this effort. Whether the task is finding a job or spending tax money wisely, open data can help.”

Hear, hear!

The Open Data Census – Tracking the State of Open Data Around the World

Rufus Pollock - February 20, 2013 in Events, Featured, Featured Project, Open Data, Open Government Data, Our Work, WG Open Government Data

Recent years have seen a huge expansion in open data activity around the world. This is very welcome, but at the same time it is now increasingly difficult to assess if, and where, progress is being made.

To address this, we started the Open Data Census in order to track the state of open data globally. The results so far, covering more than 35 countries and 200 datasets, are now available online at http://census.okfn.org/. We’ll be building this up even more during Open Data Day this weekend.

This post explains why we started the census and why this matters now. This includes the importance of quality (not just quantity) of data, the state of the census so far, and some immediate next steps – such as expanding the census to the city level and developing an “open data index” to give a single measure of open data progress.

Why the Census?

In the last few years there has been an explosion of activity around open data and especially open government data. Following initiatives like data.gov and data.gov.uk, numerous local, regional and national bodies have started open government data initiatives and created open data portals (from a handful 3 years ago, there are now more than 250 open data catalogues worldwide).

But simply putting a few spreadsheets online under an open license is obviously not enough. Doing open government data well depends on releasing key datasets in the right way. Moreover, with the proliferation of sites it has become increasingly hard to track what is happening.

Which countries, or municipalities, are actually releasing open data and which aren’t? [1] Which countries are making progress on releasing data on stuff that matters, in the right way?

Quality not (just) Quantity

Progress in open government data is not (just) about the number of datasets being released. The quality of the datasets being released matters at least as much as – and often more than – their quantity.

We want to know whether governments around the world are releasing key datasets – for example, critical information about public finances, locations and public transport – rather than less critical information such as the location of park benches or the number of streetlights per capita. [2]

Similarly, is the data being released in a form that is comparable and interoperable, or is it being released as randomly structured spreadsheets (or, worse, non-machine-readable PDFs)?

Tables like this are easy for humans, but difficult for machines.

This example of a table from the US Bureau of Labor Statistics is easy for humans to interpret but very difficult for machines. (But at least it’s in plain text, not PDF.)

The essential point here is that it is about quality as much as quantity. Datasets aren’t all the same, whether in size, importance or format.
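
A small sketch (with invented figures) makes the contrast concrete: a human-oriented layout needs bespoke parsing logic, while a machine-readable layout with one observation per row can be consumed directly with standard tools:

```python
# Invented figures: the same statistics in a human-oriented layout and in
# a machine-readable layout. The first needs bespoke parsing logic; the
# second can be consumed directly with the standard library.
import csv
import io

# Human-oriented: a title row, years spread across columns.
human_layout = """Unemployment rate (%),,,
,2010,2011,2012
North,8.1,7.9,7.6
South,9.4,9.0,8.8
"""

# Machine-readable: a self-describing header, one observation per row.
tidy = """region,year,rate
North,2010,8.1
North,2011,7.9
North,2012,7.6
South,2010,9.4
South,2011,9.0
South,2012,8.8
"""

for row in csv.DictReader(io.StringIO(tidy)):
    print(row["region"], row["year"], row["rate"])
```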

Enter the Census

And so was born the Open Knowledge Foundation’s Open Data Census – a community-driven effort to map and evaluate the progress of open data and open data initiatives around the world.

We launched the first round of data collection last April at the meeting of the Open Government Partnership in Brazil. Since then members of the Open Knowledge Foundation’s Open Government Data Working Group have been continuing to collect the data and our Labs team have been developing a site to host the census and present its results.

(Open Data Census results table.)

The central part of the census is an assessment based on 10 key datasets.

These were selected through a process of discussion and consultation with the Open Government Data Working Group and will likely be expanded in future (see some great suggestions from David Eaves last year). We’ll also be considering additional criteria: for example whether data is being released in a standard format that facilitates integration and reuse.

We focused on a specific list of core datasets (rather than e.g. counting numbers of open datasets) for a few important reasons:

  • Comparability: by assessing against the same datasets we would be able to compare across countries
  • Importance: some datasets are more important than others and by specifically selecting a small set of key datasets we could make that explicit
  • Ranking: we want, ultimately, to be able to rank countries in an “Open Data Index”. This is much easier if we have a good list of cross-country comparable data. [3]

Today, thanks to submissions from more than thirty contributors, the census includes information on more than 190 datasets from more than 35 countries around the world, and we hope to get close to full coverage for more than 50 countries in the next couple of months.

(Open Data Census results map.)

The Open Data Index: a Scoreboard for Open Government Data

Having the census allows us to evaluate general progress on open data. But having a lot of information alone is not enough. We need to ensure the information is presented in a simple and understandable way, especially if we want it to help drive improvements in the state of open government data around the world.

Inspired by work such as the Open Budget Index from the International Budget Partnership, the Aid Transparency Index from Publish What You Fund, the Corruption Perception Index from Transparency International and many more, we felt a key aspect is to distill the results into a single overall ranking and present this clearly. (We’ve also been talking with the great folks at the Web Foundation, who are also thinking about an Open Data Index connected with their work on the Web Index.)

(Open Budget Index screenshot.)

As part of our first work on the Census dashboard last September for OKFestival, we did some work on an “open data index”, which provided an overall ranking for countries. However, during that work it became clear that building a proper index requires some careful thought. In particular, we probably wanted to incorporate factors other than just the pure census results, for example:

  • Some measure of the number of open datasets (appropriately calibrated!)
  • Whether the country has an open government data initiative and open data portal
  • Whether the country has joined the OGP
  • Existence (and quality) of an FoI law

In addition, there is the challenging question of weightings – not only between these additional factors and census scores but also for scoring the census. Should, for example, Belarus be scoring 5 or 6 out of 7 on the census despite it not being clear whether any data is actually openly licensed? How should we weight total number of datasets against the census score?
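
As a toy illustration of how much weighting choices matter – all weights and non-census component scores below are invented, not proposals – consider how a country’s composite score moves when the weights change:

```python
# Toy illustration of a composite "open data index" score. All weights
# and non-census component scores are invented for illustration only.

def index_score(components, weights):
    """Weighted average of component scores, each on a 0-1 scale."""
    total = sum(weights.values())
    return sum(components[name] * w for name, w in weights.items()) / total

# Hypothetical country: 5/7 on the census, no portal, not in the OGP,
# but with an FoI law on the books.
country = {"census": 5 / 7, "portal": 0.0, "ogp_member": 0.0, "foi_law": 1.0}

census_heavy = {"census": 0.7, "portal": 0.1, "ogp_member": 0.1, "foi_law": 0.1}
balanced = {"census": 0.4, "portal": 0.2, "ogp_member": 0.2, "foi_law": 0.2}

print(round(index_score(country, census_heavy), 2))  # 0.6
print(round(index_score(country, balanced), 2))      # 0.49

# The same facts yield noticeably different scores – and hence rankings –
# depending purely on the weighting scheme.
```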

Nevertheless, we’re continuing to work on putting together an “open data index” and we hope to have an “alpha” version ready for the open government data community to use and critique within the next few months. (If you’re interested in contributing check out the details at the end of this post).

The City Census

The first version of the census was country oriented. But much of the action around open data happens at the city and regional level, and information about the area around us tends to be the most meaningful and important.

We’re happy to say plans are afoot to make this happen!

Specifically, we’ll be kicking off the city census with an Open Data Census Challenge this Saturday as part of Open Data Day.

If the Open Data Census has caught your interest, you are invited to become an Open Data Detective for a day and help locate open (and closed) datasets in cities around the world. Find out more and sign up here: http://okfn.org/events/open-data-day-2013/census/

Get Involved

Interested in the Open Data Census? Want to contribute? There are a variety of ways – check out the census site at http://census.okfn.org/ for details.

Notes


  1. For example, we’ve seen several open data initiatives releasing data under non-open licenses that restrict, for example, derivative works, redistribution or commercial use. 

  2. This isn’t to say that less critical information isn’t important – one of the key reasons for releasing material openly is that you never know who may derive benefit from it, and the “long tail of data” may yield plenty of unexpected riches. 

  3. Other metrics, such as numbers of datasets, are very difficult to compare – what is a single dataset in one country can easily become 100 or more in another country (for example, unemployment could be in a single dataset or split into many datasets, one for each month and region). 

Andrew Stott joins OKFN Advisory Board

Jonathan Gray - January 24, 2013 in Open Data, Open Government Data, WG Open Government Data, Working Groups

We’re very pleased to announce that Andrew Stott, the UK’s former Director for Transparency and Digital Engagement who pioneered data.gov.uk, has joined the Open Knowledge Foundation’s Advisory Board.

For those of you who aren’t familiar with him already from our events or from our open-government mailing list, here’s a brief bio:

Andrew Stott was the UK’s first Director for Transparency and Digital Engagement. He led the work to open government data and create “data.gov.uk”; and after the 2010 Election he led the policy development and implementation of the new Government’s commitments on Transparency of central and local government. Following his formal retirement in December 2010 he was appointed to the UK Transparency Board to continue to advise UK Ministers on open data and e-government policy. He also advises other governments on Open Data both bilaterally and through the World Bank and the World Wide Web Foundation. He is an expert adviser on Open Data strategy to the EU Citadel On The Move programme and co-chairs the OKFN Open Government Data Working Group.

Andrew has extensive knowledge – from the inside – about the challenges and obstacles to opening up government data and how to overcome them (for more on this you can see the litany of excuses he mentions in his talk from Open Government Data Camp in 2011) and he has been very active in the international open government data community over the past several years.

Welcome aboard, Andrew!

ePSI Open Data Days, Warsaw, February 21-23

Theodora Middleton - January 22, 2013 in Events, Open Government Data, WG EU Open Data, WG Open Government Data

The ePSI platform team have announced “three days of open data fun” in Warsaw next month. The big day is the 2013 ePSI platform conference on 22nd February, but you’re also all invited to a workshop on the 21st, and a hackday on the 23rd!

At a glance

  • What?: ePSI conference, workshop and hackday
  • When?: 21st-23rd February
  • Where?: Warsaw University, Warsaw, Poland
  • Programme: in development here
  • Register: here for the workshop and here for the main conference. And it’s free (but places are limited)!

The conference will focus on the theme “Gotcha! – getting everyone on board”. PSI re-use is reaching a certain degree of maturity and uptake. However, this uptake differs significantly between Member States, PSI domains and stakeholders. The ePSIplatform Conference will therefore be aimed at those that should embark but have so far (partly) failed to do so.

Meanwhile in the workshop we’ll be looking at the value of open data to the public sector itself. The workshop is especially aimed at those who work in the public sector.

And on the 23rd, the hackday will coincide with International Open Data Day, so you’re invited to join the Warsaw open data community for a day of building apps, cleaning up data, or building better connections to data holders. This will take place at Centrum Cyfrowe. Find out more on the Open Data Day in Warsaw here.

Get all the info on the Conference Page or download the Conference Infopack here.

We look forward to seeing you there!

Let’s defend Open Formats for Public Sector Information in Europe!

Regards Citoyens - December 3, 2012 in Access to Information, Campaigning, Open Data, Open Government Data, Open Standards, Open/Closed, Policy, WG EU Open Data, WG Open Government Data

Following some remarks from Richard Swetenham of the European Commission, we have made a few changes regarding the trialogue process and the coming steps: the trialogue will start its meetings on 17th December, so it is already very useful to call on our governments to support Open Formats!

When we work on building all these amazing democratic transparency collaborative tools all over the world, all of us, Open Data users and producers, struggle with these incredibly frustrating closed or unexploitable formats under which public data is unfortunately so often released: XLS, PDF, DOC, JPG, completely misformatted tables, and so on.

The EU PSI directive revision is a chance to push for a clear Open Formats definition!

As part of Neelie Kroes’s Digital Agenda, the European Commission recently proposed a revision of the Public Sector Information (PSI) Directive widening the scope of the existing directive to encourage public bodies to open up the data they produce as part of their own activities.

The revision will be discussed at the European Parliament (EP), and this is citizens’ chance to advocate for a clear definition of the Open Formats under which public sector information (PSI) should be released.

We at Regards Citoyens believe that having a proper definition of Open Formats within the EU PSI directive revision would be a fantastic help to citizens and would contribute to economic innovation. We believe such a definition can be summed up in two simple rules inspired by the Open Knowledge Foundation’s Open Definition principles:

  • being platform independent and machine-readable without any legal, financial or technical restriction;
  • being the result of an openly developed process in which all users can actually take part in the specification’s evolution.

Those are the principles we advocated in a policy note on Open Formats, which we published last week and sent individually to all Members of the European Parliament (MEPs) from the committee that voted on the revision of the PSI directive last Thursday.

Good news: the first rule was adopted! But the second one was not. How did that work?

ITRE vote on Nov 29th: what happened and how?

A meeting at the European Parliament (CC-BY-ND EPP Group)

The European parliamentary process first involves a main committee in charge of preparing the debates before the plenary session, in our case the Industry, Research and Energy committee (ITRE). Its members met on 29th November around 10am to vote on the PSI revision amongst other files.

MEPs can propose amendments to the revision beforehand, but, to speed up the process, the European Parliament works with what are called “compromise amendments” (CAs): the committee chooses a rapporteur to lead the file in its name, and each political group appoints a “shadow rapporteur” to work with the main rapporteur. Together they study the proposed amendments and try to sum them up in a few consensual ones called CAs, leading MEPs to withdraw some of their own amendments when they consider their concerns met. During the committee meeting, both kinds of amendment are voted on in accordance with a predefined voting list indicating the rapporteur’s recommendations.

Regarding Open Formats, everything relied on a proposition to add to the directive’s 2nd article a paragraph providing a clear definition of what an Open Format actually is. The rapporteurs’ work led to a pretty good compromise amendment 18, which speaks pretty much for itself:

« An open format is one that is platform independent, machine readable, and made available to the public without legal, technical or financial restrictions that would impede the re-use of that information. »

This amendment was adopted, meaning this change will be proposed as a new amendment to all MEPs during the plenary debate. Given that it has the support of the rapporteur in the name of the responsible committee, it stands a good chance of being carried.

Regarding the open development process condition, MEP Amelia Andersdotter, shadow rapporteur for the European Parliament’s Greens group, maintained her amendment 65 and adapted it to this new definition:

« "open format" means that the format’s specification is maintained by a not-for-profit organisation the membership of which is not contingent on membership fees; its ongoing development occurs on the basis of an open decision-making procedure available to all interested parties; the format specification document is available freely; the intellectual property of the standard is made irrevocably available on a royalty-free basis. »

Even though it was also recommended for approval by the main rapporteur, unfortunately the ALDE and EPP groups were not yet ready to support it, and it was rejected.

Watching the 12 seconds during which the Open Formats issues were voted on is a strange experience for anyone not familiar with the European Parliament. Since most of the actual debate happens beforehand between the different rapporteurs, the committee meeting mainly consists of a succession of raised-hand vote calls, which are occasionally checked electronically. There are therefore no public individual votes or records of these discussions available, and each vote happens very quickly.

What next? Can we do anything?

Now that the ITRE committee has voted, its report should soon be made available online.

As the European institutions work as a tripartite organisation, the text adopted by the ITRE committee will now be transferred to both the European Commission and the Council for approval. This includes a trialogue procedure in which a consensus on a common text must be reached. This is an occasion to call on our respective national governments to push in favour of Open Formats, in order to maintain and improve the definition which the EP has already adopted.

The text which comes out of the tripartite debate will be discussed in plenary session, currently planned for 11th March 2013. Until noon on the Wednesday preceding the plenary, MEPs will still be able to propose new amendments to be voted on in plenary: they can do so either as a whole political group, or as a group of at least 40 MEPs from any groups.

Possible next steps to advocate Open Formats could therefore be the following:

  • Call on our national governments to push in favour of Open Formats;
  • Keep up-to-date with documents and procedures from the European Parliament: ParlTrack offers e-mail alerts on the dossier;
  • Whenever the window for proposing new amendments ahead of the plenary debate opens, we should contact our respective national MEPs from all political groups and urge them to propose amendments requiring Open Formats to be based on an open development process. Having multiple amendments coming from different political groups would certainly help MEPs realize this is not a partisan issue;
  • When the deadline for proposing amendments is reached, we should call on our MEPs by email, phone or tweet to vote for such amendments, and possibly against some opposing ones. In order to allow anyone to easily and freely phone their MEPs, we’re thinking about reusing La Quadrature du Net‘s excellent PiPhone tool for EU citizen advocacy.

In any case, contacting MEPs to raise concerns about Open Formats policies can of course always be useful, before and after the plenary debates. Policy papers, amendment proposals, explainer documents, blog posts, open letters, a petition, tweets… it can all help!

To conclude, we would like to stress once again that Regards Citoyens is an entirely voluntary organisation without much prior experience with the European Parliament. This means help and expertise are much appreciated! Let’s get ready to defend Open Formats for European Open Data in a few weeks!

Regards Citoyens — CC-BY-SA
