
Open Assets in Argentina

Guest - September 30, 2013 in Data Journalism, Featured Project, Open Government Data

The following guest post is by Florencia Coelho, from Argentinian daily La Nacion.

In Argentina, where a Freedom of Information Act (FOIA) has yet to be signed, LA NACION and three transparency NGOs – Poder Ciudadano, ACIJ (Asociación Civil por la Igualdad y la Justicia) and Fundación Directorio Legislativo – joined efforts to produce the first site to open information on the assets of public servants, making their asset declarations available online.

The first stage of the web site contains more than 600 asset declarations from public servants from each of the three branches of government: executive, legislative and judicial. Priority was given to data on key positions within each branch as well as data on candidates in the upcoming October 2013 legislative elections.

Each NGO specialized in monitoring transparency and accountability of certain branches, presenting the necessary public information requests and processing the data received.

The information requested was received in print copies; therefore, in addition to entering the data, the teams also scanned the original requests, erasing any sensitive personal information before uploading them to DocumentCloud where they are linked to each asset declaration on the web site.

Teams collaborated with more than 30 volunteers who manually entered the data and cross checked every unit of content in a marathon six-day “check-a-thon”. Throughout the project cycle, the teams worked online using collaborative tools like Google Docs, Google Spreadsheets and Trello.

The database and the web site were designed and developed by LA NACION data and multimedia teams from Lanacion.com. Our Knight-Mozilla OpenNews fellow collaborated in optimizing the application and search tools. This news application, now in beta, will open data in machine readable formats for everyone to reuse.

The Open Asset Declarations website is being launched in a particular political context. A new law was recently passed which omits asset information on public officials’ spouses and children, thereby reducing the content previously available. Family asset information is vital to depict an accurate picture of the public officials’ wealth and key to any investigation on illicit enrichment.


A “check-a-thon” last week, comparing paper originals of statements with spreadsheet versions

Even after earthquakes, we need Open

Guest - August 29, 2013 in Featured Project, OKF Italy, Open Development, Open Government Data

The following guest post is by Christian Quintili from Open Ricostruzione. Open Ricostruzione is an Italian civic project focused on citizen engagement after the earthquake which damaged cities in Emilia-Romagna in 2012.

Open Ricostruzione is pleased to have a little corner in the OKF network. Our project, in short, is a website to monitor public funding and private donations raised to reconstruct public buildings damaged by the earthquake which hit Emilia Romagna in May 2012.

Emilia Romagna is a region in Northern Italy, which in 2012 experienced a series of devastating earthquakes, measuring up to 6.0 on the Richter scale. Up to 45,000 people were made homeless, and 27 lost their lives. The cost of reconstruction so far is estimated at around €350 million, with projects including schools, hospitals, and the restoration of historical cultural sites. We want to make sure that this process is open, transparent and accountable.

The Emilia-Romagna region and ANCI (the association of Italian municipalities) gathered the relevant administrative data, and Open Polis, an association working on IT and civic participation, developed special software for accessing the data in a user-friendly and easy way. You can find raw data, project by project, on a dedicated website named Sisma2012.


But Open Ricostruzione is more than this. Technology isn’t enough to “rebuild” democracy: our focus is on re-building citizens’ skills. Beyond smart cities, we need smart citizens. For this reason, ActionAid is organizing a series of workshops to train civil society activists to monitor reconstruction, providing juridical and data journalism skills with Dataninja (an Italian data journalism network).

Bondeno, 29 June 2013

Today each of us can contribute to making reconstruction in Emilia, and our institutions, more accountable, using just a mobile phone, a camera and an internet connection. This means we can, and should, be more responsible for and concerned with the rebuilding of a better society, better institutions and a better nation.

We have the tools and we want to make it happen.

We’d love to hear from you, and you can follow us @Open_Ric for updates.

Open Ricostruzione is a project designed by Wikitalia and realized by ANCI, Ancitel, ActionAid and Openpolis, with the technical support of the Emilia-Romagna Region and the financial support of Cisco Italy.

On the trail of “Open Steps” – visiting open knowledge communities around the world

Christian Villum - August 19, 2013 in Featured Project

Margo & Alex from Open Steps

This is a guest blog post from Open Steps, an initiative by two young Berliners, Alex (a software developer from Spain) and Margo (a graduate in European politics from France), who decided to leave their daily lives and travel around the world for one year to meet people and organizations working actively on open knowledge related projects, documenting them on their website.

Starting in July 2013, we will travel for one year through South-East Europe, Turkey, India, South-East Asia, Japan and South America. During our travels we are generating a geo-located index of individuals and groups supporting open knowledge around the world. We have a natural interest in open data, as it is the area where both our backgrounds converge. Along the way we will therefore also run a workshop entitled “Visualising Open Data to bring out global issues” and research the current state of open data in the countries we visit.

After leaving Berlin at the end of June, we have travelled across Europe, crossing beautiful countries, meeting hard-working geeks and other activists and hearing about promising projects around the topic of open knowledge. Now it’s time for us to sit down, sum up all the impressions we have gathered so far and share them through this article.

Visiting hackerspaces across Europe

You might have seen on our website that the organisations we met were mostly hackerspaces. Why is that? When it comes to sharing knowledge and supporting open cultures, these kinds of organisations are at the top of the list. After sending more than a thousand emails and contact requests, we were happy to start receiving positive answers, and in Europe these were mostly from hackerspaces (Prague, Vienna, Tirana, Pristina, Skopje).

Visiting them was like making a pilgrimage, travelling from one to the other. Each one is great and unique in terms of location, profile of members and running projects. But it was most interesting for us to discover that the interest and engagement of members of hackerspaces, especially in less developed countries, was extraordinary! We would like to highlight the efforts being made by the guys from Open Labs in Albania and FLOSSK in Kosovo, both of whom are pioneers in sharing knowledge through workshops and supporting open source software in these two countries. There, the public administration does not yet recognise the importance of these values and the word ‘open’ is not well known. Thanks to activists such as these, this is already changing.

In addition to hackerspaces, we also had the chance to visit inspiring places like the Solar Festival in Hranize, a small rural village in the Czech Republic where a passionate group of people are sharing the benefits and beauty of clean energy. And the creative shop Zelazo in the Moda neighborhood of Istanbul, where people learn to design things by themselves.

For the love of Open Data

We love open data and we strongly believe it is a mechanism to improve our society in terms of transparency, democracy and citizen participation. That is what the second part of our project is about. With the support of the organizations we have visited, we have been able to run our workshop five times so far. Through it, we are not only spreading the word about the topic but also creating an opportunity to discuss the situation of open data in the context of each country.

Varying government engagement across Europe

In our opinion, Europe shows a very heterogeneous level of implementation of the steps towards being open: engagement of public administration, availability of open data platforms, legal framework and civil society awareness. We have experienced countries like Germany or Austria where both governmental and independent organisations (also at regional and local level) are already working on gathering and releasing data into the public domain, organising events and challenges so that developers create useful civic tools.

On the other hand, there are countries like Albania where the first steps have been taken not by the government but by independent groups. Or like Turkey, which has been participating in the Open Government Partnership initiative since 2011 but has still not carried out any of the points specified in its action plan. As Mr Elshani, Head of e-Governance in Kosovo, pointed out during the debate at our event in Pristina, countries must first face issues like the need for infrastructure or gathering and categorising the data before starting to release it. Of course, in our opinion the socio-economic situation and the will of the administration to support transparency play a big role when it comes to taking part in open data and open government initiatives.

The active participation of the attendees during our workshops has proven that open data is a very current and promising topic with great potential. However, there is a certain scepticism and a feeling that there is still a lot of work to do. The two questions we were asked most often concerned the integrity and authenticity of the data and the use of standards for its publication.

Moving on from Europe…

After these first months in Europe, Open Steps will arrive in India in mid-September. We are hoping to meet more of the kind of creative and passionate people that we have met up until now, so we are already establishing contacts with individuals and collectives working on open knowledge in the areas of education, government and social problem solving. Stay tuned for new updates, feel free to point us towards interesting projects and share your thoughts with us! You can follow our project at the addresses below:

Website: open-steps.org

Facebook: facebook.com/openstepsorg

Twitter: twitter.com/OpenSteps

-Margo & Alex

Publish from ScraperWiki to CKAN

Guest - July 5, 2013 in CKAN, Featured Project

The following post is by Aidan McGuire, co-founder of ScraperWiki. It is cross-posted on the ScraperWiki blog.

ScraperWiki are looking for open data activists to try out our new “Open your data” tool.

Since its first launch ScraperWiki has worked closely with the Open Data community. Today we’re building on this commitment by pre-announcing the release of the first in a series of tools that will enable open data activists to publish data directly to open data catalogues.

To make this even easier, ScraperWiki will also be providing free datahub accounts for open data projects.

This first tool will allow users of CKAN catalogues (there are 50, from Africa to Washington) to publish a dataset that has been ingested and cleaned on the new ScraperWiki platform. It will be released on 11th July.

Screenshot showing the new tool (alpha)

If you run an open data project which scrapes, curates and republishes open data, we’d love your help testing it. To register, please email hello@scraperwiki.com with “open data” in the subject, telling us about your project.

Why are we doing this? Since its launch ScraperWiki has provided a place where an open data activist could get, clean, analyse and publish data. With the retirement of “ScraperWiki Classic” we decided to focus on the getting, cleaning and analysing, and leave the publishing to the specialists – places like CKAN.
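To give a sense of what that publishing step involves, here is a minimal sketch of creating a dataset through CKAN’s version 3 action API using only Python’s standard library. The catalogue URL, API key and dataset fields are placeholder values, and this is a rough illustration of the API rather than ScraperWiki’s actual implementation:

```python
import json
import urllib.request

def build_package_create_request(ckan_url, api_key, name, title, notes=""):
    """Prepare an HTTP request for CKAN's package_create action."""
    url = ckan_url.rstrip("/") + "/api/3/action/package_create"
    payload = json.dumps({"name": name, "title": title, "notes": notes})
    headers = {
        "Content-Type": "application/json",
        "Authorization": api_key,  # CKAN reads the API key from this header
    }
    return urllib.request.Request(url, data=payload.encode("utf-8"),
                                  headers=headers)

if __name__ == "__main__":
    req = build_package_create_request(
        "http://datahub.io", "my-api-key",
        name="scraped-air-quality", title="Scraped air quality readings")
    # urllib.request.urlopen(req) would perform the actual publish
    print(req.full_url)
```

Resources (files or links) can then be attached to the new dataset with the analogous `resource_create` action.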

This new “Open your data” tool is just the start. Over the next few months we also hope that open data activists will help us work on the release of tools that:

  • Generate RDF (linked data)
  • Update data in real time
  • Publish to other data catalogues

Here’s to liberating the world’s messy open data!


Aidan McGuire is the co-founder of ScraperWiki, the site which enables you to “Get, clean, analyse, visualise and manage your data, with simple tools or custom-written code.” Among other things, they write and catalogue screen-scrapers to extract and analyse public data from websites.

Principles for Open Contracting

Guest - June 24, 2013 in Featured Project, Open Standards, Uncategorized

The following guest post is by the Open Contracting Partnership, announcing the release of their Principles for Open Contracting. It is cross-posted from their website.

Contracts

Over the past year, the Open Contracting Partnership has facilitated a global consultation process to create a set of global principles that can serve as a guide for all of those seeking to advance open contracting around the world.

The principles reflect norms and best practices from around the world related to disclosure and participation in public contracting.

They have been created with the inputs and feedback of nearly 200 members of the open contracting community from government, the private sector, civil society, donor organizations, and international financial institutions. These collaborators contributed inputs from various sector-specific perspectives (such as service delivery, infrastructure, extractive industries, and land).

The Open Contracting Partnership welcomes all your questions, comments or feedback. Please contact us at partnership@open-contracting.com.

OPEN CONTRACTING GLOBAL PRINCIPLES

Preamble: These Principles reflect the belief that increased disclosure and participation in public contracting will have the effects of making contracting more competitive and fair, improving contract performance, and securing development outcomes. While recognizing that legitimate needs for confidentiality may justify exemptions in exceptional circumstances, these Principles are intended to guide governments and other stakeholders to affirmatively disclose documents and information related to public contracting in a manner that enables meaningful understanding, effective monitoring, efficient performance, and accountability for outcomes. These Principles are to be adapted to sector-specific and local contexts and are complementary to sector-based transparency initiatives and global open government movements.

Affirmative Disclosure

  1. Governments shall recognize the right of the public to access information related to the formation, award, execution, performance, and completion of public contracts.
  2. Public contracting shall be conducted in a transparent and equitable manner, in accordance with publicly disclosed rules that explain the functioning of the process, including policies regarding disclosure.
  3. Governments shall require the timely, current, and routine publication of enough information about the formation, award, execution, performance, and completion of public contracts to enable the public, including media and civil society, to understand and monitor as a safeguard against inefficient, ineffective, or corrupt use of public resources. This would require affirmative disclosure of:
    1. Contracts, including licenses, concessions, permits, grants or any other document exchanging public goods, assets, or resources (including all annexes, schedules and documents incorporated by reference) and any amendments thereto;
    2. Related pre-studies, bid documents, performance evaluations, guarantees, and auditing reports;
    3. Information concerning contract formation, including:
      1. The planning process of the procurement;
      2. The method of procurement or award and the justification thereof;
      3. The scope and specifications for each contract;
      4. The criteria for evaluation and selection;
      5. The bidders or participants in the process, their validation documents, and any procedural exemptions for which they qualify;
      6. Any conflicts of interest uncovered or debarments issued;
      7. The results of the evaluation, including the justification for the award; and
      8. The identity of the contract recipient and any statements of beneficial ownership provided;
    4. Information related to performance and completion of public contracts, including information regarding subcontracting arrangements, such as:
      1. General schedules, including major milestones in execution, and any changes thereto;
      2. Status of implementation against milestones;
      3. Dates and amounts of stage payments made or received (against total amount) and the source of those payments;
      4. Service delivery and pricing;
      5. Arrangements for ending contracts;
      6. Final settlements and responsibilities;
      7. Risk assessments, including environmental and social impact assessments;
      8. Assessments of assets and liabilities of government related to the contract;
      9. Provisions in place to ensure appropriate management of ongoing risks and liabilities; and
      10. Appropriate financial information regarding revenues and expenditures, such as time and cost overruns, if any.
  4. Governments shall develop systems to collect, manage, simplify and publish contracting data regarding the formation, award, execution, performance and completion of public contracts in an open and structured format, in accordance with the Open Contracting Data Standards as they are developed, in a user-friendly and searchable manner.
  5. Contracting information made available to the public shall be as complete as possible, with any exceptions or limitations narrowly defined by law, ensuring that citizens have effective access to recourse in instances where access to this information is in dispute.
  6. Contracting parties, including international financial institutions, shall support disclosure in future contracting by precluding confidentiality clauses, drafting confidentiality narrowly to cover only permissible limited exemptions, or including provisions within the contractual terms and conditions to allow for the contract and related information to be published.

Participation, Monitoring, and Oversight

  7. Governments shall recognize the right of the public to participate in the oversight of the formation, award, execution, performance, and completion of public contracts.
  8. Governments shall foster an enabling environment, which may include legislation, that recognizes, promotes, protects, and creates opportunities for public consultation and monitoring of public contracting, from the planning stage to the completion of contractual obligations.
  9. Governments shall work together with the private sector, donors, and civil society to build the capacities of all relevant stakeholders to understand, monitor and improve public contracting and to create sustainable funding mechanisms to support participatory public contracting.
  10. Governments have a duty to ensure that oversight authorities, including parliaments, audit institutions, and implementing agencies, can access and utilize disclosed information, acknowledge and act upon citizen feedback, and encourage dialogue and consultations between contracting parties and civil society organizations in order to improve the quality of contracting outcomes.
  11. With regard to individual contracts of significant impact, contracting parties should craft strategies for citizen consultation and engagement in the management of the contract.

Opening the weather, part 2

Nicolas Baldeck - June 20, 2013 in Featured Project

See also “Opening the weather, part 1”

Stormy sea at Castletown

I began paragliding a few years ago. It’s maybe the most weather-dependent sport in the world. We often fly in mountainous areas, very close to the ground. We need to know about local effects like thermal updrafts, cloud growth, mountain breezes, foehn wind and all sorts of other micro weather effects.

I discovered there was very little information available at this level of detail. The information exists, but is not displayed anywhere because it’s too specific.

I asked our National Weather Service “Météo France”, if they could provide me with the raw data I needed to make my own paragliding forecasts. They told me “Fine, it’s €100,000 a year”. A little bit too expensive for my personal use (or for any mobile app developer)…

Investigations revealed that only a few public agencies globally share this data freely, mostly based in the US, Canada and Norway. I got some data from the US global model (GFS), which is used for pretty much every weather website. But those forecasts are very limited. The global model is really coarse (a 55km grid) and cannot see topography or land use. It doesn’t even see the Alps – not very useful for paragliding.

To get the data at the level I need, I have to run my own high-resolution regional weather model, using the coarse US data as input (see my meteo-parapente.com website). It’s not easy. It requires High Performance Computing (HPC) technology, with its own computing cluster, servers and archiving infrastructure.


This project started as a personal attempt to get better weather info for my paragliding, but the process has made me realise there are bigger issues at stake.

Everybody knows weather has an impact on most activities. According to METNEXT, 25% of France’s GDP is dependent on weather. Weather knowledge is cheap: every dollar spent on better weather knowledge saves more than twenty by avoiding losses and fatalities during severe weather. Margaret Zeigler at #openagdata points out that 90% of crop losses are due to weather.

In the US, weather data is public domain. But in most European countries, it’s not. Data from model outputs, rain radars, ground stations and satellites is sold for hundreds of thousands of euros.

This policy has a lot of side effects:

  • Free public services are quite poor, because the agencies need to sell “premium” services.
  • No startup or SME can afford this price, so there is no “weather” business in Europe: the sector is growing at 1% against 20% in the US.
  • Public agencies and researchers have great difficulty getting the data they need.

I was sad to learn that my département is buying weather data from a Belgian company instead of from the French national public agency.

So, OpenMeteoData has several goals:

  • To provide easy access to already available data.
  • To gather people and technical resources for creating open forecasts (both human analysis and numerical models)
  • To help institutions open their data, and explain the benefits to them
  • To act as a catalyst in the debate about opening public data. I’m already in touch with the French government and Météo France.
  • To provide a platform to gather projects about open meteorology.

If you’d like to talk about the weather, our Open Sustainability list might be the right place for you!

Opening the weather, part 1

Theodora Middleton - June 18, 2013 in Featured Project

Red sky at night - Unst

Red sky at night, shepherd’s delight
A cow with its tail to the west makes the weather best
Onion skins very thin, mild winter coming in

Humans have always wanted to know what the weather has in store for them, and have come up with a whole load of ways to predict what’s coming; some better than others.

Weather forecasting as we know it began in earnest in the nineteenth century, when the invention of the electric telegraph revolutionised long-distance communications and made it possible for information about incoming weather to travel faster than the weather itself. Since then weather forecasting has become ever-more accurate, with improvements in the technology of reporting and communicating, as well as in the predictive models, making it possible for us to know the future weather in greater detail than ever before.

The data collected by weather stations across the world is translated by algorithms into predictions about the weather which is coming. But while some raw data is freely available to those who wish to use it, other datasets are locked behind towering paywalls, and all output predictions are generally the closed property of big forecasting companies.

Two projects which have emerged recently to challenge this are OpenWeatherMap.org and OpenMeteoData.org. As Olga Ukolova from OpenWeatherMap explained:

“We believe that enthusiasts joined by one idea could achieve more than large companies. We believe that meteorological data must be available, free and easy-to-use.”

An open weather forecasting service has the ability to harness the input of enthusiasts around the world, to produce forecasts of greater precision and detail than can be achieved by monolithic companies. Inspired by the success of community-driven knowledge creation in cases like Wikipedia and OpenStreetMap, the guys at OpenWeatherMap are looking to improve the quality of available information, while at the same time wresting control from the hands of profit-driven corporations:

“The project attracts enthusiasts to the process of data collection and estimation of data preciseness that increases accuracy of weather forecasts. If you have a weather station you can connect it to OpenWeatherMap service. You will get a convenient interface for gathering and monitoring data from your weather station. And you can embed the weather station data into your home page.”

The results are available to developers openly and for free:

“Mobile apps developers can receive any weather data for their applications by using JSON / XML API. Lots of weather applications for Android and iOS use OpenWeatherMap as weather data source. By the way the data can be received from WMS server and can be embedded into any cartographic web-application.

Web-application developers that use cartographic services can easily add weather information to it. OpenWeatherMap provides libraries for OpenStreetMaps and Google map. Plug-ins for Drupal and other CMS are available too.”
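As a rough illustration of that JSON API, here is a minimal Python sketch that builds a current-weather query and summarises the response. The endpoint path and response fields follow OpenWeatherMap’s public documentation; the city and API key are placeholder values:

```python
import json
import urllib.parse

API_BASE = "http://api.openweathermap.org/data/2.5/weather"

def build_weather_url(city, api_key, units="metric"):
    """Build a current-weather query URL for a city name."""
    query = urllib.parse.urlencode({"q": city, "units": units, "appid": api_key})
    return f"{API_BASE}?{query}"

def summarise(payload):
    """Pull a one-line summary out of a current-weather JSON response."""
    data = json.loads(payload)
    temp = data["main"]["temp"]
    description = data["weather"][0]["description"]
    return f"{data['name']}: {temp}°C, {description}"

if __name__ == "__main__":
    url = build_weather_url("Berlin", "YOUR_API_KEY")
    # urllib.request.urlopen(url).read() would fetch the live response
    print(url)
```

The same response structure is what the mobile and web applications mentioned above consume, whether over JSON or the XML variant of the API.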

Map from OpenWeatherMap.org

Later this week, Nicolas Baldeck from OpenMeteoData will tell us more about how he came to be interested in opening the weather, and what future he sees for the project.

We need open carbon emissions data now!

Jonathan Gray - May 13, 2013 in Access to Information, Campaigning, Featured, Featured Project, Open Data, Policy, WG Sustainability, Working Groups

Last week the average concentration of carbon dioxide in the atmosphere reached 400 parts per million, a level which is said to be unprecedented in human history.

Leading scientists and policy makers say that we should be aiming for no more than 350 parts per million to avoid catastrophic runaway climate change.

But what’s in a number? Why is the increase from 399 to 400 significant?

While the actual change is mainly symbolic (and some commentators have questioned whether we’re hovering above or just below 400), the real story is that we are badly failing to cut emissions fast enough.

Given the importance of this number, which represents humanity’s progress towards tackling one of the biggest challenges we currently face – the fact that it has been making the news around the world is very welcome indeed.

Why don’t we hear about the levels of carbon dioxide in the atmosphere from politicians or the press more often? While there are regularly headlines about inflation, interest and unemployment, numbers about carbon emissions rarely receive the level of attention that they deserve.

We want this to change. And we think that having more timely and more detailed information about carbon emissions is essential if we are to keep up pressure on the world’s governments and companies to make the cuts that the world needs.

As our Advisory Board member Hans Rosling puts it, carbon emissions should be on the world’s dashboard.

Over the coming months we are going to be planning and undertaking activities to advocate for the release of more timely and granular carbon emissions data. We are also going to be working with our global network to catalyse projects which use it to communicate the state of the world’s carbon emissions to the public.

If you’d like to join us, you can follow #OpenCO2 on Twitter or sign up to our open-sustainability mailing list.

Image credit: Match smoke by AMagill on Flickr. Released under Creative Commons Attribution license.

Announcing CKAN 2.0

Mark Wainwright - May 10, 2013 in CKAN, Featured, Featured Project, News, OKF Projects, Open Data, Open Government Data, Releases, Technical

CKAN is a powerful, open source, open data management platform, used by governments and organizations around the world to make large collections of data accessible, including the UK and US government open data portals.

Today we are very happy and excited to announce the final release of CKAN 2.0. This is the most significant piece of CKAN news since the project began, and represents months of hectic work by the team and other contributors since before the release of version 1.8 last October, and of the 2.0 beta in February. Thank you to the many CKAN users for your patience – we think you’ll agree it’s been worth the wait.

[Screenshot: Front page]

CKAN 2.0 is a significant improvement on 1.x versions for data users, programmers, and publishers. Enormous thanks are due to the many users, data publishers, and others in the data community, who have submitted comments, code contributions and bug reports, and helped to get CKAN to where it is. Thanks also to OKF clients who have supported bespoke work in various areas that has become part of the core code. These include data.gov, the US government open data portal, which will be re-launched using CKAN 2.0 in a few weeks. Let’s look at the main changes in version 2.0. If you are in a hurry to see it in action, head on over to demo.ckan.org, where you can try it out.

Summary

CKAN 2.0 introduces a new sleek default design, and easier theming to build custom sites. It has a completely redesigned authorisation system enabling different departments or bodies to control their own workflow. It has more built-in previews, and publishers can add custom previews for their favourite file types. News feeds and activity streams enable users to keep up with changes or new datasets in areas of interest. A new version of the API enables other applications to have full access to all the capabilities of CKAN. And there are many other smaller changes and bug fixes.
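As a small taste of the new API, a dataset search against a CKAN 2.0 site can be sketched in a few lines of Python. This assumes the version 3 action API’s package_search endpoint; the site (demo.ckan.org, as mentioned above) and the query term are illustrative:

```python
import json
import urllib.parse

def search_url(ckan_url, query, rows=5):
    """Build a package_search URL for a CKAN site."""
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    return ckan_url.rstrip("/") + "/api/3/action/package_search?" + params

def dataset_titles(response_json):
    """Extract dataset titles from a package_search response body."""
    body = json.loads(response_json)
    if not body.get("success"):
        raise RuntimeError(body.get("error"))
    return [pkg["title"] for pkg in body["result"]["results"]]

if __name__ == "__main__":
    url = search_url("http://demo.ckan.org", "transport")
    # urllib.request.urlopen(url).read() would return the JSON response
    print(url)
```

Every capability of the web interface, from creating datasets to following activity streams, is exposed through action endpoints of the same shape.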

Design and theming

The first thing that previous CKAN users will notice is the greatly improved page design. For the first time, CKAN’s look and feel has been carefully designed from the ground up by experienced professionals in web and information design. This has affected not only the visual appearance but many aspects of the information architecture, from the ‘breadcrumb trail’ navigation on each page, to the appearance and position of buttons and links to make their function as transparent as possible.

[Screenshot: dataset page]

Under the surface, an even more radical change has affected how pages are themed in CKAN. Themes are implemented using templates, and the old templating system has been replaced with the newer and more flexible Jinja2. This makes it much easier for developers to theme their CKAN instance to fit in with the overall theme or branding of their web presence.

Authorisation and workflow: introducing CKAN ‘Organizations’

Another major change affects how users are authorised to create, publish and update datasets. In CKAN 1.x, authorisation was granted to individual users for each dataset. This could be augmented with a ‘publisher mode’ to provide group-level access to datasets. A greatly expanded version of this mode, called ‘Organizations’, is now the default system of authorisation in CKAN. This is much more in line with how most CKAN sites are actually used.

[Screenshot: Organizations page]

Organizations make it possible for individual departments, bodies, groups, etc. to publish their own data in CKAN, and to have control over their own publishing workflow. Different users can have different roles within an Organization, with different authorisations. Linked to this is the possibility for each dataset to have a different status, reflecting its progress through the workflow, and to be public or private. In the default set-up, Organization user roles include Members (who can read the Organization’s private datasets), Editors (who can add, edit and publish datasets) and Admins (who can add and change roles for users).

More previews

In addition to the existing image previews and table, graph and map previews for spreadsheet data, CKAN 2.0 includes previews for PDF files (shown below), HTML (in an iframe), and JSON. Additionally there is a new plugin extension point that makes it possible to add custom previews for different data types, as described in this recent blog post.

[Screenshot: PDF preview]
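Conceptually, a preview system like this is a mapping from resource formats to renderers, which a plugin can extend with new entries. The following Python sketch illustrates the idea only; it is not CKAN’s actual plugin interface, and all names in it are hypothetical:

```python
# Hypothetical registry mapping resource formats to preview templates.
PREVIEWABLE = {
    "pdf": "pdf_preview.html",
    "html": "iframe_preview.html",
    "json": "json_preview.html",
    "csv": "recline_preview.html",  # table, graph and map previews
}

def preview_template(resource_format, registry=PREVIEWABLE):
    """Return the preview template for a format, or None if not previewable."""
    return registry.get(resource_format.lower())

# A custom preview plugin would, in effect, register a new format:
PREVIEWABLE["geojson"] = "geojson_preview.html"

print(preview_template("GeoJSON"))  # → geojson_preview.html
print(preview_template("exe"))      # → None (no preview available)
```

The extension point described in the blog post plays the role of the registration step here: a plugin declares which formats it can preview and supplies the rendering for them.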

News feeds and activity streams

CKAN 2.0 provides users with ways to see when new data or changes are made in areas that they are interested in. Users can ‘follow’ datasets, Organizations, or groups (curated collections of datasets). A user’s personalised dashboard includes a news feed showing activity from the followed items – new datasets, revised metadata and changes or additions to dataset resources. If there are entries in your news feed since you last read it, a small flag shows the number of new items, and you can opt to receive notifications of them via e-mail.

Each dataset, Organization etc also has an ‘activity stream’, enabling users to see a summary of its recent history.

[Screenshot: News feed]

Programming with CKAN: meet version 3 of the API

CKAN’s powerful application programming interface (API) makes it possible for other machines and programs to automatically read, search and update datasets. CKAN’s API was previously designed according to REST principles. RESTful APIs are deservedly popular as a way to expose a clean interface to certain views on a collection of data. However, for CKAN we felt it would be better to give applications full access to CKAN’s own internal machinery.

A new version of the API – version 3 – trialled in beta in CKAN 1.8, replaced the REST design with remote procedure calls, enabling applications or programmers to call the same procedures as CKAN’s own code uses to implement its user interface. Anything that is possible via the user interface, and a good deal more, is therefore possible through the API. This proved popular and stable, and so, with minor tweaks, it is now the recommended API. Old versions of the API will continue to be provided for backward compatibility.
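In the RPC style, every operation is exposed at a uniform endpoint of the form `/api/3/action/<action_name>`. A minimal sketch of constructing such a call in Python (the base URL and query values are examples; `package_search` is one of CKAN’s documented actions):

```python
from urllib.parse import urlencode

def action_url(base_url, action, **params):
    """Build a URL for a CKAN API v3 action call, e.g. package_search."""
    url = f"{base_url.rstrip('/')}/api/3/action/{action}"
    if params:
        url += "?" + urlencode(params)
    return url

# Search the demo site for datasets matching "population", 5 per page:
print(action_url("http://demo.ckan.org", "package_search", q="population", rows=5))
# → http://demo.ckan.org/api/3/action/package_search?q=population&rows=5
```

Fetching that URL returns a JSON document; by CKAN’s convention the interesting payload sits under its `"result"` key, alongside a `"success"` flag.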

Documentation, documentation, documentation

CKAN comes with installation and administration documentation, which we try to keep complete and up to date. The major changes in the rest of CKAN have thus required a similarly concerted effort on the documentation, and the docs have been overhauled for 2.0. It’s great when we hear that others have implemented their own installations of CKAN, something that’s been increasing lately, and we hope to see even more of this. CKAN is a large and complex system to deploy, and work on improving the docs continues: version 2.1 will be another step forward. Where people do run into problems, help remains available as usual on the community mailing lists.

… And more

There are many other minor changes and bug fixes in CKAN 2.0. For a full list, see the CKAN changelog.

Installing

To install your own CKAN, or to upgrade an existing installation, you can install it as a package on Ubuntu 12.04 or do a source installation. Full installation and configuration instructions are at docs.ckan.org.

Try it out

You can try out the main features at demo.ckan.org. Please let us know what you think!

LobbyPlag – Who is really writing the law?

Martin Virtel - March 22, 2013 in Featured Project, Open Government Data

Sometimes, the band continues to play because the audience is enjoying the music so much. This is pretty much what happened to Lobbyplag. Our plan was to drive home a single point that outraged us: some Members of the European Parliament were taking law proposals verbatim from lobbyists and trying to slip them into the upcoming EU privacy law. They actually copy-and-pasted texts provided by the likes of Amazon, Google, Facebook or some banking industry body. The fact itself was Max Schrems’ discovery. Max is a lawyer, and he sought the help of Richard Gutjahr and the data journalists and developers from OpenDataCity to present his evidence to the public in the form of a website called Lobbyplag. The name evokes memories of past projects where people had hunted down plagiarism in the doctoral theses of German politicians.

Lobbyplag – discover the copy&paste politicians from Martin Virtel on Vimeo.

A lovestorm of reactions ensued, not only from the usual consumer privacy advocates. The site struck a chord among lobbying-stressed lawmakers and outraged citizens alike. Wolfgang Thierse, the president of the German Parliament, called it “a meritorious endeavor”, and two European lawmakers pledged to disclose their sources. People started proposing other laws to look at, started sending us papers from lobbyists, and offered their help for finding more lobby-plagiarizing politicians.

What had happened? Looking into the details of privacy law is not normally a crowd-pleaser, and like most laws this one was being made out of sight, watched over only by a few specialists. This is the norm especially for the EU parliament, which still doesn’t attract a level of public attention and scrutiny to match its real power. There had already been a lot of reports about the intense lobbying against the privacy law.

Lobbyplag made a difference because Lobbyplag set a different tone. We simply presented the proof of what was being done behind closed doors – and gave people the power to look it up for themselves. And they did. And they liked it. And asked for more.

[Screenshot: LobbyPlag – the stats from the IMCO committee]

At that point, we decided that this was to be more than a single-issue website; this was a public utility in the making. We successfully completed an €8,000 crowdfunding campaign at Krautreporter.de, a fledgling German platform, and we are now building the tools that interested citizens (assisted by algorithms) will need to make the comparisons between lobbyist texts and law amendments, and draw the conclusions by themselves. Stefan’s Parltrack project, which provides APIs to the European Parliament’s paperwork, will provide the foundation, as it did for the first iteration of Lobbyplag, and we’re looking at using the Open Knowledge Foundation’s PyBossa, a microtasking framework (you can see it in action at crowdcrafting.org).

Of course, the first round of money is only a start – we’re a team of volunteers – so we also submitted Lobbyplag to the Knight News Challenge, which this year fittingly is looking to support projects that improve the way citizens and governments interact – you can read more about the proposal and provide feedback on the Knight News page.

We think that making comparisons easy and bringing lobbying out into the light is a way to achieve that. There’s nothing inherently wrong with lawmakers relying on experts when they’re not experts themselves – you’d expect them to. But if they hide who they’ve been listening to, and if they only listen to one side, they contribute towards public distrust in their profession. Making the process of lawmaking and influencing lawmakers more transparent will result in better debate, better understanding and better laws.

There’s a saying that “Laws, like sausages, cease to inspire respect in proportion as we know how they are made” – but we think that is not true any longer. Citizens all over the world are not really willing to respect lawmakers unless they can trace what they are stuffing in there.
