
You are browsing the archive for Featured Project.

Principles for Open Contracting

Guest - June 24, 2013 in Featured Project, Open Standards, Uncategorized

The following guest post is by the Open Contracting Partnership, announcing the release of their Principles for Open Contracting. It is cross-posted from their website.

Contracts

Over the past year, the Open Contracting Partnership has facilitated a global consultation process to create a set of global principles that can serve as a guide for all of those seeking to advance open contracting around the world.

The principles reflect norms and best practices from around the world related to disclosure and participation in public contracting.

They have been created with the input and feedback of nearly 200 members of the open contracting community from government, the private sector, civil society, donor organizations, and international financial institutions. These collaborators contributed input from various sector-specific perspectives (such as service delivery, infrastructure, extractive industries, and land).

The Open Contracting Partnership welcomes all your questions, comments and feedback. Please contact us at partnership@open-contracting.com.

OPEN CONTRACTING GLOBAL PRINCIPLES

Preamble: These Principles reflect the belief that increased disclosure and participation in public contracting will have the effects of making contracting more competitive and fair, improving contract performance, and securing development outcomes. While recognizing that legitimate needs for confidentiality may justify exemptions in exceptional circumstances, these Principles are intended to guide governments and other stakeholders to affirmatively disclose documents and information related to public contracting in a manner that enables meaningful understanding, effective monitoring, efficient performance, and accountability for outcomes. These Principles are to be adapted to sector-specific and local contexts and are complementary to sector-based transparency initiatives and global open government movements.

Affirmative Disclosure

  1. Governments shall recognize the right of the public to access information related to the formation, award, execution, performance, and completion of public contracts.
  2. Public contracting shall be conducted in a transparent and equitable manner, in accordance with publicly disclosed rules that explain the functioning of the process, including policies regarding disclosure.
  3. Governments shall require the timely, current, and routine publication of enough information about the formation, award, execution, performance, and completion of public contracts to enable the public, including media and civil society, to understand and monitor public contracting as a safeguard against inefficient, ineffective, or corrupt use of public resources. This requires affirmative disclosure of:
    1. Contracts, including licenses, concessions, permits, grants or any other document exchanging public goods, assets, or resources (including all annexes, schedules and documents incorporated by reference) and any amendments thereto;
    2. Related pre-studies, bid documents, performance evaluations, guarantees, and auditing reports;
    3. Information concerning contract formation, including:
      1. The planning process of the procurement;
      2. The method of procurement or award and the justification thereof;
      3. The scope and specifications for each contract;
      4. The criteria for evaluation and selection;
      5. The bidders or participants in the process, their validation documents, and any procedural exemptions for which they qualify;
      6. Any conflicts of interest uncovered or debarments issued;
      7. The results of the evaluation, including the justification for the award; and
      8. The identity of the contract recipient and any statements of beneficial ownership provided;
    4. Information related to performance and completion of public contracts, including information regarding subcontracting arrangements, such as:
      1. General schedules, including major milestones in execution, and any changes thereto;
      2. Status of implementation against milestones;
      3. Dates and amounts of stage payments made or received (against total amount) and the source of those payments;
      4. Service delivery and pricing;
      5. Arrangements for ending contracts;
      6. Final settlements and responsibilities;
      7. Risk assessments, including environmental and social impact assessments;
      8. Assessments of assets and liabilities of government related to the contract;
      9. Provisions in place to ensure appropriate management of ongoing risks and liabilities; and
      10. Appropriate financial information regarding revenues and expenditures, such as time and cost overruns, if any.
  4. Governments shall develop systems to collect, manage, simplify and publish contracting data regarding the formation, award, execution, performance and completion of public contracts in an open and structured format, in accordance with the Open Contracting Data Standards as they are developed, in a user-friendly and searchable manner.
  5. Contracting information made available to the public shall be as complete as possible, with any exceptions or limitations narrowly defined by law, ensuring that citizens have effective access to recourse in instances where access to this information is in dispute.
  6. Contracting parties, including international financial institutions, shall support disclosure in future contracting by precluding confidentiality clauses, drafting confidentiality narrowly to cover only permissible limited exemptions, or including provisions within the contractual terms and conditions to allow for the contract and related information to be published.
Participation, Monitoring, and Oversight

  7. Governments shall recognize the right of the public to participate in the oversight of the formation, award, execution, performance, and completion of public contracts.
  8. Governments shall foster an enabling environment, which may include legislation, that recognizes, promotes, protects, and creates opportunities for public consultation and monitoring of public contracting, from the planning stage to the completion of contractual obligations.
  9. Governments shall work together with the private sector, donors, and civil society to build the capacities of all relevant stakeholders to understand, monitor and improve public contracting, and to create sustainable funding mechanisms to support participatory public contracting.
  10. Governments have a duty to ensure that oversight authorities, including parliaments, audit institutions, and implementing agencies, are able to access and utilize disclosed information, acknowledge and act upon citizen feedback, and encourage dialogue and consultation between contracting parties and civil society organizations in order to improve the quality of contracting outcomes.
  11. With regard to individual contracts of significant impact, contracting parties should craft strategies for citizen consultation and engagement in the management of the contract.
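Principle 4 above calls for contracting data to be published in an open, structured format. Purely as an illustrative sketch (the Open Contracting Data Standards it refers to were still being developed at the time of writing, so every field name below is an assumption rather than part of any published standard), a single machine-readable contract record could look something like this:

```python
import json

# Hypothetical contract disclosure record; the field names are illustrative,
# not drawn from any published standard.
contract = {
    "contract_id": "2013-PW-0042",
    "buyer": "Ministry of Public Works",
    "supplier": "Example Construction Ltd",
    "procurement_method": "open tender",
    "award_date": "2013-05-01",
    "value": {"amount": 1250000, "currency": "EUR"},
    "milestones": [
        {"title": "Site preparation", "due": "2013-09-01", "status": "met"}
    ],
    "documents": [
        {"type": "bid_documents", "url": "http://example.gov/tenders/0042.pdf"}
    ],
}

# Serialise for publication alongside the human-readable contract documents.
print(json.dumps(contract, indent=2))
```

Publishing records like this, in addition to the contract documents themselves, is what would let the public, media and civil society aggregate, search and compare awards across agencies.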

Opening the weather, part 2

Nicolas Baldeck - June 20, 2013 in Featured Project

See also “Opening the weather, part 1”

Stormy sea at Castletown

I began paragliding a few years ago. It’s maybe the most weather-dependent sport in the world. We often fly in mountainous areas, very close to the ground. We need to know about local effects like thermal updrafts, cloud growth, mountain breezes, foehn winds and all sorts of other micro-weather effects.

I discovered there was very little information available at this level of detail. The information exists, but is not displayed anywhere because it’s too specific.

I asked our national weather service, Météo France, if they could provide me with the raw data I needed to make my own paragliding forecasts. They told me: “Fine, it’s €100,000 a year”. A little too expensive for my personal use (or for any mobile app developer)…

Investigations revealed that only a few public agencies globally share this data freely, mostly based in the US, Canada and Norway. I got some data from the US global model (GFS), which is used by pretty much every weather website. But those forecasts are very limited. The global model is really coarse (a 55 km grid) and cannot see topography or land use. It doesn’t even see the Alps – not very useful for paragliding.

To get the data at the level I need, I have to run my own high-resolution regional weather model, using coarse US data as input (see my meteo-parapente.com website). It’s not easy. It requires High Performance Computing (HPC) technology, with our own computing cluster, servers and archiving infrastructure.

openmeteo

This project started as a personal attempt to get better weather info for my paragliding, but the process has made me realise there are bigger issues at stake.

Everybody knows weather has an impact on most activities. According to METNEXT, 25% of France’s GDP is dependent on the weather.
Weather is cheap: spend a dollar on better weather knowledge and you save more than twenty by avoiding losses and fatalities during severe weather. Margaret Zeigler at #openagdata points out that 90% of crop losses are due to weather.

In the US, weather data is in the public domain. But in most European countries, it’s not. Data from model outputs, rain radars, ground stations and satellites is sold for hundreds of thousands of euros.

This policy has a lot of side effects:

  • Free public services are quite bad, because the agencies need to sell “premium” services.
  • No startup or SME can afford these prices, so there is no “weather” business in Europe: the sector is growing at 1% here against 20% in the US.
  • Public agencies and researchers have great difficulty getting the data they need.

I was sad to learn that my département buys its weather data from a Belgian company instead of from the French national public agency.

So, OpenMeteoData has several goals:

  • To provide easy access to already available data.
  • To gather people and technical resources for creating open forecasts (both human analyses and numerical models).
  • To help institutions open their data, and to explain the benefits to them.
  • To act as a catalyst in the debate about opening public data. I’m already in touch with the French government and Météo France.
  • To provide a platform to gather projects about open meteorology.

If you’d like to talk about the weather, our Open Sustainability list might be the right place for you!

Opening the weather, part 1

Theodora Middleton - June 18, 2013 in Featured Project

Red sky at night - Unst

Red sky at night, shepherd’s delight
A cow with its tail to the west makes the weather best
Onion skins very thin, mild winter coming in

Humans have always wanted to know what the weather has in store for them, and have come up with a whole load of ways to predict what’s coming, some better than others.

Weather forecasting as we know it began in earnest in the nineteenth century, when the invention of the electric telegraph revolutionised long-distance communications and made it possible for information about incoming weather to travel faster than the weather itself. Since then weather forecasting has become ever-more accurate, with improvements in the technology of reporting and communicating, as well as in the predictive models, making it possible for us to know the future weather in greater detail than ever before.

The data collected by weather stations across the world is translated by algorithms into predictions about the coming weather. But while some raw data is freely available to those who wish to use it, other datasets are locked behind towering paywalls, and output predictions are generally the closed property of big forecasting companies.

Two projects which have emerged recently to challenge this are OpenWeatherMap.org and OpenMeteoData.org. As Olga Ukolova from OpenWeatherMap explained:

“We believe that enthusiasts joined by one idea could achieve more than large companies. We believe that meteorological data must be available, free and easy-to-use.”

An open weather forecasting service has the ability to harness the input of enthusiasts around the world, to produce forecasts of greater precision and detail than can be achieved by monolithic companies. Inspired by the success of community-driven knowledge creation in cases like Wikipedia and OpenStreetMap, the guys at OpenWeatherMap are looking to improve the quality of available information, while at the same time wresting control from the hands of profit-driven corporations:

“The project attracts enthusiasts to the process of data collection and estimation of data preciseness that increases accuracy of weather forecasts. If you have a weather station you can connect it to OpenWeatherMap service. You will get a convenient interface for gathering and monitoring data from your weather station. And you can embed the weather station data into your home page.”

The results are available to developers openly and for free:

“Mobile apps developers can receive any weather data for their applications by using JSON / XML API. Lots of weather applications for Android and iOS use OpenWeatherMap as weather data source. By the way the data can be received from WMS server and can be embedded into any cartographic web-application.

Web-application developers that use cartographic services can easily add weather information to it. OpenWeatherMap provides libraries for OpenStreetMaps and Google map. Plug-ins for Drupal and other CMS are available too.”
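To give a flavour of the JSON API described above, here is a minimal Python sketch. The endpoint path and the response fields used (`weather`, `main.temp` reported in Kelvin) follow OpenWeatherMap’s public documentation at the time of writing, but treat them as assumptions to check against the current docs; an API key may be required depending on the service’s terms.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

OWM_URL = "http://api.openweathermap.org/data/2.5/weather"

def current_weather(city, api_key=None):
    """Fetch current conditions for a city as a parsed JSON dict."""
    params = {"q": city}
    if api_key:
        params["appid"] = api_key
    with urlopen(OWM_URL + "?" + urlencode(params)) as resp:
        return json.loads(resp.read().decode("utf-8"))

def summarize(payload):
    """Turn an OpenWeatherMap response into a one-line summary."""
    description = payload["weather"][0]["description"]
    temp_c = payload["main"]["temp"] - 273.15  # temperatures arrive in Kelvin
    return "%s: %s, %.1f °C" % (payload["name"], description, temp_c)

# e.g. print(summarize(current_weather("Paris")))
```

The same payload shape is what the mobile apps mentioned above consume; an app only needs to pick out the handful of fields it displays.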

weather map
Map from OpenWeatherMap.org

Later this week, Nicolas Baldeck from OpenMeteoData will tell us more about how he came to be interested in opening the weather, and what future he sees for the project.

We need open carbon emissions data now!

Jonathan Gray - May 13, 2013 in Access to Information, Campaigning, Featured, Featured Project, Open Data, Policy, WG Sustainability, Working Groups

Last week the average concentration of carbon dioxide in the atmosphere reached 400 parts per million, a level which is said to be unprecedented in human history.

Leading scientists and policy makers say that we should be aiming for no more than 350 parts per million to avoid catastrophic runaway climate change.

But what’s in a number? Why is the increase from 399 to 400 significant?

While the actual change is mainly symbolic (and some commentators have questioned whether we’re hovering above or just below 400), the real story is that we are badly failing to cut emissions fast enough.

Given the importance of this number, which represents humanity’s progress towards tackling one of the biggest challenges we currently face, the fact that it has been making the news around the world is very welcome indeed.

Why don’t we hear about the levels of carbon dioxide in the atmosphere from politicians or the press more often? While there are regularly headlines about inflation, interest rates and unemployment, figures for carbon emissions rarely receive the attention they deserve.

We want this to change. And we think that having more timely and more detailed information about carbon emissions is essential if we are to keep up pressure on the world’s governments and companies to make the cuts that the world needs.

As our Advisory Board member Hans Rosling puts it, carbon emissions should be on the world’s dashboard.

Over the coming months we are going to be planning and undertaking activities to advocate for the release of more timely and granular carbon emissions data. We are also going to be working with our global network to catalyse projects which use it to communicate the state of the world’s carbon emissions to the public.

If you’d like to join us, you can follow #OpenCO2 on Twitter or sign up to our open-sustainability mailing list.


Image credit: Match smoke by AMagill on Flickr. Released under Creative Commons Attribution license.

Announcing CKAN 2.0

Mark Wainwright - May 10, 2013 in CKAN, Featured, Featured Project, News, OKF Projects, Open Data, Open Government Data, Releases, Technical

CKAN is a powerful, open source, open data management platform, used by governments and organizations around the world, including the UK and US government open data portals, to make large collections of data accessible.

Today we are very happy and excited to announce the final release of CKAN 2.0. This is the most significant piece of CKAN news since the project began, and represents months of hectic work by the team and other contributors since before the release of version 1.8 last October, and of the 2.0 beta in February. Thank you to the many CKAN users for your patience – we think you’ll agree it’s been worth the wait.

[Screenshot: Front page]

CKAN 2.0 is a significant improvement on 1.x versions for data users, programmers, and publishers. Enormous thanks are due to the many users, data publishers, and others in the data community, who have submitted comments, code contributions and bug reports, and helped to get CKAN to where it is. Thanks also to OKF clients who have supported bespoke work in various areas that has become part of the core code. These include data.gov, the US government open data portal, which will be re-launched using CKAN 2.0 in a few weeks. Let’s look at the main changes in version 2.0. If you are in a hurry to see it in action, head on over to demo.ckan.org, where you can try it out.

Summary

CKAN 2.0 introduces a new sleek default design, and easier theming to build custom sites. It has a completely redesigned authorisation system enabling different departments or bodies to control their own workflow. It has more built-in previews, and publishers can add custom previews for their favourite file types. News feeds and activity streams enable users to keep up with changes or new datasets in areas of interest. A new version of the API enables other applications to have full access to all the capabilities of CKAN. And there are many other smaller changes and bug fixes.

Design and theming

The first thing that previous CKAN users will notice is the greatly improved page design. For the first time, CKAN’s look and feel has been carefully designed from the ground up by experienced professionals in web and information design. This has affected not only the visual appearance but many aspects of the information architecture, from the ‘breadcrumb trail’ navigation on each page, to the appearance and position of buttons and links, making their function as transparent as possible.

[Screenshot: dataset page]

Under the surface, an even more radical change has affected how pages are themed in CKAN. Themes are implemented using templates, and the old templating system has been replaced with the newer and more flexible Jinja2. This makes it much easier for developers to theme their CKAN instance to fit in with the overall theme or branding of their web presence.
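To give a flavour of what Jinja2-based theming looks like, here is a minimal child template that inherits a core page and overrides a single block. The template and block names are illustrative rather than taken from CKAN’s actual template tree; the mechanism (`extends`, `block`, `super()`) is standard Jinja2.

```jinja
{# A child theme template: inherit everything from a core page template
   and override just one block. Names here are illustrative. #}
{% extends "page.html" %}

{% block primary_content %}
  <h2>Our department's data portal</h2>
  {{ super() }}  {# keep the original block content below the banner #}
{% endblock %}
```

Because a theme only overrides the blocks it cares about, it keeps working across upgrades far better than wholesale copies of core templates did.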

Authorisation and workflow: introducing CKAN ‘Organizations’

Another major change affects how users are authorised to create, publish and update datasets. In CKAN 1.x, authorisation was granted to individual users for each dataset. This could be augmented with a ‘publisher mode’ to provide group-level access to datasets. A greatly expanded version of this mode, called ‘Organizations’, is now the default system of authorisation in CKAN. This is much more in line with how most CKAN sites are actually used.

[Screenshot: Organizations page]

Organizations make it possible for individual departments, bodies, groups, etc., to publish their own data in CKAN, and to have control over their own publishing workflow. Different users can have different roles within an Organization, with different authorisations. Linked to this is the possibility for each dataset to have different statuses, reflecting their progress through the workflow, and to be public or private. In the default set-up, Organization user roles include Members (who can read the Organization’s private datasets), Editors (who can add, edit and publish datasets) and Admins (who can add and change roles for users).

More previews

In addition to the existing image previews and table, graph and map previews for spreadsheet data, CKAN 2.0 includes previews for PDF files (shown below), HTML (in an iframe), and JSON. Additionally there is a new plugin extension point that makes it possible to add custom previews for different data types, as described in this recent blog post.

[Screenshot: PDF preview]

News feeds and activity streams

CKAN 2.0 provides users with ways to see when new data or changes are made in areas that they are interested in. Users can ‘follow’ datasets, Organizations, or groups (curated collections of datasets). A user’s personalised dashboard includes a news feed showing activity from the followed items – new datasets, revised metadata and changes or additions to dataset resources. If there are entries in your news feed since you last read it, a small flag shows the number of new items, and you can opt to receive notifications of them via e-mail.

Each dataset, Organization etc also has an ‘activity stream’, enabling users to see a summary of its recent history.

[Screenshot: News feed]

Programming with CKAN: meet version 3 of the API

CKAN’s powerful application programming interface (API) makes it possible for other machines and programs to automatically read, search and update datasets. CKAN’s API was previously designed according to REST principles. RESTful APIs are deservedly popular as a way to expose a clean interface to certain views on a collection of data. However, for CKAN we felt it would be better to give applications full access to CKAN’s own internal machinery.

A new version of the API – version 3 – trialled in beta in CKAN 1.8, replaced the REST design with remote procedure calls, enabling applications or programmers to call the same procedures as CKAN’s own code uses to implement its user interface. Anything that is possible via the user interface, and a good deal more, is therefore possible through the API. This proved popular and stable, and so, with minor tweaks, it is now the recommended API. Old versions of the API will continue to be provided for backward compatibility.
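In practice, calling the version 3 API is a simple HTTP POST of a JSON payload to a named action. The following Python sketch is illustrative: the URL layout (`/api/3/action/<name>`), the `success`/`result` response envelope, and the read-only `package_list` action follow the CKAN API documentation, but check the docs for your instance before relying on them.

```python
import json
from urllib.request import Request, urlopen

def action_url(base_url, action):
    """Build the endpoint URL for a CKAN v3 action."""
    return "%s/api/3/action/%s" % (base_url.rstrip("/"), action)

def ckan_action(base_url, action, data=None, api_key=None):
    """POST a JSON payload to a CKAN action and return its 'result'."""
    headers = {"Content-Type": "application/json"}
    if api_key:  # only needed for actions that create or modify data
        headers["Authorization"] = api_key
    req = Request(action_url(base_url, action),
                  json.dumps(data or {}).encode("utf-8"), headers)
    with urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    if not body.get("success"):
        raise RuntimeError(body.get("error"))
    return body["result"]

# For example, to list dataset names on the demo instance:
#   names = ckan_action("http://demo.ckan.org", "package_list")
```

Because each action mirrors an internal CKAN logic function, anything the web interface does can be scripted through the same calls.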

Documentation, documentation, documentation

CKAN comes with installation and administration documentation which we try to keep complete and up-to-date. The major changes in the rest of CKAN have required a similarly concerted effort on the documentation, so the docs have been overhauled for 2.0. It’s great when we hear that others have implemented their own installation of CKAN, something that’s been happening more and more lately, and we hope to see even more of this. CKAN is a large and complex system to deploy, and work on improving the docs continues: version 2.1 will be another step forward. Where people do run into problems, help remains available as usual on the community mailing lists.

… And more

There are many other minor changes and bug fixes in CKAN 2.0. For a full list, see the CKAN changelog.

Installing

To install your own CKAN, or to upgrade an existing installation, you can install it as a package on Ubuntu 12.04 or do a source installation. Full installation and configuration instructions are at docs.ckan.org.

Try it out

You can try out the main features at demo.ckan.org. Please let us know what you think!

LobbyPlag – Who is really writing the law?

Martin Virtel - March 22, 2013 in Featured Project, Open Government Data

Sometimes, the band continues to play because the audience is enjoying the music so much. This is pretty much what happened to Lobbyplag. Our plan was to drive home a single point that outraged us: some Members of the European Parliament were taking law proposals verbatim from lobbyists and trying to slip them into the upcoming EU privacy law. They actually copied and pasted texts provided by the likes of Amazon, Google, Facebook or some banking industry body. The fact itself was Max Schrems’ discovery. Max is a lawyer, and he sought the help of Richard Gutjahr and the data journalists and developers from OpenDataCity to present his evidence to the public in the form of a website called Lobbyplag. The name evokes memories of past projects where people hunted down plagiarism in the doctoral theses of German politicians.

Lobbyplag – discover the copy&paste politicians from Martin Virtel on Vimeo.

A lovestorm of reactions ensued, not only from the usual consumer privacy advocates. The site struck a chord among lobbying-stressed lawmakers and outraged citizens alike. Wolfgang Thierse, the president of the German Parliament, called it “a meritorious endeavor”, and two European lawmakers pledged to disclose their sources. People started proposing other laws to look at, started sending us papers from lobbyists, and offered their help for finding more lobby-plagiarizing politicians.

What had happened? Looking into the details of privacy law is not normally a crowd-pleaser, and like most laws this one was being made out of sight, watched over only by a few specialists. This is the norm especially for the EU parliament, which still doesn’t attract a level of public attention and scrutiny to match its real power. There had already been plenty of reports about the intense lobbying against the privacy law.

Lobbyplag made a difference because Lobbyplag set a different tone. We simply presented the proof of what was being done behind closed doors – and gave people the power to look it up for themselves. And they did. And they liked it. And asked for more.

[Image: Lobbyplag statistics from the IMCO committee]

At that point, we decided that this was to be more than a single-issue website: this was a public utility in the making. We successfully completed an €8,000 crowdfunding campaign at Krautreporter.de, a fledgling German platform, and we are now building the tools that interested citizens (assisted by algorithms) will need to make the comparisons between lobbyist texts and law amendments, and draw the conclusions for themselves. Stefan’s Parltrack project, which provides APIs to the European Parliament’s paperwork, will provide the foundation, as it did for the first iteration of Lobbyplag, and we’re looking at using the Open Knowledge Foundation’s PyBossa, a microtasking framework (you can see it in action at crowdcrafting.org).

Of course, the first round of money is only a start – we’re a team of volunteers – so we also submitted Lobbyplag to the Knight News Challenge, which this year is fittingly looking to support projects that improve the way citizens and governments interact. You can read more about the proposal and provide feedback on the Knight News page.

We think that making comparisons easy and bringing lobbying out into the light is a way to achieve that. There’s nothing inherently wrong with lawmakers relying on experts when they’re not experts themselves – you’d expect them to. But if they hide who they’ve been listening to, and if they only listen to one side, they contribute towards public distrust in their profession. Making the process of lawmaking and influencing lawmakers more transparent will result in better debate, better understanding and better laws.

There’s a saying that “Laws, like sausages, cease to inspire respect in proportion as we know how they are made” – but we think that is not true any longer. Citizens all over the world are not really willing to respect lawmakers unless they can trace what they are stuffing in there.

The Biggest Failure of Open Data in Government

Philip Ashlock - March 15, 2013 in Featured Project, Open Government Data

Many open data initiatives forget to include the basic facts about the government itself

In the past few years we’ve seen a huge shift in the way governments publish information. More and more governments are proactively releasing information as raw open data rather than simply putting out reports or responding to requests for information. This has enabled all sorts of great tools like the ones that help us find transportation or the ones that let us track the spending and performance of our government. Unfortunately, somewhere in this new wave of open data we forgot some of the most fundamental information about our government, the basic “who”, “what”, “when”, and “where”.

Census Dotmap by Brandon Martin-Anderson

Do you know all the different government bodies and districts that you’re a part of? Do you know who all your elected officials are? Do you know where and when to vote or when the next public meeting is? Now perhaps you’re thinking that this information is easy enough to find, so what does this have to do with open data? It’s true, it might not be too hard to learn about the highest office or who runs your city, but it usually doesn’t take long before you get lost down the rabbit hole. Government is complex, particularly in America where there can be a vast multitude of government districts and offices at the local level.

It’s difficult enough to come by comprehensive information about local government, so there definitely aren’t many surveys that help convey this problem, but you can start to get the idea from a pretty high level. Studies have shown that only about two thirds of Americans can name their governor (Pew 2007) while less than half can name even one of their senators (Social Capital Community Survey 2006). This excerpt from Andrew Romano in Newsweek captures the problem well:

Most experts agree that the relative complexity of the U.S. political system makes it hard for Americans to keep up. In many European countries, parliaments have proportional representation, and the majority party rules without having to “share power with a lot of subnational governments,” notes Yale political scientist Jacob Hacker, coauthor of Winner-Take-All Politics. In contrast, we’re saddled with a nonproportional Senate; a tangle of state, local, and federal bureaucracies; and near-constant elections for every imaginable office (judge, sheriff, school-board member, and so on). “Nobody is competent to understand it all, which you realize every time you vote,” says Michael Schudson, author of The Good Citizen. “You know you’re going to come up short, and that discourages you from learning more.”


How can we have a functioning democracy when we don’t even know the local government we belong to or who our democratically elected representatives are? It’s not that Americans are simply too ignorant or apathetic to know this information, it’s that the system of government really is complex. With what often seems like chaos on the national stage it can be easy to think of local government as simple, yet that’s rarely the case. There are about 35,000 municipal governments in the US, but when you count all the other local districts there are nearly 90,000 government bodies (US Census 2012) with a total of more than 500,000 elected officials (US Census 1992). The average American might struggle to name their representatives in Washington D.C., but that’s just the tip of the iceberg. They can easily belong to 15 government districts with more than 50 elected officials representing them.

We overlook the fact that it’s genuinely difficult to find information about all our levels of government. We unconsciously assume that this information is published on some government website well enough that we don’t need to include it as part of any kind of open data program. Even the cities that have been very progressive with open data like Washington DC and New York neglect to publish basic information like the names and contact details of their city councilmembers as raw open data. The NYC Green Book was finally posted online last year, but it’s still not available as raw data. Even in the broader open data and open government community, this information doesn’t get much attention. The basic contact details for government offices and elected officials were not part of the Open Data Census and neither were jurisdiction boundaries for government districts.


Fortunately, a number of projects have started working to address this discrepancy. In the UK, there’s already been great progress with websites like OpenlyLocal, TheyWorkForYou and MapIt, but similar efforts in North America are much more nascent. OpenNorth Represent has quickly become the most comprehensive database of Canadian elected officials with data that covers about half the population and boundary data that covers nearly two thirds. In the US, the OpenStates project has made huge progress in providing comprehensive coverage of the roughly 7,500 state legislators across the country while the Voting Information Project has started to provide comprehensive open data on where to vote and what’s on the ballot – some of the most essential yet most elusive data in our democracy. Most recently, DemocracyMap has been digging in at the local level, building off the data from the OpenStates API and the Sunlight Congress API and deploying an arsenal of web scrapers to provide the most comprehensive open dataset of elected officials and government boundaries in the US. The DemocracyMap API currently includes over 100,000 local officials, but it still needs a lot more data for complete coverage. In order to scale, many of these projects have taken an open source community-driven approach where volunteers are able to contribute scrapers to unlock more data, but many of us have also come to realize that we need data standards so we can work together better and so our governments can publish data the right way from the start.
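To give a sense of how small a volunteer-contributed scraper can be, here is a minimal sketch using only the Python standard library. The page structure, names, and email addresses are entirely invented; real scrapers fetch an actual city roster page (e.g. with `urllib`) and feed it to a parser like this one.

```python
from html.parser import HTMLParser

# Hypothetical council roster markup; a real scraper would download
# this HTML from a city website before parsing it.
SAMPLE_HTML = """
<ul class="council">
  <li><span class="name">Jane Doe</span> <a href="mailto:jdoe@example.gov">email</a></li>
  <li><span class="name">John Roe</span> <a href="mailto:jroe@example.gov">email</a></li>
</ul>
"""

class RosterParser(HTMLParser):
    """Collects (name, email) pairs from <span class="name"> tags and mailto: links."""
    def __init__(self):
        super().__init__()
        self.officials = []
        self._in_name = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "span" and attrs.get("class") == "name":
            self._in_name = True
        elif tag == "a" and attrs.get("href", "").startswith("mailto:"):
            # Attach the email address to the most recently seen name.
            name, _ = self.officials[-1]
            self.officials[-1] = (name, attrs["href"][len("mailto:"):])

    def handle_data(self, data):
        if self._in_name:
            self.officials.append((data.strip(), None))
            self._in_name = False

parser = RosterParser()
parser.feed(SAMPLE_HTML)
print(parser.officials)
# [('Jane Doe', 'jdoe@example.gov'), ('John Roe', 'jroe@example.gov')]
```

Multiply a script like this by thousands of municipalities and the maintenance burden becomes clear, which is exactly why shared standards matter.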

James McKinney of OpenNorth has already put a lot of work into the Popolo Project, an initial draft of data standards covering some of the most basic information about government, such as people and their offices. More recently, James also started a W3C Open Government Community Group to help develop these standards with others working in this field. In the coming months I hope to see greater convergence of these efforts so we can agree on basic standards and begin to establish a common infrastructure for defining and discovering who and what our government is. Imagine an atlas for navigating the political geography of the world, from international offices down to the smallest neighborhood councils.
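To make the idea concrete, here is roughly what a city councilmember looks like in Popolo's JSON serialization: a person, an organization, and a membership tying the two together. The field names follow the draft specification; the values are invented for illustration.

```python
import json

# Invented example data; the field names follow the draft Popolo spec,
# which models people, organizations, and the memberships connecting them.
person = {
    "id": "jane-doe",
    "name": "Jane Doe",
    "email": "jdoe@example.gov",
}
organization = {
    "id": "example-city-council",
    "name": "Example City Council",
    "classification": "legislature",
}
membership = {
    "person_id": person["id"],
    "organization_id": organization["id"],
    "role": "Councilmember",
    "start_date": "2013-01-01",
}
print(json.dumps({"persons": [person], "organizations": [organization],
                  "memberships": [membership]}, indent=2))
```

The point of the standard is that every scraper and every government publisher can emit the same shapes, so downstream tools only have to be written once.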

This problem is so basic that most people are shocked when they realize it hasn’t been solved yet. It’s one of the most myopic aspects of the open government movement. Fortunately we are now making significant progress, but we need all the support we can get: scraping more data, establishing standards, and convincing folks like the Secretaries of State in many US states to publish all boundaries and basic government contact information as open data. If you’re starting a new open data program, please don’t forget about the basics!

DemocracyMap is a submission for the Knight News Challenge. You can read the full proposal and provide feedback on the Knight News Challenge page.

An Open Knowledge Platform on Building Energy Performance to Mitigate Climate Change

Anne-Claire Bellec and Martin Kaltenböck - March 14, 2013 in Featured Project, Open Data, WG Sustainability

Buildings account for more than 30% of final energy use and energy-related carbon emissions in the world today. This sector has the potential to play a crucial role in mitigating the global challenge of climate change. However, building is a local industry, and the sector is fragmented at all levels – from planning and design to practical construction – and across its various technical aspects.

In this context, how best to help the sector deliver its global mitigation potential? Our answer at the Global Buildings Performance Network (GBPN) is collaboration: stimulating collective knowledge and analysis from experts and building professionals worldwide to advance the best building performance policies and solutions that can support better decision-making. The cornerstone of this strategy is our new Linked Open Data website, launched on 21 February. This web-based tool is unique in that it has been designed as a global participatory open data knowledge hub: harvesting, curating and creating the best global knowledge and data on building performance policies.

As the energy performance of buildings becomes central to any effective strategy to mitigate climate change, policymakers, investors and project developers, members of governmental institutions and multilateral organisations need better access to building performance data and knowledge to design, evaluate and compare policies and programmes from around the world.

The GBPN encourages transparent availability of, and access to, reliable data. GBPN data can be freely used, reused and redistributed by anyone (under a Creative Commons Attribution CC-BY 3.0 FR license), subject to the requirement of attribution. In addition, the GBPN Knowledge Platform has been developed using Linked Open Data technologies and principles to connect with the best online resources. The GBPN Glossary is linked to DBpedia as well as to reegle’s Clean Energy and Climate Change Thesaurus, developed by the Renewable Energy and Energy Efficiency Partnership (REEEP) and the Renewable Energy Policy Network for the 21st Century (REN21). A “News Aggregator Tool” service is also available. And our platform connects to our Regional Hubs’ data portals: Buildingsdata.eu, the open data portal for energy efficiency in European buildings developed by the Buildings Performance Institute Europe (BPIE), and Buildingrating.org, the leading online tool for sharing global best practices on building rating and disclosure policies, launched by the Institute for Market Transformation (IMT) in 2011.

One of the main features of the website is the “Policy Comparative Tool”, which enables comparison of the world’s best-practice policies for new buildings. By understanding how countries have designed and implemented best-practice codes, policymakers can use this information to strengthen the future design of dynamic policies. The tool provides interactive data visualization and analytics.

The GBPN aims to facilitate new synergies with energy efficiency experts and building professionals worldwide. For this purpose, the new website offers a Laboratory, a participatory research collaboration tool for building energy efficiency experts to share information and generate new knowledge on how best to develop ambitious building energy performance policies globally.

The GBPN will enrich its data over time with additional topics and information generated through data exchange projects and research partnerships, and invites interested organisations to suggest opportunities for collaboration.

The GBPN Open Knowledge Platform has been developed together with the Semantic Web Company, a consulting and technology firm providing semantic information management solutions with a strong focus on Open Data and Linked Open Data principles and technologies.

About the GBPN:

The Global Buildings Performance Network (GBPN) is a globally organised and regionally focused network whose mission is to advance best practice policies that can significantly reduce energy consumption and associated CO2 emissions from buildings. We operate a Global Centre based in Paris and are represented by Hubs and Partners in four regions: China, India, Europe and the United States. By promoting building energy performance globally, we strive to tackle climate change while contributing to the planet’s economic and social wellbeing.

Follow us on Twitter @GBPNetwork
Contact us at info@gbpn.org – www.gbpn.org

Document Freedom Day 2013

Erik Albers - March 12, 2013 in Events, Featured Project, Open Standards


What is document freedom?

Have you ever been stuck with data you could not open because it was in a format that requires one specific piece of software? The same thing happens tens of thousands of times each day. Can you imagine how much knowledge exchange doesn’t happen simply because sender and receiver are (intentionally or not) using different data formats? Can you imagine how much knowledge future generations will lose if we keep using proprietary, closed data formats that one day no one will be able to open, because the company behind them kept the format a business secret, patented it, and then went bankrupt?


Open Standards, on the other hand, are data formats with open documentation that everyone is free to use or implement in their own software. The first characteristic (open documentation) guarantees that now, and even in a hundred years, anyone interested can understand and read the data format. The second characteristic (freedom to use) guarantees that now, and even in a hundred years, anyone is free to write software that gives everyone else the ability to read a specific piece of data. That is why everyone, and every public institution, should be using Open Standards.


This is exactly where our document freedom campaign comes in. Every year on the last Wednesday of March, the Free Software Foundation Europe runs a global campaign called “Document Freedom Day”. The aim of the campaign is to raise awareness of the usefulness of Open Standards, so we encourage local groups to organise events that highlight the importance of using Open Standards. Last year there were more than 50 events in more than 20 countries. This year, Document Freedom Day (DFD) will be on 27 March 2013.

The most important part of the whole campaign is done by people like you and me! To celebrate information accessibility and Open Standards, we depend heavily on local activity in public places, universities, hackerspaces, or anywhere else you can imagine. I am sure you have very good ideas about what you can do to attract some attention.


If you are interested, please have a look at some ideas of what you can do, and feel free to support your event with our promotional material, which you can order at no cost on the webpage. Finally, if you are planning an activity, don’t forget to register your event on our events page.

Thank you very much for your attention.
Your participation in Document Freedom Day can make the difference!

Images: Last year’s audience in Jakarta; DFD around the world; Document Freedom Day in Rio de Janeiro. All CC-BY-SA

Opening Product Data for a more responsible world

Philippe Plagnol - March 8, 2013 in Featured Project, Open Data

Data on the products we buy is rarely viewed as something to be opened. But in fact, the international standards that make it possible for products to be traded across borders can be used by consumers for their own ends – to help improve information-sharing and choice across the planet. There is currently no public database of this information – but we’re working to change that at Product Open Data.


Eugène Delacroix, “la liberté guidant le peuple”, 1830 – redesigned by Jessica Dere

Opening Product Data

When consumers buy a product, they give power to the manufacturer, enabling it to continue or extend its activities. A public worldwide product database would allow consumers to get information in real time by scanning the barcode with a mobile phone, or to publish their opinions about specific products in a way that others can easily access. Consumers would have the tools to make decisions based on their own concerns about health, nutrition, ecology, or human rights, and to make ethical, dietary or value-based purchases.

GS1 is a worldwide organization that assigns each product a unique code, printed below the barcode, called the GTIN. There are billions of products on the market worldwide, and the full GTIN list is stored only in GS1’s database. The objective of POD (Product Open Data) is to open product data by gathering these key codes and collecting product information from manufacturers through a new RSS-style standard built around this data (called PSS – Product Simple Syndication).

The POD database currently contains 1.4 million products. The most difficult task is to assign each product a GPC classification code, which identifies the particular type of product it is. GPC codes are an international standard – GS1 has already assigned 10 million of them – but many e-commerce sites have developed their own taxonomies, which makes it difficult to compare product types across sellers and to find the correct GPC codes online. Other challenges include finding information such as brand, dimensions, and packaging, and, last but crucially, guaranteeing the quality of the data. The database and pictures are free to access.
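One practical consequence of the GTIN standard is that some quality checks can be automated: the last digit of every GTIN is a check digit computed from the others, so obviously corrupted codes can be rejected before they enter the database. A minimal validator, implementing the standard mod-10 algorithm:

```python
def gtin_is_valid(gtin: str) -> bool:
    """Validate a GTIN (8, 12, 13, or 14 digits) via its mod-10 check digit.

    Walking right to left from the digit before the check digit, payload
    digits are weighted 3, 1, 3, 1, ...; the check digit brings the
    weighted sum up to a multiple of 10.
    """
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    payload, check = gtin[:-1], int(gtin[-1])
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(payload)))
    return (total + check) % 10 == 0

print(gtin_is_valid("4006381333931"))  # True: a valid EAN-13/GTIN-13
print(gtin_is_valid("4006381333932"))  # False: corrupted check digit
```

The same function works for GTIN-8, GTIN-12 (UPC), GTIN-13 (EAN) and GTIN-14, since the weighting always starts from the rightmost payload digit.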

Why is this important?

There are a whole load of reasons why opening product data is a really important step:

  • With the GTIN code as a unique identifier, consumers will be able to communicate about a specific product across the world.

  • Almost all manufacturers around the world are covered by GS1, which is focused on the supply chain. By developing an open database, a new organization with the same power will be created as a counterpoint, but focused on consumers’ rights.

  • Organizations dealing with health, ecology, and human rights will be able to provide their own criteria about products very easily using the GTIN Code.

  • Individuals will be able to raise a risk or an alert about a product. A set of rules will have to be defined to prevent scares based on false information.

  • Marketing and commerce will change a lot because consumers will have new inputs to decide what to buy (e-reputation).

  • Smartphone apps and a community will build around product knowledge.

Whether you’re interested in open source and open data, the protection of consumers, or the protection of the environment, we’d love to hear from you. Together we can join forces in an innovative project which is good for our planet.
