

Introducing Open Knowledge Foundation Labs

Rufus Pollock - July 9, 2013 in Featured, Labs, News, Open Knowledge Foundation

Today we’re pleased to officially launch Open Knowledge Foundation Labs, a community home for civic hackers, data wranglers and anyone else intrigued and excited by the possibilities of combining technology and open information for good – making government more accountable, culture more accessible and science more efficient.


Labs is about “making” – whether that’s apps, insights or tools – using open data, open content and free / open source software. And you don’t need to be an uber-geek to participate: interest and a willingness to get your hands dirty (digitally), be that with making, testing or helping, is all that’s needed – although we do allow lurking on the mailing list ;-)

Join in now! Sign up on the mailing list, follow us on Twitter or read more about what we’re up to and how you can get involved.

method="POST" class="form-inline">



Find out more

For the full picture of what Labs is up to, check out its projects page and the list of ideas for projects. Highlights include:

  • ReclineJS, a library for building data-driven web applications in pure JavaScript
  • Annotator, an open-source JavaScript library and tool that can be added to any webpage to make it annotatable
  • Nomenklatura, a simple service that makes it easy to maintain a canonical list of entities such as persons, companies or streets, and to match messy input against that list
  • PyBossa, a platform for crowd-sourcing online volunteer assistance on tasks that require human intelligence, which powers CrowdCrafting

Labs is part of the Open Knowledge Foundation Network and operates as a collaborative community which anyone can join.

As some of you may have noticed, we’ve been operating unannounced and somewhat under the radar for some time. We recently revamped the website and decided it was high time to officially cut the ribbon and open our doors.

If you’re interested in making things with open data or open content, we hope you’ll come and say hello.

What Do We Mean By Small Data?

Rufus Pollock - April 26, 2013 in Featured, Ideas and musings, Labs, Open Data, Small Data

Earlier this week we published the first in a series of posts on small data: “Forget Big Data, Small Data is the Real Revolution”. In this, the second post in the series, we discuss small data in more detail, providing a rough definition and drawing parallels with the history of computers and software.

What do we mean by “small data”? Let’s define it crudely as:

“Small data is the amount of data you can conveniently store and process on a single machine, and in particular, a high-end laptop or server”

Why a laptop? What’s interesting (and new) right now is the democratisation of data and the associated possibility of a large-scale, distributed community of data wranglers working collaboratively. What matters here, then, is, crudely, the amount of data that an average data geek can handle on their own machine, their own laptop.

A key point is that the dramatic advances in computing, storage and bandwidth have far bigger implications for “small data” than for “big data”. Recent advances have expanded the realm of small data, the kind of data that an individual can handle on their own hardware, far more in relative terms than they have expanded the realm of “big data”. Suddenly, working with significant datasets – datasets containing tens of thousands, hundreds of thousands or millions of rows – can be a mass-participation activity.

(As should be clear from the above definition – and any recent history of computing – small and big are relative terms that change as technology advances. For example, in 1994 a terabyte of storage cost several hundred thousand dollars; today it costs under a hundred. This also means that today’s big is tomorrow’s small.)

Our situation today is similar to that of microcomputers in the late 70s and early 80s, or the Internet in the 90s. When microcomputers first arrived, they seemed puny in comparison to the “big” computing and “big” software then around, and there was, strictly speaking, nothing they could do that existing computing could not. However, they were revolutionary in one fundamental way: they made computing a mass-participation activity. Similarly, the Internet was not new in the 1990s – it had been around in various forms for several decades – but it was at that point that it became available at mass scale to the average developer (and ultimately citizen). In both cases “big” kept on advancing too – be it supercomputers or high-end connectivity – but the revolution came from “small”.

This (small) data revolution is just beginning. The tools and infrastructure needed to enable effective collaboration and rapid scaling for small data are in their infancy, and the communities with the capacities and skills to use small data are in their early stages. Want to get involved in the small data revolution? Sign up now.


This is the second in a series of posts about the power of Small Data – follow the Open Knowledge Foundation blog, Twitter or Facebook to learn more and join the debate at #SmallData on Twitter.

Frictionless Data: making it radically easier to get stuff done with data

Rufus Pollock - April 24, 2013 in Featured, Ideas and musings, Labs, Open Data, Open Standards, Small Data, Technical

Frictionless Data is now in alpha at http://data.okfn.org/ – and we’d like you to get involved.

Our mission is to make it radically easier for data to be used and useful – our immediate goal is to make it as simple as possible to get the data you want into the tool of your choice.

This isn’t about building a big datastore or a data management system – it’s simply about saving people from repeating the same tasks of discovering a dataset, getting it into a format they can use and cleaning it up, all before they can do anything useful with it! If you’ve ever spent the first half of a hackday just tidying up tabular data and getting it ready to use, Frictionless Data is for you.

Our work is based on a few key principles:

  • Narrow focus — improve one small part of the data chain; keep standards and tools limited in scope and size
  • Build for the web – use formats that are web “native” (JSON) and that work naturally with HTTP (plain text; CSV is streamable; etc.)
  • Distributed not centralised — design for a distributed ecosystem, with no centralised single point of failure or dependence
  • Work with existing tools — don’t expect people to come to you; make this work with their tools and their workflows (almost everyone in the world can open a CSV file, and every language can handle CSV and JSON – see the sketch after this list)
  • Simplicity (but sufficiency) — use the simplest formats possible and do the minimum in terms of metadata, but be sufficient in terms of schemas and structure for tools to be effective
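
To make the last two principles concrete, here is a minimal sketch (Python, standard library only) of streaming a web-published CSV straight into ordinary dictionaries. The URL is a placeholder rather than a real dataset location; the point is simply that no special client library is needed.

```python
# Minimal sketch: consume a web-published CSV with nothing but the standard
# library. The URL below is a placeholder, not a real Frictionless Data endpoint.
import csv
import io
import urllib.request

DATA_URL = "http://example.org/data/country-codes.csv"  # hypothetical location

with urllib.request.urlopen(DATA_URL) as response:
    # CSV is plain text and streamable, so rows can be processed as they arrive.
    text = io.TextIOWrapper(response, encoding="utf-8")
    for row in csv.DictReader(text):
        print(row)  # each row is an ordinary dict, ready for whatever tool you use
```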

We believe that making it easy to get and use data and especially open data is central to creating a more connected digital data ecosystem and accelerating the creation of social and commercial value. This project is about reducing friction in getting, using and connecting data, making it radically easier to get data you need into the tool of your choice. Frictionless Data distills much of our learning over the last 7 years into some specific standards and infrastructure.

What’s the Problem?

Today, when you decide to cook, the ingredients are readily available at local supermarkets or even already in your kitchen. You don’t need to travel to a farm, collect eggs, mill the corn, cure the bacon etc. – as you once would have done! Instead, thanks to standard systems of measurement, packaging, shipping (e.g. containerization) and payment, ingredients can get from the farm direct to your local shop or even your door.

But with data we’re still largely stuck at this early stage: every time you want to do an analysis or build an app you have to set off around the internet to dig up data, extract it, clean it and prepare it before you can even get it into your tool and begin your work proper.

What do we need to do for working with data to be like cooking today – where you get to spend your time making the cake (creating insights), not preparing and collecting the ingredients (digging up and cleaning data)?

The answer: radical improvements in the “logistics” of data, driven by specialisation and standardisation. By analogy with food, we need standard systems of “measurement”, packaging and transport so that it’s easy to get data from its original source into the application where you can start working with it.

[Diagram: the Frictionless Data idea]

What’s Frictionless Data going to do?

We start with an advantage: unlike for physical goods, transporting digital information from one computer to another is very cheap! This means the focus can be on standardizing and simplifying the process of getting data from one application to another (or one form to another). We propose work in three related areas:

  • Key simple standards. For example, a standardized “packaging” of data that makes it easy to transport and use (think of the “containerization” revolution in shipping)
  • Simple tooling and integration – you should be able to get data in these standard formats into or out of Excel, R, Hadoop or whatever tool you use
  • Bootstrapping the system with essential data – we need to get the ball rolling

[Diagram: Frictionless Data components]

What’s Frictionless Data today?

1. Data

We have some exemplar datasets which are useful to a lot of people. These are:

  • High Quality & Reliable

    • We have sourced, normalized and quality checked a set of key reference datasets such as country codes, currencies, GDP and population.
  • Standard Form & Bulk Access

    • All the datasets are provided in a standardized form and can be accessed in bulk as CSV together with a simple JSON schema.
  • Versioned & Packaged

    • All data is in data packages and is versioned using git, so all changes are visible and the data can be collaboratively maintained.

2. Standards

We have two simple data package formats, described as ultra-lightweight, RFC-style specifications. They build heavily on prior work. Simplicity and practicality were guiding design criteria.

[Diagram: the Frictionless Data package standards]

Data package: minimal wrapping, agnostic about the data it’s “packaging”, designed for extension. This flexibility is good, as it can be used as a transport for pretty much any kind of data, but it also limits integration and tooling. Read the full Data Package specification.
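
As a rough illustration (the name, licence and path below are invented, and the specification itself is the authoritative reference), a minimal descriptor is just a small JSON file, written here from Python:

```python
# Sketch of a minimal datapackage.json descriptor. Package name, licence and
# resource path are invented for illustration; see the Data Package
# specification for the authoritative list of fields.
import json

descriptor = {
    "name": "example-country-codes",            # hypothetical package name
    "licenses": [{"id": "odc-pddl"}],           # example licence entry
    "resources": [
        {"path": "data/country-codes.csv"},     # the data being "packaged"
    ],
    # Because the format is agnostic about its payload, extra metadata can be
    # added here without breaking consumers that simply ignore it.
}

with open("datapackage.json", "w") as f:
    json.dump(descriptor, f, indent=2)
```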

Simple data format (SDF): focuses on tabular data only and extends Data Package (data in Simple Data Format is also a data package) by requiring the data to be “good” CSVs and the provision of a simple JSON-based schema to describe them (“JSON Table Schema”). Read the full Simple Data Format specification.
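
Sketched in the same style (again with invented dataset, column names and types), a Simple Data Format descriptor adds a JSON Table Schema describing each CSV column:

```python
# Sketch of a tabular (Simple Data Format style) descriptor: the resource is a
# CSV file and carries a JSON Table Schema listing its columns. The dataset,
# columns and types are illustrative only.
import json

sdf_descriptor = {
    "name": "example-gdp",
    "resources": [
        {
            "path": "data/gdp.csv",
            "schema": {
                "fields": [
                    {"name": "country", "type": "string"},
                    {"name": "year", "type": "integer"},
                    {"name": "gdp_usd", "type": "number"},
                ]
            },
        }
    ],
}

print(json.dumps(sdf_descriptor, indent=2))
```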

3. Tools

It’s early days for Frictionless Data, so we’re still working on this bit! But there’s a need for validators, schema generators and all kinds of integration. You can help out – see below for details or check out the issues on GitHub.
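
As one example of the kind of tooling that’s needed, here is a sketch of a tiny validator that checks a CSV header against the fields declared in a descriptor’s JSON Table Schema. The file names are placeholders, and a real validator would also check types and constraints; take this as a starting point rather than a finished tool.

```python
# Sketch of a minimal validator: does the CSV header match the field names
# declared in the descriptor's JSON Table Schema? File names are placeholders;
# a real validator would also check types, required values, and so on.
import csv
import json

def header_matches_schema(csv_path, descriptor_path):
    with open(descriptor_path) as f:
        descriptor = json.load(f)
    # Assume a single tabular resource for the purposes of this sketch.
    schema = descriptor["resources"][0]["schema"]
    expected = [field["name"] for field in schema["fields"]]

    with open(csv_path, newline="") as f:
        header = next(csv.reader(f))

    return header == expected

if __name__ == "__main__":
    ok = header_matches_schema("data/gdp.csv", "datapackage.json")
    print("header matches schema" if ok else "header does not match schema")
```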

Doesn’t this already exist?

People have been working on data for a while – doesn’t something like this already exist? The crude answer is yes and no. People, including folks here at the Open Knowledge Foundation, have been working on this for quite some time, and some parts of the solution are already out there. Furthermore, many of these ideas are directly borrowed from similar work in software. For example, the Data Packages spec (first version in 2007!) builds heavily on packaging projects and specifications like Debian and CommonJS.

Key distinguishing features of Frictionless Data:

  • Ultra-simplicity – we want to keep things as simple as they possibly can be. This includes formats (JSON and CSV) and a focus on end-user tool integration, so people can just get the data they want into the tool they want and move on to the real task
  • Web orientation – we want an approach that fits naturally with the web
  • Focus on integration with existing tools
  • Distributed and not tied to a given tool or project – this is not about creating a central data marketplace or similar setup. It’s about creating a basic framework that would enable anyone to publish and use datasets more easily and without going through a central broker.

Many of these features are shared with (and derive from) other approaches, but taken as a whole we believe they provide an especially powerful setup.

Get Involved

This is a community-run project coordinated by the Open Knowledge Foundation as part of Open Knowledge Foundation Labs. Please get involved:


  • Spread the word! Frictionless Data is a key part of the real data revolution – follow the debate on #SmallData and share our posts so more people can get involved

Forget Big Data, Small Data is the Real Revolution

Rufus Pollock - April 22, 2013 in Featured, Ideas and musings, Labs, Open Data, Small Data

There is a lot of talk about “big data” at the moment. For example, this is Big Data Week, which will see events about big data in dozens of cities around the world. But the discussions around big data miss a much bigger and more important picture: the real opportunity is not big data, but small data. Not centralized “big iron”, but decentralized data wrangling. Not “one ring to rule them all” but “small pieces loosely joined”.

Big data smacks of the centralization fads we’ve seen in each computing era. The thought that “hey, there’s more data than we can process!” (something which has no doubt been true year-on-year since computing began) is dressed up as the latest trend, with its associated technology must-haves.

Meanwhile we risk overlooking the much more important story here, the real revolution, which is the mass democratisation of the means of access, storage and processing of data. This story isn’t about large organisations running parallel software on tens of thousands of servers, but about more people than ever being able to collaborate effectively around a distributed ecosystem of information, an ecosystem of small data.

Just as we now find it ludicrous to talk of “big software” – as if size in itself were a measure of value – we should, and will one day, find it equally odd to talk of “big data”. Size in itself doesn’t matter – what matters is having the data, of whatever size, that helps us solve a problem or address the question we have.

For many problems and questions, small data in itself is enough. The data on my household energy use, the times of local buses, government spending – these are all small data. Everything processed in Excel is small data. When Hans Rosling shows us how to understand our world through population change or literacy he’s doing it with small data.

And when we want to scale up, the way to do so is through componentized small data: by creating and integrating small data “packages”, not by building big data monoliths; by partitioning problems in a way that works across people and organizations, not by creating massive centralized silos.

This next decade belongs to distributed models not centralized ones, to collaboration not control, and to small data not big data.

Want to create the real data revolution? Come and join our community creating the tools and materials to make it happen – sign up here.


This is the first in a series of posts about the power of Small Data – follow the Open Knowledge Foundation blog, Twitter or Facebook to learn more and join the debate at #SmallData on Twitter.

Further Reading

  • Nobody ever got fired for buying a cluster
    • Even at enterprises like Microsoft and Yahoo, most jobs could run on a single machine: the median job size at Microsoft is 14GB and 80% of jobs are under 1TB, while at Yahoo the estimated median job size is 12GB.
    • The paper notes that “Ananthanarayanan et al. show that Facebook jobs follow a power-law distribution with small jobs dominating; from their graphs it appears that at least 90% of the jobs have input sizes under 100 GB”, and that “Chen et al. present a detailed study of Hadoop workloads for Facebook as well as 5 Cloudera customers. Their graphs also show that a very small minority of jobs achieves terabyte scale or larger”, claiming explicitly that “most jobs have input, shuffle, and output sizes in the MB to GB range.”
  • PACMan: Coordinated Memory Caching for Parallel Jobs – Ganesh Ananthanarayanan, Ali Ghodsi, Andrew Wang, Dhruba Borthakur, Srikanth Kandula, Scott Shenker, Ion Stoica

Open Interests Europe Hackathon in London, 24-25 November

Velichka Dimitrova - October 15, 2012 in Data Journalism, Events, Labs, Open Data, Sprint / Hackday


The European Journalism Centre and the Open Knowledge Foundation invite you to the Open Interests Europe Hackathon to track the lobbyists’ interests and money flows which shape European policy.

When: 24-25 November

Where: Google Campus Cafe, 4-5 Bonhill Street, EC2A 4BX London

How EU money is spent is an issue that concerns everyone who pays taxes to the EU. As the influence of Brussels lobbyists grows, it is increasingly important to draw the connections between lobbying, policy-making and funding. Journalists and activists need browsable databases, tools and platforms to investigate lobbyists’ influence and where the money goes in the EU. Join us and help build these tools! Open Interests Europe brings together developers, designers, activists, journalists and other geeks for two days of collaboration, learning, fun, intense hacking and app building.

The Lobby Transparency Challenge

Within any political process there are many interests wanting to be heard – companies, trade unions, NGOs – and Brussels is no exception. Corporate Europe Observatory, Friends of the Earth Europe and LobbyControl have begun to data-mine the lobby registers of the European Commission and of the European Parliament to find out who the lobbyists are, what they want and how much they are investing. You will have the exclusive opportunity to work with this data before it is made public in their upcoming portal. What can you do with this data?

Group leader: Erik Wesselius is one of the co-founders of Corporate Europe Observatory. In the past few years, Erik has focused on issues related to lobbying transparency and regulation, as well as EU economic governance. In 2005, Erik was active in the Dutch campaign for a “No” vote on the EU Constitution.

The Fish Subsidies Challenge

Subsidies paid to owners of fishing vessels and others working in the fishing industry under the European Union’s common fisheries policy amount to approximately €1 billion a year. EU Transparency gathered detailed data relating to payments and recipients of fisheries subsidies in every EU member state from multiple sources, from European Commission databases to member state government databases and inter-governmental fishery organizations such as ICCAT. What can you do with this data?

Group leader: Jack Thurston is a policy analyst, activist, writer and broadcaster. He is co-founder of FarmSubsidy.org, winner of a Freedom of Information Award from Investigative Reporters and Editors.

Prizes and Jury

All participants will get the satisfaction of contributing to a cause that affects us all! Not only that, the winning team will be awarded a 100 EUR Amazon voucher, pre-ordered copies of the movie The Brus$els Business – Who Runs the European Union? (to be released this autumn) and copies of The Data Journalism Handbook.

The jury members are Rufus Pollock, co-founder and director of the Open Knowledge Foundation, and Alastair Dant, Lead Interactive Technologist for the Guardian.

For more details, see the event’s webpage: http://okfnlabs.org/events/hackdays/lobbying.html

Please register for the event at Eventbrite: http://openinterests.eventbrite.com/

If you have any questions or would like to submit a challenge around this topic, please contact: sprints [at] okfn.org

This event is organised by the European Journalism Centre and the Open Knowledge Foundation, and supported by Mozilla.

Ignite Cleanweb

Velichka Dimitrova - September 12, 2012 in Events, External, Labs, Meetups, WG Economics


Ignite Event in London

This Thursday in London, Cleanweb UK invites you to their first Ignite evening, hosted by Forward Technology. Come along and see a great lineup of lightning talks, all about what’s happening with sustainability and the web in the UK.

From clean clouds, to home energy, to climate visualisation, there will be plenty to learn, and plenty of other attendees to get to know. It’ll be an evening to remember, so make sure you’re there! Sign up on the Cleanweb UK website.

Confirmed lightning talks:

  • Loco2 vs The European Rail Booking Monster, Jon Leighton, Loco2
  • Love Thy Neighbour. Rent Their Car, Tom Wright, Whipcar
  • Solar Panels Cross The Chasm, Jason Neylon, uSwitch
  • Weaponising Environmentalism, Chris Adams, AMEE
  • Energy Saving Behaviour – The Motivation Challenge, Paul Tanner, Virtual Technologies
  • Good Food, For Everyone, Forever. Easy, Right?, Ed Dowding, Sustaination
  • The Open Energy Monitor Project, Glyn Hudson & Tristan Lea, OpenEnergyMonitor
  • The Carbon Map, Robin Houston, Carbon Map
  • Putting the Local in Global Warming with Open Data, Jack Townsend, Globe Town
  • Cleanweb in the UK, James Smith, Cleanweb UK

and more…

Cleanweb community


There is a movement growing. Bit by bit, developers are using the power of the web to make our world more sustainable. Whether by improving the way we travel, the way we eat, or the way we use energy, the web is making a difference. The Cleanweb movement is building a global conversation, with local chapters running hackdays and meetups to get people together.

Here in the UK, we’ve been doing this longer than anyone else. Cleanweb-style projects were emerging in 2007, with 2008’s geeKyoto conference bringing together a lot of early efforts.

It’s only appropriate, then, that we have the most active Cleanweb community in the world, in the form of Cleanweb London. With over 150 members, it’s a great base on which we’re building a wider Cleanweb UK movement. We’ve run a hackday, hold regular meetups, and are building towards our first Ignite Cleanweb evening.

This is an expanding community, made of many different projects and groups, and one that has a chance to do some real good. If you’d like to be part of it, or if you already are but didn’t know it, come along to a meetup and get involved!


OpenDataMx: Opening Up the Government, one Bit at a Time

Velichka Dimitrova - September 4, 2012 in Events, External, Featured, Featured Project, Labs, Open Access, Open Content, Open Data, Open Economics, Open Knowledge Foundation Local Groups, Open Spending, Policy, School of Data, Sprint / Hackday

On August 24-25, another edition of OpenDataMx took place: a 36-hour public data hackathon for the development of creative technological solutions to questions raised by civil society. This time the event was hosted by the University of Communication in Mexico City.

The popularity of the event has grown: a total of 63 participants, including coders and designers, took part, and another 58 representatives of civil society from more than ten different organisations attended the parallel conference. Government institutions participated actively as well: the Ministry of Finance and Public Credit (SHCP), IFAI and the Government of the Oaxaca State. The workshops covered technology, open data and its potential in the search for technological solutions to the problems of civil society.

The following proposals resulted from the discussions in the conference:

  • Construct a methodology for civil society to collectively generate open data for reuse in data events, as well as to demonstrate the benefits to government bodies of adopting the practice of publishing their data openly.
  • Collectively build a common database of information and knowledge on the topic of open data through the OpenDataMx wiki.

After 36 hours of continuous work, each of the 23 teams presented a project based on the 30 datasets provided by the government and civil society organisations. As little open government data is currently available, the joint work of civil society was essential to realising the hackathon.

Read the Hackathon news in Spanish on the OpenDataMx blog here.


The judging panel responsible for assessing the projects comprised recognised experts in technology, open data and its application to civil society needs: Velichka Dimitrova (Open Knowledge Foundation), Matioc Anca (Fundación Ciudadano Inteligente), Eric Mill (Sunlight Foundation) and Jorge Soto (Citivox).

The first three projects were awarded cash prizes ($30,000, $20,000 and $10,000 Mexican pesos respectively), allowing the teams to implement their projects. An honorary mention went to the project of the Government of the Oaxaca State and the Finance Ministry (SHCP) on the transparency of public works and citizen participation. The organisers of the hackathon also tried to link each team to the institution or organisation relevant to its project, so that it could get support and advice on further steps. The organisers (Fundar, the Centre for Analysis and Investigation; SocialTIC; Colectivo por la Transparencia; and the University of Communication) would like to thank all participants, judges and speakers for their enthusiasm and valuable support in building the citizen community.


Here are some details about the winning projects:


FIRST PLACE

Name of the Project: Becalia | becalia.org

General Description: A platform allowing firms and civil society to sponsor students of limited economic means so that they can continue into higher education.

Background to the problem: Very few students receive a government scholarship for higher education. Additionally, few students decide to continue their education to a higher level – less than 20% in every state. The idea is to support the students who do not have the means, and to enable the participation of civil society.

Technology and tools used: Ruby on Rails, JavaScript, CoffeeScript

Datasets: PRONABES (Programa Nacional de Becas para la Educación Superior) – National Scholarship Program for Higher Education

Team members: Adrián González, Abraham Kuri, Javier Ayala, Eduardo López


SECOND PLACE

Name of the Project: Más inversión para movernos mejor (More investment for better movement) | http://berserar.negoapps.com/

General Description: A small website for citizen participation, where users are asked to allocate spending to a type of urban mobility (e.g. cars, public transport or bicycles), signalling their preference on where they would like the government to invest. After assigning their preferences, users can compare them with the government’s actual spending and are offered multimedia material informing them about the topic.
Background to the problem: There is a lack of information on how the government spends money and on the importance of sustainable urban mobility.
Technology and tools used: HTML, JavaScript, PHP, CodeIgniter, Bootstrap, Excel and SQL

Datasets: Base de datos del Instituto de Políticas para el Transporte y el Desarrollo -ITDP (Database of the Policy Institute for Transport and Development) http://itdp.mx

Team members: Antonio Sandoval, Jorge Cravioto, Said Villegas, Jorge Cáñez


THIRD PLACE

Name of the Project: DiputadoMx | http://www.tudiputado.org/
General Description: An application that helps you find your representative by geographical area, political party, gender or the commission he or she belongs to. The application is compatible with desktop and mobile devices.
Background to the problem: Lack of opportunity for citizens to communicate directly with their representatives.
Technology and tools used: HTML5, CSS3, jQuery, Python, Google App Engine, MongoDB

Datasets: Base de datos del IFE del diputados (IFE Database of MPs)
Team members: Pedro Aron Barrera Almaraz


HONORARY MENTION:

Name of the Project: Obra Pública Abierta (Open Public Works)

General Description: Open Public Works is an open government tool, conceptualised and developed by the Government of the Oaxaca State and the Ministry of Finance (SHCP). The platform is designed to make public works more transparent, presenting them in simpler language and encouraging citizen oversight from the user community. Open Public Works seeks to create third-generation state transparency policy across the three levels of government. This open source platform is also meant as a public good that will be delivered to the various state governments to promote nationwide transparency, citizen participation and accountability in the public works sector.

Background to the Problem: There is a lack of transparency in the infrastructure spending of state governments. Citizens are not familiar with basic information about public works carried out in their community, and no mechanisms for independent social audit exist. Moreover, state control bodies lack the capacity to control and supervise all public works. Public participation in the oversight of public resources is essential to solving this situation, and society and government should work together. Additionally, there is no public policy across all three levels of government for the transparency of this sector. Finally, the public lacks the tools and incentives to monitor, report and, if necessary, denounce the use of public resources in this highly opaque government sector.

Technology and tools used: Google Maps API v2, PHP, JavaScript and jQuery

Datasets: Data set de obra pública de la SHCP y SINFRA/SEFIN del Gobierno de Oaxaca (Datasets of public works of the SHCP and SINFRA/SEFIN of the Government of Oaxaca).

Team Members: Berenice Hernández Sumano, Juan Carlos Ayuso Bautista, Tarick Gracida Sumano, José Antonio García Morales, Lorena Rivero, Roberto Moreno Herrera, Luis Fernando Ostria


For more information:

Photos and content thanks to Federico Ramírez and Fundar.

Ending Secrecy – Why Global Transparency Rules Matter

Laura Newman - August 24, 2012 in Labs, OKF Projects, Our Work, Policy

Earlier this week, the SEC voted on the final rules of Section 1504 of the Dodd Frank Act. Global Witness teamed up with the Open Knowledge Foundation to explain what these rules are about, and why they matter.

View the infographic ‘Ending Secrecy – Why Global Transparency Rules Matter’

On August 22nd 2012, the U.S. Securities and Exchange Commission (SEC) met to vote on the final rules for Section 1504 of the Dodd Frank Wall Street Reform and Consumer Protection Act. Section 1504 of Dodd Frank requires that oil, gas and mining companies publish the payments they make to governments. These provisions mark a watershed in the creation of global transparency standards, which could help to break the link between natural resources and conflict and corruption.

  • For Global Witness’ analysis of Section 1504 of the Dodd Frank Act, see their blog post.

  • To see Global Witness’ response to the SEC vote on the final rules, read their initial statement.

As part of their campaign, Global Witness teamed up with the Open Knowledge Foundation to communicate to the world why these rules matter. The OKFN Services team put together an infographic, ‘Ending Secrecy – Why Global Transparency Rules Matter’, which helps to explain the need for this legislation, and the impact that the current veil of secrecy has on ordinary lives. View the full infographic here.

In many countries, the majority of the population are living in abject poverty.

When natural resources are discovered, their situation should change – but often it doesn’t.

The natural resources are often sold off to foreign companies. Local communities are kept in the dark. Because there is no transparency around payments, revenues, and the deals that oil, gas and mining companies make with governments, it is much easier for corruption to take place.

Laws such as those under Section 1504 of the Dodd Frank Act are important because they outlaw this kind of secrecy, by requiring oil, gas and mining companies to publish the payments they make to governments.

Transparency is the first step towards accountability.

Global Witness are still reviewing the final rule text of Section 1504. Their initial response was cautiously optimistic. They said that ‘some aspects of the rule appear to represent a step forward’, and welcomed the decision that companies would not be able to exempt themselves from reporting in certain countries where governments do not want revenues disclosed. However, they noted that ‘the devil will be in the detail’ – not least because the SEC failed to define the term ‘project’, leaving dangerous ‘wiggle room’. See their full statement here.

Section 1504 of the Dodd Frank Act is particularly important because the provision covers the majority of internationally operating oil companies, as well as the world’s largest mining companies. The Act is also likely to influence the draft revisions to the European Accounting and Transparency Directives, which, if implemented, would likely require still more companies to publish details of their payments.

If you wish to find out more about small projects and services at the Open Knowledge Foundation, please visit Services.