You are browsing the archive for Visualization.

Global Witness and Open Knowledge – Working together to investigate and campaign against corruption related to the extractives industries

Sam Leon - November 14, 2014 in Data Journalism, Featured, Visualization

Sam Leon, one of Open Knowledge’s data experts, talks about his experiences working as a School of Data Embedded Fellow at Global Witness.

Global Witness are a Nobel Peace Prize-nominated not-for-profit organisation devoted to investigating and campaigning against corruption related to the extractives industries. Earlier this year they received the TED Prize and were awarded $1 million to help fight corporate secrecy, on the back of which they launched their End Anonymous Companies campaign.


In February 2014 I began a six month ‘Embedded Fellowship’ at Global Witness, one of the world’s leading anti-corruption NGOs. Global Witness are no strangers to data. They’ve been publishing pioneering investigative research for over two decades now, piecing together the complex webs of financial transactions, shell companies and middlemen that so often lie at the heart of corruption in the extractives industries.

Like many campaigning organisations, Global Witness are seeking new and compelling ways to visualise their research, as well as to make more effective use of the large amounts of public data that have become available in the last few years.

“Sam Leon has unleashed a wave of innovation at Global Witness”
-Gavin Hayman, Executive Director of Global Witness

As part of my work, I’ve delivered data trainings at all levels of the organisation – from senior management to front-line staff. I’ve also been working with a variety of staff to use data collected by Global Witness to create compelling infographics. It’s amazing how powerful these can be in drawing attention to stories and thus supporting Global Witness’s advocacy work.

The first interactive we published, on the sharp rise in deaths of environmental defenders, demonstrated this. The way we were able to pack some of the core insights of a much more detailed report into a series of images that people could dig into proved a hit on social media and let the story travel further.

[IMG: Global Witness infographic on deaths of environmental defenders]

See here for the full infographic on Global Witness’s website.

But powerful visualisation isn’t just about shareability. It’s also about making a point that would otherwise be hard to grasp without visual aids. Global Witness regularly publish mind-boggling statistics on the scale of corruption in the oil and gas sector.

“The interactive infographics we worked on with Open Knowledge made a big difference to the report’s online impact. The product allowed us to bring out the key themes of the report in a simple, compelling way. This allowed more people to absorb and share the key messages without having to read the full report, but also drew more people into reading it.”
-Oliver Courtney, Senior Campaigner at Global Witness

Take, for instance, the $1.1 billion that the Nigerian people were deprived of due to the corruption around the sale of Africa’s largest oil block, OPL 245.

$1.1 billion doesn’t mean much to me; it’s too big a number. What we sought to do visually was to represent the loss to Nigerian citizens in terms of things we can understand, like basic health care provision and education.

See here for the full infographic on Shell, ENI and Nigeria’s Missing Millions.


In October 2014, to accompany Global Witness’s campaign against anonymous company ownership, we worked with developers from data journalism startup J++ on The Great Rip Off map.

The aim was to bring together and visualise the vast number of corruption case studies involving shell companies that Global Witness and its partners have unearthed in recent years.

The Great Rip Off!

It was a challenging project that required input from designers, campaigners, developers, journalists and researchers, but we’re proud of what we produced.

Open data principles were followed throughout, as Global Witness were committed to creating a resource that its partners could draw on in their advocacy efforts. The underlying data was made available in bulk under a Creative Commons Attribution-ShareAlike license, and open-source libraries like Leaflet.js were used. Other parties were also invited to submit case studies to the database.

“It’s transformed the way we work, it’s made us think differently how we communicate information: how we make it more accessible, visual and exciting. It’s really changed the way we do things.”
-Brendan O’Donnell, Campaign Leader at Global Witness

For more information on the School of Data Embedded Fellowship Scheme, and to see further details on the work we produced with Global Witness, including interactive infographics, please see the full report here.


Exploring the 2012 Open Budget Survey

Mark Wainwright - January 23, 2013 in Access to Information, Open Government Data, Open Knowledge Foundation, Visualization

How transparent and accountable are different countries’ national budgets? Every two years, the International Budget Partnership (IBP) runs the Open Budget Survey to try to answer this question, by measuring the budgets of over 100 countries against a wide range of openness standards. The results for 2012 are released today, with an interactive data explorer developed for the IBP by the Open Knowledge Foundation.

A recent post by Albert van Zyl on the IBP’s Open Budgets blog spells out the consequences of a lack of transparency: money vanishing into thin air, the projects it was destined for never happening, and communities being kept in poverty. As the post says, “There are sufficient public resources available globally to make substantial progress on eradicating extreme poverty and creating sustained economic development, but only if these funds are spent effectively and equitably”. For that to happen, van Zyl argues, budgets must be transparent, participatory, and accountable. The survey results show to what extent different countries achieve this or fall short of it.

The explorer gives users a number of ways to visualise the data, not only from the latest survey but from its three predecessors, starting in 2006. A map view shows the changing geography of openness over the four surveys, while a timeline (shown below) shows the movements of individual countries over the same period. A more detailed page of rankings shows graphically how each country’s score is calculated from ninety-five tests of openness, each with four levels from most to least open. A datasheet for each country presents the full data, letting the user see how it has performed on each test in every survey. Users can also generate custom reports, or download the entire dataset.
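The scoring idea described above — ninety-five tests, each answered at one of four openness levels, combined into a single country score — can be sketched roughly as follows. The 0/33/67/100 point mapping and the simple averaging are assumptions for illustration, not the IBP's published methodology:

```python
def openness_score(levels):
    """Illustrative openness score: each survey question is answered at one
    of four levels (0 = least open .. 3 = most open); map the levels to
    points and average them into a 0-100 score."""
    points = {0: 0, 1: 33, 2: 67, 3: 100}
    return sum(points[lvl] for lvl in levels) / len(levels)

# A country answering every question at the most open level scores 100;
# one answering everything at the least open level scores 0.
```

The per-question breakdown on the rankings page corresponds to the individual `levels` entries, while the headline number is the aggregate.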

[IMG: Open Budget Survey timeline]

Another useful feature allows users to see how a country’s score might change for the next survey in 2014. Starting from the 2012 setup, users can decide what changes to make to their chosen country’s budget systems, and the resulting change to its openness score is shown.

The IBP is a project of the Center on Budget and Policy Priorities, a Washington-based think tank which has carried out highly-regarded work for over 30 years on alleviating poverty through national fiscal policy, both at home in the US and internationally. The Open Budget Survey has established itself as an important and independent tool, and the OKF is delighted to be involved in helping present the results. We hope they will be useful to policymakers, campaigners, journalists, and citizens in helping to push for more open and transparent budgets all over the world.

Hackday for News Apps at OK Fest

Esa Mäkinen - September 4, 2012 in Data Journalism, OKFest, Open Data, Sprint / Hackday, Visualization

GOAL: You have six hours to make a working news app. There are three of you, a coder, a graphic designer and a journalist.

Is it possible?

Yes. Five times in the last two years the biggest Finnish newspaper, Helsingin Sanomat, has invited people to do just this, at HS Open hack days, which I first talked about on this blog back in February. In the basement of our offices, groups of three have made data journalism that has even landed on the front page of Helsingin Sanomat.

The Kunnanluoja game was created at HS Open. It shows what will happen to the population and political structures of Finnish cities if the government forces them to merge. Forced mergers of cities have been one of the hot topics in Finnish politics for a few years.

On Friday 21st September, Helsingin Sanomat will organize the sixth HS Open, at OK Fest. This time we are processing data from the Failed State Index and the World Bank. The data will be provided by Helsingin Sanomat, but groups can use any data they choose.

The goal is to make a News App: a 560×400 pixel program that can be embedded in any website. It can visualize the data or gather data from users.

Participants can sign up individually or as groups. We will divide the participants into teams with all the necessary skills. We hope that people will discuss their ideas beforehand. E-mail is good, but if you can meet up during OK Fest that would be great!

The purpose of HS Open is to learn a new trade: data journalism. The News Apps you make are yours, but we hope that we will be able to buy the best apps and publish them on our site.

You can find out more details about the day and sign up here. Places are limited, so get in touch soon. Hope to see you there!

Mapping the Republic of Letters

Nicole Coleman - March 22, 2012 in External, Open GLAM, Visualization, WG Cultural Heritage, WG Humanities

The following post is crossposted from the OpenGLAM blog, and is about Stanford’s Mapping the Republic of Letters Project – one of the finest examples of what can be done with cultural heritage data and open source tools. Mapping the Republic of Letters is a collaborative, interdisciplinary humanities research project looking at 17th and 18th century correspondence, travel, and publication to trace the exchange of ideas in the early modern period and the Age of Enlightenment.

What unites the researchers involved in Mapping the Republic of Letters is the opportunity to explore historical material in a spatial context and ask big-data questions across archives: Did the Republic of Letters have boundaries? Where was the Enlightenment?

The Republic of Letters is an early modern network of intellectuals whose connections transcended generations and state boundaries. It has been described as a lost continent and debate continues about whether or not it really existed. Though the ‘letters’ of the title refers to scholarly knowledge, epistolary exchange was, in fact, the net that held this community together. Letters could be shipped around the world and shared across generations. Among our case studies, Athanasius Kircher’s correspondence network was the most widely distributed, exchanging letters with Jesuit outposts from Macau to Mexico.

Since the early stages of our project, we used open-source graphics libraries to visualize our collected data. The first step is to understand the ‘shape’ of the archive. A timeline + histogram, for example, reveals at a glance the distribution of letters in the collection over hundreds of years. And the map connecting cities as source and destination of sent letters reveals geographic “cold-spots” as well as hot-spots in the archive.
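The "shape of the archive" step above — a timeline + histogram showing the distribution of letters over hundreds of years — can be sketched with a simple decade binning (the function name and binning choice are illustrative, not the project's actual code):

```python
from collections import Counter

def decade_histogram(letter_years):
    # Bin each letter's year of writing into its decade, giving an
    # at-a-glance view of how the collection is distributed over time.
    return Counter((year // 10) * 10 for year in letter_years)

# Three letters from the 1640s-1650s produce two bins:
# decade_histogram([1642, 1645, 1651]) -> Counter({1640: 2, 1650: 1})
```

The same per-bin counts, drawn as bars along a time axis, give the histogram; the geographic "hot-spot / cold-spot" map is the analogous aggregation keyed by city instead of decade.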

As we begin to dive in and pursue specific research questions, visualization tools in the form of maps, network graphs and charts, help us to make sense of piles of data all at once. Voltaire’s correspondence alone includes about 15,000 letters. Putting those letters on a map instantly gives us a picture of where Voltaire traveled and reveals temporal and spatial patterns in his letter-writing. And while there is no record of epistolary exchange between Voltaire and American inventor and statesman, Benjamin Franklin, a network graph of their combined correspondence quickly reveals three second degree connections.

One outcome of this project is a visualization layer to complement the well-established text-based search model for archives. To begin to really piece together a map of the Republic of Letters, we need to find a way to thread a path through the many dispersed and otherwise silo-ed correspondence archives. Another great challenge is to visually reflect the gaps, uncertainty and ambiguity in the historical record. It is often those gray areas that provide new research opportunities for humanists. In this effort we are very pleased to be working in partnership with DensityDesign Research Lab in Milan.

We have also been working closely with the Cultures of Knowledge project at Oxford. Cultures of Knowledge recently released a beta version of their open access union catalog of early modern letters, aptly named Early Modern Letters Online. Their model is not to be the repository, but to provide a rich search layer across existing correspondence collections, pointing back out to the source repository. Our friends at the Dutch 17th-century Circulation of Knowledge project are addressing the challenges of mining early modern correspondence for topics across many languages.

The code-base for our visualizations is open source and available for download at athanasius-project.github.com. Our code-base is available, but that is not to say that our visualizations are prêt-à-porter. Since our research is devoted to knowledge production in the humanities and not software development, the code is rather idiosyncratic and constrained by our changing data model. Please contact us if you would like to learn more or would like to join the effort.

Energy and Climate Post-Hack News

Velichka Dimitrova - March 13, 2012 in Events, Open Data, Open Economics, Our Work, Sprint / Hackday, Visualization

Earlier this month, our Energy and Climate Hackday brought together about 50 people in London and online, joining from Berlin, Washington D.C., Amsterdam, Graz and Bogota.

With participants working in the private sector, for NGOs, universities and the public sector, we had a good mix of people with different expertise and skills. Some had ideas on how to communicate resource scarcity, the threat of climate change or the need to transform the existing energy structure. The challenge for developers was to visualise and present the openly available data – such as the dataset with environmental indicators from the World Bank. It was a great chance to meet and work with people that you don’t meet on a day-to-day basis, and get new ideas and inspiration. The event was sponsored by AMEE, which provides aggregated and automated access to the world’s environmental and energy information, and was hosted at the offices of ThoughtWorks.

Ed Hogg from the Department of Energy and Climate Change presented the Global 2050 Pathways Calculator Challenge. The Global Calculator would show how different technology choices impact energy security and reflect the geographical opportunities and limitations of energy technologies. It could focus on sectors of the economy, on countries and regions, or combine visualisations of both, showing implications for emissions and temperatures.


The Carbon Budget Challenge: Because of the controversy around how much each country “should” be emitting into the atmosphere, there are different criteria for determining each country’s share. According to the principle of common but differentiated responsibility in international environmental law: “parties should protect the climate system for the benefit of future and present generations of humankind on the basis of equity and in accordance with their common but differentiated responsibility and respective capabilities.” (Art. 3 of UNFCCC) So richer countries should bear a higher responsibility in order to ensure equitable access to sustainable development.

But it is not just the current rate of CO2 emissions that is important. Since carbon dioxide hangs around in the atmosphere for 50 to 100 years, the cumulative total emissions from historical data also need to be accounted for. According to the “polluter pays” principle, calculating the historical footprint of each country is an important way of determining each country’s responsibility. The way emissions are calculated also leaves room for scrutiny (and creative data visualisation). According to empirical evidence, the net emission transfers via international trade from developing to developed countries have increased, which poses the challenge of visualising “imported emissions”. The Historic Carbon Budget group worked on visualising historical time series of carbon dioxide emissions and comparing countries relative to the world mean.

Meanwhile, the Future Carbon Budget group worked on visualising how the world would look under different algorithms for “allocating” emissions to countries, where the weightings of each country would vary based on:

  • historical emissions, or the extent to which past high-emitting countries have “used up” their rights to emit in the future;
  • population change, expected population growth and the rights of future generations to development;
  • capacity for emission abatement, based on GDP and resources to invest in research and development of green technologies.

A Contraction and Convergence model, which reduces overall emissions and brings them to an equal level per capita, was put together during the afternoon. Building upon this model, developers designed a visualisation tool where one could input different implementation years, GDP and population growth rates in order to estimate the contraction and convergence path.
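A minimal sketch of the contraction-and-convergence idea described above, assuming each country's per-capita emissions converge linearly from its current level to a common world target by a chosen implementation year (the function, its parameters and the linear interpolation are illustrative assumptions, not the group's actual model):

```python
def convergence_path(per_capita_now, target_per_capita,
                     start_year, convergence_year, year):
    """Per-capita emissions for one country in a given year, converging
    linearly from its current level to the shared world target by the
    chosen convergence (implementation) year."""
    if year >= convergence_year:
        return target_per_capita  # converged: equal per-capita level
    frac = (year - start_year) / (convergence_year - start_year)
    return per_capita_now + frac * (target_per_capita - per_capita_now)
```

Multiplying each year's per-capita value by the country's projected population (fed by the GDP and population growth-rate inputs the tool accepts) gives absolute emissions, and summing over countries yields the contracting global total.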

The Phone App to Communicate Climate Change Challenge inspired one group to show climate data and visualisations on a phone based on where the person is located. It would be aimed either at staff on international organisations’ missions or at the general public. A phone app could be useful to communicate the basic climate change facts about particular regions to the staff of international organisations like the World Bank and the IMF, saving them from wading through long and complex reports. For the general public, “global climate change” often seems too complex and distant: a phone app that communicates climate facts based on location, which can be read wherever and whenever you have time, might reach those who would not otherwise connect with these issues.

The Deforestation and Land Use Challenge gathered Berlin developers to create a visualisation of land use and forest area across the world. The Forestogram shows a world map with pie charts of land use (forest, agricultural land and other areas), based on the five-yearly FAO data reports since 1990. When selecting “Usage by Kind”, the user sees a beautiful peace sign made of the pie charts of all the countries in the world.

Other ideas we worked on included a “Comparothon”, a web-based application which allows the visualisation of data based on the relative size of bubbles. Data could be compared either for a single indicator across time, or for a single cross-section in one period.

We would like to thank Ilias Bartolini, who was an amazing host at the offices of ThoughtWorks, our sponsors AMEE and all participants who shared their knowledge and skills for a Saturday. Some notes from the Hackday can be found on the Etherpad. Some prototypes are still being developed, so if you have a similar idea and would like to join in, please let us know!

For contact and feedback: velichka.dimitrova [at] okfn.org

Finnish data journalism app contest

Esa Mäkinen - February 21, 2012 in Data Journalism, External, OKF Finland, Visualization

Helsingin Sanomat, Finland’s leading national paper, is organizing an article app contest to find data visualizations.

For many journalists today, it’s not a lack of open data that’s the problem, but a lack of the skills and off-the-shelf visualizations needed to make that open data useful to them.

A year ago, the Finnish government decided that in principle all data generated with taxpayer money should be free.

This has led to tonnes of great releases. At the beginning of May, the National Land Survey of Finland will release all its maps as open data. The National Audit Office of Finland has already released campaign funding data as a kind of API. The City of Helsinki has the Helsinki Region Infoshare project, which collects city-level data into one place. Statistics Finland are also publishing all their data openly.

Furthermore, open data activists such as Antti Poikola and Petri Kola have been doing great work lobbying the government and creating a data ecosystem. An Open Knowledge Foundation chapter is about to be formed, and open data activists are crowdsourcing Freedom of Information Act-related data requests on Tietopyynto.fi.

So we have plenty of data, but using and publishing it is still lagging behind. This is especially true with the major media outlets. Journalists are still publishing static charts with their articles online or using Google Fusion Tables to make very basic visualizations. Not very innovative.

To tackle this problem, Helsingin Sanomat is organizing a contest to find article apps.

By article apps we mean applications that can be embedded in any website in a 560×400 pixel iframe. An article app should visualize some interesting data, with the possibility of user interaction or of displaying data input by users.

There are 3000 euros worth of prizes. Developers will not lose any rights to the works they submit to the contest. The contest is open to everyone, and the deadline for submissions is 8th April 2012. More info can be found here.

There are few limitations on the article apps, but we hope that the apps use open data. If an article app crowdsources data from users, it would be great if that data could be exported openly.

One part of this process has been to think about the business models of open data journalism. The idea behind the article app format is to standardize at least one format in data journalism. When we have some kind of standard, it will be easier to buy and to sell data journalism.

Our suggestion is that outlets buy the license to publish an article app once with each article – regardless of whether it’s published on the HS.fi site, in our iPad application or in some other channel. The next time we use the same graph with different data, we would pay the license fee again. For one article the compensation would be quite low, but if the app is used a hundred times, it would be higher.

This business model is still theoretical, as we have not published anything using this model. Also, the amount we would pay for one article is still unclear, as we have not had any discussions with developers. We’d love to hear your thoughts on this.

Visualising Italian Spending Data

Theodora Middleton - July 27, 2011 in External, Open Government Data, Open Spending, Visualization, WG Open Government Data

The following guest post is by Daniele Galiffa, CEO at Visup.

Some weeks ago we had the opportunity to develop a quick 24-hour prototype on the way the Italian Public Administration spends our money.

Our goal was to highlight the value of using simple and effective information visualization solutions to gain greater insight into data, especially where that data is already publicly available.

We started looking for data and found two good-quality sources: the OKFN OpenSpending project and ISTAT, the Italian national statistics institute.

Before starting, we looked at the OpenSpending interactive visualizations and we found some issues with them:

  • they focused on just one single dataset;
  • they used a TreeMap to represent non-hierarchical data;
  • they only provided “static” pictures for a single specific year (or a single specific function).

So…we decided we could do more ;)

We started by combining other datasets. The first was cartography (in order to understand WHERE the money was spent), and the second was a seven-year series of population data, helping us understand the amount of money spent per citizen during a specific period.

After some discussions we came to the final design of an interactive visualization. Features include:

  • a geographic map of Italy, where each region can be used as a filter to slice the data, focusing on a single specific region;
  • a list of all the functions, ordered alphabetically, that can be used to filter the datasets;
  • a time-range filter, listing all the years with available data (2002-2008);
  • a calculation-mode filter, which allows data to be calculated as a total or divided by citizens;
  • a detail box, giving precise information about the filtered datasets.

For each function in the list there is also a bar chart that makes it easy to find the most “expensive” functions. Each region is also coloured according to an expenses-related gradient, in order to highlight the amount of money spent in it.

In order to visualize spending trends over the 2002-2008 period, we calculated the maximum and minimum values across the whole time range, so we could find the maximum and minimum peaks among the years. This simple solution highlights yearly variations for a single region and even for each single function. Variation between years is also visible through a simple play/stop button, with an animation highlighting the data filtered by year.
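The two calculations described above — dividing spending by population to get a per-citizen figure, and finding the min/max peaks across the time range to scale the colour gradient — could be sketched like this (all names are illustrative, not the prototype's actual code):

```python
def per_citizen(spending, population):
    # spending and population are {year: value} series for one region
    # and function; divide to get the amount spent per citizen per year.
    return {year: spending[year] / population[year] for year in spending}

def peaks(series):
    # Minimum and maximum over the whole time range, used to anchor the
    # expenses gradient so yearly variation stays comparable across years.
    values = series.values()
    return min(values), max(values)
```

With the peaks fixed once for the full 2002-2008 range, the animated year-by-year view colours each region against the same scale, so a region's shade can be compared meaningfully between frames.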

The web-based application (powered by the growing open-source technology coming out of Stanford) provides answers to loads of questions, such as:

  • Which region spent the most on R&D during 2002-2008? And which spent the most per citizen?
  • How much did Regione Puglia spend per citizen on health services in 2007?
  • Where was the most money spent on education during the period 2002-2008? What about the money spent per citizen?
  • How was money divided among functions over the years?
  • How has the military budget per citizen changed?
  • How does spending on transportation in Regione Calabria in 2008 compare with the national average?

We would like to thank the OKFN community for the chance to share our thoughts with you and we’d love to hear your feedback and suggestions!

Notes from Visualizing Europe event, 14th June 2011

Jonathan Gray - June 22, 2011 in Events, Open Data, Visualization, WG Visualisation, Working Groups

The following post is from Jonathan Gray, Community Coordinator at the Open Knowledge Foundation.

Last week I participated in an event called Visualizing Europe organised by the folks at visualizing.org in association with the Open Knowledge Foundation and Infosthetics.

There were lots of really interesting talks and demos on data visualisation projects from across Europe and around the world – including from David McCandless of Information Is Beautiful (who worked on some of the original designs for WhereDoesMyMoneyGo.org) and Gregor Aisch of Driven By Data (who is currently working on the new designs for OpenSpending.org).

Gregor’s talk was about the concept of open visualisation – which includes open data, open source visualisation tools and open, collaborative working processes. He also gave a sneak preview of some of the new designs for OpenSpending.org!

The Open Knowledge Foundation is hugely excited about the potential of open data visualisation technologies to help people explore and analyse open datasets. The opportunity here is enormous – especially given the speed of recent developments in this area!

If you’re interested in meeting others interested in using open tools to visually represent open data, you can join our open-visualisation mailing list!

For more about the event you can see photos here, some of the comments on Twitter here and find more blog posts here, here, here and here. We’ll link to further material from visualizing.org as it becomes available!

Visualizing Europe, Brussels, 14th June 2011

Jonathan Gray - June 7, 2011 in Events, OKF Projects, Open Knowledge Foundation, Visualization, WG Visualisation, Working Groups

The following post is from Jonathan Gray, Community Coordinator at the Open Knowledge Foundation.

Next week some of Europe’s leading information designers and data visualisation experts will descend on Brussels for a one-day event showcasing projects and applications which visually represent Europe’s data. The event is organised by Visualizing.org in association with the Open Knowledge Foundation and Andrew Vande Moere’s wonderful Infosthetics blog.

Among those presenting are:

Further details (including information on how to request an invitation) are available at:

36 hours left to enter OpenDataChallenge.org!

Jonathan Gray - June 4, 2011 in OKF Projects, Open Data, Open Government Data, Open Knowledge Foundation, Visualization, WG EU Open Data, WG Open Government Data, Working Groups

The following post is from Jonathan Gray, Community Coordinator at the Open Knowledge Foundation.

There are now around 36 hours left to enter the OpenDataChallenge.org, Europe’s biggest open data competition!

  • There are €20,000 worth of awards and prizes for ideas, applications, visualisations, and datasets.

If you have:

  • an idea for a useful service that could build on top of public data
  • an interesting data visualisation that represents public information sources
  • a useful web application that uses open data
  • a dataset that you’ve worked to clean up, or that combines multiple information sources
  • a dataset released by a public body that you think is particularly interesting or useful

Then we’d love to hear from you!

We’d also greatly appreciate any help in blogging or tweeting about the competition, or passing this on to relevant colleagues. This is not just for open data lovers – anyone who can think of something useful that could be built using public information sources is welcome to enter. And 36 hours is plenty of time. ;-)
