
You are browsing the archive for Spending Stories.

Data Expedition: Tax Avoidance and Evasion – 6th June

Lisa Evans - May 24, 2013 in Open Spending, School of Data, Spending Stories

Tax expedition

Want to dig deep into tax avoidance and evasion? We have gathered a wide range of data on this sensitive topic, and for one afternoon we’ll guide you through some of the key decisions to think about when writing a story on these topics. With tax evasion and tax avoidance currently such a hot topic in the media, it’s crucial that people can understand the difference between the two terms as well as the mechanisms by which they happen.

When: Thursday June 6th – 12:00 BST to 17:00 BST – check the equivalent time in your timezone

We’ll be looking for projects such as:

  • Exploring the tax avoidance schemes used by Apple, Google, Amazon, or Starbucks

  • Looking at data gathered by tax collection authorities and patterns of avoidance that emerge from that dataset

  • Creating a “most wanted” list of tax evaders for future research

  • Your project here!

Sign up here for the Data Expedition!

Please note that limited space is available. For more information about the Data Expedition format, we encourage you to read this article.

How can I participate?

To get involved either:

  • Lead a team! (Up to 6 hours) Are you able to help coordinate a team on the day? This involves helping your team to understand the options and the research that has been conducted, and starting a discussion about the choice of story and how to construct a plan for making the story happen. The School of Data team will hold a specific hangout for team leads on Monday 3rd June at 12:00 BST to prepare for Thursday’s activities. Please email schoolofdata [at] okfn.org if you are interested in getting involved.

  • Offer an expert introduction! (Up to one hour) We’re looking for experts who understand the loopholes or tactics used by companies in different countries to offer quick introductions from 5-30 mins long to get the expedition started.

  • Join us as a participant on the day! (3-6 hours) You will need to be prepared to brainstorm ideas with others in your group and ultimately explain your choice of story. There will be two roles you can take on the day – either getting stuck into the data (analyst) or writing (storyteller).

Aims of the expedition

We will aim to give people:

  • A clear understanding of the difference between tax evasion and tax avoidance
  • A key understanding of a few of the schemes via which people engage in them
  • Perhaps also a few story ideas!

How to get involved

Please make sure you are registered here and that you select “Tax Avoidance/Evasion” in the “I’m Interested in…” section. Please note: you will need to be available for at least 3 hours during the expedition period and spaces will be limited, so preference will be given to those who can definitely commit to the expedition. Spaces will be confirmed shortly before the expedition.

Stay up to date with the latest data expeditions

Want to be informed any time there is a new data expedition? Join the School of Data announcement list to get notifications of the expeditions as soon as they are announced!

IRS: Turn Over A New Leaf, Open Up Data

Beth Noveck - May 24, 2013 in Data Journalism, Open Spending, Spending Stories

The following post is co-authored by Stefaan Verhulst and Beth Noveck. It is cross-posted from Forbes.com. If you’d like to learn more about tax data, check out our data expedition on tax evasion and avoidance on the 6th June!

The core task for Danny Werfel, the new acting commissioner of the U.S. Internal Revenue Service (IRS), is to repair the agency’s tarnished reputation and achieve greater efficacy and fairness in IRS investigations. Mr. Werfel can show true leadership by restructuring how the IRS handles its tax-exempt enforcement processes.

People filing tax forms at the IRS in 1920.

One of Mr. Werfel’s first actions on the job should be the immediate implementation of the groundbreaking Presidential Executive Order and Open Data policy, released last week, which requires that data captured and generated by the government be made available in open, machine-readable formats. Doing so will make the IRS a beacon to other agencies in how to use open data to screen for wrongdoing and strengthen law enforcement.

By sharing readily available IRS data on tax-exempt organizations, encouraging Congress to pass a budget proposal that mandates release of all tax-exempt returns in a machine-readable format, and increasing the transparency of its own processes, the agency can begin to turn the page on this scandal and help rebuild trust and partnership between government and its citizens.

Every year in the United States, approximately 1.5 million registered tax-exempt organizations file a version of the “Form 990” with the IRS and state tax authorities. The 990 collects details on the financial, governance and organizational structure of America’s universities, hospitals, foundations, and charities, with the aim of ensuring that they deserve their tax-exempt status. We are missing an opportunity to analyze this data so that decisions about whom to investigate can be based on evidence rather than conjecture, on patterns rather than prejudice.

Currently, hundreds of thousands of the largest tax-exempt organizations are required to file their returns electronically. The IRS should release this data in bulk as a free database immediately. If the IRS were to make these 990 data available in a form that could be easily downloaded and processed by computer programs for visualization and statistical analysis, researchers could quickly do more extensive, in-depth empirical research to better understand the sector and spot fraud, waste and abuse more systematically. Knowing who runs a nonprofit can help detect fraud: Attorneys General have occasionally found the same person collecting full-time salaries from several different nonprofits.
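As an illustration of the kind of check that bulk 990 data would make routine, here is a minimal sketch in Python (using pandas) that flags officers who appear on the filings of several different nonprofits. The file name and column names are assumptions for illustration only; no such bulk extract is currently published.

```python
# Sketch only: "form990_officers.csv" and its columns ("officer_name", "ein",
# "compensation") are hypothetical stand-ins for a bulk Form 990 extract.
import pandas as pd

officers = pd.read_csv("form990_officers.csv")

# Count how many distinct organisations (EINs) pay each named officer.
counts = (
    officers.groupby("officer_name")
    .agg(orgs=("ein", "nunique"), total_comp=("compensation", "sum"))
    .reset_index()
)

# Officers drawing pay from three or more organisations merit a closer look.
flagged = counts[counts["orgs"] >= 3].sort_values("total_comp", ascending=False)
print(flagged.head(20))
```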

Check out the guide on tax avoidance and evasion from OpenSpending to find out more about how to follow the money.

While the IRS is using robo-audits, catching large evasions still happens mainly by happenstance. With open data, they could be detected, first, through computer analysis. By using technology to expand the regulator’s toolkit, it becomes possible to target limited enforcement resources to where problems really are. The Securities and Exchange Commission has, for instance, developed an improved capacity to detect and prevent insider trading more effectively by making public information computable and easier to mine. In addition, open data creates the means for government and citizens to collaborate on spotting problems. As the adage goes, with many eyes, all bugs are shallow.

Similarly, Form 990 requires charities to disclose loans to or from current and former officers. Making these and other transactions that correlate with instances of fraud easily searchable would save government resources at the state and federal levels.

With a 990 database, it would also be easier to run queries to understand which executives receive the highest compensation. By combining 990 and other data, such as lobbying data, it might become possible to spot impermissible political activities.

President Obama’s 2014 budget calls for requiring all tax-exempt organizations to file electronically, and also requires that the IRS make these already public returns available in a timely, machine-readable format. These data would create a corpus of open, computable information that could be used to understand where nonprofits are providing services and where there are gaps. Enabling more people and organizations to analyze, visualize, and mash up the data would create a large public community that is interested in the nonprofit sector and can collaborate to find ways to improve it.

In sum, the data that the IRS collect about nonprofit organizations present a great opportunity to learn about the sector and make it more effective.

Making IRS data open won’t solve every problem; the recent scandal has shown that the IRS must be more transparent not only about the information it collects but also about how it manages that information. A commitment on day one to share the data it collects in a machine-readable manner would show true leadership by Mr. Werfel and help solidify the Obama administration’s open government legacy.


Stefaan G. Verhulst is the Chief Research and Development Officer of the Governance Laboratory @NYU (GovLab) where he is responsible for building a research foundation on how to transform governance using advances in science and technology.

Beth Noveck is Founder and Director of the Governance Laboratory. She served in the White House as the first United States Deputy Chief Technology Officer and founder of the White House Open Government Initiative (2009-2011). She was appointed senior advisor for Open Government to the UK Prime Minister David Cameron. She is the author of “Wiki Government: How Technology Can Make Government Better, Democracy Stronger and Citizens More Powerful.”

Data Expeditions at MozFest

Lucy Chambers - November 14, 2012 in Featured, School of Data, Spending Stories, Workshop

Expeditions into the Data Landscape: the School of Data goes to #MozFest. Find out what happened at MozFest – and see the tools and data sets to recreate it yourself!

Saturday morning at MozFest. A sold-out building, full of a thousand hackers, builders, makers, geeks, journalists, thinkers and more. And right at the top on the 9th floor? Three ‘data sherpas’ in sparkly cloaks…

Data Expeditions

The concept behind the ‘Data Expeditions’ run by the School of Data at this year’s MozFest was simple. Based on the ‘Dungeons and Dragons’ role-playing game, data explorers would tackle real world problems together, developing their data wrangling skills in the process.

As a first step, explorers were asked to rate their abilities. Can you tell a story? analyse data? code? tweet? draw? The emphasis was on ‘doing’, but not in any narrow sense – often, it’s the data newbie asking a ‘stupid question’ that sets the team on a fresh track, and becomes the biggest contribution of the day.

Next came the quests. Three Data Sherpas (still sparkling) set out three missions: delving into the data surrounding extractive industries and oil mines; exploring possible causes for a dramatic plummet in life expectancy in central Africa; and burrowing into the grimy world of tax havens.

The explorers divided, the sherpas guided – and the quests began!

Quest 1: Mining the Mines

The discovery of oil or natural resources in a country, and the subsequent mining and extraction activities, have enormous economic and political significance. While some countries benefit from their natural wealth, others fall prey to corruption and exploitation. Approaching this topic, we did not have a clear story we intended to investigate – instead, the discussion in the first part of our session focussed on how to approach such a complex domain. After some discussion (luckily, the large team included two experts from the area and an investigative reporter), three theme areas emerged that we then decided to dig into further in smaller teams:

  • One team worked on possible ways to combine company ownership information, conference documentation and social network data to generate a picture of the network of actors, companies and interests behind the extractive industry.

  • A second team decided to use a commercial database to explore the ownership of a single mine in the DRC. Where did money come from and who are the owners? A quick set of post-its on our data expeditions map served as a visualization of the setup.

  • Mapping was also the topic of the third group, which aimed to contrast overall revenue from extractives with economic, political and social indicators, such as the Corruption Perceptions Index. Using CartoDB, the group was able to easily generate a map that displayed country-by-country comparisons of the resulting ratios (a rough sketch of this kind of data preparation follows below).
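For anyone wanting to try something similar, here is a minimal sketch in Python (pandas) of the data preparation step: joining extractives revenue to a governance indicator and computing a per-country ratio ready for upload to a mapping tool such as CartoDB. The file names and column names are assumptions for illustration; the MozFest group worked with its own datasets.

```python
# Sketch only: file names and columns below are hypothetical.
import pandas as pd

revenue = pd.read_csv("extractives_revenue.csv")        # columns: country, revenue_usd
cpi = pd.read_csv("corruption_perceptions_index.csv")   # columns: country, cpi_score

# Join the two tables on country and compute a crude revenue-to-score ratio.
merged = revenue.merge(cpi, on="country", how="inner")
merged["revenue_per_cpi_point"] = merged["revenue_usd"] / merged["cpi_score"]

# Write a CSV that can be uploaded to CartoDB (or any other mapping tool).
merged.to_csv("extractives_vs_cpi.csv", index=False)
```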

Quest 2: A Call to Investigate an African Crisis

In true Dungeons & Dragons style, Data Sherpa Michael got a call from some dwarves in Middle Earth, who had heard about a sudden drop in life expectancy in central Africa. They didn’t know the details, but believed that the World Bank gnomes might have some facts which could shed some light on the mystery.

Cue the explorers in quest group two, who worked together throughout (kudos to such a large number!) to solve the mystery. After initial musings about a civil war, the team discovered a striking correlation between the increasing prevalence of HIV and plummeting life expectancies. By cross-referencing with other data sets, the team also noticed some interesting connections around health expenditure, public statements issued by politicians, and quirkier topics such as the target audience of condom marketing. More work would need to be done to really make a claim about causality, but there was certainly plenty to mull over.

Quest 3: Tax Islands

This was an experiment in providing a group with a chain of possible investigations (a map for the landscape) and then allowing a storyteller to choose their own expedition path through the data. The group divided into two teams to explore the possible stories (routes) you might want to take through tax avoidance and evasion.

The first group chose to show how an online book retailer might avoid tax, starting at the point of sale and tracing the money all the way through to the final countries in which tax was paid (and at what rate!). The second group wanted to show the effects of changes in tax laws, and looked at where large companies paid their tax and how they ‘moved’ as tax breaks changed.

The session was a big success. People really engaged with the issue, and the tax team benefitted from some particularly valuable insights from a few accountants who had direct experience of working on corporation tax for large companies. The format really worked (unless it was the spangly cloaks!) and our data expedition troops stayed at their desks until the very end.

Next steps: Online Mountaineering

The Data Expeditions format was somewhat experimental. We had no idea if the concept would work, but our inkling was that the only way to really teach data skills was to confront people with a mountain. By forging your own path (with the occasional leg-up or guidance from a sherpa!), data explorers can pinpoint the extra skills they need to develop in order to scale new obstacles, map their own journey and ultimately tell their own story. The answer may be at the top, but there are multiple routes to the summit – and each will offer a fresh view over the landscape.

Because the session was so successful, we are keen to repeat the Data Expeditions formula. Our next challenges will be:

  • To work out how to recreate this social dynamic online
  • To continue to follow up on these threads, questions and leads

To do this, we need your help!

  • Were you at the Data Expeditions session at MozFest? Write a short summary of what your team did and what you learned and send it to schoolofdata[@]okfn.org – we’d love to feature it on our blog!
  • Keen to run your own Data Expeditions session? Please do! You can find some of the resources we used below. Additionally, see the ‘Data Expeditions Toolkit’ below – sign up to the mailing list and drop us a line at schoolofdata [@] okfn.org to find out more.
  • Know of more resources? Drop us a line via the mailing list or schoolofdata [@] okfn.org to let us know!

Recreate it yourself!

Use the Expeditions Toolkit

  1. Print out a copy of the character sheet (front, back) for all of the people participating
  2. Think of your topic areas and devise a suitably ridiculous name for your expedition. (Bonus points for ridiculous puns revolving around online gaming).
  3. Make some role description cards. For each of the possible roles outlined in the character sheets, outline tasks which people with that skillset could perform. We recommend at least 3 possible levels.
  4. Buy yourself a cape (optional)
  5. Get rolling – hand out your role description sheets, get people to fill in the radar plot and assign roles. Allow people to also specify a role that they are not so strong in but would like to know more about; you can buddy them up with someone who is more advanced in those skills and encourage them to watch closely and ask lots of questions.
  6. Talk everyone through the notion of the expedition and explain their roles to them. Make it clear that the aim is to produce something at the end of the session – that could be a blog post, a visualisation or a load of post-it leads. Don’t specify; let them be as creative as possible!
  7. Start the storytellers off thinking of a question and get them talking to the scouts and analysts about where they might find that data. You’ll need lots of post-it notes.
  8. Get the designers and engineers listening in to the conversations happening and working out how it might be possible to present the information, and feeding back into the discussion.
  9. Once you’ve got a question, set the scouts and the analysts loose on finding and analysing the data.
  10. Get everyone to document their expedition: the avenues they tried which failed for some reason (the path was blocked), what worked, what data sources they found and what tools they used. These are all useful for generating leads which people could follow up on afterwards and for teaching people how a real data campaign may be run.

We did ours in 3 hours – you may like to try doing it for longer; however, make sure your session is short enough to hold people’s full attention for the duration and to keep energy high.

That’s it. Good luck, noble sherpas.

Resources that we used:

Data Sources

Tools & Resources

How Spending Stories Fact Checks Big Brother, the Wiretappers’ Ball

Lucy Chambers - February 27, 2012 in Open Spending, Spending Stories

This piece was co-written with Eric King of Privacy International and comes as Privacy International launches a huge new data release about companies selling surveillance technologies. It is cross-posted on the MediaShift PBS IDEA LAB and the OpenSpending blog.

Today, the global surveillance industry is estimated at around $5 billion a year. But which companies are selling? Which governments are buying? And why should we care?

We show how the OpenSpending platform can be used to speed up fact checking, showing which of these companies have government contracts, and, most interestingly, with which departments…

The Background

Big Brother is now indisputably big business, yet until recently the international trade in surveillance technologies remained largely under the radar of regulators and civil society. Buyers and suppliers meet, mingle and transact at secretive trade conferences around the world, and the details of their dealings are often shielded from public scrutiny by the ubiquitous defence of ‘national security’. Perhaps unsurprisingly, this environment has bred a widespread disregard for ethics and a culture in which the single-minded pursuit of profit is commonplace.

For years, European and American companies have been quietly selling surveillance equipment and software to dictatorships across the Middle East and North Africa – products that have allowed these regimes to maintain a stranglehold over free expression, smother the flames of political dissent and target individuals for arrest, torture and execution.

They include devices that intercept mobile phone calls and text messages in real time on a mass scale, malware and spyware that give the purchaser complete control over a target’s computer, and trojans that allow the camera and microphone on a laptop or mobile phone to be remotely switched on and operated. These technologies are also being bought by Western law enforcement, including small police departments in which the ability of officers to understand the legal parameters, levels of accuracy and limits of acceptability is highly questionable.

The data that has just been released on the Privacy International website includes the following:

  1. An updated list of companies selling surveillance technology, and
  2. A list naming all the government agencies attending an international surveillance trade show known as the Wiretappers’ Ball.

Some names are predictable enough: the FBI, the US Drug Enforcement Administration, the UK Serious Organized Crime Agency and Interpol, for example. The presence of others is deeply disturbing: the national security agencies of Bahrain and Yemen, the embassies of Belarus and the Democratic Republic of Congo, and the Kenyan intelligence agency, to name but a few. A few are downright baffling, like the US Department of Commerce, the US Fish & Wildlife Service and the Clark County School District Police Department.

Now, with the aid of OpenSpending, anyone can cross-reference which contracts these companies hold with governments around the world. The investigation continues…

Using OpenSpending to speed up fact-checking

Privacy International approached the Spending Stories team to ask for a search widget that could search across all of the government spending datasets for contracts held between governments and these companies (until this point, it had only been possible to search one database at a time).

The Spending Browser is now live at http://opendatalabs.org/spendbrowser. And, as the URLs correspond to the queries, individual searches can be passed on for further examination and, importantly, embedded in articles directly. Try it yourself against the companies listed in the Surveillance section of the Privacy International site (just enter a company, e.g. ‘Endace Accelerated’, into the search bar).
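Because each search maps to a URL, a query can also be built programmatically and then shared or embedded. Below is a minimal sketch in Python; the query-parameter name is an assumption for illustration, so check the browser’s address bar for the exact format the live site uses.

```python
# Sketch only: the parameter name "q" is an assumption; inspect the Spending
# Browser's address bar for the real query format.
from urllib.parse import urlencode

BASE_URL = "http://opendatalabs.org/spendbrowser"

def search_url(company: str) -> str:
    """Return a shareable URL that reproduces a search for the given company."""
    return f"{BASE_URL}?{urlencode({'q': company})}"

print(search_url("Endace Accelerated"))
```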

The Spending Browser will become increasingly powerful as more data is loaded into the system.

Want to help make this tool even more powerful? Get involved and help to build up the data bank.

Coverage

You can read more about the background to these stories on the Privacy International site and recent coverage by the international media:

Launch of Open Spending Blog: Thoughts on Journalist-Programmer interaction

Lucy Chambers - October 28, 2011 in Open Spending, Spending Stories

This post is by Lucy Chambers, Community Coordinator at the Open Knowledge Foundation.

Thanks to the hard work of the OpenSpending team in getting the software to an exciting stage of development that we are happy to write about, and some aesthetic love from our brilliant designer, Kat Braybrooke, the OpenSpending blog was officially launched yesterday.

You can tune in to the blog for updates on the project at: http://blog.openspending.org/

Our first major post is a roundup from the Global Investigative Journalism Conference in Kyiv, Ukraine, which Friedrich Lindenberg and I attended. We got some really useful feedback regarding Spending Stories and you can read the thought process on the OpenSpending blog. The planning process is ongoing, so if you have thoughts on what you would like to see from a service which adds context to the numbers behind stories about spending, please drop us a line via the OpenSpending mailing list.

The second thing we gained was real appreciation for how badly a system for bringing together coders and journalists is needed. We have some ideas for how we can help this happen and how you can get involved with existing initiatives, which we will write up in a separate post, but we’d also appreciate your input.

Are you a journalist? Have you worked with programmers in the past? How did you find them? How did you know it was someone you could trust to do a good job with your data?

Are you a coder? Have you ever worked with journalists, either as a volunteer or for pay? What is/would be your motive for collaborating with journalists? Where can people find you?

Please drop us a line on the Data-Driven-Journalism mailing list with your thoughts. It would be great to have a cracking post full of personal anecdotes!

Release of Whole of Government Accounts

Theodora Middleton - July 13, 2011 in Campaigning, External, News, Open Government Data, Open Spending, Spending Stories, WG Open Government Data, Where Does My Money Go

The following guest post is by Dan Herbert, who works on our Where Does My Money Go and Open Spending projects. He is the Programme Manager for MSc Accounting at Oxford Brookes University.

This week sees the publication of the first Whole of Government Accounts for the UK. WGA represents the end of a decade-long project to implement commercial-style accounting reports for the UK public sector. The Financial Times has said that we will now have a set of accounts for the UK that are just like those of Marks and Spencer. The reasons given for developing WGA have centred on improved accountability and a better understanding of the UK’s public finances. There are, however, good reasons to believe that neither of these claims can be substantiated.

Commercial accounting reports have their roots in the split between the owners and managers of companies. The managers of companies need to account for their actions to the owners: the shareholders. The shareholders are principals and the managers act as agents. Accounts demonstrate that the agents have acted in the principals’ best interests, and the audit of the accounts serves to demonstrate that the accounts are a true and fair representation of their actions. The reports are supposed to be useful to the principals in making decisions: mainly whether to sell their shares or to replace the managers.

Since 2005 the accounting reports for listed companies in Europe have been prepared in line with International Financial Reporting Standards – IFRS. IFRS are based on a conceptual framework that enshrines the role of shareholders/investors as the primary users of accounting reports. The standards are then designed to meet their information needs. It is on the basis of IFRS standards that the accounts of UK public bodies are now prepared and they underpin WGA. That they were never designed with public bodies in mind seems not to matter to those who took this policy decision.

Applying IFRS to public bodies may seem, on the face of it, to be a ‘good thing’. There are, however, serious problems. The main one is how the principal/agent relationship works for public bodies. There is absolutely no empirical evidence that anyone actually uses the accounts produced by public bodies to make any decision. There is no group of principals analogous to investors. There are many lists of potential users of the accounts. The Treasury, CIPFA (the UK public sector accounting body) and others have said that users might include the public, taxpayers, regulators and oversight bodies. I would be prepared to put up a reward for anyone who could prove to me that any of these people have ever made a decision based on the financial reports of a public body. If there are no users of the information, then there is no point in making the reports better. If there are no users, more technically correct reports do nothing to improve the understanding of public finances. In effect, all that better reports do is legitimise the role of professional accountants in the accountability process.

Open data provides a route out of this accountability dead end. Instead of refining what are fundamentally useless reports, open data does away with the principal/agent accountability model and replaces it with a more fluid one. Open data does not need anyone publishing the data to think about who the users are. Once data is in the public domain, users define themselves and design reports that suit their needs by extracting the data that is relevant to them. The various analysis tools that have been produced to analyse local authority spending show that more than one style of report can be produced. Instead of a single aggregation of the data following IFRS, many aggregations are possible for different interest groups. Linking financial data to other performance data also becomes possible, something that public sector accounting reports have not successfully addressed.

The open data model does not require professional auditing in the same way as IFRS accounting reports. So long as the data released is complete, aggregations and presentations of the data can be ‘audited’ using a ‘many eyes’ model and corrections made through discursive processes. Further, the open model has the potential to embed the discursive, questioning aspect of accountability that static, professionally controlled accounting reports fail to provide. Instead of the focus being on the production of a report, the focus is on the reporting process.

Anthony Hopwood, the late Dean of the Said Business School in Oxford once wrote “Those with the power to determine what enters into organisational accounts have the means to articulate and diffuse their values and concerns, and subsequently to monitor, observe and regulate the actions of those that are now accounted for.” IFRS means that the values enshrined in accounting reports are those of the professional accountant. Open data allows the users to decide what is extracted from the data, how it is aggregated and reported. Open data has the possibility to shift power from preparers of accounts to users.

This power shift is a big deal. The Financial Times article heralding the release of the WGA report focused on the extent of the indebtedness of the UK and on the pensions liability for public employees. Are these really all that the public are interested in? If I were going to invest my money in a company with the sole aim of making a return, they would be important to me. As a citizen I am more interested in the priority given to different categories of spending, what is being done to alleviate social problems and where inefficiency in spending lies. IFRS does not show this, so however technically clever the WGA report is, it may have no relevance to those whose interests it claims to represent. The same effort put into releasing usable open accounting records has far greater potential to engage the public.

OpenSpending goes live

Lucy Chambers - June 26, 2011 in Open Spending, Spending Stories

After several months of hard work, we are glad to announce the official launch of OpenSpending and turn to everyone interested in government accountability and financial transparency to help shape the future of the project.

The OpenSpending project will make it easier for the global public to explore and understand government spending. Our developers have already imported a range of datasets, including projected budgets from the European Union, detailed spending data from the UK Treasury, and smaller datasets such as the UK’s Barnet Council local budget.

The project will also integrate with Spending Stories, our Knight News Challenge-winning project, allowing narratives and explanations to be woven around spending data.

What can you do with OpenSpending?

At present, the OpenSpending interface allows you to:

  • Load a wide range of financial expenditure data.
  • Visualize, search and dissect your data.
  • Embed visualizations into other sites.
  • Use an API to create custom applications and visualizations (see the sketch after this list).
  • Flag interesting and erroneous elements of the data.
  • Reconcile entries with OpenCorporates company records.
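The feature list above mentions an API. As a rough idea of what calling such an API from Python might look like, here is a short sketch using the requests library; the endpoint path and parameters are assumptions for illustration rather than the documented OpenSpending interface, so consult the project’s API documentation for the real details.

```python
# Sketch only: the endpoint path and parameters below are assumptions, not the
# documented OpenSpending API.
import requests

BASE_URL = "http://openspending.org"

def search_entries(dataset: str, query: str, limit: int = 10):
    """Fetch a handful of spending entries matching a free-text query."""
    resp = requests.get(
        f"{BASE_URL}/api/search",  # hypothetical endpoint
        params={"dataset": dataset, "q": query, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Both the dataset name and the query term are placeholders.
    for entry in search_entries("ukgov-25k-spending", "consultancy").get("results", []):
        print(entry)
```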

The vision for OpenSpending:

Our long-term aim is to track every government and corporate financial transaction across the world and present it in useful and engaging forms for everyone from a schoolchild to a data geek.

Much like OpenStreetMap, we want people to be able to add to this database easily, using information from places and organisations of interest to them. We hope that the ease with which citizens can obtain budget data from local governments will continue to increase, and OpenSpending provides a platform for them to store and visualise it for the benefit of everyone in their community.

OpenSpending already provides an interface to your data, but we also aim to provide a white label service for custom sites such as Where Does My Money Go, built to allow UK taxpayers to understand where their government spends public money.

But the most interesting features of OpenSpending are yet to come and we want your input!

Whether you want to tell spending stories, enhance data journalism, or tailor the platform for use as an educational tool, we would like to hear from you. We want the users of OpenSpending to be able to annotate entries, group them in ways meaningful to them and write extensions to make the platform their own. Please post your comments and ideas to the OpenSpending mailing list.

We envisage that OpenSpending will spawn a community of budget analysts of a new kind. More data-literate citizens are citizens who demand more from their government and we hope that the tools and practices we are building and exploring will also lead to better informed decisions from governments themselves!

Interested in Getting Involved?

This is an incredibly ambitious project and there will be plenty of opportunities to get involved, so whether you are a developer, a data journalist or a proactive citizen interested in how your money is being spent, we want to hear from you!

You can register your interest in getting involved via this form. Simply fill it in and we will get back to you as soon as we can.

In addition, you may be interested in taking part in the pre-OKCon OpenSpending workshop on 29th June in Berlin.

Spending Stories is a winner of the Knight News Challenge!

Jonathan Gray - June 22, 2011 in Data Journalism, OKF Projects, Open Data, Spending Stories

The following post is from Jonathan Gray, Community Coordinator at the Open Knowledge Foundation.

We’re thrilled to announce that our proposal for Spending Stories has been chosen as a winner for the Knight News Challenge.

What is Spending Stories about?

News stories about government finances are common, but readers often find it challenging to place the numbers in perspective. Spending Stories will contextualize such news pieces by tying them to the data on which they are based. For example, a story on City Hall spending could be annotated with details on budget trends and related stories from other news outlets. The effort will be driven by a combination of machine-automated analysis and verification by users interested in public spending.


We can’t wait to get started on this project – and you’ll hear more from us about this very soon. In the meantime, if you’re interested in spending data you can join our wdmmg-discuss mailing list, and if you’re interested in how we can use data to improve the news, you can join our data-driven-journalism mailing list.

If you want more information about the project you can see our blog post from earlier this year and our lengthier project FAQ!
