
You are browsing the archive for Open Geodata.

River level data must be open

Nick Barnes - February 13, 2014 in Open Geodata, Open Government Data

My home – as you can see – is flooded, for the second time in a month. The mighty Thames is reclaiming its flood-plains, and making humans – especially the UK government’s Environment Agency – look puny and irrelevant. As I wade to and fro, putting sandbags around the doors, carrying valuables upstairs, and adding bricks to the stacks that prop up the heirloom piano, I occasionally check the river level data at the Agency website, and try to estimate how high the water will rise, and when.

[IMG: Flooded front door]

There are thousands of river monitoring stations across the UK, recording water levels every few minutes. The Agency publishes the resulting data on its website, in pages like this. For each station it shows a graph of the level over the last 24 hours (actually, the 24 hours up to the last reported data: my local station stopped reporting three days ago, presumably overwhelmed by the water), and has some running text giving the current level in metres above a local datum. There’s a small amount of station metadata, and that’s all. No older data, and no tabular data. I can’t:

  • See the levels over the course of a previous flood;
  • Measure how quickly the river typically rises, or how long it typically takes to go down;
  • Compare today’s flood to that four weeks ago (or those in 2011 or 2003);
  • Easily navigate to the data for neighbouring stations up and down river;
  • Get a chart showing the river level, or river level anomalies, along the length of the Thames;
  • Get a chart comparing that longitudinal view of the flood with the situation at any previous time;
  • Make a maps mash-up showing river level anomalies across the Thames catchment;
  • Make a personalised chart by adding my own observations, or critical values (‘electrics cut out’, ‘front garden floods’, ‘water comes into house’, …);
  • Make a crowd-sourced flooding community site combining river level data, maps, pictures, observations, and advice (‘sandbags are now available at the village hall’);
  • Make a mash-up combining river level data with precipitation records;
  • Make a flood forecasting tool by combining historical river level, ground-water, and precipitation records with precipitation forecasts.

Most of these things (not the last!) would be a small matter of programming, if the data were available. The Thames Valley is teeming with programmers who would be interested in bashing together a quick web app; or taking part in a larger open-source project to deliver more detailed, more accessible, and more useful flood data. But if we want to do any of those things, we have to pay a licence fee to access the data, and the licence would apparently then require us to get pre-approval from the Environment Agency before releasing any ‘product’. All this for data which is gathered, curated, and managed by a part of the UK government, nominally for the benefit of all.
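To give a sense of how small a matter of programming: the sketch below is purely illustrative (the feed URL, station identifier and CSV layout are all hypothetical, since no such open feed exists), but it shows roughly how few lines it would take to read a station's levels and flag a critical threshold, if only the data were published openly.

```python
# Purely illustrative: the URL, station identifier and CSV layout are hypothetical.
import csv
import io
import urllib.request

STATION = "thames-wargrave"                               # hypothetical station id
URL = f"https://example.org/river-levels/{STATION}.csv"   # hypothetical open feed
CRITICAL_LEVEL_M = 2.40                                   # 'water comes into house'

with urllib.request.urlopen(URL) as resp:
    rows = list(csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")))

# Assume each row carries a timestamp and a level in metres above the local datum.
latest = rows[-1]
print(f"{latest['timestamp']}: {latest['level_m']} m above datum")

if float(latest["level_m"]) >= CRITICAL_LEVEL_M:
    print("Warning: at or above the 'water comes into house' mark")
```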

Admittedly I couldn’t do any of those things this week anyway – too many boxes to carry, too much furniture to prop up. But surely this is a prime example of the need for open data.

US government to release open data using OKF’s CKAN platform

Mark Wainwright - February 1, 2013 in CKAN, News, Open Geodata, Open Government Data

You may have seen hints of it before, but the US government data portal, data.gov, has just announced officially that its next iteration – “data.gov 2.0” – will incorporate CKAN, the open-source data management system whose development is led and co-ordinated by the Open Knowledge Foundation. The OKF itself is one of the organisations helping to implement the upgrade.

Like all governments, the US collects vast amounts of data in the course of its work. Because of its commitment to Open Data, tens of thousands of datasets are openly published through data.gov. The new-look data.gov will be a major enhancement, and will for the first time bring together geospatial data with other kinds of data in one place.

CKAN is fast becoming an industry standard, and the US will be the latest government to benefit from its powerful user interface for searching and browsing, rich metadata support, harvesting systems to help ingest data from existing government IT systems, and a machine interface that helps developers find and re-use the data. The partnership is also excellent news for CKAN, which is being improved with enhancements to its features for ingesting and handling geodata.
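That machine interface is CKAN's standard Action API, which every CKAN site exposes over plain HTTP. As a rough sketch (the host below is a placeholder, since the new data.gov catalogue was not yet live at the time of writing), a developer could search the catalogue like this:

```python
# Sketch of a dataset search against CKAN's standard Action API.
# The host is a placeholder; data.gov's CKAN catalogue was not yet live at the time.
import json
import urllib.parse
import urllib.request

BASE = "https://catalog.example.gov/api/3/action/package_search"   # placeholder host
query = urllib.parse.urlencode({"q": "water quality", "rows": 5})

with urllib.request.urlopen(f"{BASE}?{query}") as resp:
    result = json.load(resp)["result"]

print(f"{result['count']} matching datasets")
for dataset in result["results"]:
    print("-", dataset["title"])
```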

As it happens, CKAN itself is also moving towards a version 2.0. In fact, after months of hard work, the beta-version of CKAN 2.0 will hopefully be released in a couple of weeks. To keep up to date with developments, follow the CKAN blog or follow @CKANproject on Twitter.

Can Open Data help conflict prevention?

Catherine Dempsey - April 11, 2012 in External, Featured Project, Open Data, Open Geodata, Open Government Data, WG Development, WG Open Government Data

We’re in the planning stages of a conflict prevention project called PAX, and open data perspectives have fed into our thinking about its processes and structures.

PAX aims to provide early warnings of emerging violent conflict through an online collaborative system of data sharing and analysis. We’re still in the early stages of exploration and experiment, but the principle is that open data could help provide those warnings, enabling governments, NGOs and citizens to take action to prevent escalation. PAX’s premise is that by collaborating on timely analysis of data, we may be able to raise the alarm on emerging situations much faster than with yesterday’s closed and hierarchical systems.

Openness permeates PAX’s approach, from data to processes to software. Our sources would include everything that we can get governments, NGOs and corporations to share, in addition to the direct voice of conflict-affected people – through citizen reporting, mobiles, social media and so on. We’re looking at using open source software for sharing and reporting, including Ushahidi’s original mapping platform and their soon-to-be-released revamped SwiftRiver platform.

SwiftRiver is a platform for sorting through huge flows (rivers) of information in times of crisis. It uses crowdsourcing methods not only to gather information, but also in the process of sorting and analysis. Those with local knowledge and local language are invited to join a transparent conversation about the value of any one piece of data – Is it of questionable authenticity? Could it be false information put out by the perpetrators of violence? Is it out of date? What’s the location?

With the results of that analysis, citizens can hold their governments to account on their efforts to prevent conflict. We hope to provide a new lever with which to ask governments to fulfil their responsibility to protect where populations are in danger.

The #Kony2012 campaign from Invisible Children demonstrated the power of a population calling on its government to take action to prevent Lord’s Resistance Army (LRA) atrocities. But the information they presented to the public was widely criticised for being inaccurate and out of date. Wouldn’t this type of call to action be more powerful if the information on perpetrators of violence against civilians were more accurate and timely, with online verification amplifying the voices of affected people?

Opening up direct communication with conflict-affected people could enable them to ask for the kind of action and resources they really want. Less well known is Invisible Children’s LRA Tracker project (a collaboration with Resolve), which uses mapping and real-time reporting (including reports from people affected in the region) to shine a light on LRA attacks throughout the remote border area between the Democratic Republic of Congo, South Sudan and the Central African Republic.

The pressure for openness, more information and transparency, together with the existence of many non-governmental projects seeking to open up satellite imagery to the wider public, is contributing to an environment in which governments are increasingly willing to share their own data on conflict regions. Following a stream of satellite imagery projects documenting human rights abuses (see George Clooney’s Satellite Sentinel Project, and work by Amnesty International and Human Rights Watch, among others), a number of declassified images of Homs in Syria were publicly released by the US government (through US Ambassador Ford’s Facebook page) in a new move to expose atrocities there.

Some of the most interesting projects got stuck in early, so that they are prepared and proactive when it comes to gathering and analysing real-time data. The Syria Tracker crisis-mapping deployment launched shortly after the protests began and is now the longest-running mapping project covering the violence across the country. Working with volunteers both within and outside Syria, they are systematically recording information, mapping it, and creating a vital record of atrocities by the Syrian government.

Here we have civilians collecting and publishing data which governments are certainly not publishing, and may not even be collecting. This form of openness by citizens puts pressure on governments to improve their own documentation and publication – and we hope it may also encourage them to respond to events in the way that the people on the ground want them to.

We’d love to hear your thoughts and comments on the project; you can find out more at http://www.paxreports.org/.

Can Crowdsourcing Improve Open Data?

Guest - May 23, 2011 in External, Featured Project, Open Geodata, WG Open Government Data

The following guest post is from Tom Chance (@tom_chance), founder of OpenEcoMaps. This post is cross-posted from the London Datastore blog with permission from the author.

What happens when open data is wrong? Can crowdsourcing improve it? Often, open data enthusiasts assume that the next step after the release of some government data is a smartphone app or cool visualisation. I’m more interested in collaborating on the data itself.
 
I’ve been working on a project called OpenEcoMaps and I’ve made use of open data releases, for which I’m very grateful. But the project is really focused on improving the data for London and making it useful for groups who don’t have access to smartphone developers with hip haircuts.
 
Take the Datastore entry on allotments, for example. The data was collected from boroughs by a London Assembly committee a few years back, and while it seems fairly comprehensive, it only has midpoints rather than the shape of each allotment, and, being a few years old, it includes allotments that no longer exist. It also doesn’t include any community growing spaces, such as the thousands in the new Capital Growth network.
 
It turns out Capital Growth don’t have a very good dataset, either. They’ve got a great “wow factor” map but many locations are very vague and they don’t know the size of the growing spaces.
 
Councils don’t really know what’s out there either, though some such as Southwark (where I live) have commissioned organisations to map food projects for them.
 
I found a similar situation looking at renewable energy generators for a project local to me called Peckham Power. DECC have a map with barely anything on it; councils don’t keep a list of operating equipment from planning applications or their own estate; the GLA group have at least been slowly adding their renewables data to the Datastore following a request from the people at Peckham Power.
 
What this obviously calls for is open data collaboration. As it’s geodata, I’m using OpenStreetMap to store the data – it’s a semi-mature technology in wide use, and it makes it reasonably easy for geeks to enter data and to pull it out for use in web apps, GIS software, etc.
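To illustrate how easily the data can be pulled back out of OpenStreetMap, here is a small sketch using the Overpass API (a general-purpose OSM query service, not anything specific to OpenEcoMaps); the bounding box is an arbitrary slice of London.

```python
# Sketch: pull allotments mapped in OpenStreetMap via the Overpass API.
# The bounding box is an arbitrary slice of London (south, west, north, east).
import json
import urllib.parse
import urllib.request

query = """
[out:json][timeout:25];
way["landuse"="allotments"](51.40,-0.20,51.60,0.05);
out center;
"""

req = urllib.request.Request(
    "https://overpass-api.de/api/interpreter",
    data=urllib.parse.urlencode({"data": query}).encode("utf-8"),
)
with urllib.request.urlopen(req) as resp:
    elements = json.load(resp)["elements"]

for way in elements:
    name = way.get("tags", {}).get("name", "(unnamed allotment)")
    print(name, way["center"]["lat"], way["center"]["lon"])
```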
 
OpenEcoMaps is basically a frontend, making the data in OpenStreetMap easy for people to use and contribute to. The idea is to make the data useful enough for a wide variety of organisations – public sector, charities, community groups, maybe even companies – to feel it’s worth collaborating on gathering and improving the data.
 
You can browse the maps, embed them on your web sites, use the KML files on a map you already have, and use a customised editor to contribute.
 
It’s not quite ready for use by your average local community group, but we’ve already got people using it in towns across the UK. It shows what you can do with open technology and data.
 
I’ve been meeting with the people behind London’s food strategy, Capital Growth, Southwark’s food strategy and various community food projects in my own neighbourhood to pilot OpenEcoMaps outside the map geek bubble.
 
Right now I really need help improving the code, so please spread the word. Over the next couple of years my hope is that this will spur growing interest amongst data hoarders in data collaboration rather than plain old data dumps.

Open Geoprocessing Standards and Open Geospatial Data

Guest - June 21, 2010 in External, Open Data, Open Geodata, Open Standards, WG Open Geospatial Data

The following guest post is from Lance McKee, who is Senior Staff Writer at the Open Geospatial Consortium (OGC) and a member of the Open Knowledge Foundation‘s Working Group on Open Geospatial Data.

[IMG: OGC meeting]

As the founding outreach director for the Open Geospatial Consortium (OGC) and now as senior staff writer for the OGC, I have been promoting the OGC consensus process and consensus-derived geoprocessing interoperability standards for sixteen years.

From the time I first learned about geographic information systems in the mid-1980s, I have been fascinated by the vision of an ever-deepening accumulation of onion-like spatial data layers covering the Earth.

For those unfamiliar with geographic information systems (GIS): a “spatial data layer” is a digital map that can be processed with other maps of the same geographic area. With an elevation map and a road map, for example, you can derive a road slope map. Today, geospatial information has escaped the confines of the GIS to become a ubiquitous element of the world’s information infrastructure. This is largely a result of standards: communication means transmitting or exchanging information through a common system of symbols, signs, or behavior; standardization means agreeing on that common system. OGC runs an open standardization process, and OGC standards enable communication between GISs, Earth imaging systems, navigation systems, map browsers, geolocated sensors, databases with address fields, and so on.
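To make the “layers” idea concrete, here is a minimal sketch of deriving a slope layer from an elevation layer, the overlay step behind the road-slope example (the grid values and the 10 m cell size are made up):

```python
# Sketch: derive a slope layer (in degrees) from a gridded elevation layer.
# Elevation values (metres) and the 10 m cell size are made up.
import numpy as np

elevation = np.array([
    [100.0, 102.0, 105.0],
    [101.0, 104.0, 108.0],
    [103.0, 107.0, 112.0],
])
cell_size = 10.0  # metres per grid cell

dz_dy, dz_dx = np.gradient(elevation, cell_size)      # rate of change along each axis
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

print(np.round(slope_deg, 1))
# Masking this slope grid with a rasterised road layer would give a road slope map.
```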

I was disappointed when I discovered that, in practice, despite extraordinary advances in technical capabilities for data sharing, much of the geospatial data created by scientists, perhaps most of it (other than data from civil agencies’ satellite-borne imaging systems), never becomes available to their colleagues. This lack of open access to geospatial data seems to me to be more tragic than the lack of open access to other kinds of scientific data, not only because humanity faces critical environmental challenges, but also because all geospatial data refer to the same Earth, and thus every new data layer is rich with possibilities for exploration of relationships to other data layers. I am, therefore, very glad that the Panton Principles have been published and a geospatial open access working group has been established.

In preparation for eventually writing an article on the subject of open access to geospatial data, working with a few OGC member representatives (special thanks to Simon Cox of CSIRO) and OGC staff, I collected a list of 17 reasons why scientists’ geospatial data ought to be published online, with metadata registered in a catalog, using OGC interoperability standards. (The 17 reasons are appended to this blog entry.)

In January I put these reasons into slides that I used in a talk at the Marsh Institute at Clark University in Worcester, Massachusetts. After briefly stating each reason, I explained how OGC standards and the progress of information technology make open access feasible. I provided evidence that the geosciences are rapidly moving in the direction of open access, and I offered ideas on how academics might contribute to and benefit from this progress.

I’m quite sure the Panton Principles are consistent with the goals of the geoscientists in the OGC. But I hasten to add that I am not speaking for them, and most of the 390+ OGC members are not geoscience organizations; most are technology providers, data providers and technology users with other roles in the geospatial technology ecosystem. But this diversity makes the OGC, I think, a particularly valuable “idea space” for academics who have an interest in open access to geospatial data and services. (Services are the future. A land use change model, for example, is a service when it is made available online “in the cloud” for others to use without downloading.)

One domain in the OGC that has value for open science is the work of the OGC Geo Rights Management Working Group (GeoRM WG). The Panton Principles discourage the use of licenses that limit commercial re-use or limit the production of derivative works, because the authors recognize the value of integrating and re-purposing datasets and enabling commercial activities that could be used to support data preservation. That’s important with respect to geospatial data, both because they are so often integrated and repurposed and because geospatial data sets are often complex and voluminous and thus potentially more expensive to curate than other kinds of data. The GeoRM WG has written a remarkable document, the GeoDRM Reference Model for use in developing standards for management of digital rights in the complex area of geospatial data and services. I think this will be a key resource as open access to geospatial data unfolds. The GeoDRM Reference Model provides a technical foundation necessary for implementing the Panton Principles.

Another valuable domain within the larger OGC idea space is the OGC Sensor Web Enablement (SWE) activity. Most geospatial data are collected by means of sensors, and thus it is important in the geosciences to have rigorous standard ways to describe sensors and sensor data in human-readable and machine-readable form. It is also important to have standard ways to schedule sensor tasks and aggregate sensor readings into data layers. Use of SWE standards is becoming important in some scientific areas such as ocean observation, hydrology and meteorology.
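For example, SWE includes the Sensor Observation Service (SOS). The sketch below runs against a hypothetical endpoint and issues the standard GetCapabilities request, the usual first step in discovering what sensors and observations a service offers:

```python
# Sketch: ask a (hypothetical) Sensor Observation Service what it offers.
# GetCapabilities is the standard first request to any OGC web service.
import urllib.parse
import urllib.request

ENDPOINT = "https://sensors.example.org/sos"   # placeholder endpoint
params = urllib.parse.urlencode({"service": "SOS", "request": "GetCapabilities"})

with urllib.request.urlopen(f"{ENDPOINT}?{params}") as resp:
    capabilities_xml = resp.read().decode("utf-8")

# The capabilities document lists offerings, observed properties and sensors;
# a follow-up GetObservation request would retrieve the actual readings.
print(capabilities_xml[:500])
```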

Both Web-resident sensors and data collections can be published and discovered by means of catalogs that implement the OGC Catalog Services – Web Interface Standard. This standard will likely become an integral infrastructure element for open access to geospatial data. It is designed to work with the ISO geospatial metadata standards, but those who begin implementing in this area discover that some work remains to make those standards more generally useful.

There are, in fact, many technical and institutional obstacles to overcome before science becomes as empowered by information technology as other estates such as business and entertainment. Technical interoperability obstacles are being overcome in the OGC by groups working in technology domains such as geosemantics, workflow, grid computing, data quality and oblique imagery; and in application domains such as hydrology, meteorology and Earth system science. Overcoming technical obstacles often precedes the obsolescence of institutional policies that stand as obstacles to progress.

I recently read Richard Ogle’s “Smart World,” a book about the new science of networks. In network terms, the OGC is a “hub” in an “open dynamic network”. What were once weak links between the OGC and other hubs such as the World Meteorological Organization and the International Environmental Modeling & Software Society (iEMSs) have been strengthened, and these stronger links make both the OGC and its partner hubs more likely to form new connections with other hubs. Hubs that directly contribute to digital connectivity, as the OGC does, have a special “pizzazz,” I would say. (I haven’t yet mastered the network science vocabulary). It seems to me the Open Knowledge Foundation and the Science Commons are hubs or idea spaces with a bright future of rich connections, and I look forward to seeing what connections they form with the OGC.

17 Reasons why scientific geospatial data should be published online using OGC standard interfaces and ISO standard metadata

Reason 1: Data transparency

Science demands transparency regarding data collection methods, data semantics, and processing methods. Rigor, documented!

Reason 2: Verifiability

Science demands verifiability. Any competent person should be able to examine a researcher’s data to see if those data support the researcher’s conclusions.

Reason 3: Useful unification of observations

Being able to characterize, in a standardized human-readable and machine-readable way, the parameters of sensors, sensor systems and sensor-integrated processing chains (including human interventions) enables useful unification of many kinds of observations, including those that yield a term rather than a number.

(From Simon Cox, JRC Europe and CSIRO Australia, editor of ISO 19156 (Observations and Measurements), coordinator of One-Geology geoinformatics, a designer of GeoSciML, and chair of the OGC Naming Authority.)

Reason 4: Data Sharing & Cross-Disciplinary Studies

Diverse data sets with well documented data models can be shared among diverse information communities*. Cross-disciplinary data sharing provides improved opportunities for cross-disciplinary studies.

* OGC defines an information community as a group of people (such as a discipline or profession) who share a common geospatial feature data dictionary, including definitions of feature relationships, and a common metadata schema.

Reason 5: Longitudinal studies

Archiving, publishing and preserving well-documented data yields improved opportunities for longitudinal studies. As data formats, data structures, and data models evolve, scientists will need to access historical data and understand the assumptions so that meaningful scientific comparisons can be conducted. Community standards will help ensure long-term consistency of data representation.

Reason 6: Re-use

Open data enables scientists to re-use or repurpose data for new investigations, reducing redundant data collection and enabling more science to be done.

Reason 7: Planning

Open data policies enable collaborative planning of data collection and publishing efforts to serve multiple defined and yet-to-be-defined uses.

Reason 8: Return on investment

With open data policies, institutions and society overall will see greater return on their investment in research.

Reason 9: Due diligence

Open data policies will help research funding institutions perform due diligence and policy development.

Reason 10: Maximizing value

The value of data increases with the number of potential users*. This benefits science in a general way. It also creates opportunities for businesses that will collect, curate (document, archive, host, catalog, publish), and add value to data.

* Similar to Metcalfe’s law: “The value of a telecommunications network is proportional to the square of the number of connected users of the system.”

Reason 11: Data Discoverability

Open data is discoverable data. Data are not efficiently discovered through literature searches. Searches of data registered using ISO-standard XML-encoded metadata can be efficient and fine-grained.

Reason 12: Data Exploration

Robust data descriptions and quick access to data will enable more frequent and rapid exploration of data – [“natural experiments”](http://en.wikipedia.org/wiki/Natural_experiment) – to explore hypothetical spatial relationships and to discover unexpected spatial relationships.

Reason 13: Data Fusion

Open data improves the ability to “fuse” in-situ measurements with data from scanning sensors. This bridges the divide between communities using unmediated raw spatial-temporal data and communities using spatial-temporal data that is the result of a complex processing chain.

(From Simon Cox)

Reason 14: Service chaining

Open data (and open online processing services) will improve scientists’ ability to “chain” Web services for data reduction, analysis and modeling.

Reason 15: Pace of science

Open data enables an accelerated pace of scientific discovery, as automation and improved institutional arrangements give researchers more time for field work, study and communication.

“Changes to the Earth that used to take 10,000 years now take three, one reason we need real-time science. … Governances must be able to see and act upon key intervention points.” Brian Walker, Program Director Resilience Alliance and a scientist with the CSIRO, Australia

Reason 16: Citizen science & PR

Open science will help Science win the hearts and minds of the non-scientific public, because it will make science more believable and it will help engage amateur scientists – citizen scientists – who contribute to science and help promote science. It will also increase the quality and quantity of amateur scientists’ contributions.

Reason 17: Forward compatibility

Open Science improves the ability to adopt and utilize new/better data storage, format, discovery, and transmission technologies as they become available.

(Offered to OGC’s David Arctur for this list on 6 January 2010 by Sharon LeDuc, Chief of Staff, NOAA’s National Climatic Data Center, Asheville, North Carolina, USA.)

(Another reason – cross-checking for sensor accuracy – occurred to me while writing this post.)

Open government data in the UK, US and further afield: new report

Jonathan Gray - June 1, 2010 in CKAN, Ideas and musings, Open Data, Open Geodata, Open Knowledge Foundation, WG EU Open Data, WG Open Government Data

We’re extremely proud that data.gov.uk – the UK Government’s open data portal – uses CKAN, OKF’s open source registry of open data. In the months of 2009 leading up to the release of data.gov.uk, OKF worked closely with the Cabinet Office to help them realise their vision of making public data publicly available in an open, reusable way. But our involvement with the UK government didn’t start there. Civil servants – particularly members of the Office of Public Sector Information – have been attending OKF events like OKCon since at least 2005. And we know that Sir Tim Berners-Lee – who was brought on as an expert advisor to the Government in the run-up to the data.gov.uk project – was reading the OKF blog prior to his now famous “Raw Data Now!” talk at TED! ;-)

A new report released late last month charts the history of open government data in the UK and the US, and it’s a fascinating read. Written by OKF board member Becky Hogge for a consortium of grant-giving organisations including the Hewlett Foundation, the Ford Foundation, the Omidyar Network, the Open Society Institute and DfID, the Open Data Study:

“…explores the feasibility of advocating for open government data catalogues in middle income and developing countries. Its aim is to identify the advocacy strategies used in the US and UK data.gov and data.gov.uk initiatives, with a view to building a set of criteria that predict the success of similar initiatives in other countries and provide a template strategy to opening government data.”

I was interviewed for the report, as were John Wonderlich from the Sunlight Foundation, Tom Steinberg from mySociety and Ory Okolloh from Ushahidi. Other interviewees include experts like Ethan Zuckerman and Toby Mendel, and – of course – Sir Tim Berners-Lee.

The report draws some new and surprising conclusions. As well as recognising the role of organisations like the OKF and mySociety in bringing about data.gov.uk, it emphasises how crucial engagement with civil servants was to the success of the open data project in the UK. It raises interesting questions about what motivates politicians to embrace open data strategies, and even posits that the long battle to open up geospatial data in the UK worked in a positive way: “the barrier [opening geospatial data] imposed in the UK may have served as a common call to action among both civil society and the middle layer government administrators, which in turn served to strengthen the crucial communication between these two groups in the trajectory towards data.gov.uk, and ultimately enrich the final proposition when compared to data.gov.”

The report contains mixed findings about the prospects of similar projects in developing and middle income countries, providing a useful and very detailed checklist for advocates working within those countries to consult, and pointing to the potential role of international donors in this context. In short, I’d recommend reading this report to anyone interested in open government data, or indeed, in advocacy generally. Because, as Becky notes in her blog post introducing the report:

“I’d be hard pressed to think of an idea that has permeated as quickly as open data has from the fringe to the centre.”

Response to the consultation on opening access to Ordnance Survey data

jwalsh - March 15, 2010 in Open Geodata, Open Government Data, Policy

The Open Source Geospatial Foundation, or OSGeo, founded in 2006, is a not-for-profit organization whose mission is to support and promote the collaborative development of open geospatial technologies and data.

The Open Knowledge Foundation (OKF) is a not-for-profit organization founded in 2004 and dedicated to promoting open knowledge in all its forms.

What follows is a shared response to some of the questions raised by the consultation on the future of the Ordnance Survey’s data licensing and pricing model. This was sent using Ernest Marples’ open UK Geographic Data Consultation response service. See also the Simply Understand digestible, short version of the consultation document. This Wednesday, March 17th, is the closing day of the response period.

Geographic information is critical to making effective use of open government data. Everything happens somewhere; location is invaluable context for finding data and analysing it.

The Making Public Data Public programme is part of a global trend among administrations to provide state-collected information to citizens, free of cost or constraints.


OKFNer Jo Walsh Speaking at IV Jornadas de SIG Libre

Rufus Pollock - March 8, 2010 in News, Open Geodata, Talks

The IV Jornadas de SIG Libre is taking place this week, from the 10th to the 12th of March, in Girona, Spain. This is the premier Spanish F/OSS GIS event, and OKFNer Jo Walsh will be speaking:

http://www.sigte.udg.edu/jornadassiglibre/keynotes

Open Street Map community responds to Haiti crisis

Jonathan Gray - January 15, 2010 in Exemplars, External, Open Data, Open Geodata

There has recently been a flurry of activity in the Open Street Map community to improve maps of Haiti to assist humanitarian aid organisations responding to the recent earthquake.

In particular, mappers and developers are scouring satellite images to identify collapsed and damaged buildings and bridges, spontaneous refugee camps, landslides, blocked roads and other damaged infrastructure – to help NGOs and international organisations respond more effectively to the crisis.

They have issued a call for assistance:

On January 12 2010, a 7.0 earthquake struck Port-au-Prince. The OpenStreetMap community can help the response by tracing Yahoo imagery and other data sources, and collecting existing data sets below. If you have connections with expat Haitian communities, consider getting in touch to work with them to enter place names, etc.

On Wednesday Mikel Maron wrote to the OSM talk list asking for help. Yesterday several companies authorised the OSM community to use their images.

There have been specific requests for up to date mapping information from humanitarian organisations on the ground. For example, on Wednesday, Nicolas Chavent of the Humanitarian OpenStreetMap Team wrote to the OSM talk list:

I am relaying a mapping requirement grounded in Haiti from GIS practitioners mapping there at the United Nations Office of Coordination of Humanitarian Affairs (UNOCHA): “NEED to map any spontaneous camps appearing in the imagery with size in area”

Recently generated data from Open Street Map has been used in maps by ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action) and the World Food Programme.

Yesterday evening Mikel Maron reported there had been over 400 edits since the earthquake. At the time of writing it looks like this has now more than doubled to over 800 edits since 12th January.

The following two images – before and after the earthquake – give you an impression of how much the OSM community have been doing!

[IMG: OpenStreetMap coverage of Port-au-Prince before the earthquake]

[IMG: OpenStreetMap coverage of Port-au-Prince after the earthquake]

For more see:

Some facts about UK postcodes

jwalsh - December 23, 2009 in Open Geodata, Open Government Data

Recent BBC news coverage stated that UK postcode data will be made freely available under an open licence from April 2010.

Colleagues at EDINA pointed out that some of the coverage assumes that the open data will be the same as that contained in the Royal Mail’s Postcode Address File – but this is uncertain.

Since 2000, UK postcode data has been managed by a consortium called “Gridlink” which comprises Ordnance Survey, Royal Mail, the Office for National Statistics and the General Register Office for Scotland.

Ordnance Survey collects the data from the other consortium members, and they all have the right to re-sell the collected data. We can see the contractual setup of the Gridlink consortium in this response to an FOI request regarding the National Statistics Postcode Directory.

In short: it’s complicated. So it’s strange, but not surprising, to be sent a link to a confused opinion piece in the Financial Times discussing the role of “the Royal Mail, whose intellectual property the postcode datasets are”.

There are many different “postcode datasets” produced and licensed by different members of the Gridlink consortium.

ONS and Royal Mail both sell “data products” which add lots of contextual data to the basic elements: the fact that a postcode exists, and the fact that it exists at a location. PAF records all the delivery addresses which share each postcode. The NSPD is meant for demographic statistics – it includes references to codes for the census areas, health authorities, local government areas, etc. covering the location of the postcode.

The simplest bit of raw data is the association of the postcode with the national grid reference at 1 metre resolution. The rest is added value, created by looking at the spatial relationships with other data sets. Imagine pushing a pin through a loose stack of paper shapes, giving them a shake, and seeing which ones stay on the pin. Then note which shapes stayed on the pin, and arrange the notes in table form with an entry per postcode.
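That “pin through the paper shapes” step is just a point-in-polygon test repeated once per postcode. Here is a minimal sketch using the Shapely library, with made-up coordinates and area names:

```python
# Sketch: the 'pin through paper shapes' step as a point-in-polygon lookup.
# Coordinates and area names are made up for illustration.
from shapely.geometry import Point, Polygon

postcode_points = {
    "SE15 1AA": Point(531000, 176500),   # national grid eastings/northings
    "SE15 2BB": Point(532200, 177100),
}

areas = {
    "Ward: Peckham": Polygon([(530000, 176000), (533000, 176000),
                              (533000, 178000), (530000, 178000)]),
    "Ward: Nunhead": Polygon([(533000, 176000), (535000, 176000),
                              (535000, 178000), (533000, 178000)]),
}

# For each postcode, note which 'shapes stay on the pin'.
for postcode, point in postcode_points.items():
    matches = [name for name, shape in areas.items() if shape.contains(point)]
    print(postcode, "->", matches)
```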

If the promise of data.gov.uk is realised, then anyone should be able to derive datasets that look a lot like the NSPD (but not the PAF). Raw postcode data provided under an open license does not stop Royal Mail from creating their own added-value products.

Open access just to the mapping between postcode and location would re-enable the “civic mashup” services that were dependent on ernestmarples.com. It’s enough to answer questions like “Who are my local councillors?”, “Are we in this school’s catchment area?” and “What new development work is being planned within a mile of me?”. Open postcodes will be fundamental to unlocking data.gov.uk and turning it into knowledge.

The data resulting from the activity of Gridlink is owned by the Crown, and is Crown Copyright. Postcodes are facts, they can be observed and independently recorded. NPEMaps and Free the Postcode are, or were, two efforts to reconstruct the facts from open sources – from old maps, or from GPS coordinates, and peoples’ knowledge about their own postcode.

I could go on in more obsessive detail, but you probably get the point. I’m unnerved by the idea that emotive media coverage of the Royal Mail’s future, as well as OS’s, will colour the consultation on opening state-collected geographic information in the UK. I would like to see more facts set out straight.

Thankfully, one worrying misstatement in the FT article has already been corrected – the consultation on how to provide open access to the Ordnance Survey’s data was released today as expected. It runs until March 10th, and the likely last possible date that the current government can act on this is April 22nd. More on this later.
