
The Public Domain Review brings out its first book

Adam Green - November 19, 2014 in Featured, Public Domain Review

Open Knowledge project The Public Domain Review is very proud to announce the launch of its very first book! Released through the newly founded spin-off project the PDR Press, the book is a selection of weird and wonderful essays from the project's first three years, and shall be (we hope) the first of an annual series showcasing in print form essays from the year gone by. Given that there are three years to catch up on, the inaugural incarnation is a special bumper edition, coming in at a healthy 346 pages, and jam-packed with 146 illustrations, more than half of which were newly sourced especially for the book.

Spread across six themed chapters – Animals, Bodies, Words, Worlds, Encounters and Networks – there are a total of thirty-four essays from a stellar line-up of contributors, including Jack Zipes, Frank Delaney, Colin Dickey, George Prochnik, Noga Arikha, and Julian Barnes.

What’s inside? Volcanoes, coffee, talking trees, pigs on trial, painted smiles, lost Edens, the social life of geometry, a cat called Jeoffry, lepidopterous spying, monkey-eating poets, imaginary museums, a woman pregnant with rabbits, an invented language drowning in umlauts, a disgruntled Proust, frustrated Flaubert… and much much more.

Order by 26th November to benefit from a special reduced price and delivery in time for Christmas.

If you want to get the book in time for Christmas (and we do think it makes a fine addition to any Christmas list!), please make sure to order before midnight (PST) on 26th November. Orders placed before this date will also benefit from a special reduced price!

Please visit the dedicated page on The Public Domain Review site to learn more and also buy the book!

The heartbeat of budget transparency

Tryggvi Björgvinsson - November 18, 2014 in Featured, Open Budget Survey Tracker


Every two years the International Budget Partnership (IBP) runs a survey, the Open Budget Survey, to evaluate formal oversight of budgets, how transparent governments are about their budgets, and whether there are opportunities to participate in the budget process. To make it easy to measure and compare transparency among the countries surveyed, IBP created the Open Budget Index, in which the participating countries are scored and ranked using about two thirds of the questions from the Survey. The Open Budget Index has established itself as an authoritative measure of budget transparency and is, for example, used as an eligibility criterion for the Open Government Partnership.

Countries, however, do not release budget information only every two years; they should do so regularly, on multiple occasions in a given year. Yet there is a two-year gap between the publication of consecutive Open Budget Survey results. This means that if citizens, civil society organisations (CSOs), media and others want to know how governments are performing between Survey releases, they have to undertake extensive research themselves. It also means that if they want to press governments to release budget information and increase budget transparency before the next Open Budget Index, they can only point to 'official' data that may be up to two years old.

To address this, IBP, together with Open Knowledge, has developed the Open Budget Survey Tracker (the OBS Tracker), http://obstracker.org: an online, ongoing budget data monitoring tool, currently a pilot covering 30 countries. The data are collected by researchers drawn from IBP's extensive network of partner organisations, who regularly monitor budget information releases and provide monthly reports. The information in the OBS Tracker is not as comprehensive as the Survey, because the latter also looks at the content and comprehensiveness of budget information, not only the regularity of its publication. The OBS Tracker does, however, provide a good proxy for rising or falling levels of budget transparency, measured by the release to (or withholding from) the public of key budget documents. This is valuable information for concerned citizens, CSOs and media.

With the Open Budget Survey Tracker, IBP has made it easier for citizens, civil society, media and others to monitor, in near real time (monthly), whether their central governments release information on how they plan to spend, and how they actually spend, the public's money. The OBS Tracker allows them to highlight changes and facilitates civil society efforts to push for change when a key document has not been released at all, or not in a timely manner.

Niger and the Kyrgyz Republic have improved the release of essential budget information since the latest Open Budget Index results, something that can be seen in the OBS Tracker without waiting for the next Open Budget Survey release. This puts pressure on other countries to follow suit.


The budget cycle is a complex process that involves creating and publishing specific documents at specific points in time. IBP covers the whole cycle by monitoring eight documents in total, which cover everything from the proposed and approved budgets, to a citizen-friendly version of the budget, to end-of-year financial reporting and the audit by a country's Supreme Audit Institution.

In each of the countries included in the OBS Tracker, IBP monitors all eight of these documents, showing how governments are doing at producing them and releasing them on time. Each document for each country is assigned a traffic-light color code:

  • Red: the document was not produced at all, or was published too late.
  • Yellow: the document was produced for internal use only and not released to the general public.
  • Green: the document is publicly available and was made available on time.

The color codes help users quickly scan the status of the world as well as the status of a country they are interested in.
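To make the color logic concrete, here is a minimal sketch in Python. The function and its status fields are illustrative assumptions, not the OBS Tracker's actual data model or code:

    def traffic_light(produced: bool, public: bool, on_time: bool) -> str:
        """Map a budget document's publication status to a color code."""
        if not produced:
            return "red"     # not produced at all
        if not public:
            return "yellow"  # produced for internal use only
        if not on_time:
            return "red"     # published, but too late
        return "green"       # publicly available and on time

    # Example: a document published late is flagged red.
    print(traffic_light(produced=True, public=True, on_time=False))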


To make monitoring even easier, the OBS Tracker also provides more detailed information about each document for each country, a link to the country's budget library and, more importantly, the historical evolution of the "availability status" for each country. The historical visualisation shows a snapshot of the key documents' status for that country for each month. This helps users see whether the country has made any improvements month by month, and also whether it has improved since the last Open Budget Survey.

Is your country being tracked by the OBS Tracker? How is it doing? If your government is not releasing essential budget documents, or not even producing them, start raising questions. If your country is improving or has a lot of green dots, be sure to congratulate the government; show them that their work is appreciated, and provide recommendations on what else can be done to promote openness. Whether you are a government official, a CSO member, a journalist or just a concerned citizen, the OBS Tracker is a tool that can help you help your government.

An unprecedented Public-Commons partnership for the French National Address Database

Guest - November 17, 2014 in Featured, OKFN France

This is a guest post, originally published in French on the Open Knowledge Foundation France blog.

Nowadays, being able to place an address on a map is essential. In France, where address data was still unavailable for reuse, the OpenStreetMap community decided to create its own National Address Database, available as open data. The project rapidly gained attention from the government. This led to the signing last week of an unprecedented Public-Commons partnership between the National Institute of Geographic and Forestry Information (IGN), Groupe La Poste, the new Chief Data Officer and the OpenStreetMap France community.

In August, before the partnership was signed, we met with Christian Quest, coordinator of the project for OpenStreetMap France. He explained the project and its implications to us.

Here is a summary of the interview, previously published in French on the Open Knowledge Foundation France blog.

Signature of the Public-Commons partnership for the National Address Database. Credit: Etalab, CC-BY

Why did OpenStreetMap (OSM) France decide to create an Open National Address Database?

The idea of creating an Open National Address Database came about one year ago, after discussions with the Association for Geographic Information in France (AFIGEO). An address register had been the topic of many reports; however, these reports came and went without any follow-up, and more and more people were asking for address data on OSM.

Address data are indeed extremely useful. They can be used for route calculations or, more generally, to locate any point with an address on a map. They are also essential for emergency services – ambulances, fire-fighters and police forces are very interested in the initiative.

These data are also helpful for the OSM project itself, as they enrich the map and are used to improve the quality of the data. The creation of such a register, with so many entries, required a collaborative effort, both to scale up and to be maintained. As such, the OSM-France community naturally took it on. There was also a technical opportunity: OSM-France had previously developed a tool to collect information from the French cadastre website, which enabled them to start the register with a significant amount of information.

Was there no National Address Registry project in France already?  

It existed on paper and in slides, but nobody ever saw the beginning of it. It is, nevertheless, a relatively old project, launched in 2002 following the publication of a report on addresses from the CNIG. This report is quite interesting and most of its points are still valid today, but not much has been done since then.

IGN and La Poste were tasked with creating this National Address Register, but their commercial interests (selling data) have so far blocked this 12-year-old project. French address datasets did exist, but they were created for specific purposes rather than with the idea of building a reference dataset for French addresses. For instance, La Poste uses three different address databases: one for mail, one for parcels, and one for advertisements.

Technically, how do you collect the data? Do you reuse existing datasets?  

We currently use three main data sources: OSM, which gathers a bit more than two million addresses; the address datasets already available as open data (see the list here); and, when necessary, the address data collected from the cadastre website. We also use FANTOIR data from the DGFIP, which contains a list of all street names and lieux-dits known to the Tax Office. This dataset is also available as open data.

These different sources are gathered in a common database. We then process the data to complete entries and remove duplicates, and finally package the whole thing for export. The aim is to provide harmonised content that brings together information from various sources, without redundancy. The process runs automatically every night, with the exception of manual corrections made by OSM contributors. The data are then made available as CSV files, shapefiles and in RDF format for semantic reuse. A CSV version is published on GitHub so that everyone can follow the updates. We also produce an overlay map which allows contributors to improve the data more easily. OSM takes priority because it is the only source in which we can collaboratively edit the data. If we need to add missing addresses, or correct them, we use OSM tools.
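As a rough sketch of what the nightly merge and de-duplication step could look like, here is a minimal Python example. The column names (street, number, city, source) and the source-precedence rule are assumptions for illustration, not the project's actual schema or scripts:

    import csv

    # OSM wins conflicts in this sketch, since it is the collaboratively
    # edited source; the other ranks are assumptions.
    PRECEDENCE = {"osm": 0, "opendata": 1, "cadastre": 2}

    def merge(paths):
        """Keep one row per (street, number, city), preferring OSM data."""
        best = {}
        for path in paths:
            with open(path, newline="", encoding="utf-8") as f:
                for row in csv.DictReader(f):
                    key = (row["street"].strip().lower(),
                           row["number"],
                           row["city"].strip().lower())
                    if (key not in best or
                            PRECEDENCE[row["source"]] < PRECEDENCE[best[key]["source"]]):
                        best[key] = row
        return list(best.values())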

Is your aim to build the reference address dataset for the country?  

This is a tricky question. What is a reference dataset? When more and more public services are using OSM data, does that mean you are looking at a reference dataset?

According to the definition of the French National Mapping Council (CNIG), a geographic reference must enable every reuser to georeference its own data. This definition does not privilege any particular reuse; rather, its aim is to enable as much information as possible to be linked to the geographic reference. For the National Address Database to become a reference dataset, the data must become more exhaustive. Currently, there are 15 million reusable addresses (August 2014) out of an estimated total of about 20 million. We have more in our cumulative database, but our export scripts enforce a minimum of quality and coherence, and release data only after the necessary checks have been made. We are also working on the lieux-dits, which are not address data points but are still used in many rural areas in France.

Beyond the question of the reference dataset, you can also see the work of OSM as complementary to that of public bodies. IGN aims for homogeneity in the exhaustiveness of its information, owing to its mission of ensuring the equal treatment of territories. We do not have such a constraint. For OSM, the density of data on a territory depends largely on the density of contributors. This is why we can offer a level of detail that is sometimes superior, particularly in the main cities, but it is also the reason why we are still missing data for some départements.

Finally, we think we are well prepared for the semantic web: we already publish our data in RDF format using a W3C ontology close to the European INSPIRE model for address description.
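As an illustration, the snippet below publishes a single address as RDF with rdflib. The W3C Location Core Vocabulary (locn), which aligns with the INSPIRE address model, is assumed here to be the ontology the interview refers to; the project's actual terms and URIs may differ:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    LOCN = Namespace("http://www.w3.org/ns/locn#")  # assumed vocabulary

    g = Graph()
    g.bind("locn", LOCN)
    addr = URIRef("http://example.org/address/75004-rue-de-rivoli-1")  # hypothetical URI
    g.add((addr, RDF.type, LOCN.Address))
    g.add((addr, LOCN.thoroughfare, Literal("Rue de Rivoli")))
    g.add((addr, LOCN.locatorDesignator, Literal("1")))  # house number
    g.add((addr, LOCN.postCode, Literal("75004")))
    print(g.serialize(format="turtle"))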

The agreement that was reached includes a dual-license framework: you can reuse the data for free under the ODbL license, or you can opt for a license without the share-alike clause by paying a fee. Is the share-alike clause an obstacle for the private sector?

I don't think so, because the ODbL license does not prevent commercial reuse. It only requires you to mention the source and to share any improvements to the data under the same license. For geographical data that aims to describe our land, this share-alike clause is essential to ensure that the common dataset stays up to date. The land changes constantly, so data improvements and updates must be continuous, and the more people contribute, the more efficient this process is.

I see it as a win-win situation compared to the previous one, in which multiple address datasets were maintained in closed silos, none of which was of acceptable quality for a key register, since it is difficult to stay up to date on your own.

However, for some companies share-alike is incompatible with their business model, and a dual-licensing scheme is a very good solution: instead of taking part in improving and updating the data, they pay a fee which will be used to improve and update the data.

And now, what is next for the National Address Database?  

We now need to put in place tools to facilitate contribution and data reuse. On the contribution side, we want to set up a one-stop-shop application/API, separate from the OSM contribution tools, to enable everyone to report errors, add corrections or upload data. This kind of tool would enable us to integrate partners into the project easily. On the reuse side, we should develop an API for geocoding and address autocompletion, because not everybody will necessarily want to manipulate millions of addresses!
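Such an API did not exist at the time of the interview; as a toy illustration of the autocompletion idea, the sketch below does a prefix search over a sorted, lower-cased list of address strings (for instance loaded from the published CSV export):

    import bisect

    def autocomplete(addresses, prefix, limit=5):
        """Return up to `limit` addresses starting with `prefix`."""
        prefix = prefix.lower()
        i = bisect.bisect_left(addresses, prefix)
        matches = []
        while i < len(addresses) and addresses[i].startswith(prefix):
            matches.append(addresses[i])
            if len(matches) == limit:
                break
            i += 1
        return matches

    addresses = sorted(a.lower() for a in
                       ["1 rue de Rivoli, Paris",
                        "10 rue de Rivoli, Paris",
                        "2 rue Oberkampf, Paris"])
    print(autocomplete(addresses, "1 rue de"))  # -> ['1 rue de rivoli, paris']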

As a last word, OSM is celebrating its tenth anniversary. What does that inspire in you?

First, the success and the power of OpenStreetMap lie in its community, much more than in its data. Our challenge is therefore to maintain and develop this community. This is what enables us to undertake projects such as the National Address Database, and also to be more reactive than traditional actors when needed, for instance in the current Ebola situation. Centralised and systematic approaches to cartography have reached their limits. If we want better and more up-to-date map data, we will need to adopt a more decentralised way of doing things, with more contributors on the ground. Here's to ten more years of the OpenStreetMap community!

   

The Role of Open Data in Choosing a Neighborhood

Lorenzo Leva - November 14, 2014 in Featured Project

To what extent is it important to get familiar with our environment?

If we think about how the world around us has changed over the years, it is not unreasonable that, while walking to work, we might come across new little shops, restaurants or gas stations we had never noticed before. Likewise, how many times have we wandered about for hours just to find a green space for a run, only to discover that the one we found was even more polluted than other urban areas?

Citizens are not always properly informed about the evolution of the places they live in. That is why it is crucial for people to have constantly up-to-date, accurate information about the neighborhood they have chosen or are about to choose.


(Image source: London Evening Standard)

London is clear evidence of how transparency in providing data is fundamental to succeeding as a Smart City. The GLA's London Datastore, for instance, is a public platform of datasets with up-to-date figures on the main services offered by the city, as well as on residents' lifestyles and environmental risks. These data are then made more easily accessible to the community through the London Dashboard.

The importance of providing free information is also demonstrated by the integration of maps, which are an efficient means of geolocation. Being able to consult a map on which it is easy to find all the services you need close by can be decisive in the search for a location.


(Image source: Smart London Plan)

The Global Open Data Index, published by Open Knowledge in 2013, is another useful tool for data retrieval: it ranks countries by the openness and availability of key datasets such as transport timetables and national statistics.

Here it is possible to check the UK Open Data Census and the US City Open Data Census.

As noted above, making open data available and easily findable online has not only been a success for US cities but has also benefited app makers and civic hackers. As Government Technology reports, Lauren Reid, a spokesperson for Code for America, put it this way: "The more data we have, the better picture we have of the open data landscape."

That, on the whole, is what Place I Live puts its biggest effort into: fostering a new awareness of the environment by providing free information, in order to support citizens who want to choose the best place to live.

The result is quickly explained: the website's homepage offers visitors the chance to type in an address of interest, displaying an overview of neighborhood indicators and a Life Quality Index calculated for every point on the map.

Searching for the nearest medical institutions, schools or ATMs thus becomes immediate and clear, as does looking up general information about the community. Moreover, the data's reliability and accessibility are constantly checked by a strong team of professionals with expertise in data analysis, mapping, IT architecture and global markets.

For the moment the company's work is focused on London, Berlin, Chicago, San Francisco and New York, with the more ambitious goal of covering more than 200 cities.

In the US City Open Data Census, San Francisco achieved the highest score, proof of the city's work in putting technological expertise at everyone's disposal and in meeting users' needs through meticulous selection of datasets. Building on this, San Francisco is partnering with the University of Chicago on a data analytics dashboard for sustainability performance statistics, named the Sustainable Systems Framework, which is expected to be released in beta by the end of the first quarter of 2015.


(Image source: Code for America)

Another remarkable contribution to the spread of open data comes from the Bartlett Centre for Advanced Spatial Analysis (CASA) at University College London (UCL). Oliver O'Brien, researcher at the UCL Department of Geography and software developer at CASA, is one of the contributors to this cause. Among his products, an interesting accomplishment is London's CityDashboard, a control panel of real-time spatial data reports. The web page also allows you to visualise the data on a simplified map and to look at dashboards for other UK cities.

His Bike Share Map, meanwhile, is a live global view of bicycle-sharing systems in over a hundred cities around the world; bike sharing has recently drawn greater public attention as a novel form of transportation, above all in Europe and China.

O'Brien's collaboration with James Cheshire, Lecturer at UCL CASA, has furthermore given life to a groundbreaking project called DataShine, which aims to develop the use of large, open datasets within the social science community through new means of data visualisation, starting with a mapping platform for 2011 Census data, followed by maps of individual census tables and the new Travel to Work Flows table.


(Image source: Suprageography)

Global Witness and Open Knowledge – Working together to investigate and campaign against corruption related to the extractive industries

Sam Leon - November 14, 2014 in Data Journalism, Featured, Visualization

Sam Leon, one of Open Knowledge's data experts, talks about his experiences working as a School of Data Embedded Fellow at Global Witness.

Global Witness are a Nobel Peace Prize-nominated not-for-profit organisation devoted to investigating and campaigning against corruption related to the extractive industries. Earlier this year they received the TED Prize and were awarded $1 million to help fight corporate secrecy, on the back of which they launched their End Anonymous Companies campaign.


In February 2014 I began a six-month 'Embedded Fellowship' at Global Witness, one of the world's leading anti-corruption NGOs. Global Witness are no strangers to data. They've been publishing pioneering investigative research for over two decades now, piecing together the complex webs of financial transactions, shell companies and middlemen that so often lie at the heart of corruption in the extractive industries.

Like many campaigning organisations, Global Witness are seeking new and compelling ways to visualise their research, as well as to make more effective use of the large amounts of public data that have become available in the last few years.

“Sam Leon has unleashed a wave of innovation at Global Witness”
-Gavin Hayman, Executive Director of Global Witness

As part of my work, I've delivered data training at all levels of the organisation – from senior management to front-line staff. I've also been working with a variety of staff to turn data collected by Global Witness into compelling infographics. It's amazing how powerful these can be in drawing attention to stories and thus supporting Global Witness's advocacy work.

The first interactive we published, on the sharp rise in deaths of environmental defenders, demonstrated this. Packing some of the core insights of a much more detailed report into a series of images that people could dig into proved a hit on social media and let the story travel further.


See here for the full infographic on Global Witness’s website.

But powerful visualisation isn’t just about shareability. It’s also about making a point that would otherwise be hard to grasp without visual aids. Global Witness regularly publish mind-boggling statistics on the scale of corruption in the oil and gas sector.

“The interactive infographics we worked on with Open Knowledge made a big difference to the report’s online impact. The product allowed us to bring out the key themes of the report in a simple, compelling way. This allowed more people to absorb and share the key messages without having to read the full report, but also drew more people into reading it.”
-Oliver Courtney, Senior Campaigner at Global Witness

Take, for instance, the $1.1 billion that the Nigerian people were deprived of due to the corruption around the sale of Africa's largest oil block, OPL 245.

$1.1 billion doesn't mean much to me; it's too big a number. What we sought to do visually was to represent the loss to Nigerian citizens in terms of things we can understand, like basic healthcare provision and education.

See here for the full infographic on Shell, ENI and Nigeria’s Missing Millions.


In October 2014, to accompany Global Witness’s campaign against anonymous company ownership, we worked with developers from data journalism startup J++ on The Great Rip Off map.

The aim was to bring together and visualise the vast number of corruption case studies involving shell companies that Global Witness and its partners have unearthed in recent years.

The Great Rip Off!

It was a challenging project that required input from designers, campaigners, developers, journalists and researchers, but we’re proud of what we produced.

Open data principles were followed throughout, as Global Witness were committed to creating a resource that their partners could draw on in their advocacy efforts. The underlying data was made available in bulk under a Creative Commons Attribution-ShareAlike license, and open-source libraries like Leaflet.js were used. Other parties were also invited to submit case studies to the database.

“It’s transformed the way we work, it’s made us think differently how we communicate information: how we make it more accessible, visual and exciting. It’s really changed the way we do things.”
-Brendan O’Donnell, Campaign Leader at Global Witness

For more information on the School of Data Embedded Fellowship Scheme, and to see further details on the work we produced with Global Witness, including interactive infographics, please see the full report here.


France Prefers to Pay (twice) for Papers by Its Researchers

Guest - November 11, 2014 in Open Access

France may not have any money left for its universities but it does have money for academic publishers.

While university presidents are learning that their funding is to be cut by EUR 400 million, the Ministry of Research has decided, in great secrecy, to pay EUR 172 million to the world leader in scientific publishing, Elsevier.

In an exclusive piece published by the French news outlet Rue89 (Le Monde press group), Open Knowledge France members and open science evangelists Pierre-Carl Langlais and Rayna Stamboliyska released the agreement between the French Ministry and Elsevier. The post originally appeared here, in French.

Erlenmeyer flasks in a science class (Lokesh Dhakar/Flickr/CC)

The Work of Volunteers

The scientific publishing market is an unusual sector: those who create the value are never remunerated. Instead, they often pay to see their work published. Authors do not receive any direct financial gain from their articles, and peer review is conducted voluntarily.

This enormous amount of work is indirectly funded by public money. Writing articles and participating in peer review are among the expected activities of researchers, activities that in turn attract further research funding from the taxpayer.

Scientific publishing is centred around several privately-held publishing houses which own the journals where scientific research is published. Every journal has an editorial review board which receives potential contributions; these are then sent to volunteer scientists for peer review. On the basis of comments and feedback from the peer review process, a decision is made on whether an article is to be published or rejected and returned to the author(s).

When an article is accepted, the authors usually sign their copyright over to the publishers, who sell access to the work; alternatively, authors can choose to make their work available to everyone, which often involves paying a fee. In some cases journals receive income only for the service of publishing an article, which is thereafter free to the reader; but some journals follow a mixed 'hybrid' model, in which authors pay to publish some articles while libraries still pay to purchase the rest of the journal. This is called 'double dipping', and while publishers claim they take it into account in their journal pricing, the secrecy around publisher contracts and the lack of data make it impossible to tell where the money is flowing.

Huge Profit Margins

This matters because access to these journals is rarely cheap, and publishers sell access primarily to academic libraries and research laboratories. In other words, the financial resources for publishing scientific papers come from funds granted to research laboratories, and access to the journals these papers are published in is purchased by these same institutions. In both cases, the purchases are subsidised by the public.

The main actors in scientific publishing generate considerable income. In fact, the sector is dominated by an oligopoly with “the big four” sharing most of the global pie:

  • The Dutch Elsevier
  • The German Springer
  • The American Wiley
  • The English Informa

They draw huge profits: from 30% to 40% annual net profit in the case of Elsevier and Springer.

In other words, these four major publishers resell to universities content that the institutions themselves have produced.

In this completely closed market, competition does not exist and collusion is the rule: subscription prices have continued to soar for thirty years, while the cost of publishing, in the era of electronic publishing, has never been lower. For example, an annual subscription to Elsevier's journal 'Brain Research' costs a whopping EUR 15,000.

The Ministry Shoulders This Policy

The agreement between France and Elsevier amounted to ca. EUR 172 million for 476 universities and hospitals.

The first payment (approximately EUR 34 million of public money) was paid in full in September 2014. In return, 476 public institutions will have access to a body of about 2,000 academic journals.

This published research was mainly financed by public funds. In the end, therefore, we will have paid Elsevier twice: once to publish, and a second time to read.

This is not a blip. The agreement between Elsevier and the government is established policy. In March 2014, Geneviève Fioraso, Minister of Higher Education and Research, elaborated upon the main foci of her political agenda to the Academy of Sciences, two of which involve privileged interactions with Elsevier. This is the first time that the right to read for hundreds of public research institutions and universities has been negotiated at the national level.

Pre-determined Negotiations

One could argue in favour of the Ministry's benevolence towards public institutions, to the extent that it supports this vital commitment to research. Such an argument would, however, fail to address multiple issues. Among these, we would pinpoint the total opacity in the choice of supplier (why Elsevier in particular?) and the lack of a competitive pitch between several actors (for such an amount, open public tendering is required). The major problem preventing competition is the monopolistic hold of publishers over knowledge – no one else has the right to sell the particular article on cancer research that a researcher in Paris requires for their work – so there is little choice but to continue paying the individual publishers under the current system. Their hold expires only with copyright, which lasts 70 years from the death of the last author and is therefore entirely incompatible with the timeline of scientific discovery.

Prisoners of a game with pre-set rules, the negotiators (the Couperin consortium and the Bibliographic Agency for Higher Education, ABES) have not had much room for negotiation. As mentioned above, a competitive pitch did not happen. Article 4 of the Agreement is explicit:

“Market for service provision without publication and without prior competition, negotiated with a particular tenderer for reasons connected with the protection of exclusive distribution rights.”

A strange setup thus materialises that lets Elsevier keep its former customers in its back pocket. Research organisations that already have a contract with the publisher can only join the national license provided they accept a rise in costs (from 2.5% to 3.5%). Those without a previous contract are not concerned.

How Many Agreements of the Sort?

To inflate the bill even more, Elsevier sells bundles of journals (its 'flagship journals'): "No title considered as a 'flagship journal' (as listed in Annex 5) can be withdrawn from the collection the subscribers can access" (art. 6.2). These 'flagship journals' cannot all claim outstanding impact factors. Moreover, they are not equally relevant across disciplines and scientific institutions.

The final price has been reduced from the estimate initially planned in February: "only" EUR 172 million instead of EUR 188 million. Yet this discount does not seem to be a gratuitous gift from Elsevier: numerous institutions have withdrawn from the national license, leaving only 476 of the 642 partners from February in the final deal.

Needless to say, the situation is outrageous. Yet this is just one agreement with one of several vendors. A recent report by the French Academy of Sciences [http://www.academie-sciences.fr/presse/communique/rads_241014.pdf] alluded to a total of EUR 105 million spent annually on access to scientific publications. This figure, however, falls far below the reality: the French agreement with Elsevier grants access to publications only to some of the research institutions and universities in France, and yet in this case the publisher already collects EUR 33-35 million per year. The actual costs plausibly reach a total of EUR 200-300 million.

An alternative exists.

Elsewhere in Europe…

An important international movement has emerged to promote and defend free and open access to scientific publications. The overall goal is to make this content accessible and reusable by anyone.

As a matter of fact, researchers have no interest whatsoever in maintaining the current system. Copyright in scholarly publishing does not remunerate authors; it is a fiction whose main purpose is to perpetuate publishers' rights. Not only does this enclosure limit access to scientific publications; it also prevents researchers from reusing their own work, as they often concede their copyright when signing publication agreements.

The main barrier to opening up access to publications appears to stem from the government. No action is taken to release research from the grip of oligopolistic publishers. Assessment of publicly funded research focuses on journals referred to as "qualifying" (that is, journals mainly published by the big publishers). Some university departments even consider that open access publications are, by default, "not scientific".

Several European countries lead the way:

  • Germany has passed a law limiting publishers' exclusive rights to one year. Once the embargo has expired, researchers are free to republish their work and allow open access to it. More details here.
  • Negotiations have been halted in Elsevier's home country, the Netherlands. Even though Elsevier pays most of its taxes there, the Dutch government fully supports the demands of researchers and librarians, aiming to open up the whole corpus of Dutch scientific publications by 2020. More details here.

The most chilling potential effect of the Elsevier deal is that it removes, for five years, any collective incentive for an ambitious French open access policy. French citizens will continue to pay twice for research they cannot read. And the government will sustain a closed and archaic publishing system whose defining feature is to single-handedly limit the right to read.

Seeking new Executive Director at Open Knowledge

Rufus Pollock - November 11, 2014 in Featured, News, Open Knowledge Foundation

Today we are delighted to put out our formal announcement for a new Executive Director. In our announcement about changes in leadership in September we had already indicated we would be looking to recruit a new senior executive and we are now ready to begin the formal process.

We are very excited to have this opportunity to bring someone new on board. Please do share this with your networks, and especially with anyone you think would be interested. We emphasize that we are conducting a world-wide search for the very best candidates, although the successful candidate would ideally be able to commute to London or Berlin as needed.

Full role details are below – to apply or to download further information on the required qualifications, skills and experience for the role, please visit http://www.perrettlaver.com/candidates quoting reference 1841. The closing date for applications is 9am (GMT) on Monday, 8th December 2014.

Role Details

Open Knowledge is a multi-award winning international not-for-profit organisation. We are a network of people passionate about openness, using advocacy, technology and training to unlock information and enable people to work with it to create and share knowledge. We believe that by creating an open knowledge commons and developing tools and communities around this we can make a significant contribution to improving governance, research and the economy. We’re changing the world by promoting a global shift towards more open ways of working in government, arts, sciences and much more. We don’t just talk about ideas, we deliver extraordinary software, events and publications.

We are currently looking for a new Executive Director to lead the organisation through the next exciting phase of its development. Reporting to the Board of Directors, the Executive Director will be responsible for setting the vision and strategic direction of the organisation, developing new business and funding opportunities, and directing and managing a highly motivated team. S/he will play a key role as an ambassador for Open Knowledge locally and internationally and will be responsible for developing relationships with key stakeholders and partners.

The ideal candidate will have strong visionary and strategic skills, exceptional personal credibility, a strong track record of operational management of organisations of a similar size to Open Knowledge, and the ability to influence at all levels both internally and externally. S/he will be an inspiring, charismatic and engaging individual, who can demonstrate a sound understanding of open data and content. In addition, s/he must demonstrate excellent communication and stakeholder management skills as well as a genuine passion for, and commitment to, the aims and values of Open Knowledge.

To apply or to download further information on the required qualifications, skills and experience for the role, please visit http://www.perrettlaver.com/candidates quoting reference 1841. The closing date for applications is 9am (GMT) on Monday, 8th December 2014.

The role is flexible in terms of location but will ideally be within commutable distance of London or Berlin (relocation is possible), and the salary will be competitive with the market rate.

Call for action: Help improve the open knowledge directory

Guest - November 10, 2014 in Featured Project

This is a guest blog post from Open Steps, an independent blog aggregating worldwide information about Open Cultures in the form of articles, videos and other resources. Its aim is to document open knowledge (OK) related projects and keep track of the status of such initiatives worldwide: from organisations using Open Data, promoting Open Source technologies, launching Open Government initiatives or following the principles behind Open Science, to newsrooms practicing Data Journalism.

In this way, their site seeks to continue, this time virtually, the globetrotting project realised between July 2013 and July 2014, and to discover further OK projects all around the world.

If you followed the journey across Europe, India, Asia and South America that Margo and Alex from Open Steps undertook last year, you probably already know their open knowledge directory. During those 12 months, in each of the 24 countries they visited they had the chance to meet numerous enthusiastic activists sharing the same ideas and approaches. To keep a record of all those amazing projects they created what began as a simple contact list but soon evolved into a web application that has been growing ever since.


After some iterations, a new version has recently been released which not only features a new user interface with better usability but also sets a base for continuous development, aiming to encourage collaboration among people across borders while monitoring the status of open knowledge initiatives worldwide and raising awareness about relevant projects worth discovering. If you haven't done so yet, head to http://directory.open-steps.org and join it!

New version implementing PLP Profiles

One of the main features of this new version is the implementation of Portable Linked Profiles, PLP for short. In a nutshell, PLP allows you to create a profile with your basic contact information that you can use, re-use and share. Basic contact information is the kind of information you are used to typing into dozens of online forms when registering on social networks, accessing web services or leaving feedback in forums; it is always the same: name, email, address, website, Facebook, Twitter, etc. PLP addresses this issue and also, most importantly, allows you to decide where you want your data to be stored.
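As a sketch of the idea, a PLP-style profile could look something like the JSON-LD document below, built here in Python. The W3C vCard vocabulary and the field choices are assumptions for illustration; PLP's actual schema is in its documentation:

    import json

    profile = {
        "@context": {"vcard": "http://www.w3.org/2006/vcard/ns#"},
        # The @id is the key point: you decide where the profile lives.
        "@id": "http://example.org/profiles/alice",  # hypothetical URI
        "@type": "vcard:Individual",
        "vcard:fn": "Alice Example",
        "vcard:hasEmail": {"@id": "mailto:alice@example.org"},
        "vcard:hasURL": {"@id": "http://alice.example.org"},
    }
    print(json.dumps(profile, indent=2))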


By implementing PLP, the directory no longer uses the old Google Form, and users can now edit their data and keep it up to date easily. For the sake of re-usability and interoperability, listing your profile in another directory becomes as easy as pasting your profile's URI into it. If you want to know more about PLP, head to the current home page, read the more extensive article about it on Open Steps, or check the GitHub repository with the documentation. PLP is Open Source software based on Open Web Standards and Common Vocabularies, so collaboration is more than welcome.

Participate in defining the next steps for the open knowledge directory

Speaking of collaboration: on Wednesday 12th November, a discussion will take place on how the worldwide open knowledge community can benefit from such a directory, how the current Open Steps implementation can be improved, and what the next steps should be. Whatever your background, if you are a member of the worldwide open knowledge community and want to take part in improving the open knowledge directory, please join us.
When? Wednesday, 12th November 2014. 3pm GMT

Event on Google+: https://plus.google.com/events/c46ni4h7mc9ao6b48d9sflnetvo


This blog post is also available on the Open Education Working Group blog.

Global Open Data Index 2014: Reviewing in progress

Mor Rubinstein - November 6, 2014 in Open Data Index

October was a very exciting month for us in the Index team. We spoke to so many of you about the Index, face to face or in the virtual world, and we got so much back from you. It was amazing to see how the community pulled together, not only with submissions but also by giving advice on the mailing list, translating tweets and tutorials, and spreading the word about the Index. Thank you so much for your contributions.

Mor and Neal at AbreLATAM

This is the first time that we have done regional sprints, starting with the Americas in early October at AbreLATAM/ConDatos, continuing with our community hangout with Europe and MENA, and finishing off with Asia, Africa and the Pacific. On Thursday last week, we hosted a hangout with Rufus, who spoke about the Index, how it can be used and where it is headed. We were also very lucky to have Oscar Montiel from Mexico, who told us how they use the Index to demand datasets from the government and how they are now implementing a local data index in cities around Mexico to promote data openness at the municipal level. We were also excited to host Oludotun Babayemi from Nigeria, who explained how the Index's coverage of Nigeria can help promote awareness of open data issues in government and among citizens.


Now that the sprints are over, we still have a lot of work ahead of us. We are reviewing all of the submissions. This year, we divided the previous editor role into two roles, 'contributor' and 'reviewer', so that a second pair of eyes can ensure the information is reliable and of excellent quality. Around the world, a team of reviewers is working on the submissions from the sprints. We are still looking for reviewers for South Africa, Bangladesh, Finland, Georgia, Latvia, the Philippines and Norway. You can apply to become one here.

We are finalising the 2014 Index over the next few weeks; stay tuned for more updates. In the meantime, we are also collecting your stories about participating in the Index this year. If you would like to contribute to the regional blog posts, please email emma.beer@okfn.org. We would love to hear from you and make sure your country is represented.

Open Knowledge Festival 2014 report: out now!

Beatrice Martini - November 6, 2014 in Community, Featured, Join us, News, OKFestival

Today we are delighted to publish our report on OKFestival 2014!

Open Knowledge Festival 2014 at the Kulturbrauerei in Berlin.

The report is packed with stories, statistics and outcomes from the event, highlighting the amazing facilitators, sessions, speakers and participants who made it an event to inspire. Explore the pictures, podcasts, etherpads and videos which reflect the different aspects of the event, and discover some of its impact as related by people striving for change – those with Open Minds to Open Action.

Want more data? If you are interested in knowing more about how the OKFestival budget was spent, we have published details of the event's income and expenses here.

If you missed OKFestival this year, don’t worry – it will be back! Keep an eye on our blog for news and join the Open Knowledge discussion list to share your ideas for the next OKFestival. Looking forward to seeing you there!
