
You are browsing the archive for Free Culture.

Newsflash! OKFestival Programme Launches

Beatrice Martini - June 4, 2014 in Events, Free Culture, Join us, Network, News, OKFest, OKFestival, Open Access, Open Data, Open Development, Open Economics, Open Education, Open GLAM, Open Government Data, Open Humanities, Open Knowledge Foundation, Open Knowledge Foundation Local Groups, Open Research, Open Science, Open Spending, Open Standards, Panton Fellows, Privacy, Public Domain, Training, Transparency, Working Groups

At last, it’s here!

Check out the details of the OKFestival 2014 programme – including session descriptions, times and facilitator bios here!


We’re using a tool called Sched to display the programme this year and it has several great features. Firstly, it gives individual session organisers the ability to update the details on the session they’re organising; this includes the option to add slides or other useful material. If you’re one of the facilitators we’ll be emailing you to give you access this week.

Sched also enables every user to create their own personalised programme to include the sessions they’re planning to attend. We’ve also colour-coded the programme to help you when choosing which conversations you want to follow: the Knowledge stream is blue, the Tools stream is red and the Society stream is green. You’ll also notice that there are a bunch of sessions in purple which correspond to the opening evening of the festival when we’re hosting an Open Knowledge Fair. We’ll be providing more details on what to expect from that shortly!

Another way to search the programme is by the subject of the session – find these listed on the right hand side of the main schedule – just click on any of them to see a list of sessions relevant to that subject.

As you check out the individual session pages, you’ll see that we’ve created etherpads for each session where notes can be taken and shared, so don’t forget to keep an eye on those too. And finally, to make the conversations even easier to follow from afar using social media, we’re encouraging session organisers to create individual hashtags for their sessions. You’ll find these listed on each session page.

We received over 300 session suggestions this year – the most yet for any event we’ve organised – and we’ve done our best to fit in as many as we can. There are 66 sessions packed into 2.5 days, plus 4 keynotes and 2 fireside chats. We’ve also made space for an unconference over the 2 core days of the festival, so if you missed out on submitting a proposal, there’s still a chance to present your ideas at the event: come ready to pitch! Finally, the Open Knowledge Fair has added a further 20 demos – and counting – to the lineup and is a great opportunity to hear about more projects. The Programme is full to bursting, and while some time slots may still change a little, we hope you’ll dive right in and start getting excited about July!

We think you’ll agree that Open Knowledge Festival 2014 is shaping up to be an action-packed few days – so if you’ve not bought your ticket yet, do so now! Come join us for what will be a memorable 2014 Festival!

See you in Berlin! Your OKFestival 2014 Team

Copyright Week: Public Domain Calculators

Primavera De Filippi - January 14, 2014 in Featured, Free Culture

From 13 to 18 January the Electronic Frontier Foundation (EFF) is organising Copyright Week, an event focused on promoting six key principles for guiding copyright policy and practice. Each day is dedicated to one of the principles; today’s is ‘Building and defending a robust public domain’. A companion post has also been published on the OpenGLAM blog.


Many people recognise the value of works in the public domain, and may even be familiar with initiatives that provide access to them (such as the Internet Archive, Wikimedia Commons and Project Gutenberg). Yet many do not have a very clear conception of what the public domain is or why it is important.

New digital technologies make it possible for the public to access a vast quantity of cultural and historical material. Much of this material is in the public domain, and ongoing digitisation efforts mean that much more public domain material (in which copyright has expired) will be made available for the public to enjoy, share, and reuse.

However, it is often difficult to determine whether a work has fallen into the public domain in any given jurisdiction, because the terms of copyright protection differ from country to country. People are also sometimes unclear about what can or cannot be done with works in the public domain. Copyright law is complicated, and it may not be obvious to the layperson how it applies to a specific work. Although there are many international and multinational copyright agreements and organisations, the exact details of copyright law still vary from one country to another: different countries have different legal systems and traditions, and their copyright laws reflect these differences. Because works enter the public domain under different circumstances depending on the country, the status of an individual work often cannot be established universally; it must be evaluated case by case for every jurisdiction.

In order to make public domain determinations a less daunting task, the Open Knowledge Foundation has been working on the development of the Public Domain Calculators – a tool that enables people to determine the copyright status of a work (in the public domain, or not), thus helping users realize the value of artworks from the past.

A look into the past

The Open Knowledge Foundation began working on the first implementation of the Public Domain Calculators in 2006, initially for the Public Domain Works project, whose goal was to identify sound recordings in the public domain in the United Kingdom, based on metadata provided by the BBC and private collectors. In 2007, as Public Domain Works began working with the Open Library project, the idea emerged to create a set of algorithms for determining the public domain status of a work in different jurisdictions.

At the first Communia workshop in 2008, the Open Knowledge Foundation proposed collaborating with legal experts in the network to create a set of public domain calculators for different jurisdictions in Europe. These discussions eventually led to the creation of the Public Domain Working Group, which planned to work on public domain calculators across Europe.

After several years, thanks to the support of a large community of legal and technical experts, the Public Domain Calculators of the Open Knowledge Foundation are now a functional piece of software which can help people determine the copyright status of a work. Based on research done by Europeana Connect (a project funded by the European Community Programme eContentplus), the calculators rely on a series of national flowcharts which represent the provisions of copyright law in the form of a decision tree. For any given work, the calculators can determine whether or not that work is in the public domain in any given jurisdiction by matching the bibliographic metadata attached to the work against the provisions of copyright law for that particular jurisdiction.

In terms of technology, the Public Domain Calculators of the Open Knowledge Foundation share similarities with those recently developed by Kennisland and the Institute for Information Law at the University of Amsterdam (IViR) in the framework of Europeana Connect. The main difference is that the OKFN calculators have been designed to be completely independent of any user input, and are therefore fully automated; this is the most innovative aspect of the technology. By gathering the relevant metadata from a variety of databases, the calculators process only the data necessary to identify the legal status of a work, and present the result to users on request.
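To make the flowchart idea concrete, here is a deliberately minimal Python sketch of the kind of decision logic such a calculator encodes. Everything here is illustrative rather than drawn from the actual OKFN codebase: the function name, the metadata fields and the single 70-years rule are assumptions, and real national flowcharts branch on many more factors (work type, publication status, anonymity, wartime extensions, and so on).

```python
from datetime import date

# Hypothetical, highly simplified flowchart for a single jurisdiction.
# A real calculator encodes one decision tree per country, with far
# more branches than the single rule modelled here.

TERM_YEARS = 70  # 70 years post mortem auctoris, common across the EU


def public_domain_status(metadata, current_year=None):
    """Return 'public domain', 'protected', or 'unknown' for one work."""
    year = current_year or date.today().year
    death_year = metadata.get("author_death_year")
    if death_year is None:
        # Without a death date the standard term cannot be computed.
        return "unknown"
    # Protection runs for TERM_YEARS after the author's death.
    expiry = death_year + TERM_YEARS
    return "public domain" if year > expiry else "protected"


print(public_domain_status({"author_death_year": 1900}))  # public domain
print(public_domain_status({"author_death_year": 2000}))  # protected
print(public_domain_status({}))                           # unknown
```

The point of the automation described above is that the `metadata` dictionary is filled from bibliographic databases rather than typed in by a user, so the whole determination can run unattended.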


A glance into the future

The value of the Public Domain Calculators has recently been acknowledged by the French Ministry of Culture, which has created a partnership with Open Knowledge Foundation France to develop a working prototype of the calculators for the French jurisdiction. In collaboration with two pilot institutions, the Bibliothèque Nationale de France and the Médiathèque de l’Architecture et du Patrimoine, the calculator will be presented as a pedagogical tool to help the cultural sector better understand the legal status of works and the value of the metadata it produces.

In France, this comes at an important moment: we are entering the period when most works by authors who died during the Second World War would, in theory, enter the public domain. Yet French copyright law grants extended terms of protection to authors who died for France during the war. Hence, by applying the standard 70-years post-mortem rule, a number of works which are still eligible for copyright protection might be incorrectly assumed to be in the public domain. The Public Domain Calculators offer a technological solution to help people identify whether or not such works have indeed entered the public domain.

But the value of the calculators extends far beyond highlighting the peculiarities of national copyright laws: their objective is also to promote good practices within the cultural sector. In France, in addition to serving as a pedagogical tool, the calculators will be employed as a benchmarking tool to help cultural institutions identify flaws and gaps in the structure or content of their bibliographic metadata, so as to ultimately increase the accuracy of the results.
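The French peculiarity can be illustrated with a small, hypothetical sketch. The 30-year figure below models the extension for authors officially recognised as having died for France in a simplified way; the actual French wartime prorogations are more intricate, so treat the numbers as illustrative only.

```python
# Simplified sketch of the French rule described above (illustrative,
# not legal advice). We model only the standard 70-years-post-mortem
# term plus a 30-year extension for authors who died for France.

STANDARD_TERM = 70
DIED_FOR_FRANCE_BONUS = 30  # assumed simplification of the extension


def entry_year(death_year, died_for_france=False):
    """Year in which the work enters the public domain (1 January)."""
    term = STANDARD_TERM + (DIED_FOR_FRANCE_BONUS if died_for_france else 0)
    return death_year + term + 1


# An author who died in 1944:
print(entry_year(1944))        # 2015 under the naive 70-year rule
print(entry_year(1944, True))  # 2045 with the extension
```

This is exactly the trap the post describes: a naive calculation says 2015, while the extended term keeps the work protected for decades longer.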

We hope that other countries will follow the example of France, and that the potential of the public domain calculators as a means to promote good open data policies within the cultural sector will be appreciated by many other countries around the world.

Visit the Public Domain Calculators website to learn more, and make sure to visit the Copyright Week website for a full overview of the week’s posts and activities.

What We Hope the Digital Public Library of America Will Become

Jonathan Gray - April 17, 2013 in Bibliographic, Featured, Free Culture, Open Content, Open GLAM, Open Humanities, Policy, Public Domain

Tomorrow is the official launch date for the Digital Public Library of America (DPLA).

If you’ve been following it, you’ll know that it has the long term aim of realising “a large-scale digital public library that will make the cultural and scientific record available to all”.

More specifically, Robert Darnton, Director of the Harvard University Library and one of the DPLA’s leading advocates to date, recently wrote in the New York Review of Books, that the DPLA aims to:

make the holdings of America’s research libraries, archives, and museums available to all Americans—and eventually to everyone in the world—online and free of charge

What will this practically mean? How will the DPLA translate this broad mission into action? And to what extent will they be aligned with other initiatives to encourage cultural heritage institutions to open up their holdings, like our own OpenGLAM or Wikimedia’s GLAM-WIKI?

Here are a few of our thoughts on what we hope the DPLA will become.

A force for open metadata

The DPLA is initially focusing its efforts on making existing digital collections from across the US searchable and browsable from a single website.

Much like Europe’s digital library, Europeana, this will involve collecting information about works from a variety of institutions and linking to digital copies of these works that are spread across the web. A super-catalogue, if you will, that includes information about and links to copies of all the things in all the other catalogues.

Happily, we’ve already heard that the DPLA is releasing all of this data about cultural works that they will be collecting using the CC0 legal tool – meaning that anyone can use, share or build on this information without restriction.

We hope they continue to proactively encourage institutions to explicitly open up metadata about their works, and to release this as machine-readable raw data.

Back in 2007, we – along with the late Aaron Swartz – urged the Library of Congress to play a leading role in opening up information about cultural works. So we’re pleased that it looks like DPLA could take on the mantle.

But what about the digital copies themselves?

A force for an open digital public domain

The DPLA has spoken about using fair use provisions to increase access to copyrighted materials, and has even intimated that they might want to try to change or challenge the state of the law to grant further exceptions or limitations to copyright for educational or noncommercial purposes (trying to succeed where Google Books failed). All of this is highly laudable.

But what about works which have fallen out of copyright and entered the public domain?

Just as they are doing with metadata about works, we hope that the DPLA takes a principled approach to digital copies of works which have entered the public domain, encouraging institutions to publish these without legal or technical restrictions.

We hope they become proactive evangelists for a digital public domain which is open as in the Open Definition, meaning that digital copies of books, paintings, recordings, films and other artefacts are free for anyone to use and share – without restrictive clickwrap agreements, digital rights management technologies or digital watermarks to impose ownership and inhibit further use or sharing.

The Europeana Public Domain Charter, in part based on and inspired by the Public Domain Manifesto, might serve as a model here. In particular, the DPLA might take inspiration from the following sections:

What is in the Public Domain needs to remain in the Public Domain. Exclusive control over Public Domain works cannot be re-established by claiming exclusive rights in technical reproductions of the works, or by using technical and or contractual measures to limit access to technical reproductions of such works. Works that are in the Public Domain in analogue form continue to be in the Public Domain once they have been digitised.

The lawful user of a digital copy of a Public Domain work should be free to (re-) use, copy and modify the work. Public Domain status of a work guarantees the right to re-use, modify and make reproductions and this must not be limited through technical and or contractual measures. When a work has entered the Public Domain there is no longer a legal basis to impose restrictions on the use of that work.

The DPLA could create their own principles or recommendations for the digital publication of public domain works (perhaps recommending legal tools like the Creative Commons Public Domain Mark) as well as ensuring that new content that they digitise is explicitly marked as open.

Speaking at our OpenGLAM US launch last month, Emily Gore, the DPLA’s Director for Content, said that this is definitely something that they’d be thinking about over the coming months. We hope they adopt a strong and principled position in favour of openness, and help to raise awareness amongst institutions and the general public about the importance of a digital public domain which is open for everyone.

A force for collaboration around the cultural commons

Open knowledge isn’t just about stuff being able to freely move around on networks of computers and devices. It is also about people.

We think there is a significant opportunity to involve students, scholars, artists, developers, designers and the general public in the curation and re-presentation of our cultural and historical past.

Rather than just having vast pools of information about works from US collections – wouldn’t it be great if there were hand-picked anthologies of works by Emerson or Dickinson curated by leading scholars? Or collections of songs or paintings relating to a specific region, chosen by knowledgeable local historians who know about allusions and references that others might miss?

An ‘open by default’ approach would enable use and engagement with digital content that breathes life into it that it might not otherwise have – from new useful and interesting websites, mobile applications or digital humanities projects, to creative remixing or screenings of out-of-copyright films with new live soundtracks (like Air’s magical reworking of Georges Méliès’s 1902 film Le Voyage Dans La Lune).

We hope that the DPLA takes a proactive approach to encouraging the use of the digital material that it federates, to ensure that it is as impactful and valuable to as many people as possible.

Sita’s free: Landmark copyleft animated film is now licensed CC0

Sarah Stierch - January 19, 2013 in Free Culture, Open Content, Public Domain, Public Domain Works


Sit back and relax Sita..you’re free!

This past Friday, American cartoonist, animator, and free culture activist Nina Paley announced she was releasing her landmark animated film Sita Sings the Blues under a Creative Commons CC0 license. Sita Sings the Blues is quite possibly the most famous animated film to be released under an open license. The 82-minute film, an autobiographical story mixed with an adaptation of the Ramayana, was released in 2008 under a Creative Commons Attribution-ShareAlike license.

Paley, a well-known copyleft and free licensing advocate, found inspiration for releasing Sita in recent life events. The day after learning of the death of internet activist and computer programmer Aaron Swartz, Paley was asked by the National Film Board of Canada (NFB) to provide permissions for filmmaker Chris Landreth to “refer” to Sita Sings the Blues in an upcoming film. Her struggles with NFB lawyers reminded Paley of the challenges Swartz faced in relation to his “freeing” of JSTOR documents. “I couldn’t bear to enable more bad lawyers, more bad decisions, more copyright bullshit, by doing unpaid paperwork for a corrupt and stupid system. I just couldn’t,” Paley explained on her blog. She refused to sign the paperwork, and the NFB asked Landreth to remove any mention of Sita from his film.

“CC-0 is as close as I can come to a public vow of legal nonviolence,” Paley states, channeling her frequent frustration with film industry lawyers and copyrights. In a copyleft community where participants often debate which license is the best option, Paley took the chance to find out: “I honestly have not been able to determine which Free license is ‘better,’ and switching to CC-0 may help answer that question.”

Sita can now sing the blues (or perhaps something happier, since she is now as free as can be), without having to file paperwork ever again.

Did Gale Cengage just liberate all of their public domain content? Sadly not…

Jonathan Gray - January 9, 2013 in Featured, Free Culture, Legal, Open Access, Open/Closed, Public Domain, WG Public Domain

Earlier today we received a strange and intriguing press release from a certain ‘Marmaduke Robida’ claiming to be ‘Director for Public Domain Content’ at Gale Cengage’s UK premises in Andover. Said the press release:

Gale, part of Cengage Learning, is thrilled to announce that all its public domain content will be freely accessible on the open web. “On this Public Domain Day, we are proud to have taken such a ground-breaking decision. As a common good, the Public Domain content we have digitized has to be accessible to everyone” said Marmaduke Robida, Director for Public Domain Content, Gale.

Hundreds of thousands of digitized books coming from some of the world’s most prestigious libraries and belonging to top-rated products highly appreciated by the academic community such as “Nineteenth Century Collection Online”, “Eighteenth Century Collection Online”, “Sabin America”, “Making of the Modern World” and two major digitized historical newspaper collections (The Times and the Illustrated London news) are now accessible from a dedicated websit. The other Gale digital collections will be progressively added to this web site throughout 2013 so all Public Domain content will be freely accessible by 2014. All the images are or will be available under the Public Domain Mark 1.0 license and can be reused for any purpose.

Gale’s global strategy is inspired by the recommandations issued by the European reflection group “Comite des sages” and the Public Domain manifesto. For Public Domain content, Gale decided to move to a freemium business model : all the content is freely accessible through basic tools (Public Domain Downloader, URL lists, …), but additional services are charged for. “We are confident that there still is a market for our products. Our state-of-art research platforms offer high quality services and added value which universities or research libraries are ready to pay for” said Robida.

A specific campaign targeted to national and academic libraries for promoting the usage of Public Domain Mark for digitized content will be launched in 2013. “We are ready to help the libraries that have a digitization programme fulfill their initial mission : make the knowledge accessible to everyone. We also hope that our competitors will follow the same way in the near future. Public Domain should not be enclosed by paywalls or dubious licensing terms” said Robida.

The press release linked to a website which proudly proclaimed:

All Public Domain content to be freely available online. Gale Digital Collections has changed the nature of research forever by providing a wealth of rare, formerly inaccessible historical content from the world’s most prestigious libraries. In january 2013, Gale has taken a ground-breaking decision and chosen to offer this content to all the academic community, and beyond to mankind, to which it belongs

This was met with astonishment by members of our public domain discussion list, many of whom suspected that the news might well be too good to be true. The somewhat mysterious, yet ever-helpful Marmaduke attempted to allay these concerns on the list, commenting:

I acknowledge this decision might seem a bit disorientating. As you may know, Gale is already familiar to give access freely to some of its content [...], but for Public Domain content we have decided to move to the next degree by putting the content under the Public Domain Mark.

Several brave people had a go at testing out the so-called ‘Public Domain Downloader’ and said that it did indeed appear to provide access to digitised images of public domain texts – in spite of concerns in the Twittersphere that the software might well be malware (in case of any ambiguity, we certainly do not suggest that you try this at home!).

I quickly fired off an email to Cengage’s Director of Media and Public Relations to see if they had any comments. A few hours later a reply came back:

This is NOT an authorized Cengage Learning press release or website – our website appears to have been illegally cloned in violation of U.S. copyright and trademark laws. Our Legal department is in the process of trying to have the site taken down as a result. We saw that you made this information available via your listserv and realize that you may not have been aware of the validity of the site at the time, but ask that you now remove the post and/or alert the listserv subscribers to the fact that this is an illegal site and that any downloads would be in violation of copyright laws.

Sadly the reformed Gale Cengage – the Gale Cengage opposed to paywalls, restrictive licensing and clickwrap agreements on public domain material from public collections, the Gale Cengage supportive of the Public Domain Manifesto and dedicated to liberating public domain content for everyone to enjoy – was just a hoax, a phantasm. At least this imaginary, illicit doppelgänger Gale gives a fleeting glimpse of a parallel world in which one of the biggest gatekeepers turned into one of the biggest liberators overnight. One can only hope that Gale Cengage and their staff might – in the midst of their legal wrangling – be inspired by this uncanny vision of the good public domain stewards that they could one day become. If only for a moment.

Making a Real Commons: Creative Commons should Drop the Non-Commercial and No-Derivatives Licenses

Rufus Pollock - October 4, 2012 in Featured, Free Culture, Open Content, Open Data, Open Definition, Open Standards, Open/Closed, WG Open Licensing

Students for Free Culture recently published two excellent pieces about why Creative Commons should drop their Non-Commercial and No-Derivatives license variants:

As the first post says:

Over the past several years, Creative Commons has increasingly recommended free culture licenses over non-free ones. Now that the drafting process for version 4.0 of their license set is in full gear, this is “a once-in-a-decade-or-more opportunity” to deprecate the proprietary NonCommercial and NoDerivatives clauses. This is the best chance we have to dramatically shift the direction of Creative Commons to be fully aligned with the definition of free cultural works by preventing the inheritance of these proprietary clauses in CC 4.0’s final release.

After reiterating some of the most common criticisms and objections against the NC and ND restrictions (if you are not familiar with these then they are worth reading up on), the post continues:

Most importantly, though, is that both clauses do not actually contribute to a shared commons. They oppose it.

This is a crucial point and one that I and others at the Open Knowledge Foundation have made time and time again. Simply: the Creative Commons licenses do not make a commons.

As I wrote on my personal blog last year:

Ironically, despite its name, Creative Commons – or more precisely its licenses – does not produce a commons. The CC licenses are not mutually compatible: for example, material under a CC Attribution-ShareAlike (by-sa) license cannot be intermixed with material under any of the CC NonCommercial licenses (e.g. Attribution-NonCommercial, Attribution-NonCommercial-ShareAlike).

Given that a) the majority of CC licenses in use are ‘non-commercial’ and b) there is also large usage of ShareAlike (e.g. Wikipedia), this is an issue that affects a large proportion of ‘Creative Commons’ material.

Unfortunately, the presence of the word ‘Commons’ in CC’s name and the prominence of ‘remix’ in the advocacy around CC tends to make people think, falsely, that all CC licenses are in some way similar or substitutable.

The NC and ND licenses prevent CC licensed works forming a unified open digital commons that everyone is free to use, reuse and redistribute.
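The incompatibility argument can be sketched as a rough compatibility check (illustrative only, and certainly not legal advice; the function and license labels are my own shorthand). The key idea: a remix must satisfy every condition of every input license, so NC and ND clauses partition CC-licensed works into pools that cannot be recombined.

```python
# Illustrative sketch: can a remix legally combine two CC-licensed works?
# A derivative must honour the conditions of both inputs simultaneously.

def can_combine(a, b):
    """Very rough compatibility test for remixing works licensed a and b."""
    licenses = {a, b}
    if any("ND" in lic for lic in licenses):
        # NoDerivatives forbids making a remix at all.
        return False
    if "BY-SA" in licenses and any("NC" in lic for lic in licenses):
        # ShareAlike forces the remix to be BY-SA, which permits
        # commercial use - contradicting the NonCommercial input.
        return False
    return True


print(can_combine("BY-SA", "BY-NC"))  # False: the incompatibility quoted above
print(can_combine("BY", "BY-SA"))     # True: both feed a unified commons
print(can_combine("BY", "BY-ND"))     # False: ND blocks remixing entirely
```

Only the subset without NC and ND clauses (CC0, BY, BY-SA) interoperates freely, which is precisely why the post argues those clauses oppose a shared commons rather than contributing to it.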

Perhaps if Creative Commons were instead called ‘Creative Choice’ and it were clearer that only a subset of the licenses (namely CC0, CC-BY and CC-BY-SA) contribute to the development of a genuine, unified, interoperable commons, then this would not be so problematic. But the fact that CC appears to promote such a commons (which in fact it does not) ultimately has a detrimental effect on the growth and development of the open digital commons.

As the Free Culture blog puts it:

Creative Commons could have moved towards being a highly-flexible modular licensing platform that enabled rightsholders to fine-tune the exact rights they wished to grant on their works, but there’s a reason that didn’t happen. We would be left with a plethora of incompatible puddles of culture. Copyright already gives rightsholders all of the power. Creative Commons tries to offer a few simple options not merely to make the lives of rightsholders easier, but to do so towards the ends of creating a commons.

Whilst Free Culture is focused on “content”, the situation is, if anything, more serious for data, where combination and reuse are central and interoperability (and the resulting open commons) is therefore especially important.

We therefore believe this is the time for Creative Commons to either retire the NC and ND license variants, or spin them off into a separate entity which does not purport to promote or advance a digital commons (e.g. ‘Creative Choice’).

Please consider joining us and Students for Free Culture in calling on Creative Commons to make the necessary changes.

The Revenge of the Yellow Milkmaid: Cultural Heritage Institutions open up dataset of 20m+ items

Sam Leon - September 17, 2012 in Featured, Free Culture, Open Data, Open GLAM

 

The following is a guest blog post by Harry Verwayen, Business Development Director at Europeana, Europe’s largest cultural heritage data repository.

Last week, on September 12 to be exact, we were proud to announce that Europeana released metadata for more than 20 million cultural heritage objects under a Creative Commons Zero Universal Public Domain Dedication.

This news is significant because it means that anyone can now use the data for any purpose – creative, educational, commercial – with no restrictions. It is by far the largest one-time dedication of cultural data to the Public Domain and we believe that this can offer a new boost to the knowledge economy, providing cultural institutions and digital entrepreneurs with opportunities to create innovative apps and games for tablets and smartphones and to create meaningful new web services. Releasing data from across the memory organisations of every EU country sets an important new international precedent, a decisive move away from the world of closed and controlled data.

Unsurprisingly, the news received warm support from the Open Data community. In the Guardian Datablog last week, Jonathan Gray called this data release ‘a coup d’etat for advocates of open cultural data’. Vice-President of the European Commission, Neelie Kroes, tweeted: ‘World’s premier cultural dataset @EuropeanaEU just went #opendata! Over 20 million items. Big day 4 @creativecommons’.

This release was the result of hard work of Open Data advocates from around the globe, activists from the Europeana Network, and not in the least from our 2200 partners in Libraries, Museums and Archives who contribute data about digitised books, paintings, photographs, recordings and films to Europeana.

In a white paper that we published last year, ‘The Problem of the Yellow Milkmaid’, co-authors Martijn Arnoldus from Kennisland and Peter Kaufman from Intelligent Television addressed the release of open metadata from the perspective of the business model of cultural institutions. Why does it make sense for them to open up their data? The study showed that this depends to a large extent on the role that metadata plays in the business model of the institution. By and large, all institutions agreed that the principal advantages of opening up their metadata are that it will increase their relevance in the digital space, engage new users with their holdings and, perhaps most importantly, align with their mission to make our shared cultural heritage more accessible to society.

But by themselves these arguments were not in all cases sufficient to prompt the bold move of opening up the data. There was also a fear that the authenticity of works would be jeopardised if they were made available for anyone to re-use without attribution, and that potential income would be lost if all control were given away. These are understandable concerns from institutions that are increasingly under financial pressure. Nevertheless, one could feel the balance tilting towards opening access.

An illustrative anecdote was provided by the Rijksmuseum. The Milkmaid, one of Johannes Vermeer’s most famous pieces, depicts a scene of a woman quietly pouring milk into a bowl. During a survey, the Rijksmuseum discovered that there were over 10,000 copies of the image on the internet – mostly poor, yellowish reproductions. As a result of all these low-quality copies on the web, according to the Rijksmuseum, “people simply didn’t believe the postcards in our museum shop were showing the original painting. This was the trigger for us to put high-resolution images of the original work with open metadata on the web ourselves. Opening up our data is our best defence against the ‘Yellow Milkmaid’.”

With the release of the records in our repository, we can say that the Milkmaid and her 20 million fellow original works get their revenge: alongside millions of copies, the authentic works are now findable and accessible for the public.

We can therefore conclude that the release of the metadata is a major step forward towards a ‘Cultural Commons’, a collectively owned space available for all to use and create new services on. But having the building blocks available doesn’t mean that the building is ready. Now that the conditions for access to metadata have been met, we need to work together to reap the opportunities that we have all heralded for so long.

We will therefore continue to work with partners like Creative Commons, Open Knowledge Foundation and Wikimedia Foundation to make cultural and scientific data more accessible and to support a vibrant Public Domain. We will also work to pave the way for creative re-use by developers, providing the infrastructure for opening up opportunities to create new meaningful ways to access and interpret culture.

On Tuesday, coders and developers from all over Europe will do just that when they meet as part of the Open Heritage and Open Science streams of the festival for a joint hackday, using Europeana’s dataset which we have made available as Linked Data. This is the first time that hackers will have access to the full Europeana dataset for re-use, and I am excited to see what creative apps and mash-ups are developed. Previous hackdays have resulted in apps like Artspace that would, for example, allow Europeana collections to be made available in public places such as coffee shops, libraries, schools, and hotels, or allow you to create and share your personal online guides to art. Now that this huge cultural dataset is free for all to re-use, for any purpose, we can hope to see many more such applications becoming a reality, including commercial educational applications that have not been possible before now.
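For hackday participants unfamiliar with Linked Data, the basic shape of the material is simple: each record is a set of subject–predicate–object triples. The sketch below parses a couple of triples in N-Triples, one of the plain-text serialisations used for Linked Data. The URIs and the record are invented for illustration and are not actual Europeana data; a real project would use a proper RDF library rather than this deliberately minimal reader.

```python
import re

# Hypothetical Europeana-style metadata in N-Triples. The item URI and
# values below are illustrative only, not real Europeana records.
nt = """\
<http://example.org/item/milkmaid> <http://purl.org/dc/elements/1.1/title> "The Milkmaid" .
<http://example.org/item/milkmaid> <http://purl.org/dc/elements/1.1/creator> "Johannes Vermeer" .
"""

# A deliberately minimal N-Triples reader: subject and predicate are IRIs
# in angle brackets; the object is either an IRI or a quoted literal.
TRIPLE = re.compile(r'<([^>]+)>\s+<([^>]+)>\s+(?:<([^>]+)>|"([^"]*)")\s*\.')

triples = []
for line in nt.splitlines():
    m = TRIPLE.match(line)
    if m:
        subj, pred = m.group(1), m.group(2)
        obj = m.group(3) or m.group(4)  # IRI object, or literal
        triples.append((subj, pred, obj))

for subj, pred, obj in triples:
    print(subj, pred, obj)
```

Because every statement is just such a triple, datasets from different institutions can be merged and queried together, which is what makes mash-ups across collections feasible.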

I very much look forward to seeing you in Helsinki this week to discuss how we can bring the opportunities of Open Data to full fruition!

Taking “utmost transparency” to the next level – at4am for all!

Erik Josefsson - June 27, 2012 in Free Culture, Legal, Open Government Data, WG EU Open Data, WG Open Government Data

What? When?? Where??? How?!?! were the questions that got me started, some 10 years ago now, on my free software journey that’s taken me to the heart of the European Parliament. As a young Swedish musician, as politically innocent and ignorant as the next person, I got worked up together with a bunch of newborn stallmanites, unleashing ourselves on the internet, determined to kill the software patents directive. There was a lot of code. I remember Xavi rewrote the EU’s co-decision procedure algorithm in Java to be able to understand it, and that our content management system said ‘Cannot parse this Directive’ instead of returning 404. The tracking of MEPs was managed by Knecht, an email-driven content management program written in Lisp (insert awe comment here), and I cannot remember the number of different Perl scripts that were playing around with voting results. It all ended happily (we won), and I still say “Can I have a B-item please!” whenever I get to go for a drink with Miernik or Jonas.

You might think things would be different when you’re on the inside. I have been working in the European Parliament since the last elections, but it turns out at least three of the questions are still the same – What? When?? Where??? One administrative response from the institution is to serve the MEPs and their staff with iPads and intranet pages. Users of iPads and intranet are happy. But I am not. I have decided, together with a bunch of old, stubborn stallmanites, to try to use free software in the European Parliament as far as humanly possible. And we do. And it is (partially) possible. We put up a sign at FOSDEM in February last year calling for help, and we now number 2 patrons, 13 members and 29 supporters. You can find info on how to become a supporter or a member (or even patron) of the European Parliament Free Software Users Group (EPFSUG) here.

Another administrative answer by the institution to the questions above has been to build an Automatic Tool for AMendments, at4am. If ever I can nominate anybody for the Nobel Peace Prize, it would be the at4am developer team, who have made this brilliant application possible. They have succeeded in making independent and competing committees in the European Parliament cooperate to provide information on their internal workings that can be parsed into a unified way of tabling amendments. It’s huge. Imagine a world without git (or anything like it) and then there is git – that is how epic this application is. More than 150,000 amendments have been tabled since its launch. I’d say that the same number of tears and curses have been saved.
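The git analogy is apt because an amendment is, at heart, a reviewed change against an original text. As a rough sketch of that idea (nothing to do with how at4am is actually implemented, and with invented article text), Python’s standard difflib can render such a change as a unified diff:

```python
import difflib

# An amendment is, in essence, a tracked change against the original text.
# The article wording below is invented for illustration.
original = [
    "1. Parliament shall ensure that its activities are",
    "conducted with transparency.",
]
amended = [
    "1. Parliament shall ensure that its activities are",
    "conducted with the utmost transparency.",
]

# Render the amendment as a unified diff, the format git users know well.
diff = list(difflib.unified_diff(original, amended,
                                 fromfile="original", tofile="amendment",
                                 lineterm=""))
print("\n".join(diff))
```

What at4am adds on top of this basic idea is the hard institutional part: getting every committee’s documents into a form where such changes can be tabled uniformly.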

Now, to close this already long, bushy and wild blog post with the reason for it in the first place: the at4am team has decided to share the code with the world, and on Wednesday 11 July we’re going to talk about which licence would be best to use. The event is kindly hosted by MEP Marie-Christine Vergiat, and Carlo Piana and Karsten Gerloff from the Free Software Foundation Europe (FSFE) are going to speak. Please come! A follow-up meeting should of course focus on how to get the data out of the EP intranet and which licence would then be the best to use.

Why? Because the question “How?!?!” actually has an answer already. Rule 103 of the Rules of Procedure of the European Parliament reads as follows:

Transparency of Parliament’s activities

  1. Parliament shall ensure that its activities are conducted with the utmost transparency, in accordance with the second paragraph of Article 1 of the Treaty on European Union, Article 15 of the Treaty on the Functioning of the European Union and Article 42 of the Charter of Fundamental Rights of the European Union.

That’s a pretty serious standard. Come join to give it meaning! Let’s figure out how to make utmost transparency work in practice.

Ideas for OpenPhilosophy.org

Jonathan Gray - December 20, 2011 in Bibliographic, Free Culture, Ideas and musings, Open Content, Open Data, Public Domain, WG Cultural Heritage, WG Humanities, WG Public Domain, Working Groups

The following post is from Jonathan Gray, Community Coordinator at the Open Knowledge Foundation. It is cross-posted from jonathangray.org.

For several years I’ve been meaning to start OpenPhilosophy.org, which would be a collection of open resources related to philosophy for use in teaching and research. There would be a focus on the history of philosophy, particularly on primary texts that have entered the public domain, and on structured data about philosophical texts.

The project could include:

  • A collection of public domain philosophical texts, in their original languages. This would include so-called ‘minor’ figures as well as well-known thinkers. The project would bring together texts from multiple online sources – from projects like Europeana, the Internet Archive, Project Gutenberg or Wikimedia Commons, to smaller online collections from libraries, archives, academic departments or individual scholars. Every edition would be rights-cleared to check that it could be freely redistributed, and would be made available either under an open license, with a rights waiver or a public domain dedication.
  • Translations of public domain philosophical texts, including historical translations which have entered the public domain, and more recent translations which have been released under an open license.
  • Ability to lay out original texts and translations side by side – including the ability to create new translations, and to line up corresponding sections of the text.
  • Ability to annotate texts, including private annotations, annotations shared with specific users or groups of users, and public annotations. This could be done using the Annotator tool.
  • Ability to add and edit texts, e.g. by uploading or by importing via a URL for a text file (such as a URL from Project Gutenberg). Also ability to edit texts and track changes.
  • Ability to be notified of new texts that might be of interest to you – e.g. by subscribing to certain philosophers.
  • Stable URLs to cite texts and/or sections of texts – including guidance on how to do this (e.g. automatically generating citation text to copy and paste in a variety of common formats).
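The stable-URL and citation idea in the last point can be sketched very simply: derive a canonical URL from a text (and optionally a section) identifier, then render it in a couple of common citation formats. Everything in this sketch – the domain, the URL scheme, the function names – is hypothetical, just one possible shape for the feature.

```python
from typing import Optional

# Hypothetical sketch of stable, citable URLs. The domain and URL scheme
# are invented for illustration; nothing here is an existing API.
BASE_URL = "https://openphilosophy.org"

def stable_url(text_id: str, section: Optional[str] = None) -> str:
    """Build a canonical, citable URL for a text, or a section of it."""
    url = f"{BASE_URL}/texts/{text_id}"
    return f"{url}#sec-{section}" if section else url

def cite(author: str, year: int, title: str, text_id: str,
         section: Optional[str] = None, style: str = "plain") -> str:
    """Render copy-and-paste citation text in a few simple styles."""
    url = stable_url(text_id, section)
    if style == "plain":
        return f"{author} ({year}). {title}. {url}"
    if style == "markdown":
        return f"{author} ({year}). *{title}*. <{url}>"
    raise ValueError(f"unknown style: {style}")

print(cite("Spinoza", 1677, "Ethica", "spinoza-ethica", section="1p7"))
```

The important design point is that the URL is derived deterministically from stable identifiers, so a citation made today still resolves after the site is redesigned.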

The project could also include a basic interface for exploring and editing structured data on philosophers and philosophical works:

  • Structured bibliographic data on public domain philosophical works – including title, year, publisher, publisher location, and so on. Ability to make lists of different works for different purposes, and to export bibliographic data in a variety of formats (building on existing work in this area – such as Bibliographica and related projects).
  • Structured data on secondary texts, such as articles, monographs, etc. This would enable users to browse secondary works about a given text. One could conceivably show which works discuss or allude to a given section of a primary text.
  • Structured data on the biographies of philosophers – including birth and death dates and other notable biographical and historical events. This could be combined with bibliographic data to give a basic sense of historical context to the texts.
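To make the bibliographic point above concrete, here is a minimal sketch of a structured record exported to BibTeX, one of the common formats such a feature might target. The record class and field names are invented for illustration and are not drawn from Bibliographica.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Work:
    """A hypothetical structured bibliographic record."""
    key: str
    author: str
    title: str
    year: int
    publisher: Optional[str] = None
    address: Optional[str] = None

def to_bibtex(w: Work) -> str:
    """Export a record as a BibTeX @book entry, skipping empty fields."""
    fields = {"author": w.author, "title": w.title, "year": str(w.year)}
    if w.publisher:
        fields["publisher"] = w.publisher
    if w.address:
        fields["address"] = w.address
    body = ",\n".join(f"  {k} = {{{v}}}" for k, v in fields.items())
    return f"@book{{{w.key},\n{body}\n}}"

kant = Work("kant1781kritik", "Immanuel Kant", "Kritik der reinen Vernunft",
            1781, publisher="Johann Friedrich Hartknoch", address="Riga")
print(to_bibtex(kant))
```

With records in this shape, the same data can drive list-making, export to other formats, and the linking of secondary literature to primary texts described above.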

Other things might include:

  • User profiles – to enable people to display their affiliation and interests, and to be able to get in touch with other users who are interested in similar topics.
  • Audio version of philosophical texts – such as from Librivox.
  • Links to open access journal articles.
  • Images and other media related to philosophy.
  • Links to Wikipedia articles and other introductory material.
  • Educational resources and other material that could be useful in a teaching/learning context – e.g. lecture notes, slide decks or recordings of lectures.

While there are lots of (more or less ambitious!) ideas above, the key thing would be to develop the project in conjunction with end users in philosophy departments, including undergraduate students and researchers. Having something simple that could be easily used and adopted by people who are teaching, studying or researching philosophy or other humanities disciplines would be more important than something cutting-edge and experimental but less usable. Hence it would be really important to have a good, intuitive user interface and lots of ongoing feedback from users.

What do you think? Interested in helping out? Know of existing work that we could build on (e.g. bits of code or collections of texts)? Please do leave a comment below, join discussion on the open-humanities mailing list or send me an email!
