
You are browsing the archive for WG Humanities.

Open Humanities Hack, 21st-22nd November

Sam Leon - October 11, 2012 in Events, Sprint / Hackday, WG Humanities

Where?: Guy's Campus, Hodgkin Building, London, SE1 1UL

When?: 21st-22nd November

Sign up: Please fill in the sign-up form


Humanities Hack is the first Digital Humanities hack organised jointly by the King's College London Department of Digital Humanities, DARIAH, the Digitised Manuscripts to Europeana (DM2E) project and our Open Humanities Working Group.

The London event is the first of a series of hack days organised for Digital Humanists and intended to target research-driven experimentation with existing Humanities data sets. One of the most exciting recent developments in Digital Humanities is the investigation and analysis of complex data sets, which requires close collaboration between Humanities and computing researchers. The aim of the hack day is not to produce complete applications but to experiment with methods and technologies for investigating these data sets, so that by the end we have a clearer understanding of the novel techniques that are emerging.

We are providing a few open humanities data sets, but we welcome additions. We are currently collecting data sets here; if you have any that might be useful for the event, please do add them.

Possible themes include, but are not limited to:

  • Research in textual annotation has been a particular strength of Digital Humanities. Where are the next frontiers? How can we bring together insights from other fields and Digital Humanities?
  • How do we provide linking and sharing of Humanities data in a way that makes sense of its complex structure, with its many internal relationships, both structural and semantic? In particular, distributed Humanities research data often combines objects in multiple media, and there is a diversity of standards for describing the data.
  • Visualisation. How do we develop reasonable visualisations that are practical and help build an overall intuition for the underlying Humanities data set?
  • How can we advance the novel Humanities technique of Network Analysis to describe complex relationships between ‘things’ in social-historical systems: people, places, etc.?

With this hack day we seek to form groups of computing and humanities researchers who will work together to come up with small-scale prototypes that showcase new ways of working with Humanities data.

As numbers are limited for this hack, please register here.

If you have any questions, please do not hesitate to contact Sam Leon (sam.leon@okfn.org) or Tobias Blanke (tobias.blanke@kcl.ac.uk).

Open Book Publishers releases “The Digital Public Domain”

Theodora Middleton - April 24, 2012 in COMMUNIA, External, Open Access, Public Domain, WG Humanities, WG Public Domain

Open Book Publishers is the first UK academic publisher to have made all its books freely available online, publishing peer-reviewed research in subjects across the Humanities and Social Sciences. They are “committed to the idea that high quality scholarship should be available to readers everywhere regardless of their income or access to university libraries”.

This week sees their most recent release hit the virtual shelves, The Digital Public Domain: Foundations for an Open Culture, edited by Melanie Dulong de Rosnay, co-founder and chair of Communia, and Juan Carlos De Martin. From the Press Release:

This book brings together essays by academics, librarians, entrepreneurs, activists and policy makers, who were all part of the EU-funded Communia project. Together the authors argue that the Public Domain — that is, the informational works owned by all of us, be that literature, music, the output of scientific research, educational material or public sector information — is fundamental to a healthy society.


The essays range from more theoretical papers on the history of copyright and the Public Domain, to practical examples and case studies of recent projects that have engaged with the principles of Open Access and Creative Commons licensing. The book is essential reading for anyone interested in the current debate about copyright and the Internet. It opens up discussion and offers practical solutions to the difficult question of the regulation of culture in the digital age.

Open Book Publishers argue that “One of the fundamental aims of academia is to spark thought and debate in both the academic and wider community. Open Access helps spread educational materials to everyone, globally, not just to those who can afford it. A large proportion of scholarly research is publicly funded, so it seems only reasonable that its results be made available as widely as possible.”

The free PDF edition of this title was made possible by generous funding received from the European Union (eContentplus framework project ECP-2006-PSI-610001). Get your copy here, and have a browse of their other titles at www.openbookpublishers.com.

THATCamping in Luxembourg

Sam Leon - March 26, 2012 in Events, Open GLAM, WG Humanities

THATCamps (The Humanities And Technology Camps) are a form of “unconference” focussed on the nascent discipline of the Digital Humanities that have risen rapidly in popularity since their invention by the folks over at George Mason University.

I was lucky enough to be one of 50 participants at this year’s first THATCamp in Europe, which was held in Luxembourg at the end of last week. On the first day of this conference-not-conference we gathered in the grand confines of the Centre Culturel de Rencontre. In true THATCamp spirit there were no lengthy introductory presentations, just a nod to the people who would be leading the first set of sessions.

I was immediately met with disappointment when I realised that I wanted to go to five of the six parallel sessions that were running. I went for the Zotero session. Frédéric Clavert took us through the basic ins and outs of the reference management tool, both as a stand-alone application and as a Firefox plug-in.

Within minutes it became apparent just how useful some of the compatibility features of this tool would be for Open Knowledge Foundation projects like TEXTUS. It would be wonderful, I mused, if when I was reading a Nietzsche text online on OpenPhilosophy.org I could just click a button to add it to my references in Zotero. Even more useful would be the ability to highlight a given section of text and click “cite in Zotero”. If Zotero could then save the URI that TEXTUS would generate for that quote, plus all the relevant citation information, this would make the process of citing texts in TEXTUS seamless for existing Zotero users.

After a quick coffee break, I was back in another session in which a group of academics, teachers, students, archivists and developers were given a crash course in Python.

We used PythonAnywhere and the Pattern module to manipulate and visualise networks of the Twitter users who had been tagging their tweets with #thatcamplux.
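
For anyone curious what that exercise looks like in practice, here is a minimal sketch along the same lines (not the workshop's actual code), assuming the Pattern library's web and graph modules are installed; the hashtag and output folder name are just illustrative.

```python
import re
from pattern.web import Twitter      # Pattern's web-mining module
from pattern.graph import Graph      # Pattern's lightweight graph module

g = Graph()
# Fetch a batch of recent tweets for the hashtag and link each author
# to every user they mention.
for tweet in Twitter().search('#thatcamplux', count=50):
    author = tweet.author
    for mention in re.findall(r'@(\w+)', tweet.text):
        g.add_node(author)
        g.add_node(mention)
        g.add_edge(author, mention)

# Export an interactive HTML/canvas visualisation of the mention network.
g.export('thatcamplux-network')
```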

While I doubt I’ll be making any significant contributions to the world of software programming in the near future, it was an excellent opportunity to play with the power of Python and get to grips with some of its fundamentals.

Before we all headed off to the pub, one of the other key rituals of THATCamp was performed: we planned all of the sessions for the next day. Ideas were put forward, grouped and then voted on. Within half an hour a spontaneous schedule had emerged covering a variety of topics related to the Digital Humanities. The end result can be seen here.

On the second day I once again walked down to the Centre Culturel de Rencontre in the bright Luxembourgish sunshine to join my fellow campers for more unconference shenanigans. I caught a workshop on training the next generation of digital humanists, in which a rather self-reflective discussion was initiated about the nature of the digital humanities. Were the Digital Humanities pretty much identical to the disciplines we’d known since the Renaissance with a bit of Twitter and WordPress thrown in, or were they something more, with new methodologies and avenues for knowledge creation?

Marin Dacos, of Open Editions and founder of revues.org, endorsed the claim that what distinguished the Digital Humanities from the more traditional conception of the humanities was the potential they offered for open and collaborative research processes. This was consistent with the view enshrined in a document produced by the participants at an earlier THATCamp, in which he had a hand: the Manifesto for the Digital Humanities.

THATCamp struck me as an excellent antidote to the more rigid traditional style of conference we’ve all become accustomed to. It was a breath of fresh air, with a friendly and congenial atmosphere in which people with a mix of skills were able to share their views and work together. It was also fantastic to see the extent to which ideas relating to open access were being put at the heart of the Digital Humanities’ agenda, which bodes well for all the blossoming Open Humanities projects that the Open Knowledge Foundation is now home to.

Mapping the Republic of Letters

Nicole Coleman - March 22, 2012 in External, Open GLAM, Visualization, WG Cultural Heritage, WG Humanities

The following post is crossposted from the OpenGLAM blog, and is about Stanford’s Mapping the Republic of Letters Project – one of the finest examples of what can be done with cultural heritage data and open source tools. Mapping the Republic of Letters is a collaborative, interdisciplinary humanities research project looking at 17th and 18th century correspondence, travel, and publication to trace the exchange of ideas in the early modern period and the Age of Enlightenment.

What unites the researchers involved in Mapping the Republic of Letters is the opportunity to explore historical material in a spatial context and ask big-data questions across archives: Did the Republic of Letters have boundaries? Where was the Enlightenment?

The Republic of Letters was an early modern network of intellectuals whose connections transcended generations and state boundaries. It has been described as a lost continent, and debate continues about whether or not it really existed. Though the ‘letters’ of the title refers to scholarly knowledge, epistolary exchange was, in fact, the net that held this community together. Letters could be shipped around the world and shared across generations. Among our case studies, Athanasius Kircher’s correspondence network was the most widely distributed: he exchanged letters with Jesuit outposts from Macau to Mexico.

Since the early stages of our project, we have used open-source graphics libraries to visualize our collected data. The first step is to understand the ‘shape’ of the archive. A timeline + histogram, for example, reveals at a glance the distribution of letters in the collection over hundreds of years. And a map connecting cities as the sources and destinations of sent letters reveals geographic “cold spots” as well as hot spots in the archive.
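
As a rough illustration of that first step, the sketch below plots invented letter dates as a letters-per-decade histogram with matplotlib (standing in for whichever graphics libraries a given archive project actually uses).

```python
import matplotlib.pyplot as plt

# Illustrative letter dates only; a real archive would supply thousands of records.
letter_years = [1712, 1715, 1733, 1734, 1734, 1750, 1751, 1760, 1762, 1778]

plt.hist(letter_years, bins=range(1700, 1791, 10), edgecolor="black")
plt.xlabel("Year")
plt.ylabel("Letters in the archive")
plt.title("Distribution of letters over time")
plt.show()
```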

As we begin to dive in and pursue specific research questions, visualization tools in the form of maps, network graphs and charts help us to make sense of piles of data all at once. Voltaire’s correspondence alone includes about 15,000 letters. Putting those letters on a map instantly gives us a picture of where Voltaire traveled and reveals temporal and spatial patterns in his letter-writing. And while there is no record of epistolary exchange between Voltaire and the American inventor and statesman Benjamin Franklin, a network graph of their combined correspondence quickly reveals three second-degree connections.
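
A minimal sketch of that last point, using networkx and invented letter records rather than the project's own data and tools: two correspondents who never wrote to each other can still share neighbours in the graph.

```python
import networkx as nx

# Each pair is one letter: (sender, recipient). The pairings below are illustrative only.
letters = [
    ("Voltaire", "Frederick II"),
    ("Benjamin Franklin", "Frederick II"),
    ("Voltaire", "Jean le Rond d'Alembert"),
    ("Benjamin Franklin", "Jean le Rond d'Alembert"),
    ("Voltaire", "Condorcet"),
    ("Benjamin Franklin", "Condorcet"),
]

g = nx.Graph()
g.add_edges_from(letters)

# Correspondents shared by both men are the second-degree connections between them.
shared = sorted(set(g.neighbors("Voltaire")) & set(g.neighbors("Benjamin Franklin")))
print(shared)
```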

One outcome of this project is a visualization layer to complement the well-established text-based search model for archives. To begin to really piece together a map of the Republic of Letters, we need to find a way to thread a path through the many dispersed and otherwise siloed correspondence archives. Another great challenge is to visually reflect the gaps, uncertainty and ambiguity in the historical record. It is often those gray areas that provide new research opportunities for humanists. In this effort we are very pleased to be working in partnership with the DensityDesign Research Lab in Milan.

We have also been working closely with the Cultures of Knowledge project at Oxford. Cultures of Knowledge recently released a beta version of their open access union catalog of early modern letters, aptly named Early Modern Letters Online. Their model is not to be the repository, but to provide a rich search layer across existing correspondence collections, pointing back out to the source repository. Our friends at the Dutch 17th-century Circulation of Knowledge project are addressing the challenges of mining early modern correspondence for topics across many languages.

The code base for our visualizations is open source and available for download at athanasius-project.github.com. That is not to say, however, that our visualizations are prêt-à-porter. Since our research is devoted to knowledge production in the humanities and not software development, the code is rather idiosyncratic and constrained by our changing data model. Please contact us if you would like to learn more or would like to join the effort.

Announcing DM2E: Exploring the possibilities of Linked Open Data in cultural heritage

Sam Leon - March 19, 2012 in DM2E, Featured, Our Work, WG Cultural Heritage, WG Humanities, WG Open Bibliographic Data

The Open Knowledge Foundation is delighted to announce that it will be leading the community work for a three-year EU-funded project entitled Digitised Manuscripts to Europeana (DM2E). The project consortium, which includes academic institutions, NGOs and commercial partners, will be led by Professor Stefan Gradmann at the Humboldt University.

Europeana

The project aims to enable as many of Europe’s memory institutions as possible to upload their digital content into Europeana easily.

Europeana is Europe’s largest cultural heritage portal, giving access to millions of digital artefacts contributed by over 2,000 cultural heritage institutions across Europe. Founded in 2008, Europeana offers access to Europe’s history to all citizens with an internet connection. Not only does Europeana hold a huge amount of promise for researchers and scholars, who benefit immensely from access to large aggregated datasets about cultural heritage objects, but through its APIs it also promises to stimulate the development of a swathe of apps and tools with applications in tourism and education.

Open GLAM (Galleries, Libraries, Archives, Museums)

As part of DM2E, the Open Knowledge Foundation will be continuing to work closely with cultural institutions from all over Europe, encouraging them to openly license their metadata.

Metadata contributed to content aggregation platforms like Europeana is most valuable when it is openly licensed, maximising the number of applications it can have. The Open Knowledge Foundation’s Open Bibliographical Principles are the expression of the ideas we seek to realise in this field.

Last year, the team at Europeana announced their new Data Exchange Agreement, which stipulates that metadata must be provided to Europeana under the Creative Commons CC0 public domain dedication. This is a significant step towards the goal of achieving an open cultural heritage data ecosystem that extends access to all, and encourages the reuse of cultural data in a whole variety of novel contexts, both commercial and non-commercial.

The Open Knowledge Foundation’s Open GLAM work will be key in this respect. We will be teaming up with the likes of Wikimedia, Creative Commons and UK Discovery to run open licensing clinics and technical workshops for librarians and archivists all over Europe, in order to demystify some of the legal issues around open metadata and to showcase projects built on openly licensed content, demonstrating just what is possible when you free your metadata!

The next workshop in this strand will be held at the Staatsbibliothek zu Berlin on April 20th and it will be co-hosted with Wikimedia Germany. Watch this space for more details!

Linked Open Data in cultural heritage

One of the core aspirations of DM2E is to leverage the tremendous potential offered by Linked Data technologies such as RDF to create a network of interconnected and linked cultural datasets.

Having cultural heritage data in Linked Data formats will enable the automated enrichment of the metadata provided to Europeana. For instance, metadata fields about the author of a book can be linked to the giant DBpedia dataset, supplying more information about the life of that particular author and ultimately enriching the original metadata record.
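
To make the idea concrete, here is a minimal sketch (not DM2E's own pipeline) of that kind of lookup against DBpedia using the SPARQLWrapper package; the author name is only an example.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo:  <http://dbpedia.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?person ?birthDate WHERE {
        ?person rdfs:label "Heinrich Heine"@en ;
                dbo:birthDate ?birthDate .
    } LIMIT 1
""")
sparql.setReturnFormat(JSON)

# Each binding supplies a DBpedia URI and a birth date that could be attached
# to the original "flat" metadata record for the author.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["person"]["value"], row["birthDate"]["value"])
```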

The important task of building a tool that will translate “flat” (non-linked) data from cultural heritage institutions into RDF falls to the Freie Universität Berlin. They will develop technology that can take a diverse range of metadata types as its source and turn them into Linked Data that aligns with the Europeana Data Model (EDM).
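
And here is a minimal sketch of the mapping step itself, using rdflib with an invented record and a hypothetical URI scheme (the real DM2E tooling is, of course, far more elaborate and driven by the data model):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

EDM = Namespace("http://www.europeana.eu/schemas/edm/")

# One "flat" record as it might arrive from a cultural heritage institution.
record = {"id": "ms-001", "title": "Example manuscript", "creator": "Unknown scribe"}

g = Graph()
cho = URIRef("http://example.org/item/" + record["id"])   # hypothetical URI scheme
g.add((cho, RDF.type, EDM.ProvidedCHO))
g.add((cho, DC.title, Literal(record["title"])))
g.add((cho, DC.creator, Literal(record["creator"])))

print(g.serialize(format="turtle"))
```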

For any of you who want to brush up on just what Linked Data is and why it is relevant to cultural heritage, the folks at Europeana recently made a wonderful video explaining it all.

Engaging researchers

But DM2E is not only about enabling more archives and libraries to provide linked open metadata to Europeana; it’s also about working with the research communities who will consume the aggregated Linked Data on Europeana. The Italian company Net7 will be leading work on tools that will help scholars from the humanities to work with this data. Tools for semantic annotation and for building collections of texts on which complex analysis can be performed will be key.

Key links

Ideas for OpenPhilosophy.org

Jonathan Gray - December 20, 2011 in Bibliographic, Free Culture, Ideas and musings, Open Content, Open Data, Public Domain, WG Cultural Heritage, WG Humanities, WG Public Domain, Working Groups

The following post is from Jonathan Gray, Community Coordinator at the Open Knowledge Foundation. It is cross-posted from jonathangray.org.

For several years I’ve been meaning to start OpenPhilosophy.org, which would be a collection of open resources related to philosophy for use in teaching and research. There would be a focus on the history of philosophy, particularly on primary texts that have entered the public domain, and on structured data about philosophical texts.

The project could include:

  • A collection of public domain philosophical texts, in their original languages. This would include so-called ‘minor’ figures as well as well-known thinkers. The project would bring together texts from multiple online sources – from projects like Europeana, the Internet Archive, Project Gutenberg or Wikimedia Commons, to smaller online collections from libraries, archives, academic departments or individual scholars. Every edition would be rights-cleared to check that it could be freely redistributed, and would be made available under an open license, a rights waiver or a public domain dedication.
  • Translations of public domain philosophical texts, including historical translations which have entered the public domain, and more recent translations which have been released under an open license.
  • Ability to lay out original texts and translations side by side – including the ability to create new translations, and to line up corresponding sections of the text.
  • Ability to annotate texts, including private annotations, annotations shared with specific users or groups of users, and public annotations. This could be done using the Annotator tool.
  • Ability to add and edit texts, e.g. by uploading or by importing via a URL for a text file (such as a URL from Project Gutenberg). Also ability to edit texts and track changes.
  • Ability to be notified of new texts that might be of interest to you – e.g. by subscribing to certain philosophers.
  • Stable URLs to cite texts and/or sections of texts – including guidance on how to do this (e.g. automatically generating citation text to copy and paste in a variety of common formats; a rough sketch follows this list).
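
A rough sketch of that last item, with invented metadata, a hypothetical stable URL and only approximate citation formats:

```python
def cite(meta, style="mla"):
    """Return copy-and-paste citation text for a text with a stable URL."""
    if style == "mla":
        return '{author}. "{title}". {year}. {url}'.format(**meta)
    if style == "apa":
        return "{author} ({year}). {title}. Retrieved from {url}".format(**meta)
    raise ValueError("unknown citation style: " + style)

example = {
    "author": "Nietzsche, Friedrich",
    "title": "Beyond Good and Evil",
    "year": 1886,
    "url": "http://openphilosophy.org/texts/beyond-good-and-evil#section-2",  # hypothetical URL
}
print(cite(example, "mla"))
print(cite(example, "apa"))
```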

The project could also include a basic interface for exploring and editing structured data on philosophers and philosophical works:

  • Structured bibliographic data on public domain philosophical works – including title, year, publisher, publisher location, and so on. Ability to make lists of different works for different purposes, and to export bibliographic data in a variety of formats (building on existing work in this area – such as Bibliographica and related projects).
  • Structured data on secondary texts, such as articles, monographs, etc. This would enable users to browse secondary works about a given text. One could conceivably show which works discuss or allude to a given section of a primary text.
  • Structured data on the biographies of philosophers – including birth and death dates and other notable biographical and historical events. This could be combined with bibliographic data to give a basic sense of historical context to the texts.

Other things might include:

  • User profiles – to enable people to display their affiliation and interests, and to be able to get in touch with other users who are interested in similar topics.
  • Audio versions of philosophical texts – such as from LibriVox.
  • Links to open access journal articles.
  • Images and other media related to philosophy.
  • Links to Wikipedia articles and other introductory material.
  • Educational resources and other material that could be useful in a teaching/learning context – e.g. lecture notes, slide decks or recordings of lectures.

While there are lots of (more or less ambitious!) ideas above, the key thing would be to develop the project in conjunction with end users in philosophy departments, including undergraduate students and researchers. Having something simple that could be easily used and adopted by people who are teaching, studying or researching philosophy or other humanities disciplines would be more important than something cutting-edge and experimental but less usable. Hence it would be really important to have a good, intuitive user interface and lots of ongoing feedback from users.

What do you think? Interested in helping out? Know of existing work that we could build on (e.g. bits of code or collections of texts)? Please do leave a comment below, join discussion on the open-humanities mailing list or send me an email!

LODLAM-NZ Round Up

Theodora Middleton - December 20, 2011 in External, WG Cultural Heritage, WG Humanities, WG Open Bibliographic Data

The following guest post is by Jon Voss, whose projects include History Pin and Civil War Data 150.

I recently traveled to Wellington, New Zealand to take part in the National Digital Forum of New Zealand (#ndf2011), which was held at the national museum of New Zealand, Te Papa. Following the conference, the amazing team at Digital NZ hosted and organized a Linked Open Data in Libraries, Archives & Museums unconference (#lodlam). The two events were well attended by Kiwis as well as a large number of international attendees from Australia, and a few from as far as the US, UK and Germany.

When it comes to innovative digital initiatives in cultural heritage, the rest of the world has been looking to New Zealand and Australia for some time. Federated metadata exchange and search have been happening across institutions in projects like Digital NZ and Trove. I was able to learn more about the Digital NZ APIs as well as those from Museum Victoria, the Powerhouse Museum, and State Records New South Wales. In fact, the remarkable proliferation of APIs in Australasia has allowed us to consider the possibilities of Linked Open Data to harvest and build upon data held in databases in multiple institutions.

Given the extent to which tools for opening access to data have been developed here, I was surprised by the level of frustration that exists around copyright issues. There’s a clear sense that government is moving too slowly in making materials available to the public with open licensing. We talked a lot about the idea of separately licensing metadata and assets (i.e. information about a photo vs the digital copy of the photo), as has been happening across Europe and increasingly in the United States. There are strong advocates within the GLAM sector (galleries, libraries, archives & museums) here, and demonstrating use cases utilizing openly licensed metadata will go far in helping to move those conversations forward with policy makers.

To that end, a session was convened to explore the possibilities of an international LODLAM project focused on World War I, the centennial commemoration of which is fast approaching. The Civil War Data 150 project we’ve been slowly moving forward in the US may provide a rough framework to build from. At least half a dozen libraries, archives and museums have already expressed interest in participating in a WWI project. First steps may be identifying openly licensed datasets to be contributed, key vocabularies and ontologies to apply, and ideas for visualizations that would leverage the use of Linked Open Data. For anything to happen here, someone will need to take the lead in organizing (not me; we’re still trying to build some tools around the Civil War Data 150 concept!). Good notes were posted on the LODLAM blog about the conversation and how to convene future conversations. Anyone who gets involved with this, please spread the word and keep the LODLAM community apprised of your progress and ways to contribute.


We also had a workshop on using Google Refine, led by Carlos Arroyo from the Powerhouse Museum, with props to the FreeYourMetadata crew. Some lively sessions dug into just what Linked Data is and how it works, along with some of its pitfalls and potential. Another session explored the importance and potential of local vocabularies, and how they can contribute to Linked Data implementations. One great example was the vocabularies surrounding Maori artifacts (Taonga) at Te Papa, and how publishing those datasets can aid other museums around the world to better describe and provide digital access to Maori collections.

As I’ve attended various LODLAM meetups since June, I’ve noticed clear momentum from one to another as these conversations progress rapidly, with those further along helping those of us just learning. After LODLAM-DC I realized the importance of including library, archive, and museum vendors in all of these gatherings. At LODLAM-NZ I could see the potential of bringing together developers in the GLAM sector and those utilizing Linked Data in commercial settings. In places like San Francisco, where commercial interests are already leading the charge on Linked Data (which is not a bad thing) and there’s an active Semantic Web developer community, the GLAM sector may be playing catchup. But the sheer number of datasets potentially available as open data coming from the GLAM sector, together with the expertise of managing massive amounts of structured data, creates a space ripe for collaboration and experimentation, and these lines will continue to blur.

Open Humanities Working Group Update

Theodora Middleton - December 20, 2011 in WG Humanities, Working Groups

The following update is from the Open Humanities Working Group, courtesy of James Harriman-Smith. To help you keep up with everything that’s going on across the OKF, we are publishing weekly updates from different Working Groups.

Salvete. Ahem. The latest and most important news from the Open Humanities Working Group is that we now have a blog, intended to help coordinate all of the Foundation’s projects in the humanities: http://humanities.okfn.org. This follows on from the merger of the Open Literature mailing list into the Open Humanities one.

On the site you will find:

If you have a spare moment, please do have a look and give me any feedback you have about content and design. You might also like to tell us about an upcoming event, join our mailing list, or drop in for our next general meeting on Wednesday 18th January at 5pm GMT.

Prizewinning bid in ‘Inventare il Futuro’ Competition

James Harriman-Smith - November 5, 2011 in Annotator, Bibliographic, Featured Project, Free Culture, Ideas and musings, News, OKF Projects, Open Shakespeare, Public Domain, Public Domain Works, Texts, WG Humanities, WG Open Bibliographic Data

By James Harriman-Smith and Primavera De Filippi

On the 11th July, the Open Literature (now Open Humanities) mailing list got an email about a competition being run by the University of Bologna called ‘Inventare il Futuro’, or ‘Inventing the Future’. On the 28th October, having submitted an application on behalf of the OKF, we got an email saying that our idea had won us €3,500 of funding. Here’s how.

The Idea: Open Reading

The competition was looking for “innovative ideas involving new technologies which could contribute to improving the quality of civil and social life, helping to overcome problems linked to people’s lives.” Our entry, in the ‘Cultural and Artistic Heritage’ category, proposed joining the OKF’s Public Domain Calculators and Annotator together, creating a site that would allow users more interaction with public domain texts, and give those texts a greater status online. To quote from our finished application:

Combined, the annotator and the public domain calculators will power a website on which users will be able to find any public domain literary text in their jurisdiction, and either download it in a variety of formats or read it in the environment of the website. If they choose the latter option, readers will have the opportunity of searching, annotating and anthologising each text, creating their own personal response to their cultural literary heritage, which they can then share with others, both through the website and as an exportable text document.

As you can see, with thirty thousand euros for the overall winner, we decided to think very big. The full text, including a roadmap, is available online. Many thanks to Jason Kitkat and Thomas Kandler, who gave up their time to proofread and suggest improvements.

The Winnings: Funding Improvements to OKF Services

The first step towards Open Reading was always to improve the two services it proposed marrying: the Annotator and the Public Domain Calculators. With this in mind, we intend to use our winnings to help achieve the following goals, although more ideas are always welcome:

  • Offer bounties for flowcharts covering the public domain in as-yet-unexamined jurisdictions.
  • Perhaps contribute to the bounties already available for implementing these flowcharts in code.
  • Offer mini-rewards for the identification and assessment of new metadata databases.
  • Modify the annotator store back-end to allow collections.
  • Make importing and exporting annotations easier.

Please don’t hesitate to get in touch if any of this is of interest. An Open Humanities Skype meeting will be held on 20th November 2011 at 3pm GMT.
