Skillshares and Stories: Upcoming Community Sessions

Heather Leson - April 3, 2014 in CKAN, Events, Network, OKF Brazil, OKF Projects, Open Access, Open Knowledge Foundation Local Groups, School of Data

We’re excited to share with you a few upcoming Community Sessions from the School of Data, CKAN, Open Knowledge Brazil, and Open Access. As we mentioned earlier this week, we aim to connect you to each other. Join us for the following events!

What is a Community Session: These online events can take a number of forms: a scheduled IRC chat, a community Google Hangout, a technical sprint, or a hackpad editathon. The goal is to connect the community to learn and share stories and skills.

We held our first Community Session yesterday (see our Community Session notes on the wiki). The remaining April events will be online via G+. These sessions will be public Hangouts on Air. The video will be available on the Open Knowledge YouTube channel after each event. Questions are welcome via Twitter and G+.

All these sessions are Wednesdays at 10:30 – 11:30 am ET/ 14:30 – 15:30 UTC.

Mapping with Ketty and Ali: a School of Data Skillshare (April 9, 2014)

Making a basic map from spreadsheet data: We’ll explore tools like QGIS (a free and open-source Geographic Information System) and TileMill (a tool for designing beautiful interactive web maps). Our guest trainers are Ketty Adoch and Ali Rebaie.
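A common first step before loading anything into these tools is converting spreadsheet rows into a geodata format that QGIS or TileMill can read. Here is a minimal sketch in Python (standard library only; the `name`/`lat`/`lon` column names are our own illustrative assumption, not taken from the session):

```python
import csv
import io
import json

def rows_to_geojson(csv_text):
    """Convert CSV rows with 'name', 'lat', 'lon' columns into a
    GeoJSON FeatureCollection that QGIS or TileMill can load."""
    features = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                # GeoJSON coordinates are [longitude, latitude]
                "coordinates": [float(row["lon"]), float(row["lat"])],
            },
            "properties": {"name": row["name"]},
        })
    return {"type": "FeatureCollection", "features": features}

csv_text = "name,lat,lon\nKampala,0.3476,32.5825\nBeirut,33.8938,35.5018\n"
print(json.dumps(rows_to_geojson(csv_text), indent=2))
```

The resulting FeatureCollection can be saved to a `.geojson` file and opened directly in QGIS.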

To join the Mapping with Ketty and Ali Session on April 9, 2014

Q & A with Open Knowledge Brazil Chapter featuring Everton (Tom) Zanella Alvarenga (April 16, 2014)

Around the world, local groups, Chapters, projects, working groups and individuals connect to Open Knowledge. We want to share your stories.

In this Community Session, we will feature Everton (Tom) Zanella Alvarenga, Executive Director.

Open Knowledge Foundation Brazil is a new Chapter. Tom will share his experiences growing a chapter and community in Brazil. We aim to connect you to community members around the world. We will also open up the conversation to all things Community. Share your best practices!

Join us on April 16, 2014 via G+

Take a CKAN Tour (April 23, 2014)

This week we will give an overview and tour of CKAN – the leading open source open data platform used by the national governments of the US, UK, Brazil, Canada, Australia, France, Germany, Austria and many more. This session will cover why data portals are useful, what they provide and showcase examples and best practices from CKAN’s varied user base! Our special guest is Irina Bolychevsky, Services Director (Open Knowledge Foundation).

Learn and share your CKAN stories on April 23, 2014

(Note: We will share more details about the April 30th Open Access session soon!)

Knowledge Creation to Diffusion: The Conflict in India

Guest - February 28, 2014 in Open Access, Open Research, Open Science

This is a guest post by Ranjit Goswami, Dean (Academics) and (Officiating) Director of Institute of Management Technology (IMT), Nagpur, India. Ranjit also volunteers as one of the Indian Country Editors for the Open Data Census.

Developing nations, and India more than most, increasingly face a challenge in prioritizing their goals. One thing that becomes ever more relevant in this context, in the present age of open knowledge, is the role of subscription journals in the dissemination and diffusion of knowledge in a developing society. Young Aaron Swartz from Harvard made an effort to change this, and it cost him his life; most developed nations have realized that research funded by taxpayers’ money should be made freely available to taxpayers, but awareness of these issues is at quite pathetic levels in India – both at the policy level and among members of the academic community.

Before one looks at the problem, some context is needed. Today, a lot of research is done globally, including some of it in India, and its importance in transforming nations and societies is increasingly getting its due recognition. The quantum of original, application-oriented research applicable specifically to the developing world is a small part of overall global research. Some of it is done locally in India too, in spite of two obvious constraints developing nations face: (1) lack of funds, and (2) lack of capability and/or capacity.

Tax-funded research should be freely available

This article argues that the outcomes of research done in India with Indian taxpayers’ money should be freely available to all Indians, for better diffusion. Unfortunately, present practice is quite the opposite.

The lack of diffusion of knowledge becomes evident in the absence of any planned effort to make research done in the local context available on open platforms. Within the academic community in India, owing to an older mindset in which research credit and importance attach only to publishing papers in journals – often even journals of questionable quality – faculty members are encouraged to publish in subscription journals. Open access journals are considered untouchable. Faculty members mostly do not keep a version of the publication freely accessible – be it on their own institute’s website or elsewhere online. More than 99% of Indian higher educational institutes have no open-access research content on their websites.

At the same time, many academic scams get reported, not least from India, as measuring research contribution is a difficult task. Faculty members often fall prey to the short-cuts of their institute’s research policy in this age of mushrooming journals.

Facing academic challenges

India, in its journey to an open knowledge society, faces diverse academic challenges. Experienced faculty members feel that making their course outlines available in the public domain would lead to others copying from them, whereas younger faculty members see subscription-journal publishing as the only way to build a CV. The common ill-founded perception is that top journals will not accept your paper if you make a version of it freely available. All of the above is counter-productive to knowledge diffusion in a poor country like India. The Government of India has often talked about open course materials, but in most government-funded higher educational institutes one seldom sees even a course outline in the public domain, let alone research output.
The question therefore is: for publicly funded universities and institutes, why should any Indian user have to cough up large sums of money again to access their research output? And it is an open truth that – barring a very few universities and institutes – most Indian colleges, universities, research organizations and even practitioners cannot afford the subscriptions to most well-known journal databases, or the individual articles therein.

It would not be wrong to say that of the thirty-thousand-plus higher educational institutes, not even one per cent has library access comparable to institutes in developed nations. And academic research output, particularly in the social sciences, need not be used only for academic purposes. Practitioners – farmers, practicing doctors, would-be entrepreneurs, professional managers and many others – may benefit from access to this research, but unfortunately almost none of them would be ready or able to shell out $20+ for a few pages after viewing only the abstract, in a country where around 70% of people live on less than $2 a day.

Ranking is given higher priority than societal benefit

Academic contribution to the public domain through open and useful knowledge is therefore a neglected area in India. Over the last few years we have seen OECD nations, as well as China, increasingly encouraging open-access publishing by the academic community; India, in its obsession with university rankings in which most institutes fare poorly, is in reverse gear. The director of one of India’s best institutes has suggested why such obsessions are ill-founded, but perceptions and practices remain quite the opposite.

It is, therefore, not rare to see a researcher receiving additional monetary rewards for publishing in top-category subscription journals, with no attempt whatsoever – be it from the researcher, the institute or policy-makers – to make a copy of that research available online, free of cost. The irony is that this additional reward money again comes from taxpayers.

Unfortunately, these age-old policies and practices are appreciated by media and policy-makers alike, as the nation desperately wants to show the world that it publishes in subscription journals. The point here is: there is nothing wrong with publishing in journals – encourage it even more for top journals – but also make a copy freely available online to any of the billion-plus Indians who may need that paper.

Incentives to produce usable research

In the case of India, particularly in its publicly funded academic and research institutes, we have neither been able to produce many top-category subscription-journal papers, nor have we been able to make whatever research output we do generate freely available online. On the quality of management research, The Economist stated in a recent article that faculty members worldwide ‘have too little incentive to produce usable research. Oceans of papers with little genuine insight are published in obscure periodicals that no manager would ever dream of reading.’ This fits India perfectly too. It is high time we looked at the real impact of management and social science research, rather than journal impact factors. Real impact is bigger when papers are openly accessible.

Developing and resource-deficit nations like India, which need open access the most, thereby lose out further in the present knowledge economy. It is time that government and the academic community recognized the problem, and ensured that locally done research is not merely published for academic referencing, but made available for use by any other researcher or practitioner in India, free of cost.

Knowledge creation is important. Equally important is the diffusion of that knowledge. In India, efforts and resources have been deployed on knowledge creation, without integrative thinking about its diffusion. In the age of the Internet and open access, this needs to change.

Prof. Ranjit Goswami is Dean (Academics) and (Officiating) Director of the Institute of Management Technology (IMT), Nagpur – a leading private B-School in India. IMT also has campuses in Ghaziabad, Dubai and Hyderabad. He is on Twitter as @RanjiGoswami.

Copyright and Open Access 2014

Michelle Brook - January 15, 2014 in Featured, Open Access

This post is a guest post by Michelle Brook and Tom Olijhoek from the Open Knowledge Foundation Open Access Working Group.

This week has been proclaimed Copyright week by the EFF (Electronic Frontier Foundation) and today, Wednesday Jan 15, is Open Access Day 2014.

It is almost exactly one year since Aaron Swartz (http://en.wikipedia.org/wiki/Aaron_Swartz) died in the middle of his struggle for open knowledge, and it would be a good thing to make this week, and in particular Open Access Day, a recurring event in his honor.

The open access movement has gained momentum in the past year, and too much has happened to list everything. Instead, let’s focus on a few key events and developments.

In 2013 the White House issued a directive stating that all publicly funded research should be made publicly available in repositories. The reaction of the scientific publishers has been to allow this, but on the condition of an embargo period of 6 months or 1 year. Many thought this would be a necessary transitional measure, but they have recently been proven very wrong in this assumption: a powerful lobby of publishers is now demanding embargo periods of up to 3 years!

In our opinion any embargo on making publications open access is the wrong thing to do: it is in the interest of neither science nor society, and seems designed only to protect the rights of the publishers in order to maintain their profits. Any paper, especially in the Science, Technology, Engineering and Maths disciplines, refers to work done at least 1-2 years previously. Combined with the inherently fast pace of science, any embargo period – especially a prolonged one – makes sharing the information less useful and less efficient by stretching this time span further. Instead we should strive for zero-embargo publication and push for SHORTER review and handling times, which can currently be as long as 6 months!

We should remember Open Access is not only about having information freely available to view. People should also be able to reuse the information freely, with no restrictions other than the requirement to attribute. Instead of traditional copyright rules and property rights, open access publishers increasingly use a set of licenses developed by Creative Commons. These licenses provide a basic choice of rules for the usage of the work, combined with the stringent demand that the work be attributed to its original author(s). In this way copyright remains (forever) with the author while allowing unrestricted (or, in other cases, somewhat restricted) use of the information.

The original copyright rules that evolved around 1700 (the Statute of Anne) were developed to protect the rights of the owner of a work for a limited time (2 × 14 years) in exchange for the work entering the public domain after this period. So in a sense these rules were aimed at allowing information to be shared. Because information did not travel that fast in those days, this ‘embargo period’ was then considered sufficient. When, through technical advancement, information started to move more quickly, the copyright term was instead gradually extended to 70 years and more (Copyright, Designs and Patents Act 1988). In the process, ownership shifted from individual copyright to corporate copyright held by publishing businesses. Copyright law no longer reflected the ultimate goal of sharing information after a short period of time, but instead took on a new role of defending business interests for as long as possible.

Today, thanks to the invention of the Internet, we are seeing the making of a sharing economy. Many sharing communities already exist, but the community of sharing scientists is slow in coming. Although the internet was developed by scientists to exchange information, the public has been much quicker to see and use its possibilities for sharing ideas, goods and information. Sharing of scientific information is still in its infancy, not least because of the ongoing efforts of traditional publishers to shield information for as long as it is profitable, but open science communities have started to form all over the world. This can be seen in the rapid growth of the Open Knowledge Foundation, with over 40 local open knowledge communities worldwide – many more than only two years ago. It is also illustrated by the steady growth of older open access publishers like PLoS and BioMedCentral, as well as the very successful introduction of new journals like eLife and PeerJ.

Political and scientific support is also growing. The next European research programme, Horizon 2020, aims at 100% open access for all publicly funded research. And a scientific society like the Max Planck Society has just organized the tenth anniversary of its Berlin conference on open access.

However, political and scientific support is not enough. We want citizens, students, entrepreneurs, and everyone else who needs (specific) information to push for global open access to all academic literature. And we need your help to do this.

  • You can contact the Open Knowledge Foundation by registering on the website
  • You can subscribe to any of the mailing lists of the OKF, for instance the open access list, and take part in discussions
  • You can share your stories on difficulties or success with accessing information on the website WhoNeedsAccess
  • You can download the OpenAccessButton and start registering where you hit paywalls when trying to access information

Tom Olijhoek and Michelle Brook from the Open Access Working Group / OKF

PDF Liberation Hackathon – January 18-19

Guest - December 19, 2013 in Events, Featured, Open Access, Open Content, Sprint / Hackday

This guest blog post has been written by Marc Joffe, of Public Sector Credit Solutions.

Open government data is valuable only to the extent that it can be used cost-effectively. When governments provide “open data” in the form of voluminous PDFs they offer the appearance of openness without its benefits. In this situation, the open government movement has two options: demand machine-readable data, or hack the PDFs – using technology to liberate the interesting data from them. The two approaches are complementary; we can pursue both at the same time.

When it comes to liberating data from PDFs, advanced technologies are available but expensive. In my previous life as a technology manager at a financial firm, I was given the opportunity to purchase a sophisticated PDF extraction tool for USD 200,000 – not counting annual maintenance and implementation consulting costs.

This amount is beyond the reach of just about every startup and non-profit in the open data world. It is also beyond the means of most media organizations, so lowering the cost of PDF extraction is also a priority for journalists. The data journalism community has responded by developing software to harvest usable information from PDFs. Tabula, a tool written by Knight-Mozilla OpenNews Fellow Manuel Aristarán, extracts data from PDF tables in a form that can be readily imported to a spreadsheet – if the PDF was “printed” from a computer application. Introduced earlier this year, Tabula continues to evolve thanks to the volunteer efforts of Manuel, with help from OpenNews Fellow Mike Tigas and New York Times interactive developer Jeremy Merrill. Meanwhile, DocHive, a tool whose continuing development is being funded by a Knight Foundation grant, addresses PDFs that were created by scanning paper documents. DocHive is a project of Raleigh Public Record and is led by Charles and Edward Duncan.

These open source tools join a number of commercial offerings such as Able2Extract and ABBYY Fine Reader that extract data from PDFs. A more comprehensive list of open source and commercial resources is available here.

Unfortunately, the free and low cost tools available to data journalists and transparency advocates have limitations that hinder their ability to handle large scale tasks. If, like me, you want to submit hundreds of PDFs to a software tool, press “Go” and see large volumes of cleanly formatted data, you are out of luck. These limits reduce our ability to analyze and report on Parliamentary/Congressional financial disclosures, campaign contribution records and government budgets – which often arrive in volume, in PDF form.

PDF hacking has uses outside the government transparency / data journalism nexus. As Peter Murray-Rust has argued, the progress of science is being retarded because valuable data are “jailed” within PDF journal articles. For this reason, Dr. Murray-Rust and several colleagues have been developing AMI – a tool that leverages Apache PDFBox to mine usable content from scientific documents.
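To give a flavour of what this “liberation” involves: text extractors typically preserve a PDF table only as whitespace-aligned columns, which then have to be parsed back into rows. A minimal sketch in Python (the sample text is hypothetical, and real extractor output is far messier):

```python
import re

def parse_aligned_table(text):
    """Split lines of whitespace-aligned text (as typically produced by
    PDF text extractors) into rows of cells, treating runs of two or
    more spaces as the column separator."""
    rows = []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        rows.append(re.split(r"\s{2,}", line.strip()))
    return rows

# Hypothetical extractor output for a small budget table
extracted = """Department        2012 Budget    2013 Budget
Education         1,200          1,350
Health              950          1,010"""

for row in parse_aligned_table(extracted):
    print(row)
```

Tools like Tabula automate a far more robust version of this step, using the positions of text fragments on the page rather than raw spacing.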

Whether your motive is to improve government, lower the cost of data journalism or free scientific data, you are welcome to join The PDF Liberation Hackathon on January 18-19, 2014 – sponsored by The Sunlight Foundation, Knight-Mozilla OpenNews and others. We’ll have hack sites at the NYU-Poly Incubator in New York, Chicago Community Trust, Sunlight’s Washington DC office and at RallyPad in San Francisco (one or two locations will have an opening social on the evening of the 17th). Developers can also join remotely because we will publish a number of clearly specified PDF extraction challenges before the hackathon.

Participants can work on one of the pre-specified challenges or choose their own PDF extraction projects. Ideally, hackathon teams will use (and hopefully improve upon) open source tools to meet the hacking challenges, but they will also be allowed to embed commercial tools into their projects as long as their licensing cost is less than $1000 and an unlimited trial is available.

Prizes of up to $500 will be awarded to winning entries. To receive a prize, a team must publish their source code on a GitHub public repository. To join the hackathon in DC or remotely, please sign up at Eventbrite; to hack with us in SF, please sign up via this Meetup. Signup links for New York and Chicago will be posted here. Please also complete our Google Form survey.

The PDF Liberation Hackathon is going to be a great opportunity to advance the state of the art when it comes to harvesting data from public documents. I hope you can join us.

Open Access Week 2013!

Michelle Brook - October 24, 2013 in Open Access

Happy Open Access Week!

Open Access Week is a global event celebrating open access. Taking place in the last full week of October every year, it features many events online and offline which bring together people who care about Open Access, and provide an opportunity to spread the good word.

There’s a lot going on this year!

  • There are a huge number of events taking place – so look out to see if there is one near you!
  • If there aren’t any events nearby, or you can’t get out, many of these events will be streamed (a list of these can be found here)
  • Follow the Twitter conversation on the hashtags #oaweek and #openaccess
  • The Guardian is hosting a live chat about the future of Open Access research and publishing on Friday
  • The ASAP (Accelerating Science Award Programme) winners have been announced (Big congratulations to the winners!)

There are some great blog posts and articles emerging on Open Access. Let us know in the comments if we’ve missed any articles or news that you think we should be sharing!

As many people probably know, the Open Knowledge Foundation cares deeply about open access to research outputs – as defined by the Budapest Open Access Initiative and in alignment with the Open Knowledge Definition. From the Panton Principles and Panton Fellowships through to the Open Science and Open Access working groups, the community around the Open Knowledge Foundation is recognised as being highly involved in the push for greater openness around scholarship and research!

We’re going to be doing much more to support our community in the advocacy of open access over the coming months. Please sign up to let us know how we can best support you, and what type of tools and resources will help you!

Photo credit: Flickr user slubdresden

EC Consultation on open research data

Sander van der Waal - July 16, 2013 in Access to Information, Open Access, Open Data

The European Commission held a public consultation on open access to research data on July 2 in Brussels, inviting statements from researchers, industry, funders, IT and data centre professionals, publishers and libraries. The input of these stakeholders will play a role in revising the Commission’s policy and is particularly important for the ongoing negotiations on the next big EU research programme, Horizon 2020, in which about 25-30 billion euros will be available for academic research. Five questions formed the basis of the discussion:

  • How can we define research data, and what types of research data should be open?
  • When and how does openness need to be limited?
  • How should the issue of data re-use be addressed?
  • Where should research data be stored and made accessible?
  • How can we enhance “data awareness” and a “culture of sharing”?

Here is how the Open Knowledge Foundation responded to the questions:

How can we define research data and what types of research data should be open?

Research data is extremely heterogeneous, and would include (although not be limited to) numerical data, textual records, images, audio and visual data, as well as custom-written software, other code underlying the research, and pre-analysis plans. Research data would also include metadata – data about the research data itself – including uncertainties and methodology, versioned software, standards and other tools. Metadata standards are discipline-specific, but to be considered ‘open’, metadata would at a bare minimum be expected to provide sufficient information that a fellow researcher in the same discipline could interpret and reuse the data, and to be itself openly available and machine-readable. Here, we are specifically concerned with data that is being produced, and therefore can be controlled by the researcher, as opposed to data the researcher may use that has been produced by others.

When we talk about open research data, we are mostly concerned with data that is digital, or the digital representation of non-digital data. While primary research artifacts, such as fossils, have obvious and substantial value, the extent to which they can be ‘opened’ is not clear. However, 3D scanning techniques can and should be used to capture many physical features or an image, enabling broad access to the artifact. This would benefit both researchers who are unable to travel to visit a physical object, and interested citizens who would typically be unable to access such an item.

By default there should be an expectation that all types of research data that can be made public, including all metadata, should be made available in machine-readable form and open as per the Open Definition. This means the data resulting from public work is free for anyone to use, reuse and redistribute, with at most a requirement to attribute the original author(s) and/or share derivative works. It should be publicly available and licensed with this open license.
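As an illustrative sketch of what ‘machine-readable and open as per the Open Definition’ might look like in practice, the Python snippet below emits a minimal metadata record as JSON. The field names are our own examples, not a formal standard; a real deposit would follow a discipline-specific metadata schema:

```python
import json

# Illustrative metadata record for an openly licensed dataset.
# Field names are examples only, not a formal standard.
metadata = {
    "title": "River water quality measurements, 2010-2012",
    "description": "Monthly sensor readings from 12 sampling sites.",
    "license": "CC-BY-4.0",   # an open license per the Open Definition
    "format": "CSV",          # a machine-readable format
    "methodology": "Probes calibrated weekly; see accompanying protocol.",
    "version": "1.2",
    "creators": ["A. Researcher"],
}

print(json.dumps(metadata, indent=2))
```

The point is not the particular fields but that the record is structured, carries an explicit open license, and can be read by software as well as by a fellow researcher.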

When and how does openness need to be limited?

The default position should be that research data should be made open in accordance with the Open Definition, as defined above. However, while access to research data is fundamentally democratising, there will be situations where the full data cannot be released; for instance for reasons of privacy.

In these cases, researchers should share analysis under the least restrictive terms consistent with legal requirements, abiding by the research ethics dictated by the terms of the research grant. This should include opening up non-sensitive data, summary data, metadata and code, and providing access to the original data to those who can ensure that appropriate measures are in place to mitigate any risks.

Access to research data should not be limited by the introduction of embargo periods, and arguments in support of embargo periods should be considered a reflection of inherent conservatism among some members of the academic community. Instead, the expectation should be that data is to be released before the project that funds the data production has been completed; and certainly no later than the publication of any research output resulting from it.

How should the issue of data re-use be addressed?

Data is only meaningfully open when it is available in a format and under an open license which allow re-use by others. But simply making data available is often not sufficient for reusing it. Metadata must be provided with sufficient documentation to enable other researchers to replicate empirical results.

There is a role here for data publishers and repository managers to endeavour to make the data usable and discoverable by others. This can be by providing further documentation, the use of standard code lists, etc., as these all help make data more interoperable and reusable. Submission of the data to standard registries and use of common metadata also enable greater discoverability. Interoperability and the availability of data in machine-readable form are crucial to ensure data-mining and text-mining of the data can be performed, a form of re-use that must not be restricted.

Arguments are sometimes made that we should monitor levels of data reuse, to allow us to dynamically determine which data sets should be retained. We reject this suggestion. There is a moral responsibility to preserve data created with taxpayer funds, including data that represents negative results or that is not obviously linked to publications. It is impossible to predict all future uses, and reuse opportunities may exist that are not immediately obvious. It is also crucial to note that research interests change over time.

Where should research data be stored and made accessible?

Each discipline needs different options available to store data and open it up to their community and the world; there is no one-size-fits-all solution. The research data infrastructure should be based on open source software and interoperable based on open standards. With these provisions we would encourage researchers to use the data repository that best fits their needs and expectations, for example an institutional or subject repository. It is crucial that appropriate metadata about the data deposited is stored as well, to ensure this data is discoverable and can be re-used more easily.

Both the data and the metadata should be openly licensed. They should be deposited in machine-readable and open formats, similar to how the US government mandates this in its Executive Order on Government Information. This makes it possible to link repositories and data across various portals and makes the data easier to find. For example, the open source data portal CKAN, developed by the Open Knowledge Foundation, enables the depositing of data and metadata and makes it easy to find and re-use data. Various universities, such as the Universities of Bristol and Lincoln, already use CKAN for these purposes.
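As a small illustration of why machine-readable deposits matter, CKAN exposes its catalogue through an Action API whose `package_search` call returns JSON. The sketch below builds such a query and parses a response offline; the demo site URL is an assumption, and the live request is left commented out:

```python
import json
import urllib.parse
import urllib.request

# Assumption: any public CKAN instance; this demo URL is illustrative.
CKAN_SITE = "https://demo.ckan.org"

def package_search_url(site, query, rows=5):
    """Build a CKAN Action API 'package_search' request URL."""
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    return f"{site}/api/3/action/package_search?{params}"

def dataset_titles(response_json):
    """Pull dataset titles out of a package_search JSON response."""
    body = json.loads(response_json)
    if not body.get("success"):
        raise RuntimeError("CKAN API call failed")
    return [pkg["title"] for pkg in body["result"]["results"]]

url = package_search_url(CKAN_SITE, "budget")
print(url)

# Uncomment to query a live portal:
# with urllib.request.urlopen(url) as resp:
#     print(dataset_titles(resp.read().decode()))
```

Because every CKAN portal answers the same API in the same JSON shape, a script like this works unchanged against any of the government portals mentioned above.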

How can we enhance data awareness and a culture of sharing?

Academics, research institutions, funders, and learned societies all have significant responsibilities in developing a culture of data sharing. Funding agencies and organisations disbursing public funds have a central role to play and must ensure research institutions, including publicly supported universities, have access to appropriate funds for longer-term data management. Furthermore, they should establish policies and mandates that support these principles.

Publication and, more generally, sharing of research data should be ingrained in academic culture, and should be seen as a fundamental part of scholarly communication. However, it is often seen as detrimental to a career, partly as a result of the current incentive system set up by universities and funders, partly as a result of much misunderstanding of the issues.

Educational and promotional activities should be set up to promote awareness of open access to research data amongst researchers, to help disentangle the many myths, and to encourage them to self-identify as supporting open access. These activities should be set up in recognition of the fact that different disciplines are at different stages in developing a culture of sharing. Simultaneously, universities and funders should explore options for creating incentives to encourage researchers to publish their research data openly. Acknowledgements of research funding, traditionally limited to publications, could be extended to research data, and the contribution of data curators should be recognised.

The White House Seeks Champions of Open Science

Ross Mounce - May 8, 2013 in Open Access, Open Science, WG Open Data in Science

Here at the Open Knowledge Foundation, we know Open Science is tough, but ultimately rewarding. It requires courage & leadership to take the open path in science.

Nearly a week ago, on the open-science mailing list, we started putting together a list of established scientists who have in some way or another made significant contributions to open science or lent their esteemed reputation to calls for increased openness in science. Our open list now has over 130 notable scientists, among whom 88 are Nobel prize winners.

In an interesting parallel development, the White House has just put out a call to help identify “Open Science” Champions of Change — outstanding individuals, organizations, or research projects promoting and using open scientific data for the benefit of society.

Anyone can nominate an Open Science candidate for consideration by May 14, 2013.

What more proof do we need that open science is both good and valued by society? This marks a tremendous validation of the open science movement. The US government is not seeking to reward just any scientist; only open scientists actively working to change the world for the better will win this recognition.

We’re still a long way from Open Science being the norm in science. But perhaps now we’re a crucial step closer to widespread recognition that Open Science is good, and could be the norm in the future. We eagerly await the unveiling of the winning Open Science champions at the White House on 20 June later this year.

Science Europe denounces ‘hybrid’ Open Access

Ross Mounce - May 2, 2013 in Open Access, Open Science, WG Open Data in Science

Recently Science Europe published a clear and concise position statement titled:
Principles on the Transition to Open Access to Research Publications

This is an extremely timely & important document that clarifies what governments and research funders should expect during the transition to open access. Unlike the recent US OSTP public access policy, which allows publishers to apply up to a 12-month access embargo on publicly funded research (to the disgust of some scientists, like Michael Eisen), this new Science Europe statement makes clear that an embargo of at most 6 months should be accepted for publicly funded STEM research. The recent RCUK (UK research councils) open access policy also requires at most a 6-month embargo, with some caveats.

But among the many excellent principles is a particularly bold and welcome proclamation:

the hybrid model, as currently defined and implemented by publishers, is not a working and viable pathway to Open Access. Any model for transition to Open Access supported by Science Europe Member Organisations must prevent ‘double dipping’ and increase cost transparency

Hybrid options are typically far more expensive than publishing in ‘pure’ open access journals, and they do little to aid transparency or the wider transition to open access.

The Open Knowledge Foundation heartily endorses these principles: together with the above, they respect and reinforce the need for free access AND full re-use rights to scientific research.


About Science Europe:

Science Europe is an association of European Research Funding Organisations and Research Performing Organisations, based in Brussels. At present Science Europe comprises 51 Research Funding and Research Performing Organisations from 26 countries, representing around €30 billion per annum.

Open Research Data Handbook – Call for Case Studies

Velichka Dimitrova - April 9, 2013 in Featured, OKF Projects, Open Access, Open Science

The OKF Open Research Data Handbook – a collaborative and volunteer-led guide to Open Research Data practices – is beginning to take shape and we need you! We’re looking for case studies showing benefits from open research data: either researchers who have personal stories to share or people with relevant expertise willing to write short sections.

We’re looking to develop a resource, designed as an introduction to open research data, that will explain what open research data actually is, the benefits of opening it up, and the processes and tools researchers need to do so, with examples from different academic disciplines.

Leading on from a couple of sprints, a few of us are in the process of collating the first few chapters, and we’ll be asking for comment on these soon.

In the meantime, please provide us with case studies to include, or let us know if you are willing to contribute areas of expertise to this handbook.

We now need your help to gather concrete case studies which detail your experiences of working with Open Research Data. Specifically, we are looking for:

  • Stories of the benefits you have seen as a result of open research data practices
  • Challenges you have faced in open research, and how you overcame them
  • Case studies of tools you have used to share research data or to make it openly available
  • Examples of how failing to follow open research practices has hindered the progress of science, economics, social science, etc.
  • … More ideas from you!

Case studies should be around 200-500 words long. They should be concrete, based on real experiences, and should focus on one specific angle of open research data (you can submit more than one study!).

Please fill out the following form in order to submit a case study:

Link to form

If you have any questions, please contact us on researchhandbook [at] okfn.org

Will Obama’s new $100m brain mapping project be open access?

Jonathan Gray - April 4, 2013 in Open Access, Open Science, Policy

On Tuesday President Obama unveiled a new $100 million research initiative to map the human brain.

The BRAIN (Brain Research Through Advancing Innovative Neurotechnologies) initiative will “accelerate the development and application of new technologies that will enable researchers to produce dynamic pictures of the brain that show how individual brain cells and complex neural circuits interact at the speed of thought”.

As well as trying to vastly improve scientific understanding of “the three pounds of matter that sits between our ears”, it is hoped that this research will enable new forms of prevention and treatment for conditions like Alzheimer’s, Parkinson’s, autism and epilepsy.

In his speech, Obama made several comparisons between the BRAIN initiative and the Human Genome Project, an initiative which saw unprecedented international collaboration and data sharing between research centres around the world to map the tens of thousands of genes of the human genome. Dr Francis Collins, who led the Human Genome Project and is the current Director of the National Institutes of Health, spoke alongside Obama at the announcement.

While there has been no explicit announcement about whether or not the BRAIN initiative will be open access (and while there are obviously difficult ethical and privacy issues in this field), we hope that it will follow in the footsteps of the Human Genome Project’s pioneering approach to data sharing – which saw data being placed into the public domain by default, without restrictions on its use and redistribution. This helped to minimise duplication, maximise synergy and ultimately to accelerate the pace of research in this area.

There was a rival initiative to the Human Genome Project from a private company called Celera, which aimed to create its own subscription database of the human genome, and to patent over 300 genes. Martin Bobrow, a representative for the Human Genome Project, later said: “Celera’s requirements seemed to amount to them establishing an effective monopoly over the human genome.” If they had succeeded the consequences to scientific research and innovation in this area could have been devastating.

With its mixture of public and private investment and public and private research organisations, all with different interests and different approaches to sharing, there is a danger that Obama’s new brain mapping initiative could fracture into silos of separate researchers and groups, duplicating work, competing against each other and claiming exclusive control and commercialisation over the fruits of their research.

Given the strong focus on US innovation in President Obama’s speech, it is also not clear how the initiative will collaborate with other initiatives such as the European Commission’s recently announced €1 billion Human Brain Project, which looks to have at least some overlapping aims and goals to Obama’s new initiative.

While the EC will be requiring open access to research funded by their Horizon 2020 programme, it is not yet clear whether the U.S. Office of Science and Technology Policy (OSTP)’s recent announcement in support of open access will apply to the BRAIN initiative.

We hope that President Obama’s new brain mapping initiative will adopt a strong and principled commitment to open access and international collaboration, so that the world can benefit from accelerated and more impactful research around mapping the human brain, just as it has with the human genome.

If you’re interested in following our work in this area, you can join our open-science discussion list by filling in your details in the form below:





