
First Open Economics International Workshop Recap

Velichka Dimitrova - January 28, 2013 in Access to Information, Events, Featured, Open Access, Open Data, Open Economics, Open Standards, Our Work, WG Economics, Workshop

The first Open Economics International Workshop brought together 40 academic economists, data publishers, funders of economics research, researchers and practitioners for a two-day event at Emmanuel College in Cambridge, UK. The aim of the workshop was to build a shared understanding of the value of open data and open tools for the economics profession, the obstacles to opening up information, and the role of greater openness in academia. The event was organised by the Open Knowledge Foundation and the Centre for Intellectual Property and Information Law and was supported by the Alfred P. Sloan Foundation. Audio and slides are available on the event’s webpage.

Open Economics Workshop

Setting the Scene

The Setting the Scene session put “Open Economics” in the context of the knowledge society, drawing on examples from outside the discipline and discussing reproducible research. Rufus Pollock (Open Knowledge Foundation) emphasised that change is both necessary and full of potential for economics: 1) opening up “core” economic data outside the academy, 2) making open the default for data in the academy, and 3) real growth in citizen economics and outside participation. Daniel Goroff (Alfred P. Sloan Foundation) drew attention to the work of the Alfred P. Sloan Foundation in emphasising the importance of knowledge and its use for making decisions, and of data and knowledge as a non-rival, non-excludable public good. Tim Hubbard (Wellcome Trust Sanger Institute) spoke about the potential of large-scale data collection around individuals for improving healthcare, and about how centralised global repositories work in the field of bioinformatics. Victoria Stodden (Columbia University / RunMyCode) stressed the importance of reproducibility for economic research, as an essential part of scientific methodology, and presented the RunMyCode project.

Open Data in Economics

The Open Data in Economics session was chaired by Christian Zimmermann (Federal Reserve Bank of St. Louis / RePEc) and covered several projects and ideas from various institutions. The session examined examples of open data in economics and asked whether these examples are sustainable and can be implemented in other contexts: in other words, whether the right incentives exist. Paul David (Stanford University / SIEPR) characterised open science as a system that is better than any other at the rapid accumulation of reliable knowledge, whereas proprietary systems are very good at extracting rent from existing knowledge. A balance between these two systems should be established so that they can work within the same organisational framework, since separately each is distinctly suboptimal. Johannes Kiess (World Bank) underlined that having the data available is often not enough: “It is really important to teach people how to understand these datasets: data journalists, NGOs, citizens, coders, etc.”. The World Bank has implemented projects to incentivise the use of its data and is helping countries to open up their data. For economists, he mentioned, a valuable dataset to publish on is an important asset, so the incentives for sharing are insufficient.

Eustáquio J. Reis (Institute of Applied Economic Research – Ipea) related his experience of establishing the Ipea statistical database and other projects for historical data series and data digitisation in Brazil. He observed that the culture of the economics community is not one of collaboration, in which people willingly share or support and encourage data curation. Sven Vlaeminck (ZBW – Leibniz Information Centre for Economics) spoke about the EDaWaX project, which conducted a study of the data-availability policies of economics journals and will establish a publication-related data archive for an economics journal in Germany.

Legal, Cultural and other Barriers to Information Sharing in Economics

The session presented different impediments to the disclosure of data in economics from the perspectives of two lawyers and two economists. Lionel Bently (University of Cambridge / CIPIL) drew attention to the fact that a whole range of legal mechanisms operate to restrict the dissemination of information, while another range of mechanisms helps to make information available. He questioned whether the open data standard would always be the optimal way to produce high-quality economic research, or whether there is also a place for modulated, intermediate positions where data is available only on certain conditions, only in part, or only for certain forms of use. Mireille van Eechoud (Institute for Information Law) described the EU Public Sector Information Directive – the most generic instrument related to open government data – and the progress made in opening up information published by governments. She also pointed out that legal norms have only limited value without the internalised cultural attitudes and structures that actually make greater access to information work.

David Newbery (University of Cambridge) presented an example from the electricity markets and insisted that a good supply of data requires informed demand, coming from regulators who are charged to monitor markets, detect abuse, uphold fair competition and defend consumers. John Rust (Georgetown University) said that the government is an important provider of data that is otherwise too costly to collect, yet a number of issues remain, including confidentiality, excessive bureaucratic caution and the public finance crisis. There are also many research opportunities in the private sector, where some of the data can be made available (redacting confidential information), and the public non-profit sector can have a tremendous role as a force to organise markets for the better, set standards and focus on targeted domains.

Current Data Deposits and Releases – Mandating Open Data?

The session was chaired by Daniel Goroff (Alfred P. Sloan Foundation) and brought together funders and publishers to discuss their role in requiring data from economic research to be publicly available and the importance of dissemination for publishing.

Albert Bravo-Biosca (NESTA) emphasised that mandating open data begins much earlier in the process: funders can encourage the collection of particular data by the government, which forms the basis for research, and can also act as an intermediary for the release of open data by the private sector. Open data is interesting, but it is even more interesting when appropriately linked and combined with other data, and there is value in examples and case studies for demonstrating benefits. Some caution is needed, however, as opening up some data might result in less data being collected.

Toby Green (OECD Publishing) made a point of the difference between posting and publishing: making content available does not always mean that it will be accessible, discoverable, usable and understandable. In his view, the challenge is to build up an audience by putting content where people will find it, which is very costly, as proper dissemination is expensive. Nancy Lutz (National Science Foundation) explained the scope and workings of the NSF and the data management plans required from all economists applying for funding. Creating and maintaining data infrastructure, and complying with the data management policy, might eventually mean less funding for other economic research.

Trends of Greater Participation and Growing Horizons in Economics

Chris Taggart (OpenCorporates) chaired the session, which introduced different ways of participating in and using data, and different audiences and contributors. He stressed that data is being collected in new ways and by different communities; that access to data can be an enormous privilege and can generate data gravities, with very unequal access and power to make use of data and to generate more of it; and that analysis is sometimes being done in new and unexpected ways and by unexpected contributors. Michael McDonald (George Mason University) related how the highly politicised process of drawing up district lines in the U.S. (also called gerrymandering) could be made much more transparent through an open-source redistricting process with meaningful participation, allowing for an open conversation about public policy. He also underlined the importance of common data formats and told a cautionary tale about a group of academics misusing open data with a political agenda to encourage a storyline that a candidate would win a particular state.

Hans-Peter Brunner (Asian Development Bank) shared a vision of how open data and open analysis can aid decision-making about investments in infrastructure, connectivity and policy. Simulation models of investments can demonstrate different scenarios according to investment priorities and crowd-sourced ideas. Hans-Peter asked for feedback and input on how to make data and code available. Perry Walker (new economics foundation) spoke about conversation, noting that a good conversation has to be designed, as it rarely happens by accident. Rufus Pollock (Open Knowledge Foundation) concluded with examples of citizen economics and the growth of contributions from the wider public, particularly through volunteer computing and volunteer thinking as ways of getting engaged in research.

During two sessions, the workshop participants also worked on a Statement of Open Economics Principles, which will be revised with further input from the community and made public at the second Open Economics workshop, taking place on 11-12 June in Cambridge, MA.

Open Research Data Handbook Sprint – 15-16 February

Velichka Dimitrova - January 16, 2013 in Events, Featured, Open Data Handbook, Open Economics, Open Science, Open Standards, Sprint / Hackday, WG Development, WG Economics, WG Open Bibliographic Data, WG Open Data in Science

On February 15-16, the Open Research Data Handbook Sprint will take place at the Open Data Institute, 65 Clifton Street, London EC2A 4JE.

The Open Research Data Handbook aims to provide an introduction to the processes, tools and other areas that researchers need to consider to make their research data openly available.

Join us for a book sprint to develop the current draft, and explore ways to remix it for different disciplines and contexts.

Who it is for:

  • Researchers interested in carrying out their work in more open ways
  • Experts on sharing research and research data
  • Writers and copy editors
  • Web developers and designers to help present the handbook online
  • Anyone else interested in taking part in an intense and collaborative weekend of action

Register at Eventbrite

What will happen:

The main sprint will take place on Friday and Saturday. After initial discussions we’ll divide into open space groups to focus on research, writing and editing for different chapters of the handbook, developing a range of content including How To guidance, stories of impact, collections of links and decision tools.

A group will also look at digital tools for presenting the handbook online, including ways to easily tag content for different audiences and remix the guide for different contexts.

Agenda:

Week before & after:

  • Calling for online contributions and reviews

Friday:

  • Lunchtime seminar on open research data (bring your own lunch)
  • From 2pm: planning and initial work on the handbook in small teams (optional)

Saturday:

  • 10:00 – 10:30: Arrive and coffee
  • 10:30 – 11:30: Introducing open research – lightning talks
  • 11:30 – 13:30: Forming teams and starting sprint. Groups on:
    • Writing chapters
    • Decision tools
    • Building website & framework for book
    • Remixing guide for particular contexts
  • 13:30 – 14:30: Lunch
  • 14:30 – 16:30: Working in teams
  • 17:30 – 18:30: Report back
  • 18:30 – …… : Pub

Partners:

OKF Open Science Working Group – creators of the current Open Research Data Handbook
OKF Open Economics Working Group – exploring the economics aspects of open research
Open Data Research Network – exploring a remix of the handbook to support open social science research in a new global research network, focused on research in the Global South
Open Data Institute – hosting the event

The Statistical Memory of Brazil

Velichka Dimitrova - January 14, 2013 in Open Data, Open Economics, WG Economics

This blog post is written by Eustáquio Reis, Senior Research Economist at the Institute of Applied Economic Research (Ipea) in Brazil and member of the Advisory Panel of the Open Economics Working Group. It is cross-posted from the Open Economics Blog.


The project Statistical Memory of Brazil aims to digitize and to make freely available and downloadable the rare book collections of the Library of the Minister of Finance in Rio de Janeiro (BMF/RJ). The project focuses on the publications containing social, demographic, economic and financial statistics for the nineteenth and early twentieth century Brazil. At present, approximately 1,500 volumes, 400,000 pages and 200,000 tables have been republished.

Apart from democratizing the contents for both the scientific community and the general public, the project also aims at the physical preservation of the collection. The rarity, age and precarious state of conservation of the books strongly recommend restricting physical access to them, limiting their handling to specific bibliographical purposes.

For the Brazilian citizen, free access to the contents of rare historical collections and statistics provides a form of virtual appropriation of the national memory, and as such a source of knowledge, gratification and cultural identity.

The Library of the Ministry of Finance in Rio de Janeiro (BMF/RJ)

Inaugurated in 1944, the BMF/RJ extends over 1,200 square meters in the Palácio da Fazenda in downtown Rio de Janeiro, the seat of the Ministry of Finance until 1972, when it moved to Brasília. The historical book collection dates back to the early 19th century, when the Portuguese colonial administration was transferred to Brazil. Thereafter, several libraries from other institutions – Brazilian Customs, the Brazilian Institute of Coffee, the Sugar and Alcohol Institute, among others – were incorporated into the collection, which today comprises over 150,000 volumes, mainly specialized in economics, law, public administration and finance.

Rare book collections

For the purposes of the project, the collection of rare books includes a few thousand statistical reports and yearbooks. To mention just a few, the annual budgets of the Brazilian Empire, 1821-1889; annual budgets of the Brazilian Republic since 1890; Ministerial and Provincial reports since the 1830s; foreign and domestic trade yearbooks since 1839; railways statistics since the 1860s; stock market reports since the 1890s; economic retrospects and financial newsletters since the 1870s; the Brazilian Demographic and Economic Censuses starting in 1872 as well as the Brazilian Statistical Yearbooks starting in 1908. En passant, it should be noted that despite their rarity, fragility, and scientific value, these collections are hardly considered for republication in printed format.

Partnerships and collaborations

Under the initiative of the Research Network on Spatial Analysis and Models (Nemesis), sponsored by the Foundation for the Support of Research of the State of Rio de Janeiro and the National Council for Scientific and Technological Development, the project is a partnership between the Regional Administration of the Ministry of Finance in Rio de Janeiro (MF/GRA-RJ), the Institute of Applied Economic Research (IPEA) and the Internet Archive (IA).

In addition to generous access to its book collection, the Ministry of Finance provides the expert advice of its librarians, as well as the office space and facilities required for the operation of the project. The Institute of Applied Economic Research provides advice in economics, history and informatics. The Internet Archive provides the Scribe® workstations and digitization technology, making the digital publications available in several different formats on its website.

The project has also set up specific collaborations with other institutions to supplement the collections of the Library of the Ministry of Finance. Thus, the Brazilian Statistical Office (IBGE) supplemented the collections of the Brazilian Demographic and Economic Censuses, as well as of the Brazilian Statistical Yearbooks; the National Library (BN) made possible the republication of the Budgets of the Brazilian Empire, the Provincial and Ministerial Reports, the Rio News, and the Willeman Brazilian Review, the latter in collaboration with the Department of Economics of the Catholic University of Rio de Janeiro.

Future developments and extensions

Based upon open source software designed to publish, manage, link and preserve digital contents (Drupal, Fedora and Islandora), a new webpage for the project is under construction, including two collaborative / crowdsourcing platforms.

The first crowdsourcing platform will create facilities for the indexing, documentation and uploading of images and tabulations of historical documents and databases compiled by other research institutions or individuals willing to make voluntary contributions to the project. The dissemination of the digital content is intended to stimulate research innovations, extensions, and synergies based upon the historical documents and databases. For this purpose, an open source solution under consideration is the Harvard University Dataverse Project.

The second crowdsourcing platform intends to foster decentralized online collaboration among volunteers to compile, or transcribe into editable formats (csv, txt, xls, etc.), the content of selected digital republications of the Statistical Memory of Brazil project. Whenever possible, optical character recognition (OCR) programs and routines will be used to facilitate the transcription of the image content of the books. The irregular typography of older publications, however, will probably require visual character recognition and manual transcription of contents. Finally, additional routines and programs will be developed to coordinate, monitor and revise the compilations made, so as to avoid mistakes and duplications.
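By way of illustration, the core OCR step might look like the minimal Python sketch below. It assumes the pytesseract and Pillow packages with Tesseract’s Portuguese language pack installed; the file names are hypothetical, and this is not the project’s actual pipeline.

```python
# A rough sketch of the OCR step described above, under the stated
# assumptions; paths and output names are made up for illustration.
import pytesseract
from PIL import Image

def transcribe_page(image_path, lang="por"):
    """Run OCR over one scanned page image and return the raw text."""
    page = Image.open(image_path)
    return pytesseract.image_to_string(page, lang=lang)

raw_text = transcribe_page("bmf_rj/1880_budget_p001.png")

# OCR output from irregular 19th-century typography still needs
# manual revision before being saved to an editable format (txt, csv).
with open("1880_budget_p001.txt", "w", encoding="utf-8") as out:
    out.write(raw_text)
```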

Project Team

Eustáquio Reis, IPEA, Coordinator
Kátia Oliveira, BMF/RJ, Librarian
Vera Guilhon, BMF/RJ, Librarian
Jorge Morandi, IPEA, IT Coordinator
Gemma Waterston, IA, Project Manager
Ana Kreter, Nemesis, Researcher
Gabriela Carvalho, FGV, Researcher
Lucas Mation, IPEA, Researcher

Interns:
Fábio Baptista
Anna Vasconcellos
Ana Luiza Freitas
Amanda Légora


To receive updates about Open Economics, sign up to the mailing list.

Economics & Coordinating the Crowd

Ayeh Bandeh-Ahmadi - December 20, 2012 in Featured, Open Economics, WG Economics

This blog post is written by Ayeh Bandeh-Ahmadi, PhD candidate at the Department of Economics, University of Maryland.


This past spring, I spent a few months at the crowdfunding company Kickstarter, studying a number of aspects of the firm: what makes some projects succeed while others fail, preferences among backers, predictors of fraud, and market differences across geography and categories. I uncovered some fascinating tidbits through my research, but what stands out the most is just how much more challenging it is to run an effective crowdfunding service than you might think. For everything that has been written about crowdfunding’s great promise (Tim O’Reilly tweeted back in February: “Seems to me that Kickstarter is the most important tech company since facebook. Maybe more important in the long run.”), its ability to deliver on fantastic and heretofore unachievable outcomes ultimately hinges on getting communities of people onto the same page about each other’s goals and expectations. In that regard, crowdfunding is all about overcoming a longstanding information problem, just like any other crowdguided system, and it offers some great lessons about both existing and missing tools for yielding better outcomes, from crowdsourced science to the development of open knowledge repositories.

What is both compelling and defining amongst crowdguided systems – from prediction markets and the question-and-answer site Quora to crowdsourced science and funding platforms like Kickstarter, MedStartr and IndieGogo – is their ability to coordinate improvements in social welfare that were practically impossible before. The idea is that if we could combine efforts with the right collection of other individuals who have compatible goals and access to resources complementary to ours, then we could achieve outcomes that previously, or on our own, might be impossible. In the case of crowdfunding, these resources might be largely financial, whereas in the case of crowdsourcing, they might involve time and other resources like computing power and expertise. In both cases, the promise of crowdguided approaches is their ability to arrive at Pareto improvements in outcomes (economists’ way of describing scenarios where some are better off but no one is worse off). Achieving those outcome improvements that were impossible under traditional institutions also requires coordination mechanisms that improve bandwidth for processing information, incentives, preferences, and resources across the community.
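For readers who want the textbook definition spelled out, a Pareto improvement can be stated compactly as below; the utility functions u_i are standard shorthand, not notation from the post itself.

```latex
% Outcome y is a Pareto improvement over outcome x for agents i = 1,...,n
% if nobody is worse off and at least one agent is strictly better off:
\[
  u_i(y) \ge u_i(x) \quad \text{for all } i,
  \qquad
  u_j(y) > u_j(x) \quad \text{for some } j.
\]
```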

Crowdguided systems often improve coordination by providing:

  • opportunities for identifying meaningful problems with particularly high value to the community. Identifying communal values helps develop clearer definitions of relevant communities and important metrics for evaluating progress towards goals.
  • opportunities for individuals to learn from others’ knowledge and experience. Under the right conditions, this can lead to more information and wisdom than any few individuals could collectively arrive at.
  • opportunities for whole communities to coordinate allocation of effort, financing and other resources to maximize collective outcomes. Coordinating each person’s contribution can result in achieving the same or better outcomes with less duplication of effort.

There are some great lessons to take from crowdfunding when it comes to building community, thinking about coordination mechanisms, and designing better tools for sharing information.


A major part of Kickstarter’s success comes from its founders’ ability to bring together the creative community they have long been members of around projects that the community particularly values. Despite the fact that technology projects like the Pebble watch and the Ouya videogame console receive a great deal of press and typically the largest funding, they still account for a smaller fraction of funding and backings than music or film, in large part a reflection of the site’s strength in its core creative community. It helps that projects that draw from a likeminded community have a built-in sense of trust, reputation and respect. Kickstarter further fosters a sense of community amongst the backers of each project by facilitating meaningful rewards. By offering to share credit, methodology, the final product itself, and/or opportunities to weigh in on the design and execution of a project, the most thoughtful project creators help to align backers’ incentives with their own. In the case of crowdfunding, this often means incentivizing backers to spread the word via compelling calls to their own social networks. In the case of crowdsourcing science, getting the word out to other qualified networks of researchers is often equally important. Depending on the project, it may also be worth considering whether skewed participation could bias results. Likewise, the incentive structures facilitated through different credit-sharing mechanisms, and the opportunities for individuals to contribute to crowdsourced efforts in bigger and different ways, are quite relevant to consider and worth economic investigation.

I often hear from backers that the commitment mechanism is what compels them to back crowdfunding projects they otherwise wouldn’t. The possibility of making each individual’s contribution to the collective effort contingent on the group’s collective behavior is key to facilitating productive commitments from the crowd that were previously not achievable. Economists would be the first to point out the clear moral hazard problem that exists in the absence of such a mechanism: if everyone suspects that everyone (or no one) else will already fund a project to their desired level, then no one will give to it. There is an analogous problem when it comes to crowdsourcing science, in that each potential contributor needs to feel that their actions make a difference in personal or collective outcomes that they care about. Accordingly, it is important to understand what drives individuals to contribute – and this will certainly vary across different communities and types of project – in order to articulate and improve transparent incentive systems tailored to each.
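As a toy illustration of that all-or-nothing logic (an assurance contract, in economists’ terms), consider the following Python sketch. The Pledge type, goal value and settlement rule are illustrative assumptions, not Kickstarter’s actual implementation.

```python
# A toy model of the all-or-nothing commitment mechanism: backers are
# charged only if the collective goal is reached, so nobody risks
# paying for a project the crowd as a whole does not support.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Pledge:
    backer: str
    amount: float

def settle(pledges: List[Pledge], goal: float) -> Dict[str, float]:
    """Return the amount actually charged to each backer."""
    total = sum(p.amount for p in pledges)
    if total >= goal:
        return {p.backer: p.amount for p in pledges}  # goal met: charge all
    return {p.backer: 0.0 for p in pledges}           # goal missed: refund all

pledges = [Pledge("ana", 60.0), Pledge("ben", 30.0)]
print(settle(pledges, goal=100.0))  # goal missed here: nobody pays
```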

Finally, while crowdfunding projects focused on delivering technology often garner the most press, they also present some of the greatest challenges for these platforms. Technology projects face the greatest risks in part simply because developing technologies, like delivering scientific findings, can be especially risky. To aggravate matters further, individuals drawn to participating in these projects may have quite different personal incentives from those designing them. When it comes to especially risky science and technology projects, in crowdfunding as in crowdsourcing, the value of good citizen input is especially high, but the noise and potential for bias are likewise high. Finding ways to improve the community’s bandwidth for sharing and processing its collective wisdom, observations and preferences is, in my opinion, key to achieving greater innovation in crowdguided platforms. Luckily, economists have done quite a bit of work on the design of prediction markets and other mechanisms for extracting information in noisy environments, and on reputation mechanisms, that could and perhaps ought to be extended to thinking about these problems.

Next time, I’ll summarize some of the key findings from this research and areas where it could be better targeted to the design of crowdguided systems.

Research Data Management in Economic Journals

Velichka Dimitrova - December 11, 2012 in Access to Information, External, Open Data, Open Economics, Open Standards, WG Economics

This blog post is written by Sven Vlaeminck | ZBW – German National Library of Economics / Leibniz Information Center for Economics

Background

In Economics, as in many other research disciplines, there is a continuous increase in the number of papers whose authors have collected their own research data or used external datasets. However, so far there have been few effective means of replicating the results of economic research within the framework of the corresponding article, of verifying them, or of making them available for repurposing or use in support of the scholarly debate.

In the light of these findings B.D. McCullough pointed out: “Results published in economic journals are accepted at face value and rarely subjected to the independent verification that is the cornerstone of the scientific method. Most results published in economics journals cannot be subjected to verification, even in principle, because authors typically are not required to make their data and code available for verification.” (McCullough/McGeary/Harrison: “Lessons from the JMCB Archive”, 2006)

Harvard Professor Gary King also asked: “[I]f the empirical basis for an article or book cannot be reproduced, of what use to the discipline are its conclusions? What purpose does an article like this serve?” (King: “Replication, Replication” 1995). Therefore, the management of research data should be considered an important aspect of the economic profession.

The project EDaWaX

Several questions came up when we considered the reasons why economics papers may not be replicable in many cases:

First: what kind of data is needed for replication attempts? Second: it is apparent that scholarly economic journals play an important role in this context. When publishing an empirical paper, do economists have to provide their data to the journal? How many scholarly journals commit their authors to do so? Do these journals require their authors to submit only the datasets, or also the code of computation? Do they require authors to provide the programs used for estimations or simulations? And what about descriptions of datasets, variables, values, or even a manual on how to replicate the results?

As part of generating the functional requirements for this publication-related data archive, the project analyzed the data (availability) policies of economic journals and developed some recommendations for these policies that could facilitate replication.

To read about the results of the EDaWaX survey, please see the full blog post on Open Economics.

Data Policies of Economic Journals

Reputation Factor in Economic Publishing

Daniel Scott - November 1, 2012 in External, Open Access, Open Economics, Open/Closed, WG Economics


“The big problem in economics is that it really matters in which journals you publish, so the reputation factor is a big hindrance in getting open access journals up and going”. Can the accepted norms of scholarly publishing be successfully challenged?

This quotation is a line from the correspondence about writing this blogpost for the OKFN. The invitation came to write for the Open Economics Working Group, hence the focus on economics, but in reality the same situation pertains across pretty much any scholarly discipline you can mention. From the funding bodies down through faculty departments and academic librarians to individual researchers, an enormous worldwide system of research measurement has grown up that conflates the quality of research output with the publications in which it appears. Journals that receive a Thomson ISI ranking and high impact factors are perceived as the holy grail and, as is being witnessed currently in the UK during the Research Excellence Framework (REF) process, these carry tremendous weight when it comes to research fund awards.


Earlier this year, I attended a meeting with a Head of School at a Russell Group university, in response to an email that I had sent with information about Social Sciences Directory, the ‘gold’ open access publication that I was then in the first weeks of setting up. Buoyed by their agreement to meet, I was optimistic that there would be interest in and support for the idea of breaking the shackles of existing ranked journals and their subscription paywall barriers. I believed then – and still believe now – that if one or two senior university administrators had the courage to say, “We don’t care about the rankings. We will support alternative publishing solutions as a matter of principle”, then it would create a snowball effect and expedite the break-up of the current monopolistic, archaic system. However, I was rapidly disabused. The faculty in the meeting listened politely and then stated categorically that they would never consider publishing in a start-up venture such as Social Sciences Directory because of the requirements of the REF. The gist of it was, “We know subscription journals are restrictive and expensive, but that is what is required and we are not going to rock the boat”.

I left feeling deflated, though not entirely surprised. I realised some time ago that the notion of profit & loss, or cost control, or budgetary management, was simply anathema to many academic administrators, and that trying to present an alternative model as a good thing because it is a better deal for taxpayers is an argument likely to founder on the rocks of the requirements of the funding and ranking systems, if not on apathy and intransigence. A few years ago, whilst working as a sales manager in subscription publishing, I attended a conference of business school deans and directors. (This in itself was unusual, as most conferences that I attended were for librarians – ALA, UKSG, IFLA and the like – because the ‘customer’ in a subscription sense is usually the university library.) During a breakout session, a game of one-upmanship began between three deans, as they waxed lyrical about the overseas campuses they were opening, the international exchanges of staff and students they had fixed up, the new campus buildings that were under construction, and so on.

Eventually, I asked the fairly reasonable question of whether these costly ventures were being undertaken with a strategic view that they would eventually recoup their costs and were designed to help make their schools self-funding. Or indeed, whether education and research are of such importance for the greater good of all that they should be viewed as investments. The discomfort was palpable. One of the deans even strongly denied that this is a question of money. That the deans of business schools should take this view was an eye-opening insight into the general academic attitude towards state funding. It is an attitude that is wrong because ultimately, of course, it is entirely about the money.

The great irony was that this conversation took place in September 2008, with the collapse of Lehman Brothers and the full force of the Global Financial Crisis (GFC) soon to impact gravely on the global higher education and research sector. A system that for years had been awash with money had allowed all manner of poor practices to take effect, in which many different actors were complicit. Publishers had seized on the opportunity to expand output massively and charge vast fees for access; faculty had demanded that their libraries subscribe to key journals, regardless of cost; libraries and consortia had agreed to publishers’ demands because they had the money to do so; and the funding bodies had built journal metrics into the measurement for future financing. No wonder, then, that neither academia nor publishers could or would take the great leap forward that is required to bring about change, even after the GFC had made it patently clear that the ongoing subscription model is ultimately unsustainable. Change needs to be imposed, as the British government bravely did in July with the decision to adopt the recommendations of the Finch Report.

However, this brings us back to the central issue and the quotation in the title. For now, the funding mechanisms are the same and the requirement to publish in journals with a reputation is still paramount. Until now, arguments against open access publishing have tended to focus on quality issues. The argument goes that the premier (subscription) journals take the best submissions, and then there is a cascade downwards through second-tier journals (which may or may not be subscription-based) until you get to a pile of leftover papers that can only be published by the author paying a fee to some sort of piratical publisher. This does not stand up to much scrutiny. Plenty of subscription-based journals are average, churned out by publishers looking to beef up their portfolios and justify charging ever-larger sums. Good research gets unnecessarily dumped by leading journals because they adhere to review policies dating from the print age, when limited pagination forced them to be highly selective. Other academics, as we have seen at Social Sciences Directory, have chosen to publish and review beyond the established means because they believe in finding and helping alternatives. My point is that good research exists outside the ‘top’ journals. It is just a question of finding it.

So, after all this, do I believe that the “big hindrance” of reputation can be overcome? Yes, but only through planning and mandate. Here is what I believe should happen:

  1. The sheer number of journals is overwhelming and, in actuality, at odds with modern user behaviour which generally accesses content online and uses a keyword search to find information. Who needs journals? What you want is a large collection of articles that are well indexed and easily searchable, and freely available. This will enable the threads of inter-disciplinary research to spread much more effectively. It will increase usage and reduce cost-per-download (increasingly the metrics that librarians use to measure the return on investment of journals and databases), whilst helping to increase citation and impact.
  2. Ensure quality control of peer review by setting guidelines and adhering to them.
  3. De-couple the link between publishing and tenure & department funding.
  4. In many cases, universities will have subscribed to a particular journal for years and will therefore have access to a substantial back catalogue. This has often been supplemented by the purchase of digitised archives, as publishers cottoned on to other sources of revenue which happened to chime with librarians’ preferences to complete online collections and take advantage of non-repeatable purchases. Many publishers also sell their content to aggregators, who agree to an embargo period so that the publisher can also sell the most up-to-date research directly. Although the axe has fallen on many print subscriptions, some departments and individuals still prefer having a copy on their shelves (even though they could print off a PDF from the web version and have the same thing, minus the cover). So, aside from libraries often paying more than once for the same content, they will have complete collections up to a given point in time. University administrators need to take the bold decision to change, to pick an end date as a ‘cut off’ after which they will publicly state that they are switching to new policies in support of OA. This will allow funds to be freed up and used to pay for institutional memberships, article processing fees, institutional repositories – whatever the choice may be. Editors, authors and reviewers will be encouraged to offer their services elsewhere, which will in turn rapidly build the reputation of new publications.

Scholarly publishing is being subjected to a classic confrontation between tradition and modernity. For me, it is inevitable that modernity will win out and that the norms will be successfully challenged.

This post is also available on the Open Economics blog. If you’re interested in the issues raised, join our Open Economics or our Open Access lists to discuss them further!

Review of Open Access in Economics

Ross Mounce - October 30, 2012 in Access to Information, Open Access, Open Economics, WG Economics

This blog is cross-posted from the OKFN’s Open Economics blog

Ever since BioMed Central (BMC) published its first free online article on July 19th 2000, the Open Access movement has made significant progress, so much so that many different stakeholders now see 100% Open Access to research as inevitable in the near future. Some are already extrapolating from recent growth trends that Open Access will take a 90% share of all articles by as soon as 2020 (Lewis, 2012). Another recent analysis shows that during 2011 the number of Open Access articles published was ~340,000, spread over ~6,700 different journals, which is about 17% of the overall literature space (1.66 million articles) for that year (Laakso & Bjork, 2012).

Perhaps because of the more obvious lifesaving benefits, biomedical research in particular has seen the largest growth in Open Access – patients & doctors alike can gain truly lifesaving benefit from easy, cost-free, Open Access to research. Those very same doctors and patients may have difficulty accessing the latest toll-access-only research; any delay or impediment to accessing up-to-date medical knowledge can have negative, even fatal consequences:

[The following is from 'The impact of open access upon public health. PLoS Medicine (2006) 3:e252+' illustrating how barriers to knowledge access have grave consequences]

Arthur Amman, President of Global Strategies for HIV Prevention, tells this story: “I recently met a physician from southern Africa, engaged in perinatal HIV prevention, whose primary access to information was abstracts posted on the Internet. Based on a single abstract, they had altered their perinatal HIV prevention program from an effective therapy to one with lesser efficacy. Had they read the full text article they would have undoubtedly realized that the study results were based on short-term follow-up, a small pivotal group, incomplete data, and unlikely to be applicable to their country situation. Their decision to alter treatment based solely on the abstract’s conclusions may have resulted in increased perinatal HIV transmission”

But there are also significant benefits to be gained from Open Access to other, non-biomedical research. Open Access to social science & humanities research is also increasing, and has recently been mandated by Research Councils UK (RCUK), the UK agency that dictates policy for all publicly-funded academic research in the UK, on the basis of the Finch report [PDF]. Particularly with respect to economics, I find it extremely worrying that our MPs and policymakers often do NOT have access to the latest academic economic research. David Willetts MP admitted in a recent BBC Radio 3 interview that he couldn’t access some research. Likewise, at the Open Knowledge Festival in Helsinki recently, a policymaker expressed frustration at his inability to access possible policy-influencing evidence as published in academic journals.

So, for this blogpost, I set about seeing what the Open Access publishing options are for economists. I am well-versed in the OA options for scientists and have produced a visualization of various paid Gold Open Access options here, which has garnered much interest and attention. Even for scientists there is a wealth of completely free-to-publish-in options that are also Open Access (free-to-read, no subscription or payment required).

As far as I can see, the Gold Open Access ‘scene’ in Economics is less well developed than in the sciences. The Directory of Open Access Journals (DOAJ) lists 192 separate immediate Open Access journals of varying quality (compared to over 500 medical journals listed in DOAJ). These OA economics journals also seem to be newer on average than the similar spread of OA biomedical journals. Nevertheless, I found what appear to be some excellent OA economics journals, including:

  • Economic Analysis and Policy – a journal of the Economic Society of Australia, seems to take great pride and interest in Open Access: there’s a whole issue devoted to the subject of Open Access in Economics, with papers by names even I recognise, e.g. Christian Zimmermann & John Willinsky.
  • Theoretical Economics – published by the Econometric Society three times a year. Authors retain the copyright to their works, and these are published under a standard Creative Commons licence (CC BY-NC). The PDFs seem very high-quality to me and contain an abundance of clickable hyperlinks & URLs – an added-value service I don’t even see from many good subscription publishers! Publishing here only requires one of the authors to be a member of the society, which only costs £50 a year, with fee reductions for students. Given many OA science publications cost >£1000 per publication, I find this price extremely reasonable.
  • Monthly Labor Review – published by the US Department of Labor, and in existence since 1915(!) this seems to me to be another high-quality, highly-read Open Access journal.
  • Economics – published in Germany under a Creative Commons Licence (CC BY-NC). It has an excellent, modern and clear website, great (high-standard) data availability policy and even occasionally awards prizes for the best papers published in the journal.
  • Journal of Economic and Social Policy – another Australian journal, established in the year 2000, providing a simple but completely free outlet for publishing on social and economic issues, reviewing conceptual problems, or debating policy initiatives.
  • …and many more. Just as with science OA journals, there are numerous journals of local interest, e.g. Latin American journals: Revista Brasileira de Economia, the Latin American Journal of Economics, Revista de Economia Contemporânea, Revista de Análisis Económico; European journals like the South-Eastern Europe Journal of Economics (SEEJE) and Ekonomska Istrazivanja (Croatian); and Asian journals, e.g. Kasarinlan (Philippine Journal of Third World Studies). These should not be dismissed or discounted; not everything is appropriate for ‘international-scope’ journals. Local journals are important for publishing smaller-scale research which can be built upon by comparative studies and/or meta-analyses.
It’s International Open Access Week this week: 22 – 28 October 2012

Perhaps more interesting with respect to Open Access in Economics is the thriving Green Open Access scene. In the sciences, Green Open Access is pretty limited in my opinion. arXiv has popularised Green OA in certain areas of physics & maths, but in my particular domain (biology) Green OA is a deeply unpopular and unused method of providing OA. From what I have seen, OA initiatives in Economics such as RePEc (Research Papers in Economics) and EconStor are extremely popular and successful. As I understand it, RePEc provides Open Bibliographic Data for an impressive volume of economics articles; in this respect the field is far ahead of the sciences, where there is little free or open bibliographic data from most publishers. EconStor is an OA repository of the German National Library of Economics – Leibniz Information Centre for Economics. It contains more than 48,000 OA works, which is a fiercely impressive volume. The search functions are perhaps a tad basic, but with that much OA literature collected and available for use, I’ve no doubt someone will create a better, more powerful search interface for the collection.

In summary, from my casual glance at OA publishing in Economics as a non-economist, mea culpa, things look very positive. Unless informed otherwise, I think the OA scene here too is likely to grow and to dominate the academic publishing space, as it is doing in other areas of academia.

References

Laakso, M. and Bjork, B. C. 2012. Anatomy of open access publishing: a study of longitudinal development and internal structure. BMC Medicine 10:124+

Lewis, D. W. 2012. The inevitability of open access. College & Research Libraries 73:493-506.


To join the Open Economics Working Group, please sign up to the mailing list here

Data Party: Tracking Europe’s Failed Banks

Anders Pedersen - October 19, 2012 in Data Journalism, Events, Open Data, Open Economics, Sprint / Hackday

This blog is cross-posted from the OKFN’s Open Economics blog.


This fall marked the five-year anniversary of the collapse of UK-based Northern Rock in 2007. Since then, an unknown number of European banks have collapsed under the weight of plummeting housing markets, financial mismanagement and other causes. But how many European banks actually failed during the crisis?

In the United States, the Federal Deposit Insurance Corporation keeps a neat Failed bank list, which has recorded 496 bank failures in the US since 2000.

Europe, however, and for that matter the rest of the world, still lacks similar or comparable data on how many banks have failed since the beginning of the crisis. Nobody has collected data on how many Spanish cajas actually crashed or how many troubled German Landesbanken actually went under.

At the Open Economics Skype chat earlier this month, it was agreed to take the first steps towards creating a Failed Bank Tracker for Europe at an upcoming “Data Party”:

Join the Data Party

Wednesday 24th October at 5:30pm London / 6:30pm Berlin.

We hope that a diverse group of you will join in the gathering of failed bank data. During the Data Party you will have plenty of chances to discuss all questions regarding bank failures, including specific cases. Do not let your country or region remain a blank spot when we draw up the map of bank failures.

At the data party we will go through some of these questions:

  • What kind of failed bank data do we wish to collect (date, amount, type of intervention, etc.)? One possible record format is sketched just after this list.
  • What are the possible sources (press, financial regulators or European agencies)?
  • Getting started with the data collection for the Failed Bank Tracker
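To make the first question concrete, here is one possible record format as a minimal Python sketch. Every column name and the sample row are illustrative assumptions to be settled at the Data Party itself, not a decided schema.

```python
# A possible starting schema for the Failed Bank Tracker; the fields
# mirror the questions above (date, amount, type of intervention).
import csv

FIELDS = ["bank", "country", "date", "type_of_intervention",
          "amount_eur", "source"]

rows = [
    {"bank": "Example Bank", "country": "Spain", "date": "2010-05",
     "type_of_intervention": "state recapitalisation",
     "amount_eur": "1000000000", "source": "press report"},
]

with open("failed_bank_tracker.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```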

 

You can join the Data Party by adding your name and Skype ID here.

 

Getting good data: What makes a failed bank?

For this first event, collecting data on failed European banks should provide more than enough work for us. At this moment, neither the European Commission, Eurostat nor the European Banking Authority keeps any record of bank failures like the FDIC does in the US. The best source of official European information available is DG Competition, which keeps track of approved state aid measures in member states in its State Aid database. Its accuracy is, however, limited, as it contains cases ranging from state interventions in specific bank collapses to sector-wide bank guarantee schemes.

A major reason for the lack of data on bank failures is the fact that legislation often differs dramatically between countries in terms of what actually defines a bank failure. In early 2012 I asked the UK regulator, the FSA, if it could provide a list of failed banks similar to the list from the FDIC in the US. In its response the FSA asserted that the UK had not had a single bank failure since 2007:

“I regret that we do not have a comparable list to that of the US. Looking at the US list it appears to be a list of banks that have entered administration. As far as I am aware no UK banks have entered administration in this period, though of course a number were taken over or received support during the crisis.”

The statement from the FSA demonstrates that, for instance, Northern Rock, which brought a £2bn loss on UK taxpayers, never officially failed, because it never entered administration. The example shows why collecting data on bank failures would be both interesting and useful.

Earlier this year I got a head start on the data collection when a preliminary list of failed banks was gathered from both journalists and national agencies such as the Icelandic Financial Supervisory Authority. The first 65 banks entered in the tracker, mostly from Northern Europe, are available here.

Looking forward to bringing data on failed banks together at the Data Party.

To get involved in the Open Economics activities, please visit our website and sign up to the mailing list.

OpenDataMx: Opening Up the Government, one Bit at a Time

Velichka Dimitrova - September 4, 2012 in Events, External, Featured, Featured Project, Labs, Open Access, Open Content, Open Data, Open Economics, Open Knowledge Foundation Local Groups, Open Spending, Policy, School of Data, Sprint / Hackday

On August 24-25, another edition of OpenDataMx took place: a 36-hour public data hackathon for the development of creative technological solutions to questions raised by civil society. This time the event was hosted by the University of Communication in Mexico City.

The popularity of the event has grown: a total of 63 participants, including coders and designers, took part, and another 58 representatives of civil society from more than ten different organisations attended the parallel conference. Government institutions participated actively as well: the Ministry of Finance and Public Credit, IFAI and the Government of the Oaxaca State. The workshops covered technology, open data and their potential in the search for technological solutions to the problems of civil society.

The following proposals resulted from the discussions in the conference:

  • Construct a methodology for civil society to collectively generate open data, both for reuse in data events and to demonstrate to government bodies the benefits of adopting the practice of generating their data openly.
  • Collectively build a common database of information and knowledge on the topic of open data through the OpenDataMx wiki.

After 36 hours of continuous work, each of the 23 teams presented their project, based on the 30 datasets provided by both the government and civil society organisations. As little open government data is currently available, the joint work of civil society was essential in order to realise the hackathon.

Read the Hackathon news in Spanish on the OpenDataMx blog here.


The judging panel responsible for assessing the projects comprised recognised experts in technology, open data and its application to civil society needs. The panel consisted of Velichka Dimitrova (Open Knowledge Foundation), Matioc Anca (Fundación Ciudadano Inteligente), Eric Mill (Sunlight Foundation) and Jorge Soto from Citivox.

The first three projects were awarded cash prizes ($30,000, $20,000 and $10,000 Mexican pesos respectively), allowing the teams to implement their projects. An honorary mention was given to the project of the Government of the Oaxaca State and the Finance Ministry (SHCP) on the transparency of public works and citizen participation. The organisers of the hackathon also tried to link each team to the institution or organisation relevant to its project, in order to get support and advice for further steps. The organisers – Fundar, the Centre for Analysis and Research; SocialTIC; Colectivo por la Transparencia; and the University of Communication – would like to thank all participants, judges and speakers for their enthusiasm and valuable support in building the citizen community.


Here are some details about the winning projects:


FIRST PLACE

Name of the Project: Becalia | becalia.org

General Description: A platform allowing firms and civil society to sponsor students of limited economic means so that they can continue into higher education.

Background to the problem: Very few students receive a government scholarship for higher education. Additionally, few students decide to continue their education to a higher level – less than 20% in all states. The idea is to support students who do not have the means, and to enable the participation of civil society.

Technology and tools used: Ruby on Rails, Javascript, CoffeeScript

Datasets: PRONABES (Programa Nacional de Becas para la Educación Superior) – National Scholarship Program for Higher Education

Team members: Adrián González, Abraham Kuri, Javier Ayala, Eduardo López


SECOND PLACE

Name of the Project: Más inversión para movernos mejor (More investment for better movement) | http://berserar.negoapps.com/

General Description: A small citizen-participation website where users are asked to allocate spending across types of urban mobility, e.g. cars, public transport or bicycles, signalling their preferences on where they would like the government to invest. After assigning their preferences, users can compare them with the actual spending of the government and are offered multimedia material informing them about the topic.
Background to the problem: There is a lack of information on how the government spends money and on the importance of sustainable urban mobility.
Technology and tools used: HTML, Javascript, PHP, Codeigniter, Bootstrap, Excel and SQL

Datasets: Base de datos del Instituto de Políticas para el Transporte y el Desarrollo -ITDP (Database of the Policy Institute for Transport and Development) http://itdp.mx

Team members: Antonio Sandoval, Jorge Cravioto, Said Villegas, Jorge Cáñez


THIRD PLACE

Name of the Project: DiputadoMx | http://www.tudiputado.org/
General Description: An application that helps you find your representative by geographical area, political party, gender or the commission he or she belongs to. The application works on both desktop and mobile.
Background to the problem: Lack of opportunity for citizens to communicate directly with their representatives.
Technology and tools used: HTML5, CSS3, jQuery, Python, Google App Engine, MongoDB

Datasets: Base de datos del IFE del diputados (IFE Database of MPs)
Team members: Pedro Aron Barrera Almaraz


HONORARY MENTION:

Name of the Project: Obra Pública Abierta (Open Public Works)

General Description: Open Public Works is an open government tool, conceptualised and developed by the Government of the Oaxaca State and the Ministry of Finance (SHCP). The platform was created to make public works more transparent, presenting them in simpler language and encouraging citizen oversight by the user community. Open Public Works seeks to create a third-generation state transparency policy across the three levels of governance. This open source platform is also meant as a public good that will be delivered to the various state governments to promote nationwide transparency, citizen participation and accountability in the public works sector.

Background to the Problem: There is a lack of transparency in the spending of infrastructure funds by the state governments. Citizens are not familiar with basic information about public works carried out in their community, and no mechanisms for independent social audit exist. Moreover, state control bodies lack the capacity to control and supervise all public works. Public participation in the oversight of public resources is essential to solve this situation, with society and government working together. Additionally, there is no public policy across all three levels of government for the transparency of this sector. Finally, the public lacks tools and incentives to monitor, report and, if necessary, denounce the use of public resources in this very non-transparent government sector.

Technology and tools used: Google Maps API v2, PHP, JavaScript and jQuery

Datasets: Data set de obra pública de la SHCP y SINFRA/SEFIN del Gobierno de Oaxaca (Datasets of public works of the SHCP and SINFRA/SEFIN of the Government of Oaxaca).

Team Members: Berenice Hernández Sumano, Juan Carlos Ayuso Bautista, Tarick Gracida Sumano, José Antonio García Morales, Lorena Rivero, Roberto Moreno Herrera, Luis Fernando Ostria


For more information:

Photos and content thanks to Federico Ramírez and Fundar.
