You are browsing the archive for WG Economics.

The Statistical Memory of Brazil

Velichka Dimitrova - January 14, 2013 in Open Data, Open Economics, WG Economics

This blog post is written by Eustáquio Reis, Senior Research Economist at the Institute of Applied Economic Research (Ipea) in Brazil and member of the Advisory Panel of the Open Economics Working Group. It is cross-posted from the Open Economics Blog.

The project Statistical Memory of Brazil aims to digitize the rare book collections of the Library of the Minister of Finance in Rio de Janeiro (BMF/RJ) and to make them freely available for download. The project focuses on the publications containing social, demographic, economic and financial statistics for nineteenth- and early twentieth-century Brazil. At present, approximately 1,500 volumes, 400,000 pages and 200,000 tables have been republished.

Apart from democratizing the contents for both the scientific community and the general public, the project also aims at the physical preservation of the collection. The rarity, age and precarious state of conservation of the books strongly recommend restricting physical access to them, limiting their handling to specific bibliographical purposes.

For the Brazilian citizen, free access to the contents of rare historical collections and statistics provides a form of virtual appropriation of the national memory, and as such a source of knowledge, gratification and cultural identity.

The Library of the Minister of Finance in Rio de Janeiro (BMF/RJ)

Inaugurated in 1944, the BMF/RJ extends over 1,200 square meters in the Palacio da Fazenda in downtown Rio de Janeiro, the seat of the Minister of Finance up to 1972, when it was moved to Brasilia. The historical book collection dates back to the early 19th century, when the Portuguese Colonial Administration was transferred to Brazil. Thereafter, several libraries from other institutions — Brazilian Customs, Brazilian Institute of Coffee, Sugar and Alcohol Institute, among others — were incorporated into the collection, which today comprises over 150,000 volumes, mainly specialized in economics, law, public administration and finance.

Rare book collections

For the purposes of the project, the collection of rare books includes a few thousand statistical reports and yearbooks. To mention just a few, the annual budgets of the Brazilian Empire, 1821-1889; annual budgets of the Brazilian Republic since 1890; Ministerial and Provincial reports since the 1830s; foreign and domestic trade yearbooks since 1839; railways statistics since the 1860s; stock market reports since the 1890s; economic retrospects and financial newsletters since the 1870s; the Brazilian Demographic and Economic Censuses starting in 1872 as well as the Brazilian Statistical Yearbooks starting in 1908. En passant, it should be noted that despite their rarity, fragility, and scientific value, these collections are hardly considered for republication in printed format.

Partnerships and collaborations

Under the initiative of the Research Network on Spatial Analysis and Models (Nemesis), sponsored by the Foundation for the Support of Research of the State of Rio de Janeiro and the National Council for Scientific and Technological Development, the project is a partnership between the Regional Administration of the Minister of Finance in Rio de Janeiro (MF/GRA-RJ), the Institute of Applied Economic Research (IPEA) and the Internet Archive (IA).

In addition to generous access to its library book collection, the Minister of Finance provides the expert advice of its librarians as well as the office space and facilities required for the operation of the project. The Institute of Applied Economic Research provides advisory services in economics, history and informatics. The Internet Archive provides the Scribe® workstations and digitization technology, making the digital publications available in several different formats on the website.

The project also has specific collaborations with other institutions to supplement the collections of the Library of the Minister of Finance. Thus, the Brazilian Statistical Office (IBGE) supplemented the collections of the Brazilian Demographic and Economic Censuses, as well as of the Brazilian Statistical Yearbooks; the National Library (BN) made possible the republication of the Budgets of the Brazilian Empire, the Provincial and Ministerial Reports, the Rio News, and the Willeman Brazilian Review, the latter in collaboration with the Department of Economics of the Catholic University of Rio de Janeiro.

Future developments and extensions

Based upon open source software designed to publish, manage, link and preserve digital contents (Drupal, Fedora and Islandora), a new webpage of the project is under construction, including two collaborative/crowdsourcing platforms.

The first crowdsourcing platform will create facilities for the indexing, documentation and uploading of images and tabulations of historical documents and databases compiled by other research institutions or individuals willing to make voluntary contributions to the project. The dissemination of the digital content is intended to stimulate research innovations, extensions, and synergies based upon the historical documents and databases. For this purpose, one open source solution under consideration is the Harvard University Dataverse Project.

The second crowdsourcing platform intends to foster decentralized online collaboration of volunteers to compile or transcribe the content of selected digital republications of the Brazil’s Statistical Memory project into editable formats (CSV, TXT, XLS, etc.). Whenever possible, optical character recognition (OCR) programs and routines will be used to facilitate the transcription of the image content of the books. The irregular typography of older publications, however, will probably require visual character recognition and manual transcription of contents. Finally, additional routines and programs will be developed to coordinate, monitor and revise the compilations made, so as to avoid mistakes and duplications.
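A coordination routine of the kind described above might, for instance, collapse duplicate volunteer submissions and flag conflicting transcriptions of the same page for manual revision. The following is a minimal sketch in Python; the function and field names are hypothetical, not part of the project’s actual codebase:

```python
import hashlib
from collections import defaultdict

def fingerprint(rows):
    """Checksum of a transcribed table, ignoring stray whitespace in cells."""
    canon = "\n".join(",".join(cell.strip() for cell in row) for row in rows)
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

def reconcile(submissions):
    """Group volunteer submissions by (document, page).

    Returns two dicts: `agreed` maps pages where all transcriptions match
    to the accepted rows; `disputed` maps pages with conflicting
    transcriptions to the list of variants needing manual revision.
    """
    by_page = defaultdict(list)
    for doc_id, page, rows in submissions:
        by_page[(doc_id, page)].append(rows)

    agreed, disputed = {}, {}
    for key, variants in by_page.items():
        digests = {fingerprint(rows) for rows in variants}
        if len(digests) == 1:
            agreed[key] = variants[0]   # duplicates collapse to one copy
        else:
            disputed[key] = variants    # conflicting versions are flagged
    return agreed, disputed
```

Two identical transcriptions of a page would be accepted automatically, while two that disagree on even one cell would be routed to a reviewer.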

Project Team

Eustáquio Reis, IPEA, Coordinator
Kátia Oliveira, BMF/RJ, Librarian
Vera Guilhon, BMF/RJ, Librarian
Jorge Morandi, IPEA, TI Coordinator
Gemma Waterston, IA, Project Manager
Ana Kreter, Nemesis, Researcher
Gabriela Carvalho, FGV, Researcher
Lucas Mation, IPEA, Researcher

Fábio Baptista
Anna Vasconcellos
Ana Luiza Freitas
Amanda Légora


Economics & Coordinating the Crowd

Ayeh Bandeh-Ahmadi - December 20, 2012 in Featured, Open Economics, WG Economics

This blog post is written by Ayeh Bandeh-Ahmadi, PhD candidate at the Department of Economics, University of Maryland.


This past spring, I spent a few months at the crowdfunding company Kickstarter, studying a number of aspects of the firm, from what makes some projects succeed while others fail, to preferences among backers, predictors of fraud, and market differences across geography and categories. I uncovered some fascinating tidbits through my research, but what stands out the most is just how much more challenging it is to run an effective crowdfunding service than you might think. For everything that has been written about crowdfunding’s great promise (Tim O’Reilly tweeted back in February “Seems to me that Kickstarter is the most important tech company since facebook. Maybe more important in the long run.”), its ability to deliver on fantastic and heretofore unachievable outcomes ultimately hinges on getting communities of people onto the same page about each other’s goals and expectations. In that regard, crowdfunding is all about overcoming a longstanding information problem, just like any other crowdguided system, and it offers some great lessons about both existing and missing tools for yielding better outcomes in everything from crowdsourced science to the development of open knowledge repositories.

What is both compelling and defining amongst crowdguided systems — from prediction markets, the question and answer site Quora, to crowdsourced science and funding platforms like Kickstarter, MedStartr and IndieGogo — is their ability to coordinate improvements in social welfare that were practically impossible before. The idea is that if we could combine efforts with the right collection of other individuals who have goals compatible with, and resources complementary to, our own, then we could achieve outcomes that previously or on our own might be impossible. In the case of crowdfunding, these resources might be largely financial, whereas in the case of crowdsourcing, they might involve time and other resources like computing power and expertise. In both cases, the promise of crowdguided approaches is their ability to arrive at Pareto improvements to outcomes (economists’ way of describing scenarios where some are better off but no one is worse off). Achieving those outcome improvements that were impossible under traditional institutions also requires coordination mechanisms that improve bandwidth for processing information, incentives, preferences, and resources across the community.

Crowdguided systems often improve coordination by providing:

  • opportunities for identifying meaningful problems with particularly high value to the community. Identifying communal values helps develop clearer definitions of relevant communities and important metrics for evaluating progress towards goals.
  • opportunities for individuals to learn from others’ knowledge and experience. Under the right conditions, this can lead to more information and wisdom than any few individuals could collectively arrive at.
  • opportunities for whole communities to coordinate allocation of effort, financing and other resources to maximize collective outcomes. Coordinating each person’s contribution can result in achieving the same or better outcomes with less duplication of effort.

There are some great lessons to take from crowdfunding when it comes to building community, thinking about coordination mechanisms, and designing better tools for sharing information.

A major part of Kickstarter’s success comes from its founders’ ability to bring together the creative community they have long been members of around projects the community particularly values. Despite the fact that technology projects like the Pebble watch and Ouya videogame controller receive a great deal of press and typically the largest funding, they still account for a smaller fraction of funding and backings than music or film, in large part a reflection of the site’s strength in its core creative community. It helps that projects that draw from a likeminded community have a built-in sense of trust, reputation and respect. Kickstarter further fosters a sense of community amongst backers of each project by facilitating meaningful rewards. By offering to share credit, methodology, the final product itself, and/or opportunities to weigh in on the design and execution of a project, the most thoughtful project creators help to align backers’ incentives with their own. In the case of crowdfunding, this often means incentivizing backers to spread the word via compelling calls to their own social networks. In the case of crowdsourcing science, getting the word out to other qualified networks of researchers is often equally important. Depending on the project, it may also be worth considering whether skewed participation could bias results. Likewise, the incentive structures facilitated through different credit-sharing mechanisms and opportunities for individuals to contribute to crowdsourced efforts in bigger, different ways are quite relevant to consider and worth economic investigation.

I often hear from backers that the commitment mechanism is what compels them to back crowdfunding projects they otherwise wouldn’t. The possibility of making each individual’s contribution to the collective effort contingent on the group’s collective behavior is key to facilitating productive commitments from the crowd that were previously not achievable. Economists would be first to point out the clear moral hazard problem that exists in the absence of such a mechanism: if everyone suspects that everyone (or no one) else will already fund a project to their desired level, then no one will give to it. There is an analogous problem when it comes to crowdsourcing science in that each potential contributor needs to feel that their actions make a difference in personal or collective outcomes that they care about. Accordingly, it is important to understand what drives individuals to contribute — and this will certainly vary across different communities and types of project — in order to articulate and improve transparent incentive systems tailored to each.
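The commitment mechanism described above is, in essence, a provision-point contract: no backer is charged unless the collective goal is met, which removes the fear of contributing to a project that fails anyway. A minimal sketch of the settlement rule, in Python with hypothetical names:

```python
def settle_campaign(goal, pledges):
    """All-or-nothing settlement: pledges are charged only if their total
    meets the funding goal; otherwise every backer keeps their money."""
    total = sum(pledges.values())
    if total >= goal:
        # Goal reached: each backer is charged exactly what they pledged.
        return {"funded": True, "charged": dict(pledges)}
    # Goal missed: no one pays, so pledging carries no downside risk.
    return {"funded": False, "charged": {backer: 0 for backer in pledges}}
```

Because a pledge only becomes a payment when the project is viable, backers can commit sincerely without worrying that everyone else will free-ride.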

Finally, while crowdfunding projects focused on delivering technology often garner the most press, they also present some of the greatest challenges for these platforms. Technology projects face the greatest risks in part simply because developing technologies, like delivering scientific findings, can be especially risky. To aggravate matters further, individuals drawn to participating in these projects may have quite different personal incentives than those designing them. When it comes to especially risky science and technology projects, in crowdfunding as in crowdsourcing, the value of good citizen input is especially high, but the noise and potential for bias are high as well. Finding ways to improve the community’s bandwidth for sharing and processing its collective wisdom, observations and preferences is, in my opinion, quite key to achieving greater innovation in crowdguided platforms. Luckily, economists have done quite a bit of work on the design of prediction markets and other mechanisms for extracting information in noisy environments, and on reputation mechanisms, that could and perhaps ought to be extended to thinking about these problems.

Next time, I’ll summarize some of the key findings from this research and areas where it could be better targeted to the design of crowdguided systems.

First Open Economics International Workshop

Velichka Dimitrova - December 17, 2012 in Access to Information, Events, Featured, Open Access, Open Data, Our Work, WG Economics, Workshop

You can follow all the goings-on today and tomorrow through the live stream.

On 17-18 December, economics and law professors, data publishers, practitioners and representatives from international institutions will gather at Emmanuel College, Cambridge for the First Open Economics International Workshop. From showcasing examples of successes in collaborative economic research and open data to reviewing the legal, cultural and other barriers to information sharing, this event aims to build an understanding of the value of open data and open tools for the economics profession, and of the obstacles to opening up information in economics. The workshop will also explore the role of greater openness in broadening understanding of and engagement with economics among the wider community, including policy-makers and society.

This event is part of the Open Economics project, funded by the Alfred P. Sloan Foundation, and is a key step in identifying best practice as well as legal, regulatory and technical barriers and opportunities for open economic data. A statement on the Open Economics Principles will be produced as a result of the workshop.

Session: “Open Data in Economics – Reasons, Examples, Potential”: examples of open data in economics so far and its potential benefits.
Session host: Christian Zimmermann (Federal Reserve Bank of St. Louis, RePEc). Panelists: Paul David (Stanford University, SIEPR), Eustáquio J. Reis (Institute of Applied Economic Research – Ipea), Johannes Kiess (World Bank), Sven Vlaeminck (ZBW – Leibniz Information Centre for Economics).
Session: “Legal, Cultural and other Barriers to Information Sharing in Economics”: introduction and overview of challenges faced in information sharing in Economics.
Session host: Lionel Bently (University of Cambridge / CIPIL). Panelists: Mireille van Eechoud (Institute for Information Law), David Newbery (University of Cambridge), John Rust (Georgetown University).
Session: “Current Data Deposit and Releases – Mandating Open Data?”: round-table discussion with stakeholders: representatives of funders, academic publishing and academics.
Session host: Daniel L. Goroff (Alfred P. Sloan Foundation). Panelists: Albert Bravo-Biosca (NESTA), Toby Green (OECD Publishing), Nancy Lutz (National Science Foundation).
Session: “Trends of Greater Participation and Growing Horizons in Economics”: opening up research and the academy to wider engagement and understanding with the general public, policy-makers and others.
Session host: Chris Taggart (OpenCorporates). Panelists: Michael P. McDonald (George Mason University), Hans-Peter Brunner (Asian Development Bank), Perry Walker (New Economics Foundation).

The workshop is designed to be a small invite-only event with a round-table format, allowing participants to share and develop ideas together. For a complete description and a detailed programme, visit the event website. Podcasts and slides will be available on the webpage after the event.

The event is being organized by the Centre for Intellectual Property and Information Law (CIPIL) at the University of Cambridge and Open Economics Working Group of the Open Knowledge Foundation and is funded by the Alfred P. Sloan Foundation. More information about the Working Group can be found online.


Research Data Management in Economic Journals

Velichka Dimitrova - December 11, 2012 in Access to Information, External, Open Data, Open Economics, Open Standards, WG Economics

This blog post is written by Sven Vlaeminck | ZBW – German National Library of Economics / Leibniz Information Center for Economics


In Economics, as in many other research disciplines, there is a continuous increase in the number of papers where authors have collected their own research data or used external datasets. However, so far there have been few effective means of replicating the results of economic research within the framework of the corresponding article, of verifying them, and of making them available for reuse in support of the scholarly debate.

In the light of these findings B.D. McCullough pointed out: “Results published in economic journals are accepted at face value and rarely subjected to the independent verification that is the cornerstone of the scientific method. Most results published in economics journals cannot be subjected to verification, even in principle, because authors typically are not required to make their data and code available for verification.” (McCullough/McGeary/Harrison: “Lessons from the JMCB Archive”, 2006)

Harvard Professor Gary King also asked: “[I]f the empirical basis for an article or book cannot be reproduced, of what use to the discipline are its conclusions? What purpose does an article like this serve?” (King: “Replication, Replication” 1995). Therefore, the management of research data should be considered an important aspect of the economic profession.

The project EDaWaX

Several questions came up when we considered the reasons why economics papers may not be replicable in many cases:

First: what kind of data is needed for replication attempts? Second: it is apparent that scholarly economic journals play an important role in this context: when publishing an empirical paper, do economists have to provide their data to the journal? How many scholarly journals commit their authors to doing so? Do these journals require their authors to submit only the datasets, or also the code of computation? Do they require their authors to provide the programs used for estimations or simulations? And what about descriptions of datasets, variables, values, or even a manual on how to replicate the results?

As part of generating the functional requirements for this publication-related data archive, the project analyzed the data (availability) policies of economic journals and developed some recommendations for these policies that could facilitate replication.

To read about the results of the EDaWaX survey, please see the full blog post on Open Economics.

Data Policies of Economic Journals

Reputation Factor in Economic Publishing

Daniel Scott - November 1, 2012 in External, Open Access, Open Economics, Open/Closed, WG Economics


“The big problem in economics is that it really matters in which journals you publish, so the reputation factor is a big hindrance in getting open access journals up and going”. Can the accepted norms of scholarly publishing be successfully challenged?

This quotation is a line from the correspondence about writing this blogpost for the OKFN. The invitation came to write for the Open Economics Working Group, hence the focus on economics, but in reality the same situation pertains across pretty much any scholarly discipline you can mention. From the funding bodies down through faculty departments and academic librarians to individual researchers, an enormous worldwide system of research measurement has grown up that conflates the quality of research output with the publications in which it appears. Journals that receive a Thomson ISI ranking and high impact factors are perceived as the holy grail and, as is being witnessed currently in the UK during the Research Excellence Framework (REF) process, these carry tremendous weight when it comes to research fund awards.

Earlier this year, I attended a meeting with a Head of School at a Russell Group university, in response to an email that I had sent with information about Social Sciences Directory, the ‘gold’ open access publication that I was then in the first weeks of setting up. Buoyed by their acceptance to meet, I was optimistic that there would be interest and support for the idea of breaking the shackles of existing ranked journals and their subscription paywall barriers. I believed then – and still believe now – that if one or two senior university administrators had the courage to say, “We don’t care about the rankings. We will support alternative publishing solutions as a matter of principle”, then it would create a snowball effect and expedite the break-up of the current monopolistic, archaic system. However, I was rapidly disabused. The faculty in the meeting listened politely and then stated categorically that they would never consider publishing in a start-up venture such as Social Sciences Directory because of the requirements of the REF. The gist of it was, “We know subscription journals are restrictive and expensive, but that is what is required and we are not going to rock the boat”.

I left feeling deflated, though not entirely surprised. I realised some time ago that the notion of profit & loss, or cost control, or budgetary management, was simply anathema to many academic administrators, and that trying to present an alternative model as a good thing because it is a better deal for taxpayers is an argument likely to founder on the rocks of the requirements of the funding and ranking systems, if not apathy and intransigence. A few years ago, whilst working as a sales manager in subscription publishing, I attended a conference of business school deans and directors. (This in itself was unusual, as most conferences that I attended were for librarians – ALA, UKSG, IFLA and the like – as the ‘customer’ in a subscription sense is usually the university library.) During a breakout session, a game of one-upmanship began between three deans, as they waxed lyrical about the overseas campuses they were opening, the international exchanges of staff and students they had fixed up, the new campus buildings that were under construction, and so on.

Eventually, I asked the fairly reasonable question whether these costly ventures were being undertaken with a strategic view that they would eventually recoup their costs and were designed to help make their schools self-funding. Or indeed, whether education and research are of such importance for the greater good of all that they should be viewed as investments. The discomfort was palpable. One of the deans even strongly denied that this is a question of money. That the deans of business schools should take this view was an eye-opening insight into the general academic attitude towards state funding. It is an attitude that is wrong because ultimately, of course, it is entirely about the money. The great irony was that this conversation took place in September 2008, with the collapse of Lehman Brothers and the full force of the Global Financial Crisis (GFC) soon to impact gravely on the global higher education and research sector. A system that for years had been awash with money had allowed all manner of poor practices to take effect, in which many different actors were complicit. Publishers had seized on the opportunity to expand output massively and charge vast fees for access; faculty had demanded that their libraries subscribe to key journals, regardless of cost; libraries and consortia had agreed to publishers’ demands because they had the money to do so; and the funding bodies had built journal metrics into the measurement for future financing. No wonder, then, that neither academia nor publishers could or would take the great leap forward that is required to bring about change, even after the GFC had made it patently clear that the ongoing subscription model is ultimately unsustainable. Change needs to be imposed, as the British government bravely did in July with the decision to adopt the recommendations of the Finch Report.

However, this brings us back to the central issue and the quotation in the title. For now, the funding mechanisms are the same and the requirement to publish in journals with a reputation is still paramount. Until now, arguments against open access publishing have tended to focus on quality issues. The argument goes that the premier (subscription) journals take the best submissions and then there is a cascade downwards through second tier journals (which may or may not be subscription-based) until you get to a pile of leftover papers that can only be published by the author paying a fee to some sort of piratical publisher. This does not stand much scrutiny. Plenty of subscription-based journals are average and have been churned out by publishers looking to beef up their portfolios and justify charging ever-larger sums. Good research gets unnecessarily dumped by leading journals because they adhere to review policies dating from the print age when limited pagination forced them to be highly selective. Other academics, as we have seen at Social Sciences Directory, have chosen to publish and review beyond the established means because they believe in finding and helping alternatives. My point is that good research exists outside the ‘top’ journals. It is just a question of finding it.

So, after all this, do I believe that the “big hindrance” of reputation can be overcome? Yes, but only through planning and mandate. Here is what I believe should happen:

  1. The sheer number of journals is overwhelming and, in actuality, at odds with modern user behaviour, which generally means accessing content online and using a keyword search to find information. Who needs journals? What you want is a large collection of articles that are well indexed, easily searchable, and freely available. This will enable the threads of inter-disciplinary research to spread much more effectively. It will increase usage and reduce cost-per-download (increasingly the metrics that librarians use to measure the return on investment of journals and databases), whilst helping to increase citation and impact.
  2. Ensure quality control of peer review by setting guidelines and adhering to them.
  3. De-couple the link between publishing and tenure & department funding.
  4. In many cases, universities will have subscribed to a particular journal for years and will therefore have access to a substantial back catalogue. This has often been supplemented by the purchase of digitised archives, as publishers cottoned on to other sources of revenue which happened to chime with librarians’ preferences to complete online collections and take advantage of non-repeatable purchases. Many publishers also sell their content to aggregators, who agree to an embargo period so that the publisher can also sell the most up-to-date research directly. Although the axe has fallen on many print subscriptions, some departments and individuals still prefer having a copy on their shelves (even though they could print off a PDF from the web version and have the same thing, minus the cover). So, aside from libraries often paying more than once for the same content, they will have complete collections up to a given point in time. University administrators need to take the bold decision to change, to pick an end date as a ‘cut off’ after which they will publicly state that they are switching to new policies in support of OA. This will allow funds to be freed up and used to pay for institutional memberships, article processing fees, institutional repositories – whatever the choice may be. Editors, authors and reviewers will be encouraged to offer their services elsewhere, which will in turn rapidly build the reputation of new publications.

Scholarly publishing is being subjected to a classic confrontation between tradition and modernity. For me, it is inevitable that modernity will win out and that the norms will be successfully challenged.

This post is also available on the Open Economics blog. If you’re interested in the issues raised, join our Open Economics or our Open Access lists to discuss them further!

Review of Open Access in Economics

Ross Mounce - October 30, 2012 in Access to Information, Open Access, Open Economics, WG Economics

This blog is cross-posted from the OKFN’s Open Economics blog

Ever since BioMed Central (BMC) published its first free online article on July 19th 2000, the Open Access movement has made significant progress, so much so that many different stakeholders now see 100% Open Access to research as inevitable in the near future. Some are already extrapolating from recent growth trends that Open Access will take 90% of the overall article share by just 2020 (Lewis, 2012). Another recent analysis shows that during 2011 the number of Open Access articles published was ~340,000 spread over ~6,700 different journals which is about 17% of the overall literature space (1.66 million articles) for that year (Laakso & Bjork, 2012).

Perhaps because of the more obvious lifesaving benefits, biomedical research in particular has seen the largest growth in Open Access – patients & doctors alike can gain truly lifesaving benefit from easy, cost-free, Open Access to research. Those very same doctors and patients may have difficulty accessing the latest toll access-only research; any delay or impediment to accessing up-to-date medical knowledge can have negative, even fatal consequences:

[The following is from 'The impact of open access upon public health. PLoS Medicine (2006) 3:e252+' illustrating how barriers to knowledge access have grave consequences]

Arthur Amman, President of Global Strategies for HIV Prevention, tells this story: “I recently met a physician from southern Africa, engaged in perinatal HIV prevention, whose primary access to information was abstracts posted on the Internet. Based on a single abstract, they had altered their perinatal HIV prevention program from an effective therapy to one with lesser efficacy. Had they read the full text article they would have undoubtedly realized that the study results were based on short-term follow-up, a small pivotal group, incomplete data, and unlikely to be applicable to their country situation. Their decision to alter treatment based solely on the abstract’s conclusions may have resulted in increased perinatal HIV transmission”

But there are also significant benefits to be gained from Open Access to other, non-biomedical research. Open Access to social science & humanities research is also increasing, and has recently been mandated by Research Councils UK (RCUK), the agency that sets policy for all publicly-funded academic research in the UK, on the basis of the Finch report [PDF]. With respect to economics in particular, I find it extremely worrying that our MPs and policymakers often do NOT have access to the latest academic economic research. David Willetts MP recently admitted in a BBC Radio 3 interview that he couldn’t access some research. Likewise, at the Open Knowledge Festival in Helsinki, a policymaker expressed frustration at his inability to access potentially policy-influencing evidence published in academic journals.

So, for this blogpost, I set about seeing what the Open Access publishing options are for economists. I am well-versed in the OA options for scientists and have produced a visualization of the various paid Gold Open Access options here, which has garnered much interest and attention. Even for scientists there is a wealth of completely free-to-publish-in options that are also Open Access (free to read, with no subscription or payment required).

As far as I can see, the Gold Open Access ‘scene’ in Economics is less well-developed than in the sciences. The Directory of Open Access Journals (DOAJ) lists 192 separate immediate Open Access economics journals of varying quality (compared to over 500 medical journals listed in DOAJ). These OA economics journals also seem to be newer on average than the comparable spread of OA biomedical journals. Nevertheless, I found what appear to be some excellent OA economics journals, including:

  • Economic Analysis and Policy – a journal of the Economic Society of Australia that seems to take great pride and interest in Open Access: there’s a whole issue devoted to the subject of Open Access in Economics, with papers by names even I recognise, e.g. Christian Zimmermann & John Willinsky.
  • Theoretical Economics – published by the Econometric Society three times a year. Authors retain the copyright to their works, which are published under a standard Creative Commons licence (CC BY-NC). The PDFs seem very high-quality to me and contain an abundance of clickable hyperlinks & URLs – an added-value service I don’t even see from many good subscription publishers! Publishing here requires only that one of the authors be a member of the society, which costs just £50 a year, with fee reductions for students. Given that many OA science publications cost >£1000 per paper, I find this price extremely reasonable.
  • Monthly Labor Review – published by the US Department of Labor, and in existence since 1915(!), this seems to me to be another high-quality, highly-read Open Access journal.
  • Economics – published in Germany under a Creative Commons licence (CC BY-NC). It has an excellent, modern and clear website, a great (high-standard) data availability policy, and even occasionally awards prizes for the best papers published in the journal.
  • Journal of Economic and Social Policy – another Australian journal, established in the year 2000, providing a simple but completely free outlet for publishing on social and economic issues, reviewing conceptual problems, or debating policy initiatives.
  • …and many more. Just as with science OA journals, there are numerous journals of local interest, e.g. Latin American journals such as Revista Brasileira de Economia, the Latin American Journal of Economics, Revista de Economia Contemporânea and Revista de Análisis Económico; European journals such as the South-Eastern Europe Journal of Economics (SEEJE) and Ekonomska Istrazivanja (Croatia); and Asian journals such as Kasarinlan (Philippine Journal of Third World Studies). These should not be dismissed or discounted: not everything is appropriate for ‘international-scope’ journals, and local journals are important for publishing smaller-scale research that can be built upon by comparative studies and/or meta-analyses.
It’s International Open Access Week this week: 22–28 October 2012


Perhaps more interesting with respect to Open Access in Economics is the thriving Green Open Access scene. In the sciences, Green Open Access is pretty limited in my opinion: arXiv has popularised Green OA in certain areas of physics & maths, but in my particular domain (biology) Green OA is a deeply unpopular and little-used route to OA. From what I have seen, OA initiatives in Economics such as RePEc (Research Papers in Economics) and EconStor are extremely popular and successful. As I understand it, RePEc provides Open Bibliographic Data for an impressive volume of economics articles; in this respect the field is far ahead of the sciences, where little free or open bibliographic data is available from most publishers. EconStor is the OA repository of the German National Library of Economics – Leibniz Information Centre for Economics. It contains more than 48,000 OA works, which is a fiercely impressive volume. The search functions are perhaps a tad basic, but with that much OA literature collected and available for use, I’ve no doubt someone will create a better, more powerful search interface for the collection.

In summary, from my casual glance at OA publishing in Economics as a non-economist (mea culpa), things look very positive here. Unless informed otherwise, I think the OA scene here too is likely to grow and come to dominate the academic publishing space, as it is doing in other areas of academia.


Laakso, M. and Bjork, B. C. 2012. Anatomy of open access publishing: a study of longitudinal development and internal structure. BMC Medicine 10:124+

Lewis, D. W. 2012. The inevitability of open access. College & Research Libraries 73:493-506.

To join the Open Economics Working Group, please sign up to the mailing list here

The Benefits of Open Data (part II) – Impact on Economic Research

Guo Xu - October 23, 2012 in Access to Information, Featured, WG Economics

This blog is cross-posted from the OKFN’s Open Economics blog

A couple of weeks ago, I wrote the first part of this three-part series on Open Data in Economics. Drawing on examples from top research, that article explored economic research on open data, focusing on how providing information and data can help increase the quality of public service provision. In this second part, I would like to explore the impact of openness on economic research itself.

We live in a data-driven age

There used to be a time when data was scarce and costly: there was not much data around, and comparable GDP data, for example, has only been collected since the early-to-mid 20th century. Computing power was expensive too: data and commands were stored on punch cards, and researchers had only limited hours to run their statistical analyses on the few computers available.

Today, however, statistics and econometric analysis have arrived in every office: Open Data initiatives at the World Bank and national governments have made it possible to download cross-country GDP and related data with a few mouse-clicks. The availability of open-source statistical packages such as R allows virtually everyone to run quantitative analyses on their own laptops and computers. Consequently, the number of empirical papers has increased substantially. A figure in Espinosa et al. (2012) plots the number of econometric (statistical) outputs per article in a given year: quantitative research has really taken off since the 1960s. Where researchers once used datasets with a few dozen observations, modern applied econometricians often draw upon datasets boasting millions of detailed micro-level observations.
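The barrier to entry really is that low now. As a rough illustration (entirely synthetic data and invented variable names, not taken from any study cited here), a one-variable OLS regression needs nothing beyond the Python standard library:

```python
import random
import statistics

# Synthetic data, for demonstration only: an "openness" index
# and log GDP generated with a known slope of 0.3 plus noise.
random.seed(42)
openness = [random.uniform(0, 10) for _ in range(50)]
log_gdp = [2.0 + 0.3 * x + random.gauss(0, 0.5) for x in openness]

# Ordinary least squares with a single regressor:
# slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x)
mx, my = statistics.mean(openness), statistics.mean(log_gdp)
cov = sum((x - mx) * (y - my) for x, y in zip(openness, log_gdp)) / (len(openness) - 1)
slope = cov / statistics.variance(openness)
intercept = my - slope * mx

print(f"estimated slope: {slope:.2f}, intercept: {intercept:.2f}")
```

The estimated slope comes out close to the true value of 0.3, which is the whole point: the kind of analysis that once required scheduled time on a mainframe now runs instantly on any laptop.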

Why we need open data and access

The main economic argument in favour of open data is gains from trade, and these gains come in several dimensions. First, open data helps avoid redundancy. As a researcher, you may know that the same basic procedures (such as cleaning and merging datasets) have been done thousands of times by hundreds of different researchers. You may also have experienced the time wasted compiling a dataset someone else had already put together but was unwilling to share. Open data in these cases can save a lot of time, allowing you to build upon the work of others; by feeding your additions back into the ecosystem, you in turn ensure that others can build on your data work. Just as there is no need to re-invent the wheel, the sharing of data allows researchers to build on existing data work and devote valuable time to genuinely new research.
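The merge work described above is routine rather than novel, which is exactly why it is wasteful for each researcher to redo it privately. A minimal sketch, with invented country-level figures purely for illustration:

```python
# Two illustrative country-level tables (invented values), keyed by ISO code.
gdp = {"BRA": 2465, "DEU": 3730, "IND": 1823}        # GDP, billion USD
population = {"BRA": 199, "DEU": 80, "IND": 1237}    # population, millions

# Merge on the shared key and derive GDP per capita (thousand USD).
merged = {
    code: {
        "gdp": gdp[code],
        "pop": population[code],
        "gdp_per_capita": gdp[code] / population[code],
    }
    for code in gdp.keys() & population.keys()  # inner join on country code
}

for code, row in sorted(merged.items()):
    print(code, round(row["gdp_per_capita"], 1))
```

Every empirical project repeats some variant of this join-and-derive step; sharing the merged result once saves each subsequent researcher from rebuilding it.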

Second, open data ensures a more efficient allocation of scarce resources – in this case, datasets. Again, as a researcher, you may know that academics often treat their datasets as private gold mines; indeed, entire research careers are often built on possessing a unique dataset. This hoarding often results in valuable data lying on a forgotten harddisk, not fully used and ultimately wasted. What’s worse, the researcher who owns a unique dataset may not be the most skilled person to make full use of it, while someone else may possess the necessary skills but not the data. Only recently, I had the opportunity to talk to a group of renowned economists who, over the past decades, have compiled an incredibly rich dataset. During the conversation, they mentioned that they themselves may have exploited only 10% of the data, and were urgently looking for fresh PhDs and talented researchers to unlock its full potential. When data is open, there is no need to search, and data can find its way to the most skilled researcher.

Finally, and perhaps most importantly, open data – by increasing transparency – also fosters scientific rigour: when datasets and statistical procedures are made available to everyone, a curious undergraduate student may be able to replicate, and possibly refute, the results of a senior researcher. Indeed, journals are increasingly asking researchers to publish their datasets along with the paper. But while this is a great step forward, most journals still keep the actual publication closed, charging hefty subscription fees. For example, readers of my first post may have noticed that many of the research articles linked there could not be downloaded without a subscription or university affiliation. Since dissemination, replication and falsification are key features of science, both open data and open access are essential to knowledge generation.

But there are of course challenges ahead. For example, while wider access to data and statistical tools is a good thing, the ease of running regressions with a few mouse-clicks also produces a lot of mindless data mining and nonsensical econometric output, so quality control remains important. There are, and in some cases should be, barriers to data sharing. Some researchers have invested a substantial part of their lives constructing their datasets, in which case it is understandable that they are uncomfortable sharing their “baby” with just anyone. In addition, releasing (even anonymized) micro-level data often raises concerns about privacy protection. These issues – and existing solutions – will be discussed in the next post.
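The data-mining worry above can be made concrete with a small simulation (a sketch of my own, not from the post's sources): regress pure noise on pure noise many times over, and roughly 5% of the runs will look "significant" at the conventional p < 0.05 level, despite there being no relationship at all.

```python
import math
import random

random.seed(1)
n, trials = 100, 2000
false_positives = 0

for _ in range(trials):
    # x and y are independent by construction: any "effect" is spurious.
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = math.sqrt(sum((a - mx) ** 2 for a in x))
    syy = math.sqrt(sum((b - my) ** 2 for b in y))
    r = sxy / (sxx * syy)  # sample correlation
    # t-statistic of the correlation; |t| > 1.98 roughly means p < 0.05 at n = 100
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    if abs(t) > 1.98:
        false_positives += 1

frac = false_positives / trials
print(f"spurious 'significant' results: {frac:.1%}")
```

Run enough regressions and significant-looking results appear by chance alone, which is why replication and quality control matter as much as access.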

Are you interested in participating in the activities of the Open Economics Working Group? Click here to get involved

The Benefits of Open Data – Evidence from Economic Research

Guo Xu - October 5, 2012 in Access to Information, WG Economics

This blog is cross-posted from the OKFN’s Open Economics blog

Looking back on the Open Knowledge Festival 2012 in September, there’s an impression that openness is everywhere: There are working groups on Open Science and Open Linguistics, topic streams on Gender and Diversity in Openness, and events like Open Prom and Open Sauna. Open Knowledge and Open Data, it seems, are omnipresent.

Looking beyond the Open Knowledge community, however, the situation is very different. In Economics, for example, not many know what “open data”, “open access” or “Open Economics” exactly mean. Indeed, not many even care. A common reaction is: “Yes, it sounds interesting and important, but does it really matter? And why should I care about it?”

In this post, I would like to give some hard evidence on the positive role that opening up information has had in economics, and sketch ideas for how to involve economists – professional or in training – in bringing ideas of openness into the mainstream. I’ll look at economic research on open data, the impact of open data on economic research, and challenges and ways forward.

The real world impacts of open information

Making information accessible to the public can improve public service delivery. In countries where corruption is pervasive, services and funds often do not reach the frontline provider. And even when services do reach the people, their quality is often shockingly poor: survey evidence from Bangladesh, Ecuador, India, Peru and Uganda found absence rates as high as 20% for school teachers and 35% for health workers. In many cases, staff are poorly trained.

Releasing data on service delivery can help reduce corruption and improve public services. In Uganda, researchers provided information to parents by publishing funding data for a random subset of schools in local newspapers. In consequence, corruption decreased significantly, while schooling outcomes improved substantially. Similar evidence in health delivery and redistributive policies suggests that providing information can help the public to discipline public service providers, improving the quality of services.

Information can also expose corrupt politicians. The Federal Government of Brazil, for example, began to select and audit municipalities at random, releasing the audit reports to the media. Researchers found that the audit outcomes had a significant impact on the reelection probability of politicians: those exposed for corruption were punished at the ballot box, and the impact was most pronounced in areas where local radio favoured the dissemination of information.

A story from fishermen in South India provides another example of how information can improve market efficiency. Studying the adoption of mobile phones in Kerala, researchers have found convincing evidence that access to information through mobile phones helped fishermen sell their catch at the market where the price was highest (and fish most demanded). Instead of sailing to a port and simply hoping for a good price, fishermen were empowered by technology to make informed decisions on how to trade.

Finally, the benefits of transparency are not only restricted to reducing corruption and lowering the cost of information. A comparative study finds that transparency – measured by accuracy and frequency of macroeconomic information released to the public – leads to lower borrowing costs in sovereign bond markets. Open data pays off in many ways, in many different contexts.

These are just a few selective examples of how cutting-edge economic research has identified the benefits of openness in a diverse range of situations. The cases I presented are not based on correlations, but carefully established causal relationships, leaving little doubt – at least within the context studied – that information matters, big time. Perhaps most importantly, these cases have also shown that open data must be understood in a broad sense. These interventions do not take advantage of linked data, do not use CSVs that are shared through Facebook or Twitter – often, these interventions are simple solutions that ultimately help improve the everyday lives of the people.

Ignite Cleanweb

Velichka Dimitrova - September 12, 2012 in Events, External, Labs, Meetups, WG Economics


This Thursday in London, Cleanweb UK invites you to their first Ignite evening, hosted by Forward Technology. Come along and see a great lineup of lightning talks, all about what’s happening with sustainability and the web in the UK.

From clean clouds, to home energy, to climate visualisation, there will be plenty to learn, and plenty of other attendees to get to know. It’ll be an evening to remember, so make sure you’re there! Sign up on the Cleanweb UK website.

Confirmed lightning talks:

  • Loco2 vs The European Rail Booking Monster, Jon Leighton, Loco2
  • Love Thy Neighbour. Rent Their Car, Tom Wright, Whipcar
  • Solar Panels Cross The Chasm, Jason Neylon, uSwitch
  • Weaponising Environmentalism, Chris Adams, AMEE
  • Energy Saving Behaviour – The Motivation Challenge, Paul Tanner, Virtual Technologies
  • Good Food, For Everyone, Forever. Easy, Right?, Ed Dowding, Sustaination
  • The Open Energy Monitor Project, Glyn Hudson & Tristan Lea, OpenEnergyMonitor
  • The Carbon Map, Robin Houston, Carbon Map
  • Putting the Local in Global Warming with Open Data, Jack Townsend, Globe Town
  • Cleanweb in the UK, James Smith, Cleanweb UK

and more…

Cleanweb community


There is a movement growing. Bit by bit, developers are using the power of the web to make our world more sustainable. Whether by improving the way we travel, the way we eat, or the way we use energy, the web is making a difference. The Cleanweb movement is building a global conversation, with local chapters running hackdays and meetups to get people together.

Here in the UK, we’ve been doing this longer than anyone else. Cleanweb-style projects were emerging in 2007, with 2008’s geeKyoto conference bringing together a lot of early efforts.

It’s only appropriate, then, that we have the most active Cleanweb community in the world, in the form of Cleanweb London. With over 150 members, it’s a great base on which we’re building a wider Cleanweb UK movement. We’ve run a hackday, hold regular meetups, and are building towards our first Ignite Cleanweb evening.

This is an expanding community, made of many different projects and groups, and one that has a chance to do some real good. If you’d like to be part of it, or if you already are but didn’t know it, come along to a meetup and get involved!
