
Draft Open Data Policy for Qatar

Rayna Stamboliyska - April 24, 2014 in Open Government Data, Open Knowledge Foundation Local Groups, Open MENA, Open Standards, Policy

The following post was originally published on the blog of our Open MENA community (Middle East and North Africa).

The Qatari Ministry of Information and Communication Technologies (generally referred to as ictQATAR) has launched a public consultation on its draft Open Data Policy. I thus decided to briefly present a (long overdue) outline of Qatar’s Open Data status before offering a few insights into the current Policy document.

Public sector Open Data in Qatar: current status

Due to time constraints, I did not get the chance to properly assess public sector openness for the 2013 edition of the Open Data Index (I served as the MENA editor). My general remarks are as follows (valid both at the end of October 2013 and today):

  • Transport timetables exist online and in digital form but are solely available through non-governmental channels and are in no way available as Open Data. The data is thus neither machine-readable nor freely accessible (as per the Open Definition), nor regularly updated.
  • Government budget, government spending and election results are nowhere to be found online. Although there are no elections in the country (hence no election results to be found; Qatar lacks an elected Parliament), government budget and spending data theoretically exist.
  • The company register is curated by the Qatar Financial Centre Authority, is available online for anyone to read and seems to be up-to-date. Yet the data is not available for download in anything other than PDF (not a machine-readable format) and is not openly licensed, which severely restricts any use one could make of it.
  • National statistics seem to be partly available online through the Qatar Information Exchange office. The data does not, however, seem to be up-to-date, is mostly enclosed in PDFs and is not openly licensed.
  • Legislation content is provided online by Al-Meezan, the Qatari Legal Portal. Although data seems available in digital form, it does not seem to be up-to-date (no results for 2014 regardless of the query). The licensing of the website is not very clear as the mentions include both “copyright State of Qatar” and “CC-by 3.0 Unported”.
  • Postcodes/Zipcodes seem to be provided through the Qatar Postal Services yet the service does not seem to provide a list of all postcodes or a bulk download. The data, if we assume it’s available, is not openly licensed.
  • National map at a scale of 1:250,000 or better (1cm = 2.5km) is nowhere to be found online; at least I did not manage to find it (correct me if I am wrong).
  • Emissions of pollutants data is not available through the Ministry of Environment. (Such data is defined as “aggregate data about the emission of air pollutants, especially those potentially harmful to human health. “Aggregate” means national-level or more detailed, and on an annual basis or more often. Standard examples of relevant pollutants would be carbon monoxides, nitrogen oxides, or particulate matter.”)

This assessment would produce an overall score of 160 (as per the Open Data Index criteria), which would rank Qatar at the same place as Bahrain, that is, much lower than other MENA states (e.g., Egypt and Tunisia). A national portal exists, but its operators do not seem to understand what open formats and licensing mean, as data is solely provided as PDFs and Excel sheets and remains the property of the Government. (The portal basically redirects the user to the aforementioned national statistics website.) Lastly, information requests can be made through the portal.

The 2013 edition of the Open Data Barometer provides a complementary insight and addresses the crucial questions of readiness and outreach:

[There is] strong government technology capacity, but much more limited civil society and private sector readiness to secure benefits from open data. Without strong foundations of civil society freedoms, the Right to Information and Data Protection, it is likely to be far harder for transparency and accountability benefits of open data to be secured. The region has also seen very little support for innovation with open data, suggesting the economic potential of open data will also be hard to realise. This raises questions about the motivation and drivers for the launch of open data portals and platforms.

Screenshot from the Open Data Barometer 2013.

2014 Open Data Policy draft

Given the above assessment, I was pleasantly surprised to discover that a draft Open Data Policy is being composed by ictQATAR. The document sets the record straight from the beginning:

Information collected by or for the government is a national resource which should be managed for public purposes. Such information should be freely available for anyone to use unless there are compelling privacy, confidentiality or security considerations by the government. [...] Opening up government data and information is a key foundation to creating a knowledge based economy and society. Releasing up government-held datasets and providing raw data to their citizens, will allow them to transform data and information into tools and applications that help individuals and communities; and to promote partnerships with government to create innovative solutions.

The draft Policy paper then outlines that “all Government Agencies will put in place measures to release information and data”. The ictQATAR will be in charge of coordinating those efforts and each agency will need to nominate a senior manager internally to handle the implementation of the Open Data policy through the identification and release of datasets as well as the follow-up on requests to be addressed by citizens. The Policy emphasizes that “each agency will have to announce its “Terms of Use” for the public to re-use the data, requirement is at no fees”.

The Policy paper also indicates how the national Open Data portal will operate. It will be “an index to serve as gateway to public for dataset discovery and search, and shall redirect to respective Government Agencies’ data source or webpage for download”. This clearly indicates that each individual Agency will need to create its own website where the data will be released and maintained.

The proposed national Open Data portal is also suggested to operate as an aggregator of “all public feedback and requests, and the government agencies’ responses to the same”. In parallel, the portal will continue to allow the public to submit information requests (as per the freedom of information framework in the country). This is an interesting de facto implementation of the Freedom of Information Act Qatar still lacks.

The draft Policy further states:

Where an Agency decides to make information available to the public on a routine basis, it should do so in a manner that makes the information available to a wide range of users with no requirement for registration, and in a non-proprietary, non-exclusive format.

This is an interesting remark and constitutes one of my main points of criticism of the proposed paper. The paper mentions neither what the recommended formats should be nor anything about licensing. One is thus left wondering whether the Agencies should simply continue to stick to Microsoft Excel and PDF formats. If these were adopted as the default formats, the released data would not be truly open, as neither format is considered open and the files are not machine-readable (a prerequisite for data to be defined as open). Instead of a lengthy description of various formats, it would have been much more useful to elaborate on a preferred format, e.g. CSV.
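To illustrate why a format like CSV matters, here is a minimal sketch (the dataset, column headings and values are hypothetical, purely for illustration) showing how a CSV release can be consumed programmatically with nothing beyond a standard library, which is simply not possible with figures locked inside a PDF:

```python
import csv
import io

# A hypothetical extract from a national-statistics release in CSV form.
# Any programming language can parse this without proprietary software.
sample = """year,indicator,value
2012,population,1832903
2013,population,2003700
"""

# csv.reader turns each line into a list of fields: the data is
# immediately machine-readable, unlike text embedded in a PDF.
rows = list(csv.reader(io.StringIO(sample)))
header, records = rows[0], rows[1:]

# Build a {year: value} mapping for the "population" indicator.
population = {int(r[0]): int(r[2]) for r in records if r[1] == "population"}
print(population)
```

A reader receiving this file can filter, aggregate or chart it in a few lines; the same figures published as a PDF table would first require error-prone manual extraction.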

An additional concern is the lack of mention of a licence. Even though the Policy paper does a great job emphasizing that the forthcoming data needs to be open for anyone to access, use, reuse and adapt, it makes no mention whatsoever of the envisioned licensing. Would the latter rely on existing Creative Commons licences? Or would ictQATAR craft its own licence, as other governments across the world have done?

An additional reason for concern is the unclear status of charges for access to data. The Policy paper mentions at least three times (sections 4.2 (i); 4.4 (ii); Appendix 6, ‘Pricing Framework’ indicator) that the data has to be provided at no cost. Yet the Consultation poses the question:

Open Data should be provided free of charge where appropriate, to encourage its widespread use. However, where is it not possible, should such data be chargeable and if so, what are such datasets and how should they be charged to ensure they are reasonable?

This question indicates that financial participation by potential users is considered probable. If such a situation materialized, it would damage this promising Open Data Policy, as paying to access data is one of the greatest barriers to access to information (regardless of how low the fee might be). If data is provided at a cost, it is no longer Open Data: by definition, Open Data is accessible at no cost to everyone.

My personal impression is that the Policy draft is a step in the right direction. Yet the success of such a policy, if implemented, remains very much dependent on the willingness of the legislator to enable a shift towards increased transparency and accountability. My concerns stem from the fact that the national legislation has precedence over ictQATAR’s policy frameworks which may make it very difficult to achieve a satisfactory Open Data shift. The Policy draft states:

Agencies may also develop criteria at their discretion for prioritizing the opening of data assets, accounting for a range of factors, such as the volume and quality of datasets, user demand, internal management priorities, and Agency mission relevance, usefulness to the public, etc.

The possibility that an Agency might decide not to open up data because it is deemed potentially harmful to the country’s image, or suchlike, is real. Given that no Freedom of Information Act exists, there is no appeal mechanism by which to challenge a negative decision on the grounds that the public interest outweighs purported security concerns. That is when the real test of how committed the government and its Agencies are to openness and transparency will come.

Appendix 6 is thus very imprecise regarding the legal and security constraints that might prevent opening up public sector data. Furthermore, the precedence of national legislation should not be neglected: for example, it prohibits any auditing or data release related to contracting and procurement; no tenders are published for public scrutiny. Although the country has recently established national anti-corruption institutions, there is a lack of oversight of the Emir’s decisions. According to Transparency International’s Government Defence Anti-Corruption Index 2013, “the legislature is not informed of spending on secret items, nor does it view audit reports of defence spending and off-budget expenditure is difficult to measure”.

Note: I have responded to the consultation in my personal capacity (not as OpenMENA). My response contains additional insights which I have chosen not to feature here.

Open Knowledge Festival Call for Volunteers Opens Today!

Beatrice Martini - April 22, 2014 in Events, Featured, Join us, OKFest, OKFestival


  • What: Join the Volunteers Team at OKFestival 2014!
  • When: July 15-17th, Berlin, Germany
  • Why? Lots of reasons! Find them here!

The OKFestival team is launching our call for volunteers today, and we are excited to bring on board amazing members of our community who will help us to make this festival the huge success we are anticipating. Apply now!

Volunteers are integral to our ability to run OKFestival – without you, we wouldn’t have enough hands to get everything done over the days of the festival!

Join Us!

If you want to come to Berlin this July 15th-17th and help us to create the best Open festival there has ever been, please apply today at the link above, and then spread the word to ensure others know about the festival too!

There is no hard deadline on applying, but the sooner you apply the better your chance of being selected to come and make Open history with us at this year’s OKFestival. We can’t wait to see you there!

Building an archaeological project repository II: Where are the research data repositories?

Guest - April 17, 2014 in CKAN, Open Science, WG Archaeology

This is a guest post by Anthony Beck, Honorary fellow, and Dave Harrison, Research fellow, at the University of Leeds School of Computing


Data repository as research tool

In a previous post, we examined why Open Science is necessary to take advantage of the huge corpus of data generated by modern science. In our project Detection of Archaeological residues using Remote sensing Techniques, or DART, we adopted Open Science principles and made all the project’s extensive data available through a purpose-built data repository built on the open-source CKAN platform. But with so many academic repositories, why did we need to roll our own? A final post will look at how the portal was implemented.

DART: data-driven archaeology

DART’s overall aim is to develop analytical methods to differentiate archaeological sediments from non-archaeological strata, on the basis of remotely detected phenomena (e.g. resistivity, apparent dielectric permittivity, crop growth, thermal properties etc). DART is a data rich project: over a 14 month period, in-situ soil moisture, soil temperature and weather data were collected at least once an hour; ground based geophysical surveys and spectro-radiometry transects were conducted at least monthly; aerial surveys collecting hyperspectral, LiDAR and traditional oblique and vertical photographs were taken throughout the year, and laboratory analyses and tests were conducted on both soil and plant samples. The data archive itself is in the order of terabytes.

Analysis of this archive is ongoing; meanwhile, this data and other resources are made available through open access mechanisms under liberal licences and are thus accessible to a wide audience. To achieve this we used the open-source CKAN platform to build a data repository, DARTPortal, which includes a publicly queryable spatio-temporal database (on the same host), and can support access to individual data as well as mining or analysis of integrated data.

This means we can share the data analysis and transformation processes and demonstrate how we transform data into information and synthesise this information into knowledge (see, for example, this Ipython notebook which dynamically exploits the database connection). This is the essence of Open Science: exposing the data and processes that allow others to replicate and more effectively build on our science.
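Because DARTPortal is built on CKAN, its datasets are discoverable through CKAN’s JSON Action API. The sketch below shows what such a query might look like; `package_search` is a standard CKAN endpoint, but the portal host and the dataset names are placeholders, not confirmed DART values, and the response body is a hand-written illustration of the typical shape rather than real output:

```python
import json
from urllib.parse import urlencode

# Placeholder host: substitute the actual portal's base URL.
BASE = "https://dartportal.example.org/api/3/action/package_search"

def search_url(query, rows=10):
    """Build a CKAN package_search request URL for a free-text query."""
    return BASE + "?" + urlencode({"q": query, "rows": rows})

url = search_url("soil moisture")

# Typical shape of a CKAN Action API response (abridged, illustrative values):
response_body = json.loads("""
{"success": true,
 "result": {"count": 2,
            "results": [{"name": "soil-moisture-2011"},
                        {"name": "soil-moisture-2012"}]}}
""")

# Pull out the matching dataset names for further processing.
names = [pkg["name"] for pkg in response_body["result"]["results"]]
print(names)
```

It is exactly this kind of scriptable access, rather than manual downloads, that lets a notebook re-run an entire analysis against the live repository.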

Lack of existing infrastructure

Pleased though we are with our data repository, it would have been nice not to have to build it! Individual research projects should not bear the burden of implementing their own data repository framework. This is much better suited to local or national institutions, where economies of scale come into their own. Yet in 2010 the provision of research data infrastructure that supported what DART did was either non-existent or poorly advertised. Where individual universities provided institutional repositories, these were focused on publications (the currency of prestige and career advancement) and not on data. In any case, none of the DART collaborating partners provided such a data infrastructure.

Data sharing sites like Figshare did not exist – and by the time Figshare did exist, the size of our hyperspectral data, in particular, was quite rightly a worry. This situation is slowly changing, but it is still far from ideal. The positions taken by Research Councils UK and the Engineering and Physical Sciences Research Council (EPSRC) on improving access to data are key catalysts for change. The EPSRC statement is particularly succinct:

Two of the principles are of particular importance: firstly, that publicly funded research data should generally be made as widely and freely available as possible in a timely and responsible manner; and, secondly, that the research process should not be damaged by the inappropriate release of such data.

This has produced a simple economic incentive: if research institutions cannot demonstrate that they can manage research data in the manner required by the funding councils, they will become ineligible to receive grant funding from those councils. The impact is that the majority of universities are now developing their own data repositories, or collaborating on communal ones.

But what about formal data deposition environments?

DART was generously funded through the Science and Heritage Programme supported by the UK Arts and Humanities Research Council (AHRC) and the EPSRC. This means that these research councils will pay for data archiving in the appropriate domain repository, in this case the Archaeology Data Service (ADS). So why produce our own repository?

Deposition to the ADS would only have occurred after the project had finished. With DART, the emphasis has been on re-use and collaboration rather than primarily on archiving. These goals are not mutually exclusive: the methods adopted by DART mean that we produced data that is directly suitable for archiving (well documented ASCII formats, rich supporting description and discovery metadata, etc) whilst also allowing more rapid exposure and access to the ‘full’ archive. This resulted in DART generating much richer resource discovery and description metadata than would have been the case if the data was simply deposited into the ADS.

The point of the DART repository was to produce an environment which would facilitate good data management practice and collaboration during the lifetime of the project. This is representative of a crucial shift in thinking, where projects and data collectors consider re-use, discovery, licences and metadata at a much earlier stage in the project life cycle: in effect, to create dynamic and accessible repositories that have impact across the broad stakeholder community rather than focussing solely on the academic community. The same underpinning philosophy of encouraging re-use is seen at both Figshare and the DataHub. Whilst formal archiving of data is to be encouraged, if archived data is not re-usable, or more importantly easily re-usable, within orchestrated scientific workflow frameworks, then what is the point?

In addition, it is unlikely that the ADS will take the full DART archive. It has been said that archaeological archives can produce lots of extraneous or redundant ‘stuff’. This can be exacerbated by the unfettered use of digital technologies – how many digital images are really required for the same trench? Whilst we have sympathy with this argument, there is a difference between ‘data’ and ‘pretty pictures’: as data analysts, we consider that a digital photograph is normally a data resource and rarely a pretty picture. Hence, every image has value.

This is compounded when advances in technology mean that new data can be extracted from ‘redundant’ resources. For example, Structure from Motion (SfM) is a Computer Vision technique that extracts 3D information from 2D objects. From a series of overlapping photographs, SfM techniques can be used to extract 3D point clouds and generate orthophotographs from which accurate measurements can be taken. In the case of SfM there is no such thing as redundancy, as each image becomes part of a ‘bundle’ and the statistical characteristics of the bundle determine the accuracy of the resultant model. However, one does need to be pragmatic, and it is currently impractical for organisations like the ADS to accept unconstrained archives. That said, it is an area that needs review: if a research object is important enough to have detailed metadata created about it, then it should be important enough to be archived.

For DART, this means that the ADS is hosting a subset of the archive in long-term re-use formats, which will be available in perpetuity (which formally equates to a maximum of 25 years), while the DART repository will hold the full archive in long-term re-use formats until we run out of server money. We are in discussion with Leeds University about migrating all the data objects over to the new institutional repository, with sparkling new DOIs, and we can transfer the metadata held in CKAN over to Open Knowledge’s public repository, the DataHub. In theory nothing should be lost.

How long is forever?

The point on perpetuity is interesting. Collins Dictionary defines perpetuity as ‘eternity’. However, the ADS defines ‘digital’ perpetuity as 25 years. This raises the question: is it more effective in the long term to deposit in ‘formal’ environments (with an intrinsic focus on preservation format over re-use), or in ‘informal’ environments with a focus on re-use and engagement over preservation (Flickr, Wikimedia Commons, the CKAN-based DART repository, etc.)? Both Flickr and Wikimedia Commons have been around for over a decade. Distributed peer-to-peer sharing, as used in Git, produces more robust and resilient environments which are equally suited to longer-term preservation. Whilst the authors appreciate that the situation is much more nuanced, particularly with the introduction of platforms that facilitate collaborative workflow development, this does have an impact on long-term deployment.

Choosing our licences

Licences are fundamental to the successful re-use of content. Licences describe who can use a resource, what they can do with this resource and how they should reference any resource (if at all).

Two lead organisations have developed legal frameworks for content licensing, Creative Commons (CC) and Open Data Commons (ODC). Until the release of CC version 4, published in November 2013, the CC licence did not cover data. Between them, CC and ODC licences can cover all forms of digital work.

At the top level the licences are permissive public domain licences (CC0 and PDDL respectively) that impose no restrictions on the licensee’s use of the resource. ‘Anything goes’ in a public domain licence: the licensee can take the resource and adapt it, translate it, transform it, improve upon it (or not!), package it, market it, sell it, etc. Constraints can be added to the top-level licence by employing the following clauses:

  • BY – By attribution: the licensee must attribute the source.
  • SA – Share-alike: if the licensee adapts the resource, they must release the adapted resource under the same licence.
  • NC – Non-commercial: the licensee must not use the work within a commercial activity without prior approval. Interestingly, in many areas of the world, the use of material in university lectures may be considered a commercial activity. The non-commercial restriction is about the nature of the activity, not the legal status of the institution doing the work.
  • ND – No derivatives: the licensee cannot derive new content from the resource.

Each of these clauses decreases the ‘open-ness’ of the resource. In fact, the NC and ND clauses are not intrinsically open (they restrict both who can use the resource and what they can do with it). These restrictive clauses have the potential to produce licence incompatibilities which may introduce profound problems in the medium to long term. This is particularly relevant to the SA clause. Share-alike means that any derived output must be licensed under the same conditions as the source content. If content is combined (or mashed up) – which is essential when one is building up a corpus of heritage resources – then content created under an SA clause cannot be combined with content that includes a restrictive clause (BY, NC or ND) that is not in the source licence. This licence incompatibility has a significant impact on the nature of the data commons. It has the potential to fragment the data landscape, creating pockets of knowledge which are rarely used in mainstream analysis, research or policy making. This will be further exacerbated when automated data aggregation and analysis systems become the norm. A permissive licence without clauses like Non-commercial, Share-alike or No-derivatives removes such licence and downstream re-user fragmentation issues.
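The share-alike incompatibility described above can be sketched as a simple rule: since the derivative must carry the source’s licence, mixed-in content must not demand any clause the source licence lacks. The encoding below is an illustrative simplification (the function name and clause sets are our own shorthand), not legal advice:

```python
# Illustrative simplification of the share-alike rule: an SA-licensed
# source forces the combined work onto the source licence, so other
# content demanding extra clauses is incompatible.
def sa_compatible(source_clauses, other_clauses):
    """Can content under `other_clauses` be combined with SA source content?"""
    source = set(source_clauses)
    if "SA" not in source:
        return True  # no share-alike constraint on the source
    # Under SA the combined work must use the source licence, so the
    # other content must not carry clauses the source licence lacks.
    return set(other_clauses) <= source

# CC-BY-SA content can absorb CC-BY material...
print(sa_compatible({"BY", "SA"}, {"BY"}))        # True
# ...but not CC-BY-NC material: NC is not in the source licence.
print(sa_compatible({"BY", "SA"}, {"BY", "NC"}))  # False
```

Scaled up to an automated aggregation pipeline, a check like this is exactly where SA-licensed pockets of content drop out of the mashable commons.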

For completeness, specific licences have been created for Open Government Data. The UK Government Data Licence for public sector information is essentially an open licence with a BY attribution clause.

At DART we have followed the guidelines of the Open Data Institute and separated creative content (illustrations, text, etc.) from data content. Hence, DART content is either CC-BY or ODC-BY respectively. In the future we believe it would be useful to drop the BY (attribution) clause. This would stop attribution stacking (if the resource you are using is a derivative of a derivative of a derivative of a … you get the picture; at what stage do you stop attributing?), and anything which requires bureaucracy, such as attributing an image in a PowerPoint presentation, inhibits re-use (one should always assume that people are intrinsically lazy). There is a post advocating ccZero+ by Dan Cohen. However, impact tracking may mean that the BY clause becomes a default for academic deposition.

The ADS uses a more restrictive, bespoke default licence which does not map to national or international licence schemes (they also do not recognise non-CC licences). Resources under this licence can only be used for teaching, learning, and research purposes. Of particular concern is their use of the NC clause and possible use of the ND clause (depending on how you interpret the licence). Interestingly, policy changes mean that the use of data under the bespoke ADS licence becomes problematic if university teaching activities are determined to be commercial. It is arguable that the payment of tuition fees represents a commercial activity. If this is true, then resources released under the ADS licence cannot be used within university teaching that is part of a commercial activity. Hence, the policy change in student tuition and university funding has an impact on the commercial nature of university teaching, which has a subsequent impact on what data or resources universities are licensed to use. Whilst it may never have been the intention of the ADS to produce a licence with this potential paradox, it is a problem when bespoke licences are developed, even if they were originally perceived to be relatively permissive. To remove this ambiguity, it is recommended that submissions to the ADS are provided under a CC licence, which renders the bespoke ADS licence void.

In the case of DART, these licence variations with the ADS should not be a problem. Our licences are permissive (by attribution is the only clause we have included). This means the ADS can do anything they want with our resources as long as they cite the source. In our case this would be the individual resource objects or collections on the DART portal. This is a good thing, as the metadata on the DART portal is much richer than the metadata held by the ADS.

Concerns about opening up data, and responses which have proved effective

Christopher Gutteridge (University of Southampton) and Alexander Dutton (University of Oxford) have collated a Google doc entitled ‘Concerns about opening up data, and responses which have proved effective’. This document describes a number of concerns commonly raised by academic colleagues about increasing access to data. For DART, two issues proved problematic that were not covered by this document:

  • The relationship between open data and research novelty and the impact this may have on a PhD submission.
  • Journal publication – specifically that a journal won’t publish a research paper if the underlying data is open.

The former point is interesting – does the process of undertaking open science, or at least providing open data, undermine the novelty of the resultant scientific process? With open science it could be difficult to directly attribute the contribution, or novelty, of a single PhD student to an openly collaborative research process. That said, if online versioning tools like Git are used, then it is clear who has contributed what to a piece of code or a workflow (the benefits of the BY clause). This argument is less solid when we are talking solely about open data. Whilst it is true that other researchers (or anybody else for that matter) have access to the data, it is highly unlikely that multiple researchers will use the same data to answer exactly the same question. If they do ask the same question (and, making the optimistic assumption, reach the same conclusion), it is still highly unlikely that they will have done so by the same methods; and even if they do, their implementations will be different. If multiple methods using the same source data reach the same conclusion, then there is an increased likelihood that the conclusion is correct and that the science is even more certain. The underlying point here is that 21st-century scientific practice will substantially benefit from people showing their working. Exposure of the actual process of scientific enquiry (the algorithms, code, etc.) will make the steps between data collection and publication more transparent, reproducible and peer-reviewable – or, quite simply, more scientific. Hence, we would argue that open data and research novelty is only a problem if plagiarism is a problem.

The journal publication point is equally interesting. Publications are the primary metric for academic career progression and kudos. In this instance it was the policy of the ‘leading journal in this field’ that it would not publish a paper based on a dataset that was already published. No credible reasons were provided for this clause, which seems draconian in the extreme. It does indicate that no one-size-fits-all approach will work in the academic landscape. It will also be interesting to see how this journal, which publishes work mainly funded by the EPSRC, responds to the EPSRC guidelines on open data.

This is also a clear demonstration that the academic community needs to develop new metrics better suited to 21st-century research and scholarship, directly linking academic career progression to sources of impact that go beyond publications. Furthermore, academia needs some high-profile exemplars that demonstrate clearly how to deal with such change. The policy shift and ongoing debate concerning ‘Open Access’ publications in the UK is changing the relationship between funders, universities, researchers, journals and the public – a similar debate needs to occur about open data and open science.

The altmetrics community is developing new metrics for “analyzing, and informing scholarship” and has described its ethos in its manifesto. The Research Councils and Governments have taken a much greater interest in the impact of publicly funded research. Importantly, public, social and industry impact are as important as academic impact. It is incumbent on universities to respond by directly linking academic career progression to impact, and by encouraging improved access to the underlying data and processing outputs of the research process through data repositories and workflow environments.

The Tragic Consequences of Secret Contracts

Theodora Middleton - April 14, 2014 in Campaigning, Featured, Stop Secret Contracts

The following post is by Seember Nyager, CEO of the Public and Private Development Centre in Nigeria, one of our campaign partners in the Stop Secret Contracts campaign

Every day, secret contracts carried out within public institutions confirm that the public interest is not being served. A few days ago, young Nigerians in Abuja were arrested for protesting against the reckless conduct of the recruitment exercise at the Nigerian Immigration Service (NIS) that led to the death of 19 applicants.

Although the protesters were later released, the irony still stings that whilst no one has been held for the resulting deaths from the reckless recruitment conduct, the young voices protesting against this grave misconduct are being silenced by security forces. Most heart-breaking is the reality that the deadly outcomes of the recruitment exercise could have been avoided with more conscientious planning, through an adherence to due process and diligence in the selection of consultants to carry out the exercise.

A report released by Premium Times indicates that the recruitment exercise was conducted exclusively by the Minister of Interior, who hand-picked the consultant that carried out the exercise at the NIS. The non-responsiveness of the Ministry in providing civic organizations, including BudgIT and PPDC, with requested details of the process through which the consultant was selected gives credence to reports of due process being flouted.

The non-competitive process through which the consultant was selected is in sharp breach of the public procurement law, and its results have undermined the concept of value for money in the award of contracts for public services. Although a recruitment website was built and deployed by the hired consultant, the information gathered by the website does not seem to have informed the plan for conducting the exercise across the country, which left Nigerians dead in its wake. Whilst the legality of the revenue generated from over 710,000 applicants is questioned, it is appalling that these resources were not used to ensure a better organized recruitment exercise.

This is not the first time that public institutions in Nigeria have displayed reckless conduct in the supposed administration of public services to the detriment of Nigerians. The recklessness with which the Ministry of Aviation took a loan to buy highly inflated vehicles, the difficulty faced by BudgIT and PPDC in tracking the exact amount of SURE-P funds spent, and the 20 billion dollars unaccounted for by the NNPC are just a few of the cases where nation-building and development are undermined by public institutions.

In the instance of the NIS recruitment conducted three weeks ago, some of the consequences have been immediate and fatal, yet there is foot dragging in apportioning liability and correcting the injustice that has been dealt to Nigerians. On the same issue, public resources have been speedily deployed to silence protesters.

It is time that our laws requiring due process and diligence are fully enforced. Peaceful protests should no longer be clamped down on, because Nigerians are justified in being outraged by any form of institutional recklessness. The Nigerian Immigration Service recruitment exercise painfully illustrates that the outcomes of secret contracts can be deadly, and such behaviour cannot be allowed to continue. We must stop institutional recklessness, and we must stop secret contracts.

Ms. Seember Nyager coordinates procurement monitoring in Nigeria. Follow Nigerian Procurement Monitors at @Nig_procmonitor.

Why secret contracts matter in aid transparency

Nicole Valentinuzzi - April 11, 2014 in Campaigning, Stop Secret Contracts

The following guest post is by Nicole Valentinuzzi, from our Stop Secret Contracts campaign partner Publish What You Fund.

A new campaign to Stop Secret Contracts, supported by the Open Knowledge Foundation, Sunlight Foundation and many other international NGOs, aims to make sure that all public contracts are made available in order to stop corruption before it starts.

As transparency campaigners ourselves, Publish What You Fund is pleased to be a supporter of this new campaign. We felt it was important to lend our voice to the call for transparency as an approach that underpins all government activity.

We campaign for more and better information about aid, because we believe that by opening development flows, we can increase the effectiveness and accountability of aid. We also believe that governments have a duty to act transparently, as they are ultimately responsible to their citizens.

This includes publishing all public contracts that governments put out for tender, from school books to sanitation systems. These publicly tendered contracts are estimated at nearly US$9.5 trillion each year globally, yet many are agreed behind closed doors.

These secret contracts often lead to corruption, fraud and unaccountable outsourcing. If the basic facts about a contract aren’t made publicly available – for how much and to whom to deliver what – then it is not possible to make sure that corruption and abuses don’t happen.

But what do secret contracts have to do with aid transparency, which is what we campaign for at Publish What You Fund? Well, consider the recent finding by the campaign that each year Africa loses nearly a quarter of its GDP to corruption…then consider what that money could have been spent on instead – things like schools, hospitals and roads.

This is money that in many cases is intended to be spent on development. It should be published – through the International Aid Transparency Initiative (IATI), for example – so that citizens can follow the money and hold governments accountable for how it is spent.

But corruption isn’t just a problem in Africa – the Stop Secret Contracts campaign estimates that Europe loses €120 billion to corruption every year.

At Publish What You Fund, we tell the world’s biggest providers of development cooperation that they must publish their aid information to IATI because it is the only internationally-agreed, open data standard. Information published to IATI is available to a wide range of stakeholders for their own needs – whether people want to know about procurement, contracts, tenders or budgets. More than that, this is information that partner countries have asked for.

Governments use tax-payer money to award contracts to private companies in every sector, including development. We believe that any companies that receive public money must be subject to the same transparency requirements as governments when it comes to the goods and services they deliver.

Greater transparency and clearer understanding of the funds that are being disbursed by governments or corporates to deliver public services can only be helpful in building trust and supporting accountability to citizens. Whether it is open aid or open contracts, we need to get the information out of the hands of governments and into the hands of citizens.

Ultimately for us, the question remains how transparency will improve aid – and open contracts are another piece of the aid effectiveness puzzle. Giving citizens full and open access to public contracts is a crucial first step in increasing global transparency. Sign the petition now to call on world leaders to make this happen.

OKFestival 2014 Financial Aid Programme Launches Today!

Beatrice Martini - April 9, 2014 in Events, Featured, News, OKFest, OKFestival

The OKFestival 2014 Team is happy to announce that we are launching our Financial Aid Programme today! We’re delighted to support and ensure the attendance of those with great ideas who are actively involved in the open movement, but whose distance or finances make it difficult for them to get to this year’s festival in Berlin. Diversity and inclusivity are a huge part of our festival ethos and we are committed to ensuring broad participation from all corners of the world. We’re striving to create a forum for all ideas and all people and our Financial Aid Programme will help us to do just that.

What: OKFestival, 15-17th July 2014, Berlin

How to Apply: Check out our Financial Aid webpage

Deadline: Sunday 4th May

Our Travel Grants cover travel and accommodation costs, and our aim is to get you to Berlin if you can’t quite make it there yourself. For more information on what we’ll cover – and what we won’t – how to apply, and what to expect if you do, have a look at our Financial Aid page.

Image credit: Flickr user Andrew Nash

Skillshares and Stories: Upcoming Community Sessions

Heather Leson - April 3, 2014 in CKAN, Events, Network, OKF Brazil, OKF Projects, Open Access, Open Knowledge Foundation Local Groups, School of Data

We’re excited to share with you a few upcoming Community Sessions from the School of Data, CKAN, Open Knowledge Brazil, and Open Access. As we mentioned earlier this week, we aim to connect you to each other. Join us for the following events!

What is a Community Session: These online events can be in a number of forms: a scheduled IRC chat, a community google hangout, a technical sprint or hackpad editathon. The goal is to connect the community to learn and share their stories and skills.

We held our first Community Session yesterday (see our Wiki Community Session notes). The remaining April events will be online via G+, as public Hangouts on Air. The video will be available on the Open Knowledge YouTube Channel after each event. Questions are welcome via Twitter and G+.

All these sessions take place on Wednesdays at 10:30 – 11:30 am ET / 14:30 – 15:30 UTC.

Mapping with Ketty and Ali: a School of Data Skillshare (April 9, 2014)

Making a basic map from spreadsheet data: we’ll explore tools like QGIS (a free and open-source Geographic Information System) and TileMill (a tool to design beautiful interactive web maps). Our guest trainers are Ketty Adoch and Ali Rebaie.

To join the Mapping with Ketty and Ali Session on April 9, 2014

Q & A with Open Knowledge Brazil Chapter featuring Everton(Tom) Zanella Alvarenga (April 16, 2014)

Around the world, local groups, Chapters, projects, working groups and individuals connect to Open Knowledge. We want to share your stories.

In this Community Session, we will feature Everton (Tom) Zanella Alvarenga, Executive Director.

Open Knowledge Foundation Brazil is a newish Chapter. Tom will share his experiences growing a chapter and community in Brazil. We aim to connect you to community members around the world. We will also open up the conversation to all things Community. Share your best practices!

Join us on April 16, 2014 via G+

Take a CKAN Tour (April 23, 2014)

This week we will give an overview and tour of CKAN – the leading open source open data platform used by the national governments of the US, UK, Brazil, Canada, Australia, France, Germany, Austria and many more. This session will cover why data portals are useful, what they provide and showcase examples and best practices from CKAN’s varied user base! Our special guest is Irina Bolychevsky, Services Director (Open Knowledge Foundation).
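For anyone curious ahead of the session, every CKAN portal exposes the same JSON “Action API”, which is part of what makes it useful as a shared platform across governments. The sketch below, in Python, shows how a dataset search request is built and how its response can be parsed; the portal URL, function names and abridged sample response are our own illustration, not output from any real portal.

```python
import json
from urllib.parse import urlencode

def package_search_url(base, query, rows=5):
    """Build a CKAN Action API URL for the package_search action,
    which searches the portal's dataset catalogue."""
    return f"{base}/api/3/action/package_search?{urlencode({'q': query, 'rows': rows})}"

# Abridged, illustrative response in the documented Action API shape:
# a top-level "success" flag and a "result" object with the matches.
sample_response = json.loads("""
{"success": true,
 "result": {"count": 2,
            "results": [{"name": "spending-2014", "title": "Spending 2014"},
                        {"name": "budget-2014", "title": "Budget 2014"}]}}
""")

def dataset_titles(response):
    """Pull the human-readable dataset titles out of a package_search response."""
    if not response.get("success"):
        raise RuntimeError("CKAN API call failed")
    return [pkg["title"] for pkg in response["result"]["results"]]

print(package_search_url("https://demo.ckan.org", "budget"))
# -> https://demo.ckan.org/api/3/action/package_search?q=budget&rows=5
print(dataset_titles(sample_response))
# -> ['Spending 2014', 'Budget 2014']
```

The same URL pattern works against any CKAN instance, which is why tools built for one portal tend to work against the others.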

Learn and share your CKAN stories on April 23, 2014

(Note: We will share more details about the April 30th Open Access session soon!)

Resources

Coding da Vinci – Open GLAM challenge in Germany

Guest - April 3, 2014 in Events, OKF Germany, Open GLAM

The following blog is by Helene Hahn, Open GLAM coordinator at Open Knowledge Germany. It is cross-posted from the Open GLAM blog

More and more galleries, libraries, archives and museums (GLAMs) are digitizing their collections to make them accessible online and to preserve our heritage for future generations. By January 2014, over 30 million objects had been made available via Europeana – among which over 4.5 million records were contributed by German institutions.

Through the contribution of open data and content, cultural institutions provide tools for the thinkers and doers of today, no matter what sector they’re working in; in this way, cultural heritage brings not just aesthetic beauty, but also wider cultural and economic value beyond initial estimations.

Coding da Vinci, the first German open cultural data hackathon will take place in Berlin to bring together both cultural heritage institutions and the hacker & designer community to develop ideas and prototypes for the cultural sector and the public. It will be structured as a 10-week-challenge running from April 26th until July 6th under the motto “Let them play with your toys!”, coined by Jo Pugh of the UK National Archives. All projects will be presented online for everyone to benefit from, and prizes will be awarded to the best projects at the end of the hackathon.

The participating GLAMs have contributed a huge range of data for use in the hackathon, including highlights such as urban images (including metadata) of Berlin in the 18th and 19th centuries, scans of shadow boxes containing insects and Jewish address-books from the 1930s in Germany, and much more! In addition, the German Digital Library will provide their API to hackathon participants. We’re also very happy to say that for a limited number of participants, we can offer to cover travel and accommodation expenses – all you have to do is apply now!

All prizes, challenges and datasets will soon be presented online.

This hackathon is organized by: German Digital Library, Service Centre Digitization Berlin, Open Knowledge Foundation Germany, and Wikimedia Germany.

The School of Data Journalism 2014!

Milena Marin - April 3, 2014 in Data Journalism, Events, Featured, School of Data

We’re really excited to announce this year’s edition of the School of Data Journalism, at the International Journalism Festival in Perugia, 30th April – 4th May.

It’s the third time we’ve run it (how time flies!), together with the European Journalism Centre, and it’s amazing seeing the progress that has been made since we started out. Data has become an increasingly crucial part of any journalist’s toolbox, and its rise is only set to continue. The Data Journalism Handbook, which was born at the first School of Data Journalism in Perugia, has become a go-to reference for all those looking to work with data in the news – a fantastic testament to the strength of the data journalism community.

As Antoine Laurent, Innovation Senior Project Manager at the EJC, said:

“This is really a must-attend event for anyone with an interest in data journalism. The previous years’ events have each proven to be watershed moments in the development of data journalism. The data revolution is making itself felt across the profession, offering new ways to tell stories and speak truth to power. Be part of the change.”

Here’s the press release about this year’s event – share it with anyone you think might be interested – and book your place now!


PRESS RELEASE FOR IMMEDIATE RELEASE

April 3rd, 2014

Europe’s Biggest Data Journalism Event Announced: the School of Data Journalism

The European Journalism Centre, Open Knowledge and the International Journalism Festival are pleased to announce the 3rd edition of Europe’s biggest data journalism event, the School of Data Journalism. The 2014 edition takes place in Perugia, Italy between 30th of April – 4th of May as part of the International Journalism Festival.

#ddjschool #ijf13

A team of about 25 expert panelists and instructors from the New York Times, the Daily Mirror, Twitter, Ask Media, Knight-Mozilla and others will lead participants in a mix of discussions and hands-on sessions, covering everything from cross-border data-driven investigative journalism to emergency reporting, spreadsheets, social media data, data visualisation and mapping techniques for journalism.

Entry to the School of Data Journalism panels and workshops is free. Last year’s edition featured a stellar team of panelists and instructors, attracted hundreds of journalists and was fully booked within a few days. The year before saw the launch of the seminal Data Journalism Handbook, which remains the go-to reference for practitioners in the field.

Antoine Laurent, Innovation Senior Project Manager at the EJC said:

“This is really a must-attend event for anyone with an interest in data journalism. The previous years’ events have each proven to be watershed moments in the development of data journalism. The data revolution is making itself felt across the profession, offering new ways to tell stories and speak truth to power. Be part of the change.”

Guido Romeo, Data and Business Editor at Wired Italy, said:

“I teach in several journalism schools in Italy. You won’t get this sort of exposure to such teachers and tools in any journalism school in Italy. They bring in the most avant garde people, and have a keen eye on what’s innovative and new. It has definitely helped me understand what others around the world in big newsrooms are doing, and, more importantly, how they are doing it.”

The full description and (free) registration for the sessions can be found at http://datajournalismschool.net. You can also find all the details on the International Journalism Festival website: http://www.journalismfestival.com/programme/2014

ENDS

Contacts: Antoine Laurent, Innovation Senior Project Manager, European Journalism Centre: laurent@ejc.net; Milena Marin, School of Data Programme Manager, Open Knowledge Foundation: milena.marin@okfn.org

Notes for editors

Website: http://datajournalismschool.net Hashtag: #DDJSCHOOL

The School of Data Journalism is part of the European Journalism Centre’s Data Driven Journalism initiative, which aims to enable more journalists, editors, news developers and designers to make better use of data and incorporate it further into their work. Started in 2010, the initiative also runs the website DataDrivenJournalism.net as well as the Doing Journalism with Data MOOC, and produced the acclaimed Data Journalism Handbook.

About the International Journalism Festival (www.journalismfestival.com) The International Journalism Festival is the largest media event in Europe. It is held every April in Perugia, Italy. The festival is free entry for all attendees for all sessions. It is an open invitation to listen to and network with the best of world journalism. The leitmotiv is one of informality and accessibility, designed to appeal to journalists, aspiring journalists and those interested in the role of the media in society. Simultaneous translation into English and Italian is provided.

About Open Knowledge (www.okfn.org) Open Knowledge, founded in 2004, is a worldwide network of people who are passionate about openness, using advocacy, technology and training to unlock information and turn it into insight and change. Our aim is to give everyone the power to use information and insight for good. Visit okfn.org to learn more about the Foundation and its major projects including SchoolOfData.org and OpenSpending.org.

About the European Journalism Centre (www.ejc.net) The European Journalism Centre is an independent, international, non-profit foundation dedicated to maintaining the highest standards in journalism in particular and the media in general. Founded in 1992 in Maastricht, the Netherlands, the EJC closely follows emerging trends in journalism and monitors the interplay between media economy and media culture. It also hosts more than 1,000 journalists each year in seminars and briefings on European and international affairs.

The Open Knowledge Foundation Newsletter, April 2014

Theodora Middleton - April 2, 2014 in Featured, Newsletter

Hi!

After last month’s launch-fest, March has been a thoughtful month, with reflective and planning pieces taking centre-stage on our blog. Of course OKFestival has been ramping up since its launch, giving more detail on topics and running sessions to help with submitting proposals; however we’ve also had more from the Community Survey results, as well as guest posts dealing with ‘open washing’ and exploring what open data means to different people.

Keep checking in on the Community Stories Tumblr for the latest news on what people are doing around the world to push the agenda for Open Knowledge. This month’s updates come from India, Tanzania, Greece, Malta, Russia and Germany, and from OpenMENA (Middle East and North Africa) – the new group focusing on Open Knowledge in the Arab world.

Also, congratulations to our very own Rufus Pollock, named a Tech Hero for Good by NESTA :-)

OKFestival 2014

Plans have been moving at pace over the last month.

So many proposals came in, and so many people wanted more time to submit, that we extended the deadline for proposals to March 30th. We’ll have to wait until May to learn which proposals have been accepted, and later in May for the programme announcement, but many thanks to all who proposed sessions – and good luck to you!

It’s not long to go now, so don’t forget to buy your ticket!

If you need distraction from the wait, check out this flash-back to last year: the 2013 Open Reader, a collection of stories and articles inspired by Open Knowledge Conference 2013.

Stop Secret Contracts

Last month we launched our campaign to stop secret contracts, asking various organisations to partner with us and asking those of you who care about openness to sign up and show your support.

Spread the word to your colleagues, friends and family to show that we will not stand for corruption, fraud, unaccountability or backdoor deals.

Signatures not enough? To get more involved please contact us and help us stop secret contracts.

#SecretContracts

Coming Up

The School of Data heads to Perugia! Europe’s Biggest Data Journalism Event, from The European Journalism Centre, Open Knowledge and the International Journalism Festival, the School of Data Journalism takes place 30th April to 4th May. This event has an impressive programme with free entry to panel and workshops so check it out and register to save your place.

OGP grows to 62 countries. The Open Government Partnership (OGP) will welcome 8 new countries during April: ‘Cohort 4’ consists of Australia, Ireland, Malawi, Mongolia, New Zealand, Sierra Leone, Serbia, and Trinidad and Tobago.

And… time-zone changes! They mess with schedules and deadlines, but add to the fun of this time of year.

All the best from Open Knowledge!

Get Updates