As announced in January, this year the Open Knowledge Foundation (OKFN) team is working to develop a stable version of the Open Data Editor (ODE) application. Thanks to financial support from the Patrick J. McGovern Foundation, we will be able to create a no-code tool for data manipulation and publishing that is accessible to everyone, unlocking the power of data for key groups including scientists, journalists and data activists.

[Disclaimer: Open Data Editor is currently available for download and testing in beta. We are working on a stable version. Updates will be announced throughout the year. Learn more here.]

Since the beginning of the year, we’ve been working on building the ODE team and conducting the first phase of user research. We have interviewed 10 people so far, covering different user profiles such as journalists, people working in NGOs and the private sector, and data practitioners in general.

The Open Data Editor is built on top of Frictionless Data specifications and software, and is an example of a simple, open-by-design alternative to the complex software offered by the Big Tech industry. Developing this type of technology is part of our current strategic focus on promoting and supporting the development of open digital public infrastructure that is accessible to all.
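For readers unfamiliar with those building blocks, here is a minimal sketch of what they offer, using the frictionless Python library (the framework that implements the Frictionless Data specifications) to validate a tabular file. The file name data.csv is only a placeholder, and the snippet is an illustration rather than part of the Open Data Editor codebase.

```python
# Minimal sketch: validating a tabular file against the Frictionless
# Data specifications with the `frictionless` Python library.
# "data.csv" is a placeholder for any tabular file you want to check.
from frictionless import validate

report = validate("data.csv")

if report.valid:
    print("No structural or schema errors found.")
else:
    # Each task corresponds to one validated resource; list its errors.
    for task in report.tasks:
        for error in task.errors:
            print(error.message)
```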

As part of this strategic focus, we want to open up the process in a series of blog posts, sharing with the community and anyone interested in the world of open data how each stage of this software's development is unfolding.

What have we learned so far?

  • Put people first: organisations need to spend more time on user research. Without reaching out to their community and understanding its problems before building solutions, organisations risk losing money and spending unnecessary time on things that may not be as useful as they think. This may sound obvious, but it happens all the time.
  • Spend more time thinking about the problem you are trying to solve. Whenever you want to improve a tool, you may be tempted to jump in and fix it from a purely technical point of view. This can create a bigger problem. It’s important to take a step back, learn everything you can about the tool, and talk to potential users to understand whether the problem the technology is trying to solve is a real problem for them.
  • Build diverse and interdisciplinary teams. The current OKFN team working on ODE includes three software developers, a product owner and a project manager. We all have different expertise and backgrounds, which is key to being able to put ourselves in the shoes of our potential users. Most importantly, we are all data practitioners ourselves!
  • Do not reinvent the wheel: check out the resources your community has already made available. Reusing what others have opened up means you spend less time on key parts of your work. For example, during our research process we used the amazing Discovery Kit created by the Open Contracting Partnership. Although the toolkit was originally developed to help teams build tools and software using open contracting data, we followed its advice and adapted some of its elements, such as the user personas, to our specific work.
  • Share and iterate your ideas with people outside your organisation. Getting external insights is a very good practice for those building open source products. “Sharing is caring” is good for you and your products 🙂

Initial findings

After the first round of user interviews, here are our first conclusions about the difficulties data practitioners face and the current state of working with tabular open data.

  • Same old problems. Data practitioners still spend a lot of time exploring and cleaning data. Analysis is only a small part.
  • The struggle with PDFs continues. Some respondents explained how they have to manually copy and paste data or use tools such as Tabula to extract tables from PDFs (a sketch of that workflow follows this list).
  • Preferred tools for exploring and cleaning data: spreadsheet applications such as Google Sheets, OpenOffice and Excel.
  • Favourite features to start exploring the data: pivot tables and filters.
  • Generative AI “not for data analysis”. Data practitioners, especially journalists, are reluctant to use AI for data analysis or to draw conclusions from the data they’re working with. They don’t want to share their datasets without knowing how they will be used (privacy concerns), and they find it impossible to reconstruct what the technology does to reach specific results.
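To illustrate the kind of PDF workflow respondents described, here is a minimal sketch using tabula-py, a Python wrapper around Tabula. The file names are placeholders, and this is an illustration only, not something the interviewees shared or part of the Open Data Editor itself.

```python
# Minimal sketch: extracting tables from a PDF with tabula-py,
# the Python wrapper around Tabula (requires Java).
# "report.pdf" is a placeholder file name.
import tabula

# Read every table found in the PDF into a list of pandas DataFrames.
tables = tabula.read_pdf("report.pdf", pages="all")

for i, table in enumerate(tables):
    # Save each extracted table as its own CSV for later cleaning.
    table.to_csv(f"table_{i}.csv", index=False)
```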

You can also find more details in the following presentation.

If you want to get more closely involved with the development of the Open Data Editor application, you can express your interest in joining one of the testing sessions by filling in this form.

You can also email us at info@okfn.org, follow the GitHub repository or join the Frictionless Data community. We meet once a month.