As mentioned in our previous post, the ODE team decided to make a pre-release of the app at the beginning of October. Why? We wanted to interview potential users and collect as much feedback as we could to detect key elements that needed to be addressed for the stable version of the app. Here is a short post explaining how we did it and what we learned from the process.

How we organised sessions

We decided to conduct three group user testing sessions with people from different backgrounds, which gave us insights from people who work with data on a day-to-day basis. We picked testers from very different geographies and with very different technical skill levels (data practitioners, activists, non-profit workers, policy workers). This was key to understanding whether the ODE was useful across different data workflows and work environments. It also allowed us to check whether our intended target audience really matched the app's potential users. Some of the group sessions also included people with a more technical background; the objective was to observe critical conversations between technical and non-technical participants.

After the group sessions, our team had 1:1 calls with people who not only use data in their daily work but also train others to develop data skills.

How we structured calls with participants

Two days before the session, we asked participants to follow the user guide to download and install the app. We provided a document so that they could record the process and mention any challenges they encountered. We also asked them to scan through the rest of the ODE user guide.

During the sessions we asked people about their experience with the installation and their thoughts on the user guide. We then moved on to the uploading process and how the ODE displays files with and without errors.

The calls also covered:

  • User insights on the errors report and the metadata panel. 
  • Conversations around the terms we use across the app.
  • Feedback on key features like the `Open Location File` dialog, where we explain to users what happens after a file is ingested into the app.

Here is a short summary of what users told us about the app

Installation process

✅ Data-savvy users did not report problems downloading and installing the app. 

🚨 People with fewer data skills faced some challenges trying to find the right file in the GitHub repository. 

💡 In the short term, our team will improve the process by making changes to the project page and by adding one-minute videos that explain the steps and make them more straightforward.

User guide

✅ Almost all users agreed that the ODE’s user guide was clear and helped them understand how the app and its features work. 

🚨 The main challenge here is that, although participants read the documentation before the user testing session, we know this is not common practice when people use apps. We should not rely on documentation to answer every question users may have, and some of the needed clarifications will require technical changes and user interface work.

ODE’s welcome dialog

🚨 We added a welcome dialog box explaining the objective of the tool to those opening the ODE for the first time. Although the message was unanimously understood, some users reported that their usual behaviour would be to quickly skip the initial dialog box without paying much attention to the message.

💡 One possible solution to this problem is to reduce the text in the dialog box. Keeping it concise would require less effort from users and could therefore encourage them to read it.

Uploading file

✅ Participants highlighted that the process was simple and straightforward. No one struggled to find the feature or to upload files.

💡 Some users suggested we include a “drag and drop” option to give people an easy way to ingest data. 

🚨 Our team needs to do some additional work on the Adding external data feature. Since this part of the uploading screen does not specify what types of links users can enter, the feature is unclear. The ODE currently accepts Google Sheets links as well as other URLs, such as data hosted on GitHub and on open data portals.

💡 The user testing sessions also helped us rethink elements that we thought were working well when, in fact, they weren’t. That was the case for the red and green dots next to the file names in the sidebar. We added this feature after the ingestion process to indicate to users, in a simple way, whether their files contain errors. However, only one or two users could explain what the icons meant, and differentiating the icons by colour alone was a poor choice from an accessibility perspective.

Errors and errors report panel

✅ It was easy for participants to explain that cells with a red background were an indication of a problem in a specific part of the table. 

🚨 The errors report panel was one of the features where users experienced the most challenges. Apart from the user guide, the tool does not include a complete reference to the types of errors that the ODE detects. While some of those errors are clear (duplicated column names, for instance), we did not specify in the app the validation rules that apply to tables whenever the tool reads a file. 

💡 Since listing the errors in the documentation does not solve the problem (not everyone reads guides before using tools), one possible solution is adding individual files with errors to the sidebar and naming those files after the type of error. This could help users understand errors from the beginning.

🚨 Users also struggled to understand whether the ODE had found errors in their files. In the app, the Errors report is activated only if errors are found, but this was not clear to users. 

🚨 The ODE takes its error explanations from Frictionless. Since Frictionless is mostly used by technical people, the ODE’s potential users found the error descriptions difficult to understand.

💡 We decided to include an error counter to clearly flag problematic files. Since this change was implemented while the user testing sessions were still in progress, we had the opportunity to test it, and the people we reached out to reacted positively.

Metadata panel and publication feature

🚨 Conversations with potential users confirmed that this feature, together with the Publication feature, targets people with more technical skills. The main function of both sections was not clear to the ODE’s current target audience: users with little to no technical skills.

Would you integrate the ODE in your data processes?

✅ We asked all participants whether they would integrate the app into their daily work. All of them said that, in a stable version, the ODE would be an incredibly useful tool that would save them time.
