This is the fifteenth conversation of the 100+ Conversations to Inspire Our New Direction (#OKFN100) project.

Starting in 2023, we are meeting with over 100 people to discuss the future of open knowledge, shaped by a diverse set of visions from artists, activists, academics, archivists, thinkers, policymakers, data scientists, educators, and community leaders from around the world.

How can openness accelerate and strengthen struggles against the complex challenges of our time? This is the key question behind conversations like the one you can read below.

*

Today’s conversation is with Ivana Feldfeber, co-founder and executive director of Latin America’s first Gender Data Observatory, DataGénero.

Ivana is a feminist data activist from Bariloche, Argentina. She is part of the Feminist AI network for Latin America, <f+AI+r>, and was a team leader at the Centre for Artificial Intelligence and Digital Policy (CAIDP), where she worked analysing different AI policies in her region.

Since 2022, Ivana and her team have been researching AI tools for criminal courts working with gender-based violence data in Argentina, which resulted in the AymurAI app. It is a project we admire at Open Knowledge for being simple, open and private by design, sustainable, and focused on solving real problems, all characteristics we promote through The Tech We Want initiative.

The conversation with Ivana took place in mid-April, this time with Lucas Pretti, OKFN’s Director of Communications and Advocacy.

We hope you enjoy reading it.

*

Lucas Pretti: Over the last year, with The Tech We Want, we at OKFN have been using the phrase “real world problems” a lot to refer to the purpose of the technologies we want and build. I noticed that you also use this phrase in DataGénero communications. What do you mean by it, and how should technology respond to those problems?

Ivana Feldfeber: Human problems require human solutions, not just technological ones. Technology can be a great ally, a powerful tool for thinking about solutions, but it is not the solution. In fact, we are quite critical of technological solutionism, the idea that if we focus all our energy on developing new technologies, humanity will be better off. In reality, none of that is happening: there is no evidence that more technology makes humanity better off.

The problem is that those who set the pace of these technological advances are very few people with a lot of power and interests that don’t necessarily have to do with people living better lives. They dress it up, but if you look at the underlying problems, like climate change or hunger in the Global South, it’s clear that they’re not going to be solved by more artificial intelligence. It’s ridiculous, but that’s the discourse they want to force on us.

Lucas Pretti: You and DataGénero are very involved in discussions on digital governance, a topic that mobilises us a lot at OKFN. Do you think local initiatives can influence and reorient the global agenda, for example on AI, data or the internet? In other words, how can we change the dominant technological agenda?

Ivana Feldfeber: It is difficult, but we have to try. Sometimes I feel like Quixote, tilting at windmills. But once I was talking to Vilas Dhar from the Patrick McGovern Foundation and he told me that even if it feels like an unequal fight, we have to keep being the people who raise their hands and say “this is wrong”, and at the same time show that things can be done differently.

For example, at DataGénero we develop AI tools that we distribute for free, without licence-based business models. Because if we want governments to be more transparent, we cannot charge them for the tools that allow them to be transparent. Now, through AymurAI, we are working with several judiciaries to implement systems that process data locally, without relying on the cloud, so that sensitive information does not end up in the hands of companies.

Lucas Pretti: How do you scale the impact of projects like the one you mentioned, which are arguably small tech, without falling into the logic of infinite growth?

Ivana Feldfeber: We have been frightened with the idea that AI requires unattainable infrastructure, but with two graphics cards you can run useful models. We are working with many IT teams in the justice system to implement AI tools locally, without the cloud, using our own servers. The risk is that they will copy us without giving credit, but we prefer that to governments relying on companies.

The key is for states to invest in their own infrastructure and trained technical teams. In some Central American countries, for example, some companies charge very expensive licences for an analysis system; we offered a free alternative. If we want open justice or transparency, we cannot charge for the tools.

Lucas Pretti: Speaking of public infrastructures, I’ll repeat a question that prompted a panel at the last The Tech We Want Summit: Should governments be tech-savvy?

Ivana Feldfeber: Of course! If not, things happen like in Salta, Argentina, where the government used a Microsoft algorithm that supposedly predicted teenage pregnancies. The system was based on variables such as access to clean water or educational level, which was absurd and stigmatising. But because they didn’t understand how it worked, they thought it was a magic bullet to avoid implementing comprehensive sexuality education.

The problem is that there is a lot of ignorance and also a lot of techno-optimism: the idea that technology is going to save us from everything. But we cannot leave public decisions in the hands of companies. We need regional regulations, with the participation of academia and civil society.

Lucas Pretti: In that sense, what skills do you think are key today for ordinary people to navigate this new post-AI technological world? I am referring to the digital literacy gap that prevents the people who are really affected from participating more critically in the construction of technology.

Ivana Feldfeber: First of all, we have to demystify. Many people fear or reject AI because they don’t understand what it can and cannot do. On the other hand, some believe everything they see on the internet. We need critical literacy: to understand basic privacy issues, to know that not everything circulating online is real, to learn to read between the lines of app terms and conditions.

That’s why I’d like to develop something like a website that explains in simple language what you’re accepting when you use Instagram or WhatsApp. If not, we will continue to see cases like that of my grandmother, who believes everything she receives on WhatsApp, or teenagers who share deepfakes without realising the damage they are doing.

Lucas Pretti: Finally, tell us a bit about the team and how DataGénero is organised.

Ivana Feldfeber: We are a core team of four women working full time and about ten other people working on specific projects. In the AymurAI project, there are five developers and some testers. We put together teams according to the needs of each project. For example, in the political parity project, we also have a lot of people collecting data from electoral candidacies. That’s how it works. It is also important to note that decisions are made by a board of several people; every alliance or project that we take on always goes through that board.

Lucas Pretti: Perfect. So let’s get on with it. Thank you very much for your time and for sharing these ideas.

Ivana Feldfeber: Yes, of course! Let’s stay in touch to make some of these collaborations concrete.