Today, we are proudly announcing Open Knowledge’s AI Learning Labs, a new initiative that aims to experiment with AI, translate knowledge from social sector organisations around the world, and produce public, multilingual AI-literacy resources tailored for organisations addressing similar issues elsewhere.
People working on high-impact social issues need targeted, practical AI resources. That is why we designed a programme through which we can collectively ask urgent questions about the risks and opportunities of data and AI – for their work and the communities they serve. Together with selected social organisations, we will catalyse learning and develop replicable methods to help organisations build AI skills, use AI responsibly, and develop their own AI projects.
In the following interview, Renata Ávila (OKFN’s CEO) and Solana Larsen (AI Literacy Lead) sit down with Lucas Pretti (Communications & Advocacy) to unpack the philosophy behind the initiative. The conversation offers a vision of open knowledge that is as much about co-asking the right questions as it is about co-building the right tools.
Why ‘Labs’ and Why Now

Lucas Pretti: Let us start with the project name: “AI Learning Labs.” We are using “labs,” not “training,” “courses,” or “academy.” Years ago, at an early stage of the open movement, OKFN had the Open Knowledge Labs initiative focused on building things. Now, AI dominates the narrative. Is this a new era of experimentation?

Renata Ávila: I think this is consistent with the work OKFN has done for 20 years. It is a continuity of our approach: learning by doing and learning by experimenting. We never stopped building, testing, and experimenting while holding to our values.
In this AI-intense environment, what we are trying to bring with these Learning Labs is the idea that AI is a tool that needs appropriation and experimentation. It reminds me of the early days of the web, as you mentioned.
The difference is that the web was somewhat neutral, but these tools are no longer neutral.
Many existing learning resources are either marketing pitches or they jump straight to the risks and ethical concerns, skipping the experimentation phase where people can come to their own conclusions and create their own use and risk frameworks. We cannot always go straight to the bad parts. We need to let people use and experiment with these tools in their own work. Then, the next step is to understand the risks and ethical concerns. But if they don’t consciously test it first, it’s hard to discuss those ethical implications in the abstract.

Solana Larsen: One thing that is different now is that people who are not programmers or developers are also challenged and inspired to think of different ways they can use AI. We are in an era of experimentation where many organisations new to building technology are deeply curious, inspired, and a little scared. They want to know how to use AI to advance their mission.
“Learning Labs” fits because we are all in a mode of learning. These technologies are changing rapidly, and none of us fully understand the implications of using them in different contexts. We need to be in a continuous process of learning together, between organisations and between technical and non-technical communities.
Producing Knowledge from Experimentation

Lucas Pretti: So, what are the AI Learning Labs in terms of process and outcomes? What is it, and what kind of resource will it produce?

Solana Larsen: We are developing a process for learning hand in hand with organisations. The idea is to learn by doing and learn by making. We are trying out new ways to exchange knowledge between people building things in the field. Initially, we are working with just a handful of social sector organisations on issues ranging from climate to human rights.
Part of the process is figuring out the questions, answering them together, and then building something they can use that can also benefit others. The outputs could be a technical prototype for accessing knowledge or a guide on how to approach continued learning with AI. Our goal is for this to be valuable for our partner organisations as well as others facing similar challenges.

Renata Ávila: An interesting aspect is that we specifically chose organisations with constrained resources – limited staff, outdated software, unstable internet. Most commercial AI is built on an idea of abundance: new computers, stable electricity, dedicated staff. That is not the reality for most.
Despite these limitations, there is a desire to learn and jump into new technologies, and that comes with dangers. If we do not test these technologies in such environments, we cannot point out the limitations to the companies pushing “AI for good.” We hope this process will bring developers a lot of feedback and paths for innovation they did not consider before.
OKFN has always been a bridge across communities, and that is what we are trying to do now. We want public interest AI to be meaningful to people on the ground. We want the next generation of tools to include a reality check – for instance, perhaps they could be simpler and more environmentally friendly.
Open and Adaptable Learning Resources

Lucas Pretti: I want to cover replicability. We want to produce knowledge that is deeply situated, but we also want the learnings to be replicable or interoperable. Instead of “one size fits all,” I am thinking “one size inspires many.” How are we approaching this learning together?

Renata Ávila: By documenting the experience well. The key is the ability to localise by sharing the experience. It is not a copy-paste. We want to share the reality of an organisation that might be a shared reality for others.
For example, one of our first projects is with a small documentation centre that holds verified facts you cannot find elsewhere. In many countries – especially those that have experienced conflict and transitions of power – social movement researchers are the keepers of a part of history that was censored. Their documents are precious for defining a country’s future. But when young people ask an AI system about their specific local history, those facts are missing.
These kinds of documentation centres exist everywhere, but they are underfunded and dying. Through this process, we are not just sharing the experience of one centre, but raising the urgency of the issue to our networks. By sharing a common concern, we create a community around an issue and a collaboration – so we can share tools, localise solutions, and give back to a common pot.

Solana Larsen: Learning together is also about developing a culture of communicating openly about tech. People are often shy or hesitant to discuss AI because they fear they do not understand it fully. Creating a space for open reflection, sharing concerns, hopes, and excitement is a big part of what we want to do with the Learning Labs.
Better Together Than Alone

Lucas Pretti: Let us go into practical details. Can you talk about the profile of the organisations we’re partnering with in this first phase – their geography, their context, their constraints?

Renata Ávila: For the first year, we are strategically partnering with a selection of organisations in different thematic areas and geographies we consider crucial. We are about to announce the first two organisations. One is concerned with the preservation of the knowledge commons. Another is addressing the climate crisis. We are still looking for two more, likely working on peace and other pressing social issues.
While this first phase is not an open call, if you think your organisation might be interested in participating in the second half of the year, you can contact us at info@okfn.org or stay in touch through the Open Knowledge Forum. We want to convene a broader community. If you are an expert working on the problems these organisations will address, join us. If you want to volunteer or collaborate, write to us or submit your profile to our Global Directory.
Collaborations can involve expertise, AI tools, or translation – or, if you are facing a similar problem, joining us in exploring solutions.

Lucas Pretti: Solana, could you briefly describe the pilot’s logistics? Duration, the role of the facilitators, and how it works?

Solana Larsen: We are customising the process for each organisation, but it is typically a couple of months of engagement. Part of the process involves Open Knowledge gathering expertise from our Network. Another part involves a person close to the organisation who will do more intensive investigation or develop a prototype for a tool according to the needs of the organisation.
The final step for every project is always: How do we pay it forward? How do we make sure others can benefit from the process? So part of the output is always something that can be used for learning by other organisations.
Applied AI Literacy

Lucas Pretti: To close, I would like a final reflection. Listening to you, this project is very much about knowledge, not just tech. That is the core of what open knowledge is. Can you give a final vision statement on how opening knowledge through technology connects to what we are doing now?

Renata Ávila: I would describe it as applied AI literacy – hopefully with open, ethical tools. It is about a process of discovery. We are not prescribing a solution; we have many questions ourselves. For example, if you are advocating for climate justice, do you use AI? And if so, how do you use it so you do not become part of the problem? It is a discovery process that is beneficial not only to the organisations but to the entire OKFN community.

Solana Larsen: There are many decisions that go into developing any technical tool. We can know what we want it to do, but there are still many ways to get there. It is not just about how we build things, but how we build things in a way that aligns with the shared purpose and vision of open movements – as thoughtfully as possible.
Join the conversation
We are just getting started, and we would love for you to join us. Here are the ways you can get involved with the AI Learning Labs:
- For peer-to-peer advice, collaboration, and for sharing/discussing more initiatives bridging AI and open data: join the Open Knowledge Forum
- To express interest in becoming a pilot organisation or for general inquiries and partnerships: write to info@okfn.org
- Are you an expert working on the same issues? Submit your profile to the Global Directory.
Acknowledgement

We are grateful for the Patrick J. McGovern Foundation’s (PJMF) generous support and our continued partnership in enhancing digital literacy and investing in AI for the public good. Learn more about its charitable programmes here.
