Register your Interest: Open Knowledge Justice Programme Community Meetups

What’s this about? The Open Knowledge Justice Programme is kicking off a series of free, monthly community meetups to talk about Public Impact Algorithms. Register here. Who is this for? Do you want to learn more about Public Impact Algorithms? Would you like to know how to spot one, and how they might affect the […]

Launching the new website for The Justice Programme

Today we are excited to share with you the new website for The Justice Programme. A few months ago we made the decision to build a dedicated website for the project because its range of activities and services has grown. We want our partners to easily find out what we are doing, and how […]

OK Justice Programme secures definitive guidance on the use of algorithms in online exams. Our first win in the fight to ensure that Public Impact Algorithms do no harm!

An independent inquiry adopts nearly all of our recommendations in our first challenge to the misuse of Public Impact Algorithms. Strong guidance has been given to the UK’s Bar Standards Board on the use of “remote proctoring software”, which should now guide others’ use of this technology. About The Justice Programme: The Justice Programme is a […]

Open Knowledge Justice Programme challenges the use of algorithmic proctoring apps

Today we’re pleased to share more details of the Justice Programme’s new strategic litigation project: challenging the (mis)use of remote proctoring software. What is remote proctoring? Proctoring software uses a variety of techniques to ‘watch’ students as they take exams. These exam-invigilating software products claim to detect, and therefore prevent, cheating. Whether this software can […]

What is a public impact algorithm?

Meg Foulkes discusses public impact algorithms and why they matter. “When I look at the picture of the guy, I just see a big Black guy. I don’t see a resemblance. I don’t think he looks like me at all.” This is what Robert Williams said to police when he was presented with the evidence […]

Do we trust the plane or the pilot? The problem with ‘trustworthy’ AI

On April 8th 2019, the High-Level Expert Group on AI, a committee set up by the European Commission, presented the Ethics Guidelines for Trustworthy Artificial Intelligence. It defines trustworthy AI through three principles and seven key requirements. Such AI should be: lawful, ethical and robust, and take into account the following principles: Human agency and […]

Launching the Open Knowledge Justice Programme

Supporting legal professionals in the fight for algorithmic accountability, by Meg Foulkes and Cedric Lombion Last month, Open Knowledge Foundation made a commitment to apply our unique skills and network to the emerging issues of AI and algorithms. We can now provide you with more details about the work we are planning to support legal […]