Open Knowledge Justice Programme challenges the use of algorithmic proctoring apps

Today we’re pleased to share more details of the Justice Programme’s new strategic litigation project: challenging the (mis)use of remote proctoring software. What is remote proctoring? Proctoring software uses a variety of techniques to ‘watch’ students as they take exams. These exam-invigilating software products claim to detect, and therefore prevent, cheating. Whether this software can […]

What is a public impact algorithm?

Meg Foulkes discusses public impact algorithms and why they matter. “When I look at the picture of the guy, I just see a big Black guy. I don’t see a resemblance. I don’t think he looks like me at all.” This is what Robert Williams said to police when he was presented with the evidence […]

Do we trust the plane or the pilot? The problem with ‘trustworthy’ AI

On April 8th 2019, the High-Level Expert Group on AI, a committee set up by the European Commission, presented the Ethics Guidelines for Trustworthy Artificial Intelligence. The guidelines define trustworthy AI through three principles and seven key requirements. Such AI should be lawful, ethical and robust, and take into account the following principles: Human agency and […]

Launching the Open Knowledge Justice Programme

Supporting legal professionals in the fight for algorithmic accountability, by Meg Foulkes and Cedric Lombion. Last month, Open Knowledge Foundation made a commitment to apply our unique skills and network to the emerging issues of AI and algorithms. We can now provide you with more details about the work we are planning to support legal […]