A New Era in AI Governance

This week marked a pivotal moment in the geopolitical history of AI governance. The European Union launched the European Center for Algorithmic Transparency (ECAT) - a unique initiative that combines research and implementation. As a long-time algorithmic auditor, founder of an algorithmic audit SaaS startup, former Twitter executive in charge of algorithmic auditing, and expert consultant for the ECAT team, I shared my reflections, hopes, and fears on the potential of the Digital Services Act (DSA) to usher in an era of applied technical accountability and advance the field of algorithmic auditing.

First - my hopes. I like to start positive. I’m optimistic about the remit of the DSA and its approach to at-scale harms. They’ve chosen some of the most challenging harms to tackle: dissemination of illegal content, impact on democratic processes, impact on fundamental human rights, gender-based violence, the protection of public health and minors, and negative impact on a person’s physical and mental well-being. All too often, technical audits end up cycling around niche quantifications of fairness and implementations of fancy explainability techniques but fail to grapple with social-impact harms at scale. At the other end, purely qualitative audits miss the information gathered from a technologist’s perspective and often fail to identify the source of harms, which frequently lies not in policies but in the data and code. Each is necessary; neither is sufficient.

Next, I’m optimistic about the structure of the DSA and how well it lends itself to future-proofing against as-yet-unrealized technologies. The last year has demonstrated the problems with an insufficiently future-forward policy structure: the equally ambitious EU AI Act had to be substantially revisited in the face of generative AI. According to the team, the DSA has no such foreseeable limitations. Because it defines qualifying companies by reach and potential for impact on risk areas, the implementation and requirements of an audit do not change in light of rapid technological advancement.

Finally, I’m optimistic about the team in charge of research and implementation at the Joint Research Centre. I’m deeply impressed with their leadership as well as their team of experts in Brussels, Ispra, and Sevilla, all of whom I had the pleasure of meeting and getting to know during our two-day intensive. There’s a lot of work to do - and the team is hiring - but in light of so many seemingly dystopian narratives coming to fruition lately, it is heartening to know this team is at the forefront of global protections.

Now, the challenges. A few weeks ago, I co-hosted two listening sessions at the Koret Law School at the University of California, Berkeley at the invitation of the Brussels on the Bay team - the satellite San Francisco office engaging with Silicon Valley on behalf of the European Union. Along with the ECAT team, we hosted semi-private workshops geared towards small- and medium-sized companies, startups, academics, and civil society organizations to field questions and gather input on the upcoming auditing requirements as well as data-sharing initiatives. We learned a lot, and I’ll share some thoughts informed by my time thus far consulting with ECAT and conducting these workshops and intensives.

This post is the first in a six-part series on algorithmic compliance auditing. In the posts that follow, I outline the five primary challenges we face as we co-invent this new field within an ambitious new legal landscape.

Challenge 1. What is a compliance audit?

Challenge 2. We don’t know what we don’t know. What are our metrics of success?

Challenge 3. Where’s the talent?

Challenge 4. What is a gold star audit? How can an audit be repeatable, scalable, comparable, and timely?

Challenge 5. What is ‘proportionate’ and ‘effective’ remediation?

Stay Tuned!
