What’s an audit?

A month or so back, I moderated a series of workshops on algorithmic auditing on behalf of the DSA. A common thread of concern in the audit workshop ran along these lines - “How are external auditors supposed to understand how every model at every company works? We’re going to have to staff an entire specialized product team, and even then, will we even have the capability to?”

A lot of ink has been spilled on the concept of algorithmic auditing. Compliance audits are different. In my intensive with the ECAT team, I started with the slide above. A compliance audit is unlike what we have traditionally seen with external and internal audits. Here are the three major differences:

  • Compliance audits are intended to assess the veracity/correctness of the audits provided by the company.

    In other words, the company provides evidence - based on its own audits - of compliance with the law. The company’s job is to conduct an audit, whether through an external party or an internal team. A compliance auditor’s job is to determine whether that work was done well. The auditor has several tools at their disposal - interviews, access to documentation, the a/b testing platform and test results, and of course, code. Compliance audits combine the credibility of external third-party audits with the data, documentation, and personnel access of internal audits.

To address some of the concerns I heard at the audit listening session - an auditor’s job won’t be to conduct the analysis of every single algorithmic system themselves, but to be a fair yet rigorous critic of the assessments put forth by these companies.

  • You are observing systems, not models.

This is a common mistake I’ve observed repeatedly, and one I often chalk up to the slippery language the tech industry unfortunately uses. Most of these regulations require the audit of a system, which may be a single model but is generally a series of models, policies, and human actions acting in concert. In other words, a model is a system in the way a square is a rectangle: every model can be treated as a system, but most systems are far more than one model.

Why does this matter? The vast majority of technical academic literature on auditing techniques, as well as many technical audits, focuses on identifying issues in a model. Model-level metrics - precision, recall, and accuracy, for example - do not carry over to system-level analysis. Metrics that do include algorithmic inequality, relative amplification, and other aggregate statistics.
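To make the distinction concrete, here is a minimal sketch in Python. The function names, the inputs, and the particular definition of relative amplification (a group’s exposure share under the ranking system divided by its share under a neutral baseline) are illustrative assumptions on my part, not metrics prescribed by the DSA or any specific audit framework.

```python
from collections import Counter

def precision(predictions, labels):
    """Model-level metric: fraction of positive predictions that were correct."""
    true_pos = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    pred_pos = sum(1 for p in predictions if p == 1)
    return true_pos / pred_pos if pred_pos else 0.0

def relative_amplification(system_exposures, baseline_exposures, group_of):
    """System-level metric (one plausible formulation): for each group,
    the share of exposure under the full ranking system divided by its
    share under a neutral baseline (e.g. a chronological feed).
    """
    sys_counts = Counter(group_of(item) for item in system_exposures)
    base_counts = Counter(group_of(item) for item in baseline_exposures)
    sys_total = sum(sys_counts.values()) or 1
    base_total = sum(base_counts.values()) or 1
    return {
        group: (sys_counts[group] / sys_total) / (base_counts[group] / base_total)
        for group in base_counts
    }

# Example: topic "a" fills 30% of baseline impressions but 60% of system
# impressions, giving a relative amplification of 2.0 for that topic.
if __name__ == "__main__":
    baseline = ["a"] * 3 + ["b"] * 7
    system = ["a"] * 6 + ["b"] * 4
    print(relative_amplification(system, baseline, group_of=lambda item: item))
```

The point is not the specific formula but the unit of analysis: the second metric only makes sense when you measure what the whole system actually served to users, not how a single model scored on a test set.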

  • Technical deep dives should not be the automatic default for an auditor.

Most of the academic literature on auditing begins with a thesis and tests it directly against the data. An auditor’s role is more like a detective’s: to identify clues that might necessitate further investigation. A failure state for an algorithmic auditor providing inspections at scale would be attempting to start every audit with a technical deep dive.

However - a powerful aspect of the DSA is that it opens up access to technical artifacts such as model code, data, and a/b testing platforms. But let’s not overestimate their value relative to the plain-sight harms or oversights that may be captured via interviews or design documentation.

Compliance auditing is new territory, and developing the new profession of algorithmic auditors will be a challenge. We are asked to build the plane as we fly it - but with all of humanity on the line. No pressure!

