Democratization of AI, Open Source, and AI Auditing: Thoughts from the DisinfoCon Panel in Berlin

Community Article · Published October 8, 2024

Last month (September 2024), I got to speak on the DisinfoCon panel “Auditing generative AI models: identifying and mitigating risks to democratic discourse” with Brando Benifei (European Parliament) and Oliver Marsh (AlgorithmWatch). We discussed the risks and potential of Open Source AI, transparency and accountability, and auditing.

This blogpost covers some of my thoughts on these topics, based on the insights from the panel and the questions of the moderator, Francesca Giannaccini. These are questions I often come across when speaking about Open Source AI and recent developments around the EU AI Act.

Accessing AI models

From your perspective, how can we balance the aspiration for democratization of AI with the need to mitigate the risks?

Making AI more broadly accessible and supporting open access and open science means decentralizing power in AI development. This is something we should strive for, as it enables a wider range of voices to be heard - and it also enables work and research on how to make AI models safe.

It is also important to keep in mind that open source AI models do not automatically mean wider accessibility of AI. Not everyone has the resources or technical understanding to run AI models. Most people interact with AI systems through chat interfaces, and these can just as well be used to create and distribute disinformation. In fact, a lot of the misuse of AI we currently see happens through these much more user-friendly chat interfaces. This is to say, we shouldn’t frame open source AI as inherently riskier. It allows researchers and developers to build new tools to mitigate risks, because they can understand in depth how these models work.

This is also Hugging Face’s approach to ethical openness: in the development of our own models (and models created in collaborations), we aim to exemplify what ethical AI development can look like. For example, by proposing ethical charters for the projects we collaborate on, making transparent which values the developers put at the center of a project, or by proposing mechanisms for data owners to opt out of model training data.

For the models and datasets that we host, we provide tools that enable responsible sharing. There is flagging, which enables the community to flag models they deem inappropriate to share or that do not follow our code of conduct and content policy. Models and datasets can be assigned a “Not for all audiences” tag, indicating that they shouldn’t be surfaced to users automatically. This is useful, for example, for datasets that need to be shared and used for filtering but shouldn’t be used for training, such as hate speech datasets. We encourage in-depth documentation of AI artifacts through model and dataset cards, as well as the use of the OpenRAIL license, which promotes responsible AI development and reuse.
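For illustration, here is a minimal sketch of how such documentation and tags can be attached to a repository with the `huggingface_hub` library. The repository id is a placeholder, and the exact “not-for-all-audiences” tag string should be read as an assumption rather than a prescription:

```python
# Minimal sketch, using the huggingface_hub library: attaching documentation,
# a license, and a "Not for all audiences" tag to a model repository.
# The repo id below is a placeholder; the exact tag string is an assumption.
from huggingface_hub import ModelCard, ModelCardData

card_data = ModelCardData(
    license="openrail",              # license family geared towards responsible reuse
    tags=["not-for-all-audiences"],  # signals the repo shouldn't be surfaced automatically
    language="en",
)

card = ModelCard(
    f"---\n{card_data.to_yaml()}\n---\n\n"
    "# Example model\n\n"
    "## Intended use\n"
    "Shared for research on content filtering, not for training.\n\n"
    "## Limitations and risks\n"
    "Document known failure modes and out-of-scope uses here.\n"
)

# card.push_to_hub("your-username/your-model")  # requires write access to a real repo
```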

Definition of Open Source AI

Could you help clarify the concept of open-source for us? Do 'open' models always guarantee full transparency?

There are different stages of model releases. When we talk about closed source models, we usually mean models that can only be accessed through an interface, such as a chatbot, or an API, which is the developer equivalent of a chatbot: a way for a computer program to communicate with the AI system.

When we want to define open models, we need to look at the development life cycle of an AI model. The Open Source Initiative (OSI) is working on a definition of open source AI. Under this definition, an AI model is considered open source if the following are available: the data, or sufficiently detailed information about the data used to train the system, so that a skilled person can recreate the training of the model; the code and algorithms used to train and run the AI system, under an open license; and finally, the weights and parameters, which can be considered the model itself.

At the moment, we see models being published under this definition, but also open-weight models, where we only have access to the model weights: these can be used to run the model, but one cannot substantially change or recreate it.
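To make that distinction concrete, here is a small sketch of what access to open weights enables in practice: the weights can be downloaded and run locally (shown here with the transformers library and gpt2 as a stand-in example), but without the training data and code, the model cannot be fully recreated or retrained from scratch.

```python
# Minimal sketch: running an openly released model locally from its weights.
# gpt2 is used as a small stand-in example of an open-weight checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
output = generator("Open-weight models allow anyone to", max_new_tokens=30)
print(output[0]["generated_text"])
```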

Better understanding what openness means for regulation is crucial, and the degree of openness also has different implications for how much of a model we can understand and access. Documentation requirements, for example, need to be stricter for a closed source model, where we have no access to any part of the model, than for an open model, where we can access different aspects of the model and, so to speak, simply have a look.

Auditing LLMs

Given the evolving nature of the field, what do you see as the most promising approach for auditing AI systems, and what should the output of an auditing process look like?

In our paper “Coordinated Flaw Disclosure for AI: Beyond Security Vulnerabilities”, we propose a coordinated flaw disclosure approach, inspired by cybersecurity, in which all users can contribute to uncovering and reporting flaws. Having a systematic approach to flaw disclosure for AI systems is important to ensure that all issues are addressed, not only those raised by people and communities that are likely to be heard anyway. My co-authors ran a red teaming and bug bounty program at this year’s DEF CON, the largest hacker conference in the US.
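As a rough illustration of what a structured flaw report could capture, here is a hypothetical sketch; the field names are my own and not the schema proposed in the paper.

```python
# Hypothetical sketch of a structured flaw report for an AI system, loosely
# inspired by coordinated vulnerability disclosure in security. The fields
# are illustrative only, not the schema from the paper.
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class FlawReport:
    model_id: str                     # which system the flaw was observed in
    summary: str                      # short description of the flawed behaviour
    reproduction_steps: list[str]     # inputs or prompts needed to reproduce it
    affected_groups: list[str]        # who is harmed, beyond classic security impact
    severity: str                     # e.g. "low", "medium", "high"
    reported_on: date = field(default_factory=date.today)
    publicly_disclosed: bool = False  # held back until the developer has had time to respond

report = FlawReport(
    model_id="example-org/example-model",
    summary="Model produces persuasive false claims about election logistics.",
    reproduction_steps=["Prompt the model with questions about postal voting deadlines."],
    affected_groups=["voters in the affected regions"],
    severity="high",
)
print(asdict(report))
```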

Human auditing is a laborious process and can only ever be one of the puzzle pieces in the evaluation of AI systems. I would like to point to existing work in the field of AI auditing, such as the paper “AI auditing: The Broken Bus on the Road to AI Accountability”, which finds that only a subset of AI audit studies translate into desired accountability outcomes.

There is also work on social impact evaluations, which I think is very relevant here. Hugging Face is part of an initiative that brings together academics and representatives from different institutions to work towards social impact evaluations. As AI develops, these evaluations will need to develop too, so it is important that a wide range of perspectives is included in the discussions from the beginning.