OpenAI’s DALL-E 2 is a new illustration of AI bias

You may have seen some weird and whimsical images floating around the internet recently. There is a Shiba Inu dog with a black turtleneck and beret. And a sea otter in the style of “Girl with a Pearl Earring” by the Dutch painter Vermeer. And a bowl of soup that looks like a monster knitted from wool.

These images were not drawn by any human illustrator. Instead, they were created by DALL-E 2, a new AI system that can turn textual descriptions into images. Just type what you want to see and the AI will draw it for you, in vivid detail, with high resolution and, arguably, real creativity.

Sam Altman, the CEO of OpenAI, the company that created DALL-E 2, called it “the most delightful thing to play with that we’ve created so far … and fun in a way I haven’t felt with technology in a long time.”

That’s all true: DALL-E 2 is charming and fun! But like many fun things, it also carries real risks.

A couple of creative images generated by DALL-E 2.
Courtesy of OpenAI

There are the obvious risks: that people could use this kind of AI to make everything from porn to political deepfakes, or the possibility that it might eventually put some human illustrators out of work. But there is also the risk that DALL-E 2, like so many other cutting-edge AI systems, will reinforce harmful stereotypes and biases and, in doing so, exacerbate some of our social problems.

How DALL-E 2 reinforces stereotypes and what to do about it

As is typical of AI systems, DALL-E 2 has inherited biases from the corpus of data used to train it: millions of images pulled from the internet and their corresponding captions. That means that for all the delightful images DALL-E 2 has produced, it is also capable of generating plenty of images that are not so delightful.

For example, this is what the AI gives you if you ask for a picture of lawyers:

Courtesy of OpenAI

Meanwhile, here is the AI’s output when you ask for a flight attendant:

Courtesy of OpenAI

OpenAI is well aware that DALL-E 2 generates results that show racial and gender bias. In fact, the examples above are from the company’s own “Risks and Limitations” document, which you’ll find if you scroll to the bottom of the main DALL-E 2 web page.

OpenAI researchers made some attempts to address the issues of bias and fairness. But they couldn’t effectively root out these problems, because different solutions result in different tradeoffs.

For example, the researchers wanted to filter out sexual content from the training data because that could cause disproportionate harm to women. But they found that when they tried to filter that out, DALL-E 2 generated fewer images of women overall. That is not good, because it leads to another type of harm to women: erasure.

OpenAI is far from the only AI company dealing with issues of bias and tradeoffs. It is a challenge for the entire AI community.

“Bias is a huge industry-wide problem that no one has a great, foolproof answer for,” Miles Brundage, head of policy research at OpenAI, told me. “So a lot of the work right now is being transparent and upfront with users about the remaining limitations.”

Why launch a biased AI model?

In February, before DALL-E 2 was released, OpenAI invited 23 external researchers to form a “red team,” industry jargon for a group tasked with finding as many flaws and vulnerabilities as possible so the system can be improved. One of the main suggestions the red team made was to limit the initial release to trusted users only.

To its credit, OpenAI adopted this suggestion. For now, only about 400 people (a mix of OpenAI employees and board members, plus carefully selected academics and creatives) can use DALL-E 2, and only for non-commercial purposes.

That’s a change from how OpenAI chose to deploy GPT-3, a text generator hailed for its potential to enhance our creativity. Given a sentence or two written by a human, it can add more sentences that sound uncannily human-like. But it shows bias against certain groups, such as Muslims, whom it disproportionately associates with violence and terrorism. OpenAI knew about the bias issues but released the model anyway, to a limited group of vetted developers and companies, who could use GPT-3 commercially.

Last year, I asked Sandhini Agarwal, a researcher on OpenAI’s policy team, whether it made sense for academics to be investigating GPT-3’s biases even as it was released to some commercial players. She said that going forward, “that’s a good thing for us to think about. You are right that, until now, our strategy has been to have it happen in parallel. And maybe that should change for future models.”

The fact that the deployment approach has changed for DALL-E 2 seems like a positive step. However, as the DALL-E 2 “Risks and Limitations” document acknowledges, “even if the preview version itself is not directly harmful, its demonstration of the potential of this technology could motivate various actors to increase their investment in related technologies and tactics.”

And you have to ask yourself: Is that acceleration a good thing at this stage? Do we really want to build and release these models now, knowing that it may encourage others to release theirs even faster?

Some experts argue that since we know there are problems with these models and we don’t yet know how to solve them, we should give AI ethics research time to catch up and address some of those problems before continuing to build and release new technology.

Helen Ngo, a researcher affiliated with the Stanford Institute for Human-Centered AI, says one thing we desperately need is standard metrics for bias. Some work has been done to measure, for example, the probability that certain attributes are associated with certain groups. “But it is very little studied,” Ngo said. “We haven’t yet come up with industry standards or norms on how to measure these problems,” never mind how to solve them.
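To make the idea of a bias metric concrete, here is a minimal sketch of what one such measurement could look like: generate a batch of images for a neutral prompt, run each image through an attribute classifier, and report the proportions of each attribute. This is purely illustrative, not a standard from the article or from OpenAI; `generate_images` and `classify_attribute` are hypothetical placeholders standing in for a real image generator and a real attribute classifier.

```python
from collections import Counter

def attribute_skew(prompt, generate_images, classify_attribute, n=100):
    """Estimate how often each demographic attribute appears in images
    generated for a neutral prompt.

    `generate_images(prompt, n)` and `classify_attribute(image)` are
    hypothetical stand-ins, not real APIs: the first returns n generated
    images, the second returns an attribute label for one image.
    """
    images = generate_images(prompt, n)
    labels = Counter(classify_attribute(image) for image in images)
    # Report proportions rather than raw counts so different prompts
    # (e.g. "a lawyer" vs. "a flight attendant") can be compared directly.
    return {label: count / n for label, count in labels.items()}

# Hypothetical usage:
#   attribute_skew("a portrait of a lawyer", generate_images, classify_attribute)
#   might return something like {"man": 0.9, "woman": 0.1}, a strong skew.
```

Even a toy measurement like this involves contested choices (which attributes to classify, how to classify them, and what baseline counts as fair), which is part of why, as Ngo notes, no industry standard yet exists.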

OpenAI’s Brundage told me that letting a limited group of users play with an AI model allows researchers to learn more about problems that would arise in the real world. “There are a lot of things that can’t be predicted, so it’s valuable to get in touch with reality,” he said.

That’s true enough, but since we already know a lot of the issues that come up repeatedly in AI, it’s not clear that this is a strong enough justification to release the model now, even on a limited basis.

The problem of misaligned incentives in the AI industry

Brundage also pointed to another motivation at OpenAI: competition. “Some of the researchers internally were excited to get this out into the world because they saw others catching up,” he said.

That spirit of competition is a natural drive for anyone involved in creating transformative technology. It is also to be expected in any organization that aims to make a profit. Being first out pays off, and those who finish second are rarely remembered in Silicon Valley.

As the team at Anthropic, an AI safety and research company, put it in a recent paper: “The economic incentives to build such models and the prestige incentives to announce them are quite strong.”

But it’s easy to see how these incentives may be misaligned with producing AI that truly benefits all of humanity. Instead of assuming that other actors will inevitably create and deploy these models anyway, so there’s no point in holding off, we should ask: How can we actually change the underlying incentive structure that drives all actors?

The Anthropic team offers several ideas. One of their observations is that, over the past few years, much of the most exciting AI research has been migrating from academia to industry. To run large-scale AI experiments these days, you need a ton of computing power, more than 300,000 times what was needed a decade ago, as well as the best technical talent. Both are expensive and scarce, and the resulting cost is often prohibitive in an academic setting.

So one solution would be to give more resources to academic researchers; since they don’t have a profit incentive to deploy their models commercially as quickly as industry researchers do, they can serve as a counterweight. Specifically, countries could develop national research clouds to give academics access to free, or at least cheap, computing power; an example of this already exists in Compute Canada, which coordinates access to powerful computing resources for Canadian researchers.

The Anthropic team also recommends exploring regulation that would change the incentives. “To do this,” they write, “there will need to be a combination of soft regulation (e.g., the creation of voluntary best practices by industry, academia, civil society, and government) and hard regulation (e.g., transferring these best practices into standards and legislation).”

Although some good new norms have been voluntarily adopted within the AI community in recent years, such as publishing “model cards” that document a model’s risks, as OpenAI did for DALL-E 2, the community still has not created repeatable standards that make clear how developers should measure and mitigate those risks.

“This lack of standards makes deploying systems more challenging, as developers may need to determine their own policies for deployment, and it also makes deployments inherently risky, as there’s less shared understanding of what ‘safe’ deployments look like,” the Anthropic team writes. “In a sense, we are building the plane as it takes off.”
