On Monday, President Joe Biden signed an executive order to put guardrails in place as artificial intelligence and large language models continue to be developed and released for public and commercial use. The order came just ahead of this week’s global summit on AI safety.
We can’t say Biden’s order puts us ahead of the curve, exactly, because AI is already in wide use and has already caused problems. But Monday’s order helps keep us from falling so far behind technological advancement that we cannot regulate it at all, as happened with social media.
The two main parts of the executive order focus on encouraging the (safe) development and use of AI, as well as giving guidance to protect people from being victimized by artificial intelligence.
The executive order’s guidelines include new standards for safety and security (requiring developers to share testing results with the federal government), for AI watermarking (labeling AI-generated text and images), and for landlords and federal agencies to identify and minimize bias produced by AI algorithms.
Watermarking will be increasingly important as the internet becomes flooded with AI-generated text and images. The images in particular can seem lifelike enough to pass for something real — hence the danger of “deepfakes,” in which someone can be realistically depicted doing or saying something they never did.
The order’s guidance on identifying and mitigating the inherent bias in AI is also essential. It’s long been known that artificial intelligence programs carry built-in implicit bias: Some of the first attempts at using facial recognition software to locate criminals in crowds regularly misidentified Black people and other people of color as criminals. Even the most recent versions of generative AI produce stereotypical images when given generic prompts. For example, as the Washington Post found, the prompt “a portrait of a person cleaning” results exclusively in images of women doing chores.
Biden’s executive order is the most sweeping attempt at AI regulation to date, but its power is limited. It can provide guidance, but few requirements. It can issue orders to federal agencies, but not directly to private entities. It can outline goals for protecting people’s privacy and data, but it can’t establish specific rules without an act of Congress.
Specific data and privacy protections are the most glaring omission from Biden’s order — and probably the most important issue when it comes to regulating AI. These large language models and image-generating software have been created and trained on the online data of millions of unsuspecting people, using everything from personal blogs (and possibly social media posts) to pirated copies and counterfeits of copyrighted work.
But something as complicated and broad as regulating what AI can and cannot be trained on can’t be accomplished with an executive order alone, which is why Biden’s order urges Congress to “pass bipartisan data privacy legislation to protect all Americans, especially kids.”
Biden has done what he can. Now it is up to Congress to take the next steps — and it cannot drag its feet. AI technology is evolving so fast that it seems new models are released every month, and more and more companies are jumping on the bandwagon to create their own versions. We need appropriate regulations in place before AI spirals out of control.