As excitement around artificial intelligence grows, so does agreement that we need to regulate AI. The call is coming not just from government, but from those at the forefront of AI development. They propose that new legislation may be the very tool required to keep this innovation from becoming our own Frankenstein's monster. Regulation, they believe, could stop the burgeoning technology from spiraling out of control and ensure that AI systems are built to help people, not to overpower them.
In this article, we will look at the principles of the newly minted AI Bill of Rights.
While legal checks and balances on AI have been mooted since the emergence of technologies like ChatGPT, one of the most persuasive appeals came from a leading figure on the AI scene: OpenAI's CEO, Sam Altman. Speaking at a recent Senate Judiciary subcommittee hearing, he voiced concern about the severe fallout if the technology were to spiral out of control: "I think if this technology goes wrong, it can go quite wrong." He also expressed an eagerness to cooperate with government bodies to avert such a scenario: "We want to work with the government to prevent that from happening," he said.
The sentiment was positively received by the government, which has long advocated such an approach. Days before his testimony, Altman was among the tech leaders invited to the White House, where Vice President Kamala Harris warned of the potential dangers of AI while appealing for industry assistance in devising preventative measures.
Identifying and solving AI's future problems is no easy matter. Striking the right balance between fostering industry innovation and preserving citizens' rights and protections poses a formidable challenge. Constraining an emerging technology that is already starting to transform our world risks impeding potentially ground-breaking advances. Moreover, even if major players like the US, Europe, and India agree to such constraints, will China acknowledge them?
In an unusual flurry of activity, the White House has attempted to sketch out the potential shape of AI regulation. October 2022, a month before the launch of ChatGPT, saw the release of a document titled "Blueprint for an AI Bill of Rights," born of a year's worth of preparation, public input, and technocratic wisdom.
However, the document is clear about its non-binding nature: it does not represent formal US government policy. This AI Bill of Rights is less contentious and less binding than its constitutional counterpart, steering clear of tricky topics such as firearms, free speech, and due process. Instead, it offers an aspirational wish list aimed at mitigating the double-edged sword of progress. Let's delve into the highlights:
Blueprint for an AI Bill of Rights Principles:
- Safe and Effective Systems: You should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.
- Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.
- Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected.
- Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.
Read the full Blueprint for an AI Bill of Rights here.
These points, while appearing straightforward, become complex when translated into enforceable AI regulations. There are various ways to address the negative impacts of AI, including sharing information about the training of language models like those behind ChatGPT, allowing opt-outs for users who don't want their content used, preventing built-in bias, enforcing antitrust laws, and protecting the personal information AI products use.
The White House blueprint doesn't apply just to artificial intelligence but spans the entire tech sector. Each point reads as a user right that has been continually transgressed. Big tech didn't wait for generative AI to create biased algorithms, opaque systems, intrusive data practices, and a dearth of opt-out options. These protections are basic requirements, and discussing them in the context of new technology only underlines the failure to shield citizens from the negative impacts of current technology.
That lament surfaced again during the Senate hearing featuring Altman, where senator after senator voiced the same sentiment: "We fumbled social media regulation; let's not repeat the mistake with AI." Yet there is no statute of limitations on legislation to curb past abuses. Last time I checked, billions of people (practically everyone in the US with access to a smartphone) are still actively using social media, subjected to cyberbullying, privacy violations, and exposure to disturbing content. Nothing is stopping Congress from taking a tougher stance on these companies and, most importantly, enacting privacy laws.
The fact that Congress hasn't acted casts serious doubt on the future of an AI Bill of Rights. It's no wonder, then, that some regulators, like FTC chair Lina Khan, aren't waiting idly for new laws. She asserts that current law gives her agency ample jurisdiction to address the bias, anti-competitive behavior, and privacy invasions posed by new AI products.
Highlighting the challenge of devising new laws—and the sheer scale of the task that lies ahead—was this week’s White House update on the AI Bill of Rights. It revealed that the Biden administration is making a Herculean effort to develop a national AI strategy. However, the “national priorities” within this strategy remain elusive.
The White House is now calling upon tech companies, other AI stakeholders, and the general public to answer 29 questions about the advantages and disadvantages of artificial intelligence. Much as the Senate subcommittee sought input from Altman and his fellow panelists on a way forward, the administration is soliciting ideas from corporations and the public.
In its request for information, the White House pledges to “consider each comment, whether it encompasses a personal story, experiences with AI systems, or technical legal, research, policy, or scientific content, or other material.”
So, humans, you have until 5:00 pm ET on July 7, 2023, to submit your documents, lab reports, and personal narratives to shape an AI policy that currently exists only as a blueprint, even as millions interact with Bard and ChatGPT, and employers eye leaner workforces. Perhaps then we'll move toward codifying these laudable principles into new AI regulation.