The capabilities of artificial intelligence no longer need to be demonstrated. Ensuring that AI adheres to ethical standards has therefore become a major concern worldwide. But how can we be sure that technology giants, locked in fierce competition, will take the initiative to build such safeguards into their innovations?
You’ve likely come across the debate already. Elon Musk has criticized ChatGPT for being overly “politically correct.” OpenAI’s model does indeed aim to maintain a moral stance and uphold certain values: it declines to answer queries with dishonest intent and readily admonishes users whose questions reveal anti-democratic or racist leanings. Whether ChatGPT’s behavior is commendable or not is part of the broader discussion on the ethics of artificial intelligence, or more precisely, on the need to steer it in an ethical direction.
This debate is far from trivial: the capabilities of these systems keep growing, enabling ever more sophisticated predictions. AI already rivals or surpasses human experts in certain medical diagnoses and legal analyses, and its list of accomplishments keeps expanding.
Flaws observed in certain AI systems
The risks posed by AI are neither new nor hard to pin down.
Societal biases
Because AI fundamentally relies on data, two implications follow:
- The data may carry human prejudices, absorbed through the automatic ingestion of information without any deliberate intent; the AI may then reproduce those prejudices, as the sketch after this list illustrates. Nature has covered this issue in detail.
- The data may include our own personal information, reinforcing preconceptions we already hold rather than offering a fresh perspective.
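To make the first point concrete, here is a minimal, entirely hypothetical sketch: a naive “model” trained on historically biased hiring decisions converges on the majority outcome per group and simply reproduces the skew. The data and the majority-vote learner are invented for illustration; real systems fail the same way through statistical correlation, with no prejudiced rule ever written.

```python
# Hypothetical illustration: a learner trained on biased historical
# decisions reproduces the bias without any explicit prejudiced rule.
from collections import Counter

# Each record: (group, hiring decision made by biased humans).
history = (
    [("m", "hired")] * 80 + [("m", "rejected")] * 20
    + [("f", "hired")] * 30 + [("f", "rejected")] * 70
)

# A naive "model" that predicts the majority outcome per group --
# roughly what a statistical learner converges to when group
# membership correlates with the label in the training data.
model = {}
for group in ("m", "f"):
    outcomes = Counter(label for g, label in history if g == group)
    model[group] = outcomes.most_common(1)[0][0]

print(model)  # {'m': 'hired', 'f': 'rejected'} -- the bias is replicated
```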
Indeed, AI systems have demonstrated glaring biases on numerous occasions. Researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University in China have observed that some generative AI systems exhibit societal biases, including sexist ones. ProPublica has reported extensively on bias within AI tools used by the judicial system.
As ProPublica noted, an AI tool used by United States courts to assess the likelihood of reoffending produced false positives at roughly twice the rate for Black defendants (45%) as for white defendants (23%). Separately, it was revealed that an experimental Amazon recruiting algorithm systematically downgraded women’s resumes for technical positions.
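The metric behind ProPublica’s finding is the false positive rate computed separately for each group: among defendants who did not reoffend, the share wrongly flagged as high risk. Below is a minimal sketch with made-up records whose rates match the figures quoted above; real audits compute the same quantity from actual court data.

```python
# Sketch (made-up numbers) of the per-group false positive rate,
# the metric underlying ProPublica's audit of risk-scoring tools.
def false_positive_rate(records):
    """records: list of (predicted_high_risk: bool, reoffended: bool)."""
    non_reoffenders = [r for r in records if not r[1]]
    flagged = [r for r in non_reoffenders if r[0]]
    return len(flagged) / len(non_reoffenders)

# Hypothetical predictions for two groups of 100 non-reoffenders each.
group_a = [(True, False)] * 45 + [(False, False)] * 55
group_b = [(True, False)] * 23 + [(False, False)] * 77

print(false_positive_rate(group_a))  # 0.45
print(false_positive_rate(group_b))  # 0.23
```

Equalizing this rate across groups is only one of several competing fairness criteria, and satisfying all of them simultaneously is in general mathematically impossible, which is part of what makes auditing such tools contentious.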
The intrinsic risks of deepfakes
Another cause for concern is AI’s ability to produce highly convincing deepfakes. Since 2017, actresses such as Emma Watson and Scarlett Johansson, and singers such as Taylor Swift and Katy Perry, have had to confront pornographic videos crafted without their consent. Deepfakes have also been used for political ends, to smear American presidential candidates or to attribute to them statements they never made.
Cornelian dilemmas
Autonomous vehicles raise equally vexing scenarios. A vehicle may be forced to choose the lesser of two evils, knowing that some harm is inevitable: should it prioritize the safety of its passengers even if that means endangering young children crossing the road? Formulating an answer is anything but straightforward.
What defines an ethical AI?
In light of such challenges, the necessity of imposing ethical guidelines on AI has become clear. The idea echoes the three laws of robotics proposed by author Isaac Asimov in his short stories and novels, built on the premise that a robot must always protect and serve its human creators, even at the cost of its own existence. AI now stands at the center of similar deliberations.
An ethical AI system would respect both humanity and the planet. UNESCO, after extensive discussion of the topic, has highlighted several values it deems vital to our societies:
- Upholding fundamental human rights and dignity;
- Supporting life in peaceful societies;
- Championing diversity and eschewing gender biases;
- Respecting environmental and climate considerations.
Meanwhile, the European Union has approved the AI Act, which entered into force in August 2024. It includes numerous restrictive measures:
- Restrictions on the use of facial recognition;
- A ban on the social scoring of citizens;
- Mandatory disclosure when an image has been produced by generative AI;
- And more.
These initiatives aim to regulate a rapidly evolving technology and to put safeguards in place before it’s too late. Yet, amid a relentless battle for dominance, can we genuinely expect the technological juggernauts to pause and weigh the long-term ramifications of artificial intelligence?