Google, the technology giant under Alphabet Inc., has made a significant amendment to its artificial intelligence (AI) guidelines, dropping a clause in which it pledged not to apply AI in potentially harmful areas, including weaponry. The change, which removed the section titled "AI applications we will not pursue," has sent ripples through the industry and raised questions about Google's future projects.
Among the industry leaders voicing concerns is Margaret Mitchell, former leader of Google's ethical AI team and now chief ethics scientist at AI startup Hugging Face. She warns that the removal not only undoes the hard-won work of ethical AI advocates at Google but could also pave the way for the technology to be used in lethal systems.
Meanwhile, James Manyika, senior vice president at Google, and Demis Hassabis, head of Google's AI lab DeepMind, emphasised in a recent blog post that democracies should lead AI development, guided by values of freedom, equality and respect for human rights. However, they stopped short of saying how the removal of the clause might shape Google's future pursuits.
This move comes amid a broader shift in policy among large tech firms. Earlier this year, Meta Platforms Inc. and Amazon.com Inc. both scaled back their diversity and inclusion efforts. The changing dynamics of the AI field are fuelling debate over how to balance ethical obligations with competitive pressures, leaving the future landscape of AI technology uncertain.
- CyberBeat
CyberBeat is a grassroots initiative from a team of producers and subject matter experts. Born out of frustration at the lack of media coverage, it responds to an urgent need for a clear, concise, informative and educational approach to the growing fields of Cybersecurity and Digital Privacy.