
Google Revises AI Ethics Policy, Eases Restrictions On Military & Surveillance Use

Since OpenAI launched ChatGPT in 2022, AI has rapidly advanced, leaving regulations struggling to keep pace

In a significant policy shift, Google has updated its ethical guidelines on artificial intelligence (AI), moving away from its earlier commitment to avoid AI applications in weapons and surveillance. The company’s original 2018 AI principles had explicitly prohibited AI use in four key areas: weapons, surveillance, technologies that could cause overall harm, and applications violating international law and human rights.

The change was announced in a blog post by Demis Hassabis, head of AI at Google, and James Manyika, senior vice president for technology and society. They explained that the decision reflects the growing role of AI in national security and the need for companies in democratic nations to collaborate with governments.

Shifting Stance From 2018 Commitments

Google’s updated AI principles now emphasise human oversight and feedback, ensuring AI systems comply with international law and human rights standards. The company also pledges to test AI models rigorously to minimise unintended harmful effects.

This marks a major departure from Google’s stance in 2018, when it faced strong internal opposition over its Pentagon contract, known as Project Maven. The project involved using Google’s AI to analyse drone footage, leading to protests from thousands of employees.

In an open letter, Google employees urged the company to stay out of military projects.

Following the backlash, Google chose not to renew its contract with the Pentagon, reinforcing its commitment to ethical AI use at the time.

An Evolving AI Landscape

Since OpenAI launched ChatGPT in 2022, AI capabilities have advanced rapidly while regulation has struggled to keep pace. This rapid evolution has influenced Google’s decision to ease its self-imposed restrictions.

Hassabis and Manyika acknowledged that AI frameworks from democratic nations have shaped Google’s understanding of both AI’s risks and its potential. The revised approach aims to strike a balance between technological innovation and responsible governance, as AI continues to transform industries, security, and global geopolitics.
