Cutting Corners... at What Cost?
As the AI race accelerates, US leaders in AI development are changing safety guidelines to release more models, faster.
Image generated with DALL·E via ChatGPT by OpenAI
OpenAI’s GPT-4.1 Released Without a Safety Report
This week, OpenAI released its newest family of LLMs: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano. All models in the family improve on latency benchmarks relative to OpenAI’s previous models, offering greater efficiency and accuracy at a lower cost. However, OpenAI did not publish a safety report detailing the models’ performance on safety benchmarks such as deception, responses to harmful questions, or production of malicious content. OpenAI has delayed safety card releases before, but the omission fits an alarming industry trend of delayed or abbreviated safety reports for new AI models. Both Google and OpenAI have altered their AI safety policies this year, removing or softening safeguards to stay competitive and relevant in a fast-paced industry. DeepSeek sent shockwaves through the AI space with its model releases earlier this year, and US AI companies have been racing to maintain their lead ever since. The US House of Representatives released a report on Wednesday detailing national security concerns about DeepSeek, which could encourage further US AI policy that favors cutting corners in order to preserve a technological lead and prevent the global spread of potentially harmful AI systems.
Takeaway: To maintain their positions, industry leaders in AI are relaxing their safety policies to shorten development time and gain ground in the AI race. With a US government and executive branch actively pursuing nationalist AI policy, preserving safety measures in AI development should be a priority for long-term growth. The technology built in the next few years will set the standard for the rest of the century, and it should be developed with the safety of future users in mind. –Natalie Sherman
Read more at OpenAI, Google, and TechCrunch, and read the House of Representatives report here.
Google and AI Weapons
On February 4, 2025, Google’s parent company Alphabet announced that it had updated its AI ethical guidelines, the most notable change being the removal of the clause promising not to pursue technologies that “cause or are likely to cause overall harm.” In defense of the decision, Google DeepMind head Demis Hassabis and James Manyika, senior vice president for technology and society, wrote, “We believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.” Experts, however, are concerned that the move opens the door to the future development of AI-powered autonomous weapons. Human Rights Watch spoke out against Google’s decision, stating that AI can “complicate accountability” in battlefield situations with the potential for “life or death consequences.”
Takeaway: It is far too early to know what will come of this update to Google’s ethical artificial intelligence guidelines, but the response from human rights organizations is clear: when human lives are directly at stake, AI must be used with the utmost caution. –Grace Ogden
To read more about Alphabet’s update to its ethical AI guidelines, please visit The Guardian and BBC.