According to TechCrunch, Google's technical report for its latest AI model, Gemini 2.5 Pro, has been criticized by experts for lacking crucial safety details. The omissions make it difficult to assess the risks the model might pose and raise concerns about the company's commitment to transparency in AI development.
Experts find the absence of Google's Frontier Safety Framework (FSF) from the Gemini 2.5 Pro report a particularly concerning omission. Google introduced the FSF in 2024 as a protocol for identifying future AI capabilities that could cause "severe harm," and updated it in February 2025 with "stronger security protocols on the path to AGI" [1]. Yet the technical report, released weeks after the model's deployment, makes no mention of the framework [2].
Risk governance experts argue that effective frontier AI safety frameworks should include structured components such as independent validation, transparent documentation, and clear escalation pathways when risk thresholds are reached [3][4]. Without these elements, safety frameworks may suffer from "silos, blurred accountabilities and unclear decision making processes," potentially increasing "the chance of harmful models being released" [4]. Prioritizing deployment over comprehensive safety reporting appears to be an emerging trend across major AI labs, with Meta and OpenAI also releasing similarly limited safety evaluations for their latest models [2][5].
Google's release of Gemini 2.5 Pro without accompanying safety documentation appears to violate multiple commitments the company made to both the U.S. government and international bodies. At a July 2023 White House meeting, Google pledged to publish comprehensive safety reports for all major AI model releases, including evaluations of dangerous capabilities and limitations [1][2]. The company made similar promises at the Seoul Safety Summit through the "Frontier AI Safety Commitments," which required "public transparency on the implementation" of safety evaluations [1].
The delay in publishing safety information raises serious accountability concerns. The technical report for Gemini 2.5 Pro was released weeks after the model was already available to the public, while Gemini 2.5 Flash still lacks any safety documentation despite having already launched [1][2]. As Thomas Woodside of the Secure AI Project noted, Google's last publication of dangerous capability tests was in June 2024, for a model announced four months earlier, suggesting a pattern of prioritizing deployment over transparency [1]. Peter Wildeford of the Institute for AI Policy and Strategy summarized the problem succinctly: "It's impossible to verify if Google is living up to its public commitments and thus impossible to assess the safety and security of their models" [1][3].
The AI industry appears to be caught in a concerning "race to the bottom" where companies prioritize rapid deployment over thorough safety testing. OpenAI has dramatically reduced the safety testing timeline for its upcoming "o3" model to just a few days, compared to the months-long evaluations conducted previously [1][2]. This acceleration comes as Google aggressively positions Gemini 2.5 Pro in the market with competitive pricing strategies that industry observers worry could trigger a destructive price war [3].
These developments highlight a troubling pattern in which commercial pressures are potentially compromising safety protocols across major AI labs. While companies like Google tout improved evaluation efficiencies to justify shorter testing periods [1], experts remain skeptical that such compressed timelines can adequately assess complex risks. This industry-wide rush raises significant concerns about whether proper safety guardrails are being maintained as companies compete for market dominance, especially as models like Gemini 2.5 Pro demonstrate increasingly sophisticated capabilities in reasoning and problem-solving [4][5].