Google Secretly Uses Anthropic's Claude to Improve Gemini AI: Internal Documents Reveal

December 24, 2024
10 min read
By CombindR Editorial Team

On December 24, 2024, TechCrunch published an explosive investigation revealing that Google has been secretly using Anthropic's Claude AI to improve its own Gemini model, according to internal correspondence from Google contractors. This revelation raises significant questions about competitive practices in the AI industry and potential violations of service agreements.

The Investigation: Internal Documents Expose Practice

TechCrunch's Charles Rollet obtained internal correspondence showing that Google contractors working on Gemini improvements have been systematically comparing the model's outputs against responses generated by Anthropic's Claude AI system.

Key Findings from Internal Communications

Contractor Instructions: Google contractors responsible for rating Gemini's accuracy are being given up to 30 minutes per prompt to determine whether Gemini's answer is better than Claude's response across multiple criteria including:

  • Truthfulness
  • Verbosity
  • Safety compliance
  • Overall response quality
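
The multi-criteria, pairwise rating process described above can be sketched as a small data structure. This is purely illustrative — the criteria names, scoring scale, and majority-vote verdict are assumptions for the sketch, not Google's actual internal tooling:

```python
from dataclasses import dataclass, field

CRITERIA = ["truthfulness", "verbosity", "safety_compliance", "overall_quality"]

@dataclass
class PairwiseRating:
    """One rater's judgment comparing two model responses (A vs. B) to the same prompt."""
    prompt: str
    scores: dict = field(default_factory=dict)  # criterion -> "A", "B", or "tie"

    def rate(self, criterion: str, winner: str) -> None:
        # Only accept known criteria and valid winners.
        assert criterion in CRITERIA and winner in {"A", "B", "tie"}
        self.scores[criterion] = winner

    def verdict(self) -> str:
        """Majority vote across the rated criteria; equal counts yield a tie."""
        a = sum(1 for w in self.scores.values() if w == "A")
        b = sum(1 for w in self.scores.values() if w == "B")
        return "A" if a > b else "B" if b > a else "tie"

r = PairwiseRating(prompt="Explain photosynthesis.")
r.rate("truthfulness", "A")
r.rate("safety_compliance", "A")
r.rate("verbosity", "B")
r.rate("overall_quality", "A")
print(r.verdict())  # → A
```

The point of the sketch is simply that each prompt yields one structured comparison record per criterion, which is what makes a 30-minute-per-prompt review plausible.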

Explicit Claude References: At least one output presented to Gemini contractors explicitly stated "I am Claude, created by Anthropic," according to documents seen by TechCrunch.

Safety Comparisons: Internal chats revealed contractors noting that "Claude's safety settings are the strictest" among AI models, with Claude refusing to respond to prompts that Gemini would answer — and some of Gemini's answers to those prompts being flagged as safety violations.

Potential Terms of Service Violations

The practice raises serious questions about compliance with Anthropic's commercial terms of service, which explicitly forbid customers from accessing Claude to:

  • Build competing products or services
  • Train competing AI models

This is particularly significant given that Google is a major investor in Anthropic, creating a complex relationship where Google both invests in and potentially competes against the AI safety company.

Anthropic's Terms of Service Restrictions

Anthropic's commercial agreement specifically states that users cannot access Claude for competitive purposes without explicit approval. The investigation found no evidence that Google had obtained such permission from Anthropic.

Google's Response: When contacted by TechCrunch, Google DeepMind spokesperson Shira McNamara would not confirm whether the company had obtained Anthropic's approval for using Claude in this manner.

Anthropic's Silence: An Anthropic spokesperson, reached prior to publication, did not provide comment by press time.

Industry Context: Standard Practice or Ethical Violation?

The revelation occurs against the backdrop of intense competition in the AI industry, where companies are racing to build superior models while managing complex partnerships and investment relationships.

Google's Defense

Google DeepMind's McNamara attempted to frame the practice as standard industry procedure:

"Of course, in line with standard industry practice, in some cases we compare model outputs as part of our evaluation process. However, any suggestion that we have used Anthropic models to train Gemini is inaccurate."

The spokesperson emphasized that Google was not training Gemini on Anthropic's models, but rather using them for evaluation and comparison purposes.

Evaluation vs. Training: A Critical Distinction

The distinction Google makes between "evaluation" and "training" is technically important but may not address the core terms of service concerns:

Evaluation Use: Comparing model outputs to assess performance and identify areas for improvement.

Training Use: Using competitor outputs as direct training data to improve model capabilities.

While Google denies the latter, Anthropic's terms may prohibit both uses without explicit permission.
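
The distinction can be made concrete with a hedged sketch: in evaluation, a competitor's output is only measured and discarded, while in training it would become part of the corpus the model learns from. The function names and the length-based verbosity heuristic here are illustrative assumptions, not any company's actual pipeline:

```python
def evaluate(own_output: str, competitor_output: str) -> dict:
    """Evaluation use: score the two outputs side by side; nothing is learned from them."""
    return {
        "own_length": len(own_output),
        "competitor_length": len(competitor_output),
        # Crude stand-in for a verbosity judgment: longer == more verbose.
        "more_verbose": "own" if len(own_output) > len(competitor_output) else "competitor",
    }

def train_on(dataset: list, competitor_output: str) -> list:
    """Training use: the competitor's output itself becomes training data --
    the use Google denies and Anthropic's terms appear to forbid."""
    return dataset + [competitor_output]

report = evaluate("short answer", "a much longer competitor answer")
print(report["more_verbose"])  # → competitor
```

Whether Anthropic's terms treat only the second function, or both, as "building a competing product" is exactly the open contractual question.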

The Complex Google-Anthropic Relationship

The investigation highlights the complicated dynamics between AI companies that are simultaneously investors, partners, and competitors.

Google's Investment in Anthropic

Google has made significant investments in Anthropic, creating a relationship where:

  • Google provides funding and resources to Anthropic
  • Google competes directly with Anthropic's Claude through Gemini
  • Google potentially uses Claude to improve its competing product

Competitive Implications

This arrangement raises questions about:

  • Fair competition in the AI market
  • Information asymmetry between investor and investee
  • Potential conflicts of interest in strategic partnerships

Safety Differential Revealed

The internal communications revealed significant differences in safety approaches between the two AI systems:

Claude's Stricter Safety Measures

Contractors noted that Claude consistently demonstrated more conservative safety responses:

  • Refusing unsafe prompts that Gemini would attempt to answer
  • Higher safety thresholds for potentially harmful content
  • More cautious approach to controversial topics

Gemini's Safety Challenges

The documents revealed instances where Gemini's responses were flagged as "huge safety violations" for including inappropriate content while Claude refused to engage with the same prompts.

Specific Example: One case involved a prompt that Claude declined to answer, while Gemini's response was flagged for including "nudity and bondage" content.

Broader Industry Implications

This revelation has several significant implications for the AI industry:

Competitive Benchmarking Practices

Industry Standard Question: Google's characterization of this as "standard industry practice" raises questions about how AI companies typically evaluate their models against competitors.

Transparency Issues: The secret nature of the comparison suggests potential awareness that the practice might violate terms of service.

Regulatory Scrutiny: This could attract attention from regulators examining competitive practices in the AI industry.

Investment Relationship Dynamics

  • Conflict of Interest: The situation highlights potential conflicts when investors compete with their investees
  • Information Advantage: Google's investment relationship potentially provides insights into Anthropic's capabilities
  • Strategic Intelligence: Access to competitor responses could inform Google's AI development strategy

Technical and Ethical Considerations

Model Improvement Through Comparison

The practice raises questions about legitimate vs. problematic model improvement methods:

Legitimate Approaches:

  • Academic benchmarking using public datasets
  • Performance testing on standardized evaluations
  • Comparative analysis using publicly available information

Questionable Practices:

  • Systematic comparison against competitor outputs without permission
  • Using competitor responses to inform improvement strategies
  • Leveraging investment relationships for competitive advantage

Safety Development Implications

The safety differential between Claude and Gemini revealed in the documents highlights important considerations:

  • Learning from Safer Models: Could improve overall AI safety if done appropriately
  • Competitive Safety Standards: May create pressure to match competitor safety levels
  • Safety Innovation: Could drive development of better safety mechanisms

Legal and Regulatory Considerations

Contract Law Implications

The potential terms of service violation could have legal consequences:

  • Breach of contract claims from Anthropic
  • Damages related to competitive use of proprietary services
  • Injunctive relief to prevent continued unauthorized use

Antitrust Considerations

The situation may attract regulatory scrutiny regarding:

  • Market manipulation through investment relationships
  • Unfair competitive advantages gained through financial relationships
  • Barrier creation for other AI competitors

Response from the AI Community

The revelation has sparked discussion within the AI development community about:

Ethical Development Practices

  • Professional Standards: Questions about what constitutes ethical competitive benchmarking
  • Transparency Requirements: Calls for clearer disclosure of evaluation methodologies
  • Industry Guidelines: Potential need for formal standards governing competitive evaluation

Investment Relationship Ethics

  • Conflict Management: How investors should handle competing products
  • Information Barriers: Need for clear separation between investment and competitive activities
  • Disclosure Requirements: Transparency about potential conflicts of interest

Looking Forward: Industry Implications

This investigation highlights several critical issues that the AI industry must address:

Need for Clear Standards

  • Evaluation Guidelines: Industry-wide standards for competitive model evaluation
  • Investment Ethics: Clear protocols for investor-investee relationships in competitive markets
  • Safety Benchmarking: Transparent approaches to learning from competitor safety practices

Regulatory Response

  • FTC Interest: Potential investigation into competitive practices
  • International Scrutiny: Global regulators may examine similar practices
  • Policy Development: Need for clear AI industry competition guidelines

Company Accountability

  • Transparency Requirements: Pressure for clearer disclosure of evaluation methods
  • Ethical Compliance: Need for robust internal controls on competitive practices
  • Relationship Management: Better frameworks for managing investor-competitor relationships

Conclusion

The revelation that Google has been secretly using Anthropic's Claude to improve Gemini represents a significant moment in AI industry competition and ethics. While Google frames this as standard industry practice, the potential violation of Anthropic's terms of service and the complex investor-competitor relationship raise serious questions about fair competition and ethical business practices.

This case highlights the need for clearer industry standards, better regulatory frameworks, and more transparent practices as AI companies navigate the complex relationships between investment, partnership, and competition in this rapidly evolving field.

The investigation serves as a wake-up call for the AI industry to establish clear ethical guidelines for competitive evaluation and to address the inherent conflicts that arise when companies are simultaneously investors in and competitors with each other.

As the AI industry continues to mature, cases like this will likely drive the development of more robust ethical frameworks and regulatory oversight to ensure fair competition and innovation while protecting the interests of all stakeholders.
