Google Denies Training Gemini AI on Anthropic Models Amid Evaluation Practices

Google has clarified that it does not train its Gemini AI using Anthropic models, despite evaluating outputs for performance comparisons.

Dec 27, 2024

It was recently reported that Google was using Anthropic's Claude AI model under the hood for its Gemini AI series, but a Google DeepMind spokesperson has firmly stated that while the company does compare model outputs as part of its evaluation process, it does not train Gemini on Anthropic models. "Any suggestion that we have used Anthropic models to train Gemini is inaccurate," the spokesperson emphasized.

Contractors evaluating Gemini were tasked with comparing its responses against those generated by Claude, assessing quality on criteria such as truthfulness and verbosity. Internal reports indicate that some contractors raised concerns about potential inaccuracies on sensitive topics such as healthcare.

Anthropic's terms of service explicitly prohibit customers from using Claude to build competing products or services without prior approval. This raises questions about whether Google's evaluation practices conflict with those terms.

The situation highlights the competitive landscape of AI development, where companies routinely benchmark their models against rivals to improve performance. The line between evaluation and training, however, can blur, creating potential legal and ethical dilemmas in a rapidly evolving field.

Industry experts emphasize that transparency in AI practices will be essential in maintaining trust and compliance as tech companies continue to innovate and refine their AI capabilities.