The Rundown: AI startup Anthropic just released Claude 2.1, featuring a doubled context window, a 2x reduction in hallucination rates, and new tool integrations.
The details:
The 200K token context window enables users to provide docs up to ~150K words for analysis.
Accuracy improvements cut incorrect answers by 30%, with a 3-4x lower rate of Claude mistakenly concluding a document supports a claim.
The new ‘tool use’ beta feature allows Claude to directly leverage developer APIs, knowledge bases, and datasets.
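For developers wondering what that could look like in practice, here's a minimal sketch of defining a tool and letting Claude decide when to call it, written against the tools parameter in Anthropic's current Python SDK; the exact shape of the 2.1-era beta may differ, and the get_weather tool and model string are illustrative placeholders, not confirmed details from the announcement.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative placeholder tool: a weather lookup Claude can choose to invoke.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Paris'"}
            },
            "required": ["city"],
        },
    }
]

response = client.messages.create(
    model="claude-2.1",  # assumption: swap in whichever model your account's beta supports
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
)

# If Claude opts to use the tool, the response includes a tool_use block
# carrying the tool name and the JSON arguments it wants to pass; your code
# runs the tool and returns the result in a follow-up message.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)  # e.g. get_weather {'city': 'Paris'}
```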
The relevance: Just a day after the OpenAI board reportedly approached Anthropic about a merger, CEO Dario Amodei woke up and chose violence, shipping upgrades that continue to position Claude as the most capable ChatGPT alternative.