Anthropic cracks open the black box to see how AI comes up with the stuff it says

By Cointelegraph

Anthropic, the artificial intelligence (AI) research organization responsible for the Claude large language model (LLM), recently published landmark research into how and why AI chatbots choose to generate the outputs they do.

At the heart of the team’s research lies the question of whether LLM systems such as Claude, OpenAI’s ChatGPT and Google’s Bard rely on “memorization” to generate outputs, or whether there is a deeper relationship between training data, fine-tuning and the text they ultimately produce.

[Image] Given a human query, the AI outputs a response indicating that it wishes to continue existing. But why? Source: Anthropic blog