The
model, codenamed "Olympus", has 2 trillion parameters, the
people said, which could make it one of the largest models being
trained. OpenAI's GPT-4 model, one of the best models available,
is reported to have one trillion parameters.
The people spoke on condition of anonymity because the details
of the project were not yet public.
Amazon declined to comment. The Information reported on the
project name on Tuesday.
The team is spearheaded by Rohit Prasad, former head of Alexa,
who now reports directly to CEO Andy Jassy. As head scientist of
artificial general intelligence (AGI) at Amazon, Prasad brought
in researchers who had been working on Alexa AI and the Amazon
science team to work on training models, uniting AI efforts
across the company with dedicated resources.
Amazon has already trained smaller models such as Titan. It has
also partnered with AI model startups such as Anthropic and AI21
Labs, offering their models to Amazon Web Services (AWS) users.
Amazon believes having homegrown models could make its offerings
more attractive on AWS, where enterprise clients want to access
top-performing models, the people familiar with the matter said,
adding there is no specific timeline for releasing the new
model.
Large language models (LLMs) are the underlying technology for AI
tools that learn from huge datasets to generate human-like responses.
Training bigger AI models is more expensive given the amount of
computing power required. In an earnings call in April, Amazon
executives said the company would increase investment in LLMs
and generative AI while cutting back on fulfillment and
transportation in its retail business.
(Reporting by Krystal Hu in San Francisco. Editing by Gerry
Doyle)
© 2023 Thomson Reuters. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Thomson Reuters is solely responsible for this content.