Monday, November 27, 2023

AWS debuts generative AI platform Bedrock

Amazon Web Services announced an API platform named Bedrock, which hosts generative AI models built by top startups AI21 Labs, Anthropic, and Stability AI.

Generative AI has exploded in popularity with the development of models capable of producing text and images. Commercial tools developed by buzzy startups like OpenAI and Midjourney have won tens of millions of users, and Big Tech is now rushing to catch up.

While Microsoft and Google compete to bring generative AI chatbots to search and productivity suites, Amazon’s strategy is to remain fairly neutral – like some kind of machine-learning Switzerland – and provide access to the latest models on its cloud platform. It’s a win-win for startups that have agreed to work with the e-commerce giant. Developers pay to use APIs to access the upstarts’ models, while AWS provides and fully manages the underlying infrastructure behind those services.

“Customers have told us there are a few big things standing in their way today,” said Swami Sivasubramanian, AWS’ veep of machine learning, in a blog post.

“First, they need a straightforward way to find and access high-performing [foundational models] that give outstanding results and are best suited for their purposes. Second, customers want integration into applications to be seamless, without having to manage huge clusters of infrastructure or incur high costs.”

Amazon Bedrock currently offers large language models capable of processing and generating text – AI21 Labs’ Jurassic-2 and Anthropic’s Claude – and Stability AI’s text-to-image model Stable Diffusion. Bedrock will also provide two of Amazon’s own foundation models under the Titan brand, not to be confused with Google’s Titan-branded stuff.
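To give a flavor of what that access looks like in practice, here is a minimal sketch of calling one of the hosted models through the AWS SDK for Python (boto3). The `bedrock-runtime` client, the `anthropic.claude-v2` model ID, and the request-body fields shown are assumptions drawn from AWS's public SDK documentation rather than details from the announcement, so check the Bedrock console for the models and payload formats available to your account.

```python
import json
import boto3

# Minimal sketch: invoking a Bedrock-hosted model via boto3.
# Model ID and request-body shape are assumptions, not details from the announcement.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize what Amazon Bedrock offers.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",   # e.g. Anthropic's Claude, hosted on Bedrock
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```

Swapping in a different provider's model – Jurassic-2 or Stable Diffusion, say – means changing the model ID and the request body to that model's expected format; the call pattern stays the same.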

Developers can build their own generative AI-powered products and services on the backs of these Bedrock-managed APIs, and can fine-tune a model for a particular task by providing their own labeled training examples. Amazon said this customization process would allow organizations to tailor neural networks to their particular applications without worrying that their private training data will leak, go astray, or be used to train other large language models.
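As a rough illustration of that customization flow, the sketch below kicks off a fine-tuning job against one of the hosted base models using boto3. The job name, bucket paths, IAM role, base model, and hyperparameters are placeholder assumptions for illustration; the real configuration depends on the model you pick and your account setup.

```python
import boto3

# Rough sketch: starting a Bedrock model-customization (fine-tuning) job.
# All names, S3 paths, and the role ARN below are hypothetical placeholders.
bedrock = boto3.client("bedrock", region_name="us-east-1")

job = bedrock.create_model_customization_job(
    jobName="support-bot-finetune",
    customModelName="support-bot-v1",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",          # a Titan base model
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},  # your labeled examples
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2"},
)

print(job["jobArn"])  # poll this job until the custom model is ready to invoke
```

The point of the managed flow is that the training data stays in the customer's S3 bucket and the resulting custom model is private to that account, rather than feeding back into anyone else's model.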
