The Jamba model, developed by AI21, offers a 256K context window and three times the throughput of comparable Transformer-based models on long contexts, and is now being integrated into Microsoft Azure AI.

AI21’s latest release, Jamba-Instruct, is the first production-grade LLM built on the hybrid Mamba-Transformer architecture that the company pioneered. Combining high throughput, high-quality output, and a 256K context window, Jamba-Instruct sets a new benchmark for enterprise AI solutions.

Conventional LLMs rely on the Transformer architecture, which faces two obstacles in production: memory consumption and inference speed both degrade as the context grows, because the attention mechanism’s key-value cache scales linearly with sequence length and its compute over the full sequence scales quadratically.
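
To make that scaling problem concrete, here is a minimal back-of-the-envelope sketch of how much key-value cache a plain Transformer decoder needs as the context grows; the layer count, head count, head dimension, and fp16 cache are hypothetical round numbers chosen for illustration, not Jamba’s published configuration. The cache grows linearly with context length, which is exactly the cost the hybrid Mamba layers are intended to cut.

```python
# Back-of-the-envelope KV-cache sizing for a pure-Transformer decoder.
# All model dimensions here are illustrative assumptions, not Jamba's actual configuration.

def kv_cache_bytes(context_len: int,
                   n_layers: int = 32,
                   n_kv_heads: int = 8,
                   head_dim: int = 128,
                   bytes_per_value: int = 2) -> int:
    """Bytes needed to cache keys and values for one sequence (fp16)."""
    per_token = n_layers * n_kv_heads * head_dim * 2 * bytes_per_value  # K and V
    return context_len * per_token

for ctx in (4_096, 32_768, 262_144):  # 4K, 32K and 256K tokens
    print(f"{ctx:>7} tokens -> {kv_cache_bytes(ctx) / 2**30:5.1f} GiB of KV cache per sequence")
```

Under these assumed dimensions, the cache grows from roughly 0.5 GiB at 4K tokens to 32 GiB at 256K tokens for a single sequence, which is why serving very long contexts on standard Transformers is expensive.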

Azure customers will soon be able to access Jamba-Instruct through Models-as-a-Service, without having to manage the underlying infrastructure needed to host the model. This milestone makes Azure the first hyperscaler cloud to offer the model to its customers. The integration underscores Microsoft’s commitment to equipping enterprises with advanced AI tools and technologies that support digital transformation and innovation across sectors.
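
As a rough illustration of what the Models-as-a-Service consumption model looks like for developers, the sketch below sends a chat request to a serverless endpoint over HTTPS. The endpoint URL, environment-variable names, and request/response fields are assumptions made for the sake of the example, not AI21’s or Azure’s documented interface.

```python
# Minimal sketch of calling a serverless (Models-as-a-Service) endpoint.
# The endpoint URL, environment variables, and payload fields below are
# illustrative assumptions, not the documented Azure AI or AI21 interface.
import os
import requests

ENDPOINT = os.environ["JAMBA_ENDPOINT_URL"]  # hypothetical deployment-specific HTTPS URL
API_KEY = os.environ["JAMBA_API_KEY"]        # hypothetical key issued for the deployment

payload = {
    "messages": [
        {"role": "user", "content": "Summarise the key obligations in the attached contract."}
    ],
    "max_tokens": 512,
    "temperature": 0.2,
}

response = requests.post(
    f"{ENDPOINT}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
# Response shape assumed to follow the common chat-completions convention.
print(response.json()["choices"][0]["message"]["content"])
```

The point of the consumption model is that provisioning, scaling, and GPU management stay on the provider’s side; the application only handles requests and responses.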

Jamba-Instruct is designed to deliver value, quality, and performance for enterprise applications.

Jamba-Instruct stands out from competing models on both cost and quality, making it a strong choice for organisations seeking advanced AI solutions that are cost-effective without sacrificing performance or reliability.

Safety guardrails, conversation capabilities, and improved instruction comprehension come standard with Jamba-Instruct. This lowers total cost of ownership (TCO) and shortens the path to production for enterprise applications, reducing time to value for organisations.

“Jamba-Instruct will be introduced to Microsoft Azure customers in the near future through a collaboration between our organisation and Microsoft,” said Pankaj Dugar, SVP and GM North America at AI21. “This partnership reaffirms our mutual dedication to equipping organisations with some of the most cutting-edge AI solutions accessible. Our mission at AI21 is in perfect harmony with Microsoft’s commitment to prioritising customers and ensuring easy access to state-of-the-art technology.” Customers will soon be able to use Jamba-Instruct on Azure AI, a step poised to reshape how language processing is approached and to help organisations reach new levels of effectiveness and understanding.

Eric Boyd, Corporate Vice President of Azure AI Platform, expressed enthusiasm about the upcoming collaboration with AI21 to incorporate the Jamba-Instruct model into Microsoft Azure’s extensive AI model portfolio. This addition will offer developers a diverse selection of foundational and open models. The Jamba-Instruct model is designed to provide a strategic advantage by reducing costs while maintaining high performance and quality, thanks to its unique architecture and extended context window.
