
Product News: August 2024

We've made significant strides in delivering a comprehensive, end-to-end experience for building LLM-powered applications and services for the enterprise.


New Platform Components

We are excited to announce general availability of two new major platform components! 

Introducing our intelligent content store and enterprise-grade workflow. Together with our LLM service execution, these components form the first AI-native platform designed for content-rich workflows. This powerful combination enables complex agentic behaviors, enriched memory, and advanced generation capabilities.


In addition to the major announcements about our intelligent content store and enterprise-grade workflow, we have made a number of improvements to our Studio, universal LLM library, developer tools, and cloud security.


Studio 

Our objectives: Make it easy for developers, prompt engineers, and business analysts to rapidly build, test, deploy, and operate LLM-powered tasks.

Working with JSON

It’s great to get proper JSON out of LLMs when building applications, but it’s a bit harder for us humans to read! JSON output is now automatically rendered in a user-friendly view, making results easier to read and share and enabling better collaboration with business users.

Just like reading JSON, writing it is hard too! We now automatically generate forms so it’s easy for users to type in test data and see how the prompt is rendered.

Finally, annotating JSON schemas is a powerful way to instruct LLMs, so we added a control for non-technical users to edit the description of properties in JSON schemas. 
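To make the idea concrete, here is a minimal TypeScript sketch of the kind of annotation the Studio control performs: updating the natural-language description of a JSON Schema property so the LLM receives clearer instructions. The schema shape and helper function are illustrative, not part of our API.

```typescript
// A simplified JSON Schema shape, illustrative only.
type JsonSchema = {
  type: string;
  properties?: Record<string, { type: string; description?: string }>;
};

const invoiceSchema: JsonSchema = {
  type: "object",
  properties: {
    total: { type: "number", description: "Invoice total" },
    currency: { type: "string" },
  },
};

// Return a copy of the schema with one property's description updated,
// leaving the original schema untouched.
function describeProperty(
  schema: JsonSchema,
  name: string,
  description: string
): JsonSchema {
  const prop = schema.properties?.[name];
  if (!prop) throw new Error(`Unknown property: ${name}`);
  return {
    ...schema,
    properties: { ...schema.properties, [name]: { ...prop, description } },
  };
}

const annotated = describeProperty(
  invoiceSchema,
  "currency",
  "ISO 4217 code, e.g. 'USD'"
);
```

Because the helper returns a new object rather than mutating in place, the original schema stays intact, which mirrors how a UI control can preview edits before saving them.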

Replay Runs

Take any historic run and replay it using the same or a different model. This feature is extremely useful for debugging and improving performance, and equally valuable for observing and tuning behavior when operating in production.


Advancing LLMs and GenAI

Our objectives: Advance LLM and GenAI by adopting the latest innovations and building advanced features for working with generative models.

We have a number of advancements in our LLM support! All LLM improvements have also been released in our universal LLM library, LLumiverse, on GitHub. LLumiverse is a lightweight Node.js library that abstracts and unifies prompt execution across LLMs.
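To give a flavor of the pattern, here is a hypothetical TypeScript sketch of the driver abstraction that a universal LLM library embodies; the interface and class names below are illustrative, not LLumiverse's actual API.

```typescript
// A unified driver interface: application code targets this shape,
// so switching providers is a one-line change.
interface CompletionDriver {
  provider: string;
  execute(prompt: string, model: string): Promise<string>;
}

// A toy driver for demonstration. A real driver would translate the
// prompt into the provider's wire format and call its inference endpoint.
class EchoDriver implements CompletionDriver {
  constructor(public provider: string) {}
  async execute(prompt: string, model: string): Promise<string> {
    return `[${this.provider}/${model}] ${prompt}`;
  }
}

// Application code stays provider-agnostic.
async function run(
  driver: CompletionDriver,
  prompt: string,
  model: string
): Promise<string> {
  return driver.execute(prompt, model);
}
```

The value of the abstraction is that prompts, models, and providers vary independently: the same application code can execute against any driver that implements the interface.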

Multimodality

Introducing cross-model multimodal support! You can now integrate images into your prompts, alongside text instructions, and receive answers. This is supported on all the main multimodal models: Claude 3, GPT-4o, and Gemini 1.5.
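Conceptually, a multimodal prompt interleaves text and image parts in a single payload. The TypeScript shape below is a generic sketch of that idea, not any provider's exact wire format.

```typescript
// A prompt part is either text instructions or an image payload.
type Part =
  | { type: "text"; text: string }
  | { type: "image"; mimeType: string; data: string }; // base64-encoded bytes

// Combine an instruction and an image into one multimodal prompt.
function buildPrompt(instruction: string, imageBase64: string): Part[] {
  return [
    { type: "text", text: instruction },
    { type: "image", mimeType: "image/png", data: imageBase64 },
  ];
}
```

A cross-model layer then maps this neutral structure onto each provider's own message format, so the same prompt can target Claude 3, GPT-4o, or Gemini 1.5.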

New Inference Providers

We’ve added three new inference providers: Microsoft’s Azure OpenAI, IBM’s WatsonX.ai, and Mistral AI’s La Plateforme. Our platform and LLumiverse, our library of universal LLM connectors, now support the 10 most prevalent inference providers for enterprise use!

Azure + OpenAI

Microsoft’s Azure OpenAI expands on our existing support for OpenAI models. Organizations already running cloud workloads with Microsoft can now use the platform along with the Azure OpenAI Service for inference.


IBM

WatsonX.ai brings support for IBM’s Granite series of foundation models, which have been trained on trusted enterprise data spanning internet, academic, code, legal, and finance domains. WatsonX.ai also hosts numerous open-source models from Meta, Google, Mistral AI, and more.


Mistral

Mistral AI’s La Plateforme expands on our existing support for Mistral AI models available on Amazon Bedrock. Now you can access Mistral AI’s state-of-the-art generalist models, specialized models, and research models through La Plateforme.


New LLMs

With the addition of new inference providers and our universal LLM connectors, LLumiverse, we've seamlessly added support for new models and prompt formats as they became available. Notably, we've added support for Anthropic's Claude 3.5 Sonnet, Meta's Llama 3.1 models, and Mistral AI's Mistral Large 2, all of which have been released and deployed across multiple inference providers over the past few months.

Embeddings

The platform now supports embeddings generation on all supported inference providers. Embeddings enable vector search on content and are automatically generated for content objects and semantic chunks when configured on a project. Need to change the embedding model? No problem, simply configure the desired environment and model and recalculate embeddings for the content store.
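Under the hood, vector search ranks content by similarity between embeddings. Here is a minimal, dependency-free TypeScript sketch of cosine-similarity ranking over stored chunk vectors:

```typescript
// Cosine similarity between two equal-length vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0,
    na = 0,
    nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored chunks against a query embedding, highest score first.
function rank(
  query: number[],
  chunks: { id: string; vector: number[] }[]
): { id: string; score: number }[] {
  return chunks
    .map((c) => ({ id: c.id, score: cosineSimilarity(query, c.vector) }))
    .sort((x, y) => y.score - x.score);
}
```

This also shows why recalculating embeddings matters when you switch models: vectors from different embedding models live in different spaces, so query and content embeddings must come from the same model for the scores to be meaningful.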


Developer Experience

Our objectives: Make it easy for developers to build applications, including APIs, SDKs, CLI, language support, IDE support, code sync, and more.

CLIENT & CLI

Updated Packages

We’ve released new versions of the client and CLI packages, which also include the new API endpoints for workflow and content.

SCHEMAS

On-Demand Schemas

You can now dynamically pass a schema to an interaction execution, enabling dynamic structured data generation.
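For example, a request that supplies a schema at execution time might look like the sketch below; the field names and interaction name are illustrative, not our documented API.

```typescript
// Hypothetical execution request: the schema travels with the request
// instead of being fixed on the interaction, so the same interaction
// can produce differently structured results per call.
const executionRequest = {
  interaction: "ExtractInvoice", // illustrative interaction name
  data: { document: "invoice-123" },
  // Passed on demand at execution time:
  result_schema: {
    type: "object",
    properties: {
      total: { type: "number" },
      currency: { type: "string" },
    },
    required: ["total"],
  },
};
```

The payoff is flexibility: callers that need different output shapes no longer need separate interactions, only different schemas.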

INTEGRATION

OpenAPI

Easily integrate with OpenAPI clients using our improved OpenAPI endpoint for interaction execution. All the tasks are now available and properly typed.


Security

Our objectives: Enforce enterprise-grade security system-wide, including key management, access control, encryption, certification, compliance, and other security related topics.

Multi-Tenant Support

Managing multiple projects or clients? Our improved infrastructure ensures that each project is now stored in its own separate database. This means better data isolation, enhanced security, and a clear boundary between different projects. Whether you’re handling sensitive data across various clients or simply need to keep your projects neatly organized, this feature is designed to give you peace of mind and simplify your project management.

Workload Identity Federation

You can now connect and authenticate with your cloud provider, down to the account or project level, to enable fine-grained access to execution environments performing inference.


Conclusion

We continue to focus on supporting the leading inference providers, models, and LLM innovations like multi-modality through our open-source project LLumiverse, while enhancing the experiences for both engineers and non-technical users through our Studio and developer tools.

We’ve entered the next phase of our product evolution and continue to execute on our mission of transforming the way businesses interact with content by combining LLM service execution with enterprise-grade workflow and GenAI-enriched content. This powerful combination delivers the most comprehensive end-to-end platform to rapidly build and operate LLM-powered applications and services for the enterprise.
