We are excited to announce general availability of two new major platform components!
Introducing our intelligent content store and enterprise-grade workflow. Together with our LLM service execution, they form the first AI-native platform designed for content-rich workflows. This powerful combination enables complex agentic behaviors, enriched memory, and advanced generation capabilities.
In addition to the major announcements about our intelligent content store and enterprise-grade workflow, we have shipped a number of improvements to our Studio, universal LLM library, developer tools, and cloud security.
Our objectives: Make it easy for developers, prompt engineers, and business analysts to rapidly build, test, deploy, and operate LLM-powered tasks.
It’s great to get proper JSON out of LLMs when building applications, but it’s harder for us humans to read! JSON is now automatically rendered in a user-friendly way, making results easier to read and share and enabling better collaboration with business users.
Writing JSON is just as hard as reading it! We now automatically generate forms so users can easily type in test data and see how the prompt is rendered.
Finally, annotating JSON schemas is a powerful way to instruct LLMs, so we added a control for non-technical users to edit the description of properties in JSON schemas.
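To illustrate why these descriptions matter, here is a minimal sketch of a schema with annotated properties. The schema itself (an invoice extractor) is hypothetical, not one shipped with the platform; the point is that each `description` travels with the schema and acts as an instruction to the model.

```python
import json

# Hypothetical invoice-extraction schema (for illustration only).
# Each "description" annotation doubles as an instruction to the LLM
# about how to fill in that property.
invoice_schema = {
    "type": "object",
    "properties": {
        "vendor": {
            "type": "string",
            "description": "Legal name of the vendor, exactly as printed on the invoice.",
        },
        "total": {
            "type": "number",
            "description": "Grand total including tax, as a plain number without currency symbols.",
        },
        "due_date": {
            "type": "string",
            "description": "Payment due date in ISO 8601 format (YYYY-MM-DD).",
        },
    },
    "required": ["vendor", "total"],
}

print(json.dumps(invoice_schema, indent=2))
```

Because the instructions live in the schema rather than in application code, a non-technical user editing a description in Studio changes the model's behavior without a deployment.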
Take any historical run and replay it using the same or a different model. This feature is extremely useful for debugging and improving performance, and for observing and tuning behavior when operating in production.
Our objectives: Advance LLM and GenAI by adopting the latest innovations and building advanced features for working with generative models.
We have a number of advancements in our LLM support! All LLM improvements have also been released into our universal LLM library, LLumiverse, on GitHub. LLumiverse is a lightweight Node.js library that abstracts and unifies prompt execution across LLMs.
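The core idea of such an abstraction layer can be sketched in a few lines. This is a conceptual illustration only, not LLumiverse's actual interface: each provider implements one uniform `execute()` method, so application code never depends on a specific vendor's SDK.

```python
from abc import ABC, abstractmethod

class Driver(ABC):
    """Conceptual driver abstraction (illustrative, not the real
    LLumiverse API): one uniform interface per inference provider."""

    @abstractmethod
    def execute(self, prompt: str, model: str) -> str: ...

class EchoDriver(Driver):
    # Stand-in for a real provider driver (OpenAI, Bedrock, Vertex AI, ...).
    # A real driver would call the provider's API here.
    def execute(self, prompt: str, model: str) -> str:
        return f"[{model}] {prompt}"

def run(driver: Driver, prompt: str, model: str) -> str:
    # Application code depends only on the abstract interface, so swapping
    # providers means swapping drivers, not rewriting prompts.
    return driver.execute(prompt, model)

print(run(EchoDriver(), "Summarize this document.", "demo-model"))
```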
Introducing cross-model multi-modal support! You can now integrate images into your prompts, alongside text instructions, and receive answers. This is supported on all the main multi-modal models: Claude 3, GPT-4o, and Gemini 1.5.
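A multi-modal prompt is essentially a message whose content mixes typed parts. The shape below is a neutral sketch under our own naming (not any provider's or LLumiverse's actual wire format): text parts and base64-encoded image parts sit side by side, and a per-provider driver translates them into each model's native request format.

```python
import base64

def text_part(text: str) -> dict:
    # A plain text instruction segment.
    return {"type": "text", "text": text}

def image_part(image_bytes: bytes, mime_type: str = "image/png") -> dict:
    # Wrap raw image bytes as a base64 content part.
    # (Illustrative shape only, not an actual provider payload.)
    return {
        "type": "image",
        "mime_type": mime_type,
        "data": base64.b64encode(image_bytes).decode("ascii"),
    }

# One message interleaving instructions and an image; Claude 3, GPT-4o,
# and Gemini 1.5 each accept an equivalent structure in their own format.
message = {
    "role": "user",
    "content": [
        text_part("Describe the chart in this image."),
        image_part(b"\x89PNG fake bytes for illustration"),
    ],
}

print(message["content"][0]["type"], message["content"][1]["type"])
```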
We’ve added three new inference providers: Microsoft’s Azure OpenAI, IBM’s watsonx.ai, and Mistral AI’s La Plateforme. Our platform and LLumiverse, our library of universal LLM connectors, now support the 10 most prevalent inference providers for enterprise use!
With the addition of new inference providers and our universal LLM connectors, LLumiverse, we've seamlessly added support for new models and prompt formats as they became available. Notably, we've added support for Anthropic's Claude 3.5 Sonnet, Meta's Llama 3.1 models, and Mistral AI's Mistral Large 2, all of which have been released and deployed across multiple inference providers over the past few months.
The platform now supports embeddings generation on all supported inference providers. Embeddings enable vector search on content and are automatically generated for content objects and semantic chunks when configured on a project. Need to change the embedding model? No problem, simply configure the desired environment and model and recalculate embeddings for the content store.
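To make the vector-search idea concrete, here is a toy sketch of how embeddings are compared. The three-dimensional vectors and chunk names are invented for illustration (real embedding models emit hundreds or thousands of dimensions); ranking by cosine similarity is the standard technique behind vector search over content and semantic chunks.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" for two stored chunks and one query.
query = [0.9, 0.1, 0.0]
chunks = {
    "chunk-a": [0.8, 0.2, 0.1],
    "chunk-b": [0.0, 0.1, 0.9],
}

# Vector search ranks chunks by similarity to the query embedding.
best = max(chunks, key=lambda name: cosine_similarity(query, chunks[name]))
print(best)  # chunk-a: its vector points in nearly the same direction as the query
```

This also shows why switching embedding models requires recalculating embeddings: vectors from different models live in different spaces, so similarities are only meaningful between embeddings produced by the same model.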
Our objectives: Make it easy for developers to build applications, including APIs, SDKs, CLI, language support, IDE support, code sync, and more.
Our objectives: Enforce enterprise-grade security system-wide, including key management, access control, encryption, certification, compliance, and other security related topics.
Managing multiple projects or clients? Our improved infrastructure ensures that each project is now stored in its own separate database. This means better data isolation, enhanced security, and a clear boundary between different projects. Whether you’re handling sensitive data across various clients or simply need to keep your projects neatly organized, this feature is designed to give you peace of mind and simplify your project management.
You can now connect and authenticate with your cloud provider, down to the account or project level, to enable fine-grained access to execution environments performing inference.
We continue to focus on supporting the leading inference providers, models, and LLM innovations like multi-modality through our open-source project LLumiverse, while enhancing the experiences for both engineers and non-technical users through our Studio and developer tools.
We’ve entered the next phase of our product evolution and continue to execute on our mission of transforming the way businesses interact with content by combining LLM service execution with enterprise-grade workflow and GenAI-enriched content. This powerful combination delivers the most comprehensive end-to-end platform to rapidly build and operate LLM-powered applications and services for the enterprise.