COMPOSABLE SOFTWARE

The Platform for GenAI & LLM Applications

With Composable's Large Language Model (LLM) software platform, enterprise teams design, test, deploy, and operate LLM-powered tasks that automate and augment their business processes and applications, with built-in security, governance, and orchestration to drive efficiency, improve performance, and lower costs.

THE PRODUCT

What is Composable?

Composable is an API-first LLM software platform that enables enterprise organizations to focus on what they do best while helping accelerate and future-proof how they experiment, build, deploy, manage, and scale GenAI-augmented applications.
  • The Web UI

  • API & Integration Options

  • Cloud Deployment

Composable Studio

Studio is the web UI of Composable. It enables enterprise teams to rapidly create, test, and deploy LLM tasks. Studio provides an easy and secure way to connect to all of the major AI providers through our open-source connectors. It also includes a prompt designer with prompt templates, an interaction composer where you define the tasks you want the LLM to perform, a playground to test, compare, and refine your prompts and models, the ability to fine-tune everything, monitoring and analytics to understand how your interactions and models perform, plus semantic RAG and workflow capabilities.

Multiple Integration Options

Composable is an API-first platform. Anything you can do in Studio, you can do via the API. We offer a REST API, OpenAPI/Swagger, JavaScript SDK, and CLI.

Multi-Cloud Deployment

Composable's multi-cloud SaaS is hosted on Google Cloud and AWS. Composable can also be deployed in any public or private cloud supporting container images and MongoDB.

INFERENCE PROVIDERS

AI/LLM Environments

Environments are where you connect to inference providers. Simply add your API key to connect to any of the major AI providers and access their LLM foundation models using our open-source connectors.

It's easy to assign different models to different tasks from any of the available inference providers at any time.

Supported inference providers include OpenAI, Amazon Bedrock, Google Vertex AI, Azure OpenAI, IBM watsonx, Groq, Replicate, Hugging Face, Mistral, and Together AI.
SYNTHETIC LLM

Virtualized LLM

Virtualized or Synthetic LLM is how we bring specialized models to the enterprise.

Virtualized LLMs can distribute tasks across multiple models to eliminate any one model as a single point of failure. They can also send tasks to multiple models in parallel to assess and select the best result, evaluate and gradually roll out new models, or fine-tune lower-cost models based on the results of better-performing models.

Want to assign a task to Llama 2 instead of GPT-4 in 30% of cases? No problem.
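A 70/30 split like this amounts to weighted routing. The sketch below is illustrative only; the weight format and routing logic are assumptions for explanation, not Composable's internal implementation:

```javascript
// Illustrative sketch of weighted model routing; not Composable's actual
// implementation. Each candidate model carries a weight, and a task is
// routed to one model in proportion to those weights.
function pickModel(models, rand = Math.random()) {
  const total = models.reduce((sum, m) => sum + m.weight, 0);
  let threshold = rand * total;
  for (const m of models) {
    threshold -= m.weight;
    if (threshold < 0) return m.name;
  }
  return models[models.length - 1].name; // guard against rounding drift
}

// 30% of tasks go to Llama 2, the rest to GPT-4:
const routing = [
  { name: "llama-2", weight: 0.3 },
  { name: "gpt-4", weight: 0.7 },
];
```

Because weights are normalized against their sum, the same routine handles any number of candidate models.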

Our Virtualized LLM capabilities enable:

  • Self-improvement
  • Specialization (& distillation)
  • Gradual roll-out of new models
  • Benchmarking
  • Evaluation
  • Model independence

Load Balancing
Distribute tasks across models based on weights

Shadowing
Execute in the shadow of the main model for evaluation

Multi-Head
Several LLMs execute the task in parallel; an evaluator selects the best result to serve and labels all outputs

Self-Training
Automatically fine-tune lower-performing models on results selected by an LLM or human feedback

Self-Improvement
Iterate on Self-Training to have the model converge and specialize on the task

LLM PROMPTS

Prompt Designer

Prompt templates are the building blocks of prompts and are assembled in interactions to define a task. Select a prompt template from our library of examples, or create your own reusable prompt.

Reuse tested prompts and compose them to create more complex versions. In addition, prompts carry input and output schemas, strengthening quality through type safety.

Prompts are automatically converted to the target model's format without any changes on your part. We manage the syntax and transformation needed for each LLM.

Prompt Templates

Combine prompts with input schemas, variables, and test data to create a rendering of your prompt.

Prompt Rendering

Write your prompt once and let our platform manage the syntax and transformations optimized for running against any LLM.
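As a hedged sketch of what a rendering step can look like (the `{{placeholder}}` syntax here is an assumption for illustration; Composable's actual template syntax may differ):

```javascript
// Illustrative prompt-template rendering; the placeholder syntax is an
// assumption, not Composable's documented format. Variables in {{name}}
// placeholders are substituted from a data object, and a missing variable
// fails loudly rather than producing a broken prompt.
function renderPrompt(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => {
    if (!(name in vars)) throw new Error(`missing variable: ${name}`);
    return String(vars[name]);
  });
}

const template =
  "Summarize the following {{docType}} in at most {{maxWords}} words:\n{{text}}";
```

Calling `renderPrompt(template, { docType: "contract", maxWords: 50, text: documentText })` yields the concrete prompt that is actually sent to the model.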

Prompt Library

Share, reuse, and innovate with a library of prompt templates to accelerate the creation of new LLM tasks. 

LLM TASKS

Interaction Composer

Interactions define the tasks that the LLM is requested to perform. Define your task and output schema, add your prompt segments, and pick your LLM. You can even mediate or load balance between multiple LLMs.

Task Configuration

Give your task a name, choose an LLM or Virtualized LLM, and define the output of your task as basic text or a strongly typed schema.

Prompt Segments

Select from the available prompt templates to create the prompt segments, test data, and final prompt rendering that defines your interaction.

Prompt Assistance

Receive AI-driven recommendations for prompt designs, backed by custom training to refine LLM responses.

REFINEMENT

Playground & Fine-Tuning

Test, compare, and refine your prompts and LLMs. Publish your interaction when ready to deploy your AI/LLM task.

Playground

Run your interaction with test data against any LLM and stream the result in real time until the final output is displayed as a form or as JSON.

Publishing & Versioning

Publish your interaction with version control and an audit trail that keeps track of all history. Even fork an interaction to create something new.

Fine-Tuning

Fine-tune everything! Fine-tune your prompts, interactions, or LLM environments based on your runs.

INSIGHTS

Monitoring & Analytics

Monitor the execution of your tasks along with analytics to understand how your interactions and models perform.

Tests

Craft tests to validate interactions, ensuring ongoing consistency and adherence to enterprise standards.

Runs

Dive deep into LLM results, track iterative changes, and decipher variations to fine-tune LLM interactions.

Analytics

Stay atop LLM performance metrics. Monitor quality, latency, and overall system health for proactive management.

ORCHESTRATION

Content & Workflow

An intelligent content store to pre-process content for retrieval-augmented generation (RAG) and a workflow engine for orchestrating durable generative AI processes.

CONNECTIVITY

API & Integration

Enterprise teams can integrate LLM-powered tasks into existing applications or create brand new applications and services with multiple integration options. Expose interaction definitions as robust API endpoints, ensure top-notch schema validation, and minimize call latency.

REST API

The Composable API is a RESTful API to create, manage, and execute interactions.
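As a hypothetical sketch only: the endpoint path, payload fields, and auth header below are assumptions for illustration, not Composable's documented API; consult the OpenAPI/Swagger spec for the real shape. Executing an interaction over REST might look like:

```javascript
// Hypothetical request builder for executing an interaction over REST.
// The path, payload fields, and header names are illustrative assumptions;
// the real API shape is defined by the platform's OpenAPI/Swagger spec.
function buildExecuteRequest(baseUrl, apiKey, interactionId, data) {
  return {
    url: `${baseUrl}/interactions/${encodeURIComponent(interactionId)}/execute`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ data }),
    },
  };
}

const req = buildExecuteRequest(
  "https://api.example.com/v1",
  "YOUR_API_KEY",
  "summarize-contract",
  { text: "The parties agree to the following terms." }
);
// fetch(req.url, req.options) would then run the task.
```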

OpenAPI/Swagger

The OpenAPI/Swagger spec describes the API for executing interactions and accessing runs.

JavaScript SDK

The JavaScript SDK library provides an easy way to integrate with applications using JavaScript.

CLI

The command line interface (CLI) can be used to access your projects from a terminal.

MULTI-CLOUD

Multiple Deployment Options

Composable's multi-cloud SaaS is hosted on Google Cloud and AWS. Composable can also be deployed in any public or private cloud supporting container images and MongoDB.

FAQ

Commonly Asked Questions

What does API-first mean?

Everything you can do in the UI you can do through the API. 

Where can Composable run?

Composable can run as SaaS or can be deployed on any cloud infrastructure.

Can Composable be used with custom models?

Yes, custom models can be accessed through any supported inference providers.

Is Composable an LLM application development framework?

No, Composable is an end-to-end platform that offers production-ready LLM services, a content engine, and agentic orchestration.

Build Intelligent Applications and Services using LLMs

With Composable's LLM software platform, enterprise teams can harness AI by leveraging multiple inference providers and models to create LLM-powered tasks that safely automate and augment their business, without the burden and complexity of managing disparate APIs, security models, or prompt formats.

See if Composable is right for your organization. Request a demo now.