INFERENCE PROVIDER

Groq

Groq is a fully integrated inference provider: an execution environment in which language models run. Groq created the first Language Processing Unit™ (LPU™) Inference Engine, an end-to-end processing unit system that provides extremely fast inference for computationally intensive applications with a sequential component, such as language models.

UNIQUE BENEFITS

Groq Integration with Composable

The integration with Groq offers unique benefits for Composable users:
  • A powerful prompt studio

  • An execution and management UI

  • The ability to port part of your workloads to Groq without application code changes

FEATURES

Composable Environments

Environments are the runtime in which your generative models execute.

Portable Task Model

Execute a task on any model or inference provider with zero code changes.

Single Execution Interface

One interface for executing all models and providers, including streaming responses.

Virtualization Layer

Integrate different models and providers into a single virtualized environment
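The portable-task and virtualization ideas above can be sketched generically. This is not Composable's actual API; the names `Provider`, `GroqProvider`, and `run_task` are hypothetical, illustrating how a single execution interface lets the same task run on different providers without touching application code:

```python
# Hypothetical sketch of a single execution interface over multiple
# inference providers. All names here are illustrative, not Composable's API.
from abc import ABC, abstractmethod


class Provider(ABC):
    """Uniform interface that every inference provider implements."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class GroqProvider(Provider):
    def complete(self, prompt: str) -> str:
        # Real code would call Groq's inference API here.
        return f"[groq] {prompt}"


class OtherProvider(Provider):
    def complete(self, prompt: str) -> str:
        # Stand-in for any other integrated provider.
        return f"[other] {prompt}"


def run_task(provider: Provider, prompt: str) -> str:
    """The task itself never changes; only the provider binding does."""
    return provider.complete(prompt)


# The same task runs unchanged on either provider:
print(run_task(GroqProvider(), "Summarize this document"))
print(run_task(OtherProvider(), "Summarize this document"))
```

Because the task depends only on the abstract interface, swapping providers is a binding change rather than a code change, which is the essence of the virtualization layer.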

Fine-Tuning

Fine-tune everything! Fine-tune your prompts, interactions, or LLM environments based on your runs.

Load-Balancing

Distribute tasks across models based on configured weights.
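Weight-based distribution can be sketched in a few lines. The model names and the 75/25 split below are hypothetical, chosen only to show how a weight determines each model's share of traffic:

```python
# Minimal sketch of weight-based load balancing across models.
# Model names and weights are hypothetical, not Composable's configuration.
import random

# Route roughly 75% of tasks to the Groq-hosted model, 25% elsewhere.
MODEL_WEIGHTS = {"groq/llama3-70b": 0.75, "other/gpt-class": 0.25}


def pick_model(weights: dict[str, float], rng: random.Random) -> str:
    """Choose a model for the next task, proportionally to its weight."""
    models = list(weights)
    return rng.choices(models, weights=[weights[m] for m in models], k=1)[0]


rng = random.Random(42)  # seeded so the run is reproducible
picks = [pick_model(MODEL_WEIGHTS, rng) for _ in range(1000)]
share = picks.count("groq/llama3-70b") / len(picks)
print(f"groq share over 1000 picks: {share:.2f}")  # close to 0.75
```

Over many tasks, each model's share of traffic converges to its weight, so shifting load toward or away from a provider is a matter of adjusting one number.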

Storage, Indexing & Search

TAKE THE NEXT STEP

Get a Demo of Composable

Experience a live demo, ask questions, and discover why Composable is the right choice for your organization.