INFERENCE PROVIDER

Hugging Face

We support Hugging Face Inference Endpoints, which let developers easily deploy Transformers, Diffusers, or any other model on dedicated, fully managed infrastructure.
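
As a rough sketch, a deployment like this can be driven from the huggingface_hub Python client; the model name, vendor, region, and instance values below are illustrative placeholders, not recommendations:

# A minimal sketch of deploying a model to a dedicated Inference Endpoint.
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-text-endpoint",          # endpoint name (example)
    repository="gpt2",           # any Transformers or Diffusers repository
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x2",
    instance_type="intel-icl",
)
endpoint.wait()                  # block until the endpoint is running
print(endpoint.url)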

UNIQUE BENEFITS

Hugging Face Integration with Composable

The integration with Hugging Face offers unique benefits for Composable users:
  • Easily deploy any Hugging Face Transformers, Sentence-Transformers, or Diffusion model

  • Support for all Hugging Face Transformers, Sentence-Transformers, and Diffusion tasks, as well as custom tasks not covered out of the box by Hugging Face Transformers (see the handler sketch below)
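
For the custom-task case, Hugging Face Inference Endpoints can load a handler.py that defines an EndpointHandler class. The sketch below wires up an illustrative text-classification pipeline; the task and post-processing are placeholders:

# handler.py -- custom task handler loaded by the Inference Endpoint
from typing import Any, Dict, List

from transformers import pipeline


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` is the model repository checked out on the endpoint
        self.pipeline = pipeline("text-classification", model=path)

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        inputs = data["inputs"]
        parameters = data.get("parameters", {})
        return self.pipeline(inputs, **parameters)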

FEATURES

Composable Environments

Environments are the runtime in which a generative model executes.

Portable Task Model

Execute a task on any model and inference provider with zero changes

Single Execution Interface

For all models and providers, including streaming
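
As a purely hypothetical illustration (not Composable's actual API), a single execution interface with optional streaming could be modeled along these lines:

# Hypothetical sketch of one execution interface for any model or provider.
from typing import Iterator, Protocol


class ExecutionTarget(Protocol):
    """Anything that can run a prompt: a hosted endpoint, a local model, etc."""

    def execute(self, prompt: str) -> str: ...
    def stream(self, prompt: str) -> Iterator[str]: ...


def run(target: ExecutionTarget, prompt: str, streaming: bool = False) -> None:
    # The caller uses the same entry point regardless of the underlying provider.
    if streaming:
        for chunk in target.stream(prompt):
            print(chunk, end="", flush=True)
    else:
        print(target.execute(prompt))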

Virtualization Layer

Integrate different models and providers into a single virtualized environment

Fine-Tuning

Fine-tune everything! Fine-tune your prompts, interactions, or LLM environments based on your runs.

Load-Balancing

Distribute tasks across models based on weights
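
As a minimal, hypothetical sketch of weight-based distribution (the model names and weights are made up):

# Pick a model with probability proportional to its configured weight.
import random

MODEL_WEIGHTS = {"model-a": 0.7, "model-b": 0.2, "model-c": 0.1}


def pick_model() -> str:
    names, weights = zip(*MODEL_WEIGHTS.items())
    return random.choices(names, weights=weights, k=1)[0]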

Storage, Indexing & Search

TAKE THE NEXT STEP

Get a Demo of Composable

Experience a live demo, ask questions, and discover why Composable is the right choice for your organization.