API Integration

Integrate AI models running on CLORE.AI into your applications.


Quick Start

Most AI services on CLORE.AI expose OpenAI-compatible APIs: point the client at your server's base URL and existing OpenAI client code works unchanged.

from openai import OpenAI

client = OpenAI(
    base_url="http://<your-clore-server>:8000/v1",
    api_key="not-needed"  # most self-hosted servers don't require a key
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

LLM APIs

vLLM (OpenAI Compatible)

Server setup:
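A minimal launch command for vLLM's OpenAI-compatible server; the model name and port are examples, substitute your own:

```shell
# Serve a model with an OpenAI-compatible API on port 8000
vllm serve meta-llama/Llama-3.1-8B-Instruct \
    --host 0.0.0.0 \
    --port 8000
```

The server then answers the standard /v1/chat/completions and /v1/completions routes.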

Python client:
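A sketch using plain requests against the chat completions route; the host placeholder, model name, and the build_chat_request helper are this example's own choices:

```python
import requests

BASE_URL = "http://<your-clore-server>:8000"  # replace with your vLLM host

def build_chat_request(prompt, model="meta-llama/Llama-3.1-8B-Instruct",
                       max_tokens=256, temperature=0.7):
    """Assemble an OpenAI-style chat completion body for vLLM."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

# The placeholder host keeps this from firing until you set a real address.
if "<your-clore-server>" not in BASE_URL:
    r = requests.post(f"{BASE_URL}/v1/chat/completions",
                      json=build_chat_request("Hello!"), timeout=60)
    r.raise_for_status()
    print(r.json()["choices"][0]["message"]["content"])
```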

Node.js client:
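A sketch for Node 18+ using the built-in fetch; the host placeholder, model name, and helper names are assumptions of this example:

```javascript
const BASE_URL = "http://<your-clore-server>:8000";

function buildChatRequest(prompt, model = "meta-llama/Llama-3.1-8B-Instruct") {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
  };
}

async function chat(prompt) {
  const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(prompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// chat("Hello!").then(console.log);  // run once BASE_URL points at your server
```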

cURL:
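The same request from the command line; replace the host placeholder with your server:

```shell
curl http://<your-clore-server>:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```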

Ollama API

Python:
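A sketch against Ollama's native /api/generate endpoint (Ollama listens on port 11434 by default); the model name and helper are this example's assumptions:

```python
import requests

BASE_URL = "http://<your-clore-server>:11434"  # Ollama's default port

def build_generate_request(prompt, model="llama3.1", stream=False):
    """Body for Ollama's native /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

if "<your-clore-server>" not in BASE_URL:
    r = requests.post(f"{BASE_URL}/api/generate",
                      json=build_generate_request("Hello!"), timeout=120)
    r.raise_for_status()
    print(r.json()["response"])
```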

Ollama also supports OpenAI format:

TGI API

Python:
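A sketch against TGI's native /generate endpoint; the port here assumes you mapped the container's port to 8080, and the helper name is ours:

```python
import requests

BASE_URL = "http://<your-clore-server>:8080"  # use whatever port you mapped TGI to

def build_generate_request(prompt, max_new_tokens=200, temperature=0.7):
    """Body for TGI's native /generate endpoint."""
    return {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens,
                       "temperature": temperature},
    }

if "<your-clore-server>" not in BASE_URL:
    r = requests.post(f"{BASE_URL}/generate",
                      json=build_generate_request("Hello!"), timeout=120)
    r.raise_for_status()
    print(r.json()["generated_text"])
```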


Image Generation APIs

Stable Diffusion WebUI API

Enable the API by adding --api to the WebUI launch command.

Python:
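A sketch against the WebUI's /sdapi/v1/txt2img route (the WebUI serves on port 7860 by default); the returned images are base64-encoded PNGs:

```python
import base64
import requests

BASE_URL = "http://<your-clore-server>:7860"  # default WebUI port

def build_txt2img_request(prompt, steps=25, width=512, height=512):
    """Minimal body for the WebUI /sdapi/v1/txt2img endpoint."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

if "<your-clore-server>" not in BASE_URL:
    r = requests.post(f"{BASE_URL}/sdapi/v1/txt2img",
                      json=build_txt2img_request("a lighthouse at dusk"),
                      timeout=300)
    r.raise_for_status()
    image_b64 = r.json()["images"][0]  # images come back base64-encoded
    with open("output.png", "wb") as f:
        f.write(base64.b64decode(image_b64))
```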

Node.js:

ComfyUI API

Python:
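A sketch that queues an exported workflow via ComfyUI's /prompt endpoint (ComfyUI serves on port 8188 by default); the workflow file must be exported from the editor in API format:

```python
import json
import uuid
import requests

BASE_URL = "http://<your-clore-server>:8188"  # default ComfyUI port
CLIENT_ID = str(uuid.uuid4())

def build_queue_request(workflow: dict) -> dict:
    """Wrap an API-format workflow (exported from the editor) for POST /prompt."""
    return {"prompt": workflow, "client_id": CLIENT_ID}

if "<your-clore-server>" not in BASE_URL:
    with open("workflow_api.json") as f:   # exported via "Save (API Format)"
        workflow = json.load(f)
    r = requests.post(f"{BASE_URL}/prompt",
                      json=build_queue_request(workflow), timeout=30)
    print(r.json()["prompt_id"])           # use this id to poll /history
```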

WebSocket for progress:
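ComfyUI pushes progress events over a websocket keyed by client id; this sketch uses the websocket-client package and skips the binary preview frames ComfyUI interleaves with JSON messages:

```python
import json
import uuid

BASE_URL = "<your-clore-server>:8188"
CLIENT_ID = str(uuid.uuid4())

def ws_url(host, client_id):
    """ComfyUI streams progress on /ws, scoped to a client id."""
    return f"ws://{host}/ws?clientId={client_id}"

if "<your-clore-server>" not in BASE_URL:
    import websocket  # pip install websocket-client
    ws = websocket.WebSocket()
    ws.connect(ws_url(BASE_URL, CLIENT_ID))
    while True:
        raw = ws.recv()
        if not isinstance(raw, str):
            continue  # binary frames carry preview images; skip them
        msg = json.loads(raw)
        if msg["type"] == "progress":
            d = msg["data"]
            print(f"step {d['value']}/{d['max']}")
        elif msg["type"] == "executing" and msg["data"]["node"] is None:
            break  # a null node signals the queued prompt has finished
```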

FLUX with Diffusers
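A sketch of local FLUX inference with Hugging Face diffusers; the schnell variant is chosen here because it is distilled for few-step generation. This needs a large GPU, so the heavy imports stay inside the function:

```python
def generate_flux(prompt: str, out_path: str = "flux.png"):
    """Render an image with FLUX.1-schnell via diffusers (GPU required)."""
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    pipe.to("cuda")
    image = pipe(prompt,
                 num_inference_steps=4,   # schnell is tuned for ~4 steps
                 guidance_scale=0.0).images[0]
    image.save(out_path)

# generate_flux("a watercolor fox")  # uncomment on a GPU server
```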


Audio APIs

Whisper Transcription

Using whisper-asr-webservice:
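A sketch against whisper-asr-webservice's /asr endpoint, which takes task options as query parameters and the audio as a multipart upload (the service listens on port 9000 by default):

```python
import requests

BASE_URL = "http://<your-clore-server>:9000"  # whisper-asr-webservice default port

def asr_params(task="transcribe", output="json", language=None):
    """Query parameters for the /asr endpoint."""
    params = {"task": task, "output": output}
    if language:
        params["language"] = language
    return params

if "<your-clore-server>" not in BASE_URL:
    with open("audio.mp3", "rb") as f:
        r = requests.post(f"{BASE_URL}/asr", params=asr_params(),
                          files={"audio_file": f}, timeout=600)
    print(r.json()["text"])
```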

Direct Whisper API:
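Running Whisper in-process with the openai-whisper package looks roughly like this (it needs ffmpeg on the host; the import sits inside the function since model loading is heavy):

```python
def transcribe(path: str, model_size: str = "base") -> str:
    """Transcribe an audio file locally (pip install openai-whisper)."""
    import whisper

    model = whisper.load_model(model_size)  # downloads weights on first use
    result = model.transcribe(path)
    return result["text"]

# print(transcribe("audio.mp3"))
```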

Text-to-Speech (Bark)
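A sketch using Suno's bark package (installed from its GitHub repository); it downloads multi-gigabyte model weights on first run, so everything stays inside the function:

```python
def speak(text: str, out_path: str = "speech.wav"):
    """Synthesize speech with Bark and write it to a WAV file."""
    from bark import SAMPLE_RATE, generate_audio, preload_models
    from scipy.io.wavfile import write as write_wav

    preload_models()              # downloads/caches the model weights
    audio = generate_audio(text)  # float32 numpy array
    write_wav(out_path, SAMPLE_RATE, audio)

# speak("Hello from CLORE!")
```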


Building Applications

Chat Application
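A minimal terminal chat loop against any OpenAI-compatible backend; the history-trimming policy (keep the system prompt plus the last N exchanges) and all names here are this example's own choices:

```python
import requests

BASE_URL = "http://<your-clore-server>:8000"
MODEL = "meta-llama/Llama-3.1-8B-Instruct"

def trim_history(messages, max_turns=10):
    """Keep the system prompt plus the last max_turns user/assistant pairs."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns * 2:]

def chat_loop():
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    while True:
        user = input("you> ")
        messages.append({"role": "user", "content": user})
        messages = trim_history(messages)  # bound the context we resend
        r = requests.post(f"{BASE_URL}/v1/chat/completions",
                          json={"model": MODEL, "messages": messages},
                          timeout=120)
        reply = r.json()["choices"][0]["message"]["content"]
        messages.append({"role": "assistant", "content": reply})
        print(f"bot> {reply}")

# chat_loop()  # run once BASE_URL points at your server
```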

Image Generation Service

Multi-Modal Pipeline
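Chaining the services above, audio to transcript to image prompt to picture, could look like this; the three hosts, the model name, and the prompt wording are all assumptions of the example:

```python
import base64
import requests

WHISPER_URL = "http://<your-clore-server>:9000"
LLM_URL = "http://<your-clore-server>:8000"
SD_URL = "http://<your-clore-server>:7860"

def summary_prompt(transcript: str) -> str:
    """Ask the LLM to condense a transcript into one image prompt."""
    return ("Describe the following transcript as a single vivid image "
            f"prompt:\n\n{transcript}")

def audio_to_image(audio_path: str, out_path: str = "scene.png"):
    # 1. transcribe the audio
    with open(audio_path, "rb") as f:
        text = requests.post(f"{WHISPER_URL}/asr", params={"output": "json"},
                             files={"audio_file": f},
                             timeout=600).json()["text"]
    # 2. condense it into an image prompt
    r = requests.post(f"{LLM_URL}/v1/chat/completions", timeout=120, json={
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": summary_prompt(text)}]})
    prompt = r.json()["choices"][0]["message"]["content"]
    # 3. render the picture
    img = requests.post(f"{SD_URL}/sdapi/v1/txt2img",
                        json={"prompt": prompt, "steps": 25}, timeout=300)
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(img.json()["images"][0]))

# audio_to_image("meeting.mp3")
```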


Error Handling
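A simple retry-with-backoff wrapper covering the failures you actually see from self-hosted inference servers (timeouts, dropped connections, 5xx during model load); the _post parameter is injectable purely so the logic can be tested offline:

```python
import time
import requests

def post_with_retry(url, json_body, attempts=3, backoff=2.0, timeout=60,
                    _post=requests.post):
    """Retry transient failures with exponential backoff."""
    last_exc = None
    for attempt in range(attempts):
        try:
            r = _post(url, json=json_body, timeout=timeout)
            if r.status_code >= 500:          # server-side hiccup: retry
                raise requests.HTTPError(f"server returned {r.status_code}")
            return r
        except (requests.Timeout, requests.ConnectionError,
                requests.HTTPError) as exc:
            last_exc = exc
            time.sleep(backoff * 2 ** attempt)  # 2s, 4s, 8s with backoff=2.0
    raise last_exc
```

Client-side errors (4xx) are deliberately not retried, since resending a malformed request cannot succeed.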


Best Practices

  1. Connection pooling - Reuse HTTP connections

  2. Async requests - Use aiohttp for concurrent calls

  3. Timeouts - Always set request timeouts

  4. Retry logic - Handle temporary failures

  5. Rate limiting - Don't overwhelm the server

  6. Health checks - Monitor server availability
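Practices 1, 3, and 4 can be combined in one pooled requests.Session with urllib3's built-in retry policy; the parameter values here are illustrative defaults:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session(retries=3, backoff=0.5, pool_size=10):
    """A pooled session with automatic retries on 5xx responses."""
    session = requests.Session()
    retry = Retry(total=retries, backoff_factor=backoff,
                  status_forcelist=[502, 503, 504],
                  allowed_methods=["GET", "POST"])
    adapter = HTTPAdapter(max_retries=retry, pool_connections=pool_size,
                          pool_maxsize=pool_size)
    session.mount("http://", adapter)   # reuse connections to both schemes
    session.mount("https://", adapter)
    return session

# session = make_session()
# session.post(url, json=payload, timeout=30)  # still set a timeout per call
```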


Next Steps
