FAQ

Common questions about using CLORE.AI for AI workloads.

Getting Started

How do I create an account?

  1. Click Sign Up

  2. Enter email and password

  3. Verify your email

  4. Done! You can now deposit funds and rent GPUs

What payment methods are accepted?

  • CLORE - Native token (often gives best rates)

  • BTC - Bitcoin

  • USDT - Tether (ERC-20, TRC-20)

  • USDC - USD Coin

What's the minimum deposit?

There's no strict minimum, but we recommend $5-10 to start experimenting. This covers several hours on budget GPUs.

How long until my deposit arrives?

| Currency | Confirmations | Typical Time |
|---|---|---|
| CLORE | 10 | ~10 minutes |
| BTC | 2 | ~20 minutes |
| USDT/USDC | Network dependent | 1-15 minutes |


Choosing Hardware

Which GPU should I choose?

It depends on your task:

| Task | Recommended GPU |
|---|---|
| Chat with 7B models | RTX 3060 12GB ($0.03/hr) |
| Chat with 13B-30B models | RTX 3090 24GB ($0.06/hr) |
| Chat with 70B models | RTX 5090 32GB ($0.15/hr) or A100 40GB ($0.17/hr) |
| Image generation (SDXL) | RTX 3090 24GB ($0.06/hr) |
| Image generation (FLUX) | RTX 4090/5090 ($0.10-0.18/hr) |
| Video generation | RTX 4090+ or A100 ($0.10-0.20/hr) |
| 70B+ models (FP16) | A100 80GB ($0.25/hr) |

See GPU Comparison for detailed specs.

What's the difference between On-Demand and Spot?

| Type | Price | Availability | Best For |
|---|---|---|---|
| On-Demand | Fixed price | Guaranteed | Production, long tasks |
| Spot | 30-50% cheaper | Can be interrupted | Testing, batch jobs |

Spot orders may be terminated if someone bids higher. Save your work frequently!

How much VRAM do I need?

Use this quick guide:

| Model Size | Minimum VRAM (Q4) | Recommended |
|---|---|---|
| 7B | 6GB | 12GB |
| 13B | 8GB | 16GB |
| 30B | 16GB | 24GB |
| 70B | 35GB | 48GB |

See Model Compatibility for full details.

What does "Unverified" mean on a server?

Unverified servers haven't completed CLORE.AI's hardware verification. They may:

  • Have slightly different specs than listed

  • Be less reliable

Verified servers have confirmed specs and better reliability.


Connecting to Servers

How do I connect via SSH?

After your order starts:

  1. Go to My Orders

  2. Find your order

  3. Copy SSH command: ssh -p <port> root@<proxy-address>

  4. Use password shown in order details

Example (hypothetical values; use the port, proxy address, and password from your order details):
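```bash
# Hypothetical values; copy the exact command and password from your order details
ssh -p 30022 root@n1.clore-proxy.example
```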

SSH connection refused - what do I do?

  1. Wait 1-2 minutes - Server may still be starting

  2. Check order status - Must be "Running"

  3. Verify port - Use the port from order details, not 22

  4. Check firewall - Your network may block non-standard ports

How do I access web interfaces (Gradio, Jupyter)?

  1. In order details, find the HTTP port link

  2. Click the link to open in browser

  3. Or manually: http://<proxy-address>:<http-port>

Can I use VS Code Remote SSH?

Yes! Add an entry like this to your ~/.ssh/config (a sketch; substitute the port and proxy address from your order details):
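```
Host clore-gpu
    HostName <proxy-address>
    Port <port>
    User root
```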

Then connect in VS Code: run the Remote-SSH: Connect to Host... command and select clore-gpu.

How do I transfer files?
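
The commands below are sketches; substitute the port and proxy address from your order details, and adjust the file paths for your own files.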

Upload to server:
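```bash
scp -P <port> ./local_file.txt root@<proxy-address>:/root/
```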

Download from server:
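```bash
scp -P <port> root@<proxy-address>:/root/outputs/result.png ./
```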

For large transfers, use rsync:
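```bash
rsync -avz -e "ssh -p <port>" ./data/ root@<proxy-address>:/root/data/
```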


Running AI Workloads

How do I install Python packages?
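
Inside a running order you can install packages with pip as usual (the package names below are just examples):

```bash
pip install transformers accelerate
```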

For persistent installations, include the packages in your startup command or Docker image.

Why is my model running out of memory?

  1. Use quantization - Q4 uses roughly 4x less VRAM than FP16 (see the sketch after this list)

  2. Enable CPU offload - Slower but works

  3. Reduce batch size - Use batch_size=1

  4. Lower context length - Reduce max_tokens

  5. Choose bigger GPU - Sometimes necessary
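
For example, with Ollama you can pull a 4-bit quantized build instead of the default tag (the tag below is illustrative; check the Ollama library for available quantizations):

```bash
ollama pull llama3.1:8b-instruct-q4_K_M
```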

How do I use Hugging Face models that require login?

For gated models (Llama, etc.), first accept the model's terms on the Hugging Face website, then authenticate on the server with your access token.
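
A sketch of authenticating on the server (the token value is a placeholder from your Hugging Face account settings):

```bash
# Option 1: interactive login via the Hugging Face CLI
huggingface-cli login

# Option 2: export your access token as an environment variable
export HF_TOKEN="hf_xxxxxxxxxxxx"
```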

Can I run multiple models at once?

Yes, if you have enough VRAM:

  • Each model needs its own VRAM allocation

  • Use different ports for different services

  • Consider vLLM for efficient multi-model serving (see the sketch below)
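
One possible layout, assuming the GPU has enough VRAM for both models (model names and ports below are illustrative):

```bash
# Ollama serves its models on port 11434 by default
ollama serve &

# Serve a second model with vLLM's OpenAI-compatible API on a different port
python -m vllm.entrypoints.openai.api_server \
  --model mistralai/Mistral-7B-Instruct-v0.2 \
  --port 8000
```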

How do I keep my work after the order ends?

Before order ends:

  1. Download outputs: scp -P <port> root@<proxy>:/root/outputs/* ./

  2. Push to cloud: aws s3 sync /root/outputs s3://bucket/

  3. Commit to Git: git push

Data is lost when order ends! Always backup important files.


Billing & Orders

How is billing calculated?

  • Hourly rate × hours used

  • Billing starts when order status is "Running"

  • Minimum billing: 1 minute

  • Partial hours billed by the minute
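
For example, 2.5 hours on an RTX 3090 at $0.06/hr costs about $0.15.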

How do I stop an order?

  1. Go to My Orders

  2. Click Stop on your order

  3. Confirm termination

You'll only be charged for time used.

Can I extend my order?

Yes, if no one has bid higher on your spot order. Add funds to your balance and the order continues automatically.

What happens if my balance runs out?

  1. Warning notification sent

  2. Grace period (~5-10 minutes)

  3. Order terminated

  4. All data lost!

Keep sufficient balance or download work before running low.

Can I get a refund?

Contact support for:

  • Hardware issues (GPU not working)

  • Significant downtime

  • Billing errors

Refunds not provided for:

  • User errors

  • Normal order usage

  • Spot order interruptions


Docker & Images

What Docker images are available?

See Docker Images Catalog for ready-to-use images:

  • ollama/ollama - LLM runner

  • vllm/vllm-openai - High-performance LLM API

  • universonic/stable-diffusion-webui - Image generation

  • yanwk/comfyui-boot - Node-based image gen

Can I use my own Docker image?

Yes! Specify the image reference when creating the order, for example (hypothetical image name):
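```
docker.io/yourusername/my-training-image:latest
```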

Ensure the image is publicly accessible or provide credentials.

How do I persist data between restarts?

Use the provided volume mount points, or specify custom volumes in the order configuration. As a sketch, custom mounts follow the standard Docker volume syntax of <path outside the container>:<path inside the container>; exactly where this is entered depends on the order form:
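```bash
# Keeps /root/outputs on persistent storage across container restarts
-v /mnt/persistent:/root/outputs
```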


Troubleshooting

"CUDA out of memory" error

  1. Check VRAM - nvidia-smi

  2. Use smaller model or quantization

  3. Enable offload - --cpu-offload

  4. Reduce batch size - batch_size=1

  5. Kill other processes - pkill python

"Connection timed out" error

  1. Check order status - Must be "Running"

  2. Wait longer - Large images take time to start

  3. Check ports - Use correct port from order

  4. Try again - Network issues are temporary

Web UI not loading

  1. Wait for startup - Some UIs take 2-5 minutes

  2. Check logs: docker logs <container>

  3. Verify port - Use HTTP port from order details

  4. Check command - Must include --listen 0.0.0.0

Model download stuck

  1. Check disk space: df -h

  2. Use smaller model - Start with 7B

  3. Pre-download - Include in Docker image

  4. Check HF token - Required for gated models

Slow inference speed

  1. Check GPU usage: nvidia-smi

  2. Enable GPU - Model may be on CPU

  3. Use Flash Attention - --flash-attn

  4. Optimize settings - Reduce precision, batch


Security

Is my data secure?

  • Each order runs in an isolated container

  • Data is deleted when the order ends

  • Network traffic goes through CLORE proxies

  • Don't store sensitive data on rented GPUs

Should I change the root password?

Optional but recommended for long-running orders. On the server, run:
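```bash
passwd
# Enter and confirm the new root password at the prompts
```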

How do I protect API keys?

  1. Use environment variables - Not command-line arguments (see the sketch after this list)

  2. Don't commit to Git - Use .gitignore

  3. Delete after use - Clear history: history -c
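
A minimal sketch (the token value and script name are placeholders):

```bash
export HF_TOKEN="hf_xxxxxxxxxxxx"   # placeholder token value, kept out of the command line
python run_inference.py             # hypothetical script that reads the token from the environment
history -c                          # clear the shell history afterwards
```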


Still Need Help?
