Setting Up Your Development Environment

What We're Building

A complete development environment setup script that provisions a GPU, installs your preferred tools (VS Code Server, Jupyter, conda), syncs your code, and keeps everything reproducible.

Prerequisites

  • Clore.ai API key

  • Python 3.10+

  • Your SSH public key (~/.ssh/id_rsa.pub)

Step 1: The Environment Configuration

# config.py
"""Environment configuration for Clore development setup."""

from dataclasses import dataclass
from typing import List, Dict, Optional
import os

@dataclass
class DevEnvironment:
    """Development environment configuration."""
    
    # GPU requirements
    gpu_type: str = "RTX 4090"
    min_vram_gb: int = 24
    max_price_usd: float = 0.50
    
    # Docker image
    base_image: str = "pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel"
    
    # Ports to expose
    # Ports to expose (container port -> protocol)
    ports: Optional[Dict[str, str]] = None
    
    # Environment variables
    env_vars: Optional[Dict[str, str]] = None
    
    # Packages to install
    pip_packages: Optional[List[str]] = None
    apt_packages: Optional[List[str]] = None
    
    # Code sync
    git_repos: Optional[List[str]] = None
    
    def __post_init__(self):
        self.ports = self.ports or {
            "22": "tcp",      # SSH
            "8888": "http",   # Jupyter
            "8080": "http",   # VS Code Server
            "6006": "http",   # TensorBoard
        }
        
        self.env_vars = self.env_vars or {
            "NVIDIA_VISIBLE_DEVICES": "all",
            "PYTHONUNBUFFERED": "1",
            "JUPYTER_TOKEN": "cloredev",
        }
        
        self.pip_packages = self.pip_packages or [
            "jupyterlab",
            "tensorboard",
            "wandb",
            "transformers",
            "datasets",
            "accelerate",
            "bitsandbytes",
        ]
        
        self.apt_packages = self.apt_packages or [
            "git",
            "curl",
            "wget",
            "vim",
            "htop",
            "nvtop",
            "tmux",
        ]
        
        self.git_repos = self.git_repos or []


# Preset configurations
PRESETS = {
    "ml-training": DevEnvironment(
        gpu_type="RTX 4090",
        min_vram_gb=24,
        base_image="pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel",
        pip_packages=[
            "transformers", "datasets", "accelerate", "bitsandbytes",
            "wandb", "tensorboard", "jupyterlab", "optuna"
        ]
    ),
    "inference": DevEnvironment(
        gpu_type="RTX 3090",
        min_vram_gb=24,
        base_image="nvidia/cuda:12.8.0-base-ubuntu22.04",
        pip_packages=[
            "vllm", "fastapi", "uvicorn", "transformers"
        ]
    ),
    "rendering": DevEnvironment(
        gpu_type="RTX 4090",
        min_vram_gb=24,
        base_image="nvidia/cuda:12.8.0-base-ubuntu22.04",
        apt_packages=["blender", "ffmpeg"]
    ),
}
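The Optional-plus-__post_init__ pattern above keeps mutable defaults out of the class body; dataclasses also offer `field(default_factory=...)`, which achieves the same thing without the None checks. A minimal sketch of that idiom (the `MiniEnv` class is illustrative only, not part of config.py):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MiniEnv:
    """Illustrative subset of DevEnvironment using default_factory."""
    gpu_type: str = "RTX 4090"
    # default_factory builds a fresh dict/list per instance,
    # so instances never share mutable state
    ports: Dict[str, str] = field(default_factory=lambda: {"22": "tcp", "8888": "http"})
    pip_packages: List[str] = field(default_factory=lambda: ["jupyterlab"])

a, b = MiniEnv(), MiniEnv()
a.pip_packages.append("wandb")
assert b.pip_packages == ["jupyterlab"]  # b's list is a separate object
```

Either style works; default_factory just moves the defaults next to the field declarations.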

Step 2: Server Setup Script Generator
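The script body for this step is not included here. One plausible shape for the generator is a function that renders the configuration fields into a bash onstart script; the following is a minimal sketch under that assumption (the function name and exact commands are illustrative):

```python
from typing import Dict, List

def generate_setup_script(
    apt_packages: List[str],
    pip_packages: List[str],
    env_vars: Dict[str, str],
) -> str:
    """Render a bash onstart script from the environment config fields."""
    lines = ["#!/bin/bash", "set -e"]
    # Export environment variables first so later steps see them
    for key, value in env_vars.items():
        lines.append(f'export {key}="{value}"')
    if apt_packages:
        lines.append("apt-get update && apt-get install -y " + " ".join(apt_packages))
    if pip_packages:
        lines.append("pip install --no-cache-dir " + " ".join(pip_packages))
    # Start JupyterLab in the background on the port exposed in the config
    lines.append("jupyter lab --ip=0.0.0.0 --port=8888 --allow-root --no-browser &")
    return "\n".join(lines) + "\n"
```

Feeding it the defaults from `DevEnvironment` yields a script you can pass to the instance as its startup command.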

Step 3: Automated Environment Provisioner
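The provisioner code for this step is also missing here. As a placeholder, the sketch below assembles an order payload for the marketplace API; the base URL, field names, and payload shape are assumptions for illustration only, so check them against the official Clore.ai API documentation before use:

```python
from typing import Any, Dict

CLORE_API = "https://api.clore.ai/v1"  # assumed base URL; verify in the API docs

def build_order_payload(
    server_id: int,
    image: str,
    onstart_script: str,
    env_vars: Dict[str, str],
    ssh_key: str,
) -> Dict[str, Any]:
    """Assemble a JSON body for a marketplace order (field names are assumptions)."""
    return {
        "currency": "CLORE-Blockchain",
        "image": image,
        "id": server_id,
        # Inject the SSH key alongside the configured environment variables
        "env": {**env_vars, "SSH_KEY": ssh_key},
        "command": onstart_script,
    }
```

A real provisioner would POST this payload with your API key in the request headers and then poll until the instance reports a public address.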

Step 4: Sync Your Local Code
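The sync script itself is not shown here. A common approach is rsync over SSH, wrapped in Python so it can reuse the host and port returned by the provisioner; a minimal sketch, with the excludes and remote path as illustrative choices:

```python
import subprocess
from typing import List

def rsync_command(local_dir: str, host: str, port: int, remote_dir: str = "/workspace") -> List[str]:
    """Build an rsync-over-SSH command for pushing local code to the instance."""
    return [
        "rsync", "-avz", "--delete",
        "--exclude", ".git", "--exclude", "__pycache__",
        "-e", f"ssh -p {port}",  # Clore instances expose SSH on a mapped port
        f"{local_dir}/", f"root@{host}:{remote_dir}/",
    ]

def sync_code(local_dir: str, host: str, port: int) -> None:
    """Run the sync, raising if rsync exits nonzero."""
    subprocess.run(rsync_command(local_dir, host, port), check=True)
```

Re-running the sync after local edits keeps the remote copy current; `--delete` mirrors removals as well, so scope `local_dir` carefully.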

Full Workflow Script
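The full script is omitted here, but the overall composition of the steps can be sketched as a small driver that wires the pieces together. The callables are injected so each step stays independently testable; all names below are illustrative:

```python
from typing import Any, Callable, Dict

def run_workflow(
    find_server: Callable[[], int],
    provision: Callable[[int], Dict[str, Any]],
    sync: Callable[[str, int], None],
) -> Dict[str, Any]:
    """Glue the steps together: pick a server, provision it, sync code."""
    server_id = find_server()               # Step 1-2: cheapest matching GPU + setup script
    order = provision(server_id)            # Step 3: create the order
    sync(order["host"], order["ssh_port"])  # Step 4: push local code over rsync
    return order
```

In practice `find_server` would query the marketplace with the `DevEnvironment` constraints, `provision` would submit the generated setup script, and `sync` would invoke the rsync wrapper.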

Quick Commands

Docker Images Reference

Image                                          Use Case           Size
pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel    ML Training        ~8GB
tensorflow/tensorflow:2.14.0-gpu               TensorFlow         ~6GB
nvidia/cuda:12.8.0-base-ubuntu22.04            Minimal CUDA       ~2GB
nvidia/cuda:12.8.0-devel-ubuntu22.04           CUDA Development   ~4GB
huggingface/transformers-pytorch-gpu           Transformers       ~10GB

Next Steps
