# Deploying on GigaSPOT

Deployments on GigaSPOT can only be made through the GigaSPOT API (<https://gigaspot-api-docs.clore.ai/>); first, generate an API key for your clore.ai account

This design was chosen deliberately: GigaSPOT is a tool for professionals, and in such a highly competitive environment it makes sense to manage GigaSPOT orders exclusively with bots.

GigaSPOT does not offer port forwarding for its orders. If you need to reach inner ports of the container, we recommend implementing [FRP](https://github.com/fatedier/frp) inside your workload
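For example, a reverse tunnel exposing the container's SSH port through an frp server could be set up roughly like this. All values here are placeholders: `frps.example.com` must be an frp server (`frps`) that you run yourself, since GigaSPOT does not provide one; the TOML layout follows the frp project's documented `frpc` format.

```shell
# Write a minimal frpc config (placeholder server address and ports).
cat > /tmp/frpc.toml <<'EOF'
serverAddr = "frps.example.com"
serverPort = 7000

[[proxies]]
name = "ssh"
type = "tcp"
localIP = "127.0.0.1"
localPort = 22
remotePort = 6022
EOF

# Then run the frp client inside the container:
# ./frpc -c /tmp/frpc.toml
```

With this in place, `ssh -p 6022 root@frps.example.com` on your side would reach the container's inner SSH daemon, without any port forwarding from GigaSPOT itself.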

Order lifetime is determined by how long the hosting provider has allowed the machine to be rented, and is capped at 20 days. This data is returned in the market snapshot described [here](https://gigaspot-api-docs.clore.ai/get-market-12836589e0)

***

## Deploying From CLORE Container Registry (CCR)

<figure><img src="https://2864042869-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FbyNlC5NV8aIpXnc76G7w%2Fuploads%2FCcpJccGgWg0QKj5c648K%2Fimage.png?alt=media&#x26;token=27232ab8-216f-45cf-b7f4-a97d6dedb5e2" alt=""><figcaption></figcaption></figure>

You first need to cache an image from [dockerhub](https://hub.docker.com/) onto CCR. There is currently a 600MB limit per image, which makes CCR feasible for PoW-type workloads; the limit keeps GigaSPOT fair for everyone, by not slowing it down with caching of large images and by allowing the vast majority of machines to connect to GigaSPOT. If you have a workload that could benefit from GigaSPOT but cannot be fitted into 600MB, and you expect to spend over $20,000/month on GigaSPOT, please message <marketing@clore.ai>

⚠️ The 600MB limit applies to the uncompressed image, so after you build your image you can see the uncompressed size in `docker image ls`
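A quick way to check this before caching is to compare the image's byte size against the limit; the image tag below is a placeholder for your own build:

```shell
# 600 MB CCR limit for the *uncompressed* image, expressed in bytes.
CCR_LIMIT=$((600 * 1000 * 1000))

# Returns 0 (success) when a byte count fits under the CCR limit.
fits_ccr_limit() {
  [ "$1" -le "$CCR_LIMIT" ]
}

# Query the uncompressed size of a locally built image (requires docker;
# "myuser/pow-workload:latest" is a placeholder tag):
# SIZE=$(docker image inspect --format '{{.Size}}' myuser/pow-workload:latest)
# fits_ccr_limit "$SIZE" && echo "small enough for CCR" || echo "too big for CCR"
```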

Images on CCR have a TTL (Time To Live) with a default of 30 days; the counter resets whenever a new GigaSPOT order is deployed with the CCR image. This automatically cleans no-longer-needed images off CCR.

***

## Deploying from Base Images

Some images created by CLORE.AI are already cached on our machines, so they can be used by clients on the GigaSPOT marketplace

⚠️ Base images are not guaranteed to stay the same forever; they will be automatically updated to newer versions of their upstream base image in the future. The clore.ai team will try to keep base image changes from breaking workloads, but your workload might still lose support, for example when a base image is upgraded to a newer Ubuntu release years down the line. Base image updates will be announced on clore.ai socials before they happen, so follow those channels to stay informed

### 1. Ubuntu 24.04

This image has CCR ID `a3f9c4d7e5b088d8a0bff880`

Currently using base image [cloreai/jupyter:ubuntu24.04-v2](https://hub.docker.com/repository/docker/cloreai/jupyter/tags/ubuntu24.04-v2/sha256-0586bbd2c26a8bcfd194d9d022ce4966ede23b3a743471032069c1f2ed2abc27) with source at <https://gitlab.com/cloreai-public/containers/jupyter>

With this image deployed, you will have 650MB of free space inside the container by default to set up your workload

This image lets you deploy your workload by specifying a bash script that is downloaded on the first start of the container

You can take inspiration from this example for mining CLORE Blockchain using [t-rex](https://github.com/trexminer/T-Rex) on the [vipor.net](https://vipor.net/connect/clore) mining pool

<https://gitlab.com/cloreai-public/gigaspot-examples/ubuntu-base-mining/-/blob/main/example-clore-blockchain.sh>

The order creation API call with this example bids at 13 CLORE/day, with no OC enforced and a power limit of 350W

This image uses the ENV variable `DELEGATED_ENTRYPOINT` as the source URL for the script. The script is downloaded once and then run on every start of the container, so design your script so it can be killed at any time, even during initial deployment. GigaSPOT is a high-paced trading environment where your order might be outbid even while your script's initialization phase is still running, so robust code really helps.
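One way to structure such a script is to make setup idempotent and trap termination signals, so being killed mid-deployment leaves nothing broken. This is only a sketch: the state-file path and the commented-out miner launch are placeholders, not part of the GigaSPOT API.

```shell
#!/bin/bash
# Skeleton of a robust delegated-entrypoint script (sketch with
# placeholder paths; adapt the setup and launch steps to your workload).
set -euo pipefail

STATE=/tmp/.gigaspot_setup_done

cleanup() {
  # The order can be outbid at any moment: stop fast, leave state consistent.
  kill "${MINER_PID:-}" 2>/dev/null || true
  exit 0
}
trap cleanup SIGTERM SIGINT

# Idempotent setup: safe to re-run if the container restarts mid-install.
if [ ! -f "$STATE" ]; then
  # ... download binaries, write configs ...
  touch "$STATE"
fi

# Launch the workload in the background so signals are handled promptly:
# ./t-rex ... -w "${WORKER_NAME:-worker}" &
# MINER_PID=$!
# wait "$MINER_PID"
```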

Example API call to deploy:

```bash
curl -X POST \
  -H 'auth: NXj2bHUXHwzvd5-Lm6UfvgGtnNwaHxLu' \
  -H 'Content-Type: application/json' \
  -d '[
    {
      "currency": "CLORE-Blockchain",
      "image": "a3f9c4d7e5b088d8a0bff880",
      "renting_server": 40329,
      "price": 13,
      "oc": [
        {
          "pl": 350
        }
      ],
      "env": {
        "DELEGATED_ENTRYPOINT": "https://gitlab.com/cloreai-public/gigaspot-examples/ubuntu-base-mining/-/raw/main/example-clore-blockchain.sh",
        "WORKER_NAME": "clore-gigaspot-40329"
      }
    }
  ]' \
  'https://api.clore.ai/v1/create_gigaspot_orders'
```

In the example you can see the ENV variable `WORKER_NAME`, which configures the worker name for the miner, because it is passed to the miner [here](https://gitlab.com/cloreai-public/gigaspot-examples/ubuntu-base-mining/-/blob/main/example-clore-blockchain.sh?ref_type=heads#L78)
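Variables from the order's `env` object arrive in the delegated script as ordinary container environment variables; a defensive way to consume one is to supply a fallback (a sketch, not the exact line from the example script):

```shell
# Use WORKER_NAME from the order's env, falling back to the container
# hostname if the order did not set it.
WORKER="${WORKER_NAME:-$(hostname)}"
echo "mining as worker: $WORKER"
```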

In practice, this example did not work when deployed on machine #40329, due to network restrictions imposed by certain ISPs

### 2. HiveOS

This image has CCR ID `c9a4e2f6b7d488d8f0bab0ff`

Currently using base image [cloreai/hiveos:0.3](https://hub.docker.com/repository/docker/cloreai/hiveos/tags/0.3/sha256-8bb62bb715bbbb9fe46fa6f529815afaed11fa60d513c7a33e8bc14d4dc87f17) with source at <https://gitlab.com/cloreai-public/containers/hiveos>

With this image deployed, you will have 650MB of free space inside the container by default to set up your workload

This image is used to deploy HiveOS on Clore GigaSPOT. Such a deployment is possible but not really recommended for large-scale operations; it is great for debugging thanks to [Hive Shell](https://hiveon.com/knowledge-base/guides/hshell/), and it can also be useful for beginners setting up workloads on GigaSPOT because of its UI.

To deploy HiveOS, you need to create a HiveOS account and use a unique `Rig ID` and `Password` for each rented machine; these are the fields HiveOS generates for connecting machines

These fields are passed in via ENV; see this example:

```bash
curl -X POST \
  -H 'auth: NXj2bHUXHwzvd5-Lm6UfvgGtnNwaHxLu' \
  -H 'Content-Type: application/json' \
  -d '[
    {
      "currency": "CLORE-Blockchain",
      "image": "c9a4e2f6b7d488d8f0bab0ff",
      "renting_server": 40329,
      "price": 13,
      "oc": [
        {
          "pl": 350
        }
      ],
      "env": {
        "rig_id": "10452701",
        "rig_pass": "UTA2xoxo"
      }
    }
  ]' \
  'https://api.clore.ai/v1/create_gigaspot_orders'
```

Also note that when running HiveOS, connections to certain pool endpoints might be restricted on some machines, depending on the ISP

## Order Eviction

There can only be 8 orders (bids) per GigaSPOT market (machine). If more orders are present on a machine, the order with the lowest profitability is canceled at the CLORE.AI billing interval

## Final word

While GigaSPOT is a powerful tool, it is best suited for Linux users and people with a deep understanding of what can happen in such environments, who can imagine the potential risks.

In our opinion it is best practice, bordering on mandatory, to verify the outputs of the machines: your system should ideally verify the processing speed / hashrate of each machine and keep a list of badly performing machines and hosts, to prevent financial losses.
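A minimal sketch of such a check (all names, thresholds, and the blacklist file path here are hypothetical): compare a machine's measured hashrate against an expected floor and record underperformers locally.

```shell
# Hypothetical verification helper: machines whose measured hashrate
# falls below an expected floor get appended to a local blacklist file.
BLACKLIST=/tmp/gigaspot_blacklist.txt
: > "$BLACKLIST"   # start with an empty blacklist

check_machine() {
  # $1 = machine id, $2 = measured hashrate, $3 = expected minimum
  if [ "$2" -lt "$3" ]; then
    echo "$1" >> "$BLACKLIST"
    return 1
  fi
}

# Example: machine 40329 reports 41 MH/s but we expect at least 45 MH/s.
check_machine 40329 41000000 45000000 || echo "blacklisted machine 40329"
```

Your renting bot can then consult the blacklist before bidding on a machine again.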

GigaSPOT is offered as-is; no refunds will be issued in any case, including GPUs being misreported. It is the client's responsibility to validate machine performance and maintain a blacklist.

This article covers only creating GigaSPOT orders; to edit them or modify overclocking settings, see the [GigaSPOT API Documentation](https://gigaspot-api-docs.clore.ai/)

Note that some machines may have ISP-level network restrictions that can affect connectivity to certain endpoints
