Scope

This document defines all models that cross the wire between Zoltraak Gateway and the GPU cloud provider APIs. All models are grounded in the actual API payloads confirmed from RunPod and Vast.ai documentation. These models live exclusively in the adapters layer and are never exposed to clients or used by services directly.

RunPod Integration Models

RunPod REST API base URL: https://rest.runpod.io/v1

Confirmed endpoints used:

- GET /pods/{podId} — query pod status
- POST /pods/{podId}/start — start a stopped pod
- POST /pods/{podId}/stop — stop a running pod

Because start and stop carry no request body, no request models are needed for those operations. Only a response model for the status query is required.

classDiagram
    class RunPodPodResponse {
        String id
        String desiredStatus
        String publicIp
        String ports
        String costPerHr
        String lastStatusChange
    }
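As a sketch only (field names and types taken from the diagram above; the JSON binding library used by the adapters is out of scope here), the RunPod status response could be modeled as a plain Java record:

```java
// Sketch of the RunPod pod-status response model as a plain Java record.
// Field names mirror the class diagram; JSON deserialization (e.g. via a
// mapper configured in the adapters layer) is assumed, not shown.
record RunPodPodResponse(
        String id,
        String desiredStatus,     // e.g. "RUNNING" or "EXITED"
        String publicIp,
        String ports,
        String costPerHr,
        String lastStatusChange) {

    // Convenience check an adapter might use when polling for readiness.
    boolean isRunning() {
        return "RUNNING".equals(desiredStatus);
    }
}
```

The `isRunning` helper is an illustrative addition, not part of the wire model itself.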

Vast.ai Integration Models

Vast.ai REST API base URL: https://console.vast.ai

Confirmed endpoints used:

- GET /api/v0/instances/{id}/ — query instance status
- PUT /api/v0/instances/{id}/ — start or stop the instance

Unlike RunPod, Vast.ai uses a single PUT endpoint for both start and stop, distinguished by the state field in the request body ("running" to start, "stopped" to stop).

classDiagram
    class VastAiManageRequest {
        String state
    }

    class VastAiInstanceResponse {
        Long id
        String actualStatus
        String publicIpaddr
        List~Integer~ ports
    }
    
    class VastAiInstanceWrapper {
        VastAiInstanceResponse instances
    }
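A corresponding Java sketch, with field names taken from the diagrams (the hand-rolled `toJson` and the start/stop factories are illustrative; a real adapter would delegate serialization to its JSON mapper):

```java
import java.util.List;

// Sketch of the Vast.ai wire models as plain Java records.
record VastAiManageRequest(String state) {
    // The same PUT endpoint handles both operations; only `state` differs.
    static VastAiManageRequest start() { return new VastAiManageRequest("running"); }
    static VastAiManageRequest stop()  { return new VastAiManageRequest("stopped"); }

    // Minimal manual JSON body, shown for illustration only.
    String toJson() {
        return "{\"state\":\"" + state + "\"}";
    }
}

record VastAiInstanceResponse(
        Long id,
        String actualStatus,
        String publicIpaddr,
        List<Integer> ports) {}

// Mirrors the response envelope: despite the plural key "instances",
// the status endpoint returns a single instance object.
record VastAiInstanceWrapper(VastAiInstanceResponse instances) {}
```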

Shared Provider Output

This model is produced by both adapters and consumed by any service that needs to reach Ollama or ComfyUI on the pod. It abstracts away the difference between RunPod's stable URL format and Vast.ai's dynamic IP resolution.
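As an illustration only (the record name and fields below are hypothetical, since this section does not spell them out), the shared output might look like:

```java
// Hypothetical sketch of the shared provider output. Both adapters would
// resolve their provider-specific status payloads into this one shape, so
// services reaching Ollama or ComfyUI never see RunPod or Vast.ai types.
record ProviderPodEndpoints(
        String ollamaUrl,    // e.g. a RunPod proxy URL or a Vast.ai ip:port pair
        String comfyUiUrl,
        boolean running) {}
```

The design point is that URL resolution (RunPod's stable proxy format vs. Vast.ai's dynamic public IP plus port list) happens inside each adapter, leaving consumers with ready-to-use URLs.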