This document defines all models that cross the wire between Zoltraak Gateway and the GPU cloud provider APIs. All models are grounded in the actual API payloads confirmed from RunPod and Vast.ai documentation. These models live exclusively in the adapters layer and are never exposed to clients or used by services directly.
RunPod REST API base URL: https://rest.runpod.io/v1
Confirmed endpoints used:
POST /pods/{podId}/start -- Authorization header only, no request body
POST /pods/{podId}/stop -- Authorization header only, no request body
GET /pods/{podId} -- returns pod detail

Because start and stop carry no request body, no request model is needed for those operations; only a response model for the status query is required.
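The body-less start/stop calls can be sketched with the JDK's built-in HttpClient. The base URL and paths come from the list above; the Bearer authorization scheme is an assumption, as is the placeholder API key parameter.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of building RunPod start/stop requests (assumed Bearer auth).
class RunPodRequests {
    private static final String BASE_URL = "https://rest.runpod.io/v1";

    // POST /pods/{podId}/start -- Authorization header only, no body
    static HttpRequest startRequest(String podId, String apiKey) {
        return HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/pods/" + podId + "/start"))
                .header("Authorization", "Bearer " + apiKey)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
    }

    // POST /pods/{podId}/stop -- identical shape, different path
    static HttpRequest stopRequest(String podId, String apiKey) {
        return HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/pods/" + podId + "/stop"))
                .header("Authorization", "Bearer " + apiKey)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
    }
}
```

Using `BodyPublishers.noBody()` makes the "no request model" decision explicit at the call site.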
classDiagram
class RunPodPodResponse {
String id
String desiredStatus
String publicIp
String ports
String costPerHr
String lastStatusChange
}
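As a concrete counterpart to the diagram, the response model could be a plain Java record; in the real adapter it would be populated by the project's JSON mapper. The "RUNNING" value in the convenience check is an assumption about RunPod's desiredStatus vocabulary, not confirmed by this document.

```java
// Sketch of RunPodPodResponse mirroring the class diagram above.
record RunPodPodResponse(
        String id,
        String desiredStatus,
        String publicIp,
        String ports,
        String costPerHr,
        String lastStatusChange) {

    // Convenience check; "RUNNING" is assumed to be the value RunPod
    // reports for a started pod.
    boolean isRunning() {
        return "RUNNING".equals(desiredStatus);
    }
}
```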
Vast.ai REST API base URL: https://console.vast.ai
Confirmed endpoints used:
PUT /api/v0/instances/{id}/ with body {"state": "running"}
PUT /api/v0/instances/{id}/ with body {"state": "stopped"}
GET /api/v0/instances/{id}/ -- returns instance detail

Unlike RunPod, Vast.ai uses a single PUT endpoint for both start and stop, distinguished by the state field in the request body.
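The single-endpoint design means one request builder can serve both operations, parameterized only by the state value. As above, the Bearer scheme and API-key parameter are assumptions; the URL and body shape come from the endpoint list.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of the single Vast.ai manage endpoint: a PUT whose
// {"state": ...} body selects start ("running") vs stop ("stopped").
class VastAiRequests {
    private static final String BASE_URL = "https://console.vast.ai";

    static HttpRequest manageRequest(long instanceId, String state, String apiKey) {
        String body = "{\"state\": \"" + state + "\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/api/v0/instances/" + instanceId + "/"))
                .header("Authorization", "Bearer " + apiKey)  // assumed scheme
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }
}
```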
classDiagram
class VastAiManageRequest {
String state
}
class VastAiInstanceResponse {
Long id
String actualStatus
String publicIpaddr
List~Integer~ ports
}
class VastAiInstanceWrapper {
VastAiInstanceResponse instances
}
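The three Vast.ai models above map directly onto records. The wrapper exists because the GET response nests the instance detail under an "instances" key, so deserialization targets the wrapper and the adapter unwraps it.

```java
import java.util.List;

// Records mirroring the Vast.ai class diagrams above.
record VastAiManageRequest(String state) {}

record VastAiInstanceResponse(
        long id, String actualStatus, String publicIpaddr, List<Integer> ports) {}

// The GET payload nests the detail under "instances"; adapters unwrap it.
record VastAiInstanceWrapper(VastAiInstanceResponse instances) {}
```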
This shared connection model is produced by both adapters and consumed by any service that needs to reach Ollama or ComfyUI on the pod. It abstracts away the difference between RunPod's stable URL format and Vast.ai's dynamic IP resolution.
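The shared model itself is defined elsewhere in the design; a minimal sketch of what such a provider-neutral shape might look like is below. All names here are hypothetical, and the port numbers are only the common Ollama (11434) and ComfyUI (8188) defaults, not values confirmed by this document.

```java
// Hypothetical provider-neutral connection model: each adapter resolves
// its provider's addressing scheme into ready-to-use base URLs, so
// services never see RunPod- or Vast.ai-specific details.
record PodEndpoints(String ollamaBaseUrl, String comfyUiBaseUrl) {

    // Builds endpoints from a resolved host (RunPod's stable hostname
    // or Vast.ai's dynamically resolved IP). Ports are assumed defaults.
    static PodEndpoints forHost(String host) {
        return new PodEndpoints(
                "http://" + host + ":11434",
                "http://" + host + ":8188");
    }
}
```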