Documentation

Everything you need to buy, trade, and deploy tokenized GPU compute, from your first token to a production cluster.

Example: Launch an Inference Job

import { LubbockClient } from '@lubbock/sdk';

const client = new LubbockClient({
  apiKey: process.env.LUBBOCK_API_KEY,
});

// Redeem 4 LUB-MI300X tokens for a 4-hour inference session on an MI300X GPU
const job = await client.compute.create({
  gpu: 'MI300X',
  durationHours: 4,
  jobType: 'inference',
  config: {
    model: 'meta-llama/Llama-3-70B',
    quantization: 'fp16',
    maxBatchSize: 32,
  },
});

console.log(job.id);       // "job-abc123"
console.log(job.endpoint); // "https://compute.lubbock.cloud/job-abc123"
console.log(job.status);   // "provisioning"
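As shown above, a freshly created job reports "provisioning", so you typically wait until it is ready before sending requests to its endpoint. The SDK may well ship its own wait helper; as a sketch under that assumption, here is a generic polling loop. The StatusFetcher shape, the simulated fetcher, and the "running" status string are assumptions standing in for something like client.compute.get(job.id):

```typescript
// Hypothetical polling helper. The real Lubbock status API is an assumption;
// we model it as any async function that returns the current status string.
type StatusFetcher = () => Promise<string>;

async function waitForStatus(
  fetchStatus: StatusFetcher,
  target: string,
  { intervalMs = 1000, maxAttempts = 30 } = {},
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status === target) return status;
    // Wait before polling again to avoid hammering the API.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Gave up waiting for status "${target}"`);
}

// Simulated fetcher standing in for the real SDK call: reports
// "provisioning" twice, then "running" (status names are assumptions).
let calls = 0;
const simulatedFetcher: StatusFetcher = async () =>
  ++calls < 3 ? 'provisioning' : 'running';

waitForStatus(simulatedFetcher, 'running', { intervalMs: 10 }).then(
  (status) => console.log(status), // "running"
);
```

In production you would pass a closure over the real client (for example, one that re-fetches the job and returns its status field) and tune intervalMs and maxAttempts to match expected provisioning times.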