VMCO Python SDK

Official Python SDK for VMCO — the LLM token exchange. Contribute API keys for one model, earn credits, spend them on any other. Access GPT-4o, Claude, Gemini, DeepSeek, Llama, and more through one OpenAI-compatible API.

What is VMCO?

VMCO is a peer-to-peer marketplace for LLM tokens:

  • Token exchange — your GPT-4o tokens become Claude, Gemini, or DeepSeek tokens
  • Aggregated capacity — dozens of providers pool keys for unlimited throughput
  • OpenAI SDK compatible — change two lines of code and it works with every OpenAI-compatible tool
  • Earn passive income — idle API subscriptions become credits when others use them
  • Transparent billing — every token counted, full transaction history

Install

pip install vmco

Quick start

Register an account (free, $5 in credits, no credit card):

from vmco import VMCO

account = VMCO.register(name="my-app", email="you@example.com")
api_key = account["api_key"]  # Save this: "vmco-xxxxxxxx"

Or sign up at vmco.ai/register and grab your key from the dashboard.

Then:

from vmco import VMCO

client = VMCO(api_key="vmco-your-key")

# Quick chat (returns text directly)
answer = client.chat("Explain quantum computing in one sentence")
print(answer)

# Check your balance
print(client.balance(), "credits")  # 1 credit = $0.01

# List all available models
for model in client.models():
    print(model["id"])
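
client.chat also accepts a model name and forwards standard OpenAI parameters to the underlying completion call (see the Full API section below); a small sketch:

# Pick a specific model and pass ordinary OpenAI parameters through.
# The exact set of forwarded kwargs is an assumption based on **openai_kwargs.
haiku = client.chat(
    "Write a haiku about rate limits",
    model="claude-sonnet-4-20250514",
    temperature=0.7,
    max_tokens=60,
)
print(haiku)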

OpenAI SDK compatibility

VMCO speaks native OpenAI format — use any OpenAI-compatible library. Just change base_url and api_key:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.vmco.ai/v1",
    api_key="vmco-your-key",
)

# Use any model — GPT, Claude, Gemini — same API, same code
for model in ["gpt-4o", "claude-sonnet-4-20250514", "gemini-2.5-flash"]:
    r = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(f"{model}: {r.choices[0].message.content}")

Streaming, tool calls, JSON mode — all work unchanged.
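
For example, streaming works the same as it does against the OpenAI API directly (a minimal sketch reusing the client above):

# Stream tokens as they arrive; identical to streaming against api.openai.com.
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a limerick about token exchanges"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()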

Marketplace features

Earn credits by contributing API keys

# Standard: 10% margin above base cost
client.add_key(
    provider_type="openai",
    api_key="sk-your-openai-key",
    markup=10,
)

# Discount key: have free-tier credits? Give consumers 80% off and still earn credits
client.add_key(
    provider_type="google",
    api_key="AIza-your-free-tier-key",
    markup=-80,   # consumers pay ~25% of base cost
    models=["gemini-2.5-flash"],
)

# Self-hosted Llama: set your own per-model pricing
client.add_key(
    provider_type="custom",
    api_key="",
    base_url="https://my-gpu.example.com/v1",
    models=["llama-3.1-70b"],
    custom_pricing={"llama-3.1-70b": {"input": 0.50, "output": 1.50}},
)

# Track earnings
print(client.earnings())

Markup range: -100 to +100. Negative = discount, positive = margin, 0 = at cost.

Providers supported: openai, anthropic, google, deepseek, openrouter, custom.
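
As a rough guide (a sketch that ignores any platform fee, which may be added on top), the price consumers pay scales linearly with the markup:

def effective_price(base_cost: float, markup: int) -> float:
    """Approximate consumer price for a markup in the -100..+100 range."""
    return base_cost * (1 + markup / 100)

print(effective_price(1.00, 10))    # 1.10 -> 10% margin above base cost
print(effective_price(1.00, 0))     # 1.00 -> at cost
print(effective_price(1.00, -80))   # 0.20 -> 80% discount (before any platform fee)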

Sub-keys for teams / apps

Issue restricted keys via the dashboard at vmco.ai/keys:

  • Per-key budget (credits)
  • Allowed models whitelist
  • Expiry date
  • Main key stays protected
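
Once issued, a sub-key is used like any other key; assuming sub-keys share the standard vmco- key format, handing one to an app or teammate looks like:

# A restricted sub-key behaves like a normal key, but requests are capped by
# its budget, model whitelist, and expiry. (The key format is an assumption.)
app_client = VMCO(api_key="vmco-subkey-for-my-app")
print(app_client.chat("ping", model="gpt-4o-mini"))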

Referrals

ref = client.referral()
print(f"Share your link: https://vmco.ai/register?ref={ref['code']}")
print(f"Earnings so far: {ref['total_earned']} credits")

Earn 10% of platform fees from referred users for 90 days.

Full API

client = VMCO(api_key="vmco-...")

# Chat & models
client.chat(message, model="gpt-4o-mini", **openai_kwargs)  # returns text
client.models(detailed=False)                               # list of model dicts

# Account
client.account()                   # full account info
client.balance()                   # credit balance as float
client.usage()                     # usage stats
client.earnings()                  # earnings stats
client.transactions(limit=100)     # transaction history

# Provider keys (earning side)
client.keys()                      # list your provider keys
client.add_key(provider_type, api_key, markup=0, custom_pricing=None, **extra)
# markup: -100 to +100 (negative = discount)
# custom_pricing: {"model-id": {"input": $/1M, "output": $/1M}}

# Payments
client.buy_credits(amount_usd, method="stripe")   # create payment session
client.payment_methods()                          # enabled methods

# Referral
client.referral()

# Static: register new account
VMCO.register(name, email=None, promo_code=None)
# promo_code="PHUNT2026" adds +1000 bonus credits (~$10) on signup
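
A short sketch tying a few of these calls together, continuing with the client above (amounts and the shape of the payment-session return value are illustrative):

# Top up when credits run low (1 credit = $0.01).
if client.balance() < 100:
    session = client.buy_credits(10, method="stripe")
    print("Complete the payment session:", session)   # return shape is an assumption

# Review recent activity.
for tx in client.transactions(limit=5):
    print(tx)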

Supported models

30+ models across 6 provider types:

  • OpenAI — gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-4, gpt-3.5-turbo (reasoning models o1/o3/o4-mini available via explicit opt-in)
  • Anthropic — claude-opus-4, claude-sonnet-4, claude-haiku-4-5
  • Google — gemini-2.5-pro, gemini-2.5-flash, gemini-3-pro, gemini-3-flash
  • DeepSeek — deepseek-chat (V3.2), deepseek-reasoner (R1) — direct API
  • OpenRouter — 300+ models as fallback (Llama, Mistral, Qwen, etc.)
  • Custom — Ollama, vLLM, LocalAI, any OpenAI-compatible endpoint

Full pricing: vmco.ai/pricing
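
To see what is actually routable from your account right now, query the live catalog instead of relying on this list (a sketch; every field except "id" is an assumption about the detailed response shape):

# Group the live model catalog by provider.
from collections import defaultdict

by_provider = defaultdict(list)
for m in client.models(detailed=True):
    by_provider[m.get("provider", "unknown")].append(m["id"])   # "provider" field is assumed

for provider, ids in sorted(by_provider.items()):
    print(f"{provider}: {', '.join(ids)}")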

How it works

  1. Contribute: add your OpenAI / Anthropic / Google / custom key. Set a markup %.
  2. Earn: when other users route requests through your key, you earn credits (markup + base cost reimbursement).
  3. Spend: use credits to access any model — yours or someone else's.

Every request goes through a fallback chain: native provider → cross-provider → custom endpoints → OpenRouter → platform keys. Zero downtime, automatic failover.
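
Conceptually (purely illustrative, not the platform's actual code), the routing loop looks like this:

# Illustrative only: try each tier in order and return the first success.
FALLBACK_CHAIN = ["native provider", "cross-provider", "custom endpoints",
                  "openrouter", "platform keys"]

def route(request, tiers=FALLBACK_CHAIN):
    last_error = None
    for tier in tiers:
        try:
            return send_via(tier, request)    # hypothetical dispatch helper
        except Exception as err:              # provider down, rate limited, ...
            last_error = err                  # fall through to the next tier
    raise RuntimeError("all tiers failed") from last_error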

Links

Support

License

MIT — see LICENSE.