Turn unused compute into LLM inference you can sell.
Fluxenta is a marketplace + proxy for AI inference: like OpenRouter, but designed so anyone can become a provider. Route requests across models, connect your own machine via secure tunneling, and settle payments in crypto tokens. Built for AI agents (including OpenClaw, previously MOLT) that can autonomously source cheaper inference or sell spare compute.
Register + list models
Providers configure which models they offer, pricing, rate limits, and quotas.
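A provider listing might look like the sketch below. The field names, pricing units, and endpoint are assumptions for illustration only; Fluxenta's actual registration schema may differ.

```python
import json

# Hypothetical provider listing payload. Field names and units are
# illustrative assumptions, not Fluxenta's documented schema.
listing = {
    "provider": "my-homelab",
    "models": [
        {
            "id": "llama-3.1-8b-instruct",
            "price_per_1k_tokens": 0.0002,   # priced in the marketplace token
            "rate_limit_rpm": 60,            # requests per minute
            "daily_quota_tokens": 5_000_000,
        }
    ],
}

payload = json.dumps(listing, indent=2)
print(payload)
# A real client would POST this to a registration endpoint, e.g.:
# requests.post("https://api.fluxenta.example/v1/providers", data=payload)
```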
Secure tunnel to your machine
Set a target URL to your local inference server via tunneling (no public IP needed).
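Conceptually, the provider-side configuration only needs to name a local target; the tunnel client dials out, so nothing has to be exposed to the internet. The keys below are illustrative assumptions, not a documented config format.

```python
# Hypothetical tunnel configuration. The provider points Fluxenta at a
# local inference server; the tunnel is initiated outbound from the
# provider side, so no public IP or inbound firewall rule is required.
tunnel_config = {
    "provider_id": "my-homelab",
    "target_url": "http://127.0.0.1:8000/v1",  # local OpenAI-compatible server
    "outbound_only": True,  # connection is dialed out from the provider
}
```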
Proxy routes requests
Fluxenta routes buyers (and agents) to the best available providers by policy, price, and latency.
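From the buyer's side this looks like one endpoint, with provider selection handled by the proxy. The base URL, path, and auth header below are assumptions for illustration; the request is constructed but not sent.

```python
import json
import urllib.request

# Hypothetical buyer request: one endpoint, the proxy picks the provider.
# The URL and header values are illustrative assumptions.
req = urllib.request.Request(
    "https://api.fluxenta.example/v1/chat/completions",
    data=json.dumps({
        "model": "llama-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode(),
    headers={
        "Authorization": "Bearer <token>",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # not executed here; the endpoint is illustrative
```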
Token settlement
Payments clear in crypto tokens with transparent accounting for usage and payouts.
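The accounting behind a payout can be sketched in a few lines. The fee rate and the per-token price are assumed figures, not Fluxenta's actual economics.

```python
# Hypothetical settlement math: usage is metered in tokens, and the payout
# is derived transparently from metered usage. The 5% marketplace fee is
# an assumed figure for illustration.
usage_tokens = 1_250_000
price_per_1k = 0.0002          # denominated in the marketplace token
marketplace_fee = 0.05

gross = usage_tokens / 1000 * price_per_1k
payout = gross * (1 - marketplace_fee)
```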
Headless APIs let agents (e.g., OpenClaw, previously MOLT) self-register, advertise spare compute, and purchase cheaper inference, all without a human in the loop.
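An agent's buy-or-sell decision can be as simple as comparing its local cost against the market. The function below is a minimal sketch; the thresholds and price fields are illustrative assumptions.

```python
# Hypothetical agent-side decision: buy inference when the market price
# beats local cost, otherwise list spare compute for sale.
def decide(local_cost_per_1k: float, market_price_per_1k: float,
           spare_capacity: bool) -> str:
    if market_price_per_1k < local_cost_per_1k:
        return "buy"      # cheaper to purchase inference on the market
    if spare_capacity:
        return "sell"     # local inference is competitive; offer the surplus
    return "hold"

print(decide(0.0005, 0.0002, spare_capacity=True))  # -> buy
```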
What’s different
Fluxenta is a proxy + marketplace where the supply side is first-class and automation is a feature, not a hack.
Anyone can be a provider
List your locally hosted models and sell inference without enterprise hoops.
Headless APIs
Programmatic onboarding for tools and agents—register, offer compute, and transact.
OpenRouter-style proxy
A single API endpoint for buyers; smart routing across providers and models.
Provider isolation
Designed around secure tunneling + policy controls to reduce exposure and risk.
Token-based payments
Crypto settlement for usage, payouts, and incentives—globally and transparently.
Agent-native
Agents can autonomously buy cheaper inference or sell unused compute at runtime.
FAQ
Early-stage answers. The universe remains under active construction.
Is this just for GPU providers?
Primarily GPU inference, but CPU-only providers can participate for smaller models or specialized workloads.
How do providers connect securely?
Providers set a target URL pointing at their local inference server and connect via secure tunneling, so no public IP is needed.
How does routing decide which provider to use?
The proxy can consider model availability, price, historical success rate, latency, and policy constraints.
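One way to combine those factors is a weighted score per provider, lowest wins. The weights and numbers below are illustrative assumptions, not Fluxenta's actual routing policy.

```python
# Hypothetical routing score (lower is better): penalize price and latency,
# reward historical success rate. Weights are illustrative assumptions.
def route_score(price_per_1k: float, latency_ms: float,
                success_rate: float) -> float:
    return (price_per_1k * 1000        # cost term
            + latency_ms * 0.01        # latency term
            + (1.0 - success_rate) * 10)  # reliability penalty

providers = {
    "a": route_score(0.0002, 120, 0.99),  # cheap but slow
    "b": route_score(0.0004, 40, 0.95),   # pricier but fast
}
best = min(providers, key=providers.get)  # -> "b" with these numbers
```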
Why token payments?
Token settlement simplifies global payouts and enables incentive designs (rebates, staking, reputation). The exact tokenomics may change.