Products · 2 coming soon · open-sourced
Model weights, evaluation sets, and research are published openly. Platform infrastructure is developed internally.
Guardrail is a cloud-based moderation API. Classifier models detect harmful content (harassment, abuse, targeted harm) and return transparent score breakdowns, backed by open evaluation sets. The moderation policy is designed by Tribela Labs and published in full: no black-box rules, no quiet changes.
Eval suites, model cards, and dataset cards are published with every release. Every classification includes a transparent score breakdown. When the policy changes, you see exactly what changed and why.
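A transparent score breakdown means the API returns an explicit score for every category rather than a single opaque verdict, so callers can apply their own thresholds. The sketch below illustrates that idea; the category names, score values, and response shape are hypothetical, not Guardrail's actual schema.

```python
# Hypothetical per-category score breakdown, as it might appear in a
# Guardrail response body. Field names and values are illustrative only.

def flagged_categories(scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Return the categories whose score meets or exceeds the threshold."""
    return sorted(c for c, s in scores.items() if s >= threshold)

example = {"harassment": 0.91, "abuse": 0.12, "targeted_harm": 0.64}

print(flagged_categories(example))       # default 0.5 threshold
print(flagged_categories(example, 0.9))  # stricter threshold flags less
```

Because the full breakdown is exposed, a stricter or looser threshold is a one-line change on the caller's side rather than a support ticket.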
WHY GUARDRAIL > AZURE CONTENT MODERATOR
Wyvern Engine ships Q3 2026. An on-device AI runtime with a cryptographic trust chain that proves what ran, where, and what it saw. It runs 2B-parameter models in 2-3 GB of RAM, with roughly 1.4 GB of storage per model.
Data is processed entirely on-device. The engine runs in a memory-isolated environment with no external data transmission, and every inference can be verified through the cryptographic trust chain.
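One common way to make an inference log verifiable is a hash chain: each record commits to the previous record's digest, so tampering with any entry invalidates everything after it. The sketch below assumes the trust chain works roughly this way; the record fields, chaining scheme, and model name are illustrative, not Wyvern Engine's actual format.

```python
import hashlib
import json

# Minimal hash-chained inference log (illustrative, not Wyvern's format).
# Each record stores the previous record's digest, linking the log together.

def append_record(chain: list[dict], model_id: str, input_digest: str) -> None:
    """Append a record whose digest commits to the previous record."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    body = {"model": model_id, "input": input_digest, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "digest": digest})

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every digest and link; any edit breaks verification."""
    prev = "0" * 64
    for rec in chain:
        body = {"model": rec["model"], "input": rec["input"], "prev": rec["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["digest"] != expected:
            return False
        prev = rec["digest"]
    return True

log: list[dict] = []
append_record(log, "wyvern-2b", hashlib.sha256(b"prompt").hexdigest())
append_record(log, "wyvern-2b", hashlib.sha256(b"follow-up").hexdigest())
print(verify_chain(log))   # intact chain verifies

log[0]["input"] = "tampered"
print(verify_chain(log))   # tampering breaks every later link
```

The useful property is that verification needs only the log itself: an auditor can replay the digests without trusting the device that produced them.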
WHY WYVERN > OLLAMA