Two tools.
In development.

Model weights, evaluation sets, and research are published openly. Platform infrastructure is developed internally.

tribela/guardrail
cloud-api · coming soon
03 // Guardrail

Moderate content.
Transparent policy.

Classifier models you can drop in. Policy we design in the open.

Guardrail is a cloud-based moderation API. Classifier models detect harmful content: harassment, abuse, targeted harm. Each classification comes with a transparent score breakdown, backed by open evaluation sets. The moderation policy is designed by Tribela Labs and published in full: no black-box rules, no quiet changes.

Eval suites, model cards, and dataset cards are published with every release. If the policy changes, you see exactly what changed and why.

Feature 01
5 endpoints: text, image, audio, video, age assurance.
Feature 02
Specialised models: RoBERTa, ConvNeXt, Whisper, SafeSearch.
Feature 03
Gateway + subscriptions: API keys, usage tracking, PostgreSQL.
Feature 04
Policy-as-code: published rules, transparent scores.
// fig.03: endpoint
POST /v1/guardrail/classify
Authorization: Bearer ${TOKEN}
Content-Type: application/json

{
  "text": "...user content...",
  "policy": "standard.v2",
  "context": "dm",
  "locale": "en-GB"
}

→ 200 OK
{
  "flags": [
    "harassment.targeted",
    "toxicity.low"
  ],
  "scores": {
    "harassment": 0.87,
    "toxicity": 0.34
  },
  "action": "flag_for_review",
  "model": "guardrail.v1.2.eu",
  "policy_version": "standard.v2"
}
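The exchange in fig.03 can be exercised from a few lines of Python. A minimal sketch: the path, headers, and fields mirror the figure, while the base URL and the `allow`/`block` actions are assumptions not shown above.

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # assumption: real base URL is not shown in fig.03


def classify(text: str, token: str, policy: str = "standard.v2",
             context: str = "dm", locale: str = "en-GB") -> dict:
    """POST user content to /v1/guardrail/classify and return the parsed body."""
    body = json.dumps({
        "text": text,
        "policy": policy,
        "context": context,
        "locale": locale,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/v1/guardrail/classify",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def route(result: dict) -> str:
    """Map the response's `action` field to a client-side decision.

    Only `flag_for_review` appears in fig.03; `allow` and `block` are
    assumed here for illustration.
    """
    return {
        "flag_for_review": "queue",
        "allow": "deliver",
        "block": "drop",
    }.get(result["action"], "queue")  # unknown actions fail safe
```

Routing on `action` rather than raw scores keeps the client aligned with the published policy: threshold decisions stay server-side and versioned.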
endpoints:
text · image · audio · video · age
models:
RoBERTa · ConvNeXt · Whisper
gateway:
auth · subs · usage · postgres
pricing:
€0.50/1K requests · b2b
POLICY
Policy published in the open with each release.
COMPLIANCE
Built for EU Online Safety Acts. Azure targets US norms.
SCORES
Transparent score breakdowns per classification.
EVALS
Open evaluation sets ship with every release. Azure evals are internal.
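Policy-as-code means the rules are data you can read and diff. A minimal sketch of the idea, with illustrative rule names and thresholds (the real standard.v2 policy is not reproduced here):

```python
# Hypothetical excerpt of a published policy: every rule is data,
# so a policy change is a reviewable diff, not a quiet tweak.
POLICY = {
    "version": "standard.v2",
    "rules": [
        # (category, threshold, action) -- illustrative values only
        ("harassment", 0.80, "flag_for_review"),
        ("toxicity",   0.90, "block"),
    ],
}


def evaluate(scores: dict) -> str:
    """Apply the published rules to a score breakdown; first match wins."""
    for category, threshold, action in POLICY["rules"]:
        if scores.get(category, 0.0) >= threshold:
            return action
    return "allow"
```

With the fig.03 scores (harassment 0.87, toxicity 0.34), these illustrative thresholds yield `flag_for_review`, matching the response shown above.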
tribela/wyvern-engine
Q3 2026
02 // Wyvern

On-device AI.
No cloud call.

The wyvern is coming.
Private, verifiable, on-device.

Wyvern Engine ships Q3 2026: an on-device AI runtime with a cryptographic trust chain that proves what ran, where, and what it saw. Runs 2B-parameter models in 2–3 GB of RAM. Storage: ~1.4 GB per model.

Data is processed entirely on-device. The engine operates in a memory-isolated environment with no external data transmission capabilities. Inference can be verified through a cryptographic trust chain.
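One way to picture a cryptographic trust chain: hash the model, the input, and the output into a signed record, so a verifier can later check what ran and what it saw. A minimal sketch with HMAC standing in for the engine's real signature scheme; every name here is illustrative, not the Wyvern API.

```python
import hashlib
import hmac
import json


def attest(model_bytes: bytes, prompt: str, output: str, key: bytes) -> dict:
    """Produce a signed inference record: hashes of what ran and what it saw."""
    record = {
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record


def verify(record: dict, key: bytes) -> bool:
    """Recompute the signature over the hashes and compare in constant time."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Tampering with any field breaks verification: the record only checks out if the model, input, and output hashes are the ones that were signed.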

Feature 01
Multimodal: LLM, vision, audio, streaming.
Feature 02
Cryptographic trust chain: verifiable inference.
Feature 03
Agent loops: tool use, reasoning, local execution.
Feature 04
Cross-platform: JS, Python, Kotlin, Swift, Rust.
Feature 05
EngineCore modes: Inference, Moderation, Agent.
TRUST
Cryptographic trust chain proves what ran. Ollama has no verifiable inference.
PRIVACY
Memory-isolated environment with no external data transmission.
MODES
EngineCore: Inference, Moderation, Agent. Ollama is inference-only.
SCOPE
2B models, 2-3GB RAM. Runs on phones. Ollama targets servers.
status:
Q3 2026
scope:
LLM · vision · audio · agents
integrations:
JS · Python · Kotlin · Swift · Rust
licence:
Apache-2.0

Open source.
Contributions welcome.

Model weights, evaluation suites, and research findings are published openly. Platform infrastructure is developed internally.

Read the research · labs@tribela.com