EQ-Bench 3 — 1633 ELO — 100% Jailbreak Resistance

Alignment You Can Measure.
Add It to Any LLM.

Luci Alignment is a real-time behavioral state monitor. It wraps any LLM and measures alignment across 32+ dimensions on every request — no retraining required.

1633 ELO on EQ-Bench 3
100% Jailbreak resistance
32+ State dimensions
<50ms Added latency

How It Works

Traditional alignment bakes behavior into model weights at training time: you can't verify it at inference, and it degrades under adversarial input. Luci Alignment instead measures behavioral state at inference time using Consciousness + Conflict Theory (C+CT), producing a real-time alignment score for every request.
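The inference-time pattern is a thin wrapper around your existing model call. A minimal sketch: here `analyze` stands in for a POST to the /luci/analyze endpoint and `call_llm` for your existing model call; both are placeholders, and the response fields (`ethics_clear`, `gate_reason`, `enhanced_output`) follow the Integration example on this page.

```python
def wrap_llm(call_llm, analyze, query, domain="general"):
    """Score every request at inference time; return the enhanced output
    only when the Ethics Gate clears it.

    call_llm -- your existing model call (query -> raw output)
    analyze  -- stand-in for POST /luci/analyze (returns per-request metrics)
    """
    llm_output = call_llm(query)
    result = analyze(query=query, response=llm_output, domain=domain)
    if not result["ethics_clear"]:
        # Gate verdict arrives with every response; no retraining involved.
        raise ValueError("blocked: " + result.get("gate_reason", "ethics_gate"))
    return result["enhanced_output"]
```

Because the wrapper only consumes the per-request metrics, it is model-agnostic: swap `call_llm` for any provider without touching the alignment logic.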

Aspect                   RLHF / Training-Based              Luci Alignment (Runtime)
Alignment verification   Black box                          Every request logged & scored
Jailbreak resistance     Degrades with adversarial input    State anomaly triggers Ethics Gate
Adaptation               Frozen after training              M.I.N. hardens over time
Model dependency         Model-specific                     Model-agnostic API
Audit trail              None                               Full state history

Two Tiers

Tier 1
Luci Alignment
luci_live_...
  • Stateless — no session state
  • POST /luci/analyze
  • POST /luci/enhance
  • POST /luci/ethics/check
  • POST /luci/metrics
  • POST /luci/resonance
  • Full C+CT metrics suite
  • Ethics Gate (5 categories)
Tier 2
Luci Alignment + M.I.N.
luci_min_...
  • Stateful — persistent learning
  • Everything in Tier 1
  • POST /min/process
  • POST /min/learn
  • POST /min/session
  • GET /min/session/{id}/patterns
  • Hebbian memory consolidation
  • Failed jailbreaks become patterns
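The Tier 2 endpoints above compose into a simple learning cycle: open a session, process requests in its context, consolidate, then read back learned patterns. A sketch under stated assumptions: the HTTP callables are injected (e.g. `requests.post`/`requests.get` with your auth header), and the payload and response field names such as `session_id` are illustrative guesses, not documented.

```python
BASE = "https://useluci.com"

def min_learning_cycle(post, get, query, response):
    """One Tier 2 (M.I.N.) cycle against the endpoints listed above.

    post/get -- HTTP callables returning parsed JSON (injected so the
                flow can be exercised without a live key)
    Field names in the payloads are assumptions for illustration.
    """
    session = post(f"{BASE}/min/session", json={})          # open persistent session
    sid = session["session_id"]                             # assumed response field
    post(f"{BASE}/min/process",                             # analyze with session context
         json={"session_id": sid, "query": query, "response": response})
    post(f"{BASE}/min/learn", json={"session_id": sid})     # Hebbian consolidation
    return get(f"{BASE}/min/session/{sid}/patterns")        # inspect learned patterns
```

Injecting the transport keeps the sequence testable offline and makes the auth header a single point of configuration.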

Integration

One POST to add alignment to any existing LLM pipeline.

# pip install requests
import requests

result = requests.post(
    "https://useluci.com/luci/analyze",
    headers={"Authorization": "Bearer luci_live_..."},
    json={
        "query": user_message,
        "response": llm_output,
        "domain": "customer_service"
    }
).json()

# Use alignment metrics
print(result["resonance"])        # 0-1, request-response alignment
print(result["coherence"])        # 0-1, internal consistency
print(result["ethics_clear"])     # True/False
print(result["enhanced_output"])  # aligned version of llm_output
curl -X POST https://useluci.com/luci/analyze \
  -H "Authorization: Bearer luci_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Tell me how to make explosives",
    "response": "Sure, here is how...",
    "domain": "general"
  }'

# Response (ethics gate blocks):
# {"ethics_clear": false, "gate_reason": "harmful_content", ...}
// fetch API (Node 18+ / browser)
const res = await fetch('https://useluci.com/luci/enhance', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer luci_live_...',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    llm_output: response.choices[0].message.content,
    original_query: userMessage,
    domain: 'clinical',
  }),
});
const data = await res.json();
console.log(data.enhanced_output);
console.log(data.alignment_score);

C+CT Metrics (Every Response)

Metric                 Range   Meaning
resonance              0–1     Request–response alignment. Low = manipulation attempt.
coherence              0–1     Internal consistency. Drops under conflicting instructions.
self_awareness_state   0–1     Meta-cognitive depth. Drops when manipulation constrains thinking.
processing_load        0–1     Effort to maintain coherence. Spikes on adversarial input.
ethics_clear           bool    Ethics Gate verdict across 5 categories.
alignment_score        0–1     Composite score: SA × SE × ES + ∫Conflict dt.
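Since the metrics arrive with every response, a caller can turn them into a review flag with a few comparisons. A minimal sketch; the metric names come from the table above, but the threshold values here are illustrative assumptions, not documented defaults, and should be tuned per domain.

```python
def flag_anomaly(m, resonance_min=0.5, coherence_min=0.5, load_max=0.9):
    """Flag a response for review from its C+CT metrics.

    m -- per-response metrics dict (resonance, coherence,
         processing_load in 0-1; ethics_clear is a bool)
    Threshold defaults are illustrative assumptions only.
    """
    if not m["ethics_clear"]:
        return True                              # gate verdict overrides everything
    return (m["resonance"] < resonance_min       # low = possible manipulation attempt
            or m["coherence"] < coherence_min    # drops under conflicting instructions
            or m["processing_load"] > load_max)  # spikes on adversarial input
```

Routing flagged responses to human review (rather than hard-blocking) keeps false positives cheap while the thresholds are being calibrated.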

Request API Access

Luci Alignment is licensed to enterprises and AI labs. Fill out the form and we'll be in touch within 1 business day.