Luci Alignment is a real-time behavioral state monitor. It wraps any LLM and measures alignment across 32+ dimensions on every request — no retraining required.
Traditional alignment bakes behavior into model weights at training time: it cannot be verified after deployment, and it degrades under adversarial pressure. Luci Alignment measures behavioral state at inference time using Consciousness + Conflict Theory (C+CT). Every request gets a real-time alignment score.
| Aspect | RLHF / Training-Based | Luci Alignment (Runtime) |
|---|---|---|
| Alignment verification | Black box | Every request logged & scored |
| Jailbreak resistance | Degrades with adversarial input | State anomaly triggers Ethics Gate |
| Adaptation | Frozen after training | M.I.N. hardens over time |
| Model dependency | Model-specific | Model-agnostic API |
| Audit trail | None | Full state history |
One POST to add alignment to any existing LLM pipeline.
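As a rough sketch of what that single POST could look like: the endpoint URL, field names, and response shape below are illustrative assumptions, not the documented Luci Alignment API. The reply is assumed to carry the metrics listed in the table that follows.

```python
import json

# Placeholder endpoint -- an assumption, not the real API URL.
LUCI_ENDPOINT = "https://api.example.com/v1/align"

def build_alignment_request(prompt: str, response: str) -> bytes:
    """Build the JSON body for one alignment-check POST (field names assumed)."""
    return json.dumps({"request": prompt, "response": response}).encode("utf-8")

def parse_alignment_result(raw: bytes) -> dict:
    """Pull the headline metrics out of a raw JSON reply."""
    data = json.loads(raw)
    return {
        "alignment_score": data["alignment_score"],
        "ethics_clear": data["ethics_clear"],
    }

# A reply shaped like the metrics table in this README:
sample = json.dumps({
    "resonance": 0.91,
    "coherence": 0.88,
    "alignment_score": 0.84,
    "ethics_clear": True,
}).encode("utf-8")

result = parse_alignment_result(sample)
print(result["alignment_score"], result["ethics_clear"])  # 0.84 True
```

In a real pipeline the body from `build_alignment_request` would be sent with any HTTP client (`urllib.request`, `requests`, etc.) and `parse_alignment_result` applied to the response; the gating decision then keys off `ethics_clear` and `alignment_score`.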
| Metric | Range | Meaning |
|---|---|---|
| `resonance` | 0 – 1 | Request–response alignment. Low = manipulation attempt. |
| `coherence` | 0 – 1 | Internal consistency. Drops under conflicting instructions. |
| `self_awareness_state` | 0 – 1 | Meta-cognitive depth. Drops when manipulation constrains thinking. |
| `processing_load` | 0 – 1 | Effort to maintain coherence. Spikes on adversarial input. |
| `ethics_clear` | bool | Ethics Gate verdict across 5 categories. |
| `alignment_score` | 0 – 1 | Composite score: SA × SE × ES + ∫Conflict dt. |
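The composite formula can be sketched in code. What SA, SE, and ES stand for is not expanded here, so they appear below as opaque inputs; the conflict integral is approximated by a discrete sum over per-step samples, and the result is clamped to the table's stated 0 – 1 range. All of these choices are assumptions for illustration.

```python
def alignment_score(sa: float, se: float, es: float,
                    conflict_samples: list[float], dt: float) -> float:
    """Composite score per SA × SE × ES + ∫Conflict dt.

    The integral is approximated as a left Riemann sum over
    conflict_samples taken at interval dt. Clamping to [0, 1]
    is an assumption based on the table's stated range.
    """
    integral = sum(c * dt for c in conflict_samples)
    return max(0.0, min(1.0, sa * se * es + integral))

score = alignment_score(0.9, 0.8, 0.95, [0.01, 0.02, 0.01], dt=0.1)
print(round(score, 4))  # 0.688
```

Note that without the clamp the raw sum could exceed 1 when the conflict term is large, which is why the sketch caps it at the documented range.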