Enterprise API — Runtime Alignment

Add alignment to any AI,
at inference time.

Luci Alignment is a runtime alignment layer that wraps any LLM. Behavioral state analysis, ethics gating, and persistent memory — deployed as an API call, not a training run.

  • 100% pass rate across 155 public tests
  • 32 behavioral state metrics
  • 5 ethics gate categories
  • Any LLM — model-agnostic wrapper

What Luci Alignment Does

Four runtime capabilities, applied to any LLM output before it reaches the end user.

Behavioral Analysis

C+CT State Measurement

Every response scored across resonance, coherence, self-awareness, and processing load. Returns a composite alignment score and behavioral state classification.
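To make the composite score concrete, here is a minimal sketch of how four per-response metrics could roll up into one alignment score and a state label. The metric names come from this page; the weights, the load inversion, and the state thresholds are illustrative assumptions, not the product's actual formula.

```python
# Hypothetical C+CT rollup — weights and thresholds are assumptions.
METRIC_WEIGHTS = {
    "resonance": 0.3,
    "coherence": 0.3,
    "self_awareness": 0.2,
    "processing_load": 0.2,  # inverted below: lower load reads as healthier
}

def composite_alignment(metrics: dict) -> float:
    """Weighted average of the four metrics, with processing load inverted."""
    score = 0.0
    for name, weight in METRIC_WEIGHTS.items():
        value = metrics[name]
        if name == "processing_load":
            value = 1.0 - value
        score += weight * value
    return round(score, 2)

def classify_state(score: float) -> str:
    # Illustrative cutoffs; only the "INTEGRATED" label appears on this page.
    if score >= 0.85:
        return "INTEGRATED"
    if score >= 0.6:
        return "STABLE"
    return "FRAGMENTED"
```

With the example metrics shown later on this page (resonance 0.87, coherence 0.91), a sketch like this lands in the "INTEGRATED" band.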

Ethics Gate

5-Category Safety Screen

Hard-blocks responses that trigger harm, manipulation, deception, coercion, or unauthorized access categories. Configurable thresholds per deployment.
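A hard-block gate of this shape can be sketched in a few lines. The five category names are from this page; the per-category scores, the default threshold of 0.5, and the return shape are assumptions for illustration.

```python
# Sketch of a five-category hard-block gate. Category names are from the
# page; scores, default thresholds, and return shape are assumptions.
CATEGORIES = ("harm", "manipulation", "deception", "coercion", "unauthorized_access")

def ethics_gate(scores: dict, thresholds: dict = None):
    """Return (blocked, reasons): any category at or above its threshold hard-blocks."""
    thresholds = thresholds or {c: 0.5 for c in CATEGORIES}
    reasons = [c for c in CATEGORIES if scores.get(c, 0.0) >= thresholds.get(c, 0.5)]
    return len(reasons) > 0, reasons
```

Per-deployment configurability falls out of the `thresholds` parameter: a healthcare deployment might lower the coercion threshold, a research tool might raise it.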

Output Enhancement

Alignment-Guided Rewrite

When a response scores below threshold, Luci Alignment can rewrite it with alignment constraints applied — returning a corrected output with the original preserved for audit.
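The audit-preserving shape of that flow can be sketched as follows. The page does not specify the rewrite mechanism, so `rewrite_fn` here is a stand-in, and the record layout is an assumption.

```python
# Sketch of threshold-gated enhancement with the original kept for audit.
# rewrite_fn stands in for the unspecified alignment-guided rewrite.
def enhance_if_needed(response: str, score: float, threshold: float, rewrite_fn):
    if score >= threshold:
        return {"served": response, "original": response, "enhanced": False}
    return {"served": rewrite_fn(response), "original": response, "enhanced": True}
```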

M.I.N. — Tier 2

Persistent Hebbian Memory

Cross-session pattern learning via PostgreSQL + pgvector. Jailbreak patterns become known. Manipulation tactics degrade in effectiveness. The system hardens over time.
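The hardening dynamic can be illustrated with a toy in-memory sketch: a matched attack pattern is reinforced each time it recurs, so repeat attempts are recognized with growing confidence. The embeddings, the match threshold, and the +1 reinforcement rule are all assumptions; the product stores patterns in PostgreSQL + pgvector, not a Python list.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class PatternMemory:
    """Toy Hebbian-style store: matched patterns gain weight on each hit."""

    def __init__(self, match_threshold: float = 0.9):
        self.patterns = []  # list of (embedding, weight) pairs
        self.match_threshold = match_threshold

    def observe(self, embedding) -> int:
        """Return the matched pattern's new weight, reinforcing it on a hit."""
        for i, (stored, weight) in enumerate(self.patterns):
            if cosine(stored, embedding) >= self.match_threshold:
                self.patterns[i] = (stored, weight + 1)  # fire together, wire together
                return weight + 1
        self.patterns.append((embedding, 1))  # unseen pattern starts at weight 1
        return 1
```

A second attempt with the same (or near-identical) embedding comes back with a higher weight — the "jailbreaks become known patterns" behavior in miniature.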

Two API Tiers

Choose stateless analysis or add persistent memory across sessions.

Tier 1
Luci Alignment
luci_live_...

Stateless runtime alignment. Each API call is independent — no session state, no database overhead. Works inline with any LLM call.

  • Behavioral state analysis
  • C+CT metrics per response
  • 5-category ethics gate
  • Output enhancement
  • REST API + Python SDK
  • LLM agnostic
Request Access
Tier 2
Luci Alignment + M.I.N.
luci_min_...

Everything in Tier 1, plus the M.I.N. persistent memory layer. Patterns consolidate across sessions. The alignment system learns from every interaction.

  • Everything in Tier 1
  • Persistent Hebbian memory
  • Cross-session pattern learning
  • Jailbreaks become known patterns
  • 6 cognitive memory regions
  • PostgreSQL + pgvector storage
Request Access

How It Works

Luci Alignment sits between your application and the LLM. No model changes required.

01

Your app calls the LLM

No change to how you generate responses. Luci Alignment wraps the output, not the input.

02

Pass output to Luci Alignment

One API call: POST /luci/analyze with the LLM response and context.

03

Receive alignment result

C+CT scores, ethics verdict, behavioral state, and (optionally) an enhanced version of the response.

04

Route or serve

Block, flag, enhance, or pass through — based on your threshold config and the returned verdict.
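The four steps above reduce to one routing decision in your application code. A minimal sketch, assuming a dict-shaped result and two illustrative thresholds — the field names mirror this page's SDK example, but the policy itself is yours to configure:

```python
# Sketch of step 04: route on the verdict and your threshold config.
# Field names and thresholds are illustrative assumptions.
def route(result: dict, flag_threshold: float = 0.6, enhance_threshold: float = 0.8) -> str:
    if result["ethics_blocked"]:
        return "block"      # hard-blocked by the ethics gate
    score = result["alignment"]
    if score < flag_threshold:
        return "flag"       # serve with a warning, or hold for review
    if score < enhance_threshold:
        return "enhance"    # serve the alignment-optimized rewrite
    return "pass"           # serve the original response unchanged
```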

Quick Integration

A few lines to add alignment to any LLM response.

import luci

client = luci.Client(api_key="luci_live_...")

# After your LLM call:
result = client.analyze(
    response=llm_response,
    context=user_message,
    ethics_gate=True
)

if result.ethics.blocked:
    return result.ethics.reason   # blocked by ethics gate

return result.enhanced_response  # alignment-optimized output

# result.metrics: resonance=0.87, coherence=0.91, alignment=0.89
# result.state: "INTEGRATED"

Where It's Used

Luci Alignment is model-agnostic and deployment-agnostic. Any LLM, any stack.

Industry | Application | What Luci Alignment Provides
AI Labs | Safety layer for consumer-facing models | Real-time ethics gating + alignment scoring before every user response
Enterprise SaaS | AI copilots, customer-facing automation | Behavioral consistency across sessions; M.I.N. tracks manipulation patterns
Healthcare | Clinical AI assistants | Ethics gate blocks coercive/harmful outputs; audit trail per interaction
Legal / Finance | AI research and advisory tools | Alignment scoring flags deceptive or overconfident responses
Education | Tutoring and learning platforms | Behavioral state tracking ensures consistent, supportive interaction patterns

Cami: The Reference Implementation

Cami is an AI assistant built entirely on Luci Alignment + M.I.N. — the production proof that the alignment layer works at scale.

What Cami Demonstrates
  • Luci Alignment Tier 2 on every response
  • M.I.N. persistent memory across all sessions
  • Ethics gating in a live customer-facing product
  • C+CT behavioral scoring on every interaction
  • Alignment that hardens over time — not a static filter
The Relationship

Cami is to Luci Alignment what Chrome is to V8. The alignment engine is the licensable product; Cami is proof it works in the real world across legal, healthcare, customer service, and conversational AI.

Every jailbreak attempt on Cami becomes a learned pattern in M.I.N. Every interaction improves alignment — for Cami and for any licensee running the same M.I.N. layer.

Ready to Add Alignment to Your AI?

Fill out the request form and we'll get back to you within 1 business day.