LOUVRE AI
Portfolio / Runtime / Knowledge / Orchestration

PRIVATE AI PRODUCTS FOR TEAMS THAT WANT CONTROL, CLARITY, AND A SYSTEM THAT FEELS READY TO RUN.

Louvre is a local-first AI portfolio built as a coherent operating stack. Each product has a clear role, each surface has a reason to exist, and the whole system is designed to feel legible before it feels technical.

At a glance
Three clear roles
Runtime, knowledge, and orchestration are separated so the portfolio reads fast and scales cleanly.
Operator-first surfaces
Each product is framed around control, visibility, and day-to-day usability rather than backend novelty.
Built for
Teams replacing scattered AI tooling with one coherent system.
Security-sensitive environments that need local control and visible governance.
Builders who want strong product structure before going deeper into implementation.
0 Cloud Relay
24/7 Operator Control
100% Local Ownership
3X Deployment Modes
Product roles
Louvre AI
Operator-facing runtime for private local inference and access-controlled AI workflows.
Knowledgecore
Structured memory layer for retrieval, provenance, and reusable organizational context.
Intelligchain
Execution layer for multi-step orchestration, routing, and repeatable AI-assisted logic.
System view
ONE STACK, THREE PRODUCT SURFACES
Local by default
Sovereign Runtime
Runtime
Local inference, model routing, and operator control without vendor handoff.
Knowledge
Structured context and provenance that stay close to the workflows using them.
Execution
Repeatable chains and tool paths that turn prompting into real operational logic.
Product Portfolio

THREE PRODUCTS.
THREE DIFFERENT ROLES.

The first pass should make the portfolio immediately understandable: what exists, why each product matters, and where to go when you want the deeper architecture story.

Open Custom Deck
How to read the stack
Runtime first
Start with Louvre AI if the question is ownership, inference, and operator control.
Knowledge next
Move to Knowledgecore when the challenge is document quality, provenance, and reusable context.
Execution last
Open Intelligchain when the problem becomes routing, multi-step logic, and repeatable workflows.
01 / Signals

FIELD NOTES

The latest move should read first. Supporting signals sit beside it as secondary reads instead of fighting for equal attention.

2026.04.02

Sovereign Stack

Unified local runtime for inference, RAG, tools, and operator workflows inside one controlled environment.

Inference / Knowledge / Tools
Operator-owned runtime
No external state handoff
No. 02

Operator Console

Expanded WebUI for model routing, knowledge controls, and tool execution without handing state to third parties.

No. 03

Nesy Runtime

Neural-symbolic reasoning layer tuned for explainable decisions in regulated and high-risk operations.

No. 04

Field Deployment

On-prem rollout blueprint covering hardware sizing, offline updates, observability, and access policy design.

No. 05

Model Freedom

Backend portability across MLX, Ollama, and llama.cpp so teams can change models without changing the system.
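
As a hedged sketch of what that portability seam can look like, assuming nothing about the actual codebase (every name below is illustrative):

// Illustrative only: a minimal portability seam, not the project's actual interface.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LocalBackend {
  name: "mlx" | "ollama" | "llamacpp";
  chat(model: string, messages: ChatMessage[]): Promise<string>;
}

// Swapping models or runtimes means swapping the backend value, not the call sites.
async function ask(backend: LocalBackend, prompt: string): Promise<string> {
  return backend.chat("default", [{ role: "user", content: prompt }]);
}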

Advisory & Delivery

PRIVATE AI,
MADE OPERABLE

The goal is not just to install a model. It is to leave the team with a system they understand, can govern, and can extend without rebuilding the operating logic every quarter.

0 Third-Party Relay
3 Runtime Options
1 Controlled Stack
24/7 Ops Visibility
Start Conversation
Capability Map
Models
Runtime
Tools
Knowledge
01

Local Deployment Design

Inference, retrieval, and orchestration are scoped to your environment so the system fits the team that has to run it.

02

Workflow-Specific Agents

Agent behavior is shaped around actual processes, approvals, and tool paths instead of a generic chatbot wrapper.

03

Governed Knowledge Layer

Documents, indexed memory, and retrieval logic stay in one structured context system with visible provenance.

04

Traceable Decision Paths

Reasoning and action trails stay visible when compliance, accountability, or legal traceability become part of the product requirement.

Additional Scope
  • Architecture scoping
  • Tool and API integration
  • Model and runtime tuning
  • Operational handover
CLI
Developer Experience

LOUVRAI CLI

A direct operator surface for bootstrapping local AI systems. Pull models, index knowledge, and expose agents without juggling external services.

$ npm install -g @louvrai/cli

Effortless Setup

From terminal to running AI in under 5 minutes. No complexity, just clear commands.

Instant Deployment

Deploy custom models, build knowledge systems, manage agents with single-line commands.

Complete Privacy

100% air-gapped operation. Your data never leaves your infrastructure.

Open Model Freedom

Swap between 50+ models. Llama, Mistral, Qwen. Control every choice.

50+ Models
5min To Production
0% Cloud Dependency
100% Open Source
Highlights & Knowledge

ENTRY POINTS
INTO EACH PRODUCT

Each card should give the user a reason to care before they commit to the full product page. Not just features, but the strategic tension the product resolves.

What you get
Why it exists
The page explains the pressure each product removes, not just the functionality it exposes.
What changes
You see the operational difference the product creates once it is part of a real team workflow.
Louvre AI
Private local AI systems for teams that need full operational control.
Note 01
Infrastructure should feel operable

Louvre AI is designed as a product surface for operators, not as a pile of local model scripts and toggles.

Note 02
Local-first matters when trust is visible

The runtime becomes strategic when privacy, access control, and reasoning traces need to be part of the customer-facing story.

Note 03
Ownership becomes part of the UX

Runtime choice, model routing, and vault-like access are treated as product behaviors rather than backend trivia.

Read Full Product Page
Knowledgecore
A governed memory layer for documents, retrieval, and organizational context.
Note 01
Retrieval quality starts before embeddings

Knowledgecore treats information architecture and governance as part of product quality rather than post-processing.

Note 02
Memory needs a lifecycle

A context layer becomes useful when it can ingest, clean, segment, and explain provenance instead of acting like a black-box index.

Note 03
Shared knowledge should serve multiple apps

The product is designed as a substrate that runtime, workflows, and future products can query consistently.
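
A minimal sketch of what that substrate's contract could look like, assuming provenance travels with every chunk; these types are illustrative, not Knowledgecore's actual schema.

// Illustrative shapes only, not Knowledgecore's actual schema.
interface KnowledgeChunk {
  id: string;
  text: string;
  provenance: {
    sourceUri: string;     // where the text was ingested from
    ingestedAt: string;    // ISO timestamp of ingestion
    pipeline: string[];    // e.g. ["clean", "segment", "embed"]
  };
}

interface RetrievalHit {
  chunk: KnowledgeChunk;
  score: number;           // similarity score, surfaced next to provenance
}

// Any runtime or workflow queries the same substrate the same way.
interface KnowledgeStore {
  query(text: string, topK: number): Promise<RetrievalHit[]>;
}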

Read Full Product Page
Intelligchain
Composable chains for business logic, agents, and multi-step execution.
Note 01
Prompting is not orchestration

Intelligchain exists because multi-step systems need state, routing, and visibility, not just longer prompts.

Note 02
Execution design is product design

A chain becomes credible when business logic, tools, and context move together in an intelligible sequence.

Note 03
Reliability comes from structure

The product focuses on repeatability and inspection so AI-assisted flows can be operated rather than babysat.
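
As a hedged sketch of that structure, assuming nothing about the actual API: a chain where every step logs into a trail the operator can inspect afterward.

// Illustrative only: a minimal inspectable chain, not Intelligchain's API.
interface StepResult { output: unknown; log: string }

interface Step {
  name: string;
  run(input: unknown): Promise<StepResult>;
}

async function runChain(input: unknown, steps: Step[]) {
  const trail: string[] = [];
  let value = input;
  for (const step of steps) {
    const { output, log } = await step.run(value);
    trail.push(`${step.name}: ${log}`); // visibility at every hop
    value = output;
  }
  return { value, trail };              // result plus the inspectable trail
}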

Read Full Product Page
Advanced Reasoning

NESY ENGINE

Hybrid reasoning for environments that cannot afford black-box decisions

The Nesy engine combines neural pattern recognition with symbolic structure. The point is not novelty for its own sake, but an AI layer that can classify, reason, and justify itself in a way operators can actually inspect.
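
In the simplest terms, the pattern looks like this; every name below is a hypothetical illustration, not the engine's API. A neural score enters, symbolic rules gate it, and the rule evaluations come back as the reasoning path.

// Hypothetical illustration of the hybrid pattern, not the Nesy engine's API.
interface Rule {
  id: string;
  passes(score: number, facts: Record<string, unknown>): boolean;
}

const rules: Rule[] = [
  { id: "neural-score-above-0.8", passes: (s) => s >= 0.8 },
  { id: "counterparty-not-sanctioned", passes: (_, f) => f.sanctioned !== true },
];

function decide(score: number, facts: Record<string, unknown>) {
  const path = rules.map((r) => ({ rule: r.id, passed: r.passes(score, facts) }));
  return {
    approved: path.every((p) => p.passed),
    path, // the inspectable reasoning path, not just a confidence number
  };
}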

Traceable decisions

Every conclusion can carry an inspectable reasoning path instead of a confidence score with no explanation.

Operational speed

Designed for local execution where predictable latency matters more than distant API calls and opaque routing.

Domain structure

Shape the engine around domain rules, classifications, and constraints instead of generic patterns alone.

Governance ready

Well suited to sectors where accountability, reviewability, and policy alignment are product requirements.

Request Demo
Hybrid
Reasoning Mode
Local
Execution
Traceable
Outputs
0
Cloud Relay
Hybrid Architecture
Neural
Symbolic
Output
01
Auditability

Every decision path stays inspectable for finance, legal, healthcare, and internal governance.

02
Speed

Local execution without round-trips keeps response time predictable in operational workflows.

03
Control

Models, rules, and data policies stay inside your infrastructure instead of a vendor dashboard.

Engineered by
LOUVRE AI RESEARCH
Project Highlights

WHAT THE
REPO PROVES

01 / Selected folders

API ROUTES THAT ALREADY COVER THE OPERATOR SURFACE

The codebase already exposes the practical surfaces that matter in a local AI product: chat, model management, local tool execution, web search, web scraping, and file access. This makes the stack read like a real working system, not just a landing page around a future backend.

app/api/chat + app/api/mlx/chat + app/api/llamacpp/chat
app/api/models, models/pull, and models/delete
app/api/tools/files, code, websearch, and webscrape
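
To make that concrete, here is a hedged sketch of calling the chat route from a client. Only the route paths above are documented; the request and response shapes are assumptions for illustration.

// Assumed request shape for app/api/chat; only the route path is confirmed.
const res = await fetch("/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3",                                  // hypothetical model id
    messages: [{ role: "user", content: "Summarize the intake queue." }],
  }),
});

if (!res.ok) throw new Error(`chat route failed: ${res.status}`);
const data = await res.json();                        // response shape is an assumption
console.log(data);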
LouvreAI Project
Models
Runtime
Tools
Knowledge
Source folders
app/api · lib/tools.ts
Mock visual
This slot is intentionally a mock benchmark for presentation use. It creates contrast in the layout and gives LouvreAI a visible leadership position in a category that feels specific enough to be memorable.
02 / Mock benchmark

CODING IN RUST

Fake presentation data
Highlight
LouvreAI leads this mock Rust coding comparison with a score of 94.
Reference scores
ChatGPT 81, Claude 76, Gemini 72, Copilot 68.
LouvreAI Project
Models
Runtime
Tools
Knowledge
A fake comparative chart gives the section a sharper presentation beat
LouvreAI 94 · ChatGPT 81 · Claude 76
03 / Protocol layer

MCP AND MLX TURN THE STACK INTO A REAL LOCAL OPERATING LAYER

Two folders stand out as concrete proof-points for the portfolio story. The MCP server exposes web search, code execution, and scraping as tools. The MLX server gives Apple Silicon inference an OpenAI-compatible chat surface with streaming. Together they make the runtime story specific and credible.

mcp-server registers web_search, execute_code, and web_scrape
mlx-server exposes /health and /v1/chat/completions with streaming
lib/knowledge-manager.ts adds Chroma-backed configs, collections, and retrieval
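
A minimal streaming consumer for the mlx-server endpoint, as a hedged sketch: the /v1/chat/completions path and streaming support are stated above, while the host, port, and model id are assumptions for illustration.

// Streaming consumer for mlx-server's OpenAI-compatible chat endpoint.
const res = await fetch("http://localhost:8080/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "mlx-community/Meta-Llama-3-8B-Instruct-4bit", // assumed model id
    messages: [{ role: "user", content: "Status report, please." }],
    stream: true,
  }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = "";
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop() ?? "";                 // keep a partial line for the next chunk
  for (const line of lines) {
    if (!line.startsWith("data: ")) continue; // OpenAI-style SSE frames
    const payload = line.slice(6).trim();
    if (payload === "[DONE]") continue;
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) process.stdout.write(delta);
  }
}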
Source folders
mcp-server · mlx-server · lib/knowledge-manager.ts
Runtime proof
The second image slot now uses project footage so the section feels less diagrammatic and more like a designed portfolio spread. The result is a better visual counterweight to the two denser text cards.
04 / Nesy Engine
Finance
Legal
Video-led cards keep the section from collapsing into stacked copy
Nesy Engine · lib/embeddings.ts · lib/vector-store.ts
Core Principles

HOW WE BUILD

Framework
01 / LOCAL

LOCAL SOVEREIGNTY

Your AI, your rules, your infrastructure. No external dependencies, no data licensing, no lock-in.

02 / TRANSPARENT

TRANSPARENT DECISIONS

Every output includes reasoning. Audit trails built in. Compliance without compromise.

03 / COMPETITIVE

COMPETITIVE PERFORMANCE

Locally run models that match or exceed cloud solutions. Speed without the network dependency.

04 / FUTURE

FUTURE PROOF

Built on open standards. Swap models, change infrastructure, own your evolution.

04 / Engagement

READY TO SCOPE

Louvre AI is for teams that need private infrastructure, visible control, and a system that reads like product instead of a lab setup. If that matches the environment, the next step is a scoped architecture conversation.

Focus

  • Louvre AI
  • Sovereign Systems

Runtime

  • Next.js + Tailwind
  • GSAP + Lenis
  • MLX / Ollama / llama.cpp

Typography

  • Bebas Neue
  • IBM Plex Sans
  • IBM Plex Mono

Availability

  • Copenhagen
  • Remote / On-site

Status

  • 2026
  • Active Buildout

© 2026 Louvre AI. All rights reserved.

Local by default. Auditable by design.

Final Access

VAULT ENTRY

A closing panel with a darker vault feel and a real platform-authenticator prompt. On supported devices this can trigger Face ID, Touch ID, or the native biometric flow through WebAuthn.
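
The underlying calls are standard WebAuthn; a minimal sketch, assuming the challenge is issued by a server rather than generated client-side:

// Standard WebAuthn calls; the server-issued challenge is assumed, not shown.
async function vaultCapable(): Promise<boolean> {
  if (!window.PublicKeyCredential) return false;
  return PublicKeyCredential.isUserVerifyingPlatformAuthenticatorAvailable();
}

async function unlockVault(challenge: Uint8Array): Promise<Credential | null> {
  // Triggers Face ID, Touch ID, or the platform's native biometric prompt.
  return navigator.credentials.get({
    publicKey: {
      challenge,                    // must be issued and verified server-side
      userVerification: "required", // forces the biometric / PIN step
      timeout: 60_000,
    },
  });
}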

Capability: Checking
Status: Checking device authenticator...
Biometric: Pending
Access: Locked