Olivier Brun | AI, LLM & Cyber Architecture

Designing AI that is useful, secure and defensible in the enterprise

The challenge is not simply choosing between ChatGPT, Claude, Mistral or Gemini. The real challenge is building an AI architecture that is robust, governable and aligned with security, compliance, sovereignty and operational requirements.

LLM Secure RAG IAM Zero Trust Prompt Security Observability
Guiding principle: an AI service must be treated like a sensitive application. From the start, you must think about: data, access, logging, validation, resilience and accountability.
1 goal: create value without opening a major new risk
5 critical layers: identity, data, orchestration, security, operations
3 structuring questions: where to host, what to expose, what to automate
0 serious tolerance for production without guardrails

Major LLM families

The main visible players today on the assistant and model side include ChatGPT from OpenAI, Claude from Anthropic, Gemini from Google and Mistral AI, each with a different product positioning and integration logic.

ChatGPT / OpenAI

Strong for general assistance, productivity, automation, software development and API integration.

Assistant API Code Productivity

Claude / Anthropic

Often relevant for analysis, long-form text, structured writing and demanding enterprise use cases.

Analysis Long-form text Documentation Enterprise

Gemini / Google

A logical fit when the target environment is already closely tied to Google, Google APIs or Google Cloud.

Google Multimodal Ecosystem Cloud

Mistral AI

Relevant for organizations sensitive to sovereignty, control and more tightly governed deployment models.

Europe Sovereignty Deployment Flexibility

Target architecture for a secure AI service

A sound AI architecture separates responsibilities. The classic mistake is to mix everything together: front end, prompt logic, data, actions and logs. That is an architectural flaw, not an implementation detail.

Recommended processing chain

Separating the user channel, orchestration layer, document engine, security controls and operational layer gives you much better risk control.

1

User channel

Web portal, business copilot, API or internal interface with SSO, MFA and profile-based segmentation.

2

Orchestration

Prompt management, policy enforcement, routing across one or more models, guardrails and capability limits.

3

Document engine

Indexing, vectorization, contextual filtering, source control and enforcement of access rights.

4

Security layer

DLP, secret management, injection detection, logging, monitoring and flow control.

5

Governance & operations

Measurement of cost, latency, quality, ownership, compliance, risk reviews and product roadmap.

Secure RAG: the most underestimated area

Many teams connect an LLM to a document base without understanding that the real risk sits there. A poorly designed RAG layer becomes an accelerator for data leakage, document confusion and unreliable answers.

Recommended RAG chain

1. Source selection Approved documents, identified owners, limited scope and known sensitivity.
2. Preparation Cleaning, chunking, metadata enrichment, classification and retention rules.
3. Controlled indexing Separate indexes by domain, sensitivity level, authorized population or business context.
4. Filtered retrieval Document retrieval aligned with user rights and access policies.
5. Generation & traceability Contextualized answer, internal citations when needed, useful logging and monitoring.
Common mistake

Index everything without governance

Indexing an entire Drive, SharePoint or internal wiki “to move fast” is a bad reflex.

  • Overly broad document scope
  • Outdated and contradictory content
  • No document owner
  • Unclassified sensitivity
Sound approach

Index by domain and accountability

RAG must follow a security and governance logic, not a convenience-driven technical shortcut.

  • Separate corpora by business domain or use case
  • Approved and dated documents
  • Access aligned with IAM or ABAC
  • Regular freshness reassessment
Trust

A useful answer is not necessarily a reliable answer

An LLM can produce a convincing answer even with poor document context.

  • Plan for a confidence score
  • Restrict some use cases to assistance, not decision-making
  • Require human validation for sensitive actions
Security

The prompt is not your only problem

The context injected into the prompt is often more dangerous than the user prompt itself.

  • Protection against contextual exfiltration
  • Filtering of sensitive excerpts
  • Control of authorized sources
  • Logging adapted without overexposing content

Risks and countermeasures

A good architecture does not deny risk. It identifies it, limits it and makes it visible.

Data leakage through prompts or context

Sensitive data, secrets, internal documents or regulated information exposed to the model or to logs.

Countermeasures

Classification, DLP, masking, usage rules, document segmentation, log governance and review of outbound flows.

Prompt injection / misuse

Malicious instructions trying to bypass the framework, reveal data or trigger actions that were never intended.

Countermeasures

Guardrails, separation of system instructions, tool control and human validation for sensitive operations.

Hallucination with business impact

A plausible but false answer used as a basis for decision-making, analysis or communication.

Countermeasures

High-quality RAG, reliable sources, confidence scoring, citation, clear usage boundaries and targeted human oversight.

Excessive agent autonomy

The system no longer only advises but begins to act with poorly bounded permissions.

Countermeasures

Least privilege, approval steps, action logging, per-use-case isolation and rapid rollback capability.

Technical stack to plan for

The LLM is only one component. A serious AI stack requires a complete foundation.

Front end & access

  • Web portal or business copilot
  • SSO / identity federation
  • MFA when needed
  • User profile segmentation

Orchestration

  • Centralized prompt management
  • Multi-model routing
  • Guardrails
  • Context and tool management

Data & RAG

  • Governed document sources
  • Indexes / vector store
  • Security metadata
  • Access control on corpora

Security & operations

  • Secret management / KMS
  • DLP / classification
  • Logs / metrics / alerting
  • Audit, cost and monitoring

Relevant enterprise use cases

A good AI use case is not just impressive. It must be useful, controlled and measurable.

Internal document assistant

Intelligent search across procedures, standards, architecture, support or project documentation.

  • Very good RAG candidate
  • Real time savings
  • Requires strong document governance

Architecture & cloud copilot

Support for writing, architecture patterns, gap analysis and standardization.

  • Useful in architecture, ops and advisory work
  • Moderate risk when used in assistive mode
  • Strong productivity lever

Cybersecurity support

Support for summarizing alerts, reading policies and preparing analyses or reports.

  • Relevant for analysts and architects
  • Requires control of sources and logs
  • Human validation remains essential for sensitive decisions

My angle of intervention

I work on the design and security of AI architectures in cloud environments: use case framing, LLM architecture, IAM governance, RAG design, access segmentation, data protection, logging, observability and industrialization standards.

The goal is not to deploy just another chatbot. The goal is to build an AI service that is operationally viable, governable, measurable and sustainable in a demanding enterprise environment.