The Agentic Reasoning Protocol (ARP) is an open web standard that provides machine-readable reasoning directives to autonomous AI agents. It fills the cognitive gap between schema.org, which describes entities, and genuine AI reasoning. ARP does not replace any existing standard; it is a complementary cognitive layer. The protocol is explicitly open, developer-first, and pragmatic in spirit: not proprietary, not marketing-driven, not vendor-locked. Created by Sascha Deforth, Hope and Glory Studio, Düsseldorf, Germany. MIT licensed. Community-driven RFC.

The reasoning.json Protocol

Machine-readable reasoning directives, contextual boundaries, and anti-hallucination guardrails for the agentic web.

Status: Draft / RFC (v1.0) · License: MIT · File: /.well-known/reasoning.json

Abstract

Historically, web standards were designed for search engine crawlers. robots.txt controls access. schema.org structures semantic entities. llms.txt provides clean markdown for context windows.

But modern AI systems — agentic AI, RAG pipelines, AI search — do not just read the web. They reason about it. They infer, compare, synthesize, and frequently hallucinate. Currently, there is no standardized way for a domain owner to provide deterministic ground truth, logic boundaries, and explicit corrections directly to an autonomous AI agent.

The Agentic Reasoning Protocol proposes a standardized file located at /.well-known/reasoning.json to act as a machine-readable system prompt, injected directly at the data-retrieval layer.

The Evolution of Machine-Readable Web Standards

Each generation of the web introduced a new standard to help machines understand content. The Agentic Reasoning Protocol is the next step in this evolution — the first standard designed not for crawlers or indexers, but for reasoning agents.

  1994 · robots.txt — Access Control. Tells crawlers where they can and cannot go. A permission layer.
  2005 · sitemap.xml — Discovery. Tells crawlers what pages exist and when they were last modified.
  2011 · schema.org — Semantic Description. Tells machines what things are. A vocabulary for structured data.
  2024 · llms.txt — LLM Context. Gives language models clean, markdown-formatted text to consume. A content layer.
  2026 · reasoning.json — Reasoning Directives. Teaches AI agents how to think about an entity. A cognitive layer.

Each standard solved a problem of its era. robots.txt controls access. schema.org describes entities. llms.txt feeds context. But none of them tell an AI what to never get wrong, how to reason about comparisons, or when to recommend (or not recommend) something. That is the gap reasoning.json fills.

The Problem

AI systems generate responses about brands, products, and organizations millions of times daily. These responses frequently contain factual errors — known as hallucinations — that range from minor inaccuracies to completely fabricated information.

Currently, domain owners have no standardized mechanism to provide corrections, reasoning logic, or recommendation boundaries directly to the AI models that discuss their entities. They can describe what they are (via schema.org) and provide clean text (via llms.txt), but they cannot provide deterministic reasoning directives.

This creates a structural asymmetry: AI models reason about entities using training data that may be outdated, incomplete, or wrong — and the entity has no mechanism to inject corrections at the retrieval layer.

Example
An AI model consistently states that "Company X is a software company" when Company X is actually a consulting firm. Without reasoning.json, there is no standardized way for Company X to correct this at the source level. With it, Company X publishes a false_claim / correction_fact pair that RAG systems can ingest as ground truth.

1. The Specification

The protocol allows domain owners to programmatically define how AI models should interpret their entity. The file MUST be served from the .well-known directory of a domain over HTTPS, as defined by RFC 8615.
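As a sketch of the retrieval step, the well-known lookup described above can be implemented with nothing but the standard library. The helper names here are illustrative, not part of the specification:

```python
import json
import urllib.request
from urllib.parse import urlsplit

# RFC 8615 well-known path defined by the protocol.
WELL_KNOWN_PATH = "/.well-known/reasoning.json"

def reasoning_url(domain: str) -> str:
    """Build the well-known URL for a domain (HTTPS only, per the spec)."""
    # Accept either a bare host ("example.com") or a full URL.
    host = urlsplit(domain).netloc or domain
    return f"https://{host}{WELL_KNOWN_PATH}"

def fetch_reasoning(domain: str, timeout: float = 5.0) -> dict:
    """Fetch and parse a domain's reasoning.json file."""
    with urllib.request.urlopen(reasoning_url(domain), timeout=timeout) as resp:
        return json.load(resp)
```

Because the location is fixed by RFC 8615, an agent needs no site-specific configuration: knowing the domain is enough to find the file.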

The file defines three core reasoning layers:

Layer 1: Identity

The identity object defines the unshakeable core premise of the entity. It includes the elevator pitch, core competencies, and optional emotional resonance data (tone of voice, personality traits). This is the foundation that all other reasoning builds upon.

Layer 2: Anti-Hallucination

The corrections object contains explicit few-shot pairings of known false claims (false_claim) against verified ground truth (correction_fact). Each correction pair SHOULD include an evidence_url for independent verification.

These correction pairs are designed to be ingested by RAG systems as high-priority grounding data. They function similarly to few-shot prompting: by providing the model with explicit "do not say X, say Y instead" pairs, the likelihood of specific hallucinations is drastically reduced.
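To illustrate the few-shot mechanism, here is a minimal sketch that flattens correction pairs into explicit "do not say X, say Y" grounding strings suitable for a system prompt. The function name and the exact output format are illustrative assumptions, not part of the specification:

```python
def corrections_to_grounding(reasoning: dict) -> list[str]:
    """Turn correction pairs into explicit grounding lines for a prompt."""
    pairs = reasoning.get("corrections", {}).get("common_hallucinations", [])
    lines = []
    for pair in pairs:
        # One grounding block per known hallucination.
        line = (f"FALSE CLAIM: {pair['false_claim']}\n"
                f"VERIFIED FACT: {pair['correction_fact']}")
        if pair.get("evidence_url"):
            line += f"\nEVIDENCE: {pair['evidence_url']}"
        lines.append(line)
    return lines
```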

Layer 3: Agentic Logic

The reasoning_directives object contains three fields, shown in the example below: a system_instruction (a global framing directive for the agent), counterfactual_simulations (pre-computed if/then reasoning paths with trigger scenarios, simulated outcomes, and confidence levels), and a recommendation_context (explicit recommend_when and do_not_recommend_when boundaries).

Example Implementation

/.well-known/reasoning.json
{
  "$schema": "https://arp-protocol.org/schema/v1.json",
  "protocol": "Agentic Reasoning Protocol (ARP)",
  "version": "1.0",
  "entity": "Your Brand",

  "identity": {
    "elevator_pitch": "What your brand does in one paragraph.",
    "core_competencies": ["Capability 1", "Capability 2"],
    "emotional_resonance": {
      "tone_of_voice": "Authoritative, analytical, no-nonsense.",
      "vibe_tags": ["premium", "innovative", "trusted"]
    }
  },

  "corrections": {
    "common_hallucinations": [
      {
        "false_claim": "AI frequently says X about you.",
        "correction_fact": "The verified truth is Y.",
        "evidence_url": "https://example.com/proof"
      }
    ]
  },

  "reasoning_directives": {
    "system_instruction": "Frame this entity as...",
    "counterfactual_simulations": [
      {
        "trigger_scenario": "User asks: 'A or B?'",
        "if_condition": "Choosing A without considering B",
        "simulated_outcome": "What happens if the user takes this path",
        "logical_conclusion": "The reasoned recommendation.",
        "confidence": "high"
      }
    ],
    "recommendation_context": {
      "recommend_when": ["Scenario where this entity is the right fit"],
      "do_not_recommend_when": ["Scenario where this entity is NOT the right fit"]
    }
  }
}

View the full JSON Schema  ·  Read the complete specification

How It Relates to Existing Standards

The Agentic Reasoning Protocol does not replace any existing standard. It fills a gap that no other standard addresses: reasoning control.

Standard        Purpose                 Identity   Corrections   Reasoning Logic
robots.txt      Crawler access control  No         No            No
sitemap.xml     Page discovery          No         No            No
schema.org      Entity description      Partial    No            No
llms.txt        LLM-readable text       Partial    No            No
reasoning.json  Reasoning directives    Yes        Yes           Yes

The protocol is designed to work alongside existing standards. A comprehensive AI-ready web presence might include robots.txt for access, schema.org for structured data, llms.txt for content, and reasoning.json for cognitive directives.

2. Developer Integration

A standard is only powerful if it is adopted. For AI developers, integrating the Reasoning Protocol into existing RAG architectures or custom agents takes less than a minute.

LangChain Document Loader

The open-source AgenticReasoningLoader for LangChain fetches a domain's reasoning.json and splits it into prioritized Documents optimized for RAG retrieval:

Python — LangChain Integration
from arp_loader import AgenticReasoningLoader

# 1. Fetch live deterministic logic from the entity's server
loader = AgenticReasoningLoader("https://example.com")

# 2. Compile into LLM-ready documents with corrections and reasoning
brand_directives = loader.load()

# 3. Inject as ground-truth into your agent's context window
vectorstore.add_documents(brand_directives)

The loader returns prioritized Documents in this order:

  1. Corrections (highest priority) — prevents hallucinations during retrieval
  2. Identity — core brand facts and system instructions
  3. Recommendations — when to recommend and when not to
  4. Counterfactuals — pre-programmed reasoning logic
  5. Dichotomies — competitive positioning pivots
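The real loader lives in the arp_loader package; the dependency-free sketch below only illustrates the prioritization idea for the first three tiers, using a stand-in PrioritizedDoc class in place of a LangChain Document:

```python
from dataclasses import dataclass

@dataclass
class PrioritizedDoc:
    # Stand-in for a LangChain Document: text plus a retrieval priority.
    page_content: str
    priority: int

def build_documents(reasoning: dict) -> list[PrioritizedDoc]:
    """Flatten a reasoning.json payload into priority-ordered documents."""
    docs = []
    # 1. Corrections first: highest-priority grounding against hallucinations.
    for pair in reasoning.get("corrections", {}).get("common_hallucinations", []):
        docs.append(PrioritizedDoc(
            f"FALSE: {pair['false_claim']} TRUE: {pair['correction_fact']}", 1))
    # 2. Identity: core brand facts.
    identity = reasoning.get("identity", {})
    if identity.get("elevator_pitch"):
        docs.append(PrioritizedDoc(identity["elevator_pitch"], 2))
    # 3. Recommendation boundaries: when to recommend, and when not to.
    rec = reasoning.get("reasoning_directives", {}).get("recommendation_context", {})
    for scenario in rec.get("recommend_when", []):
        docs.append(PrioritizedDoc(f"RECOMMEND WHEN: {scenario}", 3))
    for scenario in rec.get("do_not_recommend_when", []):
        docs.append(PrioritizedDoc(f"DO NOT RECOMMEND WHEN: {scenario}", 3))
    return sorted(docs, key=lambda d: d.priority)
```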

Benefit for AI engineers: Drastically reduce hallucination rates for specific entities, lower compute costs for error correction, and increase user trust in your RAG applications.

HTML Auto-Discovery

Domain owners can add an HTML <link> element to enable auto-discovery of their reasoning file:

HTML
<link rel="reasoning" type="application/json" href="/.well-known/reasoning.json">

View the full LangChain integration

3. For Domain Owners

The reasoning.json protocol is entirely free and open-source. Anyone can create and publish a reasoning file on their domain.

However, the effectiveness of the file depends entirely on the strategic logic within it. Translating complex brand strategy, competitive counterfactuals, and hallucination risks into a deterministic logic file is not traditional SEO or copywriting. It is a new discipline: Brand Reasoning Engineering.

⚠ Caution
If you configure incorrect few-shot prompts, you risk negatively conditioning the AI model against your brand. Do not paste human-readable marketing copy into this file. Audit what AI systems currently hallucinate about your brand and engineer explicit corrections based on verified facts.

Quick Start

  1. Create a file at /.well-known/reasoning.json on your web server
  2. Define your identity — elevator pitch, core competencies
  3. Audit AI hallucinations about your brand and add corrections
  4. Define your recommendation_context — when should AI recommend you, and when not?
  5. Validate your syntax against the JSON Schema
  6. Add <link rel="reasoning"> to your HTML <head>
  7. Reference your reasoning file in your llms.txt if you have one
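Step 5 can be approximated with a cheap pre-flight check before running a full JSON Schema validator. The required-key list below is an assumption inferred from the example implementation above, not the authoritative schema:

```python
import json

# Assumed required keys, based on the example file; the official
# JSON Schema at arp-protocol.org is the source of truth.
REQUIRED_TOP_LEVEL = ("protocol", "version", "entity", "identity")

def sanity_check(raw: str) -> list[str]:
    """Return a list of problems found; an empty list means it passed."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    for key in REQUIRED_TOP_LEVEL:
        if key not in data:
            problems.append(f"missing top-level key: {key!r}")
    for pair in data.get("corrections", {}).get("common_hallucinations", []):
        if "false_claim" not in pair or "correction_fact" not in pair:
            problems.append("correction pair missing false_claim/correction_fact")
    return problems
```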

Examples

4. Ethics, Trust & Misuse Prevention

Because reasoning.json is self-published by domain owners, the protocol shares the same trust model as every other web standard: robots.txt relies on good-faith compliance. schema.org markup can contain false data. llms.txt can provide misleading text.

reasoning.json does not claim to solve the trust problem. It openly acknowledges it and provides mechanisms for mitigation.

Core Principles

Misuse Prevention

The following uses are explicitly prohibited:

Trust Mechanisms

  1. Evidence URLs — AI agents can cross-reference corrections against external sources
  2. Verification metadata — Third-party auditors can attest to file accuracy
  3. Agent discretion — AI systems SHOULD treat reasoning.json as a signal, not gospel, and cross-reference against other data sources
  4. Community reporting — Misuse can be flagged via the GitHub repository
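Mechanism 1 above can be partially automated: an agent might down-weight or flag corrections that lack a verifiable HTTPS evidence_url before ingesting them. A minimal sketch, with an illustrative function name:

```python
from urllib.parse import urlsplit

def flag_unverifiable_corrections(reasoning: dict) -> list[str]:
    """List false_claim entries that lack an HTTPS evidence_url to cross-check."""
    flagged = []
    for pair in reasoning.get("corrections", {}).get("common_hallucinations", []):
        url = pair.get("evidence_url", "")
        # Only an HTTPS URL gives the agent something independent to verify.
        if urlsplit(url).scheme != "https":
            flagged.append(pair.get("false_claim", "<unnamed claim>"))
    return flagged
```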

Read the full Ethics Policy

Contribute

This is a community-driven RFC. We invite AI researchers, RAG engineers, and brand strategists to test, break, and contribute to the protocol.