Installation and Setup from Zero
This guide walks through installing and running the Hedera Guardian AI Toolkit on a clean system.
The goals are to:
Start required infrastructure services
Build toolkit services
Verify container health
Connect the MCP server to an AI client
Confirm the system is operational
Prerequisites
Before starting, ensure the following are installed:
Docker Desktop
Node.js
Claude Desktop (or another MCP-compatible client)
Docker should be installed and running. A clean environment with no running containers is recommended for first-time setup.
Setup Steps
Start Qdrant (Vector Database)
Qdrant is the vector database used to store embeddings.
Pull and start Qdrant:
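The guide does not show the exact command; a typical invocation looks like the following. The container name and volume name are illustrative, and the ports are Qdrant's standard defaults (6333 REST, 6334 gRPC):

```shell
# Pull the official Qdrant image from Docker Hub
docker pull qdrant/qdrant

# Start Qdrant detached, exposing the REST and gRPC ports and
# persisting data to a named volume so embeddings survive restarts
docker run -d --name qdrant \
  -p 6333:6333 -p 6334:6334 \
  -v qdrant_storage:/qdrant/storage \
  qdrant/qdrant
```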
On first run:
The Qdrant image will be pulled from Docker Hub.
The container will start and run continuously.
You can verify:
The container is visible and running in Docker Desktop.
The Qdrant web UI is accessible in your browser.
No collections will exist yet, which is expected.
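These checks can also be done from the command line (the container name `qdrant` assumes the run command above; the endpoints are Qdrant's standard REST API and web UI paths):

```shell
# Confirm the container is up
docker ps --filter name=qdrant

# List collections via the REST API; an empty list is expected on first run
curl http://localhost:6333/collections

# The web UI is served at http://localhost:6333/dashboard
```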
Build the MCP Server
The MCP server exposes semantic search and schema tools.
Build the image:
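A sketch of the build command, assuming a Dockerfile under an `mcp-server/` directory; the image tag and Dockerfile path are illustrative, so substitute the names defined in the repository:

```shell
# Build the MCP server image from the repository root
# (image tag and Dockerfile path are placeholders)
docker build -t guardian-mcp-server -f mcp-server/Dockerfile .
```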
On first build:
Dependencies will be installed.
The Docker image will be created and cached locally.
An embedding model (~2GB) will be downloaded and cached.
Once built, start the MCP server:
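A typical start command might look like this; the image name, container name, and environment variable are assumptions, and `host.docker.internal` lets the container reach Qdrant on the Docker Desktop host:

```shell
# Start the MCP server detached and point it at the running Qdrant instance
# (names and the QDRANT_URL variable are illustrative)
docker run -d --name guardian-mcp-server \
  -e QDRANT_URL=http://host.docker.internal:6333 \
  guardian-mcp-server
```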
You can verify:
The container is running.
The health status reports as healthy.
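If the image defines a Docker health check, the status is visible from the CLI as well (the container name assumes the run command used earlier):

```shell
# "(healthy)" should appear in the STATUS column once the health check passes
docker ps --filter name=guardian-mcp-server --format "{{.Names}}\t{{.Status}}"
```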
Connect the MCP Server to Claude Desktop
Open Claude Desktop.
Navigate to:
Settings → Developer → Edit Config
In the mcpServers section, add the MCP server configuration as defined in the repository user guide.
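The entry follows Claude Desktop's standard `mcpServers` config format; the server name, image name, and arguments below are illustrative, so copy the exact values from the repository user guide:

```json
{
  "mcpServers": {
    "guardian-toolkit": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "guardian-mcp-server"]
    }
  }
}
```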
Save the configuration file.
Important: Completely restart Claude Desktop after saving changes.
Verify MCP Tools Are Available
After restarting Claude:
Open Settings.
Navigate to Connectors.
Confirm the MCP server appears.
Confirm the available tools are listed.
The MCP server should expose tools for:
Methodology document search
Schema property search
Index status checks
Schema creation and modification
If the tools are visible, the connection is successful.
Smoke Test the MCP Connection
Open a new chat in Claude and ask it to search for methodology documents.
Since no documents have been ingested yet, the expected behavior is:
Claude calls the semantic search tool.
The MCP server queries Qdrant.
The response indicates no documents found.
This confirms:
Claude can call MCP tools.
The MCP server is connected to Qdrant.
The system is functioning correctly.
Build the Document Ingestion Worker
The document ingestion worker processes PDF/DOCX files and loads embeddings into Qdrant.
Build the worker image:
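As with the MCP server, the tag and Dockerfile path below are placeholders; use the names defined in the repository:

```shell
# Build the document ingestion worker image from the repository root
# (image tag and Dockerfile path are placeholders)
docker build -t guardian-doc-worker -f workers/document-ingestion/Dockerfile .
```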
Note:
This build takes longer than the MCP server.
It includes heavy ML dependencies for document processing.
To test-run the worker:
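A test run might look like the following; the mounted input directory, environment variable, and image name are assumptions based on the setup above:

```shell
# Run the worker once against a local input directory, then remove the container
# (paths, variable names, and image tag are illustrative)
docker run --rm \
  -v "$(pwd)/input:/app/input" \
  -e QDRANT_URL=http://host.docker.internal:6333 \
  guardian-doc-worker
```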
If no documents exist in the input directory:
The worker will start.
It will detect that no files are present.
It will exit cleanly.
This confirms the ingestion worker is operational.
Optional: Rebuild Without Cache
If you update source code and need a full rebuild:
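This uses Docker's standard `--no-cache` flag; the image tags below match the illustrative names used earlier, and the Compose variant applies only if the repository ships a compose file:

```shell
# Rebuild a single image from scratch, ignoring cached layers
docker build --no-cache -t guardian-mcp-server -f mcp-server/Dockerfile .

# Or, if the repository uses Docker Compose, rebuild all services
docker compose build --no-cache
```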
This clears cached layers and rebuilds images from scratch.
Use this if:
Changes are not reflected.
Cached layers cause inconsistencies.
Service Overview
After setup, the system consists of:
Always Running
Qdrant — Vector database
MCP Server — AI integration layer
On-Demand Workers
Document Ingestion Worker
Schema Ingestion Worker
Workers run when executed, process input, and then stop.
Next Steps
Now that installation is complete:
Add methodology documents to the input directory.
Run document ingestion.
Verify indexed collections in Qdrant.
Perform semantic search through Claude.
Begin generating Guardian schemas.
Proceed to First Ingestion & Semantic Search to continue.