
AI Assistant Setup

The AI Assistant helps users with their investigations by integrating with the Model Context Protocol (MCP) and using different LLM providers as its backbone.

The AI Assistant supports all MCP servers, including remote instances, through the Streamable HTTP and Server-Sent Events (SSE) transports.

Setup

  • Click the Assistant tab to access the AI Assistant.
  • Click Open MCP Tools Menu to configure the AI Assistant.

From this menu, configure:

  • LLM settings (API key, model, etc.)
  • MCP servers (e.g., Arcanna MCP server, Elasticsearch, Splunk, etc.)

Cloud LLM configuration

Supported Cloud LLM providers:

  • Bedrock ("bedrock")
  • OpenAI ("openai")
  • Anthropic ("anthropic")

Add the LLM configuration parameters in the config as follows:

Bedrock:

{
  "bedrock": {
    "aws_access_key": "AWS_ACCESS_KEY",
    "aws_secret_key": "AWS_SECRET_KEY",
    "aws_region": "AWS_REGION"
  },
  "sdk": "bedrock",
  "model": LLM_MODEL,
  "system_prompt" (optional): SYSTEM_PROMPT,
  "max_tokens" (optional): MAX_TOKENS,
  "context_window_percentage" (optional): CONTEXT_WINDOW_PERCENTAGE
}

OpenAI:

{
  "openai": {
    "endpoint": "OPENAI_API" (defaults to "https://api.openai.com/v1"),
    "api_key": API_KEY
  },
  "sdk": "openai",
  "model": LLM_MODEL,
  "system_prompt" (optional): SYSTEM_PROMPT,
  "max_tokens" (optional): MAX_TOKENS,
  "context_window_percentage" (optional): CONTEXT_WINDOW_PERCENTAGE
}

Azure OpenAI:

{
  "openai": {
    "endpoint": "AZURE_OPENAI_API",
    "api_key": API_KEY,
    "api_version": API_VERSION (defaults to "2025-01-01-preview")
  },
  "sdk": "openai",
  "model": LLM_MODEL,
  "system_prompt" (optional): SYSTEM_PROMPT,
  "max_tokens" (optional): MAX_TOKENS,
  "context_window_percentage" (optional): CONTEXT_WINDOW_PERCENTAGE
}

Anthropic:

{
  "anthropic": {
    "api_key": API_KEY
  },
  "sdk": "anthropic",
  "model": LLM_MODEL,
  "system_prompt" (optional): SYSTEM_PROMPT,
  "max_tokens" (optional): MAX_TOKENS,
  "context_window_percentage" (optional): CONTEXT_WINDOW_PERCENTAGE
}

Local-hosted LLM configuration

Supported local-hosted LLM providers:

  • Ollama ("ollama")

Ollama supports running a wide variety of open-source Large Language Models (LLMs) locally, including popular families like Llama, Mistral, Gemma, Phi, DeepSeek, and Qwen.

Add the LLM configuration parameters in the config as follows:

Ollama:

{
  "ollama": {
    "endpoint": "OLLAMA_API"
  },
  "sdk": "ollama",
  "model": LLM_MODEL,
  "system_prompt" (optional): SYSTEM_PROMPT,
  "max_tokens" (optional): MAX_TOKENS,
  "context_window_percentage" (optional): CONTEXT_WINDOW_PERCENTAGE
}
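
A filled-in Ollama configuration might look like the following sketch; the endpoint assumes Ollama's default local port (11434), and the model name is an illustrative placeholder for any model you have pulled locally:

{
  "ollama": {
    "endpoint": "http://localhost:11434"
  },
  "sdk": "ollama",
  "model": "llama3.1"
}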

System Prompt:

  • Allows users to provide a set of initial instructions, context, rules, and guidelines to an AI assistant before it begins interacting or processing user requests. It acts as a foundational configuration or a "meta-instruction" that fundamentally shapes the AI's behavior, persona, operational boundaries, and interaction style.
  • The system prompt's ability to shape assistant behavior, combined with the MCP Tool Approval mechanism (by default, the user is asked to approve each tool before it is executed), acts as a robust guardrail against misuse.
  • The AI Assistant comes with an internal system prompt. Add this parameter in config to provide additional instructions and tailor its behavior to your specific needs.

Example using "bedrock" as the LLM provider:
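
A filled-in Bedrock configuration might look like the following sketch; the credentials and region are placeholders, and the model ID and system prompt are illustrative (check your AWS account for the model IDs available to you):

{
  "bedrock": {
    "aws_access_key": "AWS_ACCESS_KEY",
    "aws_secret_key": "AWS_SECRET_KEY",
    "aws_region": "us-east-1"
  },
  "sdk": "bedrock",
  "model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
  "system_prompt": "You are a SOC analyst assistant. Answer concisely and name the tools you used.",
  "max_tokens": 4096
}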

Connect RAG Vector Database

Integrating a Retrieval Augmented Generation (RAG) system improves LLM responses by grounding them in proprietary data, resulting in more accurate and contextually relevant answers.

The assistant can use an Elasticsearch vector database to perform Retrieval Augmented Generation (RAG).

  • Add the following settings to the configuration to enable RAG using Elasticsearch as a vector database:
{
  "rag_es_server": RAG_ELASTICSEARCH_SERVER,
  "rag_vector_es_index": RAG_VECTOR_ELASTICSEARCH_INDEX,
  "rag_embeddings_model": RAG_EMBEDDINGS_MODEL
}
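
As a sketch, a filled-in RAG configuration might look like the following; the server address, index name, and embeddings model name are illustrative placeholders, not required values:

{
  "rag_es_server": "https://elasticsearch-host:9200",
  "rag_vector_es_index": "rag-knowledge-base",
  "rag_embeddings_model": "sentence-transformers/all-MiniLM-L6-v2"
}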

MCP Servers Configuration:

  • AI Assistant pre-installed MCP servers: Arcanna, Arcanna Input, Elasticsearch, Splunk, VirusTotal, Shodan, Sequential Thinking.

  • Add the MCP servers to use. In our example, we use the pre-installed servers:

  • We can see the available MCP tools and servers by clicking the Overview tab:

Example Config:

{
  "elasticsearch-mcp-server": {
    "command": "/opt/venv/bin/elasticsearch-mcp-server",
    "args": [],
    "env": {
      "ELASTICSEARCH_HOSTS": "https://IP:PORT",
      "ELASTICSEARCH_USERNAME": ELASTICSEARCH_USERNAME,
      "ELASTICSEARCH_PASSWORD": ELASTICSEARCH_PASSWORD
    }
  },
  "virustotal-mcp": {
    "command": "/usr/local/bin/mcp-virustotal",
    "args": [],
    "env": {
      "VIRUSTOTAL_API_KEY": VIRUSTOTAL_API_KEY
    }
  },
  "shodan-mcp": {
    "command": "/usr/local/bin/mcp-shodan",
    "args": [],
    "env": {
      "SHODAN_API_KEY": SHODAN_API_KEY
    }
  },
  "arcanna-mcp": {
    "command": "/opt/venv/bin/arcanna-mcp-server",
    "args": [],
    "env": {
      "ARCANNA_MANAGEMENT_API_KEY": ARCANNA_MANAGEMENT_API_KEY,
      "ARCANNA_HOST": ARCANNA_HOST
    }
  },
  "splunk-mcp-server": {
    "command": "/root/.local/bin/poetry",
    "args": [
      "--directory",
      "/app/splunk_mcp/",
      "run",
      "python",
      "splunk_mcp.py",
      "stdio"
    ],
    "env": {
      "SPLUNK_HOST": SPLUNK_HOST,
      "SPLUNK_PORT": SPLUNK_PORT,
      "SPLUNK_USERNAME": SPLUNK_USERNAME,
      "SPLUNK_PASSWORD": SPLUNK_PASSWORD,
      "SPLUNK_SCHEME": "https",
      "VERIFY_SSL": "false",
      "FASTMCP_LOG_LEVEL": "INFO"
    }
  },
  "arcanna-input-mcp": {
    "command": "/opt/venv/bin/arcanna-mcp-input-server",
    "args": [],
    "env": {
      "ARCANNA_INPUT_API_KEY": ARCANNA_INPUT_API_KEY,
      "ARCANNA_HOST": ARCANNA_HOST,
      "ARCANNA_USER": ARCANNA_USER
    }
  },
  ... LLM SETTINGS ...
}

Remote MCP Servers Configuration:

Before using this feature, ensure that the specified MCP server implements Streamable HTTP or Server-Sent Events (SSE). If so, add it to the configuration as follows:

{
  "sse_mcp_server": {
    "url": "http://sse_enabled_mcp_server_address:port/path"
  }
}

MCP Tool Approval mechanism

  • By default, before executing any tool the user is asked to approve the tool.
  • To bypass the tool approval mechanism and automatically execute all tools, add the "yolo_mode": true flag to the configuration.
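
As a sketch, the flag sits at the top level of the configuration alongside the LLM settings (the "... LLM SETTINGS ..." placeholder stands for the provider block described above):

{
  "yolo_mode": true,
  ... LLM SETTINGS ...
}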

  • Example: asking whether Arcanna is up and running. The LLM uses the health_check tool from the Arcanna MCP server; the tool is executed only if the user approves it.