AI Assistant Setup

The AI Assistant helps users with their investigations by integrating with MCP (Model Context Protocol) and using different LLM providers as its backbone.

The AI Assistant supports all MCP servers, including remote instances, by implementing the Streamable HTTP and Server-Sent Events (SSE) transports.

info

For these features to work, Arcanna needs internet access to reach the LLM providers and the MCP servers.

Setup

  • Click the Assistant tab to access the AI Assistant.
  • Click Open MCP Tools Menu to configure the AI Assistant.

Configuration Tabs Overview

The MCP Tools Menu provides four main configuration tabs: Overview, Assistant Config, MCP Config, and Variables.

Overview

The Overview tab provides a comprehensive view of your AI Assistant configuration:

  • Shows all configured MCP servers and their status
  • Displays available tools from each MCP server
  • Provides a quick health check of your setup
  • Allows users to enable/disable tools or servers for their current conversation

Assistant Config

LLM Providers

We support both cloud and locally hosted LLM providers. Example configuration for AWS Bedrock:

{
  "bedrock": {
    "aws_access_key": "AWS_ACCESS_KEY",
    "aws_secret_key": "AWS_SECRET_KEY",
    "aws_region": "AWS_REGION"
  },
  "sdk": "bedrock",
  "model": "LLM_MODEL"
}
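
Locally hosted models can be configured with the same shape. A minimal sketch, assuming the local server exposes an OpenAI-compatible API (for example, an Ollama or vLLM endpoint); the endpoint, key, and model values below are placeholders for your deployment:

{
  "openai": {
    "endpoint": "http://localhost:11434/v1",
    "api_key": "LOCAL_PLACEHOLDER_KEY"
  },
  "sdk": "openai",
  "model": "YOUR_LOCAL_MODEL"
}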

Additional Configuration Options

All LLM providers support the following optional configuration parameters that can be added to any provider configuration:

System Prompt (system_prompt):

  • Allows you to provide initial instructions, context, rules, and guidelines to the AI assistant before it begins processing user requests. It acts as a foundational configuration, a "meta-instruction" that shapes the AI's behavior, persona, operational boundaries, and interaction style.
  • The system prompt's ability to shape assistant behavior, combined with the MCP Tool Approval mechanism (by default, the user is asked to approve each tool before it executes), acts as a robust guardrail against misuse.
  • The AI Assistant ships with an internal system prompt. Add this parameter to the config to provide additional instructions and tailor the assistant's behavior to your specific needs.

Max Tokens (max_tokens):

  • Controls the maximum number of tokens the AI assistant can generate in a single response.
  • Helps manage response length and API costs.
  • It is specified inside the provider block (e.g., "openai", "bedrock", "gemini").
  • If not specified, uses the provider's default limit.

Context Window Percentage (context_window_percentage):

  • Sets a threshold for the conversation context window.
  • If the conversation context window exceeds the threshold, the user is alerted and advised to start a new conversation.
  • Value should be between 0 and 1.

Arcanna Platform Prompt (enabled_platform_prompt):

  • Enables a platform prompt for each conversation. This prompt describes in more detail what Arcanna is and what it is used for, and is approximately 10,000 characters long.
  • Using it gives the LLM better context. However, if you are using a smaller model, it should remain disabled (as it is by default).
  • Valid values are true/false (defaults to false).

Disable Tool Calling (disable_tool_calling):

  • Some smaller models do not handle tool calling well, and others have limited support by design. Set this to true to disable tool calling for such models.
  • Valid values are true/false (defaults to false).

Example with additional options:

{
  "openai": {
    "endpoint": "https://api.openai.com/v1",
    "api_key": "API_KEY",
    "max_tokens": 4000
  },
  "sdk": "openai",
  "model": "gpt-4",
  "system_prompt": "You are a helpful cybersecurity assistant.",
  "context_window_percentage": 0.9,
  "enabled_platform_prompt": false,
  "disable_tool_calling": false
}

MCP Tool Approval mechanism

  • By default, before executing any tool the user is asked to approve the tool.
  • To bypass the tool approval mechanism and automatically execute all tools, click the flash icon in the message input area. To re-enable approvals, click the flash icon again.

Tool Approval Flow
Example: When asked whether Arcanna is up and running, the LLM uses the health_check tool from the Arcanna MCP Server. The tool is executed only if the user approves it.

MCP Config

The AI Assistant comes with several pre-installed MCP servers. To use them, simply add the configuration for each server you want to enable in your JSON config.

Available Pre-installed MCP Servers:

MCP Server          | Description                                | Repository/URL
Arcanna             | Access Arcanna platform functionality      | Internal Documentation
Arcanna Input       | Input data management for Arcanna          | Internal Documentation
Elasticsearch       | Query and search Elasticsearch indices     | elasticsearch-mcp-server
Splunk              | Search and analyze Splunk data             | splunk-mcp-server
VirusTotal          | Scan files and URLs for malware            | @burtthecoder/mcp-virustotal
Shodan              | Search for internet-connected devices      | @burtthecoder/mcp-shodan
Slack               | Send messages and manage Slack workspaces  | @modelcontextprotocol/server-slack
OpenAI              | Integration with the OpenAI API            | @mzxrai/mcp-openai
Sequential Thinking | Structured problem-solving and reasoning   | @modelcontextprotocol/server-sequential-thinking

How to Configure MCP Servers:

Below are configuration examples for each MCP server. Add only the servers you need to your MCP JSON config:

{
  "elasticsearch-mcp-server": {
    "command": "/opt/venv/bin/elasticsearch-mcp-server",
    "args": [],
    "env": {
      "ELASTICSEARCH_HOSTS": "https://your-elasticsearch-host:port",
      "ELASTICSEARCH_USERNAME": "your-username",
      "ELASTICSEARCH_PASSWORD": "your-password"
    }
  }
}
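
Servers distributed as npm packages (the @burtthecoder, @modelcontextprotocol, and @mzxrai entries in the table above) are typically launched via npx. A sketch for the VirusTotal server, assuming npx is available in the assistant's runtime and that the package reads its key from the VIRUSTOTAL_API_KEY environment variable (verify the exact variable name against the package's documentation):

{
  "virustotal": {
    "command": "npx",
    "args": ["-y", "@burtthecoder/mcp-virustotal"],
    "env": {
      "VIRUSTOTAL_API_KEY": "your-virustotal-api-key"
    }
  }
}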

Remote MCP Servers Configuration:

Before using this feature, ensure that the specified MCP server implements Streamable HTTP or Server-Sent Events (SSE). If so, add it to the configuration as follows:

{"sse_mcp_server": {"url": "http://sse_enabled_mcp_server_address:port/path"}}

Variables

The Variables tab allows you to define environment variables that can be reused across both MCP and Assistant configurations using Jinja template syntax (e.g., {{ variable_name }}).

Usage

  1. Click "Add variable" to create a new variable
  2. Set the variable name and value
  3. Reference the variable in your configurations using {{ VARIABLE_NAME }}

Example

Define a variable:

  • Variable Name: ELASTICSEARCH_HOSTS
  • Variable Value: http://localhost:9200

Use in MCP Config:

{
  "elasticsearch-mcp-server": {
    "env": {
      "ELASTICSEARCH_HOSTS": "{{ ELASTICSEARCH_HOSTS }}"
    }
  }
}
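
The same substitution works in the Assistant Config. For example, assuming you have defined a variable named OPENAI_API_KEY in the Variables tab (a hypothetical name for illustration), the provider block can reference it instead of embedding the key directly:

{
  "openai": {
    "endpoint": "https://api.openai.com/v1",
    "api_key": "{{ OPENAI_API_KEY }}"
  },
  "sdk": "openai",
  "model": "gpt-4"
}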