AI Assistant Setup

The AI Assistant helps users with their investigations by integrating with MCP (Model Context Protocol) servers and using different LLM providers as its backbone.

The AI Assistant supports all MCP servers, including remote instances, by implementing the Streamable HTTP and Server-Sent Events (SSE) transports.

Setup

  • Click the Assistant tab to access the AI Assistant.
  • Click Open MCP Tools Menu to configure the AI Assistant.

Configuration Tabs Overview

The MCP Tools Menu provides four main configuration tabs: Overview, Assistant Config, MCP Config, and Variables.

Overview

The Overview tab provides a comprehensive view of your AI Assistant configuration:

  • Shows all configured MCP servers and their status
  • Displays available tools from each MCP server
  • Provides a quick health check of your setup
  • Allows users to enable/disable tools or servers for their current conversation

Assistant Config

LLM Providers

We support both cloud and locally hosted LLM providers. For example, an Amazon Bedrock provider is configured as follows:

{
  "bedrock": {
    "aws_access_key": "AWS_ACCESS_KEY",
    "aws_secret_key": "AWS_SECRET_KEY",
    "aws_region": "AWS_REGION"
  },
  "sdk": "bedrock",
  "model": "LLM_MODEL"
}
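Locally hosted providers that expose an OpenAI-compatible API can, under that assumption, be configured through the openai SDK entry. A minimal sketch, assuming an Ollama instance on its default port (the endpoint, placeholder key, and model name are illustrative, not prescribed by this product):

{
  "openai": {
    "endpoint": "http://localhost:11434/v1",
    "api_key": "LOCAL_PLACEHOLDER_KEY"
  },
  "sdk": "openai",
  "model": "llama3"
}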

Additional Configuration Options

All LLM providers support the following optional configuration parameters that can be added to any provider configuration:

System Prompt (system_prompt):

  • Lets you provide a set of initial instructions, context, rules, and guidelines to the AI assistant before it begins interacting or processing user requests. It acts as a foundational configuration, a "meta-instruction" that shapes the AI's behavior, persona, operational boundaries, and interaction style.
  • The system prompt's ability to shape assistant behavior, combined with the MCP Tool Approval mechanism (by default, the user is asked to approve each tool before it is executed), acts as a robust guardrail against misuse.
  • The AI Assistant ships with an internal system prompt. Add this parameter to your config to provide additional instructions and tailor the assistant's behavior to your specific needs.

Max Tokens (max_tokens):

  • Controls the maximum number of tokens the AI assistant can generate in a single response.
  • Helps manage response length and API costs.
  • If not specified, uses the provider's default limit.

Context Window Percentage (context_window_percentage):

  • Sets an alert threshold for the conversation context window, expressed as a fraction of the model's context size.
  • If the conversation exceeds this threshold, the user is alerted and advised to start a new conversation.
  • Value should be between 0 and 1.

Example with additional options:

{
  "openai": {
    "endpoint": "https://api.openai.com/v1",
    "api_key": "API_KEY"
  },
  "sdk": "openai",
  "model": "gpt-4",
  "system_prompt": "You are a helpful cybersecurity assistant.",
  "max_tokens": 4000,
  "context_window_percentage": 0.9
}

MCP Tool Approval Mechanism

  • By default, the user is asked to approve each tool before it is executed.
  • To bypass the tool approval mechanism and automatically execute all tools, add the "auto_tool_approval": true flag to the configuration (see the sketch after the example below).

Tool Approval Flow

Example: When asked whether Arcanna is up and running, the LLM uses the health_check tool from the Arcanna MCP Server. The tool is executed only if the user approves it.
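The placement of the flag is shown in this minimal sketch, assuming it sits at the top level of the Assistant Config alongside the other optional parameters:

{
  "openai": {
    "endpoint": "https://api.openai.com/v1",
    "api_key": "API_KEY"
  },
  "sdk": "openai",
  "model": "gpt-4",
  "auto_tool_approval": true
}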

MCP Config

The AI Assistant comes with several pre-installed MCP servers. To use them, simply add the configuration for each server you want to enable in your JSON config.

Available Pre-installed MCP Servers:

MCP Server            Description                                    Repository/URL
Arcanna               Access Arcanna platform functionality          Internal Documentation
Arcanna Input         Input data management for Arcanna              Internal Documentation
Elasticsearch         Query and search Elasticsearch indices         elasticsearch-mcp-server
Splunk                Search and analyze Splunk data                 splunk-mcp-server
VirusTotal            Scan files and URLs for malware                @burtthecoder/mcp-virustotal
Shodan                Search for internet-connected devices          @burtthecoder/mcp-shodan
Slack                 Send messages and manage Slack workspaces      @modelcontextprotocol/server-slack
OpenAI                Integration with OpenAI API                    @mzxrai/mcp-openai
Sequential Thinking   Structured problem-solving and reasoning       @modelcontextprotocol/server-sequential-thinking

How to Configure MCP Servers:

Below are configuration examples for each MCP server. Add only the servers you need to your MCP JSON config:

{
  "elasticsearch-mcp-server": {
    "command": "/opt/venv/bin/elasticsearch-mcp-server",
    "args": [],
    "env": {
      "ELASTICSEARCH_HOSTS": "https://your-elasticsearch-host:port",
      "ELASTICSEARCH_USERNAME": "your-username",
      "ELASTICSEARCH_PASSWORD": "your-password"
    }
  }
}
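The Node-based servers in the table above follow the same pattern but are typically launched via npx. A sketch for the VirusTotal server, assuming it reads its API key from a VIRUSTOTAL_API_KEY environment variable (check the package README for the exact variable names):

{
  "virustotal": {
    "command": "npx",
    "args": ["-y", "@burtthecoder/mcp-virustotal"],
    "env": {
      "VIRUSTOTAL_API_KEY": "your-api-key"
    }
  }
}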

Remote MCP Servers Configuration:

Before using this feature, ensure that the specified MCP server implements Streamable HTTP or Server-Sent Events (SSE). If so, add it to the configuration as follows:

{
  "sse_mcp_server": {
    "url": "http://sse_enabled_mcp_server_address:port/path"
  }
}
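A remote server exposing the Streamable HTTP transport should be addable the same way; the key name is arbitrary, and the exact path depends on the server (a sketch, assuming a conventional /mcp endpoint):

{
  "remote_mcp_server": {
    "url": "http://streamable_http_mcp_server_address:port/mcp"
  }
}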

Variables

The Variables tab allows you to define environment variables that can be reused across both MCP and Assistant configurations using Jinja template syntax (e.g., {{ variable_name }}).

Usage

  1. Click "Add variable" to create a new variable
  2. Set the variable name and value
  3. Reference the variable in your configurations using {{ VARIABLE_NAME }}

Example

Define a variable:

  • Variable Name: ELASTICSEARCH_HOSTS
  • Variable Value: http://localhost:9200

Use in MCP Config:

{
  "elasticsearch-mcp-server": {
    "env": {
      "ELASTICSEARCH_HOSTS": "{{ ELASTICSEARCH_HOSTS }}"
    }
  }
}
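Variables can be referenced in the Assistant Config the same way. A minimal sketch reusing a hypothetical OPENAI_API_KEY variable:

{
  "openai": {
    "endpoint": "https://api.openai.com/v1",
    "api_key": "{{ OPENAI_API_KEY }}"
  },
  "sdk": "openai",
  "model": "gpt-4"
}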