
MCP Servers

Tool servers that expose your infrastructure to AI

MCP (Model Context Protocol) is an open standard for connecting AI models to external tools, data sources, and services. GNETiX uses MCP as the bridge between the AI Director and your infrastructure.

An MCP server exposes three types of capabilities:

| Capability | Description |
| --- | --- |
| Tools | Functions the AI can call (e.g., `show_interface_status`, `restart_pod`, `query_metrics`) |
| Resources | Data the AI can read (e.g., configuration files, documentation, runbooks) |
| Prompts | Reusable prompt templates for common operations |
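Under the hood, these capabilities are exposed over JSON-RPC 2.0. As a rough sketch, this is the shape of a `tools/call` exchange per the MCP specification; the tool name and arguments here are illustrative, not a fixed GNETiX contract:

```python
import json

# Hypothetical request a client sends to invoke a tool.
# "tools/call" is the MCP method name; the device argument is made up.
call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "show_interface_status",
        "arguments": {"device": "core-sw-01"},
    },
}

# A successful tool result carries a list of content blocks:
call_result = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "GigabitEthernet0/1 up/up"}],
        "isError": False,
    },
}

print(json.dumps(call_request, indent=2))
```

Clients discover available tools the same way via a `tools/list` request, which returns each tool's name, description, and input schema.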

How MCP Fits Into GNETiX

```
User Message --> Director (LLM) --> Tool Call --> On-Prem Agent --> MCP Server --> Infrastructure
                                                                ^
                                                                |
                                                         Streamable HTTP
```
  1. The Director decides which tool to call based on the user's request
  2. The tool call is dispatched to an on-prem sub-agent via the WebSocket relay
  3. The sub-agent connects to the appropriate MCP server over Streamable HTTP
  4. The MCP server executes the tool against your infrastructure and returns the result
  5. The result flows back to the Director for synthesis

MCP servers run on-prem alongside your infrastructure. They never need to be exposed to the internet. The sub-agent handles all communication with the cloud via outbound WebSocket.
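Steps 3 and 4 can be sketched as a single dispatch function: the sub-agent wraps the Director's tool call in a JSON-RPC request and POSTs it to the MCP server's Streamable HTTP endpoint. The endpoint URL, port, and `post` callable below are assumptions for illustration (in practice an HTTP client such as httpx would fill that role), and a stand-in transport is used so the sketch runs without a live server:

```python
import json
from typing import Callable

def dispatch_tool_call(tool: str, arguments: dict,
                       post: Callable[[str, bytes, dict], bytes],
                       endpoint: str = "http://localhost:8080/mcp") -> dict:
    """Forward one Director tool call to an MCP server over Streamable HTTP.

    `post` abstracts the HTTP client; the endpoint path and port are
    illustrative, not fixed by GNETiX.
    """
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }).encode()
    headers = {
        "Content-Type": "application/json",
        # Streamable HTTP responses may be plain JSON or an SSE stream
        # for long-running tools, so clients accept both:
        "Accept": "application/json, text/event-stream",
    }
    raw = post(endpoint, body, headers)
    return json.loads(raw)["result"]

# Stand-in transport so the sketch runs without a live MCP server:
def fake_post(url: str, body: bytes, headers: dict) -> bytes:
    request = json.loads(body)
    text = f"called {request['params']['name']}"
    return json.dumps({
        "jsonrpc": "2.0", "id": request["id"],
        "result": {"content": [{"type": "text", "text": text}]},
    }).encode()

result = dispatch_tool_call("restart_pod", {"namespace": "prod"}, fake_post)
print(result["content"][0]["text"])  # → called restart_pod
```

The returned result is what flows back through the relay to the Director for synthesis (step 5).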

Included Example Servers

GNETiX ships with reference MCP server implementations that you can use as starting points:

| Server | Description | Technology |
| --- | --- | --- |
| `network-mcp` | Network device management (show commands, config changes) | Netmiko |
| `kubernetes-mcp` | Kubernetes cluster operations (pods, deployments, logs) | kubernetes-client |
| `dynatrace-mcp` | Monitoring data retrieval (problems, metrics, entities) | Dynatrace API |
| `cribl-mcp` | Log pipeline management | Cribl API |

Each example server is a standalone Python project with a `Dockerfile`, `pyproject.toml`, and well-documented tool definitions.
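To give a feel for what such a tool definition looks like, here is a hypothetical tool in the style of `network-mcp`: the schema follows the MCP `tools/list` format (`name`, `description`, `inputSchema`), while the handler shows how a Netmiko-backed implementation might look. The tool name, fields, and credentials are illustrative, not the shipped server's exact contract:

```python
# Hypothetical tool definition, as it would appear in a tools/list
# response per the MCP spec (field names are from the spec; the
# tool itself is a made-up example).
SHOW_INTERFACE_STATUS = {
    "name": "show_interface_status",
    "description": "Run 'show interfaces status' on a network device "
                   "and return the raw output.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "host": {"type": "string", "description": "Device hostname or IP"},
            "device_type": {"type": "string", "default": "cisco_ios"},
        },
        "required": ["host"],
    },
}

def show_interface_status(host: str, device_type: str = "cisco_ios") -> str:
    """Handler body as it might look with Netmiko (illustrative sketch)."""
    from netmiko import ConnectHandler  # Netmiko's real entry point
    with ConnectHandler(device_type=device_type, host=host,
                        username="...", password="...") as conn:
        return conn.send_command("show interfaces status")
```

Keeping the JSON schema next to the handler is the pattern the example servers follow: the schema tells the Director what arguments the tool accepts, and the handler does the actual device work on-prem.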

Next Steps