Here is a crazy idea! We've all used Copilot to generate Verilog RTL or SystemVerilog tests. Now, what if we gave an agent direct access to read signals, drive ports, wait for delays, and run a few clock cycles? Essentially, giving the agent full access and asking it to go to town on the RTL. I hear you saying "why?". My usual reply is: what else is there to do on such a lovely night!
But really, the advantage could be that we don't need to generate code (which might have syntax errors, or much worse, semantic errors). The idea is to create an MCP server that starts cocotb in the background and uses ZeroMQ to send commands from yet another MCP tool to this background cocotb simulation. Again, why? Because cocotb doesn't work with asyncio, so the simulation has to run in its own process with some message channel into it. Technically I could use HTTP, but why would I do that when we have ZeroMQ?
How
Full disclosure: I wrote this from scratch with a LangChain agent, but I was too lazy to rewrite it with MCP for Copilot, so I asked Copilot to do it for me.
The Copilot agent communicates with an MCP server that manages cocotb simulations, which controls Verilog RTL.
┌─────────────────┐         ┌──────────────────┐         ┌───────────────────────┐
│  Copilot Agent  │ ◄─MCP─► │    MCP Server    │ ◄─ZMQ─► │      Cocotb Sim       │
│   (VS Code)     │         │ (cocotb_mcp.py)  │         │ (test_interactive.py) │
└─────────────────┘         └──────────────────┘         └───────────────────────┘
                                                                     │
                                                                     ▼
                                                         ┌───────────────────────┐
                                                         │      Verilog RTL      │
                                                         └───────────────────────┘
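On the ZMQ link, commands and replies travel as small JSON messages. As a concrete (hypothetical) example of one exchange — the exact field names here are my assumption, mirroring the responses shown in the transcript later in this post:

```python
import json

# One hypothetical set_signal exchange over the ZeroMQ link.
# Field names mirror the transcript later in the post; treat the exact
# shape as an assumption, not a documented protocol.
request = {"command": "set_signal", "signal": "a", "value": 5}
reply = {"status": "ok", "signal": "a", "value": 5}

# Both sides serialize to JSON on the wire; the round trip is lossless.
wire = json.dumps(request)
assert json.loads(wire) == request
```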
Installing the MCP server for Copilot is easy enough. Just add a configuration file at .vscode/mcp.json:
{
  "servers": {
    "cocotb-simulator": {
      "type": "stdio",
      "command": "${workspaceFolder}/.venv/bin/python3",
      "args": ["${workspaceFolder}/src/cocotb_mcp.py"]
    }
  },
  "inputs": []
}
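For the config above to work, the .venv it points at needs the dependencies installed. Something like this should do it (the package names — mcp for the MCP Python SDK, cocotb, and pyzmq — are my assumption of what the server imports):

```shell
# One-time setup for the venv referenced in .vscode/mcp.json.
# Package names are an assumption about the server's dependencies.
python3 -m venv .venv
.venv/bin/pip install mcp cocotb pyzmq
```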
In src/cocotb_mcp.py, we need to declare the tools that the agent can call. These tools define the interface for starting simulations, sending commands, and managing simulation state:
@app.list_tools()
async def list_tools() -> list[Tool]:
    """List available interactive cocotb simulation tools."""
    return [
        Tool(
            name="start_interactive_simulation",
            description="Start an interactive cocotb simulation that listens for commands via ZeroMQ. This allows controlling the simulation dynamically by sending commands to set signals, read values, and control timing.",
            inputSchema={
                "type": "object",
                "properties": {
                    "verilog_file": {
                        "type": "string",
                        "description": "Path to the Verilog file to simulate",
                    },
                    "top_level": {
                        "type": "string",
                        "description": "Top-level module name. If not provided, inferred from file",
                    },
                    "simulator": {
                        "type": "string",
                        "description": "Simulator to use. Default: 'icarus'",
                        "enum": ["icarus", "verilator"],
                    },
                    "zmq_port": {
                        "type": "integer",
                        "description": "ZeroMQ port for command interface. Default: 5555",
                    },
                    "waves": {
                        "type": "boolean",
                        "description": "Enable waveform dumping. Default: true",
                    },
                },
                "required": ["verilog_file"],
            },
        ),
        Tool(
            name="send_sim_command",
            description="Send a command to a running interactive simulation via ZeroMQ. Commands: set_signal, get_signal, wait_ns, wait_clocks, start_clock, list_signals, get_sim_time, stop",
            inputSchema={
                "type": "object",
                "properties": {
                    "command": {
                        "type": "string",
                        "description": "Command to send (set_signal, get_signal, wait_ns, wait_clocks, start_clock, list_signals, get_sim_time, ping, stop)",
                    },
                    "signal": {
                        "type": "string",
                        "description": "Signal name (for set_signal/get_signal)",
                    },
                    "value": {
                        "description": "Signal value (for set_signal)",
                    },
                    "time": {
                        "type": "integer",
                        "description": "Time in nanoseconds (for wait_ns)",
                    },
                    "cycles": {
                        "type": "integer",
                        "description": "Number of clock cycles (for wait_clocks)",
                    },
                    "period_ns": {
                        "type": "integer",
                        "description": "Clock period in nanoseconds (for start_clock). Default: 10",
                    },
                    "zmq_port": {
                        "type": "integer",
                        "description": "ZeroMQ port. Default: 5555",
                    },
                },
                "required": ["command"],
            },
        ),
        Tool(
            name="stop_interactive_simulation",
            description="Stop a running interactive simulation",
            inputSchema={
                "type": "object",
                "properties": {
                    "sim_id": {
                        "type": "string",
                        "description": "Simulation ID to stop. Default: stops the most recent",
                    },
                },
            },
        ),
        Tool(
            name="list_running_simulations",
            description="List all currently running interactive simulations",
            inputSchema={
                "type": "object",
                "properties": {},
            },
        ),
    ]
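The schema above only declares the interface; on the simulation side, each command still has to be mapped onto the DUT. The real dispatch lives inside the cocotb process, but its shape is roughly this — a minimal sketch of my own, where a plain dict stands in for the cocotb dut handle, and the wait/clock commands are elided because they need cocotb triggers:

```python
# Hypothetical dispatcher mirroring the send_sim_command command set.
# `signals` is a plain dict here; in the real server it would be the
# cocotb `dut` handle, and wait_ns/wait_clocks/start_clock would await
# cocotb triggers instead of returning immediately.
def handle_command(msg: dict, signals: dict) -> dict:
    cmd = msg.get("command")
    if cmd == "ping":
        return {"status": "ok", "reply": "pong"}
    if cmd == "set_signal":
        signals[msg["signal"]] = msg["value"]
        return {"status": "ok", "signal": msg["signal"], "value": msg["value"]}
    if cmd == "get_signal":
        return {"status": "ok", "signal": msg["signal"],
                "value": signals.get(msg["signal"])}
    if cmd == "list_signals":
        return {"status": "ok", "signals": sorted(signals)}
    return {"status": "error", "error": f"unknown command: {cmd}"}
```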
The cocotb test file test_interactive.py contains a simple test that creates a ZeroMQ server to receive and execute simulation commands:
@cocotb.test()
async def interactive_test(dut):
    """
    Interactive test that waits for commands via ZeroMQ.

    This test creates a ZeroMQ server that listens for commands to control
    the simulation. The simulation will run indefinitely until a 'stop'
    command is received.
    """
    dut._log.info("=== Starting Interactive Simulation ===")
    dut._log.info("Waiting for commands on ZeroMQ port 5555...")
    dut._log.info("Commands: set_signal, get_signal, wait_ns, wait_clocks, start_clock, stop")
    server = SimulationServer(dut, port=5555)
    try:
        await server.run()
    finally:
        server.close()
        dut._log.info("=== Interactive Simulation Ended ===")
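The interesting constraint inside SimulationServer.run() is that it cannot block on the socket: a blocking recv would stall the cocotb scheduler, and the simulator with it. The usual trick is a non-blocking receive with a short Timer await between polls. Here is a simulator-free sketch of that loop (my reconstruction, not the actual server code), with a queue.Queue standing in for the ZeroMQ socket — get_nowait raises queue.Empty exactly where a zmq.NOBLOCK recv raises zmq.Again:

```python
import queue

def drain_commands(inbox: queue.Queue) -> list:
    """Non-blocking poll: handle whatever is queued, then yield back.

    In the real cocotb coroutine the `break` on an empty queue would be
    `await Timer(1, units="ns"); continue`, so simulation time keeps
    advancing between polls instead of busy-waiting.
    """
    handled = []
    while True:
        try:
            msg = inbox.get_nowait()      # zmq: socket.recv_json(flags=zmq.NOBLOCK)
        except queue.Empty:               # zmq: except zmq.Again
            break
        handled.append(msg)
        if msg.get("command") == "stop":  # 'stop' ends the interactive test
            break
    return handled
```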
Example
Just to convince myself that this works, I used the following prompt to verify a simple Verilog adder with an intentional bug: I used - instead of +.
Prompt given to the agent:
Verify the RTL at examples/adder.v using cocotb interactive simulation from cocotb-simulator mcp.
Don't read the verilog and depend only on the tools.
Compare the output with expected output based on this adder module.
Agent’s reasoning: The agent plans the next steps by looking up the available tools and determining what to do next.
The user wants me to verify RTL at examples/adder.v using cocotb interactive simulation from cocotb-simulator mcp. They specifically say not to read the verilog and depend only on the tools. I need to compare the output with expected output based on the adder module.
Looking at the deferred tools list, I can see several cocotb-simula related tools:
mcp_cocotb-simula_list_running_simulations
mcp_cocotb-simula_send_sim_command
mcp_cocotb-simula_start_interactive_simulation
mcp_cocotb-simula_stop_interactive_simulation
I need to:
First, search for these tools using tool_search_tool_regex to load them
Start an interactive simulation with the adder.v file
Send test commands to verify the adder functionality
Compare the output with expected results for an adder
Agent executes test commands: Setting input signals and reading outputs to verify behavior.
✓ Command executed successfully
Command: set_signal
Response: {
  "status": "ok",
  "signal": "a",
  "value": 5
}
✓ Command executed successfully
Command: set_signal
Response: {
  "status": "ok",
  "signal": "b",
  "value": 3
}
✓ Command executed successfully
Command: get_signal
Response: {
  "status": "ok",
  "signal": "sum",
  "value": 2
}
Agent’s findings: after testing multiple input combinations, the agent flags the failing tests with expected/actual output.
CRITICAL BUG FOUND: The module is performing subtraction instead of addition!
Test Results with 8-bit Values:
Test  Inputs (a,b,cin)  Expected (Addition)  Actual (Subtraction)          Status
1     5+3+0             sum=8, cout=0        sum=2, cout=0 (5-3)           ✗ FAIL
2     10+20+1           sum=31, cout=0       sum=247, cout=1 (10-20+1=-9)  ✗ FAIL
3     200+100+0         sum=44, cout=1      sum=100, cout=0 (200-100)     ✗ FAIL
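The agent's "actual" column checks out if you redo the math modulo 2^8: with - in place of +, the DUT computes (a - b + cin) mod 256. A quick sanity check of the reported sums:

```python
# Model of the buggy DUT: `-` where `+` should be, 8-bit wraparound on sum.
def buggy_sum(a: int, b: int, cin: int = 0, width: int = 8) -> int:
    return (a - b + cin) % (1 << width)

assert buggy_sum(5, 3, 0) == 2      # agent saw sum=2
assert buggy_sum(10, 20, 1) == 247  # -9 wraps to 247 in 8 bits
assert buggy_sum(200, 100, 0) == 100
```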
Conclusion
Is this better than generating tests (either cocotb tests or SystemVerilog UVM tests)? My answer is “who cares!”. To be honest, it’s probably a waste of tokens to let an agent do this repeatedly, and it will be slower than generating tests and running them. But it’s surely fun to watch :)