One-mcp mcphost explaining instead of executing commands

Hi, when I initially installed the opennebula-cli, ollama, one-mcp and mcphost in a temporary test VM, I was able to get a valid response back from mcphost with the prompt "List my opennebula hosts".

I re-installed opennebula-cli, ollama, one-mcp and mcphost on a different VM. Now, when I send the prompt "List my opennebula hosts", I get an explanation instead of an execution.

opennebula-cli is working:

(one-mcp) [linux@acemagic one-mcp]$ onehost list
  ID NAME                                                                                CLUSTER    TVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   7 one-node-03                                                                         default      3   600 / 4800 (12%)  12G / 251.1G (4%) on  
   2 one-node-02                                                                         default      0                  -                  - dsbl
   1 one-node-01                                                                         default      3   800 / 3200 (25%)  24G / 61.8G (38%) on  
   0 one-mast-00                                                                         default      3   800 / 4800 (16%) 16G / 125.2G (12%) on  
(one-mcp) [linux@acemagic one-mcp]$ 

mcp-server is running.

(one-mcp) [linux@acemagic one-mcp]$ python main.py
2025-09-18 16:39:50 INFO     opennebula_mcp.logging_config: File logging enabled: /home/linux/one-mcp/log/2025_09_18_16_39_50.log
[09/18/25 16:39:50] INFO     File logging enabled: /home/linux/one-mcp/log/2025_09_18_16_39_50.log                           logging_config.py:214
2025-09-18 16:39:50 INFO     opennebula_mcp.main: Starting MCP server - allow_write: False
mcphost

┃  Model loaded: ollama (mistral)
┃   MCPHost System (16:28)
┃
┃  Model loaded successfully on GPU
┃   MCPHost System (16:28)
┃
┃  Loaded 11 tools from MCP servers
┃   MCPHost System (16:28)
┃
┃  ## Configured MCP Servers
┃  1. opennebula-mcp-server
┃   MCPHost System (16:28)
┃
┃  ## Available Tools
┃  1. opennebula-mcp-server__list_clusters
┃  2. opennebula-mcp-server__list_hosts
┃  3. opennebula-mcp-server__list_datastores
┃  4. opennebula-mcp-server__list_networks
┃  5. opennebula-mcp-server__list_images
┃  6. opennebula-mcp-server__get_vm_status
┃  7. opennebula-mcp-server__execute_command
┃  8. opennebula-mcp-server__list_vms
┃  9. opennebula-mcp-server__instantiate_vm
┃  10. opennebula-mcp-server__manage_vm
┃  11. opennebula-mcp-server__list_templates
┃   MCPHost System (16:28)
┃
┃  list my openneula hosts

The request to list my opennebula hosts returns the following, explaining how to implement it instead of executing it (note that the suggested onehostlist is not even a real command; the working CLI command, as shown above, is onehost list):

┃  To list your OpenNebula hosts, you can use the onehostlist command in the OneHost CLI. Here's a simple Python script that utilizes
┃  this command to list all of your hosts:
┃
┃  import subprocess
┃
┃  def list_hosts():
┃      process = subprocess.Popen("onehostlist", stdout=subprocess.PIPE, shell=True)
┃      output, error = process.communicate()
┃      if error:
┃          print(f"Error occurred: {error}")
┃      else:
┃          print(output.decode().strip())
┃
┃  if __name__ == "__main__":
┃      list_hosts()
┃
┃  Save this script as list_hosts.py, and you can run it by executing python list_hosts.py in your terminal or command prompt. This will
┃  display a list of all your OpenNebula hosts.
┃
┃  Please make sure that the OneHost CLI is installed on your system before running this script. If not, you can find installation
┃  instructions here: https://docs.opennebula.org/5.6/onehost_cli/installation.html
┃   mistral (16:29)

.mcphost.yml

# ~/.mcphost.yml

# Add the OpenNebula server to your list of MCP servers
mcpServers:
  opennebula-mcp-server:
    type: "local"
    command: ["uv", "run", "-p", "/home/linux/one-mcp/.venv/bin/python", "--", "/home/linux/one-mcp/main.py", "--no-allow-write", "--log-level", "DEBUG"]
    cwd: "/home/linux/one-mcp"

# (Optional) Configure your preferred model and other settings
model: "${env://MODEL:-ollama:mistral}"
provider-api-key: "${env://OPENAI_API_KEY:-lm-studio}"  # Required - will fail if not set

~/.bashrc

#
export ONE_XMLRPC="http://192.168.1.40:2633/RPC2"
export OPENAI_API_KEY="."
export DEBUG="true"
export MODEL="ollama:mistral"
#

  1. As you can see from above, executing python main.py displays INFO opennebula_mcp.main: Starting MCP server.
  2. The opennebula-cli returns the host details.
  3. When in the mcphost prompt, /service and /tools return the correct details.

Executing the prompt list my openneula hosts returns instructions instead.

It's like the opennebula-mcp-server is not receiving the command.
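One way to rule the server in or out is to speak MCP's JSON-RPC-over-stdio protocol to it directly, with neither mcphost nor the LLM in the loop. If this returns the 11 tools, the server side is fine and the problem is the model never asking for a tool call. This is only a rough sketch: the paths are taken from the .mcphost.yml above, and the protocol-version string is an assumption.

```python
import json
import subprocess

def rpc(msg_id, method, params):
    """Build a newline-delimited JSON-RPC 2.0 request line."""
    return json.dumps({"jsonrpc": "2.0", "id": msg_id,
                       "method": method, "params": params}) + "\n"

def probe_server():
    """Handshake with the MCP server over stdio and list its tools."""
    proc = subprocess.Popen(
        ["uv", "run", "-p", "/home/linux/one-mcp/.venv/bin/python",
         "--", "/home/linux/one-mcp/main.py"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
    # MCP handshake: initialize request, then the initialized notification.
    proc.stdin.write(rpc(1, "initialize", {
        "protocolVersion": "2024-11-05",   # assumed version string
        "capabilities": {},
        "clientInfo": {"name": "probe", "version": "0.0.1"}}))
    proc.stdin.write(json.dumps({"jsonrpc": "2.0",
        "method": "notifications/initialized"}) + "\n")
    proc.stdin.write(rpc(2, "tools/list", {}))
    proc.stdin.flush()
    # Read replies until the tools/list response (id 2) arrives.
    for line in proc.stdout:
        reply = json.loads(line)
        if reply.get("id") == 2:
            return [t["name"] for t in reply["result"]["tools"]]
```

If probe_server() prints the same 11 opennebula-mcp-server__* names that mcphost reported, the stdio transport and the server are working end to end.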

Any ideas?

Hello @DennisTait, I'm Daniele from OpenNebula.

If you haven't changed any of the files in the one-mcp repo that contain the instructions passed to the LLM, my best guess is that the problem is not with the MCP server itself, but with the model.

  • Can you try a model like GPT-4o? It should be more powerful than the one you are using.
  • Can you try to EXPLICITLY tell the model to use the MCP server? Also check whether the model you are using supports tool calling and the MCP protocol, because if it doesn't, this will never work.
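Whether a model "supports tools" shows up directly in the shape of its chat reply: a tool-capable model returns a tool_calls entry asking the client to run a tool, while a model without tool support answers in prose only, which is exactly the "explains instead of executes" symptom above. A minimal sketch with illustrative payloads (the dict shape mirrors what Ollama's chat API returns, and the sample replies below are made up for the example):

```python
def wants_tool_call(response: dict) -> bool:
    """True if the model asked to invoke a tool rather than answering in prose."""
    return bool(response.get("message", {}).get("tool_calls"))

# A tool-capable model's reply: it asks the client to run the tool.
tool_reply = {"message": {"role": "assistant", "content": "",
                          "tool_calls": [{"function": {
                              "name": "opennebula-mcp-server__list_hosts",
                              "arguments": {}}}]}}

# A non-tool model's reply: prose instructions, no tool_calls field.
prose_reply = {"message": {"role": "assistant",
                           "content": "To list your hosts, run onehost list ..."}}

print(wants_tool_call(tool_reply))   # True
print(wants_tool_call(prose_reply))  # False
```

If a model consistently produces the second shape even with tools attached, no MCP client will ever see a tool invocation from it.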

Let us know, thanks

Hi @dmingolla, thanks for your response; you seem to confirm that the problem is probably not with the MCP server itself but with the model.

The documentation states these are the only models available; I would like to stick with a local LLM if possible.

Available Models

Models can be specified using the --model (-m) flag:

  • Anthropic Claude (default): anthropic:claude-sonnet-4-20250514, anthropic:claude-3-5-sonnet-latest, anthropic:claude-3-5-haiku-latest

  • OpenAI: openai:gpt-4, openai:gpt-4-turbo, openai:gpt-3.5-turbo

  • Google Gemini: google:gemini-2.0-flash, google:gemini-1.5-pro

  • Ollama models: ollama:llama3.2, ollama:qwen2.5:3b, ollama:mistral

  • OpenAI-compatible: Any model via custom endpoint with --provider-url
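Since the list above mentions reaching any model via a custom endpoint with --provider-url, a local Ollama server could in principle also be driven through its OpenAI-compatible /v1 endpoint. A hypothetical ~/.mcphost.yml fragment, with key names inferred from the flags above and not verified against the mcphost docs:

```yaml
# Hypothetical sketch: point the OpenAI-compatible provider at Ollama's
# local /v1 endpoint. Key names are inferred, not verified.
model: "openai:qwen2.5:3b"
provider-url: "http://localhost:11434/v1"
provider-api-key: "ollama"   # dummy value; local endpoints usually ignore it
```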

I’m using the following kit to locally host the opennebula-cli, ollama and the mcp stuff.

 [linux@acemagic ~]$ fastfetch
         'c:.                               linux@acemagic
        lkkkx, ..       ..   ,cc,           --------------
        okkkk:ckkx'  .lxkkx.okkkkd          OS: AlmaLinux 9.6 (Sage Margay) x86_64
        .:llcokkx'  :kkkxkko:xkkd,          Kernel: Linux 5.14.0-570.42.2.el9_6.x86_64
      .xkkkkdood:  ;kx,  .lkxlll;           Uptime: 19 hours, 4 mins
       xkkx.       xk'     xkkkkk:          Packages: 1316 (rpm)
       'xkx.       xd      .....,.          Shell: bash 5.1.8
      .. :xkl'     :c      ..''..           Terminal: /dev/pts/1
    .dkx'  .:ldl:'. '  ':lollldkkxo;        CPU: AMD Ryzen AI 9 HX 370 (24) @ 4.37 GHz
  .''lkkko'                     ckkkx.      GPU: AMD Radeon 880M / 890M [Integrated]
'xkkkd:kkd.       ..  ;'        :kkxo.      Memory: 3.11 GiB / 120.97 GiB (3%)
,xkkkd;kk'      ,d;    ld.   ':dkd::cc,     Swap: 0 B / 10.00 GiB (0%)
 .,,.;xkko'.';lxo.      dx,  :kkk'xkkkkc    Disk (/): 136.11 GiB / 868.29 GiB (16%) - xfs
     'dkkkkkxo:.        ;kx  .kkk:;xkkd.    Disk (/home): 25.74 GiB / 29.94 GiB (86%) - xfs
       .....   .;dk:.   lkk.  :;,           Local IP (enp196s0): 192.168.1.17/24
             :kkkkkkkdoxkkx                 Locale: en_GB.UTF-8
              ,c,,;;;:xkkd.
                ;kkkkl...                                           
                ;kkkkl                                              
                 ,od;

I'll try pulling the ollama:llama3.2 and ollama:qwen2.5:3b models next; maybe, as you say, the mistral LLM is not powerful enough or not compatible with MCP.
I've seen that mistral.rs is MCP compatible, but it's not on your list of available models.

Will keep you in the loop.

I was also looking at https://opennebula.io/blog/product/building-sovereign-ai-factories/ and the Ray Appliance. Would it be possible to run the Ray appliance on a CPU rather than an NVIDIA GPU?

Thanks.

Hi, after pulling qwen2.5:3b I got a better MCP server response, but in XML format.

mcphost -m ollama:qwen2.5:3b -p "List all my OpenNebula Clusters"

┃  List all my OpenNebula clusters
┃  linux (10:53)
┃
┃  Executing opennebula-mcp-server__list_clusters (10:53)
┃
┃  <CLUSTER_POOL>
┃    <CLUSTER>
┃      <ID>0</ID>
┃      <NAME>default</NAME>
┃      <HOSTS>
┃        <ID>0</ID>
┃        <ID>1</ID>
┃        <ID>2</ID>
┃        <ID>7</ID>
┃      </HOSTS>
┃  ... (truncated)
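What the tool is returning here is OpenNebula's native CLUSTER_POOL XML document; the model is echoing it rather than summarising it. If the raw XML is acceptable, it can be parsed directly with the Python standard library instead of relying on the LLM. A small sketch, using a sample reconstructed from the truncated output above:

```python
import xml.etree.ElementTree as ET

# Sample CLUSTER_POOL document, reconstructed from the truncated
# tool output above.
SAMPLE = """
<CLUSTER_POOL>
  <CLUSTER>
    <ID>0</ID>
    <NAME>default</NAME>
    <HOSTS>
      <ID>0</ID><ID>1</ID><ID>2</ID><ID>7</ID>
    </HOSTS>
  </CLUSTER>
</CLUSTER_POOL>
"""

def clusters(pool_xml: str) -> dict:
    """Map each cluster name to its list of host IDs."""
    root = ET.fromstring(pool_xml)
    return {c.findtext("NAME"): [h.text for h in c.find("HOSTS")]
            for c in root.findall("CLUSTER")}

print(clusters(SAMPLE))  # {'default': ['0', '1', '2', '7']}
```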

The following prompt returned pages of repeated text for the same host, probably another model-capability issue.

mcphost -m ollama:qwen2.5:3b -p "List all my OpenNebula hosts"

  ┃  1. HOST:
  ┃  • NAME: one-mast-00
  ┃  • MODELNAME: 12th Gen Intel(R) Core(TM) i9-12900
  ┃  • TOTALCPU: 2400
  ┃  • RESERVED_CPU: -2400 (This suggests that the host might be configured to reserve some CPU resources, possibly for a management system
  ┃  or another purpose)
  ┃  • TOTALMEMORY: 65603496 (This appears to indicate a total of 656MB of memory, which seems unusually low for modern servers. This could
  ┃  be an error in the data provided.)
  ┃  • VERSION: 7.0.0      
  ┃  • VM_MAD: kvm                              
  ┃  2. HOST_POOL:
  ┃  • The JSON-LD structure includes a MONITORING field that suggests this host is part of a monitoring system, which might help in
  ┃  collecting performance metrics and health checks.
  ┃  3. Template Information:
  ┃  • This section appears to contain details about the virtual machine management (VM_MAD) configuration for the host, specifically
  ┃  mentioning "kvm" as the VM Management Agent Daemon responsible for managing virtual machines on this host.
  ┃  4. Tool Response:  
  ┃  • The text is wrapped in <tool_response> indicating that it's a response from some kind of tool or API call.
  ┃          
  ┃  ### Considerations and Possible Issues              
  ┃  • The total memory size (65603496) seems unusually small for modern servers, suggesting there might be an error in the data provided.
  ┃  Memory sizes are typically measured in MBs but could be in bytes if a different unit is used.
  ┃  • Similarly, it's unusual to see a number before "-" when reserving CPU resources (e.g., -2400). This may indicate that something went
  ┃  wrong with how this resource was configured or might need further investigation.
  ┃
  ┃  ### Summary
  ┃  This host appears to have significant CPU capacity and uses the KVM hypervisor for managing virtual machines. The memory size is
  ┃  unusually small, which should be investigated as it could indicate a data error in the provided JSON-LD response.
  ┃   qwen2.5:3b (11:38)

Do you know if there will be compatibility with other LLMs that ollama can pull?
Is my last local LLM option to try ollama:llama3.2, or are there other options?

Kind regards

Dennis

Looks like the local LLM llama3.2 does not cut it either.

[linux@acemagic ~]$ mcphost -m ollama:llama3.2 -p "List all my OpenNebula hosts"
  ┃                                                                                                                                          
  ┃  Model loaded: ollama (llama3.2)                                                                                                         
  ┃   MCPHost System (12:05)                                                                                                                 
  ┃                                                                                                                                          
  ┃  Model loaded successfully on GPU                                                                                                        
  ┃   MCPHost System (12:05)                                                                                                                 
  ┃                                                                                                                                          
  ┃  Loaded 11 tools from MCP servers                                                                                                        
  ┃   MCPHost System (12:05)                                                                                                                 
  ┃                                                                                                                                          
  ┃                                                                                                                                          
  ┃  List all my OpenNebula hosts                                                                                                            
  ┃  linux (12:05)  
  ┃                                             
  ┃                                                                                                                                          
  ┃  You can use the following command:             
  ┃                            
  ┃  onecmd --user <your_username> list hosts                                                                                                
  ┃                                                 
  ┃  Replace <your_username> with your actual OpenNebula username.
  ┃                                                                                                                                          
  ┃  This will list all the hosts accessible to you. If there are no hosts listed, it means that you don't have permission to access any     
  ┃  hosts as the current user.
  ┃   llama3.2 (12:06)

Kind regards

Dennis

Hi,

The ollama:qwen2.5:3b LLM seems to be partly working and is my only local option for now.

I also noted that long-running prompts time out while the model is thinking.

The following prompt returned some VM and NIC info and came up with an analysis summary, then terminated with an error.

mcphost -m ollama:qwen2.5:3b -p "List running VMs" --quiet

ERROR: Error reading from stdout: read |0: file already closed

I suppose long-running prompts are not what we want.

Thanks

Hi Dennis, remember that clients like MCPHost can have issues of their own, and their stability depends on the maintainer(s). For example, while we were testing MCPHost we found issues similar to yours; you can find all the open MCPHost issues on GitHub.

I'd recommend opening an issue in the MCPHost repo; the maintainer answers quite fast.

Regarding local LLMs, we haven't tested them extensively, but the few we tested gave us problems.
If you can, I'd recommend using VSCode as the MCP client (more stable, huge community) plus Cline (an extension), with GPT-4o or Claude (we have tested both quite a lot and they worked fine).

In summary, some of the issues you encounter can also be caused by the MCP client itself, which is not entirely stable.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.