
Conversation

@signed-log

This PR adds a Proxmox gatherer (over HTTPS only, for now).

It supports both VMs and LXCs; both are collected into the "vms" attribute for commonality with the other modules.

It supports username/password or API token authentication. I will need to document the minimal permission set necessary to use an unprivileged Proxmox account as the probe.

Dependencies: proxmoxer (python3-proxmoxer on *SUSE) and requests for HTTPS.
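
As a rough sketch of how the two authentication modes might map onto proxmoxer's `ProxmoxAPI` constructor (the helper name `build_auth_kwargs` and the credential values are my own illustration, not code from this PR; I'm assuming proxmoxer's `token_name`/`token_value` keyword arguments for token auth):

```python
def build_auth_kwargs(user, password=None, token_id=None, token_secret=None):
    """Return keyword arguments for proxmoxer.ProxmoxAPI.

    Username/password and API-token authentication are alternatives;
    the token pair is preferred when both are supplied.
    """
    if token_id and token_secret:
        return {"user": user, "token_name": token_id, "token_value": token_secret}
    if password:
        return {"user": user, "password": password}
    raise ValueError("either a password or an api_token_id/secret pair is required")


# Hypothetical usage (host/credentials are placeholders):
#
#   from proxmoxer import ProxmoxAPI
#   api = ProxmoxAPI("pve.example.internal", port=8006, verify_ssl=True,
#                    **build_auth_kwargs("probe@pve", token_id="gatherer",
#                                        token_secret="..."))
```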

Notes:

  • Proxmox doesn't have a node UUID/identifier apart from the node name
  • Proxmox doesn't use the name of the VM as a key, only the VMID; the name is only shown in the various interfaces. A VMID is cluster-wide unique.
  • I used the name as the key to keep commonality, but there is always the possibility of multiple VMs having the same name. On the other hand, using the VMID as the key would likely cause confusion, as it isn't descriptive at all.

The script passes Pylint with the sole addition of a # pylint: disable=too-many-instance-attributes

Example output:

{
    "generic_cluster_data": {
        "proxmox_node_1": {
            "cpuArch": "x86_64",
            "cpuDescription": "Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz",
            "cpuMhz": "3800.864",
            "hostIdentifier": "proxmox_node_1",
            "name": "proxmox_node_1",
            "optionalVmData": {
                "generic_vm_1.generic.internal": {
                    "disk": 32768,
                    "memory": 2048,
                    "proxmoxVmid": 10001,
                    "totalCpuThreads": 2,
                    "uptime": 0,
                    "vmState": "stopped"
                },
                "generic_vm_2.generic.internal": {
                    "disk": 32768,
                    "memory": 2048,
                    "proxmoxVmid": 10002,
                    "totalCpuThreads": 2,
                    "uptime": 61003,
                    "vmState": "running"
                },
                "generic_vm_3.generic.internal": {
                    "disk": 32768,
                    "memory": 4096,
                    "proxmoxVmid": 10003,
                    "totalCpuThreads": 2,
                    "uptime": 0,
                    "vmState": "stopped"
                },
                "generic_vm_4.generic.internal": {
                    "disk": 65536,
                    "memory": 8192,
                    "proxmoxVmid": 10004,
                    "totalCpuThreads": 4,
                    "uptime": 0,
                    "vmState": "stopped"
                },
                "generic_vm_5.generic.internal": {
                    "disk": 15954,
                    "memory": 1024,
                    "proxmoxVmid": 10005,
                    "totalCpuThreads": 1,
                    "uptime": 60998,
                    "vmState": "running"
                },
                "generic_vm_6": {
                    "disk": 0,
                    "memory": 2048,
                    "proxmoxVmid": 10006,
                    "totalCpuThreads": 2,
                    "uptime": 0,
                    "vmState": "stopped"
                },
                "generic_vm_7.generic.internal": {
                    "disk": 31949,
                    "memory": 4096,
                    "proxmoxVmid": 10007,
                    "totalCpuThreads": 4,
                    "uptime": 60995,
                    "vmState": "running"
                },
                "generic_vm_8.generic.internal": {
                    "disk": 16384,
                    "memory": 2048,
                    "proxmoxVmid": 10008,
                    "totalCpuThreads": 2,
                    "uptime": 0,
                    "vmState": "stopped"
                }
            },
            "os": "ProxmoxVE",
            "os_version": "8.4",
            "ramMb": 31854,
            "totalCpuCores": 4,
            "totalCpuSockets": 1,
            "totalCpuThreads": 8,
            "vms": {
                "generic_vm_1.generic.internal": 10001,
                "generic_vm_2.generic.internal": 10002,
                "generic_vm_3.generic.internal": 10003,
                "generic_vm_4.generic.internal": 10004,
                "generic_vm_5.generic.internal": 10005,
                "generic_vm_6": 10006,
                "generic_vm_7.generic.internal": 10007,
                "generic_vm_8.generic.internal": 10008
            }
        },
        "proxmox_node_2": {
            "cpuArch": "x86_64",
            "cpuDescription": "Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz",
            "cpuMhz": "3898.562",
            "hostIdentifier": "proxmox_node_2",
            "name": "proxmox_node_2",
            "optionalVmData": {
                "generic_vm_9": {
                    "disk": 16384,
                    "memory": 2048,
                    "proxmoxVmid": 10009,
                    "totalCpuThreads": 2,
                    "uptime": 0,
                    "vmState": "stopped"
                },
                "generic_vm_10.generic.internal": {
                    "disk": 32768,
                    "memory": 2048,
                    "proxmoxVmid": 10010,
                    "totalCpuThreads": 2,
                    "uptime": 77430,
                    "vmState": "running"
                },
                "generic_vm_11.generic.internal": {
                    "disk": 32768,
                    "memory": 2048,
                    "proxmoxVmid": 10011,
                    "totalCpuThreads": 2,
                    "uptime": 0,
                    "vmState": "stopped"
                },
                "generic_vm_12.generic.internal": {
                    "disk": 65536,
                    "memory": 8192,
                    "proxmoxVmid": 10012,
                    "totalCpuThreads": 4,
                    "uptime": 0,
                    "vmState": "stopped"
                },
                "generic_vm_13.generic.internal": {
                    "disk": 65536,
                    "memory": 8192,
                    "proxmoxVmid": 10013,
                    "totalCpuThreads": 4,
                    "uptime": 0,
                    "vmState": "stopped"
                },
                "generic_vm_14.generic.internal": {
                    "disk": 32768,
                    "memory": 2048,
                    "proxmoxVmid": 10014,
                    "totalCpuThreads": 2,
                    "uptime": 0,
                    "vmState": "stopped"
                },
                "generic_vm_15": {
                    "disk": 16384,
                    "memory": 2048,
                    "proxmoxVmid": 10015,
                    "totalCpuThreads": 2,
                    "uptime": 0,
                    "vmState": "stopped"
                }
            },
            "os": "ProxmoxVE",
            "os_version": "8.2",
            "ramMb": 31854,
            "totalCpuCores": 4,
            "totalCpuSockets": 1,
            "totalCpuThreads": 8,
            "vms": {
                "generic_vm_9": 10009,
                "generic_vm_10.generic.internal": 10010,
                "generic_vm_11.generic.internal": 10011,
                "generic_vm_12.generic.internal": 10012,
                "generic_vm_13.generic.internal": 10013,
                "generic_vm_14.generic.internal": 10014,
                "generic_vm_15": 10015
            }
        }
    }
}

@mackdk (Contributor) left a comment:

My Python knowledge is not enough to review someone else's code, so I'll let others provide more meaningful comments. Nevertheless, I noticed two things:

  • In Uyuni, the modules are hardcoded in the frontend code. Currently, any new module introduced in virtual-host-gatherer won't appear in the Uyuni UI, and thus it can't be used to create a virtual host manager. I created a PR to try to address that: uyuni-project/uyuni#10509
  • I wanted to test whether the module works (I will soon have a Proxmox installation in my personal network) but, unless I'm missing something, there is no package providing proxmoxer that is installable in an Uyuni installation. I only see it available in Factory.

DEFAULT_PARAMETERS = OrderedDict(
    [
        ("host", None),
        ("port", None),

Contributor:

Isn't there a default port you can set here instead of using None?

Author:

Yes, 8006, I will do so
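
With that suggestion applied, the defaults might look like the following sketch (the full key list is my assumption from the discussion, not the actual diff; port 8006 is the standard Proxmox VE web/API port, and verify_ssl is shown as True per the later comment in this thread):

```python
from collections import OrderedDict

# Sketch of the module's defaults with the discussed changes applied:
# "port" defaults to 8006 (the Proxmox VE API port) instead of None,
# and "verify_ssl" defaults to True.
DEFAULT_PARAMETERS = OrderedDict(
    [
        ("host", None),
        ("port", 8006),
        ("username", None),
        ("password", None),
        ("api_token_id", None),
        ("api_token_secret", None),
        ("verify_ssl", True),
    ]
)
```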

("password", None),
("api_token_id", None),
("api_token_secret", None),
("verify_ssl", None),

Contributor:

Maybe the default for this should be True?

Author (@signed-log), Jun 27, 2025:

It is de-facto True: I manipulate it in the other functions, but I guess that's stupid in hindsight. I mostly didn't know if it would always be sent as a boolean.

https://github.com/uyuni-project/virtual-host-gatherer/blob/528d8c52fea6523a45c07563e0ea02b4f8536ec2/virtual-host-gatherer/lib/gatherer/modules/Proxmox.py#L97C1-L100C35
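
If the concern is that verify_ssl might arrive as a string rather than a boolean, a small coercion helper would make the default-True behaviour explicit. This is my own sketch, not code from the PR:

```python
def coerce_verify_ssl(value, default=True):
    """Normalise a verify_ssl setting to a real boolean.

    Accepts booleans, None (falls back to the default), and common
    string spellings such as "true"/"false"/"1"/"0"; anything else
    goes through bool().
    """
    if value is None:
        return default
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        return value.strip().lower() not in ("false", "0", "no", "off", "")
    return bool(value)
```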

@signed-log (Author):

> I wanted to test whether the module works (I will soon have a Proxmox installation in my personal network) but, unless I'm missing something, there is no package providing proxmoxer that is installable in an Uyuni installation. I only see it available in Factory.

It does indeed seem to be available only in TW/Slowroll.
