The Ansible community libvirt collection provides a method to interact with QEMU and LXC (if that interests you, please come and join us!). Along with support for libvirt tasks such as managing guests, networks and storage, it also provides a dynamic inventory. This does not use SSH, but rather interacts directly with the VM over a virtual serial link using qemu-guest-agent, with commands executed as root inside the guest.
The dynamic inventory has a couple of requirements. It does not (yet) support SELinux in enforcing mode inside the guest; it requires the qemu-guest-agent service to be running inside the guest; the host must be able to query qemu-guest-agent successfully; and qemu-guest-agent needs to support the following capabilities:
- guest-exec
- guest-file-close
- guest-file-open
- guest-file-read
- guest-file-write
A number of distros blacklist these in the guest’s qemu-guest-agent configuration, so those blacklist entries might need to be removed and the service restarted. On CentOS, this is configured with the BLACKLIST_RPC option in the /etc/sysconfig/qemu-ga file, so you would need to modify that and restart the qemu-guest-agent service.
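For example, on a CentOS guest the change could look something like this (a rough sketch; check the option name and path on your distro before running it):
# Run inside the guest: stop blacklisting the RPCs, then restart the agent
sudo sed -i 's/^BLACKLIST_RPC=/#BLACKLIST_RPC=/' /etc/sysconfig/qemu-ga
sudo systemctl restart qemu-guest-agent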
Install the collection
To get started, we first need to install the collection.
ansible-galaxy collection install community.libvirt
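If you prefer to track collections in a requirements file, installing from a requirements.yml (the filename here is just a common convention, not something the collection mandates) should work equally well:
---
collections:
  - name: community.libvirt
Then install from it with:
ansible-galaxy collection install -r requirements.yml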
Create an inventory file
Next, to use it as a dynamic inventory and talk to a working libvirtd, we need to create an inventory file which provides the connection details for our hypervisor. In this example, we’re creating a file called kvm.yml which provides the uri to talk to the local daemon supervising QEMU domains.
---
plugin: community.libvirt.libvirt
uri: 'qemu:///system'
Now that we have our inventory file, we can test it!
ansible-inventory --inventory kvm.yml --list
This should return JSON formatted data, which looks like this on a hypervisor with no guests.
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    }
}
Inventory with a remote host
The cool thing is that it supports standard URI connection strings, so we can also talk to remote libvirt hosts over SSH with an inventory like this.
---
plugin: community.libvirt.libvirt
uri: 'qemu+ssh://user@server/system'
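If you save that remote variant as a separate file, say kvm-remote.yml (a hypothetical name), you can pass both inventory files to Ansible in the same run and it will merge the results:
ansible-inventory --inventory kvm.yml --inventory kvm-remote.yml --list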
Spin up VMs
OK! Spin up some test virtual machines and try listing the inventory again.
If you need a hand creating some VMs, then we can do it with my Ansible virtual infrastructure playbook, which pulls in my role and includes a sample inventory. Running this will also automatically install and start everything on localhost that we need for libvirtd.
git clone --recursive https://github.com/csmart/virt-infra-ansible.git
cd virt-infra-ansible
Let’s grab the CentOS Stream 8 image and put it in place so that we can spin up the VMs.
curl -O https://cloud.centos.org/centos/8-stream/x86_64/images/CentOS-Stream-GenericCloud-8-20210603.0.x86_64.qcow2
sudo mkdir -p /var/lib/libvirt/images
sudo mv -iv CentOS-Stream-GenericCloud-8-20210603.0.x86_64.qcow2 /var/lib/libvirt/images/
Next, let’s create some extra args so that the Ansible role can put SELinux into permissive mode and configure qemu-guest-agent capabilities for us automatically.
cat > /tmp/ansible-extra-args.json << \EOF
{
  "virt_infra_disk_cmd": [
    "sed -i s/^BLACKLIST_RPC=/\\#BLACKLIST_RPC=/ /etc/sysconfig/qemu-ga",
    "sed -i s/^SELINUX=.*/SELINUX=permissive/ /etc/selinux/config"
  ]
}
EOF
OK! Now we can spin up two VMs, simple-centos-8-1 and example-centos-8, from the included sample inventory and test if we can see them using the dynamic inventory.
./run.sh --limit kvmhost,simple-centos-8-1,example-centos-8 --extra-vars "@/tmp/ansible-extra-args.json"
This should successfully spin up those two machines, which we can remove cleanly in a later step. If you need to SSH into them for some reason, they should be configured with a local account that matches your Linux username, with a password of password, and be accessible with any of the SSH keys from your home directory.
ssh example-centos-8 uptime
Test the dynamic inventory
Let’s try the dynamic inventory again and see if we can find those new VMs.
ansible-inventory --inventory kvm.yml --list
This time the returned JSON includes the VMs. Note that groups are automatically created for each host.
{
    "1c85f707-70bd-4449-9e74-0364329b2cae": {
        "hosts": [
            "simple-centos-8-1"
        ]
    },
    "73d59360-33f9-44e6-8c71-0ac2f6530c43": {
        "hosts": [
            "example-centos-8"
        ]
    },
    "_meta": {
        "hostvars": {
            "example-centos-8": {
                "ansible_connection": "community.libvirt.libvirt_qemu",
                "ansible_libvirt_uri": "qemu:///system"
            },
            "simple-centos-8-1": {
                "ansible_connection": "community.libvirt.libvirt_qemu",
                "ansible_libvirt_uri": "qemu:///system"
            }
        }
    },
    "all": {
        "children": [
            "1c85f707-70bd-4449-9e74-0364329b2cae",
            "73d59360-33f9-44e6-8c71-0ac2f6530c43",
            "ungrouped"
        ]
    }
}
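If you just want a quick summary of the groups rather than the full JSON, ansible-inventory can also print a tree view:
ansible-inventory --inventory kvm.yml --graph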
Test guest agent
We can use the virsh command line tool to test that the host is able to communicate with the guest agent running in the VM, and that it supports the capabilities that we need. Let’s do that now, against one of the example VMs.
sudo virsh qemu-agent-command example-centos-8 '{"execute":"guest-info"}' --pretty
Firstly, this command needs to succeed; secondly, it should return a list of supported commands, where you can check that the required capabilities are enabled and working.
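If you happen to have jq installed, one optional way (not part of the original steps) to check just the capabilities we care about is to filter the supported_commands list from that guest-info output:
sudo virsh qemu-agent-command example-centos-8 '{"execute":"guest-info"}' \
  | jq '.return.supported_commands[] | select(.name | startswith("guest-exec") or startswith("guest-file")) | {name, enabled}'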
Use the dynamic inventory with Ansible
Now that we have a working inventory and have confirmed the guest agent connection is working, we can use Ansible to dynamically generate an inventory and connect to the guests. Again, this doesn’t use SSH, but rather the qemu-guest-agent interface.
Let’s test this using the standard Ansible ping module.
ansible --inventory kvm.yml all -m ping
We should see a successful response!
simple-centos-8-1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
example-centos-8 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
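The same connection works for other ad-hoc modules too; for example, checking the uptime on every guest (just an illustration, not output from the original run):
ansible --inventory kvm.yml all -m command -a uptime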
And of course we can run real playbooks. Here’s an example playbook called site.yml to install critical packages 😉
---
- hosts: all
  tasks:
    - name: Install packages
      package:
        name:
          - git
          - rsync
          - tmux
          - vim
        state: present
      become: true
      register: result_package_install
      retries: 10
      delay: 5
      until: result_package_install is succeeded
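If you would rather preview the changes first, a dry run should also work here, since the package module supports check mode:
ansible-playbook --inventory kvm.yml --check site.yml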
Let’s run it!
ansible-playbook --inventory kvm.yml site.yml
And we should see that it happily executes the playbook.
PLAY [all] ******************************************************************************************
TASK [Gathering Facts] ******************************************************************************
ok: [simple-centos-8-1]
ok: [example-centos-8]
TASK [Install packages] *****************************************************************************
changed: [example-centos-8]
changed: [simple-centos-8-1]
PLAY RECAP ******************************************************************************************
example-centos-8 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
simple-centos-8-1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Limiting the hosts with dynamic inventory
Now that you have a working dynamic inventory, you can use the standard Ansible limit option to restrict which machines a playbook runs against. This way you keep the flexibility of the dynamic inventory, while still controlling exactly which guests Ansible targets.
ansible-playbook --inventory kvm.yml --limit example-centos-8 site.yml
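The limit option also accepts patterns, so (as a hypothetical example) you could target every guest whose name starts with simple- in one go:
ansible-playbook --inventory kvm.yml --limit 'simple-*' site.yml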
Remove test VMs
Finally, we can cleanly remove those test VMs using the same Ansible code, by setting their state to undefined.
./run.sh --limit kvmhost,simple-centos-8-1,example-centos-8 -e virt_infra_state=undefined
And there you have it. Using the dynamic inventory with the libvirt plugin can be quite handy.