How to create Linux bridges and Open vSwitch bridges with NetworkManager

My virtual infrastructure Ansible role supports connecting VMs to both Linux and Open vSwitch bridges, but they must already exist on the KVM host.

Here is how to convert an existing Ethernet device into a bridge. Be careful doing this on a remote machine with only one connection! Make sure you have some other way to log in (e.g. a console), or consider converting an additional interface instead.

Export interfaces and existing connections

First, export the device you want to convert so we can easily reference it later (e.g. eth1).

export NET_DEV="eth1"

Now list the current NetworkManager connections for the device you exported above, so we know what to disable later.

sudo nmcli con | egrep -w "${NET_DEV}"

This might be something like System eth1 or Wired connection 1. Let's export it too, for later reference.

export NM_NAME="Wired connection 1"

Create a Linux bridge

Here is an example of creating a persistent Linux bridge with NetworkManager. It will take a device such as eth1 (substitute as appropriate) and convert it into a bridge. Note that we will be specifically giving it the device name of br0 as that’s the standard convention and what things like libvirt will look for.
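For reference, the core of it looks something like this (a minimal sketch; substitute your own device and connection names, and adjust addressing to suit):

# Create the bridge interface, named br0
sudo nmcli con add type bridge ifname br0 con-name br0

# Attach the existing Ethernet device to the bridge
sudo nmcli con add type bridge-slave ifname "${NET_DEV}" master br0

# Switch over: take the old connection down and bring the bridge up
sudo nmcli con down "${NM_NAME}"
sudo nmcli con up br0

By default NetworkManager will attempt DHCP on the new bridge; set static addressing or STP options on the br0 connection if you need them.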

Continue reading How to create Linux bridges and Open vSwitch bridges with NetworkManager

Using Ansible to define and manage KVM guests and networks with YAML inventories

I wanted a way to quickly spin different VMs up and down on my KVM dev box, to help with testing things like OpenStack, Swift, Ceph and Kubernetes. Some of my requirements were as follows:

  • Define everything in a markup language, like YAML
  • Manage VMs (define, stop, start, destroy and undefine) and apply settings as a group or individually
  • Support different settings for each VM, like disks, memory, CPU, etc
  • Support multiple drives and types, including Virtio, SCSI, SATA and NVMe
  • Create users and set root passwords
  • Manage networks (create, delete) and which VMs go on them
  • Mix and match Linux distros and releases
  • Use existing cloud images from distros
  • Manage access to the VMs including DNS/hosts resolution and SSH keys
  • Have a good set of defaults so it would work out of the box
  • Potentially support other architectures (like ppc64le or arm)

So I hacked together an Ansible role and example playbook. Setting guest states to running, shutdown, destroyed or undefined (to delete and clean up) is supported. It will also manage multiple libvirt networks, and guests can have different specs as well as multiple disks of different types (SCSI, SATA, Virtio, NVMe). With Ansible's --limit option, any individual guest, a hostgroup of guests, or even a mix can be managed.
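For example, a few invocations might look like this (the playbook path, group and variable names here are hypothetical, just to illustrate the idea):

# Run against everything in the inventory
ansible-playbook ./virt-infra.yml

# Manage just one guest (plus the KVM host) with --limit
ansible-playbook ./virt-infra.yml --limit kvmhost,testvm1

# Override a group's state on the fly with extra vars
ansible-playbook ./virt-infra.yml --limit kvmhost,testvms -e virt_infra_state=shutdown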

Managing KVM guests with Ansible

Although Terraform with libvirt support is potentially a good solution, using Ansible means I can use the same inventory to further manage the guests, and I've also been able to configure the KVM host itself. All that's really needed is a Linux host capable of running KVM, some guest images and a basic inventory; the Ansible role does the rest (on supported distros).
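To give a sense of the shape of such an inventory, here's a rough sketch (all key names below are illustrative, not necessarily what the role actually uses):

cat > inventory.yml <<'EOF'
# Illustrative inventory: the virt_infra_* keys are hypothetical
testvms:
  hosts:
    testvm1:
      virt_infra_state: running
      virt_infra_memory: 2048
      virt_infra_cpus: 2
    testvm2:
      virt_infra_state: shutdown
EOF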

Continue reading Using Ansible to define and manage KVM guests and networks with YAML inventories

Booting Fedora cloud images with KVM

Here’s how you can play with the Fedora cloud images on your local machine with KVM.

Download a cloud image.

wget https://download.fedoraproject.org/pub/fedora/linux/releases/30/Cloud/x86_64/images/Fedora-Cloud-Base-30-1.2.x86_64.qcow2

Make a new local image called my-disk.qcow2, backed by the downloaded image, so that we don't write to the original. (Newer versions of qemu-img want the backing format specified explicitly with -F.)

qemu-img create -f qcow2 -b Fedora-Cloud-Base-30-1.2.x86_64.qcow2 -F qcow2 my-disk.qcow2 20G

The cloud image uses cloud-init to configure itself on boot, setting things like the hostname, usernames, passwords and SSH keys. You can also run specific commands at two stages of the boot process (see bootcmd and runcmd below) and output messages (see final_message below), which is useful for scripted testing.
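As a rough sketch of what that can look like (the user-data below is illustrative, not the original post's example), write a small cloud-config, build a NoCloud seed ISO and attach it at boot:

cat > user-data <<'EOF'
#cloud-config
hostname: fedora-test
password: changeme
chpasswd: { expire: false }
ssh_pwauth: true
bootcmd:
  - echo "bootcmd runs early in boot, before most services"
runcmd:
  - echo "runcmd runs once, late in the first boot"
final_message: "cloud-init finished, up after $UPTIME seconds"
EOF

# meta-data needs at least an instance-id
echo "instance-id: fedora-test-01" > meta-data

# Build the seed ISO (the volume label must be "cidata")
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

# Boot the overlay disk with the seed ISO attached
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive file=my-disk.qcow2,if=virtio \
  -cdrom seed.iso -nographic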

Continue reading Booting Fedora cloud images with KVM

Linux on Mac Pro with multiple drives

Update: This is possible using EFI-only installs, yay!

The Apple Mac Pro at work has four bays for 3.5″ hard drives. My plan was to have OS X on the main drive with Linux on a secondary drive for virtualised environments. Native Linux could run on drives in the other slots if necessary.

I installed OS X on the primary drive and installed rEFIt to manage all operating systems. So far, so good.

Next I installed Fedora 12 on the secondary drive, but no matter the installation layout (whether MBR or GPT), I couldn't for the life of me get rEFIt to boot it.

The install would be detected and come up in the pretty menu, but booting it resulted in a completely black screen. Nothing I tried seemed to fix the issue (for some reason, even an EFI-only install of Linux on a single drive wouldn't work).

At my wits end I decided to Google the issue and came across an entry in the Debian wiki which explains my issue:

rEFIt assumes that you have only one disk drive. If you try and install linux onto a secondary drive, you will probably have found that rEFIt lets you try and boot your newly-minted linux partition/drive, only for you to get a “Missing operating system” error message. This is actually a Syslinux error message. What happens is that rEFIt looks on the primary disk for an MBR record, fails to find one (obviously!), so sticks the syslinux MBR onto the primary disk, and tries to boot that.

So the problem appears to be with rEFIt 🙁 Hopefully this will be fixed at some point, because being able to boot the OS from any drive on a Mac Pro would be oh, so handy.

In the meantime, I've installed Fedora on the same drive as OS X and will use the other drives for virtualisation. In theory, putting /boot on a small partition on the primary drive alongside OS X might also work.

-c