Category Archives: Fedora

Anything related to the Fedora Project.

Resolving mDNS across VLANs with Avahi on OpenWRT

mDNS, or multicast DNS, is a way to discover devices on your network under the .local domain without any central DNS configuration (also known as ZeroConf, Bonjour, etc.). Fedora Magazine has a good article on setting it up in Fedora, which I won’t repeat here.

If you’re like me, you’re using OpenWRT with multiple VLANs to separate networks. In my case this includes separating my home automation (HA) network (VLAN 2) from my regular trusted LAN (VLAN 1). Various untrusted home automation products, as well as my own devices, go into the HA network (more on that in a later post).

In my setup, my OpenWRT router acts as my central router, connecting each of my networks and controlling access. My LAN can access everything in my HA network, but generally only established/related TCP traffic is allowed back from HA to LAN. There are some exceptions, for example my Pi-hole DNS servers which are accessible from all networks, but otherwise that’s the general setup.

With IPv4, mDNS communicates by sending IP multicast UDP packets to 224.0.0.251, with both source and destination ports set to 5353. In order to receive requests and responses, your devices need to be running an mDNS service and also allow incoming UDP traffic on port 5353.
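
For example, on a Fedora device you could allow this through firewalld using its built-in mdns service (a minimal example; adjust the zone to suit your setup):

sudo firewall-cmd --permanent --add-service=mdns
sudo firewall-cmd --reload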

As multicast is link-local only, mDNS doesn’t work natively across routed networks. This prevents me from easily talking to my various HA devices from my LAN. In order to support mDNS across routed networks, you need a proxy in the middle to transparently send requests and responses back and forth. There are a few different options for a proxy, such as igmpproxy, but I prefer to use the standard Avahi server on my OpenWRT router.
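
In essence, that means installing Avahi on the router and enabling its reflector option, so it repeats mDNS queries and responses between the interfaces it listens on. As a rough sketch (the package name may differ between OpenWRT releases, and br-lan/br-ha are just example bridge interfaces for my two networks), install Avahi:

opkg update
opkg install avahi-dbus-daemon

Then enable the reflector in /etc/avahi/avahi-daemon.conf and restrict it to the relevant interfaces:

[server]
allow-interfaces=br-lan,br-ha

[reflector]
enable-reflector=yes

Finally, restart the service:

/etc/init.d/avahi-daemon restart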

Continue reading

Updating OpenStack TripleO Ceph nodes safely one at a time

Part of the process when updating Red Hat’s TripleO based OpenStack is to apply the package and container updates, via the update run step, to the nodes in each Role (like Controller, CephStorage and Compute, etc.). This is done in place, before the ceph-upgrade (ceph-ansible) step, the converge step and reboots.

openstack overcloud update run --nodes CephStorage

Rather than doing an entire Role straight up, however, I always update one node of that type first. This lets me make sure there are no problems (and fix them if there are) before moving onto the whole Role.
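
For example, to update a single Ceph storage node first (the node name here is just illustrative and will depend on your deployment):

openstack overcloud update run --nodes overcloud-cephstorage-0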

I noticed recently when performing the update step on CephStorage role nodes that OSDs and OSD nodes were going down in the cluster. This was then causing my Ceph cluster to go into backfilling and recovering (norebalance was set).

We want all of these nodes to be updated one at a time, as taking out more than one node at a time can potentially make the Ceph cluster stop serving data (all VMs will freeze) until it finishes recovering and has the minimum number of copies back in the cluster. If all three copies of some data go offline at the same time, the cluster won’t be able to recover it at all.
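
Between nodes it’s worth waiting for the cluster to settle before continuing. A common precaution (a sketch; run from a host with Ceph admin access) is to set the noout flag for the duration and keep an eye on the cluster state:

sudo ceph osd set noout     # stop OSDs being marked out while the node is updated
sudo ceph -s                # watch overall health and recovery status
sudo ceph osd tree          # confirm the OSDs on that node are back up
sudo ceph osd unset noout   # once the cluster is healthy again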

Continue reading

Using Ansible and dynamic inventory to manage OpenStack TripleO nodes

TripleO based OpenStack deployments use an OpenStack all-in-one node (undercloud) to automate the build and management of the actual cloud (overcloud), using native services such as Heat and Ironic. Roles are used to define services and configuration, which are then applied to specific nodes, for example Controller, Compute and CephStorage.

Although the install is automated, sometimes you need to run ad hoc tasks outside of the official update process. For example, you might want to make sure that all hosts are contactable, have a valid subscription (for Red Hat OpenStack Platform), restart containers, or maybe even apply custom changes or patches before an update. Also, during the update process when nodes are being rebooted, it can be useful to use an Ansible script to know when they’ve all come back, all services are running and all containers are healthy, before re-enabling them.

Inventory script

To make this easy, we can use the TripleO Ansible inventory script, which queries the undercloud to get a dynamic inventory of the overcloud nodes. When using the script as an inventory source with the ansible command, however, you cannot pass arguments to it. If you’re managing a single cluster and using the standard stack name of overcloud, then this is not a problem; you can just call the script directly.
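
For example, pointing ansible directly at the script (installed at /usr/bin/tripleo-ansible-inventory, though the path may vary between releases) lets you run an ad hoc ping across all the overcloud nodes:

ansible -i /usr/bin/tripleo-ansible-inventory all -m ping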

Continue reading

Using network namespaces with veth to NAT guests with overlapping IPs

Sets of virtual machines are connected to virtual bridges (e.g. virbr0 and virbr1) and, as they are isolated, can use the same subnet range and set of IPs. However, NATing becomes a problem because the host won’t know which VM to return the traffic to.

To solve this problem, we can use network namespaces and some veth (virtual Ethernet) devices for each private network we want to NAT. The veth devices come as an interconnected pair of interfaces (think patch cable) and we will use two pairs for each network namespace: one which patches into the provider network (e.g. 192.168.0.0/24) and another which patches into the virtual machines’ private network (e.g. 10.0.0.0/24).

By providing each private network with its own unique upstream routable IP and applying NAT rules inside each namespace separately, we can avoid any conflict.
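
As a rough sketch of how one such namespace might be wired up (names, bridges and addresses here are purely illustrative; br0 is a bridge on the provider network and virbr0 carries the guests' private network):

sudo ip netns add nat1
sudo ip link add veth-prov1 type veth peer name veth-prov1-ns    # pair towards the provider network
sudo ip link set veth-prov1 master br0 up
sudo ip link set veth-prov1-ns netns nat1
sudo ip link add veth-priv1 type veth peer name veth-priv1-ns    # pair towards the guests' private network
sudo ip link set veth-priv1 master virbr0 up
sudo ip link set veth-priv1-ns netns nat1
sudo ip netns exec nat1 ip addr add 192.168.0.101/24 dev veth-prov1-ns
sudo ip netns exec nat1 ip addr add 10.0.0.1/24 dev veth-priv1-ns
sudo ip netns exec nat1 ip link set veth-prov1-ns up
sudo ip netns exec nat1 ip link set veth-priv1-ns up
sudo ip netns exec nat1 ip route add default via 192.168.0.1
sudo ip netns exec nat1 sysctl -w net.ipv4.ip_forward=1
sudo ip netns exec nat1 iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o veth-prov1-ns -j MASQUERADE

The guests on that private network then use 10.0.0.1 (the namespace side of the second pair) as their default gateway, and their traffic leaves via 192.168.0.101, which is unique to that namespace.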

Configuration for multiple namespace NAT
Continue reading

Using Ansible to define and manage KVM guests and networks with YAML inventories

I wanted a way to quickly spin different VMs up and down on my KVM dev box, to help with testing things like OpenStack, Swift, Ceph and Kubernetes. Some of my requirements were as follows:

  • Define everything in a markup language, like YAML
  • Manage VMs (define, stop, start, destroy and undefine) and apply settings as a group or individually
  • Support different settings for each VM, like disks, memory, CPU, etc
  • Support multiple drives and types, including Virtio, SCSI, SATA and NVMe
  • Create users and set root passwords
  • Manage networks (create, delete) and which VMs go on them
  • Mix and match Linux distros and releases
  • Use existing cloud images from distros
  • Manage access to the VMs including DNS/hosts resolution and SSH keys
  • Have a good set of defaults so it would work out of the box
  • Potentially support other architectures (like ppc64le or arm)

So I hacked together an Ansible role and example playbook. Setting guest states to running, shutdown, destroyed or undefined (to delete and clean up) is supported. It will also manage multiple libvirt networks, and guests can have different specs as well as multiple disks of different types (SCSI, SATA, Virtio, NVMe). With Ansible’s --limit option, any individual guest, a hostgroup of guests, or even a mix can be managed.
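
For example, individual guests or groups can be targeted like this (the playbook and inventory file names are just illustrative):

ansible-playbook -i inventory.yaml site.yml --limit guest1
ansible-playbook -i inventory.yaml site.yml --limit ceph_nodes,guest1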

Managing KVM guests with Ansible

Although Terraform with libvirt support is potentially a good solution, by using Ansible I can use that same inventory to further manage the guests, and I’ve also been able to configure the KVM host itself. All that’s really needed is a Linux host capable of running KVM, some guest images and a basic inventory; Ansible will do the rest (on supported distros).

Continue reading

Red Hat crippling CloudForms product and migrating users to IBM

CloudForms is Red Hat’s supported version of the upstream ManageIQ infrastructure management platform. It lets you see, manage and deploy to various platforms like OpenStack, VMware, RHEV and OpenShift, as well as public clouds like AWS and Azure, with a single pane of glass view across them all. It has its own orchestration engine but also integrates with Ansible for automated deployments.

As best I can tell, their CloudForms updated Statement of Direction article (behind paywall, sorry) shows that Red Hat is killing off support for non-Red Hat platforms like VMware, AWS, Azure, etc. The justification is to focus on open platforms, which I think means CloudForms will ultimately disappear entirely with Red Hat focusing on OpenShift instead.

We made a strategic decision to focus our management strategy on the future — open, cloud-native environments that promote portability across on-premise, private and public clouds.

CloudForms updated Statement of Direction

However, to me this is still a big blow to users of the platform, most of whom I’m sure will have at least some VMware to manage. Indeed, when implementing CloudForms at work and talking to Red Hat, they said that their most mature integration in CloudForms was with VMware.

According to the Red Hat article, CloudForms with full platform support is being embedded into IBM Cloud Pak for Multicloud Management and users are encouraged to “migrate your Red Hat CloudForms subscriptions to IBM Cloud Pak for Multicloud Management licenses.” Red Hat’s CloudForms Statement of Direction FAQ article lays out the migration path, which does confirm Red Hat will continue to support existing clients for the remainder of their subscription.

So in short, CloudForms from Red Hat is being crippled and will only support Red Hat products, which really means that users are being forced to buy IBM instead. Of course Red Hat is entitled to change their own products, but this move does seem curious when execs on both sides said they would remain independent. Maybe it’s better than killing CloudForms outright?

We can publicly say that all our products will survive in their current form and continue to grow. We will continue to support all our products; we’re separate entities and we’re going to have separate contracts, and there is no intention to de-emphasise any of our products and we’ll continue to invest heavily in it.

Jim Whitehurst as Red Hat CEO

KVM guests with emulated SSD and NVMe drives

Sometimes when you’re using KVM guests to test something, perhaps like a Ceph or OpenStack Swift cluster, it can be useful to have SSD and NVMe drives. I’m not talking about passing physical drives through, but rather emulating them.

NVMe drives

QEMU supports emulating NVMe drives as arguments on the command line, but this is not yet exposed in tools like virt-manager. This means you can’t just add a new drive of type nvme to your virtual machine XML definition; however, you can add the qemu arguments to your XML directly. It also means that the NVMe drives will not show up as drives in tools like virt-manager, even after you’ve added them with qemu. Still, it’s fun to play with!
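
To give a rough idea (the image path and IDs are just examples), the extra QEMU arguments for an emulated NVMe drive pair a backing drive with an nvme device:

-drive file=/var/lib/libvirt/images/nvme0.qcow2,format=qcow2,if=none,id=nvme0
-device nvme,drive=nvme0,serial=nvme-0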

Continue reading

Automatically updating containers with Docker

Running something in a container using Docker or Podman is cool, but maybe you want an automated way to always run the latest container? Using the :latest tag alone does not do this, as that just pulls whatever is the latest container at the time. You could have a cronjob that always pulls the latest containers and restarts them, but then if there’s no update you have an outage for no reason.

It’s not too hard to write a script to pull the latest container and restart the service only if required, then tie that together with a systemd timer.

To restart a container you need to know how it was started. If you have only one container then you could just hard-code it; however, it gets trickier to manage if you have a number of containers. This is where something like runlike can help!
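
Putting that together, a minimal sketch of such a script might look like this (the container and image names are examples; it assumes runlike is installed and the container was started with docker run):

#!/bin/bash
# Recreate a container only when a newer image has actually been pulled.
CONTAINER=mycontainer
IMAGE=nginx:latest

RUN_CMD=$(runlike "$CONTAINER")                                # how it was originally started
RUNNING=$(docker inspect --format '{{.Image}}' "$CONTAINER")   # image ID currently in use
docker pull "$IMAGE"
LATEST=$(docker image inspect --format '{{.Id}}' "$IMAGE")     # newest local image ID

if [ "$RUNNING" != "$LATEST" ]; then
    docker stop "$CONTAINER" && docker rm "$CONTAINER"
    eval "$RUN_CMD"                                            # recreate with the original arguments
fi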

Continue reading

OwnTracks recorder in a container on Fedora with Let’s Encrypt and nginx

OwnTracks Recorder is a web application which maps locations over time. Generally, it connects to an MQTT server and subscribes to owntracks/+ topics for any location updates, but it also has a built-in function to receive updates over HTTP.

I have been using OwnTracks with MQTT for a while, but found it to be too unreliable on Android (it disconnects in the background and doesn’t reconnect nicely). Using HTTP is supposed to be more reliable, so this is how I set it up. The idea is to use OwnTracks on Android to post directly to the OwnTracks Recorder over HTTP instead of MQTT, and have the Recorder post the MQTT messages on our behalf using Lua scripts (for Home Assistant).
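
For reference, running the Recorder itself in a container looks roughly like this (assuming the upstream owntracks/recorder image, which at the time exposed its HTTP interface on port 8083 and kept its data under /store; the host path and flags here are examples, so check the image documentation for current options):

docker run -d --name owntracks-recorder \
  -p 127.0.0.1:8083:8083 \
  -v /var/lib/owntracks:/store \
  owntracks/recorder

nginx then terminates TLS with the Let’s Encrypt certificate and proxies requests through to that local port.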

Friends is an important feature (to let members of the family see each other’s locations) and fortunately it is supported in HTTP mode (though it requires a little more configuration).

Continue reading

Enabling Docker in Fedora 31 by reverting to cgroups v1

Fedora has switched to cgroups v2 by default now, but Docker doesn’t yet support it and so fails to start. If you want to use Docker then you need to revert cgroups to v1 by adding the systemd.unified_cgroup_hierarchy=0 kernel argument.

Add systemd.unified_cgroup_hierarchy=0 to the default GRUB config with sed.

sudo sed -i '/^GRUB_CMDLINE_LINUX/ s/"$/ systemd.unified_cgroup_hierarchy=0"/' /etc/default/grub

Now rebuild your GRUB config.

If you’re using BIOS boot then it’s this.

sudo grub2-mkconfig -o /boot/grub2/grub.cfg

If you’re running EFI, then it’s this.

sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

Now reboot and make sure Docker can start!
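
After rebooting, something like the following should confirm that Docker starts and show the cgroup details it is using (the grep is just a quick check of that part of the output):

sudo systemctl enable --now docker
sudo docker info | grep -i cgroup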