How to create Linux bridges and Open vSwitch bridges with NetworkManager

My virtual infrastructure Ansible role supports connecting VMs to both Linux and Open vSwitch bridges, but they must already exist on the KVM host.

Here is how to convert an existing Ethernet device into a bridge. Be careful if doing this on a remote machine with only one connection! Make sure you have some other way to log in (e.g. console), or maybe add additional interfaces instead.

Export interfaces and existing connections

First, export the device you want to convert so we can easily reference it later (e.g. eth1).

export NET_DEV="eth1"

Now list the current NetworkManager connections for your device exported above, so we know what to disable later.

sudo nmcli con |egrep -w "${NET_DEV}"

This might be something like System eth1 or Wired connection 1; let's export it too for later reference.

export NM_NAME="Wired connection 1"

Create a Linux bridge

Here is an example of creating a persistent Linux bridge with NetworkManager. It will take a device such as eth1 (substitute as appropriate) and convert it into a bridge. Note that we will be specifically giving it the device name of br0 as that’s the standard convention and what things like libvirt will look for.
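
Something like the following should do the trick with nmcli, using the variables exported earlier (a sketch; the down and up commands are chained on one line so a remote session isn't left stranded):

sudo nmcli con add type bridge ifname br0 con-name br0
sudo nmcli con add type bridge-slave ifname "${NET_DEV}" master br0
sudo nmcli con down "${NM_NAME}" ; sudo nmcli con up br0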

Continue reading

Accessing USB serial devices in Fedora Silverblue

One of the things I do a lot on my Fedora machines is talk to devices via USB serial. While a device is correctly detected at /dev/ttyUSB0 and owned by the dialout group, adding myself to that group doesn't work because the group can't be found. This is because under Silverblue there are two different group files (/usr/lib/group and /etc/group) with different content.

There are some easy ways to solve this, for example we can create the matching dialout group or write a udev rule. Let’s take a look!

On the host with groups

If you try to add yourself to the dialout group it will fail.

sudo gpasswd -a ${USER} dialout
gpasswd: group 'dialout' does not exist in /etc/group

Trying to re-create the group will also fail as it’s already in use.

sudo groupadd dialout -r -g 18
groupadd: GID '18' already exists

So instead, we can simply grab the entry from the OS group file and add it to /etc/group ourselves.

grep ^dialout: /usr/lib/group |sudo tee -a /etc/group

Now we are able to add ourselves to the dialout group!
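
Running the same command as before should now work; note that you'll need to log out and back in (or use newgrp) for the new group to apply.

sudo gpasswd -a ${USER} dialout
newgrp dialout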

Continue reading

Making dnf on Fedora Silverblue a little easier with bash aliases

Fedora Silverblue doesn’t come with dnf because it’s an immutable operating system and uses a special tool called rpm-ostree to layer packages on top instead.

Most terminal work is designed to be done in containers with toolbox, but I still do a bunch of work outside of a container. Searching for packages to install still requires dnf inside a container, as rpm-ostree does not have that function.

I add these two aliases to my ~/.bashrc file so that using dnf to search or install into the default container is possible from a regular terminal. This just makes Silverblue a little bit more like what I’m used to with regular Fedora.

cat >> ~/.bashrc << EOF
alias sudo="sudo "
alias dnf="bash -c '#skip_sudo'; toolbox -y create 2>/dev/null; toolbox run sudo dnf"
EOF

If the default container doesn't exist, toolbox creates it. Note that the alias for sudo has a space at the end; this tells bash to also check the next command word for alias expansion, which is what makes sudo work with aliases. The first part of the dnf alias (the no-op bash -c '#skip_sudo') is there to consume the sudo command, so that the toolbox commands after it always run as the regular user. Thus, both dnf and sudo dnf work the same way.

We need to source that file or run a new bash session to pick up the aliases.
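
For example:

source ~/.bashrc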

Continue reading

Fedora Silverblue is an amazing immutable desktop

I recently switched my regular Fedora 31 workstation over to the 31 Silverblue release. I’ve played with Project Atomic before and have been meaning to try it out more seriously for a while, but never had the time. Silverblue provided the catalyst to do that.

What this brings to the table is quite amazing and seriously impressive. The base OS is immutable and everyone's install is identical. This means quality can be improved as there are fewer combinations and it's easier to test. Upgrades to the next major version of Fedora are fast and secure. Instead of updating thousands of RPMs in-place, the new image is downloaded and the system reboots into it. As the underlying images don't change, it also offers full rollback support.

This is similar to how platforms like Chrome OS and Android work, but thanks to ostree it’s now available for Linux desktops! That is pretty neat.

It doesn’t come with a standard package manager like dnf. Instead, any packages or changes you need to perform on the base OS are done using the rpm-ostree command, which actually layers them on top.

And while technically you can install anything using rpm-ostree, ideally this should be avoided as much as possible (some low level apps like shells and libvirt may require it, though). Flatpak apps and containers are the standard way to consume packages. As these are kept separate from the base OS, it also helps improve stability and reliability.
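
As a quick sketch, layering a package and inspecting the result looks something like this (using libvirt from the example above; the layered package applies on the next reboot):

rpm-ostree install libvirt
rpm-ostree status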

Continue reading

Easy containers on Fedora with toolbox

The toolbox program is a wrapper for setting up containers on Fedora. It’s not doing anything you can’t do yourself with podman, but it does make using and managing containers simpler and easier. It comes by default on Silverblue, where it’s aimed at terminal apps and dev work, but you can try it on a regular Fedora workstation.

sudo dnf install toolbox

Creating containers

You can create just one container if you want, which will be called something like fedora-toolbox-32, or you can create separate containers for different things. Up to you. As an example, let’s create a container called testing-f32.

toolbox create --container testing-f32

By default toolbox uses the Fedora registry and creates a container which is the same version as your host. However, you can specify a different version if you need to, for example if you need a Fedora 30 container.

toolbox create --release f30 --container testing-f30

These containers are not yet running; they’ve just been created for you.
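
You can list them, and then start and enter one, like so (toolbox starts the container on demand):

toolbox list
toolbox enter --container testing-f32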

Continue reading

Custom WiFi enabled nightlight with ESPHome and Home Assistant

I built this custom night light for my kids as a fun little project. It’s pretty easy, so I thought someone else might be inspired to do something similar.

Custom WiFi connected nightlight

Hardware

The core hardware is just an ESP8266 module and an Adafruit NeoPixel Ring. I also bought a 240V bunker light and took the guts out to use as the housing, as it looked nice and had a diffuser (you could pick anything that you like).

Continue reading

Defining home automation devices in YAML with ESPHome and Home Assistant, no programming required!

Having built the core of my own “dumb” smart home system, I have been working on making it smart these past few years. As I’ve written about previously, the smart side of my home automation is managed by Home Assistant, which is an amazing, privacy focused open source platform. I’ve previously posted about running Home Assistant in Docker and in Podman.

Home Assistant, the privacy focused, open source home automation platform

I do have a couple of proprietary home automation products, including LIFX globes and Google Home. However, the vast majority of my home automation devices are ESP modules running open source firmware which connect to MQTT as the central protocol. I’ve built a number of sensors and lights and been working on making my light switches smart (more on that in a later blog post).

I already had experience with Arduino, so I started experimenting with this and it worked quite well. I then had a play with MicroPython and really enjoyed it, but then I came across ESPHome and it blew me away. I have since migrated most of my devices to ESPHome.

ESPHome provides simple management of ESP devices

ESPHome is smart in making use of PlatformIO underneath, but its beauty lies in the way it abstracts away the complexities of programming for embedded devices. In fact, no programming is necessary! You simply have to define your devices in YAML and run a single command to compile the firmware blob and flash a device. Loops, initialising and managing multiple inputs and outputs, reading and writing to I/O, PWM, functions and callbacks, connecting to WiFi and MQTT, hosting an AP, logging and more is taken care of for you. Once up, the devices support mDNS and unencrypted over the air updates (which is fine for my local network). It supports both Home Assistant API and MQTT (over TLS for ESP8266) as well as lots of common components. There is even an addon for Home Assistant if you prefer using a graphical interface, but I like to do things on the command line.

When combined with Home Assistant, new devices are automatically discovered and appear in the web interface. When using MQTT, the channels are set with the retain flag, so that the devices themselves and their last known states are not lost on reboots (you can disable this for testing).

That’s a lot of things you get for just a little bit of YAML!
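
As a rough sketch of what a definition might look like (the device name, board, SSID and broker address below are all placeholders, and the exact schema varies by ESPHome version):

esphome:
  name: nightlight
  platform: ESP8266
  board: d1_mini

wifi:
  ssid: "MySSID"
  password: "MyPassword"

mqtt:
  broker: 10.0.0.2

logger:

ota:

Compiling the firmware and flashing the device is then a single command (the CLI syntax has changed between ESPHome versions):

esphome nightlight.yaml run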

Continue reading

Resolving mDNS across VLANs with Avahi on OpenWRT

mDNS, or multicast DNS, is a way to discover devices on your network at the .local domain without any central DNS configuration (it’s also known as ZeroConf, Bonjour, etc). Fedora Magazine has a good article on setting it up in Fedora, which I won’t repeat here.

If you’re like me, you’re using OpenWRT with multiple VLANs to separate networks. In my case this includes my home automation (HA) network (VLAN 2) from my regular trusted LAN (VLAN 1). Various untrusted home automation products, as well as my own devices, go into the HA network (more on that in a later post).

In my setup, my OpenWRT router acts as my central router, connecting each of my networks and controlling access. My LAN can access everything in my HA network, but generally only established and related traffic is allowed back from HA to LAN. There are some exceptions, for example my Pi-hole DNS servers, which are accessible from all networks, but otherwise that’s the general setup.

With IPv4, mDNS communicates by sending IP multicast UDP packets to 224.0.0.251 with source and destination ports both using 5353. In order to receive requests and responses, your devices need to be running an mDNS service and also allow incoming UDP traffic on port 5353.
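
On Fedora devices, for example, opening that traffic up might look like this (assuming firewalld):

sudo firewall-cmd --permanent --add-service=mdns
sudo firewall-cmd --reload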

As multicast is link local, mDNS doesn’t work natively across routed networks. This prevents me from easily talking to my various HA devices from my LAN. In order to support mDNS across routed networks, you need a proxy in the middle to transparently send requests and responses back and forth. There are a few different options for a proxy, such as igmpproxy, but I prefer to use the standard Avahi server on my OpenWRT router.
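
The key piece on the router is Avahi’s reflector mode; a minimal sketch of the relevant section of /etc/avahi/avahi-daemon.conf looks like this:

[reflector]
enable-reflector=yes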

Continue reading

Updating OpenStack TripleO Ceph nodes safely one at a time

Part of the process when updating Red Hat’s TripleO based OpenStack is to apply the package and container updates, via the update run step, to the nodes in each Role (like Controller, CephStorage and Compute, etc). This is done in-place, before the ceph-upgrade (ceph-ansible) step, converge step and reboots.

openstack overcloud update run --nodes CephStorage

Rather than doing an entire Role straight up, however, I always update one node of that type first. This lets me make sure there are no problems (and fix them if there are) before moving on to the whole Role.
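
To do that, you can pass a single node name instead of the whole Role (the node name below is illustrative):

openstack overcloud update run --nodes overcloud-cephstorage-0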

I noticed recently when performing the update step on CephStorage role nodes that OSDs and OSD nodes were going down in the cluster. This was then causing my Ceph cluster to go into backfilling and recovering (norebalance was set).

We want all of these nodes to be done one at a time, as taking out more than one node at once can potentially make the Ceph cluster stop serving data (all VMs will freeze) until recovery finishes and the minimum number of copies is back in the cluster. If all three copies of some data go offline at the same time, it’s not going to be able to recover.

Continue reading

Using Ansible and dynamic inventory to manage OpenStack TripleO nodes

TripleO based OpenStack deployments use an OpenStack all-in-one node (undercloud) to automate the build and management of the actual cloud (overcloud) using native services such as Heat and Ironic. Roles are used to define services and configuration, which are then applied to specific nodes, for example, Controller, Compute and CephStorage, etc.

Although the install is automated, sometimes you need to run ad hoc tasks outside of the official update process. For example, you might want to make sure that all hosts are contactable, have a valid subscription (for Red Hat OpenStack Platform), restart containers, or maybe even apply custom changes or patches before an update. Also, when nodes are being rebooted during the update process, it can be useful to use an Ansible script to know when they’ve all come back, services are running and all containers are healthy, before re-enabling them.

Inventory script

To make this easy, we can use the TripleO Ansible inventory script, which queries the undercloud to get a dynamic inventory of the overcloud nodes. When using the script as an inventory source with the ansible command, however, you cannot pass arguments to it. If you’re managing a single cluster and using the standard stack name of overcloud, then this is not a problem; you can just call the script directly.
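
For example, assuming the default stack name, something like this should work from the undercloud (the inventory script path may vary between releases):

source ~/stackrc
ansible all -i /usr/bin/tripleo-ansible-inventory -m ping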

Continue reading