I built this custom night light for my kids as a fun little project. It’s pretty easy, so I thought someone else might be inspired to do something similar.
The core hardware is just an ESP8266 module and an Adafruit NeoPixel Ring. I also bought a 240V bunker light and took the guts out to use as the housing, as it looked nice and had a diffuser (you could pick anything that you like).
Having built the core of my own “dumb” smart home system, I have been working on making it smart these past few years. As I’ve written about previously, the smart side of my home automation is managed by Home Assistant, which is an amazing, privacy focused open source platform. I’ve previously posted about running Home Assistant in Docker and in Podman.
I do have a couple of proprietary home automation products, including LIFX globes and Google Home. However, the vast majority of my home automation devices are ESP modules running open source firmware which connect to MQTT as the central protocol. I’ve built a number of sensors and lights and been working on making my light switches smart (more on that in a later blog post).
I already had experience with Arduino, so I started experimenting with that on the ESP modules and it worked quite well. I then had a play with MicroPython and really enjoyed it, but then I came across ESPHome and it blew me away. I have since migrated most of my devices to ESPHome.
ESPHome is smart in making use of PlatformIO underneath, but its beauty lies in the way it abstracts away the complexities of programming for embedded devices. In fact, no programming is necessary! You simply define your devices in YAML and run a single command to compile the firmware blob and flash a device. Loops, initialising and managing multiple inputs and outputs, reading and writing to I/O, PWM, functions and callbacks, connecting to WiFi and MQTT, hosting an AP, logging and more are all taken care of for you. Once up, the devices support mDNS and unencrypted over the air updates (which is fine for my local network). It supports both the Home Assistant API and MQTT (over TLS for ESP8266) as well as lots of common components. There is even an addon for Home Assistant if you prefer using a graphical interface, but I like to do things on the command line.
When combined with Home Assistant, new devices are automatically discovered and appear in the web interface. When using MQTT, the topics are set with the retain flag, so that the devices themselves and their last known states are not lost on reboots (you can disable this for testing).
That’s a lot of things you get for just a little bit of YAML!
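To give a rough idea, here’s a minimal sketch of what a night light like this could look like in ESPHome, along with the single command to build and flash it. The board, pin, LED count and WiFi details below are placeholders for illustration, not my actual config.

# Write out a minimal ESPHome config (placeholder names and values)
cat > nightlight.yaml << 'EOF'
esphome:
  name: nightlight
  platform: ESP8266
  board: d1_mini

wifi:
  ssid: "MyWiFi"
  password: "MyWiFiPassword"

# Logging, OTA updates and the Home Assistant API (MQTT could be used instead)
logger:
api:
ota:

light:
  - platform: neopixelbus
    type: GRB
    variant: WS2812
    pin: GPIO3
    num_leds: 12
    name: "Kids Night Light"
EOF

# Compile the firmware and flash the device (serial the first time, then over the air)
# (newer ESPHome releases use: esphome run nightlight.yaml)
esphome nightlight.yaml run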
mDNS, or multicast DNS, is a way to discover devices on your network under the .local domain without any central DNS configuration (also known as Zeroconf, Bonjour, etc). Fedora Magazine has a good article on setting it up in Fedora, which I won’t repeat here.
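Once Avahi is running, querying devices is straightforward; for example (the device name nightlight.local is just an example):

# List all mDNS services visible on the local network
avahi-browse --all --terminate

# Resolve a specific .local hostname
avahi-resolve --name nightlight.local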
If you’re like me, you’re using OpenWRT with multiple VLANs to separate networks. In my case this separates my home automation (HA) network (VLAN 2) from my regular trusted LAN (VLAN 1). Various untrusted home automation products, as well as my own devices, go into the HA network (more on that in a later post).
In my setup, my OpenWRT router acts as my central router, connecting each of my networks and controlling access. My LAN can access everything in my HA network, but generally only established and related TCP traffic is allowed back from HA to LAN. There are some exceptions, for example my Pi-hole DNS servers, which are accessible from all networks, but otherwise that’s the general setup.
With IPv4, mDNS communicates by sending IP multicast UDP packets to 224.0.0.251, with source and destination ports both set to 5353. In order to receive requests and responses, your devices need to be running an mDNS service and also allow incoming UDP traffic on port 5353.
As multicast is link-local only, mDNS doesn’t work natively across routed networks. This prevents me from easily talking to my various HA devices from my LAN. In order to support mDNS across routed networks, you need a proxy in the middle to transparently send requests and responses back and forth. There are a few different options for a proxy, such as igmpproxy, but I prefer to use the standard Avahi server on my OpenWRT router.
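The key piece is Avahi’s reflector mode, plus firewall rules so the router accepts mDNS traffic from both networks. As a rough sketch (the bridge interface names below are examples and the Avahi package name varies depending on your OpenWRT build):

# /etc/avahi/avahi-daemon.conf on the router, only reflecting between
# the LAN and HA bridges (interface names here are examples)
cat > /etc/avahi/avahi-daemon.conf << 'EOF'
[server]
use-ipv4=yes
allow-interfaces=br-lan,br-ha

[reflector]
enable-reflector=yes
EOF

/etc/init.d/avahi-daemon restart

# Also allow incoming mDNS (UDP 5353) to the router from both zones,
# e.g. with traffic rules in /etc/config/firewall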
Part of the process when updating Red Hat’s TripleO based OpenStack is to apply the package and container updates, via the update run step, to the nodes in each Role (like Controller, CephStorage and Compute, etc). This is done in-place, before the ceph-upgrade (ceph-ansible) step, converge step and reboots.
openstack overcloud update run --nodes CephStorage
Rather than doing an entire Role straight up however, I always update one node of that type first. This lets me make sure there are no problems (and fix them if there are), before moving on to the whole Role.
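For example, to update just the first CephStorage node before tackling the rest (the node name below is whatever your deployment calls it):

openstack overcloud update run --nodes overcloud-cephstorage-0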
I noticed recently when performing the update step on CephStorage role nodes that OSDs and OSD nodes were going down in the cluster. This was then causing my Ceph cluster to go into backfilling and recovering (norebalance was set).
We want all of these nodes to be done one at a time, as taking more than one node out at once can potentially make the Ceph cluster stop serving data (all VMs will freeze) until it finishes recovering and has the minimum number of copies in the cluster. If all three copies of data go offline at the same time, it’s not going to be able to recover.
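Before taking a storage node down for updates, the usual approach is to set cluster flags so Ceph doesn’t start moving data around, and to check the cluster has settled before moving on to the next node. Roughly, from a monitor or controller node:

# Stop Ceph from marking OSDs out or rebalancing while a node is updated
ceph osd set noout
ceph osd set norebalance

# ...update the node, then wait until the cluster reports healthy again
ceph -s

# Once all nodes are done, clear the flags
ceph osd unset noout
ceph osd unset norebalance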
TripleO based OpenStack deployments use an OpenStack all-in-one node (undercloud) to automate the build and management of the actual cloud (overcloud) using native services such as Heat and Ironic. Roles are used to define services and configuration, which are then applied to specific nodes, for example Controller, Compute and CephStorage, etc.
Although the install is automated, sometimes you need to run ad hoc tasks outside of the official update process. For example, you might want to make sure that all hosts are contactable, have a valid subscription (for Red Hat OpenStack Platform), restart containers, or maybe even apply custom changes or patches before an update. Also, during the update process when nodes are being rebooted, it can be useful to use an Ansible script to know when they’ve all come back, services are running and all containers are healthy, before re-enabling them.
To make this easy, we can use the TripleO Ansible inventory script, which queries the undercloud to get a dynamic inventory of the overcloud nodes. When using the script as an inventory source with the ansible command however, you cannot pass arguments to it. If you’re managing a single cluster and using the standard stack name of overcloud, then this is not a problem; you can just call the script directly.
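On the undercloud, the inventory script can be passed straight to Ansible with -i; for example, to ping all CephStorage nodes (the script path below is where it normally lives on the undercloud):

# Run an ad hoc command against all nodes in the CephStorage Role,
# using the dynamic inventory script directly
source ~/stackrc
ansible -i /usr/bin/tripleo-ansible-inventory CephStorage -m ping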
Sets of virtual machines are connected to virtual bridges (e.g. virbr0 and virbr1) and, as these are isolated, each set can use the same subnet range and set of IPs. However, NATing becomes a problem because the host won’t know which VM to return the traffic to.
Each veth device acts like a patch cable and is actually made up of two network devices, one for each end (e.g. peer1-a and peer1-b). By adding those interfaces between bridges and/or namespaces, you create a link between them.
The network namespace is only used for NAT and is where the veth IPs are set; the other end acts like a patch cable without an IP. The VMs are only connected into their respective bridge (e.g. virbr0) and can talk to the network namespace over the veth patch.
We will use two pairs for each network namespace.
One (e.g. represented by veth1 below) which connects the virtual machine’s private network (e.g. virbr0 on 10.0.0.0/24) into the network namespace (e.g. net-ns1), where it sets an IP and will be the private network router (e.g. 10.0.0.1).
Another (e.g. represented by veth2 below) which connects the upstream provider network (e.g. br0 on 192.168.0.0/24) into the same network namespace where it sets an IP (e.g. 192.168.0.100).
Repeat the process for other namespaces (e.g. represented by veth3 and veth4 below).
By providing each private network with its own unique upstream routable IP and applying NAT rules inside each namespace separately, we can avoid any conflicts.
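Putting that together, the commands for the first namespace look roughly like this (names and addresses match the examples above; the provider gateway of 192.168.0.1 is an assumption):

# Create the namespace that will do the NATing for the first private network
ip netns add net-ns1

# First veth pair: virbr0 (private network) <-> net-ns1
ip link add veth1-a type veth peer name veth1-b
ip link set veth1-a master virbr0 up
ip link set veth1-b netns net-ns1

# Second veth pair: br0 (upstream provider network) <-> net-ns1
ip link add veth2-a type veth peer name veth2-b
ip link set veth2-a master br0 up
ip link set veth2-b netns net-ns1

# Inside the namespace: bring links up, set the IPs and default route
ip netns exec net-ns1 ip link set lo up
ip netns exec net-ns1 ip addr add 10.0.0.1/24 dev veth1-b
ip netns exec net-ns1 ip addr add 192.168.0.100/24 dev veth2-b
ip netns exec net-ns1 ip link set veth1-b up
ip netns exec net-ns1 ip link set veth2-b up
ip netns exec net-ns1 ip route add default via 192.168.0.1

# Enable forwarding and NAT the private network out the provider interface
ip netns exec net-ns1 sysctl -w net.ipv4.ip_forward=1
ip netns exec net-ns1 iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o veth2-b -j MASQUERADE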
I wanted a way to quickly spin different VMs up and down on my KVM dev box, to help with testing things like OpenStack, Swift, Ceph and Kubernetes. Some of my requirements were as follows:
Define everything in a markup language, like YAML
Manage VMs (define, stop, start, destroy and undefine) and apply settings as a group or individually
Support different settings for each VM, like disks, memory, CPU, etc
Support multiple drives and types, including Virtio, SCSI, SATA and NVMe
Create users and set root passwords
Manage networks (create, delete) and which VMs go on them
Mix and match Linux distros and releases
Use existing cloud images from distros
Manage access to the VMs including DNS/hosts resolution and SSH keys
Have a good set of defaults so it would work out of the box
Potentially support other architectures (like ppc64le or arm)
So I hacked together an Ansible role and example playbook. Setting guest states to running, shutdown, destroyed or undefined (to delete and clean up) is supported. It will also manage multiple libvirt networks, and guests can have different specs as well as multiple disks of different types (SCSI, SATA, Virtio, NVMe). With Ansible’s --limit option, any individual guest, a hostgroup of guests, or even a mix can be managed.
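Running it is just a normal playbook run; for example, to only touch the KVM host and one guest (the playbook and host names here are only illustrative):

# Manage everything defined in the inventory
ansible-playbook ./virt-infra.yml

# Or only the KVM host plus a single guest
ansible-playbook ./virt-infra.yml --limit kvmhost,guest-01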
Although Terraform with libvirt support is potentially a good solution, by using Ansible I can use that same inventory to further manage the guests, and I’ve also been able to configure the KVM host itself. All that’s really needed is a Linux host capable of running KVM, some guest images and a basic inventory; Ansible will do the rest (on supported distros).
CloudForms is Red Hat’s supported version of the upstream ManageIQ infrastructure management platform. It lets you see, manage and deploy to various platforms like OpenStack, VMware, RHEV, OpenShift and public clouds like AWS and Azure, with a single pane of glass view across them all. It has its own orchestration engine but also integrates with Ansible for automated deployments.
As best I can tell, the updated CloudForms Statement of Direction article (behind a paywall, sorry) shows that Red Hat is killing off support for non-Red Hat platforms like VMware, AWS, Azure, etc. The justification is to focus on open platforms, which I think means CloudForms will ultimately disappear entirely, with Red Hat focusing on OpenShift instead.
We made a strategic decision to focus our management strategy on the future — open, cloud-native environments that promote portability across on-premise, private and public clouds.
However, to me this is still a big blow to users of the platform, most of whom I’m sure will have at least some VMware to manage. Indeed, when implementing CloudForms at work and talking to Red Hat, they said that their most mature integration in CloudForms is with VMware.
According to the Red Hat article, CloudForms with full platform support is being embedded into IBM Cloud Pak for Multicloud Management and users are encouraged to “migrate your Red Hat CloudForms subscriptions to IBM Cloud Pak for Multicloud Management licenses.” Red Hat’s CloudForms Statement of Direction FAQ article lays out the migration path, which does confirm Red Hat will continue to support existing clients for the remainder of their subscription.
So in short, CloudForms from Red Hat is being crippled and will only support Red Hat products, which really means that users are being forced to buy IBM instead. Of course Red Hat is entitled to change their own products, but this move does seem curious when execs on both sides said they would remain independent. Maybe it’s better than killing CloudForms outright?
We can publicly say that all our products will survive in their current form and continue to grow. We will continue to support all our products; we’re separate entities and we’re going to have separate contracts, and there is no intention to de-emphasise any of our products and we’ll continue to invest heavily in it.
Sometimes when you’re using KVM guests to test something, perhaps like a Ceph or OpenStack Swift cluster, it can be useful to have SSD and NVMe drives. I’m not talking about passing physical drives through, but rather emulating them.
QEMU supports emulating NVMe drives as arguments on the command line, but this is not yet exposed to tools like virt-manager. That means you can’t just add a new drive of type nvme to your virtual machine XML definition; however, you can add the qemu arguments to your XML. It also means that the NVMe drives will not show up as drives in tools like virt-manager, even after you’ve added them with qemu. Still, it’s fun to play with!
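To give an idea of what those arguments look like, here’s a plain QEMU invocation with an emulated NVMe drive (paths, sizes and the serial are just examples); in a libvirt guest the same -drive and -device arguments go into the XML via the qemu:commandline element.

# Create a backing image for the emulated NVMe drive
qemu-img create -f qcow2 /var/lib/libvirt/images/nvme-test.qcow2 10G

# Boot a guest with the NVMe drive attached
qemu-system-x86_64 \
  -machine q35,accel=kvm -m 2048 \
  -drive file=/var/lib/libvirt/images/guest.qcow2,if=virtio \
  -drive file=/var/lib/libvirt/images/nvme-test.qcow2,format=qcow2,if=none,id=nvme1 \
  -device nvme,drive=nvme1,serial=nvme-1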
Running something in a container using Docker or Podman is cool, but maybe you want an automated way to always run the latest container? Using the :latest tag alone does not do this; that just pulls whatever the latest image is at the time. You could have a cron job that always pulls the latest image and restarts the container, but if there’s no update you’d have an outage for no reason.
It’s not too hard to write a script to pull the latest container and restart the service only if required, then tie that together with a systemd timer.
To restart a container you need to know how it was started. If you have only one container then you could just hard-code it, however it gets more tricky to manage if you have a number of containers. This is where something like runlike can help!
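As a rough sketch of the idea (using Docker and a hypothetical container called myapp; the same approach works with Podman), the script only restarts the container when the pulled image actually differs from the one that’s running:

#!/bin/bash
# Container to keep up to date (name is just an example)
container=myapp

# The image it was started from, and the image ID it is currently running
image=$(docker inspect --format '{{.Config.Image}}' "${container}")
running=$(docker inspect --format '{{.Image}}' "${container}")

# Pull the latest copy of that image and get its ID
docker pull "${image}"
latest=$(docker inspect --format '{{.Id}}' "${image}")

# Only restart the container if the image has actually changed
if [ "${running}" != "${latest}" ]; then
    # runlike reconstructs the original "docker run" command for us
    run_cmd=$(runlike "${container}")
    docker stop "${container}"
    docker rm "${container}"
    eval "${run_cmd}"
fi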