Part of the process when updating Red Hat’s TripleO based OpenStack is to apply the package and container updates to the nodes in each Role (such as Controller, CephStorage and Compute) via the update run step. This is done in-place, before the ceph-upgrade (ceph-ansible) step, the converge step and the reboots.
openstack overcloud update run --nodes CephStorage
Rather than updating an entire Role in one go, however, I always update a single node of that type first. This lets me make sure there are no problems (and fix them if there are) before moving on to the whole Role.
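As a sketch of that single-node approach, the update run step can be pointed at one node instead of the whole Role (the node name below is hypothetical; list your actual nodes with `openstack server list` on the undercloud):

```shell
# Update just one CephStorage node first to verify the process works,
# before running the update against the entire Role.
# "overcloud-cephstorage-0" is an assumed node name; substitute your own.
openstack overcloud update run --nodes overcloud-cephstorage-0
```

Once that node updates cleanly and the cluster is healthy again, the same command with the Role name covers the rest.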
I noticed recently when performing the update step on CephStorage role nodes that OSDs and OSD nodes were going down in the cluster. This was then causing my Ceph cluster to go into backfilling and recovering (norebalance was set).
We want all of these nodes to be updated one at a time, because taking more than one node out simultaneously can make the Ceph cluster stop serving data (all VMs will freeze) until it finishes recovering and has the minimum number of copies back in the cluster. If all three copies of some data go offline at the same time, the cluster will not be able to recover them at all.
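A minimal sketch of how to enforce that pause between nodes, assuming you can run the ceph CLI with admin privileges (on a Controller or Ceph node, or via SSH from the undercloud):

```shell
# After updating each Ceph node, block until the cluster reports
# HEALTH_OK again before touching the next node. The 30 second poll
# interval is arbitrary; adjust to taste.
until sudo ceph health | grep -q HEALTH_OK; do
    echo "waiting for Ceph cluster to recover..."
    sleep 30
done
```

Watching `ceph -s` in another terminal during the wait also shows backfill and recovery progress directly.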
Continue reading Updating OpenStack TripleO Ceph nodes safely one at a time
TripleO based OpenStack deployments use an OpenStack all-in-one node (the undercloud) to automate the build and management of the actual cloud (the overcloud) using native services such as Heat and Ironic. Roles are used to define services and configuration, which are then applied to specific nodes (for example, Controller, Compute and CephStorage).
Although the install is automated, sometimes you need to run ad hoc tasks outside of the official update process. For example, you might want to make sure that all hosts are contactable, have a valid subscription (for Red Hat OpenStack Platform), restart containers, or apply custom changes or patches before an update. Also, during the update process when nodes are being rebooted, an Ansible script can tell you when they have all come back, all services are running and all containers are healthy, before you re-enable them.
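A couple of such ad hoc checks might look like this, assuming the TripleO dynamic inventory script is at /usr/bin/tripleo-ansible-inventory on the undercloud (its usual location on RHOSP):

```shell
# Confirm every overcloud node is reachable over SSH.
ansible -i /usr/bin/tripleo-ansible-inventory all -m ping

# Check subscription status on all nodes (requires privilege escalation).
ansible -i /usr/bin/tripleo-ansible-inventory all -b \
    -m command -a "subscription-manager status"
```

The same pattern works for any module, such as restarting a container or grepping a log, across a whole Role at once.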
To make this easy, we can use the TripleO Ansible inventory script, which queries the undercloud to get a dynamic inventory of the overcloud nodes. When using the script as an inventory source with the ansible command, however, you cannot pass arguments to it. If you’re managing a single cluster and using the standard stack name of overcloud, then this is not a problem; you can just call the script directly.
Continue reading Using Ansible and dynamic inventory to manage OpenStack TripleO nodes
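If your stack has a non-default name, one common workaround (a sketch, assuming your version of the script supports a --stack option) is a small executable wrapper that bakes the argument in, since ansible itself cannot pass arguments to an inventory script:

```shell
#!/bin/bash
# Hypothetical wrapper, saved e.g. as ~/inventory-mycloud.sh and made
# executable with chmod +x. "mycloud" is an assumed stack name.
exec /usr/bin/tripleo-ansible-inventory --stack mycloud "$@"
```

You would then point ansible at the wrapper instead, e.g. `ansible -i ~/inventory-mycloud.sh all -m ping`.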