Archive for the 'FOSS' Category

Signal Return Orientated Programming attacks

When a process receives a signal, the kernel suspends it and stores its state in a sigframe which is placed on the stack. The kernel then calls the appropriate signal handler code and, after a sigreturn system call, reads the sigframe off the stack, restores the state and resumes the process. However, by crafting a fake sigframe, we can trick the kernel into restoring attacker-controlled state and executing something else.

My friend Rashmica, an intern at OzLabs, has written an interesting blog post about this for some work she’s doing with the POWER architecture in Linux.

TRIM on LVM on LUKS on SSD, revisited

A few years ago I wrote about enabling trim on an SSD that was running with LVM on top of LUKS. Since then things have changed slightly, a few times.

With Fedora 24 you no longer need to edit the /etc/crypttab file and rebuild your initramfs. Now systemd supports a kernel boot argument rd.luks.options=discard which is the only thing you should need to do to enable trim on your LUKS device.

Edit /etc/default/grub and add the rd.luks.options=discard argument to the end of GRUB_CMDLINE_LINUX, e.g.:
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-de023401-ccec-4455-832bf-e5ac477743dc rd.luks.uuid=luks-a6d344739a-ad221-4345-6608-e45f16a8645e rhgb quiet rd.luks.options=discard"
GRUB_DISABLE_RECOVERY="true"

Next, rebuild your grub config file:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

If you’re using LVM, the setting is the same as in the previous post. Edit the /etc/lvm/lvm.conf file and enable the issue_discards option:
issue_discards = 1

If you’re using LVM you will also need to rebuild your initramfs so that the updated lvm.conf is included:
sudo dracut -f

Reboot and try fstrim:
sudo fstrim -v /

Also thanks to systemd, you can just enable the fstrim timer (which works like a cron job) to run this automatically:
sudo systemctl enable fstrim.timer
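If you want to check when the timer will next run, you can ask systemd:
sudo systemctl list-timers fstrim.timer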

Running scripts before and after suspend with systemd

I’ve had this question a few times, so it’s probably a good candidate for my blog.

If you want to do something before you suspend, like unload a module or run some script, it’s quite easy with systemd. Similarly, you can easily do something when the system resumes (like reload the module).

The details are in the systemd-suspend man page:
man systemd-suspend.service

Simply put an executable script of any name under /usr/lib/systemd/system-sleep/ that checks whether the first argument is pre (for before the system suspends) or post (after the system wakes from suspend).

If it is pre, then do the thing you want before suspend; if it’s post, then do the thing you want after resume. Simple!

Here’s a useless example:
#!/bin/sh
if [ "${1}" = "pre" ]; then
  # Do the thing you want before suspend here, e.g.:
  echo "we are suspending at $(date)..." > /tmp/systemd_suspend_test
elif [ "${1}" = "post" ]; then
  # Do the thing you want after resume here, e.g.:
  echo "...and we are back from $(date)" >> /tmp/systemd_suspend_test
fi
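You can test it with a quick suspend/resume cycle; once the machine wakes up again, the file written by the example above should contain both lines:
sudo systemctl suspend
cat /tmp/systemd_suspend_test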

Automatic power saving on a Linux laptop with PowerTOP and systemd

If you have a laptop and want to get more battery life, you may already know about a handy tool from Intel called PowerTOP.

PowerTOP not only monitors your system for wakeups and power usage, it also has a Tunables section where you can enable various power saving tweaks. Toggling one such tweak in the PowerTOP interface will show you the specific Linux command it ran in order to enable or disable it.

PowerTOP Tweaks

Furthermore, it takes an argument --auto-tune which lets you enable all of the power saving measures it has detected.
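You can also run that directly from a terminal to apply all of the detected tweaks in one go:
sudo powertop --auto-tune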

The package on Fedora comes with a systemd service, so enabling power saving on boot is simple; just enable the service:
sudo systemctl enable powertop

I noticed, however, that putting some devices into low-power mode on my laptop has unwanted side effects. In my case, the audio system outputs white noise and the USB mouse and keyboard are too slow to wake up (which I find annoying when I want to quickly click on, or type something).

So I took note of the Linux commands that PowerTOP was running for the USB peripherals and the audio device when it disabled power saving. The plan is to use the power of powertop --auto-tune but then turn power saving back off for those specific devices.

I created an executable script under /usr/local/sbin/powertop-fixups.sh to disable power saving on those devices:
#!/bin/sh
# Don't do powersave on intel sound, we get static noise
echo '0' > '/sys/module/snd_hda_intel/parameters/power_save'
 
# Don't suspend USB keyboard and mouse
# This takes time to wake up which is annoying
echo 'on' > '/sys/bus/usb/devices/1-2.1.1/power/control'
echo 'on' > '/sys/bus/usb/devices/1-2.2/power/control'
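If you create the file with a text editor, remember to make it executable, e.g.:
sudo chmod +x /usr/local/sbin/powertop-fixups.sh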

Next, I just needed to tell systemd to start my script on boot, requiring the powertop service and starting after it.

I created the following service file at /etc/systemd/system/powertop-fixups.service:
[Unit]
Description=PowerTOP fixups
Requires=powertop.service
After=powertop.service
 
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/powertop-fixups.sh
 
[Install]
WantedBy=multi-user.target

Then all I had to do was reload systemd and enable it! Note that I don’t need to enable powertop.service itself; systemd will take care of that for me.

sudo systemctl daemon-reload
sudo systemctl enable powertop-fixups
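After the next boot you can sanity-check that the fixups ran; for example, the sound card’s power_save parameter should read 0 again:
sudo systemctl status powertop-fixups.service
cat /sys/module/snd_hda_intel/parameters/power_save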

Now I get the benefit of most of the power savings from PowerTOP, without the settings that were annoying.

Providing git:// (protocol) access to repos using GitLab

I mirror a bunch of open source projects in a local GitLab instance which works well.

By default, GitLab only provides https and ssh access to repositories, which can be a pain for continuous integration (especially if you were to use self-signed certificates).

However, it’s relatively easy to configure your GitLab server to run a git daemon and provide read-only access to anyone on any repos that you choose.

On my CentOS box, I installed the git-daemon package, which includes the systemd git@.service and git.socket unit files.
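If you don’t have it already, installing it on CentOS 7 should be as simple as:
[root@gitlab ~]# yum install git-daemon

I copied those two unit files to make a new service called git-daemon, like so: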

[root@gitlab ~]# cp /usr/lib/systemd/system/git@.service \
/etc/systemd/system/git-daemon@.service
[root@gitlab ~]# cp /usr/lib/systemd/system/git.socket \
/etc/systemd/system/git-daemon.socket
[root@gitlab ~]# systemctl daemon-reload

Then I edited the new git-daemon@.service file to point it to the git repositories, /var/opt/gitlab/git-data/repositories/, which is the default location when using the GitLab omnibus package.
[Unit]
Description=Git Repositories Server Daemon
Documentation=man:git-daemon(1)
 
[Service]
User=git
ExecStart=-/usr/libexec/git-core/git-daemon \
--base-path=/var/opt/gitlab/git-data/repositories/ \
--syslog --inetd --verbose
StandardInput=socket

Now start and enable the service:
[root@gitlab ~]# systemctl start git-daemon.socket
[root@gitlab ~]# systemctl enable git-daemon.socket

As per the git-daemon.socket systemd file, systemd should now be listening on port 9418 (the standard git protocol port) for connections, however you may need to open the port through the firewall:

[root@gitlab ~]# firewall-cmd --permanent --zone=public --add-port=9418/tcp
[root@gitlab ~]# systemctl reload firewalld

Now, to enable git:// access to any given repository, you need to touch a file called git-daemon-export-ok in that repo’s git dir (it should be owned by your gitlab user, which is probably git). For example, a mirror of the Linux kernel:

-sh-4.2$ touch /var/opt/gitlab/git-data/repositories/mirror/linux.git/git-daemon-export-ok

From your local machine, test your git:// access!

[12:15 chris ~]$ git ls-remote git://gitlab/mirror/linux.git |head -1
46e595a17dcf11404f713845ecb5b06b92a94e43 HEAD

Success!

If you wanted to, you could set up a cron job to make sure that any new mirrors that come along are exported without manual intervention.

First, create an executable script somewhere, like /usr/local/bin/export_git-daemon_repos.sh (note, this excludes any wiki git repos).

#!/bin/bash
 
set -eo pipefail
 
if [[ "$USER" != "git" ]]; then
    echo "Only run this as the git user."
    exit 1
fi
 
cd /var/opt/gitlab/git-data/repositories/mirror
for x in $(ls -d * | grep -v '\.wiki\.git') ; do
    pushd "${x}"
    if [[ ! -e "git-daemon-export-ok" ]]; then
        touch git-daemon-export-ok
    fi
    popd
done

Then add it as a cron job for the git user on your gitlab server to run every two hours, or whatever suits you, e.g.:
-sh-4.2$ crontab -l
0 */2 * * * /usr/local/bin/export_git-daemon_repos.sh >/dev/null

Mirroring git repositories (to GitLab)

There are several open source git repos that I mirror in order to provide speedy local access. Pushing those to a local GitLab server also means people can easily fork them and carry on.

On the GitLab server I have a local POSIX mrmirror user who also owns a group called mirror in GitLab (this user cannot be called “mirror” as the user and group would conflict in GitLab).

In mrmirror’s home directory there’s a ~/git/mirror directory which stores all the repos that I want to mirror. The mrmirror user also has a cronjob that runs every few hours to pull down any updates and push them to the appropriate project in the GitLab mirror group.
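Note that for those pushes over SSH to localhost to work (assuming a standard GitLab SSH setup), the mrmirror user needs an SSH key pair whose public key has been added to a GitLab account with push access to the mirror group. If you don’t have one yet, generate it as the mrmirror user, e.g.:
[mrmirror@gitlab ~]$ ssh-keygen -t rsa -b 4096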

So for example, to mirror Linux, I first create a new project in the GitLab mirror group called linux (this would be accessed at something like https://gitlab/mirror/linux.git).

Then as the mrmirror user on GitLab I run a mirror clone:
[mrmirror@gitlab ~]$ cd ~/git/mirror
[mrmirror@gitlab mirror]$ git clone --mirror git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Then the script takes care of future updates and pushes them directly to GitLab via localhost:
#!/bin/bash
 
# Setup proxy for any https remotes
export http_proxy=http://proxy:3128
export https_proxy=http://proxy:3128
 
cd ~/git/mirror
 
for x in $(ls -d *.git) ; do
    pushd "${x}"
    git remote prune origin
    git remote update -p
    git push --mirror "git@localhost:mirror/${x}"
    popd
done
 
echo $(date) > /tmp/git_mirror_update.timestamp

That’s managed by a simple cronjob that the mrmirror user has on the GitLab server:
[mrmirror@gitlab mirror]$ crontab -l
0 */4 * * * /usr/local/bin/git_mirror_update.sh

And that seems to be working really well.

Configuring Postfix to forward emails via localhost to secure, authenticated GMail

It’s pretty easy to configure postfix on a local Linux box to forward emails via an external mail server. This way you can just send via localhost in your programs or any system daemons and the rest is automatically handled for you.

Here’s how to forward via GMail using authentication and encryption on Fedora (23 at the time of writing). You should consider enabling two-factor authentication on your gmail account and generating an app password specifically for postfix.

Install packages:
sudo dnf install cyrus-sasl-plain postfix mailx

Basic postfix configuration:
#Only listen on IPv4, not IPv6. Omit if you want IPv6.
sudo postconf inet_protocols=ipv4
 
#Relay all mail through to TLS enabled gmail
sudo postconf relayhost=[smtp.gmail.com]:587
 
#Use TLS encryption for sending email through gmail
sudo postconf smtp_use_tls=yes
 
#Enable authentication for gmail
sudo postconf smtp_sasl_auth_enable=yes
 
#Use the credentials in this file
sudo postconf smtp_sasl_password_maps=hash:/etc/postfix/sasl_passwd
 
#This file has the certificate to trust gmail encryption
sudo postconf smtp_tls_CAfile=/etc/ssl/certs/ca-bundle.crt
 
#Require authentication to send mail
sudo postconf smtp_sasl_security_options=noanonymous
sudo postconf smtp_sasl_tls_security_options=noanonymous

By default postfix listens only on localhost, which is probably what you want. If for some reason you don’t, you could change the inet_interfaces parameter in the config file, but be warned that anyone on your network (or potentially the Internet, if it’s a public address) could then send mail through your system. You may also want to consider using TLS on your postfix server.

By default, postfix sets myhostname to your fully-qualified domain name (check with hostname -f), but if you need to change this for some reason you can. For our setup it’s not really necessary because we’re only relaying email out through GMail, not accepting mail for local domains.
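For example, if you did decide to change either of those, postconf can set them in the same way as the settings above (the values here are just placeholders):
sudo postconf inet_interfaces=all
sudo postconf myhostname=mybox.example.com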

Check that our configuration looks good:
sudo postconf -n
sudo postfix check

Create a password file using a text editor:
sudoedit /etc/postfix/sasl_passwd

The content should be in this form (the brackets are required; just substitute your own Gmail address and password):
[smtp.gmail.com]:587 username@gmail.com:password

Hash the password for postfix:
sudo postmap /etc/postfix/sasl_passwd

Tail the postfix log:
sudo journalctl -f -u postfix.service &

Start the service (you should see it start up in the log):
sudo systemctl start postfix

Send a test email, replace username@gmail.com with your real email address:
echo "This is a test." | mail -s "test message" username@gmail.com

You should see the email go through the journalctl log and be forwarded, something like:
Feb 29 04:32:51 hostname postfix/smtp[4115]: 87BE620221: to=, relay=smtp.gmail.com[209.85.146.108]:587, delay=1.9, delays=0.04/0.06/0.55/1.3, dsn=2.0.0, status=sent (250 2.0.0 OK 1456720371 m32sm102235580ksj.52 - gsmtp)

Permanently setting SELinux context on files

I’m sure there are lots of howtos on the Internet for this, but…

Say you are running a web server like nginx and your log files are in a non-standard location. You may have problems starting the service because SELinux blocks nginx from reading or writing those files.

You can set the context of these files so that nginx will be happy:
[user@server ~]$ sudo chcon -Rv --type=httpd_log_t /srv/mydomain.com/logs/

That’s only temporary however, and the original context will be restored if you run restorecon or relabel your filesystem.

So you can do this permanently using the semanage command, like so:

[user@server ~]$ sudo semanage fcontext -a -t httpd_log_t "/srv/mydomain.com/logs(/.*)?"

Now you can run the standard restorecon command to restore the label and it will apply the new context you set above.
[user@server ~]$ sudo restorecon -rv /srv/
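To double-check, you can list the local rule you added and look at the labels on the files themselves:
[user@server ~]$ sudo semanage fcontext -l | grep mydomain
[user@server ~]$ sudo ls -Z /srv/mydomain.com/logs/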

Trusting a self-generated CA system-wide on Fedora

Say you’re using FreeIPA (or perhaps you’ve generated your own CA) and you want to have your machines trust it. Well in Fedora you can run the following command against the CA file:

# trust anchor rootCA.pem
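To confirm the CA made it into the system trust store, you can list the trust anchors and grep for its name (replace 'My Root CA' with whatever your CA’s label actually is):
# trust list | grep -i 'My Root CA'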

Building a Mini-ITX NAS? Don’t buy a Silverstone DS380 case.

Edit: I made some changes which have dropped the temps to around 40 degrees at idle (haven’t tested at load yet). The case has potential, but I still think it’s slightly too cramped and the airflow is not good enough.

Here’s what I changed:

  • Rearranged the drives to leave a gap between each one, which basically limits the unit to 4 drives instead of 8
  • Inverted the PSU, as per a suggestion from Dan, so that it helps to draw air through the case. The default orientation has the PSU draw air from outside and bypass the case.
  • Plugged the rear and side fans directly into the PSU molex connector, rather than through mainboard and rear of hard drive chassis

So I’m building a NAS (running Fedora Server) and thought that the Silverstone DS380 case looked great. It has 8 hot-swappable SATA bays, claims decent cooling with filters, and comes in a neat form factor.


It requires an SFX PSU, but there are some with enough juice on the 12V rail (avoid the SilverStone SX500-LG though, it’s slightly too long), so that’s not a major problem (although I would prefer standard ATX).

So I got one to run a low-power i3, a C226 chipset mainboard and five HGST 3TB NAS drives. Unfortunately the cooling through the drives is pretty much non-existent. The two fans on the side draw air in but blow onto the hot-swap chassis, and nothing really draws air through it.

As a result, many of the drives run around 65 degrees Celsius at idle (tested overnight) which is already outside of the drives’ recommended temperature range of 0-60 degrees.
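If you want to check your own drive temperatures, smartctl from the smartmontools package will report them; substitute the appropriate device for /dev/sda:
sudo smartctl -A /dev/sda | grep -i temperature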

I’ve replaced the case with my second choice, the Fractal Design Node 304, and the drives at idle all sit at around 35 degrees.


It has two smaller fans at the front to bring air directly over the drives and a larger one at the rear, with a manual L/M/H speed controller for all three on the rear of the case. As a bonus, it uses a standard ATX power supply and has plenty of room for it.

The only downside I’ve found so far is the lack of hot-swap, but my NAS isn’t mission-critical so that’s not a deal breaker for me.

Your mileage might vary, but I won’t buy the DS380 for a NAS again, unless it’s going to run full of SSDs or something (or I heavily mod the case). It’s OK for a small machine without a bunch of disks though (shame!), and that’s what I’ve re-purposed it for now.

-c