Server Upgraded to Debian Bullseye

13 Sep 2021
11 minute read

I’ve spent this week upgrading my server to the latest Debian release (bullseye).

Although it’s possible to upgrade a system in-place with apt full-upgrade, whenever there’s a major release I take the opportunity to rebuild my server from scratch. I start with a completely fresh install of Debian 11 and copy my custom server config over to it, making adjustments as I go wherever Debian 11 has changed things. This has two main benefits:

  • I can clean out any accumulated cruft in my system. Everything is fresh and new.
  • It gives me a chance to test my backups.

I ran a server before this one, and I never upgraded it. Eventually I hit an issue that required an upgrade, and it would have taken so much work that I abandoned the machine entirely. Now I keep everything as up to date as possible, which helps the whole system run smoothly.

Passenger to Puma

The first issue I ran into was Passenger. Passenger is the app server I use; it sits between my Rails app and the Nginx web server. I like Passenger because it integrates nicely with Nginx. It’s just a plugin, and in practice I don’t even have to think about Passenger: whenever I make a change I can simply restart Nginx and all my changes are in place.

However, because it’s a plugin, it has to be compiled for each specific version of Nginx. I was surprised when I went to the Passenger website to find that they didn’t yet have a version compiled for Debian 11 Bullseye. I waited a few days, but it looked like they weren’t going to have it any time soon (it’s still not there as of today, almost a month after the Debian release). So I decided to replace it and use Puma as my application server.

Puma makes things a little more complicated because it runs as a separate process. But it does integrate with systemd, the somewhat controversial replacement for init (more on that later). With the proper gems in place, Puma will automatically pick up the sockets it needs to communicate with Nginx.

I needed to write the systemd unit files for Puma. The puma.socket file defines which sockets it listens on:

[Unit]
Description=Puma HTTP Server Accept Sockets

[Socket]
ListenStream=0.0.0.0:9292
ListenStream=0.0.0.0:9293

# Socket options matching Puma defaults
NoDelay=true
ReusePort=true
Backlog=1024

[Install]
WantedBy=sockets.target

And the puma.service file, which actually runs Puma:

[Unit]
Description=Puma HTTP Server
After=network.target

Requires=puma.socket

[Service]
# Puma supports systemd's `Type=notify` and watchdog service monitoring,
# if the sd_notify gem (https://github.com/agis/ruby-sdnotify) is installed
Type=notify

# If your Puma process locks up, systemd's watchdog will restart it within seconds.
WatchdogSec=10

# Preferably configure a non-privileged user
User=deploy

# The path to your application code root directory.
WorkingDirectory=/home/deploy/reiterate-production
Environment=RAILS_ENV=production

# Systemd requires absolute paths
ExecStart=/home/deploy/.rvm/bin/rvm . do /home/deploy/reiterate-production/sbin/puma -C /home/deploy/etc/puma/production.rb

Restart=always

The tricky part there is the ExecStart line. There are several examples of Puma unit files for systemd on the web, but none of them worked for me. The issue is that Puma needs a very specific environment to run under. Normally, if I were just running Puma on the command line, I could start it up with bundle exec puma and it would just work. But there are problems with bundler handing off the open sockets to the app, so instead I generated a binstub for puma and run that directly.
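For reference, generating that binstub is a one-liner (a sketch; bundler’s binstubs command takes a --path option, and sbin/ matches the path in the ExecStart line above):

cd /home/deploy/reiterate-production
# Generate sbin/puma so systemd can exec it directly, without bundle exec
bundle binstubs puma --path sbin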

It still didn’t work though, because systemd sets the process up as root, even if you tell it to run as a different User (I believe setting the user just drops the access privileges but doesn’t actually source that user’s environment). Because the binstub specifies #!/usr/bin/env ruby, it runs the system default Ruby, which conflicts with the gems that were compiled against the Ruby in my production environment. To fix that, I need to run rvm, which sets up the environment to pick up the right Ruby.

Now that Puma can be started and managed by systemd, I needed to change the Nginx config to run as a reverse proxy. This is some overhead that I didn’t need to worry about when I had Passenger. But getting the right config wasn’t much of a problem at all.

upstream puma {
  server 127.0.0.1:9292;
  server 127.0.0.1:9293;
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name reiterate-app.com reiterate.app;

  ...
  location / {
    try_files $uri @backend;
  }

  location @backend {
    proxy_pass http://puma;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

This is just the relevant portion of the Nginx config, and it’s pretty simple. I give Nginx the ports that Puma is listening on, and for any request it checks whether the file actually exists; if not, it hands the request off to Puma, which passes it on to my Rails app.

The only other difference is that when I make a change to my Rails app, I need to systemctl restart puma instead of restarting Nginx.

As a bonus, I can remove passenger from my sources.list so my setup is a little more pure-Debian.

Sublime Text

I do all my editing with Sublime Text. But most of my editing is done on my MacBook, and then I use Git to push the files to the server. When doing server management like this, though, I tend to use vi to edit files; I’d never tried to edit them remotely. It turns out that setting that up was pretty simple. There’s a Sublime plugin, RemoteSubl, that pairs with a simple shell script, which I installed at /usr/local/bin/rsubl. Then I added an entry to my .ssh/config:

Host production
    HostName reiterate-app.com
    RemoteForward 52698 localhost:52698

And now after I ssh production I can simply rsubl file.txt and it pops up in a Sublime Text window for me to edit. Very nice!
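The shell script itself is just an rmate-style client; getting it in place was a download-and-chmod affair, something like this (a sketch, assuming the widely used rmate script from https://github.com/aurora/rmate):

# Install an rmate-compatible client under the name rsubl
curl -Lo /usr/local/bin/rsubl https://raw.githubusercontent.com/aurora/rmate/master/rmate
chmod +x /usr/local/bin/rsubl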

Backups

I have three backup systems in place for my production server.

  • All code is kept under source control with Git. The code for the main Rails app that drives the server and API is in one repository, and the code for Authorio is in another. Both are kept on my laptop, which I consider the “main” repository, and I push code out to production when it’s ready. (Authorio is open source, and a copy is synced to GitHub as well.)
  • The database is backed up to box.com. I have a script that does a complete dump, zips it, encrypts it, and uploads it to box.com; a sketch of that script appears after this list. I looked at a couple of different third-party storage solutions before settling on Box. They were free (for the small amounts of data I’m using) and their API was documented well enough that I could set up an automated backup with it.
  • Then there’s “everything else”: all of the configuration and custom scripts that make the server run. Typically for a cloud server like this the solution is just to do a whole-disk-image backup, via either the cloud provider (Linode has a nice backup service) or some other third party.

    I went a different route. For every customization I’ve made, I’ve tracked the affected files. I have a list of all the files I need to recreate the server, and I simply tar them up into a small (50 KB) file.

    The advantage of this method is that it’s much more efficient than blindly backing up every file on the server. It also gives me a better understanding of how all the various pieces of software fit together. And when I want to upgrade to a new version of the operating system, it’s easier to apply my changes to a fresh base OS than it would be to merge in a full-disk backup (or to upgrade in place).
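As promised above, here’s a minimal sketch of what a database backup script like that could look like, assuming PostgreSQL and gpg; the database name, key ID, and Box token are placeholders, and the endpoint is Box’s documented multipart upload API:

#!/bin/sh
# Dump, compress, encrypt, and ship the database to box.com
set -e
STAMP=$(date +%Y-%m-%d)
DUMP="/tmp/db-backup-$STAMP.sql.gz.gpg"

# pg_dump -> gzip -> gpg, all streamed, so nothing unencrypted hits the disk
pg_dump reiterate_production | gzip | gpg --encrypt --recipient "$BACKUP_KEY_ID" > "$DUMP"

# Box's multipart upload endpoint; parent id 0 is the root folder
curl -s -X POST "https://upload.box.com/api/2.0/files/content" \
    -H "Authorization: Bearer $BOX_TOKEN" \
    -F attributes="{\"name\":\"db-backup-$STAMP.sql.gz.gpg\",\"parent\":{\"id\":\"0\"}}" \
    -F file=@"$DUMP"

rm "$DUMP"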

As for that tar file: when doing an upgrade like this one, I copy it over to the new server, extract it in a safe place, then copy the files to their new locations. It might be nice to automate that process via something like Ansible, but for now what I have works pretty well. Come to think of it, what I have is not far removed from a Dockerfile. But Docker would be way too much overhead for my system.
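The restore side is nothing fancy, roughly this (paths and the archive name are illustrative):

# On the new server: unpack into a staging area, then move files into place
mkdir ~/restore
tar -C ~/restore -xf server-files.tar
sudo cp ~/restore/etc/nginx/nginx.conf /etc/nginx/nginx.conf
# ...and so on for each tracked file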

I had never automated the server backup (the third part), so I took this opportunity to add that automation with Tarsnap.

Tarsnap is a wonderfully geeky backup service. It’s a one-man operation that basically provides tar as a cloud service. And that’s it. There’s no GUI, no client app, not even an API. You have to script everything yourself. Why is this so great? Well, the guy who runs it is an expert in these sorts of systems, so it’s rock-solid. It’s also dirt cheap. I’ve been using Tarsnap to back up my laptop for years, and it costs me about $1/year. Yes, that’s one single dollar per year. There are no monthly or annual plans; he charges by the byte: 250 picodollars per byte-month of stored data. Yeah, picodollars. Your account balance is tracked to something like thirteen decimal places.

I highly recommend Tarsnap.

So I added my production server to my Tarsnap account and wrote a quick systemd unit to back up that server tar file daily. I should probably move the database backup to Tarsnap as well.
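The automation is just a one-shot service plus a timer, something like this sketch (the file-list path and archive naming are placeholders; -c creates an archive, -f names it, and -T reads the list of files to back up):

# /etc/systemd/system/tarsnap-backup.service
[Unit]
Description=Tarsnap backup of tracked server files

[Service]
Type=oneshot
# Archive names must be unique, so embed the date (%% escapes % for systemd)
ExecStart=/bin/sh -c 'tarsnap -c -f "server-files-$(date +%%Y-%%m-%%d)" -T /home/deploy/etc/backup-file-list'

# /etc/systemd/system/tarsnap-backup.timer
[Unit]
Description=Run the Tarsnap backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

A systemctl enable --now tarsnap-backup.timer and it’s scheduled.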

Yarn

I only use Yarn because the Rails asset pipeline depends on it. Under Debian 10 I had a package repository for Yarn in my sources.list, but Yarn has since moved to npm as the official installation method. So that’s one less custom entry in my sources.list! The only thing there now is npm.
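Which means installing it is now a one-liner (assuming npm is already in place):

# Yarn's officially recommended install path these days
npm install -g yarn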

Certbot and Certificate Management

Reiterate runs HTTPS-only, using Let’s Encrypt for certs. The official, certbot-approved way to install certbot is via snap, and that’s what I was using on Debian 10. I didn’t like snap. Certbot likes snap because it lets them push out new versions at a faster cadence than the sedate Debian release schedule. But snap comes with so much baggage: it mounts a dozen loop devices, and it forces you to have a snap/ directory in your home dir. I hate clutter like that.

Debian 11 provides certbot 1.12, which is recent enough to work for me (Debian 10 only had 0.30). All I had to do was apt-get install certbot and copy my certificates over. Much cleaner than what I had before.
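Concretely, the whole migration was something like this (a sketch; the rsync from the old server is my assumption about the copy step):

# Install certbot from the stock Debian 11 repos -- no snap required
apt-get install certbot
# Bring over the existing certificates and renewal config
rsync -a old-server:/etc/letsencrypt/ /etc/letsencrypt/
# Sanity-check that renewal works
certbot renew --dry-run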

Server Log Stats

I use the goaccess package to analyze my server logs. The version of goaccess in Debian 11 has some nice new config options, such as blocklists for IPs or URLs. I needed to make a few changes to its config file, but nothing major. The config file had also moved into its own subdirectory.
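The IP blocklist, for example, is a one-line-per-entry affair in goaccess.conf (a sketch; the address here is a placeholder, and exclude-ip is the relevant option):

# Keep a noisy scanner out of the reports
exclude-ip 203.0.113.42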

And that was it! Everything is up and running on Debian 11 now. Overall I’ve been very pleased with Debian, both as a distribution and as a community. Since I’m not using any GUI, it fits my needs perfectly as a server OS. (Although many people use Debian for desktop systems as well.) Two thumbs up.


This post is licensed under CC BY 4.0