• Category Archives Technology
  • Migrate partitions to LVM on a live server

    Background

    My server was provisioned by Contabo as a Debian 9 server with a traditional MBR partition layout. At some point I did manage to at least split /var and /home off from the root partition, leaving the following layout for many years:

    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda      8:0    0   1.2T  0 disk 
    ├─sda1   8:1    0   237M  0 part /boot
    ├─sda2   8:2    0    20G  0 part /
    ├─sda3   8:3    0    20G  0 part /var
    ├─sda4   8:4    0     1K  0 part 
    └─sda5   8:5    0 259.8G  0 part /home

    Recently I upgraded the VPS to have more disk space, as I was starting to run low. Instead of mucking about with an ever-increasing number of relatively inflexible extended partitions and symlinks, I decided to figure out a way to convert this layout to LVM, which will give me the flexibility to manage the disk space in the future.

    After a bit of research and prototyping all the steps in a local VM, I came up with the following procedure which worked for me.

    Note that I did NOT convert the /boot partition and that the disk remains an MBR-partitioned disk.

    Before Starting

    It is strongly recommended to take a backup and/or snapshot of the server before commencing. A single mistype or issue during the conversion could lead to full data loss.

    Initial Conversion

    The first step is to convert sda2 and sda3 to use the Logical Volume Manager (LVM) and move the root and var filesystems into it. This requires a multi-step process:

    1. Create a temporary LVM Physical Volume (PV) and move the data into it
    2. Update the system configuration
    3. Reboot the system
    4. Remove the original partitions and replace them with a new PV
    5. Add the PV to the LVM Volume Group (VG)
    6. Remove the temporary PV from the VG

    To create the temporary PV, I first needed to increase the size of the extended partition to make use of the new disk space. I used cfdisk for this. Note that there seems to be a bug in the Debian 11 version of cfdisk – when first increasing the size of the extended partition, a message is shown stating the maximum size, but nothing happens. Deleting the size and pressing enter again applies the size last specified.

    Next up is creating the LVM volumes:

    pvcreate /dev/sda6
    vgcreate vg1 /dev/sda6
    lvcreate -n root -L 8G vg1 && mkfs.ext4 /dev/mapper/vg1-root
    lvcreate -n var -L 20G vg1 && mkfs.ext4 /dev/mapper/vg1-var

    Now we can copy the data. Note that when copying /var, it is useful to shut down as many services on the server as possible to reduce the amount of data being actively written to the partition. It is also a good idea to re-run the rsync of /var just prior to rebooting.

    mount /dev/mapper/vg1-root /mnt
    rsync -avxq / /mnt/
    mount /dev/mapper/vg1-var /mnt/var
    rsync -avxq /var/ /mnt/var/

    And finally we edit the system configuration:

    • vim /etc/fstab
      • point the / and /var mounts to the new LVM volumes
    • vim /boot/grub/grub.cfg
      • ensure kernel command line is updated to have “root=/dev/mapper/vg1-root”

    fstab:

    # / was on /dev/sda2 during installation
    #UUID=904300f1-5d90-4c10-908a-b8ac334bd021 /               ext4    errors=remount-ro 0       0
    /dev/mapper/vg1-root                      /               ext4    errors=remount-ro 0       0
    # /boot was on /dev/sda1 during installation
    UUID=9d8415e3-5d47-42e5-b169-ab0f5db14645 /boot           ext4    defaults,noatime        0       1
    
    #UUID=8de1d736-5b9e-44b1-ba6f-34984912889e /var            ext4    errors=remount-ro 0       1
    /dev/mapper/vg1-var                       /var            ext4    errors=remount-ro 0       1
    UUID=70554039-342d-4035-8182-ece5b032ec5b /home           ext4    errors=remount-ro 0       1

    grub.cfg:

    ### BEGIN /etc/grub.d/10_linux ###
    function gfxmode {
            set gfxpayload="${1}"
    }
    set linux_gfx_mode=
    export linux_gfx_mode
    menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-a5511b28-0df7-48c2-8565-baeaede58cfa' {
            load_video
            insmod gzio
            if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
            insmod part_msdos
            insmod ext2
            insmod lvm
            set root='hd0,msdos1'
            if [ x$feature_platform_search_hint = xy ]; then
              search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1  9d8415e3-5d47-42e5-b169-ab0f5db14645
            else
              search --no-floppy --fs-uuid --set=root 9d8415e3-5d47-42e5-b169-ab0f5db14645
            fi
            echo    'Loading Linux 6.1.0-0.deb11.21-amd64 ...'
            linux   /vmlinuz-6.1.0-0.deb11.21-amd64 root=/dev/mapper/vg1-root ro rootdelay=10 net.ifnames=0 ixgbe.allow_unsupported_sfp=1 quiet 
            echo    'Loading initial ramdisk ...'
            initrd  /initrd.img-6.1.0-0.deb11.21-amd64
    }
    

    Now for the scary part – rebooting. Double-check that all configurations have also been applied to the new partitions and everything has been copied correctly. Make sure you have a way to recover should the system not reboot!
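    As a concrete sanity check, the new configuration can be grepped for references to the LVM volumes. A sketch, practised here on a scratch copy of fstab – in reality, point the checks at the real /etc/fstab and /boot/grub/grub.cfg:

```shell
# Pre-reboot sanity-check sketch, run against a scratch fstab here.
cat > /tmp/fstab.new <<'EOF'
/dev/mapper/vg1-root  /     ext4  errors=remount-ro  0  0
/dev/mapper/vg1-var   /var  ext4  errors=remount-ro  0  1
EOF
# Both critical mounts must point at the new LVs, not the old sda partitions.
grep -c '^/dev/mapper/vg1-' /tmp/fstab.new    # -> 2
# Likewise the kernel command line must name the new root, eg:
#   grep 'root=/dev/mapper/vg1-root' /boot/grub/grub.cfg
```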

    Completing the initial conversion

    Assuming the reboot went well, the system should now be running on the new LVM Logical Volumes (LVs).

    Now delete the /dev/sda2 and /dev/sda3 partitions and create a single new partition, /dev/sda2, in their place. Turn it into a PV and add it to the volume group, then migrate the data off the temporary PV and remove it:

    pvcreate /dev/sda2
    vgextend vg1 /dev/sda2
    pvmove /dev/sda6
    vgreduce vg1 /dev/sda6
    pvremove /dev/sda6

    Now we can delete the temporary /dev/sda6 partition.

    Moving the data

    Next we have to move the home partition. Again, this is a multi-step process as we first have to move the data out of the extended partition, then remove the extended partition.

    First create a new primary partition, /dev/sda3, large enough to hold the data. Make it a PV and add it to vg1 as before, then copy the data. Note that as before, ideally there should be no active users during the copy and any services which write into users’ home directories should be shut down.

    pvcreate /dev/sda3
    vgextend vg1 /dev/sda3
    lvcreate -n home -L 250G vg1 && mkfs.ext4 /dev/mapper/vg1-home
    mount /dev/mapper/vg1-home /mnt
    rsync -avxq /home/ /mnt/

    Now remount the new partition in place of the home partition. It may be better and safer to reboot the system instead to avoid any data loss or corruption, since lazy-unmounting a filesystem can lead to all sorts of edge cases.

    umount -l /home && mount /dev/mapper/vg1-home /home

    Next we delete all the logical partitions and the extended partition. The free space should now sit between sda2 and sda3, which lets us increase the size of sda2. After growing the partition, pvresize expands the PV to make use of the new space. Once that’s done, we can pvmove the data off the temporary Physical Volume and remove it:

    pvresize /dev/sda2
    pvmove /dev/sda3
    vgreduce vg1 /dev/sda3
    pvremove /dev/sda3

    Finally, we can delete the /dev/sda3 partition and add the remaining free space to the LVM Volume Group. From now on it is trivial to manage disk layout using LVM.

    Final Layout

    After all was said and done, the server ended up with a disk layout as follows:

    NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda            8:0    0  1.2T  0 disk 
    ├─sda1         8:1    0  237M  0 part /boot
    └─sda2         8:2    0  500G  0 part 
      ├─vg1-root 254:0    0    8G  0 lvm  /
      ├─vg1-var  254:1    0   20G  0 lvm  /var
      └─vg1-home 254:2    0  260G  0 lvm  /home

    For now I’ve left the LVM partition at 500GiB, which is double what the old disk was, and gives the various volumes plenty of room to grow.


  • PS5 Remote Play on Linux – Chiaki

    Chiaki – a remote-play client for PlayStation consoles.

    Overview

    While playing Undertale on my PS5, I got a bit frustrated at some of the trophies. Spending hours spamming the X button didn’t really feel like fun and rewarding gameplay. A brief search later led me to discover a remote-play option called Chiaki for Linux. 10 minutes later I had it up and running! Impressive.

    The remote-play session can be controlled by keyboard, or a PS5 controller can be connected to the PC. In my case, it was just plug-and-play.

    I tweaked the config a bit to use 1080p instead of the default 720p resolution, and to use hardware acceleration. I also added Chiaki to Steam and configured it to launch via gamemode, as otherwise the screensaver kept kicking in. Unfortunately the Steam overlay and screenshot facility are not working (yet).

    Worked absolutely brilliantly on my setup – gigabit LAN and a 1080p-based Debian GNU/Linux 12 desktop PC.

    It was then also rather trivial to script spamming the X button using xdotool..

    Issues

    Chiaki 2.2.0 – “Extended DualSense Support” crashes the remote play session, forcing a restart of the PS5 before remote play works again. To be fair, this feature is marked experimental.

    Remote Play of streaming content (eg, PlayStation Plus Premium classic games) shows a black screen, with the streamed content being displayed on the TV. Not sure if the official PlayStation remote play application has the same problem.

    Installation

    Core Installation

    The steps were pretty simple:

    1. Install Chiaki:
      1. apt-get install chiaki
    2. Retrieve account ID (NOT your login or username)
      1. I used a python script provided by the Chiaki developers.
      2. Here’s a reddit post describing an alternative, quite convoluted, approach (I didn’t try it)
      3. And here’s a webpage which retrieves it – by far the easiest method! (This does NOT require you to enter any login credentials, but does require your account to be publicly viewable.)
    3. Enter required data
      1. Account ID
      2. Code from the console
        1. Settings -> System -> Remote Play -> Link Device
    4. ?
    5. Profit!

    Optionally:

    • Add it to your Steam library
    • Run it using gamemode
    • Tweak configuration to use hardware acceleration and higher resolution

  • Undertale on PS5

    So I just played (and nearly completed) the cult indie hit Undertale on my PS5.

    Firstly, it’s an awesome little action-adventure RPG thingy. If you haven’t played it, I can highly recommend it despite its rather old-skool looks. Quirky humour, interesting choices, and only a few hours long for a basic play-through, although it has quite a lot of depth if you want to spend the time on it.

    It effectively combines puzzles and combat (via some nifty little mini-games) although exploration is quite limited. While there are some hidden areas, mostly it’s a linear story.

    My two main gripes are that there is no way to permanently speed up dialogue display, and that earning money is very grindy – money you need to buy enough healing consumables for the final fights if you’re not so good at those.

    Specifically on the PS5 port, whoever designed the trophies for this game really should go back to the drawing board; most of them are just filler and mind-numbingly tedious repetition. It’s not even required to complete the game in order to Platinum it!
    Without going into spoilers, the game itself has plenty of opportunities for much better trophies which would properly reward the player. Somewhat amazed SIE approved half of these trophies!

    The game itself: 4/5
    The PS5 trophies: 2/5


  • WordPress and Piwigo? Yes please!

    So I just discovered the PiwigoPress plugin for WordPress.
    While it’s obsolete and the widget no longer works, the “short code” feature still does. Unfortunately it’s not very well documented, but it is possible to add pictures to an article which link back to not only the picture, but also the album which that picture is part of.
    Yayy.

    Trawling through the source code, it seems the following is possible:

    [PiwigoPress id=<pic> lnktype=albumpicture url='http://gallery.lemmurg.com/']

    id – Piwigo picture id(s), eg:
      id=1 – picture id 1
      id=1-5 – all pictures with ids 1 through 5
      id=1,3,4 – pictures with ids 1, 3, and 4
    lnktype – type of link to generate:
      picture – link to picture only (default)
      albumpicture – link to picture with album
      album – ?
      eg: lnktype=albumpicture
    url – URL of the Piwigo site, eg: url=http://gallery.lemmurg.com
    size – size of the picture. Possible values:
      sq – square
      th – thumbnail
      xs – extra small
      sm – small
      me – medium
      la – large (default)
      xl – extra large
      xx – extra-extra large
      eg: size=sm
    name – adds the image name:
      0 – no (default)
      1 – yes
      auto – ?
      eg: name=1
    desc – adds the image description:
      0 – no (default)
      1 – yes
      eg: desc=1
    class – ?
    style – ?
    opntype – whether to open in the current tab or a new one:
      _blank – open in new tab (default)
    ordertype – ?
      random – random order (default)
    orderasc – whether to sort pictures in ascending order:
      0 – no (default)
      1 – yes

  • Upgrading Nextcloud 15 to 19 on Debian …

    So my Debian 9 server was still running Nextcloud 15. Meanwhile Nextcloud 20 is out.

    When I looked at performing the (manual) update I actually found a Nextcloud 16 download already in place but it seems I never completed that. Not long afterwards I discovered why – Nextcloud 16 requires PHP 7.3, but Debian 9 only has PHP 7.0 available.

    Long story short, instead of chimera’ing my Debian install I bit the bullet and decided to finally upgrade the server to Debian 10.

    Some time later…

    After the server upgrade completed I was able to use the Nextcloud web interface to upgrade to Nextcloud 16.. and 17… and 18… and 19… and 20!

    That’s where the fun stopped: many things were broken in NC20 (apps just showing blank pages), so, having taken a backup between every upgrade, I rolled back to NC19 (incidentally validating that my backups worked).

    Most things worked out of the box. Critically for me, Grauphel did not.

    Long story short, it turns out that on Debian 10, the version of the PHP OAuth package is actually not compatible with the installed version of PHP 7.3! Installing a binary-compatible package from the Debian package snapshots site fixed this.

    Amongst other things I did during the upgrade cycles was:

    • changed the database to 4-byte support, allowing for more characters in paths and comments.
    • fixed several other minor PHP configuration issues which Nextcloud was warning about.
    • fixed support for Maps (a Nextcloud bug in the upgrade scripts had left some database columns misconfigured):
      • Column name "oc_maps_address_geo"."object_uri" is NotNull, but has empty string or null as default.
      • The fix was to manually edit the scripts.
    • wrote backup scripts backing up the Nextcloud directory, the database, and, optionally, the data directory.
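    A minimal sketch of what such a backup can look like – the names and paths are my own illustrations, and it is exercised on a throwaway directory here; point the source at the real Nextcloud directory in practice:

```shell
# Minimal Nextcloud backup sketch, exercised on a throwaway directory.
rm -rf /tmp/nc-demo && mkdir -p /tmp/nc-demo
echo '<?php // dummy config' > /tmp/nc-demo/config.php
ts=$(date +%Y%m%d)
tar -czf "/tmp/nextcloud-$ts.tar.gz" -C /tmp/nc-demo .
tar -tzf "/tmp/nextcloud-$ts.tar.gz" | grep config.php    # -> ./config.php
# For the database, something along the lines of:
#   mysqldump --single-transaction nextcloud > "nextcloud-db-$ts.sql"
```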


  • Upgrading Debian 9 to Debian 10

    Triggered by needing to upgrade Nextcloud, I finally bit the bullet and decided to upgrade my virtually-hosted Debian server from Debian 9 “stretch” to Debian 10 “buster”.

    The upgrade, as usual, was fairly trivial:

    apt-get update
    apt-get upgrade
    <edit /etc/apt/sources.list to point to the new version>
    apt-get update
    apt-get upgrade
    apt-get full-upgrade
    reboot
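    The sources edit is essentially a search-and-replace of the release name. A sketch on a scratch copy (run the sed against the real /etc/apt/sources.list instead):

```shell
# Sketch of the sources.list edit, done on a scratch copy.
echo 'deb http://deb.debian.org/debian stretch main contrib' > /tmp/sources.list
sed -i 's/stretch/buster/g' /tmp/sources.list
cat /tmp/sources.list    # -> deb http://deb.debian.org/debian buster main contrib
```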

    There were various configuration files which needed tweaking during and after the upgrade. vimdiff was very useful. I also learned a new screen feature – split-screen! (Ctrl-a |). Finally, a shoutout to etckeeper for maintaining a full history of all edits made in /etc.

    Post-upgrade Issues and Gotchas

    dovecot (imap server)

    A huge issue was that I could no longer access my emails from anywhere.

    Turns out that dovecot was no longer letting me log in. The mail log file had numerous “Can’t load DH parameters” error entries. I had not merged in a required change to the ssl certificate configuration.
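    For reference, Dovecot 2.3 (the version shipped with Debian 10) wants an explicit ssl_dh setting pointing at a DH parameter file. A sketch of the commonly documented fix – the paths are assumptions, and a deliberately tiny key size is used so the sketch generates quickly (use 2048 bits or more for real):

```shell
# Generate DH parameters and point Dovecot's SSL config at them.
# 1024 bits only so this sketch finishes quickly; use >= 2048 for real.
openssl dhparam -out /tmp/dh.pem 1024 2>/dev/null
grep -c 'BEGIN DH PARAMETERS' /tmp/dh.pem    # -> 1
# Then, in /etc/dovecot/conf.d/10-ssl.conf:
#   ssl_dh = </etc/dovecot/dh.pem
```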

    exim4 (mail server)

    The second huge issue was that exim was no longer processing incoming mail. Turns out that spamd wasn’t started after the reboot. Fixed by:

    systemctl start spamassassin.service
    systemctl enable spamassassin.service

    shorewall (firewall)

    Another major gotcha: the shorewall firewalls were not automatically re-enabled, and it took me three days to notice. Yikes! I had left the server on sys-v init instead of systemd, and the upgrade had silently switched over. After restarting the firewall, I used systemctl enable to configure it to start on bootup.

    systemctl start shorewall.service
    systemctl enable shorewall.service
    systemctl start shorewall6.service
    systemctl enable shorewall6.service

    bind9 (name server)

    Another item was that bind was no longer starting up – it needed a tweak to the apparmor configuration. It appears that on my server the log files are written to a legacy directory, and the new default configuration prevented bind from writing into it, hence it failed to start up.

    Miscellaneous

    • I finally removed dovecot spam from syslog by giving it its own logfiles (tweaking fail2ban accordingly).
    • Various PHP options needed tweaking and several new modules needed installing to support Nextcloud (manually installed so no dependency tracking).

    Later Updates

    • Discovered that phpldapadmin was broken. Manually downloaded and installed an updated version from “testing”.

  • New Scuba Toy – Shearwater Peregrine

    Just bought a Shearwater Peregrine as a backup for my Shearwater Perdix AI (budget didn’t quite stretch to a second Perdix..)

    Haven’t dived it yet, but the following are my immediate unboxing impressions (pictures via Google image search).

    The good:

    • Familiar layout and the same great screen as the Perdix.
    • Smaller and lighter form factor than the Perdix.
    • Mostly full-featured recreational dive computer with some intro-to-tec features.
      • No AI/compass
      • Up to 100% O2 and 3 gases
      • Lots of “tec” displays
    • Built-in battery charging is via the Qi wireless standard, so absolutely no exposed contacts.
      • But due to wrist-straps/bungees it may be difficult to use generic Qi charging pads.
    • Dive download via Bluetooth (hopefully works better than the Perdix!)

    The bad:

    • Buttons are physical rather than the piezo-electric ones from the Perdix (subjective).
    • Battery is a built-in rechargeable battery (subjective).
    • Limited display customisation (as compared to the Perdix).
    • Screen protector is a standard thin protector rather than the thick gel-like one of the Perdix.

    The ugly:

    • The charging pad uses a micro-USB cable rather than USB-C (for a brand-new product, I would expect it to use the latest standards)
    • Still fairly large (although required to support the screen, the bezel _could_ be a bit smaller given the target market)
    • No compass or Air Integration (for the price-point, many competing products offer these features)

  • Gotcha when testing Unity Objects for null in C#

    Unity has overloaded the ‘==’ operator; specifically, checking objects against “null” might not always do what you expect. More information can be found in this blog post from Unity.

    Specifically, if a Unity Object (as opposed to a C# object) gets destroyed, it will be set to a “fake null” object.  In this case, the following statements are true:

        1    obj == null          // true
        2    obj is null          // false
        3    (object)obj == null  // false
        4    obj ?? value         // obj

    Note the different results between line 1 and lines 2 & 3.

    If obj truly is null, then the results are as expected:

        1    obj == null          // true
        2    obj is null          // true
        3    (object)obj == null  // true
        4    obj ?? value         // value

    It may be more readable to provide an Extension Method on UnityEngine.Object:

    public static class UnityObjectExtensions
    {
        /// <summary>
        /// Extension Method on Unity Objects returning whether the object really is null.
        /// </summary>
        /// <remarks>
        /// Unity overloads the '==' operator so that it returns true on both null references
        /// as well as references to destroyed objects. This method only returns true if
        /// the reference truly is null, and returns false for "fake null" objects.
        /// </remarks>
        public static bool IsTrueNull(this UnityEngine.Object ob)
        {
            return (object)ob == null;
        }
    }