
config-bak

A little over 4 years ago, I wrote a script to back up my configuration files in case I made a change to them that didn’t work out or accidentally deleted them. My first rendition of the script was quite linear and consisted of individual blocks of code for each file I was backing up, and it grew to be quite large. But it worked, and I used it in that form for a couple of years. Later on, I put each of those individual routines into functions, though it was still quite linear. Recently, I reviewed the script and noticed that most of these functions were identical; the only variations were the configuration files themselves.

After taking a close look at the code, I determined that, with only a few exceptions, most of the files to be backed up were either in the root of my home directory or under the .config directory. I created functions to back up files for each case. There were still some exceptions, such as configurations that might have different names or locations depending on the version, desktop environment, or operating system, and I wrote functions for those special cases. Now the script would call a more generic function, passing the path and file name to it, or one of the specific functions for the special cases.

Then I started seeing similarities in the special cases and figured that in most of them, I could use the generic functions with the particular parameters for each case. That left only a small handful of files that didn’t fit either generic case: a program whose configuration file is in a hidden folder in the root of the home directory, and another file that’s a couple of levels down in my .local directory. For these special cases, I created another function that places the backup file directly in the backup folder without any directory name.
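The two generic routines and the flat special case could be sketched roughly like this (BACKUP_DIR and the function names are my own placeholders, not the script’s actual identifiers):

```shell
#!/usr/bin/env bash
# Sketch of the generic back-up helpers described above.
BACKUP_DIR="$HOME/.config-bak"

# Back up a file from the root of the home directory ($1 = file name).
backup_home() {
  mkdir -p "$BACKUP_DIR/home"
  if [ -f "$HOME/$1" ]; then
    cp -u "$HOME/$1" "$BACKUP_DIR/home/" && echo "===> home/$1"
  fi
}

# Back up a file from under ~/.config ($1 = path relative to ~/.config).
backup_config() {
  mkdir -p "$BACKUP_DIR/config/$(dirname "$1")"
  if [ -f "$HOME/.config/$1" ]; then
    cp -u "$HOME/.config/$1" "$BACKUP_DIR/config/$1" && echo "===> config/$1"
  fi
}

# Special case: copy a file straight into the backup folder, no directory name.
backup_flat() {
  mkdir -p "$BACKUP_DIR"
  if [ -f "$1" ]; then
    cp -u "$1" "$BACKUP_DIR/" && echo "===> $(basename "$1")"
  fi
}
```

With this shape, the main body of the script is just a series of one-line calls such as `backup_home .bashrc` or `backup_config i3/config`.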

Finally, there was the dump of my Cinnamon keybindings, which I keep in my .config folder. It’s not a configuration file, per se, but it’s important enough to keep a backup. It’s really the only “special case” file I currently have, so it has its own function to handle it. For the most part, it operates much the same as the other functions, but if the system is running the Cinnamon desktop environment and the keybinding dump doesn’t exist in the .config folder, it will create the dump file and copy it to the backup folder.
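A rough sketch of that special case, assuming the dump is made with dconf (the file name and dconf path are my guesses; the post doesn’t give the exact command):

```shell
# Sketch of the Cinnamon keybindings special case. The dump file name and
# the dconf path are assumptions.
backup_keybindings() {
  local dump="$HOME/.config/keybindings-backup.dconf"   # hypothetical name
  if [ "$XDG_CURRENT_DESKTOP" = "X-Cinnamon" ] && [ ! -f "$dump" ]; then
    # On Cinnamon, create the dump if it doesn't already exist.
    dconf dump /org/cinnamon/desktop/keybindings/ > "$dump"
  fi
  mkdir -p "$BACKUP_DIR/config"
  [ -f "$dump" ] && cp -u "$dump" "$BACKUP_DIR/config/"
}
```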

Over time, I’ve improved the appearance of the script’s output. As the script backs up the new or changed files, it displays the pertinent path and the filename, preceded by an ASCII arrow (===>). It looks nice and it lets me know what’s been backed up.

Of course, there is a companion script to restore the configuration files from the local backup. Now that I’ve streamlined the backup script, I’m wondering if I can make the restoration script more modular, eliminating many of its individual functions. A cursory look at the restoration script seems to indicate that I can model it after the backup script. That’s a project for the near future.

i3 on Bookworm

I’ve been planning to upgrade my Debian systems since Bookworm was released earlier this summer. Of course, I plan to maintain i3wm on those systems that already have it, and I spent a lot of time researching what needed to be done to accomplish that.

Drew Grif’s (JustAGuy Linux) video and scripts were very beneficial in helping me get i3 up and running on Debian 11, and I was thrilled when he came out with a video where he installed several window managers on Debian 12. I cloned his scripts from his GitHub repository and studied them, but found it a little difficult to separate the i3-specific files from the rest. I’m not really interested in the other window managers, although I may install all of them in a VM or on hardware to check them out.

What I ultimately decided to do was to take my current install script and update it to install i3 on Bookworm. I added the appropriate Polybar file and made the necessary adjustments to the i3 configuration files. Then I upgraded one of my test laptops, a Dell Latitude E-6500, installing Debian 12 on the root partition and keeping the home partition. It took some work, but I got Polybar working with it, and it’s running great. I’ve since updated my local i3-Debian repository, so I think I’m ready to try it on another system. This time I’ll probably do a clean install and then restore my home folder from the backup.

I was able to install Polybar on my HP Elitebook laptop, and it’s working out great. I’m not sure if the current version of Polybar actually supports applets in the taskbar, but I have them loaded by my i3 autostart script after Polybar has loaded, and it seems to be working okay. The Elitebook is a production machine and will probably be among the last to be upgraded.

It’s strange that I can’t find Polybar in the Debian 11 repositories on the other systems running Bullseye. That’s not a big deal since the Bumblebee-Status bar works well on those systems, and they’ll all be upgraded to Bookworm before too long.

Drew Grif’s videos and repositories:

Coming Upgrades

Debian 12 Bookworm has been out for about a month, but I haven’t installed it on anything yet. I haven’t even upgraded any of my Bullseye systems. I found a YouTube video by Drew Grif on the JustAGuy Linux channel that was very helpful. In his video, Drew installs five different window managers, i3 among them. I’ve been looking at his script, trying to figure out what files go with each window manager and whether I can separate out the ones I need for i3. He’s also using Polybar as his status bar, which I’m familiar with. Maybe I should use his installation scripts and install all of them on a system. Maybe I’ll find that I like one of the others better.

I have been adding and deleting files from his installation script to fit my needs. Today I created new repositories on my Gitea server, cloned them to my main computer, and copied the files to them. Now I’ll be able to modify the scripts and configuration files, and track the changes using git. Having a “fork” of Drew’s repository on my Gitea server will enable me to clone the repositories to a new installation.

I’ve been seeing a lot of good reviews of Debian 12, and I’m looking forward to upgrading from Bullseye, particularly with window managers. I’m also looking forward to LMDE 6 when it comes out. Now that Linux Mint has released Mint 21.2, LMDE 6 will hopefully be out in a few months.

I’ll probably be upgrading my Mint systems before too long, once the in-place upgrade procedures are in place. From what I’ve seen, most of the new features are cosmetic or things I don’t really use. There really hasn’t been much in Mint 21 that wows me. The HP ProBooks that I’d been running Mint 20.x on without much problem haven’t been performing nearly as well with 21.x; they’re running noticeably slower. One of those laptops is a “production” machine that I use daily. To help it out, I installed an SSD, and that has helped. I also had an HP USDT that’s a dedicated media system, and Mint 21 slowed it down considerably. I ended up installing Debian 11 with i3 on that.

I realize that most of my computers are older, acquired during refresh projects, and it’s been five years since I worked on one of those projects. Most of the machines have third- or fourth-generation i5 processors, so I can’t expect great performance out of them. Maybe it’s time to start acquiring newer old hardware.

This is a bit off the subject, but the HP ProBook that I use a lot seemed to be running a little warm, so I opened it up to take a look. It’s a bit old, so I figured it might be time to put some fresh thermal paste on the CPU. I pulled out the fan and the heatsink, cleaned everything up, applied new paste, and put it all back together. It’s now running several degrees cooler. It’s still a bit warm, but I think this model tends to run warm. It was nice that I didn’t need to completely disassemble it like the service manual wanted me to.

Links for Drew Grif’s videos and repositories:

BU Revisited

I’ve recently made a couple of significant changes to Joe Collins’ BU script. With as many modifications as I’ve made, it hardly resembles Joe’s original script. The backup, restore, and help functions are still mostly as he wrote them, although I have made some changes to them.

One change I made since my April post, Joe’s Backup Script, was to change the if-elif-else construct that checks the command-line arguments to a case statement. The default action with no arguments is still to run a backup, but now I’ve also allowed entering it explicitly as an option.
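The argument handling might look something like this (the option spellings are illustrative, not necessarily the script’s real ones):

```shell
# Sketch of the case-based argument check. With no argument, the
# ${1:-backup} default still runs a backup; "backup" is now also
# accepted as an explicit option.
parse_action() {
  case "${1:-backup}" in
    backup|-b)  echo backup ;;
    restore|-r) echo restore ;;
    help|-h)    echo help ;;
    *)          echo "Invalid option: $1" >&2; return 1 ;;
  esac
}
```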

Most recently, I discovered a video and an accompanying script by Kris Occhipini that demonstrated a method of checking for a removable drive’s UUID and mounting it. It inspired me to create a function that I could incorporate into the BU script to determine whether a particular backup drive was connected to the appropriate computer, thus preventing me from inadvertently backing up a system to the wrong drive and potentially running out of space.

The function holds the backup drives’ UUIDs and the computer hostnames in variables and checks for the appropriate UUID. If it detects one of them, it compares the hostname with the hostname associated with that drive. If I have the wrong drive/hostname combination, the function returns a failure code and the script exits with an appropriate error message. Should one of these drives be connected to one of my test computers, it raises the error condition, preventing an extraneous backup on that drive. Should an unlisted backup drive be attached, the backup script will still continue normally. This allows me to use the script to back up my home directory on my test machines.

I initially assigned the UUIDs and the target hostnames to individual variables, and later decided that using arrays to hold this information would be a better way to go. That worked well, and the function looks nicer. It will probably be easier to maintain as long as I keep each UUID associated with the proper hostname.
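A minimal sketch of the check using parallel arrays; the UUIDs and hostnames here are placeholders:

```shell
# Parallel arrays: BU_UUIDS[i] is the backup drive paired with BU_HOSTS[i].
BU_UUIDS=(1111-AAAA 2222-BBBB)
BU_HOSTS=(desktop1 laptop1)

# $1 = UUID of the attached drive, $2 = this machine's hostname.
check_bu_drive() {
  local i
  for i in "${!BU_UUIDS[@]}"; do
    if [ "$1" = "${BU_UUIDS[$i]}" ]; then
      # Listed drive: succeed only if the hostname matches its pairing.
      [ "$2" = "${BU_HOSTS[$i]}" ]
      return
    fi
  done
  return 0   # unlisted drive: let the backup continue normally
}
```

In the real script, the UUID would presumably come from something like `lsblk -o UUID` and the hostname from `hostname`, with the script exiting on a failure return.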

A special thanks to Joe Collins and Kris Occhipini for sharing their work. They’ve inspired me and provided me with the tools to build upon.

Realtek Driver Issue

Back in January, I upgraded two of my older laptops, a Gateway E-475M and an HP Mini-110, from BunsenLabs Lithium (based on Debian 10) to BunsenLabs Beryllium (based on Debian 11). In the process, I lost the ability to connect via the Ethernet connection on the HP laptop. I presumed that it was probably a driver issue: either Debian 11 or the 5.10 kernel no longer supported the driver for the Realtek RTL810xE Ethernet controller. I didn’t bother looking into it at the time; the wireless was working, so I reserved a DHCP address for it and adjusted my scripts accordingly.

In the last couple of days, I’ve been revising a script that displays a current snapshot of CPU and memory usage; CPU, SSD, HDD, and NVMe temperatures; and disk usage. While testing the script on the HP 110, I took a look at the information file I write to every machine that provides me with info on the hardware. That inspired me to do an Internet search on my problem Ethernet device. I found a query on an Ubuntu forum that pointed to a solution in a GitHub repository.

Apparently, Ubuntu, and likely Debian, installs a Realtek driver that doesn’t work with older Realtek adapters. The repository contained the source code, makefiles, a script, and instructions to install the correct driver. I cloned the repository and followed the instructions, and when I finished, the laptop was able to connect on the Ethernet port and to accept SSH connections from other computers on the network. Now I wish I had looked into it sooner.

Here’s the link to the GitHub repository:

https://github.com/ghostrider-reborn/realtek-r8101-linux-driver

Joe’s Backup Script

A few years ago, I found a backup script by Joe Collins at EzeeLinux that has served me quite well. Over the years, I’ve made many modifications and added a few features to it, but it’s still basically Joe’s script, and I give him full credit for his work. His backup and restore functions have only gotten minor changes to fit my computing environment.

A couple of days ago, I took a closer look at Joe’s function to check the backup drive to see if it was ready for backups. I saw that it had several routines in it that could be functions in their own right, and at least one routine was something I had added. I ended up rewriting the function so that it called four other functions.

One of his drive-test routines checked to see if the mount-point directory existed, surmising that if it did, the backup drive was mounted. I can see that this works with a desktop environment, where the system would detect the drive, create the mount point, mount the drive, and open the file manager; that is what happens on my systems that use a desktop environment. But I have several systems that use a window manager on a minimal installation of Debian, and these systems do not automatically detect and mount a USB drive when it’s plugged in. On those systems, I would have to do one of three things: open a file manager and click on the drive, manually mount the drive from the command line, or use a script.

I recently found a way to extract the correct device name for the backup drive using the lsblk command, assuming that the drive is properly labeled to work with Joe’s BU script. Using that, I was able to automate the mounting process without having to look at the lsblk output, find the right device name, and enter it with the read command. That got me thinking that this could easily be applied to the BU script.
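A minimal sketch of that lsblk extraction. The label “BU_Drive” is hypothetical, and the matching logic is split into a small function that reads `lsblk -o NAME,LABEL -ln` output on stdin so it can be shown on its own:

```shell
# Print the device node whose partition label matches $1.
# Reads `lsblk -o NAME,LABEL -ln` output (one "name label" pair per line).
pick_bu_device() {
  awk -v label="$1" '$2 == label { print "/dev/" $1; exit }'
}

# Intended use (assumption):
#   dev=$(lsblk -o NAME,LABEL -ln | pick_bu_device BU_Drive)
#   [ -n "$dev" ] && sudo mount "$dev" /mnt/backup
```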

My new drive_test function makes function calls to run the checks on the drive, and displays error messages and exits only if the prescribed conditions aren’t met. First, the check_usb function checks to see if a USB drive has been plugged in. Then, bu_drive_label checks the partition label of the drive. If the label is correct, mount_bu determines if the drive is mounted. If the drive is not mounted, as in the case of my minimal Debian systems, the function extracts the proper device name, creates the mount point (if necessary), and mounts the drive. Once the drive is mounted, an additional check (bu_drive_fs) is run to determine whether the drive is formatted with the correct file system.
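The rewritten drive_test might be structured like this; the four function names come from the description above, but the bodies and error messages are illustrative:

```shell
# Orchestrator: run the four checks in order and stop at the first failure.
# check_usb, bu_drive_label, mount_bu, and bu_drive_fs are defined elsewhere.
drive_test() {
  check_usb      || { echo "No USB drive detected." >&2; return 1; }
  bu_drive_label || { echo "Wrong partition label." >&2; return 1; }
  mount_bu       || { echo "Could not mount the backup drive." >&2; return 1; }
  bu_drive_fs    || { echo "Wrong file system on the drive." >&2; return 1; }
}
```

Keeping drive_test as a thin sequence of calls means each check can be tested and changed on its own.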

In the original script, the backup and restore functions ran the sync command to synchronize cached data, ensuring all backed-up data is saved to the drive. This process can take a while, so I incorporated a routine to print dots until the command completes; since it’s used in both functions, I made a function for it. Other than that change, and a few minor adjustments for my own needs, those functions are pretty much as Joe wrote them.
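A sketch of that shared progress helper, assuming the simple background-process approach:

```shell
# Run sync in the background and print a dot every second until it finishes.
sync_with_dots() {
  sync &
  local pid=$!
  while kill -0 "$pid" 2>/dev/null; do
    printf '.'
    sleep 1
  done
  printf '\n'
  wait "$pid"   # return sync's exit status
}
```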

Joe’s original script used several if statements to check command-line arguments. Early on, I combined them into an if-elif-else structure, which serves the same purpose but satisfies my coding discipline.

As I said, I’ve been using Joe’s script since he made it available, and with a few tweaks and modifications, it has served me well. I use it on a nearly daily basis. Thanks Joe, for the scripts and Linux videos you’ve made available. I’ve learned from you and been inspired.

System-info script

Whenever I do a new Linux installation on one of my computers, I have a script that gathers information about the hardware and the operating system and puts it into a file in my home directory. I first wrote it a few years ago, and it’s evolved over the years in capability and sophistication.

The script started out just gathering the data and putting it into variables, with a few conditional statements to handle things like whether there was a wireless card. Eventually, I separated most of the related tasks into functions, which are called by the main part of the script.

I’ve had to add functions to the script to handle situations that occasionally arise. I found that on some systems, the hdparm output doesn’t include the form-factor information, so I have the hard-drive information function check for that and call another function that lets me fill in the missing information.
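The fallback might look like this; the “Form Factor” field name in the `hdparm -I` output and the manual prompt are assumptions:

```shell
# $1 = captured `hdparm -I` output. Print the form factor, asking for it
# manually when the drive doesn't report one.
get_form_factor() {
  local ff
  ff=$(printf '%s\n' "$1" | awk -F': *' '/Form Factor/ { print $2; exit }')
  if [ -z "$ff" ]; then
    read -r -p "Form factor not reported; enter it manually: " ff
  fi
  printf '%s\n' "$ff"
}
```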

On systems with Timeshift active, lsblk will often show /run/timeshift instead of /home, and I need to correct that. The script checks for this before the temporary file is written to my home directory and calls a function that allows me to change it.
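The check-and-correct step could be sketched like this. The post’s version lets me change the value interactively; for illustration, this sketch does a simple non-interactive substitution, and the temp-file handling is assumed:

```shell
# $1 = temporary file holding the captured lsblk output.
# Replace a Timeshift bind-mount path with the real mount point.
fix_timeshift_mount() {
  if grep -q '/run/timeshift' "$1"; then
    sed -i 's|/run/timeshift[^ ]*|/home|' "$1"
  fi
}
```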

I’ve recently set up a laptop with an NVMe drive, and I had to find a way to include the pertinent information in the file. My original solution only accounted for one NVMe drive in the system, which fits my current needs, but I felt I needed to consider the possibility of having multiple NVMe drives, so I modified the function to put the NVMe information into an array and then extract it for each device.
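One way to sketch the array approach; the lsblk columns here stand in for whatever utilities the script actually calls:

```shell
# Read `lsblk -dn -o NAME,MODEL,SIZE`-style output on stdin and keep one
# array entry per NVMe device in the global NVME_INFO array.
collect_nvme() {
  mapfile -t NVME_INFO < <(awk '$1 ~ /^nvme/')
}

# Intended use (assumption):
#   collect_nvme < <(lsblk -dn -o NAME,MODEL,SIZE)
#   for dev in "${NVME_INFO[@]}"; do printf 'NVMe: %s\n' "$dev"; done
```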

Recently, I’ve modified the functions that extract data for SATA drives, NVMe drives, and laptop batteries. The modifications reduce the number of calls to the utilities by placing the output from the utilities into a temporary file, an array, or a variable.

The script extracts and prints the following information:

  • Manufacturer, product name, version, and serial number.
  • CPU model, architecture, and number of threads.
  • Physical memory: type, amount installed, and maximum capacity.
  • Graphics chip (I don’t know about graphics cards; I only have onboard graphics)
  • Wired and wireless network adapters (manufacturer and model, interface name, MAC address)
  • Hard drive information including model number, serial number, capacity, and form factor.
  • NVMe information, including model number, serial number, and capacity.
  • Laptop battery name, manufacturer, model, serial number, and battery technology.

Nala

I used Nala, a front-end for the apt command, a couple of years ago, and overall, I liked it. However, I began to run into problems with it, mostly with installing it: I had to determine which version to install depending on which distribution I was using. I had a script to handle that, but I was also experiencing other problems. It became more trouble than it was worth in my environment, so I removed it from my systems.

Recently, I became aware that Nala was available in the Ubuntu 22.04 and the Debian Sid and Testing repositories. I have it installed on Linux Mint 21 and MX Linux 21, and it seems to be working rather well on those machines. It is also available from the Volian Scar repository or as a .deb package from their GitLab repository, but I’m not sure how well that’s supported.

Since I have a mix of Mint, Debian, MX, and BunsenLabs, I modified my update scripts with a function that runs Nala if it’s installed, and that’s been working quite well. The weekly unattended updates still use apt, so I added a line to the sed script that cleans up the log file to remove the warning about all the source lists that Nala creates. I haven’t seen any issues with the weekly updates or the log files. I also created an alias that checks if Nala is installed and checks for updates without installing them.
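The “use Nala if it’s installed” check reduces to something like this (a sketch, assuming the update function simply falls back to apt otherwise):

```shell
# Print the package tool to use: nala when present, apt otherwise.
pick_pkg_tool() {
  if command -v nala >/dev/null 2>&1; then
    echo nala
  else
    echo apt
  fi
}

# Usage in an update script (illustrative):
#   sudo "$(pick_pkg_tool)" upgrade
```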

Overall, I like the improved performance with parallel downloads. Since it only fetches the fastest mirrors for Debian and Ubuntu, it doesn’t resolve my problem with downloading updates from the Linux Mint repositories; Firefox updates are still going to be excruciatingly slow.

So far, Nala is looking pretty good on the systems on which it’s installed. If I start seeing problems, I can always uninstall it and remove it from my scripts.

Thoughts on Linux Mint 21.1

In January, I upgraded two desktops and three laptops from Linux Mint 20.3 Una to 21.1 Vera. Overall, I’m happy with the upgrade on my main desktop computer, but not on the laptops. The laptops, all HP ProBook 6570b’s, are a bit underpowered for it. Applications are slow to open, some updates take an excessively long time to download, and when streaming video from YouTube and other sites, there’s a lot of buffering.

Firefox updates on all of my Mint 21.1 systems are slow. The file itself is about 70 MB, and it’s downloaded from the Mint repositories. I’ve noticed that their mirrors are generally much slower than the Ubuntu mirrors. Still, it shouldn’t take somewhere between 6 and 15 minutes just to download the file. It’s almost like being on dial-up.

Part of the problem may be that the laptops all have third-generation i5 processors, 8 GB of RAM, and spinning drives. More memory and SSD drives would undoubtedly speed them up, but I’m not willing to make that investment in them, at least not right now. The desktop machines have newer processors and more memory, and they’re running well.

For now, I’m thinking about installing Linux Mint Debian Edition (LMDE) on one of the laptops and minimal Debian with i3WM on another. The third one is kind of a production machine, and for what I do with it, I can live with the performance. I’m curious about the performance increase I’ll get by switching to a Debian-based environment.

I’ve been thinking about transitioning some of the Mint computers to either LMDE or Debian with i3 for a while, wondering if I could live in them. I’m starting to think I could. My recent experiences with Mint 21.1, and the contortions that the Linux Mint team has to go through to circumvent all the changes that Ubuntu has introduced over the past couple of years, are definitely leading me down that path.

I’m getting much more comfortable with a tiling window manager, and with the Mint 21 updates, I’ve gotten away from using the Ubuntu PPAs. Flatpaks have found a place in my environment whenever I feel I need current software. I’m finding that maybe I don’t need the Ubuntu base. I haven’t messed around much with Arch-based distributions, but I’m sure I could adapt without too much effort, and it might be fun to adapt my scripts to work with either.

Mint Upgrades

My current project is upgrading my Linux Mint 20.3 systems to Linux Mint 21.1. So far, I have two of the laptops done, and they went quite well. The Mint team has done a great job with their mintupgrade utility for upgrading from one LTS version to the next. It worked well with my LMDE upgrades and it’s working well with Linux Mint.

Since I didn’t upgrade until after the Mint 21.1 release came out, the process had to be done in two phases. First, I had to upgrade to Mint 21 (Vanessa) with the mintupgrade tool and do a little cleanup. Then I used the upgrade feature in Update Manager to upgrade to 21.1 (Vera). With those upgrades completed, I reinstalled the applications I manage through GitHub. Naturally, I ended up modifying some of my installation scripts to work with Jammy and Mint 21.

The apt-key functionality has been deprecated for a while, and I’ve noticed that Jammy/Mint 21 complains about it more. I’ve read a number of articles on how to manage the keys, but I haven’t been able to get a good understanding of it. Some packages I use have updated their installation instructions to do it correctly, but many still haven’t. There were a handful of PPAs I use, and I’ve seen nothing on their respective pages about circumventing the apt-key problem, nor do they have a link to download their .asc or .gpg keys. As I upgrade to Vera, I’m forgoing the use of PPAs. In a couple of cases, I’ve found Flatpak versions of the applications that seem to work.

I’ve got three more machines to upgrade, two desktops and a laptop, and I don’t foresee any problems with any of them. I’m glad that I can use this method. I haven’t had to redo any of my static addresses or SSH configurations. Everything is pretty much working as before. I’ve rarely experienced such painless and seamless upgrades. Of course, now that I’ve said that, I’ve probably jinxed myself.