
Neofetch to Fastfetch

In the past few days, I’ve seen several articles and videos about the Neofetch repository being archived on GitHub. It hadn’t been updated in nearly four years, and the developer has reportedly taken up farming. It’s still in most of the distribution repositories, but it will likely be dropped as the distributions update their repositories.

One of the recommended replacements I’ve been hearing about is Fastfetch, written in ‘mostly’ C with a jsonc configuration file. I took a look at the Fastfetch GitHub page, and it looked pretty interesting, with plenty of customization options.

I downloaded and installed it. I found one of the preset configurations I liked, and added and removed a few modules from it. I also added the ASCII image that I’ve been using in my Neofetch configuration. Getting it to work was a bit frustrating, since the documentation on their GitHub page and in the man page wasn’t very useful. I mostly figured it out by studying the presets and through trial and error.

I noticed that the information in the README file didn’t quite match up with what I was experiencing, things like file names and paths for the Debian installation, for instance. I also noted that the README indicated the current version worked with Debian 11 and newer, and Ubuntu 20.04 and newer. Looking at the Releases page, though, I saw that starting with version 2.8.2, the Linux binaries are built with glibc 2.35, meaning they no longer support Debian 11 and Ubuntu 20.04. I figured that the release notes were more likely to be right and went by them. I still have a few machines running Bullseye.

Once I had it up and running on one machine, I wrote an installation script, using another of my installation scripts as a model. Getting the script right was an adventure in itself. Several functions and global variables from my sourced function library either didn’t work at all or behaved oddly. There were also some typos and variable names that didn’t get renamed, but that’s one of the dangers of cutting and pasting code.

I tested the problem code with other scripts and on the command line, and it seemed to work fine. It just didn’t work with this script. I spent a lot of time watching the debug information as the script ran and what was being displayed in my terminal.

In my function library I have a function that takes an array of packages and checks to see if they’re installed. If a package is installed, the function prints the package name and OK; otherwise it installs the package. Here, the function was attempting to install each package, only to find that each was already the latest version.
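In essence, the function does something like this (a rough sketch with made-up names, not the actual library code), assuming a Debian-based system:

    # Check each package; report it if installed, install it if not.
    # (pkg_check is a hypothetical name for illustration.)
    pkg_check() {
      local pkg status
      for pkg in "$@"; do
        status=$(dpkg-query -W -f='${Status}' "$pkg" 2>/dev/null)
        if [[ "$status" == "install ok installed" ]]; then
          echo "$pkg OK"
        else
          sudo apt-get install -y "$pkg"
        fi
      done
    }

    # Example: pkg_check fastfetch git curl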

One of the functions in the script that was giving me trouble was the one that downloaded the configuration and logo files from my Gitea server. To get it to work, I resorted to using hard-coded paths instead of the variables.

On a whim, I looked at the set command at the beginning of the script, set -euo pipefail. I ran the script with various combinations of those options to see if one of them was causing my issues. It turned out that -o pipefail was the culprit. Without that option, the package-checking function worked as it should. Then I checked the configuration download function with the variables instead of the hard-coded paths, and it worked too.
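In hindsight, the behavior makes sense: with -o pipefail, a pipeline fails if any command in it fails, not just the last one, and combined with -e that can abort a script or flip a check that was written with only the last command’s exit status in mind. A minimal illustration:

    #!/usr/bin/env bash
    # How -o pipefail changes a pipeline's exit status.

    false | true
    echo "without pipefail: $?"   # prints 0 - only the last command counts

    set -o pipefail
    false | true
    echo "with pipefail: $?"      # prints 1 - any failing stage fails the pipeline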

I was using a script format that I use with several other applications that I install and update from their respective GitHub repositories, so I didn’t expect so many problems. I started off with a script template and copied the applicable code from a similar script. It turns out I had not been using set -o pipefail in those scripts; if I had, it would likely have caused the same problems there.

After struggling for two days, I now have Fastfetch with my configuration installed on all my compatible systems, and it looks good. I really liked my Neofetch configuration, but this is good too.

Installing Virginia

I’ve been using Linux Mint for years, at least a decade, and it’s been my daily driver for many of them. Currently, I’m running it on four machines: my main computer, a laptop dedicated to my finances, another laptop that used to be my main laptop, and a desktop I’d intended as a replacement for my wife’s Windows 10 PC. I’d upgraded all of them except my main machine soon after Linux Mint 21.3 Virginia was released, and those upgrades went quite well, with few, if any, problems. They’ve been working well since.

Generally speaking, the new and updated features included in the Mint 21 point releases haven’t had anything that excited me or anything I had a particular need for. Mostly, they were minor cosmetic changes, although I can see the usefulness some of the changes made to the Cinnamon desktop and the Nemo file manager might have for someone who primarily uses the GUI. I, however, spend most of my time in the terminal and don’t use the GUI all that much. I keep up with the new releases mainly to make updates and upgrades easier down the line.

Yesterday I decided that it was time to upgrade the main machine. The upgrades to the other three Mint machines had been relatively quick and easy, so I didn’t anticipate any problems. Prior to starting the upgrade, I’d installed updates to the other Mint installations, and among the updates was Firefox. Linux Mint, because of their dislike of Snap packages, maintains their own version of Firefox which, I’ve noticed, usually takes twenty to thirty minutes to download and install. I’ve been surprised on occasion by a quick download, but that’s the exception, not the rule, and the latest updates followed the rule.

In over thirty years of working on computers I’ve found that an easy, problem-free upgrade is a rare event, so I should have been wary. While the previous upgrades to Virginia had taken less than an hour, this upgrade took over four hours to complete. (I probably could have installed it from the ISO quicker.) On average, the download rate when fetching packages was about 25 KB/s. I occasionally saw it go as high as 52 KB/s and drop as low as 3 KB/s. I do have reasonably fast Internet, and while my computer and my network topology aren’t the latest and greatest, they’re quite adequate for the task at hand. Other than the download speeds, the upgrade was trouble-free.

Already, I’m starting to see information about Linux Mint 22, which will be coming out sometime this summer, and I’m having reservations about my future with Linux Mint. It has worked very well for me over the years and I really like it. The biggest irritation I have with it lately has been the overall slowness of Firefox updates. Firefox hasn’t been my primary browser for years; I prefer Chromium-based browsers. Right now, I’m only using Firefox on my main PC as the web interface for my Git repositories.

Yesterday I began an experiment on a Mint laptop where I removed the Mint version of Firefox and replaced it with the Mozilla DEB package. I’ll be keeping an eye on it to see how well it goes.

When Linux Mint 22 Wilma is released this summer, will I upgrade to it or try something else as my daily driver? I have Linux Mint Debian Edition (LMDE) running on a couple of machines and it has been working quite well. I can actually see myself living in LMDE on my ‘production’ machines, particularly when considering Canonical’s and Ubuntu’s increased emphasis on pushing Snap packages for applications. It’s going to be increasingly difficult for the Mint team to work around that.

Debian has become the dominant operating system on my network in the past year or so. I’m running minimal Debian installations with i3WM on several machines as well as Debian-based distributions such as BunsenLabs, MX Linux, and LMDE. Migrating from the Ubuntu-based Mint to LMDE would likely be a natural transition. As a near-term project, I’m planning to install i3WM on one of the LMDE systems as an alternative to the Cinnamon desktop.

So far, Virginia seems to be working well and I’ll likely stay on it until after Wilma is released. Mint 21 will be supported until April 2027, so I don’t have to be in a hurry to upgrade or move on to something else.

Mint 21.3 Upgrades

After a month in beta, Linux Mint finally released 21.3 Virginia last Thursday, and I downloaded the ISO and the checksum files. A couple of days later, the updates appeared in the Update Manager, and a day or two after that I began updating the four Linux Mint systems. Like the rest of the 21 series, the updates didn’t contain anything that was particularly important to me; most of the changes have been cosmetic. But I think it’s good to upgrade anyway because it will make it easier to do an in-place upgrade to Mint 22 when it comes out this summer.

Three of the four systems have been upgraded, with the main system being the only one left. I started with the HP ProBook 6570b on the back shelf to see how well it would go. I’m happy to say that it went very well and was quicker than I’d anticipated. A couple of days later, I upgraded the Finance laptop (another ProBook 6570b) and the HP 800 G1 USDT. Neither of them encountered any problems, and the upgrades completed even faster than the first. So far, the only new feature I’ve used is the option to center the login window on the display manager screen. I’m not that concerned about the enhancements they’ve made to the file manager, the icons, and the GUI in general.

I will more than likely get around to upgrading the HP 800 G2 SFF within the next few days. There are still four machines on the network running some form of Debian 11. It will probably be a while before I upgrade the Gitea server because I want to be sure I can back up and, if necessary, recover the Gitea database. The next Debian upgrade will likely be the HP 800 G1 desktop mini, which is currently running Debian 11 with the Cinnamon desktop environment; I will probably do a minimal installation of Bookworm with i3. That leaves the Gateway E-475M laptop and the HP 110 netbook, which are both running BunsenLabs Beryllium (based on Debian 11). They both run Openbox, which is okay, but I’m thinking I’ll go with minimal Debian and i3.

Debian Upgrades

After putting it off for a long time, I began upgrading my Debian 11 systems to Debian 12. After reviewing the in-place upgrade process, I wrote scripts to handle each stage. There’s probably a way to do it all in one script, but I didn’t feel like messing with that.

The first script updates the current Debian installation. Then, after a reboot, the second script uses sed to replace all the instances of bullseye with bookworm in /etc/apt/sources.list and, if it exists, in /etc/apt/sources.list.d/bullseye.backports.list. The script also adds non-free-firmware where necessary and, if the backports list exists, renames it to bookworm.backports.list. Then it performs a full upgrade using the updated source lists. Finally, the third script confirms the upgrade by displaying the release and version information, then cleans the apt cache and removes orphaned packages.
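The middle stage amounts to something like this (a simplified sketch; the sed expressions and error handling in my actual script differ):

    #!/usr/bin/env bash
    # Stage 2 (sketch): point apt at bookworm and run the full upgrade.
    set -e

    old_bp="/etc/apt/sources.list.d/bullseye.backports.list"
    new_bp="/etc/apt/sources.list.d/bookworm.backports.list"

    # Switch the release name in the main sources list.
    sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list

    # Bookworm moves firmware into a separate non-free-firmware component.
    sudo sed -i '/non-free-firmware/!s/non-free/non-free non-free-firmware/' /etc/apt/sources.list

    # Update and rename the backports list if it exists.
    if [[ -f "$old_bp" ]]; then
      sudo sed -i 's/bullseye/bookworm/g' "$old_bp"
      sudo mv "$old_bp" "$new_bp"
    fi

    sudo apt update
    sudo apt full-upgrade -y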

Overall, the process has worked quite well for me. The only real problem I’ve come across so far was with my main laptop, an HP EliteBook 850 G3 with the i3 window manager. In a terminal window, I lose half of the bottom line when the window is maximized. Applications run inside the terminal, such as Micro and Bat, look fine, but the bottom line of Htop is cut off when it’s maximized. It only occurs on this particular laptop, and the only change has been the upgrade to Bookworm; none of my configuration files have changed. Online searches have provided me with nothing useful.

I’ve got one more desktop computer on which to do an in-place upgrade, my Gitea server. I’m going to hold off on that for a while until I get a feel for backing up the database. Maybe once I feel comfortable with that, I can do a complete rebuild of the system and give it a larger root partition, or maybe just a single partition.

Other than that, I have an HP mini-PC that’s currently running Debian 11 with Cinnamon. That one was upgraded in-place from Debian 10. I plan to wipe it and do a fresh Debian 12 installation with i3wm. I also have two older laptops currently running BunsenLabs 11, which I’m considering switching to Debian and i3. BunsenLabs uses Openbox, and after using a tiling window manager for a while, a floating window manager just doesn’t have much appeal. Plus, on those laptops, I really don’t need most of the applications and utilities that are included with the distro.

No Zen With Zenity

The idea of incorporating Zenity into my Bash scripts has been on my mind for some time. Having pop-up boxes to prompt for input, enter a password, or display a warning or an error message is intriguing. I can think of situations where having a file selector or a graphical progress bar would be handy. I have at least a couple of scripts in which such things could be very useful.

I’ve come across a lot of online articles and YouTube videos discussing and demonstrating Zenity and similar tools, but what I’ve seen tends to treat them as individual, one-off tools, not as part of any kind of integrated system. I see the potential usefulness of these tools, but pop-up boxes seem kind of random, maybe even distracting. I can easily write a prompt, a warning, or an error message, and even add color to make it stand out, without the extra code to put it in a GUI box.

What I’m looking for is a way to integrate them into a cohesive workflow. For instance, I have a script that takes an ISO file and its associated checksum file, compares them, shows the progress, and reports the results. The script is completely text based. It would be nice if I could bring up a tool to select the input files, show the progress of the comparison, and display the results. I also have a script to select an ISO file and a USB drive to write it to using the dd command. I’d like a simple graphical tool to select the file and the device, write the file to the device, ideally while showing the progress of the operation, and then report success or failure. I’m sure it wouldn’t be very difficult.
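For the checksum script, the Zenity version might look roughly like this (a sketch, not a finished tool; the titles and messages are just illustrative):

    #!/usr/bin/env bash
    # Pick the ISO and checksum files, verify, and report the result.

    iso=$(zenity --file-selection --title="Select ISO file") || exit 1
    sum=$(zenity --file-selection --title="Select checksum file") || exit 1

    tmp=$(mktemp)
    (
      sha256sum "$iso" | awk '{print $1}' > "$tmp"
      echo 100    # tell the progress dialog we're done
    ) | zenity --progress --pulsate --auto-close --title="Verifying" \
        --text="Computing SHA256 checksum..."

    actual=$(cat "$tmp"); rm -f "$tmp"
    expected=$(awk 'NR==1 {print $1}' "$sum")

    if [[ "$actual" == "$expected" ]]; then
      zenity --info --text="Checksums match."
    else
      zenity --error --text="Checksum mismatch!"
    fi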

Messing with backups

Lately, I’ve been exploring the tar command and tinkering with backups. I don’t know why I’ve never looked into it before. I’ve been using rsync to create snapshots of my home directory on my “production” machines. While technically not backups, these snapshots have been useful. I’ve also used rsync to copy certain directories to computers across my network. Over the years, I’ve also written scripts that use zip to create compressed archives of my script directory along with some other directories.
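Those snapshots are essentially one-liners along these lines (the destination path and excludes here are only illustrative):

    # Mirror the home directory into a snapshot directory; --delete keeps
    # the copy in sync rather than keeping older versions.
    rsync -aAXH --delete --exclude='.cache/' "$HOME/" /mnt/snapshots/"$(hostname)"/home/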

About a month ago I created a few scripts to make daily, weekly, and monthly snapshots of my local repositories using tar. That’s been working well, and it prompted me to look into setting up incremental and differential backups using tar. I found some articles and YouTube videos on the subject to get familiar with the concepts. It wasn’t until I started actually experimenting with it that it began to gel, and I cobbled together a couple of rudimentary backup scripts. Soon I was able to flesh them out and write scripts for incremental and differential backups.

I created scripts to make incremental backups of my two main repository directories. One contains a local copy of the public repositories that I have on GitHub, and the other is my private repository that I store on a local server. As of this writing I’ve only done the initial full backup of the repositories, so it will take a while to see how well it works, and I’ll probably still have to deal with a few bugs. Within hours of doing the first backups, I found a couple of minor ones, which I fixed straight away. They didn’t affect the operation; they were just minor cosmetic issues with how the archive files were named.

I have the scripts set up to append a six-digit date (yymmdd) followed by the numerical day of the week (0-6) to the archive name. A full backup is done on Sunday (day 0), and incrementals are done the next six days. On Sunday the metadata file is renamed using the date of the previous Sunday, and a new metadata file is created for the coming week. It will be at least a couple of weeks before I know it’s working as expected, but I’m confident I’ve got it right.
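The weekly cycle boils down to something like this (a sketch with placeholder paths, not the actual scripts):

    #!/usr/bin/env bash
    # Weekly cycle: full backup on Sunday, incrementals the rest of the week.

    src="$HOME/repos"                 # directory being backed up (placeholder)
    dest="/backup/repos"              # backup destination (placeholder)
    stamp=$(date +%y%m%d)             # six-digit date, yymmdd
    dow=$(date +%w)                   # day of the week, 0 = Sunday
    snar="$dest/repos.snar"           # tar's incremental metadata file

    if [[ "$dow" -eq 0 && -f "$snar" ]]; then
      # New week: set aside last week's metadata so Sunday's run is a full backup.
      mv "$snar" "$dest/repos-$(date -d '7 days ago' +%y%m%d).snar"
    fi

    tar --create --gzip \
        --listed-incremental="$snar" \
        --file="$dest/repos-${stamp}${dow}.tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"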

If this works out, I’m thinking about implementing incremental backups for other important directories and backing up to external drives. Delving into tar with incremental and differential backups has opened up some new possibilities.

My Scripts for ISOs

I’ve never really been into distro-hopping, but I do occasionally put different distros in VMs or on actual hardware. I currently run Debian, Linux Mint, Linux Mint Debian Edition, MX Linux, and BunsenLabs on different machines. I keep current ISO files for these distributions as well as a few others that interest me.

I haven’t always checked the ISO files I download against the checksum files; it seemed like a hassle. Now, though, I always confirm that the files I download are genuine.

When I need to install a distro on a laptop or desktop, I generally need to write the ISO to a USB stick. Some distros, like Mint, have a utility for that, but I’ve found that it’s not always reliable. I’ve copied ISO files to a USB drive with Ventoy installed, but my experience with Ventoy has been rather disappointing; sometimes it works, and sometimes it doesn’t.

The most consistent method I’ve found, particularly for Debian and Debian-based distributions, has been using dd to write the ISO to a USB drive. As everyone knows, dd is potentially dangerous.

To deal with these problems, I’ve written a couple of scripts to verify the ISO files and reduce the risks. They aren’t foolproof, but they’ve worked well for me.

My check-iso script displays the ISO and checksum files in a directory and prompts me to enter the appropriate files. I can type them in, but I usually highlight the file name with my mouse and middle-click to paste it at the prompt. I don’t know if that works in all terminals, but it works in Kitty and Terminator. The script then compares the two checksums and tells me whether they match.

When I download an ISO file from a distro’s website, I get the checksum and put it in a file whose name identifies the distro and the type of checksum, for example, distro-iso.sha256. If the site’s checksum file contains checksums for multiple versions, I break it down into individual files for each version, because my script reads the first field on the first line.
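Stripped of the niceties, check-iso boils down to something like this (a condensed sketch, not the actual script; the directory is just an example):

    #!/usr/bin/env bash
    # List the candidates, prompt for the files, and compare checksums.
    cd ~/Downloads/iso || exit 1

    echo "ISO files:";      ls -1 ./*.iso
    echo "Checksum files:"; ls -1 ./*.sha256 ./*.sha512 2>/dev/null

    read -rp "ISO file: " iso
    read -rp "Checksum file: " sumfile

    # Expected value: first field of the first line of the checksum file.
    expected=$(awk 'NR==1 {print $1}' "$sumfile")

    case "$sumfile" in
      *.sha512) actual=$(sha512sum "$iso" | awk '{print $1}') ;;
      *)        actual=$(sha256sum "$iso" | awk '{print $1}') ;;
    esac

    if [[ "$expected" == "$actual" ]]; then
      echo "Checksums match."
    else
      echo "Checksums DO NOT match."
    fi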

The write-iso script lists the available ISO files and presents a prompt. Then it checks to see if a USB drive is attached and mounted, and lists the available removable media with their capacities. The user enters the appropriate device at the prompt and is then asked to confirm the choice, which must be explicitly answered with yes or no.
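The core of the script looks roughly like this (simplified; the actual script does more checking of the chosen device):

    #!/usr/bin/env bash
    # List ISOs, list removable drives, and require an explicit yes before dd runs.
    ls -1 ~/Downloads/iso/*.iso
    read -rp "ISO file to write: " iso

    echo "Removable drives:"
    lsblk -d -o NAME,SIZE,RM,MODEL | awk '$3 == 1'

    read -rp "Target device (for example, sdb): " dev
    read -rp "Write ${iso##*/} to /dev/$dev? This will destroy its contents (yes/no): " answer

    if [[ "$answer" == "yes" ]]; then
      sudo dd if="$iso" of="/dev/$dev" bs=4M status=progress conv=fsync
      sync
    else
      echo "Aborted."
    fi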

Scripts, like poems, are never finished. Most of the time, they’re abandoned when they’ve outlived their usefulness or I find something else that does the job better.

i3 on Bookworm

I’ve been planning to upgrade my Debian systems since Bookworm was released earlier this summer. Of course, I plan to keep i3wm on the systems that already have it, and I’ve spent a lot of time researching what needed to be done to accomplish that.

Drew Grif’s (JustAGuy Linux) video and scripts were very beneficial in helping me get i3 up and running on Debian 11, and I was thrilled when he came out with a video in which he installed several window managers on Debian 12. I cloned his scripts from his GitHub repository and studied them, but found it a little difficult to separate the i3-specific files from the rest. I’m not really interested in the other window managers, although I may install all of them in a VM or on hardware to check them out.

What I ultimately decided to do was take my current install script and update it to install i3 on Bookworm. I added the appropriate Polybar file and made the necessary adjustments to the i3 configuration files. Then I upgraded one of my test laptops, a Dell Latitude E-6500, installing Debian 12 on the root partition and keeping the home partition. It took some work, but I got Polybar working with it, and it’s running great. I’ve since updated my local i3-Debian repository, so I think I’m ready to try it on another system. This time I’ll probably do a clean install and then restore my home folder from backup.

I was also able to install Polybar on my HP EliteBook laptop, and it’s working out great. I’m not sure whether the current version of Polybar actually supports applets in the taskbar, but I have them loaded by my i3 autostart script after Polybar has loaded, and it seems to be working okay. The EliteBook is a production machine and will probably be among the last to be upgraded.
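The arrangement is roughly this (a sketch; the applets and the Polybar launch script shown here are just examples, with the script run from the i3 config via something like exec_always --no-startup-id ~/.config/i3/autostart.sh):

    #!/usr/bin/env bash
    # Start Polybar first, give it a moment to create its system tray,
    # then start the tray applets so they land in Polybar's tray.
    killall -q polybar
    ~/.config/polybar/launch.sh &
    sleep 2

    nm-applet &
    blueman-applet &
    volumeicon &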

It’s strange that I can’t find Polybar in the Debian 11 repositories on the other systems running Bullseye. That’s not a big deal since the Bumblebee-Status bar works well on those systems, and they’ll all be upgraded to Bookworm before too long.

Drew Grif’s videos and repositories:

Coming Upgrades

Debian 12 Bookworm has been out for about a month, but I haven’t installed it on anything yet. I haven’t even upgraded any of my Bullseye systems. I found a YouTube video by Drew Grif on the JustAGuy Linux channel that was very helpful. In his video, Drew installs five different window managers, i3 among them. I’ve been looking at his script, trying to figure out which files go with each window manager and whether I can separate out the ones I need for i3. He’s also using Polybar as his status bar, which I’m familiar with. Maybe I should use his installation scripts and install all of them on a system; maybe I’ll find that I like one of the others better.

I have been adding and deleting files from his installation script to fit my needs. Today I created new repositories on my Gitea server, cloned them to my main computer, and copied the files to them. Now I’ll be able to modify the scripts and configuration files, and track the changes using git. Having a “fork” of Drew’s repository on my Gitea server will enable me to clone the repositories to a new installation.

I’ve been seeing a lot of good reviews of Debian 12, and I’m looking forward to upgrading from Bullseye, particularly with the window managers. I’m also looking forward to LMDE 6 when it comes out. Now that Linux Mint has released Mint 21.2, LMDE 6 will hopefully be out in a few months.

I’ll probably be upgrading my Mint systems before too long, once the in-place upgrade procedures are available. From what I’ve seen, most of the new features are cosmetic or things I don’t really use; there really hasn’t been much in Mint 21 that wows me. The HP ProBooks that had been running Mint 20.x without much problem haven’t been performing nearly as well with 21.x; they’re running noticeably slower. One of those laptops is a “production” machine that I use daily, and installing an SSD in it has helped. I also had an HP USDT that’s a dedicated media system, and Mint 21 slowed it down considerably; I ended up installing Debian 11 with i3 on that.

I realize that most of my computers are older, acquired during refresh projects, and it’s been five years since I worked on one of those projects. Most of the machines have third- or fourth-generation i5 processors, so I can’t expect great performance out of them. Maybe it’s time to start acquiring newer old hardware.

This is a bit off the subject, but the HP ProBook that I use a lot seemed to be running a little warm, so I opened it up to take a look. It’s a bit old, so I figured it might be time to put some fresh thermal paste on the CPU. I pulled out the fan and the heatsink, cleaned everything up, applied new paste, and put it all back together. It’s now running several degrees cooler. It’s still a bit warm, but I think this model tends to run warm. It was nice that I didn’t need to completely disassemble it like the service manual wanted me to.

Links to Drew Grif’s videos and repositories:

BU Revisited

I’ve recently made a couple of significant changes to Joe Collins’ BU script. With as many modifications as I’ve made, it hardly resembles Joe’s original script. The backup, restore, and help functions are still mostly as he wrote them, although I have made some changes to them.

One change I made since my April post, Joe’s Backup Script, was to replace the if-elif-else construct that checks the command-line arguments with a case statement. The default action with no arguments is still to run a backup, but now I’ve also allowed it to be entered explicitly as an option.
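The argument handling now amounts to a case statement along these lines (a sketch; the function and option names here are illustrative, not necessarily the ones in the script):

    # Default to a backup when no argument is given.
    case "${1:-backup}" in
      backup|-b)  do_backup  ;;
      restore|-r) do_restore ;;
      help|-h)    show_help  ;;
      *)
        echo "Invalid option: $1" >&2
        show_help
        exit 1
        ;;
    esac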

Most recently, I discovered a video and an accompanying script by Kris Occhipini that demonstrated a method of checking for a removable drive’s UUID and mounting it. It inspired me to create a function that I could incorporate into the BU script to determine whether a particular backup drive is connected to the appropriate computer, thus preventing me from inadvertently backing up a system to the wrong drive and potentially running out of space.

The function holds the backup drive UUIDs and the computer hostnames in variables and checks for the appropriate UUID. If it detects one of them, it compares the current hostname with the hostname associated with that drive. If I have the wrong drive/hostname combination, the function returns a failure code and the script exits with an appropriate error message. Should one of these drives be connected to one of my test computers, it raises the same error condition, preventing an extraneous backup on that drive. Should an unlisted backup drive be attached, the backup script still continues normally, which allows me to use the script to back up my home directory on my test machines.

I initially assigned the UUIDs and the target hostnames to individual variables, and later decided that using arrays to hold this information would be a better way to go. That worked well, and the function looks nicer. It will probably be easier to maintain as long as I keep each UUID associated with the proper hostname.
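The array-based version amounts to something like this (a sketch using an associative array; the UUIDs and hostnames are placeholders, and the real function may differ in its details):

    # Return 0 if the attached backup drive (if any) belongs to this host.
    check_backup_drive() {
      declare -A drives=(
        ["UUID-OF-DRIVE-ONE"]="main-desktop"
        ["UUID-OF-DRIVE-TWO"]="finance-laptop"
      )
      local uuid
      for uuid in "${!drives[@]}"; do
        if lsblk -no UUID | grep -q "^${uuid}$"; then
          # A listed backup drive is attached; it must be on its own machine.
          [[ "$(hostname)" == "${drives[$uuid]}" ]] && return 0 || return 1
        fi
      done
      return 0   # no listed drive found; let the backup proceed normally
    }

    # In the main script:
    # check_backup_drive || { echo "Wrong backup drive for this machine." >&2; exit 1; }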

A special thanks to Joe Collins and Kris Occhipini for sharing their work. They’ve inspired me and provided me with the tools to build upon.