
Neofetch to Fastfetch

In the past few days, I’ve seen several articles and videos about the Neofetch repository being archived on GitHub. It hadn’t been updated in nearly four years, and the developer has reportedly taken up farming. It’s still in most of the distribution repositories, but it will likely be dropped as the distributions update their repositories.

One of the recommended replacements I’ve been hearing about is Fastfetch, written in ‘mostly’ C with a jsonc configuration file. I took a look at the Fastfetch GitHub page, and it looked pretty interesting, with plenty of customization options.

I downloaded and installed it, found one of the preset configurations I liked, and added and removed a few modules. I also added the ASCII image that I’ve been using in my Neofetch configuration. I found it a bit frustrating to get everything working since the documentation on GitHub and in the man page wasn’t very useful. I mostly figured it out by studying the presets and through trial and error.

I noticed that the information in the README file didn’t quite match up with what I was experiencing, things like file names and paths for the Debian installation, for instance. I also noted that the README indicated the current version worked with Debian 11 and newer, and Ubuntu 20.04 and newer. Looking at the Releases page, I saw that starting with version 2.8.2, the Linux binaries are built with glibc 2.35, meaning they no longer support Debian 11 and Ubuntu 20.04. I figured the release notes were more likely to be right and went by them. I still have a few machines running Bullseye.

Once I had it up and running on one machine, I wrote an installation script, using another of my installation scripts as a model. Getting the script right was an adventure in itself. Several functions and global variables from my sourced function library either didn’t work at all or behaved oddly. There were also some typos and variable names that didn’t get renamed, but that’s one of the dangers of cutting and pasting code.

I tested the problem code with other scripts and on the command line, and it seemed to work fine; it just didn’t work with this script. I spent a lot of time watching the debug information as the script ran and what was being displayed in my terminal.

In my function library, I have a function that takes an array of packages and checks to see if they’re installed. If a package is installed, the function prints the package name and OK; otherwise, it installs the package. The function was attempting to install each package, only to find that each was already the latest version.
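
The real function lives in my sourced library, but a rough sketch of the idea looks something like this (the package names and the install command are illustrative, not my actual library code):

```bash
# Sketch of a package-check function: print "OK" for installed packages,
# install the rest. Names and packages here are placeholders.
check_packages() {
    local pkg
    for pkg in "$@"; do
        if dpkg -l "$pkg" 2>/dev/null | grep -q '^ii'; then
            echo "$pkg - OK"
        else
            sudo apt-get install -y "$pkg"
        fi
    done
}

packages=(fastfetch imagemagick)
check_packages "${packages[@]}"
```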

One of the functions in the script that was giving me trouble was the one that downloads the configuration and logo files from my Gitea server. To get it to work, I resorted to using hard-coded paths instead of the variables.

On a whim, I looked at the set command at the beginning of the script, set -euo pipefail. I ran the script with various combinations of those options to see if one of them was causing my issues. It turns out that -o pipefail was the culprit. Without that option, the package-checking function worked as it should. Then I checked the configuration download function with the variables instead of the hard-coded paths, and it worked.
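
In hindsight, the likely mechanism is the usual pipefail gotcha with grep -q: grep exits as soon as it finds a match and closes the pipe, the command feeding it can then be killed with SIGPIPE, and pipefail reports the whole pipeline as a failure. A minimal illustration of the pattern (a generic check, not my exact function):

```bash
#!/usr/bin/env bash
set -euo pipefail

# grep -q exits as soon as it sees a match and closes the pipe. dpkg -l,
# still writing, can then die with SIGPIPE (exit 141). With pipefail the
# pipeline as a whole reports failure, so the else branch can run even
# though the package is installed.
if dpkg -l 2>/dev/null | grep -q '^ii  bash '; then
    echo "bash - OK"
else
    echo "bash - not installed?"   # reached spuriously under pipefail
fi
```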

I was using a script format that I use with several other applications that I install and update from their respective GitHub repositories, so I didn’t expect so many problems. I started off with a script template and copied the applicable code from a similar script. It turns out I had not been using set -o pipefail in those scripts; if I had, it would likely have caused the same problems there.

After struggling for two days, I now have Fastfetch with my configuration installed on all my compatible systems, and it looks good. I really liked my Neofetch configuration, but this is good too.

Repo Snapshots

Back in September, I created several scripts to create daily, weekly, and monthly snapshots of my local repositories. I don’t remember for sure, but I think I was inspired by one or more videos demonstrating how to create backups using the tar command. The examples I saw were probably separate scripts for each time increment. I created my own scripts and am running them as cron jobs. That’s been working out very well.

This morning I got to thinking that I could probably combine the daily, weekly, and monthly jobs, along with the short script that syncs them to my Gitea server, into one script. I cut and pasted the pertinent lines from the scripts into one file and cobbled them together into something workable. The script is meant to run as a daily cron job and uses if statements to determine when each snapshot should run.

Later on, I took another look at the script and determined that I could improve upon it. I put the commands for each backup into their own functions so I could use the test brackets to call the functions, eliminating the if statements. The first rendition of this script used the numeric representation of the days of the week to determine whether the weekly snapshot should be performed. Rather than test for 0 or 7 representing Sunday, I changed the day-of-week variable to hold the abbreviated day of the week (Sun) instead, like I did with my incremental backup scripts.

This new script replaced four scripts in my crontab. It is scheduled to run daily and uses conditional statements to determine which functions will run, as shown in the sketch below. The current schedule of backups is maintained, with the weekly snapshot running every Sunday and the monthly one on the first of the month. I believe this will be more efficient and help declutter my crontab.
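
The combined script boils down to something like this sketch (the paths, function bodies, and sync step are simplified placeholders, not my actual code):

```bash
#!/usr/bin/env bash
# Sketch of the combined snapshot cron job.
set -eu

src="$HOME/repos"
dest="$HOME/snapshots"
mkdir -p "$dest"/{daily,weekly,monthly}

daily_snapshot()   { tar -czf "$dest/daily/repos-$(date +%y%m%d).tar.gz"   -C "$src" . ; }
weekly_snapshot()  { tar -czf "$dest/weekly/repos-$(date +%y%m%d).tar.gz"  -C "$src" . ; }
monthly_snapshot() { tar -czf "$dest/monthly/repos-$(date +%y%m%d).tar.gz" -C "$src" . ; }
sync_snapshots()   { rsync -a "$dest/" gitea:snapshots/ ; }   # placeholder target

daily_snapshot                                    # every day
[[ $(date +%a) == "Sun" ]] && weekly_snapshot     # Sundays
[[ $(date +%d) == "01" ]] && monthly_snapshot     # first of the month
sync_snapshots
```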

No Zen With Zenity

The idea of incorporating Zenity into my Bash scripts has been on my mind for some time. Having pop-up boxes to prompt for input, enter a password, or display a warning or an error message is intriguing. I can think of situations where having a file selector or a graphical progress bar would be handy. I have at least a couple of scripts in which such things could be very useful.

I’ve come across a lot of online articles and YouTube videos discussing and demonstrating Zenity and similar tools, but what I’ve seen tends to talk about them as individual, one-off tools, not as any kind of integrated system. I see the potential usefulness of these tools, but pop-up boxes seem kind of random, maybe even distracting. I can easily write a prompt, a warning, or an error message, even add color to make it stand out, without having to add extra code to put into a GUI box.

I’m looking to integrate them into a cohesive workflow. For instance, I have a script that takes an ISO file and its associated checksum file, compares them, shows the progress, and reports the results. The script is completely text based. It would be nice if I could bring up a tool to select the input files, show the progress of the comparison, and display the results. I also have a script that selects an ISO file and a USB drive and writes the ISO to the drive using the dd command. I’d like to have a simple graphical tool to select the file and the device, write the file to the device, ideally while showing the progress of the operation, and show success or failure. I’m sure it wouldn’t be very difficult.
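
As a rough idea of what I have in mind, a Zenity version of the checksum comparison might look something like this sketch (the file names, dialog text, and the SHA-256 assumption are mine, not a finished script):

```bash
#!/usr/bin/env bash
# Sketch: pick the files with Zenity, show a pulsating progress dialog
# while the checksum runs, then report the result in a dialog.
set -eu

iso=$(zenity --file-selection --title="Select ISO file" --file-filter="*.iso") || exit 1
sumfile=$(zenity --file-selection --title="Select checksum file") || exit 1

# Compute the checksum in the background...
sha256sum "$iso" > /tmp/iso-check.$$ &
pid=$!

# ...while a pulsating progress dialog runs until it finishes.
( while kill -0 "$pid" 2>/dev/null; do sleep 1; done; echo 100 ) |
    zenity --progress --pulsate --auto-close --title="Verifying ISO" \
           --text="Computing checksum..."
wait "$pid"

computed=$(awk '{print $1}' /tmp/iso-check.$$)
expected=$(awk 'NR==1 {print $1}' "$sumfile")
rm -f /tmp/iso-check.$$

if [[ "$computed" == "$expected" ]]; then
    zenity --info --text="Checksums match."
else
    zenity --error --text="Checksums do NOT match."
fi
```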

Messing with backups

Lately, I’ve been exploring the tar command and tinkering with backups. I don’t know why I never looked into it before. I’ve been using rsync to create snapshots of my home directory on my “production” machines. While technically not backups, these snapshots have been useful. I’ve also used rsync to copy certain directories to computers across my network. Over the years, I’ve also written scripts that use zip to create compressed archives of my script directory along with some other directories.

About a month ago, I created a few scripts to make daily, weekly, and monthly snapshots of my local repositories using tar. That’s been working well, and it prompted me to look into setting up incremental and differential backups using tar. I found some articles and YouTube videos on the subject to get familiar with the concepts. It wasn’t until I started actually experimenting with it that it began to gel, and I cobbled together a couple of rudimentary backup scripts. Soon I was able to flesh them out and write scripts for incremental and differential backups.

I created scripts to make incremental backups of my two main repository directories. One contains a local copy of the public repositories I have on GitHub, and the other is my private repository that I store on a local server. As of this writing, I’ve only done the initial full backups of the repositories, so it will take a while to see how well it works. I’ll probably still have to deal with a few bugs. Within hours of doing the first backups, I found a couple of minor bugs, which I fixed straight away. They didn’t affect the operation; they were just minor cosmetic issues in how the archive files were named.

I have the scripts set up to append the archive name with a six-digit date (yymmdd) followed by the numerical day of the week (0-6). A full backup is done on Sunday (day 0), and incrementals are done the next six days. On Sunday, the metadata file is renamed using the date of the previous Sunday, and a new metadata file is created for the next week. It will be at least a couple of weeks before I know it’s working as expected, but I’m confident I’ve got it right.
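
The weekly cycle amounts to something like this sketch, using tar’s --listed-incremental option (the paths and naming details are approximate, not my exact script):

```bash
#!/usr/bin/env bash
# Sketch of the weekly incremental cycle.
set -eu

src="$HOME/repos"
dest="$HOME/backups/repos"
meta="$dest/repos.snar"          # tar's metadata (snapshot) file
stamp=$(date +%y%m%d)            # six-digit date, e.g. 240519
dow=$(date +%w)                  # numeric day of week, 0 = Sunday
mkdir -p "$dest"

if [[ "$dow" == "0" && -f "$meta" ]]; then
    # New weekly cycle: retire last week's metadata under the previous
    # Sunday's date, so tar starts the week with a full backup.
    mv "$meta" "$dest/repos-$(date -d '7 days ago' +%y%m%d).snar"
fi

# With --listed-incremental, tar makes a full backup when the metadata
# file doesn't exist yet and an incremental one when it does.
tar --listed-incremental="$meta" -czf "$dest/repos-${stamp}${dow}.tar.gz" -C "$src" .
```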

If this works out, I’m thinking about implementing incremental backups for other important directories and backing up to external drives. Delving into using tar for incremental and differential backups has opened up some new possibilities.

Update on my GitHub Repos

After GitHub changed to using SSH to push commits to repositories a couple of years ago, I tried to get that all set up and, for the life of me, I couldn’t get it to work. I was apparently missing something, so I gave up on it. About a year ago, I set up a local Gitea server on my home network, and that’s been working well for me. The biggest disadvantage was that there was no access to it from outside my network. Everything on my Gitea server is primarily for my own use, but I do have some scripts and projects I might want to share. In the meantime, everything I had on GitHub was sitting stagnant and well out of date.

A few days ago, I decided to take another look at my GitHub presence. I successfully set up my SSH public key on GitHub, and after searching online, I found a couple of git commands that got my old repositories talking to GitHub again via SSH. Then I set forth to update the repositories. I archived the homebankarchive and yt-dl-utilities repos since they’re probably not very useful; I’ll probably remove them completely soon. The FnLoC and FnLoC-Win repositories are still current despite not having done any work on them in a few years. There are some features and improvements I’d like to make, but I don’t know how to implement them. My knowledge of C programming has diminished over the years.
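
I don’t remember the exact commands, but switching an existing remote over to SSH generally comes down to something like this (the repository URL is a placeholder):

```bash
# Point the existing remote at the SSH URL instead of HTTPS.
git remote set-url origin git@github.com:username/bashscripts.git
git remote -v            # confirm the remote now uses the SSH URL
ssh -T git@github.com    # quick test of the SSH connection to GitHub
```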

Then I began updating the bashscripts repository by importing up-to-date versions of the scripts from my Gitea repo and pushing them up to GitHub. I also removed some scripts that were of dubious interest. In the last few days, I’ve been going through my scripts on the Gitea server and exporting some that others might find useful. I’ve been adding new scripts to the repo a couple at a time and updating them as needed.

For those who are interested, here’s my GitHub.

My Scripts for ISOs

I’ve never really been into distro-hopping, but I do occasionally put different distros in VMs or on actual hardware. I currently run Debian, Linux Mint, Linux Mint Debian Edition, MX Linux, and BunsenLabs on different machines. I keep current ISO files for these distributions, as well as for a few others that interest me.

I haven’t always checked the ISO files I download against the checksum files; it seemed like a hassle. Now, though, I always confirm that the files I download are genuine.

When I need to install a distro on a laptop or desktop, I generally need to write the ISO to a USB stick. Some distros, like Mint, have a utility for that, but I’ve found that it’s not always reliable. I’ve copied ISO files to a USB drive with Ventoy installed, but my experience with Ventoy has been rather disappointing. Sometimes it works, and sometimes it doesn’t.

The most consistent method I’ve found, particularly for Debian and Debian-based distributions, has been using dd to write the ISO to a USB drive. As everyone knows, dd is potentially dangerous.

To deal with these problems, I’ve written a couple of scripts to verify the files and reduce the risks. They aren’t foolproof, but they’ve worked well for me.

My check-iso script displays the ISO and checksum files in a directory and prompts me to enter the appropriate files. I can type them in, but I usually highlight the file name with my mouse and middle-click to paste it at the prompt. I don’t know if that works in all terminals, but it works in Kitty and Terminator. The script then compares the two checksums and tells me whether they match.

When I download an ISO file from a distro’s website, I get the checksum and put it in a file whose name identifies the distro and the type of checksum, for example, distro-iso.sha256. If the site’s checksum file contains checksums for multiple versions, I break it down into individual files for each, because my script reads the first field on the first line.
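
Stripped of the listing and progress niceties, the comparison itself amounts to something like this (assuming a SHA-256 checksum; the real script does more):

```bash
#!/usr/bin/env bash
# Simplified sketch of the check-iso comparison.
set -eu

read -rp "ISO file: " iso
read -rp "Checksum file: " sumfile

# The checksum file holds a single entry, so take the first field of the
# first line and compare it with the computed checksum.
expected=$(awk 'NR==1 {print $1}' "$sumfile")
computed=$(sha256sum "$iso" | awk '{print $1}')

if [[ "$computed" == "$expected" ]]; then
    echo "Checksums match."
else
    echo "Checksum MISMATCH!"
fi
```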

The write-iso script lists the available ISO files and presents a prompt. It then checks to see if a USB drive is attached and mounted and lists the available removable media with their capacities. The user enters the appropriate device at the prompt and is then asked to confirm the choice, which must be explicitly answered with yes or no.
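
A bare-bones sketch of the idea (the real script does considerably more checking before it lets dd loose):

```bash
#!/usr/bin/env bash
# Stripped-down sketch of write-iso.
set -eu

ls -1 ./*.iso
read -rp "ISO file to write: " iso

# List attached removable disks with their capacities.
lsblk -d -o NAME,SIZE,RM,TYPE | awk '$3 == 1 && $4 == "disk"'
read -rp "Target device (e.g. sdb): " dev

read -rp "Write $iso to /dev/$dev? This will erase the drive (yes/no): " answer
[[ "$answer" == "yes" ]] || { echo "Aborted."; exit 1; }

sudo dd if="$iso" of="/dev/$dev" bs=4M status=progress conv=fsync
```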

When it comes to scripts, like poems, they’re never finished. Most of the time, they’re abandoned when they’ve outlived their usefulness or I find something else that does the job better.

config-bak

A little over four years ago, I wrote a script to back up my configuration files in case I made a change that didn’t work out or accidentally deleted one of them. My first rendition of the script was quite linear and consisted of individual blocks of code for each file I was backing up, and it grew to be quite large. But it worked, and I used it in that form for a couple of years. Later on, I put each of those individual routines into functions, though it was still quite linear. Recently, I reviewed the script and noticed that most of these functions were identical; the only variations were the configuration files themselves.

After taking a close look at the code, I determined that, with only a few exceptions, most of the files to be backed up were either in the root of my home directory or under the .config directory. I created functions to back up files for each case. There were still some exceptions, such as configurations that might have different names or locations depending on the version, desktop environment, or operating system, and I wrote functions for those special cases. Now the script calls one of the more generic functions and passes the path and file name to it, or one of the specific functions for the special cases.
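
A minimal sketch of the two generic cases might look like this (the backup location and example files are placeholders, not my actual layout):

```bash
# Sketch of the two generic backup cases.
bu_dir="$HOME/.config-bak"

# Back up a file that lives in the root of the home directory.
bak_home() {
    local file="$1" dest="$bu_dir/home"
    [[ -f "$HOME/$file" ]] || return 0
    if [[ ! -f "$dest/$file" || "$HOME/$file" -nt "$dest/$file" ]]; then
        mkdir -p "$dest"
        cp "$HOME/$file" "$dest/" && echo "===> $file"
    fi
}

# Back up a file that lives somewhere under ~/.config.
bak_config() {
    local path="$1" dest="$bu_dir/config"
    [[ -f "$HOME/.config/$path" ]] || return 0
    if [[ ! -f "$dest/$path" || "$HOME/.config/$path" -nt "$dest/$path" ]]; then
        mkdir -p "$dest/$(dirname "$path")"
        cp "$HOME/.config/$path" "$dest/$path" && echo "===> $path"
    fi
}

bak_home ".bashrc"
bak_config "kitty/kitty.conf"
```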

Then I started seeing similarities in the special cases and figured that in most of them, I could use the generic functions with the particular parameters for each case. That left only a small handful of files that didn’t fit either generic case. I have a program whose configuration file is in a hidden folder in the root of the home directory, and another file that’s a couple of levels down in my .local directory. For these special cases, I created another function that places the backup file directly in the backup folder without any directory name.

Finally, there was the dump of my Cinnamon keybindings, which I keep in my .config folder. It’s not a configuration file, per se, but it’s important enough to keep a backup. It’s really the only “special case” file I currently have, so it has its own function to handle it. For the most part, it operates much the same as the other functions, but if the system is running the Cinnamon desktop environment and the keybindings dump doesn’t exist in the .config folder, the function creates the dump file and copies it to the backup folder.
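
Continuing the sketch above, the keybindings case might look roughly like this; I’m assuming the dump is made with dconf, and the dump file name is a placeholder:

```bash
# Keybindings special case (assumes a dconf dump; file name is a placeholder).
bak_keybindings() {
    local dump="$HOME/.config/cinnamon-keybindings.dconf"
    # Only applies when running the Cinnamon desktop.
    [[ "${XDG_CURRENT_DESKTOP:-}" == *Cinnamon* ]] || return 0
    # Create the dump if it doesn't exist yet, then back it up.
    [[ -f "$dump" ]] || dconf dump /org/cinnamon/desktop/keybindings/ > "$dump"
    mkdir -p "$bu_dir/config"
    cp "$dump" "$bu_dir/config/" && echo "===> ${dump##*/}"
}
```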

Over time, I’ve improved the appearance of the script’s output. As the script backs up new or changed files, it displays the pertinent path and filename, preceded by an ASCII arrow (===>). It looks nice, and it lets me know what’s been backed up.

Of course, there is a companion script to restore the configuration files from the local backup. Now that I’ve streamlined the backup script, I’m wondering if I can make the restoration script more modular, eliminating many of the individual functions. A cursory look at the restoration script suggests that I can model it after the backup script. That’s a project for the near future.

BU Revisited

I’ve recently made a couple of significant changes to Joe Collins’ BU script. With as many modifications as I’ve made, it hardly resembles Joe’s original script. The backup, restore, and help functions are still mostly as he wrote them, although I have made some changes to them.

One change I made since my April post, Joe’s Backup Script, was to convert the if-elif-else construct that checks the command line arguments into a case statement. The default action with no arguments is still to run a backup, but now I’ve also allowed it to be entered explicitly as an option.
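
The case statement looks roughly like this (the option spellings and function names are illustrative):

```bash
# Argument handling sketch: no argument defaults to a backup.
case "${1:-}" in
    ""|-b|--backup)  backup ;;      # default action with no arguments
    -r|--restore)    restore ;;
    -h|--help)       show_help ;;
    *)  echo "Invalid argument: $1" >&2
        show_help
        exit 1 ;;
esac
```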

Most recently, I discovered a video and an accompanying script by Kris Occhipinti that demonstrated a method of checking for a removable drive’s UUID and mounting it. It inspired me to create a function that I could incorporate into the BU script to determine whether a particular backup drive is connected to the appropriate computer, thus preventing me from inadvertently backing up a system to the wrong drive and potentially running out of space.

The function has the backup drive UUIDs and the computer hostnames held in variables and checks for the appropriate UUID. If it detects one of them, it compares the hostname with the hostname associated with that drive. If I have the wrong drive/hostname combination, the function returns a failure code and the script exits with an appropriate error message. Should one of these drives be connected to one of my test computers, it raises the error condition, preventing an extraneous backup on that drive. Should an unlisted backup drive be attached, the backup script will still continue normally. This allows me to use the script to back up my home directory on my test machines.

I initially assigned the UUIDs and the target hostnames to individual variables but later decided that using arrays to hold this information would be a better way to go. That worked well, and the function looks nicer. It will probably be easier to maintain, as long as I keep each UUID associated with the proper hostname.
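
The function boils down to something like this sketch (the UUIDs, hostnames, and error message are placeholders):

```bash
# Sketch of the drive/host check with parallel arrays.
check_bu_drive() {
    local uuids=("1111-2222" "3333-4444")
    local hosts=("desktop1"  "laptop1")
    local this_host i
    this_host=$(hostname)

    for i in "${!uuids[@]}"; do
        # Is this listed backup drive currently attached?
        if lsblk -rn -o UUID | grep -qx "${uuids[$i]}"; then
            # If so, it had better be on its matching computer.
            [[ "$this_host" == "${hosts[$i]}" ]] || return 1
        fi
    done
    return 0    # right drive/host pair, or an unlisted drive
}

check_bu_drive || { echo "Error: this backup drive belongs to another computer." >&2; exit 1; }
```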

A special thanks to Joe Collins and Kris Occhipinti for sharing their work. They’ve inspired me and provided me with the tools to build upon.

Joe’s Backup Script

A few years ago, I found a backup script by Joe Collins at EzeeLinux that has served me quite well. Over the years, I’ve made many modifications and added a few features to it, but it’s still basically Joe’s script, and I give him full credit for his work. His backup and restore functions have only gotten minor changes to fit my computing environment.

A couple of days ago, I took a closer look at Joe’s function to check the backup drive to see if it was ready for backups. I saw that it had several routines in it that could be functions in their own right, and at least one routine was something I had added. I ended up rewriting the function so that it called four other functions.

One of his drive-test routines checked to see if the mount-point directory existed, surmising that if it did, the backup drive was mounted. I can see that this works with a desktop environment, where the system detects the drive, creates the mount point, mounts the drive, and opens the file manager. That is what happens on my systems that use a desktop environment. However, I have several systems that run a window manager on a minimal installation of Debian. These systems do not automatically detect and mount a USB drive when it’s plugged in. On these systems, I would have to do one of three things: open a file manager and click on the drive, manually mount the drive from the command line, or use a script.

I recently found a way to extract the correct device name for the backup drive using the lsblk command, assuming that the drive is properly labeled to work with Joe’s BU script. Using that, I was able to automate the mounting process without having to look at the lsblk output, find the right device name, and enter it with the read command. That got me to thinking that this could easily be applied to the BU script.
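
The extraction itself is essentially a one-liner with lsblk; something like this sketch, where the label (BU_Drive) and the mount point are placeholders for whatever the script expects:

```bash
# Find the device that carries the backup drive's partition label.
bu_dev=$(lsblk -prn -o NAME,LABEL | awk '$2 == "BU_Drive" {print $1}')

if [[ -n "$bu_dev" ]]; then
    sudo mkdir -p /media/BU_Drive       # create the mount point if needed
    sudo mount "$bu_dev" /media/BU_Drive
fi
```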

My new drive_test function calls other functions to run the checks on the drive, displaying error messages and exiting only if the prescribed conditions aren’t met. First, the check_usb function checks to see if a USB drive has been plugged in. Then, bu_drive_label checks the partition label of the drive. If the label is correct, mount_bu determines if the drive is mounted. If the drive is not mounted, as in the case of my minimal Debian systems, the function extracts the proper device name, creates the mount point (if necessary), and mounts the drive. Once the drive is mounted, an additional check (bu_drive_fs) is run to determine if the drive is formatted with the correct file system.

In the original script, the backup and restore functions ran the sync command to synchronize cached data, ensuring all backed-up data is written to the drive. This process can take a while, so I incorporated a routine to print dots until the command completes. Since it’s used in both functions, I made it a function of its own. Other than that change, and a few minor adjustments for my own needs, those functions are pretty much as Joe wrote them.
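
The helper is simple enough; a sketch of it:

```bash
# Run sync in the background and print dots until it finishes.
sync_with_dots() {
    sync &
    local pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        printf '.'
        sleep 1
    done
    wait "$pid"
    printf ' done\n'
}
```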

Joe’s original script used several if statements to check command line arguments. Early on, I combined them into an if-elif-else structure, which serves the same purpose but satisfies my coding discipline.

As I said, I’ve been using Joe’s script since he made it available, and with a few tweaks and modifications, it has served me well. I use it on a nearly daily basis. Thanks Joe, for the scripts and Linux videos you’ve made available. I’ve learned from you and been inspired.

System-info script

Whenever I do a new Linux installation on one of my computers, I have a script that gathers information about the hardware and the operating system and puts it into a file in my home directory. I first wrote it a few years ago, and it has evolved over the years in capability and sophistication.

The script started out just gathering the data and putting it into variables. There were a few conditional statements to handle things like whether there was a wireless card. Eventually, I separated most of the related tasks into functions, which are called by the main part of the script.

I’ve had to add functions to the script to handle situations that occasionally arise. I found that on some systems, the hdparm output doesn’t include the form-factor information. I have the hard-drive information function check for that and call another function that lets me fill in the missing information.

On systems with Timeshift active, lsblk will often show /run/timeshift instead of /home, and I need to correct that. The script checks for this before the temporary file is written to my home directory and calls a function that allows me to change it.

I recently set up a laptop with an NVMe drive, and I had to find a way to include the pertinent information in the file. My original solution only accounted for one NVMe drive in the system, which fits my current needs, but I felt I should consider the possibility of multiple NVMe drives, so I modified the function to put the NVMe information into an array and then extract it for each device.
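
A sketch of the array approach, using lsblk for illustration (the real script may gather the data differently):

```bash
# Collect NVMe details into an array, one element per device, then
# extract the fields for each drive.
mapfile -t nvme_drives < <(lsblk -dn -o NAME,SERIAL,SIZE,MODEL | awk '$1 ~ /^nvme/')

for entry in "${nvme_drives[@]}"; do
    read -r name serial size model <<< "$entry"
    printf 'NVMe: /dev/%s  Model: %s  Serial: %s  Capacity: %s\n' \
        "$name" "$model" "$serial" "$size"
done
```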

Recently, I’ve modified the functions that extract data for SATA drives, NVMe drives, and laptop batteries. The modifications reduce the number of calls to the utilities by placing their output into a temporary file, an array, or a variable.

The script extracts and prints the following information:

  • Manufacturer, product name, version, and serial number.
  • CPU model, architecture, and number of threads.
  • Physical memory: type, amount installed, and maximum capacity.
  • Graphics chip (I don’t know about graphics cards; I only have onboard graphics).
  • Wired and wireless network adapters (manufacturer and model, interface name, MAC address)
  • Hard drive information including model number, serial number, capacity, and form factor.
  • NVMe information, including model number, serial number, and capacity.
  • Laptop battery name, manufacturer, model, serial number, and battery technology.