
Repo Snapshots

Back in September, I wrote several scripts to create daily, weekly, and monthly snapshots of my local repositories. I don’t remember for sure, but I think I was inspired by one or more videos demonstrating how to create backups with the tar command. The examples I saw were probably separate scripts for each time increment. I created my own scripts and am running them as cron jobs. That’s been working out very well.

This morning I got to thinking that I could probably combine the daily, weekly, and monthly jobs, along with the short script that syncs them to my Gitea server, into one script. I cut and pasted the pertinent lines from those scripts and cobbled them together into something workable. The script is meant to run as a daily cron job and use if statements to determine when each snapshot should run.

Later on, I took another look at the script and determined that I could improve upon it. I put the commands for each backup into their own functions so that I could use the test brackets to call the functions, thus eliminating the if statements. The first rendition of this script used the numeric representation of the day of the week to determine whether the weekly snapshot should be performed. Rather than test for 0 or 7 representing Sunday, I changed the day-of-week variable to hold the abbreviated day name (Sun) instead, as I did with my incremental backup scripts.

This new script replaced four scripts in my crontab. It is scheduled to run daily and uses conditional statements to determine which functions will run. The current schedule of backups will be maintained with the weekly snapshot running every Sunday and the monthly on the first of the month. I believe this will be more efficient and help to declutter my crontab.
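
In rough outline, the combined script looks something like this (the paths, archive names, and tar options here are simplified stand-ins rather than the actual script):

    #!/bin/bash
    # Simplified sketch of a combined daily/weekly/monthly snapshot script.
    SNAP_DIR="$HOME/snapshots"

    daily_snapshot()   { tar -czf "$SNAP_DIR/daily/repos-$(date +%F).tar.gz"   -C "$HOME" repos; }
    weekly_snapshot()  { tar -czf "$SNAP_DIR/weekly/repos-$(date +%F).tar.gz"  -C "$HOME" repos; }
    monthly_snapshot() { tar -czf "$SNAP_DIR/monthly/repos-$(date +%F).tar.gz" -C "$HOME" repos; }

    daily_snapshot                                    # every day
    [[ $(date +%a) == "Sun" ]] && weekly_snapshot     # Sundays only
    [[ $(date +%d) == "01" ]] && monthly_snapshot     # first of the month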

My Scripts for ISOs

I’ve never really been into distro-hopping, but I do occasionally put different distros in VMs or on actual hardware. I currently run Debian, Linux Mint, Linux Mint Debian Edition, MX Linux, and BunsenLabs on different machines. I keep current ISO files for these distributions as well as a few others that interest me.

I haven’t always checked the ISO files I download against the checksum files. It seemed like a hassle to do it, but now I always confirm that the files I download are genuine.

When I need to install a distro on a laptop or desktop, I generally need to write the ISO to a USB stick. Some distros, like Mint, have a utility for that, but I’ve found that it’s not always reliable. I’ve copied ISO files to a USB drive with Ventoy installed, but my experience with Ventoy has been rather disappointing. Sometimes it works, and sometimes it doesn’t.

The most consistent method I’ve found, particularly for Debian and Debian-based distributions, has been using dd to write the ISO to a USB drive. As everyone knows, dd is potentially dangerous.

To deal with these problems, I’ve written a couple of scripts to verify the ISO files and reduce the risks. They aren’t foolproof, but they’ve worked well for me.

My check-iso script displays the ISO and checksum files in a directory and prompts me to enter the appropriate files. I can type them in, but I usually highlight the file name with my mouse and middle-click to paste it at the prompt. I don’t know if that works in all terminals, but it works in Kitty and Terminator. The script then compares the two checksums and tells me whether they match.

When I download an ISO file from a distro’s website, I get the checksum and put it in a file whose name identifies the distro and the type of checksum, for example, distro-iso.sha256. If the site’s checksum file contains checksums for multiple versions, I’ll break that file down into individual files for each because my script reads the first field on the first line.
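
Stripped of the file listing and the cosmetics, the comparison itself amounts to something like this (the file names and prompts are examples, not the exact script):

    read -rp "ISO file: " iso_file
    read -rp "Checksum file: " sum_file

    computed=$(sha256sum "$iso_file" | awk '{print $1}')
    published=$(awk 'NR==1 {print $1}' "$sum_file")   # first field on the first line

    if [[ $computed == "$published" ]]; then
        echo "Checksums match."
    else
        echo "Checksums DO NOT match."
    fi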

The write-iso script lists the available ISO files and prompts me for one. Then it checks to see if a USB drive is attached and mounted, and lists the available removable media with their capacities. The user enters the appropriate device at the prompt and is then asked to confirm the choice, which must be explicitly answered with yes or no.
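
The core of it is roughly this (the ISO directory, prompts, and dd options are illustrative, not the exact script):

    ls ~/iso/*.iso
    read -rp "ISO to write: " iso
    lsblk -o NAME,SIZE,TRAN,MOUNTPOINT | grep usb     # list attached USB devices
    read -rp "Target device (e.g. sdb): " device
    read -rp "Write $iso to /dev/$device? (yes/no): " answer
    [[ $answer == "yes" ]] || { echo "Canceled."; exit 1; }
    sudo dd if="$iso" of="/dev/$device" bs=4M status=progress conv=fsync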

When it comes to scripts, like poems, they’re never finished. Most of the time, they’re abandoned when they’ve outlived their usefulness or I find something else that does the job better.

config-bak

A little over 4 years ago, I wrote a script to back up my configuration files in case I made a change to them that didn’t work out or accidentally deleted them. My first rendition of the script was quite linear and consisted of individual blocks of code for each file I was backing up, and it grew to be quite large. But it worked, and I used it in that form for a couple of years. Later on, I put each of those individual routines into functions. It was still quite linear. Recently, I reviewed the script and noticed that most of these functions were identical; the only variations were the configuration files themselves.

After taking a close look at the code, I determined that, with only a few exceptions, most of the files to be backed up were either in the root of my home directory or under the .config directory. I created functions to back up files for each case. There were still some exceptions, such as configurations that might have different names or locations depending on the version, desktop environment, or operating system. I wrote functions for those special cases. Now the script would call a more generic function and pass the path and file name to it, or one of the specific functions for the special cases.

Then I started seeing similarities in the special cases and figured that in most of them, I could call the generic functions with the particular parameters for each case. That left only a small handful of files that didn’t fit either generic case. I have a program whose configuration file is in a hidden folder in the root of the home directory and another file that’s a couple of levels down in my .local directory. For these special cases, I created another function that places the backup file directly in the backup folder without any directory name.
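
The general shape of those functions is something like this (the backup location and the file names are examples, not the actual script):

    BACKUP_DIR="$HOME/config-backup"

    backup_home() {        # files in the root of the home directory
        cp -u "$HOME/$1" "$BACKUP_DIR/"
    }

    backup_config() {      # files under ~/.config
        mkdir -p "$BACKUP_DIR/.config/$(dirname "$1")"
        cp -u "$HOME/.config/$1" "$BACKUP_DIR/.config/$1"
    }

    backup_flat() {        # odd locations; copy straight into the backup folder
        cp -u "$1" "$BACKUP_DIR/"
    }

    backup_home   .bashrc
    backup_config kitty/kitty.conf
    backup_flat   "$HOME/.local/share/someapp/settings.conf"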

Finally, there was the dump of my Cinnamon keybindings, which I keep in my .config folder. It’s not a configuration file, per se, but it’s important enough to keep a backup. It’s really the only “special case” file I currently have, so it has its own function to handle it. For the most part, it operates much the same as the other functions, but if the system is running the Cinnamon desktop environment and the keybinding dump doesn’t exist in the .config folder, it will create the dump file and copy it to the backup folder.
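
That function works roughly like this (the desktop check and the dump file name are my own illustration of the idea, not the exact code):

    backup_keybindings() {
        local dump="$HOME/.config/cinnamon-keybindings.dconf"
        if [[ $XDG_CURRENT_DESKTOP == *Cinnamon* ]]; then
            [[ -f $dump ]] || dconf dump /org/cinnamon/desktop/keybindings/ > "$dump"
            cp -u "$dump" "$BACKUP_DIR/"    # BACKUP_DIR as in the sketch above
        fi
    }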

Over time, I’ve improved the appearance of the script’s output. As the script backs up the new or changed files, it displays the pertinent path and the filename, preceded by an ASCII arrow (===>). It looks nice and it lets me know what’s been backed up.

Of course, there is a companion script to restore the configuration files from the local backup. Now that I’ve streamlined the backup script, I’m wondering if I can make the restoration script more modular, eliminating many of the individual functions. A cursory look at the restoration script seems to indicate that I can model it after the backup script. That’s a project for the near future.

Joe’s Backup Script

A few years ago, I found a backup script by Joe Collins at EzeeLinux that has served me quite well. Over the years, I’ve made many modifications and added a few features to it, but it’s still basically Joe’s script, and I give him full credit for his work. His backup and restore functions have only gotten minor changes to fit my computing environment.

A couple of days ago, I took a closer look at Joe’s function to check the backup drive to see if it was ready for backups. I saw that it had several routines in it that could be functions in their own right, and at least one routine was something I had added. I ended up rewriting the function so that it called four other functions.

One of his drive-test routines checked to see if the mount-point directory existed, surmising that if it did, the backup drive was mounted. I can see that this works with a desktop environment where the system would detect the drive, create the mount-point, mount the drive, and open the file manager. And this is what happens with my systems that use a desktop environment. However, I have several systems that use a window manager on a minimal installation of Debian. These systems do not automatically detect and mount a USB drive when it’s plugged in. On these systems, I would have to do one of three things: open a file manager and click on the drive, manually mount the drive from the command line, or use a script.

I recently found a way to extract the correct device name for the backup drive using the lsblk command, assuming that the drive is properly labeled to work with Joe’s BU script. Using that, I was able to automate the mounting process without having to look at the lsblk output, find the right device name, and enter it with the read command. That got me to thinking that this could easily be applied to the BU script.
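
Assuming the partition carries the label the BU script expects (I’ll call it BU_Drive here), the extraction boils down to a single line:

    bu_device=$(lsblk -nr -o NAME,LABEL | awk '$2 == "BU_Drive" {print "/dev/" $1}')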

My new drive_test function makes function calls to run the checks on the drive, and it displays error messages and exits only if the prescribed conditions aren’t met. First, the check_usb function checks to see if a USB drive has been plugged in. Then, bu_drive_label checks the partition label of the drive. If the label is correct, mount_bu determines if the drive is mounted. If the drive is not mounted, as in the case of my minimal Debian systems, the function extracts the proper device name, creates the mount-point (if necessary), and mounts the drive. Once the drive is mounted, an additional check (bu_drive_fs) is run to determine if the drive is formatted with the correct file system.
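
In outline, the rewritten function looks something like this (the mount point and the bodies of the check functions are simplified; in the real script each check prints its own error message and exits when its condition isn’t met):

    drive_test() {
        check_usb          # a USB drive is plugged in
        bu_drive_label     # the partition carries the expected label
        mount_bu           # already mounted, or mount it now
        bu_drive_fs        # formatted with the correct file system
    }

    mount_bu() {
        mountpoint -q /media/BU_Drive && return 0
        sudo mkdir -p /media/BU_Drive
        sudo mount "$bu_device" /media/BU_Drive    # device name found with lsblk, as above
    }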

In the original script, the backup and restore functions ran the sync command to synchronize cached data, ensuring that all backed-up data is written to the drive. This process can take a while, so I incorporated a routine to print dots until the command completes, and since it’s used in both functions, I made it a function of its own. Other than that change, and a few minor adjustments for my own needs, those functions are pretty much as Joe wrote them.
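
The dot-printing routine is essentially this (a sketch of the idea, not Joe’s code):

    wait_for_sync() {
        sync &                               # flush cached data in the background
        local pid=$!
        while kill -0 "$pid" 2>/dev/null; do # keep printing dots while sync runs
            printf "."
            sleep 1
        done
        printf " done\n"
    }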

Joe’s original script used several if statements to check command-line arguments. Early on, I combined them into an if-elif-else structure, which serves the same purpose but better suits my coding discipline.

As I said, I’ve been using Joe’s script since he made it available, and with a few tweaks and modifications, it has served me well. I use it on a nearly daily basis. Thanks, Joe, for the scripts and Linux videos you’ve made available. I’ve learned from you and been inspired.

System-info script

Whenever I do a new Linux installation on one of my computers, I have a script that gathers information about the hardware and the operating system and puts it into a file in my home directory. I first wrote it a few years ago, and it’s evolved over the years in capability and sophistication.

The script started out just gathering the data and putting it into variables. There were a few conditional statements to handle things like whether there was a wireless card or something like that. Eventually, I separated most of the related tasks into functions, and they’re called by the main part of the script.

I’ve had to add functions to the script to handle situations that occasionally arise. I found that on some systems, the hdparm output doesn’t include the form factor information. I have the hard-drive information function check for that and call another function that lets me fill in the missing information.

On systems which have Timeshift active, lsblk will often show /run/timeshift instead of /home, and I need to correct that. The script checks for this before the temporary file is written to my home directory and calls a function that allows me to change it.

I’ve recently set up a laptop with an NVMe drive, and I had to find a way to include the pertinent information in the file. My original solution only accounted for one NVMe drive in the system, which fits my current needs. Still, I felt that I needed to consider the possibility of having multiple NVMe drives, so I modified the function to put the NVMe information into an array and then extract it for each device.

Recently, I’ve modified the functions that extract data for SATA drives, NVMe drives, and laptop batteries. The modifications reduce the number of calls to the utilities by placing the output from the utilities into a temporary file, an array, or a variable.
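
For the NVMe drives, the idea is roughly this (the columns I pull here are just an example; the real function gathers more detail):

    # One lsblk call, cached in an array, then fields extracted per device.
    mapfile -t nvme_info < <(lsblk -dn -o NAME,SERIAL,SIZE | awk '$1 ~ /^nvme/')

    for line in "${nvme_info[@]}"; do
        read -r name serial size <<< "$line"
        printf "NVMe: /dev/%s  Serial: %s  Capacity: %s\n" "$name" "$serial" "$size"
    done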

The script extracts and prints the following information:

  • Manufacturer, product name, version, and serial number.
  • CPU model, architecture, and number of threads.
  • Physical memory, type, amount installed, maximum capacity.
  • Graphics chip (I don’t know about graphics cards; I only have onboard graphics)
  • Wired and wireless network adapters (manufacturer and model, interface name, MAC address)
  • Hard drive information including model number, serial number, capacity, and form factor.
  • NVMe information, including model number, serial number, and capacity.
  • Laptop battery name, manufacturer, model, serial number, and battery technology.

BGMMS Script Update

Way back in September, I created a script to check GitHub repositories for the latest version of some utility programs I frequently use. At the time I called it BGMMS. The script has worked quite well, and I’ve made several improvements to it. I used some of the script’s code to modify the program installation scripts to query the appropriate GitHub page and compare the latest available version with what I have installed. Recently, I wrote a new set of scripts to install, update, or remove the programs.

I renamed and modified the script to accommodate a potentially longer version number and changed the format of the output to three columns. Then I changed the script to highlight the current version number in yellow if an update is available.
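
The highlighting is just an ANSI escape around the installed version when it differs from the latest release (a simplified example of the idea, not the exact output code):

    yellow=$'\e[33m'; reset=$'\e[0m'
    if [[ $installed != "$latest" ]]; then
        printf "%-10s %s%-12s%s %-12s\n" "$name" "$yellow" "$installed" "$reset" "$latest"
    else
        printf "%-10s %-12s %-12s\n" "$name" "$installed" "$latest"
    fi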

Whilst running the script, I noticed that one of the programs showed a release candidate as the latest version, and I realized that there are cases where I might want to wait until the new release is ready. To handle that situation, I added code to the installation scripts to confirm whether to install the new version.

iur-marktext v0.1.1 (30 Jan 2022)
Installs, updates, or removes the Mark Text markdown editor.
Current version of Mark Text is 0.16.3, updating to 0.17.0-rc.1...
Are you sure you want to install Mark Text 0.17.0-rc.1?
1) Yes
2) No
Choice: 2
Installation of Mark Text 0.17.0-rc.1 canceled.

BGMMS Script

I have a small network of PCs and laptops and there are a handful of utilities that I usually install on nearly every system:

  • Bat – a cat clone that I use to read scripts and source code.
  • Glow – a utility to read markdown files.
  • Micro – a very nice command-line text editor.
  • Marktext – a markdown editor.
  • Stacer – a system optimizer and monitoring tool.

All of these have their repositories on GitHub and, in the past, I’ve had to periodically get on GitHub and check for updates. Most of them don’t update very often, but it was inconvenient to have to check that way.

A few weeks ago, I wrote a script to automate the process a bit. It goes to each package’s repository on GitHub and downloads the releases page to a temporary file. Then it uses awk to find the most current version and saves it to a variable. It then compares that to the installed version and lets me know if there’s an update available.

In my initial version of the script, I had a function for each package, and that worked well enough. But I knew I could improve upon it, and I started thinking about it. Other than the package’s URL and the code to obtain the installed version number, the rest of the process was identical.

I combined the five functions into a single function to which I passed the package name from an array using a for loop. The function uses a case statement to determine the appropriate URL and obtain the version number of the package. I eliminated about 60 lines of repetitive code.
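
The consolidated function takes roughly this shape (for a self-contained sketch I’m querying the GitHub API here rather than scraping the releases page, and the version parsing for each package is only approximate):

    packages=(bat glow micro marktext stacer)

    check_version() {
        local pkg=$1 repo installed latest
        case $pkg in
            bat)   repo="sharkdp/bat";        installed=$(bat --version | awk '{print $2}') ;;
            glow)  repo="charmbracelet/glow"; installed=$(glow --version | awk '{print $3}') ;;
            micro) repo="zyedidia/micro";     installed=$(micro --version | awk 'NR==1 {print $2}') ;;
            *)     return ;;                  # remaining packages follow the same pattern
        esac
        latest=$(wget -qO- "https://api.github.com/repos/$repo/releases/latest" |
                 awk -F'"' '/"tag_name":/ {print $4; exit}')
        printf "%s\n  Latest version: %s    Installed version: %s\n" "${pkg^}" "$latest" "$installed"
    }

    for pkg in "${packages[@]}"; do
        check_version "$pkg"
    done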

bgmms v0.3.1 (10 Sep 2021)
Checks for the newest version number of selected utility programs.

Bat
Latest version: 0.18.3    Installed version: 0.18.3
Glow
Latest version: 1.4.1    Installed version: 1.4.1
Micro
Latest version: 2.0.10    Installed version: 2.0.10
Marktext
Latest version: 0.16.3    Installed version: 0.16.3
Stacer
Not installed

I have scripts to install and update these packages and I modified those scripts to query the appropriate GitHub repositories to determine the newest version and to update the package if necessary. It’s so much better than going to GitHub, checking for updates, then passing the new version to the script as an argument.

ipinfo.sh

A while back I wrote a little script to display some basic IP information: public IP address, private IP address, default gateway, and DNS addresses. I had used the nmcli command piped through grep to display all of the IP4 information, which also included route information that didn’t really interest me.

I didn’t particularly like the way the output looked, so I took another look at the script to see if I could improve it. I did a little research and found a couple of different ways to get the local IP address and the default gateway. I still used nmcli to get the DNS addresses but found a way to use awk to get just the addresses and display them nicely. The output looks so much better than before.

Just before I started on this little project I’d been watching a YouTube video covering some of the basics of using awk and I wondered if I could use awk to extract just the data I needed. I was able to use awk to pull and display all of the IP data except the public IP. Using awk, I didn’t need to use grep at all.
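
The gist of it is something like this (the public-IP service is just an example, and the interface detection is simplified):

    public_ip=$(curl -s https://ifconfig.me)
    local_ip=$(ip -4 -o addr show scope global | awk '{print $4; exit}')
    gateway=$(ip route | awk '/^default/ {print $3; exit}')
    dns=$(nmcli dev show | awk '/IP4.DNS/ {print $2}')

    printf "Public IP:\n\t%s\n" "$public_ip"
    printf "Local IP:\n\t%s\n" "$local_ip"
    printf "Default Gateway:\n\t%s\n" "$gateway"
    printf "DNS Servers:\n"
    printf "\t%s\n" $dns        # unquoted on purpose, one line per server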

$ ipinfo.sh

IP Information
==============
Public IP:
	xx.xx.xx.xxx
Local IP:
	192.168.0.15/24
Default Gateway:
	192.168.0.1
DNS Servers:
	84.200.69.80
	84.200.70.40
	208.67.222.222

The script is on my GitHub Bash scripts repository.

FnLoC Updates

In the last couple of days, I’ve done some work on the FnLoC project and made several commits to my GitHub repository.

First of all, it dawned on me why FnLoC wouldn’t work properly with a lot of my source code from my Computer Science coursework 20 years ago. I had written much of that code using DOS and Windows text editors and IDEs. These editors would leave carriage return characters throughout the file. These characters wreaked havoc on my FnLoC program, which was developed to work with files written in a Linux or Unix environment.

To solve the problem, I wrote a Bash script (dos2linux) to clean out the carriage returns. The script uses sed to remove the carriage return characters and creates a backup of the original file.


    sed -i.bak -e 's/\r//g' source-file.c

There are some other methods, such as the tr command, but this worked well for me and was easily incorporated into a script.
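
For instance, the tr approach would look something like this:

    tr -d '\r' < source-file.c > source-file-clean.c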

After going through and cleaning up all of my old CS source code files, I copied the code for my original LOC counting program, then updated it so it was consistent with the line-parsing process of FnLoC. I also added a couple of functions to print the results and to display syntax help. When I finished, I named it LLoC for “Logical Lines of Code.” While I was at it, I tidied up the FnLoC code a bit, just some minor formatting things I’d missed previously.

I also compiled the new code on my Windows 7 machine and updated the installation and removal batch files before placing them in a zipped archive.

This morning I modified the loc2file Bash script to incorporate LLoC by having it process header files. While going through my old code, I found a number of C++ files with .cc and .hh file extensions, so I added those extensions to the list of extensions to be processed.

Then I updated the deb package and updated the documentation. When I was satisfied I had everything up to date, I updated the Git repository and pushed the changes to my GitHub repository.

Scripts for Conky Scripts

I’ve been using Conky as a system monitor on my desktops for a number of years. At one time I had an extensive script running on them. Then changes to Conky in the Ubuntu 16.04 repositories broke parts of my scripts, so I opted for simpler scripts and finally decided upon a simple script that I could use on both desktops and laptops.

Lately, I’ve been using sed in some of my scripts to insert headers and licenses into my scripts and source code. I even came up with scripts that used templates to create the basic framework for new scripts and code.

Today I got to thinking about applying this to my conky scripts. Back when I had more complex conky scripts, I wanted to create a script to install and configure conky on my systems, but the amount of information about devices that I needed was a bit overwhelming.

But now my basic conky script is much simpler, as there are only three devices whose names vary: the battery (for laptops) and the wired and wireless network adapters. I figured that I could easily get the device names in a script and use the sed command to put them into the appropriate places in the script. Then it was a simple matter to create a .conkyrc file.
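
The gist of that script is something like this (the placeholder names and the template file are my own convention for the sketch, not the actual script):

    # Find the device names that vary from machine to machine...
    wired=$(ip -o link show | awk -F': ' '$2 ~ /^(en|eth)/ {print $2; exit}')
    wifi=$(ip -o link show  | awk -F': ' '$2 ~ /^wl/ {print $2; exit}')
    batt=$(ls /sys/class/power_supply/ | grep -m1 '^BAT')

    # ...and substitute them into the template to create the .conkyrc.
    sed -e "s/WIRED/$wired/g" \
        -e "s/WIFI/$wifi/g" \
        -e "s/BATT/$batt/g" conky-template > "$HOME/.conkyrc"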

Once I got the script written and tested, it occurred to me that I could easily add the commands to install Conky, add a .desktop file to the autostart folder, and then start Conky in the background.