
config-bak

A little over 4 years ago, I wrote a script to back up my configuration files in case I made a change that didn’t work out or accidentally deleted one of them. My first rendition of the script was quite linear and consisted of individual blocks of code for each file I was backing up, so it grew to be quite large. But it worked, and I used it in that form for a couple of years. Later on, I put each of those individual routines into functions, though it was still quite linear. Recently, I reviewed the script and noticed that most of these functions were identical; the only variations were the configuration files themselves.

After taking a close look at the code, I determined that, with only a few exceptions, most of the files to be backed up were either in the root of my home directory or under the .config directory. I created functions to back up files for each case. There were still some exceptions, such as configurations that might have different names or locations depending on the version, desktop environment, or operating system, and I wrote functions for those special cases. Now the script would call one of the more generic functions, passing the path and file name to it, or one of the specific functions for the special cases.

Then I started seeing similarities among the special cases and figured that in most of them, I could use the generic functions with the particular parameters for each case. That left only a small handful of files that didn’t fit either generic case. I have a program whose configuration file is in a hidden folder in the root of the home directory, and another file that’s a couple of levels down in my .local directory. For these special cases, I created another function that places the backup file directly in the backup folder without any directory name.
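The structure described above might look something like this sketch (my own reconstruction, not the script’s actual code; the function names, backup location, and example paths are all assumptions):

```shell
BACKUP_DIR="$HOME/.config-bak"   # hypothetical backup location

# Generic case 1: a file kept in the root of the home directory.
bu_home() {        # usage: bu_home .bashrc
    cp -p "$HOME/$1" "$BACKUP_DIR/$1"
}

# Generic case 2: a file kept somewhere under ~/.config, preserving its subpath.
bu_config() {      # usage: bu_config nvim/init.vim
    mkdir -p "$BACKUP_DIR/.config/$(dirname "$1")"
    cp -p "$HOME/.config/$1" "$BACKUP_DIR/.config/$1"
}

# Special case: drop the file directly into the backup folder,
# discarding its directory path.
bu_flat() {        # usage: bu_flat .local/share/someapp/settings.conf
    cp -p "$HOME/$1" "$BACKUP_DIR/$(basename "$1")"
}
```

The main body of such a script then reduces to a list of one-line function calls, one per file, which is what makes the approach easy to maintain.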

Finally, there was the dump of my Cinnamon keybindings, which I keep in my .config folder. It’s not a configuration file, per se, but it’s important enough to keep a backup. It’s really the only “special case” file I currently have, so it has its own function to handle it. For the most part, it operates much the same as the other functions, but if the system is running the Cinnamon desktop environment and the keybinding dump doesn’t exist in the .config folder, it will create the dump file and copy it to the backup folder.
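A sketch of that keybindings special case might look like this; the dump filename and desktop-detection check are my assumptions, though the dconf path is where Cinnamon actually stores its keybindings:

```shell
# Create the Cinnamon keybindings dump if it's missing, then back it up.
# BACKUP_DIR is assumed to be set elsewhere in the script.
bu_keybindings() {
    local dump="$HOME/.config/keybindings-backup.dconf"   # hypothetical name
    if [ "$XDG_CURRENT_DESKTOP" = "X-Cinnamon" ] && [ ! -f "$dump" ]; then
        dconf dump /org/cinnamon/desktop/keybindings/ > "$dump"
    fi
    [ -f "$dump" ] && cp -p "$dump" "$BACKUP_DIR/"
}
```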

Over time, I’ve improved the appearance of the script’s output. As the script backs up the new or changed files, it displays the pertinent path and the filename, preceded by an ASCII arrow (===>). It looks nice and it lets me know what’s been backed up.

Of course, there is a companion script to restore the configuration files from the local backup. Now that I’ve streamlined the backup script, I’m wondering if I can make the restoration script more modular, eliminating many of its individual functions. A cursory look at the restoration script suggests that I can model it after the backup script. That’s a project for the near future.

BU Revisited

I’ve recently made a couple of significant changes to Joe Collins’ BU script. With as many modifications as I’ve made, it hardly resembles Joe’s original script. The backup, restore, and help functions are still mostly as he wrote them, although I have made some changes to them.

One change I made since my April post, Joe’s Backup Script, was to convert the if-elif-else construct that checks the command line arguments to a case statement. The default action with no arguments is still to run a backup, but now I’ve allowed it to be entered explicitly as an option.
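The dispatch might be sketched like this; the option and function names are my guesses, not necessarily what the BU script uses:

```shell
# Dispatch on the first command line argument; with no argument,
# fall through to the default backup action.
run_bu() {
    case "${1:-backup}" in
        backup)  do_backup ;;
        restore) do_restore ;;
        help)    show_help ;;
        *)       echo "Unknown option: $1" >&2; return 1 ;;
    esac
}
```

The `${1:-backup}` expansion is what keeps the no-argument default while also accepting `backup` explicitly.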

Most recently, I discovered a video and an accompanying script by Kris Occhipini that demonstrated a method of checking for a removable drive’s UUID and mounting it. It inspired me to create a function that I could incorporate into the BU script to determine whether a particular backup drive was connected to the appropriate computer, thus preventing me from inadvertently backing up a system to the wrong drive and potentially running out of space.

The function holds the backup drive UUIDs and the computer hostnames in variables and checks for the appropriate UUID. If it detects one of them, it compares the hostname with the hostname associated with that drive. If I have the wrong drive/hostname combination, the function returns a failure code and the script exits with an appropriate error message. Should one of these drives be connected to one of my test computers, it raises the error condition, preventing an extraneous backup on that drive. Should an unlisted backup drive be attached, the backup script will still continue normally. This allows me to use the script to back up my home directory on my test machines.

I initially assigned the UUIDs and the target hostnames to individual variables, and later decided that using arrays to hold this information would be a better way to go. That worked well, and the function looks nicer. It will probably be easier to maintain as long as I keep each UUID associated with the proper hostname.
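The array-based version of such a check might be sketched like this, using a bash associative array to keep each UUID paired with its hostname (the UUIDs and hostnames here are placeholders; with no arguments it scans the UUIDs lsblk reports, and arguments override that, which is handy for testing):

```shell
# Map each backup drive's UUID to the hostname it belongs to.
declare -A BU_HOSTS=(
    ["1111-2222"]="desktop1"
    ["3333-4444"]="laptop1"
)

check_bu_drive() {
    local host uuid
    host=$(hostname)
    for uuid in ${*:-$(lsblk -rno UUID)}; do
        if [ -n "${BU_HOSTS[$uuid]}" ] && [ "${BU_HOSTS[$uuid]}" != "$host" ]; then
            echo "Drive $uuid belongs to ${BU_HOSTS[$uuid]}, not $host." >&2
            return 1
        fi
    done
    return 0   # unlisted drives are allowed to back up normally
}
```

Keeping the UUID and hostname on the same line of the array declaration is what makes the pairing easy to maintain.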

A special thanks to Joe Collins and Kris Occhipini for sharing their work. They’ve inspired me and provided me with the tools to build upon.

Joe’s Backup Script

A few years ago, I found a backup script by Joe Collins at EzeeLinux that has served me quite well. Over the years, I’ve made many modifications and added a few features to it, but it’s still basically Joe’s script, and I give him full credit for his work. His backup and restore functions have only gotten minor changes to fit my computing environment.

A couple of days ago, I took a closer look at Joe’s function to check the backup drive to see if it was ready for backups. I saw that it had several routines in it that could be functions in their own right, and at least one routine was something I had added. I ended up rewriting the function so that it called four other functions.

One of his drive-test routines checked to see if the mount-point directory existed, surmising that if it did, the backup drive was mounted. I can see that this works with a desktop environment, where the system detects the drive, creates the mount-point, mounts the drive, and opens the file manager; this is what happens on my systems that use a desktop environment. However, I have several systems that use a window manager on a minimal installation of Debian. These systems do not automatically detect and mount a USB drive when it’s plugged in. On these systems, I would have to do one of three things: open a file manager and click on the drive, manually mount the drive from the command line, or use a script.

I recently found a way to extract the correct device name for the backup drive using the lsblk command, assuming that the drive is properly labeled to work with Joe’s BU script. Using that, I was able to automate the mounting process without having to look at the lsblk output, find the right device name, and enter it with the read command. That got me thinking that this could easily be applied to the BU script.
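One way to do that label-to-device lookup with lsblk is sketched below; the label "BU_Drive" is an example, not necessarily the label Joe’s script expects, and the PATH output column requires a reasonably recent util-linux:

```shell
# Print the device path (e.g. /dev/sdb1) of the partition whose
# filesystem label matches the first argument; print nothing if absent.
find_bu_device() {   # usage: find_bu_device LABEL
    lsblk -rno PATH,LABEL | awk -v l="$1" '$2 == l { print $1; exit }'
}

# e.g.:  sudo mount "$(find_bu_device BU_Drive)" /media/bu
```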

My new drive_test function calls other functions to run the checks on the drive, displaying error messages and exiting only if the prescribed conditions aren’t met. First, the check_usb function checks to see if a USB drive has been plugged in. Then, bu_drive_label checks the partition label of the drive. If the label is correct, mount_bu determines if the drive is mounted. If the drive is not mounted, as in the case of my minimal Debian systems, the function extracts the proper device name, creates the mount-point (if necessary), and mounts the drive. Once the drive is mounted, an additional check (bu_drive_fs) is run to determine if the drive is formatted with the correct file system.
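The flow described above can be sketched as a skeleton like this; the four helper function names come from the post, each is assumed to return nonzero on failure, and the error messages are my own:

```shell
# Run the four drive checks in order, stopping at the first failure.
drive_test() {
    check_usb      || { echo "No USB drive detected." >&2;            return 1; }
    bu_drive_label || { echo "Wrong partition label." >&2;            return 1; }
    mount_bu       || { echo "Could not mount the backup drive." >&2; return 1; }
    bu_drive_fs    || { echo "Wrong file system on the drive." >&2;   return 1; }
}
```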

In the original script, the backup and restore functions ran the sync command to synchronize cached data, ensuring all backed-up data is saved to the drive. This process can take a while, so I wrote a routine to print dots until the command completes, and since it’s used in both functions, I made it a function of its own. Other than that change, and a few minor adjustments for my own needs, those functions are pretty much as Joe wrote them.
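A sketch of that shared helper, assuming the common background-job idiom (my reconstruction, not the script’s actual code): run sync in the background and print a dot each second until it finishes.

```shell
sync_with_dots() {
    sync &                            # flush cached writes in the background
    local pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        printf '.'                    # one dot per second while sync runs
        sleep 1
    done
    wait "$pid"
    echo " done."
}
```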

Joe’s original script used several if-statements to check command line arguments. Early on, I combined them into an if-elif-else structure, which serves the same purpose but better satisfies my coding discipline.

As I said, I’ve been using Joe’s script since he made it available, and with a few tweaks and modifications, it has served me well. I use it on a nearly daily basis. Thanks, Joe, for the scripts and Linux videos you’ve made available. I’ve learned from you and been inspired.

Filled my root

I had stayed up much later than I should have, working on some of my software installation scripts. I synced my data to my various computers, then decided to run a backup since I hadn’t done one in a while. Just before I launched the backup, I saw that my backup drive wasn’t powered on. After turning it on, I waited for the beep before launching the backup. I should have waited for the icon to show up on the desktop and for the drive to open up in the file manager.

I thought a lot of files were being backed up, considering that I use rsync, which only copies the files that have changed. Soon I was seeing an error message that a file that should have been backed up in the current session couldn’t be, because I was out of disk space. Oops, I had filled my root partition. My system was now locked up. I rebooted and found that I was not able to log in.

I went to a laptop and I was able to log into my main computer via SSH. I ran `df -h` and confirmed that 100% of my root partition had been used. I went to my folder under `/media`. I made sure that the backup drive was unmounted and ran `rm -rf` on the directory that should have been the mount point for the backup drive. Then I ran `df -h` again and saw that the disk usage of my root partition was back at the usual 30%.

I powered up the backup drive again, rebooted the main system, and I was able to log in. I confirmed that the root partition was still only 30% filled. I opened up the backup drive in the file manager and saw that everything had been deleted from it. I don’t know why that happened but at least I was able to access it. I launched the backup script and monitored it for a few minutes. It seemed to be running properly so I went to bed. Since it was going to be a full backup of the /etc and /home directories, I saw no point in watching it.

After getting up in the morning, I checked the backup and saw that it had completed successfully, with about 400 gigabytes of data backed up. I ran `df -h` and noted that the disk usage for the backup drive had changed from 1% before the backup to 41% after. (It’s a one terabyte drive.)

This had happened to me back in February, when the backup drive powered off during a power failure and I’d neglected to turn it back on after the power had been restored. The difference was that, that time, my backup hadn’t been lost.

Note to self: After powering up the system after a power outage, be sure to power on all attached devices and make sure that drives successfully mount.
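Beyond the note to self, a guard along these lines at the top of the backup script would catch the problem automatically: refuse to run unless the destination is really a mount point. The path shown is an example, not my actual mount point.

```shell
# Return failure (so the caller can abort) if the given path is not
# an active mount point, e.g. because the drive never mounted.
require_mounted() {   # usage: require_mounted /media/$USER/BU_Drive
    if ! mountpoint -q "$1"; then
        echo "Backup drive is not mounted at $1 -- aborting." >&2
        return 1
    fi
}
```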

Root Full

There was a power outage in my neighborhood last night. When the power was restored, I brought up my main PC and caught up on some things. Before retiring for the night, I synchronized the important stuff with a couple of other systems and then ran my backup. When I started the backup program, I noticed that my external drive was not on, so I turned it on and ran the backup. I noticed that a lot of files I hadn’t accessed recently were being included, which I thought was unusual. Then I got a notification that my root partition was full. What was going on? I decided I’d deal with it in the morning and powered the system down.

This morning, it took me a while to get to a TTY prompt, and I tried to figure out what was going on. I was eventually able to get to a graphical login screen, but I was unable to log in.

I accessed the computer via SSH from my LMDE machine and did some research on the problem. Most of what I found didn’t seem to help. Finally, I found a set of parameters for the du command that clearly showed what had taken up all the space in my root partition.

sudo du -hx --max-depth 1 /

Apparently, the external backup drive hadn’t been mounted, so the backup program began backing up directly to the /media directory where it usually mounts, which quickly filled the root partition. That explained why files I hadn’t accessed recently were being backed up. Once I deleted the backup files from the root partition, I was able to log in directly, and I rebooted the system. It booted normally, and I ran a backup to test it; it worked as it should. My Conky display now shows 72% of my root partition free, as it should.

At first, I was thinking I might have to find a way to resize my root and swap partitions, or that I might have to reinstall completely. I’m glad I avoided having to do that.