Repo Snapshots

Back in September, I wrote several scripts to create daily, weekly, and monthly snapshots of my local repositories. I don’t remember for sure, but I think I was inspired by one or more videos demonstrating how to create backups with the tar command. The examples I saw were probably separate scripts for each time increment. I wrote my own scripts and have been running them as cron jobs. That’s been working out very well.
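
The core of each of those scripts is a single tar command. A minimal sketch of the daily version, assuming a hypothetical ~/repos source and ~/snapshots destination:

    #!/usr/bin/env bash
    # Daily snapshot: a dated, compressed tar archive of the repos directory.
    # Source and destination paths are examples.
    dest="$HOME/snapshots"
    tar -czf "$dest/repos-daily-$(date +%y%m%d).tar.gz" -C "$HOME" repos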

This morning I got to thinking that I could probably combine the daily, weekly, and monthly jobs, along with the short script that syncs the snapshots to my Gitea server, into a single script. I cut and pasted the pertinent lines from the existing scripts and cobbled them together into something workable. The script is meant to run as a daily cron job and uses if statements to determine when each snapshot should run.

Later on, I took another look at the script and determined that I could improve upon it. I moved the commands for each backup into their own functions so I could use the test brackets to call the functions, thus eliminating the if statements. The first rendition of this script used the numeric representation of the days of the week to determine whether the weekly snapshot should be performed. Rather than test for 0 or 7 representing Sunday, I changed the day-of-week variable to hold the abbreviated day name (Sun) instead, like I did with my incremental backup scripts.
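
A minimal sketch of that structure, with hypothetical function names, paths, and a made-up sync target:

    #!/usr/bin/env bash
    # Combined snapshot script, meant to run once a day from cron.
    # All paths and the sync target are illustrative placeholders.
    dest="$HOME/snapshots"
    dow=$(date +%a)   # abbreviated day of week, e.g. Sun
    dom=$(date +%d)   # day of month, 01-31

    snapshot() {   # $1 = daily|weekly|monthly
        tar -czf "$dest/repos-$1-$(date +%y%m%d).tar.gz" -C "$HOME" repos
    }

    sync_snapshots() {
        rsync -a "$dest/" user@gitea-server:snapshots/   # hypothetical target
    }

    # Test brackets call the functions, replacing the earlier if statements.
    snapshot daily
    [[ $dow == Sun ]] && snapshot weekly
    [[ $dom == 01 ]] && snapshot monthly
    sync_snapshots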

This new script replaced four scripts in my crontab. It is scheduled to run daily and uses conditional tests to determine which functions will run. The current schedule of backups is maintained, with the weekly snapshot running every Sunday and the monthly on the first of the month. I believe this will be more efficient and help declutter my crontab.
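
The crontab then needs just one line in place of the original four; something like this, with the time and script path assumed:

    # Run the combined snapshot script every day at 5:00 a.m.
    # (Time and path are examples.)
    0 5 * * * /home/user/bin/repo-snapshots.sh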

Penultimate Day 2023

I guess I was busy this year. I messed around with bash scripts a lot: writing them, modifying them, and even abandoning a few.

When I first changed my GitHub access to use SSH, I couldn’t get it working right, so I put it aside for a while, and eventually set up a local Gitea server (October 2022). This past October, I found some information that finally made GitHub usable for me again. Since my scripts repository hadn’t been touched in well over a year, I deleted all of it, and pushed some of my current scripts to it. I’m not putting all of my bash scripts on GitHub, only some that I think might be useful to others. A lot of my scripts are specific to my personal computing environment.
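
For anyone in the same spot, the switch mostly comes down to pointing the remote at GitHub’s SSH URL and testing the connection; a sketch with a hypothetical repository name:

    # Confirm SSH authentication to GitHub works (prints a greeting on success).
    ssh -T git@github.com

    # Switch an existing clone from HTTPS to SSH (repo name is an example).
    git remote set-url origin git@github.com:username/scripts.git
    git remote -v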

In the past few months, I’ve been messing around with tar as a means of archiving my financial records and my scripting projects. Now I have several incremental backups running every day. I was kind of surprised at how easy tar is to work with, and setting up the incremental backups was relatively painless.
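
The piece that makes the incremental backups painless is GNU tar’s --listed-incremental option, which tracks file state in a metadata (.snar) file. A minimal sketch with example names and paths:

    # First run: no metadata file exists yet, so tar makes a full backup
    # and records file state in records.snar.
    tar --create --gzip --listed-incremental=records.snar \
        --file=financial-full.tar.gz -C "$HOME" financial

    # Subsequent runs with the same metadata file archive only the changes.
    tar --create --gzip --listed-incremental=records.snar \
        --file=financial-incr-$(date +%y%m%d).tar.gz -C "$HOME" financial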

Speaking of backups, I did a lot of work on my backup script, which is based on Joe Collins’ script. I set it up to mount the USB backup drive if it’s not automatically mounted. I needed that capability on my Debian i3 machines until I found a utility that would automatically mount USB drives. For a few machines where I take regular snapshots, I have it set up to recognize whether the correct backup drive has been attached. I also replaced the original nested if statements with case statements for better efficiency.
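
A rough sketch of the mount-and-verify logic, with the mount point and drive label as made-up placeholders:

    #!/usr/bin/env bash
    # Mount the USB backup drive if needed, then check that it's the
    # expected drive. Mount point and label are illustrative; mounting by
    # path assumes a matching fstab entry.
    mnt="/media/backup"

    mountpoint -q "$mnt" || mount "$mnt" || { echo "mount failed" >&2; exit 1; }

    # A case statement replaces the original nested ifs.
    case $(lsblk -no LABEL "$(findmnt -rn -o SOURCE "$mnt")") in
        BACKUP01) : ;;                                        # correct drive
        *)        echo "wrong or unlabeled drive" >&2; exit 1 ;;
    esac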

Another small project of mine was to write a couple of scripts to work with downloaded ISO files. When I download an ISO from a distro’s website, I also download its SHA checksum. My verify-iso script verifies that the checksums match, and the write-iso script writes the ISO to a thumb drive using the dd command. Both scripts list the available ISO files, and the write script also lists the removable media that can be written to.
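
Stripped of the file-selection menus, the working parts of the two scripts are a checksum check and a dd write; a sketch with example file and device names:

    # verify-iso core: sha256sum -c reads "checksum  filename" lines from
    # the downloaded checksum file and reports OK or FAILED for each.
    sha256sum -c SHA256SUMS --ignore-missing

    # write-iso core: list removable candidates (RM column is 1 for
    # removable drives), then write the image. /dev/sdX is a placeholder;
    # writing to the wrong device destroys its contents.
    lsblk -o NAME,SIZE,RM,TYPE,MODEL
    sudo dd if=example.iso of=/dev/sdX bs=4M status=progress conv=fsync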

I’ve been expanding my use of i3 on Debian. I have several laptops and a couple of PCs running it on Debian 11, which I plan to upgrade to Bookworm. On the Bullseye systems, I originally used Bumblebee Status as my status bar, but I’ve since replaced it with Polybar. I have one laptop with an external monitor attached so I can play around with i3 on multiple monitors.

My son bought my wife a new computer for Christmas, and I had the pleasure of setting it up. It wasn’t nearly as painful as it used to be back when I built them myself. Even copying her files over went rather well. Her Windows 10 computer had been acting up for a while; it was slow and would often lose its network connection, along with the connections for everything else on that switch. A couple of months ago, I started setting up Linux Mint on one of my better machines, getting ready for the day her Lenovo finally died. The new machine is working well for her. It’s running Windows 11 Home, which is adequate for her needs and works well in my network environment.

Messing with backups

Lately, I’ve been exploring the tar command and tinkering with backups. I don’t know why I never looked into it before. I’ve been using rsync to create snapshots of my home directory on my “production” machines. While technically not backups, these snapshots have been useful. I’ve also used rsync to copy certain directories to computers across my network. Over the years, I’ve also written scripts that use zip to create compressed archives of my script directory along with some other directories.
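
The rsync snapshots themselves boil down to a one-liner; a sketch with an example destination:

    # Mirror the home directory to a snapshot directory; --delete removes
    # files from the copy that no longer exist at the source, and the
    # trailing slashes keep rsync from nesting an extra directory level.
    rsync -a --delete "$HOME/" "/mnt/snapshots/$(hostname)/"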

About a month ago I created a few scripts to make daily, weekly, and monthly snapshots of my local repositories using tar. That’s been working well, and it prompted me to look into setting up incremental and differential backups with tar. I found some articles and YouTube videos on the subject to get familiar with the concepts. It wasn’t until I started actually experimenting that it began to gel, and I cobbled together a couple of rudimentary backup scripts. Soon I was able to flesh them out and write working scripts for incremental and differential backups.
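
The difference between the two styles comes down to how tar’s metadata file is handled; a hedged sketch with example names:

    # Incremental: keep updating one metadata file, so each run captures
    # only the changes since the previous run.
    tar --create --gzip --listed-incremental=repos.snar \
        --file=repos-incr.tar.gz -C "$HOME" repos

    # Differential: work on a throwaway copy of the full backup's metadata,
    # so each run captures everything changed since the full backup.
    cp repos-full.snar repos-work.snar
    tar --create --gzip --listed-incremental=repos-work.snar \
        --file=repos-diff.tar.gz -C "$HOME" repos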

I created scripts to make incremental backups of my two main repository directories. One contains a local copy of the public repositories I have on GitHub, and the other is my private repository that I store on a local server. As of this writing I’ve only done the initial full backup of the repositories, so it will take a while to see how well it works, and I’ll probably still have to deal with a few bugs. Within hours of doing the first backups, I found a couple of minor bugs, which I fixed straight away. They didn’t affect the operation; they were just minor cosmetic issues in how the archive files were named.

I have the scripts set up to append the archive name with a six-digit date (yymmdd) followed by the numeric day of the week (0-6). A full backup is done on Sunday (day 0) and incrementals are done over the next six days. On Sunday the metadata file is renamed using the date of the previous Sunday, and a new metadata file is created for the coming week. It will be at least a couple of weeks before I know it’s working as expected, but I’m confident I’ve got it right.
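
A sketch of the naming and the Sunday rotation, with the archive prefix, paths, and metadata file name assumed:

    #!/usr/bin/env bash
    # Archive names end in yymmdd plus the numeric day of week (0 = Sunday).
    # Names and paths are illustrative.
    snar="repos.snar"
    stamp="$(date +%y%m%d)$(date +%w)"

    if [[ $(date +%w) -eq 0 && -f $snar ]]; then
        # Sunday: retire last week's metadata under the previous Sunday's
        # date; with no metadata file left, tar makes a fresh full backup.
        mv "$snar" "repos-$(date -d 'last sunday' +%y%m%d).snar"
    fi

    tar --create --gzip --listed-incremental="$snar" \
        --file="repos-$stamp.tar.gz" -C "$HOME" repos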

If this works out, I’m thinking about implementing incremental backups for other important directories and backing up to external drives. Delving into incremental and differential backups with tar has opened up some new possibilities.