A while back I wrote a little script to display some basic IP information: public IP address, private IP address, default gateway, and DNS addresses. I had used the nmcli command piped through grep to display all of the IP4 information, which also included route information that didn’t really interest me.

I didn’t particularly like the way the output looked, so I took another look at the script to see if I could improve it. I did a little research and found a couple of different ways to get the local IP address and the default gateway. I still used nmcli to get the DNS addresses but found a way to use awk to get just the addresses and display them nicely. The output looks so much better than before.

Just before I started on this little project I’d been watching a YouTube video covering some of the basics of using awk and I wondered if I could use awk to extract just the data I needed. I was able to use awk to pull and display all of the IP data except the public IP. Using awk, I didn’t need to use grep at all.

$ ipinfo.sh

IP Information
Public IP:
Local IP:
Default Gateway:
DNS Servers:

The script is on my GitHub Bash scripts repository.
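The script itself isn't reproduced here, but a minimal sketch along these lines, using nmcli and awk as described, might look like this. The awk patterns, the nmcli field names, and the overall layout are my assumptions, not the actual code:

```shell
#!/usr/bin/env bash
# Sketch of an ipinfo-style script; the parsing patterns are assumptions.

gateway_from_route() {                  # parse `ip route` output
    awk '/^default/ {print $3; exit}'
}

dns_from_nmcli() {                      # parse `nmcli dev show` output
    awk -F': *' '/^IP4\.DNS/ {print $2}'
}

echo "IP Information"
# Public IP needs an external lookup, e.g.: curl -s ifconfig.me
if command -v hostname >/dev/null; then
    echo "Local IP:        $(hostname -I | awk '{print $1}')"
fi
if command -v ip >/dev/null; then
    echo "Default Gateway: $(ip route | gateway_from_route)"
fi
if command -v nmcli >/dev/null; then
    echo "DNS Servers:     $(nmcli dev show | dns_from_nmcli | paste -sd ' ')"
fi
```

With awk doing the field extraction, there's no need to pipe through grep at all, which matches the cleanup described above.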


Script Maintenance

This morning I attended to some script maintenance. I have several scripts that insert header information or licenses into existing files or create templates for new scripts or source files. I thought I’d clean them up and standardize them where I could.

I noticed that there was already a Templates folder in my home directory. It was empty so I figured this would be a good place to put my own templates and get them out of my Work folder. Of course, moving them meant that I had to change the scripts to point to the new location.

In some scripts I had included an option to edit the new or modified files. It made sense to include that option in all of these scripts and then I wondered what other options I should include. I decided that once the data had been processed, there should be options to edit the file, view the file, or quit without doing anything further. I also cleaned up the header comments to make them consistent.

Now that I’ve done that, why shouldn’t I combine some of these scripts into a single script in which you could choose the appropriate license for your script or source code? I’m already thinking about it.
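The edit/view/quit step could be sketched like this. This is a hedged example; the function name and prompt wording are mine, not from the actual scripts:

```shell
#!/usr/bin/env bash
# Sketch of a shared post-processing prompt; names and wording are assumptions.
post_process() {
    local file="$1" choice
    read -rp "(e)dit, (v)iew, or (q)uit? " choice
    case "$choice" in
        e|E) "${EDITOR:-nano}" "$file" ;;
        v|V) "${PAGER:-less}" "$file" ;;
        *)   ;;   # quit without doing anything further
    esac
}
```

Putting the menu in a function like this makes it easy to paste the same block into each of the template and license scripts.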

I added the Templates folder to the scripts I use to keep my files synchronized between systems. Then, as a final bit of cleanup, I made sure that I had the Atom editor, bat, and FnLoC on all of them.

  • The Atom Editor is fast becoming my go-to editor for writing code, scripts, and markdown files. I haven’t gotten beyond the basics yet but it’s very configurable and integrates with GitHub.
  • Bat is described as a cat clone with wings. It supports syntax highlighting for many programming and markup languages, communicates with git to show modifications with respect to the index, pipes its own output to less if the file is too large for one screen, and can be used to concatenate files.
  • FnLoC is my own little program that counts logical lines of code in C source files.

Another cleanup script

I’ve written a number of cleanup scripts recently – to clean up my ~/bin directory, to clean up my C programming work folders, and to clean up source code files written on DOS/Windows text editors. And this morning, I wrote another one.

I use HomeBank to keep track of my bank and credit card accounts. A couple of months ago I added the PPA to get the latest version and updates. I noticed that the newer version not only creates a backup copy of the main data file with a tilde at the end, but it also makes another backup appended with the current date and a .bak extension. Naturally, the number of backup files increases every day I use the program so it occurred to me that it would be prudent to delete the older backups after a certain period of time, say 30 days.

The find command has been a mainstay of my other cleanup scripts and it’s the primary command in this script. I also use pushd and popd to move to the HomeBank directory and return to where the script was called. Since the files are only about 170K each, I don’t see much point in adding the script to cron or anacron right now. Maybe later I’ll add it and run it monthly. Running it daily or weekly might be overkill.

Perhaps I should consider putting the previous month’s backup files into a tarball inside a backup folder. That would give me backups beyond 30 days. I’ll look into that.
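A hedged sketch of such a cleanup, built around find with pushd and popd as described (the default HomeBank directory and the backup naming pattern are assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the HomeBank backup cleanup; the default directory and the
# .bak naming pattern are assumptions.
clean_homebank_backups() {
    local dir="${1:-$HOME/.config/homebank}" days="${2:-30}"
    pushd "$dir" >/dev/null || return 1
    # delete date-stamped backups older than $days days
    find . -maxdepth 1 -name '*.bak' -mtime +"$days" -delete
    popd >/dev/null
}
```

Run from cron or anacron, the same function would only need the directory and retention period passed as arguments.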

I love it when I can throw together a script to handle a task.

Makefile Fun

Over the last couple of days I’ve been relearning makefiles. It had been over twenty years since I’d worked with them, and even then I generally had trouble with them. I watched several YouTube videos, some better than others, and they started making more sense to me.

Yesterday, I found C Programming Makefiles by Barry Brown. In the video he started with simple cases and gradually increased the complexity. I liked the way he explained the processes involved and put the dependency tree in a graphical format, making it easier to see the relationships between the source code, the object files, and the executable files.

Two days ago, I created a Makefile for the FnLoC/LLoC project and put it in my git repositories. That got me to thinking about breaking the project down into multiple files since both programs used a lot of common code to parse lines from a file and determine the states. I had an idea that I might even be able to put the main part of that common code into a function.

Then yesterday, after I’d watched the Barry Brown video, I copied the files from the local repository into a working directory and started pulling out the common code. Previously, I had considered files to take care of the common functions, the linked list, and some general functions. Ultimately, I decided just to work with the common declarations and functions, which I put into files I finally named lstates.h and lstates.c.
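A Makefile for the split layout might look something like this. This is a sketch under assumptions: the actual target names, compiler flags, and file list aren’t shown in the post.

```make
CC     = gcc
CFLAGS = -Wall -g

all: fnloc lloc

# both executables link against the shared state-machine code
fnloc: fnloc.o lstates.o
	$(CC) $(CFLAGS) -o $@ $^

lloc: lloc.o lstates.o
	$(CC) $(CFLAGS) -o $@ $^

# every object depends on its .c file and the shared header
%.o: %.c lstates.h
	$(CC) $(CFLAGS) -c $<

clean:
	rm -f *.o fnloc lloc
```

The dependency tree here is exactly the kind shown graphically in the video: two executables, each built from its own object file plus the common lstates.o.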

With a little work, I got everything to compile and link. I tested the resulting programs and discovered that both programs were counting nearly twice as many lines of code as there actually were, much like when I’d attempted to change the NewLineNC state to NewLine from within the switch statement. I moved the code back into main() in both source files and everything appeared to be working.

With the programs successfully compiled, linked, and tested, I updated the change log and the README. In the README I added a listing of the source files under the description. Under Compiling from source, I took out the part about compiling with GCC without the Makefile. Before I split off the common source code, that was a very simple matter, but with the addition of the lstates files, I didn’t feel like going through the whole process. I figure if someone is downloading the program, they probably know enough to compile it.

I’m also considering eliminating the fnloc-install.tar.gz archive. If someone who is not on a Debian- or Ubuntu-based distribution wants the program, they can download the source and the scripts and go from there. If anyone wants to create a package for a different distribution, they’re welcome to do it. If anyone does, it would be nice if they’d share it with me.

I haven’t had a chance yet to do anything with the new source code on Windows. I’ll probably take a look at that later this week. I use MinGW gcc on Windows and I’ve had no problems with compilation, so I expect it will be able to deal with the Makefile.

This little multi-file project has been a confidence builder.

FnLoC Updates

In the last couple of days, I’ve done some work on the FnLoC project and made several commits to my GitHub repository.

First of all, it dawned on me why FnLoC wouldn’t work properly with a lot of my source code from my Computer Science coursework 20 years ago. I had written much of that code using DOS and Windows text editors and IDEs. These editors would leave carriage return characters throughout the file. These characters wreaked havoc on my FnLoC program, which was developed to work with files written in a Linux or Unix environment.

To solve the problem, I wrote a Bash script (dos2linux) to clean out the carriage returns. The script uses sed to remove the carriage return characters and creates a backup of the original file.

    sed -i.bak -e 's/\r//g' source-file.c

There are some other methods, such as the tr command, but this worked well for me and was easily incorporated into a script.
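For instance, the tr approach might be wrapped up like this. The helper name is mine, not the post’s script, which uses sed:

```shell
#!/usr/bin/env bash
# In-place carriage-return removal using tr instead of sed.
# The function name is an assumption, not from the dos2linux script.
strip_cr() {
    local f="$1"
    tr -d '\r' < "$f" > "$f.tmp" && mv "$f.tmp" "$f"
}
```

Unlike sed -i, tr can’t edit in place, so the temporary-file-and-move step is needed; on the other hand, tr is available even on very minimal systems.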

After going through and cleaning up all of my old CS source code files, I copied the code for my original LOC counting program then updated it so it was consistent with the line parsing process of FnLoC. I also added a couple of functions to print the results and to display syntax help. When I finished, I named it LLoC for “Logical Lines of Code.” While I was at it, I tidied up the FnLoc code a bit, just some minor formatting things I’d missed previously.

I also compiled the new code on my Windows 7 machine and updated the installation and removal batch files before placing them in a zipped archive.

This morning I modified the loc2file Bash script to incorporate LLoC by having it process header files. While going through my old code, I found a number of C++ files with .cc and .hh file extensions, so I added those extensions to the list of extensions to be processed.
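The extension handling could be sketched like this. loc2file’s actual code isn’t shown here, so the function name and pattern list are assumptions:

```shell
#!/usr/bin/env bash
# Sketch: collect C/C++ sources, now including the .cc and .hh extensions.
gather_sources() {
    local dir="${1:-.}"
    find "$dir" -maxdepth 1 \( -name '*.c' -o -name '*.cc' \
        -o -name '*.h' -o -name '*.hh' \) | sort
}
```

Each file returned could then be handed to FnLoC or LLoC as appropriate, with headers going to LLoC since they contain no functions.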

Then I updated the deb package and updated the documentation. When I was satisfied I had everything up to date, I updated the Git repository and pushed the changes to my GitHub repository.

Looking at FnLoC Code

This afternoon I took a look at my FnLoC source code and saw a couple of lines of code that I thought I could modify a bit.

While the program is parsing a line of source code, it goes through the line character by character, changing the state of the line as it goes. I use a switch statement that looks at every possible state and potentially changes the state based on the state set by the previous character.

One of the possible states is NewLineNC. It’s a transition state that’s usually entered when a newline character is reached from certain states. I’m not exactly sure of its purpose anymore, though I’m sure I knew when I wrote the original program back in 1998. I no longer fully understand how the process works, but I’ve figured out some of it as I’ve been working on it over the past few months.

When I first entered the source code (after 20 years), I got an error or warning from the compiler about the NewLineNC state not being among the possible cases. It’s not a state that can be entered until the very end of a line, so there’s no function to process it. To satisfy the compiler, I added a case for it that doesn’t do anything.

     switch (state)
     {
         case NewLineNC :    /* end-of-line transition state; nothing to do */
             break;
         /* ... other cases ... */
     }

Later in the program, after the line has been parsed, I have a conditional statement that checks to see if the final state of the line is NewLineNC. If it is, the state is changed to NewLine, which is the initial state of a new line of code.

     if ( state == NewLineNC )
            state = NewLine;

I wondered if I could change the state from NewLineNC to NewLine in the switch case and eliminate the if statement, so I gave it a try. After compiling the code, I ran it against my source code file and compared the results to the previous LOC information. I immediately noticed a considerable difference.

The total LOC count increased from 245 lines to 276 lines. Counts for several functions increased by one line. The main() function’s count increased by 6 lines, and lines of code outside of functions (declarations and compiler directives) increased from 5 lines to 24 lines.

I looked at the code and physically counted the logical lines in several functions, verifying the counts before the changes. I wasn’t able to identify which lines had been added to the count, but it was apparent that with the change, the line state was being changed from NewLineNC to NewLine when it shouldn’t have been, thus adding lines that weren’t really code, such as comments. It became quite apparent that the place to change between those states was after the entire line had been processed.

NewLineNC seems to be a possible final state for a processed line while NewLine is an initial state for the line. As I said earlier, I probably understood the code a lot better when I originally wrote it. I’m pretty sure that I had manually parsed many lines of code to determine all of the potential states that could occur in a possible line of code. Since rediscovering the code, I’ve been slowly relearning it and figuring it out.

I undid the changes, leaving it the way it was. I considered adding more comments to help clarify some sections of the code but decided against it. I didn’t really want to recompile it and make a new deb package. I’d do that if there were actual changes to the code itself.

Penultimate Day 2018

The past year kept me busy with a lot of little tech projects, programming, scripting, networking, and honing my Linux skills.

Writing Code

Probably the biggest thing for me was the revival of my interest in writing code. This summer I found a couple of notebooks containing some of my source code from my college days. Of particular interest was code from a 1998 Computer Engineering class for which I wrote a couple of programs to count logical lines of code. The Function Lines of Code (FnLoC) program I found wasn’t the latest version I’d written, but I typed it into an editor and compiled it.

Then I started improving it and resolving some of the problems it had. One of the big changes I made was to replace the stack holding the function information with a singly linked list. With the stack, the function data was displayed in the reverse order of how it appeared in the code because the last function was the most recent addition to the stack. My implementation of the linked list adds new data to the end of the list and displays it from the beginning, so the functions are listed in the same order as they appear in the source code.

I also fixed problems with properly identifying and counting function headers that spanned two lines. If a function header needs to be split (a practice I try to avoid), both lines will be copied to the linked list and later displayed in the output. The two lines are counted as one logical line of code.

There is still a problem with how the program deals with multi-line data structures where the data is comma delimited, as in arrays. Currently, multi-line array declarations are counted as two lines, regardless of how many lines are in the declaration. One of the challenges I face is differentiating a function from a data structure while parsing a line of code.

As part of the FnLoC project, I created a Bash script wrapper for the program which displays the program’s output on the screen as it writes it to a file. I also learned how to create a .deb package to distribute it to Debian/Ubuntu-based distributions and wrote scripts to install it on other Linux distributions and on Windows. I also compiled it on Windows 7.

I’ve also been doing some other coding to keep in practice but haven’t found myself another project yet.

Bash Scripts

I continued to write Bash scripts to do several tasks for my own use and I’ve been adapting others’ scripts (particularly Joe Collins of EzeeLinux) to fit my needs. Among the scripting highlights were scripts to insert headers and licenses into existing scripts and to use templates to create new scripts and source code files.

I also created other scripts that solved routine problems, like renaming file extensions or file names to make them more readable or to conform to naming conventions.

Conky Scripts

During the year I found that due to changes in conky in the Mint 18.x and Ubuntu 16.04 repositories, my conky scripts weren’t functioning right. I wrote a new and simpler base script and applied it to my systems. Since developing my conky scripts is no longer a priority, I removed the Conky Configuration page from the blog. I rarely have much need to tweak them anymore.

Recently, I created a script to take a template file, fill in the appropriate device names, create the .conkyrc file, and make a copy of it in a designated folder. Then I expanded on the script to install conky and the conky.desktop.
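A hedged sketch of that template fill-in (the placeholder token, the backup folder location, and the function name are all my assumptions):

```shell
#!/usr/bin/env bash
# Sketch of filling a conky template with a device name and installing it.
# The @NETDEV@ placeholder and the backup folder are assumptions.
make_conkyrc() {
    local template="$1" netdev="$2" dest="${3:-$HOME/.conkyrc}"
    sed -e "s/@NETDEV@/$netdev/g" "$template" > "$dest"
    # keep a copy in a designated folder (location assumed)
    mkdir -p "$HOME/conky-backups"
    cp "$dest" "$HOME/conky-backups/"
}
```

The same pattern extends naturally to other per-machine values, such as disk device names or CPU core counts.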


In late November, I discovered that I had set up an account on GitHub. I have no recollection of when I did this, but since I had it I figured I may as well use it. I found some tutorials online and set up git repositories on my main computer and on my GitHub account. Presently I have two repositories on GitHub – FnLoC and bashscripts.


Although I’m primarily a Linux user, I still maintain a few Windows machines. On my Windows PC I installed Cygwin and MinGW, mainly to use rsync for external backup and to compile C source code, respectively. I still have a small number of tasks I do on Windows so I keep a Windows 7 PC around just for that. The rest of the family still hasn’t converted yet. I avoid using Windows 10 like the plague.


Throughout the year I did several installations of both Linux and Windows. I set up Windows 7 on laptops as gifts for my son and my daughter-in-law and set up a Windows 10 laptop for my grandson. I also put Windows 7 on a machine to replace an older machine. And I set up various flavors of Linux on laptops to try them out.

Near the end of the year, I replaced the aging Dell laptop I keep by the router with an HP EliteDesk 8300 USDT. I had intended to install LMDE 3 on it but had some issues with the UEFI partitions, so I ended up putting Mint 18.3 on it and letting the installer set it up with a single partition.

A few days later I did get LMDE 3 installed on a desktop mini, which is working out very well. It has an issue with sourcing my .profile in the GUI login, so my ~/bin folder wasn’t showing up in the path. I had to configure the terminal to run as a login shell. It’s a workaround and something I hope they’ll fix.

Just after Christmas I decided to reinstall Mint on my Lenovo PC. I decided that I was never going to use the Windows 10 partition so I reclaimed that disk space. I struggled with getting the partitions set up to work with UEFI and I somehow managed to disable it and get Mint 19.1 on it. Getting my applications installed was a struggle too. You can read the details in a previous post.


From mid-January through the end of October, I was again employed by The AME Group, working part-time on the Kettering Health Network project to upgrade their systems to Windows 10. I generally worked four evenings a week imaging computers and tracking the computers that weren’t going to be used in the upgrade. I was able to purchase some very nice computers dirt cheap so I have a nice collection of laptops and ultra-small form factor desktops to play around with.

I stopped working at the end of October because I was nearing the income limit where it would have affected my Social Security benefits, and I wanted to get some things done around the house. They are hoping I will return after the first of the year. I’m thinking about it because I could use the extra money. However, I’m not sure I want to do the four-day work week again because I found that I really didn’t have as much time to get things done as I would have liked. I’m thinking a three-day work week of 18 to 24 hours would suit my needs well.


In the coming year, I expect to be doing more with Linux along with building up my coding and scripting skills. I’ve got a couple of small projects in mind.

  • Installing LMDE 3 on my HP EliteBook 6570b. I have 18.3 on it now and it’s working well but I want to work with LMDE more.
  • Do more with virtual machines – distro testing, programming environments, testing scripts, and Windows tasks.
  • Finding more coding projects, either C programs or Bash scripts. I’m sure I’ll find things that pique my interest or problems in need of solutions.