Category Archives: utility

Bash for the word lover

It’s a WYSIWYG world. After all, we’re nearly in the future, which I define as 2019, the year Rick Deckard chases down replicants in Blade Runner. Still no flying cars, which is disappointing. Even so, we have Steve Jobs, so the future is coming, right?

GUI everything isn’t all that it could be. For many, many tasks, it is more expeditious to open a terminal and get a bash prompt. CLI. Characters. Text. It could be green on black, or it could be a rainbow on white, but it is not so different from a Televideo terminal, or a Teletype for that matter. As good as the Bourne Again Shell is, it is not graphical or fancy.

What it is is efficient. For a sharp mind, one given to efficiency, the terminal is power. Want to replace frick with frack in 800 PHP files?

find ~/web/project3 -name '*.php' | xargs perl -pi -e 's/frick/frack/g'

Bam. Done.

This is why I have 3 terminals open right now. One is connected to a server somewhere in Texas. I just fixed some text on a site with a command much like the one above.

But you already knew all that. You Googled and found this page, so you are already 1337 or whatnot. How about some word power on the command line?

Install some packages

These are the packages we will use. To install them on an Ubuntu or Debian system, just install the needed APT packages (for other distributions, use your distribution’s package system):

sudo aptitude -y install wordnet wamerican-large curl wget an

Definitions

This grabs a definition for a word from dict.org, using “unusual” as an example:

curl --stderr /dev/null dict://dict.org/d:unusual | sed '/^[.,0-9].*$/d'

Which returns:

Unusual \Un*u"su*al\, a.
Not usual; uncommon; rare; as, an unusual season; a person of
unusual grace or erudition. -- {Un*u"su*al*ly}, adv. --
{Un*u"su*al*ness}, n.
[1913 Webster]

As you can see, you are using curl to request a definition for “unusual”, then using sed to filter the results and strip out the extra stuff you don’t want. You could just enter “curl dict://dict.org/d:unusual” for the raw deal. Good on ya.

You can turn this into a script:

#!/bin/bash
# def - display the definition of a word
#
curl --stderr /dev/null "dict://dict.org/d:$1" | sed '/^[.,0-9].*$/d'

Save that in a file called “def” and run “chmod +x def” to make it executable. Then “def unusual” will return the same definition. You just created your own tool. You rock.

Wordnet

How about more power? Princeton has a project called Wordnet, which organizes nouns, verbs, adjectives and adverbs into sets of “cognitive synonyms” and provides tools to use this data. With Wordnet, synonyms, antonyms and other lexical relations can be found for a given word.

To show a definition (still using “unusual” as an example):

wn unusual -over

Here’s the output:

Overview of adj unusual

The adj unusual has 3 senses (first 3 from tagged texts)

1. (24) unusual -- (not usual or common or ordinary; "a scene of unusual beauty"; "a man of unusual ability"; "cruel and unusual punishment"; "an unusual meteorite")
2. (1) strange, unusual -- (being definitely out of the ordinary and unexpected; slightly odd or even a bit weird; "a strange exaltation that was indefinable"; "a strange fantastical mind"; "what a strange sense of humor she has")
3. (1) unusual -- (not commonly encountered; "two-career families are no longer unusual")

This uses the “-over” option. Some other options are:

-synsa    adjective synonyms
-synsn    noun synonyms
-synsr    adverb synonyms
-antsa    adjective antonyms
-antsn    noun antonyms
-antsr    adverb antonyms

Wordnet is extensive and there are many more options, run “man wn” for more.
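Following the same pattern as the “def” script above, you could wrap wn in a tiny tool of your own. Here is a sketch (“syn” is just a name I made up; it defaults to adjective synonyms and lets you pass a different option as the second argument):

#!/bin/bash
# syn - show Wordnet synonyms for a word (defaults to adjective synonyms)
# usage: syn unusual
#        syn house -synsn
wn "$1" "${2:--synsa}"

Save it as “syn”, chmod +x it, and you have a thesaurus at the prompt.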

Crossword help

This is simply a use of grep to pattern match words in a word list file.

Use a regular expression to find a word. In quotes, start your pattern with a “^” character and end with a “$” character. Use a period “.” for each unknown character.

grep '^.a...f.c.n...$' /usr/share/dict/words

Magnificent!
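If you do this a lot, you could wrap it in a little script the same way we did with “def”, so you only have to type the dots (a sketch; “cross” is just a name I made up):

#!/bin/bash
# cross - match a crossword pattern against the system word list
# usage: cross .a...f.c.n...
grep -i "^$1$" /usr/share/dict/words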

Rhyming

This uses the rhyme project, which provides a rhyming dictionary for the command line.

To get, build and install rhyme on your system:

sudo aptitude -y install build-essential libgdbm-dev libreadline-dev
cd ~
DIR="src" && [ -d "$DIR" ] || mkdir "$DIR"
cd src
wget http://softlayer.dl.sourceforge.net/project/rhyme/rhyme/0.9/rhyme-0.9.tar.gz
tar -xzf rhyme-0.9.tar.gz
cd rhyme-0.9
make
sudo make install

Holy smokes, you just built software! There is no stopping you. To find a rhyme, using “house” as an example:

rhyme house

House rhymes! Lots of them.

Anagrams

Anagrams are pretty much pure word fun. Try finding an anagram of your name.

Print single-word anagrams of “andrew”:

an -l 1 andrew

Just call me the wander warden.

My remote MySQL backup script in Perl – rtar_mysql.pl

Before you can use this script, you need to set up SSH so your local cron can access the remote servers without a password (see: Using Public/Private Key Pairs with SSH).
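In a nutshell, that amounts to something like this on the workstation that runs the cron job (a sketch; the user and host are placeholders, and the linked post has the details):

# generate a key pair if you do not already have one, then copy the public
# key to each remote server (user and host here are placeholders)
ssh-keygen -t rsa
ssh-copy-id you@server_1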

Then, just modify the script for your databases and servers (the %dumpJobs block near the top).

One thing to note: the script automatically rotates the archived dump files, keeping a file for the first day of each week, the first of the month, and the first of the year.

This will create a series of files over time with daily/weekly/monthly MySQL dump backups.
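Using the example job above, the archives end up named along these lines (exact names depend on the day the job runs):

server_1-database_name_1.dump.sql-tue.tgz       # an ordinary day: named for the weekday
server_1-database_name_1.dump.sql-mon-2.tgz     # first day of the week (week 2 of the month)
server_1-database_name_1.dump.sql-feb-1st.tgz   # first day of the month
server_1-database_name_1.dump.sql-2009-1st.tgz  # first day of the year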

#!/usr/bin/perl -w
# rtar_mysql.pl
#
# No arguments. The program is to be modified to include each database to be archived.
#
# Saves a tar of a remote mysql dump in a rotating file.
#
# This is used on Andrew's workstation to automatically grab a sql dump tar of each database daily.
#
use strict;
use warnings;

use DateTime;

my $fileError;
my $jobError  = 0;
my $jobErrors = "";
my $result;

# Specify a data block for each remote database to be archived.
my %dumpJobs = (
				 'db1' => {
							'remoteServer' => 'server_1',
							'database'     => 'database_name_1',
							'dbUser'       => 'database_username_1',
							'dbPassword'   => 'database_password_1',
							'dumpFilename' => 'server_1-database_name_1.dump.sql',
							'mysqlDumpCmd' => '/usr/bin/mysqldump',
							'tarCmd'       => '/bin/tar',
				 },
				 'db2' => {
							'remoteServer' => 'server_2',
							'database'     => 'database_name_2',
							'dbUser'       => 'database_username_1',
							'dbPassword'   => 'database_password_2',
							'dumpFilename' => 'server_2-database_name_2.dump.sql',
							'mysqlDumpCmd' => '/usr/bin/mysqldump',
							'tarCmd'       => '/bin/tar',
				 },
);

# Process each specified database dump/archive job.
for my $dumpJob ( sort keys %dumpJobs ) {
	$fileError = 0;
	my $tarballFilename = "$dumpJobs{$dumpJob}{'dumpFilename'}-" . tarDateSegment() . ".tgz";
	my $mysqlDumpCmd    = $dumpJobs{$dumpJob}{'mysqlDumpCmd'};
	my $tarCmd          = $dumpJobs{$dumpJob}{'tarCmd'};
	print "$dumpJob\n";

	my $dumpCommand = "ssh $dumpJobs{$dumpJob}{'remoteServer'} '$mysqlDumpCmd ";
	$dumpCommand .= "--user=$dumpJobs{$dumpJob}{'dbUser'} --password=$dumpJobs{$dumpJob}{'dbPassword'} ";
	$dumpCommand .= "$dumpJobs{$dumpJob}{'database'} > $dumpJobs{$dumpJob}{'dumpFilename'}'";
	print $dumpCommand . "\n";
	$result = system($dumpCommand );
	if ($result) { $fileError = 1; }

	if ( !$fileError ) {
		my $remoteMakeTarball = "ssh $dumpJobs{$dumpJob}{'remoteServer'} '$tarCmd ";
		$remoteMakeTarball .= "cvfz $tarballFilename $dumpJobs{$dumpJob}{'dumpFilename'}'";
		print $remoteMakeTarball . "\n";
		$result = system($remoteMakeTarball );
		if ($result) { $fileError = 1; }
	}

	if ( !$fileError ) {

		# using a more flexible naming scheme now
		my $downloadCommand = "scp $dumpJobs{$dumpJob}{'remoteServer'}:$tarballFilename .";
		print $downloadCommand . "\n";
		$result = system($downloadCommand );
		if ($result) { $fileError = 1; }
	}

	if ($fileError) {
		$jobError = 1;
		$jobErrors .= "$dumpJob ";
	}
}
if ($jobError) {
	warn "Errors were encountered: $jobErrors\n";
	exit(1);
}


sub tarDateSegment {
	my $dt = DateTime->now();

	my ( $sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst ) = localtime(time);
	$year += 1900;
	my $dateTime = sprintf "%4d-%02d-%02d %02d:%02d:%02d", $year, $mon + 1, $mday, $hour, $min, $sec;
	my $date     = sprintf "%4d-%02d-%02d",                $year, $mon + 1, $mday;
	my @weekdays = qw( sun mon tue wed thu fri sat );
	my $weekday  = $weekdays[$wday];
	my @months   = qw( jan feb mar apr may jun jul aug sep oct nov dec );
	my $month    = $months[$mon];

	my $weekOfMonth = $dt->week_of_month;

	my $dateTar = "";

	# if the first day of the year ($yday from localtime is zero-based), set $dateTar like: 2009-1st
	if ( $yday == 0 ) {
		$dateTar = "$year-1st";
	}

	# if the first day of the month, set $dateTar like: feb-1st
	elsif ( $mday == 1 ) {
		$dateTar = "$month-1st";
	}

	# if the first day of the week, set $dateTar like: mon-1
	# where the number is the week of the month number
	elsif ( $wday == 1 ) {
		$dateTar = "$weekday-$weekOfMonth";
	}

	# otherwise, set the $dateTar like: mon
	else {
		$dateTar = "$weekday";
	}

	# $sec      seconds          54
	# $min      minutes          37
	# $hour     hour             11
	# $mon      month            4
	# $year     year             2009
	# $wday     weekday          3
	# $yday     day of the year  146
	# $isdst    is DST           1
	# $weekday  day of the week  wed
	# $month    month            may
	# $dateTime date and time    2009-05-27 11:37:54
	# $date     date             2009-05-27
	return $dateTar;
}

=head1 NAME

rtar_mysql.pl - Andrew's remote MySQL archive program.

=head1 SYNOPSIS

    use: rtar_mysql.pl

=head1 DESCRIPTION

This is a program I wrote to SSH/dump/tar/download/rotate archives of MySQL databases.

=over

=back

=head1 LICENSE

None.

=head1 AUTHOR

Andrew Ault 

=cut

Perl program for makeiPhoneRefMovie

This creates the small .mov (or whatnot) redirect files that Apple’s makeiPhoneRefMovie generates. It is simply a driver that calls makeiPhoneRefMovie to create one of those files for each video listed in the database.

#!/usr/bin/perl -w
#
# gen_mwn_iphone_mov_redirect_files.pl
#
# This makes the special iPhone .mov redirect files (~ 300 bytes) that the iPhone
# uses to redirect to the appropriate actual movie file.
#
# To use this utility:
#
# Make sure that the program makeiPhoneRefMovie is in your $PATH with:
#
#  which makeiPhoneRefMovie
#
# Then, let fly!
#
# ./gen_mwn_iphone_mov_redirect_files.pl
#
# Then, FTP the .mov files that are created up to the CDN.
#

use strict;
use File::Basename;
use Net::FTP;
use DBI;
use Cwd;

# Where to output files
my $dirOutput = "./iphone-ref-movs/";

my $dirBase = getcwd;

# Working directories
my %workingDirs = ( dirOutput => $dirBase . "/iphone-ref-movs/", );

# Prepend url strings
my $url3gp = "http://low_bandwidth_url_goes_here/";
my $urlM4v = "http://high_bandwidth_url_goes_here/";

# MySQL connection data
my $dbHost     = "db_hostname_here";
my $dbDatabase = "db_databasename_here";
my $mysqlDsn   = "DBI:mysql:$dbDatabase;host=$dbHost";
my $dbUsername = "db_username_here";
my $dbPassword = "db_password_here";

# Create working directories, if they do not already exist.
while ( my ( $key, $value ) = each(%workingDirs) ) {
	if ( !-d $value ) { mkdir $value or die $!; }
	print $key . ": " . $value . "\n";
}

# Connect to MySQL database
my $dbh = DBI->connect( $mysqlDsn, $dbUsername, $dbPassword )
  or die "Cannot connect to database $dbHost: $@";

# Get a list of @songFilenames from the database
my $sqlQuery = "SELECT filename FROM videos WHERE filename != ''";
my $sth      = $dbh->prepare($sqlQuery);
$sth->execute();
my @songFilenames;
while ( my ($songFilename) = $sth->fetchrow_array() ) {
	push( @songFilenames, $songFilename );
}
$dbh->disconnect();
my $numSongIds = @songFilenames;

# Create the mov redirect files
foreach my $songFilename (@songFilenames) {

	# build the low- and high-bandwidth URLs and the output .mov path
	my $fileUrl3gp  = $url3gp . $songFilename . ".3gp";
	my $fileUrlM4v  = $urlM4v . $songFilename . ".m4v";
	my $filenameMov = $workingDirs{dirOutput} . $songFilename . ".mov";

	my $cmd = "makeiPhoneRefMovie $fileUrl3gp $fileUrlM4v $fileUrlM4v $filenameMov";
	system($cmd);
}
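A quick sanity check is to look in the output directory; the generated redirect files should be tiny (around 300 bytes, per the header comment):

ls -l iphone-ref-movs/ | head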

Perl script to compare 2 directories

This is a quick Perl script that I wrote to solve a particular problem; I needed to check two directories, one of original files and one of transcoded files, to see which files were missing from the second directory. The files in the second directory have different filename extensions, so the utility needs to take this into consideration.

So, this utility checks the two directories, ignoring file extensions and shows the missing files in each directory.
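Usage is just the two directories to compare (the paths here are only placeholders):

./dircomp.pl /path/to/originals /path/to/transcoded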

#!/usr/bin/perl -w
# dircomp.pl
#
# Compares filenames in two directories, without regard to filename extensions.
# Displays list(s) of the differences.
#
# This was written to show missing transcoded files in one dir compared to another.
#
use strict;
use List::Compare;
use File::Basename;

my $resultsFound = 0;
my $fileName;
my $filePath;
my $fileExt;

if ( $#ARGV < 1 ) {
    &usage;
}

my $dir1 = $ARGV[0];
my $dir2 = $ARGV[1];

print "\ndircomp directory comparison\n";
print "\ncomparing:\t$dir1\nwith:\t\t$dir2";

opendir( DIR1h, $dir1 )
  || die("cannot open directory: $dir1");
opendir( DIR2h, $dir2 )
  || die("cannot open directory: $dir2");

my @files1 = readdir(DIR1h);
my @files2 = readdir(DIR2h);

closedir(DIR1h);
closedir(DIR2h);


# Remove filename extensions for each list.
foreach my $item (@files1) {
    my ( $fileName, $filePath, $fileExt ) = fileparse($item, qr/\.[^.]*/);
    $item = $fileName;
}

foreach my $item (@files2) {
    my ( $fileName, $filePath, $fileExt ) = fileparse($item, qr/\.[^.]*/);
    $item = $fileName;
}

my $lc = List::Compare->new( \@files1, \@files2 );

my @onlyInDir1 = $lc->get_Lonly;
my @onlyInDir2 = $lc->get_Ronly;

if ( @onlyInDir1 > 0 ) {
    $resultsFound = 1;
    print "\n\nonly in $dir1:\n\n";
    for my $entry (@onlyInDir1) {
        print "$entry\n";
    }
}

if ( @onlyInDir2 > 0 ) {
    $resultsFound = 1;
    print "\n\nonly in $dir2:\n\n";
    for my $entry (@onlyInDir2) {
        print "$entry\n";
    }
}

if ( !$resultsFound ) {
    print "\n\nboth directories are identical.\n";
}

sub usage
{
    print "usage: dircomp.pl dir1 dir2\n";
    exit(0);
}

My file rotating MySQL database dumper

This is a script to be run from a daily cron that creates a series of sanely named SQL dump files: daily, weekly, monthly, etc.

Always have that backup ready!
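When you do need that backup, restoring is just a matter of unpacking the tarball and feeding the dump back to mysql (a sketch, using the example names from the script below):

# unpack an archived dump and load it back into MySQL (example names)
tar xzf db1.dump.sql-mon.tgz
mysql --user=db1username --password db1 < db1.dump.sql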

#!/usr/bin/perl -w
#
# No arguments. The program is to be modified to include each database to be archived.
#
#
use strict;
use warnings;

use DateTime;

my $numRotations = 6;    # base 0,  so 6 = 7 rotations (.0 through .6)... plus the new file, so 8 total files
my $fileError;
my $jobError  = 0;
my $jobErrors = "";
my $result;

# Specify a data block for each remote database to be archived.
my %dumpJobs = (
				 'db1' => {
							'database'     => 'db1',
							'dbUser'       => 'db1username',
							'dbPassword'   => 'db1password',
							'dumpFilename' => 'db1.dump.sql',
							'mysqlDumpCmd' => '/usr/bin/mysqldump',
							'tarCmd'       => '/usr/bin/tar',
				 },
);

# Process each specified database dump/archive job.
for my $dumpJob ( sort keys %dumpJobs ) {
	$fileError = 0;
	my $tarballFilename = "$dumpJobs{$dumpJob}{'dumpFilename'}-" . tarDateSegment() . ".tgz";
	my $mysqlDumpCmd    = $dumpJobs{$dumpJob}{'mysqlDumpCmd'};
	my $tarCmd          = $dumpJobs{$dumpJob}{'tarCmd'};
	print "$dumpJob\n";

	my $dumpCommand = "$mysqlDumpCmd ";
	$dumpCommand .= "--user=$dumpJobs{$dumpJob}{'dbUser'} --password=$dumpJobs{$dumpJob}{'dbPassword'} ";
	$dumpCommand .= "$dumpJobs{$dumpJob}{'database'} > $dumpJobs{$dumpJob}{'dumpFilename'}";
	print $dumpCommand . "\n";
	$result = system($dumpCommand );
	if ($result) { $fileError = 1; }

	# create tarball
	if ( !$fileError ) {
		my $makeTarball = "$tarCmd ";
		$makeTarball .= "cvfz $tarballFilename $dumpJobs{$dumpJob}{'dumpFilename'}";
		print $makeTarball . "\n";
		$result = system($makeTarball );
		if ($result) { $fileError = 1; }
	}

	if ($fileError) {
		$jobError = 1;
		$jobErrors .= "$dumpJob ";
	}
}
if ($jobError) {
	warn "Errors were encountered: $jobErrors\n";
	exit(1);
}

# This rotates a series of files Unix log rotation style.
# CURRENTLY UNUSED - KEPT BECAUSE IT IS SO HANDY
#
# Run this with the name of the file to rotate and the max # of rotations, just before you
# create the newest iteration of the file. This will rename the older versions by appending
# ".0" though the number of rotations specified. (So it will actually keep one more than specified,
# including ".0".)
sub rotateFile {
	my ( $filename, $numRotations ) = @_;

	# if the highest exists, delete it
	if ( -f $filename . ".$numRotations" ) { unlink $filename . ".$numRotations"; }

	#
	for ( my $count = $numRotations ; $count >= 1 ; $count-- ) {
		my $fromFilename = $filename . "." . ( $count - 1 );
		my $toFilename = $filename . "." . $count;
		if ( -f $fromFilename ) {
			rename $fromFilename, $toFilename;
		}
	}
	if ( -f $filename ) {
		rename $filename, $filename . ".0";
	}
}

sub tarDateSegment {
	my $dt = DateTime->now();

	my ( $sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst ) = localtime(time);
	$year += 1900;
	my $dateTime = sprintf "%4d-%02d-%02d %02d:%02d:%02d", $year, $mon + 1, $mday, $hour, $min, $sec;
	my $date     = sprintf "%4d-%02d-%02d",                $year, $mon + 1, $mday;
	my @weekdays = qw( sun mon tue wed thu fri sat );
	my $weekday  = $weekdays[$wday];
	my @months   = qw( jan feb mar apr may jun jul aug sep oct nov dec );
	my $month    = $months[$mon];

	my $weekOfMonth = $dt->week_of_month;

	my $dateTar = "";

	# if the first day of the year ($yday from localtime is zero-based), set $dateTar like: 2009-1st
	if ( $yday == 0 ) {
		$dateTar = "$year-1st";
	}

	# if the first day of the month, set $dateTar like: feb-1st
	elsif ( $mday == 1 ) {
		$dateTar = "$month-1st";
	}

	# if the first day of the week, set $dateTar like: mon-1
	# where the number is the week of the month number
	elsif ( $wday == 1 ) {
		$dateTar = "$weekday-$weekOfMonth";
	}

	# otherwise, set the $dateTar like: mon
	else {
		$dateTar = "$weekday";
	}

	# $sec      seconds          54
	# $min      minutes          37
	# $hour     hour             11
	# $mon      month            4
	# $year     year             2009
	# $wday     weekday          3
	# $yday     day of the year  146
	# $isdst    is DST           1
	# $weekday  day of the week  wed
	# $month    month            may
	# $dateTime date and time    2009-05-27 11:37:54
	# $date     date             2009-05-27
	return $dateTar;
}

=head1 NAME

rtar_mysql.pl - Andrew's remote MySQL archive program.

=head1 SYNOPSIS

    use: rtar_mysql.pl

=head1 DESCRIPTION

This is a program I wrote to dump/tar/rotate archives of local MySQL databases.

=over

=back

=head1 LICENSE

None.

=head1 AUTHOR

Andrew Ault 

=cut

Perl Low Disk Space Warning Cron Script

This is a quick little script I wrote to warn me when disk space is getting low on a server I’m responsible for.

I just stuck this into a daily cron and now I know when to act!
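Something like this in root’s crontab does the trick (the path and time are placeholders; adjust to wherever you keep the script):

# check disk space every morning and e-mail a warning if it is low (example path)
0 6 * * * /usr/local/sbin/lowdiskspacewarning.pl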

#!/usr/bin/perl
#
# lowdiskspacewarning.pl
#
use strict;
use Filesys::DiskFree;

# init
my $sendmail = "/usr/lib/sendmail -t";

# file system to monitor
my $dirFilesystem = "/";
my $systemName = "putYourSystemNameHere";

# low disk space warning threshold
my $warningThreshold = 20; # in percent

# fs disk freespace
my $fsHandle = new Filesys::DiskFree;
$fsHandle->;df();
my $fsSpaceAvail = $fsHandle->;avail($dirFilesystem);
my $fsSpaceTotal = $fsHandle->;total($dirFilesystem);
my $fsSpaceUsed = $fsHandle->;used($dirFilesystem);
my $fsSpaceAvailPct = (($fsSpaceAvail) / ($fsSpaceAvail+$fsSpaceUsed)) * 100.0;

# email setup
my $emailTo='it@yourdomain.com';
my $emailFrom='root@yourdomain.com';
my $emailSubject="WARNING Low Disk Space for: $systemName";
my $emailBody = sprintf("WARNING Low Disk Space on '$systemName $dirFilesystem': %0.2f%%\n", $fsSpaceAvailPct);

# If free space is below the threshold, e-mail a warning message.
if ($fsSpaceAvailPct < $warningThreshold) {
        open(MAIL, "|$sendmail");
        print MAIL "To: $emailTo\n";
        print MAIL "From: $emailFrom\n";
        print MAIL "Subject: $emailSubject\n";
        print MAIL $emailBody;
        close(MAIL);
}