
exiftool is very cool

I discovered this command line tool called exiftool, written by Phil Harvey. It is a platform-independent Perl library used to read, write, and edit EXIF data in photographic images. So far I've written a bash script to geotag images using user-supplied geographic coordinates (DMS format), but it can also use GPS log files to geotag and geosync your photos. By default it creates a backup of the original, but it can write over the original instead. It looks like a pretty powerful little tool. I think it was about 8MB to install and it is freeware.
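Since exiftool expects decimal degrees while my coordinates come in DMS, the core of such a script is the conversion. Here is a minimal Perl sketch of that step (the function name is mine, not from my script); the GPS tag names in the comment are real exiftool tags.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Convert a DMS coordinate (degrees, minutes, seconds) to decimal degrees.
sub dms_to_decimal {
    my ($deg, $min, $sec) = @_;
    return $deg + $min / 60 + $sec / 3600;
}

my $lat = dms_to_decimal(45, 30, 36);    # 45.51
my $lon = dms_to_decimal(122, 40, 48);   # 122.68

# exiftool takes decimal degrees plus a hemisphere reference, e.g.:
#   exiftool -GPSLatitude=45.51 -GPSLatitudeRef=N \
#            -GPSLongitude=122.68 -GPSLongitudeRef=W photo.jpg
printf "%.4f %.4f\n", $lat, $lon;
```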

When to Automate

I'm just pondering something that I think about fairly often when I'm asked to do some immense task, like creating output for 2,256 watersheds. That was today's task ... and by "today" I mean it was added to my inbox today; I'm not going to finish it today. I have to use the 2001 NLCD to create forest/non-forest data and then clip out a piece for each of the 2,256 sheds to run Fragstats on. What I wonder about is how much of this I can automate with a script. The answer usually boils down to efficiency: how much time would it take to write the script vs. how much time would it take to do the work manually? Usually there are several steps (as in the present case), so I have to answer that question for each step. Sometimes a script can be useful in the future, so it may be worth writing one even if it takes a little longer to write than to do the task manually (because it will be so much quicker the next time a similar task comes up).

Coming Soon !!!

I've been thinking about doing this for some time now. I finally broke down and paid for some server space. I don't have much content there yet, but I've set up two domains; the first will be my personal website. This should be fun. It's a Linux host, so I plan to try my hand at some Perl and CGI programming. I've been playing around with Perl for a few months now (perhaps almost a year?).

Perl script for making a web page from a folder full of images (Part III): Navigable Site with CSS

Ok ... I've given it a bit more work and created a new script that uses an installation of ImageMagick on a Windows system to make thumbnail copies of every image and create a series of web pages (one for each image) with a navigation bar running from top to bottom on the left side of the page for moving from page to page. It uses CSS to mark up all of the pages.

You can see an example of the results on my nephew Owen's photo page, which I created with this Perl script.

The images you start with should be a reasonable size, because the full-sized images are what is displayed in the main body of each page (if they are too large they may crowd out other parts of the page). The main overhead in doing this is writing a heading and a description for each image.
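The page-per-image idea can be sketched roughly like this in Perl. This is an illustration of the layout described above, not the actual script: the file names, CSS class names, and page naming scheme are all my stand-ins.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build the XHTML for one page: a left-hand nav bar linking every page,
# and the current image full-size in the main area (styled via CSS classes).
# Class and file names here are illustrative, not from the original script.
sub build_page {
    my ($index, @images) = @_;
    my $nav = join "", map {
        my $n = $_ + 1;
        qq{<a href="page$n.html">$images[$_]</a><br />\n}
    } 0 .. $#images;
    return qq{<html><head><link rel="stylesheet" href="style.css" /></head><body>\n}
         . qq{<div class="nav">\n$nav</div>\n}
         . qq{<div class="main"><img src="$images[$index]" alt="$images[$index]" /></div>\n}
         . qq{</body></html>\n};
}

# Write one page per image in the current folder.
my @images = sort glob("*.jpg");
for my $i (0 .. $#images) {
    open my $fh, '>', "page" . ($i + 1) . ".html" or die "Cannot write page: $!";
    print $fh build_page($i, @images);
    close $fh;
}
```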

Perl script for making a web page from a folder full of images (Part II)

This version works with an installation of command-line ImageMagick on a Windows system. I realize this is a little bit limiting, and I should be doing the copying and mogrifying in Perl instead of through system calls. Like the first version, this one is meant to be run in a folder with a bunch of JPEG images. It is an improvement over the first version because it creates a copy of each image and turns it into a thumbnail. The thumbnail is what is displayed on the page, but the link points to the full-sized image. To run it, just type the name of the script and it will interactively prompt you for some answers (same as with the first version).

This script and both of the img2html*.pl scripts use cutoff values for sizing the thumbnails: run it and it makes the thumbnail images, sizing each one according to the cutoff values.
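The cutoff idea amounts to scaling each image so its longest side does not exceed the cutoff, preserving the aspect ratio. A minimal sketch (my own function, not the script's code); the `convert` call in the comment is standard command-line ImageMagick:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Scale (width, height) so the longest side fits within $cutoff,
# keeping the aspect ratio. Images already small enough are untouched.
sub thumb_size {
    my ($w, $h, $cutoff) = @_;
    my $long = $w > $h ? $w : $h;
    return ($w, $h) if $long <= $cutoff;   # already small enough
    my $scale = $cutoff / $long;
    return (int($w * $scale + 0.5), int($h * $scale + 0.5));
}

my ($tw, $th) = thumb_size(800, 600, 200);   # (200, 150)

# The resize itself can be handed off to command-line ImageMagick, e.g.:
#   system("convert", "photo.jpg", "-resize", "${tw}x${th}", "tn_photo.jpg");
print "${tw}x${th}\n";
```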

Perl script for making a web page from a folder full of images

I've been thinking about this one a lot lately (I wrote it a couple of weeks ago). Tonight I fixed it so that it creates valid XHTML (as tested with an online validator).

If you write a lot of pages with an identical layout, it could be tweaked to suit your needs.

What it does:
It essentially creates a page with a table full of images.

You run the program from a command prompt in a folder that has a group of images in it. Then it asks a series of questions ...
1. Enter an html file name (use an "htm" or "html" extension)
2. Enter the number of images per row
3. Enter a Cell Border Size (hit "Enter" for none)
4. Enter a Title
5. Enter a Major Heading
6. Enter a Minor Heading
7. Add a Link (y/n) ?
Question 7 enters a loop that runs until you type "n" or "N".
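The heart of the script — laying the images out as a table with a fixed number per row — can be sketched like this in Perl. The interactive prompts, title, and headings are omitted, and the function name is mine:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Lay a list of images out as an XHTML table, $per_row cells per row,
# with the given table border size.
sub image_table {
    my ($per_row, $border, @images) = @_;
    my $html = qq{<table border="$border">\n};
    while (@images) {
        my @row = splice @images, 0, $per_row;   # take the next row's worth
        $html .= "<tr>\n";
        $html .= qq{<td><img src="$_" alt="$_" /></td>\n} for @row;
        $html .= "</tr>\n";
    }
    return $html . "</table>\n";
}

# 7 images at 3 per row gives a 3-row table (the last row is short).
print image_table(3, 1, map { "img$_.jpg" } 1 .. 7);
```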

Good article on Regular Expressions

I've been trying to learn Perl by writing my own script to convert all my old HTML to XHTML. I came across this nice article that summarizes how to use regular expressions very nicely!

Regex Article by Chris Spruck.
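To give a flavor of the kind of substitutions such a conversion script needs, here are two I find myself using (illustrative examples, not the article's code; regexes handle simple tag fixes like these, though they are no substitute for a real HTML parser):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $html = '<P>Hello<BR>world</P>';

# Lowercase every tag name (XHTML requires lowercase element names).
$html =~ s/<(\/?)([A-Za-z]+)/'<' . $1 . lc($2)/ge;

# Self-close empty elements such as <br> and <hr>.
$html =~ s/<(br|hr)\s*>/<$1 \/>/gi;

print "$html\n";   # <p>Hello<br />world</p>
```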
