Minimal Perl
For UNIX/Linux People
Tim Maher
Foreword by Dr. Damian Conway
  • October 2006
  • ISBN 9781932394504
  • 504 pages

If you are a Unix/Linux user and wish to learn Perl, I recommend this book.

George Wooley

Perl is a complex language that can be difficult to master. Perl advocates boast that "There's More Than One Way To Do It", but do you really want to learn several ways of saying the same thing to a computer?

To make Perl more accessible, Dr. Tim Maher has over the years designed and taught an essential subset of the language that is smaller, yet practical and powerful. With this engaging book you can now benefit from "Minimal Perl" even if all you know about Unix is grep.

In Minimal Perl, you will learn how to write simple Perl commands (many just one-liners) that go far beyond the limitations of standard Unix utilities, whether you run Linux, Mac OS X, or another Unix variant. You'll then acquire the more advanced Perl skills used in scripts by capitalizing on your knowledge of related Shell resources. Sprinkled throughout are many Unix-specific Perl tips.

Table of Contents




about this book

about the cover illustration

list of tables

Part 1 Minimal Perl: for UNIX and Linux Users

1. Introducing Minimal Perl

1.1. A visit to Perlistan

1.1.1. Sometimes you need a professional guide

1.2. Perl can be simple

1.3. About Minimal Perl

1.3.1. What Minimal Perl isn’t

1.3.2. What Minimal Perl is

1.4. Laziness is a virtue

1.5. A minimal dose of syntax

1.5.1. Terminating statements with semicolons

1.6. Writing one-line programs

1.6.1. Balancing simplicity and readability

1.6.2. Implementing simple filters

1.7. Summary

2. Perl essentials

2.1. Perl’s invocation options

2.1.1. One-line programming: -e

2.1.2. Enabling warnings: -w

2.1.3. Processing input: -n

2.1.4. Processing input with automatic printing: -p

2.1.5. Processing line-endings: -l

2.1.6. Printing without newlines: printf

2.1.7. Changing the input record separator: -0digits

2.2. Using variables

2.2.1. Using special variables

2.2.2. Using the data variable: $_

2.2.3. Using the record-number variable: $.

2.2.4. Employing user-defined variables

2.3. Loading modules -M

2.4. Writing simple scripts

2.4.1. Quoting techniques

2.4.2. True and False values

2.4.3. Handling switches: -s

2.4.4. Using warn and die

2.4.5. Using logical and, logical or

2.4.6. Programming with BEGIN and END blocks

2.4.7. Loading modules with use

2.5. Additional special variables

2.5.1. Employing I/O variables

2.5.2. Exploiting formatting variables

2.6. Standard option clusters

2.6.1. Using aliases for common types of Perl commands

2.7. Constructing programs

2.7.1. Constructing an output-only one-liner

2.7.2. Constructing an input/output script

2.8. Summary

Directions for further study

3. Perl as a (better) grep command

3.1. A brief history of grep

3.2. Shortcomings of grep

3.2.1. Uncertain support for metacharacters

3.2.2. Lack of string escapes for control characters

3.2.3. Comparing capabilities of greppers and Perl

3.3. Working with the matching operator

3.3.1. The one-line Perl grepper

3.4. Understanding Perl’s regex notation

3.5. Perl as a better fgrep

3.6. Displaying the match only, using $&

3.7. Displaying unmatched records (like grep -v)

3.7.1. Validating data

3.7.2. Minimizing typing with shortcut metacharacters

3.8. Displaying filenames only (like grep -l)

3.9. Using matching modifiers

3.9.1. Ignoring case (like grep -i)

3.10. Perl as a better egrep

3.10.1. Working with cascading filters

3.11. Matching in context

3.11.1. Paragraph mode

3.11.2. File mode

3.12. Spanning lines with regexes

3.12.1. Matching across lines

3.12.2. Using lwp-request

3.12.3. Filtering lwp-request output

3.13. Additional examples

3.13.1. Log-file analysis

3.13.2. A scripted grepper

3.13.3. Fuzzy matching

3.13.4. Web scraping

3.14. Summary

Directions for further study

4. Perl as a (better) sed command

4.1. A brief history of sed

4.2. Shortcomings of sed

4.3. Performing substitutions

4.3.1. Performing line-specific substitutions: sed

4.3.2. Performing line-specific substitutions: Perl

4.3.3. Performing record-specific substitutions: Perl

4.3.4. Using backreferences and numbered variables in substitutions

4.4. Printing lines by number

4.4.1. Printing lines by number: sed

4.4.2. Printing lines by number: Perl

4.4.3. Printing records by number: Perl

4.5. Modifying templates

4.6. Converting special characters

4.7. Editing files

4.7.1. Editing with commands

4.7.2. Editing with scripts

4.7.3. Safeguarding in-place editing

4.8. Converting to lowercase or uppercase

4.8.1. Quieting spam

4.9. Substitutions with computed replacements

4.9.1. Converting miles to kilometers

4.9.2. Substitutions using function results

4.10. The sed to Perl translator

4.11. Summary

Directions for further study

5. Perl as a (better) awk command

5.1. A brief history of AWK

5.2. Comparing basic features of awk and Perl

5.2.1. Pattern-matching capabilities

5.2.2. Special variables

5.2.3. Perl’s variable interpolation

5.2.4. Other advantages of Perl over AWK

5.2.5. Summary of differences in basic features

5.3. Processing fields

5.3.1. Accessing fields

5.3.2. Printing fields

5.3.3. Differences in syntax for print

5.3.4. Using custom field separators in Perl

5.4. Programming with Patterns and Actions

5.4.1. Combining pattern matching with field processing

5.4.2. Extracting data from tables

5.4.3. Accessing cell data using array indexing

5.5. Matching ranges of records

5.5.1. Operators for single- and multi-record ranges

5.5.2. Matching a range of dates

5.5.3. Matching multiple ranges

5.6. Using relational and arithmetic operators

5.6.1. Relational operators

5.6.2. Arithmetic operators

5.7. Using built-in functions

5.7.1. One-liners that use functions

5.7.2. The legend of nexpr

5.7.3. How the nexpr* programs work

5.8. Additional examples

5.8.1. Computing compound interest: compound_interest

5.8.2. Conditionally pluralizing nouns: compound_interest2

5.8.3. Analyzing log files: scan4oops

5.9. Using the AWK-to-Perl translator: a2p

5.9.1. Tips on using a2p

5.10. Summary

Directions for further study

6. Perl as a (better) find command

6.1. Introducing hybrid find / perl programs

6.2. File testing capabilities of find vs. Perl

6.2.1. Augmenting find with Perl

6.3. Finding files

6.3.1. Finding files by name matching

6.3.2. Finding files by pathname matching

6.4. Processing filename arguments

6.4.1. Defending against grep’s messes

6.4.2. Recursive grepping

6.4.3. Perl as a generalized argument pre-processor

6.5. Using find | xargs vs. Perl alternatives

6.5.1. Using Perl for reliable timestamp sorting

6.5.2. Dealing with multi-word filenames

6.6. find as an argument pre-processor for Perl

6.7. A Unix-like, OS-portable find command

6.7.1. Making the most of find2perl

6.7.2. Helping non-Unix friends with find2perl

6.8. Summary

Directions for further study

Part 2 Minimal Perl: for UNIX and Linux Shell Programmers

7. Built-in functions

7.1. Understanding and managing evaluation context

7.1.1. Determinants and effects of evaluation context

7.1.2. Making use of evaluation context

7.2. Programming with functions that generate or process scalars

7.2.1. Using split

7.2.2. Using localtime

7.2.3. Using stat

7.2.4. Using chomp

7.2.5. Using rand

7.3. Programming with functions that process lists

7.3.1. Comparing Unix pipelines and Perl functions

7.3.2. Using sort

7.3.3. Using grep

7.3.4. Using join

7.3.5. Using map

7.4. Globbing for filenames

7.4.1. Tips on globbing

7.5. Managing files with functions

7.5.1. Handling multi-valued return codes

7.6. Parenthesizing function arguments

7.6.1. Controlling argument-gobbling functions

7.7. Summary

Directions for further study

8. Scripting techniques

8.1. Exploiting script-oriented functions

8.1.1. Defining defined

8.1.2. Exiting with exit

8.1.3. Shifting with shift

8.2. Pre-processing arguments

8.2.1. Accommodating non-filename arguments with implicit loops

8.2.2. Filtering arguments

8.2.3. Generating arguments

8.3. Executing code conditionally with if/else

8.3.1. Employing if/else vs. and/or

8.3.2. Mixing branching techniques: The cd_report script

8.3.3. Tips on using if/else

8.4. Wrangling strings with concatenation and repetition operators

8.4.1. Enhancing the most_recent_file script

8.4.2. Using concatenation and repetition operators together

8.4.3. Tips on using the concatenation operator

8.5. Interpolating command output into source code

8.5.1. Using the tput command

8.5.2. Grepping recursively: The rgrep script

8.5.3. Tips on using command interpolation

8.6. Executing OS commands using system

8.6.1. Generating reports

8.6.2. Tips on using system

8.7. Evaluating code using eval

8.7.1. Using a Perl shell: The psh script

8.7.2. Appreciating a multi-faceted Perl grepper: The preg script

8.8. Summary

Directions for further study

9. List variables

9.1. Using array variables

9.1.1. Initializing arrays with piecemeal assignments and push

9.1.2. Understanding advanced array indexing

9.1.3. Extracting fields in a friendlier fashion

9.1.4. Telling fortunes: The fcookie script

9.1.5. Tips on using arrays

9.2. Using hash variables

9.2.1. Initializing hashes

9.2.2. Understanding advanced hash indexing

9.2.3. Understanding the built-in %ENV hash

9.2.4. Printing hashes

9.2.5. Using %ENV in place of switches

9.2.6. Obtaining uniqueness with hashes

9.2.7. Employing a hash as a simple database: The user_lookup script

9.2.8. Counting word frequencies in web pages: The count_words script

9.3. Comparing list generators in the Shell and Perl

9.3.1. Filename generation/globbing

9.3.2. Command substitution/interpolation

9.3.3. Variable substitution/interpolation

9.4. Summary

Directions for further study

10. Looping facilities

10.1. Looping facilities in the Shell and Perl

10.2. Looping with while / until

10.2.1. Totaling numeric arguments

10.2.2. Reducing the size of an image

10.2.3. Printing key/value pairs from a hash using each

10.2.4. Understanding the implicit loop

10.3. Looping with do while / until

10.3.1. Prompting for input

10.4. Looping with foreach

10.4.1. Unlinking files: the rm_files script

10.4.2. Reading a line at a time

10.4.3. Printing a hash

10.4.4. Demystifying acronyms: The expand_acronyms script

10.4.5. Reducing image sizes: The compress_image2 script

10.5. Looping with for

10.5.1. Exploiting for’s support for indexing: the raffle script

10.6. Using loop-control directives

10.6.1. Nesting loops within loops

10.6.2. Enabling loop-control directives in bottom-tested loops

10.6.3. Prompting for input

10.6.4. Enhancing loops with continue blocks: the confirmation script

10.7. The CPAN’s select loop for Perl

10.7.1. Avoiding the re-invention of the "choose-from-a-menu" wheel

10.7.2. Monitoring user activity: the show_user script

10.7.3. Browsing man pages: the perlman script

10.8. Summary

Directions for further study

11. Subroutines and variable scoping

11.1. Compartmentalizing code with subroutines

11.1.1. Defining and using subroutines

11.1.2. Understanding use strict

11.2. Common problems with variables

11.2.1. Clobbering variables: The phone_home script

11.2.2. Masking variables: The 4letter_word script

11.2.3. Tips on avoiding problems with variables

11.3. Controlling variable scoping

11.3.1. Declaring variables with my

11.3.2. Declaring variables with our

11.3.3. Declaring variables with local

11.3.4. Introducing the Variable Scoping Guidelines

11.4. Variable Scoping Guidelines for complex programs

11.4.1. Enable use strict

11.4.2. Declare user-defined variables and define their scopes

11.4.3. Pass data to subroutines using arguments

11.4.4. Localize temporary changes to built-in variables with local

11.4.5. Employ user-defined loop variables

11.4.6. Applying the Guidelines: the phone_home2 script

11.5. Reusing a subroutine

11.6. Summary

Directions for further study

12. Modules and the CPAN

12.1. Creating modules

12.1.1. Using the Simple Module Template

12.1.2. Creating a module:

12.1.3. Testing a new module

12.2. Managing modules

12.2.1. Identifying the modules that you want

12.2.2. Determining whether you have a certain module

12.2.3. Installing modules from the CPAN

12.3. Using modules

12.3.1. Business::UPS: the ups_shipping_price script

12.3.3. Shell::POSIX::Select: the menu_ls script

12.3.5. CGI: the survey.cgi script

12.3.6. Tips on using Object-Oriented modules

12.4. Summary

Directions for further study


Appendix A: Perl special variables cheatsheet

Appendix B: Guidelines for parenthesizing code



What's inside

  • A simpler, yet still powerful Perl
  • Development of concise commands and flexible scripts
  • How to package custom software in reusable modules
  • How to exploit CPAN modules to avoid reinventing the wheel
  • Language features in tabular summaries
  • 100+ reusable programs for: system administration, web development (HTML, CGI, Forms), networking, databases, finance, text analysis, and more

About the reader

This book is especially suitable for system administrators, webmasters, and software developers.

About the author

Dr. Tim Maher's multi-decade career as a software professional includes stints at U.C. Berkeley as the Humanities Computer Consultant, at the University of Utah as a Professor of Computer Science, and at AT&T, Sun Microsystems, Hewlett Packard, and Consultix as a Course Developer/Lecturer on operating systems and programming languages. Along the way, he's taught UNIX, Linux, or Perl to many thousands of individuals—ranging from technology-phobic poets to corporate IT engineers. Tim founded Seattle's Perl Users Group, and served as its leader for six years. Many of its 400+ members contributed useful ideas to this book. In his spare time, he enjoys the natural beauty of the Pacific Northwest, where he lives.


This book is not perl tapas. It is a survival tool.

William M. Julien, Fortune 100 Company

No-nonsense and practical, yet with wit and charm. A joy to read.

Dan Sanderson, Software Developer

Shows style, not just facts; valuable.

Brian Downs, Lucent Technologies

Brilliant, never tedious, highly recommended!

Jon Allen, Maintainer of

You could have chosen no better primer than this book.

Damian Conway, from the Foreword