Numeric and date one-liners#

pperl works as a calculator, a stats tool, and a date-arithmetic engine. This chapter collects the recipes that turn numeric columns, timestamps, and IP addresses into useful output without a script file.

This chapter assumes you have read switches and progression; the -lane and -MList::Util=... idioms used below were introduced there.

One note on performance: large numeric reductions are among the cases where pperl’s JIT earns its cost. A for (1 .. 100_000_000) { $s += $_ } accumulation runs as compiled machine code once the interpreter warms up. You do not opt in — the JIT triggers automatically on arithmetic-heavy numeric loops.

Plain-arithmetic one-liners#

pperl -E 'say 2 ** 64'                            # 1.84467440737096e+19
pperl -E 'say 2 ** 64 - 1'                        # same output — the 1 is lost (float)
pperl -MMath::BigInt -E 'say Math::BigInt->new(2)->bpow(64)'
                                                  # 18446744073709551616

pperl -E 'say 7 / 3'                              # 2.33333333333333
pperl -E 'say int(7 / 3)'                         # 2
pperl -E 'say 7 % 3'                              # 1
pperl -E 'say sqrt(2)'                            # 1.4142135623731
pperl -E 'say atan2(1, 1) * 4'                    # 3.14159265358979 (pi)

sprintf controls output width and precision:

pperl -E 'printf "%.4f\n", 22/7'                  # 3.1429
pperl -E 'printf "%10.2f\n", 42.5'                #      42.50
pperl -E 'printf "%08d\n", 42'                    # 00000042
pperl -E 'printf "%x %o %b\n", 255, 255, 255'     # ff 377 11111111

Sums, minima, maxima, averages#

List::Util ships with Perl 5 — no install needed.

Per line#

pperl -MList::Util=sum -lane 'print sum @F'                data.txt
pperl -MList::Util=min -lane 'print min @F'                data.txt
pperl -MList::Util=max -lane 'print max @F'                data.txt

# Per-line mean
pperl -MList::Util=sum -lane 'print sum(@F) / @F'          data.txt

Across all lines#

# Total sum
pperl -MList::Util=sum -lane '$s += sum @F; END { print $s }' data.txt

# Running min / max using // (defined-or)
pperl -MList::Util=min -lane 'print min($m = min($m // (), @F)) if 0; $m = min($m // (), @F); END { print $m }' data.txt
pperl -MList::Util=max -lane '$m = max($m // (), @F); END { print $m }' data.txt

# Mean of one column (third), ignoring blank lines
pperl -lane 'next unless /\S/; $s += $F[2]; $n++;
             END { printf "%.4f\n", $s / $n }' data.txt

$m // () evaluates to an empty list when $m is undefined, so the first line does not feed an undef into min or max. See perlop for //.

Sum a single column#

# Sum of column 3 (zero-indexed: $F[2])
pperl -lane '$s += $F[2]; END { print $s }' data.txt

# Negative or zero entries only
pperl -lane '$s += $F[2] if $F[2] <= 0; END { print $s }' data.txt

Standard deviation#

pperl -lane '
    $n += @F;
    $s += $_, $s2 += $_ ** 2 for @F;
    END {
        my $mean = $s / $n;
        printf "mean=%.4f sd=%.4f\n",
            $mean, sqrt($s2 / $n - $mean ** 2);
    }
' data.txt

For anything beyond mean+sd, reach for Statistics::Descriptive:

pperl -MStatistics::Descriptive -lane '
    BEGIN { our $stat = Statistics::Descriptive::Full->new }
    $stat->add_data(@F);
    END { printf "median=%.2f mean=%.2f sd=%.2f\n",
          $stat->median, $stat->mean, $stat->standard_deviation }
' data.txt

Random numbers and sampling#

# 10 random integers in [5, 15)
pperl -E 'say join ",", map { int(rand(10)) + 5 } 1 .. 10'

# 8-character random lowercase password
pperl -E 'say join "", map { ("a".."z")[rand 26] } 1 .. 8'

# 8-character random alphanumeric
pperl -E 'say join "", map { ("a".."z", 0..9)[rand 36] } 1 .. 8'

# Random UUID (using Data::UUID)
pperl -MData::UUID -E 'say Data::UUID->new->create_str'

# Shuffle the fields of each line
pperl -MList::Util=shuffle -lane 'print join " ", shuffle @F' data.txt

# Pick one random line from a file (reservoir sample, size 1)
pperl -ne 'rand($.) < 1 && ($pick = $_); END { print $pick }' file.txt

The reservoir trick is the right answer when the file is too large to slurp or its line count is unknown — every line has an equal probability of being the one printed, with exactly one pass.
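The same idea generalises to a sample of size k; a sketch for k = 3 (fed from seq here so it is self-contained):

```shell
# keep the first 3 lines unconditionally, then replace a random kept
# line with probability 3/$. — every line ends up in the sample with
# equal probability, in one pass
seq 1 100 | perl -ne '
    if (@r < 3)          { push @r, $_ }
    elsif (rand($.) < 3) { $r[rand 3] = $_ }
    END { print @r }
'
```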

Factorials, GCD, LCM, primes#

# Factorial
pperl -E '$f = 1; $f *= $_ for 1 .. 20; say $f'
pperl -MMath::BigInt -E 'say Math::BigInt->new(100)->bfac'

# GCD / LCM (built into Math::BigInt under bgcd / blcm)
pperl -MMath::BigInt=bgcd -lane 'print bgcd(@F)' nums.txt
pperl -MMath::BigInt=blcm -lane 'print blcm(@F)' nums.txt

# Euclid's algorithm for two numbers
pperl -le '$n=20; $m=35; ($m,$n) = ($n, $m % $n) while $n; print $m'

Primality:

# Regex-based primality test (Abigail). Input: one integer per line.
pperl -lne '(1 x $_) !~ /^1?$|^(11+?)\1+$/ && print "$_ prime"' nums.txt

# Reliable for large numbers: use Math::Prime::Util
pperl -MMath::Prime::Util=is_prime -lne '
    print "$_ prime" if is_prime($_)
' nums.txt

The regex trick is a curiosity; Math::Prime::Util is the serious answer.

IP ↔ integer#

# Dotted-quad → 32-bit integer
pperl -MSocket -le 'print unpack "N", inet_aton "127.0.0.1"'
# 2130706433

# Integer → dotted-quad
pperl -MSocket -le 'print inet_ntoa pack "N", 2130706433'
# 127.0.0.1

unpack "N" reads four bytes as a big-endian 32-bit unsigned integer. The matching pack "N" writes one.

Date and time#

Today, now, epoch#

pperl -E 'say time'                               # epoch seconds
pperl -E 'say scalar localtime'                   # Wed Apr 22 14:31:07 2026
pperl -E 'say scalar gmtime'                      # same, UTC
pperl -MPOSIX=strftime -E 'say strftime "%F %T", localtime'
                                                  # 2026-04-22 14:31:07

strftime is the clean way to format; localtime in list context returns (sec, min, hour, mday, mon, year, wday, yday, isdst).
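Two classic traps in that list: the year counts from 1900 and the month is zero-based. A sketch building today's ISO date by hand from the list fields:

```shell
# $t[5] is years since 1900, $t[4] is month 0..11 — both need adjusting
perl -E 'my @t = localtime;
         printf "%04d-%02d-%02d\n", $t[5] + 1900, $t[4] + 1, $t[3]'
```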

Date arithmetic#

POSIX::mktime normalises out-of-range fields — day 32 becomes day 1 of the next month, and month 12 (the fields are zero-based, so December is 11) rolls into January of the next year — so date arithmetic means “adjust a field, mktime, strftime”.

# Yesterday's date
pperl -MPOSIX=strftime,mktime -E '
    my @t = localtime;
    $t[3] -= 1;
    say strftime "%F", localtime mktime @t
'

# 30 days ago
pperl -MPOSIX=strftime,mktime -E '
    my @t = localtime;
    $t[3] -= 30;
    say strftime "%F", localtime mktime @t
'

# First Monday of next month
pperl -MPOSIX=strftime,mktime -E '
    my @t = localtime;
    $t[3] = 1; $t[4]++;             # first of next month
    my $epoch = mktime @t;
    my @nm = localtime $epoch;
    $epoch += ((8 - $nm[6]) % 7 || 7) * 86400 if $nm[6] != 1;
    # ^ if not already Monday, advance to next Monday
    say strftime "%F", localtime $epoch
'

For anything more elaborate (time zones, business-day math, ISO weeks), reach for DateTime:

pperl -MDateTime -E '
    my $d = DateTime->now->subtract(days => 30);
    say $d->strftime("%F %T %Z")
'

Timestamps in logs#

# Sum the durations (last field, milliseconds) of lines matching "slow"
pperl -lane '$s += $F[-1] if /slow/; END { print $s, " ms total" }' app.log

# Count events per hour (assumes ISO timestamp first field)
pperl -lane '
    (my $hour = $F[0]) =~ s/:\d{2}:\d{2}.*//;
    $h{$hour}++;
    END { print "$_ $h{$_}" for sort keys %h }
' events.log

CSV and TSV#

For TSV whose fields contain no embedded tabs, -F'\t' is fine. Real CSV — quoted fields, embedded commas and newlines — needs Text::CSV:

# Sum the third column of a properly quoted CSV
pperl -MText::CSV -E '
    my $csv = Text::CSV->new({ binary => 1 });
    my $total = 0;
    while (my $row = $csv->getline(*STDIN)) {
        $total += $row->[2];
    }
    say $total
' < data.csv

# Re-emit a CSV with only selected columns
pperl -MText::CSV -E '
    my $csv = Text::CSV->new({ binary => 1, eol => "\n" });
    while (my $row = $csv->getline(*STDIN)) {
        $csv->print(*STDOUT, [ @{$row}[0, 2, 4] ]);
    }
' < data.csv > trimmed.csv

The binary => 1 option lets fields carry embedded newlines and arbitrary bytes; the eol => "\n" option makes print append a newline to each row.

JSON#

JSON::PP ships with Perl; Cpanel::JSON::XS is faster if it is installed.

# Extract the "name" field from each line of NDJSON
pperl -MJSON::PP -lne '
    my $o = JSON::PP->new->decode($_);
    print $o->{name};
' events.ndjson

# Convert each line of TSV to JSON
pperl -MJSON::PP -F'\t' -lane '
    print JSON::PP->new->encode({ id => $F[0], name => $F[1] })
' data.tsv

# Pretty-print a single JSON document
pperl -MJSON::PP -0777 -ne '
    print JSON::PP->new->pretty->encode(JSON::PP->new->decode($_))
' small.json

For large JSON, streaming parsers (JSON::Streaming::Reader) beat slurping; single-document files under a few hundred megabytes are fine with the above.

Base64, URL encoding, hex#

# Base64 encode/decode (of a whole file)
pperl -MMIME::Base64 -0777 -ne 'print encode_base64($_)' file
pperl -MMIME::Base64 -0777 -ne 'print decode_base64($_)' file.b64

# URL encode/decode
pperl -MURI::Escape -lne 'print uri_escape($_)'   < urls.txt
pperl -MURI::Escape -lne 'print uri_unescape($_)' < encoded.txt

# Hex ↔ integer
pperl -E 'printf "%x\n", 255'                     # ff
pperl -E 'say hex "ff"'                           # 255

# Hex dump of a file (like xxd -p)
pperl -0777 -ne 'print unpack "H*", $_' file.bin

# Reverse: hex dump → bytes
pperl -0777 -ne 'chomp; print pack "H*", $_' file.hex > file.bin

Find out more#

  • aliases — wrap the recipes you use weekly into shell functions with positional arguments.

  • progression — the reduction idioms these numeric recipes build on.

  • sprintf, printf — formatted output.

  • time, localtime, gmtime — the built-ins underneath the date recipes.