### PWC 171 › Odd Abundant Numbers

This post is part of a series on Mohammad Anwar’s excellent Weekly Challenge, where hackers submit solutions in Perl, Raku, or any other language, to two different challenges every week. (It’s a lot of fun, if you’re into that sort of thing.)

This week’s tasks include a simple number theory calculation, and a language feature. Here’s a look at task 1.

## Odd Abundant Numbers

Abundant numbers are numbers where the sum of the proper divisors is greater than the number. The first odd abundant number is 945. 945’s proper divisors are 1, 3, 5, 7, 9, 15, 21, 27, 35, 45, 63, 105, 135, 189, and 315. (Recall that we exclude 945 itself as a proper divisor.)

The sum of those divisors is 975, so 945 is an abundant number.

Equivalently, a number is abundant if the sum of all divisors is greater than twice the number. We’ll see both variations in the solutions below.
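To see the equivalence concretely, here is a quick Python check (my own illustration, not part of the solutions below) using 945:

```python
# Check both abundance tests against 945, the first odd abundant number.
def proper_divisors(n):
    return [d for d in range(1, n) if n % d == 0]

n = 945
proper_sum = sum(proper_divisors(n))  # excludes n itself
total_sum = proper_sum + n            # sum of ALL divisors, including n

print(proper_sum)          # 975, which is greater than 945
print(total_sum > 2 * n)   # True: the same test, phrased via all divisors
```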

I’m going to take you through a few different approaches. Since I don’t like repeating myself, let’s get some foreshadowing for task #2 going, by using a first class function:

```perl
sub n_odd_abundant(&$) {
    my ($is_abundant, $N) = @_;

    my @r;
    for (my $n = 3; $N; $n += 2) {
        if ($is_abundant->($n)) {
            push @r, $n;
            $N--;
        }
    }

    @r;
}
```


The above code takes in a code ref (`$is_abundant`) and a limit (`$N`). We loop over all odd numbers, pushing any that pass the `$is_abundant` check onto our result. Although the above version suited my purposes better, it’s also possible to do this with an iterator, to avoid having to store the intermediate results:

```perl
sub odd_abundant_iterator(&) {
    my $is_abundant = shift;

    my $n = 1;
    sub {
        do { $n += 2 } until $is_abundant->($n);
        $n;
    }
}
```

Now that we have a framework for gathering abundant numbers, let’s try it a few different ways.

### Brute force

The first way you might think to try is simply to brute force your way through every divisor of every number. This is O(n) for each number, and takes over a second to find the first 20 numbers:

```perl
use List::Util qw< sum >;

sub n_abundant_naive {
    n_odd_abundant {
        my $n = shift;
        $n < sum grep { $n % $_ == 0 } 1..$n/2;
    } $_[0];
}
```

### Using sqrt

Stopping at $$\sqrt{n}$$ perhaps unsurprisingly brings the asymptotic time down to O($$\sqrt{n}$$) per number. Since divisors come in pairs, we can simply calculate the other divisor and avoid looping through most of the numbers:

```perl
sub n_abundant_sqrt {
    n_odd_abundant {
        my $n    = shift;
        my $sqrt = sqrt($n);

        my $sum = sum map  { $_, $n / $_ }
                      grep { $n % $_ == 0 } 1..$sqrt;
        $sum -= $sqrt if $sqrt == int $sqrt;

        2*$n < $sum;
    } $_[0];
}
```


It might not seem like a huge change, but the above code runs about 27 times faster than the naïve version, when asked to find the first 20 odd abundant numbers.
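The divisor-pairing trick is not Perl-specific; here is the same idea as a small Python sketch (the function name is my own):

```python
import math

def divisor_sum_sqrt(n):
    # Pair each divisor d <= sqrt(n) with its partner n // d.
    total = 0
    root = math.isqrt(n)
    for d in range(1, root + 1):
        if n % d == 0:
            total += d + n // d
    # A perfect square would count its root twice; drop one copy.
    if root * root == n:
        total -= root
    return total

print(divisor_sum_sqrt(945))  # 1920, i.e. 975 + 945
```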

### Using Math::Prime::Util’s divisor_sum

Just for fun, our old friend Math::Prime::Util has a function that seems perfect for our needs: divisor_sum. It does what it says on the tin: it calculates the sum of the divisors of whatever number we give it.

```perl
use Math::Prime::Util qw< divisor_sum >;

sub n_abundant_mpu {
    n_odd_abundant {
        my $n   = shift;
        my $sum = divisor_sum($n);
        2*$n < $sum;
    } $_[0];
}
```


This one is another 12 times faster than the sqrt solution, and 356 times faster than the naïve method. Great, right? Well, not so fast. Under the hood, the divisor_sum function is still finding all divisors for every number, so we’re still looking at $$O(n \sqrt{n})$$ time overall. It’s only faster because of very tightly optimized C code.

### Sieve

We can still do quite a bit better by realizing that since we’re looking for a whole bunch of abundant numbers, we’re repeating the same calculations over and over again, for every multiple of a number we’ve seen already. So as long as we’re willing to tweak the requirement slightly to find all abundant numbers below a given limit (although we’ll see how we can still accommodate the old calling syntax if we really want to), we can do much better:

```perl
sub n_abundant_sieve {
    my $lim = shift;

    my @r;
    my @div_sum; # Sum of divisors for each number
    for my $n (1..$lim) {
        $div_sum[$n*$_] += $n for 1..$lim/$n+1;
        push @r, $n if $n % 2 and 2*$n < $div_sum[$n];
    }

    @r;
}
```


When called as n_abundant_sieve(10000), this returns the first 23 odd abundant numbers, about twice as fast as the sqrt version returns the first 20. That’s because this algorithm runs in $$O(n \log{n})$$ time, which grows significantly slower than the $$O(n \sqrt{n})$$ of the previous solutions.
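For comparison, here is the sieve idea sketched in Python (my own translation of the approach, not the post’s code):

```python
def odd_abundant_below(lim):
    # div_sum[m] accumulates the sum of ALL divisors of m,
    # because every n adds itself to each of its multiples.
    div_sum = [0] * (lim + 1)
    for n in range(1, lim + 1):
        for m in range(n, lim + 1, n):
            div_sum[m] += n
    return [n for n in range(1, lim + 1, 2) if div_sum[n] > 2 * n]

print(len(odd_abundant_below(10000)))  # 23 odd abundant numbers below 10000
```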

#### Sneaking up on the limit

In number theory circles, finding all numbers below a limit is usually just fine, but if we wanted to be a stickler for the parameters of the challenge and return only the first 20, we could simply call n_odd_abundant_sieve multiple times, doubling the limit every time, until we have at least 20 results. I wouldn’t bother, though.
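If you did want to be that stickler, the doubling loop could look something like this Python sketch (using a simple proper-divisor check for brevity; all names here are my own):

```python
def sum_proper_divisors(n):
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

def first_n_odd_abundant(count, limit=1000):
    # Double the search limit until it yields at least `count` results.
    while True:
        found = [n for n in range(1, limit, 2) if sum_proper_divisors(n) > n]
        if len(found) >= count:
            return found[:count]
        limit *= 2

print(first_n_odd_abundant(3))  # [945, 1575, 2205]
```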

You might be wondering, “how does this perform compared to the MPU version?”

For small values (the first 23 numbers in the sequence), the MPU version is faster by about 200%. However, when given more realistic limits, our better algorithm pulls ahead. It breaks even at about 100 numbers on my machine. By the time we ask for 150 numbers, the sieve is already 60% faster.

Benchmarks are an essential tool when you need to write high-performance code. Knowing that one algorithm is asymptotically faster than another doesn’t always translate to real code; hidden constants in the implementation can really make a difference. What’s also really important, however, is knowing your requirements. If we’re only ever going to need to find fewer than 100 abundant numbers, the MPU version might be a worthy choice. However, if we want to push beyond that, then the better algorithm wins.

Of course, translating the sieve algorithm to tight C code would blow both of them out of the water at any value of n, since the sieve version grows strictly slower than the MPU version.

## Raku

Here’s a quick port to Raku. If you’re only concerned with outputting the results, there’s no point in storing them first:

```raku
sub MAIN(Int $lim = 10000) {
    my @div_sum; # Sum of divisors for each number
    for 1..$lim -> $n {
        @div_sum[$n*$_] += $n for 1..$lim/$n+1;
        $n.say if $n % 2 and 2*$n < @div_sum[$n];
    }
}
```


### PWC 171 › First Class Functions


Task #2 this week is no doubt about first class functions, but gets more specific, asking us to have a go at function composition:

> Create sub compose($f, $g), which takes in two parameters $f and $g as subroutine refs, and returns a subroutine ref, i.e. compose($f, $g)->($x) = $f->($g->($x))

Before we get too far ahead of ourselves, let’s briefly review what these terms mean.

## First Class Functions

A language that supports first class functions simply allows you to pass functions around like any other variable. Passing anonymous functions (also known as lambda functions) around is usually included in this definition as well. Perl makes this easy:

```perl
my $add2 = sub { $_[0] + 2 }; # Returns a sub that adds 2 to its argument
$add2->(5);                   # Returns 7
```

That example may not be the most compelling, but for some motivation, look no farther than Perl’s map or grep builtins. When you call something like `map { $_ * $_ } 1..10` to get the first ten square numbers, that block `{ $_ * $_ }` is an anonymous subroutine. First class functions are incredibly useful, and deserve more discussion than I can cram into this blog post, so perhaps I’ll do them justice with a longer dedicated post in the future.

## Function Composition

Function composition is a distinct concept in mathematics. In computer science, it depends on first class functions, but is otherwise not related. Function composition, often denoted with the ∘ operator, takes two functions f and g and produces a new function:

$$h = g \circ f \text{ such that } h(x) = g(f(x))$$

The reason it’s usually written as g ∘ f is that in plain English, g follows f, because we are feeding the output of f into g. Of course, f and g are just symbols, so they can be swapped to match the task description with no issues:

$$h = f \circ g \text{ such that } h(x) = f(g(x))$$

## Perl

Now that we’ve gotten all of the pesky definitions out of the way, the code is … well, there’s hardly any code at all, really. Here’s a function that generates the composition h = f ∘ g:

```perl
sub comp {
    my ($f, $g) = @_;
    sub { $f->($g->(@_)) }
}
```

Here’s an example usage that calculates the sum of squares of a list of numbers:

```perl
use List::Util qw< sum0 >;

my $squares = sub { map { $_ * $_ } @_ };
my $h = comp( \&sum0, $squares );

say "The sum of squares for 1..10 = " . $h->(1..10); # 385
```

I chose to use List::Util‘s sum0 function so I could demonstrate how to pass in a reference to a named function. The $squares function shows how to use a variable. I could have also done this as an anonymous function:

```perl
my $h = comp( \&sum0, sub { map { $_ * $_ } @_ } );
```

## Raku

Raku’s first class function support is very good. In fact, the language was designed around higher order features like this, so there are some built-in helpers we can use, such as the composition operator. That’s right, we can use ∘ (or o) right in our code to do the function composition. I could stick this into a comp function like I did with the Perl example, but that seems less expressive to me.

```raku
my &sum    = sub { [+] @_ };
my &square = sub { @_ »*« @_ };
my &h = &sum ∘ &square;

say &h(1..10); # 385
```

First class functions open up endless possibilities in your code.

### PWC 170 › Primorial Numbers and Kronecker Products

We’re back with our noses to the mathematical grindstone this week, with two straightforward tasks.

## Task 1 › Primorial Numbers

The nth primorial number is simply the product of the first n primes: P(1) = 2, P(2) = 2×3 = 6, P(3) = 2×3×5 = 30, etc. P(0) is defined to be 1 as a special case. Since I’ve implemented various prime number generators before, I just went with Math::Prime::Util for this one, making use of the prime_iterator function:

```perl
use Math::Prime::Util qw< prime_iterator >;

my $it = prime_iterator;

say my $pri = 1; # P(0) = 1 by definition
say $pri *= $it->() for 1..$ARGV[0] // 10;
```


Easy.
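Without a prime iterator module handy, the same running product of primes is easy to sketch in Python with naive trial division (illustrative only):

```python
def primorials(count):
    # Yield P(0)..P(count) as a running product of successive primes.
    product, candidate = 1, 1
    yield product  # P(0) = 1 by definition
    for _ in range(count):
        candidate += 1
        # Advance to the next prime by trial division.
        while any(candidate % d == 0 for d in range(2, int(candidate**0.5) + 1)):
            candidate += 1
        product *= candidate
        yield product

print(list(primorials(5)))  # [1, 2, 6, 30, 210, 2310]
```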

## Task 2 › Kronecker Products

I haven’t worked with Kronecker products for quite a few years, so when I reviewed the definition, I thought it might be tricky to implement, and went to get a fresh caffeinated beverage. Much to my dismay, it was all over before my second sip.

Please do check that link for a more rigorous definition of Kronecker products. As a quick review, I will simply show an example, with ⨂ as the operator for the Kronecker product:

$$A = \begin{bmatrix}2 & 3 \\ 4 & 5 \end{bmatrix} \\ B = \begin{bmatrix}a & b \\ c & d \end{bmatrix}$$

$$A ⨂ B = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix} ⨂ \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 2B & 3B \\ 4B & 5B \end{bmatrix} = \begin{bmatrix} 2a & 2b & 3a & 3b \\ 2c & 2d & 3c & 3d \\ 4a & 4b & 5a & 5b \\ 4c & 4d & 5c & 5d \end{bmatrix}$$

To make this happen, I opted to loop over the total number of rows in A ⨂ B and use division and modulo arithmetic to determine which sub-row and -column in B we need. I use a triple-nested map to achieve the multiplications and orderings of the final matrix.

```perl
sub kronecker {
    my ($A, $B) = @_;

    map {
        my $i = $_;
        [
            map {
                my $aval = $_;
                map { $aval * $_ } $B->[$i % @$B]->@*;
            } $A->[$i / @$B]->@*
        ]
    } 0..(@$A * @$B)-1;
}
```
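The same division/modulo row arithmetic translates directly to other languages. Here is my own Python rendering, with a small numeric matrix standing in for the symbolic B above:

```python
def kronecker(A, B):
    # Row i of the product pairs row i // len(B) of A with row i % len(B) of B.
    rows_b = len(B)
    return [
        [a_val * b_val
         for a_val in A[i // rows_b]
         for b_val in B[i % rows_b]]
        for i in range(len(A) * rows_b)
    ]

A = [[2, 3], [4, 5]]
B = [[1, 10], [100, 1000]]
print(kronecker(A, B))
# [[2, 20, 3, 30], [200, 2000, 300, 3000], [4, 40, 5, 50], [400, 4000, 500, 5000]]
```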


```perl
    @fac == 1 ? $_[0] : home_prime(join '', @fac);
}
```

I like this solution for its expressiveness, but it’s about 20% slower than the following iterative version:

Continue reading “PWC 168 › Home Prime”

### PWC 167 › Circular Primes

Task #1 this week has us finding circular primes, 3 digits or more. Circular primes are numbers that remain prime through all of their rotations. For example, since all of the rotations of 113 (113, 131, and 311) are prime, 113 is a circular prime. (131 and 311 are circular primes as well, but based on the example given with this task, we are only to give the lowest circular prime in each group.) This would correspond with OEIS A016114.

I just wrote yet another Sieve of Eratosthenes in week 164, so this time I’ll just use Math::Prime::Util. The basic algorithm I came up with is rather naïve, but effective:

```
for each digit d in [3..∞)
    for each prime n with d digits
        for each unique rotation r
            next digit if circular[r] or ¬prime[r]
        circular[r] ← true
        say "r is circular"
```

Continue reading “PWC 167 › Circular Primes”

### PWC 167 › Lanczos Gamma Approximation

Let’s flex our mathematical legs a bit by implementing an approximation to the gamma function Γ(z). In this article, I’ll introduce what the gamma function is, what it’s for, and how to actually calculate it, using a well known approximation method.

This topic leans more heavily into mathematics than usual for this blog. Some of the concepts may be new to you, and I regret I cannot give them a proper treatment in a single article. I’ll do my best to give you the flavor of how everything works, and I have provided some resources at the end should you wish to learn more.

## What’s the gamma function? What’s it for?
You’re probably familiar with integer factorials already. Here’s a simple definition:

$$n! = 1 \times 2 \times 3 \times \cdots \times (n-2) \times (n-1) \times n$$

For example, $$5! = 1 \times 2 \times 3 \times 4 \times 5 = 120$$. However, that only works for positive integers. The gamma function lets us work with most positive and negative real numbers, and even complex numbers. For positive integers, the gamma function is very closely related to ordinary factorials:

$$\Gamma(z) = (z - 1)!$$

Why is it (z − 1)! instead of just z!? As H. M. Edwards puts it—in a footnote, no less:

Continue reading “PWC 167 › Lanczos Gamma Approximation”

### PWC 166 › K-Directory Diff

Task #2 this week is another one of mine. The idea is to take any number of pathnames on the command line (and we must support at least three), and output a side-by-side diff of the files that are different. This is a skin-deep analysis: there is no recursion, and no comparison of file sizes, modification times, or contents, although those would be worthy additions if someone wants to take it further. I will be implementing a fairly basic version, although I have a more complex version I’ve been using for years that I might be persuaded to release if there is interest.

## Data Design

I am going to have one main data structure, %dirs, that will be built up as we traverse the directories. It will hold all directory and file information, as well as a maxlen value for each directory, for ease of formatting columns later.
Here’s what %dirs might look like, given a very simple set of input directories:

```perl
%dirs = (
    dir_a => {
        files => {
            "Arial.ttf"      => 1,
            "Comic_Sans.ttf" => 1,
        },
        maxlen => 14,
    },
    dir_b => {
        files => {
            "Arial.ttf"       => 1,
            "Comic_Sans.ttf"  => 1,
            "Courier_New.ttf" => 1,
        },
        maxlen => 15,
    },
)
```

Continue reading “PWC 166 › K-Directory Diff”

### PWC 166 › Hexadecimal Words

This is another one of my suggested tasks, and another time you’re going to read about a personal story that connects me to a task. For many years, right back to the very first days I started hacking filesystems and writing kernel code, I’ve often needed to come up with 16- or 32-bit identifiers for things, usually represented in hexadecimal. And to this day, my repertoire of clever identifiers is more or less limited to 0xdeadbeef and 0xc0dedbad, and a few others. Let’s change that!

## The simple version

A very simple solution reads the lines, filters out anything that isn’t the right length or contains characters that can’t be substituted, and prints out the whole works:

```perl
use File::Slurper qw< read_lines >;

my $dict = $ARGV[0] // '../../../data/dictionary.txt';

say for map  { y/olist/01157/r }
        grep { /^[0-9a-folist]{2,8}$/ }
        read_lines($dict);
```


That already gives me some great results, like 0xd15ab1ed (“disabled”), and is nice and short. However, we didn’t come here for half measures. This version only returns single words, and doesn’t care how many substitutions it makes, so words like 0x57111e57 (“stillest”‽) are fair game.
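For comparison, the same filter-and-substitute pipeline can be sketched in Python (my own rendering; the substitution map o→0, l→1, i→1, s→7… rather, o→0, l→1, i→1, s→5, t→7 mirrors the y/olist/01157/ above):

```python
import re

# Keep 2-8 character words whose letters are valid hex digits after
# the substitutions o→0, l→1, i→1, s→5, t→7.
ALLOWED = re.compile(r'^[0-9a-folist]{2,8}$')
TABLE = str.maketrans('olist', '01157')

def hex_words(words):
    return [w.translate(TABLE) for w in words if ALLOWED.match(w)]

print(hex_words(['coffee', 'stillest', 'zebra']))  # ['c0ffee', '57111e57']
```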