PWC 259 › Bank Holidays and Line Parser

This post is part of a series on Mohammad Anwar’s excellent Weekly Challenge, where hackers submit solutions in Perl, Raku, or any other language, to two different challenges every week. (It’s a lot of fun, if you’re into that sort of thing.)

This week’s tasks have a little bit more meat on their bones, which I quite enjoyed. They are user-submitted, as well, and it’s always fun to see what people come up with when submitting tasks.

Task 1 – Banking Day Offset

This task comes from the mind of Lee Johnson. Here, we’re given the following inputs:

  • Number of days [$offset]
  • Start date [$start_date]
  • List of dates which are holidays (optional) [@holidays]

From that, we’re supposed to return the date that is $offset working days from the $start_date, ignoring weekends and @holidays. This is straightforward. I opted to use the core Perl module Time::Piece to get the day of the week. There are about n + 1 different ways to do that, though.

The function starts with some initialization:

sub bank_holiday_ofs {
    my ($start_date, $offset, @holidays) = @_;

    my $t = Time::Piece->strptime($start_date => $date_fmt) - 86400;

    my %holiday = map { $_ => 1 } @holidays;
    $offset++; # Account for today

The %holiday hash simply maps the dates to a true value so $holiday{$date} will be true iff $date is in @holidays. The $offset variable gets an extra kick to account for the current day.

From here, we loop until $offset is zero:

    while ($offset) {
        $t += 86400; # Advance day
        $offset-- unless $t->wday == 1 or $t->wday == 7 
                      or $holiday{ $t->strftime($date_fmt) };
    }

From years of frequent use, 86,400 has sufficient semantic meaning as “1 day” that I’m not bothered about the naughty magic number. If that’s not your style, Time::Seconds has a ONE_DAY constant.

The loop is simple. Add a day, and then subtract a day from $offset unless it’s a weekend or holiday.
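Putting those pieces together, here's a self-contained sketch of the whole sub. The excerpt above never shows $date_fmt, so I'm assuming an ISO-8601 '%Y-%m-%d' format, and the final return line is mine:

```perl
use strict;
use warnings;
use Time::Piece;

my $date_fmt = '%Y-%m-%d'; # Assumed; not shown in the excerpt above

sub bank_holiday_ofs {
    my ($start_date, $offset, @holidays) = @_;

    # Start one day early so the loop can count the start date itself
    my $t = Time::Piece->strptime($start_date => $date_fmt) - 86400;

    my %holiday = map { $_ => 1 } @holidays;
    $offset++; # Account for today

    while ($offset) {
        $t += 86400; # Advance day
        $offset-- unless $t->wday == 1 or $t->wday == 7
                      or $holiday{ $t->strftime($date_fmt) };
    }

    return $t->strftime($date_fmt);
}
```

For example, starting on Friday 2024-03-01 with a holiday on Tuesday 2024-03-05, three working days out lands on Thursday 2024-03-07.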

ch-1.pl source code

Task 2 – Line Parser

The second task, from Gabor Szabo, gives us a particular text record format that we must parse into a Perl hash. For example:

{%  id   field1="value1"    field2="value2"  field3=42 %}

Should become:

{
       name => id,
       fields => {
           field1 => value1,
           field2 => value2,
           field3 => 42,
       }
}

We are also required to handle escaped quotes (\") and (optionally) multiline tags. I wrote a pure-Perl parser that handles all of this. It’s easiest to think of in two parts:

Top level line parser

This is the part that takes in a line of text and decides what to do with it. First, let’s define our $Open and $Close tags:

    my ($O, $C) = (qr/^\s*\{\%\s*/, qr/\s*\%\}\s*$/); # Tokens gobble whitespace

I could have just done a simple {% and %} set, but I wanted to allow optional whitespace. A recurring theme you’ll see with my solution (and my parsers in general) is that I tend to be permissive with inputs, but precise with outputs.

Now I loop through each line of input (in this case, the __DATA__ block):

  for (<DATA>) {
      chomp;
  
      if ($id) {
          if (/${O}end$id${C}/) {
              $id{$id}{text} = @text > 1 ? [ @text ] : $text[0];
              @text = ();
              $id = undef;
          } else {
              push @text, $_
          }
      } 
      elsif (/${O}(?<id>\w+)\s+(?<fields>.+?)${C}/) {
          die "No end token found for <$id>" if $id and @text;
          $id = $+{id};
          die "Duplicate id <$id>" if exists $id{$id};
          $id{$id} = { name => $id, fields => parse_fields($+{fields}) };
      }
      else {
          die "Invalid line: <$_>";
      }
  }

The way this loop works is, if $id is defined, we’ve already seen an open {% tag and we’re expecting either a line of text, or the {% endid %} closing tag, so we look for those and handle them accordingly.

Otherwise, we expect to see a single line {% id key=value, ... %} record or the start of a multi-line record, so we look for that. We pass the key/value portion of the record to the parse_fields() sub, which we’ll look at next.

parse_fields(): The key/value (kv) parser

It might seem like parsing keys and values would be the easy part, and if not for Gabor’s requirement to handle escaped quotes, it might have been. There are several ways I could have tackled this, from tricky eval()s to full-blown grammars, but for my purposes here (and because I felt like it), I decided to implement a simple state machine.

Finite state machines can be described by a directed graph whose vertices contain the possible states the system can be in, and edges are the possible state transitions. To parse the kv pairs as described by this task, the following state machine will do the trick:

State diagram for parsing key/value pairs.

Note: To avoid a cluttered diagram, I’ve omitted arrows on most states that point to themselves, except for field_name and out. It so happens that every state in this particular system can have itself as the next state.

We effectively start from the out state, meaning we are outside of a key/value pair and are waiting for the start of the next key name. Once we see a word character (\w), we go to the field_name state and stay there until we’ve gobbled up all of the \w characters, and then we look for an equal sign. We similarly trundle through the states to find the start of the value (value_start), the value itself, then a comma, and back to out for the next field. We can stop at any time.

One way to implement simple state machines is to have a $state variable that you set to the name of the state you're in, and an if ... elsif ... chain to handle the state transitions. For more complicated or dynamic state machines, you might reach for one of the many CPAN modules for finite state machines (FSMs). But I wanted to show you how it can be done without any help.

My parse_fields() function iterates over the input string character by character. First, we have some top level variables to keep track of:

      my %fields;
      my $state = 'out'; # Outside of KV pair
      my $backslash = 0; # Substate for whether we're backslashed
      my $name = undef;  # Field name
      my $value = undef; # Field value
      my $expected_closing_quote; # If defined, value must end with this

Our output will be %fields. The $backslash variable actually captures a parallel state of whether the last character was a backslash (1), whether the current character was escaped (2), or whether neither of those things is true (0). I might have used named values instead, but this made sufficient sense to me.

I decided values could be optionally quoted, by single ('), double (") or nothing, but that the closing quote had to match the opening quote. So that’s what $expected_closing_quote keeps track of.

Now, here’s how we handle backslashes:

          $backslash = 0 if $backslash == 2;
          if ($backslash) {
              $_ = eval qq{"\\$_"}; # safe
              $backslash = 2;
          } elsif (/\\/) {
              $backslash = 1;
              next;
          }

I do use eval here, but you'll note it's done in a safe manner: the only thing ever evaluated is a double-quoted string containing a backslash followed by a single character. I could have built up my own hash of escape characters, but that would be error prone and require updating as the language evolves.

If the current character is a backslash, we just set $backslash and go to the next character. If the previous character was a backslash, we do the eval to get the unescaped character. We then set $backslash to 2, which is used when we’re looking for quotes, to avoid ending the value on an escaped quote.
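As a standalone illustration of that trick (not part of the solution proper), here's what the eval-based unescaping does with a few escape characters:

```perl
use strict;
use warnings;

# Let Perl's own double-quote rules turn an escape character into
# the character it represents, e.g. 'n' -> "\n"
for my $ch ('n', 't', '"') {
    my $unescaped = eval qq{"\\$ch"}; # evaluates the string "\n", "\t", ...
    printf "\\%s -> chr(%d)\n", $ch, ord $unescaped;
}
```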

Here’s what a typical state handler looks like:

          # Handle the value, with optional quotes and escape sequences
          elsif ($state eq 'value_start') {
              next if /\s/;
              if (/['"]/ and not $backslash) {
                  $expected_closing_quote = $_;
                  $state = 'value';
                  $value = '';
                  next;
              }
              $value = $_;
              $state = 'value';
          }

That is the value_start state, which skips leading whitespace and notes whether the value is quoted. The value state that follows it holds the logic I just talked about: if we see our $expected_closing_quote and it wasn't $backslashed, or we see whitespace and the value is not quoted at all, we have reached the end of the value. So we set $fields{$name} = $value (that's now part of our return value), set our next state to comma, and go to the next character.

On the other hand, if none of those conditions were true, that means we’re still inside the value, so we append the character to $value and continue. (The value state points to itself.)

I won’t show all of the states here, as the above should give you the flavor of it, but of course you can see my full solution here.
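To make the shape of that if/elsif chain concrete, here's a heavily condensed sketch of the same state machine idea, minus the backslash handling and the comma state. The state names mirror the diagram, but this is not my full solution:

```perl
use strict;
use warnings;

sub parse_fields_sketch {
    my ($str) = @_;
    my %fields;
    my ($state, $name, $value, $quote) = ('out', '', '', undef);

    for my $ch (split //, $str) {
        if ($state eq 'out') {                # Waiting for a field name
            next unless $ch =~ /\w/;
            ($name, $state) = ($ch, 'field_name');
        }
        elsif ($state eq 'field_name') {      # Gobble \w, then expect '='
            if    ($ch =~ /\w/) { $name .= $ch }
            elsif ($ch eq '=')  { ($state, $value, $quote) = ('value_start', '', undef) }
        }
        elsif ($state eq 'value_start') {     # Note the opening quote, if any
            next if $ch =~ /\s/;
            if ($ch =~ /['"]/) { $quote = $ch }
            else               { $value .= $ch }
            $state = 'value';
        }
        elsif ($state eq 'value') {           # Accumulate until close quote/space
            my $done = defined $quote ? $ch eq $quote : $ch =~ /\s/;
            if ($done) { $fields{$name} = $value; $state = 'out' }
            else       { $value .= $ch }
        }
    }
    $fields{$name} = $value if $state eq 'value' and not defined $quote;
    return \%fields;
}
```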

All in all, it’s decently robust for a PWC task solution, but a production version could certainly use some better error checking and reporting. Invalid inputs are mostly handled fairly gracefully, but the resulting output might be confusing.

ch-2.pl source code

PWC 258 › Counting Digits and Summing Values


Yes, it’s been a long while! Many a curveball has been thrown at my face. This week’s tasks are pretty quick and easy, so let’s dive right in.

Count Even Digits Number

The first task is simple. Given a list of integers, return the count of integers that have an even number of digits. So, for example, (10, 1, 111, 24, 1000) has 3 values with an even number of digits.

I was in a regex-y mood, so here’s what I came up with (Perl):

sub even_re { ()= "@_" =~ /\b(\d\d)+\b/g }

The pattern is simple: match two digits (\d\d) one or more times (+), anchored by word boundaries (\b) on both sides. That is matched against "@_", which interpolates the list into a single string, with elements separated by $".

Since we use the /g (global) match, we get all matches back. The ()= pseudo-operator turns that into the count of matches. If that were being assigned to a variable, you’d see it written like this instead:

$result =()= $var =~ /pattern/g
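Applied to the example list from the task, it counts the three even-digit values:

```perl
use strict;
use warnings;

sub even_re { ()= "@_" =~ /\b(\d\d)+\b/g }

my $count = even_re(10, 1, 111, 24, 1000);
print "$count\n"; # 3 (10, 24, and 1000)
```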

Raku

In Raku, I decided to do the slightly more sane thing and grep the list rather than interpolating first:

sub even_re { +@_.grep(/^(\d\d)+$/) }

Without the +, we would return the list of matches. The + forces it to a scalar, giving us the count of matches instead.

Sum of Values

Task 2 has us take in a list of integers and a number $k, and add up the integers whose index in the array contains $k ones. So, for example, given $k = 1 and @ints = (2, 5, 9, 11, 3), we have to look at the binary representation of the array indices:

Index [base10]    Index [base2]    Value
      0                  0           2
      1                  1           5
      2                 10           9
      3                 11          11
      4                100           3

@ints with index (decimal and binary)

The rows for values 5, 9, and 3 have a base-2 index containing exactly one 1, so the result is 5 + 9 + 3 = 17. Why anyone would want to do this, I couldn't tell you.

Still feeling a slight bit of regex fever, I based my second task solution around a very simple /1/g regex for counting the 1s in the binary representation of a number. Here’s the full solution:

use List::Util qw< sum >;

sub sum_idx_bit_set {
    my $k = pop;
    sum map { $_[$_] } grep { $k == ( ()= sprintf('%b',$_) =~ /1/g ) } 0..$#_
}

The whole thing is a grep over all of the indices for any where the binary representation contains $k ones. Those indices are fed into a simple map to get the list element, and the whole thing is sum()med up.
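With the example inputs from the task ($k = 1 passed last, since the sub pops it), this returns 17:

```perl
use strict;
use warnings;
use List::Util qw< sum >;

sub sum_idx_bit_set {
    my $k = pop;
    sum map { $_[$_] } grep { $k == ( ()= sprintf('%b',$_) =~ /1/g ) } 0..$#_
}

# @ints = (2, 5, 9, 11, 3), $k = 1: indices 1 (0b1), 2 (0b10), 4 (0b100)
print sum_idx_bit_set(2, 5, 9, 11, 3, 1), "\n"; # 17
```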

Raku

The Raku solution is conceptually similar, but there are a few language differences:

sub sum_idx_bit_set($k, @n) {
    @n[ @n.keys.grep({ (TR:d/10/1/ with .base(2)) == $k }) ].sum
}

I decided to use TR// here, which I could have also done with Perl. Converting to binary is easier in Raku thanks to the base() method. The grep({...}) over the indices of @n sets us up to take a slice of @n and sum() it up.

This solution feels a bit less Raku-ish to me, although my Raku is very rusty at the moment, not having looked at Raku code for nearly two years. Time to get hacking, eh!

PWC 171 › Odd Abundant Numbers


This week’s tasks include a simple number theory calculation, and a language feature. Here’s a look at task 1.

Odd Abundant Numbers

Abundant numbers are numbers where the sum of the proper divisors is greater than the number. The first odd abundant number is 945. 945’s proper divisors are 1, 3, 5, 7, 9, 15, 21, 27, 35, 45, 63, 105, 135, 189, and 315. (Recall that we exclude 945 itself as a proper divisor.)

The sum of those divisors is 975, so 945 is an abundant number.

Equivalently, a number is abundant if the sum of all divisors is greater than twice the number. We’ll see both variations in the solutions below.

I’m going to take you through a few different approaches. Since I don’t like repeating myself, let’s get some foreshadowing for task #2 going, by using a first class function:

sub n_odd_abundant(&$) {
    my ($is_abundant, $N) = @_;

    my @r;
    for (my $n = 3; $N; $n += 2) {
        if ($is_abundant->($n)) {
            push @r, $n;
            $N--
        }
    }

    @r;
}

The above code takes in a code ref ($is_abundant) and a limit ($N). We loop over all odd numbers, pushing any that pass the $is_abundant check to our result.

Although the above version suited my purposes better, it’s also possible to do this with an iterator, to avoid having to store the intermediate result:

sub odd_abundant_iterator(&) {
    my $is_abundant = shift;
    my $n = 1;
    
    sub {
        do { $n += 2 } until $is_abundant->($n);

        $n;
    }
}
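Here's how the iterator might be used, dropping in the naïve divisor-sum check from the Brute force section below as the code ref (a sketch; the first two odd abundant numbers are 945 and 1575):

```perl
use strict;
use warnings;
use feature 'say';
use List::Util qw< sum >;

sub odd_abundant_iterator(&) {
    my $is_abundant = shift;
    my $n = 1;

    sub {
        do { $n += 2 } until $is_abundant->($n);

        $n;
    }
}

# Naive proper-divisor-sum check, as in the brute force version below
my $next = odd_abundant_iterator {
    my $n = shift;
    $n < sum grep { $n % $_ == 0 } 1..$n/2;
};

say $next->(); # 945
say $next->(); # 1575
```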

Now that we have a framework for gathering abundant numbers, let’s try it a few different ways.

Brute force

The first way you might think to try is simply to brute force your way through every divisor of every number. This is O(n) for each number, and takes over a second to find the first 20 numbers:

sub n_abundant_naive {
    n_odd_abundant {
        my $n = shift;
        $n < sum grep { $n % $_ == 0 } 1..$n/2;
    } $_[0];
}

Using sqrt

Stopping at \(\sqrt{n} \) perhaps unsurprisingly brings the asymptotic time down to O(\(\sqrt{n} \)). Since divisors come in pairs, we can simply calculate the other divisor and avoid looping through most of the numbers:

sub n_abundant_sqrt {
    n_odd_abundant {
        my $n    = shift;
        my $sqrt = sqrt($n);

        my $sum  = sum map { $_,  $n / $_ } 
                      grep { $n % $_ == 0 } 1..$sqrt;

        $sum -= $sqrt if $sqrt == int $sqrt;
        
        2*$n < $sum;
    } $_[0];
}

It might not seem like a huge change, but the above code runs about 27 times faster than the naïve version, when asked to find the first 20 odd abundant numbers.

Using Math::Prime::Util divisor_sum

Just for fun, our old friend, Math::Prime::Util has a function that seems perfect for our needs: divisor_sum. It does what it says on the tin: it calculates the sum of the divisors of whatever number we give it.

sub n_abundant_mpu {
    n_odd_abundant {
        my $n = shift;
        my $sum = divisor_sum($n);

        2*$n < $sum;
    } $_[0];
}

This one is another 12 times faster than the sqrt solution, and 356 times faster than the naïve method. Great, right? Well, not so fast. Under the hood, the divisor_sum function is still finding all divisors for every number, so we’re still at \(O(n \sqrt{n}) \) time. It’s only faster because of the very tightly optimized C code under the hood.

Sieve

We can still do quite a bit better by realizing that since we’re looking for a whole bunch of abundant numbers, we’re repeating the same calculations over and over again, for every multiple of a number we’ve seen already. So as long as we’re willing to tweak the requirement slightly to find all abundant numbers below a given limit (although we’ll see how we can still accommodate the old calling syntax if we really want to), we can do much better:

sub n_abundant_sieve {
    my $lim = shift;
    my @r;

    my @div_sum; # Sum of divisors for each number
    for my $n (1..$lim) {
        $div_sum[$n*$_] += $n for 1..$lim/$n+1;
        push @r, $n if $n % 2 and 2*$n <= $div_sum[$n];
    }

    @r;
}

When called as n_abundant_sieve(10000), this returns the first 23 odd abundant numbers, about twice as fast as the sqrt version returns 20. That's because this algorithm runs in \(O(n \log{n}) \) time, which grows significantly slower than the \(O(n \sqrt{n}) \) of the earlier approaches.
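As a quick sanity check of the sieve (repeating the sub here so the snippet stands alone), the first few odd abundant numbers come out as expected:

```perl
use strict;
use warnings;

sub n_abundant_sieve {
    my $lim = shift;
    my @r;

    my @div_sum; # Sum of divisors for each number
    for my $n (1..$lim) {
        $div_sum[$n*$_] += $n for 1..$lim/$n+1;
        push @r, $n if $n % 2 and 2*$n <= $div_sum[$n];
    }

    @r;
}

my @odd_abundant = n_abundant_sieve(3000);
print "@odd_abundant[0..2]\n"; # 945 1575 2205
```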

Sneaking up on the limit

In number theory circles, finding all numbers below a limit is usually just fine, but if we wanted to be a stickler for the parameters of the challenge and return only the first 20, we could simply call n_abundant_sieve multiple times, doubling the limit every time, until we have at least 20 results. I wouldn't bother, though.


You might be wondering, “how does this perform compared to the MPU version?”

For small values (the first 23 numbers in the sequence), the MPU version is faster by about 200%. However, when given more realistic limits, our better algorithm pulls ahead. It breaks even at about 100 numbers on my machine. By the time we ask for 150 numbers, the sieve is already 60% faster.

Benchmarks are an incredibly useful—and essential—tool when you need to write high performance code. Knowing that one algorithm is faster than another doesn't always translate to real code. When dealing with implementation details, those hidden constants can really make a difference. What's also really important, however, is knowing your requirements. If we're only ever going to need to find fewer than 100 abundant numbers, the MPU version might be a worthy choice. However, if we want to push it beyond that, then the better algorithm wins.

Of course, translating the sieve algorithm to tight C code would blow both of them out of the water at any value of n, since the sieve version grows strictly slower than the MPU version.

Raku

Here’s a quick port to Raku. If you’re only concerned with outputting the results, there’s no point in storing them first:

sub MAIN(Int $lim = 10000) {
    my @div_sum; # Sum of divisors for each number

    for 1...$lim -> $n {
        @div_sum[$n*$_] += $n for 1..$lim/$n+1;
        $n.say if $n % 2 and 2*$n <= @div_sum[$n];
    }

}

PWC 171 › First Class Functions


Task #2 this week is, no doubt, about first class functions, but it gets more specific, asking us to have a go at function composition:

Create sub compose($f, $g) which takes in two parameters $f and $g as subroutine refs and returns subroutine ref i.e. compose($f, $g)->($x) = $f->($g->($x))

Task #2

Before we get too far ahead of ourselves, let’s briefly review what these terms mean.

First Class Functions

A language that supports first class functions simply allows you to pass functions around like any other variable. Passing anonymous functions (also known as lambda functions) around is usually included in this definition as well. Perl makes this easy:

my $add2 = sub { $_[0] + 2 }; # A sub that adds 2 to its argument
$add2->(5); # Returns 7

That example may not be the most compelling, but for some motivation, look no farther than Perl’s map or grep builtins. When you call something like map { $_ * $_ } 1..10 to get the first ten square numbers, that block { $_ * $_ } is an anonymous subroutine.

First class functions are incredibly useful, and deserve more discussion than I can cram into this blog post, so perhaps I’ll do them justice with a longer dedicated post in the future.

Function Composition

Function composition is a concept that comes from mathematics. In programming, it builds on first class functions, but the two ideas are otherwise independent. Function composition, often denoted with the ∘ operator, takes two functions f and g and produces a new function:

\(h = g \circ f \text{ such that } h(x) = g(f(x)) \)

The reason it’s usually written as g ∘ f is that in plain English, g follows f, because we are feeding the output of f into g. Of course, f and g are just symbols, so they can be swapped to match the task description with no issues:

\(h = f \circ g \text{ such that } h(x) = f(g(x)) \)

Perl

Now that we’ve gotten all of the pesky definitions out of the way, the code is … well, there’s hardly any code at all, really. Here’s a function that generates the composition h = f ∘ g:

sub comp {
    my ($f, $g) = @_;

    sub { $f->($g->(@_)) }
}

Here’s an example usage that calculates the sum of squares of a list of numbers:

use List::Util qw< sum0 >;

my $squares = sub { map { $_ * $_ } @_ };
my $h = comp( \&sum0, $squares );

say "The sum of squares for 1..10 = " . $h->(1..10); # 385

I chose to use List::Util‘s sum0 function so I could demonstrate how to pass in a reference to a named function. The $squares function shows how to use a variable. I could have also done this as an anonymous function:

my $h = comp( \&sum0, sub { map { $_ * $_ } @_ } );

Raku

Raku’s first class function support is very good. In fact, the language was designed around higher order features like this, so there are some built-in helpers we can use, such as the composition operator. That’s right, we can use ∘ (or a plain o) right in our code to do the function composition. I could stick this into a comp function like I did with the Perl example, but that seems less expressive to me.

my &sum    = sub {    [+] @_ };
my &square = sub { @_ »*« @_ };

my &h = &sum ∘ &square;

say &h(1..10); # 385

First class functions open up endless possibilities in your code.

PWC 170 › Primorial Numbers and Kronecker Products


We’re back with our noses to the mathematical grindstone this week, with two straightforward tasks.

Task 1 › Primorial Numbers

The nth primorial number is simply the product of the first n primes: P(1) = 2, P(2) = 2×3 = 6, P(3) = 2×3×5 = 30, etc. P(0) is defined to be 1 as a special case. Since I’ve implemented various prime number generators before, I just went with Math::Prime::Util for this one, making use of the prime_iterator function:

my $it = prime_iterator;

say my $pri = 1; # P(0) = 1 by definition
say $pri *= $it->() for 1..$ARGV[0] // 10;
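If you'd rather collect the values than print them as you go, the same idea can be sketched like this:

```perl
use strict;
use warnings;
use Math::Prime::Util qw< prime_iterator >;

my $it = prime_iterator;
my @primorial = (1); # P(0) = 1 by definition
push @primorial, $primorial[-1] * $it->() for 1..5;

print "@primorial\n"; # 1 2 6 30 210 2310
```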

Easy.

Task 2 › Kronecker Products

I haven’t worked with Kronecker products in quite a few years, so when I reviewed the definition, I thought it might be tricky to implement. I fetched a fresh caffeinated beverage, but much to my dismay, it was all over before my second sip.

Please do check that link for a more rigorous definition of Kronecker products. As a quick review, I will simply show an example, with ⨂ as the operator for the Kronecker product:

\(A = \begin{bmatrix}2 & 3 \\ 4 & 5 \end{bmatrix} \\
B = \begin{bmatrix}a & b \\ c & d \end{bmatrix} \)

\(A ⨂ B = \begin{bmatrix}
2 & 3 \\
4 & 5
\end{bmatrix} ⨂
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix} =
\begin{bmatrix}
2B & 3B \\ 4B & 5B
\end{bmatrix} =
\begin{bmatrix}
2a & 2b & 3a & 3b \\
2c & 2d & 3c & 3d \\
4a & 4b & 5a & 5b \\
4c & 4d & 5c & 5d
\end{bmatrix}
\)

To make this happen, I opted to loop over the total number of rows in A ⨂ B and use division and modulo arithmetic to determine which sub-row and -column in B we need. I use a triple-nested map to achieve the multiplications and orderings of the final matrix.

sub kronecker {
    my ($A, $B) = @_;

    map {
        my $i = $_;
        [
            map { 
                    my    $aval = $_;
                    map { $aval * $_ } $B->[$i % @$B]->@*;
            } $A->[$i / @$B]->@*
        ]
    } 0..(@$A * @$B)-1;
}

The post-deref (->@*) looked a bit cleaner here, so I went with that, but @{$B->[$i % @$B]} would have worked just as well.

The result is a list of array refs containing the rows of the final product. The problem description didn’t specify, but I opted to also include a pretty-print routine to display the output a little nicer than Data::Dump can:

sub pp_matrix {
    print '[ ' . join(' ', map { sprintf '%3d', $_ } @$_). " ]\n" for @_
}

The output from pp_matrix(kronecker([[1,2],[3,4]], [[5,6],[7,8]])) is nicely formatted:

[   5   6  10  12 ]
[   7   8  14  16 ]
[  15  18  20  24 ]
[  21  24  28  32 ]

We were only asked to implement the solution for a product of two specific 2×2 matrices, but both of the above subs accept arbitrary shaped matrices.

PWC 169 › Brilliant and Achilles Numbers


This week, both tasks are quite short, so I’ll combine them into a single blog post.

Task 1 › Brilliant Numbers

Brilliant numbers are composite numbers with exactly two prime factors. Additionally, the prime factors must have the same number of digits.

My complete code is as follows:

Continue reading “PWC 169 › Brilliant and Achilles Numbers”

PWC 168 › Perrin Primes


Task #1 this week asks us to find the first 13 Perrin primes. “What’s a Perrin prime,” I can’t hear you asking? To answer that, we first have to look at the Perrin sequence, as described in OEIS A001608. It’s easy to generate:

Starting with [3, 0, 2], each new term is determined by adding the 2nd and 3rd last terms. So, the 4th number is 3 + 0 = 3, giving us [3, 0, 2, 3]. The 5th number is 0 + 2 = 2, and so on.

Perrin primes are simply the elements of the Perrin sequence that also happen to be prime.

Normally (and per the example output in the task) we are to find the unique Perrin primes, in order. So, we’ll just seed the first prime in our @r results, and then rely on the fact that the sequence is strictly increasing after the first five terms.

Building the sequence is very simple from there:
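Since the rest of the post is truncated here, the following is my own sketch of the idea, not necessarily the post's code: generate terms, collect the unique primes, and sort at the end (safe, because the sequence is strictly increasing after the first few terms). Math::Prime::Util supplies the primality test:

```perl
use strict;
use warnings;
use Math::Prime::Util qw< is_prime >;

my @p = (3, 0, 2); # Perrin seed terms
my %seen;          # Unique Perrin primes found so far

my $i = 0;
while (keys %seen < 13) {
    push @p, $p[-2] + $p[-3] if $i > $#p; # Extend the sequence as needed
    $seen{ $p[$i] }++ if is_prime($p[$i]);
    $i++;
}

my @perrin_primes = sort { $a <=> $b } keys %seen;
print "@perrin_primes[0..6]\n"; # 2 3 5 7 17 29 277
```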

Continue reading “PWC 168 › Perrin Primes”

PWC 168 › Home Prime


Task #2 this week asks us to calculate a so-called Home prime. Home primes are found by factoring a number and concatenating the prime factors in ascending order (including repeats, so 20 = 2×2×5 becomes 225), and repeating this until the result is a prime number.

The given example, HP(10) can be found via the following steps: HP(10) = HP(25) = HP(55) = HP(511) = HP(773), and we stop, since 773 is a prime number.

This is a natural problem for recursion:

sub home_prime_recursive {
    my @fac = factor($_[0]);

    @fac == 1 ? $_[0] : home_prime_recursive(join '', @fac);
}
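For a runnable version, all the recursive sub needs is the factor import from Math::Prime::Util. I've renamed it home_prime here so the snippet stands alone:

```perl
use strict;
use warnings;
use Math::Prime::Util qw< factor >;

sub home_prime {
    my @fac = factor($_[0]); # e.g. factor(20) is (2, 2, 5)

    @fac == 1 ? $_[0] : home_prime(join '', @fac);
}

print home_prime(10), "\n"; # 773
```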

I like this solution for its expressiveness, but it’s about 20% slower than the following iterative version:

Continue reading “PWC 168 › Home Prime”

PWC 167 › Circular Primes


Task #1 this week has us finding circular primes, 3 digits or more. Circular primes are numbers that remain prime through all of their rotations. For example, since all of the rotations of 113 (113, 131, and 311) are prime, 113 is a circular prime. (131 and 311 are circular primes as well, but based on the example given with this task, we are only to give the lowest circular prime in each group.) This would correspond with OEIS A016114.

I just wrote yet another Sieve of Eratosthenes in week 164, so this time I’ll just use Math::Prime::Util. The basic algorithm I came up with is rather naïve, but effective:

for each digit count d in [3..∞)
    for each prime n with d digits
        next n if circular[r] or ¬prime[r] for any rotation r of n
        circular[r] ← true for each rotation r of n
        say "n is circular"
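In Perl, that pseudocode might look something like the following sketch for a single digit count (the sub name is mine; Math::Prime::Util supplies primes and is_prime):

```perl
use strict;
use warnings;
use Math::Prime::Util qw< primes is_prime >;

sub circular_primes_with_digits {
    my ($d) = @_;
    my (%circular, @r);

    NUMBER: for my $p (@{ primes(10**($d-1), 10**$d - 1) }) {
        # All rotations of the digit string, e.g. 113 -> 113, 131, 311
        my @rot = map { substr($p, $_) . substr($p, 0, $_) } 0 .. length($p) - 1;

        for my $r (@rot) {
            # 0 + $r drops any leading zero before the primality check
            next NUMBER if $circular{$r} or not is_prime(0 + $r);
        }

        $circular{$_}++ for @rot; # Mark the whole cycle as seen
        push @r, $p;              # $p is the lowest member of its cycle
    }

    @r;
}

print join(' ', circular_primes_with_digits(3)), "\n"; # 113 197 199 337
```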
Continue reading “PWC 167 › Circular Primes”

PWC 167 › Lanczos Gamma Approximation

Gamma function

Let’s flex our mathematical legs a bit by implementing an approximation to the gamma function Γ(z). In this article, I’ll introduce what the gamma function is, what it’s for, and how to actually calculate it, using a well known approximation method.

This topic leans more heavily into mathematics than usual for this blog. Some of the concepts may be new to you, and I regret I cannot give them a proper treatment in a single article. I’ll do my best to give you the flavor of how everything works, and I have provided some resources at the end should you wish to learn more.

What’s the gamma function? What’s it for?

You’re probably familiar with integer factorials already. Here’s a simple definition:

\(n! = 1 \times 2 \times 3 \times \cdots \times (n-2) \times (n-1) \times n\)

For example, \(5! = 1 \times 2 \times 3 \times 4 \times 5 = 120 \). However, that only works for positive integers. The gamma function lets us work with most positive and negative real numbers, and even complex numbers. For positive integers, the gamma function is very closely related to ordinary factorials:

\(\Gamma(z) = (z - 1)!\)

Why is it (z - 1)! instead of just z! ? As H. M. Edwards puts it—in a footnote, no less:

Continue reading “PWC 167 › Lanczos Gamma Approximation”