Some time ago I read Jeremy Gibbons’ article “Unbounded Spigot Algorithms for the Digits of Pi” and liked it a lot: the problem was amusing, and the article was easy to understand.

A spigot algorithm is an algorithm for producing digits of an unbounded sequence without reusing digits after they have been computed. The digits are produced “one by one, as if from a leaky tap” (Gibbons). An unbounded spigot algorithm doesn’t need to have some predetermined number of digits to run on—it’ll just keep going and going until you run out of memory.
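To make “one by one, as if from a leaky tap” concrete, here is a sketch of such a digit stream in Java. This is my rendering of the common imperative form of the unbounded spigot, with `BigInteger` state; the variable names are mine, not the paper’s.

```java
import java.math.BigInteger;

// Sketch of the unbounded spigot for pi in its common imperative form.
// Each pass through the loop either emits one more digit or folds the
// next term of the series into the (q, r, t) state.
public class PiStream {
    // Returns the first `count` decimal digits of pi (3, 1, 4, ...).
    static int[] digits(int count) {
        BigInteger q = BigInteger.ONE, r = BigInteger.ZERO,
                   t = BigInteger.ONE, k = BigInteger.ONE,
                   n = BigInteger.valueOf(3), l = BigInteger.valueOf(3);
        BigInteger TEN = BigInteger.TEN;
        int[] out = new int[count];
        int i = 0;
        while (i < count) {
            // Safe to emit the next digit when 4q + r - t < n*t.
            if (q.shiftLeft(2).add(r).subtract(t).compareTo(n.multiply(t)) < 0) {
                out[i++] = n.intValue();
                BigInteger nr = TEN.multiply(r.subtract(n.multiply(t)));
                n = TEN.multiply(q.multiply(BigInteger.valueOf(3)).add(r))
                       .divide(t).subtract(TEN.multiply(n));
                q = TEN.multiply(q);
                r = nr;
            } else {
                // Otherwise consume the next term of the series.
                BigInteger nr = q.shiftLeft(1).add(r).multiply(l);
                BigInteger nn = q.multiply(BigInteger.valueOf(7).multiply(k)
                                     .add(BigInteger.valueOf(2)))
                                 .add(r.multiply(l))
                                 .divide(t.multiply(l));
                q = q.multiply(k);
                t = t.multiply(l);
                l = l.add(BigInteger.valueOf(2));
                k = k.add(BigInteger.ONE);
                n = nn;
                r = nr;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        for (int d : digits(20)) System.out.print(d);
        System.out.println();
    }
}
```

Note that the loop never needs to know how many digits it will ultimately produce; the caller just keeps pulling.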

I was recently experimenting with programming an FPGA and thought of this paper; my idea was that a little coprocessor (the FPGA) hanging off my server could just start producing digits of pi and keep running indefinitely. Even though it isn’t the fastest way to generate digits of pi (not by a long shot), it still seemed like fun. And I thought of a nice “agent” architecture where I’d take the digits and run them past an array of recognizers, each one doing some kind of statistics on the digits of pi, or looking for special sequences like ten digits in a row made of one each of the ten digits. Those recognizers would all run in hardware, in parallel.
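That last recognizer is easy to sketch in software. Assuming the digit stream is fed in one digit at a time, a hypothetical detector for a ten-digit window containing each of 0–9 exactly once might look like this (ordinary Java for illustration; in the architecture above it would be hardware):

```java
// Sketch of one "recognizer" agent: watch a stream of decimal digits
// for a ten-digit window containing each of 0..9 exactly once.
public class PandigitalRecognizer {
    private final int[] window = new int[10]; // ring buffer of the last 10 digits
    private int seen = 0;                     // digits consumed so far

    // Feed one digit; returns true if the last ten digits are pandigital.
    public boolean accept(int digit) {
        window[seen % 10] = digit;
        seen++;
        if (seen < 10) return false;          // window not yet full
        boolean[] hit = new boolean[10];
        for (int d : window) {
            if (hit[d]) return false;         // repeated digit: not pandigital
            hit[d] = true;
        }
        return true;
    }
}
```

Each recognizer only needs constant state per digit, which is what makes the run-them-all-in-parallel hardware idea attractive.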

I remembered there was a “catch” about spigot algorithms, so I reread the paper. The catch is that you obviously can’t generate the digits of pi, an irrational (and of course transcendental) number, with a finite amount of state. So even though the algorithm is easy to specify and is guaranteed to generate one digit at a time, the implementation may need an arbitrary amount of memory to produce each digit. In fact, the memory complexity is hidden inside the bignum rational arithmetic that the algorithm uses.

Now, this wasn’t necessarily a show-stopper. I thought that maybe, as digits were generated, the bignum rational arithmetic was generally done on reasonably sized bignums (maybe several thousand bits of numerator or denominator), and only once in a while did a few terms blow up to much larger sizes. In that case you could run the algorithm mainly on the FPGA, which, when it discovered a multiplication whose result was too large for the hardware, would trap back to the server; the server would run that particular multiplication with much more memory available and pass the result back to the FPGA, which could then continue.

But no, that wasn’t to be. I experimented with an implementation in Java and discovered that the bignums get larger and larger from the very beginning, and even if you not only reduce each bignum to lowest terms, but also do the same for the 2×2 matrix that the algorithm uses, it’s no use. You’re doing *really* big bignum arithmetic after the first few terms.
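The growth is easy to see even without running the full algorithm. In the consume step of the imperative rendering, the denominator state is multiplied by the next odd number each term (t := t·l, l := l+2), so after n terms it has grown by a factor on the order of (2n+1)!!, whose bit length grows roughly like n·log n. A quick sketch under that assumption (the real state is somewhat smaller, since the produce steps and GCD reduction cancel a bit back, but as noted above, not enough to matter):

```java
import java.math.BigInteger;

// How many bits the product 3*5*7*...*(2n+1) occupies -- the factor by
// which the spigot's denominator state has grown after n consume steps,
// ignoring the partial cancellation the produce steps give back.
public class BignumGrowth {
    static int bitsAfter(int n) {
        BigInteger t = BigInteger.ONE;
        BigInteger l = BigInteger.valueOf(3);
        for (int i = 0; i < n; i++) {
            t = t.multiply(l);
            l = l.add(BigInteger.valueOf(2));
        }
        return t.bitLength();
    }

    public static void main(String[] args) {
        for (int n : new int[] {100, 1000, 10000}) {
            System.out.println(n + " terms -> " + bitsAfter(n) + " bits");
        }
    }
}
```

Already at a few hundred terms the operands are thousands of bits wide, which is why a fixed-width hardware multiplier was never going to keep up on its own.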

Anyway. I was happy to see that the Haskell algorithms in the paper transferred very easily and very directly to Java (once I supplied a BigRational class). Here’s the code: spigot.tar
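Java’s standard library has `BigInteger` but no rational type, which is why the port needs a `BigRational` class. A minimal sketch of the kind of class required (for illustration; not the one in the tarball):

```java
import java.math.BigInteger;

// Minimal immutable rational number on top of BigInteger, reduced to
// lowest terms on construction.  A sketch of the kind of class the
// Java port of the Haskell code needs, since java.math has none.
public final class BigRational {
    final BigInteger num, den;

    public BigRational(BigInteger num, BigInteger den) {
        if (den.signum() == 0) throw new ArithmeticException("zero denominator");
        if (den.signum() < 0) { num = num.negate(); den = den.negate(); }
        BigInteger g = num.gcd(den);
        this.num = num.divide(g);
        this.den = den.divide(g);
    }

    public BigRational add(BigRational o) {
        return new BigRational(num.multiply(o.den).add(o.num.multiply(den)),
                               den.multiply(o.den));
    }

    public BigRational multiply(BigRational o) {
        return new BigRational(num.multiply(o.num), den.multiply(o.den));
    }

    // Integer part (floor), which is what the spigot extracts as a digit.
    public BigInteger floor() {
        return num.subtract(num.mod(den)).divide(den);
    }
}
```

With a class like this, the Haskell code’s `Rational` operations translate almost one for one.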

One curious bug kept me puzzled for hours: I got bit by a 32-bit integer overflow. Turns out if you’re computing *3(3i+1)(3i+2)(5i−2)* for *i* from 1 to ∞ … well, that simple expression overflows somewhere around *i* = 280. I didn’t expect it at all, and spent the time looking everywhere else, in my bignum-using expressions, for the bug.
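The overflow is easy to reproduce. A sketch comparing the coefficient in 32-bit and 64-bit arithmetic, to find where `int` silently goes wrong (the exact crossover depends on how the expression is written, but it lands in the upper 200s):

```java
public class OverflowDemo {
    // The coefficient 3(3i+1)(3i+2)(5i-2) computed in 32-bit arithmetic...
    static int coeff32(int i) {
        return 3 * (3 * i + 1) * (3 * i + 2) * (5 * i - 2);
    }

    // ...and in 64-bit, which this expression actually needs at these i.
    static long coeff64(long i) {
        return 3 * (3 * i + 1) * (3 * i + 2) * (5 * i - 2);
    }

    // Smallest i at which the int version diverges from the long one.
    static int firstOverflow() {
        for (int i = 1; ; i++) {
            if (coeff32(i) != coeff64(i)) return i;
        }
    }

    public static void main(String[] args) {
        System.out.println("int arithmetic first goes wrong at i = " + firstOverflow());
    }
}
```

Java wraps silently on `int` overflow, so nothing fails loudly; the digits just quietly come out wrong.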

At this time I’d like to point out a couple of typographical errors in the programs in Gibbons’ paper cited above, “Unbounded Spigot Algorithms for the Digits of Pi”, so you can save some time if you want to investigate this algorithm yourself.

- pg 7, definition of `extr`: the `/` should be `%`
- more crucially, pg 9, definition of `next`: `+15` should be `-12` (also in conjecture 1, definition of *n*)
