Continued Fractions and Square Roots
I recently became aware of continued fractions, because my friend Randy mentioned that they are a representation of some insane thing in topology that he mentioned offhandedly and which sort of made sense while he was talking but I really could not reconstruct. However, continued fractions are interesting in their own right, and I started working on this post before I saw the Mathologer video on this topic.
A continued fraction is a nightmare scenario where the denominator of a fraction contains another fraction, whose denominator contains another fraction, and so on, perhaps forever. We can calculate a continued fraction by repeatedly taking the inverse of a value and adding a constant. This may seem counterintuitive at first, but since the denominator of a denominator is in the numerator, it makes a weird kind of sense that you can just keep on inverting. For instance, the Golden Ratio can be calculated by starting with 1 and then in a loop, adding 1 and inverting. Such an expression will eventually converge to the best floating point approximation, and that calculation looks quite trivial in J and APL:
   ((1+%)^:_) 1
1.61803
Similarly in APL:

      (1+÷)⍣=1
1.618033989
Tyler did mention that I would of course resort to these languages as some kind of humble brag. Which is true and fair. Expect it to get worse if I ever learn linear algebra.
Both of these expressions essentially construct an “add one and invert” function, whose fixed point they then locate, starting from an input of one. The choice of starting input doesn’t really matter much:
      (1+÷)⍣=100
1.618033989
      (1+÷)⍣=0.1
1.618033989
This turns out to be a stock example in the Dyalog documentation:
      1 +∘÷⍣= 1 ⍝ fixpoint: golden mean
1.618033989
Furthermore, it turns out every square root can be expressed in a similar way too, as either a finite or an infinitely repeating continued fraction. The trick is discovering what the base term is and what the repeating term is. For instance, for 2, the base is 1 and the repeating term is 2:
      1+÷((2+÷)⍣=2)
1.414213562
      (1+÷((2+÷)⍣=2))*2
2
Similarly, for 5, the base term is 2 and the repeating term is 4:
      2+÷((4+÷)⍣=1)
2.236067977
      (2+÷((4+÷)⍣=1))*2
5
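If you don’t want to guess the base and repeating terms, there is a standard integer recurrence that produces the continued fraction terms of a square root directly. Here’s a sketch in Haskell (the name sqrtCfTerms is my own, not from any library), which assumes its argument is a positive non-square:

```haskell
import Data.List (unfoldr)

-- Continued fraction terms of sqrt n, via the standard recurrence:
--   m' = d*a - m,  d' = (n - m'^2) / d,  a' = floor ((a0 + m') / d')
-- Assumes n is a positive non-square (for a perfect square, d' hits 0).
sqrtCfTerms :: Integer -> [Integer]
sqrtCfTerms n = a0 : unfoldr step (0, 1, a0)
  where
    a0 = floor (sqrt (fromIntegral n :: Double))  -- fine for small n
    step (m, d, a) =
      let m' = d * a - m
          d' = (n - m' * m') `div` d
          a' = (a0 + m') `div` d'
      in Just (a', (m', d', a'))
```

For 2 this yields 1 followed by repeating 2s, and for 5 it yields 2 followed by repeating 4s, matching the base and repeating terms above; something like 14 has a longer repeating block (1, 2, 1, 6).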
These same calculations can be made in Haskell if we develop our own fixpoint. We can generate the successive iterates with iterate, but we need to determine when to quit, with something like this:
untilEqual :: (Eq a) => [a] -> a
untilEqual (x:y:xs)
  | x == y    = x
  | otherwise = untilEqual (y:xs)
Now we can create somewhat more readable versions of these things:
*Main> untilEqual $ iterate (\x -> 1 + 1 / x) 1
1.618033988749895
*Main> 1 + 1 / (untilEqual $ iterate (\x -> 2 + 1 / x) 1)
1.4142135623730951
Let’s make this a helper function.
squareRootCf :: (Fractional a, Eq a) => a -> a -> a
squareRootCf a b = a + 1 / (untilEqual $ iterate (\x -> b + 1 / x) a)
Now we can compute these more easily if we know the basis and repeating digits:
*Main> squareRootCf 1 2 -- sqrt 2
1.4142135623730951
*Main> squareRootCf 2 4 -- sqrt 5
2.23606797749979
*Main> squareRootCf 3 6 -- sqrt 10
3.1622776601683795
It turns out Haskell has a rational type we can use, in Data.Ratio:
import Data.Ratio

infiniteCf :: (Integral a) => a -> a -> [Ratio a]
infiniteCf a b = map ((a % 1) +) partials
  where partials = (1 % b) : [recip ((b % 1) + prev) | prev <- partials]
We can then generate rational approximations:
*CFrac> take 10 $ infiniteCf 1 1
[2 % 1,3 % 2,5 % 3,8 % 5,13 % 8,21 % 13,34 % 21,55 % 34,89 % 55,144 % 89]
As you can see, that’s producing 2, 3/2, 5/3, 8/5, 13/8, etc., which is the Fibonacci sequence of ratios you get when approximating the Golden Ratio. This is explained in the Mathologer video above; to spoil it for you, he calls this the “most irrational number” because it has the slowest-converging repeating continued fraction.
We can also produce some pretty compelling root-2 approximations the same way:
*CFrac> take 10 $ infiniteCf 1 2
[3 % 2,7 % 5,17 % 12,41 % 29,99 % 70,239 % 169,577 % 408,1393 % 985,3363 % 2378,8119 % 5741]
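Incidentally, if you already have the list of continued fraction terms, you can generate the same convergents with the classic recurrence, where each numerator is the current term times the previous numerator plus the one before that (and likewise for denominators), using only integer arithmetic along the way. A sketch, with names of my own choosing:

```haskell
import Data.Ratio

-- Convergents of [a0; a1, a2, ...] via h = a*h1 + h2, k = a*k1 + k2,
-- seeded with h(-1) = 1, h(-2) = 0 and k(-1) = 0, k(-2) = 1.
convergents :: [Integer] -> [Ratio Integer]
convergents = go 1 0 0 1
  where
    go _  _  _  _  []       = []
    go h1 h2 k1 k2 (a:rest) =
      let h = a * h1 + h2
          k = a * k1 + k2
      in (h % k) : go h h1 k k2 rest
```

Feeding it all-ones terms reproduces the Fibonacci ratios (starting from 1/1 this time), and a 1 followed by repeating 2s reproduces the root-2 approximations above.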
What’s interesting about this, you ask? One thing I find interesting is that you aren’t really bound by what floating point happens to represent. You could of course resort to an arbitrary-precision library, but then you might want to know one way of doing whatever it does under the hood. How do you get arbitrary precision? One way to build such a system would be on top of continued fractions. You don’t really get perfection out of a continued fraction representation of a square root. Instead, you get something you can ask for more precision, and it will give you another, better rational approximation. For instance, we can ask Haskell to take the square root of two and tell us what its ratio is:
*CFrac> toRational $ sqrt 2
6369051672525773 % 4503599627370496
Seems pretty good, but it doesn’t take us very many iterations to beat it:
*CFrac> length $ show $ denominator $ toRational $ sqrt 2
16
*CFrac> takeWhile (\x -> length (show (numerator x)) < 18) (infiniteCf 1 2)
[3 % 2,7 % 5,... 41 more elided..., 83922003724759193 % 59341817924539925]
So we have already discovered that 83,922,003,724,759,193 / 59,341,817,924,539,925 is a better approximation of the square root of 2 than 6,369,051,672,525,773 / 4,503,599,627,370,496. Interestingly, the floating-point ratio does not appear in our list of rational approximations at all. That would be very surprising, because the rational approximations generated by a continued fraction are proven to be the best possible at their size, except for a couple of problems: namely, that floating point numbers are literally the worst, and we’re probably using some kind of hardware root approximation that is very fast and, you know, tolerably accurate (note that (sqrt 2)^2 = 2.0000000000000004, for instance).
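Since floating point is the suspect here, we can settle the comparison exactly with rational arithmetic: square each candidate and measure how far the result lands from 2. A quick sketch (sqErr and the binding names are mine):

```haskell
import Data.Ratio

-- Exact squared-distance from 2, so floating point can't muddy the comparison.
sqErr :: Rational -> Rational
sqErr x = abs (x * x - 2)

cfApprox, floatApprox :: Rational
cfApprox    = 83922003724759193 % 59341817924539925  -- from the continued fraction
floatApprox = 6369051672525773 % 4503599627370496    -- toRational (sqrt 2)
```

Evaluating sqErr on both confirms that the continued fraction convergent lands enormously closer to 2 when squared than the float-derived ratio does.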
Another fascinating property of continued fraction representations of these numbers is that they bounce back and forth above and below the value they’re approximating. So you could easily specify the precision you want and just take from the representation until the difference between the last two approximations you generated is less than that precision. This is starting to sound suspiciously like a Dedekind cut (infinitely many better and better rational approximations of a real number), or maybe like the epsilon-delta concept of a limit (find me an approximation that’s closer than epsilon to the number).
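That take-until-close-enough idea is nearly a one-liner. Here’s a sketch (toPrecision and root2Stream are my own names) that walks a stream of rational approximations until two successive ones are within epsilon, demonstrated on a root-2 stream that produces the same convergents as infiniteCf 1 2:

```haskell
import Data.Ratio

-- Walk successive approximations until two in a row differ by less than eps.
-- This works because the approximations alternate above and below the value.
toPrecision :: Rational -> [Rational] -> Rational
toPrecision eps (x:y:rest)
  | abs (x - y) < eps = y
  | otherwise         = toPrecision eps (y : rest)
toPrecision _ _ = error "toPrecision: stream ran out"

-- Converges to sqrt 2: the fixed point of x -> 1 + 1/(1+x).
root2Stream :: [Rational]
root2Stream = iterate (\x -> 1 + 1 / (1 + x)) 1
```

Asking for a millionth, for instance, hands back 3363/2378, one of the convergents from the earlier list.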
It turns out this information isn’t very new; in fact, Bill Gosper of Lisp/Macsyma fame developed a procedure for doing arbitrary sums and products with continued fractions. If you find that appendix hard to read, perhaps you’ll enjoy Mark Jason Dominus’s discussion of the same topic a little more, including a reference implementation of continued fractions in C.