Working through the exercises in Richard Bird's Thinking Functionally with Haskell, I found one that nicely illustrates how equations can give us heuristic guidance in solving functional problems. The same pattern of thought will be used in another post to discover some functional equations that appear in optimization problems.

The example I'll discuss is derived from an implementation of the transpose function.

Let's start with an informal definition: the transpose of a matrix M is the matrix MT obtained very simply: every row of M becomes a column of MT.

Example:

Let M be the matrix:

1 2 3
4 5 6

Then the transpose MT is the matrix:

1  4
2  5
3  6

In order to formalize this in Haskell, we'll represent every matrix as a list of its rows.

In this way, M = [ [1,2,3] , [4,5,6] ] and MT = [ [1,4], [2,5], [3,6] ]. In other words, a matrix is a list of lists of elements from a generic type ‘a’ (for every matrix M, M::[[a]]).

That being said, we define the function transpose in the following way:
> transpose :: [[a]] -> [[a]] 
> transpose [xs] = [[x] | x <- xs] 
> transpose (xs:xss) = zipWith (:) xs (transpose xss)
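As a quick sanity check, here is the definition above restated in a self-contained sketch, with the unfolding on the example matrix traced in comments:

```haskell
-- The definition from the post, restated so this sketch stands alone.
transpose :: [[a]] -> [[a]]
transpose [xs]     = [[x] | x <- xs]
transpose (xs:xss) = zipWith (:) xs (transpose xss)

-- Unfolding on the example matrix M = [[1,2,3],[4,5,6]]:
--   transpose [[1,2,3],[4,5,6]]
--     = zipWith (:) [1,2,3] (transpose [[4,5,6]])   -- by E2
--     = zipWith (:) [1,2,3] [[4],[5],[6]]           -- by E1
--     = [[1,4],[2,5],[3,6]]
```

Note that this version has no equation for `transpose []`, which is exactly the gap explored below.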

Let's denote E1: transpose [xs] = [[x] | x <- xs] and E2: transpose (xs:xss) = zipWith (:) xs (transpose xss).

The interesting part is that the base case in this definition is the transpose of a matrix with a single row (transpose [xs]). Intuitively, this is the natural base case: we transpose row by row, so we have to make sure we know how to transpose a single row.

But what if we want to replace the base case (transpose [xs] = [[x] | x <- xs]) with a case for transpose []? It's not obvious at all how to go about giving a definition for transpose [].

One way to think about it is that we need a definition of transpose [] that lets us recover the definition E1 (the transpose of a single row) with the help of equation E2. If we can recover E1 in this way, we're done and transpose will work correctly.

With this idea in mind, we'll rewrite the left-hand side of equation E1:

transpose [xs] = transpose (xs:[])

= {by E2} zipWith (:) xs (transpose [])

and, by E1, this last expression must equal [[x] | x <- xs].

In this way, we arrive at the functional equation (in the unknown transpose []):
E3: zipWith (:) xs (transpose []) = [[x] | x <- xs] 
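One way to make E3 concrete is to state it as a Boolean property of the unknown value (a sketch; the name e3 and the stand-in parameter t0 are mine, not Bird's):

```haskell
-- E3 as a property: t0 stands in for the unknown transpose [].
-- A correct choice of t0 must satisfy this for every finite xs.
e3 :: Eq a => [a] -> [[a]] -> Bool
e3 xs t0 = zipWith (:) xs t0 == [[x] | x <- xs]
```

For instance, e3 [1,2,3] [[],[],[]] holds, while e3 [1,2,3] [[9],[],[]] does not; the rest of the post derives a single t0 that works for every xs at once.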

Let's reason element by element and say that:
xs = [x1,x2,…,xn], where x1,…,xn are elements of the generic type a;
transpose [] = [xs1,xs2,…,xsn], where xs1,…,xsn are elements of [a].

In this way, the equation E3 becomes:
zipWith (:) [x1,…,xn] [xs1,…,xsn] = [[x1],[x2],…,[xn]]
<=>
[x1:xs1, x2:xs2, …, xn:xsn] = [[x1],[x2],…,[xn]]

Because two (finite) lists are equal iff they are equal element by element, we have:
x1:xs1 = [x1]; x2:xs2 = [x2]; … xn:xsn = [xn].

From the definition of the operator (:) it’s clear that xs1 = []; xs2 = []; … ; xsn = [] (for example: x1:[] = [x1], … xn:[] = [xn]).

In this way, we arrive at the conclusion that:
transpose [] = [xs1,…,xsn] = [ [], [], …, [] ]. If we take into account that the list xs can be any finite list (of any length n), transpose [] must be an infinite list of empty lists: zipWith will consume only a finite prefix of it (more precisely, as many empty lists as there are elements in xs).

So, transpose [] = repeat [] = [ [], [], [], [], …].
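We can spot-check this solution for lists of several lengths (a sketch; the name solvesE3 is mine). The key point is laziness: zipWith stops at the shorter of its two arguments, so only a finite prefix of repeat [] is ever demanded:

```haskell
-- For any finite xs, zipWith (:) xs (repeat []) pairs each x
-- with its own fresh [], yielding exactly [[x] | x <- xs].
solvesE3 :: Eq a => [a] -> Bool
solvesE3 xs = zipWith (:) xs (repeat []) == [[x] | x <- xs]
```

solvesE3 holds for [], [1], [1,2,3], and so on, for any list length.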

Because we constructed transpose [] precisely so as to recover transpose [xs], we're also assured that transpose [xs] behaves as intended and that the function transpose is correct. Of course, a more formal proof of this assertion is possible.

We arrived at the new definition of transpose:

> transpose :: [[a]] -> [[a]]
> transpose [] = repeat []
> transpose (xs:xss) = zipWith (:) xs (transpose xss)
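As a light check of the final definition (a self-contained sketch), the single-row case E1 is recovered exactly as the derivation promised:

```haskell
-- The final definition, restated so this sketch stands alone.
transpose :: [[a]] -> [[a]]
transpose []       = repeat []
transpose (xs:xss) = zipWith (:) xs (transpose xss)

-- E1 is recovered:
--   transpose [[1,2,3]]
--     = zipWith (:) [1,2,3] (repeat [])   -- by the new base case
--     = [[1],[2],[3]]
-- and the running example still transposes as expected:
--   transpose [[1,2,3],[4,5,6]]  ==  [[1,4],[2,5],[3,6]]
```

One caveat worth noting: transpose [] itself is an infinite list, so printing it in full won't terminate; inspect it with take if you're curious.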

We see now that equational reasoning provided a lot of heuristic guidance in finding the definition of transpose []. I think we can apply this kind of reasoning in other contexts too, where our intuition can't reach a result instantly. A more systematic approach to problem solving, using mathematical techniques consistently, can broaden our problem-solving skills in the long run.

If you have any questions or comments to add, don't hesitate!



Constantin Liculescu
