Sebastian Galkin's Blog (https://blog.sebastian-galkin.com/atom.xml), by Sebastian Galkin (paraseba@gmail.com)

An exercise on applicatives https://blog.sebastian-galkin.com/posts/an-exercise-on-applicatives/index.html 2018-06-02

# An exercise on applicatives

Rereading the great Functional Pearl “Applicative programming with effects”, I found the following:

We began this section by observing that `Accy o` is not a monad. However, given `Monoid o` it can be defined as the composition of two applicative functors derived from monads—which two, we leave as an exercise

What they call `Accy o` is what we would call today `Const o`, from `Data.Functor.Const`:

``newtype Const o a = Const { getConst :: o }``

a phantom type that forgets about `a` and just carries around the `o`.

`Const o` can be made an instance of `Applicative` if `o` has a `Monoid`:

```
instance Functor (Const o) where
  fmap _ (Const o) = Const o

instance Monoid o => Applicative (Const o) where
  pure _ = Const mempty
  Const a <*> Const b = Const (a <> b)
```

Since we are forgetting about the `a`’s, the only way to combine two `Const o` is to use the monoid on `o`.

`Const o` is a pretty unusual applicative. It’s surprising that it satisfies all the laws by discarding so much. Let’s verify it is actually an `Applicative` by checking the laws:

First the `Functor` laws:

```
-- identity
fmap id (Const o) = Const o = id (Const o)

-- composition
fmap (f . g) (Const o) = Const o
  = fmap f (fmap g (Const o)) = (fmap f . fmap g) (Const o)
```

And now the `Applicative` laws:

```
-- identity
pure id <*> Const o = Const mempty <*> Const o
  = Const (mempty <> o) = Const o

-- composition
pure (.) <*> Const u <*> Const v <*> Const w
  = Const (mempty <> u <> v <> w) = Const (u <> v <> w)
  = Const u <*> (Const v <*> Const w)

-- homomorphism
pure f <*> pure x = Const mempty <*> Const mempty
  = Const (mempty <> mempty) = Const mempty = pure (f x)

-- interchange
Const f <*> pure y = Const f <*> Const mempty
  = Const f = Const mempty <*> Const f = pure ($ y) <*> Const f
```

OK, back to the exercise: how can we write `Const` as the composition of two applicative functors derived from monads? If we achieve this, we get the `Applicative` instance, and the proof of the laws, for free. That’s because there is an instance

``(Applicative f, Applicative g) => Applicative (Compose f g)``

`Const`’s applicative combines effects using a monoid. The other basic monad we know of with that same behavior is Writer, which also combines its payload using a monoid. So, the `Const o` applicative looks a lot like the `Writer o` applicative. We could write:

```
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Writer

newtype Const' o a = Const' { getConst' :: Writer o a }
  deriving (Functor, Applicative)
```

But this doesn’t really ignore the `a` type. When using this code, some type and value for `a` must be supplied. Worse, the applicative will actually carry out the work of operating on the `a`’s. What we want is some type that can ignore `a` completely. Enter `Proxy`, from `Data.Proxy` in the base library:

``data Proxy t = Proxy``

Again a phantom type, and this time a monad. There is a single inhabitant of this type, `Proxy`, so the `Applicative` doesn’t do any computation at all. The functor, applicative and monad laws for `Proxy t` are satisfied by construction, because every `Proxy t` is equal to every “other” `Proxy t`.
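Since every `Proxy t` is `Proxy`, the instances write themselves. Here is a sketch with a local copy of the type (base’s `Data.Proxy` already ships equivalent instances, so this is only to spell them out):

```haskell
-- Local copy of Proxy, just to show the instances base provides.
data Proxy t = Proxy deriving (Show, Eq)

instance Functor Proxy where
  fmap _ Proxy = Proxy

instance Applicative Proxy where
  pure _ = Proxy
  Proxy <*> Proxy = Proxy

instance Monad Proxy where
  Proxy >>= _ = Proxy
```

Every method can only return `Proxy`, which is why the laws hold by construction.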

Using `Proxy` we get a better candidate for the `Writer` result type:

``newtype Const' o a = Const' {getConst' :: (Writer o (Proxy a))}``

or, rewriting this in terms of `Compose`¹:

```
newtype Const' o a =
  Const' { getConst' :: Compose (Writer o) Proxy a }
  deriving (Functor, Applicative)
```

With this version of `Const'`, we don’t need to write the instance or prove the applicative laws, and yet, now we can use `Const'` to traverse combining with the monoid.

```
{-# LANGUAGE TupleSections #-}

import Data.Functor.Compose (Compose (..))
import Data.Monoid (Sum (..))
import Data.Proxy (Proxy (..))

sum :: (Traversable t, Num n) => t n -> n
sum = unwrap . traverse wrap
  where
    wrap   = Const' . Compose . writer . (Proxy,) . Sum
    unwrap = getSum . execWriter . getCompose . getConst'
```

1. As a reminder, functor composition is declared as

``newtype Compose f g a = Compose { getCompose :: f (g a) }``
Monoids talk https://blog.sebastian-galkin.com/posts/monoids-talk-at-scaladores/index.html 2018-06-01

# Monoids talk

On May 29th 2018 I gave a talk about Monoids at Scaladores, the Scala meetup group in São Paulo.

Here are the slides. If you want to see the code and tests, go to the GitHub project.

Scaladores should make the video recording available soon.

Our own mutable variables https://blog.sebastian-galkin.com/posts/our-own-mutable-variables/index.html 2017-10-08

# Our own mutable variables

Talking with a friend earlier today, we decided to experiment with declaring generic mutable variables that can be used in the `IO`, `ST` or `State` monads. I don’t think this is useful, but it was fun to write. Here is the result.

```
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FunctionalDependencies #-}

import Control.Monad (when)
import Control.Monad.ST
import Data.Array.MArray
import Data.Array.IO
import Data.STRef
import qualified Data.List.NonEmpty as NE
import Data.List.NonEmpty (NonEmpty)
import qualified Control.Monad.State.Strict as State
import Data.Foldable (forM_)

import Test.QuickCheck
import Test.QuickCheck.Monadic
```

Let’s define the abstract interface. `v` will be the type of variables that can operate in the monad `m`, holding values of type `a`. We need three operations: create a variable, read it, and write to it.

```
class Var v m a | m -> v where
  new :: a -> m (v a)
  get :: v a -> m a
  set :: v a -> a -> m ()
```

Notice we had to use `FunctionalDependencies` to ease type inference. I don’t like this; there is probably a better way.

A utility function to do both read and write passing through a function:

```
modify :: (Monad m, Var v m a) => (a -> a) -> v a -> m ()
modify f v = get v >>= set v . f
```

Now we can provide different implementations for variables. The first one is in `IO`:

``newtype IOVar a = IOVar (IOArray () a)``

We represent the value as an array of a single element. This is obviously overkill, but the goal was also to experiment with the low-level array API:

```
instance Var IOVar IO a where
  new = fmap IOVar . newArray ((), ())
  get (IOVar ar) = readArray ar ()
  set (IOVar ar) a = writeArray ar () a
```

The implementation is straightforward, reading and writing from/to the array. Creation needs to take care of wrapping the array in the `IOVar` constructor. Notice that `()` is a valid index type for arrays, and it makes obvious in the type the fact that the array has a single element.

Providing an implementation in the `ST` monad is not much harder. Here we could also use an `STArray`, but we go directly to `STRef` for simplicity:

```
newtype STVar s a = STVar (STRef s a)

instance Var (STVar s) (ST s) a where
  new = fmap STVar . newSTRef
  get (STVar ref) = readSTRef ref
  set (STVar ref) a = writeSTRef ref a
```

The code looks very similar to the `IO` case.
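To see the pieces in action end to end, here is a minimal self-contained sketch (restating the `Var` class, the `modify` helper and the `ST` instance from above) that bumps a counter five times:

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE MultiParamTypeClasses #-}

import Control.Monad (replicateM_)
import Control.Monad.ST (ST, runST)
import Data.STRef

class Var v m a | m -> v where
  new :: a -> m (v a)
  get :: v a -> m a
  set :: v a -> a -> m ()

modify :: (Monad m, Var v m a) => (a -> a) -> v a -> m ()
modify f v = get v >>= set v . f

newtype STVar s a = STVar (STRef s a)

instance Var (STVar s) (ST s) a where
  new = fmap STVar . newSTRef
  get (STVar ref) = readSTRef ref
  set (STVar ref) a = writeSTRef ref a

-- create a variable, increment it five times, read it back
counter :: Int
counter = runST $ do
  v <- new (0 :: Int)
  replicateM_ 5 (modify (+ 1) v)
  get v
```

`counter` evaluates to `5`.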

Finally, let’s try to implement a variable in the `State` monad. For a variable holding values of type `a`, it is enough to maintain state `a`. So we can define

``newtype StateVar a = StateVar a``

And now to create an instance of `Var` we can do

```
instance Var StateVar (State.State a) a where
  new x = State.StateT $ \_ -> return (StateVar x, x)
  get _ = State.get
  set _ = State.put
```

`get` and `set` are simple. `new` requires some care. Initializing the variable means setting the state to the given value, so it can then be read by `get`. So in `new` we need to ignore the current state and set it to `x`. The types are not enough to ensure correctness; there is a wrong implementation that also compiles:

``wrongNew _ = return . StateVar``

And that’s it, we have the three types of variables we wanted. Now we can write a stateful-looking algorithm; computing the maximum of a list is a good example. The way people usually do this in non-functional languages is:

• initialize a variable `max` with the first element of the list
• go through all the other elements:
  • if the current element is larger than `max`, update `max` with the new value
• when done iterating the list, return `max`

We can express exactly this algorithm with our variables; even better, we can do it in a way that is generic over every type of `Var` and every supported `Monad`:

``myMaximum :: (Monad m, Var v m a, Ord a) => NonEmpty a -> m a``

Take a look at the signature: given a non-empty list (`NonEmpty a`), we return its maximum in some monad (`Monad m`). We can do this as long as `a` can be ordered (`Ord a`), and there is some type of variable `v` which works for the monad `m` and the type `a` (`Var v m a`). The type signature expresses all this pretty well.

```
myMaximum xs = do
  max <- new (NE.head xs)        -- initialize a new var
  forM_ (NE.tail xs) $ \a -> do  -- for each element after the head
    maxSoFar <- get max          -- get the current maximum
    when (a > maxSoFar) $        -- compare with current element
      set max a                  -- update if needed
  get max
```

Just like in the description of the algorithm, we create a variable and update it for every element that is larger than the current maximum. When done iterating, we return the last value held by the variable.

Now we need to write some tests:

In `IO`

```
testIO :: NonEmptyList Int -> Property
testIO (NonEmpty xs) = monadicIO $ do
  mine <- run . myMaximum . NE.fromList $ xs
  assert $ mine == maximum xs
```

In `ST`

```
testST :: NonEmptyList Int -> Property
testST (NonEmpty xs) = monadicST $ do
  mine <- run . myMaximum . NE.fromList $ xs
  assert $ mine == maximum xs
```

And in `State`

```
testState :: NonEmptyList Int -> Bool
testState (NonEmpty xs) =
  State.execState (mine xs) whoCares == maximum xs
  where
    mine = myMaximum . NE.fromList
    whoCares = 42
```

Running the QuickCheck tests

```
main = do
  quickCheckWith opts testIO
  quickCheckWith opts testST
  quickCheckWith opts testState
  where
    opts = stdArgs {maxSuccess = 5000}
```

Success!

```
+++ OK, passed 5000 tests.
+++ OK, passed 5000 tests.
+++ OK, passed 5000 tests.
```
Shape and contents with traversables https://blog.sebastian-galkin.com/posts/traversable-shape-contents/index.html 2017-01-21

# Shape and contents with traversables

One of the first papers I could find that seriously studies the properties of Traversals is “The essence of the iterator pattern”, by Jeremy Gibbons and Bruno C. d. S. Oliveira (PDF)

In there, they show the idea of splitting a traversable collection into its contents and its shape, attributing the idea to Moggi et al. in “Monads, Shapely Functors and Traversals”. The idea is to traverse the collection, extracting shape and elements in a way that allows us to reconstruct the original structure.

To represent the contents of a traversable collection `Traversable t => t a` we can simply use `[a]`. For the shape, we need to preserve the traversable structure while discarding the elements: `Traversable t => t ()`.

## Extracting contents

Let’s start with the contents. We want a function of type

``Traversable t => t a -> [a]``

The type class function `traverse` can do the job of iterating over all the elements, giving us access to each of them; we just need to provide the right `Applicative f`:

```
traverse
  :: (Traversable t, Applicative f)
  => (a -> f b) -> t a -> f (t b)
```

What we want is to accumulate each element on a list, monoid style. Fortunately, every monoid, and lists in particular, can generate an applicative that uses the monoid operation to combine effects in `<*>`, and `mempty` for `pure`. Haskell calls this applicative `Const`, apparently because it looks like the `const` function: it just ignores its second argument:

```
newtype Const a b = Const { getConst :: a }

instance Functor (Const m) where
  fmap _ (Const v) = Const v

instance Monoid m => Applicative (Const m) where
  pure _ = Const mempty
  Const f <*> Const v = Const (f `mappend` v)
```

So this `Const` applicative behaves like the monoid in its first argument

```
$> Const [5] <*> Const [1]
Const [5,1]
```

and it’s exactly what we need to implement our `contents` function:

```
contentsBody :: a -> Const [a] b
contentsBody = Const . (: [])

contents
  :: Traversable t
  => t a -> [a]
contents = getConst . traverse contentsBody

$> contents (Just 42)
[42]

$> contents Nothing
[]
```

To make the examples more interesting, let’s define a `Tree` type

```
data Tree a
  = Empty
  | Leaf a
  | Node (Tree a)
         a
         (Tree a)
  deriving (Show)

instance Functor Tree where
  fmap = fmapDefault

instance Foldable Tree where
  foldMap = foldMapDefault

instance Traversable Tree where
  traverse f Empty = pure Empty
  traverse f (Leaf x) = Leaf <$> f x
  traverse f (Node l k r) =
    Node <$> traverse f l <*> f k <*> traverse f r
```

and try `contents` on it

```
$> t = Node left 3 right
       where left  = Node (Leaf 1) 2 Empty
             right = Node (Leaf 4) 5 (Leaf 6)

$> elems = contents t
$> elems
[1,2,3,4,5,6]
```

We will continue to use this tree `t` in future examples.

## Extracting shape

To extract the shape of the collection we want to `traverse` it ignoring all elements. The right applicative for that is `Identity`, found in `Data.Functor.Identity`, in the transformers package or in a modern enough GHC base (>= 4.8.0.0):

```
newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
  fmap f (Identity a) = Identity (f a)

instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity a = Identity (f a)
```

This applicative basically “does nothing”, which is what we want to extract the shape, no effects. Using this applicative and `traverse` we can write

```
shapeBody :: a -> Identity ()
shapeBody _ = Identity ()

shape
  :: Traversable t
  => t a -> t ()
shape = runIdentity . traverse shapeBody

$> shape t
Node (Node (Leaf ()) () Empty) () (Node (Leaf ()) () (Leaf ()))
```

## Contents and shape in one pass

If we want to compute both the contents and the shape, we can call `traverse` twice, but there is a better way. The product of two applicatives is guaranteed to be an applicative, unlike, for instance, the composition of two monads. That means we can write, in a generic way, the applicative instance for an arbitrary pair of applicatives. The `base` package already has this `Product` type in `Data.Functor.Product`:

```
data Product f g a = Pair (f a) (g a)

instance (Applicative f, Applicative g) =>
         Applicative (Product f g) where
  pure x = Pair (pure x) (pure x)
  Pair f g <*> Pair x y = Pair (f <*> x) (g <*> y)
```

As we can see, this applicative tracks the effects of `f` and `g` in parallel, using a tuple-like `Pair` constructor.

With this, and in a single traversal we can compute both the contents and the shape:

```
prod :: (a -> m b) -> (a -> n b) -> (a -> Product m n b)
prod f g a = Pair (f a) (g a)

decompose
  :: Traversable t
  => t a -> Product (Const [a]) Identity (t ())
decompose = traverse (prod contentsBody shapeBody)

$> decompose t
Pair
  (Const [1, 2, 3, 4, 5, 6])
  (Identity (Node (Node (Leaf ()) () Empty)
                  ()
                  (Node (Leaf ()) () (Leaf ()))))
```

## Reconstructing

Now the paper proposes to reconstruct the original traversable from its shape and contents as extracted in the previous sections. This sounds like a fold, but we can also think about it as a stateful computation. The state being tracked is the list of elements, the contents. For each element of the desired shape, we take the first element from the state and leave the rest as the new state. Since every monad is an applicative, we know we’ll be able to use the `State` monad in a call to `traverse`.

But there is one extra detail to take into account. If the number of elements provided as contents is not enough to fill the shape, we won’t be able to recreate the datastructure. For this reason the end result has to be optional. So we have a combination of a `State` applicative with a `Maybe`, in a composition of both effects.

Just like in the case of `Product`, the composition of two applicatives is also an applicative:

```
newtype Compose f g a = Compose { getCompose :: f (g a) }

instance (Applicative f, Applicative g) =>
         Applicative (Compose f g) where
  pure x = Compose (pure (pure x))
  Compose f <*> Compose x = Compose ((<*>) <$> f <*> x)
```

In this form, `Compose (State [a]) Maybe` gives us the exact combination of effects we want. We can now write the function to reassemble:

```
reassemble
  :: Traversable t
  => t () -> Compose (State [a]) Maybe (t a)
reassemble = traverse reassembleBody
```

This `reassembleBody` function must take a `()` and return the composed stateful/optional computation

```
reassembleBody :: () -> Compose (State [a]) Maybe a
reassembleBody _ = Compose (state takeHead)
  where
    takeHead (a:as) = (Just a, as)
    takeHead [] = (Nothing, [])
```

Now, to reconstruct the datastructure, we just need to feed the shape to `reassemble` and then run the resulting stateful computation:

```
reconstruct
  :: Traversable t
  => t () -> [a] -> Maybe (t a)
reconstruct = evalState . getCompose . reassemble
```

Here, we are discarding any extra elements provided.
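For example, taking a plain list of units as the shape (restating `reassembleBody` and `reconstruct` from above so the snippet runs on its own, with `Compose` from `Data.Functor.Compose` and `State` from the transformers/mtl packages):

```haskell
import Control.Monad.State (State, evalState, state)
import Data.Functor.Compose (Compose (..))

reassembleBody :: () -> Compose (State [a]) Maybe a
reassembleBody _ = Compose (state takeHead)
  where
    takeHead (a : as) = (Just a, as)
    takeHead []       = (Nothing, [])

-- feed a shape and a list of contents, maybe get the filled shape back
reconstruct :: Traversable t => t () -> [a] -> Maybe (t a)
reconstruct = evalState . getCompose . traverse reassembleBody
```

`reconstruct [(), (), ()] "abcd"` yields `Just "abc"`, discarding the extra `'d'`, while `reconstruct [(), (), ()] "ab"` yields `Nothing` because the contents run out.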

## Swapping data

With this machinery we can, for instance, write a generic way to swap the contents of two datastructures of different shapes.

```
swap
  :: (Traversable s, Traversable t)
  => t a -> s b -> (Maybe (t b), Maybe (s a))
swap x y = (reconstruct xShape yData, reconstruct yShape xData)
  where
    Pair (Const xData) (Identity xShape) = decompose x
    Pair (Const yData) (Identity yShape) = decompose y

$> swap t ['a'..'f']
( Just (Node (Node (Leaf 'a') 'b' Empty)
             'c'
             (Node (Leaf 'd') 'e' (Leaf 'f')))
, Just [1, 2, 3, 4, 5, 6])

$> swap t ['a'..'z']
( Just (Node (Node (Leaf 'a') 'b' Empty)
             'c'
             (Node (Leaf 'd') 'e' (Leaf 'f')))
, Nothing)
```

## Final notes

It’s a great paper, and I highly recommend reading it. This shape/contents trick is only a short section; the paper goes in several other directions with many other interesting ideas.

If you are an intermediate-level Haskell programmer, reading classic papers is great, particularly old ones (this one is from 2009, so not that old). Lots of ideas, plainly explained.

Misunderstanding Conway's law https://blog.sebastian-galkin.com/posts/misunderstanding-conways-law/index.html 2016-01-09

# Misunderstanding Conway's law

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

That statement is what people in the software industry call Conway’s law.

I like the less precise but funnier rendering by Eric S. Raymond:

If you have four groups working on a compiler, you’ll get a 4-pass compiler.

I don’t want to discuss the validity of this “law”; the Wikipedia article seems to point to some supporting evidence. My point in this post is not about the validity but about the interpretation of Conway’s law.

## What Conway’s law says

Conway’s law is an impossibility result. It tells us there is no way to have an architecture that doesn’t reflect the organization’s structure.

In particular, Conway’s law doesn’t provide any kind of strategy to find a good architecture. If anything, if Conway’s law is a true statement, it can help avoid time wasted trying to maintain a disagreement between organizational structure and architecture.

## What Conway’s law doesn’t say

Many software companies, particularly after they reach a certain size, tacitly use a different statement, calling it Conway’s law:

The proper architecture is the one that better reflects the organization’s communication structure

1. This new statement is absolutely not Conway’s law. It has little relation to it.
2. This new statement is neither proven nor obviously true, and in fact it’s pretty arguable.
3. This new statement is a (poor) strategy to define an architecture.

Companies sometimes use their existing communication structure to justify architectural decisions. This could be right or wrong, depending on the circumstances, but Conway’s law provides no support for the justification.

## Common patterns and mistakes

• Decisions made invoking Conway’s law usually completely ignore the problem being solved. “We should use architecture `A` because we have organization structure `S`” is not a statement about software.
• Making organizational decisions without taking into account the architecture is always a mistake. And a pretty common one.
• Companies solving vastly different problems are usually better served by different architectures. This means they should probably have different communication structures. And yet, copying organizational models is very common in the industry.
• When the problem being solved or the solution implemented change significantly, a change in organizational structure should be expected.
An exercise using Monoids https://blog.sebastian-galkin.com/posts/an-exercise-on-monoids/index.html 2015-12-28

# An exercise using Monoids

I found a fun exercise in “Functional Programming in Scala”, a book I’m reading these days. This is the exercise description, slightly generalized and translated to Haskell types:

Use a `Monoid` and `foldMap` to detect if a given `Foldable` is ordered.

Let’s start by thinking what the type of the requested function is. The argument is a `Foldable`, so we will need `(Foldable t) =>`. Since we want to check for ordering, we will need to compare elements in the datastructure, so we will also need `(Ord a) =>`. The result will of course be a `Bool` indicating if the data is sorted. Putting it all together, this is the type of the function we want:

``isSorted :: (Foldable t, Ord a) => t a -> Bool``

## Specifying the function with `QuickCheck`

Let’s now write down a couple of QuickCheck properties for the `isSorted` function:

• `isSorted` should be true for sorted lists:

  ```
  prop_isSortedForSortedLists :: [Int] -> Bool
  prop_isSortedForSortedLists = isSorted . sort
  ```

  If we first sort the list, then `isSorted` must return `True`.

• How about unsorted lists? A simple strategy we can use is to compare the output of `isSorted` to the output of a much simpler implementation of the same function. The simplest way I can think of to know if a list is sorted is to actually sort it and verify that the result is equal to the original.

  ```
  prop_isSortedIfSorted :: [Int] -> Bool
  prop_isSortedIfSorted as = isSorted as == isSorted'
    where
      isSorted' = sort as == as
  ```

The first property is redundant given the second one, but we keep it to make sure we test `isSorted` with enough sorted lists. Finally, `sort as == as` is not necessarily equivalent to `isSorted` unless `sort` is stable, but Haskell’s list `sort` is in fact stable, so we are good to go.

## Developing intuition

The exercise asks us to use `foldMap`

``foldMap :: (Foldable t, Monoid m) => (a -> m) -> t a -> m``

Using `foldMap` we have the opportunity to transform every element in the data structure into some `Monoid`, and then use `mappend` between pairs, starting with `mempty`. For a list, the end result looks something like:

``foldMap f [a,b,c] = f a <> f b <> f c <> mempty``

`<>` is simply an infix synonym for `mappend`, and we don’t need parentheses because `mappend` is associative.
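For instance, with the list `Foldable` and the familiar `Sum` monoid from `Data.Monoid`:

```haskell
import Data.Monoid (Sum (..))

-- foldMap Sum [1,2,3] = Sum 1 <> Sum 2 <> Sum 3 <> mempty = Sum 6
total :: Int
total = getSum (foldMap Sum [1, 2, 3])
```

`total` is `6`.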

The key is to find a `Monoid` that can keep track of the elements it has seen, and make sure the next one is in the right order.

My first intuition was to use some kind of wrapper over `Maybe a`. `Nothing` would represent an unsorted element detected, and `Just` would wrap the right argument to `mappend`. Something like:

```
newtype Sorted a = S (Maybe a)

instance (Ord a) => Monoid (Sorted a) where
  mempty = S Nothing

  S Nothing `mappend` _ = S Nothing

  _ `mappend` S Nothing = S Nothing

  S (Just a) `mappend` S (Just b)
    | b >= a = S (Just b)
    | otherwise = S Nothing
```

It turns out this simple approach has, at least, two problems:

1. `mempty` has the same representation as a detected ordering failure. This means, for instance, that an empty list would be marked as not sorted.

2. A more significant problem is that this version of `Sorted` is not even a `Monoid`, because it doesn’t satisfy the associativity law. Let’s see a counterexample:

  ```
  (S (Just 1) <> S (Just 0)) <> S (Just 1)
    = S Nothing <> S (Just 1)
    = S Nothing
  ```

  but associating to the right we get:

  ```
  S (Just 1) <> (S (Just 0) <> S (Just 1))
    = S (Just 1) <> S (Just 1)
    = S (Just 1)
  ```

  Those two results should be equal to have a valid `Monoid`.

To fix those problems we need to track more state. Problem 1 requires a way to differentiate `mempty` from ordering failure. For problem 2, maintaining only the largest/smallest element is not enough; it introduces associativity problems.

## A solution

We will need a type that can distinguish the “nothing is known” case from “ordering failed”. Let’s start with that:

``data Sorted a = Init | Failed``

`Init` is the initial, know-nothing state¹. As we mentioned in the previous section, tracking failure and the maximum element is not associative. What we can do instead is track the full interval known so far. In that case `mappend` can expand the interval with each new sorted element, or fail if the new element starts before the end of the previous interval.

Expanding our type to this we get:

``data Sorted a = Init | Failed | Range a a``

We will need a way to initialize a `Sorted` with a single element, but that’s easy, we can create the `Range` with the element as both start and end of the interval.

### The `Monoid` instance

Let’s write the `Monoid` for this type

```
instance (Ord a) => Monoid (Sorted a) where

  -- we start knowing nothing
  mempty = Init

  -- failure propagates contagiously
  Failed      `mappend` _           = Failed
  _           `mappend` Failed      = Failed

  -- we maintain any information we gain
  Init        `mappend` s           = s
  s           `mappend` Init        = s

  -- this is where the detection happens
  Range a1 b1 `mappend` Range a2 b2
    | a2 >= b1                      = Range a1 b2
    | otherwise                     = Failed
```

If we are mappending over a failure, there is nothing to do; we return the failure. Mappending with `Init` returns the other element. In the interesting case, mappending `Range`s, we verify that the new range starts past the end of the accumulated interval and return the expanded interval; otherwise, we fail.

Is this a `Monoid` now? Let’s see

• `mempty <> s = s <> mempty = s` is trivially true given the two `Init` equations in the instance.
• `Failed` on either side of `<>` returns `Failed`, which guarantees associativity whenever there is a `Failed` in the equation.
• When there is an `Init <> s` or `s <> Init` we can replace it with `s`, turning the three terms into two, and associativity holds.
• When we have three `Range`s:
  • If the ranges are properly ordered, each `<>` will expand the range:
```
(Range a1 b1 <> Range a2 b2) <> Range a3 b3
  = Range a1 b2 <> Range a3 b3
  = Range a1 b3

-- and

Range a1 b1 <> (Range a2 b2 <> Range a3 b3)
  = Range a1 b1 <> Range a2 b3
  = Range a1 b3
```
  • The cases with one or two failing pairs can also be proved easily; left as an exercise.

### The `isSorted` function

Now that we have our `Monoid`, writing `isSorted` is easy. We map over the `Foldable`, creating `Sorted` values with single-element ranges. Then we reduce with `mappend`, and finally verify that we don’t end up with a `Failed`:

```
isSorted :: (Foldable t, Ord a) => t a -> Bool
isSorted = not . isFailed . foldMap mkSorted

isFailed :: Sorted a -> Bool
isFailed Failed = True
isFailed _ = False

mkSorted :: a -> Sorted a
mkSorted a = Range a a
```

This code passes our specification; we are done.
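For reference, the whole solution collapses into a short self-contained snippet. One caveat: on modern GHC, `Monoid` requires a `Semigroup` instance, so the combining logic lives in `<>` here; otherwise the code is as above:

```haskell
data Sorted a = Init | Failed | Range a a

instance Ord a => Semigroup (Sorted a) where
  Failed      <> _           = Failed
  _           <> Failed      = Failed
  Init        <> s           = s
  s           <> Init        = s
  Range a1 b1 <> Range a2 b2
    | a2 >= b1  = Range a1 b2
    | otherwise = Failed

instance Ord a => Monoid (Sorted a) where
  mempty = Init

isSorted :: (Foldable t, Ord a) => t a -> Bool
isSorted = not . isFailed . foldMap (\a -> Range a a)
  where
    isFailed Failed = True
    isFailed _      = False
```

Note that `isSorted []` is `True`: the empty fold returns `Init`, not `Failed`, which is exactly what problem 1 above was about.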

## A sidenote on laziness

Our code has an interesting property: it can detect non-ordering in partial datastructures, that is, datastructures where bottom is present as an element. Let’s use the list `Foldable` to show this:

``isSorted [1, undefined, 2, 1, 3] = False``

This is nice, and we got it for free. `isSorted` only inspects the insides of the datastructure as much as it needs to make a decision. Since for lists `foldMap` is implemented in terms of `foldr`, we need the “evidence” that the list is unsorted to come from the right of the `undefined` element.

Let’s see how the evaluation proceeds

```
isSorted [0, undefined, 2, 1]

-- substituting the isSorted definition
= (not . isFailed . foldMap mkSorted) $ [0, undefined, 2, 1]

-- substituting the foldMap definition
= not . isFailed . foldr (mappend . mkSorted) mempty $ [...]

-- substituting the foldr definition and defining
-- r0 = Range 0 0; ru = Range undefined undefined;
-- r1 = Range 1 1; r2 = Range 2 2
= not . isFailed $ r0 <> ru <> r2 <> r1 <> Init
= not . isFailed $ r0 <> ru <> r2 <> r1
= not . isFailed $ r0 <> ru <> Failed
```

At this point we notice that our `<>` implementation doesn’t evaluate its left `Range` argument² when the right argument is `Failed`. So, even in the presence of a `Range undefined undefined`, evaluation can continue as:

```
= not . isFailed $ r0 <> ru <> Failed
= not . isFailed $ r0 <> Failed
= not . isFailed $ Failed
= not True
= False
```

## Code

The complete code for the exercise and its tests is on GitHub.

1. `Init` is not essential to the problem; it’s an artifact of having to use a `Monoid`, which requires `mempty`. An alternative would be to replace the `Monoid` with a `Semigroup` and use `foldr1` instead of `foldMap`.

2. The first pattern match in our `mappend` implementation is

``Failed `mappend` _ = Failed``

So, in fact, `<>` will evaluate the left argument, but only to Weak Head Normal Form; that is, only enough to know it’s not a `Failed`. It won’t touch the `undefined`.


I usually read two, three or more books during the same week. Unless there is one book that I’m so passionate about that I can’t put it down, I prefer different books for different times of the day. I try to read on different topics at the same time too.

Currently these are the books I’m working on:

## Functional Programming in Scala

by Paul Chiusano and Rúnar Bjarnason

It has an excellent approach to introducing functor, applicative and monad. It constructs them out of pattern repetition, by showing that we are writing the same code in many different domains.

Good exercises, and very good real-world examples of functional code.

## Basic Category Theory for Computer Scientists

by Benjamin C. Pierce

I’m group reading this with friends. Very short book, seems to be nice and to the point, but too soon to say.

## Anarchism

### A Collection of Revolutionary Writings

by Peter Kropotkin

This is currently my commute book. A collection of short essays, originally published mostly as pamphlets in Europe during Kropotkin’s exile. Kropotkin is quite a character: a former aristocrat, a scientist, and above all a profoundly humane person.

This one gets me plenty of weird looks on the Muni.
Beautiful Power Series https://blog.sebastian-galkin.com/posts/beautiful-power-series/index.html 2015-12-26

# Beautiful Power Series

I’m getting better at Haskell, or at least that’s what I choose to believe. Anyway, I recently joined Haskell-cafe, one of the e-mail distribution lists, and I found this great thread where Kim-Ee Yeoh links to the gorgeous article “Power serious: power series in ten one-liners”.

In the article, Doug McIlroy defines, in a few one-liners, infinite power series for trigonometric functions, exploiting the power of Haskell’s lazy evaluation.

As a teaser, this is the code for the `sin` and `cos` series¹:

```
sins = int coss
coss = 1 - int sins
```

How awesome is that!

I have to find some time to play with the code. It makes me happy that this can be written so simply and beautifully, we must be doing something right.
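To play along before reading the paper, here is a minimal self-contained rendering of the idea. The series-as-list-of-coefficients representation and `int` follow the article; `oneMinus` is my stand-in for the article’s `Num` instance on series, which is what lets McIlroy write `1 - int sins` literally:

```haskell
-- a power series is the infinite list of its coefficients

-- integration shifts the coefficients, dividing by 1, 2, 3, ...
int :: [Double] -> [Double]
int fs = 0 : zipWith (/) fs [1 ..]

-- 1 - f, coefficient by coefficient
oneMinus :: [Double] -> [Double]
oneMinus (f : fs) = (1 - f) : map negate fs
oneMinus []       = [1]

-- the mutually recursive one-liners from the article
sins, coss :: [Double]
sins = int coss
coss = oneMinus (int sins)
```

Laziness does all the work: `take 6 sins` produces the coefficients 0, 1, 0, -1/6, 0, 1/120, the Taylor series of `sin`.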

1. `int` is integration, and it can also be trivially defined.

]]>
Why is Applicative More Efficient Than Monad https://blog.sebastian-galkin.com/posts/why-is-applicative-more-efficient-than-monad/index.html 2015-12-21T00:00:00Z 2015-12-21T00:00:00Z

# Why is Applicative More Efficient Than Monad

It is well known that `Monad` is more powerful than `Applicative`. Using the `Monad` methods you can implement the `Applicative` ones, to the point that in recent GHC versions we have

``class Applicative m => Monad m``

with equivalence laws

``````pure = return

(<*>) = ap``````

and default implementation

``````ap :: (Monad m) => m (a -> b) -> m a -> m b
ap m1 m2 = do
  x1 <- m1
  x2 <- m2
  return (x1 x2)``````

There are many good examples of `Applicative`s that are not, and cannot be, `Monad`s, like `Validation` and `ZipList`.
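`ZipList` makes the difference concrete: its `<*>` combines the two lists pointwise, a behavior no lawful `ap` derived from a list-like `>>=` can reproduce. A quick illustration:

```haskell
import Control.Applicative (ZipList (..))
import Control.Monad (ap)

-- ZipList's applicative zips the lists pointwise:
zipped :: [Int]
zipped = getZipList ((+) <$> ZipList [1, 2, 3] <*> ZipList [10, 20, 30])

-- The plain list monad's ap takes the cartesian product instead:
carted :: [Int]
carted = ((+) <$> [1, 2, 3]) `ap` [10, 20, 30]
```

Here `zipped` is `[11, 22, 33]`, while `carted` is `[11, 21, 31, 12, 22, 32, 13, 23, 33]`.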

But the question I was asking myself these days:

If we have an `Applicative` that is also a `Monad`, is there any reason to prefer `<*>` over `ap`?

## Developing some intuition

From the `Monad` laws above we know that they produce the same result, but could there be important performance differences?

Comparing the `Monad` and `Applicative` minimal implementations, we can expect some kind of performance difference. After all, with `>>=` the continuation function has to create the monadic context dynamically:

``(>>=) :: Monad m => m a -> (a -> m b) -> m b``

The right argument to `>>=` has type `Monad m => a -> m b`, so it has to create the monadic context during execution. On the other hand, for `Applicative`, the output context is fully defined by the “program”, not by the evaluation of the function:

``(<*>) :: Applicative f => f (a -> b) -> f a -> f b``

When we do `ma >>= f`, `f` is in charge of creating the final monadic context. But `f :: a -> m b` doesn’t have access to the original context it is being chained to. So the new context gets created with no knowledge of the original one.

On the other hand, when we do `fk <*> fa`, the `<*>` operator itself is in charge of creating the output applicative context, and it does so with access to the initial one. That leaves room for optimizing the creation of the output context.
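The list instances already show this difference in shape; as an illustration (the helper names here are mine): the shape of `fs <*> as` is fully determined by the shapes of the inputs, while under `>>=` each element can contribute an arbitrary number of results.

```haskell
-- With <*>, the output length depends only on the input lengths,
-- never on the values inside: 2 functions * 3 arguments = 6 results.
fixedShape :: Int
fixedShape = length ([(+ 1), (* 2)] <*> [10, 20, 30])

-- With >>=, the continuation decides the shape at runtime:
-- each x contributes x results, which nothing can predict up front.
dynamicShape :: [Int]
dynamicShape = [1, 2, 3] >>= \x -> replicate x x
```

Here `fixedShape` is `6` and `dynamicShape` is `[1, 2, 2, 3, 3, 3]`.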

Based on this intuition, let’s try to find an example in a monad where creating the output context could be optimized with the extra knowledge.

## An array Monad

Regular Haskell arrays, in `Data.Array`, are not monads or applicatives. They are too powerful, allowing arbitrary index values. For example, let’s take the right identity law for monads:

``  as >>= return = as``

`as` will have certain index values, but we have no way to make `return` create an array with the same index values `as` has, for all `as`. There is no way to satisfy the law.
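To make the problem concrete (the bounds here are chosen just for illustration): `return` would have to produce a one-element array, but it cannot know which index the law requires it to use.

```haskell
import Data.Array (Array, bounds, listArray)

-- An array indexed from 5 to 7:
as :: Array Integer Char
as = listArray (5, 7) "abc"

-- bounds as == (5, 7), but any definition of `return` for Data.Array
-- would have to commit to fixed bounds, say (0, 0), up front. So
-- `as >>= return` could never reproduce the bounds (5, 7) for every `as`.
```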

But we can create our own, much simplified, 1D array that can in fact be turned into a Monad. Let’s write the most basic array, in fact using Haskell’s `Data.Array` as a backend:

``````import Control.Monad (ap)
import Data.Array
import Data.Foldable (toList)
import Data.List (genericLength)

data Arr a = Arr {toArray :: !(Array Integer a)}

fromList :: [a] -> Arr a
fromList [] = error "No empty arrays"
fromList as = Arr $ listArray (0, genericLength as - 1) as``````

We can only create these arrays from a list. These are terrible arrays, performance is going to be awful, but that’s not the point.

Now we provide instances for `Functor`, `Applicative` and `Monad`.

``````instance Functor Arr where
  fmap f = Arr . fmap f . toArray``````

Nothing fancy there, the usual unwrapping and wrapping and delegating to `Data.Array`’s implementation.

For `<*>` to behave similarly to lists, we want to apply every function in the left array to every value in the right array. We can use a list comprehension and the fact that `Data.Array`’s arrays are `Foldable`, so they provide `toList`. While we are at it, we make our `Arr` `Foldable` too:

``````instance Foldable Arr where
  foldMap f = foldMap f . toArray

instance Applicative Arr where
  pure = fromList . pure
  fs <*> as = fromList [f a | f <- toList fs, a <- toList as]``````

Again, performance is going to be horrible: we turn the arrays into lists, use the lists for the cartesian product, and finally turn the resulting list back into an `Arr`. The key here is that we only create a single `Arr`: since we know the applicative contexts on the left and right, we know exactly what the size of the resulting array will be, and we can just construct it.

On the other hand, when we make `Arr` a `Monad`

``````instance Monad Arr where
  return = fromList . return
  as >>= f = fromList $ concatMap (toList . f) as``````

There is no way around it: each call to `f` creates a new `Arr`, and finally we need to create yet another big array. Since `Data.Array` is strict in its indices, this is more work than for the `Applicative` case.

### Benchmark

Let’s run the same operation using both the `Applicative` and the `Monad` instances. The fantastic criterion library can be used to get some numbers.

``````applicativeWork :: (Applicative f, Foldable f, Num a) => f a -> a
applicativeWork as = sum $ (+) <$> as <*> as

monadWork :: (Monad f, Foldable f, Num a) => f a -> a
monadWork as = sum $ ((+) <$> as) `ap` as``````

We sum the results of a cartesian product, both on the applicative context, using `<*>`, and on the monadic context using `ap`. And now we drive it with criterion’s `defaultMain` and a reasonable size. As a control case, we do the same for lists.

``````import Criterion.Main

main = do
  let n  = 500 :: Int
      l  = [0..n]
      as = fromList l

  defaultMain [
      bgroup "array" [
        bench "applicative" $ nf applicativeWork as
      , bench "monad"       $ nf monadWork as]

    , bgroup "list" [
        bench "applicative" $ nf applicativeWork l
      , bench "monad"       $ nf monadWork l]
    ]``````

We see that for `Arr` the `Applicative` is around three times faster than the `Monad`, while for lists the times are exactly the same.

So that’s one reason to prefer `<*>` over `ap`: for some monads, the former can be vastly more efficient.

## Code

You can find all the source code for this post on GitHub.

]]>