FP is no silver bullet but the affordances it gives you, I believe, are worth taking the time to learn. It's not as hard as it is made out to be but it's also something that didn't click for me immediately. It didn't click until I wrote a lot of it. It really didn't click until I worked in a FP code base with a few other people. Unfortunately, FP suffers from a chicken and egg problem. I'm a fan of the style and for me it lets me produce the best code I can in a team environment. The worst thing I can say about spending time learning FP is that I think way more about how I put together a program, in any language, than I did before. I spend a lot more time thinking about design decisions and trade-offs and what good design looks like.
I'm still learning. This series is my attempt to shed some light on the motivation of why you would want to use a strongly typed functional programming language along with an easy introduction to programming with effects, functor / applicative / monad / traversable, type classes, and programming within a context. The following series of blog posts is an attempt to short-cut the learning process for others. The resources are getting better every year and hopefully I can contribute back:
- Part 1 - Data / Types / Referential Transparency / Value prop (this post)
- Part 2 - Programming with effects
- Part 3 - Typeclasses
- Part 4 - Practical effect manipulation with traverse and friends
  - based on a workshop repository you can work through
  - the workshop questions
  - the workshop solutions
- Part 5 - Basics of Final-tagless / ZIO
They are originally based on a series of five presentations I ran at my company in an attempt to get in front of some anti-Scala sentiment that pervaded much of the org at the time. The org suffered from the typical problem of hiring Java devs and telling them to write Scala. The series assumes the reader has at least a few months of Scala programming experience.
About me
I did not go to school for CS; however, I do have three degrees (an undergrad in Geological Engineering, a M.A.Sc. in Geological Engineering, and a master's in Data Science) so I've spent a lot of time learning to learn. I was a professional engineer building open pit mines who wrote small amounts of Fortran / VBScript / Python /R and built small webapps using JS (+vue). I didn't get serious about programming until my early thirties. I'm not an expert or the best programmer in the world. My engineering brain appreciates the FP focus on compile time safety, and the (greater) ability to make good models that help me understand what the hell is going on. I appreciate the guard rails I'm able to put up in a language with a rich type system that helps me avoid many footguns (but not all of them).
Outline
- Motivation
- What is FP and why?
- Data vs Objects
- Referential transparency
- Even more constraints
- Trade-offs
- Where to go from here
OOP
Most programmers coming to scala, even junior developers, are familiar with an OOP model and probably have a background similar to:
- Some Java/C++ from school
- Possibly Go from a networking class (not OO, but types)
- JS and (maybe) TS for the frontend folks
Locally here in Vancouver, juniors may be exposed to some functional concepts as the first CS course at UBC is all done in Racket. There is a fourth year course in Haskell but I haven't heard too many good things about it.
If you ask a bunch of developers about affordances/features of the OOP model you will probably get something along the lines of:
- everything is an object (in Smalltalk anyway; in Java almost everything)
- internals should be hidden away
- What things are present in my system
- what messages do they send
- how do they represent internal info
- Coupling of state and behavior
- inheritance?
- Dependency injection frameworks?
- SOLID
- mutation, mutation, mutation
- what is a constructor but something that mutates an uninitialized object into its initial state?
Sandi Metz, one of my favorite OO presenters, gives a good definition of idiomatic OOP:
- Object-oriented programming expects you to be:
- anthropomorphic,
- polymorphic,
- loosely-coupled,
- role-playing,
- factory created objects
- that communicate by sending messages
FP Style
Functional programming is a huge spectrum. If you ask a group of people what FP means you will get many different descriptions, all of which are right and wrong in some way:
- What data do I have and how do I transform it
- First-class functions (but even Java has something like this! Ditto Python)
- Immutability? (but even Haskell has mutable variables)
- Higher-order functions? (functions that take or return functions)
- Types (but JS is rather functional and has no types; see also: Lisp, Racket, Erlang)
- Type classes? (but Rust has these and is definitely not an FP language)
- Higher-kinded types (but F# is functional and doesn't have them; ditto Elm)
- Referential transparency?
This post is all about "statically typed functional programming with higher-kinded types" and mainly talks about Scala.
Functional programming is a spectrum:
```
FP ---------------------------------------------------------------> weirder stuff (Idris, Coq)

C,     Rust?   Java?   Python?           Kotlin   F#,      SCALA,   Haskell
ASM,   C++?            Lisp?,                     JS,
Go?                    Racket?, Scheme?           OCaml?
```
This ranking is totally arbitrary and invented by me on the spot. The further to the right you go, the more natural programming in a functional style is, due to affordances of the compiler. That's not to say you can't use this style further to the left, but if you do you are probably fighting the compiler and libraries more, making it ergonomically terrible.
Goals and Good Design
The users of your product don't care how the sausage is made. No one cares that Facebook has some core of PHP (or that random PHP-like language Hack they made). Our job as developers is to:
- build things that provide business value to our employer
- minimize the cost of change
- so we can build more things
- we want things to evolve easily since business needs change rapidly
- minimize the cost of fixing bugs
- so we can spend more time building things to provide business value
- we want to understand old code quickly
- we want other people to wrap their heads around our code quickly
- make things not blow up at runtime
- so those juicy high ARR customers keep giving us those dollars
This is all of software development. Many techniques that are good in Object Oriented Programming are also good ideas in Functional Programming. The analogy I like to use in these presentations is:
- Functional programming is like skiing
- Object oriented programming is like snowboarding
Sure, they are different, but you're still going down the same mountain. They are more alike than not. So you still want:
- small scopes (e.g. single responsibility)
- dependency injection of some kind (loose coupling, easy testing)
- private/safe constructors
- clean APIs
- good domain models
- you know, maybe don't use Int, which has 2^32 possible values, for the ~10 http codes in your application. Use an enum (or something else along those lines)
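As a sketch of that last point, here is one way such an enum might look in Scala (the names and the choice of codes are illustrative, not from any particular application):

```scala
// A sketch: model only the status codes our app actually uses,
// instead of all 2^32 values of Int.
sealed trait HttpCode { def value: Int }
case object Ok                  extends HttpCode { val value = 200 }
case object NotFound            extends HttpCode { val value = 404 }
case object InternalServerError extends HttpCode { val value = 500 }

// Raw Ints only appear at the boundary, where we parse them once.
def fromInt(i: Int): Option[HttpCode] = i match {
  case 200 => Some(Ok)
  case 404 => Some(NotFound)
  case 500 => Some(InternalServerError)
  case _   => None
}
```

Inside the domain logic, a value of type HttpCode can only ever be one of these three things, which is exactly the point.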
You will also write terrible code using both approaches. Just doing FP doesn't guarantee success.
Type Systems
One big benefit of a functional programming language is a rich, expressive type system. It lets you do things that you just can't do as easily, succinctly, or elegantly in simpler type systems like Java's. A rich, expressive type system helps you:
- define more accurate domain-specific models
- communicate clearly with your team
- your team could also be your future self
- analyze code
- in code review, rich types can help you understand what is going on
The types act as a weak form of documentation. They don't replace good documentation, but in the absence of comments they can help you figure out what's going on. This, combined with good domain modelling, can really help onboarding new people into the code base to work effectively.
As Kris Jenkins says in his types presentation, which this post draws inspiration from, a good type system lets you:
- describe the stuff
- describe the relationships between stuff
- describe the context of stuff
This is the entire job as a developer and feeds into a big theme in FP. FP is not about mathematical correctness or proving your program is correct. FP is about constraining the stuff so that when things go wrong, your search space for what can go wrong is limited. It's about making impossible states unrepresentable in the relationships between stuff (much easier with a rich type system than without one). Not being able to do something stupid is the best kind of unit test. And if you really go down the rabbit hole, it's about constraining the context in which something is running which further restricts the search space of what can be going wrong. This is a huge benefit but one that doesn't seem obvious until you've worked in a code base in this style. Everything is a little bit easier to understand/find/diagnose. But if you've never been in this kind of code base, this post probably won't change your mind: you have to discover this for yourself.
Describing the stuff
Programming languages give us tools to represent things:
- Sometimes things are values (the integer 10), the same as any other integer 10
- Sometimes things have identity, this 10 is different than that 10
- Java hashcode/equality people screaming here
- Sometimes things become other things (that 10 becomes a 20)
- Sometimes we want internals to be hidden away, other times not
- Sometimes we care about how things get used and or extended.
OO goes down the Object path and FP goes down the data path. This leads to the classic expression problem.
Classes:
- it is very cheap to add a new kind of thing (variant)
- just add a new subclass, and as needed define specialized methods
- it is very expensive to add a new operation on things
- you have to add a new method declaration to the superclass
- potentially add a method definition to every existing subclass
- in practice, the burden varies depending on the method
Data:
- it is very cheap to add a new operation on things
- this is just a function
- All the old functions on those things continue to work unchanged
- it is very expensive to add a new kind of thing (variant)
- you have to add a new constructor to an existing data type
- you have to edit every function that uses that type
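The data side of that trade-off can be sketched in Scala with a toy ADT (the shapes are my own illustrative example):

```scala
// Illustrative ADT: two variants today.
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Square(side: Double) extends Shape

// Cheap: a brand-new operation is just another function.
def area(s: Shape): Double = s match {
  case Circle(r)    => math.Pi * r * r
  case Square(side) => side * side
}

def describe(s: Shape): String = s match {
  case Circle(r)    => s"circle of radius $r"
  case Square(side) => s"square of side $side"
}

// Expensive: adding `case class Triangle(...) extends Shape`
// would force edits to area, describe, and every other match on Shape.
```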
Data
In OO, say Java, the style really doesn't want you to make just data. Data doesn't really exist on its own. There is a lot of syntax noise around it. It's not very concise. Objects are all about coupling shared (usually mutable) state and behavior.
```java
public class Order {
    private String id;
    public String getId() { return this.id; }
    public void setId(final String id) { this.id = id; }

    private int value;
    // ...
    private String paymentMethod;
    // ...
}
```
This is a lot of ceremony for simple data and it gets worse the more complicated the data are.
You might think that the ubiquitous JSON we pass around in web apps is a data description but it is not. JSON is an infinite number of examples of what data may look like:
```json
{
  "id": "ORD001",
  "value": 315.0,
  "payment_method": "Visa"
}
```
It is maybe more precise, but it is not a data description. You might think Protobuf would be better, but it's not. It's more descriptive than JSON, but the proliferation of optional values ends up producing a bit of a franken-description. A wire format is not a rich domain model. Wire formats are related to, but not identical to, the actual data the application would like to work with.
Instead, we would like a type system that lets us concisely describe data in a way that is both human- and machine-readable. Scala comes close! It's not as concise as something like Haskell or Elm, but it's much more concise than Java. We have support for higher-order functions, generics, higher-kinded types, and ADTs.
Product Types
Every language has what are called Product types:
```c
// C
struct Order {
    char   orderId[50];
    double value;
    char   paymentMethod[50];
};
```
```scala
// Scala
case class Order(orderId: String, value: Double, paymentMethod: String)
```
```haskell
-- Haskell
data Order = Order
  { orderId       :: String
  , value         :: Double
  , paymentMethod :: String
  }
```
They are called product types because of their number of inhabitants. If a type holds an Int and a String, how many inhabitants are there? The number of possible Ints times the number of possible Strings: 2^32 multiplied by infinity.
Sum types
Most languages support these through enums, but some support them directly:
```scala
// Scala
sealed trait ClothingSize
case object Small  extends ClothingSize
case object Medium extends ClothingSize
case object Large  extends ClothingSize
case object XLarge extends ClothingSize
```
```haskell
-- Haskell
data ClothingSize = Small | Medium | Large | XLarge
```
A type level ClothingSize is not the String "small". Small is Small.
Algebraic Data Types (ADT)
Despite the scary-sounding name, they are simple: a mixture of sum and product types! This is a powerful way to narrowly define your data, and it makes for great domain objects. Let's look at a rich type describing the possible responses to some sort of http OrderRequest:
```haskell
data OrderResponse
  = PurchaseSuccessful { newOrder :: Order }
  | PaymentFailed { paymentProvider :: ProviderId
                  , failureMessage  :: String }
  | NetworkError { statusCode :: Int
                 , message    :: String }
```
This is an ADT with three variants. An OrderResponse is either a PurchaseSuccessful, a PaymentFailed, or a NetworkError. Each of the variants is a rich type carrying additional domain-specific information.

Critically, there is no subtyping here: OrderResponse is a sum type. The above example comes from Elm, which has no objects, and whose type system has no notion of sub-typing.
This is a rich domain model expressed very concisely. It conveys a ton of information in a small amount of space. It's nearly as concise in scala:
```scala
sealed trait OrderResponse
case class PurchaseSuccessful(newOrder: Order) extends OrderResponse
case class PaymentFailed(paymentProvider: ProviderId, failureMessage: String) extends OrderResponse
case class NetworkError(statusCode: Int, message: String) extends OrderResponse
```
The story in Scala is a little murkier. We can define the ADT above, and it will behave the same way and support exhaustive pattern matching, but because we run on the JVM, a NetworkError will be a subtype of OrderResponse. When thinking about FP, it's best to avoid leaning on subtyping even though it leaks through the types in Scala.
An ADT lets you encode your assumptions and speak in your domain's language extremely cheaply and thoroughly. In the above case, if we pattern match on an OrderResponse, the compiler will warn us if we haven't handled all of our cases. This is very useful!
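As a sketch of that exhaustiveness check (Order and ProviderId are stubbed out so the snippet stands alone, and the handler strings are invented for illustration):

```scala
// Minimal stubs so the ADT compiles on its own.
case class Order(id: String)
case class ProviderId(id: String)

sealed trait OrderResponse
case class PurchaseSuccessful(newOrder: Order) extends OrderResponse
case class PaymentFailed(paymentProvider: ProviderId, failureMessage: String) extends OrderResponse
case class NetworkError(statusCode: Int, message: String) extends OrderResponse

def render(r: OrderResponse): String = r match {
  case PurchaseSuccessful(o)   => s"Order ${o.id} confirmed"
  case PaymentFailed(p, msg)   => s"Payment via ${p.id} failed: $msg"
  case NetworkError(code, msg) => s"Network error $code: $msg"
  // Delete any case above and, because OrderResponse is sealed,
  // the compiler warns that the match is not exhaustive.
}
```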
Relationships between stuff
These are just functions (or well, methods in OOP). In a dynamic language, you have to keep track of the relationship in your head:
```javascript
// javascript
function withdraw(userId, amount) { ... }
```
But what does withdraw even return? A double? A withdraw request that is executed later? In this simple example, we can guess that userId is a string and an amount is a Double but even then we are not sure.
In a typed language like Java this is more clear:
```java
// Java
public double withdraw(String userId, int amount) { ... }
```
This is an improvement. I like typed languages because they help me understand what is going on. I have to keep track of less in my brain. The compiler errors on changes (aka: the fastest unit tests you'll ever write for free) are a bonus compared to the code comprehension improvements.
There is a problem with the above though: it's kind of the wrong way around, isn't it? I think this is improved in Scala, where the return type is on the right:
```scala
def withdraw(userId: String, amount: Int): Double = { ... }
```
If we go an ML like language like Haskell or Elm or F# this is even further improved by having the type signature separated from the function definition.
```haskell
withdraw :: String -> Int -> Double
--            ^        ^       ^
--            |        |       |
--            needs ---'       |
--                             |
--            produces --------'
```
This is not the function definition but the type signature. Looking at the type signature directly, as above in the ML case (or in your head in the case of Scala), helps you:
- understand what's going on (Weak documentation)
- raise code smells
For example, a function signature like String -> Int -> String should be setting off alarm bells. Depending on the code base, you see these alarm-bell-ringing type signatures all over the place (especially in legacy code). These smelly signatures are a sign you can improve your domain model. The world's vaguest type signature is String to String, and you see it all the time:

```scala
def foo(s: String): String = { ... }
```
Without looking at the code (e.g. in a review situation, when you are just starting to get a grip on what the changes are all about) it tells you nothing. What is foo? This function has many possible implementations, and the name and the type tell us nothing:
- is it toUpper, toLower, abbreviate?
- a bunch of ML math followed by a .toString?
- an assembly compiler that runs a program and outputs a string?
Here is a painful one I saw in an app talking to Facebook. In the following case, SocialProfileInfo is the wire transport object.

```haskell
buildLocationPageProfiles :: List SocialProfileInfo -> String -> Future List SocialProfileInfo
```
The type signature is not informative, to say the least. What is different between the SocialProfileInfo of my input and the stuff on the output? What is happening? It could easily be improved:

```haskell
buildLocationPageProfiles :: List LocationProfileInfo -> ParentPageId -> Future List SocialProfileInfo
```
Types
Types help us describe the inputs and outputs between stuff. More importantly:
- they help us with compiler errors (passing the wrong type)
- they help us constrain the world of possibilities
We get this with any typed language, and it's worth spelling out why using good types is important:
- types can be too big
  - e.g. using Int to hold an http status code in your domain model
  - that model admits 2^32 possible values for what is really a small handful of status codes
- types are a set and have a cardinality
- the cardinality of a type should fit the business requirements
In languages with first-class functions, functions are also types. That is, A => B is itself a type.
Cardinality refers to the number of inhabitants. You want to keep this value as small as possible for your domain. Consider the following two possible definitions for getting the currency of a country:
```scala
def getCurrency(country: String): Option[String] = ???
def getCurrency(country: Country): Currency = ???
```
It's easy to see which has the lower cardinality and, consequently, which one will be much easier to understand and test. If I were jumping into a code base inherited from another team, I know which one I would prefer.
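A sketch of how the second, low-cardinality version might look, using a deliberately tiny two-country world (the countries and currencies are illustrative):

```scala
// Illustrative two-country world: the input has cardinality 2,
// not "every possible String".
sealed trait Country
case object Canada extends Country
case object Japan  extends Country

sealed trait Currency
case object CAD extends Currency
case object JPY extends Currency

// Total function: every Country has a Currency, so no Option is needed.
def getCurrency(country: Country): Currency = country match {
  case Canada => CAD
  case Japan  => JPY
}
```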
Matt Parsons has an excellent blog post on using types to improve program safety. There are two techniques when designing types (and the code that uses them):
- Expansion
- Restriction
For example, if we are taking in some input that doesn't map to the business domain we can:
- push the responsibility forward (expansion)
- the caller of the code can provide whatever value they want
- some condition might fail
- caller of the code needs to handle that failure later on
You see this style often when using Option. For example, consider a function that takes in a list and concatenates all the elements, but has a business rule saying the result can never be an empty string:
```scala
def concatStrings(xs: List[String]): Option[String] =
  if (xs.isEmpty) None else Some(xs.mkString)
```
This is often the easiest approach to reach for, but it tends to complicate the code base. Lots of our business logic ends up wrapped in Options. Functions may start taking Optional parameters. It gets harder to understand what's going on, even though we have nice pure functions that don't throw exceptions.
Alternatively, we can push the responsibility backwards (restriction):
- we restrict the range of inputs we will take
- instead of taking Int, we only take Natural numbers
- instead of taking Int for http code, we take an enum of HttpCodes
In this case, the caller is responsible for constructing the right type.
```scala
def concatStrings(xs: NonEmptyList[String]): String
```
In this version the function is much simpler; however, the caller needs to do more work in order to use it. The benefit is that downstream programs no longer need to worry about invalid inputs. There is a tension here due to ergonomics, but the more of this you can do, the simpler your business logic becomes where that logic is complicated.
Pushing safety forward (expansion) does not make things simpler downstream. For example, we've probably all seen someone take a DTO full of Optional values and use them deep in the business logic. Suddenly functions are doing ad-hoc validation all over the place, mixed in with the actual business logic, and it's hard to understand what's going on. Pushing safety backwards does make things simpler downstream, by forcing the caller to provide the right thing, e.g. validating your DTO (json) at the service edge into an internal domain model.
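A sketch of that edge validation with a made-up DTO: the optional fields are converted exactly once, so everything downstream sees only the domain type:

```scala
// Wire-level DTO: everything is optional because JSON can't be trusted.
case class UserDto(name: Option[String], age: Option[Int])

// Internal domain model: the awkward cases are already gone.
case class User(name: String, age: Int)

// Validate once, at the boundary. Downstream code never sees an Option.
def validate(dto: UserDto): Either[String, User] =
  for {
    name <- dto.name.filter(_.nonEmpty).toRight("missing name")
    age  <- dto.age.filter(_ >= 0).toRight("missing or negative age")
  } yield User(name, age)
```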
The key intuition here is that when we restrict what we can do, it's easier to understand what we can do. This talk by Runar Bjarnason goes over the concept in great detail (the first 20 minutes or so are worth watching). This is a good idea in any typed language, and an expressive, concise type system lets us do it all over the place because the ergonomics are friendlier.
Generics
Scala allows for generic parameters (and something called higher-kinded types, which we will cover later). When you step into a library like Monix, Cats, or ZIO you will usually be confronted with a wall of single-letter type variables. The reason this is often fine is that you don't know anything about them! If you did know something about them, you would give them a good name; but you don't, so it's just A, B, F, T, Z, whatever.
The generics in scala are no scarier than those in Java. For example, some random code from Guice:
```java
public BindingBuilder<T> toProvider(Provider<? extends T> provider) {
    return toProvider((javax.inject.Provider<T>) provider);
}
```
or csharp:
```csharp
class NodeItem<T> where T : System.IComparable<T>, new() { }
class SpecialNodeItem<T> : NodeItem<T> where T : System.IComparable<T>, new() { }
```
The better way to view generics, in any language, is this: the more kinds of things something can potentially be, the less we can reason about what it actually is. This is kind of the opposite of what you might expect. You would think that by making a function take a generic parameter you are inviting the world in and it's going to be complicated, but it's actually less complicated than knowing your type is, say, String. Because you don't know what A is, you really can't do much with it (assuming there is no F-bounded polymorphism going on).
How many implementations are there of each type signature in the following:
```scala
def foo(a: Int): Int
def foo(s: String): String
def foo[A](a: A): A
```
There are nearly infinite implementations of a function fitting the first type signature. The second is our friend String -> String, which also has infinite implementations. The last, even though it's generic, is actually the most constrained: there is only one pure, total function you can write that fits the generic foo, and that is identity. Again, this is counter-intuitive! But it's the very fact that we don't know what A is that gives us this property. By making something more abstract we've made it more precise. Freedom at one level leads to restriction at another.
Referential Transparency
Pretend we are not running on the JVM in Scala but in some more restrained system like the following:
- we have types (Int, String, generic A, B, etc.)
- we have pure functions (f) that map an A to a B
  - output is determined solely by the input
  - evaluating an expression always results in the same answer
  - we can always inline a function, or factor one out
  - the function doesn't do anything else
    - it doesn't update a counter, call a DB, save a file, print to screen
We call these pure functions and this leads to Referential Transparency:
- we can substitute a variable for the expression it's bound to
- we can introduce a new variable to factor out common sub-expressions
- for any expression, we can replace it with its value without changing the program's behavior
```scala
val area = (radius: Int) => math.Pi * math.pow(radius, 2)
val program = area(3) + area(4)
```
area is referentially transparent. If we substitute its definition into program:
```scala
val program = (math.Pi * math.pow(3, 2)) + (math.Pi * math.pow(4, 2))
```
It works! We have performed the substitution without changing the program's behavior. Compare this to a typical example you would find all over the place in Java/Python etc:
```scala
var total = 0
def addToTotal(x: Int): Int = {
  total += x
  total
}

addToTotal(1) == addToTotal(1) // FALSE!
```
Clearly this is not referentially transparent! If we think even simpler, are these two programs the same?
```scala
val a = <some expression>
val program1 = (a, a)
val program2 = (<some expression>, <some expression>)
```
In FP, using functions that are referentially transparent, the answer is always yes! In OOP, who knows. The expression could be updating some global state, talking to a DB, or performing lots of other actions. We can't know without reading the code.
This property of referential transparency is very useful. It allows us to reason locally about what is going on. This optimizes for the reader (who may be your future self). A new hire can look at a code base and only has to read a small amount of code to understand what is going on. A reviewer can review in GitHub/GitLab without having to pull the project into an IDE to explore, since they can read what is in front of them to grok what is happening. This is a huge benefit.
In FP we talk about expressions, not statements. To the left of an equals sign is a name, and to the right is the expression:
```scala
val add_one = (x: Int) => x + 1 // add_one is bound to the expression on the right
```
Functional programs are evaluations of expressions, not the sequences of statements we are used to. Running a program means evaluating an expression. We build bigger programs by composing smaller ones (function composition). We understand what is going on by repeatedly substituting expressions (referential transparency).
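A tiny sketch of both ideas at once: composing small expressions into a bigger program, then using substitution to convince ourselves two programs are the same:

```scala
val addOne: Int => Int = _ + 1
val double: Int => Int = _ * 2

// Bigger programs are compositions of smaller ones.
val addThenDouble: Int => Int = addOne andThen double

// Substitution: inlining the definitions gives the same answer.
val program1 = addThenDouble(10)
val program2 = (10 + 1) * 2
// program1 == program2
```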
Pure Functions
The big picture we want, which is a good idea in all languages:
------------------------------
| |
| ---------------- |
| | | |
| | PURE | |
| | FUNCTIONS | |
| |________________| |
| |
| Side-effecting functions |
|____________________________|
outside world / program
boundary
- we want as much of our business logic living in pure functions
- the output is determined solely by the input which is easy to test
- we want a clean edge at the boundary of our program (our side-effecting functions that take in JSON at an endpoint, make http requests) that deal with the possibility of failures, parsing errors, etc.
- we want to use restriction at this boundary so the downstream dependencies deal with nice domain models and focus on the business logic
For now we will call side-effecting functions anything that is not a pure function like talking to the outside world, updating DBs, mutable counters, etc. There are some mental leaps to wrap your head around which we will cover later to show that you can still have referential transparency with side-effecting functions.
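A minimal sketch of that picture, with parsing of untrusted input standing in for the side-effecting edge (the discount rule is invented for illustration):

```scala
// Pure core: business logic whose output depends only on its input,
// so it is trivially testable.
def discount(total: BigDecimal): BigDecimal =
  if (total > 100) total * BigDecimal("0.9") else total

// The shell: untrusted input is parsed and failures are handled at the edge,
// so the pure core never sees a bad value.
def handleRequest(rawTotal: String): String =
  scala.util.Try(BigDecimal(rawTotal)).toOption // parse at the boundary
    .filter(_ >= 0)                             // reject invalid totals here
    .map(t => s"total: ${discount(t)}")
    .getOrElse("bad request")                   // failure dealt with at the edge
```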
Non-deterministic functions are not pure
Remember, types are sets and functions are mappings between sets! A pure function is a mapping from A => B where every value in A has exactly one corresponding value in B.
```
// pure

input / domain          output / codomain
--------------          -----------------
  a1 ------------------------> b1

  a2 ------------------------> b2

// non-deterministic: a2 maps to two possible values

input / domain          output / codomain
--------------          -----------------
  a1 ------------------------> b1

  a2 ------------------------> b2
       \
        '--------------------> b3
```
For example:
```scala
import scala.util.Random

Random.nextInt(100) // e.g. 28
Random.nextInt(100) // e.g. 17
```
Partial functions are not pure
I don't mean PartialFunction in the Scala syntax sense. I mean partial functions where a value in the input domain does not map to a corresponding value in the output domain. The two common cases we see for these are:
- exceptions
- nulls
```scala
// nulls: the signature claims every String maps to an Int, but null sneaks in
def length(s: String): Int = s.length
length(null) // boom: java.lang.NullPointerException

// exceptions: not every pair of Ints maps to an answer
def div(x: Int, y: Int): Int =
  if (y != 0) x / y else throw new Exception("boom")
div(1, 0) // boom
```
Worse, we've lied in our type signatures. We've thrown away the power we could have had; that is, the power of a type signature that tells us what's going on. We have not signalled our intention that something can go wrong. Now upstream callers are forced to defensively put try {..} catch {..} everywhere, and the code becomes hard to read.
So a pure function is:
- deterministic
- total (not partial)
- has no mutation (local mutation that does not escape the function is fine)
- no exceptions
- no nulls
- no reflection
- no side-effects
The benefit of this is that we gain referential transparency. Which, say it with me, means local reasoning which means less surface area of code to grok.
Consequences of Referential Transparency
All the machinery of FP, with the funny math words and the fact that we can map some category theory onto FP, comes from referential transparency. We gain a ton of ability to reason about our programs. The type signatures act as huge markers telling us what's going on, reducing the cognitive load when we read code. The design patterns and abstractions in use all come from wanting to maintain this property.
The big hitch is that a program of pure functions as we've been talking about is pretty useless. It can't do anything. We know we need to talk to the outside world. We know network requests fail. We know there is a world of impure things we have to deal with:
- partiality
- exceptions
- non-determinism
We also like dependency injection because it's hella useful, but we don't have a runtime dependency graph so what do we do? What does logging look like? How do we do mutable state between threads?
Contexts
All of these, it turns out, are a context, sometimes called an effect. You can think of them as a box. The following are all a context of some kind:
- partiality
- exceptions
- non-determinism
- dependency injection
- logging
- mutable state
- IO side-effects
These contexts are all around us and in most languages we don't think about them: they are implicit contexts. But we are in a language with a rich expressive type system! We want to make these contexts explicit and put them into our type signatures. The sooner you see these "busy" type signatures as friends, the easier your code will become to read. They are telling you a wealth of information that in any other programming style, you would need to go read a bunch of code to figure out what the implicit context is and if your code change needs to concern itself with said implicit context.
Effects
Just another word for context. It's a vague term so let's explore what effects are. Rob Norris has a great talk on programming with effects that gets into this in more detail. The second part of this series focuses more on this; however, let's look at the common effects we encounter in scala when we are learning the language:
Option
Option gives us a way to represent what we don't have an answer for. The intuition here is exceptions. We know we have some functions that are partial and Option gives us these back in a referentially transparent way.
```scala
sealed trait Option[+A]
case object None extends Option[Nothing]
case class Some[A](a: A) extends Option[A]

// intuition: functions that may not yield an answer (partiality)
val f: A => Option[B]
val g: B => Option[C]
```
Either
Similar to Option, Either gives us a way to represent partial functions. The intuition here is functions that may fail with a reason. This kind of gives us exceptions back (even more so than Option):
```scala
sealed trait Either[+A, +B]
case class Left[+A, +B](a: A) extends Either[A, B]
case class Right[+A, +B](b: B) extends Either[A, B]

val f: A => Either[String, B]
val g: B => Either[String, C]
```
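A quick sketch of what failing with a reason looks like in practice (the `parseAge` / `checkAdult` functions are hypothetical, and `toIntOption` assumes Scala 2.13+). The for-comprehension short-circuits on the first `Left`:

```scala
// Hypothetical validation steps that fail with a reason
def parseAge(s: String): Either[String, Int] =
  s.toIntOption.toRight(s"'$s' is not a number")

def checkAdult(age: Int): Either[String, Int] =
  if (age >= 18) Right(age) else Left(s"$age is under 18")

// Stops at the first failure and carries the reason back to the caller
def validate(s: String): Either[String, Int] =
  for {
    n  <- parseAge(s)
    ok <- checkAdult(n)
  } yield ok

validate("42")  // Right(42)
validate("ten") // Left("'ten' is not a number")
validate("12")  // Left("12 is under 18")
```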
List
Yes! List is an effect. A weird one that we don't usually think of but listness is a kind of nondeterminism. For example, we can define functions that might have several possible answers.
```scala
sealed trait List[+A]
case object Nil extends List[Nothing]
case class ::[+A](head: A, tail: List[A]) extends List[A]

val f: A => List[B]
val g: B => List[C]
```
If we composed functions that give us multiple answers, we would expect to get every possible answer we could get.
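A small sketch of that (the functions here are invented for illustration): each step returns several possible answers, and `flatMap` gives us every combination.

```scala
// Nondeterminism: each function has multiple possible answers
val f: Int => List[Int]    = n => List(n, -n)
val g: Int => List[String] = n => List(s"<$n>", s"[$n]")

// Composing explores every path: 2 inputs x 2 signs x 2 renderings = 8 answers
val results = List(1, 2).flatMap(f).flatMap(g)
// List("<1>", "[1]", "<-1>", "[-1]", "<2>", "[2]", "<-2>", "[-2]")
```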
Future / Task
The intuition here is something happening later, possibly on another thread.
```scala
def getStuff(a: User): Future[Permissions] =
  for {
    response    <- httpRequest(..)
    permissions <- parsePermissions(response)
  } yield permissions
```
Future is not referentially transparent, but other things you might use like Monix Task or Cats-effect IO or Zio are. We will discuss this distinction later, as it's a bit of a mental leap at first to understand how side-effects inside of IO are still pure.
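To preview why Future fails referential transparency, here is a minimal sketch: a `Future` starts running the moment it is constructed, so substituting a definition for a `val` (which referential transparency says should change nothing) changes how many times the side effect runs. The example below is invented for illustration:

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val hits = new AtomicInteger(0)

// A Future begins executing as soon as it is constructed...
val once = Future { hits.incrementAndGet() }
Await.ready(for { _ <- once; _ <- once } yield (), 2.seconds)
hits.get // 1: the val ran once; sequencing it twice changes nothing

hits.set(0)
// ...so inlining the definition (def instead of val) behaves differently:
def each = Future { hits.incrementAndGet() }
Await.ready(for { _ <- each; _ <- each } yield (), 2.seconds)
hits.get // 2: with a referentially transparent type these two programs would be equal
```

A lazy type like cats-effect `IO` or `ZIO` describes the effect instead of running it, so the val and def versions are genuinely interchangeable.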
Effects Redux
So what do they all have in common?
- they all compute some sort of answer with some extra stuff associated with them
- the extra stuff is what we call an effect
They all share the same Shape F[A]:
```scala
type F[A] = Option[A]
type F[A] = Either[E, A]     // for any fixed E
type F[A] = List[A]
type F[A] = Reader[E, A]     // for any type E
type F[A] = Writer[W, A]     // for any type W
type F[A] = State[S, A]      // for any type S

// intuition: this extends to other "effects"
type F[A] = Future[A]
type F[A] = Task[A]
type F[A] = Validation[E, A] // for any type E
```
What is an effect? Whatever distinguishes `F[A]` from `A`.
- F[A] is sometimes called a Context
- sometimes F[A] is called "a program in F that computes a value of A"
- sometimes F[A] is called a computation
- there are many of these
- they share many commonalities in terms of how you interact with them
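As a preview of what sharing the shape `F[A]` buys you, here is a hand-rolled sketch (a toy `Mappable` interface, invented here to avoid a library dependency; part 3 covers the real version, the `Functor` typeclass): one function that works across several contexts.

```scala
// Toy interface: "anything with shape F[A] that supports map"
trait Mappable[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// One function, written once, usable in any such context
def double[F[_]](fa: F[Int])(implicit M: Mappable[F]): F[Int] =
  M.map(fa)(_ * 2)

implicit val optionMappable: Mappable[Option] = new Mappable[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
}
implicit val listMappable: Mappable[List] = new Mappable[List] {
  def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
}

double(Option(21))    // Some(42)
double(List(1, 2, 3)) // List(2, 4, 6)
```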
Constraining the Context
The problem with an effect like Future / Task / IO is that it's vague. It's the `String -> String` of the effect world. What is a `Task[Permission]` doing?
- hitting a DB (probably)
- hitting some kafka audit service for DB access
- logging
- hitting some microservice Mat wrote 6 months ago that no one really knows about
- mining bitcoin?
This context doesn't give us strong guarantees. We're basically back in Java land. In fact, because of limitations of Scala (for Java interop) we don't have a great story here. The following section is all about a certain style of programming that gives benefits but can be circumvented at the type level in scala. You need a stronger type system than Scala to lock it down more and this style is much more enforced in a language like Haskell. That does not mean it's not a good idea! You still gain a lot by restricting the context in Scala but you need to spend more time on review to make sure it's not being circumvented due to the lack of compiler support.
Say we are doing some sort of OAuth work. We have some sort of Signed Request for security purposes:
```haskell
-- java
-- is this blocking? on a new thread? who knows, it could be deleting files
-- we have to read the code to find out since anything can happen anywhere
-- and nothing is in the type system
signOauth :: Oauth -> Credential -> Request -> Request

-- not really any better in scala
signOauth :: Oauth -> Credential -> Request -> Future Request
```
Just slapping on the fact that this is running in the Future context tells us essentially nothing. The capabilities of Future are infinite.
We would like to restrict the context and introduce a constraint at the type level. In haskell this looks like:
```haskell
signOauth :: MonadOauth m => Oauth -> Credential -> Request -> m Request
```
This means that we can make this request, but only in the context of MonadOauth. At runtime, m is likely to be IO (there will be an implementation of MonadOauth for IO), but from the view of this program the only thing it knows about m is that it has the capabilities defined by MonadOauth. Here, MonadOauth is a typeclass. We will discuss typeclasses, and their uses, later in this series.
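A rough Scala sketch of the same idea (all names here, including the `OauthSupport` trait and the domain types, are hypothetical placeholders): the capability becomes a typeclass parameter, and `signOauth` can only do what that capability allows.

```scala
// Hypothetical domain types, just so the sketch is self-contained
case class Oauth(key: String)
case class Credential(secret: String)
case class Request(headers: Map[String, String])

// A capability "typeclass": the only thing signOauth may do in F
trait OauthSupport[F[_]] {
  def sign(o: Oauth, c: Credential, r: Request): F[Request]
}

def signOauth[F[_]](o: Oauth, c: Credential, r: Request)(
    implicit F: OauthSupport[F]
): F[Request] = F.sign(o, c, r)

// In production F would be something like IO; here a toy Option instance
implicit val optionOauth: OauthSupport[Option] = new OauthSupport[Option] {
  def sign(o: Oauth, c: Credential, r: Request): Option[Request] =
    Some(r.copy(headers = r.headers + ("Authorization" -> s"OAuth ${o.key}")))
}
```

The body of `signOauth` cannot hit Kafka or mine bitcoin; the compiler only lets it use what `OauthSupport` exposes.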
This doesn't really have a one to one mapping in scala. A later post about typeclasses, ZIO Environments and Final Tagless will show what it looks like in scala. Don't worry that this doesn't make sense just yet:
```scala
def signOauth(o: Oauth, c: Credential, r: Request): ZIO[OauthCtx, Throwable, Request]
```
The above `ZIO[..]` type translates roughly to: I will give you a value of `Request` asynchronously when given an `OauthCtx` context (or I may fail with a `Throwable`). We will explore the idea of context and type capabilities (vague for now!) later in the series.
These constraints at the type level are great, but they are more of a social construct in scala. The JVM allows too many escape hatches to get around any sort of compiler enforcement, but this is still an extremely useful pattern.
The FP Value prop
All of this brings you to the scala FP value prop. Using scala as a language we get:
- highly expressive domain modelling compared to go/java/c#
- first class functions
- concise generics
- support for ADTs and pattern matching
- typeclasses over inheritance (talk 3)
- can also do neat compile time function derivation to remove boilerplate
- Can be referentially transparent
- reasonably fast (jvm)
- can drop into pure Java for speed
- can be reasonably type safe
- lots of compile side type magic in our libraries for:
- nice json reading/writing
- refinement types
- type-safe database queries
- not only model your types but model the context you are running in
- having `List[Future[Option[Result]]]` is a strength giving you important program boundaries, and is not a messy type signature
Scala is a good fit when you aren't doing just simple IO/CRUD; the overhead of context tracking is probably not worth it there. If you don't need an expressive domain modelling language in a CRUD app then honestly Java + Spring Boot will do you fine. But if you have a mixture of IO and interesting domain logic then scala really starts to shine. I worked on a super interesting proprietary in-memory DB written in scala (with occasional java) for speed. It was the second version of the DB, with the original in Java, and it was much easier to work in the scala rewrite. There are also some excellent concurrency primitives in the FP world if that is your problem space. I'm really happy to use Scala in my day job.
Scala/FP Downsides
- very different mental model
- onboarding, onboarding, onboarding
- doesn't stop you from writing bad code
- compiler doesn't stop you from doing something stupid
- mixed FP/OOP footguns
- too many tools can lead to mental overload
- different teams will likely program in different styles
- runtime DI doesn't jive with much of the compile time FP
Everything I've talked about in this post is a strength if you use it. Scala's type system affords you a rich language to express your DSL, but if you just write it like short-hand Java you don't get the benefit.
Resources
This is not a comprehensive list by any means but these were talks I enjoyed on my journey.
General
- These two talks by Kris Jenkins are about elm but honestly are the best FP value prop talks
BOOKS
In this order:
- Essential Scala
- the first 3-4 chapters of Functional And Reactive Domain Modelling
- Scala With Cats
- Practical FP in Scala
For something different, if you like the front end, check out elm
Talks
- how to build a functional API by the guy who made the Fp foundation course
- constraints liberate, liberties constrain - up to the 36 minute mark; after that it's not useful to the beginner
More of an advanced talk but a useful mindset to get into eventually:
- programming with effects up to about the 27th minute mark
Another advanced talk that is fascinating in terms of the functional mindset:
Jumping ahead to something more relevant to the last post in this series is this ~2 hour overview of Monad transformers, final-tagless, and Zio. If you feel like reading ahead it's worth the watch. The author is quite critical of final tagless but other people like final tagless and use it successfully so make up your own mind and play with both, you won't be disappointed:
For cats-effect you can't beat these two for a great introduction: