Introduction to the Lambda Calculus

This post is basically my handout for a talk that I have to prepare for a bachelor’s seminar at my university, mostly digesting and summarizing some parts of the awesome introductory book “Types and Programming Languages”.

The writing style might be a bit too formal for a casual internet post, but here I could try out the LaTeX-to-Markdown conversion and MathJax support of Pandoc (which, by the way, works pretty well). I hope it is enjoyable or useful nonetheless.

Much more to-the-point and a bit less formal are the slides.

Introduction

You may ask yourself: why do you need a type system? It is possible to get work done just fine without one; the types only get in the way. There are many popular programming languages which do not require adding type information to variables all over the place, and they still work, right? If this is what you think of when you hear “type system”, I assume that you have never used a powerful one.

A sane, advanced type system, as mainly found in ML, Haskell or other functional languages, is a very useful tool in the process of software development. It rejects code which is obviously broken, offers additional documentation for functions and needs only minimal hints, as there are algorithms which figure out the types of expressions all by themselves most of the time (a feature known as type inference). A strong, static type system also makes refactoring easier — if an API changes, the code will fail type checking in all the affected places. Being constrained by the bounds of a type system forces the developer to construct models which are, at the very least, self-consistent.

But before we can dive into the simply typed lambda calculus as a fundamental example of a type system, we first need to understand its predecessor, the untyped lambda calculus, as introduced by Alonzo Church in 1936 and shown to be equivalent in expressiveness to Turing machines by Turing in 1937. Language features like partial application, currying and the whole notion of higher-order functions, i.e. functions as values, stem directly from the lambda calculus, making it a worthy object of study for everyone interested in the development of programming languages.

The Untyped Lambda Calculus

Syntax and evaluation

The pure untyped lambda calculus has a very simple grammar.

Syntax of $\lambda$-calculus:

$\tt t ::= x \mid \lambda x.t \mid t\medspace t$

This means that a valid lambda term is either a variable, an abstraction or an application. The intuition is to see an abstraction as a function definition and an application as a function call, where the right term is used as the argument for the left term. Note that for readability we will use parentheses around terms, although they are not a formally defined part of the calculus. If they are omitted, assume that an abstraction opens a pair of parentheses that extends to the end of the expression.

A variable is bound in a term if it occurs inside a lambda abstraction that binds its name; otherwise the variable is free. In $\tt \lambda x.x\ y$ the $x$ is bound and $y$ is free.

Also notice that in the pure lambda calculus there are no primitive data types like integers, booleans, etc., so the only possible value of a variable is inevitably another lambda term, and the things we can possibly “compute” are just other lambda terms as well. Another interesting property is that the defined “function” syntax allows just one argument, which seems rather limiting on the surface, but is a restriction that is easy to circumvent.

The only form of computation possible in the lambda calculus is beta reduction, which in fact is just function evaluation — substituting the variable of an abstraction with the argument on the right side. A reducible term, i.e. a term of the form $\tt (\lambda x.\ t_1)\ t_2$, is also called a redex, which is just shorthand for reducible expression. For such a substitution we will write $\tt [x \mapsto t_2] t_1$, to be read as “substitute all occurrences of x in term $t_1$ with term $t_2$”.

There are a number of different ways to define the rules and order of term evaluation; we will use the call-by-value strategy. This means that, starting on the outside, we recursively evaluate the terms of an application, left to right, until this is no longer possible. If the resulting expression forms a redex, we substitute the variable in the abstraction on the left with the value on the right, removing the binder (keeping only the substituted body), and proceed with evaluation until we reach a term which is not a redex. In this context we call the non-reducible terms simply values, as we have no other kind of data and such a term is as “flat” and “primitive” as it can get in this setting.

If we are not careful when substituting terms which contain the same variable names in different contexts, we can accidentally change the meaning of the expression:

$$\tt [x\mapsto z](\lambda z.x) = (\lambda z.[x\mapsto z]x) = (\lambda z.z)$$

Here we changed the meaning from a constant function to an identity function, which should not be possible for obvious reasons. So a limiting rule for substitution is that we only allow it if the variable bound by the abstraction is not a free variable in the right-hand term. A different approach would be to rename conflicting bound variables in the abstraction prior to substitution, which is called alpha conversion. Terms which are the same except for the choice of variable names, i.e. with identical structure, are called alpha equivalent.
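
To make the substitution side condition and call-by-value evaluation a bit more tangible, here is a minimal Haskell sketch. All names (`Term`, `freeVars`, `subst`, `step`) are my own choices for illustration and not part of the calculus; the capture case is simply refused instead of alpha-converted.

```haskell
-- Untyped lambda terms: variables, abstractions, applications.
data Term
  = Var String
  | Lam String Term
  | App Term Term
  deriving Show

-- Free variables of a term.
freeVars :: Term -> [String]
freeVars (Var x)     = [x]
freeVars (Lam x t)   = filter (/= x) (freeVars t)
freeVars (App t1 t2) = freeVars t1 ++ freeVars t2

-- [x |-> s] t, enforcing the side condition from the text: refuse to
-- substitute under a binder that would capture a free variable of s
-- (a full implementation would alpha-convert the binder instead).
subst :: String -> Term -> Term -> Term
subst x s (Var y)
  | y == x    = s
  | otherwise = Var y
subst x s (Lam y t)
  | y == x              = Lam y t    -- x is shadowed here, stop
  | y `elem` freeVars s = error "variable capture: alpha-convert first"
  | otherwise           = Lam y (subst x s t)
subst x s (App t1 t2) = App (subst x s t1) (subst x s t2)

-- In the pure calculus the only values are abstractions.
isValue :: Term -> Bool
isValue (Lam _ _) = True
isValue _         = False

-- One step of call-by-value evaluation: reduce the left term first,
-- then the right one, then contract the redex by substitution.
step :: Term -> Maybe Term
step (App (Lam x t) v) | isValue v = Just (subst x v t)
step (App t1 t2)
  | not (isValue t1) = (\t1' -> App t1' t2) <$> step t1
  | otherwise        = App t1 <$> step t2
step _ = Nothing

main :: IO ()
main = print (step (App (Lam "x" (Var "x")) (Lam "y" (Var "y"))))
-- Just (Lam "y" (Var "y"))
```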

Working with the Lambda Calculus

Currying

As mentioned, our abstractions in the lambda calculus can take only one argument. But we can easily define a function which takes an argument and returns a different function, with one variable already substituted. We just apply our arguments one at a time:

$$\tt (\lambda x.\lambda y.x\ y)\ a\ b \to (\lambda y.a\ y)\ b \to a\ b$$

What we did is define an abstraction that takes one argument and returns an abstraction that takes a different argument and returns the first argument applied to the second. Then we successively performed the substitutions $\tt [x\mapsto a]$ and $\tt [y\mapsto b]$. This is effectively a function that takes another function and an argument and returns the supplied function applied to the argument.

This technique is commonly called currying, attributed to Haskell Curry, but it is actually thought to go back to the mathematician Moses Schönfinkel. Conversely, pre-applying a function to one or more arguments before passing it somewhere else is called partial application.
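
In a language like Haskell this is the default behaviour. A small sketch (the names `apply` and `addThree` are mine, purely for illustration):

```haskell
-- The term (\x. \y. x y) written in Haskell: take a function, take an
-- argument, and apply the one to the other.
apply :: (a -> b) -> a -> b
apply f x = f x

-- Partial application: (+) takes its arguments one at a time, so we can
-- pre-apply it to 3 and obtain a new one-argument function.
addThree :: Int -> Int
addThree = (+) 3

main :: IO ()
main = print (apply addThree 39)   -- 42
```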

Encoding booleans

The only values in the pure $\lambda$-calculus are abstractions, but to actually give our computations some meaning outside the abstract realm of the calculus, we at least need to somehow encode values we want to regard as true or false. Without extensions to the calculus we can do nothing but assign some specific terms a meaning and make sure that they behave the way we want. Fortunately Church also solved this problem:

$$ \begin{aligned} \tt tru &= \tt \lambda t.\ \lambda f.\ t \newline \tt fls &= \tt \lambda t.\ \lambda f.\ f \end{aligned} $$

We simply interpret a function that takes two arguments and returns the first one as true, and a similar function which returns the second one as false. This is a wise choice, as now we can easily define logical operators exploiting this definition of our Church booleans:

$$ \begin{aligned} \tt not &= \tt \lambda b.\ b\ fls\ tru \newline \tt and &= \tt \lambda b.\ \lambda c.\ b\ c\ fls \newline \tt or &= \tt \lambda b.\ \lambda c.\ b\ tru\ c \newline \tt test &= \tt \lambda l.\ \lambda m.\ \lambda n.\ l\ m\ n \end{aligned} $$

The not function is straightforward: it applies its argument to fls and tru, so tru selects fls and fls selects tru. The and function takes two booleans and evaluates the first with the second and fls as arguments. If b is true, it will return c, whose value then corresponds to the value of b $\land$ c. If b is false, it will return its second argument, fls, which is also correct. A similar logic is behind the definition of or. The test function simply applies the second and third arguments to the first, giving us a kind of if-condition, but with the inconvenience that both “branches” (arguments two and three) will be evaluated before it is decided which one will be returned, because of our call-by-value semantics. We can fix this by wrapping the arguments in dummy abstractions to prevent further evaluation before we want it and then applying the returned abstraction to an arbitrary term, in order to unpack the wrapped expression.
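
These encodings translate almost literally into Haskell. In the following sketch the names `tru`, `fls`, `notC`, `andC`, `orC` and `toBool` are my own; the type signatures are left to GHC’s inference, which assigns the functions suitably general types.

```haskell
-- Church booleans as plain functions; we let GHC infer the types.
tru t _ = t          -- "true": return the first argument
fls _ f = f          -- "false": return the second argument

notC b   = b fls tru -- tru selects fls, fls selects tru
andC b c = b c fls   -- if b is true the result is c, otherwise fls
orC  b c = b tru c   -- if b is true the result is tru, otherwise c

-- Convert a Church boolean back to a native Bool to inspect it.
toBool b = b True False

main :: IO ()
main = do
  print (toBool (andC tru (notC fls)))  -- True
  print (toBool (orC fls fls))          -- False
```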

Encoding numbers

We can even represent natural numbers:

$$\begin{aligned} \tt c_0 &= \tt \lambda s.\ \lambda z.\ z \newline \tt c_1 &= \tt \lambda s.\ \lambda z.\ s\ z \newline \tt c_2 &= \tt \lambda s.\ \lambda z.\ s\ (s\ z) \newline …\end{aligned}$$

Here a natural number is represented by a function which applies an arbitrary function s to a designated zero value z; the number of applications of s corresponds to the number we mean. This representation is known as Church numerals, and an interesting fact is that our definition of $c_0$ is alpha equivalent to the Church boolean fls, which can be useful for some applications. Now we need some functions to manipulate such numbers:

$$\begin{aligned} \tt scc &= \tt \lambda n.\ \lambda s.\ \lambda z.\ s\ (n\ s\ z) \newline \tt plus &= \tt \lambda m.\ \lambda n.\ \lambda s.\ \lambda z.\ m\ s\ (n\ s\ z) \newline \tt times &= \tt \lambda m.\ \lambda n.\ m\ (plus\ n)\ c_0\newline \tt iszro &= \tt \lambda m.\ m\ (\lambda x.\ fls)\ tru\end{aligned}$$

The function scc generates the successor of (increments) a numeral, taking a numeral n and returning a new numeral in which s is applied one more time than n applies it itself. plus works similarly, applying s first n and then m times to z, implementing addition. Multiplication can be defined as repeated addition, as demonstrated in the definition of times — we exploit the fact that we can use an arbitrary function to evaluate our numerals, so that after full evaluation we get the result we want. We can test whether a value is zero by evaluating a numeral with the constant-fls function and tru as our zero value — if it is zero, we just get tru, as the function is never applied, and otherwise we get fls, because the function will be applied to a value.
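
Again this can be played with directly in Haskell. The sketch below interprets a numeral by counting applications of the successor function; the names (`c0` … `toInt`) are mine, `tru` and `fls` are repeated from the previous sketch, and the types are again left to inference.

```haskell
-- Church numerals and the arithmetic operators from above.
c0 s z = z
c1 s z = s z
c2 s z = s (s z)

scc n s z    = s (n s z)        -- apply s one more time than n does
plus m n s z = m s (n s z)      -- apply s n times, then m more times
times m n    = m (plus n) c0    -- multiplication as repeated addition

tru t _ = t
fls _ f = f
iszro m = m (\_ -> fls) tru     -- constant-fls as s, tru as the zero value

-- Read a numeral off by counting applications of (+ 1) to 0.
toInt n  = n (+ 1) (0 :: Int)
toBool b = b True False

main :: IO ()
main = do
  print (toInt (times (scc c1) (plus c1 c2)))  -- 2 * 3 = 6
  print (toBool (iszro c0))                    -- True
```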

Subtraction is also possible, but it is quite a bit more complicated, both in definition and evaluation, so we skip it here. With a test for zero and a predecessor function it is also possible to define a function testing two numerals for equality, by verifying that subtraction in both directions returns zero.

Beyond the pure calculus

Real booleans and numbers

Instead of struggling with Church booleans, we can extend our calculus with real ones. So we define true and false to be valid, atomic values (in contrast to tru and fls above, which just stand for lambda abstractions that can themselves be evaluated further). To put our booleans to use, we also introduce the if-expression of the form if t then t else t, where $t$ is any lambda term. The evaluation rule for these forms is: first evaluate the condition; if it is true, replace the if-expression with the then-branch, and if it is false, replace the expression with the else-branch.

We can mix our real booleans with the Church booleans, as they are easy to convert:

$$\begin{aligned} \tt realbool &= \tt \lambda b.\ b\ true\ false \newline \tt churchbool &= \tt \lambda b.\ if\ b\ then\ tru\ else\ fls \end{aligned}$$

Real (i.e. primitive) numbers can also be introduced rather easily – we add the new syntactic terms 0, succ t and pred t, and semantically we treat a chain of succ applications to 0 as the number represented. Whenever pred meets succ, they form a redex which is removed in the evaluation step. Again, the difference to the Church-encoded numbers is that these tokens are not lambda abstractions, but a distinct part of the language.
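
A minimal Haskell sketch of how this extended syntax and its one-step evaluation could be represented follows; the names (`ArithTerm`, `isNumVal`, `step`) are invented for illustration, and the clause for pred 0 follows a common convention (pred 0 evaluates to 0) that the text above does not spell out.

```haskell
-- Terms of the extension: primitive booleans, an if-expression and
-- primitive numbers built from 0, succ and pred.
data ArithTerm
  = TTrue | TFalse
  | TIf ArithTerm ArithTerm ArithTerm
  | TZero
  | TSucc ArithTerm
  | TPred ArithTerm
  deriving Show

isNumVal :: ArithTerm -> Bool
isNumVal TZero     = True
isNumVal (TSucc t) = isNumVal t
isNumVal _         = False

-- One step of evaluation.
step :: ArithTerm -> Maybe ArithTerm
step (TIf TTrue  t _)               = Just t          -- take the then-branch
step (TIf TFalse _ e)               = Just e          -- take the else-branch
step (TIf c t e)                    = (\c' -> TIf c' t e) <$> step c
step (TPred TZero)                  = Just TZero      -- convention: pred 0 = 0
step (TPred (TSucc n)) | isNumVal n = Just n          -- pred and succ cancel
step (TSucc t)                      = TSucc <$> step t
step (TPred t)                      = TPred <$> step t
step _                              = Nothing

main :: IO ()
main = print (step (TIf TTrue (TSucc TZero) TZero))   -- Just (TSucc TZero)
```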

A real programming language would never use Church encoded data types, because atomic primitive data types in a machine-friendly format can be processed much faster with hardware support. Still, the Church encodings are an interesting showcase to see what can be done even in the pure calculus.

…And more

There are many useful extensions one can add to the lambda calculus, e.g. more primitive base types, complex data types like tuples and lists, and also different kinds of syntactic sugar — convenient notation for a construction that can already be expressed in the basic terms, only more verbosely. There are also techniques to enable recursion (and thereby support looping) in the lambda calculus, but all these things are out of scope of this essay. The transition to a fully fledged functional language is rather fluid, but as in most formal, theoretical settings, we want to keep our object of study simple and, in our case, just focus on our topic of interest — types.

Adding Types

Motivation

Now what prevents us from passing a boolean value instead of a number when calculating something? Nothing does. The evaluation will either get stuck, as we have no rule to handle such a situation, or, depending on the setting, it will continue and produce meaningless results, which is even worse. So we need a way to tell our compiler or interpreter what we expect to get, something that can be easily checked automatically and which will reliably reject invalid, ill-typed input. We want to accept well-typed expressions, ones which can be evaluated correctly and result in a meaningful value. So we need to expand our definition of the $\lambda$-calculus.

The simply typed lambda calculus

Syntax

We want to assign to each function a type of the form $T_1 \to T_2$, which is to be read as: the function takes an argument of type $T_1$ and evaluates to a value of type $T_2$. So $\to$ acts as a type constructor giving us a new type from the type variables $T_1$ and $T_2$, which themselves can stand for any other constructed type of this form or for a base type like Bool, which cannot be decomposed any further. The $\to$ operator is right-associative, so $A\to B\to C$ is to be read as $A\to (B\to C)$. Now the syntax changes just a little:

Syntax of simply typed $\lambda$-calculus:

$\tt t ::= x \mid \lambda x\textbf{:T}.\medspace t \mid t\medspace t$

We just annotate every lambda abstraction with a type. Note that ‘simply typed’ just means that the only way to construct types is $\to$, whereas there exist extended systems that have additional operators which cannot be expressed in terms of $\to$. The simply typed lambda calculus is abbreviated as $\lambda^\to$ and was first developed by Church in 1940.
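
As a data structure, the types of $\lambda^\to$ could be represented roughly like this (a sketch; `TyBool` stands in for an arbitrary base type and the names are mine):

```haskell
-- Simple types: a base type and the arrow type constructor.
data Ty
  = TyBool
  | TyArr Ty Ty        -- T1 -> T2
  deriving (Show, Eq)

-- Right associativity: Bool -> Bool -> Bool is Bool -> (Bool -> Bool).
example :: Ty
example = TyArr TyBool (TyArr TyBool TyBool)

main :: IO ()
main = print example
```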

Semantics

Now we need to establish some rules about correct type assignment to terms. Let $\Gamma$ be the set of assumptions about the types of some terms, e.g. free variables (this is our typing context); then $\tt \Gamma \vdash t : T$ means ‘under the given assumptions the term t has the type T’. The context can also be empty and is omitted in that case. The notation with the horizontal line used below is a more readable form of implication ($\Rightarrow$), with the premises on top and the conclusion on the bottom. This notation is used to specify deduction rules.

The first rule is trivial and just formalizes the fact that a variable has the type we assume it to have in the typing context:

Typing of variables (T-Var):

$$\frac{\tt x:T \in \Gamma}{\tt\Gamma \vdash x : T}$$

The next rule is about abstractions and says that if under the assumption that $x$ has the type $T_1$ the term $t_2$ has the type $T_2$, we deduce that an abstraction taking $x$ as argument and having $t_2$ as its body has the type $T_1\to T_2$:

Typing of abstractions (T-Abs):

$$\frac{\tt \Gamma,x : T_1 \vdash t_2 : T_2}{\tt \Gamma \vdash \lambda x : T_1. t_2 : T_1\to T_2}$$

The third rule takes care of application, saying that if we have a term of function type and apply it to a term of the expected argument type, the result will have the result type of that function:

Typing of applications (T-App):

$$\frac{\tt \Gamma\vdash t_1 : T_{11} \to T_{12}\hspace{5mm} \Gamma\vdash t_2 : T_{11}} {\tt \Gamma \vdash t_1\medspace t_2 : T_{12}}$$

Additionally we can assign our real booleans and numbers a type. Note that the rule for if has to enforce that both branches of the conditional have the same type.

T-True, T-False, T-If:

$$\tt true : Bool \quad false : Bool \quad \frac{\tt\Gamma\vdash t_1 : Bool \quad \Gamma\vdash t_2 : T \quad \Gamma\vdash t_3 : T} {\tt \Gamma\vdash if\medspace t_1\medspace then\medspace t_2\medspace else\medspace t_3 : T}$$

T-Zero, T-Succ, T-Pred, T-IsZero:

$$\tt 0 : Nat \quad \frac{\tt\Gamma\vdash t_1 : Nat}{\tt\Gamma\vdash succ\ t_1 : Nat} \quad \frac{\tt\Gamma\vdash t_1 : Nat}{\tt\Gamma\vdash pred\ t_1 : Nat} \quad \frac{\tt\Gamma\vdash t_1 : Nat}{\tt\Gamma\vdash iszero\ t_1 : Bool}$$
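
These rules translate almost one-to-one into a typechecking function. Below is a hedged sketch (the names `typeOf`, `Context` and the constructors are my own, extending the Ty idea from above); as a small check, main infers the type of exactly the term used in the deduction example in the next subsection.

```haskell
-- Types and explicitly annotated terms of the simply typed calculus,
-- extended with the primitive booleans and numbers.
data Ty = TyBool | TyNat | TyArr Ty Ty deriving (Show, Eq)

data Term
  = Var String
  | Lam String Ty Term              -- \x:T. t
  | App Term Term
  | TmTrue | TmFalse
  | TmIf Term Term Term
  | TmZero
  | TmSucc Term | TmPred Term | TmIsZero Term
  deriving Show

-- The typing context Gamma: assumptions about the types of variables.
type Context = [(String, Ty)]

typeOf :: Context -> Term -> Maybe Ty
typeOf ctx (Var x)      = lookup x ctx                              -- T-Var
typeOf ctx (Lam x ty t) = TyArr ty <$> typeOf ((x, ty) : ctx) t     -- T-Abs
typeOf ctx (App t1 t2)  = do                                        -- T-App
  TyArr a b <- typeOf ctx t1
  a'        <- typeOf ctx t2
  if a == a' then Just b else Nothing
typeOf _   TmTrue       = Just TyBool                               -- T-True
typeOf _   TmFalse      = Just TyBool                               -- T-False
typeOf ctx (TmIf c t e) = do                                        -- T-If
  TyBool <- typeOf ctx c
  tt <- typeOf ctx t
  te <- typeOf ctx e
  if tt == te then Just tt else Nothing
typeOf _   TmZero       = Just TyNat                                -- T-Zero
typeOf ctx (TmSucc t)   = do { TyNat <- typeOf ctx t; Just TyNat }  -- T-Succ
typeOf ctx (TmPred t)   = do { TyNat <- typeOf ctx t; Just TyNat }  -- T-Pred
typeOf ctx (TmIsZero t) = do { TyNat <- typeOf ctx t; Just TyBool } -- T-IsZero

main :: IO ()
main = print (typeOf
  [("f", TyArr TyBool TyBool)]
  (Lam "x" TyBool (App (Var "f") (TmIf (Var "x") TmFalse (Var "x")))))
  -- Just (TyArr TyBool TyBool)
```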

Deduction example

To see the rules in action, we can deduce

$\tt f:Bool\to Bool \vdash \lambda x:Bool.\ f\ (if\ x\ then\ false\ else\ x) : Bool\to Bool$

Proof (writing $\Gamma$ for the context $\tt f:Bool\to Bool,\ x:Bool$):

$$\frac{\frac{\frac{\tt f:Bool\to Bool \in \Gamma} {\tt \Gamma \vdash f:Bool\to Bool} {\tt T{-}Var} \quad \frac{\frac{\tt x:Bool\in \Gamma}{\tt \Gamma \vdash x:Bool} {\tt T{-}Var} \quad \frac{}{\tt \Gamma\vdash false:Bool} {\tt T{-}False} \quad \frac{\tt x:Bool\in \Gamma}{\tt \Gamma \vdash x:Bool} {\tt T{-}Var} } {\tt \Gamma\vdash if\ x\ then\ false\ else\ x:Bool} {\tt T{-}If} } {\tt \Gamma\vdash f\ (if\ x\ then\ false\ else\ x) : Bool}{\tt T{-}App}} {\tt f:Bool\to Bool \vdash \lambda x:Bool.\ f\ (if\ x\ then\ false\ else\ x) : Bool\to Bool} {\tt T{-}Abs} $$

Properties of typing

From the rules given above we can get type safety, a guarantee that well-typed expressions are valid expressions which do not get stuck during evaluation. Safety of a type system is shown by two theorems, the progress and preservation theorems. The progress theorem makes sure that a well-typed term is either a value or can be evaluated one step further, so that a well-typed term is never in a stuck, undefined state. The preservation theorem says that if a well-typed term takes one step of evaluation, the term we get is also well-typed. Both theorems together ensure a clean evaluation chain, if we start out with a well-typed expression.

Progress:

Suppose t is a closed, well-typed term (that is, $\tt\vdash t : T$ for some T). Then either t is a value or else there is some t’ with $\tt t\to t’$.

Preservation:

If $\tt \Gamma \vdash t : T$ and $\tt t\to t’$, then $\tt \Gamma \vdash t’ : T$.

Both theorems can basically be shown using structural induction and some helper lemmas. The reasoning is not difficult to follow, but it is quite long, so the proofs are omitted here; they can be found in the book by Pierce.

Another property of typing is that we can remove all type annotations, thus converting a term from the simply typed to the untyped calculus, and the evaluation behaviour stays the same. This fact is used in real-world programming languages: the types can be checked before compilation and removed in the process, once it is verified that everything is ok. The reverse of this type erasure is type reconstruction, the procedure of finding a possible type for a term which has incomplete typing information. If it is possible to find a type, the given term is typable; if it is not, the code is most likely broken (or the compiler has insufficient information and cannot compute the type, which can happen in more expressive type systems).
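
A sketch of what type erasure looks like on an abstract syntax tree (the data types and names are invented for illustration, mirroring the ones used above):

```haskell
data Ty = TyBool | TyArr Ty Ty

-- Terms of the simply typed calculus and of the untyped calculus.
data TypedTerm
  = TVar String
  | TLam String Ty TypedTerm
  | TApp TypedTerm TypedTerm

data UntypedTerm
  = UVar String
  | ULam String UntypedTerm
  | UApp UntypedTerm UntypedTerm
  deriving Show

-- Erasure just drops the annotation on every abstraction; the structure
-- of the term, and hence its evaluation behaviour, is unchanged.
erase :: TypedTerm -> UntypedTerm
erase (TVar x)     = UVar x
erase (TLam x _ t) = ULam x (erase t)
erase (TApp t1 t2) = UApp (erase t1) (erase t2)

main :: IO ()
main = print (erase (TLam "x" TyBool (TApp (TVar "f") (TVar "x"))))
-- ULam "x" (UApp (UVar "f") (UVar "x"))
```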

System F

Now consider the following situation - we need to define a function which applies another numeric function twice. No problem:

$$\tt double = \tt \lambda f:Nat\to Nat.\ \lambda x:Nat.\ f\ (f\ x)$$

Later we notice that such a function would also be useful for booleans. Okay:

$$\tt doubleBool = \tt \lambda f:Bool\to Bool.\ \lambda x:Bool.\ f\ (f\ x)$$

Now what if we want to define this function for any type? In the calculus as defined there is no way to express this. We would have to define exactly the same function for every type we want to use it with! What we want is something like Generics in Java or Templates in C++. The general name for this feature (‘write once, use with any type’) is parametric polymorphism, and it solves this ugly code duplication problem in an elegant way. So we extend our calculus to the next level, which is called System F, first discovered by Jean-Yves Girard in 1972 (and independently by John C. Reynolds in 1974):

Syntax of System F:

$\tt t ::= x \mid \lambda x\textbf{:T}.\medspace t \mid t\medspace t \mid\medspace \lambda X.t\medspace \mid\medspace t\ [T]$

The uppercase letters denote type variables while the lowercase ones denote terms. The newly introduced forms are called type abstraction and type application, which work similarly to the regular abstractions and applications, but on type variables (type variables are now part of our language, whereas before we could only use concrete types). A type abstraction applied to a type also forms a redex, substituting the type variable in the body of the abstraction with the type argument given in the brackets: $\tt (\lambda X.t_{12})\ [T_2] \to [X\mapsto T_2]t_{12}$

Now we know how to evaluate these new forms, but we still need to provide new typing rules for this new kind of data. So we introduce universal types, which have the general form $\tt\forall X.T$ and are required to say that a function is polymorphic, i.e. works with any type we provide. Because we can now quantify over types, and not only abstract over terms, this calculus is also called the second-order lambda calculus. The missing typing rules are:

Typing of type abstractions (T-TAbs):

$$\frac{\tt \Gamma,X \vdash t_2 : T_2}{\tt \Gamma \vdash \lambda X.\ t_2 : \forall X. T_2}$$

Typing of type applications (T-TApp):

$$\frac{\tt \Gamma \vdash t_1 : \forall X. T_{12}}{\tt \Gamma \vdash t_1\ [T_2] : [X\mapsto T_2] T_{12}}$$

Examples

In System F we can again define things like Church encoded booleans and numerals, which in the simply typed calculus would either not be general enough or simply impossible to formulate in a well-typed way. For example, the identity function can now be formulated universally and instantiated for any type:

$$\begin{aligned} \tt id &= \tt \lambda X.\ \lambda x:X.\ x &\rm type\ \forall X. X\to X \newline \tt idNat &= \tt id\ [Nat] = \lambda x:Nat.\ x &\rm type\ Nat\to Nat\end{aligned}$$

Likewise, we can define a polymorphic double function:

$$\begin{aligned} \tt double &= \tt \lambda X.\ \lambda f:X\to X.\ \lambda x:X.\ f\ (f\ x) &\rm type\ \forall X. (X\to X)\to X \to X\newline \tt dblBool &= \tt double\ [Bool] &\rm type\ (Bool\to Bool)\to Bool\to Bool \newline &= \tt \lambda f:Bool\to Bool.\ \lambda x:Bool.\ f\ (f\ x)\end{aligned}$$

But we are not limited to instantiating polymorphic functions to concrete types, we can also substitute with yet another polymorphic type expression:

$$\begin{aligned} \tt quad &= \tt \lambda X.\ double\ [X\to X]\ (double\ [X]) &\rm type\ \forall X. (X\to X)\to X\to X\end{aligned}$$

What happened here? We just defined a polymorphic function which applies a given function four times, by applying our polymorphic double function twice! Does that really work? Let’s check the types – inside the body, $\tt double\ [X\to X]$ expands to a function with the type $\tt ((X\to X)\to X\to X)\to (X\to X)\to X\to X$, while $\tt double\ [X]$ has the type $\tt (X\to X)\to X\to X$. Applying the former to the latter (a partial application) leaves the type $\tt (X\to X)\to X\to X$, and the enclosing type abstraction gives quad the type $\tt \forall X.\ (X\to X)\to X\to X$. If you are still not convinced, look at the expansion of this function:

$$\begin{aligned} \tt quad &= \tt \lambda X.\ double\ [X\to X]\ (double\ [X]) \newline &= \tt \lambda X.\ (\lambda f:(X\to X)\to X\to X.\ \lambda a:X\to X.\ f\ (f\ a))\ (double\ [X]) \newline &= \tt \lambda X.\ \lambda a:X\to X.\ double\ [X]\ (double\ [X]\ a) \newline &= \tt \lambda X.\ \lambda a:X\to X.\ (\lambda g:X\to X.\ \lambda b:X.\ g\ (g\ b))\ (double\ [X]\ a) \newline &= \tt \lambda X.\ \lambda a:X\to X.\ \lambda b:X.\ double\ [X]\ a\ (double\ [X]\ a\ b) \newline &= \tt \lambda X.\ \lambda a:X\to X.\ \lambda b:X.\ a\ (a\ (a\ (a\ b))) \end{aligned}$$

As you can see, parametric polymorphism is a quite powerful means of abstraction and one can express a lot in System F. So it is probably no big surprise that the Haskell programming language, when stripped of all syntactic sugar during compilation, is an extended form of System F.
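
To connect this with practice, here is a hedged Haskell rendering of double and quad: the explicit forall plays the role of the type abstraction, and an application like `double @Bool` (using GHC’s TypeApplications extension) corresponds to a type application. The names are taken from the examples above.

```haskell
{-# LANGUAGE ScopedTypeVariables, TypeApplications #-}

double :: forall a. (a -> a) -> a -> a
double f x = f (f x)

-- Explicit instantiation, corresponding to double [Bool] above.
dblBool :: (Bool -> Bool) -> Bool -> Bool
dblBool = double @Bool

-- double applied to itself: the outer occurrence is instantiated at the
-- function type (a -> a), the inner one at a.
quad :: forall a. (a -> a) -> a -> a
quad = double double

main :: IO ()
main = print (quad (+ 1) (0 :: Int))   -- 4
```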

The preservation and progress theorems still hold for System F, giving us type safety, whereas full type reconstruction is undecidable, so type annotations often cannot be omitted. Programming languages based on System F usually add restrictions on what terms and types are allowed, to keep a partial type reconstruction feasible.

Type theory and OOP

One could get the impression that type theory is only practiced and relevant in the realm of functional programming. But an important branch of development of lambda calculi is calculi with subtyping, a concept first explored formally by Reynolds and Cardelli in the 1980s. To every object-oriented programmer the intuition and meaning of subtyping should be apparent – it means that if we have a function working on vehicles, we also expect it to work with a car, or a train, which are special kinds of vehicle. This is deeply connected to the notion of inheritance in OOP – we want to be able to give our types (the object classes we defined) a hierarchy and to be able to use functions on a whole category of inputs which are related in some way. The formalism for this is the subtyping relation, which comes with its own set of deduction rules specifying which types can be treated as more general types, by ignoring their specialization in the context of a given function. There have also been multiple efforts to prove type safety of Java, first by Drossopoulou, Eisenbach and Khurshid in 1999, using calculi with subtyping which resemble different subsets of Java.

Type theory and logic

Worth mentioning is another fascinating property that can be seen in many type systems, called the Curry-Howard correspondence, as it was first described by Haskell Curry in 1958 and William Alvin Howard in 1980. It is the observation that types can be seen as propositions, and a term having such a type is a proof of the corresponding proposition. So a proposition can be proved by constructing a term with that type. This bridge between constructive mathematical logic and computer science is actively being researched and has already led to the development of tools like Coq, a functional programming language whose primary purpose is to serve as an interactive theorem prover – it assists the user in the formulation of assertions, helps finding proofs in the form of programs and automatically checks their correctness by type-checking the proof-program.
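
Even in Haskell this reading is visible. In the following (made-up, illustrative) examples the type signature is the proposition and the function body is its proof:

```haskell
-- Proposition: A and B implies A.  Proof: project the first component.
proj1 :: (a, b) -> a
proj1 (x, _) = x

-- Proposition: (A implies B) and A together imply B (modus ponens).
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

main :: IO ()
main = print (modusPonens (+ 1) (41 :: Int))   -- 42
```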

The type systems used in this setting are mostly based on calculi with dependent types, an extension that allows types to depend on values: functions with types like $\tt Array\ n\to Array\ (n+1)$ become possible, and the type system verifies that such a function will indeed return an array with exactly one element more, for any input given.
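
Haskell does not have full dependent types, but a length-indexed vector (a common idiom using the GADTs and DataKinds extensions) gives a flavour of the $\tt Array\ n\to Array\ (n+1)$ idea; the names and details here are my own illustration, not a dependent-types implementation.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers.
data Nat = Z | S Nat

-- A vector whose length is tracked in its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The type guarantees that the result has exactly one element more than
-- the input, for any input whatsoever.
push :: a -> Vec n a -> Vec ('S n) a
push = VCons

vlen :: Vec n a -> Int
vlen VNil        = 0
vlen (VCons _ v) = 1 + vlen v

main :: IO ()
main = print (vlen (push 'a' (push 'b' VNil)))   -- 2
```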

Conclusion

So what do we gain using a type system in the context of software verification?

  • We know that our program will compile and execute (catch syntax errors)

  • We know that our functions will take and return the intended data types (catch violations of our mental model/the designed API)

  • Using advanced type systems we can control which functions can perform which effects and prevent specific values from being taken out of context (e.g. the Monad typeclass in Haskell)

  • Using dependent types it is even possible to construct types that prove almost arbitrary properties, e.g. to rule out array-out-of-bounds errors at compile time or even prove that a sorting algorithm does indeed return the data sorted

Of course, the more information the types shall carry, the more additional work and thought is required from the developer – unfortunately, expressivity comes at a price. To profit from type systems one needs a clear model of the software being developed and one has to define types which suit the use-case – type annotations are also a form of formal specification. If every field of an address record is declared as String, nothing will stop us from using the first name as the telephone number. Introducing and using different types for distinct semantic entities, even when they are the same on a low level, can help to deal with such errors.

I hope you enjoyed this little introduction and that it might have sparked an interest in type systems, both in theory and practice. The lambda calculus family and related formal languages are to type theorists and programming language designers what the Drosophila fly is to biologists – small enough to modify, study and prove different properties about, while the findings can be translated and applied to bigger systems in the ‘real world’, giving software developers new tools to work with, regarding both expressivity and safety of their code.