Thursday, 22 December 2011

Asynchronous workflows in Clojure

Asynchronous workflows are a very powerful feature of F#, and recently I wanted to explore the state of the JVM, and Clojure in particular, when it comes to replicating that functionality. In this post I'll share some of my findings, along with some background material to explain the problems.

Let's start with an example of a webclient using "async" in F#.
The magic here is that you can write continuation-style code in a sequential manner. This combines the scalability of asynchronous programs with the readability of sequential code. So, what lessons can we learn from this code, and how would we do this on the JVM with Clojure? First of all, this is not the same as using futures over blocking calls;
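Roughly, the naive futures version might look like this (a minimal sketch over java.net; names and URLs are mine, not the original snippet):

```clojure
;; A naive futures-based client: one pool thread per download,
;; blocked inside openStream/line-seq while waiting on IO.
(import '(java.net URL)
        '(java.io BufferedReader InputStreamReader))

(defn download [url]
  (with-open [rdr (-> (URL. url)
                      .openStream
                      InputStreamReader.
                      BufferedReader.)]
    (doall (line-seq rdr))))

(defn download-all [urls]
  (->> urls
       (map #(future (download %)))
       doall                ; kick off all the futures first
       (map deref)))        ; then block for the results
</imports>
```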
While naively this looks good (we are handing over some tasks to a worker thread), it is in fact the old "one thread per connection" chestnut. It will spawn a thread in a thread pool that just sits around and waits for the IO to complete. Here is a picture (credit to Tomas Petricek) that tries to visualise the problem;
In our example we have two function calls that can take a long time: openStream and line-seq (of the BufferedReader). Unfortunately these calls are both blocking, so like in the picture, our thread spends most of its time blocked. What the F# code above achieves is that while the GetResponse and ReadToEnd functions wait for their results, the thread yields (back to the thread pool) and other tasks can do work on that thread. See this picture of how we can get the same amount of work done with just 2 threads.
Why are many threads so bad, you might wonder? Well, first of all they are expensive, both on the JVM and on .NET. If you want to write code that can scale to thousands of connections, one thread per connection simply doesn't work. If you are using futures like above, when you try to make thousands of simultaneous connections they will just queue up on the thread pool, and the threads that are claimed will spend almost all their time blocked on IO (dominated by accesses to slow servers). So either you kill your system spawning thousands of threads, or you take forever to complete while the thread pool slowly eats through the queued-up work.

We need non-blocking versions of openStream and the readers to have a chance of making this work. While .NET has a plentiful supply of async begin/end "callback" APIs, the JVM has been relying on 3rd-party libraries such as Netty. Java 7 shipped with something called "NIO.2", which provides async channels for sockets and files, but it's still quite new and hard to get hold of on, for instance, OSX.

Let's get back to our web client example; here's how a better version could look using Netty;
This example downloads 70 F# snippets in parallel. It is better than the sequential code since execution has been broken up into two callbacks (one for each long-running operation). It is also better than the naive "future" code since the threads yield back to the thread pool while the IO is in flight. The connection-ok callback is called (from a Netty pool thread) when a connection to the web server is established; here we trigger the "GET" request to the server. The second callback is the response-handler, where we log the read data in a Clojure agent. When we run this example we are closer to the second "interleaved" picture above, and we are not consuming one thread per connection.

We managed to solve the scalability problem, but we also made the code much harder to read and maintain. And if we were to make this code "real", it would get even worse. All error handling (try/catch) would have to be duplicated in all callbacks, since they run in separate contexts. Just imagine what happens when we have more than two callbacks!

How can we get closer to the F# example above? One solution is to embrace the concept of channels and pipelines, and wrap the Netty internals in an idiomatic Clojure way. Fortunately, this is exactly what the lamina and aleph projects have done. With aleph, the example above can be written as succinctly as this;
Errors will propagate on the result channel, so they can be handled in one place. Lamina also provides an async macro, taking us a little bit closer still to the F# example above.
We managed to create a solution with Clojure/Netty/aleph that is scalable and almost as readable as the F# starting point. However, we only covered network access in this post. The fact is that the JVM is far behind .NET in the standardisation of asynchronous APIs. Where on .NET the async begin/end pattern is common anywhere you have long-running operations, the JVM user is left with 3rd-party libraries, each with its own incompatible API. Java 7 today only supports network and file access, but its async recipe can hopefully become a standard that proliferates to other areas (database access etc.), making it easier to interface with. In the Clojure world, I think the channels approach set out by the lamina project is promising, and could well be considered for merging into clojure.contrib.

Saturday, 3 December 2011

Parsing with Matches and Banana Clips

I find myself working with DSLs quite a bit, and thus I write a few parsers. Some languages are better suited than others for writing parsers, and pattern matching is a technique that makes writing them a true joy. I will not go over the basics of pattern matching here, but rather show how F#'s active patterns can be used to take pattern matching to the next level.

The traditional steps of a "parser" are roughly: lexical analysis (tokenizer), syntactic analysis (parser) and evaluation (interpreter). In this post we'll focus on the parsing step of a simple DSL. A parser typically consumes a list of tokens and produces an Abstract Syntax Tree (AST), ready to be passed on to the evaluator/interpreter.

You can think of the main bulk of a parser as a loop containing a switch over the token types. It looks for predefined patterns (syntax) in the token list; some are valid and some are not (syntax errors). This sounds like a perfect "match" :-) for pattern matching! And indeed it is.

Let's say we have a simple DSL made up of a list of fields. Each field has a type and a name;
int32   version
myAlias data
As you can see there are two kinds of types; we'll call them atomic types and alias types (myAlias is a pre-defined alias for some other atomic type). The main "switch" in the parser (using pattern matching) can look something like this;
This function takes a list of tokens and returns a Field and the rest of the tokens. An outer loop would run this repeatedly until the token list is empty. The "T" types are tokens, and "Field" is the resulting type ready for the evaluator.

Now let's say we want to make some fields optional, they should only be present if a specific condition holds true. We extend the syntax like so;
int32   version
int16   dodgy       if? version > 2
myAlias data
This means we have to extend our switch to handle all cases;
We just doubled the number of cases. It's still kind of nice and clear, but as an F# developer, this level of duplication is already making me a bit nauseous. Let's say we extend the DSL even more: we want each field to have a set of options;
hidden     int32   version
deprecated int16   dodgy       if? version > 2
           myAlias data
This doubles the number of cases again; the level of duplication is now pretty much unbearable :-) Thankfully, F# active patterns come to the rescue! Active patterns can be thought of as a way to impose a structure onto a set of data (such as a list) and reason about those structures (treating said list as a binary heap, for example). This can remove duplication and make code easier to read and maintain. Let's start by tackling the newly introduced options, by defining a couple of active patterns;
The "(|" brace is called a banana clip and is used for active patterns. In this case we have defined a partial active pattern, "ValidFieldOption", which only matches two types of tokens. The "FieldOptions" pattern is recursive and builds up and returns a set of valid options. It eats one token at a time, and if that token satisfies the ValidFieldOption pattern it is added to the set (and the pattern calls itself with the rest of the tokens for another round of matching). Our main switch can thus be simplified;
One interesting thing to note here is that on the same line as the active pattern is triggered, we also match (with a sub-pattern) on the result list from FieldOptions. I.e. in the first case, the "TAtomic(t) :: TString.." is another pattern that is matched against FieldOptions' returned list!

Let's try to simplify the duplication for the two field types;
Which gives a cleaner "switch" like so;
And finally we can "banana clip up" the condition expressions;
Which leaves us with our final version of the main parser switch;
Pattern matching is very powerful and useful in many circumstances, and F#'s addition of active patterns makes it even better. It becomes easier to break the patterns apart and avoid duplication, thus making the code easier to read and maintain. Pattern matching is available in some other languages (ML, Erlang, Haskell etc.) and we will look at Scala and Clojure in future posts. Clojure solves pattern matching the "Lisp way", using macros, and this can be extended to do something like active patterns as well.
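As a small taste of the Lisp way, here is a rough Clojure analogue of the first field-parser switch, using plain sequence destructuring (the token representation here is invented for this sketch);

```clojure
;; Tokens are tagged vectors, e.g. [:atomic "int32"], [:alias "myAlias"],
;; [:string "version"]. (Representation invented for this sketch.)
(defn parse-field [[[tag value] [_ fname] & more :as tokens]]
  (case tag
    :atomic [{:field fname :type value} more]
    :alias  [{:field fname :alias value} more]
    (throw (ex-info "syntax error" {:tokens tokens}))))
```

Like the F# version, it takes a list of tokens and returns a field plus the remaining tokens.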

Tuesday, 29 November 2011

Scheme as an embedded DSL in Clojure

If you give someone Fortran, he has Fortran.
If you give someone Lisp, he has any language he pleases.
-- Guy Steele
Replace Fortran with whatever language you are currently using, and the quote still holds true today. Lisp has been around for a long time, and its built-in flexibility is still unmatched by other languages. In this post we will look at key Lisp concepts such as code-is-data and powerful macro semantics.

When you write programs in Lisp, you tend to solve problems very differently from how you would solve them in OO languages, and also differently from how you would in other functional languages. Where in ML you would write a set of types and functions to operate (match) on them, in Lisp, and Clojure specifically, you are more likely to stick to the core data types and write functions and macros that make up a Domain Specific Language (DSL) for the problem at hand. More specifically, an internal/embedded DSL using the Lisp syntax, but with new functionality that makes the solution or logic simple and clear. The ability to transform code in Lisp is very powerful indeed, and makes you think about code in a different way.

One big benefit of internal DSLs is speed of execution. Since we map to Clojure's native constructs, the examples below will run at the same speed as code defined directly in Clojure (they are in fact the same!). In future posts we will look at external DSLs using interpreters, which are much slower.

So here's an example of how (a subset of) Scheme can be written as an internal DSL in Clojure. The full code is available on github.

As you might suspect, this is pretty simple since a lot of functions are exactly the same in Scheme and Clojure.
Some Scheme functions simply have a different name in Clojure, and can be bound to a new var like so;
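For example (a one-liner sketch; the full mapping lives in the linked source);

```clojure
;; Scheme's display prints without a trailing newline,
;; just like Clojure's print.
(def display print)
```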
Scheme's define form is called def in Clojure, and since def is a special form, we can't use the same strategy as with display; rather, we need a macro to transform define code to use def;
This uses syntax quoting to build a new list and unquote splicing to get back to the argument list that def requires. Note that this example ignores the fact that in Scheme "define" is used both for simple var bindings and for function definitions (see the full source code for a macro that handles both cases).
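The simple-binding case might look like this (the function-definition case is omitted from this sketch);

```clojure
;; Rewrite (define x 42) into (def x 42) using syntax quoting
;; and unquote splicing.
(defmacro define [& body]
  `(def ~@body))
```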

A slightly more involved example is Scheme's cond form, which uses an extra pair of parens for each case;
In Clojure cond is a macro that transforms the code to a list of nested if statements. So we can write a macro for the Scheme-cond that loops over the list of cases and transforms them directly to nested ifs.
As you can see, we also replace the "else" symbol with a Clojure keyword (which can actually be any keyword, since they are all truthy; :else is used for clarity).
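A sketch of such a macro, named scheme-cond here to avoid shadowing Clojure's own cond;

```clojure
;; Each clause is a list: (test body...). Expand recursively into
;; nested ifs, turning the else symbol into the (truthy) :else keyword.
(defmacro scheme-cond [& clauses]
  (let [[[test & body] & more] clauses]
    (when test
      `(if ~(if (= test 'else) :else test)
         (do ~@body)
         (scheme-cond ~@more)))))
```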

Clojure provides simple but powerful "debug" functionality for macros: macroexpand. It returns the expanded code for each step in the recursion, like so;
Here we can observe the recursive nature of the macro, any errors in the macro will be clear.

One note on this cond macro in particular: replacing built-in functions in the "host language" is a bad idea for an internal DSL. The user of the DSL expects to get both the power of the host language and the extra functionality provided by the DSL. Replacing Clojure's cond could be very confusing!

Not everything is a macro; if we look at the cons function, there is a subtle difference between Scheme and Clojure;
The second parameter in Clojure must be a sequence. A function is better suited than a macro to translate this;
We put the second parameter in a vector unless it's already a collection. The trick here is the recursive nature of cons; (cons 1 (cons 2 3)).
This will call our new cons function twice, from the "inside out". The number 3 will be put in a vector, but the result of the nested cons will not.
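A sketch of that function (named scheme-cons here rather than shadowing Clojure's cons);

```clojure
;; Scheme's cons accepts any value as its second argument;
;; Clojure's requires a sequence, so wrap non-collections in a vector.
(defn scheme-cons [x y]
  (if (coll? y)
    (cons x y)
    (cons x [y])))
```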

Finally the read-eval-print-loop is very simple;
Since our DSL has name clashes with Clojure, we need to exclude those names when defining the repl namespace. The REPL itself is a simple recursive loop that reads a line, evals it and prints the result. That's it!
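In its most minimal form the loop can be sketched like this (prompt string is my own invention);

```clojure
;; Read a line, eval it, print the result, repeat; stop on EOF.
(defn repl []
  (print "scheme> ")
  (flush)
  (when-let [line (read-line)]
    (println (eval (read-string line)))
    (recur)))
```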

Sunday, 20 November 2011

Tail Calls in F#, Clojure and Scala

I recently looked into Tail Call Optimisation/Elimination (TCO) and the implications for 3 modern languages, namely F#, Clojure and Scala. In this post I share my findings. If you're new to the subject, or just looking into some of these languages, I hope this post can be of some use to you. I will mix code snippets in the 3 languages freely (and without warning! :)

TCO is a well-documented topic in books and articles about functional programming, and TCO in .NET (and the lack thereof in the JVM) has been debated "to death" on various programmers' boards. I don't intend to add any fuel to the fire here, rather some background and practical implications.

Recursion is a fundamental cornerstone of programming, and is particularly emphasised in functional programming. It is the idiomatic way to loop over sequences in languages like Clojure. Here's a classic example: a function calculating the sum of all natural numbers less than or equal to n.
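In Clojure it might look like this (the post mixes languages, so the fsi transcript below exercises the equivalent F# definition);

```clojure
;; Straightforward recursion: NOT tail recursive, because the
;; result of the recursive call is still needed by +.
(defn sum [n]
  (if (zero? n)
    0
    (+ n (sum (dec n)))))
```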
This implementation is both easy to understand and correct, so what's the problem? Well, it is not "tail recursive". A tail recursive function has the recursive call at its tail and nothing else (it immediately returns the result of the call). In this case the result of the recursive call is used in an addition, and the result of the addition is returned. The practical implication is that during execution we build up a chain of stack frames, which cannot be freed until we reach n=0 and the results "bubble up".

When n is large this will lead to a "Stack Overflow" exception.
fsi> sum 1000000;;
Process is terminated due to StackOverflowException.
Every functional programmer has two handy tools in the toolbox to solve this problem: rewriting with accumulators (folding), and continuation passing style.

If we look at the sum function's loop and think about how we would implement it using imperative programming, we would probably write a for loop like so;
The recursive variant of this will look like this;
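A sketch of the accumulator version;

```clojure
;; The running total travels in acc, so the recursive call is in
;; tail position: nothing is left to do after it returns.
(defn sum-acc
  ([n] (sum-acc n 0))
  ([n acc]
   (if (zero? n)
     acc
     (sum-acc (dec n) (+ acc n)))))
```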
By passing around the result we have a tail recursive solution.

Continuation passing style is a bit trickier to understand: basically, instead of passing the result as a parameter, we pass a calculation (or continuation). This continuation will be a chain of closures, which can be completely resolved when we reach the end of the recursion. Here is an example;
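Sketched in Clojure;

```clojure
;; Instead of an accumulated value we pass an accumulated
;; computation: k is a closure that finishes the job.
(defn sum-k [n k]
  (if (zero? n)
    (k 0)
    (sum-k (dec n) (fn [r] (k (+ n r))))))

(sum-k 10 identity) ;; => 55
```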
If we call it with the id function as shown above, the result of the "closure chain" will be the result we are looking for. As you can see, this is also a tail recursive solution.

Clever compilers
We have now converted our simple example to a tail recursive version, so it should run for very big n without any problem, right? Well, not always. To understand why, we need to dig into how our compilers and runtimes work (in this case .NET and the JVM).

If we look at what the F# compiler produces for the accumulator-tail-recursive sum function above, we'll see this (de-compiled into C# with ILSpy);
That is great: the compiler has realised that the tail recursion can be converted to a while loop, and removed all recursive calls. The Scala compiler does the same (de-compiled to Java with Java Decompiler);
However, Clojure does not! The Clojure compiler requires an explicit form to convert "mundane" recursion into a non-recursive loop (Scala also supports explicit tail call checking, with the @tailrec annotation). This is the loop/recur form;
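For the sum example;

```clojure
;; recur is verified by the compiler to be in tail position and
;; compiles to a jump, so no stack is consumed.
(defn sum-loop [n]
  (loop [i n, acc 0]
    (if (zero? i)
      acc
      (recur (dec i) (+ acc i)))))
```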
Awesome, problem solved, what's all this fuss about TCO then?

Let's say we have two functions a and b calling each other recursively;
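A minimal Clojure sketch of the shape (the real thing would do some work between the calls);

```clojure
;; Mutual recursion: a calls b, b calls a. Both calls are in tail
;; position, but neither can be turned into a self-jump.
(declare b)

(defn a [n]
  (if (zero? n) :done (b (dec n))))

(defn b [n]
  (if (zero? n) :done (a (dec n))))
```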
This is called mutual recursion, and it is commonly used in functional implementations of finite state machines. Even though the a and b functions are tail recursive, they cannot be converted to while loops; the recursive calls remain. Quite naturally this will blow the stack fairly quickly. Steele and Sussman realised in their famous lambda papers back in the 70's that a tail-recursive function's stack resources can be freed as soon as the call is made; there is no point in keeping that stack frame around.

For the tail recursive sum example, with TCO we don't build up a chain of stack frames, and can thus handle summations of any depth (value of n).

The "dealloc-on-call" functionality is something the runtime has to support, which .NET does. If we look at the byte code for the function "b" above (compiled with tailcall optimization on) we see;
 IL_0000: nop
 IL_0001: newobj instance void Program/b@12::.ctor()
 IL_0006: ldarg.0
 IL_0007: ldc.i4.1
 IL_0008: add
 IL_0009: tail.
 IL_000b: call int32 Program::a(class [FSharp.Core]Microsoft.FSharp.Core.FSharpFunc`2, int32)
 IL_0010: ret
Please note the "tail." op code; that's the secret sauce! The F# compiler has found a tail call and inserted the "tail." op code, which tells the .NET runtime to free the caller's resources and proceed in the callee's stack frame. This allows the a/b example above to run indefinitely without any stack overflows.

So what about the JVM? The bad news is that there is no "tail." Java byte code (even if experimental implementations do exist). Here is what the Clojure compiler produced for the b function (the invokeinterface is the recursive call to a);
 0 getstatic #27 
 3 invokevirtual #54 
 6 checkcast #56 
 9 getstatic #31 
12 invokevirtual #54 
15 aload_1
16 aconst_null
17 astore_1
18 lconst_1
19 invokestatic #62 
22 invokeinterface #65  count 3
27 areturn
Clojure solves the mutual recursion problem with a "trampoline". The idea is that instead of a and b calling each other directly, they return a closure containing that call. The trampoline then runs those closures in its own stack frame, eliminating the stack build-up.
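Rewriting the a/b pair for clojure.core/trampoline might look like this (renamed ta/tb for the sketch);

```clojure
;; Return a thunk instead of making the call; trampoline keeps
;; invoking returned functions until a non-function comes back.
(declare tb)

(defn ta [n]
  (if (zero? n) :done #(tb (dec n))))

(defn tb [n]
  (if (zero? n) :done #(ta (dec n))))

(trampoline ta 1000000) ;; => :done, with constant stack usage
```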
Similar examples exist for Scala; a trampoline is in fact trivial to implement.

General TCO is always best, so F# and .NET have the upper hand here. However, Clojure and Scala are still fit for use, even if you follow a "strict" functional paradigm with lots of recursion. You have to be more explicit in the JVM languages, and be careful to remember trampolines in cases of mutual recursion (this is especially true for FSMs that change state slowly and can act as time bombs in your program). Being explicit about tail calls is not necessarily a bad thing; it shows the programmer has thought about the code and highlights the behaviour.

Update: I should point out that using continuation passing style in Clojure, although tail recursive, still suffers from the lack of TCO. You will get stack overflows when the "closure chain" gets too big.

Update 2: A clever compiler can convert mutual recursion into while loops. The a/b example above can be transformed into something like this; however, I don't know of any compiler that does this!

Thursday, 10 November 2011

Applied Symbolic Execution with KLEE/LLVM

This is a follow up article to my previous post on symbolic execution. Here we look at KLEE and LLVM in more detail, and present a potential practical application for a symbolic executor. We also discuss some of the limitations and drawbacks with this approach.

Our changes to KLEE and LLVM can be found on github.

One limitation of symbolic execution (and dynamic code analysis in general) is that the code under analysis needs to be buildable and linkable. Thus, it's harder to analyse subsystems (or snippets) than with a lint tool. Another complication is that the symbolic executor's virtual machine also needs to understand/model the system calls that the code uses. This makes the tool OS dependent, since you have to emulate all calls that "escape" the executor. Cadar, Dunbar and Engler explain how this can be done for Linux, analysing GNU coreutils, in [1].


KLEE is built on the LLVM compiler infrastructure. LLVM defines a language-independent intermediate code representation called LLVM-IR. KLEE contains an LLVM-IR interpreter (executor) capable of executing any program in LLVM-IR format. Further, KLEE allows you to mark certain areas of memory as symbolic, altering the execution to cover previously untouched code. At a very high level, KLEE creates an internal state for each instance of execution that runs a unique path. The points at which new states are created (forked) are typically branches where the condition is symbolic.

At any given time, KLEE can calculate concrete values for the symbolic memory that force the code down a particular path. This technology can thus be used to generate a set of test vectors to drive full test coverage of any given piece of code.

In its current implementation, KLEE also checks for erroneous memory references and division-by-zero defects. Whenever such a defect is found, KLEE will generate the concrete values that caused it. This means that in the set of defects KLEE finds, there are no false positives.
As you can imagine, for a big program a potentially huge number of states can be created, due to the CFG path explosion problem. This is one of the limitations of a symbolic executor.

KLEE makes heavy use of a component called STP [2], a purpose-built constraint solver used to evaluate the accumulated path constraints on the symbolic data. Solving the path constraints (and the Boolean SAT problems they are converted to) is in fact NP-complete, so the time it takes to find a solution is unbounded. Fortunately, the area of theorem provers and SAT solvers is under heavy research, and progress is steady. STP combines a best-in-class SAT solver with many heuristics and optimizations tailored for the kind of constraints you see when executing code. It has proven to be very effective.

Finally, while executing a program and building up a big set of states, KLEE has to determine which state to "schedule" next. Using an optimized searcher (KLEE terminology) is crucial for finding the correct path and solving the problem you are interested in. KLEE provides a wide range of searchers, optimized for different use cases such as achieving maximum code coverage.

The first step in analysing code with KLEE is to generate a single LLVM-IR executable containing the code you want to test and the libraries it depends on. At the time of writing, there are two compilers capable of generating LLVM-IR: llvm-gcc and clang. With llvm-gcc you can use all gcc front-ends (i.e. C, C++ etc.), whereas clang offers support for C and Objective-C (support for C++ is being worked on).
When you have compiled all your code (and supporting libraries) into LLVM-IR, you need to link it together into a single file (the command llvm-link can be used for this). KLEE can now execute and analyze the code in this binary.

However, in most cases this will only result in a normal execution of your code, since KLEE has to be told what parts of memory to treat as symbolic. It is also very likely that the code you are interested in is not reachable from the default main() function in this binary. More often than not you will need to edit your code to remedy these issues. Fortunately, marking variables, arrays etc. as symbolic is very easy, and only requires one added line in your code. Making the code of interest reachable is potentially harder: basically, you will need to call into the APIs/functions of the code of interest. If you have an existing test suite, that's a very good starting point; if not, you need to write at least parts of one. It is also worth noting that once a test suite exists, KLEE is a very powerful addition to regression testing, automatically covering and looking for certain defects in new code.

These manual aspects of setting up code for analysis are one of the drawbacks of symbolic executors like KLEE, limiting the level of automation you can achieve. However, certain aspects of this manual work can be simplified [6, 7].

It is important that you compile all libraries into LLVM-IR alongside the code you want to test. The KLEE interpreter will resort to calling into “the environment” (i.e. the runtime / operating system) for all unresolved symbols in the LLVM-IR binary. This is fine for normal execution; however, KLEE cannot make calls outside the LLVM-IR binary with symbolic arguments. That will result in the termination of the execution state.

As a general rule, you only want to resort to calling outside the LLVM-IR environment at as low a level as possible (i.e. system calls). KLEE contains a model of the 40 most common Linux system calls, and can cope with calls to them, since this model understands the semantics of the desired action well enough to generate the desired constraints. By creating your model at this level you also limit its size, and the rest of the library and runtime code can be executed as normal by the KLEE interpreter.

How we altered KLEE and LLVM
We mainly focused on the CFG path explosion problem during our study. The basic problem is deciding (using a searcher) which execution state to pick in order to reach the part of the code you are interested in. Picking a state at random, or doing depth-first search, is very likely to "get lost" in the combinatorial explosion of possible paths in a large piece of code.

Depending on your objective in running KLEE, several approaches can be taken. If the goal is to maximize code coverage, you can pick the execution state most likely to lead to new code being covered. KLEE comes with a searcher that picks the state "closest" to uncovered code.
During our study we used the assumption that we would already know (several) areas of interest in the code under test. Let's say we have an "Oracle" pin-pointing areas of interest in the code. This Oracle could for instance be a static code analysis tool like Coverity, CodeScanner or lint, or even manual code review.
In this case, KLEE can be used to verify the findings of these Oracles, removing false positives and generating test cases for the true defects. It can also help answer the question of under which circumstances a particular part of the code is executed.

Given a set of areas of interest (in the form of file names and line numbers) and a LLVM-IR binary of the code, we created a LLVM analysis pass that does the following;
  1. Translates the file name / line number tuples to LLVM-IR basic blocks.
    Explanation: given a number of potential problems reported by an Oracle (in the form of file name and line number), the LLVM pass tries to map each to a specific basic block in the IR.
    The goal for the KLEE searcher is now to "hit" that block and see whether it contains a defect.
  2. Generates a set of N unique paths between the entry point and the basic block from 1.
    Explanation: the set of basic blocks from 1 is not enough information for the searcher to find a path; it also needs hints about potential basic blocks leading towards it. We used a naive approach based on graph theory to generate a large number of paths connecting the root block to the block of interest. Most of these paths will probably be infeasible, so it is still a rough approximation.
    Depending on the complexity of the code, this operation can take a long time.

We wrote a new searcher to be used when executing the same LLVM-IR binary in KLEE. This searcher selects states by matching them against the set of pre-generated paths, and terminates when all basic blocks of interest have been covered.

Areas of improvement
A particularly hard problem for symbolic executors is how to reason about symbolic pointer dereferences. In its current implementation, KLEE does an exhaustive search for each symbolic pointer dereference. This implies checking whether any solution of the pointer's path constraint lies outside any allocated memory area. Though correct, this is very expensive, and will lead to a massive increase in states in a big program. There are suggested solutions to this problem; one that looks particularly promising is described in [4].

The annotation of the code to mark memory areas as symbolic (i.e. mechanically inserting klee_make_symbolic calls) can be automated; see DART [6] for API analysis, or KleeNet [7] for an ANTLR-based solution.

For supporting big real-world programs, more aggressive pruning of execution paths must be done. One very good way to do this is to record actual execution paths during concrete program execution. This is done in both Godefroid's DART [3] and BitBlaze. They both use an emulated environment for non-symbolic execution (to track actual execution paths), which can then be fed into the symbolic executor for further analysis. BitBlaze uses QEMU for this purpose, and is a nice practical hybrid of VMs and Valgrind's VEX-IR. Avalanche is a simpler solution relying solely on Valgrind.

  1. Cadar, Dunbar, Engler 2008
    KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs
  2. Cadar, Ganesh, Pawlowski, Dill, Engler 2006
    EXE: Automatically Generating Inputs of Death
  3. Godefroid, Nori, Rajamani, Tetali 2010
    Compositional May-Must Program Analysis: Unleashing The Power of Alternation
  4. Godefroid, Elkarablieh, Levin 2009
    Precise Pointer Reasoning for Dynamic Test Generation
  5. Godefroid, Levin, Molnar 2008
    Automated Whitebox Fuzz Testing
  6. Godefroid, Klarlund, Sen 2005
    DART: Directed Automated Random Testing
  7. Sasnauskas, Link, Alizai, Wehrle 2008
    Bug Hunting in Sensor Network Applications

Tuesday, 8 November 2011

Is LLVM the beginning of the end for GNU (as we know it)?

GNU and Richard Stallman were real catalysts for the open source movement and its crown jewel, the Linux kernel. Not only did Mr Torvalds' early Linux releases have near 100% GNU "user-land", he also decided to release them under the GNU Public License, the GPL. GNU and Stallman are forever linked with the birth and popularization of open source, and innovated both technically and legally by turning copyright law on its head with the copy-left licenses. The Free Software Foundation, custodian of the GPL, is a constant source of spicy statements about the state of the software industry.

As the years have progressed, the "GNU percentage" of the popular Linux distributions has dwindled, quite naturally if you ask me. GNU focuses on core *ix command line and developer tools. The most important project is the GNU Compiler Collection (GCC), used by pretty much everybody in the Linux community (and on other platforms) for compiling C/C++ code. Since GCC (with binutils) can generate code for pretty much any CPU, the term "Linux community" in this context includes smartphones (Android, Linaro, MeeGo etc.) and countless other embedded systems.

When Apple transformed NeXT (Mach micro-kernel and BSD user-land) into OS X, the whole tool suite was based on GCC and its debugger GDB (targeting PowerPC). They made significant contributions to GDB to make it sing in the Xcode IDE.

Even though there is a plethora of GNU projects, it's GCC (and perhaps Emacs) that has kept GNU relevant moving into the 21st century.

However, there's a new kid on the open source compiler block, and it's called LLVM (with its C-family frontend Clang). LLVM is a real breath of fresh air in the compiler space. It features a beautiful modular architecture, making it ideal for research and innovation. It already outperforms GCC in both compilation speed and, more importantly, quality of the generated code. The GCC code base has always been known to be almost impossible to work on, and LLVM has really called that bluff. LLVM/Clang is moving ahead at lightning speed: it already supports C/C++/Obj-C, features a debugger and has its own C++ standard library.

Apple is the main benefactor of the LLVM project, and they have already replaced GCC as the default compiler in Xcode 4, with GDB being swapped out pretty soon as well. With Apple "gone", how long will it take for the Linux community to switch over? The technical inertia should be small, since the Clang developers have gone for drop-in GCC replacement capability. If the technical benefits are there, I think the switch could be pretty rapid.

GNU without GCC is less relevant, and the "GNU percentage" will drop even further. I believe we are witnessing the beginning of the end for GNU, at least in the form we know it. The Free Software Foundation still has a role to play as opinion makers, lobbyists and evangelists of the GPL.

Saturday, 5 November 2011

Javascript and the future of code in browsers

Here's an observation: Javascript (the lingua franca for code in browsers) is transforming from a programming language into a specification for code generators, an IL (intermediate language) if you will.

Popular tools like CoffeeScript, and more experimental ones like ClojureScript and Google Dart, are just some examples. That's great, right? Well, yes, kind of. Developers get better tools and can be more productive / write more correct programs. However, this can be seen as another form of obfuscation, because the output is anything but readable. The output is certainly not small either, leading to slower websites; the famous Dart "hello world" example compiles to some 17,000 lines of Javascript.

Javascript is not well suited as an IL. We can do much better. What if we had a proper managed runtime common to all browsers? Think JVM or CLR: something actually designed to be an IL, and not a crippled language taken by surprise once again. This would mean that, with the right compiler, you could use any language you wanted for your web development. Heck, you could use any combination of languages you wanted. I really like the sound of that!

One obvious objection to this idea is that we don't want binary blobs in our HTML pages; they should be readable. But how readable is obfuscated Javascript anyway, especially if it has been code generated? The claim that IL isn't readable doesn't really hold true either; check out what you can do with Reflector or Java decompilers. That stuff is pretty readable if you ask me;

So what would the HTML with this code look like? Well, the obvious way is to put a link in there;
Or we could use the byte codes directly;

Another example is to put the blob in there;
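The embedded markup examples are missing from this copy of the post, but a hypothetical sketch of the two variants might look something like this (the application/il MIME type and the .ilc extension are made up purely for illustration):

```html
<!-- hypothetical: link to a compiled IL module -->
<script type="application/il" src="myapp.ilc"></script>

<!-- hypothetical: embed the byte codes directly as an inline blob -->
<script type="application/il">
  TVqQAAMAAAAEAAAA...
</script>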
With good APIs, we could merge things like WebGL into this. And given that it's a proper IL, we could make the runtime really fast, using JIT compilation etc. This would eliminate the need for "inventions" like Google NaCl, which is a bad fit for the web if you ask me. Also, Java or Silverlight plugins would no longer be needed.

I can't shake the feeling that Google missed a great opportunity when they announced Dart. They were thinking too small. Dart should have been a proper IL instead of a new language and a new language-specific runtime.

Thursday, 3 November 2011

Why F# needs Mono (and really should be a JVM language)

When people think about .NET development, they think of C#. Sure, there are other languages (VB.NET etc), but .NET and C# are very tightly linked (just drop a .NET assembly in Reflector for technical proof). If you're writing a new Windows application (and it's not a high-performance game), chances are you are reading WPF books right now.

One of the promises of .NET when it was released was "the great language-independent runtime", making all these languages interoperate in joyful bliss. Technically this still holds, but in practice it's all about C#.

F# is still considered a new .NET language, but the fact is that it's been around since 2007. Let me tell you, it's an absolute gem of a language, and it's fully supported in Visual Studio 2010. F# is however not .NET the same way C# is. Drawing from its OCaml roots, it doesn't feel like typical Microsoft technology (it's just too darn good :) It seems more at home in the open source world (the F# compiler is in fact open source).

At the recent Microsoft Build 2011 conference, where Windows 8, WinRT, the future of Silverlight, the new Visual Studio etc were announced, F# wasn't mentioned in the headlining presentations. How much mind-share does F# have internally at Microsoft?

The approach taken by the F# team is to educate the C# crowd and win them over on technical excellence. It's a valid strategy, but it's going to be a very long process. Force-feeding a new language and a new programming paradigm to the OO crowd is a very hard sell. What needs to happen in the .NET community for F# to gain ground is pull rather than push: a couple of high profile Microsoft apps showing off F# to the world (just see what Rails did for Ruby) would get the C#-ers' attention.

.NET being Microsoft technology, it's Windows only. Microsoft has opened the runtime standard, but it's a mine field of patents, and Microsoft has an army of lawyers. If you're a non-Windows user, your only hope is Mono, an open-source .NET runtime and C# compiler with a limp. In my experience, it's not ready for running production code. Some of Mono's core technologies, like its garbage collector, are not up to scratch; I can still easily get the runtime to crash using the experimental "generational GC". Some really core .NET libraries are missing, like WPF, and it doesn't support tail call elimination.

F#'s creator Don Syme is flirting heavily with Mono, and he understands the importance of getting the universities and hobbyists on board. There is no way they are going to fork out for a Windows and Visual Studio license. F# is included in the standard Mono release, but it's still missing from the MonoDevelop IDE. Tomas Petricek has made an F# plugin for MonoDevelop, but it is not yet officially included, and it doesn't work with version 2.8. A big part of the potential F# community is in the open source world, so F# needs Mono. Too bad Mono isn't good enough.

F# is perceived as more of a "server room" language, leaving the UI code to C#. Technically this is false, but it is a bias that will be near impossible to break. Server rooms are the domain of *nix, thus once again F# needs Mono.

So, F# has a tough battle for survival against C# on Windows and a weak story in the open source world. F# sorely needs a good open runtime; it deserves it. Mono's main contributing company, Xamarin, seems to have its focus elsewhere (iPhone apps written in C#) instead of fixing the basics. It's worth noting that there are other promising projects on the horizon, however; vmkit might come along and save the day? Unfortunately it's going to be a long wait.

I've thought many times how awesome F# would be if it targeted the JVM. Scala would be completely redundant and F# would have a much bigger and more eager user base. There is an F#-shaped hole in the JVM language space, and given its open source heritage, I am sure it would be a big hit. The server-room developers are now starting to use Scala and Clojure; just imagine what they could do with active patterns and asynchronous workflows! The JVM world needs a solid ML language; there have been some valiant attempts, but none of them have taken off.

Mr Syme; just think how much easier this would all be if you were in the JVM camp? :-)

Ps. Here's a workshop presentation by Richard Minerich showing F# in Mono and MonoDevelop.

Update: After talking to some people in the F# / Mono community, my lacklustre attitude toward Mono has slightly changed. The TCO problem is fixed, which I can verify. And even if we all agree that the Mono + F# tooling situation is bad, the runtime seems to be in better shape than I feared, and is indeed used in production. I stand by my absolute conviction that F#'s fate is tightly linked to Mono's, and thus Mono deserves the full attention of the F# community.

Wednesday, 2 November 2011

Scheming in F#

Given that I worship at the SICP altar, it should come as no surprise that I follow the recipe outlined in chapter 4 of said book: implementing a Scheme interpreter in every language I am trying to learn. Over the years this has turned out to be a very useful exercise, since the problem is just "big enough" to force me to drill into what the language has to offer.

I'll post the source of the interpreters on github and highlight some of my findings in more detail in this and future posts. I am not going to write and explain too much about the languages themselves (there are plenty of books and tutorials for that purpose), just highlights :)

F# is part of the ML family and largely compatible with OCaml. It's one of the new hybrid functional / OO languages (like Scala, Clojure etc) that the kids are raving about these days. This means it can expose and interact with .NET libraries and objects seamlessly. It also has a whole host of other functionality, like active patterns, asynchronous workflows and (soon) type providers, that I will get back to in future posts.

Let's start with discriminated unions, which are a very powerful way of concisely describing (in this case) the syntax of the language;
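The embedded snippet is missing from this copy of the post, so here is a hypothetical sketch of such a union type; the constructor names are illustrative, not the actual definition from the github source:

```fsharp
// Hypothetical sketch of the Scheme expression type.
type Expression =
    | NIL                                           // the empty expression
    | Number of int
    | String of string
    | Symbol of string
    | Combination of Expression list                // a list of expressions
    | Function of (Expression list -> Expression)   // a built-in procedure
```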
It means just what it reads: "a scheme expression is either empty, or a list of expressions, or a list of list types, or ...". It really can't be any clearer than that now, can it?
ML's pattern matching is just fantastic when writing parsers, or any code really. This is how the evaluator of Expressions looks;
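Again the embedded snippet is missing here, so this is a hedged sketch of such an eval/apply loop, assuming an Expression union with constructors like Number, Symbol, Combination and Function (the real evaluator also threads an updated environment through special forms like define, elided here):

```fsharp
// Illustrative sketch, not the actual source from github.
let rec eval (env: Map<string, Expression>) expr =
    match expr with
    | NIL | Number _ | String _ -> expr        // self-evaluating
    | Symbol name -> Map.find name env         // variable lookup
    | Combination [] -> NIL                    // the empty ([]) case
    | Combination (head :: tail) ->            // the head::tail case
        apply env (eval env head) tail
and apply env f args =
    match f with
    | Function fn -> fn (List.map (eval env) args)
    | _ -> failwith "not a function"
```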
This is the famous recursive eval/apply loop as described in SICP. Of all the languages I've written this in, nothing is as concise and readable as ML. The main match statement is basically a switch, but there are a few subtleties here. For instance, the Combination matches have a nested pattern separating the empty ([]) case from the list (head::tail) case.
The interactive Read-Eval-Print-Loop (REPL) also deserves a shout out. It uses the F# pipe operator, which passes the result of one function to the (last) parameter of another. So the try/catch is basically saying;
  1. read a line from the console
  2. convert it to a list
  3. parse that list
  4. take the head (first element) of the list (this is the expression)
  5. evaluate it
  6. call itself recursively
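The steps above can be sketched like this (a hedged reconstruction, since the embedded snippet is missing; parse and eval are assumed from the interpreter, with eval here returning the result together with the new environment map):

```fsharp
let rec repl (env: Map<string, Expression>) =
    try
        System.Console.ReadLine()     // 1. read a line from the console
        |> List.ofSeq                 // 2. convert it to a list
        |> parse                      // 3. parse that list
        |> List.head                  // 4. take the head (the expression)
        |> eval env                   // 5. evaluate it
        |> fun (result, env') ->
            printfn "%A" result
            repl env'                 // 6. call itself recursively
    with ex ->
        printfn "error: %s" ex.Message
        repl env
```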
Please note that the new environment map is passed as a parameter in the loop, meaning it can be immutable!
All the code can be found here; there are a few bugs (see failing tests) that I might fix later, or maybe you are up for it! :)

Symbolic Execution

A while back I worked with a colleague (Philippe Gabriel) on a research project looking into automating defect finding and improving overall test coverage. The main defect of concern at that particular time was null pointer dereferences, which could cause system-wide crashes. We looked at many different strategies and tools (both free and commercial). What really piqued my interest was a field of research called "Symbolic Execution". Here's the elevator pitch: what if you had a tool that automatically found "nasty bugs" by analysing your source code, with very few or no false positives, and produced the input stimuli to provoke each bug?

Dawson Engler at Stanford along with Patrice Godefroid at Microsoft Research have made great contributions to this field of research. Their detailed papers are a thrilling read.

This article contains some background information about what symbolic execution is, and in future posts I'll explain how this can be applied in practice (using frameworks such as LLVM and klee).

Static Code Analysis has made a lot of progress in recent years, and can now be applied to large real-world code bases and generate valid results. One of the key metrics is the ratio of actual defects to false positives. Optimizing this ratio is a very hard problem that involves a lot of heuristics, optimization and tweaking. One of the biggest benefits of Static Code Analysis is that it works on the source code itself: there is no requirement to build, link and run the code in question, and analyzing "snippets" of code is perfectly valid.

Unlike Static Code Analysis tools, a symbolic executor executes the code under test in a "sandbox" or virtual machine. The main difference from normal execution is that some inputs are treated as symbolic (allowed to be "anything"), and for code dependent on symbolic input the symbolic executor builds up boolean expressions (path constraints) used to force the code down previously unexplored paths.

Symbolic Execution
With Symbolic Execution, the program is "executed" in an abstracted manner. Let's describe this process with the following code fragment:
Say we want to ensure that func always returns a valid (non-null) pointer.
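The embedded code fragment is missing from this copy of the post, so here is a minimal hypothetical reconstruction, written to be consistent with the path conditions discussed further down (the path where x*y==0 but y!=0 leaves p null):

```c
#include <stdlib.h>

/* Hypothetical fragment: one path (x*y == 0 but y != 0, i.e. x == 0)
 * falls through both conditionals and returns p without allocating it. */
int *func(int x, int y)
{
    int *p = NULL;
    if (x * y != 0)
        p = malloc(sizeof(int));   /* both x and y are non-zero */
    else if (y == 0)
        p = malloc(sizeof(int));   /* y is zero */
    /* implicit path: x*y == 0 && y != 0 leaves p == NULL */
    return p;
}
```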

Control Flow Graph
The first step in symbolic execution is to generate a Control Flow Graph or CFG. A CFG is an abstracted representation of the code in the form of a directed graph. Each node is a "basic block" terminated by a conditional (here an if statement). Each edge carries a boolean "truth value" for the condition.

Once the code is expressed in this way it is easier to see the paths of execution. Note that CFGs are not specific to Symbolic Execution; any compiler generates a CFG when building the code, as this representation lends itself well to optimization techniques.

Analysing the CFG with symbolic values
With Symbolic Execution we are interested in finding feasible paths on which "bad things" happen; in this case, path(s) that lead to returning p without it being allocated. We want to understand how the value of p changes, so taking the same CFG, we state the value of p at each node. Or more precisely, we state the symbolic value of p: we do not actually care which specific memory location p points to, only whether p is null or non-null.

Note that we also draw the implicit code paths, as they lead to different symbolic values of p. With the CFG rewritten like this, it is trivial to see that there is indeed a path that leads to returning p without prior initialization. This simple example illustrates two basic techniques in practice;
  • Transformation to CFG
  • Reasoning on symbolic values

Scalability and Accuracy
Anyone designing a "symbolic tool" is faced with many hard problems. At a very high level, the problems fall into one of these two categories;
  • Scalability / speed of analysis, because any non-trivial function has a large number of paths.
    Ideally we want the tool to finish before the end of the universe.
  • Accuracy, because not all these paths are actually feasible and any defect reported along an infeasible path is a false positive
If you look at a much simpler "lint" tool, its usability is greatly reduced by the sheer amount of "garbage" it produces. An ideal tool will only report "real" errors and, as you will see further on, produce test vectors that trigger them.

Scalability and Inter-Procedural analysis
The scalability problem is compounded by the fact that for any non-trivial analysis, we cannot limit ourselves to the paths contained within a single function. For instance, in the previous example, it doesn't matter that func returns a null pointer if all the code that calls func actually checks for this. For a meaningful analysis we need to follow the path until the pointer is actually used, and in order to do that we need to analyse paths going in and out of the function (inter-procedural analysis). The upside of inter-procedural analysis is that it increases accuracy; the downside is that it exponentially increases the number of paths to analyse. Taking inter-procedural analysis into account, a real-life CFG is more likely to look like:

A symbolic execution engine that would naively enumerate paths exhaustively through this CFG would never complete. Knowing how to prune the CFG and identifying the small number of "interesting" paths is a very lively domain of academic research and a large bibliography exists on the topic. Heuristics to limit the path explosion problem come from either graph theory or are based on inferring more information from the context of execution.

An added complication that comes from having to analyze long paths to actually find anything interesting is that many of the "path conditions" along those paths are not satisfiable. Let's examine this fact in greater detail, as it leads us to another big idea that underpins symbolic execution.

Path Condition
Let's revisit our previous example and explain what path conditions are. We now rewrite the CFG with the truth value of each condition that we encountered along a given path. The path condition is the boolean equation that synthesises the truth values of the conditional expressions encountered at each node.

Here, for the path of interest (leading to a null p), this expression is: x*y==0 && y!=0. Hence we can associate a set of boolean equations with the edges along a given path. In the example above, we can now state that x*y==0 && y!=0 implies p==0.

Inter-Procedural analysis and Path Condition
Now that we know what path conditions are, let's see how we use them to solve the accuracy problem. We need to find out whether the path that is feasible in the narrow context of this function is actually valid in the larger context of a program (i.e. when we perform inter-procedural analysis). For instance, we might find that on all the paths containing a call to this function, prior to the call there is an ASSERT statement that checks x!=0.
The path condition then becomes: x*y==0 && y!=0 && x!=0
It is easy to see that this boolean equation cannot be solved; or, according to the lingo, there is no assignment of values to its terms that satisfies the equation. Deciding this is otherwise known as the Boolean Satisfiability Problem.

Boolean SAT applied to Symbolic Execution
Now, there's good news and bad news...
The bad news is that this is an NP-complete problem; in other words, a fiendishly hard problem whose known algorithms have exponential worst-case running time. This is not really apparent in this simple example, but any non-trivial path will be summarized by a path condition with hundreds of terms, which means that simply iterating through the possible assignments of the boolean variables would never complete.
The good news is that there are solutions for it. Most notably, solving boolean equations with large numbers of terms is something that the VLSI industry has been doing for years, and it has developed fairly efficient software along the way: SAT solvers. In fact there is a surprisingly large number of people constantly refining the algorithms in their SAT solvers to solve ever larger boolean equations ever faster, racing them against each other every year.
As a result, all modern symbolic execution engines incorporate SAT solvers. A solver would tell you in no time that x*y==0 && y!=0 && x!=0 is indeed unsatisfiable, and that there is no point reporting such a defect, which is clearly a false positive.

Monday, 31 October 2011

Rich Hickey

Rich Hickey is the creator of the Clojure programming language. Clojure runs on the JVM and is part of the Lisp family. It's been generating quite a lot of buzz for a couple of years and, together with Scala, is the hottest ticket in the JVM world. And none too soon, since Java is showing signs of rigor mortis.

Anyway, Mr Hickey is a great speaker and he's got some very interesting stuff to say.

Some of his ideas that I agree with;
  • Object Oriented design as popularised in C++, C#, Java etc really isn't any good for big complex systems. "Mutability by default" leads to incomprehensible and buggy programs
  • Thousands of objects (concurrently or not) modifying each others' state is just another form of spaghetti code (no matter how many design patterns and clever locking schemes you use)
  • Our obsession with testing is just a side effect of our insufficient programming tools. If we had better tools (for instance languages) we would more easily write correct programs and we wouldn't need to obsess about test coverage
  • Agile and short sprints can lead to always picking the "easy solution" instead of a better thought-out "simple solution". If this is repeated over a long time, the accumulation of technical debt leads to programs full of errors.
Obviously the real world isn't as black and white as stated above. It's very possible to write buggy programs in ML and Clojure, and absolutely gorgeous big programs in C++. However, generally speaking, Mr Hickey is onto something. The OO paradigm needs to be questioned and not taken as gospel; the new breed of functional/OO hybrid languages really has something to offer, especially as multi-core concurrent programming is becoming the norm.

Here's a link to a very good talk by Mr Hickey on some of these topics...

Much more on Mr Hickey and Clojure in future posts.

Update: Are we there yet? is another awesome presentation, dealing with popular languages, OO vs Functional, immutability and incidental complexity.

Sunday, 30 October 2011

What is software?

Having worked on many software projects and with a lot of different people, one thing that strikes me is the lack of understanding of what software is and how it's created. Even among us developers, there is still this belief lurking that, if you are senior enough, you can design and size a problem before you hand it over to some less senior developers to implement. (Some) people still believe that building software is the same as building cars on an assembly line.

Abelson and Sussman speak about a "sorcerer's spirit" in chapter 1 of SICP;

A computational process is indeed much like a sorcerer's idea of a spirit. It cannot be seen or touched. It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory. The programs we use to conjure processes are like a sorcerer's spells. They are carefully composed from symbolic expressions in arcane and esoteric programming languages that prescribe the tasks we want our processes to perform.
Some years back I saw a great presentation by Brian Cantrill (then a DTrace developer at Sun). The whole presentation is really interesting (DTrace is awesome btw), but the first "philosophical" part really stuck with me, and goes some way to explain what software is. Here's a transcript of the first part, and the full presentation is here.

Software is really different, it's unique. It is different from everything else that we (as humans) have made. And when we try to draw analogies between software and other things we have built, those analogies always come apart; they are always loaded with fallacies. So what is it that makes software so special? <...>

Here's the paradox: is software information, or is software machine? The answer is that it's both. Like nothing else, in software the blueprints are the machine. In software, once you have designed the thing, you have built it; it is the machine. That's why the waterfall model is fundamentally flawed. This idea that you can design the design before you design it, which is what the waterfall model is essentially saying, is flawed. <...>

This is a point that was hit home to me when I (Brian) first came to Sun and helped develop the ZFS file system with Jeff Bonwick. We were in Jeff's office, with the source code up in one window and the debugger up in another. All of a sudden Jeff steps away from his keyboard and says: "Does it bother you that none of this actually exists? We are looking at the source code over there, and the debugger there, and we think that we are looking at a thing. But we are not looking at a thing; we are looking at a representation of an abstraction. This (the software in the debugger) does not exist any more than your name exists. Your name doesn't exist, we just made it up. Doesn't it bother you?" <...>

We can't see software; what does running software look like? It doesn't look like anything. It doesn't emit heat, it doesn't attract mass, it's not physical. It's a mathematical machine. And that is the problem: all the thinking about software engineering has come from the gentlemanly pursuits of civil engineering. Men in top hats and suits would employ millions to build bridges and dams. It's very romantic, but it has no analogy in software. That is not the way software is built. Software is more like a bunch of people sitting around trying to prove theorems.

Saturday, 29 October 2011

Welcome and some guiding words...

Let's start with a great quote from my favourite book on computer programming, the mighty SICP (or Structure and Interpretation of Computer Programs) by Abelson and Sussman.
Underlying our approach to this subject (computer programming) is our conviction that ``computer science'' is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology -- the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects. Mathematics provides a framework for dealing precisely with notions of ``what is.'' Computation provides a framework for dealing precisely with notions of ``how to.''
The entire SICP book is available on the web here, and if you think of yourself as a programmer, and haven't read it, well, what are you waiting for?

So, welcome to yet another blog about programming. Not computers, but programming, or at least a very small slice of it. I'll put my thoughts on some topics that interest me here, and the focus will be on how to create readable, correct and maintainable code, on functional programming, and on some of the key problems a "modern" developer will face, like concurrency.

So, add this one to your RSS aggregator if you want to read about OCaml, Clojure, Scala, F#, Mono, concurrent programming etc...