Posts for the month of February 2014

A small, well-balanced set of very few selected ideas.

Zen and the Art of Symbolics Common Lisp.

Let's say that Lisp is very different because it was based on "good ideas", well researched in "good places" like the MIT AI and CS labs (no punks could get in there). It has something to do with standing on the shoulders of titans. The set of selected "good ideas" is intentionally kept small and well-balanced. Let's take a walk.

The "layers of DSLs" metaphor mimics as close as possible the way our mind uses a language, which we call "verbal", or "logical" mind, as opposed to "non-verbal, ancient, emotional" mind, which is based on evolved reflexes, captured by genes (Sorry!).

We use symbols to refer to the inner representations (concepts) we hold in our minds, so "everything is a symbol" (just a reference to a "storage") is a "good idea".

When we're talking within some specific context (a situation, or "domain") we tend to use a reduced, appropriate set of words (symbols) related to it, along with the usual "glue". This is what we call a Domain-Specific Language, or a slang, if you wish. This is also a "good idea": every group has its slang.

A layered structure is, in some sense, what Nature is: atoms, molecules, proteins, tissues, organs, systems, brain, body, you see. So layers of data are a good idea, but layers of DSLs are an even better one: they mimic not only how the world is structured, but how we should structure our programs.

Neither the language nor its layers are set in stone; they can be transformed, extended, and adapted using the very same mechanism that underlies them. Iterative looping constructs, for example, were built out of macros, and the LOOP DSL is the most striking example of how a language can be extended with itself as a meta-language.
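
A minimal sketch of the idea (the WHILE name and its DO-based expansion are mine, not part of the standard language):

    ;; a toy iteration construct built out of an existing one via a macro;
    ;; the language is extended without touching the compiler:
    (defmacro while (test &body body)
      `(do ()
           ((not ,test))
         ,@body))

    ;; (let ((i 0)) (while (< i 3) (print i) (incf i)))  ; prints 0, 1, 2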

Some "good people", like R. Gabriel, have argued that we need more complex control constructs (special forms) as long as we are trying to write complex programs (language should be adequate to the domain), so, the ability do define new special forms as we go, without breaking everything, is a "good idea".

This, btw, is the idea behind the recursive, bottom-up process of software development popularized by SICP and championed by pg and rtm: the language should evolve together with our understanding of the problem domain.

Structures are also a DSL. They give us a way to structure our data (everything in a Lisp is an expression, everything can be evaluated; this is another very "good idea") so as to mimic, or represent more conveniently, the objects of the real world. Structures can be nested; getters and setters are created automatically but can be redefined, and so on.
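
For instance (POINT and CIRCLE are illustrative names):

    ;; DEFSTRUCT is itself a DSL: one form generates a constructor,
    ;; a predicate, and SETF-able accessors:
    (defstruct point x y)
    (defstruct circle (center (make-point :x 0 :y 0)) (radius 1))

    (let ((c (make-circle)))
      (setf (point-x (circle-center c)) 5)  ; the "setter" comes for free, via SETF
      (point-x (circle-center c)))          ; => 5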

So, by extending the language with new, appropriate constructs (by defining new special forms) we can improve the way we model reality and create better "inner representations" for the concepts we have captured? Seems like a "good idea".

But wait: because everything is an expression (a Lisp form) which can be evaluated, why not put code blocks (expressions) into those same structures? Thus we get "data structures" which capture not just the characteristics (state) but also the behavior of real-world "objects".
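
A sketch of that move, with made-up names:

    ;; a slot may hold an expression (here, a closure), so the structure
    ;; carries behavior along with state:
    (defstruct account
      (balance 0)
      (withdraw (lambda (self amount)
                  (decf (account-balance self) amount))))

    (let ((a (make-account :balance 100)))
      (funcall (account-withdraw a) a 30)
      (account-balance a))                  ; => 70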

To capture behavior we need the notion of "protocols": a protocol is just a named set of generic functions. In Common Lisp this is spelled defgeneric and defmethod (Clojure later named the idea defprotocol outright); essentially, it binds names (symbols) to procedures (expressions) consisting of Lisp forms. Push this far enough and we get the MOP, the metaobject protocol on which CLOS itself is built.
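
In those terms the sketch looks like this (DOG, SPEAK, and MOVE are made up for illustration):

    ;; a "protocol": a named set of generic functions...
    (defgeneric speak (x))
    (defgeneric move (x))

    ;; ...and one kind of object that follows it:
    (defclass dog () ())
    (defmethod speak ((d dog)) "woof")
    (defmethod move  ((d dog)) "runs")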

I forgot to mention that since everything is a symbol (a reference), we can combine "objects" into lists and other aggregates, map them, reduce them: the whole set of language layers "below" is still available.
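
For example (a standalone sketch; CAT is a made-up class):

    ;; instances are ordinary references, so the list layer "below"
    ;; applies to them unchanged:
    (defclass cat () ())
    (let ((pets (list (make-instance 'cat) (make-instance 'cat))))
      (mapcar #'class-name (mapcar #'class-of pets)))  ; => (CAT CAT)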

This doesn't mean it is the only way to program; it is just one of the possible paradigms. What really matters is that, using the very same means of combination and abstraction, we can "import" any other paradigm we wish.

And everything is so uniform and concise that it can easily be traced back to "conses".

Btw, this is not a "strict", "rigid", "set in stone" language (it is nonsense to try to "fix" a language in place; that contradicts its very nature). We have reasonable, "evolved" defaults, such as lexical scoping for variables, but we can have dynamically scoped ones if we wish. It is likewise possible to extend the language with "non-local exit" control structures, which are your fucking Exceptions.
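
Both fit in a few lines (the *LEVEL* variable and FIND-FIRST-EVEN are made-up examples):

    ;; dynamic scoping on request:
    (defvar *level* 0)                 ; a special (dynamically scoped) variable
    (defun report () *level*)
    (let ((*level* 42))                ; the rebinding is visible down the call stack
      (report))                        ; => 42

    ;; non-local exit, the primitive underneath exception systems:
    (defun find-first-even (xs)
      (catch 'found
        (dolist (x xs)
          (when (evenp x) (throw 'found x)))))

    (find-first-even '(1 3 4 5))       ; => 4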

Immutability also has reasonable defaults. List and mapping functions produce a new list, leaving the original unaltered, while their "destructive" equivalents are segregated by an explicit naming convention (Scheme is famous for this).
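
Compare REVERSE and NREVERSE (in Scheme the same split is marked with a trailing "!"):

    (let ((xs (list 1 2 3)))
      (list (reverse xs) xs))      ; => ((3 2 1) (1 2 3)), the original untouched

    (let ((xs (list 1 2 3)))
      (setf xs (nreverse xs))      ; the "destructive" cousin, segregated by name
      xs)                          ; => (3 2 1)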

Evaluation strategies can also be explicitly selected, so lazy lists, or streams, can be defined using the very same conses, macros, and list notation.
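
A classic SICP-style sketch (the names DELAY*, FORCE*, and CONS-STREAM are mine, chosen to avoid clashing with anything built in):

    ;; laziness from nothing but conses, closures, and one macro:
    (defmacro delay* (expr) `(lambda () ,expr))
    (defun force* (thunk) (funcall thunk))

    (defmacro cons-stream (head tail) `(cons ,head (delay* ,tail)))
    (defun stream-head (s) (car s))
    (defun stream-tail (s) (force* (cdr s)))

    ;; an infinite stream, safe because the tail is not evaluated yet:
    (defun integers-from (n) (cons-stream n (integers-from (1+ n))))

    (stream-head (stream-tail (stream-tail (integers-from 1))))  ; => 3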

Being a small language (after all the transformations have been done: macro-expansions, rewriting rules, inlining) it can be efficiently compiled, using a compiler written in itself, directly into machine code, which runs on more platforms than the fucking JVM.

This is what some people consider a work of fine art, and call a "programmable programming language".

But this is only half of the story. There were also machines (a hardware FSM, if you wish) which were able to run Lisp code efficiently. That, however, is another story.

OO and type "safety" memes.

When intelligent people talk about different "kinds of objects", they tend to describe them in terms of "flavors", "behaviors", or "traits", while idiots talk about "classes".

These are abstract, ephemeral creations of the mind, which are almost always in contradiction with the so-called real world. Ideas, concepts, even logic: they are usually utopias, or oversimplifications disconnected from reality.

Let's be very careful here. Numbers in math are a very good example. A number can be of either this or that "type", and it can *sometimes* be "coerced" into another "class" without "losing precision".
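
Lisp's numeric tower is the canonical illustration (two standard-CL one-liners):

    (coerce 1/2 'double-float)   ; => 0.5d0, an explicit coercion
    (+ 1 1/2)                    ; => 3/2, contagion handled by the tower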

This is the most sacred notion of the proponents of strictness and rigidity: so-called "strong typing", "strict languages", fixed routines.

Let's call this the "OR-mind": the rather naive notion that things are either this or that, right or wrong, black or white, integer or rational (yeah, numbers are a special case).

This is the "naive logic" everyone begins with. Later they are trying to use "strict", "rigid categories" of the same kind. This is how wast (but meaningless) class hierarchies were created. And, indeed, in case of numbers they are good. As long as there is nothing but numbers to "model" Java is a great, the most strict language. Packers love it.

In so-called objective reality, however, these "strict" and "rigid" hierarchical classifications fail. Packers try to classify the phenomena of Nature in terms of "strong is-a", "strictly this OR that", and fail.

They talk about birds as something that "has wings", but there are bats and fish. They say that a bird "has wings" and "lays eggs", but there are reptiles. (They usually don't know that birds *were* reptiles, but that's OK; there is no notion of "were" in the packer's world.)

First, packers tried to remove the contradiction with so-called "multiple inheritance". Now they could say "this IS this AND that", which is also naive. An Ostrich "is-a" Bird, but while the method "HasWings()" returns "True", a call to the method "flyThere()" produces a very long stack trace.

Then "generics" and "interfaces" were added much later to these "packer's" languages (to mess everything up even more) so they could create "compound objects" as a "collection of traits".

Less "rigid" people, however, had the notion of so-called "duck-typing" from the very beginning. Smalltalk and modeled after it the Flavors DLS (the mother of CLOS) has exactly this approach. If it can do this-AND-that (follows the protocol) then it could be viewed (considered) as "one-of" this kind. This way of thinking we could describe as the "AND-mind". Less restricted, "light", context-aware.

The notion of "pattern matching" which is based exactly on the notion of "catching" different nuances (particulars, subtleties) of reality is closely related to a such mind-set.

Now about "type safety" meme. Again, if the world consist only of numbers and abstract ideas, then, perhaps, it could make some sense. However, processes are not "linear", classifications are not "three-like" when everything is just this OR that. Reality is much more complex than that.

Such naivety manifests itself when packers try to "model" moderately complex systems. "There is a Stream, so we define a class with methods. While reading the stream we will get Either data OR Nothing," they say. "Since our code covers both "cases", and the compiler checks the types of the variables, we are safe!" OK. What they get instead is a SIGPIPE signal, or an EAGAIN error code, or a "connection reset by peer" state, which are of a very different "type".

Well, they say, those are not our problems; they must be handled by the runtime. We want Either data OR Nothing; we don't want any signals, or states, or conditions.

In other words, when learning OO, follow the best minds, those behind Smalltalk or CLOS, who have a "correct" notion of what OOP is and what it is for. It is a way to structure code for reuse and to avoid duplication. It is a way to model a "complex" representation of real-world "objects" by composing distinct, usually unrelated "behaviors" or "traits". It is about communication between these "objects", strong isolation of each particular "instance", and encapsulation of its "internal state".

All of this was implemented using only "conses", "closures", and "message-passing".
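
A minimal sketch of that claim (MAKE-COUNTER is illustrative):

    ;; an "object" out of nothing but a closure plus message dispatch:
    (defun make-counter ()
      (let ((count 0))                  ; encapsulated, isolated state
        (lambda (msg)                   ; the message-passing interface
          (ecase msg
            (:inc (incf count))
            (:get count)))))

    (let ((c (make-counter)))
      (funcall c :inc)
      (funcall c :inc)
      (funcall c :get))                 ; => 2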

There is a beautiful and telling similarity between the defstruct and defclass "special forms" in Common Lisp. Classes are viewed as just "structured expressions", while expressions themselves are "data to be evaluated". Code is data, so it can be "structured" this way or that.
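
Side by side, the similarity is plain (the POINT names are illustrative):

    (defstruct point x y)                    ; data with structure

    (defclass point* ()                      ; the same shape as "structured
      ((x :initarg :x :accessor point*-x)    ; expressions", one layer up,
       (y :initarg :y :accessor point*-y)))  ; open to the MOP underneath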

Thus, logically, OO "adds a second (and third, etc.) dimension" to the code, the same way "structures" do for data. As long as one treats code as data, everything is perfectly natural and "logical".

The most important real-world idea behind OO is, perhaps, that something can exhibit more than one behavior, have more than one trait, the way people do. Food has more than one flavor, too.

So, OO programming is not only about packaging code into methods of classes and enforcing the restriction that "every object must be a member of a class" (for most Java or Ruby coders, OO ends here), but rather a way to model real-world objects by structuring data not just as "2D trees of variables of this OR that type" but as "nD graphs of expressions AND actors/agents".

The people behind the Lisps, Smalltalk, and MOP/CLOS were much brighter, more broad-minded, better educated, and more enthusiastic than most of the modern "punks".

Two ways of cooking.

Sometimes, to better understand a complex system or process, we apply knowledge from a different field, because there seem to be subtle ideas and general principles which hold across many domains and even cultures.

Let's consider cooking, the process of preparing food. Leaving innumerable subtle nuances aside, there are two common approaches to cooking.

The first one goes like this. One goes to the shop and buys a lot of very expensive, branded goods: an overpriced, branded "organic farm meat", a huge piece of fillet in a vacuum box, ten boxes of different premium "oriental" spices, the most expensive Italian olive oil, and then a full cart of different kinds of "organic" vegetables and bunches of fresh coriander, rosemary, dill, green onion, etc.

Then he comes home and starts cooking. Usually such cooking is a process of "frying everything together" or sometimes of "making a village-style curry": he just puts everything in a big pan and heats it up for a while. Because there are lots of expensive ingredients, the whole dish is usually edible, so he considers himself a good cook, gets his gratification, reinforces his self-esteem, and becomes even more over-confident and proud of his accomplishments.

No wonder, because, at least for me, almost any freshly cooked, still-hot, non-synthetic meal is quite edible. The real question is: does this amount to good cooking at all?

The above scenario is very over-simplified and over-optimistic. First of all, if he applies too much heat, as they usually do, he will burn everything; or, if he notices the burning and turns the heat off at the right moment, he will end up with burned-outside, raw-inside pieces. Or he might easily over-boil everything, breaking down all the complex molecules, with the result being a thick, almost tasteless stew.

Heating food is a very subtle process which requires literally tons of empirical knowledge that cannot be bought in an expensive grocery store or copied from a recipe book. One must know how to vary the temperature depending on which ingredients are being processed (it is OK to heat oil before putting spices in it, but if one puts the onions in first, they will burn). One is also supposed to know that different vegetables require very different amounts of exposure to heat in order to be properly cooked rather than dissolved into a soft mush, and so on.

So, roughly speaking, there are notions of varying the amount of heat, notions of order (when to put in what), and, most importantly, notions of how the whole process changes with each new ingredient and/or change in processing.

This brings us to the second approach to cooking. We could call it professional or, ironically, poor man's cooking. It comes mostly from Asian countries, where people still aren't spoiled by over-consumption and do not engage in the practice of spoiling huge quantities of expensive food.

In Asia, people noticed millennia ago that some ingredients form a very good match and that this is just good enough. Over time they simply refined the recipes, and traditional dishes emerged: the traditional Indian roti-sabji, or alu-mottor, or Nepali daal-baat with tarkari, or all those amazing Chinese and Tibetan dishes.

The underlying principles here are, not surprisingly: less is more (because food is less abundant, especially in the Himalayas); capture a good match (I prefer to call it maintaining a balance); and, last but not least, heat it just enough. The result may be a bit raw (meaning heated to a temperature high enough to kill all germs, but not for too long), but it is never spoiled by over-heating or burning. It also almost always consists of no more than 3 or 4 ingredients of different kinds, with some hot spices (even fried vegetables are quite tasteless without proper spices).

This is by no means an accurate outline of the Asian approach to cooking, and the point is not to be very accurate. The point is to show that we have the same two approaches in software engineering.

The first approach corresponds to the very common and very popular "branded toolbox" approach or, as I prefer to put it, "put all the crap inside". Java, C++11, Ruby, PHP: they are all about having hundreds of classes with tens of methods because, you know, "the more the better". They also usually follow over-simplified, all-or-nothing principles, similar to religious beliefs, based on notions like "OOP is the best approach for all tasks and the culmination of human knowledge" and similar nonsense. Having a soft mess of hundreds of classes (but everything must be an object! no exceptions!) is what makes them happy.

What is quite remarkable is that they really believe that by piling up even more Java crap, adding even more expensive ingredients, putting in more stuff and overheating it, the result will be better. Well, it will definitely taste a bit different, but still like crap.

The second approach belongs to a minority, to marginals who, for some obscure and non-rational, non-customer-centric, non-efficient, not-getting-shit-done reasons, try to find a balance, a "perfect" match: of, say, FP with networking and concurrency primitives, as in Erlang; or to model a runtime system on the ways the really complex systems of Nature, such as our body and brain, work: loose coupling, share-nothing (the brain), hierarchies of receptors and actors, feedback loops (the nervous system), communication by message-passing (blood vessels as communication channels), etc.

Sometimes it works. Not perfectly, of course (Erlang's syntax is.. well, still much better than Java), but it works remarkably, incomparably better, and it tastes really good, with very few carefully selected ingredients and appropriate processing.

Java or C++ tastes like crap. No matter what. You look at the source and you feel sick. You look at the runtime and you get nauseous. You look at the documentation and you get a headache. You look at the forums and you see over-confident idiots, oh, pardon me, amateur cooks.

Other examples of the finding-a-good-enough-balance approach are the Lisps, like R4RS Scheme or Arc (while CL and the R5RS+ Schemes already suffer from putting everything in). Clojure, on the other hand, is a counter-example: it is a product of the "put it all inside" approach. Then Haskell (though some syntax constructs are there because "the more the better"), Smalltalk in its best time, probably Eiffel, etc.

What is wonderful is that there is unlimited space for mixing a few selected ingredients in new ways: say, an Arc-like Lisp which compiles directly into native code (an x86_64 or ARM CPU is a much better VM than the JVM), with networking and concurrency primitives from Erlang, some selected abstractions from Haskell, etc.