> When I went to school, my math teacher said, “If there’s an X in several different parts in the same equation, then all the Xs mean the same thing.” That’s how we can solve equations: if we know that X+Y=10 and X-Y=2, then X will be 6 and Y will be 4 in both equations.
>
> ...
>
> In Erlang, variables are just like they are in math. When you associate a value with a variable, you’re making an assertion—a statement of fact. This variable has that value. And that’s that.
>
> -- Joe Armstrong

# Programming Languages

We have been moving further and further away from that long-lost ideal into the mess we are in today.

## Homoiconicity

In Lisps, the same common underlying list structure (made of cons cells) represents both Lisp forms (code) and data, so the code literally is the data (moreover, the code is already in the form of an AST). It can be generated and analyzed by the program itself and evaluated at compile time (macros) or even at runtime. This allows a programmer to structure a program as multiple layers of embedded DSLs, each with its own special forms, which means that each special form can have its own evaluation rule.
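Python is not homoiconic, but the idea can be sketched by representing forms as plain nested lists and writing a tiny evaluator over them. Everything here (`evaluate`, the `env` dictionary, the `"if"` form) is a hypothetical illustration, not any real Lisp:

```python
# A sketch of "code is data": Lisp-style forms as plain Python lists.

def evaluate(form, env):
    if isinstance(form, str):          # a symbol: look it up in the environment
        return env[form]
    if not isinstance(form, list):     # a number: self-evaluating
        return form
    head, *args = form
    if head == "if":                   # a special form: its own evaluation rule
        cond, then, alt = args
        return evaluate(then if evaluate(cond, env) else alt, env)
    fn = env[head]                     # ordinary application: evaluate all args
    return fn(*[evaluate(a, env) for a in args])

env = {"x": 5, "+": lambda a, b: a + b, "<": lambda a, b: a < b}

# The "program" below is just data: a list we could build, inspect, or rewrite.
program = ["if", ["<", "x", 10], ["+", "x", 1], 0]
print(evaluate(program, env))  # prints 6
```

Because `program` is an ordinary list, another function could walk it, transform it, and hand the result back to `evaluate` -- which is, in miniature, what a macro does.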

## First class procedures

In languages with first-class procedures, code can be stored together with the data as a member of a data structure. A block of code can be passed around and referenced like any other first-class value. This is the way to define primitive, function-based DSLs (which are bound by the general evaluation strategy of the language).
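A minimal sketch of this in Python, where functions sit in a dictionary next to ordinary data (the names `add`, `mul`, `ops`, and `calc` are invented for the example):

```python
# First-class procedures: functions stored in a data structure and
# passed around like any other value (a tiny function-based "DSL").

def add(a, b):
    return a + b

def mul(a, b):
    return a * b

ops = {"+": add, "*": mul}   # code stored together with data

def calc(op_name, a, b):
    return ops[op_name](a, b)

print(calc("+", 2, 3))  # 5
print(calc("*", 2, 3))  # 6
```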

## Classes as advanced data structures

Classes are blueprints for objects: a convenient way to define a common structure of data slots and procedures, a prototype for an object. Classes are merely traditional structures with added code slots (so the instances become callable) and a resolution protocol to support multiple methods.

An object is a set of named slots for data and code (methods), referenced by a pointer or a name (bound to a symbol).
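This view maps directly onto Python, where a class really is a structure of named slots and an instance can be made callable through a code slot. The `Counter` class below is a hypothetical illustration:

```python
# Sketch: a class as a structure with data slots plus code slots.
# The instance becomes callable via a code slot, and attribute lookup
# acts as a simple name-resolution protocol.

class Counter:
    def __init__(self, start):
        self.value = start          # a data slot

    def __call__(self, step=1):     # a code slot: the instance is callable
        self.value += step
        return self.value

c = Counter(10)
print(c())       # 11
print(c(5))      # 16
print(c.value)   # 16 -- a slot referenced by name
```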

This is about understanding the principles and concepts behind some programming languages. It is one-sided, biased, and unfair, but the observations here are mostly correct.

First of all: static typing without type inference, where a programmer must write an explicit type signature for each procedure and each variable, is just nonsense.

Programming should be done in the highest-level language available, with the exception of a few special cases where we have no choice (development of the Linux kernel is traditionally done in C).

A high-level language means that we shouldn't have to write any type information. The less we clutter the code, the better.

Java, with its pathological verbosity, can't seriously be considered a high-level language. Examples of high-level languages are the Lisp family, Erlang, the ML family, and the so-called scripting languages.

Now, about so-called "functional" languages (actually, the popularity of non-functional ones is an unfortunate inconvenience, a regrettable misunderstanding).

In order to be called functional, a language must support first-class, higher-order procedures in the first place, and must have no side effects.

To implement that correctly, a language should follow "everything is a pointer" semantics (a binding is a reference to a typed datum). Thus a procedure is nothing but a value of a certain type.

To avoid side effects, a language must obey the so-called "single assignment principle": bindings cannot be rebound, only shadowed (via a new frame of the environment).

Lexical scoping for bindings must be the default behavior, so procedures will be closures.
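Both points can be sketched in Python, which has lexical scope: the inner function closes over a free variable from the frame where it was defined, and each call to the outer function creates a new frame that shadows rather than mutates the old binding (`make_adder` is an invented example name):

```python
# Lexical scoping: `add` closes over `n` from its defining frame.
# Each call to make_adder creates a fresh frame, so `n` is shadowed
# per closure, never rebound in place.

def make_adder(n):
    def add(x):
        return x + n        # `n` is a free variable, captured lexically
    return add

add2 = make_adder(2)
add10 = make_adder(10)      # a new frame: its own `n`, the old one untouched
print(add2(5))    # 7
print(add10(5))   # 15
```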

We call languages founded on such principles mostly-functional languages.

Given that, we have a few mostly-functional languages worth considering. This is not an exhaustive comparison.

• static typing
• type inference

• type-tags (values have types, not "variables")
• bindings (of symbols to values, there is no "box-like" variables)
• "everything is a pointer" semantics (a reference to a typed value)
• "single assignment" (once bound, a binding cannot be rebound)
• lexical scope (shadowing, "free variables", lexical closures)
• everything is an expression (every expression must have a value)
• numbers, symbols or atoms are self-evaluating expressions
• general evaluation rule (order of evaluation doesn't matter)
• environment (a "model" of evaluation)
• higher-order functions (functions are first-class values)

Lisps, Erlang, Julia
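The type-tag point can be sketched in Python (one of the dynamically typed languages where it holds): the type travels with the value, and a name is just a binding that can point at values of different types over time.

```python
# "Values have types, not variables": the type tag belongs to the
# value; the name `x` is merely a binding, not a typed box.

x = 42
print(type(x).__name__)   # int -- the value 42 carries the tag
x = "hello"               # the name is now bound to a different value
print(type(x).__name__)   # str -- a new value, a new tag
```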

• heterogeneous lists (instead of tuples and homogeneous lists)

Lisps, Julia
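Heterogeneous, nested lists look like this in Python; note that even a function is just another element, which is exactly what makes them usable as a source-code representation (the `form` list is an invented example):

```python
# A heterogeneous list: elements of any type, including nested lists
# and functions -- the structure Lisps use to represent code itself.

form = ["label", 42, [1.5, "nested"], len]
print(form[3](form[2]))   # 2 -- applying the function element to the list element
```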

• "special forms" (a few exceptions from the general evaluation rule)

Lisps, Julia

• "code is data" (a way to generate source code)
• macros (a way to define a new "special form")

Very simple. With "a binding is a pointer" and "everything is an expression" we can have nested functions (actually, nested everything) and heterogeneous lists.

Heterogeneous lists (used as the source-code representation) give us "code is data" semantics. This gives us the ability to generate code in macros. The general evaluation rule makes it predictable and less messy.

A macro is the way to define a new special form (to be used in an expression). Along with higher-order procedures, it gives us advanced DSLs embedded in the core language. This is how we can "grow" or "adapt" it to the problem domain.

An important technical detail: one could have a DSL made out of higher-order procedures. The limitation of such a DSL is that all the sub-expressions have to be evaluated before the procedure is applied. With advanced macros (special forms) we can alter the evaluation order and specify our own evaluation rules for a new expression.
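The limitation shows up clearly in Python, which has no macros: a function-based `my_if` evaluates both branches eagerly, and the best we can do is fake a special form by wrapping branches in thunks (both function names are invented for the example):

```python
# A function-based DSL evaluates all arguments before application.
# A macro could change that rule; without macros we can only defer
# evaluation explicitly with thunks (zero-argument lambdas).

def my_if(cond, then, alt):
    return then if cond else alt      # both branches ALREADY evaluated

def lazy_if(cond, then, alt):
    return then() if cond else alt()  # thunks defer evaluation

# my_if(True, 1, 1 / 0) raises ZeroDivisionError: the `alt` argument
# is evaluated even though it is never chosen.
print(lazy_if(True, lambda: 1, lambda: 1 / 0))  # 1 -- `alt` never runs
```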

Not a bad trade-off for the "static type-checking" of clumsy tuples and homogeneous lists.

A "strongly typed", "dynamically typed" (these are mere words) high-level mostly-functional language with macros is good enough.

Algebraic types of the ML family are a clever feature indeed. But from the implementation perspective they are based on "constructors" and "selectors" hidden behind some syntactic sugar. In other words, pattern matching on algebraic types has been implemented (less elegantly) in Lisp as a DSL.
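The constructors-and-selectors view can be sketched in Python with tagged tuples: constructors build tagged values, and "pattern matching" desugars into dispatch on the tag plus positional selectors. The `leaf`/`node`/`total` names are invented for the example, not any real ML desugaring:

```python
# Sketch: an ML-style algebraic type desugared into constructors and
# selectors over tagged tuples, with pattern matching as tag dispatch.

def leaf(value):           # constructor: Leaf of int
    return ("leaf", value)

def node(left, right):     # constructor: Node of tree * tree
    return ("node", left, right)

def total(tree):           # "pattern matching" as dispatch on the tag
    tag = tree[0]
    if tag == "leaf":
        return tree[1]                         # selector
    if tag == "node":
        return total(tree[1]) + total(tree[2])  # selectors, recursively

t = node(leaf(1), node(leaf(2), leaf(3)))
print(total(t))   # 6
```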