# End of knowledge

It seems that there is a way out of the terrible mess in which programming as an engineering discipline has been plunged by crowds of idiots.

This is by no means a new or uncommon pattern. Crowds of idiots are ruining everything which is good and true on a regular basis, be it the naive but true nature worship of pre-Vedic seers, the teachings of the Buddha, the ancient philosophy of seeking after Truth, the naive but pure teaching of Christ, etc., etc.

Even mathematics and physics have regressed into abstract, dogmatic, scholastic theoretical bullshit, because it is easy to secure the high-status social position of a *sage* by specializing in abstract bullshit which cannot even be falsified.

No wonder programming as a discipline, like everything else, is full of bullshit too. To *cosplay* a sage is a much better strategy than to know or understand.

## Philosophy

The main method could be intuitively described as *recursive reduction to What Is*, with one of the first principles as a *base case* (after reaching the base case, if at all, the recursion unfolds). It is basically the same scientific method, which accepts as an *operational truth* whatever withstands all attempts to falsify it. Empirical, experimental verification must follow, but in programming type-checking and testing are part of the craft.

## First Principles

*from philosophy import principles*

- Precise use of the language

There is a maxim in philosophy which says that in order for a proposition to remain true, the meaning of its words must remain fixed. This is precisely the notion of Immutability: whatever has been bound cannot be rebound. Faithful following of this principle eliminates a third of the hard problems in programming. This is just a matter of *faith* and *discipline*.

- Avoidance of abstract concepts

The art of juggling with abstract concepts, which is called *dialectics*, is good for securing a position in some third-rate *liberal arts* school which teaches Hegel or Marx. In programming, however, words like *substance*, *purpose*, *idea* (meaning an eternal pure entity) must be clearly marked by a linter as a *code smell*: a smell of dangerous rigid stupidity (the so-called *packer's mindset*), which is incompatible with programming as a craft.

- A single unfolding process

The whole Universe is *one unfolding process* (while everything in it is a *sub-process*), which implies that *by the very nature of the Universe contradictions do not exist* (hello, Ayn Rand!). Everything is a continuum governed by the law of causality (the Law of Karma).

- The world is *mostly* (or *partially*) Deterministic

*Laws* and *processes* (the causes) are deterministic, while *effects* or *changes* are stochastic. Everything has its cause.

## Mathematics

*from mathematics import formalism*

- Mathematics is a set of formalisms of generalized patterns.

Numbers are *generalized patterns* of this particular Universe (of *What Is*), observed and formalized by the mind of an *intelligent observer* (a human mind). This implies that Numbers do not exist outside of a human mind, while the processes which produce the patterns obviously do exist.

- Geometry is a formalism of idealized shapes

- Fuck Plato and Aristotle (finally!)

The axioms and theorems of mathematics (including geometry) are immaterial (and do not exist outside a human mind) and have nothing to do with this Universe, which has no notion of a Number or a Circle. Processes could have a form or a shape (molecules, planets, stars and galaxies do indeed) but this has nothing to do with *ideal* circles or spheres, coordinate systems, or the number *Pi*.

## Physics

*from physics import experimental testing excluding theoretical dogmas*

- The Hamilton-Jacobi Theory

Particles do not minimize any goddamn integrals to find their way through the Universe (they slide). Mathematical artifacts, like any other *models* or *simulations*, imply nothing except their own correctness (soundness).

- Fuck Schrödinger's cat

There is no notion of *time* (or *space*) outside a human mind (these are notions of an intelligent observer, conditioned by the *serialized* sensory input of the brain's machinery, which evolved in this particular environment), so all models which involve time and space are mental bullshit. The correct formulations of the Universal Laws, therefore, must not include these *mental-only* notions (artifacts of serialized perception).

## Cut the crap

It seems to me that the whole approach of teaching programming from mathematics is wrong. Programming, like mathematics itself, should be the study of patterns and processes in *what is* (everything is a process, and from processes, sustained by the laws of the Universe, patterns emerge). Patterns are effects of the laws. From patterns the laws could be generalized by *inductive logic*.

The starting point should be not numbers or axioms and theorems, but *Logic* (in the sense that it strictly follows The Law of Causation, which governs everything that is) and the Interfaces upon which *Life itself* has been built by endless permutations of atomic structures.

In particular, Life has no concern about how exactly the atoms it uses are "implemented" in this Universe - whether they are "localized energy" (or fields) or perpetually rotating tiny spheres, like the Sun and planets (what bullshit!). This is what is meant when we talk about the notion of an Abstraction Barrier.

## Atoms

*Atoms are interfaces* upon which Life rests. Life is possible due to the notion of *Immutability* of atoms (which are ultimately a mere temporal localization of "energy"). Life is there because atoms are stable enough in this particular corner of the Universe with its laws (a sub-process in a simulator, so to speak).

Atoms form *Transformable* Structures which could be *produced* (`cons`ed), *torn apart*, *traversed*, `fold`ed and *transformed* (`map`ped).

John McCarthy discovered what we could call *protein programming*, where the same linear (`list`) structures (*proteins*) act as both *code* and *data* (actually, *code is data*). He managed to define *seven* functions (4 of which form the `List` ADT) and *two* special forms which are enough for anything.

Linear structures have their own properties, including linearity itself and what are called morphisms (or isomers); they are Traversable in linear time (proportional to the number of elements, or length), and could be thought of as having some *abstract properties*, such as forming a Monoid, or, which is completely orthogonal to their structure and even more *abstract*, a Monad.
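These operations on linear structures can be sketched in a few lines of standard Haskell (Prelude only; the concrete list is of course mine):

```haskell
-- A list produced ("cons"-ed), then transformed ("map"-ped),
-- "fold"-ed into a single value, and combined via its Monoid
-- structure (concatenation, the (<>) operation).
main :: IO ()
main = do
  let xs = 1 : 2 : 3 : []   -- produced ("cons"-ed)
  print (map (* 2) xs)      -- transformed: [2,4,6]
  print (foldr (+) 0 xs)    -- folded: 6
  print (xs <> [4, 5])      -- Monoid append: [1,2,3,4,5]
```

Traversal here is linear in the length of the list, as noted above.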

## Life

Life uses *Linear Structures* made out of a finite set of possible parts - the 4 bases of RNA and the 20 amino acids of proteins, for example - which, due to the various physical properties of their parts and their relative positions against each other, form what we call "3D" structures (there is no "D" outside the mind of an observer), which could be considered *higher-level* (non-linear) structures, due to their resulting properties. Such structures are called *proteins*.

Some of the proteins are *enzymes*, which is the name for a protein which is a *machine* (yes, a mere folded linear structure could form a *machine* - "a mechanical procedure which performs a function", if you wish).

These machines act as Pure functions on Immutable data - molecular structures.

RNA and related *enzymes* constitute what we call an Abstract Data Type, while being separated from proteins by a very real Abstraction Barrier.

## Patterns

Patterns are generalized as `Notions`.

- Abstract notions, such as Equality
- More concrete (physical) notions of Equivalence or Ordering

The language defines precisely *what it takes to be a...*

## Types (names for common patterns)

Biology labels some important sequences with what we would call *prefixes* or *start bits* and pattern-matches on them in order to distinguish one kind of similar molecular structure from another. So-called *messenger RNAs* are such examples. All cellular biology could do is pattern-match an exact molecular structure. There is no intelligent observer, no notion of numbers, no counting and no counters (and no overflows!).

In early `Lisps`, *type-tags* were just `cons`ed onto a datum, the way cellular molecular biology would do it. A value could belong to more than one type, so there would be multiple *type-tags* attached.
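A minimal sketch of this idea, not early Lisp itself (the names `Tagged`, `attachTag` and `hasTag` are mine): a datum carries a list of tags consed onto it, and code dispatches by checking the tags at runtime.

```haskell
-- A datum paired with its type-tags; the payload is kept as a
-- String for simplicity. Tags are simply consed onto the list.
type Tagged = ([String], String)

attachTag :: String -> Tagged -> Tagged
attachTag t (tags, datum) = (t : tags, datum)  -- tags may pile up

hasTag :: String -> Tagged -> Bool
hasTag t (tags, _) = t `elem` tags             -- runtime check on the tag

main :: IO ()
main = do
  let d  = attachTag "number" ([], "42")
      d' = attachTag "answer" d                -- more than one type
  print (hasTag "number" d')                   -- True
  print (hasTag "string" d')                   -- False
```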

On a vastly higher level there are the notions of *Applicability* and *Relativity*, which mean that certain notions are **NOT applicable** to certain concepts, and that certain notions depend on a point of view, or on where *the origin of a coordinate system (an observer) has been arbitrarily placed* (`6` is `9` for those who are looking at it from above, a pair `(2 . 3)` could be read as `(3 . 2)` from right to left, etc.).

*Non-applicability* is the common flaw of all theology and post-Socratic philosophy - most of the propositions of theology should be rejected with a *type-error: this notion is NOT applicable to this abstract concept*.

Nowadays in applied CS we have evolved generalized notions such as *Traversable*, *Foldable* or *Equable*, *Orderable*, and formalized the way of defining such generalized notions - precisely *what it takes to be a _____* - with what is called a *type-class*.
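A minimal Haskell sketch of such a definition (the class `Equable` and the type `Color` are mine; Haskell's standard library spells this particular class `Eq`). The class declaration states precisely *what it takes to be* Equable, and each instance proves that a type qualifies:

```haskell
-- "What it takes to be" Equable: provide one operation.
class Equable a where
  equal :: a -> a -> Bool

data Color = Red | Green | Blue

-- Color qualifies: here is the evidence.
instance Equable Color where
  equal Red   Red   = True
  equal Green Green = True
  equal Blue  Blue  = True
  equal _     _     = False

main :: IO ()
main = do
  print (equal Red Red)   -- True
  print (equal Red Blue)  -- False
```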

From there comes the formalized use of *equivalence* - whatever is *Foldable* could be substituted for another *Foldable* - *equivalent for equivalent* instead of the stricter (and non-existent in the real world) *equal for equal*.

This, BTW, is what is called *Duck-Typing*, and no wonder this is exactly what molecular biology uses - as long as a molecular structure pattern-matches (fits), it will be used (as a *trigger* or *energy source* or whatever), no matter what it really is. Biology has tried all the tricks and picked up what works reliably, just by merely long-enough *trial-and-error* (actually everything is much more complicated and involves things like "intermediate forms" which, once being used, are almost impossible to change - Hemoglobin and Insulin are obvious examples).

So, a type-system is a *Domain Specific Language*, or *a meta-language* (with its own *static environment* and even its own computations), to describe categories (or classes) of values. In Haskell this approaches perfection by looking like a *DSL* for annotating functions (and for defining the types, of course). In Scheme types are implicit, hidden in the implementation, and checked explicitly in the code at runtime - the way biology does it.

To have a *static type checker* is to prevent nonsense and bullshit (theology, abstract philosophy) from being published (hahaha), while *dynamic typing* is about *trial-and-error* (test runs). Since the language is strongly typed (no implicit coercions), most of the procedures are *pure functions* and the data is *immutable*, a few tests are enough. Clojure pushes this paradigm as far as humanly possible.

Strong typing, purity and immutability are *the absolute minimum required*. *Static typing* is good to have, especially in Haskell and Scala, but it has its costs (such as homogeneous aggregates and conditions).

## Logic

A Natural Implication is possible *because and only because* this Universe has its laws, and the law of causality is among them.
An Implication requires an actual (demonstrable and falsifiable) *causal dependence* between the *antecedent* and the *consequent* of an implication.
Truth must be *traced back* to a *universal law* or to one of *the first principles* (direct *consequences* (or implications!) of the laws).

A *partial function* which defines a Natural Implication:

```haskell
(==>) :: Bool -> Bool -> Bool
True ==> x = x
```

And the second part is utter nonsense:

```haskell
False ==> x = True
```

Logical OR must be exclusive, which does not introduce abstract bullshit.

```haskell
(||) :: Bool -> Bool -> Bool
(||) True  _ = True
(||) False x = x
```

## Language

Formalized, precise use of a language and notation, like it is in maths, is the main principle.

- Immutability. Once a meaning is bound to a symbol it cannot be rebound; however, additional new bindings and synonyms could be made at any time.
- Expressions. Everything is an expression, which evaluates to a value. Each value has a type.
- Substitution. Evaluation of expressions is done by re-writing. Equal (in most cases, *equivalent*) could only be substituted for equal (equivalent).
- Reduction. There are *reduction rules* which, when applied to an expression, yield a transformed (simplified) expression, equal and logically consistent.
- Recursion. Recursive application of a finite set of rules to an expression eventually evaluates to a value, once no more reductions could be applied.
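This substitution model can be traced by hand for any expression; a small sketch (the function `square` is mine):

```haskell
-- Evaluation by re-writing: each step substitutes equals for
-- equals until no reduction rule applies and a value remains.
square :: Int -> Int
square x = x * x

--   square (1 + 2)
--     =>  square 3    -- reduce the argument
--     =>  3 * 3       -- substitute into the body
--     =>  9           -- no more reductions: a value
main :: IO ()
main = print (square (1 + 2))  -- 9
```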

Haskell is famous for faithfully following this set of principles without an exception. This makes Haskell a "pure" *domain-specific* logic.

A Haskell program is a single, *enzyme-like* expression to be *eventually* evaluated by the runtime (a cell).

While Scheme (a refinement of the *Original Lisp* and a reconciliation with the *Lambda Calculus*) is a sort of universal assembly language, Haskell (a refinement of *Logic* and a reconciliation with *Functional Programming*) is a way of thinking.

Scala is a heroic attempt to make a pragmatic, *mostly functional* implementation language on top of these principles. Think of it as a pure-functional `CLOS`.
