Concurrency vs. Parallelism.
Concurrency is not the same as parallelism. Think of parallel lines or planes - there must not be a single overlap, not a single point that belongs to more than one line or plane. They must share nothing. This is the very definition of parallel - sharing nothing.
Concurrent, on the contrary, means at least one overlap - a single point at which two or more "lines" (flows of control) intersect, touch the same thing.
For truly parallel processes (which share nothing) we need literally nothing, because they have literally nothing in common - no shared state, no synchronization, no serialization, no locks, no semaphores - no concurrency primitives whatsoever. Hardware will do the job.
For concurrent processes we do need mutexes, locks, serialization, busy-waiting - all these complex things.
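The share-nothing case can be sketched in Go (the `squares` name is mine, for illustration): each goroutine writes only to its own slice element, so there is no shared mutable state and no locks; the WaitGroup merely joins the workers at the end.

```go
package main

import (
	"fmt"
	"sync"
)

// squares computes squares in truly parallel, share-nothing fashion:
// each worker owns a disjoint slice element, so writes never overlap
// and no lock guards any state.
func squares(n int) []int {
	out := make([]int, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			out[i] = i * i // index i belongs to exactly one goroutine
		}(i)
	}
	wg.Wait() // only a join point, not a guard on shared data
	return out
}

func main() {
	fmt.Println(squares(5)) // [0 1 4 9 16]
}
```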
So, share-nothing and immutability are the two biggest principles on which a distributed (truly parallel) system can be built. They fit together with non-stop-the-world garbage collection, fault-tolerance via automatic restarts and error recovery, and other important ideas.
Systems which have been built on the basis of so-called "shared mutable state" or "shared-state concurrency" are unsuitable for the development of (building blocks of) distributed systems.
Contrary to what idiots say in the context of yet another event-loop async framework, we know how to do concurrency in a reasonable way. Hardware people do it.
The right unit of abstraction is an interrupt handler routine - a CPU-level subroutine (procedure) with its own stack. Technically, it is hardware-assisted isolation of a block of code. Its execution could, of course, be interrupted (unless all other interrupts are explicitly disabled) but the state of execution (all the registers) is preserved and protected by hardware. This is the right level of abstraction - as lightweight as possible and as isolated from any other process as possible.
- Go uses channels (whose blocking plays the role of implicit synchronization) to pass references to data between goroutines.
- SML explicitly passes mutable references around as values.
- Haskell uses the State monad (an MVar - a mutable reference wrapped in a monad).
Serialization down to only one writable reference at a time is guaranteed by:
- explicit passing via a channel (Go)
- explicit passing as a value (SML)
- the type system and explicit passing as a value (Haskell)
- ownership rules enforced by the compiler (Rust)
- Futures (Scala)
The main principle is: only one mutable reference at a time, so no locks are needed.
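A minimal Go sketch of this principle (the `counter` and `incrementAll` names are hypothetical): ownership of the single mutable reference travels through a channel, so whoever currently holds it is its only writer and no lock is ever taken.

```go
package main

import "fmt"

// counter is the single piece of mutable state.
type counter struct{ n int }

// incrementAll passes ownership of one *counter through a channel.
// By convention, the goroutine that received the pointer is its only
// writer; sending it back transfers ownership to the next receiver.
func incrementAll(workers int) int {
	owner := make(chan *counter, 1)
	owner <- &counter{} // the first receiver becomes the first owner

	done := make(chan struct{})
	for i := 0; i < workers; i++ {
		go func() {
			c := <-owner // take exclusive ownership of the reference
			c.n++        // mutate without any lock
			owner <- c   // hand the reference back
			done <- struct{}{}
		}()
	}
	for i := 0; i < workers; i++ {
		<-done
	}
	c := <-owner
	return c.n
}

func main() {
	fmt.Println(incrementAll(4)) // 4
}
```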
This is a harder problem, but we certainly do not want to do busy-waiting on a handle (a resource). Busy-waiting (constantly polling) is like sitting all day long in front of a mobile phone, waiting for an SMS to arrive. You are busy waiting.
What we really want is to be interrupted when something we requested has arrived. Yes, we need to register an appropriate interrupt handler, which knows what to do, and then forget about it.
We should describe what to do instead of how to do it. We want to delegate.
We ought to be able to say literally "Go there AND THEN get that thing AND THEN give it to that person" - give an order and forget about it.
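A sketch of "interrupt me, don't make me poll" in Go (names are illustrative): a blocking channel receive parks the goroutine until the message arrives, so "registering the handler" is just spawning a goroutine that waits - no polling loop anywhere.

```go
package main

import (
	"fmt"
	"time"
)

// waitForSMS parks the calling goroutine until a message arrives.
// The runtime wakes it up on delivery - the software analogue of an
// interrupt, as opposed to a busy polling loop.
func waitForSMS(inbox <-chan string) string {
	return <-inbox // blocking receive, zero CPU burned while waiting
}

func main() {
	inbox := make(chan string)
	handled := make(chan string)

	// "register the handler and forget about it"
	go func() { handled <- waitForSMS(inbox) }()

	time.Sleep(10 * time.Millisecond) // meanwhile, free to do other work
	inbox <- "hello"
	fmt.Println(<-handled) // hello
}
```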
For that we need a Monad (oh, god, no!) combinator, which combines two expressions in a way that is not sequential. We want to express "and eventually, when that becomes available (ready), do this".
The only case where the prefix notation really sucks:
(and-then (lambda (x) (do-something-with x)) (lambda (y) (do-something-with y)))
Consider this instead:
(fn x -> blah-blah x) and_then (fn y -> blah-blah y)
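Go has no built-in and_then, but such a combinator can be sketched with generics; `AndThen` below is a hypothetical, synchronous stand-in for the "do this, and then do that with the result" composition, not a library API.

```go
package main

import "fmt"

// AndThen composes two steps: run f, then feed its result to g.
// It returns the combined "order" as a single function, which can be
// handed off ("delegated") and run later.
func AndThen[A, B, C any](f func(A) B, g func(B) C) func(A) C {
	return func(a A) C { return g(f(a)) }
}

func main() {
	// hypothetical steps: "go get that AND THEN measure it"
	fetch := func(url string) string { return "body of " + url }
	report := func(body string) int { return len(body) }

	pipeline := AndThen(fetch, report)
	fmt.Println(pipeline("example.com")) // 19
}
```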
datatype Promise = Pending | Failed | Memoized v
We do not want to block or busy-wait on a Promise. Or, rather, we want some blocking Promise, or Future, to be either timed out or memoized:
datatype Future = TimedOut | Memoized v
A Future is a Promise + a Value. A Channel (or a Pipe) is a first-class value; a Channel with a buffer of size 1 is, in effect, a Promise + an MVar.