
First principles for Machine Learning

Learning, in principle, is the ability to develop, through repeated exposure to various aspects of reality (sensory input generalized as experience), an inner representation that reflects various properties of what is, and to use pattern matching against this inner representation to achieve better performance in the same (or a similar) shared environment.

As Tom Mitchell's classic definition puts it: a computer program is said to learn from experience E with respect to some task T and performance measure P, if its performance at task T, as measured by P, improves with experience E.
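The E/T/P definition can be made concrete with a minimal sketch (my illustration, with hypothetical names): the task T is predicting y from x, the performance measure P is mean squared error, and the experience E is the number of gradient steps of exposure to examples of the true rule y = 2x.

```python
def performance_after(experience, lr=0.1):
    """Return MSE on the task (P) after `experience` gradient steps (E)."""
    xs = [0.1 * i for i in range(1, 11)]
    ys = [2.0 * x for x in xs]           # the true rule: y = 2x
    w = 0.0                              # untrained model y = w * x
    for _ in range(experience):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

The defining property holds: performance_after(100) is better (lower error) than performance_after(10), which is better than performance_after(0) — performance at T, as measured by P, improves with E.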

The inner representation must necessarily have a structure which reflects or represents various constraints or properties of the environment. The better the reflection (the less "distance" from what is), the better the representation. The structure could be physical (made out of neurons, axons and dendrites) or virtual, made out of ones and zeroes in a computer memory. The principle is the same: an inner structure that matches the outer structure as well as it can.

This does not mean, of course, that the inner structure must match the outer structure of the universe down to the last atom. On the contrary, it should, at least in theory, use as economical a representation as possible (smallest in terms of matter and energy consumption, but good enough). The most important criterion is that the representation must be free from capturing what is not there (it should ignore things like, say, the shadows of a tree, which are very complex but useless phenomena). The inner representation must match reality only. Every higher animal has a surprisingly adequate, good-enough representation of its environment (habitat).

In other words, it is building and using an adequate map of what is, given that a map is NOT the territory. It must be accurate in the first place, and detailed enough. No bullshit (or bullshit explicitly marked as such). This, by the way, is what a good education is supposed to accomplish.

Backpropagation

Backpropagation is an implementation of a feedback loop. TODO.
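The feedback-loop view can be shown with a minimal sketch (my illustration, not a full implementation): a tiny one-hidden-unit network runs a forward pass, measures its error against a target, and feeds the error signal backwards through the chain rule to adjust each weight.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w1, w2, x, target, lr=0.5):
    """One pass of the feedback loop: forward, measure error, propagate back."""
    # forward pass: input -> hidden -> output
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    # feedback: error at the output...
    err = y - target
    # ...propagated backwards via the chain rule
    d_y = err * y * (1 - y)          # error signal at the output neuron
    d_h = d_y * w2 * h * (1 - h)     # error signal pushed back to the hidden neuron
    w2 -= lr * d_y * h
    w1 -= lr * d_h * x
    return w1, w2, err ** 2
```

Repeating the loop (e.g. `w1, w2, loss = train_step(w1, w2, 1.0, 0.8)` many times) drives the squared error down: the output is fed back as a correction, which is exactly what makes it a feedback loop.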

Feature extraction

This is the most important and the most difficult part of Machine Learning. The fundamental question which must be answered is: how does an animal know that creatures in this world have eyes, or how does a new-born baby know that humans "leak" their internal state of mind through different facial expressions? The simple answer is that it is somehow implicitly encoded in the genome, out of which a certain wiring of the brain develops in the uterus, allowing these tasks to be performed instinctively. OK, but how exactly did this "knowledge" emerge (how was the pattern captured) in the first place?

My favourite story is about the "eyes" on the wings of some butterflies, which serve as a protective mechanism against predators. Butterflies, of course, have no idea that some creatures have big eyes, or even that they themselves have eyes. The miracle happened because evolution selects what works in reality. One aspect of the shared environment is that it contains some other big creatures with big eyes, and almost all big creatures have exactly two eyes. So developing (by random chance) these pairs of bright spots on the wings (and then gradually refining their shape and colours to mimic the eyes, presumably of a big predator) improved the survival rate of some butterfly species, and the spots ended up in their genome and were gradually improved over successive generations!

Notice that this would never have happened if creatures had various numbers of eyes, let's say from one to nine! This is what Feature Extraction is as a principle. The environment has some distinct features (such as day and night, seasonal changes, the presence of other creatures or physical phenomena, etc.), and organisms which evolve in this environment adapt (without even being aware of it) to the various real (and only real!) aspects of it.
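The principle can be sketched in a minimal, entirely illustrative example (the names and data are mine, not from the text): the environment contains one real regularity ("eyes co-occur with predators") and one complex-but-useless phenomenon (the "shadow"). A learner that scores candidate features by how consistently they co-occur with the outcome extracts only the real one.

```python
import random

random.seed(0)

# Simulated "exposures" to the environment.
observations = []
for _ in range(500):
    predator = random.random() < 0.5       # is a predator present?
    eyes = 1.0 if predator else 0.0        # real feature: tracks predators
    shadow = random.random()               # "shadow": complex but meaningless
    observations.append((eyes, shadow, predator))

def score(feature_index):
    """Average agreement between a candidate feature and the outcome."""
    return abs(sum((obs[feature_index] - 0.5) * (1.0 if obs[2] else -1.0)
                   for obs in observations)) / len(observations)
```

Here `score(0)` (the real feature) comes out high while `score(1)` (the shadow) averages out to nearly zero: only regularities that are actually there survive repeated exposure, which is the point of the butterfly story.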

Neural network architecture

Presumably, our brain has a vast number of redundant, unused connections among neurons, out of which some could be "strengthened" and used to propagate certain signals which form a higher-level "structure", similar to the pathways humans or animals make in fresh snow. Due to repeated exposure and feedback loops, patterns emerge.
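The snow-pathway analogy can be made concrete with a minimal Hebbian-style sketch (my illustration, under the assumption of a simple "fire together, wire together" rule): all connections start weak and redundant, and repeatedly co-activating one pair of neurons packs that particular path down.

```python
n = 5
# Many redundant, equally weak links.
weights = [[0.1 for _ in range(n)] for _ in range(n)]

def co_activate(i, j, rate=0.05):
    """Hebbian-style update: a connection used together gets strengthened,
    bounded so it saturates at 1.0."""
    weights[i][j] += rate * (1.0 - weights[i][j])

# Repeated exposure to the same signal pathway.
for _ in range(100):
    co_activate(1, 3)
```

After repeated exposure, the 1→3 connection approaches full strength while every untouched connection stays at its weak initial value — a single well-trodden path in otherwise fresh snow.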

Last modified on Feb 5, 2018, 12:21:23 PM