What Is Wrong With GPT-3 and Related Models

The linguistic researchers of the past were much more systematic than modern ML impostors.

Most notably, the fathers of NLP (Neuro-Linguistic Programming, a pseudo-science) realized that humans have, in principle, at least two representations. One, the so-called Deep Structure, is how our abstractions (maps of the world) are stored in the brain; the other, the Surface Structure, is used for verbal communication, after verbalization (literally, encoding) for transmission.
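A minimal sketch of this two-representation idea, in Python with made-up names (DeepStructure, verbalize, and the role fields are illustrative assumptions, not anything from the NLP literature): a structured inner map, and a flat surface string produced by encoding it for transmission.

    from dataclasses import dataclass

    @dataclass
    class DeepStructure:
        """The inner map: roles are explicit, not a flat token sequence."""
        agent: str    # who acts
        process: str  # what happens
        patient: str  # what is acted upon

    def verbalize(ds: DeepStructure) -> str:
        """Encode the structured inner map into a linear surface string."""
        return f"{ds.agent} {ds.process} {ds.patient}."

    inner_map = DeepStructure(agent="the cat", process="chases", patient="a mouse")
    print(verbalize(inner_map))  # -> "the cat chases a mouse."

The point of the sketch is only that the surface string is a lossy, linearized encoding: the roles live in the deep representation, not in the word order itself.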

What they did not realize is that this Deep Structure is by no means arbitrary; it reflects the constraints of the environment, of which everything, including the brain, is a product.

The genetically transmitted structure of the brain encodes environmental constraints.

This holds not only for the visual or motor cortices, but for the speech areas too. The Deep Structure reflects, for example, that there are things, processes, attributes, and events. It is not arbitrary, the way they try to make it with NNs; it is the opposite: the structure is highly optimized, and it mimics (maps) reality (the environment).
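To make that concrete, here is a hypothetical sketch (the type names are my own illustration, not the author's or anyone's formal ontology) of what a constrained, non-arbitrary representation looks like: things, processes, and events are distinct types, so arbitrary combinations simply cannot be constructed.

    from dataclasses import dataclass, field

    @dataclass
    class Thing:
        name: str
        attributes: list[str] = field(default_factory=list)  # e.g. "ripe", "heavy"

    @dataclass
    class Process:
        name: str  # e.g. "falls", "melts"

    @dataclass
    class Event:
        # An event is a process applied to a thing. An "event" made of two
        # attributes, say, is not constructible in this scheme at all: the
        # structure itself rules out combinations that do not map onto reality.
        process: Process
        participant: Thing

    apple = Thing("apple", attributes=["ripe"])
    event = Event(process=Process("falls"), participant=apple)  # well-formed by construction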

This is precisely why (and how) meaningful speech can be produced: it is just a verbalization of inner conceptual "maps" (represented as brain structures), which reflect what is real.

This is why children produce meaningful phrases instead of infinite patterns of arbitrary noise, for example.

So any model based merely on weights will never produce anything meaningful, only something almost indistinguishable from the meaningful, which is even more dangerous.
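A toy contrast, with an invented three-word vocabulary (nothing here models GPT-3 itself; it is only a caricature of the argument): generation that starts from role-filled structure is interpretable by construction, while sampling tokens by weight alone yields fluent-looking noise.

    import random

    AGENTS    = ["the child", "the dog"]
    PROCESSES = ["sees", "wants"]
    PATIENTS  = ["the ball", "the apple"]

    def from_structure() -> str:
        # Structure-first: each slot is filled by role, so the output is meaningful.
        return f"{random.choice(AGENTS)} {random.choice(PROCESSES)} {random.choice(PATIENTS)}."

    def from_weights() -> str:
        # Weights-only: tokens are drawn with no roles; the word order carries
        # no map of reality, only surface plausibility.
        tokens = AGENTS + PROCESSES + PATIENTS
        return " ".join(random.choices(tokens, k=3)) + "."

    print(from_structure())  # e.g. "the dog wants the apple."
    print(from_weights())    # e.g. "sees the ball the child."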
