Introspection is the process of examining the features of one's inner Map (knowledge representation) instead of merely using it.
One not-so-obvious example is reflection on the structure (the "hidden" meaning) of the words of the human language being used. Some symbols (words) carry meaning in themselves, while other words are arbitrary sequences of sounds. When we see "resistor" in the context of electronics, we can "pick up" the meaning that it "resists or blocks the current", and so on. "Current" itself is another hint. Of course, when the usage of words has no correlation with underlying reality (as it is in highly abstract subjects), it is a total disaster and a waste of mind.
This notion is related to the discipline of precise use of a language, which is a hallmark of intelligence, and to the famous maxims "to see things as they are" and "to call things by their proper names".
In terms of AI, it is about examining the actual structure of the evolved pattern-matchers in an actual neural network to gain some additional insights. It should be possible to infer from a cat-recognizing neural net that cats have relatively big eyes and small noses, like babies (that's why we like them - notice that "like").
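The idea of reading cat-like features back out of a detector can be sketched in miniature. Everything below is hypothetical: the feature names and weight values are made up for illustration, and the "detector" is a single linear unit rather than a real network, where such human-interpretable features would first have to be recovered by interpretability techniques.

```python
import numpy as np

# Hypothetical "cat detector": one linear unit over named, human-readable
# features. In a real net these features and weights would be learned and
# entangled; here they are assumed values chosen for the sketch.
FEATURES = ["eye_size", "nose_size", "ear_pointiness", "fur_texture"]

# Assumed learned weights: big eyes and pointy ears push toward "cat",
# a big nose pushes away from it.
weights = np.array([+2.1, -1.7, +1.4, +0.6])

def introspect(weights, features):
    """Rank features by how strongly they drive the detector's output."""
    order = np.argsort(-np.abs(weights))
    return [(features[i], float(weights[i])) for i in order]

for name, w in introspect(weights, FEATURES):
    direction = "more cat-like" if w > 0 else "less cat-like"
    print(f"{name:15s} {w:+.1f}  ({direction})")
```

Running this ranks `eye_size` (positive) and `nose_size` (negative) as the dominant features - the toy analogue of inferring "big eyes, small nose" from the weights themselves rather than from the net's outputs.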