Project Description: Knowledge Representation. The representation of knowledge accounts for much of the symbolic processing done in AI and is one way of modeling memory organization in the human brain. A less familiar form of representation is the numerical "weights" used in neural networks; the foundations of the neural-network paradigm were laid in the 1950s. In symbolic machine learning, knowledge is represented as symbolic descriptions of the learned concepts, e.g., production rules or concept hierarchies. In connectionist learning, by contrast, knowledge is learned and stored in a network of interconnected neurons consisting of weighted synapses and threshold logic units. Learning algorithms adjust the connection weights so that the network correctly predicts or classifies unseen examples.
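The connectionist weight adjustment described above can be sketched with the classic perceptron learning rule applied to a single threshold logic unit. This is a minimal illustration, not a method specified in the text; the training data (logical AND) and learning rate are illustrative assumptions.

```python
def step(activation):
    """Threshold logic unit: fires 1 if the weighted input exceeds 0."""
    return 1 if activation > 0 else 0

def train_perceptron(examples, lr=0.1, epochs=20):
    """Adjust connection weights from labeled (inputs, target) pairs."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            # Prediction using the current weighted synapses
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            error = target - step(activation)
            # Weight update: nudge each synapse toward the target output
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn logical AND, a linearly separable concept (assumed example data)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x, _ in data]
```

After training, the network classifies all four examples correctly, with the learned concept held entirely in the numerical weights rather than in any symbolic description.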