Monday, October 30, 2017

Artificial Proscenium

Within our brain we [and presumably any creature with eyes] have a visual proscenium on which we view our surroundings and correlate what we see with all the inputs from our other senses. This visual confirmation is the basis of our reality and of our existence in a perceived environment.

Artificial Intelligence [AI] does not yet know how to implement this proscenium. Optical inputs from video cameras can be stored in a First In First Out [FIFO] digital memory, and perhaps even correlated with simultaneous inputs from other sensors [a sketch of such a scheme appears below]. But there is no proscenium in the machine that allows it to ‘know’ what it is seeing in these data.

The visual cortex of the brain appears to be the location of our proscenium. It is also known, from tests with blind people, that activity occurs in the visual cortex when they perform tasks involving hearing and touch. But what kind of memory area [or volume] is required to combine sensory inputs in such a way that the machine can realize, can know, that it can see?
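For concreteness, here is a minimal sketch in Python of the FIFO buffering and correlation just described. Every name in it [SensorFusionBuffer, add_frame, the nearest-timestamp matching rule] is an illustrative assumption, not a reference to any real system; the point it makes is the negative one above.

```python
from collections import deque
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Sample:
    timestamp: float  # seconds; when the frame or reading was captured
    payload: Any      # a video frame, an audio chunk, a touch value, ...

class SensorFusionBuffer:
    """Bounded FIFO of camera frames, loosely correlated with other sensors."""

    def __init__(self, max_frames: int = 256, max_readings: int = 1024):
        # deque(maxlen=...) gives FIFO behavior: when the buffer is full,
        # the oldest item is discarded automatically as a new one arrives.
        self.frames = deque(maxlen=max_frames)
        self.readings = deque(maxlen=max_readings)

    def add_frame(self, timestamp: float, frame: Any) -> None:
        self.frames.append(Sample(timestamp, frame))

    def add_reading(self, timestamp: float, reading: Any) -> None:
        self.readings.append(Sample(timestamp, reading))

    def correlated(self):
        """Pair each buffered frame with the sensor reading nearest in time."""
        for frame in self.frames:
            nearest: Optional[Sample] = None
            if self.readings:
                nearest = min(self.readings,
                              key=lambda s: abs(s.timestamp - frame.timestamp))
            yield frame, nearest

# Example: the buffer dutifully stores and pairs the data...
buf = SensorFusionBuffer()
buf.add_frame(0.00, "frame-0")
buf.add_reading(0.01, "touch: pressure=0.4")
for frame, reading in buf.correlated():
    print(frame.payload, "<->", None if reading is None else reading.payload)
# ...but at no point does it 'know' that it is seeing anything.
```

Matching by nearest timestamp is the crudest possible correlation rule, but any such rule, however refined, leaves the question above unanswered: the machine pairs the data without knowing that it sees.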