Re: MDF
I would add the possibility that each differentiated input has its own encrypted algorithm, and that looking at it from too high an altitude provides little information about each one; i.e., the optic nerve's encryption differs from that of the nasal receptors. Maybe there is even a one-time code that allows only the individual to access certain stored information.
Indeed! Each individual will form its own code, for each modality. On the other hand, these codes do not simply diverge: they are the result of the individual's adaptation to its own (changing, developing, deteriorating) physiology. The nervous system is designed to extract structure from the statistical properties of its input, and to compensate for defects. For instance, replacing the fine-grained input provided by the many receptors of the cochlea with a crude implant (today's models sample only a handful of frequencies) will usually still result in a subjective experience of continuous auditory perception; splicing the data of a few pixels into the optic nerve of a blind person may end up allocating those pixels to their correct positions within the visual field. An interesting question: what are the limits of the plasticity of the sensory modalities? For instance, could we switch modalities to some extent?
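To make the "handful of frequencies" concrete, here is a minimal sketch (my illustration, not part of the original message) of the kind of reduction a crude cochlear implant performs: the full spectrum is collapsed into a few log-spaced frequency channels, and only the per-frame energy of each channel survives. The channel count and band edges are assumptions for illustration.

```python
import numpy as np

def channel_envelopes(signal, sample_rate, n_channels=8, frame_len=512):
    """Split the spectrum into n_channels log-spaced bands and return,
    per frame, the energy in each band -- all the 'implant' would carry."""
    edges = np.geomspace(100.0, sample_rate / 2, n_channels + 1)  # band edges in Hz
    n_frames = len(signal) // frame_len
    out = np.zeros((n_frames, n_channels))
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame_len)))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
        for c in range(n_channels):
            band = (freqs >= edges[c]) & (freqs < edges[c + 1])
            out[i, c] = spectrum[band].sum()
    return out

# Demo: a pure 440 Hz tone collapses to energy in a single channel.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
env = channel_envelopes(tone, sr)
print(env.shape)        # one row of 8 numbers per frame
print(env[0].argmax())  # the one channel containing the tone
```

Everything the listener's brain receives is this coarse envelope matrix, yet perception usually fills in a continuous auditory scene.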
More than a hundred years ago, George Stratton performed a famous experiment in which he wore glasses that turned the world upside down (using prisms). After a few days, his brain adapted and he perceived everything as upright again. An experiment I would like to see one day (I am not aware of anyone having tried it): equip a subject with an augmented-reality display, for instance Google Glass, and continuously feed a visual depiction of auditory input into a corner of the display. The input should transform the result of a filtered Fourier analysis of the sounds around the subject into regular colors and patterns that can easily be discerned visually. At the same time, plug the subject's ears (for instance, with noise-canceling earplugs and white noise). With a little training, subjects should be able to consciously read typical patterns (for instance, many phonemes) from their sound overlay. But after a few weeks: could a portion of the visual cortex adapt to the statistical properties of the sound overlay so completely that the subject could literally perceive sounds via their eyes? Could we see music? Could we use induced synesthesia to partially replace a lost modality?
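The proposed overlay could be prototyped roughly as follows (a sketch of my own; the display hardware and the exact spectrum-to-color mapping are assumptions, not anything specified above): each audio frame is Fourier-analyzed, the spectrum is filtered into low/mid/high bands, and the band energies drive the red, green, and blue of a patch in the corner of the display.

```python
import numpy as np

def frame_to_rgb(frame, sample_rate):
    """Map one audio frame to a single RGB color summarizing its spectrum:
    low frequencies drive red, mid frequencies green, high frequencies blue."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    bands = [(60, 500), (500, 2000), (2000, 8000)]  # low / mid / high, in Hz
    energy = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in bands])
    if energy.max() > 0:
        energy = energy / energy.max()     # normalize so the loudest band saturates
    return (energy * 255).astype(np.uint8)  # [R, G, B] for the overlay patch

sr = 16000
t = np.arange(1024) / sr
low_tone = np.sin(2 * np.pi * 200 * t)    # should light up the red channel
high_tone = np.sin(2 * np.pi * 4000 * t)  # should light up the blue channel
print(frame_to_rgb(low_tone, sr), frame_to_rgb(high_tone, sr))
```

A real implementation would render these colors as a spatial pattern (e.g., a small spectrogram strip) rather than a single patch, so that phoneme-sized structures become visually distinguishable; the point here is only that the sound-to-vision mapping itself is a few lines of signal processing.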
Cheers,
Joscha
