Japanese scientists were able to show that the brain processes dream images similarly to images it actually sees
There has already been some progress in mind-reading by means of brain scans, even though it is still far from possible to really tell from neural activity what a person is currently thinking or perceiving. A scan's activity pattern does not reveal individual words or images, but only the occurrence of categories or features that the brain is thought to use to identify objects. Japanese scientists have now attempted for the first time to decode dream images from neural activity patterns through machine learning (mind reading in the age of brain scanners).
Tartini's Dream by Louis-Léopold Boilly. Is even the privacy of dreams no longer safe? Image: public domain
We know that we often dream in visual scenes, but these are subjective experiences that we can only communicate after the fact. Whether someone is dreaming, and especially what they are dreaming, cannot be directly observed by others. It is assumed that physiological measures can indicate whether a sleeping person is dreaming, but this is controversial. Dreams are commonly associated with REM sleep, which is characterized by, among other things, rapid eye movements and, in the EEG, activity patterns dominated by theta waves; however, dreams also occur in other stages of sleep.
The scientists, whose study was published in the journal Science, focused on visual dreams in sleep stages 1 (light sleep shortly after falling asleep) and 2, because in these stages the test subjects can be awakened and questioned frequently, which would be far more disruptive in deep sleep and REM sleep. Words characterizing scenes or objects were extracted from the verbal reports of the dream images and analyzed using the WordNet lexical-semantic database to determine characteristics of the images. Semantically similar words were grouped into hierarchical "synsets". Finally, using the base synsets that occurred in at least 10 reports from each subject, the fMRI activity patterns were labeled with a visual content vector indicating whether or not each synset was present.
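The labeling step can be illustrated with a small sketch. The synset groupings and report words below are invented stand-ins; the study derived its synsets from WordNet and its words from the subjects' actual reports:

```python
# Toy illustration: turn the words of a verbal dream report into a
# binary vector marking which synsets (word groups) were present.
# These groupings are invented examples, not the study's actual synsets.
SYNSETS = {
    "person": {"man", "woman", "girl", "boy"},
    "building": {"house", "hotel", "street"},
    "food": {"bread", "cake", "fruit"},
}

def synset_vector(report_words, synsets=SYNSETS):
    """For each synset, record 1 if any of its member words occurred
    in the dream report, else 0."""
    words = set(report_words)
    return {name: int(bool(words & members)) for name, members in synsets.items()}

# Words extracted from one hypothetical report:
report = ["man", "street", "car"]
print(synset_vector(report))  # person and building present, food absent
```

In the study, one such vector was attached to each fMRI activity pattern recorded just before an awakening, turning decoding into a set of binary classification problems.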
The researchers hypothesized that activity patterns measured with functional magnetic resonance imaging (fMRI) during the dream phase just before awakening would at least partially represent the images reported by the subjects. Images from ImageNet, an image database organized according to WordNet, as well as from Google Images, were used to train the program. Using machine learning, the program was trained to recognize the neural activity patterns that arose in the brains of the awake test subjects as they viewed these images from the web.
However, the results were obtained from only three test subjects, each of whom was awakened from sleep about 200 times, on average every 342 seconds, and then questioned about their dream images. Dreams were reported in 75 percent of the cases. Synsets of dream images and of perceived images were compared with the brain's neural activity patterns during sleep and wakefulness, and only those synsets were selected whose visual content could be associated with a pattern with high accuracy. This showed that the activity patterns in the dreaming and waking states that could be associated with visual content were very similar.
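The core decoding logic, train on brain activity recorded during waking perception and then test on activity recorded during sleep, can be sketched with synthetic data. A nearest-centroid classifier over random "voxel" vectors stands in here for the study's actual decoders and fMRI recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50  # invented dimensionality, standing in for fMRI voxels

# Synthetic "activity patterns": two classes (synset absent / present),
# each a noisy cloud around its own prototype pattern.
proto = {0: rng.normal(0, 1, n_voxels), 1: rng.normal(0, 1, n_voxels)}

def simulate(label, n, noise=0.5):
    return proto[label] + rng.normal(0, noise, (n, n_voxels))

# "Awake" training data: patterns evoked while viewing images.
X_train = np.vstack([simulate(0, 40), simulate(1, 40)])
y_train = np.array([0] * 40 + [1] * 40)

# Decoder trained only on the waking data.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in (0, 1)}

def decode(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# "Sleep" test data: if dreaming reuses the same representations,
# the waking decoder should transfer to these patterns.
X_sleep = np.vstack([simulate(0, 20), simulate(1, 20)])
y_sleep = np.array([0] * 20 + [1] * 20)
acc = float(np.mean([decode(x) == y for x, y in zip(X_sleep, y_sleep)]))
print(f"cross-state decoding accuracy: {acc:.2f}")
```

In this toy setup the sleep patterns are generated from the same prototypes as the waking ones, so the decoder transfers well; the study's finding was that real dream-phase fMRI patterns behaved analogously for many synsets.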
Some content of dream images, the scientists say, can be discerned from neural activity patterns that also represent images seen while awake. This supports the principle of perceptual equivalence, i.e. that there is a common neural basis for perceived and dreamed, and probably also hallucinated, images. The scientists assume that their decoding program, developed on reports of spontaneously formed dream images during sleep onset, is also suitable for recognizing images in REM sleep.