When you see something, what happens in your brain? Scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) asked this question. The answer contributes to the growing body of research on how the brain works. In the first non-invasive procedure of its kind, the research team engineered a new technique for measuring the location and timing of how visual input is received by the human brain. The results may lead to better understanding of memory disorders or dyslexia.
The new brain-scanning technique was devised using two other methods—functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG)—as its foundation. fMRI scans measure changes in the brain’s blood flow, which alert researchers to which parts of the brain are active. MEG uses an array of sensors, positioned around the head, to measure the magnetic fields produced by neuronal activity. Individually, neither technique paints a complete picture of brain activity. fMRI scanning helps scientists pinpoint the location of brain activity, but it is slow, capturing changes only on a timescale of seconds. MEG can measure brain activity to the millisecond, but it is imprecise about where that activity occurs.
For the study, the researchers scanned 92 human volunteers. Each participant underwent two fMRI and two MEG sessions. During the scans, they viewed images of faces, animals, and objects flashed before them at half-second intervals. Using a technique called representational similarity analysis, the scientists then correlated the data from the fMRI and MEG scans, as sketched below.
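In representational similarity analysis, each modality’s data is summarized as a matrix of pairwise dissimilarities between the responses evoked by different images. Because those matrices share the same structure whether they come from fMRI voxels or MEG sensors, they can be compared directly. The sketch below illustrates the general idea in Python with synthetic data; the array sizes, region-of-interest choice, and distance metrics are assumptions for demonstration, not the authors’ exact pipeline.

```python
# Minimal, illustrative sketch of representational similarity analysis (RSA)
# relating fMRI and MEG data. Synthetic data and shapes are assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_images = 92        # stimulus conditions (images shown to participants)
n_voxels = 500       # fMRI voxels in a region of interest (assumed)
n_sensors = 306      # MEG sensors (assumed)
n_timepoints = 200   # MEG samples after image onset, e.g. 1 ms steps

# Synthetic stand-ins for real recordings.
fmri_patterns = rng.standard_normal((n_images, n_voxels))
meg_patterns = rng.standard_normal((n_timepoints, n_images, n_sensors))

# Representational dissimilarity matrix (RDM): pairwise dissimilarity between
# the response patterns evoked by each pair of images. pdist returns the
# condensed upper triangle, which is all RSA needs.
fmri_rdm = pdist(fmri_patterns, metric="correlation")

# Correlate the fMRI RDM with the MEG RDM at every time point, yielding a
# millisecond-resolution time course of when the two representations match.
similarity_timecourse = np.empty(n_timepoints)
for t in range(n_timepoints):
    meg_rdm_t = pdist(meg_patterns[t], metric="correlation")
    similarity_timecourse[t], _ = spearmanr(fmri_rdm, meg_rdm_t)

# Peaks in similarity_timecourse indicate when MEG activity carries the same
# stimulus distinctions as the fMRI region of interest.
print(similarity_timecourse[:5])
```

Run over many brain regions and time points, a comparison of this kind is what lets the combined method place neural activity in both space and time.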
With the new technique, the scientists were able to produce a timeline, accurate to the millisecond, of how the brain recognizes objects. When someone sees an image, the visual information enters the primary visual cortex, where the object’s basic shape is identified in the first 50 milliseconds. Next, the input flows to the inferotemporal cortex, which it reaches in as little as 120 milliseconds. Within 160 milliseconds, the brain is able to classify an object into categories like plant or animal.
The researchers are already using the results from the study to measure the accuracy of their computer models of vision. The approach may also aid in understanding how the brain processes other types of information, such as motor, verbal, or sensory signals. These findings could eventually support work on memory, dyslexia, and neurodegenerative diseases.
“This is the first time that MEG and fMRI have been connected in this way, giving us a unique perspective. We now have the tools to precisely map brain function both in space and time, opening up tremendous possibilities to study the human brain,” explained Dimitrios Pantazis, a research scientist at MIT’s McGovern Institute for Brain Research and an author of the paper.
This research is published in the journal Nature Neuroscience.