Brain Computer Interface
– Brain-Computer Interface (BCI)
– Brain-Machine Interface (BMI)
– Mind-Machine Interface (MMI)
– Direct Neural Interface (DNI)

1. Brain-Computer Interface (BCI) is an interface system that directly connects a brain and a machine so that a computer or machine can be operated directly. Several names are in use, but BCI is the most common. It can be broadly divided into invasive BCI and non-invasive BCI. The field is broadly part of Human-Computer Interaction technology. Because it connects the human brain and machines, it is deeply related to neuroscience and medicine as well as to computer science and robotics.
BCI technology is also called Brain-Machine Interface (BMI) because electrical signals from brain waves or brain cells can also operate machines such as wheelchairs and prosthetic hands or legs. It is attracting attention as the core of cybernetics, which is the basis of cyborg technology, and in the long term its combination with virtual reality is also being considered.
BCI technology is implemented by sensing and recognizing brain activity through a device, analyzing it through signal processing, and issuing instructions to input/output devices. For example, an EEG-sensing BCI detects and reads brain waves, which are the ‘results’ of the brain’s electrical activity: 1. the EEG is received through a device that recognizes EEG signals, and 2. the brain waves are then analyzed through signal processing to issue commands to input/output devices. A BCI using the neural signal detection method (direct neural connection method), which connects more directly to the nervous system, differs slightly in that it reads the electrical neural signals of brain cells directly instead of brain waves, thereby detecting the ‘cause’ of the brain’s electrical activity.
The BCI under study is not a miracle device for reading the mind. Depending on the application it can be used to infer a person’s emotion or mental state, but that is only a secondary use. Essentially it is a device for reading human intentions: it reads the electrical signals of brain waves or brain cells and interprets certain patterns among them as input signals. The accuracy and the speed of signal analysis still need to improve.
Reading brain signals from outside is not difficult: the technical difficulty is low, and EEG detection BCIs, which can be implemented in various forms, are sufficient. Conversely, transmitting external signals to the brain is difficult: the technical difficulty is high, and a neural signal detection (direct neural connection) BCI, which must be implemented as an implanted device, is required. Even if it is realized, it is hard to know how it will affect the brain, so safety cannot be guaranteed. Currently, implantable BCIs have been studied only to a limited extent, for patients who have lost sight or touch.
2. Brain Computer Interface (BCI) Development
In reality, the first experiment, in the 1970s, succeeded in operating a machine using a monkey. The technology evolved significantly in the 1990s and achieved tangible results by the 2010s.

A brainwave (EEG) is spontaneous electrical activity that can be measured on the human scalp and is a means of grasping changes in brain activity visually and spatially. For this reason, since Hans Berger recorded the first brainwave (EEG) in 1929, it has been widely used in clinical practice and brain function research.

As a computer interface medium, brainwaves have both advantages and disadvantages. The advantages are that they are cheaper to measure than large devices such as fMRI, and the non-invasive method, which does not insert sensors into the scalp, is harmless to the human body and enables real-time analysis of information responses in the brain. However, the non-invasive BCI method is difficult to analyze sensitively because of the inevitable mixing of noise and loss of information, while the invasive BCI method carries a high surgical risk.

In the early stages of development, BCI technology was used mainly for medical purposes, such as treating children with ADHD (attention deficit hyperactivity disorder) or assisting severely disabled people. In the past, BCI devices were heavy and hard to wear because of the large number of sensors. Recently, however, companies such as NeuroSky, Emotiv, and InteraXon have released lightweight, easy-to-wear headset devices at low prices, and these are also used for various purposes such as games and concentration training.
If BCI research and technology development accelerate, BCI is expected to follow touch screens and augmented reality as a next-generation interface. Classic input interfaces for computers and smartphones, such as keyboards, mice, and keypads, have recently evolved into touch pads and motion recognition, and if BCI development accelerates it is highly likely to serve as the next-generation interface after them. In particular, because BCI can issue commands naturally without using the hands or other parts of the body, it is judged suitable for fields such as virtual reality and image and photo recognition.
2.1) Distinction of Brain Computer Interface technical methods
BCI technology is classified into an invasive type (inserted type) and a non-invasive type (non-inserted type) according to the site at which the electrical neural signals of brain waves or brain cells are measured.
– Invasive BCI
An implantable BCI opens the skull and implants electrodes into the brain. Through this, the electrical signals or movements of the motor cortex are read, picked up through the implant’s wiring, and reflected in an external machine. Unlike the EEG detection method, which detects and reads brain waves (the ‘results’ of the brain’s electrical activity), this approach reads nerve signals from brain cells directly and thereby detects the ‘cause’ of the brain’s electrical activity. Because the neural signal detection method connects directly to the nervous system and reads neural signals (direct neural connection method), it can be implemented only in this implanted form.
This method has the advantage of being the most sensitive: it can read nerve signals from brain cells in addition to brain waves, and through its more direct access to the nervous system it can handle direct information input and output to the brain without the aid of other interface devices. However, there is a risk of side effects caused by scarring in the brain where it contacts the electrodes. Over time, problems such as natural healing of the scarred area or weakening of the signal may occur.
Implantable BCI research focuses on restoring vision to people who have lost their sight in accidents and on providing self-operated machines to patients with general paralysis. Indeed, among those who have undergone the procedure, some obtained limited visual information, enough to drive at slow speed. Patients with general paralysis have also undergone invasive BCI procedures.
– Partially invasive BCI
Partially invasive BCI refers to implants inserted inside the skull but not reaching the gray matter of the brain. The EEG detection type of BCI, which detects and reads the EEG (mainly the ‘result’ of the brain’s electrical activity), is implemented in this form.
The basic principle of reading the EEG is the same as recording and interpreting EEG potentials in non-invasive BCI. However, a thin plastic plate is implanted under the dura mater so that the electrodes measure the EEG in direct contact with the cerebral cortex.
Compared to the non-invasive type, the signal is not attenuated by the skull, so the resolution is higher; compared to the invasive type, the scarring problem in the brain does not occur.
– Non-invasive BCI
Non-invasive BCI is a method that does not insert an interface into the brain. As with partially invasive BCI, the EEG detection type, which mainly detects and reads the EEG (the ‘result’ of the brain’s electrical activity), is implemented in this form.
Currently, the most widely used method is an EEG (electroencephalography) electrode attached to the scalp to read brainwave potentials. In addition, magnetoencephalography (MEG), which reads the brain’s magnetic fields, functional magnetic resonance imaging (fMRI), and methods using ELF/SLF/ULF waves, weak frequency bands that brain neurons can absorb, are being studied.
EEG electrodes are devices already in medical use, so systems based on them can be commercialized at relatively low cost. In particular, the approach gained momentum with the emergence of dry EEG terminals that do not need a gel applied to improve connection efficiency.
Invasive and partially invasive BCI require surgery, so they are difficult to supply to the general public. Non-invasive BCI is the form most amenable to commercialization for the general public and is a promising field.
However, the non-invasive type has disadvantages. First of all, it can never match the precision of the invasive type. It is also a problem that the neural signal detection method (direct neural connection method) cannot be implemented non-invasively, so the device can handle only the input side of information exchange, not the output side. In fact, this is a bigger problem than precision: precision has already improved to a usable level, but the limitation on usage caused by the impossibility of implementing the neural signal detection method is fundamentally difficult to resolve.
Of course, if it is used as a means of assisting other interface devices, as EEG-sensing BCI is, the non-invasive type is sufficiently useful and is judged to have considerable commercial value despite the technical disadvantages above. There are limits to its use in applications such as cyborg technology or fully-immersive virtual reality, but it can still be useful for operating a computer or mobile device. Other interface devices can also be used to convey sensations to the user, compensating for the fact that it cannot respond to information output.
3. Brain Computer Interface Process
The essence of BCI is to analyze what a person intends to do and produce the corresponding result. Therefore, a neurologically meaningful signal is obtained and the BCI is constructed using that signal. In general, EEG is most commonly used, because it is the cheapest, has good mobility, is actively researched, has many precedents, and is easy to apply.
In addition to EEG, various methods are emerging, such as processing nerve signals through NIRS, which measures changes in cerebral blood flow using near-infrared light. The problems are that an NIRS system costs more to build than an EEG system and that much less research data has yet accumulated.
The brainwave detection type BCI system measures brainwave signals of a specific state through EEG measurement devices, extracts distinctive features, classifies them, converts them into general control signals, and controls computers or devices.
1. After attaching electrodes to the user’s head, EEG data is measured using a measuring instrument.
2. The measured brainwave data is converted into a digital signal through an A/D converter and input to a computer.
3. The input EEG data is processed using various algorithms, then recognized and classified to generalize it as a control signal.
4. The final output signal is applied to various terminal devices such as computers, game machines, and medical devices.
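The four steps above can be sketched as a minimal, self-contained Python example. Everything here is illustrative: the one-second signal is synthetic (a 10 Hz sine standing in for alpha activity plus noise), the naive DFT stands in for a real spectral-analysis library, and the threshold is chosen for this toy data rather than by any real calibration.

```python
import math
import random

def band_power(samples, fs, f_lo, f_hi):
    """Estimate signal power in the band [f_lo, f_hi] Hz with a naive DFT."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def classify_window(samples, fs, threshold):
    """Step 3: turn an extracted feature (alpha-band power) into a control signal."""
    return "RELAXED" if band_power(samples, fs, 8.0, 13.0) > threshold else "ACTIVE"

# Steps 1-2 are replaced by a synthetic, already-digitized one-second window.
random.seed(0)
fs = 128  # sampling rate in Hz (an assumption; consumer headsets vary)
t = [i / fs for i in range(fs)]
relaxed = [math.sin(2 * math.pi * 10 * x) + 0.1 * random.gauss(0, 1) for x in t]
active = [0.1 * random.gauss(0, 1) for _ in t]  # no dominant alpha rhythm

# Step 4: the resulting control signal would be forwarded to the target device.
print(classify_window(relaxed, fs, threshold=5.0))  # RELAXED
print(classify_window(active, fs, threshold=5.0))   # ACTIVE
```

A real system would replace the DFT with a proper filter bank or FFT and learn the classification boundary from training data instead of hard-coding a threshold.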
4. Cases and Prospect
4.1) Brain Computer Interface cases
It can be used for mobile applications, toys, and fully-immersive interactive content.
4.2) Headset Type Brain Computer Interface
BCI measurement equipment and neurofeedback-related equipment used in earlier experiments have the advantage of relatively sophisticated and accurate measurement, but they are expensive and too bulky for practical use as personal equipment. Recently, however, “NeuroSky”, “InteraXon”, “Emotiv”, and “OpenBCI” have released headset-type BCI measurement devices at relatively low prices.
4.3) Brain Computer Interface Game
Recently, research and commercialization attempts to use BCI as a game interface have been actively conducted. The earliest game interfaces took the form of joysticks and joypads; Nintendo’s Wii console began using a motion controller, and game interfaces started to evolve in earnest. Since then, motion controllers that let players manipulate games through recognized body movements, from the PlayStation 3’s Move to the Xbox 360’s Kinect, have become popular.
BCI is currently at an early stage of technology and is expected to be actively used in functional games such as strengthening children’s concentration, preventing dementia in the elderly, and training mental relaxation. In such functional games, BCI technology is an example of how the controversy over the harmfulness of games can be overcome, making a positive contribution to the game industry.
– MindBalance
MindBalance is a BCI video game jointly produced by researchers at University College Dublin and Media Lab Europe. In this game, the movement of a 3D avatar character in a virtual reality game is controlled by BCI: the player must keep a gorilla avatar balanced on a single rope.
The game uses SSVEP (steady-state visually evoked potentials) to control the gorilla’s movement. The frequency of the signal generated in response to a visual stimulus differs with the stimulus frequency, and the game exploits that difference. On both sides of the gorilla avatar are checkered patterns flickering at different rates: looking at the right pattern makes the gorilla balance to the right, and looking at the left pattern makes it balance to the left.
In results from six experimenters playing the game, 41 out of 48 gameplays succeeded. The real-time control accuracy shown by the experimenters reached 89 percent, which demonstrated that BCI can be used for real-time game control.
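The SSVEP selection principle can be sketched as follows. This is a toy illustration, not the detection algorithm the MindBalance researchers used: the two flicker frequencies (17 Hz and 20 Hz), the pure-sine “EEG”, and the correlation score are all assumptions made for the example.

```python
import math

def ssvep_target(samples, fs, candidates):
    """Return the candidate stimulus frequency with the strongest response,
    scored by correlating the signal with sine/cosine references."""
    n = len(samples)
    def score(f):
        re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
        return (re * re + im * im) / n
    return max(candidates, key=score)

fs = 256
t = [i / fs for i in range(fs)]  # one second of samples
# Suppose the left pattern flickers at 17 Hz and the right one at 20 Hz,
# and the user is looking at the right pattern (so the response is at 20 Hz).
signal = [math.sin(2 * math.pi * 20 * x) for x in t]
direction = "right" if ssvep_target(signal, fs, [17.0, 20.0]) == 20.0 else "left"
print(direction)  # right
```

Because each checkered pattern flickers at its own frequency, deciding which one the player is watching reduces to asking which candidate frequency dominates the measured response.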
– Smart Brain Games
Smart Brain Games is a BCI control system developed by Cyber Learning Technology of the United States. The user plays the game wearing a helmet with three sensors; the user’s brainwaves are collected and analyzed through a device called a smart box connected to the game console and used to control the game.
The system supports existing games but is limited in that it merely replaces the input of specific buttons, such as the front, rear, left, and right arrow keys, with BCI.
– Mindball
Mindball is a BCI game developed by Sweden’s Interactive Productline, in which two users sit at opposite ends of a table fitted with a transparent tube containing a small ball.
The rule of the game is to win by pushing the ball away from yourself and sending it to the other side. The more physically and mentally stable you are during the game, the more the ball moves toward your opponent.
This game broke the old rule that games can be won only by competing in a more excited state, reversing the idea so that the player wins by reaching a more stable, relaxed state. The developers promote the game as training players to attain a physically and mentally stable, relaxed state, which helps improve their mental and physical health.
5. Brain Computer Interface studies
The University of Technology Sydney in Australia is conducting the Mind Switch study, aimed at overcoming physical disabilities and at use in future homes. Using the principle that alpha waves appear when people close their eyes and maintain a stable state, and decrease when they open their eyes, experiments were conducted to turn electrical appliances on and off.
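The Mind Switch principle reduces to an edge-triggered toggle on alpha-band power. The sketch below assumes the alpha power for each time window has already been computed; the numeric values and the threshold are made up for illustration and do not come from the study.

```python
def mind_switch(alpha_powers, threshold):
    """Toggle an appliance each time alpha power rises above the threshold
    (eyes closed, resting) after having been below it (eyes open)."""
    on = False
    was_high = False
    states = []
    for p in alpha_powers:
        high = p > threshold
        if high and not was_high:  # rising edge: eyes just closed
            on = not on
        was_high = high
        states.append(on)
    return states

# Eyes open (low alpha), closed (high), open, closed again: light goes on, then off.
print(mind_switch([1, 2, 9, 8, 1, 2, 10, 1], threshold=5))
# [False, False, True, True, True, True, False, False]
```

Triggering only on the rising edge, rather than on the level itself, is what lets a sustained eyes-closed period count as a single switch press.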
In Japan, the Institute for Brain Function conducted a study on distinguishing positive and negative intentions for the purpose of developing emotional interfaces. As an EEG-inducing method, a positive or negative intention is distinguished by having the subject focus consciousness on the left ear for “yes” and on the right ear for “no”.

IBVA, a US venture company, conducted a study on biofeedback aimed at application to virtual reality. EEG is applied to computer games: in a car racing game, the direction is controlled with the joystick and the speed is adjusted by EEG.

In 1999, the Eberhard Karls University of Tübingen in Germany developed a word processor for disabled people using SCP (slow cortical potentials). The method finally selects one character by repeatedly choosing one of two character groups on the screen. It showed a typing speed of two characters per minute.
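The selection scheme is essentially a binary search over the character set, which explains the slow typing speed: choosing one of 26 letters takes up to five binary decisions. The sketch below is a hypothetical reconstruction of that selection logic, not the Tübingen implementation; decoding the SCP shifts into 'L'/'R' decisions is assumed to have happened already.

```python
def binary_speller(alphabet, choices):
    """Narrow the character set by halves until one character remains.
    `choices` is the sequence of binary decisions ('L' or 'R') decoded
    from the user's slow cortical potential shifts."""
    chars = list(alphabet)
    for c in choices:
        mid = len(chars) // 2
        chars = chars[:mid] if c == "L" else chars[mid:]
        if len(chars) == 1:
            break
    return chars[0]

# 2**5 = 32 >= 26, so at most five choices pin down any letter.
print(binary_speller("ABCDEFGHIJKLMNOPQRSTUVWXYZ", "LRLL"))  # G
```

At one SCP decision every several seconds, five decisions per letter lines up with the reported rate of about two characters per minute.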
The Donchin ERP lab team at the University of Illinois in the United States studied a word-entry interface for people with disabilities using ERP (event-related potentials). If the average response to a specific stimulus is extracted through repeated trials, BCI control through ERP is possible. A character matrix is presented, and the subject inputs a specific character by selecting characters one by one as ERP activation identifies the intended row and column.
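A minimal sketch of the row/column selection step: the character sits at the intersection of the row and column that evoked the strongest averaged ERP response. The matrix layout and the score values below are invented for illustration, and the averaging over repeated flashes is assumed to have been done already.

```python
def p300_select(matrix, row_scores, col_scores):
    """Pick the character at the intersection of the row and column whose
    averaged ERP responses are strongest."""
    r = max(range(len(row_scores)), key=lambda i: row_scores[i])
    c = max(range(len(col_scores)), key=lambda j: col_scores[j])
    return matrix[r][c]

matrix = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ1234", "567890"]
# Hypothetical averaged ERP amplitudes per flashed row / column.
row_scores = [0.2, 0.1, 0.9, 0.3, 0.2, 0.1]   # row 2 strongest
col_scores = [0.1, 0.2, 0.1, 0.8, 0.3, 0.2]   # column 3 strongest
print(p300_select(matrix, row_scores, col_scores))  # P
```

Flashing whole rows and columns instead of single cells means a 6×6 matrix needs only 12 stimulus classes rather than 36, which is what makes the averaging practical.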
The Salk Institute for Biological Studies and the Naval Health Research Center in the United States are jointly developing a portable real-time alertness monitoring system that measures the level of arousal through brain waves and issues an appropriate alert when it falls below a certain level.
This technology can be used as a system that responds to drowsy driving or emergency situations by monitoring the alertness of truck drivers and aircraft pilots.
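Such a monitor amounts to thresholding a smoothed arousal estimate. The sketch below assumes an arousal index has already been derived from the EEG for each epoch; the window size, alert level, and sample values are illustrative assumptions, not parameters from the actual system.

```python
from collections import deque

def alertness_monitor(arousal_stream, window, alert_level):
    """Return the indices at which the moving average of the arousal index
    over the last `window` readings drops below `alert_level`."""
    recent = deque(maxlen=window)
    alerts = []
    for i, a in enumerate(arousal_stream):
        recent.append(a)
        if len(recent) == window and sum(recent) / window < alert_level:
            alerts.append(i)
    return alerts

# Hypothetical arousal index per epoch: the driver gradually becomes drowsy.
print(alertness_monitor([0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.8], window=3, alert_level=0.5))
# [4, 5, 6]
```

Averaging over a window before comparing to the alert level keeps a single noisy epoch from triggering a false alarm.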
Miguel Nicolelis’s team at Duke University in the United States succeeded in moving a robot arm by inserting electrodes into a monkey’s brain to capture signals related to arm movement. The results of this experiment were published in the journal Nature.
Jonathan R. Wolpaw’s research group at the State University of New York conducted an experiment using EEG to move a cursor left and right on a monitor. However, this method has the drawback of requiring long training of the subject.
5.1) Recent research trends
– Whereas previous studies focused mainly on developing interfaces for disabled people, basic processing of the EEG, and the use of the interface itself, more practical and commercial studies are now in progress. Beyond those mentioned, there are approaches such as image classification, manipulation in virtual reality, and neurofeedback.
For example, attention monitoring studies can be applied to workers in occupations that require sustained attention. Through BCI, a device that identifies the degree of alertness and recommends or warns of rest can be installed in trucks, airplanes, and airport facilities. Mercedes-Benz, through its Mind-Lab project, studied a system that monitors the driver’s condition and detects and warns of drowsy driving.
– Neuralink Corporation is an American neurotechnology company founded by Elon Musk and others, developing implantable brain–machine interfaces (BMIs).
5.2) Abuse cases of Brain Computer Interface technology
Ex) China: Students Undergo Trials For New Brainwave-detecting Headbands
6. Present Brain Computer Interface technology and comparison with fiction
In fiction and movies, virtual reality is described as fully immersive: all input/output is processed through the Brain-Computer Interface (BCI) alone, without any other interface device, so virtually all input and output is performed using only the brain and the computer.
However, current virtual reality technology pursues limited immersive virtual reality rather than fully immersive virtual reality. Development has proceeded toward implementing immersive virtual reality to a limited extent using existing interface devices such as HMDs, together with common interface devices such as displays (monitors, HMDs, etc.), speakers, keyboards, and mice. We still cannot pursue a completely immersive virtual reality like that in fiction or movies.
Why pursue ‘limited immersive virtual reality’ rather than ‘completely immersive virtual reality’? Because to realize fully immersive virtual reality as depicted in fiction, the user’s brain must not only issue commands to the computer; the computer must also be able to deliver information back to the brain. In other words, direct and smooth two-way data communication between the computer and the user’s brain is required, and actual brain-computer interface technology has not progressed that far. Although progress is being made little by little, the technology for a cheap and convenient device that most people would be satisfied with has not yet emerged.
If the safety of fully immersive virtual reality, as in fiction, is proven and it becomes popularized and widely available at low prices, it is expected to quickly replace limited immersive virtual reality.
6.1) Technical limitation
To realize a fully immersive virtual reality like that in fiction, a technology is required that can act not only as an input device but also as an output device. It must be able to detect and interpret the ‘cause’ of the brain’s electrical activity by reading nerve signals directly, beyond merely reading brain waves, which are simply the ‘results’ of that activity. Since both input and output must be handled, a technical system that can directly input and output information to the brain is required. In addition, it must be possible to temporarily suspend the functions of the sensory and motor nerves between the user’s brain and body, creating a state in which only virtual reality data interacts with the brain.
The user’s brain then receives data only from the virtual reality. Likewise, the electrical signals from the user’s brain are transmitted only to the virtual reality: signals from the body’s sensory nerves do not affect the user’s consciousness, and the brain cannot issue commands to the motor nerves. To realize this, we must be able to intervene in the brain and directly induce its electrical activity, not merely sense it.
However, real brain-computer interface (BCI) technology is still insufficient to realize a fully immersive virtual reality. The field mainly studied today is non-invasive BCI, which is not inserted into the user’s body. Most non-invasive BCIs employ the EEG detection method, which detects and reads the EEG, the ‘result’ of the brain’s electrical activity; they do not detect the ‘cause’ of that activity by reading nerve signals directly. They can therefore input brain information into a computer, but generally cannot output information directly to the brain.
Such a BCI can serve as an input device like a keyboard or mouse, but it cannot play the role of an output device like a display (monitor or HMD) or speaker. Because non-invasive BCI supports only input and not output, it cannot completely replace existing interface devices. Even though it can act as an input device, it cannot function as an output device that shows a screen or plays a sound directly in the head, so a fully immersive virtual reality cannot yet be realized. Its utility is therefore limited to assisting existing interface devices in the implementation of limited immersive virtual reality.
Meanwhile, invasive BCI, which is inserted into the user’s body and connected directly to the brain’s nerves, is largely divided into two methods, and whether a fully immersive virtual reality can be realized depends on which is adopted: the EEG detection method, which detects and reads brainwaves, and the direct neural connection method, which can read nerve signals directly and send information to the brain’s nerves.
In the former case, neural signals cannot be read directly and direct information input/output to the brain is impossible, the same limitation as non-invasive BCI; it may be used in implementing limited immersive virtual reality but cannot support full immersion. The latter can read neural signals directly and handle direct input and output of information to the brain, but it is at a rudimentary stage, with current research prioritizing application to cyborg technologies such as prosthetic hands and feet. It will still take a long time before it develops to the point of being applied to the virtual reality field and realizing a fully immersive virtual reality.
To realize a fully immersive virtual reality, the technical limitations of the brain-computer interface must be resolved, which requires a much higher level of technology than today’s. BCI must be usable as an output device as well as an input device, and this is difficult to achieve in the near future.
Even if fully immersive virtual reality becomes possible through the development of the brain-computer interface, many barriers remain. Fully immersive virtual reality presupposes that, while accessing it, the user’s five senses are blocked and motor ability is suspended, leaving the body in a de facto vegetative state. This reduces the user’s ability to respond to unexpected errors and increases the likelihood of accidents. Because it acts directly on the user’s brain and nerves, the risk is high and the safety problem cannot be ignored. There is also considerable concern about exploitation, since information is transmitted to and output from the brain by an external system.
6.2) Safety problem
As mentioned above, a fully immersive virtual reality such as in fiction must halt the user’s body: the system not only detects the brain’s electrical activity but directly intervenes and induces it, so that the brain interacts with data from the virtual reality without moving the body.
However, even if the technical problems of this concept are solved, ethical and legal problems must also be overcome. As you may have noticed from the description above, blocking the five senses and stopping the body while accessing the virtual reality is in fact a vegetative state. What if there is an earthquake or fire in this situation? What if the user suddenly has to get up and flee from an emergency? Such suspension of the body increases the possibility of unexpected errors or accidents and greatly reduces the user’s ability to respond. Whether responsibility for such an accident lies with the virtual reality provider or the individual needs discussion, but, as with the controversy surrounding autonomous vehicles, that discussion is still at the beginning stage. In addition, if more advanced intervention is allowed, there is a high possibility of specific risks or after-effects because it acts directly on the user’s brain and nerves, and safety issues must be raised in dealing with this.
This safety problem concerns not only the suspension of the body but also direct output to the brain. The fully immersive virtual reality of fiction shows direct effects on the user’s brain and nervous system, for example feeling pain similar to the real thing, or calming or manipulating another’s consciousness in the name of a skill. Such inputs and outputs are problems in themselves, and because they act on the brain and nervous system they are very likely to cause certain risks or after-effects unless safety mechanisms are provided. Therefore, for virtual reality such as that in fiction or movies to be realized, these problems must be resolved: What should the safety policy be, and what is ethically permitted? Should it be understood as a continuation of existing law, or approached with different perspectives and perceptions?
6.3) Ethical and social problem
Many settings in fiction portray fully immersive virtual reality as a medium for satisfying deficiencies in reality. For example, in a scene in Minority Report, many people are portrayed venting desires for honor, sexuality, and violence, unsatisfied in reality, through virtual reality.
In this way, immersive virtual reality in fiction ranges from a simple game to a hobby difficult to regard as ordinary, and even to unconscious escapism. Therefore, if real immersive virtual reality develops in a form that satisfies deficiencies in reality, it is necessary to discuss how to manage addiction and over-immersion, and, depending on the type of addiction, whether it should be approached as separate fields or managed under one special law. Currently, however, the image of virtual reality addiction is expressed in much the same way as the image of game addiction.
But is it right for the state to control individual desires in the name of over-immersion and addiction? That question also exists. The preceding paragraph stigmatizes meeting in virtual reality a need that is not met in reality as ‘not right.’ Standards of right and wrong vary from person to person, and for the state to set the standard of right and wrong and control everyone’s desires accordingly amounts to fascism.
For some people, it may be right to fulfill their needs in virtual reality. From the government’s perspective, people work all day and earn money for their needs, so if people can easily satisfy those needs in virtual reality and spend only a minimal amount of time working, it may not look desirable. However, there is no reason for an individual to give up a hobby for the benefit of the state as a whole.
In addition, virtual reality in fiction implements various ideals and environments at less cost and time than reality. This, in turn, means that even inhuman situations can easily be realized. Since even the problems of reality have not been properly solved, there is reason to worry that this may happen.
On the other hand, artificial intelligence is a topic that cannot be left out of fiction’s virtual reality. AI in fictional virtual reality worlds behaves like a person and even looks human, yet these AIs appear to be consumed like tools for the happiness of others in virtual reality. If they are as human as human beings, their rights are bound to become an issue for discussion.
Excessive addiction and the collapse of daily life can hinder the pursuit of justice in human society and the maintenance of self-identity. It may therefore be right to place limits on people in order to keep them safe from mental and emotional deterioration. Indeed, in a situation where technology places the flow of thought and memory under human control, limiting that power in order to prevent danger is necessary for people to be truly free.
Of course, the above problems are seen only from the perspective of modern society, so in an optimistic future they may turn out to be misplaced worries, much like 'digital dementia'. But once a problem arises there may be no way to stop it, so we need to start thinking about it now. Virtual reality can be an efficient and effective way to replace experience; however, if it is confused with reality, it can have fatal effects and aggravate social problems.
Brainwave
What is a brainwave?
Neural oscillations, or brainwaves, are rhythmic or repetitive patterns of neural activity in the central nervous system. Neural tissue can generate oscillatory activity in many ways, driven either by mechanisms within individual neurons or by interactions between neurons. In individual neurons, oscillations can appear either as oscillations in membrane potential or as rhythmic patterns of action potentials, which then produce oscillatory activation of post-synaptic neurons. At the level of neural ensembles, synchronized activity of large numbers of neurons can give rise to macroscopic oscillations, which can be observed in an electroencephalogram. Oscillatory activity in groups of neurons generally arises from feedback connections between the neurons that result in the synchronization of their firing patterns. The interaction between neurons can give rise to oscillations at a different frequency than the firing frequency of individual neurons. A well-known example of macroscopic neural oscillations is alpha activity.
Basically, five types of brainwaves are well known, listed here starting from the lowest frequency band.

Delta wave – brainwave between 0.5 and 4 Hz. A delta wave is a high-amplitude brain wave; like other brain waves, delta waves are recorded with an electroencephalogram (EEG) and are usually associated with the deep stage 3 of NREM sleep, also known as slow-wave sleep (SWS), and aid in characterizing the depth of sleep.
Delta waves usually appear when people are sleeping and are emitted from all over the brain. Their amplitude is very high compared with that of higher-frequency waves. Because the band is close to 0 Hz, it is directly affected by the direct-current (DC) component and is often removed by filtering when measuring EEG, except in sleep recordings. It is also the band most affected by body movement or other actions during measurement, so delta recordings are often described as contaminated.

Theta wave – brainwave between 4 and 7 Hz. It refers to the EEG that appears during hypnosis and drowsiness. Theta activity is especially prominent in species such as rodents; in humans, by contrast, alpha and beta waves dominate. Among brain structures, it is considered to arise mainly in the hippocampus and the thalamus, structures associated with the limbic system; that is, it is known to occur in the mammalian brain. It is said to occur more often in childhood than in adulthood.
Two types of theta rhythm have been described. The hippocampal theta rhythm is a strong oscillation that can be observed in the hippocampus and other brain structures in numerous species of mammals including rodents, rabbits, dogs, cats, bats, and marsupials. “Cortical theta rhythms” are low-frequency components of scalp EEG, usually recorded from humans.
Cortical theta rhythms observed in human scalp EEG are a different phenomenon with no clear relationship to the hippocampus. In human EEG studies, the term theta refers to frequency components in the 4–7 Hz range, regardless of their source. Cortical theta is observed frequently in young children. In older children and adults, it tends to appear during meditative, drowsy, hypnotic or sleeping states, but not during the deepest stages of sleep. Several types of brain pathology can give rise to abnormally strong or persistent cortical theta waves.

Alpha wave – brainwave between 8 and 12 Hz. Whereas delta and theta waves appear in non-waking states, the alpha wave appears in the waking state. It is the EEG of a relatively relaxed waking state and is particularly noticeable when the eyes are closed; it is therefore closely related to the visual areas. Brain waves belonging to the alpha range include the mu wave (μ wave) and the SMR (sensorimotor rhythm) wave.
They predominantly originate from the occipital lobe during wakeful relaxation with closed eyes. Alpha waves are reduced with open eyes, drowsiness and sleep. Historically, they were thought to represent the activity of the visual cortex in an idle state. More recent papers have argued that they inhibit areas of the cortex not in use, or alternatively that they play an active role in network coordination and communication. Occipital alpha waves during periods of eyes closed are the strongest EEG brain signals.

Beta wave – brainwave between 12.5 and 30 Hz. Among waking-state brain waves, beta states are associated with normal waking consciousness and thinking activity, and beta is the most prominent brainwave in normal waking activity. Compared with the delta wave, its amplitude is low relative to its high frequency. It is divided by frequency band into low beta waves (12.5–16 Hz, "beta 1 power"), beta waves (16.5–20 Hz, "beta 2 power"), and high beta waves (20.5–28 Hz, "beta 3 power").

Gamma wave – brainwave with a frequency between 25 and 140 Hz, the 40 Hz point being of particular interest. Gamma rhythms are correlated with large-scale brain network activity and cognitive phenomena such as working memory, attention, and perceptual grouping, and can be increased in amplitude via meditation or neurostimulation.
It is also a high-frequency brainwave emitted in extremely tense or excited states. Because gamma waves are difficult to measure and sustain, they have not been studied as actively as other bands; less is known about them than about the other brain waves, and they are generally reported to appear during very deep concentration.
Altered gamma activity has also been observed in many mood and cognitive disorders, such as Alzheimer's disease, epilepsy, and schizophrenia.
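The five bands above can be summarized in a small lookup table. This is an illustrative sketch only: band edges vary slightly between sources (and, as noted above, the beta and gamma ranges in the literature overlap), so the non-overlapping edges chosen here are an assumption, not a standard.

```python
# Conventional EEG frequency bands, following the ranges described above.
# The exact edges are an assumption; published sources differ slightly.
BANDS = [
    ("delta", 0.5, 4.0),
    ("theta", 4.0, 8.0),
    ("alpha", 8.0, 12.0),
    ("beta", 12.5, 30.0),
    ("gamma", 30.0, 140.0),
]

def classify_frequency(hz):
    """Return the conventional band name for a frequency in Hz, or None
    if the frequency falls outside (or between) the listed bands."""
    for name, lo, hi in BANDS:
        if lo <= hz < hi:
            return name
    return None
```

For example, a 10 Hz oscillation falls in the alpha band, while a 2 Hz oscillation is delta.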
How to measure?
Electroencephalography (EEG) is an electrophysiological monitoring method to record electrical activity of the brain. It is typically noninvasive, with the electrodes placed along the scalp, although invasive electrodes are sometimes used, as in electrocorticography. EEG measures voltage fluctuations resulting from ionic current within the neurons of the brain. Clinically, EEG refers to the recording of the brain’s spontaneous electrical activity over a period of time, as recorded from multiple electrodes placed on the scalp. Diagnostic applications generally focus either on event-related potentials or on the spectral content of EEG. The former investigates potential fluctuations time locked to an event, such as ‘stimulus onset’ or ‘button press’. The latter analyses the type of neural oscillations (popularly called “brain waves”) that can be observed in EEG signals in the frequency domain.
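The spectral analysis described above can be sketched in a few lines: estimate the power spectrum of an EEG segment and average it within a band of interest. This is a minimal periodogram-based sketch, not clinical-grade processing; real pipelines use windowing, artifact rejection, and averaged estimators such as Welch's method.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average power of `signal` within `band` = (lo, hi) Hz,
    estimated from a simple magnitude-squared FFT (periodogram)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

# Synthetic example: a noisy 10 Hz "alpha" oscillation sampled at 256 Hz.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))

alpha = band_power(eeg, fs, (8.0, 12.0))    # should dominate
beta = band_power(eeg, fs, (12.5, 30.0))    # mostly noise floor
```

With this synthetic signal, the alpha-band estimate comes out far larger than the beta-band estimate, which is exactly the kind of comparison used to characterize a recording's dominant rhythm.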
In conventional scalp EEG, the recording is obtained by placing electrodes on the scalp with a conductive gel or paste, usually after preparing the scalp area by light abrasion to reduce impedance due to dead skin cells. Many systems typically use electrodes, each of which is attached to an individual wire. Some systems use caps or nets into which electrodes are embedded; this is particularly common when high-density arrays of electrodes are needed.

Electrode locations and names are specified by the International 10–20 system for most clinical and research applications (except when high-density arrays are used). This system ensures that the naming of electrodes is consistent across laboratories. In most clinical applications, 19 recording electrodes (plus ground and system reference) are used. A smaller number of electrodes are typically used when recording EEG from neonates. Additional electrodes can be added to the standard set-up when a clinical or research application demands increased spatial resolution for a particular area of the brain. High-density arrays (typically via cap or net) can contain up to 256 electrodes more-or-less evenly spaced around the scalp.
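The 10–20 naming convention mentioned above encodes position directly in each label: the letter indicates the underlying region, odd numbers indicate the left hemisphere, even numbers the right, and "z" the midline. A minimal sketch of the 19 standard recording positions (the older T3/T4/T5/T6 labels are used here; modern nomenclature renames them T7/T8/P7/P8):

```python
# The 19 scalp positions of the International 10–20 system as used in
# routine clinical EEG (ground and reference electrodes are extra).
# Fp = frontal pole, F = frontal, C = central, P = parietal,
# O = occipital, T = temporal.
TEN_TWENTY_19 = (
    "Fp1", "Fp2",
    "F7", "F3", "Fz", "F4", "F8",
    "T3", "C3", "Cz", "C4", "T4",
    "T5", "P3", "Pz", "P4", "T6",
    "O1", "O2",
)

def hemisphere(label):
    """Return 'left', 'right', or 'midline' for a 10–20 electrode label:
    odd final digit = left, even = right, trailing 'z' = midline."""
    if label.endswith("z"):
        return "midline"
    return "left" if int(label[-1]) % 2 == 1 else "right"
```

So, for example, F3 sits over the left frontal region, P4 over the right parietal region, and Cz on the midline at the vertex.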
– What if we applied this brainwave research to art?
How do we respond to art?
Prior to the emergence of systematic brain science in the late 20th century, researchers studied how the human mind works based on psychology and on the study of visual perception, which was only just beginning to be understood. Humanity's essential activities were also subjects of inquiry, among them: how do we perceive and create works of art? An interesting question arises here: can we objectively study the various aspects of art, a creative and subjective experience? To answer it, we must first consider what we know about how the mind responds to art.
Beholder’s share
How do we respond to art? The first people to explore this question were Alois Riegl, Ernst Kris, and Ernst Gombrich of the Vienna School of Art History. Riegl, Kris, and Gombrich gained international fame by striving to establish art history as a scientific field based on psychological principles around the turn of the 20th century.
Alois Riegl (1858–1905) emphasized a psychological aspect of art that was obvious but had until then been ignored: a work of art is incomplete without the perceptual and emotional participation of the viewer. We cooperate with the painter by converting the two-dimensional image on the canvas into a three-dimensional depiction of the visual world, and each viewer interprets what appears on the canvas from a personal point of view, adding meaning to the picture. Riegl called this phenomenon the "beholder's involvement." Building on concepts derived from Riegl's research and on insights then emerging from cognitive psychology, the biology of visual perception, and psychoanalysis, Kris and Gombrich developed the idea further; Gombrich called it the "beholder's share."
Kris (1900–1957), who later became a psychoanalyst, began by studying the ambiguity of perception. The viewer responds to this ambiguity in terms of the conflicts and experiences of his or her own life, and in doing so recreates, to some extent, the experience of the artist who created the image. For the painter, the creative process is also an interpretive process, and for the viewer, the interpretive process is also a creative process.
Inverse optics problem: Inherent limitations of visual perception
Gombrich accepted Kris's concept of the viewer's response to the ambiguity of a painting and extended it to all visual perception. In the process, he came to understand an important principle of brain function: the brain takes the incomplete information about the outside world it receives from the eyes and completes it.
The image reflected on the retina is first deconstructed into electrical signals describing lines and contours, which outline the face or object. These signals are delivered to the brain and reorganized according to the Gestalt rules of organization (the brain tends to grasp a perceptual object not detail by detail but from an overall perspective, following rules such as figure and ground, similarity, and continuity; this is the Gestalt principle), and the image is then reconstructed and elaborated on the basis of prior experience. The result is the image we perceive. Remarkably, each of us creates an image of the outside world that is rich in meaning and yet surprisingly similar to the images seen by others. In this process of reconstructing an inner representation of the visual world, we see the brain's creative process in action.
The image projected on the retina of the eye can be interpreted in innumerable ways. George Berkeley, Bishop of Cloyne and Anglo-Irish Anglican philosopher, identified the key problem of vision as early as 1709. He wrote that we do not see material objects themselves but the light reflected from them. As a result, the two-dimensional image projected on the retina cannot directly specify, point by point, the three-dimensional structure of the object. This is the challenge we face when trying to understand how we perceive images, and it is called the 'inverse optics problem'.
The inverse optics problem refers to the fundamentally ambiguous mapping between sources of retinal stimulation and the retinal images that are caused by those sources.
For example, the size of an object, the orientation of the object, and its distance from the observer are conflated in the retinal image. For any given projection on the retina there are an infinite number of pairings of object size, orientation and distance that could have given rise to that projection on the retina. Because the image on the retina does not specify which pairing did in fact cause the image, this and other aspects of vision qualify as an inverse problem.
Gombrich understood this problem properly and quoted George Berkeley’s view that “the world we see is a construct that each of us has slowly built through years of experimentation.”
Our brain constantly reconstructs the world even though it does not receive enough information to reconstruct it accurately. Moreover, the images reconstructed by different individuals are surprisingly similar to each other. How can that be? The prominent 19th-century physician and physicist Hermann von Helmholtz argued that we solve the inverse optics problem by adding two kinds of information: 'bottom-up' information and 'top-down' information.
'Bottom-up' information is provided by computations in the circuits of our brain. Through these computations, the brain extracts key elements of the image of the physical world, such as contours, boundaries, and the intersections of lines.
'Top-down' information refers to cognitive influences and higher-level mental functions such as attention, imagery, expectation, and learned visual associations. Based on our experience, we guess at the meaning of the image in front of us; the brain does this by building and testing hypotheses.
Perception is the process by which the brain integrates the information it receives from the outside world with knowledge learned through previous experience and hypothesis testing. We attach this knowledge to every image we see. So when looking at a work of art, we associate it with what we have experienced over and over in the physical world, and we connect it with memories of all the other works of art we have encountered so far.
In this way, when we look at art, we need to study what causes our emotional reactions. Emotion is an instinctive process; emotions help us cope with fundamental challenges, making life colorful, helping us avoid pain and pursue pleasure. The emotions we feel before a picture are essentially the same as those we feel toward everything else in daily life. Art thus raises countless questions about perception and emotion that we are only now beginning to notice and identify.
Prototype test
After researching the above content, I applied the EEG device I had to a Unity project. First, I connected the InteraXon Muse 2 to a mobile phone, then connected it to Muse Lab on the computer through OSC streaming. I then opened a new OSC port in Muse Lab and connected it to Unity.
In this test, several kinds of brainwave data were received; the EEG data was connected to a cube in Unity, and the cube was then rotated and its colour changed according to the signal.
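The prototype itself runs in Unity, but the core of the mapping, turning one band-power value into a rotation speed and a colour, can be sketched outside the engine. The sketch below is illustrative only: the function name, the use of relative alpha power as the driving signal, and the blue-to-red colour sweep are my assumptions, not the project's actual code.

```python
import colorsys

def brainwave_to_cube(alpha_rel, max_speed=90.0):
    """Map a relative alpha-power value in [0, 1] to a rotation speed
    (degrees per second) and an RGB colour, the kind of mapping used to
    drive the cube in the prototype. Out-of-range inputs are clamped.
    This mapping is a hypothetical illustration, not the Unity code."""
    x = min(max(alpha_rel, 0.0), 1.0)
    speed = x * max_speed
    # Sweep hue from blue (low alpha, h = 2/3) to red (high alpha, h = 0).
    hue = (1.0 - x) * 2.0 / 3.0
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return speed, (r, g, b)
```

For example, `brainwave_to_cube(0.0)` yields a stationary blue cube, while `brainwave_to_cube(1.0)` yields a red cube spinning at the maximum speed. In Unity, the same per-frame update would live in a `MonoBehaviour`, feeding the speed to `transform.Rotate` and the colour to the cube's material.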