Hi everybody, how are you doing?

What I am going to present today is another project of mine on Brain/Human-Computer Interfaces (abbreviated as BCI). Besides introducing the project itself, I will also talk a little about the fundamentals: what a BCI is, how it works, and why I chose it as the method for my project.


If you watch TV, follow tech pages on the Internet, or read tech news every day, you will see that BCI systems have repeatedly proved their ability to bring significant benefits to daily-life applications, especially as alternative communication systems for handicapped people. Two types of BCI are frequently used for control in real life: motor imagery based BCI and steady-state visual evoked potential (abbreviated as SSVEP) based BCI.

Motor imagery based BCI

The operation of the former is based on the characteristics of our sensorimotor area.

To illustrate how this could be used in a BCI system, let's take a look at the video above. First, we need to know what we are watching. The two images on top show the mapping of the C3 (left) and C4 (right) electrodes on the scalp. The two graphs below them describe the change in the power of the alpha rhythm (at C3 and C4) when an event-related potential (ERP) occurs (a left-hand or right-hand imagery movement, in this case). One more point to notice is the relation between the top mappings and the bottom graphs: if the power of alpha at an electrode increases, the color of the area around the electrode position turns redder; otherwise, it turns bluer.

Now let me explain the phenomenon that is the foundation of motor imagery based BCI systems. Look at the left part: when the left-hand imagery movement occurs, as we can see in the bottom-left graph, the power of alpha at position C4 decreases significantly. In fact, the power of beta at C4 increases considerably at the same time, although this is not illustrated in the video. Looking at the mapping at the top right of the video, we can see that the right area turns to the background color while the left area keeps its own. We could then ask ourselves: why does a left-hand imagery movement affect the power of alpha and beta on the right side? It can be explained by the crossover connections in the brain (as shown in Fig. 1).


Fig. 1 The crossover connections: left hand to right hemisphere, right hand to left hemisphere [1]
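This contralateral drop in alpha power (known as event-related desynchronization) can be sketched with synthetic signals. Everything below is illustrative: the sampling rate, epoch length, and amplitudes are assumed values, not numbers taken from the video.

```python
import numpy as np

FS = 256          # sampling rate in Hz (assumed)
T = 2.0           # epoch length in seconds (assumed)
t = np.arange(0, T, 1 / FS)
rng = np.random.default_rng(0)

def alpha_power(signal, fs=FS, band=(8.0, 12.0)):
    """Mean spectral power of `signal` inside the alpha band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

# Synthetic "rest" epoch at C4: a strong 10 Hz alpha rhythm plus noise.
rest_c4 = 5.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

# Synthetic "left-hand imagery" epoch at C4: alpha is attenuated (ERD).
imagery_c4 = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

# A negative relative change corresponds to desynchronization.
erd = (alpha_power(imagery_c4) - alpha_power(rest_c4)) / alpha_power(rest_c4)
print(f"relative alpha change at C4: {erd:.2f}")
```

A real system would of course estimate band power from measured EEG epochs (often with Welch's method rather than a raw FFT), but the comparison of alpha power between rest and imagery is the same idea.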

Many BCI systems have been built based on this approach, and much research has been conducted around the world with the aim of improving the performance and increasing the accuracy (i.e., the classification result) of motor imagery based BCI systems. However, non-invasive BCI systems based on motor imagery are still not reliable and robust enough for multi-task control (i.e., three or more tasks) in daily-life applications. Recently, some research groups worldwide have developed invasive ones; even so, these are still not applicable to real life because of the cost and risks of brain surgery.


Regarding the latter, the EEG signal used in this type of BCI is generated by the visual cortex (Fig. 2) when it reacts to visual stimulation (such as an LED flickering at a specific frequency).


Fig. 2 Visual cortex area [2]

The principle of this type of BCI is that when you look at a sign or LED flickering at a specific frequency, your brain generates electrical activity at the same frequency as the visual stimulation (or at multiples of it) [3]. For example, if you look at an 8 Hz visual stimulus (as I said before, it could be a flickering LED), your brain will generate steady-state potentials at 8, 16, and 24 Hz (Fig. 3).


Fig. 3 Response of brain to visual stimulation of 8 Hz [4]
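This harmonic structure is exactly what makes SSVEP detection tractable: summing spectral power at a candidate frequency and its harmonics gives a score for each flicker target. Here is a minimal sketch on a synthetic signal; the sampling rate, window length, and candidate frequencies are assumptions for illustration.

```python
import numpy as np

FS = 512                         # sampling rate in Hz (assumed)
t = np.arange(0, 4.0, 1 / FS)   # 4-second analysis window (assumed)
rng = np.random.default_rng(1)

# Synthetic occipital signal while the user fixates an 8 Hz flicker:
# the SSVEP appears at the fundamental and its harmonics (8, 16, 24 Hz).
eeg = (1.0 * np.sin(2 * np.pi * 8 * t)
       + 0.5 * np.sin(2 * np.pi * 16 * t)
       + 0.25 * np.sin(2 * np.pi * 24 * t)
       + rng.normal(0, 1, t.size))

def ssvep_score(signal, f0, fs=FS, n_harmonics=3):
    """Sum of spectral power at f0 and its first harmonics."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return sum(spectrum[np.argmin(np.abs(freqs - k * f0))]
               for k in range(1, n_harmonics + 1))

candidates = [8.0, 10.0, 12.0, 15.0]   # flicker frequencies of the targets
detected = max(candidates, key=lambda f: ssvep_score(eeg, f))
print(detected)
```

Practical SSVEP systems often replace this plain power sum with canonical correlation analysis (CCA) against reference sinusoids, but the underlying idea of matching fundamental plus harmonics is the same.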

Here is another video showing an application of an SSVEP based BCI in controlling a game.

We can see that, in a BCI system based on SSVEP, more than one visual stimulus can be used to suit the design conditions. Each stimulus is assigned to a state or control command, which can then be classified using popular classifiers (random forest, support vector machine, or neural network).

The signal generated by the SSVEP approach has a high signal-to-noise ratio and relative immunity to artifacts (such as EMG, ECG, or EOG). As a result, it can be used in real-life applications reliably and robustly. However, although it overcomes the disadvantage of the former approach (i.e., reliability and robustness in multi-task control), it can cause some annoyance to users because of long exposure to the flickering LED.


In my project, I proposed a simple approach to a BCI application by combining it with IoT. The project was implemented to support handicapped people, or more exactly, to support families with disabled members in caring for their relatives who have disabilities. Fig. 4 shows the overall BCI system.


Fig. 4 Overall BCI-based IoT application

In the project, I used the EPOC, an inexpensive EEG device, to acquire the signal from the user's scalp. The EEG signal is then translated into control commands by the BCI module. These commands are received by the IoT module, which helps the handicapped person signal to their relatives (Fig. 5(a)) or send an SOS message (in case nobody is at home, Fig. 5(b)) to show that they are in urgent need of help.


Fig. 5 The use of our IoT application
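The flow from classified BCI output to IoT action can be sketched as a simple dispatch table. The command names and handler behavior below are hypothetical stand-ins for illustration, not the project's actual API.

```python
# Hypothetical handlers for the two IoT actions described above.
def signal_relative():
    return "alert sent to relatives at home"

def send_sos():
    return "SOS message sent to registered phone"

# Hypothetical mapping from the BCI module's classified output
# to an IoT action (names are illustrative).
COMMANDS = {
    "left_imagery": signal_relative,   # someone is at home
    "right_imagery": send_sos,         # nobody is at home
}

def dispatch(command):
    """Forward a classified command to its IoT handler, if any."""
    handler = COMMANDS.get(command)
    if handler is None:
        return "ignored: unrecognized command"
    return handler()

print(dispatch("right_imagery"))
```

Keeping the BCI and IoT sides decoupled behind a table like this makes it easy to add commands later (for example, for the emotion-recognition extension mentioned below) without touching the signal-processing code.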

To help users operate our application easily, we designed the GUI shown in Fig. 6.


Fig. 6 Graphical user interface

We then built a prototype for the experiment.


Fig. 7 The prototype of the IoT module


In my project, I decided to build the BCI system based on motor imagery for two main reasons. Firstly, as I said before, even though the SSVEP approach is more reliable for daily use, it could cause eye problems if used day after day. Secondly, using a BCI system based on motor imagery seems more natural than one based on SSVEP. The reason is that, before a stroke or injury, people normally use their arms to achieve their goals, for example, taking a cup to make tea. After the accident, they can still think of moving their limbs, but now to signal for help from others instead.



Fig. 8 Future development

For further development in the future, it would be more effective if we could embed emotion recognition. In addition, families with disabled members would also need to coordinate with medical organizations to better support the care of their relatives.

If you are interested in my project, please feel free to contact me. I will be glad to share ideas and receive feedback from you.


Thank you and see you,

Curious Chick



[1] https://www.joshuanava.biz/memories/the-double-brain.html

[2] https://www.slideshare.net/lasitham/brain-anatomy-12098255

[3] http://synaptitude.me/blog/a-quick-intro-to-ssvep-steady-state-visually-evoked-potential/

[4] https://www.intechopen.com/books/advances-in-robot-navigation/brain-actuated-control-of-robot-navigation



