[PROJ 6] A BCI-BASED IOTs APPLICATION

Hi everybody, how are you doing?

What I am going to present today is another project of mine on the Brain/Human-Computer Interface (abbreviated as BCI). In this post, I will not only introduce my project but also talk a little about the fundamentals: what it is, how it works, and why I chose it as a method for my project.

WHAT IS IT AND HOW DOES IT WORK?

If you watch TV, follow some tech pages on the Internet, or read tech news every day, you will see that BCI systems have, time and again, proved their ability to bring significant benefits to daily-life applications, especially in supporting handicapped people as an alternative communication system. Two types of BCI are frequently used for control in real life: motor imagery based BCI and steady-state visual evoked potential (abbreviated as SSVEP) based BCI.

Motor imagery based BCI

The operation of the former is based on the characteristics of our sensorimotor area.

To illustrate how it can be used in a BCI system, let's take a look at the video above. First, we need to know what we are watching. The two images on the top show the mapping of the C3 (left) and C4 (right) electrodes on the scalp. The two graphs below describe the change in the power of the alpha rhythm (at C3 and C4) when an event-related potential (ERP) occurs (left-hand or right-hand imagery movement in this case). One more point to notice is the relation between the top mappings and the bottom graphs: if the power of alpha at an electrode increases, the color of the area around the electrode position turns redder; otherwise, it turns bluer.

Next, let me explain the phenomenon that is the foundation of motor imagery based BCI systems. Look at the left part: when left-hand imagery movement occurs, as we can see in the bottom-left graph, the power of alpha at position C4 decreases significantly. In fact, the power of beta at position C4 concurrently increases considerably, although this is not illustrated in the video. Looking at the mapping on the top right of the video, we can see the right area turn toward the background color while the left area keeps its color. We could then ask ourselves: why does left-hand imagery movement affect the power of alpha and beta on the right? This can be explained by the crossover connections in the brain (as shown in Fig. 1).


Fig. 1 The crossover connections of left hand to right hemisphere, right hand to left hemisphere [1]
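To make this concrete, here is a minimal numpy sketch (not the project's code, and using made-up signals) of the band-power computation that motor imagery BCIs rely on: the alpha power at C4 drops when left-hand imagery occurs.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` within [f_lo, f_hi] Hz, estimated from the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum()

fs = 128  # Hz, an assumed sampling rate for illustration
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)

# Rest: strong 10 Hz alpha at C4; left-hand imagery: alpha suppressed.
c4_rest    = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
c4_imagery = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

alpha_rest    = band_power(c4_rest, fs, 8, 13)
alpha_imagery = band_power(c4_imagery, fs, 8, 13)
print(alpha_rest > alpha_imagery)  # True: alpha power drops during imagery
```

A real system would of course use recorded EEG and a proper spectral estimator, but the principle — compare alpha band power at C3 vs C4 — is the same.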

Many BCI systems have been built based on this approach, and much research has been conducted around the world to improve the performance and accuracy (i.e. the classification result) of motor imagery based BCI systems. However, non-invasive BCI systems based on motor imagery are still not reliable and robust enough to apply to multi-task control (i.e. three or more tasks) in daily-life apps. Recently, some research groups worldwide have developed invasive ones; still, these are not yet practical for real life because of the cost and risks of brain surgery.

SSVEP based BCI

Regarding the latter, the EEG signal used in this type of BCI is generated by the visual cortex (Fig. 2) when it reacts to visual stimulation (such as an LED flickering at a specific frequency).


Fig. 2 Visual cortex area [2]

The principle of this type of BCI is that when you are looking at a sign or LED flickering at a specific frequency, your brain will generate electrical activity at the same frequency as the visual stimulation, or at its harmonics [3]. For example, if you look at an 8 Hz visual stimulus (as I said before, it could be a flickering LED), your brain will generate steady-state potentials at 8, 16, and 24 Hz (Fig. 3).


Fig. 3 Response of brain to visual stimulation of 8 Hz [4]
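The harmonic pattern in Fig. 3 can be reproduced in a few lines of numpy; the simulated signal and its amplitudes below are my own illustration, not recorded EEG.

```python
import numpy as np

fs = 256                          # Hz, assumed sampling rate
t = np.arange(0, 4, 1.0 / fs)     # 4 s window -> 0.25 Hz frequency resolution
rng = np.random.default_rng(1)

# Simulated SSVEP response to an 8 Hz flicker: fundamental plus two harmonics.
eeg = (1.00 * np.sin(2 * np.pi * 8 * t)
       + 0.50 * np.sin(2 * np.pi * 16 * t)
       + 0.25 * np.sin(2 * np.pi * 24 * t)
       + 0.20 * rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
amp = np.abs(np.fft.rfft(eeg)) / t.size

# The three largest spectral peaks sit at the stimulus frequency and harmonics.
top3 = freqs[np.argsort(amp)[-3:]]
print(sorted(float(f) for f in top3))  # [8.0, 16.0, 24.0]
```

This peak-picking step is also the simplest way to tell which stimulus the user is gazing at: just check which candidate frequency carries the most power.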

Here is another video that shows an application of SSVEP based BCI in controlling a game.

We can see that, in an SSVEP based BCI system, more than one visual stimulus can be used to suit the design conditions. Each stimulus is assigned to a state or control command, which can be classified using popular classifiers (random forest, support vector machine, or neural network).
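As a toy illustration of that classification step, here is a nearest-centroid rule — a much simpler stand-in for the random forest, SVM, or neural network mentioned above — applied to made-up band-power features:

```python
import numpy as np

# Toy band-power features: power at each candidate flicker frequency
# (8, 10, 12 Hz), one row per training trial. All values are illustrative.
X_train = np.array([
    [5.0, 1.1, 0.9],   # trials while gazing at the 8 Hz stimulus
    [4.6, 1.0, 1.2],
    [1.0, 4.8, 1.1],   # 10 Hz stimulus
    [0.9, 5.2, 0.8],
    [1.2, 1.0, 4.9],   # 12 Hz stimulus
    [0.8, 1.3, 5.1],
])
y_train = np.array([0, 0, 1, 1, 2, 2])  # command index for each trial

# Nearest-centroid rule: average the trials of each class, then assign a
# new trial to the class whose centroid is closest in feature space.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1, 2)])

def classify(features):
    return int(np.argmin(np.linalg.norm(centroids - features, axis=1)))

print(classify(np.array([0.9, 5.0, 1.0])))  # -> 1 (the 10 Hz command)
```

With clean SSVEP features like these, even this trivial rule separates the classes; the heavier classifiers earn their keep on noisier, real recordings.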

The signal generated with the SSVEP approach has a high signal-to-noise ratio and relative immunity to artifacts (like EMG, ECG, or EOG). As a result, it can be used in real-life applications reliably and robustly. However, although it overcomes the disadvantage of the former approach (i.e. the lack of reliability and robustness in multi-task control), it can cause some annoyance to users because of long exposure to the flickering LED.

MY BCI-BASED IOTs APPLICATION

In my project, I proposed a simple approach for a BCI application by combining it with IoTs. The project was implemented to support handicapped people, or more exactly, to support families with disabled members in caring for their relatives who have disabilities. Fig. 4 shows my overall BCI system.


Fig. 4 Overall BCI-based IoTs application

In the project, I used EPOC, an inexpensive EEG device, to acquire the signal from the user's scalp. The EEG signal is then translated into control commands by the BCI module. These commands are then received by the IoTs module to help the handicapped person signal to their relatives (Fig. 5(a)) or send an SOS message (in case nobody is at home, Fig. 5(b)) to show that they are in urgent need of help.


Fig. 5 The use of our IoTs application
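The hand-off from the BCI module to the IoTs module can be sketched as below. All the function names, the command string, and the phone number are hypothetical placeholders, not the project's actual code:

```python
# Hypothetical sketch of the BCI-to-IoTs hand-off: the BCI module emits a
# command string, and the IoTs module either rings a local alert (when the
# relatives are at home) or sends an SOS message (when nobody is at home).

def ring_local_alert():
    return "buzzer activated"             # placeholder for driving real hardware

def send_sos_message(phone_number):
    return f"SOS sent to {phone_number}"  # placeholder for an SMS/network call

def handle_command(command, somebody_home, phone_number="+0000000000"):
    if command != "HELP":
        return "idle"
    if somebody_home:
        return ring_local_alert()
    return send_sos_message(phone_number)

print(handle_command("HELP", somebody_home=True))   # buzzer activated
print(handle_command("HELP", somebody_home=False))  # SOS sent to +0000000000
```

The point of the split is that the BCI side only has to produce one reliable command, and the IoTs side decides how to deliver the alert.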

To help users use our application easily, we designed a GUI, which is shown in Fig. 6.


Fig. 6 Graphical user interface

And then, we built a prototype for the experiment.


Fig. 7 The prototype of the IoTs module

WHY DID I CHOOSE MOTOR IMAGERY FOR MY PROJECT?

In my project, I decided to build the BCI system based on motor imagery for two main reasons. Firstly, as I said before, even though the SSVEP approach is more reliable for daily use, it could cause some eye problems if used day after day. Secondly, a BCI system based on motor imagery seems more natural than one based on SSVEP. The reason is that, before a stroke or injury, people normally use their arms to achieve their goals, for example, picking up a cup to make tea. After an accident, people can still think of moving their limbs, but now to signal for help from others instead.

FUTURE DEVELOPMENTS


Fig. 8 Future developments

For further development in the future, it would be more effective if we could embed emotion recognition capability. In addition, families with disabled members could coordinate with medical organizations to take better care of their relatives.

If you are interested in my project, please feel free to contact me. I will be glad to share ideas and receive feedback from you.

 

Thank you and see you,

Curious Chick

 

References

[1] https://www.joshuanava.biz/memories/the-double-brain.html

[2] https://www.slideshare.net/lasitham/brain-anatomy-12098255

[3] http://synaptitude.me/blog/a-quick-intro-to-ssvep-steady-state-visually-evoked-potential/

[4] https://www.intechopen.com/books/advances-in-robot-navigation/brain-actuated-control-of-robot-navigation

 


[PROJ 5] BRAIN COMPUTER INTERFACE

Many people who watch a lot of sci-fi films may be familiar with the idea of using the brain to control devices or even people. In fact, this has been partly achieved thanks to developments in neuroscience and computational methods. Of course, scientists have not yet pioneered any technology that can be used to control other people. But at least the Brain-Computer Interface (BCI) – a recently developed technology relating to the classification of brain waves – helps us control some electrical and electronic devices, which brings significant benefits to people, especially the disabled. For example, a disabled person can use BCI to activate their wheelchair or switch household devices such as the TV or air conditioner without any help from others. BCI technology can also be used for educational purposes, or for analyzing customers' decision making in fields such as sales and marketing, to name but a few. From all this, it can be said that BCI has become one of the hottest disciplines in the world, along with machine learning and the internet of things.

Limitations notwithstanding, BCI is still an awesome technology, and I believe that through it we can make superpowers real or become superheroes in the near future. With this belief, I tried to convince my professor to let me build a non-invasive BCI system for my thesis, and after a while of deliberation, he agreed. The result is shown in the video below.

 

From an old story,

Curious Chick

[PROJ 4] LINE FOLLOWING ROBOT

This is our project from the 8th semester, for the subject “Mechatronic system design”.

Honestly, this was the first time the mobile robot completed the line, so it is kind of slow (it took about 40 seconds).

However, our final result was nearly 15 seconds, second only to the top of my class (approximately 14 seconds).

From an old story,

Curious Chick

[PROJ 3] KINECT SCANNER

When I was in my third year at university, our team was assigned a small project that required designing a Kinect scanner to build 3D models of objects. To simplify the task, we used the Kinect SDK beta software to construct the 3D model. For the rest, we built the electrical and electronic system, designed the mechanical system, built a prototype, and controlled it.

To control the prototype, we decided to use open-loop control, as you can see in Fig. 1.


Fig. 1 Open loop control system

The prototype is shown in Fig. 2.


Fig. 2 The prototype of Kinect scanner

Honestly, the design was intended only for objects weighing between 1 and 2.5 kg. We used a box for testing, and the results are shown in Fig. 3 and Fig. 4.


Fig. 3 3D model of the box

Exporting an STL file for the 3D printer.


Fig. 4 STL file of the box

If you enjoy the project, feel free to contact me for more details.

 

From an old story,

Curious Chick

[PROJ 2] AGRICULTURE IOT APPLICATIONS

Five years ago, a workshop on the internet of things (IoT) was held at my university. By chance, I – a freshman at that time – encountered the workshop banner while trying not to be late for a Calculus class. With intense curiosity, I stopped by the workshop even though I might be late, looked into the meeting room, and saw no more than 20 people in one of the largest rooms at my university. Um, … That is how IoT began at my university.

Five years later, the number of IoT applications has soared, and IoT has become a modern trend for many theses and projects at universities in my country. A lot of workshops were held, and even Prof. Timothy Chou from Stanford University opened a workshop at my university to introduce some IoT application concepts and … to sell his new book (well, I got one, but it did not seem useful to me, so I gave it to my roommate, who had a plan to start up with IoT). The excitement about IoT has not been limited to campus: many businesses and entrepreneurs have focused their research on IoT applications, especially IoT applications for agriculture, which were expected to be one of the most fertile lands for making money in an agricultural country like mine. Nevertheless, only a small minority of businesses and entrepreneurs have succeeded in this new discipline, while the rest were eliminated from the game for two main reasons: lack of patience and lack of funds.

To be honest, when I was a junior, my professor assigned me and my friend to an IoT project. Our task was to design an IoT device that helps farmers track parameters such as temperature and humidity in the garden and control a pump to water the plants. At that time, my friend and I chose Blynk, a platform with iOS and Android apps, to simplify the smartphone-app side of the problem so that we could focus on hardware development. In my opinion, Blynk really is an amazing app, with lots of basic and advanced widgets that let users build their own applications quickly without any app-development skills. A simple Blynk app used to control the states of three LEDs is shown in the video below. However, there were many graphical and functional limits when we used Blynk to develop an IoT application, so it was just a temporary plan.

After one week, we completed the design of the device and made a prototype (see Fig. 1) for testing. It seemed to work well over a period of one month.


Fig. 1 The first three prototypes of the agriculture IoT device

However, it was inconvenient for users to have to change the source code whenever they changed the SSID or password of their Wi-Fi network. Thus, my professor suggested we add a function that allows people to enter their SSID and Wi-Fi password directly, without changing the source code.

The task became more difficult because if we used a 4×4 keypad to enter the Wi-Fi information, the device would become larger. Meanwhile, my friend was stuck on his thesis, so I had to do this project by myself. Three weeks later, I had an idea for the keypad design: using potentiometers to enter ASCII characters instead of a keypad. The potentiometer keypad seemed to work well when I used it to enter the Wi-Fi information (you can see that in the video below).

After two weeks, I completed the design and made a prototype, as you can see in Fig. 2. I used three potentiometers to represent the ASCII characters: the first for the digits 0 to 9, the second for the alphabet, and the third for special characters such as ‘*’, ‘#’, etc. I also used an LCD to display which character was chosen and added some buttons to let users enter their information.


Fig. 2 The prototype of agriculture device version 2
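The idea of the potentiometer keypad can be sketched as follows; the 10-bit ADC range and the exact character sets are assumptions for illustration, not the device's firmware:

```python
# Sketch of the potentiometer keypad: an ADC reading (assumed 10-bit, 0-1023)
# is divided into equal bands, one band per character, so turning the knob
# sweeps through the character set shown on the LCD.

DIGITS   = "0123456789"
LETTERS  = "abcdefghijklmnopqrstuvwxyz"
SPECIALS = "*#@._-"   # illustrative; the real set may differ

def adc_to_char(adc_value, charset, adc_max=1023):
    """Map an ADC reading onto one character of `charset`."""
    index = adc_value * len(charset) // (adc_max + 1)
    return charset[min(index, len(charset) - 1)]

print(adc_to_char(0, DIGITS))     # '0'  (knob turned fully one way)
print(adc_to_char(1023, DIGITS))  # '9'  (knob turned fully the other way)
print(adc_to_char(512, LETTERS))  # a letter near the middle of the alphabet
```

One potentiometer per character set (digits, letters, specials) keeps each band wide enough to select comfortably by hand.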

Honestly, the design was still not optimal, but it proved time and time again that it performed better than the old version.

Anyway, this project was unsuccessful because of a lack of funds. Meanwhile, a large corporation in my country was also developing its own IoT devices for agriculture. With a large, professional team, they released their products after several months of research. Their products were highly appreciated by the local council, and as for our prototypes … um … they died quietly in a storehouse.

 

From a sad story,

Curious Chick

[PROJ 1] MAKING A SNAKE GAME WITH A MATRIX LED MODULE

Have you played snake game at least once in your life?

OK, even though the snake game may not be popular today, it is still one of the greatest games made by Nokia. So, how cool would it be if you could make your own snake game with some simple electronic equipment and basic knowledge of electrical engineering? For that reason, I made a snake game with a matrix LED module when I studied the micro-controller subject in my third year at university.

Let me introduce a bit of information about this project.

The first thing I want to show you is the devices I used to build the project.

  • Tiva C LaunchPad (it sounds unreasonable to use an ARM platform just to implement a small project; OK, the reason was to learn this powerful weapon).
  • A keypad (I know it was not the optimal option; some people may use a joystick to make the project cooler).
  • A matrix LED module, used as a screen to display the positions of the snake and the fruit (in this project, I used an 8×8 module; some people may want to use a larger one).
  • A power source.

Once all the devices were prepared, the next step was the strategy I used to make the snake move.


Fig. 1. The imagined coordinate system on the matrix LED module

Imagine we have a 2D Cartesian coordinate system as in Fig. 1, and every LED in the matrix module is considered a unit in this coordinate system.

  • For the snake:
    • I initialized two arrays (an x-array and a y-array) to save the x and y coordinates of the snake. Here, (x[0], y[0]) is the position of the snake’s head on the screen, and the rest of the elements in these arrays are the positions of the snake’s body parts.
    • In the project, my purpose was to control the snake’s head with the keypad; the snake’s body parts were programmed to track its head.
    • In addition, a variable was declared to save the length of the snake. It was limited to 15 points (i.e. the maximum length of the snake was 15 points on the screen).
  • For the fruit: I made two other variables to save the position of the fruit. These two variables get their values from a random function supported by the TI library.
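The movement strategy above can be sketched in a few lines (the wrap-around at the edges is my assumption; in the real game the snake might die at a wall instead):

```python
# Minimal sketch of the movement logic: the head moves one unit in the
# current direction, and each body segment takes the position of the
# segment in front of it. The 8x8 grid matches the LED module size.

WIDTH = HEIGHT = 8
MAX_LEN = 15  # the snake's length is capped at 15 points on the screen

def step(x, y, dx, dy):
    """Advance the snake one tick. x, y are lists; index 0 is the head."""
    # Shift the body from the tail forward: each part tracks the one ahead.
    for i in range(len(x) - 1, 0, -1):
        x[i], y[i] = x[i - 1], y[i - 1]
    # Move the head, wrapping at the edges of the LED matrix.
    x[0] = (x[0] + dx) % WIDTH
    y[0] = (y[0] + dy) % HEIGHT

# A 3-unit snake heading right along row 0.
x, y = [2, 1, 0], [0, 0, 0]
step(x, y, 1, 0)
print(x, y)  # [3, 2, 1] [0, 0, 0]
```

The same shift-from-the-tail trick is what makes the body “follow” the head without storing any extra state.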

And these are my results.

I hope you enjoy it,

Curious Chick