
A new approach

4/23/2018

 
An AE-LSTM-FNN architecture for a non-invasive, motor imagery Brain Computer Interface system.
1. Why Non-invasive BCI: Compared to invasive BCI, a non-invasive BCI receives brain signals of much lower quality. Although the signals contain more noise, non-invasive BCI still holds advantages over invasive BCI. It is safer, because it avoids the infections that implanting sensors can cause. In addition, the neurons near sensors implanted inside the brain degrade over time, eventually rendering the sensors ineffective; this does not happen when the subject uses a non-invasive BCI. Finally, because the accuracy of non-invasive BCI is currently low, the system can still improve as classification algorithms become more and more advanced.

2. Why motor imagery: Unlike P300 and SSVEP, which require constant attention to external stimuli, motor imagery is independent of any stimulation and can be operated by the user at will. However, the motor imagery signal does have some disadvantages. Training takes a long time for the user, possibly requiring several weeks or months, because the brain has to adapt to the Brain Computer Interface and produce more distinguishable patterns. A more advanced algorithm can reduce the training time.

Architecture of the Algorithm

AutoEncoder:

The autoencoder is an algorithm that can reduce the noise and the dimensionality of a sample. It is an unsupervised learning algorithm, meaning it does not require labels to learn the patterns in a dataset. In the case of a Brain Computer Interface, the autoencoder does not need to know whether the user intended to move the right or the left arm in order to learn patterns from their brain activity. An autoencoder consists of two parts, an encoder and a decoder. The encoder transforms the original data into a compressed representation with much lower dimensionality. The decoder then transforms the compressed representation back to the original dimensionality [10]. The autoencoder learns the transformation weights that minimize the difference between the original input and the reconstructed input. Because the compressed representation has fewer dimensions, the encoding process is lossy. The algorithm therefore has to learn weights that preserve the information of the original input in fewer dimensions, so that the compressed representation contains more of the useful information and less of the noise.
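To make the idea concrete, here is a minimal sketch of such an autoencoder in PyTorch. It assumes each EEG trial has been flattened to 6,400 values (for example, 64 channels x 100 time samples) and compressed to a 128-dimensional code; these sizes, and the layer widths, are illustrative assumptions rather than values from this project.

import torch
import torch.nn as nn

class EEGAutoencoder(nn.Module):
    def __init__(self, input_dim=6400, code_dim=128):
        super().__init__()
        # Encoder: compress a flattened EEG trial into a low-dimensional code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 1024), nn.ReLU(),
            nn.Linear(1024, code_dim),
        )
        # Decoder: reconstruct the trial from the code.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 1024), nn.ReLU(),
            nn.Linear(1024, input_dim),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Unsupervised training step: minimize reconstruction error, no labels needed.
model = EEGAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 6400)          # a batch of 32 flattened EEG trials (dummy data)
recon, code = model(x)
loss = nn.MSELoss()(recon, x)      # difference between input and reconstruction
loss.backward()
optimizer.step()

After training, only the encoder is kept: its 128-dimensional codes, rather than the raw EEG samples, are passed on to the next stage.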
Long Short-Term Memory:
Long Short-Term Memory (LSTM) is a deep learning architecture that can learn patterns from time series or other sequential data. LSTM has been applied to voice recognition tasks in the past, where classifiers achieved higher accuracy than most classical algorithms. Because the LSTM learns the pattern of brain activity across all time steps, applying it to classify motor imagery EEG may improve the accuracy of the algorithm.
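Below is a minimal sketch of how the LSTM and a feed-forward (FNN) output stage could be wired together in PyTorch, assuming the autoencoder above is applied to each time window so that a trial becomes a sequence of 128-dimensional codes (here 50 windows), and the task is binary left- versus right-hand motor imagery. The shapes and layer sizes are assumptions for illustration, not the project's actual configuration.

import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, code_dim=128, hidden_dim=64, n_classes=2):
        super().__init__()
        # LSTM reads the sequence of compressed EEG windows in time order.
        self.lstm = nn.LSTM(code_dim, hidden_dim, batch_first=True)
        # Small feed-forward network maps the last hidden state to class scores.
        self.fnn = nn.Sequential(
            nn.Linear(hidden_dim, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, seq):              # seq: (batch, time, code_dim)
        _, (h_n, _) = self.lstm(seq)     # h_n: (1, batch, hidden_dim)
        return self.fnn(h_n[-1])         # logits, e.g. left vs. right hand

model = LSTMClassifier()
seq = torch.randn(32, 50, 128)           # 32 trials, 50 encoded time windows each
labels = torch.randint(0, 2, (32,))      # dummy left/right labels
loss = nn.CrossEntropyLoss()(model(seq), labels)
loss.backward()

Using only the final hidden state keeps the classifier simple: the LSTM summarizes the whole trial into one vector, and the FNN turns that summary into a left/right prediction.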


    Author

    Jack Lin 
