BCI

You can use the Cortex API to develop a brain–computer interface (BCI), using facial expressions, mental commands, or both.

Before you start programming your own BCI, it is strongly recommended that you use EmotivBCI to get familiar with the main BCI features of Cortex. You should also check out the EmotivBCI Node-RED Toolbox.

To use the BCI features with an EPOC Flex headset, you must use a special EEG mapping. Please see the EPOC Flex documentation for details.

To get the results of the facial expression detection and the mental command detection, you must subscribe to the data streams "fac" and "com" respectively.
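A subscription is a single JSON-RPC request sent over the WebSocket connection to Cortex. The request id, token, and session id below are placeholders, so treat this as a minimal sketch rather than a copy-paste call:

    {
      "id": 10,
      "jsonrpc": "2.0",
      "method": "subscribe",
      "params": {
        "cortexToken": "<your cortex token>",
        "session": "<your session id>",
        "streams": ["com", "fac"]
      }
    }

The same request, with "sys" in the streams array, is used later on this page to receive the training events.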

To use the mental command detection, you must load a training profile that contains training data for at least one action. For the facial expression detection, the training is optional.
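Loading a profile is done with the setupProfile method and the status "load". The headset id and profile name below are placeholders; a minimal sketch:

    {
      "id": 11,
      "jsonrpc": "2.0",
      "method": "setupProfile",
      "params": {
        "cortexToken": "<your cortex token>",
        "headset": "<your headset id>",
        "profile": "<profile name>",
        "status": "load"
      }
    }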

Profile

A profile is a persistent object that stores training data for the facial expression and mental command detections. A profile belongs to a user and is synchronized to the EMOTIV cloud.

You can use queryProfile to list the profiles of the current user. Use setupProfile to manage the profiles, and also to load or unload a profile for a headset. You can use training to add training data to a profile.
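For example, listing the profiles of the current user only requires the Cortex token (placeholder below); a minimal sketch:

    {
      "id": 12,
      "jsonrpc": "2.0",
      "method": "queryProfile",
      "params": {
        "cortexToken": "<your cortex token>"
      }
    }

The response lists the existing profiles; a profile name from that list is what you then pass to setupProfile.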

Before version 3.6.5, a profile could be loaded for both EPOC/Flex and Insight headsets. Since version 3.6.5, a profile can only be loaded for EPOC/Flex or for Insight, because their EEG channels are different. If you upgrade from a version older than 3.6.5, your existing profiles become read-only. See Readonly training profile.

Training workflow

The training works the same way for the mental command detection and the facial expression detection. However, they don't use the same actions, events, or controls. So you should call getDetectionInfo to know which actions, controls, and events you can use for each detection.
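For example, to get the actions, controls, and events of the mental command detection (use "facialExpression" as the detection value for the other detection); a minimal sketch:

    {
      "id": 13,
      "jsonrpc": "2.0",
      "method": "getDetectionInfo",
      "params": {
        "detection": "mentalCommand"
      }
    }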

Before you start a training, you must subscribe to the data stream "sys" to receive the training events from Cortex. Then you can follow these steps:

  1. Start the training by calling training with the action you want to train and the control "start" (see the example request after these steps).

  2. On the "sys" stream, you receive the event "started".

  3. After a few seconds, you receive one of these two events:

    1. Event "succeeded": the training is a success. Now you must accept it or reject it.

    2. Event "failed": the data collected during the training is of poor quality, so you must start over from the beginning.

  4. Call training with the control "accept" to add the training to the profile. Or you can use the control "reject" to reject this training, and then you must start over from the beginning.

  5. Cortex sends the event "completed" to confirm that the training was successfully completed.

  6. You should save the profile to persist this training. If you unload the profile without saving, then the training is lost.

After step 2, but before step 3, you can send the control "reset" to cancel the training.
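The training requests used in steps 1 and 4 only differ by their "status" value ("start", "accept", "reject", or "reset"). The token, session id, and action name below are placeholders; a minimal sketch of the "start" request for a mental command:

    {
      "id": 14,
      "jsonrpc": "2.0",
      "method": "training",
      "params": {
        "cortexToken": "<your cortex token>",
        "session": "<your session id>",
        "detection": "mentalCommand",
        "action": "push",
        "status": "start"
      }
    }

For step 4, send the same request with "status" set to "accept" or "reject". For a facial expression training, set "detection" to "facialExpression" and use an action returned by getDetectionInfo.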

The following chart illustrates the flow of BCI training with the corresponding request for each step:

Getting live actions

The following chart illustrates the live streaming of Mental Commands data:
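Once you are subscribed to the "com" stream with a trained profile loaded, Cortex pushes data samples containing the detected action and its power. The exact format is described in Data Subscription and the Data sample object page; the sketch below assumes the usual [action, power] layout and uses made-up values:

    {
      "com": ["push", 0.68],
      "sid": "<your session id>",
      "time": 1234567890.1234
    }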
