You can use the Cortex API to develop a brain–computer interface (BCI), using facial expressions, mental commands, or both.
A profile is a persistent object that stores training data for the facial expression and mental command detections. A profile belongs to a user and is synchronized to the EMOTIV cloud.
The training works the same way for the mental command detection and the facial expression detection. However, the two detections do not use the same actions, events, or controls, so you should call getDetectionInfo to find out which actions, controls, and events are available for each detection.
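As a minimal sketch, a getDetectionInfo call is a JSON-RPC 2.0 message sent to Cortex over its WebSocket endpoint. The helper below only builds the message; the transport and the response handling are left out, and the helper name and request id are our own choices, not part of the API:

```python
import json

# Hypothetical helper: build the JSON-RPC request for getDetectionInfo.
# The method name and the "detection" parameter follow the Cortex API;
# the request id is arbitrary and is echoed back in the response.
def get_detection_info_request(detection, request_id=1):
    """detection is "mentalCommand" or "facialExpression"."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "getDetectionInfo",
        "params": {"detection": detection},
    })

# The result of this call lists the actions, controls, and events
# that are valid for the chosen detection.
print(get_detection_info_request("mentalCommand"))
```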
- 1. You start the training by calling the training request with the control "start".
- 2. On the "sys" stream, you receive the event "started".
- 3. After a few seconds, you receive one of these two events:
  - 1. Event "succeeded": the training is a success. Now you must accept it or reject it.
  - 2. Event "failed": the data collected during the training is of poor quality, so you must start over from the beginning.
- 4. If the training succeeded, you send the control "accept" to add the training to the profile, or "reject" to discard it.
- 5. Cortex sends the event "completed" to confirm that the training was successfully completed.
After step 2, but before step 3, you can send the control "reset" to cancel the training.
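The steps above can be sketched in code. The "training" method and its control values ("start", "accept", "reject", "reset") follow the Cortex API, but the helper names are ours, and the Cortex token and session id are assumed to come from earlier authorization and session setup, which is not shown:

```python
import json

# Hypothetical sketch of one training round.
def training_request(token, session_id, detection, action, status, request_id=1):
    # "status" carries the control: "start", "accept", "reject", or "reset".
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "training",
        "params": {
            "cortexToken": token,
            "session": session_id,
            "detection": detection,
            "action": action,
            "status": status,
        },
    })

def next_control(sys_event):
    """Map a "sys" stream event to the control to send next (steps 3-4)."""
    if sys_event == "succeeded":
        return "accept"   # or "reject" to discard this training
    if sys_event == "failed":
        return "start"    # poor quality data: start over from the beginning
    return None           # "started" / "completed": nothing to send
```

A client would send `training_request(..., status="start")`, then feed each event received on the "sys" stream into `next_control` until it gets "completed".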
The following chart illustrates the flow of BCI training, with the corresponding request for each step:
The following chart illustrates the live streaming of mental command data:
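To receive live mental command data, a client subscribes to the "com" data stream. As a hedged sketch, the "subscribe" method and the "com" stream name follow the Cortex API, the token and session id are assumed from earlier setup, and the sample message below only mimics the shape of a "com" data sample (the detected action and its power):

```python
import json

# Hypothetical helper: build the subscription request for the "com" stream.
def subscribe_request(token, session_id, request_id=1):
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "subscribe",
        "params": {
            "cortexToken": token,
            "session": session_id,
            "streams": ["com"],
        },
    })

def parse_com_sample(message):
    """Extract the (action, power) pair from a "com" stream sample."""
    data = json.loads(message)
    action, power = data["com"]
    return action, power

# Example "com" sample (shape assumed for illustration):
sample = '{"com": ["push", 0.65], "sid": "abc", "time": 123.4}'
print(parse_com_sample(sample))  # ('push', 0.65)
```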