Heliyon. 2024 Feb 20;10(5):e26255. doi: 10.1016/j.heliyon.2024.e26255

Augmented-reality based brain-computer interface of robot control

Junying Hu 1
PMCID: PMC10915352  PMID: 38449664

Abstract

Brain-computer interface (BCI) is a new approach to human-computer interaction: it controls external devices directly with the brain, without words or body movements. Brain-controlled robots are a major research area in the field of BCI, organically integrating BCI with robotic systems to achieve safe and effective real-time control of robots using the user's electroencephalogram (EEG). Currently, there are two control methods for brain-controlled robots: direct control and shared control. Direct brain control has shortcomings, namely low control efficiency and rapid user fatigue. Shared control can effectively improve the control of brain-controlled robots and reduce the cognitive load on the user, making it the main control method for brain-controlled robots. The brain-computer collaborative control system based on augmented reality (AR) technology studied in this paper is a human-computer shared control method. In the experimental comparison of virtual reality (VR) and AR systems, this paper machines polylines through a series of control vertices with specified coordinates, uses the relative distance measured between each point and the starting point as the relative coordinates, and calculates the operational errors of the two types of systems. In the system error of machining polylines, when the relative coordinates are (10, 20), (40, 50), and (70, 80), the error values of the VR system are 0.17 mm, 0.36 mm, and 0.55 mm, respectively, while the error values of the AR system are 0.11 mm, 0.24 mm, and 0.41 mm, respectively. These results illustrate the value of AR systems for the study of brain-computer collaborative control of robots.

Keywords: Brain-controlled mobile robot, Augmented reality, Brain computer interface, Brain computer collaborative control

1. Introduction

The brain normally transmits messages through the peripheral nerves and then the muscle tissue to control external apparatus. BCI technology uses EEG acquisition devices together with computer hardware and software to acquire and process EEG, thus achieving direct interaction with and control of the external environment. BCI can provide a better understanding of the brain's cognition, information exchange and control patterns, and offers a new approach to “mind control”. The study of BCI therefore has both scientific value and practical application. BCI was originally designed to help people with disabilities who have lost mobility and expression, but who can think normally, to interact with the outside world. People with mobility impairments think like anyone else; they have only lost control of their limbs and muscles. BCI technology can provide more ways for people with disabilities to interact with the outside world, such as intelligent wheelchairs, rehabilitation robots and text entry systems.

BCI, as a form of human-computer interaction, can control external devices directly through the brain. To establish a direct communication channel between the brain and external devices, Gao, Xiaorong reviewed various BCI paradigms and proposed an evolutionary model of generalized BCI technology that consists of three stages: interface, interaction, and intelligence. In addition, he highlighted the challenges, opportunities and future prospects of new BCI technology development [1]. Katona, Jozsef's goal was to develop a BCI system to observe the level of vigilance computed through ASIC (Application Specific Integrated Circuit) technology and to evaluate the output with a learning-efficiency test applied in cognitive neuroscience [2]. Brain-controlled robots have attracted much attention as an important application of BCI. To advance the development of brain-controlled robots, Li, Hongqi developed a powerful sliding-mode nonlinear predictive controller for brain-controlled robots, built on the kinematics and dynamics theory of mobile robots through cascaded predictive controllers and smooth sliding-mode controllers. This work provided an enabling design [3]. However, the efficiency of existing methods for studying brain-controlled robots is still limited and needs to be improved.

Currently, the application of virtual reality technology for collaborative control of brain-controlled mobile robots has become increasingly common. To promote research on brain-controlled robots and multi-robot systems, Yang, Zhenge constructed a complete brain-controlled multi-robot physical system and tested it through practical human-in-the-loop experiments. His experimental results showed that the system can track the user's direction, velocity, and formation-control intentions while ensuring the safety of multiple robots [4]. However, the detection of single-trial event-related potentials from EEG signals remains a challenge. Brennan, Chris presented a cross-session EEG dataset based on rapid serial visual presentation (RSVP) for collaborative BCI systems, which can be used to develop more efficient algorithms and thus improve the performance and utility of RSVP-based collaborative BCI systems [5]. Cruz, Aniana proposed a P300-based self-paced BCI combining dynamic time-window commands and a collaborative controller. Dynamic time-window commands allow balancing the reliability and speed of the BCI. The collaborative controller combines user intent and navigation information, making it possible to navigate in complex environments and improving the overall reliability of the system. The validity of the proposed method was demonstrated by quantitative results and subjective questionnaire assessments [6]. Chen, Xiaogang combined augmented reality, computer vision, and a steady-state visual evoked potential BCI to design and implement a robotic-arm control system. According to the online results of 12 participants, the average classification accuracy of the system is 93.96 ± 5.05%, which is expected to further promote the practicality of BCI-controlled robots [7]. Angrisani, Leopoldo proposed a wearable monitoring system for inspection under the Industry 4.0 framework. This instrument integrates augmented reality (AR) glasses with a non-invasive single-channel brain-computer interface (BCI), replacing the classic input interface of the AR platform. A case study on inspection with the integrated AR-BCI equipment shows an accuracy of approximately 80% under laboratory conditions [8]. However, constrained by traditional thinking and definitions, the two technologies have not been deeply integrated, and their combined advantages remain underexploited.

This paper focuses on the problem of shared control between the human brain and machine intelligence in brain-controlled robots, and proposes an AR-based collaborative brain-computer control scheme. On this basis, an AR model was established and used to provide a comprehensive description of the entire working condition of the BCI, which facilitated the analysis and improvement of the BCI and thus improved the adaptability of the BCI system. Evolutionary decision making was also performed using this model, which led to brain-machine cooperative control. An AR-based BCI cooperative control experimental platform was then constructed. Finally, a complete BCI network model was built using AR, and offline and online experiments were conducted to verify the effectiveness of the AR model proposed in this paper [9,10]. Compared with existing research, the AR-based robot brain-computer collaborative control method in this paper achieves smaller errors. In practical applications, it can effectively improve the human-machine interaction experience, achieve human-machine collaborative control, and improve the intelligence and autonomy of robots.

2. AR-based brain-computer collaborative technology

2.1. Virtual reality

Virtual reality technology refers to the creation of a virtual three-dimensional space through computer technology, in which users can perceive everything around them through visual, auditory, tactile and other senses [11]. This paper presents a new kind of control and management for mobile robots with 3D modeling based on virtual reality technology. The basic idea is that everything the operator does in the virtual world is “projected” into the real world by robots or other automated devices. It provides a scalable framework for convergence of the two main domains of virtual reality and robotics. This approach achieves an unprecedented level of “intuitive” operation, where the operator can move freely in the virtual environment as if in a real one, allowing a higher level of human-computer interaction. The three-layer structure of the projected virtual-reality-based mobile robot model is shown in Fig. 1.

Fig. 1. Structure of the virtual reality model.

As shown in Fig. 1, the first level is the “user interface”. The “gesture detection” module receives commands from the user for the virtual robot, and the “gesture conversion and command interpretation” module decomposes the user commands into corresponding independent control events. At the second level, “motion planning” is the core part of the process. The main role of this module is to break down the control tasks from the first layer into specific commands, such as making the robot go left, right, forward, backward, or stop. The third level mainly includes modules for actual control, drive, multi-sensor fusion, and virtual robots. In addition, the “real-time path planning and collision detection” module, based on local intelligence, realizes the robot's path planning and automatic obstacle avoidance within a certain area.
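As a rough illustration of this decomposition, the second-layer “motion planning” step can be sketched as a lookup from a high-level task to the primitive commands listed above. The task names and the lookup table here are hypothetical, not taken from the paper:

```python
# Hypothetical sketch of the second-layer "motion planning" module:
# it decomposes a first-layer control task into the primitive commands
# (left, right, forward, backward, stop) that the third layer executes.
PRIMITIVES = {"left", "right", "forward", "backward", "stop"}

def plan_motion(task):
    """Map a high-level task name to a list of primitive commands."""
    table = {
        "advance": ["forward"],
        "turn_around": ["left", "left"],
        "halt": ["stop"],
    }
    commands = table.get(task, ["stop"])  # unknown tasks default to a safe stop
    assert all(c in PRIMITIVES for c in commands)
    return commands
```

In a real system the table would be replaced by a planner that also consults sensor input, but the interface of the layer (task in, primitive command sequence out) stays the same.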

2.2. General introduction of AR brain-computer interface system

The system is implemented in Python in a multi-threaded way to ensure the stability and speed of transmission [12]. It performs flickering visual stimulation, real-time transmission, target tracking, EEG signal processing and data interaction in separate threads. The general framework of the AR-BCI system is shown in Fig. 2. In this study, the terminal is a single mobile robot, which displays a real-time video stream and a visual interface marked by dynamic visual stimuli.

Fig. 2. Overall architecture of the AR-BCI system.

2.3. Mobile robots

To test the feasibility of the AR-based brain-computer collaborative control system, this paper combines the BCI system, augmented reality technology and a robot system to build a complete AR experimental platform. The AR model is first simulated and verified. The paper then provides offline and feedback-based training and summarizes the entire experimental process. On this basis, the AR-based brain-computer collaborative control scheme is studied, and its feasibility is analyzed and demonstrated. This paper describes in detail how the BCI system, the AmigoBot mobile robot testbed and the augmented reality technology work, and combines the three into a complete BCI collaborative control system. Fig. 3 shows the AR-based brain-computer collaborative control system.

Fig. 3. Structure of the AR-based brain-computer collaborative control system.

As can be seen from Fig. 3, this system comprises the BCI system, augmented reality technology, and the robot system. The BCI system acquires and processes the EEG signals, then encodes the processed signals and sends them to the AR module.

The user first imagines movements of the left and right hands to generate an initial EEG signal, which is acquired through an electrode cap and amplified by an EEG amplifier. After pre-processing, feature extraction, feature classification and signal transformation, the EEG control commands can be obtained and transmitted to the augmented reality module in the form of shared files. The AR module is an AR-based evolutionary decision system. Using the mathematical model of AR combined with its evolutionary algorithm, Petri-net evolutionary decision making can be implemented on the Matlab testbed. The two inputs to the AR module are the brain control commands and the feedback signals of the action states. The AR evolutionary program evolves the input signals, makes decisions and transmits the final control commands to the robot system according to the control principles of collaborative brain-computer control [13,14].
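The paper states that decoded control commands reach the AR side as shared files. A minimal sketch of such an exchange is shown below; the JSON file format is our assumption, since the paper does not specify how the shared file is encoded:

```python
import json

def send_command(path, command):
    # BCI side: write the decoded control command to the shared file.
    with open(path, "w") as f:
        json.dump({"command": command}, f)

def receive_command(path):
    # AR/robot side: read the latest command from the shared file.
    with open(path) as f:
        return json.load(f)["command"]
```

A shared file is the simplest possible inter-process channel; a production system would typically add file locking or switch to a socket to avoid reading a half-written command.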

In this paper, the AmigoBot robot from MobileRobots is used as the research object, and automatic recognition and operation of the AR module are implemented on the Visual C++ platform. The control commands include stop, forward, left, right, obstacle avoidance, etc.; the robot executes the commands, and the execution is then checked. Upon completion, the robot immediately sends feedback to the AR system. At the same time, the user can conceive the next action based on the result of the robot's operation.

2.4. System design of brain-controlled robot

(1) System requirement analysis

When researching and designing robots, they must be tested and demonstrated in a real-world environment [15]. In practice, however, this not only consumes considerable human and financial resources but also requires long debugging time, and it is difficult to achieve the expected results for the time and effort spent. Moreover, demonstrations in different environments are limited by venue and location. Therefore, it is necessary to develop virtual robots that can be demonstrated in different virtual scenarios while also communicating and interacting with the real robots.

Currently, there are many virtual robot display systems based on virtual reality technology, but using augmented reality technology to simulate real scenes raises the problems of controlling scene accuracy and development time. This paper divides the requirements of the system by type into functional requirements and performance requirements, and elaborates on these two aspects.

(2) Functional requirement analysis

According to the subject background and practical application needs, the functional requirements of the system are divided into basic functions, advanced functions and auxiliary functions, of which the specific function points are shown in Table 1.

Table 1.

Functional requirements of the system.

Serial Number | Functional Category | Function Points
1 | Basic Functions | Interface design; fusion of virtuality and reality; effect design; roaming effect
2 | Advanced Functions | Brightness setting; shadow compositing; dynamic lighting adjustment; user interaction
3 | Auxiliary Functions | Virtual robot attribute adjustment; special rendering effects; synchronized movement with the real robot

As can be seen from Table 1, the basic functional requirements of the system are the prerequisite for its operation: camera calibration is used to determine the pose of the camera so as to fuse the virtual and real scenes. The functional requirement for special-effects design is the loading of a physics engine in the virtual environment. The advanced functions of the system mainly include lighting coordination and user interaction (including the roaming of the virtual robot and manual operation of the roaming path). The auxiliary functions allow the user to adjust all attributes of the virtual robot (such as movement speed and volume).

(3) Performance requirements analysis

The system must ensure real-time performance and stability while satisfying the various functions, in order to improve the user experience.

Frame rate: The image data captured by the camera is sent to the virtual scene via network communication to be instantly updated as the background mapping. If the frame rate is too low, the system may run poorly. The background mapping of the virtual scene must be updated at 20 frames per second or more to ensure a good user experience.

Real-time performance: In the actual scene, when the camera position and angle change, the system must update its position and angle in time and adjust to the current image. Real-time performance also covers lighting and shadows: the lighting in the virtual environment must be adjustable in real time according to the brightness of the current image.

Accuracy: Like real-time performance, accuracy has two parts. One is to ensure the perspective relationship between the virtual scene and the real scene; the other is to ensure the integration of virtual objects with the real world. For lighting adjustment, the lighting of the virtual environment and the real lighting should be combined to achieve consistency.

(4) System architecture design

This paper presents the basic structure of a brain-controlled robot with brainwave signal acquisition, signal processing and robotics at its core. The EEG signal is acquired mainly with a dedicated device that records the user's EEG [16]. In this paper, the EEG signal is first pre-processed using a band-pass filtering technique, and features are then extracted using the common spatial pattern technique. The EEG signals are classified using linear discriminant analysis (LDA), and the classified EEG signals are transformed into recognizable control commands and transmitted to the robot via a shared document. Meanwhile, the user can determine what to do next based on the feedback from the robot, as shown in Fig. 4.

Fig. 4. BCI system block diagram.

It can be seen from Fig. 4 that EEG acquisition obtains the user's brain waves through specific devices. There are two types of acquisition devices. The implanted type places electrodes inside the user's skull; in this way, low-noise, low-loss EEG signals can be obtained, but it increases surgical cost and may cause unknown harm to the user. The non-implanted type uses electrodes at specific positions on the scalp over the cerebral cortex to collect the electrical waves, which are then amplified by amplifiers. This method is simple, safe and convenient for BCI applications.

Pre-processing EEG signals can effectively eliminate noise. At present, there are three common preprocessing technologies: low-pass and band-pass filters; spatial filters, such as the common average reference; and spatio-temporal filters, such as independent component analysis and principal component analysis. In a BCI system, a good preprocessing method not only eliminates noise but also reduces, to a certain extent, the EEG components irrelevant to the user's intention.

Feature extraction selects, from characteristics of the EEG wave such as amplitude, frequency-band energy and the firing rate of cortical neurons, the features that best reflect the user's intention. Autoregressive models and other feature extraction technologies have been widely used in BCI.

Feature classification refers to classifying data according to the characteristics of the EEG signals and converting them into specific control instructions, so as to recognize the user's real intent. A correct classification method must be able to adapt to the characteristics of EEG signals to ensure correct recognition. In addition, the classification performance is often related to the characteristics of the EEG signals. When the number of samples is small, linear classification methods such as linear discriminant analysis can be used. When the number of samples is large or the data have multiple characteristics, nonlinear classification methods such as artificial neural networks and support vector machines are needed.
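As an illustration of the linear discriminant analysis mentioned above, a minimal two-class Fisher LDA can be sketched as follows. This is a generic textbook formulation, not the paper's exact implementation; the class data and threshold in the usage are synthetic:

```python
import numpy as np

def lda_direction(X0, X1):
    """Fisher LDA: find the projection direction w that concentrates
    projections of the same class while separating the two class means,
    w proportional to Sw^-1 (m1 - m0) with Sw the within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

def lda_classify(x, w, threshold):
    """Project a sample onto w and threshold the result: 0 or 1."""
    return int(x @ w > threshold)
```

A natural threshold is the projection of the midpoint between the two class means; samples projecting beyond it are assigned to class 1.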

2.4.1. Signal preprocessing

The preprocessing of the EEG signal mainly eliminates interference. A good preprocessing method can effectively remove the artifacts and noise irrelevant to the user's control intention and improve the signal-to-noise ratio. Because the EEG signal is a continuous multi-channel stream with a large amount of data, a time window is used to intercept the original data [17]. This paper selects a 2-s time window, which captures plenty of useful information while ensuring real-time signal processing.
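As a simple illustration of this windowing, the continuous multi-channel recording can be cut into consecutive 2-s segments. The 250 Hz sampling rate below is an assumed example value; the paper does not specify the acquisition rate:

```python
import numpy as np

def epoch(eeg, fs=250, win_s=2.0):
    """Cut a continuous multi-channel EEG array (channels x samples)
    into consecutive non-overlapping windows of win_s seconds;
    any incomplete trailing window is discarded."""
    win = int(fs * win_s)
    n_windows = eeg.shape[1] // win
    return [eeg[:, i * win:(i + 1) * win] for i in range(n_windows)]
```

For online use, the same slicing is applied to a ring buffer so that each new 2-s window can be filtered and classified as soon as it is complete.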

After the EEG data are collected, the noise contained in the EEG can be eliminated using band-pass filtering. This filtering method removes the noise in the EEG signal well, with little distortion of the signal. Its amplitude-frequency characteristic is given by formula (1):

|H(p)|^2 = 1 / (1 + (p/p_c)^(2N))  (1)

where p_c is the cut-off frequency of the filter and N is the order of the filter. Part of the noise in the intercepted EEG signal can be filtered out by the band-pass filter, but to obtain more effective information it is also necessary to extract features from the filtered EEG signal.
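Formula (1) is the squared magnitude response of an N-th order Butterworth filter; it can be evaluated directly, for example to confirm that the response drops to one half at the cut-off frequency:

```python
import numpy as np

def butterworth_mag2(p, pc, N):
    """Squared amplitude-frequency response |H(p)|^2 of formula (1):
    1 / (1 + (p / pc)^(2N)), with cut-off frequency pc and order N."""
    return 1.0 / (1.0 + (np.asarray(p, dtype=float) / pc) ** (2 * N))
```

At p = pc the response is exactly 1/2 regardless of the order, and a larger N makes the transition band steeper, which is why higher-order filters separate the EEG band from noise more sharply.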

2.4.2. Feature extraction

This paper selects Common Spatial Pattern (CSP) as the feature extraction method of EEG. The advantage of this method is that the feature vectors obtained in the extraction process are independent of each other. Its basic idea is:

First, let N be the number of recording channels and M the number of sampling points per channel. The EEG data obtained in one trial is the matrix X_a of size N × M, where the subscript a denotes the EEG signal class: L for left imagined movement and R for right imagined movement. The normalized spatial covariance can then be expressed by formula (2):

C_a = X_a X_a^T / trace(X_a X_a^T)  (2)

where trace(X_a X_a^T) is the sum of the diagonal elements of the matrix. The average spatial covariances of the right and left sample matrices are given by formula (3):

C̄_R = (1/m) Σ_{a=1}^{m} C_R,  C̄_L = (1/n) Σ_{a=1}^{n} C_L  (3)

In formula (3), m and n are the numbers of right and left trials, respectively. The whitening matrix can then be derived as shown in formula (4):

C = C̄_R + C̄_L = A λ A^T,  D = λ^(-1/2) A^T  (4)

In formula (4), A is the eigenvector matrix and λ the diagonal eigenvalue matrix of the composite covariance C; D is the whitening matrix.

Whitening the mean left and right covariances gives formula (5):

S_R = D C̄_R D^T,  S_L = D C̄_L D^T  (5)

Because S_R and S_L share the same eigenvector matrix W, they can be decomposed as in formula (6):

S_R = W λ_R W^T,  S_L = W λ_L W^T  (6)

In formula (6), λ_R + λ_L = I, so the eigenvector for which S_R has its maximum eigenvalue is the one for which S_L has its minimum eigenvalue, and vice versa.

To obtain the two maximally uncorrelated classes of eigenvectors, the first several and the last several eigenvectors are generally selected to construct the spatial filter. Let W_R^T contain the eigenvectors of the largest eigenvalues in the λ_R diagonal matrix, and W_L^T the eigenvectors of the largest eigenvalues in I − λ_R. The filters are then obtained as in formula (7):

O_L = W_L^T × D,  O_R = W_R^T × D  (7)

The EEG data X_L and X_R of unilateral motor imagery in the training set are filtered by the corresponding filters to obtain formula (8):

Z_L = O_L × X_L,  Z_R = O_R × X_R  (8)

Finally, a feature vector [Z_L, Z_R] is obtained. Feature classification then assigns the data to classes using a classifier. The main idea of LDA is to find an appropriate projection direction for the given training sample set and project the samples onto a straight line, so that projections of the same class are concentrated as much as possible while projections of different classes are as far apart as possible; new samples are then classified in the same way.
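The derivation in formulas (2)-(7) can be sketched numerically with NumPy as follows. This is a generic CSP implementation consistent with the steps above, not the paper's code, and the synthetic trials in the usage assume two channels whose variance differs between the two classes:

```python
import numpy as np

def csp_filters(trials_L, trials_R, n_pick=1):
    """Common Spatial Pattern filters following formulas (2)-(7):
    normalized covariances, class averages, whitening, and a shared
    eigendecomposition of the whitened class covariances."""
    def norm_cov(X):                       # formula (2)
        C = X @ X.T
        return C / np.trace(C)
    C_L = np.mean([norm_cov(X) for X in trials_L], axis=0)  # formula (3)
    C_R = np.mean([norm_cov(X) for X in trials_R], axis=0)
    lam, A = np.linalg.eigh(C_L + C_R)     # C = A diag(lam) A^T, formula (4)
    D = np.diag(lam ** -0.5) @ A.T         # whitening matrix
    S_R = D @ C_R @ D.T                    # formula (5)
    lam_R, W = np.linalg.eigh(S_R)         # shared eigenvectors, formula (6)
    order = np.argsort(lam_R)
    W_R = W[:, order[::-1][:n_pick]]       # largest eigenvalues for class R
    W_L = W[:, order[:n_pick]]             # smallest for R = largest for L
    return W_L.T @ D, W_R.T @ D            # filters O_L, O_R, formula (7)
```

Applying O_R to a trial (formula (8)) yields a signal whose variance is high for right-class trials and low for left-class trials, which is exactly the property the LDA classifier exploits.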

First, a complete BCI system is constructed using augmented reality technology and a mobile robot system.

The effectiveness of the AR shared-control system can be demonstrated by experiments [18]. This paper also describes in detail the preparation before each online brain-controlled robot experiment. On this basis, AR can be used to carry out online tests of brain-machine cooperative control, to give the robot automatic obstacle-avoidance capability, and to use AR's evolutionary decision-making ability to make decisions jointly with machine intelligence. The online tests of BCI collaborative control demonstrate the effectiveness of the algorithm.

3. Brain-computer collaborative experiment based on augmented reality technology

3.1. Robot brain-computer cooperation experiment

All online experiments in this paper are conducted using the MobileSim simulation software on a simulated map, with Start at the bottom left of the map as the starting point and Goal as the end point. Seven subjects (C1, C2, C3, C4, C5, C6, C7) took part in this online experiment. Each subject drives the robot from Start to Goal on the map using both direct brain control and AR-based shared control. During a test, if the robot collides, the test is stopped. In the experiments, both direct brain control and AR shared control are carried out step by step. Because prolonged brain operation causes obvious fatigue, the maximum number of steps is set to 100. This paper recorded the total number of steps, the number of obstacle avoidances and the number of completed runs for each subject, as shown in Fig. 5.

Fig. 5. Number of steps and obstacle-avoidance counts for brain-computer collaborative operation of the system.

Under direct brain control in Fig. 5 (a), the highest number of operation steps belongs to C6, at 91 steps, and the lowest to C3, at 64 steps. Under shared control, the highest number of steps belongs to C5, at 100 steps, and the lowest to C2, at 91 steps.

In Fig. 5 (b), under direct brain control, C2 has the most obstacle avoidances, at 4, and C4 the fewest, at 1. Under shared control, C6 has the most obstacle avoidances, at 7, and C4 the fewest, at 4. Therefore, Fig. 5 shows that shared control outperforms direct brain control in both the number of running steps and the number of automatic obstacle avoidances.

3.2. AR-based robot control experiment

To test the system's ability to improve human-computer interaction and the accuracy of robot operation, a series of experiments was carried out using the AR-MARCO and VR-MOBI platforms as experimental objects. The AR-MARCO platform is a mobile robot control platform based on augmented reality technology, which provides real-time augmented reality scenes through a head-worn display, integrating robot control and real-time feedback. VR-MOBI is a mobile robot control platform based on virtual reality technology, which provides virtual reality scenes through a virtual reality head-worn display, achieving remote control and real-time feedback of robots. On this basis, this paper carried out the same experiments on the original virtual-reality-based robot and compared the results of the two.

First, in the virtual-reality-based mobile robot control system, the workpiece is machined into a specific shape, and the same task is then performed with augmented reality technology [19,20]. On this basis, this paper compares the machining errors of the two systems. In the fused image, the virtual workpiece is represented only as the machining path; when the actual robot works, the machine tool moves the machining tool along this virtual path. When an error occurs, it can be adjusted by manual control. Two experiments were carried out to test the system.

(1) Experiment 1

First, this paper machines a rectangle with sides parallel to the x- and y-axes, measures the length of each side, and calculates the average error. Second, this paper machines an "X" shape corresponding to the two diagonals of each rectangle, measures the lengths of the two lines, and calculates the average error. The test data are shown in Fig. 6 (a) and 6 (b).

Fig. 6. Error results of rectangle and "X" machining.

For the VR system error in Fig. 6 (a), the error values for the 40 × 30, 100 × 60 and 160 × 90 sizes are 0.14 mm, 0.31 mm and 0.39 mm respectively. For the AR system, the corresponding error values are 0.07 mm, 0.26 mm and 0.37 mm.

For the VR system error in Fig. 6 (b), the error values for the 40 × 30, 100 × 60 and 160 × 90 sizes are 0.23 mm, 0.34 mm and 0.45 mm respectively. For the AR system, the corresponding values are 0.21 mm, 0.32 mm and 0.41 mm. In the experimental environment, noise interference may increase the error of the experimental data, making the difference between the two systems less pronounced. Nevertheless, as shown in Fig. 6, the error produced with AR technology is smaller than that produced by the VR system.

(2) Experiment 2

First, this paper machines a quarter-circle arc, measures the relative distance between the two ends of the arc and calculates the error. Second, this paper machines a polyline through a series of control vertices with specified coordinates, measures the relative distance between each point and the starting point, and calculates the error. The paper mainly measures the absolute error of the system, obtained as the difference between the measured value at each point and the true value of the relative distance. The measurement data are first filtered, and a threshold is set to determine whether a data point is abnormal. If a systematic error exists, it is corrected by calibrating the data against known reference data. The experimental results are shown in Fig. 7.
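The error computation described for Experiment 2 can be sketched as follows; the 0.4 mm threshold in the usage is an illustrative value, not one reported in the paper:

```python
def absolute_errors(measured, true_values, threshold):
    """Per-point absolute error |measured - true| of the relative
    distances, flagging values above the threshold as abnormal."""
    errors = [abs(m - t) for m, t in zip(measured, true_values)]
    abnormal = [e > threshold for e in errors]
    return errors, abnormal
```

With the VR polyline measurements of this section (true relative distances 10, 40 and 70 plus the reported errors), the function reproduces the 0.17 mm, 0.36 mm and 0.55 mm error values.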

Fig. 7. Error results of arc and polyline machining.

For the VR system error in Fig. 7 (a), when the arc radius is 10 mm, 20 mm and 70 mm, the error values are 0.28 mm, 0.35 mm and 0.79 mm. For the AR system, the corresponding values are 0.15 mm, 0.23 mm and 0.65 mm.

For the VR system error in Fig. 7 (b), when the relative coordinates are (10, 20), (40, 50) and (70, 80), the error values are 0.17 mm, 0.36 mm and 0.55 mm. For the AR system, the corresponding values are 0.11 mm, 0.24 mm and 0.41 mm.

Comparing the different test data shows that the errors of the two systems in machining rectangles are very similar, because the robot itself has high accuracy and augmented reality technology cannot fully exert its advantages there. In practical applications, however, the AR system's operation error is much smaller than that of the virtual reality system [21,22].

The reason is that the robot uses a linear interpolation algorithm when machining oblique lines (lines not parallel to a coordinate axis). That is, the drill bit does not move strictly along the oblique direction but approximates the oblique line step by step along a stepped polyline. During linear interpolation, the robot must process a large number of short line segments, and the machining errors gradually accumulate, resulting in large overall errors. In an augmented-reality-based working environment, when a large error appears, the operator can correct it by comparing the virtual and real machining processes. This avoids the influence of error accumulation and ensures the accuracy of the system.
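The stepped approximation of an oblique line can be illustrated numerically: the drill alternates x- and y-moves along a staircase, and each corner deviates from the ideal line. Finer steps shrink each deviation, but require many more short segments, which is where the per-segment errors described above accumulate. The segment endpoint and step counts below are illustrative:

```python
import math

def max_staircase_deviation(x1, y1, steps):
    """Approximate the oblique segment (0,0)->(x1,y1) by `steps`
    pairs of axis-parallel moves (x first, then y) and return the
    largest perpendicular distance of a corner from the true line."""
    dx, dy = x1 / steps, y1 / steps
    length = math.hypot(x1, y1)
    px = py = 0.0
    worst = 0.0
    for _ in range(steps):
        px += dx                                  # corner after the x-move
        # distance from (px, py) to the line through (0,0) and (x1,y1)
        worst = max(worst, abs(y1 * px - x1 * py) / length)
        py += dy
    return worst
```

Increasing the step count reduces the geometric deviation per corner, illustrating the trade-off the text describes between approximation fineness and the number of short segments the robot must execute.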

Similarly, the robot uses a curve interpolation algorithm to machine arcs, so a similar error accumulation occurs. In the virtual reality system, the machining error increases with the distance from the vertex to the initial point, whereas in AR the machining error stays almost constant. This is because in virtual reality, as the distance between the machining point and the initial point grows, the error accumulated by the interpolation algorithm gradually increases. In augmented reality, by contrast, the machining error stems from virtual-real alignment and from the operator's vision and operation; there is no accumulated error, so the error changes little.

4. Brain-controlled robot cooperative control results and discussion

At present, most brain-controlled robots rely on either human intelligence or machine intelligence alone, without examining the cooperative behavior between the two in depth. This paper proposes a new brain-computer cooperative control method using AR, which employs AR to make evolutionary decisions and can effectively prevent the system from falling into deadlock caused by decision conflicts between human and machine [23]. The main work of this paper is as follows.

4.1. Summarizing the advantages and disadvantages of robots

On this basis, this paper studies a brain-computer cooperative control scheme based on AR. Using AR's ability to simulate and handle asynchronous, concurrent dynamic systems, it constructs a brain-computer cooperation model based on AR.

4.2. Establishing control mode

Based on the AR method, this paper constructs a brain-controlled robot system that can describe the robot's motion, brain instructions, surrounding obstacle information, and control strategy, and simulates it. AR is used to realize evolutionary judgment based on BCI input, robot motion information, and environmental obstacle information, and the control strategy is designed, validated, and implemented on the AR model.
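As a minimal sketch of the human-machine arbitration such a shared-control system performs, the fragment below chooses between the user's BCI command and the machine's autonomous command. All names and thresholds here are hypothetical illustrations, not the paper's AR evolutionary-judgment model.

```python
def arbitrate(bci_cmd, auto_cmd, bci_confidence, obstacle_risk,
              conf_threshold=0.6, risk_threshold=0.8):
    """Return the command to execute in one shared-control step:
    the user's BCI command when it is confident and the environment is
    safe, otherwise the machine's autonomous command."""
    if obstacle_risk >= risk_threshold:
        return auto_cmd   # safety override: machine intelligence takes over
    if bci_confidence >= conf_threshold:
        return bci_cmd    # confident user intent wins
    return auto_cmd       # low-confidence BCI input: defer to the machine
```

Arbitrating conflicts explicitly at each step, rather than letting human and machine command the robot simultaneously, is what prevents the deadlock that the AR-based evolutionary decision mechanism is designed to avoid.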

4.3. Testing the existing model

This paper verifies the feasibility of the system through experiments. By comparing the virtual scene with the real scene, accumulated errors in the simulation system's model and motion can be detected and corrected accurately and promptly, greatly improving the robot's operating accuracy and working efficiency.

5. Conclusions

Through research on the BCI system, augmented reality technology, and the AmigoBot robot system, offline and online experiments were carried out in this paper, demonstrating that the AR-based brain-computer cooperative control scheme is feasible. With the intelligent control provided by AR, the control strategy can be designed and verified by means of AR dynamic evolution and reachability graph analysis, providing a reference for the application of BCI systems. However, this work also faces limitations. The proposed AR-based brain-computer cooperative control scheme improves adaptability, but the present study is only experimental and remains far from real-world use. Building on the AR model, this paper proposed an AR-based BCI model and added new practical functions. In the brain-computer cooperative control system, machine intelligence can be used to reduce the user's burden, thereby enhancing the adaptability of the brain-controlled robot.

Funding

This work was supported by the 2022 university-domestic visiting engineer project "Research and development of a visualized campus comprehensive management and control platform based on AR technology" (FG2022040).

Data availability statement

All data generated or analyzed during this study are included in this published article.

CRediT authorship contribution statement

Junying Hu: Writing – original draft.

Declaration of competing interest

The author declares the following financial interests/personal relationships which may be considered as potential competing interests: Junying Hu reports support was provided by the project "Research and development of a visualized campus comprehensive management and control platform based on AR technology" (FG2022040). The author declares no other known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

1. Gao Xiaorong, Wang Yijun, Chen Xiaogang, Gao Shangkai. Interface, interaction, and intelligence in generalized brain–computer interfaces. Trends Cognit. Sci. 2021;25(8):671–684. doi: 10.1016/j.tics.2021.04.003.
2. Katona Jozsef, Kovari Attila. Examining the learning efficiency by a brain-computer interface system. Acta Polytechnica Hungarica. 2018;15(3):251–280.
3. Li Hongqi, Bi Luzheng, Yi Jingang. Sliding-mode nonlinear predictive control of brain-controlled mobile robots. IEEE Trans. Cybern. 2020;52(6):5419–5431. doi: 10.1109/TCYB.2020.3031667.
4. Yang Zhenge. Brain-controlled multi-robot at servo-control level based on nonlinear model predictive control. Complex System Modeling and Simulation. 2022;2(4):307–321.
5. Brennan Chris. Performance of a steady-state visual evoked potential and eye gaze hybrid brain-computer interface on participants with and without a brain injury. IEEE Transactions on Human-Machine Systems. 2020;50(4):277–286.
6. Cruz Aniana. A self-paced BCI with a collaborative controller for highly reliable wheelchair driving: experimental tests with physically disabled individuals. IEEE Transactions on Human-Machine Systems. 2021;51(2):109–119.
7. Chen Xiaogang, Huang Xiaoshan, Wang Yijun, Gao Xiaorong. Combination of augmented reality based brain-computer interface and computer vision for high-level control of a robotic arm. IEEE Trans. Neural Syst. Rehabil. Eng. 2020;28(12):3140–3147. doi: 10.1109/TNSRE.2020.3038209.
8. Angrisani Leopoldo, Arpaia Pasquale, Esposito Antonio, Moccaldi Nicola. A wearable brain–computer interface instrument for augmented reality-based inspection in industry 4.0. IEEE Trans. Instrum. Meas. 2019;69(4):1530–1539.
9. Burger Benjamin. A mobile robotic chemist. Nature. 2020;583(7815):237–241. doi: 10.1038/s41586-020-2442-2.
10. Zhu Kai, Zhang Tao. Deep reinforcement learning based mobile robot navigation: a review. Tsinghua Science and Technology. 2021;26(5):674–691.
11. Lv Z., Chen D., Lou R., et al. Industrial security solution for virtual reality. IEEE Internet Things J. 2020:1–1.
12. Lv Z., Qiao L., Wang Q., Piccialli F. Advanced machine-learning methods for brain-computer interfacing. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2020.
13. Imaoka Noriaki. Autonomous mobile robot moving through static crowd: arm with one-DoF and hand with involute shape to maneuver human position. J. Robot. Mechatron. 2020;32(1):59–67.
14. Li Jiehao. Fuzzy-torque approximation-enhanced sliding mode control for lateral stability of mobile robot. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2021;52(4):2491–2500.
15. Qiao Liang, Li Yujie, Chen Dongliang, Serikawa Seiichi, Guizani Mohsen, Lv Zhihan. A survey on 5G/6G, AI, and Robotics. Comput. Electr. Eng. 2021;95.
16. Wang Q., Mu Z. Application of music in relief of driving fatigue based on EEG signals. EURASIP J. Appl. Signal Process. 2021;2021:89.
17. Wang Q., Li Y., Liu X. The influence of photo elements on EEG signal recognition. EURASIP Journal on Image and Video Processing. 2018;2018(1):134.
18. Zhang Chao. Direct brain-controlled multi-robot cooperation task. J. Biomed. Eng. 2018;35(6):943–952. doi: 10.7507/1001-5515.201802022.
19. Drew Liam. The ethics of brain-computer interfaces. Nature. 2019;571:S19. doi: 10.1038/d41586-019-02214-2.
20. Sakhavi Siavash, Guan Cuntai, Yan Shuicheng. Learning temporal information for brain-computer interface using convolutional neural networks. IEEE Transact. Neural Networks Learn. Syst. 2018;29(11):5619–5629. doi: 10.1109/TNNLS.2018.2789927.
21. Zhuang Miaomiao. State-of-the-art non-invasive brain–computer interface for neural rehabilitation: a review. J. Neurorestoratol. 2020;8(1):12–25.
22. Kwon O-Yeon. Subject-independent brain–computer interfaces based on deep convolutional neural networks. IEEE Transact. Neural Networks Learn. Syst. 2019;31(10):3839–3852. doi: 10.1109/TNNLS.2019.2946869.
23. Zhang Wen, Wu Dongrui. Manifold embedded knowledge transfer for brain-computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2020;28(5):1117–1127. doi: 10.1109/TNSRE.2020.2985996.



Articles from Heliyon are provided here courtesy of Elsevier
