HFSP Journal. 2008 May 23;2(3):136–142. doi: 10.2976/1.2931144

Brain controlled robots

Mitsuo Kawato 1
PMCID: PMC2645562  PMID: 19404467

Abstract

In January 2008, Duke University and the Japan Science and Technology Agency (JST) announced that a monkey's brain had successfully controlled a humanoid robot across the Pacific Ocean through a brain-machine interface. The activities of a few hundred neurons were recorded from a monkey's motor cortex in Miguel Nicolelis's lab at Duke University, and the kinematic features of monkey locomotion on a treadmill were decoded from neural firing rates in real time. The decoded information was sent to a humanoid robot, CB-i, at ATR Computational Neuroscience Laboratories in Kyoto, Japan. This robot was developed by the JST International Collaborative Research Project (ICORP) "Computational Brain Project." CB-i's locomotion-like movement was video-recorded and projected on a screen in front of the monkey. Although the bidirectional communication used a conventional Internet connection, its delay was kept below a fraction of a second, partly thanks to a video-streaming technique, and this encouraged the monkey's voluntary locomotion and influenced its brain activity. This commentary introduces the background and future directions of brain-controlled robots.


Computational studies of how the brain generates behavior are progressing rapidly. In parallel, the development of humanoid robots that act like humans has become a major focus of robotics research. The Japan Science and Technology Agency (JST) has succeeded in making a humanoid robot execute locomotion-like movement from cortical brain activity transmitted in real time over an Internet connection between the USA and Japan. In our projects [ERATO (1996–2001) web page, http://www.kawato.jst.go.jp/; ICORP (2004–2009) web page, http://www.cns.atr.jp/hrcn/ICORP/project.html], we have developed information-processing models of the brain and verified these models on real robots in order to better understand how the human brain produces behavior. In addition, we aim to develop humanoid robots that behave like humans to facilitate our daily lives. This experiment is epoch-making both from a computational-neuroscience viewpoint and for the further development of brain-machine interfaces. In this commentary, I explain the background and future directions of brain-controlled robots.

COMPUTATIONAL NEUROSCIENCE AND HUMANOID ROBOTS

Ten years have passed since the Japanese “Century of the Brain” was promoted, and its most notable objective, the unique “Creating the Brain” approach, has led us to use a humanoid robot as a neuroscience tool (Kawato, 2008). Our aim is to understand the brain to the extent that we can make humanoid robots solve tasks typically solved by the human brain by using essentially the same principles. In my opinion, this “Understanding the Brain by Creating the Brain” approach is the only way to fully understand neural mechanisms in a rigorous sense. But even if we could create an artificial brain, we could not investigate its functions, such as vision or motor control, if we just let it float in incubation fluid in a jar. The brain must be connected to sensors and a motor apparatus so that it can interact with its environment. A humanoid robot controlled by an artificial brain, which is implemented as software based on computational models of brain functions, seems to be the most plausible approach for this purpose, given the currently available technology. With the slogan “Understanding the Brain by Creating the Brain,” we started to use robots for brain research in the mid-1980s (Miyamoto et al., 1988), and about ten different kinds of robots have been used by our group at Osaka University’s Department of Biophysical Engineering, ATR Laboratories, the ERATO Kawato Dynamic Brain Project [ERATO (1996–2001) web page, http://www.kawato.jst.go.jp/], and the ICORP Kawato Computational Brain Project [ICORP (2004–2009) web page, http://www.cns.atr.jp/hrcn/ICORP/project.html].

A computational theory that is optimal for one type of body may not be optimal for other types of bodies. Accordingly, if a humanoid robot is used for exploring neuroscience theories rather than for engineering, it should be as close as possible to a human body. Within the ERATO project, in collaboration with the SARCOS research company’s team under Professor Stephen C. Jacobsen of the University of Utah, Dr. Stefan Schaal led his colleagues in developing a humanoid robot called DB (Dynamic Brain) (Fig. 1), aiming to replicate a human body as closely as the robotics technology of 1996 allowed. DB possessed 30 degrees-of-freedom and human-like size and weight. From a mechanical point of view, DB behaves like a human body: it is mechanically compliant, unlike most electric-motor-driven, highly geared humanoid robots, because SARCOS’s hydraulic actuators are powerful enough to make reduction mechanisms at the joints unnecessary. Within its head, DB is equipped with an artificial vestibular organ (a gyro sensor), which measures head velocity, and four cameras with vertical and horizontal degrees-of-freedom. Two of the cameras have telescopic lenses corresponding to foveal vision, while the other two have wide-angle lenses corresponding to peripheral vision. SARCOS developed the hardware and the low-level analog feedback loops, while the ERATO project developed the high-level digital feedback loops and all of the sensory-motor coordination software.

Figure 1. Demonstrations of 14 different tasks by the ERATO humanoid robot DB.

The photographs in Fig. 1 introduce 14 of the more than 30 different tasks that can be performed by DB (Atkeson et al., 2000). Most of the algorithms used for these demonstration tasks are based roughly on principles of information processing in the brain, and many of them contain some or all of the three learning elements: imitation learning (Miyamoto et al., 1996; Schaal, 1999; Ude and Atkeson, 2003; Ude et al., 2004; Nakanishi et al., 2004), reinforcement learning, and supervised learning. Imitation learning (“Learning by Watching,” “Learning by Mimicking,” and “Teaching by Demonstration”) was involved in the tasks of Okinawan folk dance “Katya-shi” (Riley et al., 2000) (A), three-ball juggling (Atkeson et al., 2000) (B), devil-sticking (C), air-hockey (Bentivegna et al., 2004a; Bentivegna et al., 2004b) (D), pole balancing (E), sticky-hands interaction with a human (Hale and Pollick, 2005) (L), tumbling a box (Pollard et al., 2002) (M), and tennis swing (Ijspeert et al., 2002) (N). The air hockey demonstration (Bentivegna et al., 2004a; Bentivegna et al., 2004b) (D) utilizes not only imitation learning but also a reinforcement-learning algorithm with reward (a puck enters the opponent’s goal), penalty (a puck enters the robot’s goal), and skill learning (a kind of supervised learning). Demonstrations of pole-balancing (E) and visually-guided arm reaching toward a target (F) utilized a supervised learning scheme (Schaal and Atkeson, 1998), which was motivated by our approach to cerebellar internal model learning.

Demonstrations of adaptation of the vestibulo-ocular reflex (Shibata and Schaal, 2001) (G), adaptation of smooth pursuit eye movement (H), and simultaneous realization of these two kinds of eye movements together with saccadic eye movements (I) were based on computational models of eye movements and their learning (Shibata et al., 2005). Demonstrations of drumming (J), paddling a ball (K), and tennis swing (N) were based on central pattern generators (CPG). These are neural circuits that can spontaneously generate spatiotemporal movement patterns even if afferent inputs are absent and descending commands to the generators are temporally constant. CPG concepts were formed in the 1960s through neurobiological studies of invertebrate movements, and they are key to understanding most rhythmic movements and essential for biological realization of biped locomotion as described below.
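
To make the CPG idea concrete, the following is a minimal sketch in Python of a Matsuoka-style half-center oscillator: two mutually inhibiting model neurons with adaptation receive only a constant tonic input, yet their rectified outputs oscillate in anti-phase. All parameter values here are illustrative assumptions, not taken from any of the demonstrations described above.

```python
import numpy as np

def matsuoka_cpg(tonic=1.0, dt=0.001, T=5.0,
                 tau=0.05, tau_a=0.25, beta=2.5, w=2.0):
    """Toy two-neuron half-center CPG (Matsuoka-type oscillator).

    Each neuron has a membrane state x, an adaptation state v, and a
    rectified output y = max(x, 0).  The two neurons inhibit each other
    with weight w.  The only input is a constant tonic drive, yet the
    output oscillates rhythmically, as a CPG does.
    """
    x = np.array([0.1, 0.0])   # membrane states (slightly asymmetric start)
    v = np.zeros(2)            # adaptation states
    out = []
    for _ in range(int(T / dt)):
        y = np.maximum(x, 0.0)                  # rectified firing rates
        x += dt / tau * (-x - w * y[::-1] - beta * v + tonic)
        v += dt / tau_a * (-v + y)              # slow adaptation
        out.append(y[0] - y[1])                 # alternating flexor/extensor drive
    return np.array(out)

if __name__ == "__main__":
    signal = matsuoka_cpg()
    print("oscillation range:", signal.min(), signal.max())
```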

The ICORP Computational Brain Project (2004–2009), an international collaboration with Professor Chris Atkeson of Carnegie Mellon University, follows the ERATO Dynamic Brain Project in its slogans “Understanding the Brain by Creating the Brain” and “Humanoid Robots as a Tool for Neuroscience.” Again in collaboration with SARCOS, at the beginning of 2007 Dr. Gordon Cheng led his colleagues in developing a new humanoid robot called CB-i (Computational Brain Interface), shown in Fig. 2 (Cheng et al., 2007b). CB-i is even closer to a human body than DB. To improve the mechanical compliance of the body, CB-i also uses hydraulic actuators rather than electric motors. The biggest improvement of CB-i over DB is its autonomy. DB was mounted at the pelvis because it needed to be powered by an external hydraulic pump through oil hoses arranged around the mount. DB’s computer system was also connected to it by wires. Thus, DB could not function autonomously. In contrast, CB-i carries both onboard power supplies (electric and hydraulic) and a computing system on its back, and it can therefore function fully autonomously. CB-i was designed for full-body autonomous interaction, specifically for walking and simple manipulation. It is equipped with a total of 51 degrees-of-freedom (DOF): 2×7 DOF legs, 2×7 DOF arms, 2×2 DOF eyes, 3 DOF neck/head, 1 DOF mouth, 3 DOF torso, and 2×6 DOF hands. CB-i is designed to have configurations, range of motion, power, and strength similar to those of a human body, allowing it to better reproduce natural human-like movements, particularly locomotion and object manipulation.

Figure 2. New humanoid robot called CB-i (Computational Brain Interface).

Within the ICORP project, biologically inspired control algorithms for locomotion have been studied using three different humanoid robots [DB-chan (Nakanishi et al., 2004), the Fujitsu Automation HOAP-2 (Matsubara et al., 2006), and CB-i (Morimoto et al., 2006)] as well as the SONY small-sized humanoid robot QRIO (Endo et al., 2005) as test beds. Successful locomotion algorithms utilize various aspects of biological control systems, such as neural networks for CPGs, phase resetting by various sensory feedback signals with adaptive gains, and hierarchical reinforcement-learning algorithms. The demonstration of robust locomotion by DB-chan utilizes three biologically important aspects of control algorithms: imitation learning, a nonlinear dynamical system as a CPG, and phase resetting by a foot-ground-contact signal (Nakanishi et al., 2004). First, a neural network model developed by Schaal et al. (2003) quickly learned locomotion trajectories demonstrated by humans or other robots. In order to synchronize this limit-cycle oscillator (the CPG) with the mechanical oscillator formed by the robot body and the environment, the neural oscillator is phase-reset at foot-ground contact. This guarantees stable synchronization of the neural and mechanical oscillators in both phase and frequency. The achieved locomotion is quite robust against surfaces with various frictions and slopes, and it is human-like in the sense that the robot body’s center of gravity is high and the knee is nearly fully extended at foot contact. This is in sharp contrast to locomotion engineered by zero-moment-point control, a traditional control method for biped robots that was proposed by Vukobratovic 35 years ago and then successfully implemented by Ichiro Kato and the developers of the Honda and Sony humanoid robots; that method usually results in a low center of gravity and bent knees. Of particular importance to the brain-machine interface experiment, Jun Morimoto succeeded in generating locomotion with CB-i based on the CPG models (Morimoto et al., 2006).
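
The phase-resetting idea itself can be sketched very simply (this is only an illustration, not the actual DB-chan or CB-i controller): a phase oscillator generates a rhythmic joint pattern, and its phase is reset to a reference value whenever a simulated foot-ground-contact event occurs, which pulls the neural rhythm into synchrony with the mechanical gait cycle. The frequencies, reset phase, and sinusoidal joint pattern below are illustrative assumptions.

```python
import numpy as np

def cpg_with_phase_reset(contact_times, omega=2 * np.pi * 1.0,
                         phi_reset=0.0, dt=0.001, T=10.0):
    """Toy phase oscillator with phase resetting at foot-contact events.

    phi advances at the CPG's intrinsic frequency omega; whenever a
    foot-ground-contact event occurs, phi is reset to phi_reset so that
    the neural oscillator re-synchronizes with the mechanical gait cycle.
    Returns the commanded (here, sinusoidal) joint pattern over time.
    """
    contact_times = sorted(contact_times)
    phi, next_contact = 0.0, 0
    trajectory = []
    for step in range(int(T / dt)):
        t = step * dt
        if next_contact < len(contact_times) and t >= contact_times[next_contact]:
            phi = phi_reset             # phase reset at ground contact
            next_contact += 1
        phi += omega * dt               # free-running CPG dynamics
        trajectory.append(np.sin(phi))  # desired joint angle from CPG phase
    return np.array(trajectory)

# Example: the mechanical gait runs slightly slower (0.9 Hz) than the CPG's
# intrinsic 1.0 Hz rhythm; resetting at each contact keeps the two locked.
contacts = np.arange(0.0, 10.0, 1.0 / 0.9)
joint_pattern = cpg_with_phase_reset(contacts)
```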

THREE ELEMENTS OF BRAIN MACHINE INTERFACE

A brain-machine interface (BMI) can be defined as artificial electrical and computational neural circuitry that compensates for, reconstructs, repairs, and even enhances brain functions, ranging from sensory and central domains to motor control. BMI has already moved beyond mere science-fiction fantasy in sensory reconstruction and central repair, as exemplified by cochlear implants and deep brain stimulation. Furthermore, in the reconstruction of motor control capabilities for paralyzed patients, much progress has been made in the last 15 years (Nicolelis, 2001), and chronic BMI implantations in human patients began in 2004; accordingly, large-scale introduction of therapeutic applications is expected in the near future.

Any successful BMI relies on at least one, and in most cases all, of the following three essential elements: brain plasticity through user training, neural decoding by a machine-learning algorithm, and neuroscience knowledge. A sensory or motor BMI is a new kind of tool for the brain. Unlike ordinary tools such as screwdrivers, chopsticks, bicycles, and automobiles, which are connected to the brain via sensory and motor organs, a BMI is connected directly to the brain via electrical and computer circuits. Still, a BMI reads out neural information from the brain and feeds information back to it, so a closed loop is formed between the brain and the BMI, just as with ordinary tools. If the delays associated with this closed loop are below a fraction of a second, they are within the temporal window of spike-timing-dependent plasticity of neurons, and thus learning to utilize the BMI better can take place in the brain. Consequently, based on the synaptic plasticity of the brain, BMI users can learn how to control the BMI better. This process can be regarded as operant conditioning, and it is reminiscent of “biofeedback.” Eberhard Fetz pioneered this first element of BMI (Fetz, 1969). Most BMI systems based on the electroencephalogram, often called brain-computer interfaces, depend heavily on this first element of user training.
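
As a toy illustration of this first element, and of the operant-conditioning result pioneered by Fetz, the sketch below simulates a single "neuron" whose mean firing rate drifts upward when reward is delivered for exceeding a threshold. All quantities (baseline rate, threshold, learning rate, noise) are hypothetical and chosen only to make the closed biofeedback loop explicit.

```python
import numpy as np

rng = np.random.default_rng(0)

rate = 10.0          # baseline mean firing rate (spikes/s), hypothetical
threshold = 15.0     # reward is delivered when the measured rate exceeds this
learning_rate = 0.2  # how strongly a rewarded fluctuation shifts the rate

history = []
for trial in range(200):
    measured = rate + rng.normal(0.0, 3.0)      # noisy single-trial rate
    reward = 1.0 if measured > threshold else 0.0
    # Operant-conditioning-like update: only rewarded (high-rate)
    # fluctuations pull the underlying rate upward.
    rate += learning_rate * reward * (measured - rate)
    history.append(rate)

print(f"rate after conditioning: {history[-1]:.1f} spikes/s")
```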

The second element is neural decoding by machine-learning techniques. For example, in the Duke-JST BMI-controlled robot (Fig. 3), the activities of a few hundred motor cortical neurons were recorded simultaneously with the three-dimensional positions of the monkey’s legs. Linear regression models were trained to predict the kinematic parameters from neural firing rates (Nicolelis, 2001), and they were used for real-time decoding of leg position from brain activity (Cheng et al., 2007a). Generally speaking, any machine-learning technique can be used to reconstruct physical variables, such as the position, velocity, or acceleration of a motor apparatus, or different kinds of movements, from brain activity such as the firing of many neurons or noninvasive signals such as the electroencephalogram. Typically, the training and test data sets consist of pairs (X, Y) of neural activity X and some target variable Y. A machine-learning algorithm is used to determine, from the training data set only, the function F that best predicts Y from X, i.e., Y = F(X). A machine-learning algorithm is considered successful if it generalizes well to an unseen test data set, that is, if F(X) effectively predicts Y not only for the training set but also for the test set. For example, the Honda Research Institute Japan, in collaboration with ATR Computational Neuroscience Laboratories (ATR-CNS), demonstrated real-time control of a robot hand by decoding three motor primitives (rock, paper, and scissors, as in the children’s game) from fMRI data of a subject’s motor cortex activity [press release 2006 (http://www.atr.jp/html/topics/press_060526_e.html)]. This was based on the support vector machine, a machine-learning algorithm previously used by Kamitani and Tong (2005, 2006) for decoding attributes of visual stimuli from fMRI data.
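
The decoding scheme Y = F(X) can be sketched with ordinary least squares on synthetic data. This is only an illustration of the train/test logic described above, not the actual Duke-JST decoder or its preprocessing; the numbers of neurons, bins, and the noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 200 time bins, 100 "neurons", one kinematic target
# (e.g., a leg coordinate).  The true relationship is linear plus noise.
n_bins, n_neurons = 200, 100
X = rng.poisson(5.0, size=(n_bins, n_neurons)).astype(float)  # firing rates
true_w = rng.normal(0.0, 0.1, size=n_neurons)
Y = X @ true_w + rng.normal(0.0, 0.5, size=n_bins)

# Split into a training set and a held-out test set.
X_train, X_test = X[:150], X[150:]
Y_train, Y_test = Y[:150], Y[150:]

# Fit F(X) = X @ w by least squares on the training set only.
w, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# Generalization: correlation between predicted and actual test kinematics.
Y_pred = X_test @ w
r = np.corrcoef(Y_pred, Y_test)[0, 1]
print(f"test-set correlation: {r:.2f}")
```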

Figure 3. Experimental overview of brain controlled robot.

After decoding walking-related information from the monkey’s brain activity while it walked on a treadmill, we relayed these data in real time from Duke University in the USA to the Advanced Telecommunications Research Institute International (ATR) in Japan. We were then able to make our humanoid robot in Japan execute locomotion-like movements similar to the monkey’s, with visual feedback of the robot presented to the monkey.

The third element is neuroscience knowledge. In the case of the Duke-JST BMI-controlled robot, neural recordings were made in the primary motor cortex, which has long been known in neuroscience as a motor control center. Instantaneous neural firing rates (pulses per millisecond) were used as regressors to estimate the kinematic parameters, since firing rates are believed to be the most important information carriers in the brain. Similarly, fMRI signals in visual cortical areas were used by Kamitani and Tong (2005, 2006) for decoding visual attributes. This third element is further elaborated in the following sections.
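
Because the regressors are instantaneous firing rates, a preprocessing step along the following lines converts each neuron's spike timestamps into binned rate estimates before they enter a decoder such as the regression sketch above. This is a generic sketch, not the actual recording pipeline; the bin width and spike times are illustrative.

```python
import numpy as np

def binned_firing_rates(spike_times_per_neuron, t_end, bin_width=0.05):
    """Convert spike timestamps (seconds) into a (bins x neurons) rate matrix.

    Each entry is the spike count in a bin divided by the bin width,
    i.e., an instantaneous firing-rate estimate in spikes per second.
    """
    edges = np.arange(0.0, t_end + bin_width, bin_width)
    rates = np.stack(
        [np.histogram(spikes, bins=edges)[0] / bin_width
         for spikes in spike_times_per_neuron],
        axis=1,
    )
    return rates

# Example with two hypothetical neurons recorded for one second.
spikes = [np.array([0.01, 0.12, 0.13, 0.40, 0.95]),
          np.array([0.05, 0.55, 0.56, 0.57])]
rates = binned_firing_rates(spikes, t_end=1.0)
print(rates.shape)  # (number of bins, number of neurons)
```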

BRAIN NETWORK INTERFACE

From a computational point of view, our understanding of the neural mechanisms of sensory-motor coordination has not yet been fully utilized in BMI design. For example, the population-coding hypothesis, which holds that movement directions are encoded by an ensemble of motor cortical neurons (Georgopoulos et al., 1982), was advocated as the basis of some BMI designs (Taylor et al., 2002), but the hypothesis itself is still controversial (Todorov, 2000). In most motor BMI systems, cursor positions or arm postures are determined directly from neural decoding, and no computational model of sensory-motor integration has been seriously incorporated [with a small number of exceptions, such as Koike et al. (2006)]. However, it is obvious that the simple approach of decoding the three-dimensional positions of hands or legs and passing the results to a simple position controller as a desired trajectory cannot deal with practical control problems such as object manipulation, locomotion, or posture control. All of these control problems involve unstable mechanical dynamics and thus require intelligent and autonomous control algorithms, such as CPGs, internal models, and force control with passive dynamics, on the robot side. To be more specific, let us take locomotion as an example. If joint torques or joint angles during monkey locomotion were decoded from monkey brain activity and then simply and directly fed into a torque or joint-angle controller of CB-i, CB-i could not achieve stable locomotion: its body is different from the monkey’s body, so the same dynamic or kinematic trajectories would make the robot fall down (Fig. 4). CB-i should therefore possess an autonomous and stable locomotion controller, such as CPGs, on its own controller side. A simple trajectory-control approach can work only for the simplest control problems, such as visually guided arm reaching or cursor control, which have been the main tasks investigated in the BMI literature. We definitely need some autonomous control capability on the robot’s side to deal with real-world sensory-motor integration problems. The Duke-JST BMI experiment is very important in highlighting this requirement for future BMI research.
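
This division of labor can be made concrete with a schematic sketch (hypothetical function names, decoder weights, and joint-angle formulas; not the actual CB-i controller): the BMI decodes only high-level gait parameters, such as a stepping frequency and a go/stop decision, while a CPG on the robot side remains responsible for producing stable joint trajectories suited to the robot's own body.

```python
import numpy as np

def decode_gait_command(firing_rates, w_freq, w_go):
    """Hypothetical high-level decoder: estimate a stepping frequency (Hz)
    and a go/stop decision from a vector of cortical firing rates."""
    freq = float(np.clip(firing_rates @ w_freq, 0.5, 2.0))
    go = bool(firing_rates @ w_go > 0.0)
    return freq, go

def robot_side_cpg_step(phi, freq, go, dt=0.01):
    """The robot's own controller: a CPG advances its phase at the decoded
    frequency (or holds still on 'stop') and converts phase into joint
    angles chosen for the robot's body, not the monkey's."""
    if go:
        phi = (phi + 2 * np.pi * freq * dt) % (2 * np.pi)
    hip = 0.3 * np.sin(phi)               # robot-specific amplitude
    knee = 0.5 * max(np.sin(phi), 0.0)    # knee flexes only in swing
    return phi, hip, knee

# One control cycle with synthetic rates and random (untrained) weights.
rng = np.random.default_rng(2)
rates = rng.poisson(5.0, 100).astype(float)
w_f, w_g = rng.normal(0, 0.01, 100), rng.normal(0, 0.01, 100)
freq, go = decode_gait_command(rates, w_f, w_g)
phi, hip, knee = robot_side_cpg_step(0.0, freq, go)
```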

Figure 4. Brain controlled humanoid robot.

Masa-aki Sato and his colleagues at ATR-CNS have been developing a “brain-network interface” (BNI) based on a hierarchical variational Bayesian technique that combines information from fMRI and magnetoencephalography (Sato et al., 2004). They succeeded in estimating brain activity with a spatial resolution of a few millimeters and millisecond-level temporal resolution in domains such as visual perception, visual feature attention, and voluntary finger movements. In collaboration with the Shimadzu Corporation, we aim to develop within ten years a portable, wireless, combined EEG/NIRS (electroencephalography/near-infrared spectroscopy)-based Bayesian estimator with millimeter and millisecond accuracy. “Brain-network interface” is a term we coined for this project; it is related to, but distinct from, a brain-machine interface and a brain-computer interface. A BNI noninvasively estimates neural activity by solving the inverse problem and then decodes the information that this activity represents. Accordingly, it is not a brain-machine interface, because it is noninvasive, and it is not a brain-computer interface, because its decoding approach does not require extensive user training. We have already succeeded, for example, in estimating the velocity of wrist movements from single-trial data without subject training (Toda et al., 2007).
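
A drastically simplified sketch of this kind of hierarchical Bayesian current estimation is given below. It is not the Sato et al. (2004) algorithm; it only illustrates, on a toy under-determined linear forward model B = L J + noise, how per-source prior variances and an iterative reweighting can concentrate the estimated currents on a few active sources. All dimensions, noise levels, and the initialization of the prior variances (which, in the real method, fMRI activity would inform) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Forward model: 20 sensors observe 200 cortical sources through leadfield L.
n_sensors, n_sources = 20, 200
L = rng.normal(size=(n_sensors, n_sources))
J_true = np.zeros(n_sources)
J_true[[10, 77, 150]] = [2.0, -1.5, 1.0]           # a few active sources
B = L @ J_true + rng.normal(0.0, 0.05, n_sensors)  # noisy measurements

noise_var = 0.05 ** 2
# Hierarchical prior: one variance per source, here initialized uniformly;
# in the real method, fMRI activity would bias these initial values.
alpha = np.ones(n_sources)

for _ in range(50):
    # Posterior mean of J given the current prior variances (ridge-like solve).
    G = L * alpha                      # equals L @ diag(alpha)
    S = G @ L.T + noise_var * np.eye(n_sensors)
    J_hat = alpha * (L.T @ np.linalg.solve(S, B))
    # Reweight prior variances toward the squared estimates, which drives
    # most sources toward zero and keeps the active ones (sparse solution).
    alpha = J_hat ** 2 + 1e-6

print("largest estimated sources:", np.argsort(np.abs(J_hat))[-3:])
```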

HIERARCHICAL CONTROL AND DECODING MODELS

The brain exploits its hierarchical structure in solving the most difficult optimal control problems in sensory-motor integration. This is because a simple, randomly connected, uniform neural network is not powerful enough to solve complicated optimal and real-time control problems when the controlled object has many degrees of freedom and strong nonlinearity and when large time delays are associated with the feedback loops (Kawato and Samejima, 2007). Consequently, different brain areas contribute to the solution by solving different sub-problems: the cerebellum provides internal models (Kawato, 2008), the premotor cortex handles trajectory planning, and the basal ganglia perform reward prediction in reinforcement learning (Kawato and Samejima, 2007). To tackle the real-world sensory-motor control problems that any practical BMI-controlled robot will face, we definitely need to introduce such hierarchy and modularity into the robots’ controllers. These controllers should be as close as possible to the brain’s real movement controllers. We need to decode different neural representations at different levels of the brain’s control hierarchy, and then provide these decoded representations to the corresponding levels of the robot’s controller hierarchy. BNI could be an ideal framework for simultaneously and noninvasively estimating such hierarchically arranged neural representations. Taking locomotion as an example, self-motion could be estimated from area MST; the decision to move, stay, or turn left or right could be estimated from the prefrontal cortex; the plan to start moving could be estimated from the premotor cortex; joint angles and torques could be estimated from the primary motor cortex; and predictions and estimates of the current states and motor commands could be decoded from the cerebellum. Maximum benefit could be derived from such a hierarchically arranged set of neural representations if the robot’s locomotion controller had a similar hierarchical and modular structure.

Estimating cortical electrical currents at thousands of lattice points on the cortical surface from the electrical or magnetic signals measured by hundreds of electroencephalography or magnetoencephalography sensors is called the inverse problem, since it is the inverse of the forward process described by the electromagnetic equations of physics. Because the number of unknown sources far exceeds the number of sensors, the problem is mathematically ill-posed, and it is the most difficult part of BNI. The current realization of BNI uses somewhat ad hoc sparseness and spatial-continuity assumptions to solve this inverse problem (Sato et al., 2004; Toda et al., 2007), but to attain a better BNI in the future we must incorporate dynamical models of brain activity into the solution of the inverse problem. We believe that there should be a mathematical duality between the models used in this observation process and the models used for control described above. Both kinds of models should possess hierarchy and modularity and be mathematically matched to each other. This is an interesting mathematical issue for future work on BNI-controlled robots.

FUTURE OF BRAIN CONTROLLED ROBOTS

A humanoid robot that can be controlled at will by natural brain activity might be regarded as a second body for humans, and this conceptualization opens up a wide range of applications. Such a robot could serve as a nursing robot for disabled people, acting as a second body that helps them in a natural way. If exoskeletons or powered suits were used in place of humanoid robots, movement reconstruction could become possible for paralyzed people. However, we will oppose any military application of this technology by all means. In telecommunications, BNI-controlled robots can be envisioned as future cellular phones that possess all of the capabilities of the human body, such as movement execution and tactile sensing, in contrast to current cellular phones, which mimic only the visual and auditory senses and speech motor control (Fig. 5). Face-to-face bodily communication between temporally and spatially distant locations may become possible in the future through BNI-controlled humanoid robots. Bearing in mind the Duke-JST BMI-controlled robot, this possibility cannot be dismissed as a mere whim of science-fiction fantasy.

Figure 5. BNI controlled humanoid robots as a future telecommunication interface.

Let us assume that a husband and wife who enjoy playing tennis are living apart because the wife lives in Japan and the husband has been stationed in the US for work. Nevertheless, they want more than anything to be able to play tennis together. In order to actually play tennis (to experience the physical feelings), there would have to be an “agent” robot of the husband near the wife, and an “agent” robot of the wife near the husband, with the two games played in Japan and the US at the same time. The greatest obstacle to enabling these two people to play simultaneously is the time delay that accompanies communication; accordingly, BNI and quantitative brain models seem to be ideal solutions to this most difficult obstacle.

References

1. Atkeson, C G, Hale, J, Pollick, F, Riley, M, Kotosaka, S, Schaal, S, Shibata, T, Tevatia, G, Vijayakumar, S, Ude, A, and Kawato, M (2000). “Using humanoid robots to study human behavior.” IEEE Intell. Syst. 15, 46–56. doi:10.1109/5254.895860
2. Bentivegna, D C, Atkeson, C G, and Cheng, G (2004a). “Learning tasks from observation and practice.” Rob. Auton. Syst. 47, 163–169.
3. Bentivegna, D C, Atkeson, C G, Ude, A, and Cheng, G (2004b). “Learning to act from observation and practice.” Int. J. Humanoid Rob. 1, 585–611.
4. Cheng, G, Fitzsimmons, N A, Morimoto, J, Lebedev, M A, Kawato, M, and Nicolelis, M A L (2007a). “Bipedal locomotion with a humanoid robot controlled by cortical ensemble activity.” Society for Neuroscience 37th Annual Meeting, San Diego, CA, USA.
5. Cheng, G, Hyon, S, Morimoto, J, Ude, A, Hale, J G, Colvin, G, Scroggin, W, and Jacobsen, S C (2007b). “CB: a humanoid research platform for exploring neuroscience.” Adv. Rob. 21, 1097–1114.
6. Endo, G, Morimoto, J, Matsubara, T, Nakanishi, J, and Cheng, G (2005). “Learning CPG sensory feedback with policy gradient for biped locomotion for a full body humanoid.” The Twentieth National Conference on Artificial Intelligence (AAAI-05) Proceedings, Pittsburgh, USA, July 9–13, 1237–1273.
7. Fetz, E E (1969). “Operant conditioning of cortical unit activity.” Science 163, 955–958. doi:10.1126/science.163.3870.955
8. Georgopoulos, A P, Kalaska, J F, Caminiti, R, and Massey, J T (1982). “On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex.” J. Neurosci. 2, 1527–1537.
9. Hale, J G, and Pollick, F E (2005). “‘Sticky hands’: learning and generalisation for cooperative physical interactions with a humanoid robot.” IEEE Trans. Syst., Man, Cybern., Part B: Cybern. 35, 512–521. doi:10.1109/TSMCC.2004.840063
10. Ijspeert, A J, Nakanishi, J, and Schaal, S (2002). “Movement imitation with nonlinear dynamical systems in humanoid robots.” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA2002), Washington, USA, May 11–15, 1398–1403.
11. Kamitani, Y, and Tong, F (2005). “Decoding the visual and subjective contents of the human brain.” Nat. Neurosci. 8, 679–685. doi:10.1038/nn1444
12. Kamitani, Y, and Tong, F (2006). “Decoding seen and attended motion directions from activity in the human visual cortex.” Curr. Biol. 16, 1096–1102. doi:10.1016/j.cub.2006.04.003
13. Kawato, M (2008). “From ‘Understanding the brain by creating the brain’ toward manipulative neuroscience.” Philos. Trans. R. Soc. London, Ser. B (in press).
14. Kawato, M, and Samejima, K (2007). “Efficient reinforcement learning: computational theories, neuroscience and robotics.” Curr. Opin. Neurobiol. 17, 205–212. doi:10.1016/j.conb.2007.03.004
15. Koike, Y, Hirose, H, Sakurai, Y, and Iijima, T (2006). “Prediction of arm trajectory from a small number of neuron activities in the primary motor cortex.” Neurosci. Res. 55, 146–153.
16. Matsubara, T, Morimoto, J, Nakanishi, J, Sato, M, and Doya, K (2006). “Learning CPG-based biped locomotion with a policy gradient method.” Rob. Auton. Syst. 54, 911–920. doi:10.1016/j.robot.2006.05.012
17. Miyamoto, H, Kawato, M, Setoyama, T, and Suzuki, R (1988). “Feedback-error-learning neural network for trajectory control of a robotic manipulator.” Neural Networks 1, 251–265. doi:10.1016/0893-6080(88)90030-5
18. Miyamoto, H, Schaal, S, Gandolfo, F, Gomi, H, Koike, Y, Osu, R, Nakano, E, Wada, Y, and Kawato, M (1996). “A Kendama learning robot based on dynamic optimization theory.” Neural Networks 9, 1281–1302. doi:10.1016/S0893-6080(96)00043-3
19. Morimoto, J, Endo, G, Nakanishi, J, Hyon, S, Cheng, G, Bentivegna, D C, and Atkeson, C G (2006). “Modulation of simple sinusoidal patterns by a coupled oscillator model for biped walking.” IEEE International Conference on Robotics and Automation (ICRA2006) Proceedings, Orlando, USA, May 15–19, 1579–1584.
20. Nakanishi, J, Morimoto, J, Endo, G, Cheng, G, Schaal, S, and Kawato, M (2004). “Learning from demonstration and adaptation of biped locomotion.” Rob. Auton. Syst. 47, 79–91. doi:10.1016/j.robot.2004.03.003
21. Nicolelis, M A (2001). “Actions from thoughts.” Nature 409, 403–407. doi:10.1038/35053191
22. Pollard, N S, Hodgins, J K, Riley, M J, and Atkeson, C G (2002). “Adapting human motion for the control of a humanoid robot.” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA2002), Washington, USA, May 11–15, 1390–1397.
23. Riley, M, Ude, A, and Atkeson, C G (2000). “Methods for motion generation and interaction with a humanoid robot: case studies of dancing and catching.” Proceedings of the 2000 Workshop on Interactive Robotics and Entertainment (WIRE-2000), Pittsburgh, USA, April 30–May 1, 35–42.
24. Sato, M, Yoshioka, T, Kajiwara, S, Toyama, K, Goda, N, Doya, K, and Kawato, M (2004). “Hierarchical Bayesian estimation for MEG inverse problem.” Neuroimage 23, 806–826.
25. Schaal, S (1999). “Is imitation learning the route to humanoid robots?” Trends Cogn. Sci. 3, 233–242. doi:10.1016/S1364-6613(99)01327-3
26. Schaal, S, and Atkeson, C G (1998). “Constructive incremental learning from only local information.” Neural Comput. 10, 2047–2084. doi:10.1162/089976698300016963
27. Schaal, S, Peters, J, Nakanishi, J, and Ijspeert, A (2003). “Control, planning, learning and imitation with dynamic movement primitives.” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2003) Workshop on Bilateral Paradigms of Human and Humanoid, Las Vegas, USA, October 27–31, 39–58.
28. Shibata, T, and Schaal, S (2001). “Biomimetic gaze stabilization based on feedback-error learning with nonparametric regression networks.” Neural Networks 14, 201–216. doi:10.1016/S0893-6080(00)00084-8
29. Shibata, T, Tabata, H, Schaal, S, and Kawato, M (2005). “A model of smooth pursuit based on learning of the target dynamics using only retinal signals.” Neural Networks 18, 213–225.
30. Taylor, D M, Tillery, S I, and Schwartz, A B (2002). “Direct cortical control of 3D neuroprosthetic devices.” Science 296, 1829–1832. doi:10.1126/science.1070291
31. Toda, A, Imamizu, H, Sato, M, Wada, Y, and Kawato, M (2007). “Reconstruction of temporal movement from single-trial non-invasive brain activity: a hierarchical Bayesian method.” Proceedings of ICONIP 2007, WED-4.
32. Todorov, E (2000). “Direct cortical control of muscle activation in voluntary arm movements: a model.” Nat. Neurosci. 3, 391–398. doi:10.1038/73964
33. Ude, A, and Atkeson, C G (2003). “Online tracking and mimicking of human movements by a humanoid robot.” Adv. Rob. 17, 165–178. doi:10.1163/156855303321165114
34. Ude, A, Atkeson, C G, and Riley, M (2004). “Programming full-body movements for humanoid robots by observation.” Rob. Auton. Syst. 47, 93–108. doi:10.1016/j.robot.2004.03.004
