Prof. Saeed Saeedvand is an Associate Professor in the Department of Electrical Engineering at National Taiwan Normal University (NTNU), Taiwan. Since 2009, he has directed the design and development of several robotic platforms, including kid-size and adult-size humanoids, wheeled-legged robots, and autonomous mobile robots. His teams have won more than 20 international awards and trophies at major robotics competitions and international events, reflecting a high level of integration across mechanical design, control, perception, and intelligence. Prof. Saeedvand's research is dedicated to advanced robotics, machine learning, and Deep Reinforcement Learning (DRL), in particular multi-agent learning, humanoid locomotion, robot manipulation, and sim-to-real transfer. He has also introduced new DRL schemes for robot control, such as multi-actor and cooperative policy optimization methods aimed at enhancing stability, learning performance, and robustness on challenging robotic platforms. His lab integrates NVIDIA Isaac-powered simulation platforms with real robots, with main areas of interest in balance control and object manipulation, learning in changing environments, and multi-robot learning. He is engaged in bringing advances in theoretical reinforcement learning to realistic hardware implementations. Beyond research, Prof. Saeedvand co-leads student teams in global competitions and supervises graduate students working on advanced robotics and real-world problems. His work also supports the creation of intelligent autonomous systems that can operate safely and effectively in real-world environments.
The Educational Robotics Center (ERC) of the Department of Electrical Engineering, National Taiwan Normal University (NTNU), is a state-of-the-art laboratory for research and development in robotic and intelligent systems. Our lab brings together students and researchers from around the world to work on the latest solutions in robotics, from fundamental hardware development to advanced artificial intelligence and machine learning. With a focus on practical real-world deployment and multidisciplinary innovation, ERC builds a broad range of platforms: humanoid robots, autonomous mobile systems, and special-purpose robots. We conduct research in robot design and control, SLAM and perception, machine learning and deep reinforcement learning (DRL), computer vision, inverse kinematics, and autonomous robotic behavior. Our teams maintain a culture of hands-on experimentation and regularly compete in major international robotics competitions such as the FIRA Robotics Olympics, where our robots have earned numerous top placements and awards. With a strong research culture and an excellent faculty led by Prof. Jacky Baltes and Prof. Saeed Saeedvand, ERC offers students a stimulating and collaborative setting for both theoretical and practical robotics research, spanning dynamic locomotion, autonomous navigation, and intelligent human-robot interaction. The center aims to advance robot intelligence in industrial, educational, entertainment, and athletic domains, developing the next generation of robotics and AI innovators.
Sim-to-Real Deep Reinforcement Learning for Dual-Arm Dynamic Object Balancing:
- This project investigates dual-arm robotic control via Deep Reinforcement Learning (DRL) to stabilize a board carrying dynamic loads, such as a rolling ball or a cup of water. The project targets sim-to-real transfer, combining simulation in NVIDIA Isaac with ROS, and addresses nonlinear dynamics, coordinated control, and robust deployment under real-world physical uncertainty.
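As an illustration of how such a task might be specified, the sketch below shows one possible shaped reward for keeping a ball centered on a dual-arm-held board. The function name, weights, and state variables are illustrative assumptions, not the project's actual reward design.

```python
import numpy as np

def balance_reward(ball_pos, ball_vel, board_tilt,
                   pos_weight=1.0, vel_weight=0.1, tilt_weight=0.05):
    """Illustrative shaped reward for a board-balancing task.

    ball_pos:   (x, y) displacement of the ball from the board center [m]
    ball_vel:   (vx, vy) ball velocity in the board frame [m/s]
    board_tilt: (roll, pitch) of the board [rad]
    Returns a scalar in (0, 1]; 1.0 when the ball is centered and still
    on a level board, decaying smoothly as the state drifts away.
    """
    pos_cost = pos_weight * np.sum(np.square(ball_pos))
    vel_cost = vel_weight * np.sum(np.square(ball_vel))
    tilt_cost = tilt_weight * np.sum(np.square(board_tilt))
    return float(np.exp(-(pos_cost + vel_cost + tilt_cost)))
```

The exponential form keeps the reward bounded and dense, which tends to ease DRL training compared with sparse success/failure signals.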
Learning-Based Whole-Body Control for Humanoid Dynamic Balance and Interaction:
- This project develops learning-based whole-body controllers for humanoid robots, targeting dynamic balance and physical interaction with the environment. Building on DRL and NVIDIA Isaac-based simulation, the study addresses balance recovery under disturbances and sim-to-real deployment on real humanoid hardware.
Reinforcement Learning for Adaptive Quadruped Locomotion and Terrain-Aware Stability
- This project investigates deep reinforcement learning methods for quadruped robots to enable stable locomotion on uneven and dynamic surfaces. The study emphasizes adaptive gait generation, disturbance rejection, and sim-to-real transfer of learned locomotion policies, with the goal of robust autonomous navigation and load-bearing in real-world situations.
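Two of the ingredients mentioned above, terrain/physics randomization and disturbance rejection, can be sketched in a few lines. The parameter names and ranges below are illustrative placeholders, not the lab's actual training settings.

```python
import math
import random

def randomize_episode(rng=random):
    """Sample per-episode physics and terrain parameters (domain randomization).

    Ranges are illustrative placeholders chosen for this sketch.
    """
    return {
        "ground_friction": rng.uniform(0.4, 1.2),
        "payload_mass_kg": rng.uniform(0.0, 3.0),       # load-bearing variation
        "terrain_roughness_m": rng.uniform(0.0, 0.05),  # height-field amplitude
        "motor_strength_scale": rng.uniform(0.85, 1.15),
    }

def push_disturbance(step, push_interval=200, push_force_n=40.0, rng=random):
    """Return an (fx, fy) push on the robot base at fixed intervals so the
    policy learns disturbance rejection; zero force on all other steps."""
    if step > 0 and step % push_interval == 0:
        angle = rng.uniform(0.0, 2.0 * math.pi)
        return (push_force_n * math.cos(angle), push_force_n * math.sin(angle))
    return (0.0, 0.0)
```

In practice, `randomize_episode` would be called at every environment reset and `push_disturbance` at every simulation step, so the learned gait never overfits to one fixed terrain or an unperturbed base.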
Selected Recent Robotic Competition Results:
2025, FIRA HuroCup Kid-Sized, Taiwan:
- 1st place in All-Rounds (Team A)
- 3rd place in All-Rounds (Team B)
- 2nd place in Sprint, Spartan Race, and Marathon (Team A)
- 3rd place in Sprint, Archery, and Marathon (Team B)
2024, FIRA HuroCup Kid-Sized, Brazil:
- 2nd place in All-Rounds (2nd place in all 8 events)
2024, FIRA HuroCup Kid-Sized, Taiwan:
- 1st place in Sprint
- 2nd place in Marathon
- 1st place in All-Rounds
2023, FIRA HuroCup Kid-Sized:
- 2nd place in Marathon
- 2nd place in All-Rounds
2022, FIRA HuroCup Adult-Sized and Kid-Sized:
- 2nd place in Sprint
- 3rd place in Basketball
- 2nd place in All-Rounds
Applicants should have an academic background in one or more of the following fields:
- Electrical Engineering
- Mechanical Engineering
- Mechatronics Engineering
- Robotics
- Computer Science
- Control Engineering
- Artificial Intelligence
- Automation Engineering
Required Knowledge
- Basic programming skills (Python preferred)
- Fundamental understanding of linear algebra and calculus
- Basic knowledge of control systems or robotics
Preferred (But Not Mandatory)
- Experience with Machine Learning or Deep Reinforcement Learning
- Familiarity with robotics simulation tools (e.g., NVIDIA Isaac Sim, Gazebo, Gym environments)
- Knowledge of ROS or robotic hardware systems
Job Description
Key Responsibilities
- Develop DRL-based control policies for coordinated dual-arm robot manipulation.
- Design and implement robotic learning environments using simulation platforms such as NVIDIA Isaac Sim / Isaac Lab / Gym.
- Apply Sim-to-Real transfer techniques, including:
- Domain randomization
- System identification
- Sensor and actuator noise modeling
- Policy robustness optimization
- Train reinforcement learning agents for dynamic balance and interaction tasks (e.g., object stabilization or cooperative manipulation).
- Deploy trained policies onto real robotic hardware.
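As a taste of the sensor and actuator noise modeling listed above, the sketch below injects bias, white noise, and a one-step transport delay into simulated observations. The class name and all parameter values are illustrative assumptions, not identified from the lab's real hardware.

```python
import numpy as np

class SensorNoiseModel:
    """Corrupts clean simulator observations to approximate real sensors.

    Models three common imperfections: a fixed per-episode bias (resampled
    at construction, a simple form of domain randomization), additive white
    noise, and a one-step transport delay. All parameters are illustrative.
    """

    def __init__(self, obs_dim, noise_std=0.01, bias_range=0.02, seed=0):
        self.rng = np.random.default_rng(seed)
        self.noise_std = noise_std
        self.bias = self.rng.uniform(-bias_range, bias_range, size=obs_dim)
        self.prev_obs = np.zeros(obs_dim)  # delayed reading delivered next call

    def __call__(self, obs):
        obs = np.asarray(obs, dtype=float)
        delayed = self.prev_obs.copy()  # deliver last step's corrupted reading
        self.prev_obs = obs + self.bias + self.rng.normal(
            0.0, self.noise_std, obs.shape)
        return delayed
```

Training a policy against such a model in simulation tends to make it far less brittle when deployed on physical robots, where perfectly clean, instantaneous state is never available.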
Preferred Intern Educational Level
Master's or Bachelor's level; Ph.D. students are also welcome.
Skill sets or Qualities
- Background in Robotics, Electrical Engineering, Computer Science, or related fields.
- Knowledge of Python programming.
- Basic understanding of:
- Reinforcement Learning
- Robot kinematics and dynamics
- Control systems
- Familiarity with Linux environments.