A Hybrid AI-Driven Fault-Tolerant Control Framework: Reinforcement Learning and LSTM-Based Adaptive Backstepping for High-Precision Robotic Manipulators
Abstract
This paper presents an advanced control framework for robotic manipulators that improves accuracy, stability, and fault tolerance in dynamic environments by combining reinforcement learning with recurrent neural networks. The method integrates adaptive backstepping control with fast non-singular terminal sliding mode control, raising system performance by intelligently adjusting control parameters and predicting faults in real time. A long short-term memory (LSTM) neural network serves as a fault detector and predicts the occurrence of actuator and sensor faults with 96.3% accuracy. In addition, a reinforcement learning agent based on the Proximal Policy Optimization (PPO) algorithm optimizes the control parameters online. Simulations on a two-degree-of-freedom robotic manipulator show that the proposed method not only reduces trajectory tracking error but also improves the system's response time under dynamic conditions and maintains reliable performance even in the presence of severe actuator failures. Because it can predict faults and adapt in real time, this hybrid approach scales to more complex robotic systems and is well suited to safety-critical applications such as robotic surgery and self-driving vehicles. The results indicate that the proposed framework is an innovative step in the design of advanced control systems, with strong potential to improve reliability and accuracy in real-world environments.
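The control loop the abstract describes can be illustrated with a minimal sketch. The code below tracks a sinusoidal reference for a single joint modeled as a double integrator under a fault-like disturbance, using a conventional sliding mode law with a smoothed switching term. All gains (`k1`, `eta`, `phi`), the disturbance, and the reference trajectory are illustrative assumptions, not the paper's design; the paper's fast non-singular terminal surface, LSTM fault detector, and PPO gain tuner are reduced here to a fixed-gain controller with commented hook points.

```python
import numpy as np

def simulate(T=5.0, dt=1e-3, k1=5.0, eta=10.0, phi=0.05):
    """Track qd(t) = sin(t) for one joint with dynamics q_ddot = u + d,
    where d emulates an actuator fault. Returns the final tracking error."""
    q, dq = 0.5, 0.0                      # start 0.5 rad off the trajectory
    for ti in np.arange(0.0, T, dt):
        qd, dqd, ddqd = np.sin(ti), np.cos(ti), -np.sin(ti)
        e, de = q - qd, dq - dqd
        s = de + k1 * e                   # linear sliding surface (simplified)
        # Hook points (placeholders): in the paper's framework an LSTM would
        # flag incipient faults from recent (e, de, u) history, and a PPO
        # agent would retune (k1, eta) online; both are omitted here.
        u = ddqd - k1 * de - eta * np.tanh(s / phi)  # smoothed switching law
        d = 0.5 * np.sin(3.0 * ti)        # fault-like bounded disturbance
        dq += (u + d) * dt                # Euler integration of the dynamics
        q += dq * dt
    return abs(q - np.sin(T))

print(f"final tracking error: {simulate():.2e} rad")
```

With the reaching gain `eta` dominating the disturbance bound, the error converges into a small boundary layer despite the fault-like perturbation; in the full framework, the LSTM's fault prediction would let the agent raise the gains before a fault degrades tracking.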
Article Details

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.