Robot path planning using an enhanced Q-learning algorithm based on a single parameter
One of the challenging aspects of robot navigation is path planning in a dynamic environment. Path planning is a vital capability for any intelligent mobile robot, and the Q-learning algorithm is one of the reinforcement learning techniques that can be applied to the path planning of a mobile robot.
However, the traditional Q-learning method examines every conceivable state of the robot to choose the optimal path. As a result, this method is very computationally intensive, especially in large environments. This study proposes a modified version of the technique for planning robot paths.
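To make that cost concrete, here is a minimal sketch of standard tabular Q-learning on a hypothetical 10x10 grid world. It is an illustration of the baseline algorithm the article refers to, not the authors' implementation; the grid size, rewards, and hyperparameters are assumptions chosen for the example. Note that the Q-table holds one value per state-action pair, which is exactly why large environments become expensive.

```python
import random

# Minimal tabular Q-learning sketch on an assumed 10x10 grid world.
# Grid size, rewards, and hyperparameters are illustrative choices,
# not the paper's experimental setup.
SIZE = 10
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-connected moves
GOAL = (SIZE - 1, SIZE - 1)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# One table entry per (state, action) pair: this is the storage/compute
# cost that grows with the size of the environment.
Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE)
     for a in range(len(ACTIONS))}

def step(state, action):
    dr, dc = ACTIONS[action]
    r = min(max(state[0] + dr, 0), SIZE - 1)
    c = min(max(state[1] + dc, 0), SIZE - 1)
    nxt = (r, c)
    reward = 10.0 if nxt == GOAL else -1.0  # step penalty favors short paths
    return nxt, reward, nxt == GOAL

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        if random.random() < EPSILON:  # epsilon-greedy exploration
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in range(len(ACTIONS)))
        # Standard two-parameter update: learning rate alpha, discount gamma.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
```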
By using the learning-rate term (1-α) in place of the discount factor (γ), the algorithm comes to depend on a single parameter. This reduces the number of parameters to tune and increases the algorithm's execution efficiency. The modified version of Q-learning was evaluated on optimal path planning in several environments with dynamic obstacles.
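On my reading of that modification, the standard update Q(s,a) ← Q(s,a) + α[r + γ max Q(s',a') - Q(s,a)] simply has γ replaced by (1-α), so α alone controls both the learning rate and the effective discounting. A hedged sketch of that one-parameter update, as an interpretation of the article's description rather than the authors' published code:

```python
def q_update_single_param(Q, state, action, reward, next_state, alpha, n_actions):
    """Single-parameter Q-learning update: the discount factor gamma is
    replaced by (1 - alpha), so alpha is the only tunable parameter.
    This is an interpretation of the article's description, not the
    authors' published code."""
    best_next = max(Q[(next_state, a)] for a in range(n_actions))
    td_target = reward + (1.0 - alpha) * best_next  # gamma := 1 - alpha
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])
```

One consequence of this coupling is that a large learning rate implies heavy discounting (and vice versa), which is the trade-off accepted in exchange for having a single parameter to tune.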
Learning efficiency was further enhanced by using prioritized experience replay in the improved Inclined Eight Connection Q-learning Algorithm (I8QA). The proposed method was tested in a simulated environment and shown to successfully plan optimal paths among dynamic obstacles. Overall, Q-learning is a strong and adaptable reinforcement learning method that can be applied to a wide range of problems.
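Assuming "eight connection" refers to the usual 8-connected grid moves (the four cardinal directions plus diagonals) and the replay scheme is standard prioritized experience replay keyed on TD error, a generic sketch of those two ingredients might look like the following. This is not the authors' I8QA implementation, only an illustration of the named techniques.

```python
import heapq
import itertools

# Eight-connected action set: cardinal moves plus diagonals, the usual
# reading of "eight connection" in grid path planning (an assumption here).
ACTIONS_8 = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]

class PrioritizedReplay:
    """Minimal prioritized replay buffer keyed on |TD error|.
    Transitions with large errors are replayed first so the agent
    relearns from surprising experiences. A generic sketch of
    prioritized experience replay, not the authors' implementation."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal priorities

    def push(self, td_error, transition):
        # heapq is a min-heap, so store the negated priority.
        heapq.heappush(self._heap, (-abs(td_error), next(self._counter), transition))

    def pop(self):
        # Return the stored transition with the largest |TD error|.
        return heapq.heappop(self._heap)[2]

    def __len__(self):
        return len(self._heap)
```

During training, each (state, action, reward, next_state) transition would be pushed with its current TD error and periodically popped for extra update passes, so updates are spent where the value estimates are worst.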
The improvement ratio of path length in the experimental environment is 40.812%, indicating that the I8QA algorithm is better suited to dynamic environments.