Command Fusion for Navigation of Mobile Robots in Dynamic Environments with Objects
Journal of Information and Communication Convergence Engineering, 2013 Mar; 11(1): 24-29
Copyright ©2013, The Korean Institute of Information and Communication Engineering
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Received : September 21, 2012
  • Accepted : December 18, 2012
  • Published : March 31, 2013
About the Authors
Taeseok Jin
jints@dongseo.ac.kr

Abstract
In this paper, we propose a fuzzy inference model for a navigation algorithm that allows a mobile robot to intelligently search for a goal location in unknown dynamic environments. Our model uses sensor fusion based on situational commands using an ultrasonic sensor. Instead of the “physical sensor fusion” method, which generates the trajectory of a robot based upon the environment model and sensory data, a “command fusion” method is used to govern the robot motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal-approach and obstacle-avoidance within a hierarchical behavior-based control architecture. To identify the environments, a command fusion technique is introduced in which the sensory data of the ultrasonic sensors and a vision sensor are fused into the identification process. The experimental results highlight interesting aspects of the goal-seeking, obstacle-avoiding, and decision-making processes that arise from navigation interaction.
I. INTRODUCTION
An autonomous mobile robot is an intelligent robot that performs tasks by interacting with the surrounding environment through sensors without human control. Unlike general manipulators in a fixed working environment [1, 2], intelligent processing in a flexible and variable working environment is required. Robust behavior by autonomous robots requires that the uncertainty in such environments be accommodated by the robot control system. Therefore, studies on fuzzy rule-based control are attractive in this field. Fuzzy logic is particularly well suited for implementing such controllers due to its capabilities for inference and approximate reasoning under uncertainty [3-5]. Many fuzzy controllers proposed in the literature utilize a monolithic rule-based structure. That is, the precepts that govern the desired system behavior are encapsulated as a single collection of if-then rules. In most instances, these rules are designed to carry out a single control policy or goal. However, to achieve autonomy, mobile robots must be capable of achieving multiple goals whose priorities may change with time. Thus, controllers should be designed to realize a number of task-achieving behaviors that can be integrated to achieve different control objectives. This requires the formulation of a large and complex set of fuzzy rules. In this situation, a potential limitation to the utility of the single-command fuzzy controller becomes apparent. Since the size of the complete single-command rule base increases exponentially with the number of input variables [6, 7], multi-input systems can potentially suffer degradations in real-time response. This is a critical issue for mobile robots operating in dynamic surroundings [8, 9]. Hierarchical rule structures can be employed to overcome this limitation by reducing the rate of increase to linear [1, 10].
This paper describes a hierarchical behavior-based control architecture. It is structured as a hierarchy of fuzzy rule bases that enables the distribution of intelligence amongst special purpose fuzzy-behaviors. This structure is motivated by the hierarchical nature of the behavior as hypothesized in ethological models. A fuzzy coordination scheme is also described that employs weighted decision making based on contextual behavior activation. Performance is demonstrated by simulation that highlights interesting aspects of the decision making process that arise from behavior interaction.
This paper is organized as follows. Section II briefly introduces the operation of each command and the fuzzy controller for the navigation system. Section III explains the behavior hierarchy based on fuzzy logic. In Section IV, the experimental results that verify the efficiency of the system are presented. Finally, Section V concludes this work and outlines possible future work.
II. SYSTEM MODEL AND METHODS
The proposed fuzzy controller is shown in Fig. 1. We define three major navigation goals: target orientation, obstacle avoidance, and rotation movement. Each goal is represented as a cost function. Note that the fusion process forms a single cost function by combining the individual cost functions using weights. In this fusion process, we infer each weight of the command by a fuzzy algorithm, a typical artificial intelligence scheme. With the proposed method, the mobile robot navigates intelligently by varying the weights depending on the environment and selects a final command that keeps the variation of the orientation and velocity to a minimum according to the cost function [11-15].
Fig. 1. Overall structure of the navigation algorithm.
- A. Goal Seeking Command
The orientation command of a mobile robot is generated as the direction closest to the target point. The command is defined by the distance to the target point when the robot moves with orientation θ and velocity v. Therefore, the cost function is defined as Eq. (1).
(Eq. (1): goal-seeking cost function; the published equation is not reproduced here.)
where v = v_max − k·|θ_c − θ| and k represents the reduction ratio of the rotational movement.
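As a minimal illustration of how such a goal-approach cost might be evaluated (a sketch only: the one-step position prediction, the time step dt, and the names goal_cost, v_max, and k are assumptions, since the published form of Eq. (1) is not reproduced above), a candidate heading θ can be scored by the distance remaining to the target after moving one step at the reduced velocity:

import math

def goal_cost(theta, robot_xy, goal_xy, theta_c, v_max=0.5, k=0.2, dt=0.1):
    """Illustrative goal-approach cost for a candidate heading theta.

    The speed is reduced as the candidate heading deviates from the current
    heading theta_c (v = v_max - k*|theta_c - theta|), and the cost is the
    distance from the predicted next position to the goal (assumed form).
    """
    v = max(0.0, v_max - k * abs(theta_c - theta))        # reduced speed
    nx = robot_xy[0] + v * dt * math.cos(theta)           # predicted x
    ny = robot_xy[1] + v * dt * math.sin(theta)           # predicted y
    return math.hypot(goal_xy[0] - nx, goal_xy[1] - ny)   # remaining distance

A smaller cost therefore corresponds to a heading and velocity pair that brings the robot closer to the target.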
- B. Avoiding Obstacle Command
We represent the cost function for obstacle avoidance using the shortest distance to an obstacle, based upon the sensor data in the form of a histogram. The distance information is expressed as a second-order energy and converted into a cost function by inspecting it for all θ, as shown in Eq. (2).
(Eq. (2): obstacle-avoidance cost function; the published equation is not reproduced here.)
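A hedged sketch of such a histogram-based avoidance cost follows; the quadratic penalty, the clipping range d_max, and the name obstacle_cost are assumptions. The only properties taken from the text are that the cost is evaluated per direction θ from the shortest measured ultrasonic distance and grows as that distance shrinks:

import numpy as np

def obstacle_cost(distances, d_max=3.0):
    """Illustrative obstacle-avoidance cost per candidate direction.

    `distances` is a polar histogram holding the shortest ultrasonic range
    measured in each candidate direction theta.  Closer obstacles produce a
    larger, second-order (quadratic) penalty.
    """
    d = np.clip(np.asarray(distances, dtype=float), 1e-3, d_max)
    return ((d_max - d) / d_max) ** 2   # ~0 when free, ~1 when an obstacle is very close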
To navigate to the goal in a dynamic environment, a mobile robot should recognize the dynamic variation and react to it. For this, the mobile robot extracts the variation in the surrounding environment by comparing the past and the present. For continuous movement of the robot, the transformation matrix of a past frame w.r.t. the present frame should be clearly defined.
In Fig. 2, P_{n−1} is defined as the position vector of the mobile robot w.r.t. the {n−1} frame, and P_n as the corresponding vector w.r.t. the {n} frame. The relation between P_{n−1} and P_n is then

P_n = R·P_{n−1} + T    (3)

Fig. 2. Transformation of the frame.

Here, R is the rotation matrix from the {n−1} frame to the {n} frame, and T is the translation matrix from the {n−1} frame to the {n} frame.
According to Eq. (3), the environment information measured in the {n−1} frame can be represented w.r.t. the {n} frame. Thus, if W_{n−1} and W_n are the environment information in polar coordinates measured in the {n−1} and {n} frames, respectively, we can represent W_{n−1} w.r.t. the {n} frame and extract the moving object using Eq. (4) in the {n} frame.
(Eq. (4): extraction of the moving object by comparing W_n with ⁿW_{n−1}; the published equation is not reproduced here.)
where ⁿW_{n−1} represents W_{n−1} transformed into the {n} frame.
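The following sketch shows how this comparison could be carried out for a planar robot; the point-cloud representation, the nearest-neighbour differencing radius, and the names transform_scan and moving_points are assumptions rather than the paper's implementation:

import numpy as np

def transform_scan(points_prev, dtheta, t_xy):
    """Re-express 2-D scan points from the {n-1} frame in the {n} frame.

    dtheta : rotation between the two frames (rad)
    t_xy   : origin of the {n-1} frame expressed in the {n} frame
    """
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])                  # rotation {n-1} -> {n}
    return points_prev @ R.T + np.asarray(t_xy)      # Eq. (3) applied to each point

def moving_points(points_now, points_prev_in_n, radius=0.15):
    """Flag present-frame points with no nearby counterpart in the transformed
    previous scan; such points are candidates for moving objects."""
    flags = []
    for p in points_now:
        d_min = np.linalg.norm(points_prev_in_n - p, axis=1).min()
        flags.append(d_min > radius)
    return np.array(flags)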
- C. Minimizing Rotation Command
Minimizing rotational movement aims to rotate the wheels smoothly by restraining rapid motion. The corresponding cost function takes its minimum at the present orientation and is defined as a second-order function of the rotation angle θ, as in Eq. (5).
(Eq. (5): second-order rotation cost, minimal at the present orientation; the published equation is not reproduced here.)
The command represented by these cost functions has three different goals to be satisfied at the same time. Each goal contributes to the command with a different weight, as shown in Eq. (6).
(Eq. (6): weighted combination of the three cost functions with weights ω1, ω2, and ω3; the published equation is not reproduced here.)
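To make the fusion step concrete, a minimal sketch is given below. It assumes a quadratic form for the rotation cost of Eq. (5) and a plain weighted sum for Eq. (6), evaluated over a grid of candidate headings whose per-heading goal and obstacle costs could come from the earlier goal_cost and obstacle_cost sketches; none of these names appear in the original paper:

import numpy as np

def rotation_cost(thetas, theta_c):
    """Second-order penalty, minimal at the present orientation (assumed form of Eq. (5))."""
    return (np.asarray(thetas) - theta_c) ** 2

def fuse_commands(thetas, c_goal, c_obs, c_rot, w1, w2, w3):
    """Weighted combination of the three cost functions (assumed form of Eq. (6))
    and selection of the candidate heading with the minimum fused cost."""
    total = w1 * np.asarray(c_goal) + w2 * np.asarray(c_obs) + w3 * np.asarray(c_rot)
    best = int(np.argmin(total))
    return thetas[best], total[best]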
III. BEHAVIOR HIERARCHY BY FUZZY LOGIC
- A. Behavior Decision
Primitive behaviors are low-level behaviors that typically take the inputs from the robot’s sensors and send the outputs to the robot’s actuator. This forms a nonlinear map between them. Composite behaviors form a map between the sensory input and/or the global constraints and the degree of applicability (DOA) of the relevant primitive behaviors. The DOA is the measure of the instantaneous level of the activation of a behavior. The primitive behaviors are weighted by the DOA and aggregated to form the composite behaviors. This is a general form of behavior fusion that can degenerate to behavior switching for DOA = 0 or 1 [16, 17].
At the primitive level, behaviors are synthesized as fuzzy rule bases, i.e., collections of fuzzy if-then rules. Each behavior is encoded with a distinct control policy governed by fuzzy inference. If x and y are the input and output universes of discourse of a behavior with a rule base of size n, the usual fuzzy if-then rule takes the following form:
IF x is Ai THEN y is Bi
where x and y represent the input and output fuzzy linguistic variables, respectively, and Ai and Bi (i = 1, …, n) are the fuzzy subsets representing the linguistic values of x and y. Typically, x refers to the sensory data and y to the actuator control signals. The antecedent and the consequent can also be a conjunction of propositions (e.g., IF x1 is Ai,1 AND … AND xn is Ai,n THEN …).
At the composition level, the DOA is evaluated using a fuzzy rule base in which the global knowledge and constraints are incorporated. An activation level (threshold) at which the rules become applicable is applied to the DOA, giving the system more degrees of freedom. The DOA of each primitive behavior is specified in the consequent of applicability rules of the form:
IF x is Ai THEN αj is Di
where x is typically the global constraint, αj ∈ [0,1] is the DOA, and Ai and Di are the fuzzy sets of the linguistic variables describing them. As in the former case, the antecedent and the consequent can also be a conjunction of propositions.
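As an illustration of this composition scheme, the sketch below blends primitive-behavior outputs by their DOAs and drops behaviors whose DOA falls below the activation threshold; the triangular membership function, the 0.1 threshold, and the example numbers are assumptions, not values from the paper:

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuse_behaviors(outputs, doas, threshold=0.1):
    """DOA-weighted fusion of primitive-behavior outputs.

    Behaviors whose degree of applicability is below the activation threshold
    are ignored; DOA = 0 or 1 degenerates to plain behavior switching.
    """
    num = den = 0.0
    for u, a in zip(outputs, doas):
        if a >= threshold:
            num += a * u
            den += a
    return num / den if den > 0 else 0.0

# Example: DOA of an "avoid obstacle" behavior inferred from obstacle distance (m).
doa_avoid = tri(0.6, 0.0, 0.2, 1.2)                        # nearer obstacle -> higher DOA
turn_rate = fuse_behaviors([0.8, 0.0], [doa_avoid, 0.3])   # blend two behavior outputs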
- B. Inference System
We infer the weights of Eq. (6) by means of a fuzzy algorithm. The main reason for using a fuzzy algorithm is that it is easy to reflect human intelligence into the robot control. A fuzzy inference system is developed through the process of setting each situation, developing fuzzy logic with the proper weights, and calculating the weights for the commands.
Fig. 3. Structure of the fuzzy inference system.
Table 1. Inference rule of each weight system (table not reproduced here).
Fig. 3 shows the structure of the fuzzy inference system. We define the circumstances and the state of the mobile robot as the inputs of the fuzzy inference system and infer the weights of the cost functions. The inferred weights determine the cost function that directs the robot and determines the velocity of rotation. For control of the mobile robot, the results are transformed into the joint angular velocities by the inverse kinematics of the robot. Table 1 shows the output surface of the fuzzy inference system for each fuzzy weight subset using the inputs and the output. The control inference rules are: ω1, the fuzzy logic controller for seeking the goal; ω2, the fuzzy logic controller for avoiding the obstacle; and ω3, the fuzzy logic controller for minimizing the rotation, as shown in Table 1.
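A compact sketch of these two final steps is given below: a stand-in for the weight inference (the rules of Table 1 are not reproduced, so the simple proximity-based rule here is an assumption) and the inverse kinematics of a differential-drive robot that maps the fused command (v, ω) to the wheel angular velocities; the wheel radius and axle length are assumed parameters, not values from the paper:

def infer_weights(d_obs, d_max=3.0):
    """Illustrative stand-in for the fuzzy weight inference of Table 1.

    The closer the nearest obstacle, the more weight w2 (obstacle avoidance)
    receives; otherwise the goal-seeking weight w1 dominates.  w3 keeps a small
    constant preference for smooth rotation.  Weights are normalised to sum to 1.
    """
    near = max(0.0, min(1.0, (d_max - d_obs) / d_max))   # degree of "obstacle is near"
    w1, w2, w3 = 1.0 - near, near, 0.2
    s = w1 + w2 + w3
    return w1 / s, w2 / s, w3 / s

def wheel_speeds(v, omega, wheel_radius=0.075, axle_length=0.40):
    """Inverse kinematics of a differential-drive robot: map the fused linear
    velocity v (m/s) and angular velocity omega (rad/s) to the left/right
    wheel angular velocities (rad/s)."""
    w_right = (v + 0.5 * axle_length * omega) / wheel_radius
    w_left = (v - 0.5 * axle_length * omega) / wheel_radius
    return w_left, w_right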
IV. EXPERIMENTS
Fig. 4a shows the image used in the experiment, and Fig. 4b shows the values resulting from matching after image processing. Fig. 4 shows that the maximum matching error is within 4%; therefore, our vision system is feasible for navigation. The mobile robot navigates along a corridor 2 m wide with some obstacles, as shown in Fig. 5a. The real trace of the mobile robot is shown in Fig. 5b. It demonstrates that the mobile robot avoids the obstacles intelligently and follows the corridor to the goal.
V. CONCLUSIONS
A fuzzy control algorithm for both obstacle avoidance and path planning was implemented in experiments. It enables a mobile robot to reach its goal point in unknown environments safely and autonomously.
Fig. 4. Experimental result of the vision system: (a) input image, (b) result of matching.
Fig. 5. Navigation of a robot in a corridor environment: (a) navigation in a corridor without a local minimum, (b) navigation in a corridor with a local minimum.
We also present an architecture for intelligent navigation of mobile robots that determines the robot’s behavior by arbitrating the distributed control commands: seek goal, avoid obstacles, and maintain heading. The commands are arbitrated by endowing them with weight values and combining them, and the weight values are obtained by a fuzzy inference method. The arbitrated command allows multiple goals and constraints to be considered simultaneously. To show the efficiency of the proposed method, real experiments were performed. The experimental results show that a mobile robot can navigate to the goal point safely in unknown environments and can also avoid moving obstacles autonomously. Our ongoing research endeavors will include validation of more complex sets of behaviors, both in simulation and with an actual mobile robot. Further improvements of the prediction algorithm for obstacles and the robustness of performance are required.
Acknowledgements
This paper was supported by the Business for Cooperative R&D between Industry, Academy, and Research Institute funded by the Korea Small and Medium Business Administration in 2012 (Grants No. 00045079), and the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 2010-0021054).
References
[1] M. J. Er, T. P. Tan, and S. Y. Loh, "Control of a mobile robot using generalized dynamic fuzzy neural networks," Microprocessors and Microsystems, vol. 28, no. 9, pp. 491-498, 2004. DOI: 10.1016/j.micpro.2004.04.002
[2] L. A. Zadeh, "Outline of a new approach to the analysis of complex systems and decision processes," IEEE Transactions on Systems, Man, and Cybernetics, vol. 3, no. 1, pp. 28-44, 1973.
[3] D. Nair and J. K. Aggarwal, "Moving obstacle detection from a navigation robot," IEEE Transactions on Robotics and Automation, vol. 14, no. 3, pp. 404-416, 1998. DOI: 10.1109/70.678450
[4] S. Bentalba, A. El Hajjaji, and A. Rachid, "Fuzzy control of a mobile robot: a new approach," in Proceedings of the IEEE International Conference on Control Applications, Hartford, CT, 1997, pp. 69-72.
[5] T. Furuhashi, K. Nakaoka, K. Morikawa, H. Maeda, and Y. Uchikawa, "A study on knowledge finding using fuzzy classifier system," Journal of Japan Society for Fuzzy Theory and Systems, vol. 7, no. 4, pp. 839-848, 1995.
[6] H. Itani and T. Furuhashi, "A study on teaching information understanding by autonomous mobile robot," Transactions of the SICE, vol. 38, no. 11, pp. 966-973, 2002.
[7] J. Mehenen, M. Koppen, A. Saad, and A. Tiwari, Application of Soft Computing: From Theory to Praxis. Berlin: Springer, 2009.
[8] H. R. Beom and H. S. Cho, "A sensor-based navigation for a mobile robot using fuzzy logic and reinforcement learning," IEEE Transactions on Systems, Man and Cybernetics, vol. 25, no. 3, pp. 464-477, 1995. DOI: 10.1109/21.364859
[9] A. Ohya, A. Kosaka, and A. C. Kak, "Vision-based navigation by a mobile robot with obstacle avoidance using single-camera vision and ultrasonic sensing," IEEE Transactions on Robotics and Automation, vol. 14, no. 6, pp. 969-978, 1998. DOI: 10.1109/70.736780
[10] H. Mehrjerdi, M. M. Saad, and J. Ghommam, "Hierarchical fuzzy cooperative control and path following for a team of mobile robots," IEEE/ASME Transactions on Mechatronics, vol. 16, no. 5, pp. 907-917, 2011. DOI: 10.1109/TMECH.2010.2054101
[11] D. Wang, Y. Zhang, and W. Si, "Behavior-based hierarchical fuzzy control for mobile robot navigation in dynamic environment," in Proceedings of the 2011 Chinese Control and Decision Conference (CCDC), Mianyang, China, 2011, pp. 2419-2424.
[12] L. Jouffe, "Fuzzy inference system learning by reinforcement methods," IEEE Transactions on Systems, Man, and Cybernetics, Part C, vol. 28, no. 3, pp. 338-355, 1998. DOI: 10.1109/5326.704563
[13] G. Leng, T. M. McGinnity, and G. Prasad, "An approach for on-line extraction of fuzzy rules using a self-organising fuzzy neural network," Fuzzy Sets and Systems, vol. 150, no. 2, pp. 211-243, 2005.
[14] T. Nishina and M. Hagiwara, "Fuzzy inference neural network," Neurocomputing, vol. 14, no. 3, pp. 223-239, 1997. DOI: 10.1016/S0925-2312(96)00036-7
[15] T. Takahama, S. Sakai, H. Ogura, and M. Nakamura, "Learning fuzzy rules for bang-bang control by reinforcement learning method," Journal of Japan Society for Fuzzy Theory and Systems, vol. 8, no. 1, pp. 115-122, 1996.
[16] E. Tunstel, "Fuzzy-behavior synthesis, coordination, and evolution in an adaptive behavior hierarchy," in Fuzzy Logic Techniques for Autonomous Vehicle Navigation. Heidelberg, Germany: Physica-Verlag, 2000.
[17] E. Tunstel, "Fuzzy behavior modulation with threshold activation for autonomous vehicle navigation," in Proceedings of the 18th International Conference of the North American Fuzzy Information Processing Society, New York, NY, 1999, pp. 776-780.