In this paper, we propose a fuzzy inference model for a navigation algorithm for a mobile robot that intelligently searches for the goal location in unknown dynamic environments. Our model uses sensor fusion based on situational commands using an ultrasonic sensor. Instead of the “physical sensor fusion” method, which generates the trajectory of a robot based upon the environment model and sensory data, a “command fusion” method is used to govern the robot's motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance within a hierarchical behavior-based control architecture. To identify the environments, a command fusion technique is introduced in which the sensory data of the ultrasonic sensors and a vision sensor are fused into the identification process. The experimental results highlight interesting aspects of the goal-seeking, obstacle-avoiding, and decision-making processes that arise from navigation interaction.
I. INTRODUCTION
An autonomous mobile robot is an intelligent robot that performs tasks by interacting with its surrounding environment through sensors, without human control. Unlike general manipulators, which operate in a fixed working environment [1, 2], it requires intelligent processing in a flexible and variable working environment. Robust behavior by autonomous robots requires that the uncertainty of such environments be accommodated by the robot control system. Therefore, studies on fuzzy rule-based control are attractive in this field. Fuzzy logic is particularly well suited for implementing such controllers due to its capabilities for inference and approximate reasoning under uncertainty [3-5].
Many fuzzy controllers proposed in the literature utilize a monolithic rule-based structure. That is, the precepts that govern the desired system behavior are encapsulated as a single collection of if-then rules. In most instances, these rules are designed to carry out a single control policy or goal. However, to achieve autonomy, mobile robots must be capable of achieving multiple goals whose priorities may change with time. Thus, controllers should be designed to realize a number of task-achieving behaviors that can be integrated to achieve different control objectives. This requires the formulation of a large and complex set of fuzzy rules. In this situation, a potential limitation to the utility of a single-command fuzzy controller becomes apparent. Since the size of the complete single-command rule base increases exponentially with the number of input variables [6, 7], multi-input systems can suffer degraded real-time response. This is a critical issue for mobile robots operating in dynamic surroundings [8, 9]. Hierarchical rule structures can be employed to overcome this limitation by reducing the rate of increase to linear [1, 10].
This paper describes a hierarchical behavior-based control architecture. It is structured as a hierarchy of fuzzy rule bases that enables the distribution of intelligence amongst special-purpose fuzzy behaviors. This structure is motivated by the hierarchical nature of behavior as hypothesized in ethological models. A fuzzy coordination scheme is also described that employs weighted decision making based on contextual behavior activation. Performance is demonstrated by simulation that highlights interesting aspects of the decision-making process arising from behavior interaction.
First, Section II briefly introduces the operation of each command and the fuzzy controller for the navigation system. Section III explains the behavior hierarchy based on fuzzy logic. In Section IV, experimental results verifying the efficiency of the system are shown. Finally, Section V concludes this work and outlines possible future work.
II. SYSTEM MODEL AND METHODS
The proposed fuzzy controller is shown in Fig. 1. We define three major navigation goals: target orientation, obstacle avoidance, and rotation movement. Each goal is represented as a cost function. Note that the fusion process forms an overall cost function by combining the individual cost functions using weights. In this fusion process, we infer each command weight by a fuzzy algorithm, a typical artificial intelligence scheme. With the proposed method, the mobile robot navigates intelligently by varying the weights depending on the environment, and selects a final command that keeps the variation of orientation and velocity to a minimum according to the cost function [11-15].
Fig. 1. Overall structure of the navigation algorithm.
 A. Goal Seeking Command
The orientation command of a mobile robot is generated as the nearest direction to the target point. The command is defined by the distance to the target point when the robot moves with orientation θ and velocity v. Therefore, the cost function is defined as Eq. (1), where v = v_{max} − k⋅|θ_{c} − θ| and k represents the reduction ratio of the rotational movement.
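The paper's Eq. (1) is not reproduced here, but the velocity reduction law in the surrounding text can be sketched directly. The function name and the numeric values of v_{max} and k below are illustrative assumptions, not values from the paper:

```python
import math

def goal_velocity(theta, theta_c, v_max=0.5, k=0.3):
    """Velocity reduction law v = v_max - k * |theta_c - theta|:
    slow down as the commanded heading deviates from the current
    orientation (angles in radians), clamped at zero."""
    return max(0.0, v_max - k * abs(theta_c - theta))

# Heading aligned with the command: full speed v_max.
aligned = goal_velocity(0.0, 0.0)
# Large heading error: reduced (possibly zero) speed.
turned = goal_velocity(0.0, math.pi / 2)
```

With this shape, the robot automatically decelerates while turning sharply toward the target and accelerates again once aligned.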
 B. Avoiding Obstacle Command
We represent the cost function for obstacle avoidance as the shortest distance to an obstacle based upon the sensor data, in the form of a histogram. The distance information is represented as a form of second-order energy and is turned into a cost function by inspecting it for all θ, as shown in Eq. (2).
To navigate to the goal in a dynamic environment, a mobile robot should recognize dynamic variation and react to it. For this, the mobile robot extracts the variation in the surrounding environment by comparing the past with the present. For continuous movement of the robot, the transformation matrix of a past frame w.r.t. the present frame should be clearly defined.
In Fig. 2, one vector is defined as the position vector of the mobile robot w.r.t. the {n−1} frame, and the other as the corresponding position vector w.r.t. the {n} frame. Then, we obtain the relation between the two as in Eq. (3).
Fig. 2. Transformation of the frame.
Here, the two position vectors are related by the rotation matrix from the {n−1} frame to the {n} frame and the translation matrix from the {n−1} frame to the {n} frame.
According to Eq. (3), the environment information measured in the {n−1} frame can be represented w.r.t. the {n} frame. Thus, if W_{n−1} and W_{n} are the environment information in polar coordinates measured in the {n−1} and {n} frames, respectively, we can represent W_{n−1} w.r.t. the {n} frame and extract the moving object using Eq. (4) in the {n} frame, where ^{n}W_{n−1} represents W_{n−1} transformed into the {n} frame.
 C. Minimizing Rotation Command
Minimizing the rotational movement aims to rotate the wheels smoothly by restraining rapid motion. The cost function takes its minimum at the present orientation and is defined as a second-order function of the rotation angle θ, as in Eq. (5).
The command represented by the cost function has three different goals to be satisfied at the same time. Each goal contributes differently to the command through a different weight, as shown in Eq. (6).
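Since Eqs. (1)-(6) are not reproduced here, the weighted command fusion can be sketched with stand-in cost terms: a goal-seeking term, a second-order obstacle penalty, and a quadratic rotation cost about the present orientation, combined by the three weights and minimized over candidate headings. All function names and cost shapes below are illustrative assumptions, not the paper's exact equations:

```python
import math

def fused_cost(theta, theta_goal, theta_obs, theta_now, w):
    """Weighted combination of three command costs (Eq. (6) idea):
    goal seeking, obstacle avoidance (second-order penalty near the
    obstacle direction), and rotation minimization (quadratic about
    the present orientation, as in Eq. (5))."""
    c_goal = abs(theta_goal - theta)
    c_obst = max(0.0, 1.0 - abs(theta_obs - theta)) ** 2
    c_rot = (theta - theta_now) ** 2
    return w[0] * c_goal + w[1] * c_obst + w[2] * c_rot

def best_heading(theta_goal, theta_obs, theta_now, w, steps=181):
    """Select the heading in [-pi, pi] that minimizes the fused cost."""
    cands = [-math.pi + 2 * math.pi * i / (steps - 1) for i in range(steps)]
    return min(cands, key=lambda t: fused_cost(t, theta_goal, theta_obs, theta_now, w))
```

With the obstacle weight set high, the minimizer is pushed away from the obstacle direction; with only the goal weight active, it tracks the goal direction, which is the behavior the weight inference in Section III is meant to arbitrate.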
III. BEHAVIOR HIERARCHY BY FUZZY LOGIC
 A. Behavior Decision
Primitive behaviors are low-level behaviors that typically take inputs from the robot’s sensors and send outputs to the robot’s actuators, forming a nonlinear map between them. Composite behaviors map the sensory input and/or the global constraints to the degree of applicability (DOA) of the relevant primitive behaviors. The DOA is a measure of the instantaneous level of activation of a behavior. The primitive behaviors are weighted by the DOA and aggregated to form the composite behaviors. This is a general form of behavior fusion that degenerates to behavior switching for DOA = 0 or 1 [16, 17].
At the primitive level, behaviors are synthesized as fuzzy rule bases, i.e., collections of fuzzy if-then rules. Each behavior is encoded with a distinct control policy governed by fuzzy inference. If x and y are the input and output universes of discourse of a behavior with a rule base of size n, the usual fuzzy if-then rule takes the following form:
IF x is A_{i} THEN y is B_{i}
where x and y represent the input and output fuzzy linguistic variables, respectively, and A_{i} and B_{i} (i = 1, …, n) are the fuzzy subsets representing the linguistic values of x and y. Typically, x refers to the sensory data and y to the actuator control signals. The antecedent and the consequent can also be conjunctions of propositions (e.g., IF x_{1} is A_{i,1} AND … x_{n} is A_{i,n} THEN …).
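A minimal sketch of evaluating such a rule base follows, using triangular membership functions and a zero-order (singleton-consequent) simplification rather than the paper's full inference method; the two-rule base and all its numbers are hypothetical:

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(x, rules):
    """Evaluate rules 'IF x is A_i THEN y is B_i' with singleton
    consequents: average the consequent centers weighted by the
    firing strength A_i(x) of each antecedent."""
    num = den = 0.0
    for A, b_center in rules:
        w = A(x)
        num += w * b_center
        den += w
    return num / den if den else 0.0

# Hypothetical two-rule base: 'input near zero' -> low output,
# 'input near one' -> high output.
rules = [(lambda x: tri(x, -0.5, 0.0, 1.0), 0.1),
         (lambda x: tri(x, 0.0, 1.0, 1.5), 0.9)]
```

Between the two rule centers both rules fire partially, and the output interpolates smoothly between the consequents, which is the graded behavior the monolithic-versus-hierarchical discussion in Section I relies on.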
At the composition level, the DOA is evaluated using a fuzzy rule base in which the global knowledge and constraints are incorporated. An activation level (threshold) at which the rules become applicable is applied to the DOA, giving the system more degrees of freedom. The DOA of each primitive behavior is specified in the consequent of applicability rules of the form:
IF x is A_{i} THEN α_{j} is D_{i}
where x is typically the global constraint, α_{j} ∈ [0, 1] is the DOA, and A_{i} and D_{i} are the fuzzy sets of the linguistic variables describing them. As in the former case, the antecedent and the consequent can also be conjunctions of propositions.
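The DOA-weighted aggregation with threshold activation described above can be sketched as follows; the function name and threshold value are illustrative assumptions:

```python
def compose(primitive_outputs, doas, threshold=0.2):
    """Fuse primitive behavior outputs weighted by their degree of
    applicability (DOA). DOAs below the activation threshold are
    zeroed, so the fusion degenerates to behavior switching when a
    single DOA dominates (DOA = 0 or 1)."""
    active = [(out, a if a >= threshold else 0.0)
              for out, a in zip(primitive_outputs, doas)]
    total = sum(a for _, a in active)
    if total == 0.0:
        return 0.0  # no behavior applicable
    return sum(out * a for out, a in active) / total
```

With equal DOAs the composite output blends the primitives; once one DOA falls below the threshold, its behavior is excluded entirely, realizing the switching limit mentioned in the text.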
 B. Inference System
We infer the weights of Eq. (6) by means of a fuzzy algorithm. The main reason for using a fuzzy algorithm is that it is easy to reflect human intelligence into the robot control. A fuzzy inference system is developed through the process of setting each situation, developing fuzzy logic with the proper weights, and calculating the weights for the commands.
Fig. 3. Structure of the fuzzy inference system.
Table 1. Inference rule of each weight system
Fig. 3 shows the structure of the fuzzy inference system. We define the circumstances and the state of the mobile robot as the inputs of the fuzzy inference system and infer the weights of the cost functions. The inferred weights determine the cost function that directs the robot and determines the velocity of rotation. For control of the mobile robot, the results are transformed into joint angular velocities by the inverse kinematics of the robot.
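The paper does not give its robot's kinematic parameters. For a differential-drive base, the inverse kinematics step mentioned above commonly takes the following form; the wheel radius and axle length are assumed values for illustration:

```python
def wheel_speeds(v, omega, wheel_radius=0.05, axle_length=0.3):
    """Inverse kinematics of a differential-drive base: map the fused
    linear velocity v (m/s) and turn rate omega (rad/s) to left and
    right wheel angular velocities (rad/s). Parameters are illustrative."""
    v_left = v - omega * axle_length / 2.0
    v_right = v + omega * axle_length / 2.0
    return v_left / wheel_radius, v_right / wheel_radius
```

Driving straight yields equal wheel speeds; a pure rotation yields equal and opposite ones, so the fused (v, ω) command from the inference system maps directly onto the actuators.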
Table 1 shows the output surface of the fuzzy inference system for each fuzzy weight subset using the inputs and the output. The control inference rules are: ω_{1}, the fuzzy logic controller for seeking the goal; ω_{2}, the fuzzy logic controller for avoiding obstacles; and ω_{3}, the fuzzy logic controller for minimizing the rotation, as shown in Table 1.
IV. EXPERIMENTS
Fig. 4a is the image used in the experiment. Fig. 4b shows the values resulting from matching after image processing. Fig. 4 shows that the maximum matching error is within 4%; therefore, our vision system is feasible for navigation. The mobile robot navigates along a corridor 2 m wide containing some obstacles, as shown in Fig. 5a. The real trace of the mobile robot is shown in Fig. 5b. It demonstrates that the mobile robot avoids the obstacles intelligently and follows the corridor to the goal.
V. CONCLUSIONS
A fuzzy control algorithm for both obstacle avoidance and path planning was implemented and verified in experiments. It enables a mobile robot to reach its goal point safely and autonomously in unknown environments.
Fig. 4. Experimental result of the vision system: (a) input image, (b) result of matching.
Fig. 5. Navigation of a robot in a corridor environment: (a) navigation in a corridor without a local minimum, (b) navigation in a corridor with a local minimum.
We also presented an architecture for intelligent navigation of mobile robots that determines the robot’s behavior by arbitrating the distributed control commands: seek goal, avoid obstacles, and maintain heading. The commands are arbitrated by endowing each with a weight value and combining them, and the weight values are obtained by a fuzzy inference method. This command arbitration allows multiple goals and constraints to be considered simultaneously. To show the efficiency of the proposed method, real experiments were performed. The experimental results show that a mobile robot can navigate to the goal point safely in unknown environments and can also avoid moving obstacles autonomously. Our ongoing research will include validation of more complex sets of behaviors, both in simulation and on an actual mobile robot. Further improvements of the obstacle prediction algorithm and of the robustness of performance are required.
Acknowledgements
This paper was supported by the Business for Cooperative R&D between Industry, Academy, and Research Institute funded by the Korea Small and Medium Business Administration in 2012 (Grant No. 00045079), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 20100021054).
References
M. J. Er, T. P. Tan, and S. Y. Loh, “Control of a mobile robot using generalized dynamic fuzzy neural networks,” Microprocessors and Microsystems, vol. 28, no. 9, pp. 491-498, 2004. DOI: 10.1016/j.micpro.2004.04.002
L. A. Zadeh, “Outline of a new approach to the analysis of complex systems and decision processes,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 3, no. 1, pp. 28-44, 1973.
D. Nair and J. K. Aggarwal, “Moving obstacle detection from a navigation robot,” IEEE Transactions on Robotics and Automation, vol. 14, no. 3, pp. 404-416, 1998. DOI: 10.1109/70.678450
S. Bentalba, A. El Hajjaji, and A. Rachid, “Fuzzy control of a mobile robot: a new approach,” in Proceedings of the IEEE International Conference on Control Applications, Hartford, CT, pp. 69-72, 1997.
T. Furuhashi, K. Nakaoka, K. Morikawa, H. Maeda, and Y. Uchikawa, “A study on knowledge finding using fuzzy classifier system,” Journal of Japan Society for Fuzzy Theory and Systems, vol. 7, no. 4, pp. 839-848, 1995.
H. Itani and T. Furuhashi, “A study on teaching information understanding by autonomous mobile robot,” Transactions of the SICE, vol. 38, no. 11, pp. 966-973, 2002.
J. Mehenen, M. Koppen, A. Saad, and A. Tiwari, Applications of Soft Computing: From Theory to Praxis. Berlin: Springer, 2009.
H. R. Beom and H. S. Cho, “A sensor-based navigation for a mobile robot using fuzzy logic and reinforcement learning,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 25, no. 3, pp. 464-477, 1995. DOI: 10.1109/21.364859
A. Ohya, A. Kosaka, and A. C. Kak, “Vision-based navigation by a mobile robot with obstacle avoidance using single-camera vision and ultrasonic sensing,” IEEE Transactions on Robotics and Automation, vol. 14, no. 6, pp. 969-978, 1998. DOI: 10.1109/70.736780
H. Mehrjerdi, M. M. Saad, and J. Ghommam, “Hierarchical fuzzy cooperative control and path following for a team of mobile robots,” IEEE/ASME Transactions on Mechatronics, vol. 16, no. 5, pp. 907-917, 2011. DOI: 10.1109/TMECH.2010.2054101
D. Wang, Y. Zhang, and W. Si, “Behavior-based hierarchical fuzzy control for mobile robot navigation in dynamic environment,” in Proceedings of the 2011 Chinese Control and Decision Conference (CCDC), Mianyang, China, pp. 2419-2424, 2011.
L. Jouffe, “Fuzzy inference system learning by reinforcement methods,” IEEE Transactions on Systems, Man, and Cybernetics, Part C, vol. 28, no. 3, pp. 338-355, 1998. DOI: 10.1109/5326.704563
G. Leng, T. M. McGinnity, and G. Prasad, “An approach for online extraction of fuzzy rules using a self-organising fuzzy neural network,” Fuzzy Sets and Systems, vol. 150, no. 2, pp. 211-243, 2005.
T. Takahama, S. Sakai, H. Ogura, and M. Nakamura, “Learning fuzzy rules for bang-bang control by reinforcement learning method,” Journal of Japan Society for Fuzzy Theory and Systems, vol. 8, no. 1, pp. 115-122, 1996.
E. Tunstel, “Fuzzy-behavior synthesis, coordination, and evolution in an adaptive behavior hierarchy,” in Fuzzy Logic Techniques for Autonomous Vehicle Navigation. Heidelberg, Germany: Physica-Verlag, 2000.
E. Tunstel, “Fuzzy behavior modulation with threshold activation for autonomous vehicle navigation,” in Proceedings of the 18th International Conference of the North American Fuzzy Information Processing Society, New York, NY, pp. 776-780, 1999.