We introduce a non-Euclidean metric for transportation systems with a defined minimum transportation cost (initial fare) and investigate the continuous single-facility Weber location problem based on this metric. The proposed algorithm uses the solution of the Weber problem with the Euclidean metric, obtained by the Weiszfeld procedure, as the initial point for a special local search procedure. The result of the local search is then checked for optimality by calculating directional derivatives of modified objective functions in a finite number of directions. If the local search result is not optimal, the algorithm solves constrained Weber problems with the Euclidean metric to obtain the final result. An illustrative example is presented.
AMS Mathematics Subject Classification : 90B85.
1. Introduction
The Weber problem [29] is a continuous optimization problem for finding a point X^* ∈ ℝ^n satisfying

f(X^*) = min_{X ∈ ℝ^n} f(X),  f(X) = Σ_{i=1}^{N} w_i ∥X − A_i∥.   (1)

Here, A_i ∈ ℝ^n, i = 1, . . . , N are known demand points, w_i ∈ ℝ, w_i ≥ 0 are weighting coefficients, and ∥ · ∥ : ℝ^n → ℝ is a vector norm [20].
The main applications of the Weber problem include warehouse location [10, 7], positioning computer and communication networks [14], and locating base stations of wireless networks. Solving a Weber problem (searching for a centroid) is also a step of many clustering algorithms [25, 19, 9].
Problem (1) was originally formulated by Weber [29] with the Euclidean norm (∥ · ∥ = l_2(·)) and has been generalized to l_p norms and other metrics [29, 6]. A detailed treatment of various norms and metrics is given in [21, 18, 8]. The l_p norms play an important role in the theory and practice of location problems. The most common distance metrics in continuous space are the Euclidean (l_2), rectangular (l_1) and Chebyshev (l_∞) metrics, but other metrics are also important for specific cases [1, 8, 23]. Various distance metrics can be used for solving clustering problems [26, 30]. In [16], the authors consider norm approximation and approximate solutions of Weber problems with an arbitrary metric using random search [15]. Problems with barriers are described in [18]; in special cases, such problems can be transformed into discrete problems [22].
In public transportation systems, the price usually depends on the distance. However, some minimum price is usually defined. For example, the initial fare of a taxi cab may include some distance, usually 15 km. Having rescaled the distances so that the distance included in the initial fare is equal to 1, we can define the price function d_P as

d_P(X, Y) = max{1, ∥X − Y∥},   (2)

where ∥ · ∥ is a vector norm. We use the term "taxi metric" to denote the metric defined by (2). In this paper, we consider ∥ · ∥ to be the Euclidean norm in ℝ² only (∥ · ∥_2).
In clustering problems, such a metric can be used to describe the distance between the samples and the core of the cluster [27] with a fixed core diameter. A metric which neglects distances smaller than some predefined observational error ε is equivalent to our "taxi" metric. The Radar Screen metric [3] is a very similar norm metric with the distance function defined by

d_RS(X, Y) = min{1, ∥X − Y∥}.   (3)

The Weber problem with the Radar Screen metric is a special case of the problem considered in [11]. Unlike (3), our distance function (2) is convex, and our approach significantly differs from that proposed in [11].
The paper is organized as follows. In Section 2, we restate some basic definitions, describe existing algorithms, and investigate some features of the objective function. In Section 3, we describe the algorithm for the Weber problem with the new metric. In Section 4, we give a simple example and the results of the algorithm.
2. Preliminaries
The single-facility Weber problem (1) in ℝ² (the planar problem) with the "taxi metric" (2) can be formulated as

f(X) = Σ_{i=1}^{N} w_i d_P(X, A_i) → min_{X ∈ ℝ²}.   (4)

Here, d_P(X, A_i) = max{1, ∥X − A_i∥_2}. The problem proposed by Weber is based on the Euclidean metric:

f_E(X) = Σ_{i=1}^{N} w_i ∥X − A_i∥_2 → min_{X ∈ ℝ²}.   (5)

The most common algorithm for the Weber problem with metrics induced by the l_p norms is the Weiszfeld procedure [28, 10].
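A minimal sketch of the Weiszfeld procedure for (5) in Python (with only a simple early return in place of the special handling needed when an iterate coincides with a demand point):

```python
import math

def weiszfeld(demand, weights, tol=1e-9, max_iter=10000):
    # Start from the weighted centroid of the demand points
    total_w = sum(weights)
    x = (sum(w * a[0] for a, w in zip(demand, weights)) / total_w,
         sum(w * a[1] for a, w in zip(demand, weights)) / total_w)
    for _ in range(max_iter):
        num_x = num_y = den = 0.0
        for a, w in zip(demand, weights):
            d = math.dist(x, a)
            if d < 1e-15:          # iterate landed on a demand point
                return a
            num_x += w * a[0] / d
            num_y += w * a[1] / d
            den += w / d
        nxt = (num_x / den, num_y / den)  # re-weighted centroid step
        if math.dist(x, nxt) < tol:
            return nxt
        x = nxt
    return x
```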
For simplicity, we assume that

w_i > 0, i = 1, . . . , N.   (6)

Let us denote S_E = {X ∈ ℝ² : ∥A_i − X∥_2 ≤ 1, i = 1, . . . , N}.

Lemma 2.1. If S_E ≠ ∅ then any point X ∈ S_E is a solution of problem (4). Moreover, any X′ ∉ S_E is not a minimizer of (4).

Proof. Let X^* ∈ S_E. Then f(X^*) = Σ_{i=1}^{N} w_i max{1, ∥X^* − A_i∥_2} = Σ_{i=1}^{N} w_i, and for an arbitrary X ∈ ℝ² we have

f(X) = Σ_{i=1}^{N} w_i max{1, ∥X − A_i∥_2} ≥ Σ_{i=1}^{N} w_i = f(X^*),

which implies f(X^*) = min{f(X), X ∈ ℝ²}. For X′ ∉ S_E, some index i′ gives ∥A_{i′} − X′∥_2 > 1, so f(X′) > Σ_{i=1}^{N} w_i = f(X^*). □
Lemma 2.1 describes the case when a non-iterative solution is possible. Several more cases when the non-iterative approach is applicable are described in [2].
Let us denote the set

R_0 = {X ∈ ℝ² : ∥A_i − X∥_2 ≥ 1, i = 1, . . . , N}.   (7)
Lemma 2.2. If X^* is the solution of problem (4) and X^* ∈ R_0 then X^* is the solution of the Weber problem (5), and vice versa.

Proof. Under the assumption X^* ∈ R_0 we have f(X^*) = f_E(X^*). □
Lemma 2.3. The objective function of problem (4) is convex.

Proof. The sum of the convex functions f_i(X) = max{1, ∥X − A_i∥_2}, i = 1, . . . , N, taken with nonnegative weights w_i, is convex. □
For an arbitrary point X ∈ ℝ², let us denote the sets of demand point indices

S_>(X) = {i ∈ {1, . . . , N} : ∥X − A_i∥_2 > 1},  S_≤(X) = {i ∈ {1, . . . , N} : ∥X − A_i∥_2 ≤ 1},

and a set of points (a region)

R(X) = {Y ∈ ℝ² : S_>(Y) = S_>(X)}.

The regions R(X) of the points X ∈ ℝ² are bounded by arcs [11, 12] of radius 1 with centres at the points A_i, i = 1, . . . , N (see Fig. 1). In [4], the authors prove that the number of regions is quadratically bounded by the number of demand points.
Figure 1. Illustration of the problem (4), its regions and unions U_1, U_2.
An algorithm for solving constrained Weber problems with regions bounded by arcs is proposed in [17].
Let our problem have M different regions R_k, k = 1, . . . , M, in addition to the region R_0 introduced above. The border points of each region belong to at least one other region.
An algorithm for enumerating all the disc intersection points is given in [5]. Let our problem have I disc intersections D_1, . . . , D_I.
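Under the assumption that the intersection points are computed pairwise (a direct O(N²) sketch; the helper name is ours, not from [5]), the points D_1, . . . , D_I can be enumerated as:

```python
import math

def circle_intersections(centers, r=1.0):
    # All intersection points of equal circles of radius r centred at the
    # demand points; tangent circles contribute a single point
    pts = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            (x1, y1), (x2, y2) = centers[i], centers[j]
            d = math.hypot(x2 - x1, y2 - y1)
            if d == 0 or d > 2 * r:      # coincident centres or too far apart
                continue
            mx, my = (x1 + x2) / 2, (y1 + y2) / 2   # midpoint of the chord
            h = math.sqrt(max(r * r - (d / 2) ** 2, 0.0))
            ux, uy = (x2 - x1) / d, (y2 - y1) / d   # unit vector from i to j
            pts.append((mx - h * uy, my + h * ux))
            if h > 0:
                pts.append((mx + h * uy, my - h * ux))
    return pts
```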
Lemma 2.4. If X^* is a solution of problem (4), X^* ∈ R_k, k ∈ {0, . . . , M}, and S_>(X^*) ≠ ∅, then X^* is the solution of the following constrained Weber problem with the Euclidean metric:

Σ_{i ∈ S_>(X^*)} w_i ∥X − A_i∥_2 → min,   (12)

subject to

X ∈ R_k.   (13)

Proof. The value of the objective function for X ∈ R_k is

f(X) = Σ_{i ∈ S_≤(X^*)} w_i + Σ_{i ∈ S_>(X^*)} w_i ∥X − A_i∥_2.   (14)

Since the first summand in (14) is constant, we have the equivalent problem (12) with the constraint (13). □
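The decomposition (14) is easy to state in code. The following Python sketch (names ours) returns both the full objective value and the index set S_>(X) over which the residual Euclidean problem (12) is posed:

```python
import math

def split_objective(x, demand, weights):
    # Split (14): inside the region of X, f(X) is a constant (the total
    # weight of the near points S_<=) plus the Euclidean Weber
    # objective (12) over the far points S_>
    s_gt = [i for i, a in enumerate(demand) if math.dist(x, a) > 1.0]
    const = sum(w for i, w in enumerate(weights) if i not in s_gt)
    f_e = sum(weights[i] * math.dist(x, demand[i]) for i in s_gt)
    return const + f_e, s_gt
```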
The solution of a constrained optimization problem with a convex objective function either coincides with the solution of the corresponding unconstrained problem or lies on the border of the forbidden region [12] (moreover, the solution of the constrained problem is then visible from the solution of the unconstrained problem).
Corollary 2.5. If X^* is a solution of problem (4) then it is the solution of the unconstrained problem (12), or ∃ i ∈ {1, . . . , N} such that ∥A_i − X^*∥_2 = 1.
Let us denote by U_q, q = 1, . . . , N_U, the unions of regions R_k, k = 1, . . . , M, surrounded by the region R_0 (the connected components of R_1 ∪ · · · ∪ R_M, see Fig. 1). Denote the border of the union U_q by B_q, and the set of points of all the borders as B = ∪_{q=1}^{N_U} B_q.
Lemma 2.6. If X^{**} is the unique solution of problem (5), X^{**} ∈ U_{q′}, and X^* is a solution of problem (4), then X^* ∈ U_{q′}.
Proof. Let us consider the constrained problem with the Euclidean metric

f_E(X) → min,   (15)

subject to

X ∈ R_0.   (16)

Let X′ be a solution of this problem. As the objective function of this problem is convex, two cases are possible.

Case 1. X′ = X^{**}.

Case 2. The solution X′ of the constrained problem lies on the borderline of the feasible set, i.e. X′ ∈ B; moreover, X′ is visible from X^{**}.

In Case 1, in accordance with Lemma 2.2, X′ ≠ X^{**} unless X′ ∈ B. Thus, if X^{**} ∈ U_{q′} then X′ ∈ U_{q′}. From X′ ∈ R_0, we have X′ ∈ B_{q′}. Let us denote the set (see Fig. 1)

S = {X ∈ ℝ² : f(X) ≤ f(X′)}.

From f(X) = f_E(X) ∀ X ∈ R_0, X′ is the solution of the constrained problem (15)–(16). From the convexity of the objective function f(·) it immediately follows that S is convex. Let us denote by X′_S the set of optimizers of the constrained problem (15)–(16). The set S does not contain any borders B_q of the unions U_q (q = 1, . . . , N_U) except the points from X′_S, and ∃ X′ ∈ X′_S : X′ ∈ U_{q′}. Since X′ ∈ U_{q′}, (S ∩ R_0) ⊂ B, and X^* is the optimizer of (4) with f(X^*) ≤ f(X′), we have X^* ∈ S and X^* ∈ U_{q′}. □
Lemma 2.7. Let X′_S be the set of solutions of the constrained problem (12)–(13), and let G_k be the set of border points of the region R_k. Then the set G_k ∩ X′_S is finite unless S_>(X^*) = ∅ ∀ X^* ∈ X′_S.
Proof. The case ∥A_i − X∥_2 ≤ 1 ∀ i = 1, . . . , N, X ∈ X′_S means S_>(X) = ∅ and is excluded by the assumption. Let X^* ∈ X′_S be an arbitrary point with S_>(X^*) ≠ ∅. Being a Weber problem with the Euclidean metric, problem (12) has a strictly convex objective function unless all its demand points A_i, i ∈ S_>(X^*), are collinear, and then the problem has exactly one solution. If the demand points are collinear, the solution coincides with one of the demand points A_{i′}, i′ ∈ S_>(X^*), or all points of some line segment A_{i′}A_{i′′}, i′, i′′ ∈ S_>(X^*), are solutions. The border G_k is formed by arcs; thus, it has a finite number of intersections with the line segment. □
The algorithms proposed in the next section are based on the lemmas above. Algorithms for both the constrained and the unconstrained Weber problem with the Euclidean metric are well investigated, see [12, 13, 29]. We use these algorithms as subroutines in our algorithm.
3. Algorithm description
Our algorithm starts the local search procedure from the initial point calculated by the Weiszfeld procedure as the solution of the unconstrained Weber problem with the Euclidean metric (5). If the solution X^* satisfies X^* ∈ R_0 (i.e. ∥X^* − A_i∥_2 ≥ 1, i = 1, . . . , N) then, in accordance with Lemma 2.2, X^* is the solution of problem (4). Otherwise, the algorithm continues the search from the point X^*.
Having solved problem (12) with the constraint X ∈ R(X^*), we obtain a new solution X^* or a set of solutions. If the unique solution, or all points from the solution set, belong to the border of the union of regions U_{q′} then, in accordance with Lemma 2.6, we have the optimal solution.
If the unique solution X^*, or the whole solution set, contains no border points of the region R(X^*) then, due to the convexity of the objective function, the solution is final and the algorithm stops.
If the solution X^* lies on the borderline of the region R(X^*), or the solution set contains border points, then we must solve the constrained Weber problems for the regions containing X^*. If a better solution is found, the search continues from the best solution; otherwise, it stops.
Since the objective function is convex, any local search procedure can be used. The following heuristic provides a significant speedup. First, the value of the objective function is calculated at the circle intersection points of the region R(X^*) (i.e. its angular points), where X^* is the solution of the unconstrained Weber problem (5). The intersection X^{**} with the best result is chosen as the initial point for the further search. The local search procedure then continues with the neighbor intersection points (i.e. the intersection points which are the ends of the arcs starting from X^{**}). When the local search stops at some intersection X^{**}, our algorithm checks whether this point is the local minimum in each of its neighbor regions. If it is not the local (and global) minimum, the search continues by solving constrained Weber problems as described above.
If a temporary solution X^{**} is an intersection point, the algorithm checks whether this point is the local minimum in each of its regions. The angular point of a convex region is the point of minimum of the function in this region if all possible directional derivatives are nonnegative. But our regions can be non-convex.
Let us denote by P(R_k) a convex polygon whose vertices coincide with the angular points of the region R_k. Then the region

ϱ(R_k) = R_k ∩ P(R_k)

is convex.
Let us denote two rays l_1 and l_2 with initial point X^{**} and an angle ϕ ∈ (0, π) between them such that all points of the region ϱ(R_k) are situated between l_1 and l_2, and both l_1 and l_2 are tangent to the borderline of the region ϱ(R_k). All possible directions from X^{**} into the region ϱ(R_k) lie between l_1 and l_2.
From the convexity of the region ϱ(R_k) and of the objective function (12), if

∂f/∂l_1 (X^{**}) ≥ 0 and ∂f/∂l_2 (X^{**}) ≥ 0,

then X^{**} is the minimum point of (12) in ϱ(R_k). Here, ∂f/∂l_1 and ∂f/∂l_2 are the directional derivatives in the directions l_1 and l_2, correspondingly. From ϱ(R_k) ⊂ R_k, this point X^{**} is the minimum point in R_k.
If X^{**} is the point of local minimum for all regions which it joins, then X^{**} is the solution of problem (4) and solving the constrained Weber problem (12), (13) is not needed. Experiments on randomly generated problems and on the rescaled problems from [24] show that solving the constrained Weber problem is not needed in most cases.
In our algorithm, the regions R_k are enumerated as follows. The number k is an array of N digits, one digit for each of the demand points. The ith digit is set to 1 if ∥X − A_i∥ < 1 for all internal points X of the region R_k; if ∥X − A_i∥ > 1 then the ith digit is set to 0. For example, the region R_6 (see Fig. 1) in the new notation is R_11100. Using this method of enumeration, it is not necessary to enumerate all regions at the first steps of the algorithm.
An analogous method of enumeration is used for the intersection points D_j. The index j contains N digits. If ∥D_j − A_i∥ > 1 then the ith digit is set to 0; if ∥D_j − A_i∥ < 1, to 1; and if ∥D_j − A_i∥ = 1, to 2. For example, the angular point D_1 (see Fig. 1) of the regions R_1, R_2, R_5 and R_6 is denoted in the proposed algorithm as D_12200 (it is an internal point of the circle with center A_1, a border point of the circles with centers A_2 and A_3, and it is situated outside the circles with centers A_4 and A_5).
With this notation, it is easy to determine the region or regions of any arbitrary point X^*. We use the following algorithm (here, k is an array of digits). Note that the region R_0, see (7), in this notation is R_000...0.
Algorithm 3.1. Determine the region index

Require: Coordinates X = (x_1, x_2) of the point, coordinates of the demand points.

Step 1: for i = 1, . . . , N do:
Step 1.1: if ∥A_i − X∥ = 1 then k[i] = 2;
Step 1.2: else if ∥A_i − X∥ < 1 then k[i] = 1;
Step 1.3: else k[i] = 0;
Step 1.4: continue Step 1.
Step 2: R_array[1] = {k}; N_r = 1.
Step 3: for i = 1, . . . , N do:
Step 3.1: if k[i] = 2 then:
Step 3.2: P_array = R_array;
Step 3.3: for j = 1, . . . , N_r do:
Step 3.3.1: R_array[j][i] = 0; P_array[j][i] = 1; continue Step 3.3;
Step 3.4: add all elements of P_array to the end of the array R_array; N_r = N_r · 2;
Step 3.5: continue Step 3.
Step 4: STOP, return R_array and the number of its elements N_r.
The algorithm above returns a set (an array) R_array of region indexes k such that X ∈ R_k. Steps 1 to 1.4 form an array of digits describing the distance from the given point to each of the demand points: digits 0, 1 and 2 mean a distance of more than 1, less than 1 and equal to 1, correspondingly. In Steps 3 to 3.5, the array R_array of indexes is formed. Initially, it contains one element coinciding with the array k formed in Steps 1 to 1.4. For each demand point at distance exactly 1 from the given point (digit 2 in the array k), the array R_array is duplicated: digit 2 is replaced by digit 0 in the first copy of the initial array R_array and by digit 1 in its second copy. Thus, the array R_array contains 2^{e_1} indexes, where e_1 is the number of demand points at distance exactly 1 from the point X.
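Algorithm 3.1 can be sketched in Python; indexes are represented as digit strings, and a point lying on a circle (digit 2) branches into both adjacent region indexes (the function name and string representation are ours):

```python
import math

def region_indexes(x, demand, eps=1e-9):
    # Digit i: 1 if X is strictly inside the unit circle around A_i,
    # 0 if strictly outside, 2 if on the circle (within tolerance eps)
    digits = []
    for a in demand:
        d = math.dist(x, a)
        digits.append(2 if abs(d - 1.0) <= eps else (1 if d < 1.0 else 0))
    # Each digit 2 doubles the index set: the point borders both the
    # region where that digit is 0 and the region where it is 1
    indexes = [[]]
    for g in digits:
        if g != 2:
            indexes = [idx + [g] for idx in indexes]
        else:
            indexes = [idx + [0] for idx in indexes] + [idx + [1] for idx in indexes]
    return [''.join(map(str, idx)) for idx in indexes]
```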
For any intersection point D_j, the index j is known, and we can start this algorithm for such a point from Step 2, assuming k = j.
To determine the set of the neighbor intersection points for a given intersection point D_j, we use the following algorithm.
Algorithm 3.2. Form a list of neighbor angular points

Require: An index j^* of the intersection point D_{j^*} (here, j^* is an array of the digits 0, 1, 2), a set of all intersection points D_all.

Step 1: D_neighbour = ∅.
Step 2: for each D_i in D_all \ {D_{j^*}} do (here, the indexes j^* and i are considered as arrays of digits):
Step 2.1: n_common = 0; b_ok = 1.
Step 2.2: for k = 1, . . . , N do:
Step 2.2.1: if i[k] = 2 and j^*[k] = 2 then n_common = n_common + 1;
Step 2.2.2: else if j^*[k] ≠ 2 and i[k] ≠ 2 and j^*[k] ≠ i[k] then b_ok = 0; break Step 2.2 and go to Step 2.3;
Step 2.2.3: continue Step 2.2.
Step 2.3: if n_common > 0 and b_ok = 1 then D_neighbour = D_neighbour ∪ {D_i}.
Step 2.4: continue Step 2.
Step 3: STOP, return D_neighbour.
In Steps 2 to 2.4, all known intersection points are scanned. For the indexes of the intersection points, the notation from Algorithm 3.1 is used: digit 2 in the kth position of the index means that the distance from the intersection point to the kth demand point is equal to 1. Steps 2.2 to 2.2.3 organize the search for the digits 2 in the indexes.
Our algorithm for solving problem (4) is organized as follows.
Algorithm 3.3. Solving the location problem (4)

Require: A set of N demand points A_i with their coordinates and weights w_i, i = 1, . . . , N.

Step 1: Solve the Weber problem with the Euclidean metric (5) by the Weiszfeld procedure; store the result in X^*.
Step 2: if X^* ∈ R_0 then STOP and return X^*.
Step 3: Determine the region R_{k^*} = R(X^*) with Algorithm 3.1. The result is an index k which is an array of N digits. If a set of regions is returned, use the first one.
Step 4: Form the set D_all of all intersection points of the circles with centres A_i, i = 1, . . . , N, and radius 1.
Step 5: Form the set D of all angular points (intersections) of the region R_k; set D_checked = D.
Step 6: F^{**} = +∞.
Step 7: for each element D_j of the set D do:
Step 7.1: if f(D_j) < F^{**} then F^{**} = f(D_j); X^{**} = D_j;
Step 7.2: continue Step 7.
Step 8: b_found = 1.
Step 9: while b_found = 1 do:
Step 9.1: b_found = 0; call Algorithm 3.2 to form the set D_neighbour of the neighbor intersections of X^{**}.
Step 9.2: for X′ in D_neighbour \ D_checked do:
Step 9.2.1: if f(X′) < F^{**} then F^{**} = f(X′); X^{**} = X′; b_found = 1.
Step 9.3: D_checked = D_checked ∪ D_neighbour.
Step 9.4: continue Step 9.
Step 10: Form the set L of the regions joined at the point X^{**} with Algorithm 3.1.
Step 11: L_tosearch = ∅.
Step 12: for each region R_k in L do:
Step 12.1: for the convex region ϱ(R_k) = R_k ∩ P(R_k), calculate two directions (rays with initial point X^{**}) l_1 and l_2 tangent to the borderline of the region R_k;
Step 12.2: if ∂f/∂l_1(X^{**}) < 0 or ∂f/∂l_2(X^{**}) < 0 then:
Step 12.2.1: L_tosearch = L_tosearch ∪ {R_k};
Step 12.3: continue Step 12.
Step 13: while L_tosearch ≠ ∅ do:
Step 13.1: for each element R_k in L_tosearch do:
Step 13.1.1: solve the constrained Weber problem (12)–(13) using the modified Weiszfeld procedure [11, 4]; store the result in X′;
Step 13.1.2: if f(X′) < F^{**} then F^{**} = f(X′); X^{**} = X′;
Step 13.1.3: remove R_k from L_tosearch; continue Step 13.1.
Step 13.2: continue Step 13.
Step 14: STOP and return X^{**}.
4. Numerical example
Let us solve the problem shown in Fig. 2.

Figure 2. Example problem scheme and its objective function graph.
Here, N = 7, A_1 = (0, 0.25), A_2 = (0.25, 0), A_3 = (0.25, 0.75), A_4 = (1.35, 0.25), A_5 = (1, 0.77), A_6 = (3.45, 0.2), A_7 = (3.55, 0.4), w_1 = w_6 = 1, w_2 = 9, w_3 = 4, w_4 = 3, w_5 = w_7 = 2.
The result of the Weiszfeld procedure at Step 1 of Algorithm 3.3 is the point X^*. At Step 2, this point is not in R_0000000 since ∥A_1 − X^*∥ < 1. Thus, the algorithm continues.
At Step 3, from ∥A_1 − X^*∥ < 1, ∥A_2 − X^*∥ < 1, ∥A_3 − X^*∥ < 1 and ∥A_5 − X^*∥ < 1, we have R(X^*) = R_1110100.
At Step 4, our algorithm forms the set D_all of all 22 intersection points. At Step 5, the set of angular points (intersections) of the region R_1110100 is formed.
After Step 6 and three iterations in Steps 7 to 7.2, we have X^{**} = D_1212100. At Step 8, the boolean variable b_found is set to 1 and our algorithm starts the iterations (Step 9). At Step 9.1, b_found is reset to 0, and Algorithm 3.2 returns the list of the neighbor intersections of D_1212100.
At Steps 9.2 to 9.2.1, the algorithm estimates the objective function for these intersections except D_1210200 and D_1112200, and after these iterations we have a new X^{**}. At Step 9.3, the algorithm adds D_neighbour to the set D_checked, and we have D_checked = {D_1212100, D_1210200, D_1112200, D_2012100, D_2211100}. Step 9 is then repeated.
At the second iteration of Step 9.1, b_found is reset to 0 and Algorithm 3.2 returns the list of the neighbor intersections for D_1212100. At Steps 9.2 to 9.2.1, the algorithm estimates the objective function for these intersections except D_2012100 and D_1212100, and after two iterations we have a new X^{**}. At Step 9.3, D_checked = {D_1212100, D_1210200, D_1112200, D_2012100, D_2211100, D_0221100, D_2121100}, and Step 9 is then repeated.
At the third iteration of Step 9.1, b_found is reset to 0 and Algorithm 3.2 returns the list of the neighbor intersections for D_1212100. At Steps 9.2 to 9.2.1, the algorithm estimates the objective function for the intersections D_0201200 and D_0022100, and after two iterations we have no improvement of X^{**} = D_0221100 and F^{**} = 26.209559. Thus, b_found = 0 and the iterations of Step 9 finish.
At Step 10, the list of the regions joined at X^{**} = D_0221100 is formed. The algorithm sets L_tosearch = ∅ (Step 11).
In the region R_0001100, the direction l_1 is a ray on the line connecting X^{**} and D_0201200 (d_7 in Fig. 3), and l_2 is a ray on the line connecting X^{**} and D_0022100 (d_2 in Fig. 3).
Figure 3. Neighbor regions and directions for the directional derivative calculations (Steps 12 to 12.3 of Algorithm 3.3).
In the region R_0011100, the direction l_1 is a ray on the line tangent to the circle with center A_3 (d_1 in Fig. 3), and l_2 is a ray on the line connecting X^{**} and D_2211100 (d_4 in Fig. 3).
In the region R_0111100, the direction l_1 is a ray on the line tangent to the circle with center A_2 (d_3 in Fig. 3), and l_2 is a ray on the line tangent to the circle with center A_3 (d_6 in Fig. 3).
In the region R_0101100, the direction l_1 is a ray on the line tangent to the circle with center A_2 (d_8 in Fig. 3), and l_2 is a ray on the line connecting X^{**} and D_2121100 (d_5 in Fig. 3).
In Steps 12 to 12.2, our algorithm calculates all the directional derivatives. All values are positive (Step 12.2). Thus, L_tosearch = ∅ and the iterations in Steps 13 to 13.2 are not performed. The resulting point is X^{**} = (0.177025, 0.375000); the value of the objective function is F^{**} = 26.209559.
5. Conclusion
The location problems for systems with a minimum transportation cost can be formulated as problems with the special metric (2). The proposed algorithm is able to solve such problems. The implemented local search heuristic reduces the computational complexity to that of solving a few constrained and one unconstrained Weber problems with the Euclidean metric. However, the computational complexity of the proposed algorithm is a subject of further research.
BIO
Lev A. Kazakovtsev was awarded the candidate of technical sciences degree at the Research Institute of Control Systems, Wave Processes and Technologies (Krasnoyarsk, Russian Federation) in 2002. He is an associate professor at the Krasnoyarsk State Agrarian University (Chair of Mathematical Modeling and Informatics) and Deputy Director of the Department of Information Technologies of the Siberian State Aerospace University. His research interests include location problems, discrete optimization and random search algorithms.
Department of Information Technologies, Siberian State Aerospace University named after M. F. Reshetnev, prosp. Krasnoyarskiy Rabochiy 31, Krasnoyarsk, 660093, Russian Federation.
email: levk@bk.ru
Predrag S. Stanimirović received his M.Sc. in 1990 and Ph.D. in 1996 at the University of Niš, Faculty of Sciences and Mathematics. He has been a Full Professor at the same faculty since 2002. His research interests include operations research, linear programming, nonlinear programming, generalized inverses and symbolic computations.
Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18000 Niš, Serbia.
email: pecko@pmf.ni.ac.rs
References

Brown R.G. (1997), Advanced Mathematics: Precalculus with Discrete Mathematics and Data Analysis (A.M. Gleason, ed.), McDougal Littell, Evanston, Illinois.
Chen R. (2011), Noniterative solution of some Fermat-Weber location problems, Advances in Operations Research, Article ID 379505, published online. DOI: 10.1155/2011/379505.
Deza M.M., Deza E. (2009), Encyclopedia of Distances, Springer-Verlag, Berlin, Heidelberg.
Drezner Z., Mehrez A., Wesolowsky G.O. (1991), The facility location problem with limited distances, Transportation Science 25, 183-187. DOI: 10.1287/trsc.25.3.183.
Drezner Z., Wesolowsky G.O. (1980), A maximin location problem with maximum distance constraints, AIIE Transactions 12, 249-252. DOI: 10.1080/05695558008974513.
Drezner Z., Klamroth K., Schobel A., Wesolowsky G.O. (2002), The Weber problem, in Z. Drezner and H.W. Hamacher (eds.), Facility Location: Applications and Theory, Springer-Verlag, 1-36.
Drezner Z., Scott C., Song J.S. (2003), The central warehouse location problem revisited, IMA Journal of Management Mathematics 14, 321-336. DOI: 10.1093/imaman/14.4.321.
Drezner Z., Hamacher H.W. (2004), Facility Location: Applications and Theory, Springer-Verlag, Berlin, Heidelberg.
Gordon S., Greenspan H., Goldberger J. (2003), Applying the information bottleneck principle to unsupervised clustering of discrete and continuous image representations, Proceedings of the Ninth IEEE International Conference on Computer Vision, Vol. 1, 370-377.
Farahani R.Z., Hekmatfar M. (2009), Facility Location: Concepts, Models, Algorithms and Case Studies, Springer-Verlag, Berlin, Heidelberg.
Fernandes I.F., Aloise D., Aloise D.J., Hansen P., Liberti L. (2012), On the Weber facility location problem with limited distances and side constraints, Optimization Letters, published online. DOI: 10.1007/s11590-012-0538-9.
Hansen P., Peeters D., Thisse J.F. (1981), Constrained location and the Weber-Rawls problem, North-Holland Mathematics Studies 59, 147-166.
Idrissi H., Lefebvre O., Michelot C. (1988), A primal-dual algorithm for a constrained Fermat-Weber problem involving mixed norms, RAIRO Recherche Operationnelle 22, 313-330.
Kazakovtsev L.A. (2013), Wireless coverage optimization based on data provided by built-in measurement tools, WASJ 22 (Special Volume on Techniques and Technologies), 8-15.
Antamoshkin A.N., Kazakovtsev L.A. (2013), Random search algorithm for the p-median problem, Informatica (Ljubljana) 37, 267-278.
Kazakovtsev L.A. (2012), Adaptation of the probability changing method for Weber problem with an arbitrary metric, Facta Universitatis (Niš), Ser. Math. Inform. 27, 289-254.
Kazakovtsev L.A. (2013), Algorithm for constrained Weber problem with feasible region bounded by arcs, Facta Universitatis (Niš), Ser. Math. Inform. 28, 271-284.
Klamroth K. (2002), Single-Facility Location Problems with Barriers, Springer-Verlag, Berlin, Heidelberg.
Minkowski H. (2001), Gesammelte Abhandlungen, zweiter Band, Chelsea Publishing.
Stanimirovic I.P. (2013), Successive computation of some efficient locations of the Weber problem with barriers, J. Appl. Math. Comput. 42, 193-211. DOI: 10.1007/s12190-012-0637-x.
Stanimirović P.S., Ćirić M., Kazakovtsev L.A., Osinuga I.A. (2012), Single-facility Weber location problem based on the lift metric, Facta Universitatis (Niš), Ser. Math. Inform. 27, 31-46.
Taillard E., Location problems, web resource.
Vimal A., Valluri S.R., Karlapalem K. (2008), An experiment with distance measures for clustering, International Conference on Management of Data COMAD 2008, Mumbai, India, 241-244.
Voevodski K., Balcan M.F., Roglin H., Teng S.H., Xia Y. (2011), Min-sum clustering of protein sequences with limited distance information, Proceedings of the First International Conference on Similarity-Based Pattern Recognition (SIMBAD'11), Venice, Italy, 192-206.
Weiszfeld E. (1937), Sur le point pour lequel la somme des distances de n points donnés est minimum, Tohoku Mathematical Journal 43, 355-386.
Wesolowsky G. (1993), The Weber problem: history and perspectives, Location Science 1, 5-23.
Ying Y., Li P. (2012), Distance metric learning with eigenvalue optimization, Journal of Machine Learning Research 13, 1-26.