In this paper, we generalize the concept of the energy of the Seidel matrix S(G), denote this generalization by S^{α}(G), and obtain some results related to this matrix. We also obtain upper and lower bounds for S^{α}(G) over all graphs with |det S(G)| ≥ n − 1, n ≥ 3.
AMS Mathematics Subject Classification : 05C50, 90C90, 34L15.
1. Introduction
All graphs considered in this paper are finite, undirected and simple. Let G be a graph with vertex set V(G) and edge set E(G), and let A(G) be the (0,1)-adjacency matrix of G. In 1966, Van Lint and Seidel in [13] introduced a symmetric (0, −1, 1)-adjacency matrix for a graph G, called the Seidel matrix of G, defined as S(G) = J − 2A(G) − I, where J is the square matrix all of whose entries are equal to 1. Thus S(G) has 0 on the diagonal and ±1 off the diagonal, where −1 indicates adjacency and +1 indicates non-adjacency. It is obvious that −S(G) is the Seidel matrix of the complement of G. Haemers in
[10]
, similar to the normal energy, defined the Seidel energy
E_{s}
(
G
) of G, which is the sum of the absolute values of the eigenvalues of the Seidel matrix. For example, consider the complete graph K_{n}: its Seidel matrix is I − J. Hence the eigenvalues of S(K_{n}) are 1 − n and 1 with multiplicity n − 1, so E_{s}(K_{n}) = 2n − 2. The Seidel matrix of a graph can be interpreted as the incidence matrix of a design, or as the generator matrix of a binary code. We refer the reader to
[5
,
6
,
9]
for more information related to eigenvalue and adjacency matrix and their properties.
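These definitions are easy to check numerically. The following sketch (assuming NumPy is available; the function names are ours, not from any standard library) builds S(G) = J − 2A(G) − I and verifies E_{s}(K_{6}) = 2·6 − 2 = 10:

```python
import numpy as np

def seidel_matrix(adj):
    """Seidel matrix S(G) = J - 2A(G) - I for a (0,1)-adjacency matrix."""
    m = adj.shape[0]
    return np.ones((m, m)) - 2 * adj - np.eye(m)

def seidel_energy(adj):
    """Seidel energy E_s(G): sum of absolute eigenvalues of S(G)."""
    return np.abs(np.linalg.eigvalsh(seidel_matrix(adj))).sum()

n = 6
A_complete = np.ones((n, n)) - np.eye(n)   # adjacency matrix of K_6
energy = seidel_energy(A_complete)          # expected: 2n - 2 = 10
```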
In Section 2, we study a generalization of the Seidel matrix, define S^{α}(G), and obtain a lower bound for S^{α}(G) for α ≥ 2.
Section 3 contains a brief summary of the KKT method and establishes the relation between nonlinear programming and our upper bound.
In
[7]
, Ghorbani obtained a lower bound for
S^{α}
(
G
), 0 ≤
α
≤ 2. In Section 3, we obtain an upper bound for
S^{α}
(
G
),
α
≥ 2. As for prerequisites, the reader is expected to be familiar with nonlinear programming; undefined notation and terminology from nonlinear programming can be found in
[2
,
12]
.
2. The generalization of Seidel matrix
First, we define the generalized Seidel matrix, and then we obtain some results related to this concept.
Definition 2.1. Let λ_{1}, λ_{2}, …, λ_{n} be the eigenvalues of the Seidel matrix S(G). The power of eigenvalues of the Seidel matrix S(G) is denoted by S^{α}(G) and defined as follows:

S^{α}(G) = Σ_{i=1}^{n} |λ_{i}|^{α}.
Remark 2.1. If α = 1, then S^{α}(G) = E_{s}(G); i.e., S^{α}(G) is a generalization of the Seidel energy of G.
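The quantity S^{α}(G) of Definition 2.1 can be computed directly from the Seidel spectrum; the sketch below (NumPy assumed; the helper name `s_alpha` is ours) evaluates it for the path P_{3} and checks that α = 2 returns trace S^{2} = n(n − 1):

```python
import numpy as np

def s_alpha(adj, alpha):
    """S^alpha(G): sum of |lambda_i|^alpha over the Seidel eigenvalues."""
    m = adj.shape[0]
    S = np.ones((m, m)) - 2 * adj - np.eye(m)
    return (np.abs(np.linalg.eigvalsh(S)) ** alpha).sum()

# For the path P_3: alpha = 1 recovers the Seidel energy E_s(G),
# and alpha = 2 gives trace S^2 = n(n - 1) = 6.
A_path = np.array([[0., 1., 0.],
                   [1., 0., 1.],
                   [0., 1., 0.]])
e1 = s_alpha(A_path, 1)   # Seidel energy of P_3
e2 = s_alpha(A_path, 2)   # should equal n(n - 1) = 6
```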
For the proof of the next theorem, we need the concept of conference graph.
Definition 2.2
(Conference matrix
[10]
). A conference matrix is a square matrix C of order n with zero diagonal and ±1 off diagonal, such that
CC^{T}
= (
n
− 1)
I
. If C is symmetric, then C is the Seidel matrix of a graph and this graph is called a
conference graph
.
Conference matrices are closely related to Hadamard matrices and share many of their properties. For more details we refer the reader to
[3
,
10]
. The remainder of this section will be devoted to the proof of a lower bound for
S^{α}
(
G
) for all
α
≥ 2.
Definition 2.3 (Hölder's Inequality [8]). Let 1/p + 1/q = 1 with p, q > 1. Then Hölder's inequality for the n-dimensional Euclidean space, where the set S is {1, …, n} with the counting measure, states that

Σ_{k=1}^{n} |x_{k}y_{k}| ≤ (Σ_{k=1}^{n} |x_{k}|^{p})^{1/p}(Σ_{k=1}^{n} |y_{k}|^{q})^{1/q}

for all X = (x_{1}, x_{2}, …, x_{n}), Y = (y_{1}, y_{2}, …, y_{n}) ∈ ℝ^{n}, with equality when |y_{k}| = c|x_{k}|^{p−1} for some constant c ≥ 0. If p = q = 2, this inequality becomes Cauchy's inequality.
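A quick numerical sanity check of Hölder's inequality and its equality case (NumPy assumed; the vectors are arbitrary test data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 3.0, 1.5                 # conjugate exponents: 1/p + 1/q = 1
x = rng.random(8)
y = rng.random(8)

# Hölder: |sum x_k y_k| <= ||x||_p * ||y||_q
lhs = abs(x @ y)
rhs = (np.abs(x) ** p).sum() ** (1 / p) * (np.abs(y) ** q).sum() ** (1 / q)

# Equality case: |y_k| = c * |x_k|^(p-1), here with c = 2
y_eq = 2.0 * x ** (p - 1)
lhs_eq = x @ y_eq
rhs_eq = (x ** p).sum() ** (1 / p) * (y_eq ** q).sum() ** (1 / q)
```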
Theorem 2.4. Let G be a graph with n vertices. Then for all α ≥ 2, n ≥ 3,

S^{α}(G) ≥ n(n − 1)^{α/2},

and equality holds if and only if G is a conference graph.
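The bound of Theorem 2.4 can be tested numerically. The pentagon together with an isolated vertex is a standard example whose Seidel matrix is a symmetric conference matrix of order 6 (S^{2} = 5I), so equality should hold (NumPy assumed; `s_alpha` is our helper name):

```python
import numpy as np

def s_alpha(adj, alpha):
    """Sum of |lambda_i|^alpha over the eigenvalues of S(G)."""
    m = adj.shape[0]
    S = np.ones((m, m)) - 2 * adj - np.eye(m)
    return (np.abs(np.linalg.eigvalsh(S)) ** alpha).sum()

# Pentagon plus an isolated vertex: a conference graph on n = 6 vertices.
n = 6
A = np.zeros((n, n))
for k in range(5):                     # 5-cycle on vertices 1..5, vertex 0 isolated
    i, j = 1 + k, 1 + (k + 1) % 5
    A[i, j] = A[j, i] = 1.0

gap4 = s_alpha(A, 4) - n * (n - 1) ** 2     # equality case: should vanish
gap3 = s_alpha(A, 3) - n * (n - 1) ** 1.5   # equality case: should vanish
```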
Proof. Let λ_{1}, λ_{2}, …, λ_{n} be the eigenvalues of the Seidel matrix S(G); then the trace of S^{2}(G) is equal to Σ_{i=1}^{n} λ_{i}^{2} = n(n − 1). Let X = (λ_{1}^{2}, λ_{2}^{2}, …, λ_{n}^{2}) and Y = (1, 1, …, 1). By Hölder's inequality, we have |X^{T}Y| ≤ ║X║_{p}║Y║_{q}, that is,

Σ_{i=1}^{n} λ_{i}^{2} ≤ (Σ_{i=1}^{n} |λ_{i}|^{2p})^{1/p} n^{1/q}.

But since Σ_{i=1}^{n} λ_{i}^{2} = n(n − 1), we have n(n − 1) ≤ (Σ_{i=1}^{n} |λ_{i}|^{2p})^{1/p} n^{1−1/p}, and hence (Σ_{i=1}^{n} |λ_{i}|^{2p})^{1/p} ≥ n^{1/p}(n − 1); therefore Σ_{i=1}^{n} |λ_{i}|^{2p} ≥ n(n − 1)^{p}. Setting 2p = α, we get

S^{α}(G) = Σ_{i=1}^{n} |λ_{i}|^{α} ≥ n(n − 1)^{α/2},

with equality if and only if |λ_{i}| = √(n − 1) for i = 1, …, n. Moreover, if each |λ_{i}| is equal to √(n − 1), then S(G)S^{T}(G) = S^{2}(G) = (n − 1)I, which means that the Seidel matrix S(G) is a symmetric conference matrix and hence the graph G is a conference graph. □
3. Computation of an upper bound of S^{α}(G) by using the KKT method
In nonlinear programming, the Karush-Kuhn-Tucker (KKT) conditions are
necessary
for a local solution to a maximization problem provided that some regularity conditions are satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. This construction is adapted from
[1
,
2
,
12]
. We consider the following nonlinear optimization problem:

Maximize f(X)
subject to: g_{i}(X) ≤ 0 (i ∈ I), h_{j}(X) = 0 (j ∈ J),   (3.1)
where
I
and
J
are finite sets of indices. Suppose that the objective function
f
: ℝ
^{n}
→ ℝ and the constraint functions
g_{i}
: ℝ
^{n}
→ ℝ,
i
∈
I
and
h_{j}
: ℝ
^{n}
→ ℝ,
j
∈
J
are continuously differentiable at a point
X
^{∗}
.
Definition 3.1.
Let
X
^{∗}
be a point satisfying the constraints

g_{i}(X^{∗}) ≤ 0 (i ∈ I), h_{j}(X^{∗}) = 0 (j ∈ J),   (3.2)

and let
I
_{∗}
be the set of indices
i
for which
g_{i}
(
X
^{∗}
) = 0. Then
X
^{∗}
is said to be a
regular point
of the constraints (3.2), if the gradient vectors ∇h_{j}(X^{∗}),
j
∈
J
, ∇
g_{i}
(
X
^{∗}
),
i
∈
I
_{∗}
are linearly independent.
Definition 3.2
(Karush-Kuhn-Tucker Conditions (KKT)
[1
,
12]
). Let X^{∗} be a relative maximum point for the problem (3.1) and suppose X^{∗} is a regular point for the constraints. Then there exist constants μ_{j} (j ∈ J) and λ_{i} (i ∈ I), called the KKT multipliers, such that the following conditions hold:

Stationarity: ∇f(X^{∗}) = Σ_{j∈J} μ_{j}∇h_{j}(X^{∗}) + Σ_{i∈I} λ_{i}∇g_{i}(X^{∗});
Primal feasibility: h_{j}(X^{∗}) = 0 for j ∈ J, and g_{i}(X^{∗}) ≤ 0 for i ∈ I;
Dual feasibility: λ_{i} ≥ 0 for i ∈ I;
Complementary slackness: λ_{i}g_{i}(X^{∗}) = 0 for i ∈ I.
In the particular case where the set I is empty, i.e., when there are no inequality constraints, the KKT conditions reduce to the Lagrange conditions, and the KKT multipliers are called Lagrange multipliers.
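As a concrete illustration of the four conditions (our toy example, not taken from the paper), consider maximizing x_{1}x_{2} subject to x_{1} + x_{2} = 4 and x_{1} ≤ 3; the maximizer is (2, 2), the inequality is inactive there, and the KKT conditions hold with μ = 2, λ = 0 (NumPy assumed):

```python
import numpy as np

# Toy problem: maximize f(x) = x1*x2 subject to h(x) = x1 + x2 - 4 = 0
# and g(x) = x1 - 3 <= 0.  The maximizer is X* = (2, 2), where g is inactive.
x_star = np.array([2.0, 2.0])
grad_f = np.array([x_star[1], x_star[0]])   # gradient of x1*x2 at X*
grad_h = np.array([1.0, 1.0])               # gradient of the equality constraint
grad_g = np.array([1.0, 0.0])               # gradient of the inequality constraint

mu = 2.0     # multiplier for h
lam = 0.0    # multiplier for g: zero, since g is inactive at X*

# Stationarity residual: grad f - mu * grad h - lam * grad g should be zero
residual = grad_f - mu * grad_h - lam * grad_g
slack = lam * (x_star[0] - 3)               # complementary slackness product
```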
Definition 3.3
(Linear Independence Constraint Qualification
[2
,
12]
). Given a feasible point X^{∗} and the active set I_{∗} = {i ∈ I | g_{i}(X^{∗}) = 0} defined in (3.2), we say that the linear independence constraint qualification (LICQ) holds if the set of active constraint gradients {∇g_{i}(X^{∗}), i ∈ I_{∗}} is linearly independent.
Theorem 3.4
(LICQ and Multipliers
[12]
).
Given a point X^{∗} that satisfies the KKT conditions, along with an active set I_{∗} and corresponding multipliers, if LICQ holds at X^{∗}, then the multipliers are unique.
Definition 3.5
(Mangasarian-Fromovitz Constraint Qualification
[11]
). Suppose X^{∗} is a local solution of (3.1) and let A(X^{∗}) be the active set. The Mangasarian-Fromovitz constraint qualification (MFCQ) is defined by linear independence of the equality constraint gradients and the existence of a search direction d such that ∇g_{i}(X^{∗})^{T}d < 0 and ∇h_{j}(X^{∗})^{T}d = 0 for all i, j in A(X^{∗}).
The MFCQ is always satisfied if the LICQ is satisfied. Also, satisfaction of the MFCQ leads to bounded multipliers,
μ
^{∗}
, λ
^{∗}
, although they are not necessarily unique.
Theorem 3.6
(
[11]
).
If a local maximum X
^{∗}
of the function f
(
X
)
subject to the constraints g_{i}
(
X
^{∗}
) = 0,
i
∈
I
_{∗}
,
h_{j}
(
X
^{∗}
) = 0,
j
∈
J, satisfies MFCQ, then it satisfies the KKT conditions
.
Now we continue by recalling the relevant bounds on the eigenvalues of the Seidel matrix. Let
G
= (
V
,
E
) be a simple, undirected graph on vertex set
V
= {
v
_{1}
, …,
v_{n}
} and let λ
_{1}
≥ λ
_{2}
≥ … ≥ λ
_{n}
be the set of all eigenvalues of
S
(
G
). We formulate this as an optimization problem; to do so, we need to come up with appropriate constraints. The following assumption will be needed throughout the paper: the main constraint is the assumption |det S(G)| ≥ n − 1. The other constraints are obtained by the following straightforward lemma.
Lemma 3.7.
For any graph G with n vertices, we have:
(i) S^{2}(G) = Σ_{i=1}^{n} λ_{i}^{2} = n(n − 1) = (n − 1)^{2} + (n − 1);
(ii) S^{4}(G) = Σ_{i=1}^{n} λ_{i}^{4} ≤ (n − 1)^{4} + (n − 1);
(iii) S^{4}(G) = Σ_{i=1}^{n} λ_{i}^{4} ≥ n(n − 1)^{2};
(iv) λ_{i}^{2} ≤ (n − 1)^{2} for i = 1, …, n.
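Parts (i), (iii) and (iv) hold for every graph and are easy to confirm on a random example (NumPy assumed; the graph below is generated at random):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 7
A = np.triu(rng.integers(0, 2, (n, n)), 1).astype(float)
A = A + A.T                                  # random simple graph on 7 vertices
S = np.ones((n, n)) - 2 * A - np.eye(n)      # its Seidel matrix
lam = np.linalg.eigvalsh(S)

s2 = (lam ** 2).sum()       # part (i): always equals n(n - 1) = 42
s4 = (lam ** 4).sum()       # part (iii): at least n(n - 1)^2 = 252
top = (lam ** 2).max()      # part (iv): at most (n - 1)^2 = 36
```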
Lemma 3.8 ([4]). Suppose α, β, ν, ω, a, b, c, d are positive numbers such that α + β = ν + ω, αa + βb = νc + ωd, and d ≤ a, b ≤ c. Then the inequality αa^{p} + βb^{p} ≤ νc^{p} + ωd^{p} holds for p ≥ 1.
The remainder of this section will be devoted to the proof of Theorem 3.9.
Theorem 3.9. Let G be a graph with n ≥ 3 vertices and let λ_{1}, λ_{2}, …, λ_{n} be the eigenvalues of S(G). If |det S(G)| ≥ n − 1, then

S^{α}(G) ≤ (n − 1)^{α} + (n − 1), α ≥ 2.
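The theorem can be probed numerically by sampling random graphs, discarding those with |det S(G)| < n − 1, and comparing S^{α}(G) with the bound (NumPy assumed; this is a spot check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(2)
checked = 0
max_excess = -np.inf
for _ in range(300):
    n = int(rng.integers(3, 8))
    A = np.triu(rng.integers(0, 2, (n, n)), 1).astype(float)
    A = A + A.T                              # random simple graph
    S = np.ones((n, n)) - 2 * A - np.eye(n)
    if abs(np.linalg.det(S)) < n - 1:        # hypothesis of Theorem 3.9
        continue
    lam = np.abs(np.linalg.eigvalsh(S))
    for alpha in (2, 3, 4):
        excess = (lam ** alpha).sum() - ((n - 1) ** alpha + (n - 1))
        max_excess = max(max_excess, excess)
    checked += 1
```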
Proof. We prove this theorem by using the KKT method of nonlinear programming. Setting x_{i} = λ_{i}^{2}, we can describe our problem as the maximization of the function f(X) = Σ_{i=1}^{n} x_{i}^{p}. Hence, we have:

Maximize f(X) = Σ_{i=1}^{n} x_{i}^{p}
subject to:
Σ_{i=1}^{n} x_{i} = n(n − 1),   (3.4)
Σ_{i=1}^{n} x_{i}^{2} ≤ (n − 1)^{4} + (n − 1),   (3.5)
Σ_{i=1}^{n} x_{i}^{2} ≥ n(n − 1)^{2},   (3.6)
Π_{i=1}^{n} x_{i} ≥ (n − 1)^{2},   (3.7)
x_{i} ≤ (n − 1)^{2}, i = 1, …, n,   (3.8)
x_{i} ≥ δ, i = 1, …, n.   (3.9)

Since |det S(G)| ≥ n − 1, we have |λ_{i}| > 0 for 1 ≤ i ≤ n, and hence δ must be a fixed positive number. Also, for all i we must have x_{i} ≠ δ, because if x_{i} = δ for some i, then Π_{i=1}^{n} x_{i} < (n − 1)^{2}, which contradicts (3.7). We first need to prove the following claim:
Claim 3.10. Let λ be a local maximum of f(X) subject to the constraints (3.4)-(3.9). Then λ satisfies MFCQ.
Proof: Let λ = (λ_{1}, …, λ_{n}). Without loss of generality, we can assume λ_{1} ≥ λ_{2} ≥ … ≥ λ_{n}. If λ_{1} = λ_{n}, then in view of (3.4), all of the λ_{i} are equal to n − 1. In this case, among the formulas (3.5)-(3.9), equality holds only in (3.6); hence λ is not a local maximum, since f(λ) = n(n − 1)^{p} < (n − 1)^{2p} + (n − 1) for p > 1, n ≥ 3, so this case cannot occur. If λ_{1} > λ_{n}, then MFCQ is fulfilled by setting d = (1, 0, …, 0, −1). □
Now, we continue the proof of Theorem 3.9. We show that the maximum value of f(X) subject to the conditions (3.4)-(3.9) is equal to (n − 1)^{2p} + (n − 1). So assume that X = (x_{1}, x_{2}, …, x_{n}) is a local maximum of f(X) subject to the constraints (3.4)-(3.9). With no loss of generality, suppose that x_{1} ≥ x_{2} ≥ … ≥ x_{n}. By Theorem 3.6 and Claim 3.10, X satisfies the KKT conditions (3.10)-(3.14). By the choice of δ, we have g_{i}(X) = δ − x_{i} < 0 for i = 1, 2, …, n, and hence, by the complementary slackness condition (3.14), γ_{1} = γ_{2} = … = γ_{n} = 0. Setting D = Π_{i=1}^{n} x_{i}, the stationarity condition (3.10) can be written as

px_{i}^{p} = −μ_{4}D − μ_{1}x_{i} − 2(μ_{2} − μ_{3})x_{i}^{2} − ρ_{i}x_{i},  i = 1, …, n.
We consider the following cases:
Case 1: Let x_{1} = (n − 1)^{2}. Then by (3.11), and since X satisfies (3.7), it turns out that x_{2} = x_{3} = … = x_{n} = 1, and we have f(X) = (n − 1)^{2p} + (n − 1).
Case 2: Let x_{1} < (n − 1)^{2}. So, by (3.13), ρ_{1} = ρ_{2} = … = ρ_{n} = 0. It turns out that x_{1}, x_{2}, …, x_{n} must satisfy the following equation, where μ = μ_{2} − μ_{3}:

px_{i}^{p} = −μ_{4}D − μ_{1}x_{i} − 2μx_{i}^{2}.   (3.17)

The curve y = px^{p} and the parabola y = −μ_{4}D − μ_{1}x − 2μx^{2} intersect in at most two points; i.e., equation (3.17) has at most two distinct positive roots. Now, we have two subcases:
Subcase (i): If (3.17) has one positive root, then by (3.11), x_{1} = x_{2} = … = x_{n} = n − 1. Hence f(X) = n(n − 1)^{p}, which is smaller than (n − 1)^{2p} + (n − 1) for n ≥ 3, p > 1.
Subcase (ii): If (3.17) has two positive roots, say a and b, then applying Lemma 3.8 with c = (n − 1)^{2} and d = 1, we get f(X) ≤ (n − 1)^{2p} + (n − 1), which is the desired conclusion, and the proof is complete. □
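The optimization underlying the proof can be spot-checked by random search over the feasible region (using the constraints as stated in (3.4)-(3.9), with δ → 0; NumPy assumed). No sampled feasible point should exceed the claimed maximum (n − 1)^{2p} + (n − 1):

```python
import numpy as np

# Feasible region: sum x_i = n(n-1), x_i <= (n-1)^2, prod x_i >= (n-1)^2.
n, p = 6, 2.0
f_max = (n - 1) ** (2 * p) + (n - 1)         # claimed maximum of f(X)

rng = np.random.default_rng(3)
best = 0.0
for _ in range(20000):
    x = rng.random(n)
    x *= n * (n - 1) / x.sum()               # enforce the equality constraint
    if x.max() > (n - 1) ** 2 or x.prod() < (n - 1) ** 2:
        continue                             # infeasible sample, skip it
    best = max(best, (x ** p).sum())
```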
BIO
Ali Iranmanesh received his Ph.D. degree from Tarbiat Modares University, Tehran, Iran, in 1995. Since 1995 he has been employed at the same university, and since 2005 he has been a full professor of mathematics. His research interests are character theory of finite groups, characterization of finite groups by order components, applications of group theory in chemistry, mathematical chemistry, bio-mathematics, coverings of some finite groups, nano computation, hyperstructures and applications, history of mathematics, and mathematics education.
Department of Mathematics, Faculty of Mathematical Sciences, Tarbiat Modares University, P. O. Box: 14115137, Tehran, Iran.
email: iranmanesh@modares.ac.ir
Jalal Askari Farsangi received his M.Sc. from Sharif University and is currently a Ph.D. student at Tarbiat Modares University. His research interests are graph theory, applications of linear algebra, computational mathematics, and numerical optimization.
Department of Mathematics, Faculty of Mathematical Sciences, Tarbiat Modares University, P. O. Box: 14115137, Tehran, Iran.
email: jalal.askari@modares.ac.ir
References

J. Abadie, On the Kuhn-Tucker theorem, in: Nonlinear Programming, North-Holland, Amsterdam, 1967, pp. 19-36.
M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming: Theory and Algorithms, 2nd ed., John Wiley & Sons, New York, NY, 1993.
V. Belevitch, Conference networks and Hadamard matrices, Ann. Soc. Sci. Bruxelles Ser. I 82 (1968), 13-32.
J.A. Bondy, U.S.R. Murty, Graph Theory with Applications, Macmillan, 1976.
A.E. Brouwer, W.H. Haemers, Spectra of Graphs, Springer, New York, 2012.
E. Ghorbani, On eigenvalues of Seidel matrices and Haemers' conjecture, 2013, http://arxiv.org/abs/1301.0075v1.
H. Finner, Some new inequalities for the range distribution, with application to the determination of optimum significance levels of multiple range tests, J. Amer. Statist. Assoc. 85 (1990), 191-194. DOI: 10.1080/01621459.1990.10475325.
I. Gutman, The energy of a graph: old and new results, in: A. Betten, A. Kohnert, R. Laue, A. Wassermann (Eds.), Algebraic Combinatorics and Applications, Springer, Berlin, 2001, pp. 196-211.
W.H. Haemers, Seidel switching and graph energy, MATCH Commun. Math. Comput. Chem. 68 (2012), 653-659.
O.L. Mangasarian, S. Fromovitz, The Fritz John necessary optimality conditions in the presence of equality and inequality constraints, J. Math. Anal. Appl. 17 (1967), 37-47. DOI: 10.1016/0022-247X(67)90163-1.
J. Nocedal, S.J. Wright, Numerical Optimization, 2nd ed., Springer, New York, 2006.