Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 1) Theoretical Principle

Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, May 2014, 32(3): 191-204.

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

- Received : March 03, 2014
- Accepted : June 06, 2014
- Published : May 27, 2014


In recent years, multi-camera systems have been recognized as an affordable alternative for the collection of 3D spatial data from physical surfaces. The collected data can be applied for different mapping (e.g., mobile mapping and mapping inaccessible locations) or metrology applications (e.g., industrial, biomedical, and architectural). In order to fully exploit the potential accuracy of these systems and ensure successful manipulation of the involved cameras, a careful system calibration should be performed prior to the data collection procedure. The calibration of a multi-camera system is accomplished when the individual cameras are calibrated and the geometric relationships among the different system components are defined. In this paper, a new single-step approach is introduced for the calibration of a multi-camera system (i.e., individual camera calibration and estimation of the lever-arm and boresight angles among the system components). In this approach, one of the cameras is set as the reference camera and the system mounting parameters are defined relative to that reference camera. The proposed approach is easy to implement and computationally efficient. The major advantage of this method, when compared to available multi-camera system calibration approaches, is the flexibility of being applied for either directly or indirectly geo-referenced multi-camera systems. The feasibility of the proposed approach is verified through experimental results using real data collected by a newly-developed indirectly geo-referenced multi-camera system.
et al., 2001), a marine vehicle (Adams, 2007), or a human operator (Ellum, 2001). A directly geo-referenced multi-camera system also includes a positioning and orientation system (POS), encompassing a Global Navigation Satellite System (GNSS), an Inertial Measurement Unit (IMU), a Dead Reckoning (DR) system, and/or a Distance Measurement Instrument (DMI).
So far, different low-cost multi-camera systems have been developed for diverse applications. Wang et al. (2012) designed and manufactured a hand-held multi-camera system, the Portable Panoramic Image Mapping System (PPIMS), for mapping applications in areas which cannot be accessed by mobile-mapping systems (e.g., rugged terrain or forest areas) (Fig. 1(a)). Shu et al. (2009) developed a multi-camera system which is utilized for 3D measurements in industrial applications. Fritsch et al. (2012) introduced and utilized a multi-camera system for cultural heritage documentation (Fig. 1(b)). Malian et al. (2004) developed a multi-camera system, the MEDical PHOtogrammetric System (MEDPHOS), for biomedical applications (e.g., bedsore analysis and wound measurement) (Fig. 1(c)).
Fig. 1. Some of the developed multi-camera systems: (a) PPIMS (for mapping applications), (b) IPF multi-camera system (for cultural heritage documentation), and (c) MEDPHOS (for biomedical applications)
In order to achieve the potential accuracy of multi-camera systems for different metrology applications, a careful system calibration should be carried out prior to the data collection (Habib et al., 2011; Rau et al., 2011). A multi-camera system calibration is accomplished when the individual cameras are calibrated and the mounting parameters relating the system components (e.g., the cameras, GPS, and IMU) are estimated. This system calibration is specifically important for integrated sensor orientation in mobile mapping applications (Rau et al., 2011), dense image matching for 3D object reconstruction (Haala and Rothermel, 2012), long-term and short-term infrastructure monitoring (Detchev et al., 2013; Kwak et al., 2013), and biomedical applications (Detchev et al., 2011). In multi-camera systems which encompass a direct geo-referencing component, the mounting parameters include two sets of relative orientation parameters: (a) the mounting parameters relating the cameras to each other, and (b) the mounting parameters relating the cameras to the POS unit (Habib et al., 2011). One should note that these two sets of parameters are not independent from each other, i.e., the mounting parameters between the individual cameras can be derived from the mounting parameters relating these cameras to the POS unit. For multi-camera systems which do not include a direct geo-referencing component, the system mounting parameters will be confined to those defining the relationship among the individual cameras. Once these parameters have been estimated through a system calibration procedure, they can be utilized as prior information in future data collection and processing activities.
The mounting parameters, which describe the geometric relationships between the different system components, can be determined using either a two-step or a single-step calibration procedure. In the two-step procedure, the mounting parameters of a directly geo-referenced multi-camera system are estimated by comparing the POS-based position and orientation information of the platform with the Exterior Orientation Parameters (EOPs) of the cameras, which are determined through an independent bundle adjustment procedure (Cramer, 1999; Skaloud, 1999; Casella et al., 2006). In a multi-camera system which is not equipped with a POS unit, the system mounting parameters can be derived by comparing the EOPs of the cameras, which have been estimated through the bundle adjustment procedure, at different epochs to each other. Although the two-step procedure can be easily implemented for both directly and indirectly geo-referenced multi-camera systems, its reliability is highly dependent on the availability of a calibration site with well-distributed ground control points and strong data acquisition geometry. Furthermore, the correlations among the EOPs and the Interior Orientation Parameters (IOPs) are ignored in this approach.
In the single-step procedure, the system mounting parameters and the cameras' self-calibration parameters are estimated in a single bundle adjustment procedure (Cramer and Stallmann, 2002; Smith et al., 2006; Yuan, 2008). This procedure requires a less strict data acquisition configuration and is able to handle the dependencies between the EOPs and IOPs more robustly, i.e., the IOPs can be refined along with the mounting parameters (Yuan, 2008). There are two approaches for the single-step procedure, which differ in how the POS-derived information and the system mounting parameters are incorporated in the bundle adjustment procedure. The commonly used single-step approach is implemented by extending the conventional bundle adjustment procedure with additional observation equations for POS-assisted systems and constraints for indirectly geo-referenced systems (King, 1992; El-Sheimy, 1992; Lee et al., 2008; Lerma et al., 2010; Tommaselli et al., 2013). For indirectly geo-referenced multi-camera systems, Relative Orientation Constraints (ROCs) are explicitly included in the bundle adjustment procedure to enforce the invariant relationship among the cameras at different epochs. The incorporation of the ROCs among the cameras reduces the number of independent unknown parameters that define the cameras' position and orientation for the acquired imagery at different epochs from (6 × n_{c} × n_{e}) in the conventional bundle adjustment procedure to (6 × (n_{c} - 1) + 6 × n_{e}), where n_{c} is the number of the involved cameras and n_{e} is the number of epochs. Incorporating the ROCs for the enforcement of the invariant mounting parameters among the cameras involves complicated implementation (e.g., extensive partial derivatives and manual formatting of the camera pairs to be used in the ROCs). One should note that such complexity is intensified as the number of the involved cameras and epochs increases (Rau et al., 2011).
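To make this reduction in unknowns concrete, the two counting formulas above can be evaluated side by side; the camera and epoch numbers below are illustrative only, not taken from the paper:

```python
def conventional_unknowns(n_c: int, n_e: int) -> int:
    # Conventional bundle adjustment: 6 EOPs (3 position + 3 orientation)
    # for every camera at every epoch.
    return 6 * n_c * n_e

def roc_unknowns(n_c: int, n_e: int) -> int:
    # With relative orientation constraints: 6 mounting parameters for each
    # non-reference camera, plus 6 platform EOPs per epoch.
    return 6 * (n_c - 1) + 6 * n_e

# Hypothetical configuration: a 4-camera system imaging from 20 epochs
print(conventional_unknowns(4, 20))  # 480 unknowns
print(roc_unknowns(4, 20))           # 138 unknowns
```

The gap between the two counts grows linearly with both the number of cameras and the number of epochs, which is why the ROC-based formulation scales better.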
For multi-camera systems equipped with a POS unit, an alternative single-step approach has been proposed and employed for the estimation of the system mounting parameters (Rau et al., 2011). This approach employs the concept of the modified collinearity equations, which has already been exploited in some research work for the integrated sensor orientation (ISO) of single-camera systems (Pinto and Forlani, 2002; Ellum, 2001; Habib et al., 2010). In Rau et al. (2011), the mounting parameters relating the individual cameras and the POS unit (Fig. 2) are explicitly incorporated in the modified collinearity equations (Eq. (1)). In this equation, the notation r_{a}^{b} defines the position vector of point (a) relative to a coordinate system associated with point (b), and R_{a}^{b} describes the rotation matrix that transforms a vector expressed relative to the coordinate system denoted by (a) into a vector expressed relative to the coordinate system denoted by (b). One should note that the ROCs between the cameras are implicitly enforced in this model.
Fig. 2. Mounting parameters in a multi-camera system equipped with a POS unit (single-step calibration approach)
Eq. (1):
r_{I}^{M} = r_{b}^{M}(t) + R_{b}^{M}(t) r_{C_j}^{b} + λ_{ij} R_{b}^{M}(t) R_{C_j}^{b} r_{i}^{C_j}

Where,
r_{I}^{M} is the position vector of an object point (I) relative to the local mapping frame (M),
r_{b}^{M}(t) is the position vector of the POS body frame (b) relative to the local mapping frame (M) at time (t),
R_{b}^{M}(t) is the rotation matrix relating the local mapping frame and the POS body frame (b) at a given time (t), defined by (ω(t), φ(t), κ(t)),
r_{C_j}^{b} is the lever-arm offset vector (ΔX, ΔY, ΔZ) between the POS body frame and the j^{th} camera (C_{j}) perspective center, defined relative to the IMU body frame,
R_{C_j}^{b} is the rotation matrix relating the POS body frame and the j^{th} camera coordinate system, defined by the boresight angles (Δω, Δφ, Δκ),
r_{i}^{C_j} is the vector from the perspective center of the j^{th} camera to the image point (i) with respect to the j^{th} camera coordinate system, and
λ_{ij} is the scale factor specific to the j^{th} camera and the i^{th} point combination. This scale factor can be implicitly determined from overlapping imagery through the bundle adjustment procedure.
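As an illustration, the forward model of Eq. (1) can be sketched as a short function. This is a minimal sketch assuming an Rx·Ry·Rz rotation convention and hypothetical variable names; it is not the paper's implementation:

```python
import numpy as np

def rot(omega, phi, kappa):
    """Rotation matrix built as R = Rx(omega) @ Ry(phi) @ Rz(kappa),
    one common photogrammetric convention (an assumption here)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def object_point(r_b_M, R_b_M, lever_arm, R_cam_b, img_vec, lam):
    """Eq. (1): platform position, rotated lever-arm, and the scaled,
    rotated image vector sum to the object point in the mapping frame."""
    return r_b_M + R_b_M @ lever_arm + lam * R_b_M @ R_cam_b @ img_vec
```

With identity rotations, a zero platform position, and a lever-arm of (1, 2, 3), an image vector (0, 0, -1) at scale 2 maps to the object point (1, 2, 1), which is easy to verify by hand.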
In contrast to the ROC equations used for the mounting parameters calibration in indirectly geo-referenced multi-camera systems, this approach offers much simpler implementation. The simplicity of this procedure is not affected by the number of the involved cameras and the number of utilized epochs. The drawback of this procedure is that the presence of the POS unit is necessary for the calibration of the system mounting parameters. Therefore, it cannot be applied for multi-camera systems which are intended for indoor applications. Moreover, if prior information regarding the mounting parameters relating the different cameras is available, it cannot be incorporated in the bundle adjustment-based calibration procedure.
In order to overcome the shortcomings of the aforementioned approaches for multi-camera system calibration, an adapted single-step procedure is introduced in this paper. The proposed approach has the flexibility to be applied for the calibration of multi-camera systems utilized for outdoor/indoor applications in the presence/absence of a POS unit. Furthermore, the simplicity of this approach is not affected by increasing the number of the utilized cameras or the involved epochs. Also, this approach is capable of considering prior information regarding the mounting parameters among the different cameras. In this procedure, the mounting parameters relating the cameras to a reference camera are explicitly incorporated in the modified collinearity equations. For multi-camera systems equipped with a POS unit, the mounting parameters relating the individual cameras to the POS unit are implicitly considered in the bundle adjustment procedure, except for the mounting parameters relating the reference camera to the POS unit, which are explicitly incorporated in the proposed model.
This research work is presented in two parts: in the first part, the theoretical principles of the proposed single-step approach for the multi-camera system calibration will be introduced, and in the second part, the automation and implementation of this system calibration procedure will be discussed. This paper starts by describing the proposed model for the multi-camera system calibration, i.e., the estimation of the camera calibration parameters and the system mounting parameters. Afterwards, the implemented bundle adjustment procedure for the simultaneous estimation of the system calibration parameters will be described. An automated system calibration procedure is only practical when the correspondence between the image and object space features/targets is established. In order to establish such a correspondence, the EOPs of the individual images should be approximately estimated. Therefore, a linear-based projective transformation followed by a Single Photo Resection (SPR) is introduced for the approximation of the EOPs of the individual images in the next section. Preliminary experimental results are then provided to verify the feasibility and investigate the performance of the proposed multi-camera system calibration approach. Finally, this paper presents some conclusions and recommendations for future research work. More detailed discussion of the automated target extraction and labelling, as well as comprehensive analysis of the proposed approach for evaluating the stability of the estimated system calibration parameters through experimental results, will be provided in the second part of this paper.
The IOPs of a camera encompass the principal point coordinates (x_{p}, y_{p}), the principal distance (c), and the distortion parameters compensating for deviations from the assumed perspective geometry (e.g., radial lens distortions, de-centering lens distortions, and affine deformations). The assumed perspective geometry is represented by the collinearity equations, which mathematically describe the light ray from an object point through the camera perspective center to the image point while considering the displacements caused by various distortions (Eq. (2)).
Eq. (2):
x_{a} = x_{p} - c · [r_{11}(X_{A} - X_{0}) + r_{12}(Y_{A} - Y_{0}) + r_{13}(Z_{A} - Z_{0})] / [r_{31}(X_{A} - X_{0}) + r_{32}(Y_{A} - Y_{0}) + r_{33}(Z_{A} - Z_{0})] + Δx
y_{a} = y_{p} - c · [r_{21}(X_{A} - X_{0}) + r_{22}(Y_{A} - Y_{0}) + r_{23}(Z_{A} - Z_{0})] / [r_{31}(X_{A} - X_{0}) + r_{32}(Y_{A} - Y_{0}) + r_{33}(Z_{A} - Z_{0})] + Δy

Where, (x_{a}, y_{a}) are the image coordinates of point (a); (X_{A}, Y_{A}, Z_{A}) are the ground coordinates of the corresponding object point (A); x_{p}, y_{p}, and c are the principal point coordinates and the principal distance of the utilized camera; and (X_{0}, Y_{0}, Z_{0}) are the ground coordinates of the camera's perspective center. The camera's attitude parameters are embedded in the rotation matrix elements (r_{11}, r_{12}, ⋯, r_{33}), and Δx and Δy are the compensations for deviations from the collinearity condition. The image coordinate displacements introduced by the distortions (Δx and Δy) can be estimated using the mathematical models in Eq. (3). The adopted additional parameters encompass the radial lens distortion coefficients (K_{1}, K_{2}, K_{3}), the de-centering lens distortion coefficients (P_{1}, P_{2}), and the in-plane distortion coefficients (b_{1}, b_{2}) (Fraser, 1997).
Eq. (3):
Δx = x̄(K_{1}r^{2} + K_{2}r^{4} + K_{3}r^{6}) + P_{1}(r^{2} + 2x̄^{2}) + 2P_{2}x̄ȳ + b_{1}x̄ + b_{2}ȳ
Δy = ȳ(K_{1}r^{2} + K_{2}r^{4} + K_{3}r^{6}) + P_{2}(r^{2} + 2ȳ^{2}) + 2P_{1}x̄ȳ

Where, x̄ = x_{a} - x_{p}, ȳ = y_{a} - y_{p}, and r^{2} = x̄^{2} + ȳ^{2}.
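As a concrete sketch of the Fraser (1997) style distortion model referenced in Eq. (3), the displacements can be computed as follows; the coefficient names follow the text, while the function itself is an illustration rather than code from the paper:

```python
def distortion(x, y, xp, yp, K1, K2, K3, P1, P2, b1, b2):
    """Image-coordinate displacements from radial (K1..K3),
    de-centering (P1, P2), and in-plane (b1, b2) distortion terms."""
    xb, yb = x - xp, y - yp          # coordinates reduced to the principal point
    r2 = xb**2 + yb**2               # squared radial distance
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3
    dx = xb * radial + P1 * (r2 + 2 * xb**2) + 2 * P2 * xb * yb + b1 * xb + b2 * yb
    dy = yb * radial + P2 * (r2 + 2 * yb**2) + 2 * P1 * xb * yb
    return dx, dy
```

With all coefficients set to zero the displacements vanish, and with only K1 active the model reduces to the familiar cubic radial term.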
In order to determine the IOPs of the utilized cameras, a bundle adjustment with self-calibration is usually performed while using control information (Fraser, 1997). This control information can be in the form of a 2D or 3D test field which includes several signalized targets, whose coordinates are either determined from a pre-surveying process or estimated using a free-network adjustment procedure (Granshaw, 1980). Depending on the nature of the utilized test field, convergent images and images in portrait and landscape mode might be required to avoid dependencies between the IOPs and EOPs during the bundle adjustment with self-calibration procedure.
The proposed approach, unlike the one introduced by Rau et al. (2011), is capable of dealing with multi-camera systems which are either equipped with a direct geo-referencing component or indirectly geo-referenced. In this approach, one of the involved cameras is selected as the reference camera and the system mounting parameters are defined with respect to that reference camera (Fig. 3). Therefore, the system mounting parameters are defined by two independent sets: (1) the mounting parameters relating the cameras to the reference camera (solid black vectors in Fig. 3) and (2) the mounting parameters relating the reference camera to the POS body frame (dashed gray vector in Fig. 3). To have a simpler model that is not affected by the number of the utilized cameras or involved epochs, these mounting parameters are directly incorporated in the collinearity equations. The proposed mathematical model for a multi-camera system that includes a POS unit is shown in Eq. (4).
Fig. 3. Mounting parameters for a multi-camera system equipped with a direct geo-referencing unit (proposed model)
Eq. (4):
r_{I}^{M} = r_{b}^{M}(t) + R_{b}^{M}(t) r_{C_R}^{b} + R_{b}^{M}(t) R_{C_R}^{b} r_{C_j}^{C_R} + λ_{ij} R_{b}^{M}(t) R_{C_R}^{b} R_{C_j}^{C_R} r_{i}^{C_j}

Where, r_{C_R}^{b} and R_{C_R}^{b} are the lever-arm offset vector and boresight rotation matrix relating the reference camera (C_{R}) to the POS body frame (b); r_{C_j}^{C_R} and R_{C_j}^{C_R} are the lever-arm offset vector and boresight rotation matrix relating the j^{th} camera (C_{j}) to the reference camera; and the remaining terms are as defined in Eq. (1).
The introduced model has the flexibility to be applied for multi-camera systems which do not include a POS unit (i.e., indirectly geo-referenced multi-camera systems). In order to adapt this model for an indirectly geo-referenced multi-camera system, a virtual POS unit is set at the reference camera location (Fig. 4). Therefore, the boresight angles and lever-arm offset between the POS body frame and the reference camera are fixed to zeros, i.e., R_{C_R}^{b} = I and r_{C_R}^{b} = 0. More specifically, the reference camera is used as a basis for defining the position and orientation of the platform.
Fig. 4. Mounting parameters for an indirectly geo-referenced multi-camera system (proposed model)
Accordingly, the proposed mathematical model (Eq. (4)) is adapted for multi-camera systems which do not include a POS unit (indirectly geo-referenced multi-camera systems) as shown in Eq. (5), which is an equivalent model to the one in Eq. (4) when setting R_{C_R}^{b} = I and r_{C_R}^{b} = 0.

Eq. (5):
r_{I}^{M} = r_{C_R}^{M}(t) + R_{C_R}^{M}(t) r_{C_j}^{C_R} + λ_{ij} R_{C_R}^{M}(t) R_{C_j}^{C_R} r_{i}^{C_j}

In this equation, r_{C_R}^{M}(t) is the position vector of the reference camera (C_{R}) relative to the mapping reference frame at a given time (t), and R_{C_R}^{M}(t) is the rotation matrix relating the local mapping frame and the reference camera (C_{R}) at a given time (t), defined by (ω(t), φ(t), κ(t)).
In summary, the mounting parameters relating the individual cameras to the reference camera and the mounting parameters relating the reference camera to the POS body frame can be directly determined using the proposed single-step procedure (Eq. (4)). The introduced method can also be used for an indirectly geo-referenced system (i.e., a multi-camera system which is not equipped with a POS unit) to directly estimate the mounting parameters relating the individual cameras to the reference one (Eq. (5)). One should note that Eq. (4) and Eq. (5) are not different models.
The observation equations are obtained by dividing the first two rows of the proposed model by the third row and moving the image coordinates (x_{i}^{C_j}, y_{i}^{C_j}) to the left side of these equations. The observation equations in their final form (i.e., the modified collinearity equations) are shown in Eq. (6). One should note that the scale factor (λ_{ij}) is eliminated through the division process.
The multi-camera system calibration parameters are then estimated through a general Least-Squares Adjustment (LSA) procedure, while considering all the quantities on the right side of Eq. (6) (i.e., the platform position and orientation parameters, the system mounting parameters, the IOPs including the distortion parameters, and the ground coordinates of the object points) as unknown parameters. In this procedure, the modified form of the collinearity equations (observation equations) is firstly linearized with respect to the unknown parameters (Eq. (7)).
In the general LSA, all the involved quantities in the mathematical model can be treated either as unknowns, unknowns with prior information, or error-free (constant) parameters. In order to include prior information regarding any of the unknown parameters (e.g., POS-derived position and orientation and ground coordinates of control points) in the LSA procedure, pseudo observation equations can be added for such parameters. The corrections to the approximate values of the unknown parameters, x̂, are then derived through Eq. (8).

Eq. (8): x̂ = N^{-1}c, where N is the normal matrix and c is the constant vector of the normal equations.
To treat a specific parameter as a constant in Eq. (8), all the elements occupying the row and column corresponding to that element in the normal matrix (N) are changed to zero, except for the diagonal element, which is changed to one. The corresponding row of the c vector in Eq. (8) is also changed to zero. In a multi-camera system which is not equipped with a POS unit, the mounting parameters relating the reference camera and the POS body frame (r_{C_R}^{b} and R_{C_R}^{b}) are fixed to zero. Therefore, these parameters are treated as constants in the bundle adjustment procedure.
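The row/column manipulation described above can be sketched as follows; this is a minimal illustration assuming a dense normal matrix N and right-hand-side vector c, not the paper's implementation:

```python
import numpy as np

def fix_parameter(N, c, k):
    """Treat the k-th unknown as an error-free constant: zero its row and
    column in the normal matrix, set its diagonal element to one, and zero
    its entry in the c vector. Solving the modified system then returns a
    zero correction for that parameter."""
    N = N.copy()
    c = c.copy()
    N[k, :] = 0.0
    N[:, k] = 0.0
    N[k, k] = 1.0
    c[k] = 0.0
    return N, c
```

For example, fixing the first unknown of a 2 × 2 system decouples it from the adjustment: the solver returns a zero correction for it while the remaining unknown is still estimated from its own normal equation.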
In order to accurately estimate the multi-camera system calibration parameters through a bundle adjustment procedure, corresponding points/targets in the image and object space are required. The establishment of this correspondence is carried out through a labelling procedure. A labelling process can be implemented if we have reasonable estimates for the ground coordinates of the object points/targets as well as the EOPs of the images. In this research work, object space targets are included on a 2D test field that has been printed to scale from a CAD file. Therefore, initial coordinates from the CAD file provide good estimates of the object space targets. The EOPs of the individual images can also be estimated using homologous coded targets (black rectangles including 8 white circles) in the image and object space (Fig. 5). The object space and image space coordinates of corresponding coded targets are firstly used in a linear-based projective transformation to approximate the EOPs of the individual images. A Single Photo Resection (SPR) procedure is then utilized to refine the approximate values for the EOPs. Having a reasonable estimate of the EOPs, we can label the image space targets after projecting the coordinates of the object space targets onto the imagery. In the final step, common IDs will be assigned to nearby projected/extracted targets. In the following subsection, the proposed procedure for the approximation of the EOPs of the images using corresponding coded targets in the image and object space will be introduced. Detailed explanation regarding the utilized targets and their extraction will be presented in the second part of this paper.
Fig. 5. Utilized 2D calibration test field with signalized checkerboard and coded (black rectangles with white blobs) targets
An eight-parameter projective transformation relates the object space coordinates (X, Y) of coded targets, mounted on a 2D calibration test field, to their corresponding image coordinates (x, y) as in Eq. (9).

Eq. (9):
x = (c_{1}X + c_{2}Y + c_{3}) / (c_{7}X + c_{8}Y + 1)
y = (c_{4}X + c_{5}Y + c_{6}) / (c_{7}X + c_{8}Y + 1)
Where, the X and Y axes of the mapping frame are considered to be aligned along the calibration board (i.e., Z = 0 for all the targets within the calibration board). To estimate the projective transformation coefficients (c_{1}, c_{2}, …, c_{8}), a minimum of four non-collinear coded targets is required. Once the coefficients of the projective transformation are estimated, the EOPs of the images can be recovered from these coefficients. Kobayashi and Mori (1997) introduced a procedure for recovering the EOPs of the images and the principal distance (c) of the utilized camera from the projective transformation coefficients, while ignoring the lens distortions and the principal point coordinates. Their approach is complicated and suffers from ambiguities in the recovery of the Euler angles. To overcome these drawbacks, a sequential procedure is used in this research to recover the principal distance (c), the position of the perspective center (X_{0}, Y_{0}, Z_{0}), and the rotation matrix from the projective transformation coefficients.
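The estimation of the eight projective coefficients from at least four non-collinear points can be sketched as a linear least-squares problem, obtained by multiplying Eq. (9) through by its denominator; this is an illustrative implementation under that standard linearization, not the paper's code:

```python
import numpy as np

def projective_coeffs(XY, xy):
    """Estimate (c1, ..., c8) of Eq. (9) from >= 4 non-collinear point pairs.
    Each pair yields two linear equations, e.g. for x:
        x * (c7*X + c8*Y + 1) = c1*X + c2*Y + c3
    which rearranges into a row of a linear system A @ coeffs = rhs."""
    rows, rhs = [], []
    for (X, Y), (x, y) in zip(XY, xy):
        rows.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y]); rhs.append(x)
        rows.append([0, 0, 0, X, Y, 1, -y * X, -y * Y]); rhs.append(y)
    A = np.array(rows, dtype=float)
    b = np.array(rhs, dtype=float)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With exactly four points the system is square; with more points the least-squares solution averages out measurement noise in the extracted target centers.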
The procedure starts by estimating the camera's principal distance through the reformulation of Eq. (9) into the form in Eq. (10), where γ is a scale factor (Seedahmed and Habib, 2002).

Eq. (10): γ [x, y, 1]^{T} = [c_{1} c_{2} c_{3}; c_{4} c_{5} c_{6}; c_{7} c_{8} 1] [X, Y, 1]^{T}
To correlate the projective transformation coefficients with the IOPs/EOPs of the camera/image in question, the collinearity equations are reformulated while considering the planarity of the object space as in Eq. (11), which can be reduced to the form in Eq. (12) through multiplication by (-1/c) and rearrangement of the terms that include the principal distance.
Comparing the forms in Eqs. (10) and (12), one could derive the form in Eq. (13), which demonstrates the parametric relationship between the projective transformation coefficients and the IOPs/EOPs in the collinearity equations. The principal distance (c) is then estimated by considering the first two columns of Eq. (13) and imposing the orthogonality constraint between the first two columns of the rotation matrix (Eq. (14)), where r_{ij} are the elements of the rotation matrix.
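Under the assumption of a principal point at the image origin, one closed form consistent with the orthogonality constraint described above is c^{2} = -(c_{1}c_{2} + c_{4}c_{5})/(c_{7}c_{8}). The following sketch illustrates it; it is an assumption-laden illustration rather than the paper's exact Eq. (14):

```python
import math

def principal_distance(c1, c2, c4, c5, c7, c8):
    """Recover the principal distance c from the projective coefficients by
    enforcing orthogonality of the first two columns of the rotation matrix:
        r11*r12 + r21*r22 + r31*r32 = 0
    With (c1, c4, c7) and (c2, c5, c8) proportional to those columns, scaled
    by -c in the first two components, this gives
        c^2 = -(c1*c2 + c4*c5) / (c7*c8)."""
    val = -(c1 * c2 + c4 * c5) / (c7 * c8)
    if val <= 0:
        raise ValueError("degenerate geometry: non-positive c^2")
    return math.sqrt(val)
```

Note that the formula degenerates when c_{7}c_{8} approaches zero (e.g., a nearly fronto-parallel view of the board), which is one reason the subsequent SPR refinement is still needed.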
The sequential procedure continues by estimating the perspective center coordinates (X_{0}, Y_{0}, Z_{0}) through the manipulation of Eq. (13) to produce the form in Eq. (15). The perspective center coordinates are then estimated using Eq. (16), where A is the combined matrix equivalent to the left-hand side of Eq. (15) and (A^{T}A)_{ij} refers to the element in the i^{th} row and j^{th} column of the matrix product A^{T}A.
In the final step, the unknown rotation matrix R is recovered by enforcing a constraint that ensures the co-alignment of the two vectors connecting the estimated perspective center to the image point (x_{i}) and to the object point (X_{i}) for the corresponding image-object point pairs (Fig. 6). This alignment can be mathematically described as e_{i} = x_{i} - R X_{i}, where x_{i} and X_{i} are the normalized image and object vectors and e_{i} is the misalignment error for the i^{th} image-object pair. A least-squares adjustment procedure is then employed to estimate R by minimizing the Sum of Squared Errors (SSE) of all the involved n image-object point pairs (Eq. (17)).
Fig. 6. Required rotation for aligning the vectors connecting the perspective center to corresponding image and object points
In Eq. (17), the first two terms are always positive since they represent the squared magnitudes of the x_{i} and X_{i} vectors. Therefore, to minimize the SSE in Eq. (17), the rotation matrix R has to be estimated in a way that maximizes the term Σ x_{i}^{T} R X_{i}. This term can be represented as the dot product in Eq. (18) and maximized using the quaternion approach proposed by Horn (1987). To provide some background, some of the necessary basics of quaternions are given below; interested readers can refer to Horn (1987) for more details.
A quaternion is a four-dimensional vector that has one real and three imaginary elements and is denoted in this manuscript by a dotted symbol (e.g., q̇). Unit quaternions (i.e., quaternions whose magnitude ‖q̇‖ is unity) can be used to represent any rotation in 3D space: the rotation angle is defined by the real component of the quaternion vector, and the rotation axis is defined by the imaginary components of the quaternion vector. In other words, for any 3 × 3 rotation matrix R, there is a corresponding unit quaternion q̇. The rotation multiplication R x_{i} is equivalent to the quaternion multiplication q̇ ẋ_{i} q̇*, where the unit quaternion q̇ corresponds to R and q̇* is constructed by negating the imaginary part of q̇ (conjugate quaternion). The term ẋ_{i} is the quaternion form of x_{i}, which is constructed by setting the real part to zero and the three elements of x_{i} as the imaginary part, i.e., ẋ_{i} = (0, x_{i}). Using quaternion properties (Horn, 1987), Eq. (19) can be derived, where C and C̄ are 4 × 4 matrices that convert the quaternion-based multiplication into matrix multiplication, and N is a 4 × 4 matrix constructed using the components of x_{i} and X_{i} for the available image-object point pairs. The unknown quaternion q̇ is then estimated by maximizing the quadratic form q̇^{T} N q̇ (Eq. (20)). To estimate q̇ while imposing the unity constraint, one should use the Lagrange multiplier λ and maximize the target function φ (Eq. (21)). To derive the desired quaternion, the target function φ should be differentiated with respect to q̇ according to Eq. (22), which yields the expression in Eq. (23).
The expression in Eq. (23) is satisfied if and only if λ and q̇ are corresponding eigenvalues and eigenvectors of the matrix N. In such a case, the term q̇^{T} N q̇ reduces to λ since q̇ is a unit quaternion (Eq. (24)). Therefore, based on Eq. (24), the term q̇^{T} N q̇ is maximized when λ is the largest eigenvalue of the matrix N, and q̇ is therefore the eigenvector corresponding to the largest eigenvalue. The rotation matrix R and the corresponding Euler angles (ω, φ, κ) can be finally derived from q̇ as explained in Horn (1987). In summary, the quaternion-based procedure avoids the complexity and potential ambiguities that are associated with the procedure proposed by Kobayashi and Mori (1997).
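Horn's closed-form solution described above can be sketched as follows; this is an illustrative implementation assuming x_{i} ≈ R X_{i}, with the 4 × 4 matrix N constructed as in Horn (1987):

```python
import numpy as np

def horn_rotation(x_vecs, X_vecs):
    """Closed-form rotation (Horn, 1987): build the symmetric 4x4 matrix N
    from the correlation matrix S = sum(X_i x_i^T), take the eigenvector of
    the largest eigenvalue as the unit quaternion q = (q0, qx, qy, qz), and
    convert it to the rotation matrix R mapping X_i onto x_i."""
    S = sum(np.outer(X, x) for x, X in zip(x_vecs, X_vecs))
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz],
    ])
    w, V = np.linalg.eigh(N)                 # eigenvalues in ascending order
    q0, qx, qy, qz = V[:, np.argmax(w)]      # unit quaternion (sign-ambiguous)
    # Convert the quaternion to a rotation matrix.
    return np.array([
        [q0*q0 + qx*qx - qy*qy - qz*qz, 2*(qx*qy - q0*qz),             2*(qx*qz + q0*qy)],
        [2*(qy*qx + q0*qz),             q0*q0 - qx*qx + qy*qy - qz*qz, 2*(qy*qz - q0*qx)],
        [2*(qz*qx - q0*qy),             2*(qz*qy + q0*qx),             q0*q0 - qx*qx - qy*qy + qz*qz],
    ])
```

The sign ambiguity of the eigenvector (q̇ and -q̇ represent the same rotation) is harmless here, since both map to the same rotation matrix.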
In the second step of this procedure, the derived EOP approximations (position and orientation) of the images through the linear-based projective transformation are refined through a SPR procedure, using the image coordinates of coded targets together with their corresponding object space coordinates. In this procedure, the nominal values of the IOPs and EOP approximations of the images are incorporated in a LSA for the estimation of more accurate EOPs (the approximate value for the principal distance is used while assuming zero values for the principal point coordinates and the distortions parameters). The refined EOPs of the images, together with the nominal IOPs of the utilized camera, are then manipulated to establish the correspondence between the object and image space targets.
So far, this paper has focused on the introduction of a new single-step approach for multi-camera system calibration and a closed-form procedure for the estimation of the EOPs of the involved imagery, which will be utilized for labelling the calibration targets in the image space. In order to automate the proposed multi-camera system calibration procedure, automatic techniques will be introduced for the detection, localization, and identification of instances of object space targets in the images. Comprehensive discussion of the target extraction and labelling procedure as well as experimental results are presented in the second part of this paper. In the meantime, preliminary results are presented in the following section.
In the conducted experiments, the estimated distortion parameters were confined to the K_{1} and K_{2} parameters (radial lens distortion parameters); i.e., the de-centering lens and in-plane distortion coefficients were ignored since they were deemed insignificant for the utilized cameras.
Table 1 reports the calibration results for the individual cameras in this multi-camera system. Closer investigation of the reported values in Table 1 verifies that the IOPs of the individual cameras have been estimated with high precision.
Table 1. Camera calibration results using the proposed technique for multi-camera system calibration
The utilized multi-camera system
Table 2 reports the estimated mounting parameters (lever-arm components and boresight angles) among the involved cameras using the proposed single-step procedure for the utilized calibration dataset. In the reported results, camera "4" was taken as the reference camera (i.e., the position and the orientation of the platform refer to the position and orientation of camera "4"). We can observe in Table 2 that the mounting parameters among the cameras have been estimated with high precision through the implemented calibration procedure (i.e., the standard deviations for the boresight angles range from ±4.82" to ±28.93", while the lever-arm offsets have been estimated with a precision ranging from ±0.05 to ±0.31 mm). Also, the lever-arm offsets are very close to their physically measured values. The estimated a-posteriori variance factor for the implemented calibration procedure is also reported in Table 2, confirming the validity of the derived system calibration results.
Table 2. Estimated a-posteriori variance factor and system mounting parameters (lever-arm components and boresight angles) w.r.t. camera 4 (reference camera)
Experimental results using real data have verified the feasibility of the proposed single-step approach for the estimation of the camera calibration parameters and system mounting parameters. The second part of this paper will focus on the automation of the proposed multi-camera system calibration procedure. This automation is achieved by the automatic detection, localization, and identification of specifically-designed coded and signalized targets in the images. The corresponding coded targets will be utilized in the introduced two-step procedure for the approximation of the EOPs of the involved images. Furthermore, the coordinates of homologous signalized targets in the image and object space will be utilized for the implementation of the system calibration through the proposed bundle adjustment with self-calibration procedure. The implementation of the system calibration procedure for a newly-developed multi-camera system and the optimal data collection configuration for an accurate system calibration will also be discussed in the second part of the paper. In addition, further experimental results from real data will be provided to verify the feasibility of the proposed automated approach for multi-camera system calibration.
Keywords: Multi-camera system; Calibration; Mounting parameters; Positioning and orientation system; Indirect georeferencing
1. Introduction

In the last few decades, different digital imaging systems (LiDAR, multi-camera systems, and range cameras) have been developed and adapted for the acquisition of 2D/3D data from physical surfaces. The collected data can be utilized for 3D surface reconstruction and mapping applications – e.g., digital building model generation, industrial site modelling, cultural heritage documentation, and biomedical applications. Among these systems, LiDAR has gained more popularity due to its fast and accurate data acquisition capability, versatility, and applicability in different situations. However, these systems are usually expensive, have a limited field of view, do not provide semantic information for the scanned surfaces, and require direct geo-referencing units. In recent years, the application of fully integrated multiple medium-format camera systems has also become very common in mobile mapping vehicles (e.g., for transportation corridor mapping and facade texturing applications). These systems are becoming popular since they can compete with laser scanning systems for the generation of high-density point clouds from overlapping imagery using dense matching techniques. Point Grey Research (Ladybug), Immersive Media Corporation (Dodeca 2360), Google, Earthmine Inc., Facet Technology, and Pictometry Corp. have developed different types of these multi-camera systems mounted on moving airborne or ground vehicles. These systems are expensive and heavy, usually require a direct geo-referencing unit, and cannot be easily used for small-area mapping and 3D surface reconstruction applications. In order to overcome these shortcomings and provide a more affordable 3D data collection technique for the aforementioned applications (especially for small-area mapping and monitoring applications), multiple low-cost digital cameras have been utilized in an integrated system.
The integrated multi-camera system is able to provide a larger field of view to cover larger areas or perform a 360° scene reconstruction. The other advantage of such a system is that it can be applied in different operational environments, indoor or outdoor, and in the presence or absence of direct geo-referencing units. A low-cost multi-camera system is usually composed of multiple digital cameras mounted on a rigid platform. The utilized platform can be either static (stationary) or kinematic, e.g., a land vehicle (El-Sheimy, 1996) or a helicopter (Mostafa et al., 2001).
2. Multi-Camera System Calibration

The calibration of a multi-camera system, as mentioned earlier, involves the calibration of the individual cameras and the estimation of the mounting parameters relating the system components. In this section, the utilized procedure for estimating the system calibration parameters is presented. In the first subsection, the camera calibration parameters are introduced and possible alternatives for the estimation of these parameters are reviewed. The proposed single-step approach for the estimation of the cameras' IOPs and system mounting parameters in the presence/absence of POS information is then introduced in the next subsection. Finally, the implemented least-squares adjustment for the estimation of the system calibration parameters – camera calibration and system mounting parameters – is outlined.
- 2.1 Camera calibration parameters

The objective of camera calibration is to provide an accurate estimate of the internal characteristics of the utilized camera. These internal characteristics, which are commonly known as the Interior Orientation Parameters (IOPs), include the principal point coordinates – xp and yp, the principal distance – c, and the lens distortion parameters.
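To make the role of these parameters concrete, the following minimal sketch reduces a measured image point to the principal point and removes radial lens distortion. The two-term radial model with coefficients k1 and k2 is an assumption for illustration only, not necessarily the specific distortion model adopted in this paper.

```python
def correct_image_point(x, y, xp, yp, k1, k2):
    """Reduce a measured image point to the principal point and remove
    radial lens distortion (hypothetical two-term radial model)."""
    # Coordinates relative to the principal point (xp, yp)
    xb, yb = x - xp, y - yp
    # Squared radial distance from the principal point
    r2 = xb**2 + yb**2
    # First-order radial distortion corrections
    dx = xb * (k1 * r2 + k2 * r2**2)
    dy = yb * (k1 * r2 + k2 * r2**2)
    return xb - dx, yb - dy
```

With zero distortion coefficients the function simply shifts the point to the principal-point-centered system.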
- 2.2 Mounting parameters modelling

In multi-camera systems, the mounting parameters are categorized as: (1) the mounting parameters relating the cameras to the positioning/orientation system (POS), and (2) the mounting parameters relating the cameras to each other. These parameters are estimated using either two-step or single-step approaches, which have been reviewed in the previous section. In order to overcome the drawbacks of the aforementioned techniques, an alternative single-step approach for the estimation of the system mounting parameters is introduced in this subsection. This single-step procedure is an extension of the system calibration approach proposed by Habib et al. (2011).
- r^b_CR is the lever-arm offset vector between the POS body frame and the reference camera (CR) perspective center, defined relative to the POS body frame,
- R^b_CR is the rotation matrix relating the POS body frame and the reference camera (CR) coordinate system, defined by the boresight angles,
- r^CR_Cj is the lever-arm offset vector between the reference camera (CR) and the jth camera (Cj) perspective centers, defined relative to the reference camera coordinate system, and
- R^CR_Cj is the rotation matrix relating the reference camera (CR) coordinate system and the jth camera (Cj) coordinate system, defined by the boresight angles.
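As a sketch of how these mounting parameters chain together, the snippet below composes the position and attitude of camera j in the mapping frame from the platform pose and the two sets of lever-arm/boresight parameters. All numeric values and the rotation-angle convention are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rot(omega, phi, kappa):
    """Rotation matrix from three angles (radians); R = Rz(kappa) @ Ry(phi) @ Rx(omega)
    is one common photogrammetric convention, assumed here for illustration."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Platform (POS body frame) pose in the mapping frame (illustrative values)
r_b_m = np.array([100.0, 50.0, 10.0])   # platform position
R_b_m = rot(0.0, 0.0, np.pi / 2)        # platform attitude

# Mounting parameters (illustrative values)
r_CR_b = np.array([0.1, 0.0, 0.2])      # lever arm: body frame -> reference camera
R_CR_b = rot(0.0, 0.0, 0.0)             # boresight: body frame -> reference camera
r_Cj_CR = np.array([0.5, 0.0, 0.0])     # lever arm: reference camera -> camera j
R_Cj_CR = rot(0.0, 0.0, 0.1)            # boresight: reference camera -> camera j

# Position and attitude of camera j in the mapping frame
r_Cj_m = r_b_m + R_b_m @ (r_CR_b + R_CR_b @ r_Cj_CR)
R_Cj_m = R_b_m @ R_CR_b @ R_Cj_CR
```

The key point is that only the platform pose varies per epoch; the two lever-arm/boresight pairs stay fixed for a rigidly mounted system.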

- 2.3 Estimation of the system calibration parameters

The estimation of the system calibration parameters – the IOPs of the individual cameras, the mounting parameters relating the individual cameras to the reference camera, and the mounting parameters relating the reference camera to the POS body frame for directly geo-referenced multi-camera systems – is usually carried out through a bundle adjustment procedure. In order to implement this bundle adjustment procedure, the proposed form of the collinearity equations is modified by rearranging the terms in Eq. (4). This modification is carried out by dividing the first two equations by the third one after moving the image coordinate terms to one side of the equations.
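The rearrangement described above can be illustrated for a single camera as follows. This sketch omits the mounting-parameter chain and distortion terms, and its sign convention is one common choice rather than the exact form of Eq. (4).

```python
import numpy as np

def project(point_m, r_cam_m, R_cam_m, c, xp, yp):
    """Collinearity equations: project a mapping-frame point into an image,
    given the camera position r_cam_m, rotation R_cam_m, principal distance c,
    and principal point (xp, yp). Distortion terms are omitted for brevity."""
    # Transform the object point into the camera coordinate system
    p = R_cam_m.T @ (point_m - r_cam_m)
    # Divide the first two components by the third (the rearrangement
    # described above) to obtain the image coordinates
    x = xp - c * p[0] / p[2]
    y = yp - c * p[1] / p[2]
    return x, y
```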
- y is the n × 1 vector of differences between the measured and predicted observations using the approximate values of the unknown parameters,
- x is the m × 1 correction vector to the approximate values of the unknown parameters,
- A is the n × m design matrix (i.e., the matrix of partial derivatives with respect to the unknown parameters),
- e is the n × 1 vector of random noise, which is normally distributed with a zero mean and variance-covariance matrix Σ,
- σ0^2 is the a-priori variance factor, and
- P is the n × n weight matrix of the noise vector.
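A minimal numeric illustration of this least-squares solution, x = (AᵀPA)⁻¹AᵀPy, using hypothetical A, P, and y with n = 6 observations and m = 2 unknowns:

```python
import numpy as np

# Hypothetical design matrix (n = 6, m = 2) and noise-free observations
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
              [2.0, 1.0], [1.0, 2.0], [2.0, 2.0]])
y = A @ np.array([3.0, -1.0])      # observations generated from known unknowns
P = np.eye(6)                      # equal-weight observations

# Normal matrix and least-squares estimate of the corrections
N = A.T @ P @ A
x_hat = np.linalg.solve(N, A.T @ P @ y)

# Residuals and a-posteriori variance factor (redundancy n - m)
e = y - A @ x_hat
sigma0_sq = (e.T @ P @ e) / (A.shape[0] - A.shape[1])
```

With noise-free observations, the estimate recovers the generating parameters and the a-posteriori variance factor is numerically zero; in the actual adjustment, comparing it against the a-priori factor is what validates the calibration.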

- 2.3.1 Closed-form approach for the estimation of the EOPs of involved images

In this research, a two-step procedure is applied for the estimation of the EOPs of the involved images. In the first step of this procedure, the EOPs of the images are approximated through a linear-based projective transformation procedure, and in the second step, the approximate values of the EOPs are refined using a Single Photo Resection (SPR). The linear-based projective transformation is employed, in the first step of the proposed closed-form approach, to relate the 2D (planar) ground coordinates of the control points to their corresponding image coordinates.
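The linear estimation in this first step can be sketched as follows. The eight-parameter form of the 2D projective transformation is standard (cf. Kobayashi and Mori, 1997), but the function name and the direct least-squares formulation below are illustrative assumptions.

```python
import numpy as np

def projective_coeffs(XY, xy):
    """Estimate the eight coefficients of the 2D projective transformation
        x = (a1*X + a2*Y + a3) / (c1*X + c2*Y + 1)
        y = (b1*X + b2*Y + b3) / (c1*X + c2*Y + 1)
    from >= 4 planar ground points XY and their image points xy."""
    rows, rhs = [], []
    for (X, Y), (x, y) in zip(XY, xy):
        # Multiplying through by the denominator makes the system linear
        rows.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y]); rhs.append(x)
        rows.append([0, 0, 0, X, Y, 1, -y * X, -y * Y]); rhs.append(y)
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return coeffs  # a1, a2, a3, b1, b2, b3, c1, c2
```

The recovered coefficients can then be decomposed into approximate EOPs, which seed the nonlinear SPR refinement in the second step.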
3. Experimental Results

In this section, an experiment with real data is conducted to illustrate the performance of the proposed multi-camera system calibration approach. The dataset for this experiment was acquired using a newly-developed multi-camera system which is applied for different metrology applications. This system includes seven low-cost digital cameras mounted on a reinforced arc-shaped aluminum arm. These cameras are aligned so as to capture convergent images of the object of interest (Fig. 7). This dataset included 140 images collected at 20 epochs. The calibration of this multi-camera system was carried out using bundle adjustment with self-calibration under an indirect geo-referencing procedure. The embedded distortion model for this self-calibration procedure only includes the radial lens distortion parameters.
Table 1. Camera calibration results using the proposed technique for multi-camera system calibration


4. Conclusions and Recommendations for Future Research Work

In recent years, multi-camera systems have been recognized as fast and cost-effective data collection tools for various applications. These systems integrate multiple cameras on a platform to obtain larger object space coverage. An accurate system calibration procedure – calibration of the individual cameras as well as calibration of the mounting parameters relating the system components – is crucial to ensure the achievement of the expected accuracy of these systems. In this paper, a new single-step procedure has been introduced for the calibration of multi-camera systems. In the proposed system calibration procedure, one of the cameras is considered as the reference camera and the system mounting parameters are estimated relative to that camera. The advantages of the proposed method, when compared to existing multi-camera system calibration approaches, can be summarized as follows:
- 1. The modified collinearity equations, which have been previously implemented for the calibration of directly geo-referenced single-camera systems, are extended in this research work to handle multi-camera systems,
- 2. The proposed approach offers a simple implementation; the simplicity of this procedure is not affected by increasing the number of the utilized cameras and involved epochs,
- 3. In this procedure, the size of the normal matrix (N) is reduced by decreasing the number of the unknown parameters that are necessary for defining the position and orientation of the imaging platform and the individual cameras. This reduction decreases the storage and execution-time requirements and avoids possible correlations among the unknowns,
- 4. The developed method allows for a single-step estimation of two sets of mounting parameters (i.e., the mounting parameters relating the individual cameras and the reference camera and the mounting parameters relating the reference camera to the POS body frame),
- 5. The proposed model can deal with multi-camera systems that either include a POS unit or are indirectly geo-referenced,
- 6. The proposed approach can incorporate prior information regarding the mounting parameters among the involved cameras, and
- 7. The introduced procedure will make the calibration procedure more robust against problems in image configuration and control distribution. This robustness is achieved by explicitly enforcing the relative orientation constraints among the cameras in the modified collinearity equations.

References

Adams, D. (2007), Commercial marine-based mobile mapping and survey systems, Proceedings of the 5th International Symposium on Mobile Mapping Technology MMT '07, Padua, Italy.

Casella, V., Galetto, R., and Franzini, M. (2006), An Italian project on the evaluation of direct georeferencing in photogrammetry, Proceedings of EuroCOW 2006.

Cramer, M., Stallmann, D., and Haala, N. (1999), Sensor integration and calibration of digital airborne three-line camera systems, Proceedings of the International Workshop on Mobile Mapping Technology, Bangkok, Thailand, pp. 451-458.

Cramer, M. and Stallmann, D. (2002), System calibration for direct georeferencing, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Proceedings of the ISPRS Commission III Symposium Photogrammetric Computer Vision, Vol. 34, Part 3/A, pp. 79-84.

Detchev, I., Habib, A., and Chang, Y.-C. (2011), Image matching and surface registration for 3D reconstruction of a scoliotic torso, Geomatica, Vol. 65, pp. 175-187.

Detchev, I., Habib, A., and El-Badry, M. (2013), Dynamic beam deformation measurements with off-the-shelf digital cameras, Journal of Applied Geodesy, Vol. 7, pp. 147-157.

El-Sheimy, N. (1992), A mobile multi-sensor system for GIS applications in urban centers, International Archives of Photogrammetry and Remote Sensing, Vol. 31, Part B2, pp. 55-100.

El-Sheimy, N. (1996), The Development of VISAT: A Mobile Survey System for GIS Applications, Ph.D. dissertation, Department of Geomatics Engineering, University of Calgary, 198 p.

Ellum, C. (2001), The Development of a Backpack Mobile Mapping System, M.Sc. thesis, Department of Geomatics Engineering, University of Calgary, 172 p.

Fraser, C.S. (1997), Digital camera self-calibration, ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 52, pp. 149-159.

Fritsch, D., Abdel-Wahab, M., Cefalu, A., and Wenzel, K. (2012), Photogrammetric point cloud collection with multi-camera systems, Progress in Cultural Heritage Preservation, Lecture Notes in Computer Science, Springer, Berlin Heidelberg, pp. 11-20.

Granshaw, S.I. (1980), Bundle adjustment methods in engineering photogrammetry, The Photogrammetric Record, Vol. 10, pp. 181-207.

Habib, A., Kersting, A.P., and Bang, K.-I. (2010), Comparative analysis of different approaches for the incorporation of position and orientation information in integrated sensor orientation procedures, Proceedings of Canadian Geomatics Conference 2010 and ISPRS COM I Symposium, Calgary, Canada.

Habib, A., Kersting, A.P., Bang, K.-I., and Rau, J. (2011), A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system, Archives of Photogrammetry, Cartography and Remote Sensing, Vol. 22, pp. 173-185.

Haala, N. and Rothermel, M. (2012), Dense multiple stereo matching of highly overlapping UAV imagery, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 39, Part B1, pp. 387-392.

Horn, B.K. (1987), Closed-form solution of absolute orientation using unit quaternions, Journal of the Optical Society of America, Vol. 4, pp. 629-642.

King, B. (1992), Optimisation of bundle adjustments for stereo photography, International Archives of Photogrammetry and Remote Sensing, Vol. 29, Part B5, pp. 168-173.

Kobayashi, K. and Mori, C. (1997), Relations between the coefficients in the projective transformation equations and the orientation elements of a photograph, Photogrammetric Engineering and Remote Sensing, Vol. 63, pp. 1121-1127.

Kwak, E., Detchev, I., Habib, A., El-Badry, M., and Hughes, C. (2013), Precise photogrammetric reconstruction using model-based image fitting for 3D beam deformation monitoring, Journal of Surveying Engineering, Vol. 139, pp. 143-155.

Lee, C.N., Lee, B.K., and Eo, Y.D. (2008), Experiment on camera platform calibration of a multi-looking camera system using single non-metric camera, Journal of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography, Vol. 26, No. 4, pp. 351-357.

Lerma, J.L., Navarro, S., Cabrelles, M., and Seguí, A.E. (2010), Camera calibration with baseline distance constraints, The Photogrammetric Record, Vol. 25, pp. 140-158.

Malian, A., Azizi, A., and Heuvel, F.A. (2004), Medphos: a new photogrammetric system for medical measurement, Proceedings of Commission V, XXth ISPRS Congress, Istanbul, Turkey, pp. 311-316.

Mostafa, M.M.R., Hutton, J., and Lithopoulos, E. (2001), Airborne direct georeferencing of frame imagery: an error budget, Proceedings of the 3rd International Symposium on Mobile Mapping Technology, Cairo, Egypt.

Pinto, L. and Forlani, G. (2002), A single-step calibration procedure for IMU/GPS in aerial photogrammetry, International Archives of Photogrammetry and Remote Sensing, Vol. 34, Part 3, pp. 210-213.

Rau, J.-Y., Habib, A.F., Kersting, A.P., Chiang, K.-W., Bang, K.-I., Tseng, Y.-H., and Li, Y.-H. (2011), Direct sensor orientation of a land-based mobile mapping system, Sensors, Vol. 11, pp. 7243-7261.

Seedahmed, G.H. and Habib, A.F. (2002), Linear recovery of the exterior orientation parameters in a planar object space, International Archives of Photogrammetry and Remote Sensing, Vol. 34, Part 3, pp. 245-248.

Skaloud, J. (1999), Optimizing Georeferencing of Airborne Survey Systems by INS/DGPS, Ph.D. dissertation, Department of Geomatics Engineering, University of Calgary, 179 p.

Shu, F., Zhang, J., and Li, Y. (2009), A multi-camera system for precise pose estimation in industrial applications, Proceedings of the IEEE International Conference on Automation and Logistics, ICAL '09, pp. 206-211.

Smith, M.J., Qtaishat, K.S., Park, D.W.G., and Jamieson, A. (2006), IMU and digital aerial camera misalignment calibration, Proceedings of EuroCOW 2006.

Tommaselli, A.M.G., Galo, M., de Moraes, M.V.A., Marcato, J., Caldeira, C.R.T., and Lopes, R.F. (2013), Generating virtual images from oblique frames, Remote Sensing, Vol. 5, pp. 1875-1893.

Wang, P.C., Tsai, P.C., Chen, Y.C., and Tseng, Y.H. (2012), One-step and two-step calibration of a portable panoramic image mapping system, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 39, Part B1, pp. 173-178.

Yuan, X. (2008), A novel method of systematic error compensation for a position and orientation system, Progress in Natural Science, Vol. 18, pp. 953-963.

Citation:

@article{GCRHBD_2014_v32n3_191,
  title={Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 1) Theoretical Principle},
  author={Lari, Zahra and Habib, Ayman and Mazaheri, Mehdi and Al-Durgham, Kaleel},
  journal={Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography},
  publisher={Korean Society of Surveying, Geodesy, Photogrammetry and Cartography},
  volume={32},
  number={3},
  pages={191--204},
  year={2014},
  month={May},
  url={http://dx.doi.org/10.7848/ksgpc.2014.32.3.191},
  doi={10.7848/ksgpc.2014.32.3.191}
}