Vision-based Ground Test for Active Debris Removal
Journal of Astronomy and Space Sciences. 2013. Dec, 30(4): 279-290
Copyright ©2013, The Korean Space Science Society
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Received : August 08, 2013
  • Accepted : September 09, 2013
  • Published : December 15, 2013
About the Authors
Seong-Min Lim
Department of Satellite Systems and Applications Engineering, University of Science and Technology, Daejeon 305-350, Korea
Hae-Dong Kim
IT Convergence Technology Team, Korea Aerospace Research Institute, Daejeon 305-806, Korea
Jae-Dong Seong
Department of Satellite Systems and Applications Engineering, University of Science and Technology, Daejeon 305-350, Korea
With mankind's continuous development of space, the number of space objects in Earth orbit, including space debris, has increased, and this is expected to hinder space development and activities in the near future. This study describes the implementation of a vision-based approach technique for one stage of space debris removal, the approach from a far-range rendezvous state to a proximity state, together with its ground test results. For vision-based object tracking, the fast and robust CAM-shift algorithm was combined with a Kalman filter, and a stereo camera was used to measure the distance to the tracked object. A low-cost space environment simulation test bed was constructed around a sun simulator, with a two-dimensional mobile robot serving as the approaching platform. The tracking performance was examined while the position of the sun simulator was varied; the CAM-shift achieved a tracking rate of about 87%, and the relative distance could be measured down to 0.9 m. Considerations for future space environment simulation tests are also proposed.
With continuous space development, satellite activities have become an essential part of human life. Satellites have long performed tasks in various fields (e.g., weather forecasting, long-range communication, Earth environment and ocean observation, military missions, and space exploration), and in the future they are expected to perform even more diverse tasks, such as refueling, repair, and mission changes carried out by servicing satellites. However, the disposal of satellites that have completed their missions has not been properly addressed, and uncontrollable satellites or spent launch vehicle stages orbiting the Earth have collided with other space objects, producing large amounts of space debris. In particular, the amount of space debris increased abruptly after China's 2007 anti-satellite test against Feng-Yun 1C and the 2009 collision between the United States' IRIDIUM 33 and Russia's COSMOS 2251. Across all orbits, the number of debris objects is currently estimated at about 10^12 for diameters above 0.1 mm and about 750,000 for diameters above 1 cm; of these, about 17,000 are cataloged space objects (Krag 2013). Collisions between space debris and operational satellites cause satellite failures and significantly affect their missions. To prevent this, many efforts have been made to avoid collisions in advance through continuous monitoring, but this cannot be an ultimate solution.
Meanwhile, to prevent the creation of new debris, countries operating satellites have recently adopted the 'post-mission disposal (PMD)' guideline, under which a satellite should be de-orbited within 25 years of mission completion, and missions are now actually being designed to follow it. NASA reported that if 'active debris removal (ADR)', in which five pieces of space debris are removed every year, is performed alongside PMD, the low Earth orbit environment could be stabilized at roughly the current debris population (Fig. 1) (Liou 2009), and also that if more than five pieces of major space debris were removed every year for 200 years starting in 2020, the growth trend could be made even more gradual (NASA ODPO 2007).
Fig. 1. LEO environment projection with PMD & ADR (Liou 2009).
Based on the above results, various methods for removing major space debris have been studied in many countries around the world. These include a method that induces de-orbiting by attaching to the debris a tether that is driven by electrodynamic forces in the Earth's magnetic field (Nishida et al. 2009); methods that induce de-orbiting of debris using an ion beam or a laser operated from the ground or on orbit (Merino et al. 2011, Phipps et al. 2012); a method that induces de-orbiting by deploying a sail carried on a satellite so that it is decelerated by solar radiation pressure and drag (Stohlman & Lappas 2013); and capture systems that remove single or multiple pieces of small debris using a robot arm or nets (Benvenuto & Carta 2013, Richard et al. 2013).
Overseas research institutes have also constructed ground-based test beds for testing the on-orbit techniques (e.g., rendezvous, docking, and proximity operations) applied to debris removal and satellite servicing. Vision-based ground test beds, in particular, have focused on simulating satellite motion and the space environment. For three-dimensional motion, configurations using a 6-DOF robot (Wimmer & Boge 2011) and an underwater test bed (Atkins et al. 2002) have been built; for two-dimensional motion, a test bed with a frictionless floor providing a floating condition has been constructed (Romano et al. 2007). At Astrium and INRIA/IRISA, a ground test was performed that acquires distance information from the contour motion of a space object in the image, without markers or prior information (Petit et al. 2011). In Korea, research on a space debris removal system is also in progress alongside the development of a space debris collision risk management system (Kim et al. 2013).
In this study, as preliminary research toward the development of a capture system for active debris removal, we describe the implementation of an algorithm for a vision-based approach technique and the ground test results of an approach (rendezvous) platform using a low-cost test bed.
- 2.1 Vision sensor for space environment
Core space technologies such as rendezvous and docking require high precision and have therefore traditionally been carried out under manned conditions. However, because manned space missions are costly and carry a high level of risk, studies on unmanned missions that can replace them have been actively pursued. Accordingly, a sensor appropriate for the proximity condition must be selected so that more accurate decisions can be made. A scheme that downlinks the sensor data to the ground, processes it there, and uplinks the result back to the spacecraft introduces a time delay and is therefore unsuitable for time-critical operations; in such cases, an on-board system that processes the sensor data in real time on the platform is needed. Fig. 2 shows the navigation sensors required for rendezvous and docking on orbit as a function of range. At about 100 km, the beginning of the rendezvous, a global positioning system (GPS) is used to initiate the far-range rendezvous; for the mid- and close-range rendezvous, radar and vision-type sensors are appropriate; and for the proximity condition of less than 5 m, a laser range finder is used.
Fig. 2. Typical operational ranges and measurement accuracies of rendezvous sensors (Fehse 2003).
For the ground test in this study, the approach is assumed to have been completed using GPS information down to about 100 m, the range at which a vision-type sensor becomes usable. From this position, the approach to the space object is assumed to continue until just before proximity operations such as docking and berthing. A stereo camera was used as the vision-type sensor so that the distance to an object could be measured without prior information on its size and shape.
- 2.2 Vision algorithm
- 2.2.1 Vision algorithm for the tracking of space debris
Object tracking using images has been applied in various situations, such as face tracking and surveillance (Hsia et al. 2009) and object chasing and sports game analysis (Kim et al. 2010); commonly used algorithms include the difference image, optical flow (Choi et al. 2011), SIFT, SURF, contour tracing, the mean-shift, and the CAM-shift (Salhi & Jammoussi 2012).
The difference image extracts the foreground by subtracting a background image from an image containing the foreground, and is typically suited to surveillance from a fixed position. Optical flow estimates the motion between frames from feature points within the image, and can be used when the background and the object move together. The mean-shift tracks an object by searching for local extrema of the data's density distribution, and the CAM-shift extends the mean-shift by allowing the size of the tracking window to be modified.
The space environment has a simpler background than the ground, but the background within an image changes very abruptly because the camera and platform move faster than they do on the ground. For the difference image and optical flow, object recognition is relatively difficult because the apparent motion of the background is similar to that of the tracked object. For the mean-shift and CAM-shift, which use the density distribution of the data, object recognition in the space environment is relatively easy because the background is simple. Moreover, in space the object's apparent size at the beginning of the rendezvous differs greatly from its size just before docking, and space debris tumbles in an uncontrollable, unstable state because of collisions. The CAM-shift, which can adapt its tracking window, is therefore appropriate for space debris tracking, as the sketch below illustrates.
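To make the contrast concrete, the short Python/OpenCV sketch below (OpenCV is the library the authors use in Section 2.3; the file names and the initial search window are placeholders) builds a hue histogram of the target and back-projects it onto a new frame, producing the density distribution that the mean-shift climbs with a fixed window and the CAM-shift climbs with an adaptively resized one.

```python
import cv2

# Build the target's hue histogram once from a reference patch
# (file names and window values are placeholders for illustration).
target = cv2.imread("target_roi.png")
hsv_t = cv2.cvtColor(target, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_t], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back-project the histogram onto a new frame: each pixel receives the
# likelihood of belonging to the target (the "density distribution").
frame = cv2.imread("frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
density = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (280, 200, 80, 80)              # hypothetical initial window
_, fixed_win = cv2.meanShift(density, window, term)      # fixed-size window
box, adaptive_win = cv2.CamShift(density, window, term)  # resized, rotated box
```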
- 2.2.2 CAM-shift
The CAM-shift is based on the mean-shift and operates by continuously updating the size of the tracking window. The mean-shift algorithm tracks a specific object by searching part of the tracking area; the necessary information, such as the position of the center of gravity and the width and height of the tracking window, is calculated from the 0th-, 1st-, and 2nd-order moments (M00, M01, M10, M11, M20, M02) of Eqs. (1-6) (Horn 1986, Freeman et al. 1996).
$M_{00} = \sum_x \sum_y I(x, y)$   (1)
$M_{01} = \sum_x \sum_y y \, I(x, y)$   (2)
$M_{10} = \sum_x \sum_y x \, I(x, y)$   (3)
$M_{11} = \sum_x \sum_y x y \, I(x, y)$   (4)
$M_{20} = \sum_x \sum_y x^2 \, I(x, y)$   (5)
$M_{02} = \sum_x \sum_y y^2 \, I(x, y)$   (6)
where $I(x, y)$ is the pixel (probability) value at position $(x, y)$ within the search window.
The coordinates of the center of gravity of the tracking window, (xc, yc), can be expressed as Eq. (7). These calculated coordinates serve as the measurement for the state vector of the Kalman filter.
$x_c = M_{10}/M_{00}, \quad y_c = M_{01}/M_{00}$   (7)
The direction angle θ between the principal axis of the image and the tracking window, together with the width w and height h of the tracking window, is obtained from Eqs. (11-13), using the intermediate terms defined in Eqs. (8-10).
$a = M_{20}/M_{00} - x_c^2$   (8)
$b = 2\,(M_{11}/M_{00} - x_c y_c)$   (9)
$c = M_{02}/M_{00} - y_c^2$   (10)
$\theta = \frac{1}{2} \arctan\!\left( \frac{b}{a - c} \right)$   (11)
$w = \sqrt{ \frac{(a + c) - \sqrt{b^2 + (a - c)^2}}{2} }$   (12)
$h = \sqrt{ \frac{(a + c) + \sqrt{b^2 + (a - c)^2}}{2} }$   (13)
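Read as code, Eqs. (1-13) reduce to a few array operations. The sketch below computes the centroid, direction angle, and window size from the raw moments of a back-projection patch; since the original equation images are not recoverable, the standard CAM-shift formulation is assumed for the exact scaling of w and h.

```python
import numpy as np

def window_geometry(patch):
    """Centroid, orientation, and size of the tracking window from the
    raw moments of a back-projection patch, following Eqs. (1-13)."""
    I = patch.astype(np.float64)
    y_idx, x_idx = np.indices(I.shape, dtype=np.float64)
    M00 = I.sum()                                    # Eq. (1)
    M01 = (y_idx * I).sum()                          # Eq. (2)
    M10 = (x_idx * I).sum()                          # Eq. (3)
    M11 = (x_idx * y_idx * I).sum()                  # Eq. (4)
    M20 = (x_idx ** 2 * I).sum()                     # Eq. (5)
    M02 = (y_idx ** 2 * I).sum()                     # Eq. (6)
    xc, yc = M10 / M00, M01 / M00                    # Eq. (7)
    a = M20 / M00 - xc ** 2                          # Eq. (8)
    b = 2.0 * (M11 / M00 - xc * yc)                  # Eq. (9)
    c = M02 / M00 - yc ** 2                          # Eq. (10)
    theta = 0.5 * np.arctan2(b, a - c)               # Eq. (11), quadrant-safe
    root = np.sqrt(b ** 2 + (a - c) ** 2)
    w = np.sqrt(max((a + c - root) / 2.0, 0.0))      # Eq. (12)
    h = np.sqrt((a + c + root) / 2.0)                # Eq. (13)
    return (xc, yc), theta, w, h
```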
Fig. 3 is a flowchart of the process by which the image information input from the camera is processed by the vision system using the CAM-shift and the Kalman filter.
Fig. 3. Flow chart of the tracking system. The dashed area is the CAM-shift.
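A plausible minimal implementation of this pipeline in Python with OpenCV is sketched below; the video source, initial window, and noise covariances are illustrative assumptions, not the authors' exact configuration.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter on the window centroid (x, y, vx, vy).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-4 * np.eye(4, dtype=np.float32)   # illustrative tuning
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

cap = cv2.VideoCapture(0)                # stand-in for the stereo camera feed
ok, frame = cap.read()
x, y, w, h = 270, 190, 100, 100          # hypothetical initial window
roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    box, window = cv2.CamShift(back, window, term)   # adaptive tracking window
    (cx, cy), _, _ = box                             # measured centroid
    kf.predict()
    est = kf.correct(np.array([[cx], [cy]], np.float32))
    # est[0], est[1]: smoothed centroid used to steer the chaser platform.
```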
- 2.2.3 Range measurement using disparity
In space, a change in distance is obtained by measuring the transmission and reception time of electromagnetic signals or by integrating velocity over time. With images, distance can be measured from the proportion of the object in the image if prior information about the object is available. Another method is stereo vision, which extracts depth information from two or more images. Stereo vision acquires images in a form similar to human binocular vision and generates a depth map by computing the displacement of feature points between the left and right images using triangulation. Fig. 4 shows the geometric configuration of stereo vision. The Z coordinate of the object P represents the perpendicular distance from the stereo camera, and it can be calculated as follows (Horn 1986).
The point P in the real world appears as pl and pr on the left and right image planes, with coordinates (xl, yl) and (xr, yr), respectively. The disparity is expressed as Eq. (14).
$d = x_l - x_r$   (14)
Using the image plane coordinates and the camera parameters, the real-world coordinates P(X, Y, Z) can be expressed as Eqs. (15-16). In Eq. (16), pr(xr, yr) is used because the right image serves as the reference for image navigation; f is the focal length and b is the baseline.
$Z = \frac{f\,b}{d}$   (15)
$X = \frac{x_r\,Z}{f}, \quad Y = \frac{y_r\,Z}{f}$   (16)
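In code, Eqs. (14-16) reduce to the short function below, assuming rectified, parallel cameras with pixel coordinates measured from each principal point and the right image as the navigation reference, as in the paper.

```python
def triangulate(xl, xr, yr, f, b):
    """Recover P(X, Y, Z) from a matched point pair, per Eqs. (14-16).
    f: focal length in pixels; b: baseline in metres."""
    d = xl - xr                # Eq. (14): disparity
    if d <= 0:
        return None            # no valid depth from a non-positive disparity
    Z = f * b / d              # Eq. (15): range along the optical axis
    X = xr * Z / f             # Eq. (16): lateral offset (right-camera frame)
    Y = yr * Z / f
    return X, Y, Z
```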
- 2.3 Ground test bed & system
Fig. 5 shows the configuration of the test bed system for active debris removal. The mobile robot was an NTCommander-1 (Ntrex Co., Ltd.), and the vision sensor was a Bumblebee 2 stereo camera (Point Grey, Inc.); the specifications of the vision sensor and the control computer are summarized in Table 1. A two-wheel mobile robot was used because its motion resembles that of a satellite, which moves forward after setting its direction with an attitude control system. The vision algorithm was implemented with the OpenCV library, and the distance measurement used FlyCapture and Triclops, the libraries supplied with the Bumblebee 2 stereo camera. The test was performed at a stereo camera frame rate of 30 FPS and an image size of 640 × 480 pixels.
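As a rough sketch of how these pieces connect (the Bumblebee 2 is actually driven through FlyCapture/Triclops over IEEE 1394b, so cv2.VideoCapture here is only a stand-in, and the serial command format for the NTCommander-1 rover controller is entirely hypothetical), the chaser-side loop might look like this:

```python
import cv2
import serial  # pyserial

# Hypothetical chaser loop: grab 640x480 frames at 30 FPS and forward
# wheel-speed commands to the rover controller over RS232.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 30)
rover = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... run the CAM-shift/Kalman tracker of Section 2.2 on `frame` ...
    left, right = 100, 100                       # placeholder wheel speeds
    rover.write(f"V {left} {right}\n".encode())  # command format is invented
```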
Fig. 4. Geometry of stereo vision. P is the position of the target; Cl and Cr are the two cameras of the stereo pair; Z is the distance between the camera and the tracked object.
Fig. 5. Block diagram of the chaser system. The chaser system consists of two parts: the control system and the mobile robot system. The control system has a control computer with a frame grabber. The mobile robot system is equipped with a stereo camera and a rover controller that drives the motors. The stereo camera is connected to the frame grabber via IEEE 1394b, and the control computer sends commands to the rover controller via RS232 serial communication.
Table 1. Chaser system specifications.
Table 2. Position of the sun simulator.
Fig. 6. 1/50-scale target object mockup (COMS).
Fig. 7. Positions of the sun simulator. P1 is in front of the object, P2 is at the left diagonal of the object, P3 is to the left of the object, and P4 is above the object.
Fig. 8. Space environment simulation test bed, consisting of the mobile platform, the COMS target mockup, and the sun simulator.
The tracking target was a model of the Chollian satellite (Communication, Ocean and Meteorological Satellite, COMS), shown in Fig. 6. The model measured 11 cm × 8 cm × 18.5 cm (width × length × height), a 1/50-scale model of the actual satellite.
Fig. 7 shows the positions of the sun simulator during the test. The effect of the sunlight direction was compared by changing the included angle between the camera's field of view and the sun simulator. As summarized in Table 2, P1 is in the forward direction of the mobile robot, P3 is to the side of the tracking target, P2 is between P1 and P3, and P4 is above the tracking target. Table 2 gives the straight-line distance between the sun simulator's light source and the target satellite model, along with the distances in each direction.
Fig. 9. Tracking using the vision algorithm (P3-2). The green rectangle is the tracking window. Distance from the tracking target: (a) 2.064 m (frame 150), (b) 1.778 m (frame 173), (c) 1.273 m (frame 201), (d) N/A (frame 247), (e) N/A (frame 267), (f) N/A (frame 323).
Fig. 10. Histograms for different positions of the sun simulator ((a) P1-2, (b) P2-2, (c) P3-2, (d) P4-2). The bar graphs show the distribution of each color in the image; (a) is the brightest case and (d) is the darkest.
Fig. 8 shows the implemented test bed. To remove reflections from the sun simulator, black paper was attached to the surrounding surfaces. The tracking target was assumed to be incapable of attitude control, so it was hung on a string and allowed to rotate randomly.
- 2.4 Test results
The test was performed three times at each sun simulator position (P1-P4), for a total of 12 runs; for example, the second test at position P1 is denoted P1-2. Fig. 9 shows captured frames from the P3-2 test, approaching the target in the order (a) to (f). Starting from a relative distance of 3 m, the test proceeded while the distance was continuously measured with the vision algorithm: (a) corresponds to a distance of 2.04 m, (b) to 1.77 m, and (c) to 0.95 m, while for (d)-(f) the distance could not be measured.
Fig. 10 shows the histograms of the target satellite model at positions P1-P4. Because the light from the sun simulator was vermilion and the satellite model was yellow, the images appeared reddish. At P1 and P3, yellow was also observed because much of the light reflected from one side of the satellite reached the camera; P1 was the brightest because the camera received the largest amount of reflected light. At P4, no yellow was observed and the image appeared red because relatively weak light fell on the front and side faces toward the camera. At P2, yellow was likewise absent and the image appeared red because the light source was relatively far away.
Table 3. Tracking rate of the CAM-shift algorithm for each case.
Table 3 shows the tracking rate of the CAM-shift tracking algorithm for each case. The maximum tracking rate was over 97% at P1-2, and the minimum was about 74% at P4-1. By sun simulator position, P3 showed the highest tracking rate. At P4, the amount of reflected light received is thought to have been small because the light fell only on the upper part of the target satellite model. At P1 and P2, the tracking rate is thought to have decreased slightly because white noise occurred when the light was incident along the same direction as the direction of motion. At P3, tracking was satisfactory because the light could fall stably on the target satellite model. Over all tests, the average tracking rate was about 87%, computed as sketched below.
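The paper does not spell out how the tracking rate is calculated; one plausible reading, used in the sketch below, is the percentage of frames in which the tracking window remained on the target.

```python
def tracking_rate(tracked_flags):
    """Percentage of frames in which the CAM-shift window stayed on the
    target; `tracked_flags` is one boolean per frame. This metric is an
    assumption, since the paper does not define it explicitly."""
    return 100.0 * sum(tracked_flags) / len(tracked_flags)

# Averaging this over the 12 runs would yield the ~87% reported above.
```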
Fig. 11. Kalman filter tracking error ((a) P1-2, (b) P2-2, (c) P3-2, (d) P4-2).
Fig. 11 shows the pixel tracking error of the Kalman filter, which was used to improve the tracking performance of the CAM-shift algorithm. In the early stage of tracking, some error occurred because the tracking window pixels changed abruptly while the CAM-shift searched for the object after the window was initialized, but during steady object tracking the error was under 10 pixels, indicating satisfactory performance. However, in the later stage of tracking, and in Fig. 11b, a large error of more than 20 pixels was observed early on. As shown in Table 3, the tracking rate was low at P2-2; the error is thought to have grown while the filter searched for the true center of the tracking window. Also, when the distance to the object fell below 50 cm, the center of the tracking window is thought to have changed abruptly and failed to converge, since the target satellite model occupied a large portion of the image, as in Fig. 9f.
Fig. 12. Range measurement ((a) P1, (b) P2, (c) P3, (d) P4). The distance was measured down to approximately 0.9 m; closer than 0.9 m it could not be measured, because the disparity used to calculate the distance is no longer accurate.
Fig. 12 shows the distance measurement results for each sun simulator position. Because disparity values could not be calculated in some frames, there are intervals where the distance stays constant. In most tests, however, tracking started at a relative distance of 3 m and the distance was measured down to about 0.9 m. In all P2 cases (Fig. 12b) and in P4-3 (Fig. 12d) there were intervals of increasing distance, which are thought to be errors that occurred during direction changes. At frame 288 of P3-2 a distance of 1 m was calculated; this value is thought to come from an outlier in the disparity while in close proximity to the target satellite model.
In Fig. 13 , the trajectory was drawn using the position coordinate of the target satellite model (the P coordinate in Fig. 5 ) with the use of Bumblebee 2 Stereo Camera. This coordinate represents the relative coordinate of the target satellite model and the camera. If the X coordinate is 0, it indicates that the mobile platform is aligned with the target satellite model. As shown in Eq. (16), the X coordinate was calculated based on the straight-line distance Z. Therefore, the trajectory was drawn excluding the values where the distance Z was less than 0.9 m. For the coordinates with a relative distance of down to 0.9 m, the X coordinate was mostly within 0.1 m, indicating that the mobile platform and the target satellite model were almost aligned. Therefore, it is thought that a normal close-range rendezvous state for proximity operation was successfully achieved.
Fig. 13. Tracking trajectory ((a) P1, (b) P2, (c) P3, (d) P4). In all cases the X coordinate is within 0.1 m at the last frame.
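To illustrate how Fig. 13 was produced, the sketch below converts the per-frame stereo measurements into relative (X, Z) trajectory points via Eq. (16), dropping ranges below 0.9 m where the disparity is unreliable (the variable names are illustrative).

```python
import numpy as np

def trajectory(xr_pixels, ranges_m, f):
    """Relative (X, Z) track of the target from right-image pixel
    coordinates and stereo ranges, per Eq. (16)."""
    points = []
    for xr, Z in zip(xr_pixels, ranges_m):
        if Z < 0.9:
            continue                     # disparity unreliable below 0.9 m
        points.append((xr * Z / f, Z))   # X = xr * Z / f
    return np.array(points)
```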
In this study, a vision-based approach algorithm for active debris removal was developed, and the implemented algorithm was tested on a ground-based space environment simulation test bed. For continuous object tracking using only a vision sensor, without additional sensors, a stereo camera that can measure distance through a depth map was used. The algorithm used the CAM-shift, which is fast and relatively insensitive to the external environment, with a Kalman filter added to improve its accuracy. The mobile platform was a two-wheel type that performs rotational and translational motion through the rotation of its motors and can effectively simulate satellite motion. The ground simulation tests investigated whether image tracking could succeed under the lighting conditions of space, which differ from those on the ground. In particular, to examine mission performance under the changing position of the Sun as a spacecraft revolves around the Earth, the tests were carried out while changing the incidence angle of the light. Starting from a distance of 3 m to the target satellite model, the CAM-shift algorithm mostly showed a tracking rate above 70%; the results appear to have been affected more by the intensity of the light than by the position of the sun simulator. The distance could be measured down to 0.9 m.
In the future, a test bed for studying a space debris removal system will need to consider the simulation of motion in the space environment and scale equivalence with respect to the distance to the tracked object, and especially the effects of direct sunlight and the Earth's albedo. Because the axes of the stereo cameras used in this study were parallel, distance measurement was difficult in the proximity state within a certain distance of the tracked object; it is therefore thought that the included angle between the two cameras should be changed, or that other sensors useful for proximity range measurement, such as a laser range finder, should be added alongside the vision sensor.
This study was conducted as part of 'Development of a space debris collision risk management system and research on a space debris removal system', a collaborative research project under the 'NAP optical wide-field patrol' of the Korea Research Council of Fundamental Science and Technology. The authors appreciate the financial support from the Korea Research Council of Fundamental Science and Technology and the Korea Aerospace Research Institute.
Atkins EM, Lennon JA, Peasco RS, Vision-based following for cooperative astronaut-robot operations, in Proceedings of the IEEE Aerospace Conference, Big Sky, MT, 9-16 Mar 2002.
Benvenuto R, Carta R, Active debris removal system based on tethered-nets: experimental results, in Proceedings of the 9th PEGASUS-AIAA Student Conference, Milano, Italy, 4 Apr 2013.
Choi JH, Lee D, Bang H, Tracking an unknown moving target from UAV: extracting and localizing a moving target with vision sensor based on optical flow, in Proceedings of the 5th International Conference on Automation, Robotics and Applications, Wellington, New Zealand, 6-8 Dec 2011.
Fehse W (2003) Automated rendezvous and docking of spacecraft, vol. 16, Cambridge Aerospace Series, eds. Rycroft MJ, Shyy W (Cambridge University Press, Cambridge), 124-126.
Freeman WT, Tanaka K, Ohta J, Kyuma K, Computer vision for computer games, in Proceedings of the 2nd IEEE International Conference on Automatic Face and Gesture Recognition, Killington, VT, Oct 1996.
Horn BKP (1986) Robot vision, MIT Electrical Engineering and Computer Science Series (MIT Press, Cambridge), 278-422.
Hsia SC, Hsiao CH, Huang CY (2009) Single-object-based segmentation and coding technique for video surveillance system, Journal of Electronic Imaging 18, 033007.
Kim HD, Lee SC, Cho DH, Seong JD, Development of KARI space debris collision risk management system, in Proceedings of the KSAS 2013 Spring Conference, Jungsun, Korea, 10-12 Apr 2013.
Kim K, Grundmann M, Shamir A, Matthews I, Hodgins J, Motion fields to predict play evolution in dynamic sport scenes, in Proceedings of the 23rd IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, 13-18 Jun 2010.
Krag H, Space debris: public talk, in Presentations at the KARI Space Debris Short Lecture, Daejeon, Korea, 10-11 Jul 2013.
Liou JC, NASA's long-term debris environment and active debris removal modeling activities, in Proceedings of the International Conference on Orbital Debris Removal, Chantilly, VA, 8-10 Dec 2009.
Merino M, Ahedo E, Bombardelli C, Urrutxua H, Pelaez J, Space debris removal with an ion beam shepherd satellite: target-plasma interaction, in Proceedings of the 47th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, San Diego, CA, 31 Jul-3 Aug 2011.
NASA Orbital Debris Program Office (2007) A preliminary active debris removal study, Orbital Debris Quarterly News 11(4).
Nishida S, Kawamoto S, Okawa Y, Terui F, Kitamura S (2009) Space debris removal system using a small satellite, Acta Astronautica 65, 95-102.
Petit A, Despre N, Marchand E, Kanani K, Chaumette F, 3D model-based visual tracking for space autonomous rendezvous, in Proceedings of the 8th International ESA Conference on Guidance and Navigation Control Systems, Karlovy Vary, Czech Republic, 5-10 Jun 2011.
Phipps CR, Baker KL, Libby SB, Liedahl DA, Olivier SS (2012) Removing orbital debris with lasers, Advances in Space Research 49, 1283-1300.
Richard M, Kroning L, Belloni F, Rossi S, Gass V, Uncooperative rendezvous and docking for microsats, in Proceedings of the 6th International Conference on Recent Advances in Space Technologies, Istanbul, Turkey, 12-14 Jun 2013.
Romano M, Friedman DA, Shay TJ (2007) Laboratory experimentation of autonomous spacecraft approach and docking to a collaborative target, Journal of Spacecraft and Rockets 44, 164-173.
Salhi A, Jammoussi AY (2012) Object tracking system using CAMshift, meanshift and Kalman filter, World Academy of Science, Engineering and Technology 64, 674-679.
Stohlman OR, Lappas V, Deorbitsail: a deployable sail for de-orbiting, in Proceedings of the 54th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Boston, MA, 8-11 Apr 2013.
Wimmer T, Boge T, EPOS: a hardware-in-the-loop robotic simulation assembly for testing automated rendezvous and docking GNC sensor payloads, in Proceedings of the 8th International ESA Conference on Guidance and Navigation Control Systems, Karlovy Vary, Czech Republic, 5-10 Jun 2011.