Advanced
Follicular Unit Classification Method Using Angle Variation of Boundary Vector for Automatic Hair Implant System
ETRI Journal. 2016. Mar, 38(1): 195-205
Copyright © 2016, Electronics and Telecommunications Research Institute (ETRI)
  • Received : August 18, 2014
  • Accepted : July 22, 2015
  • Published : March 01, 2016
About the Authors
Hwi Gang Kim
Tae Wuk Bae
Kyu Hyung Kim
Hyung Soo Lee
Soo In Lee

Abstract
This paper presents a novel follicular unit (FU) classification method based on the angle variation of a boundary vector according to the number of hairs in several FU images. The recently developed robotic FU harvest system, ARTAS, classifies the FU type based on the number of hairs through digital imaging, by counting defects between the contour and the outline profile of the FU of interest. However, this method has a drawback in that the FU classification is inaccurate because it produces unintended defects in the outline profile of the FU. To overcome this drawback, the proposed method classifies the FU's type by the number of variation points calculated from the angle variation of a boundary vector. The experimental results show that the proposed method is robust and accurate for various FU shapes, compared to the contour-outline profile FU classification method of the ARTAS system.
I. Introduction
Hair transplantation is a surgical technique that moves individual hair follicles from one part of the body, called the donor site, to a bald or balding part of the body, known as the recipient site [1] , [2] . In this minimally invasive procedure, grafts containing hair follicles are transplanted to a recipient site. Generally, grafts contain almost all of the epidermis and dermis surrounding the hair follicle, and rather than a single strip of skin, many tiny grafts are instead transplanted [3] , [4] . Because hair grows naturally in groupings of one to five hairs (groupings of 4–5 hairs are much less common than groupings of 1–3 hairs), most advanced techniques harvest and transplant the more naturally occurring groupings of 1–3 hair follicular units (FUs) (see Fig. 1 ).
Fig. 1. FU types.
Generally, this type of hair transplant procedure is called follicular unit transplantation (FUT) [4] . The donor hair can be harvested in two different ways — strip harvesting and follicular unit extraction (FUE). In strip harvesting, a surgeon excises a strip of skin from the posterior scalp within an area of good hair growth [1] , [2] .
The excised strip is about 1 cm × 15 cm to 1.5 cm × 30 cm in size. While closing the resulting wound, assistants begin to dissect individual FU grafts, which are small, naturally formed groupings of hair follicles, from the strip. Working with binocular stereo-microscopes, they carefully remove excess fibrous and fatty tissue while trying to avoid damage to the follicular cells that will be used for grafting.
Through FUE harvesting, individual FUs containing one to three hairs are removed under local anesthesia; this micro removal typically uses tiny punches between 0.6 mm and 1.0 mm in diameter, as shown in Fig. 2(a) [5] – [8] . Punch size is selected based on FU density; the greater the density, the smaller the punch size required. If possible, the smallest punch is chosen to minimize trauma to the scalp [9] . The surgeon then uses very small micro blades or fine needles to puncture the sites for receiving the grafts, placing them in a predetermined density and pattern, and angling the wounds in a consistent fashion to promote a realistic hair pattern. Technicians generally conduct the final part of the procedure, inserting the individual grafts in place. Although FUE is considered more time consuming (depending on the operator’s skill) and places restrictions on patient candidacy, it has many advantages, including a small donor wound, less pain, and a slender graft without extra surrounding tissue. After the appropriate number of units has been removed, technicians separate the grafts into units of one to three hairs, and these grafts are implanted in much the same way as in the “strip” method [10] . An improved FUE procedure using a motor-driven handpiece (that is, the powered FUE (P-FUE) method) was introduced in [11] .
Fig. 2. Harvesting device: (a) FUE and (b) ARTAS.
The newest and most advanced method of hair transplantation is called NeoGraft, which is an automated FUE and implantation system [9] , [12] . Through the NeoGraft hair transplant medical system, the difficulties associated with a manual FUE procedure have been eliminated. An FUE hair transplant using NeoGraft can be conducted within the same amount of time as a traditional strip hair transplant, reducing the cost of a long manual FUE transplant. In addition, to reduce the time required for the hair transplant, NeoGraft uses pneumatic pressure to extract the follicles and precisely implant them. This eliminates the need for touching the follicles with forceps during harvesting, leaving them in a more robust state.
The ARTAS robotic system has recently been developed to harvest FUs automatically using imaging technology and robotics [13] – [21] . The ARTAS system automates the traditional FUE harvest method, as shown in Fig. 2(b) . Its features include high-resolution digital imaging, micron-level targeting accuracy using image-guided robotic alignment, and precision beyond that of manual techniques; its minimally invasive dissection delivers healthy, intact grafts with nearly undetectable harvest sites [17] . Through high-resolution digital imaging, it visualizes the surface of the scalp in three dimensions with micron-level precision; the stereo digital imaging maps the coordinates, angle orientation, and direction of each FU and determines the density and distribution of the FUs [11] . Its algorithms set the alignment and depth for harvesting individual FUs.
As previously mentioned, hair transplantation techniques have been researched and advanced through developments in harvesting devices [11] , [18] – [20] . In particular, the recently developed FU harvest system, the ARTAS robotic system, uses digital image processing techniques to obtain FU information (type, angle orientation, and direction) on the donor area and automatically selects and harvests the FU grafts without human intervention [14] .
Among the related image processing techniques, FU classification is very important for future automatic FU harvest and transplantation systems, because punch blades of different sizes must be used according to the FU type to minimize bleeding during automatic FU harvesting [11] , [19] , [20] . Furthermore, for future automatic FU transplantation using the FUE method, the exact number of hairs and the hair density of the recipient area must be known for precise hair transplantation [21] – [23] . For these reasons, this paper presents a novel FU classification method that uses boundary vector variations to improve on the contour-based FU classification of the ARTAS system.
II. Related Work
The ARTAS system classifies FUs using digital imaging and processing techniques for use in hair transplantation procedures [15] , [16] . The method for classifying FUs based on the number of hairs in an FU of interest is as follows: (a) acquire an image of the body surface containing the FU of interest; (b) process the image to calculate the contour of the FU and an outline profile that disregards the concavities in the contour; and (c) determine the number of defects in the outline profile to determine the number of hairs in the FU.
From a segmented image of an FU through binarization, a contour around the outer perimeter of the hairs of an FU may be calculated as shown in Fig. 3(a) . For example, for an F1 (an FU with one hair), a contour will generally be a line or surface following the outer surface of a single hair. For relatively straight hair, the contour will look like a rectangle. For an F2 (an FU with two hairs), the hairs typically form a “V” shape such that the contour looks like a block lettered “V.” The segmented image also allows the calculation of an outline profile of the FU, as shown in Fig. 3(b) . The outline profile disregards concavities in the contour of the image. For instance, for an F2, there is a concavity or “inwardly curved” portion in the contour formed by the descent in the contour from one side of the top of the “V” to the vertex of the “V” and back up to the other side of the top of the “V.” The calculated profile disregards this concavity such that the resulting outline profile looks like a triangle with one of the vertices of the triangle generally tracing the vertex of the “V” of the contour of the FU.
The outline profile is then compared to the contour to determine the number of “defects,” as shown in Fig. 3(b) , in the outline profile. A defect in the outline profile may be defined, for example, as each of the concavities in the outline profile that divert from the contour. In the F2 example, there is one defect in the outline profile represented by the concavity formed by the “V” shape. In an F3 (an FU with three hairs), the contour will generally be shaped like two Vs sharing a common vertex, and with one line forming on one side of both Vs. The outline profile of an F3 will also have a generally triangular shape (although it may be a wider triangle than that found in an F2). Thus, an F3 will have two defects.
Fig. 3. Information on FU in ARTAS: (a) contour and (b) outline profile.
Therefore, it can be seen that the number of defects has a direct relationship with the type of FU (TOF). In this case, the number of hairs for the FU equals the number of defects plus one. In one embodiment of the method for classifying FUs, the outline profile may be determined by calculating a convex hull contour pursuant to well-known image processing techniques.
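The convex-hull formulation described above can be sketched as follows. This is an illustrative reconstruction, not the patented ARTAS implementation: the toy contour, the depth threshold, and the helper names are our own assumptions.

```python
# Illustrative sketch of the defect-counting idea: the "outline profile" is
# taken as the convex hull of the contour, and each run of contour points
# that dips inside the hull deeper than a threshold counts as one defect.

def _cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def _dist_to_segment(p, a, b):
    """Euclidean distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px-ax)**2 + (py-ay)**2) ** 0.5
    t = max(0.0, min(1.0, ((px-ax)*dx + (py-ay)*dy) / (dx*dx + dy*dy)))
    cx, cy = ax + t*dx, ay + t*dy
    return ((px-cx)**2 + (py-cy)**2) ** 0.5

def count_defects(contour, depth=1.5):
    """Number of concave runs deeper than `depth` below the hull."""
    hull = convex_hull(contour)
    edges = list(zip(hull, hull[1:] + hull[:1]))
    deep = [min(_dist_to_segment(p, a, b) for a, b in edges) > depth
            for p in contour]
    # consecutive deep points form a single defect (wrap-around via deep[i-1])
    return sum(1 for i, d in enumerate(deep) if d and not deep[i - 1])

# Toy "V"-shaped contour (F2): one concavity, so hairs = defects + 1 = 2.
v_contour = [(0, 0), (4, 0), (4, 4), (3, 4), (2, 2), (1, 4), (0, 4)]
```

For the toy "V" contour, the hull is the enclosing square, the notch at (2, 2) is the single defect, and the inferred hair count is two, matching the F2 example in the text.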
The aforementioned contour-outline profile method of the ARTAS system for classifying FUs has a drawback in that it is sensitive to the shape of the FUs. Figure 4 shows that the contour-outline profile method may cause unintended defects in addition to the supposed defect in the outline profile for F2. When the body of an FU is long and irregular, it may be more difficult to classify its type based on the number of defects owing to the existence of unintended defects. To solve this drawback, we present a novel FU classification method using the angle variation of the boundary vectors in a segmented image of an FU.
Fig. 4. Drawback of contour-outline profile method: (a) input FU image, (b) contour, (c) outline profile, and (d) unintended defects.
III. Proposed FU Classification Method
The proposed FU classification method, which uses the angle variation of boundary vectors in a segmented image of an FU, is shown in Fig. 5 . The method consists of six steps: (a) image binarization and segmentation; (b) FU boundary tracing; (c) angle estimation of the boundary vector direction; (d) Gaussian smoothing for noise reduction; (e) differencing of the smoothed angles; and (f) extraction of the boundary vector variation. These steps can also be grouped into two major stages — FU boundary detection and extraction of the angle variation of the boundary vector.
Fig. 5. Proposed FU classification method.
- 1. FU Boundary Detection
A. Image Binarization and Segmentation
For an input FU image, as shown in Fig. 6(a) , only the FU region is segmented through binarization and complementation of the input image, as shown in Fig. 6(b) . Generally, FU images consist of a follicle area at the hair root from the human scalp and a hair area leading off from the hair root. In addition, we can classify an FU according to the number of hairs, such as in a one-, two-, or three-hair FU (F1, F2, and F3, respectively). In this paper, the follicle and hair are called the “head” and “tail,” respectively, as shown in Fig. 6(b) .
Fig. 6. Segmentation of FU image: (a) input FU image and (b) segmented image of head and tail of FU.
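The binarization and complementation step can be sketched as follows. The tiny grayscale patch and the fixed threshold are illustrative assumptions; the paper does not specify a thresholding rule (Otsu's method would be a common choice on real images).

```python
# Hypothetical grayscale patch (0 = dark hair, 255 = bright scalp); the real
# system works on camera images, but the operations are the same.
patch = [
    [250, 250,  30, 250],
    [250,  40,  35, 250],
    [ 60,  45, 250, 250],
    [250, 250, 250, 250],
]

THRESH = 128  # illustrative fixed threshold

# Binarize: bright scalp -> 1, dark hair -> 0 ...
binary = [[1 if v > THRESH else 0 for v in row] for row in patch]
# ... then complement, so the FU region (hair) becomes the foreground (1).
fu_mask = [[1 - v for v in row] for row in binary]
```

In `fu_mask`, the five dark pixels of the diagonal "hair" stroke are the segmented FU region on which the boundary trace of the next step operates.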
B. FU Boundary Trace
For the segmented FU image, the boundary pixels of the FU region can be detected using a boundary trace filter [24] . The filter structure for detecting the boundary pixels of an FU region is shown in Fig. 7 . Using the boundary trace filter, the boundary of an FU is detected, as shown in Fig. 8(a) . In the detected FU boundary, a boundary pixel and its neighboring pixel can be regarded as a boundary vector of the FU boundary with a given length and angle. Such a boundary vector is represented by a column vector, which identifies the row and column coordinates of the point on the FU boundary. The starting point for detecting the boundary vectors of an FU is the left- and upper-most object pixel point, with a trace being taken in a clockwise direction, as shown in Fig. 8(b) .
Fig. 7. Boundary trace filter.
Fig. 8. Example of FU boundary trace: (a) FU boundary trace and (b) its starting point.
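Reference [24] points to MATLAB's `bwtraceboundary`; a minimal Python stand-in for the clockwise trace from the left- and upper-most object pixel might look like the following. The Moore-neighbour formulation is our choice and is not necessarily the exact filter of Fig. 7.

```python
# Moore-neighbour boundary trace: start at the upper-left-most object pixel
# and walk the boundary clockwise (image rows grow downward).

def trace_boundary(mask):
    H, W = len(mask), len(mask[0])
    fg = lambda r, c: 0 <= r < H and 0 <= c < W and mask[r][c] == 1
    start = next((r, c) for r in range(H) for c in range(W) if mask[r][c])
    # Moore neighbourhood offsets in clockwise order, beginning at "west"
    cw = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1)]
    boundary, p, b = [start], start, (start[0], start[1] - 1)  # entered from west
    while True:
        i = cw.index((b[0] - p[0], b[1] - p[1]))   # direction we came from
        for k in range(1, 9):                      # scan clockwise after b
            d = (i + k) % 8
            q = (p[0] + cw[d][0], p[1] + cw[d][1])
            if fg(*q):
                # the neighbour checked just before q is background: new backtrack
                b = (p[0] + cw[(d - 1) % 8][0], p[1] + cw[(d - 1) % 8][1])
                p = q
                break
        else:
            break          # isolated pixel: no foreground neighbour
        if p == start:
            break          # closed the loop
        boundary.append(p)
    return boundary

# 2x2 foreground block inside a 4x4 image
block = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
```

For the 2x2 block, the trace visits the four pixels in clockwise order starting from the upper-left one, matching the starting point and direction described in the text.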
- 2. Angle Variation Estimation of Boundary Vector
A. Angle Estimation of Boundary Vector Direction
The proposed FU classification method is based on the angle variations of the boundary vectors for counting the number of hairs for an FU. For this procedure, the angle θ ( n ) of the boundary vector for a current boundary pixel ( xn , yn ) is examined using a previous boundary pixel ( x n–1 , y n–1 ) as follows:
(1) \theta(n) = \tan^{-1}\!\left(\dfrac{y_n - y_{n-1}}{x_n - x_{n-1}}\right),
where n denotes the index of the boundary pixel ( n ≥ 2); that is, n represents the order of the boundary vectors along the boundary pixels of an FU. The angles of the boundary vectors lie in the range [−π, π], as shown in Fig. 7 . As shown in (1), the angle ( θ ) of a boundary vector is defined by the slope of the vector formed by two neighboring boundary pixels on an FU.
Figure 9(a) shows the angles of the boundary vectors of the two-hair FU shown in Fig. 6 . The red numbers in Fig. 9(a) mark rapidly changing sections and represent significant variation points, which are indicated in Fig. 9(b) with the same numbering. These variation points correspond to the head (that is, no. 3 in Fig. 9(b) ) and tail areas (that is, nos. 2 and 4), as shown in Fig. 6(b) . Focusing on these significant angle variations of the boundary vector of an FU, the method described in this paper determines the TOF from the number of hairs by extracting the angle variation points (head and tail) of the FU from the boundary vectors.
Fig. 9. Angles of boundary vector: (a) angle graph and (b) numbering of significant variation points (head and tail) on FU image.
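Equation (1) can be computed directly from a traced boundary. In the sketch below, `atan2` is used to realize the full signed angle range described in the text; the sample boundary is illustrative.

```python
import math

def boundary_angles(boundary):
    """theta(n) of each boundary vector, per (1), one angle per
    consecutive pair of boundary pixels, in the range (-pi, pi]."""
    return [math.atan2(y1 - y0, x1 - x0)
            for (x0, y0), (x1, y1) in zip(boundary, boundary[1:])]

# Example: three boundary vectors of a small square path
angles = boundary_angles([(0, 0), (1, 0), (1, 1), (0, 1)])
```

For this path, the three vectors point along +x, +y, and −x, giving angles 0, π/2, and π, which is the kind of stepwise angle sequence plotted in Fig. 9(a).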
B. Gaussian Smoothing
As seen in Fig. 9(a) , the obtained angles of the boundary vectors contain camera noise caused by dense pixel variations of the FU boundary points, and this noise makes it difficult to extract the angle variation points accurately. Therefore, we generate a smooth signal through convolution with a Gaussian window, smoothing the angle variations of the boundary vectors as follows:
(2) \theta_{\mathrm{sm}}(n) = G(x) * \theta(n),
(3) G(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{x^{2}}{2\sigma^{2}}},
where the subscript “sm” denotes the smoothed signal, and the Gaussian window in (3) is normalized over a 20-point window. Gaussian smoothing is applied to the rough angle variation curve so that the angle variation at the head or tail of an FU can be estimated accurately. Figure 10 shows the smoothed angles corresponding to the original boundary vector angles in Fig. 9(a) ; the resulting curve is visibly smoother.
Fig. 10. Gaussian smoothing; numbering as in Fig. 9.
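Steps (2) and (3) amount to convolving the angle sequence with a normalized 20-point Gaussian window. The sketch below uses clamped edges; the sigma value is our assumption, since the paper reports only the window size.

```python
import math

def gaussian_window(size=20, sigma=3.0):
    """Discrete Gaussian window per (3), normalized to unit sum.
    sigma is an assumed value; the paper does not report it."""
    half = (size - 1) / 2.0
    w = [math.exp(-((i - half) ** 2) / (2.0 * sigma ** 2)) for i in range(size)]
    s = sum(w)
    return [v / s for v in w]

def smooth(theta, window):
    """theta_sm(n) = (G * theta)(n); same length, edges clamped."""
    n, half = len(theta), len(window) // 2
    return [sum(wk * theta[min(max(i + k - half, 0), n - 1)]
                for k, wk in enumerate(window))
            for i in range(n)]
```

Because the window sums to one, a constant angle sequence passes through unchanged, and sharp single-sample noise spikes are spread out and attenuated, which is exactly what makes the variation points easier to locate in the next step.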
C. Difference in Smoothed Angle
To extract significant variation points, we calculate the difference in the smoothed angles, Δ θ dsm , as follows:
(4) \Delta\theta_{\mathrm{dsm}}(n) = \theta_{\mathrm{sm}}(n) - \theta_{\mathrm{sm}}(n-1),
where n = 2, … , N, with N the number of boundary pixels. The difference in the smoothed angles is shown in Fig. 11 . By subtracting neighboring smoothed angles, the significant variation points (head and tail) appear as maximum (or minimum) sections. Based on this information regarding the important variation points, we can determine the TOF from the number of hairs by counting the maximum (or minimum) sections.
Fig. 11. Difference in smoothed angles; numbering as in Fig. 9.
D. Extraction of Boundary Vector Variation
This step takes the absolute value of the difference in the smoothed angles, |Δ θ dsm |, so that the angle variations at the head and tail of the FU can be extracted easily. From the absolute-value graph, the method extracts and counts the number of peaks above a threshold. The threshold is set to half of the maximum value of the graph. Finally, our method classifies the TOF, that is, the number of hairs in an FU image (one, two, or three), as half of the number of peaks, N peak , counted from the absolute-value graph, as follows:
(5) \mathrm{TOF} = \dfrac{N_{\mathrm{peak}}}{2}.
For example, in the case of the two-hair FU in Fig. 9 , the method counts four peaks and classifies it as a two-hair FU (N peak /2 = 4/2 = 2), as shown in Fig. 12 .
Fig. 12. Absolute values of angle differences.
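Steps (4) and (5), namely differencing, taking absolute values, thresholding at half the maximum, and counting runs of above-threshold samples as peaks, can be sketched end to end. The synthetic angle sequence below is illustrative, not real FU data.

```python
def classify_fu(theta_sm):
    """Steps (4)-(5): difference of smoothed angles, half-maximum
    threshold, peak counting, and TOF = N_peak / 2."""
    diff = [abs(b - a) for a, b in zip(theta_sm, theta_sm[1:])]  # |delta theta_dsm|
    thresh = max(diff) / 2.0                    # half of the maximum value
    above = [d > thresh for d in diff]
    # each run of consecutive above-threshold samples is one peak
    n_peak = sum(1 for i, a in enumerate(above)
                 if a and (i == 0 or not above[i - 1]))
    return n_peak // 2                          # TOF: number of hairs

# Synthetic smoothed-angle sequence with four sharp transitions,
# mimicking the 4-peak pattern of a two-hair FU (Fig. 12).
two_hair = [0.0] * 10 + [1.5] * 10 + [0.0] * 10 + [1.5] * 10 + [0.0] * 10
```

With four transitions the function counts four peaks and returns a TOF of two, matching the F2 example; a sequence with only two transitions would likewise yield a TOF of one.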
IV. Experimental Results
Computer simulations were conducted using the proposed FU classification method to confirm its performance on an FU image set. The tested FU image set consists of six, seventeen, and four images of one, two, and three hairs, respectively. Generally, two-hair FUs occur with very high frequency regardless of the patient. The FU image set was generated under various conditions; that is, different camera resolutions, image sizes, and hair slopes were used.
First, the ARTAS FU classification algorithm (that is, the contour-outline profile method) was tested using the test image set. Table 1 shows the classification results of the ARTAS system based on the contour-outline profile. The FU images are used to determine the hair type based on the number of defects. As mentioned in Section II, the hair type of an FU image equals the number of defects plus one. The defects are identified by their triangular shape and area in the difference image between the contour image and the outline profile image of the FU. The second column in Table 1 shows the input FU images for each hair type, and the third column lists the resulting images along with their defects, in the same order as in the second column. Detailed captions are attached to the resulting images, with defects and unintended defects marked by red and yellow dotted lines, respectively. For the one-hair type, there should be no defects; however, there are two unintended defects in the wide white area of the second image, so this one-hair FU is classified as a three-hair type despite showing only one hair. Moreover, the two-hair case yields six false classifications out of 17 images, owing to unintended defects with a triangular shape similar in size to a true defect. In particular, the third image of the three-hair type has only one defect and is classified as a two-hair type; this misclassification occurs because the two defects merge into one. Unintended defects can also be seen in the resulting images of the three-hair FU type.
Table 1. Experimental results of the ARTAS (contour-outline profile method) classification system (hair type = number of defects + 1).
1 hair: 5 succeeded, 1 failed
2 hairs: 11 succeeded, 6 failed
3 hairs: 2 succeeded, 2 failed
Second, the proposed classification method was tested using the same test image set, as shown in Table 2 . Table 2 shows the classification results, the FU boundary images, and graphs of the peak detection results obtained using the proposed method. The FU images are classified by counting the number of peak points in the resulting graphs, and the hair type of each FU image is determined as one-half of the counted number of peaks, giving one to three hairs. In Table 2 , the peak detection graphs show 2, 4, and 6 peaks when the FU images contain one, two, and three hairs, respectively. We can see that the proposed method classifies the FU images successfully according to the number of hairs, except for one failure in the two-hair type.
Table 2. Experimental results of the proposed FU classification method (hair type = number of peaks / 2).
1 hair: 2 peaks (= 1 hair), 6 succeeded
2 hairs: 4 peaks (= 2 hairs), 16 succeeded, 1 failed
3 hairs: 6 peaks (= 3 hairs), 4 succeeded
As a result, we can see that the proposed method is superior in classification performance to the contour-outline profile method. Furthermore, the new FU classification method for an FU harvest system solves the problem inherent to the ARTAS algorithm by using the angle variation of the boundary vector. However, this classification method is only applicable to “V” or “two-V” shapes and cannot be applied to other FU shapes, such as an “II” or “X.”
V. Conclusion
This paper presented a novel FU classification method based on the angle variation of a boundary vector for classifying an FU by the number of hairs. ARTAS, a recently developed FU harvest system, classifies the TOF based on the number of hairs by counting defects in the difference between the contour and outline profile of the FU of interest. However, this method has a drawback in that the FU classification is inaccurate, because it produces unintended defects in the outline profile of the FU. To overcome this drawback, the proposed method classifies the TOF by counting the angle variation peak points of the boundary vector. The experimental results show that the proposed method is robust and accurate for a limited set of FU shapes compared to the contour-outline profile FU classification method of ARTAS. Because determining the number of peaks does not yet provide the most accurate classification for every shape, we are developing a modified algorithm to classify shapes such as an “II” or “X.”
This work was supported by ETRI R&D Program (16ZC3110, Local-based medical device/robot development & medical IT convergence for small and medium enterprise revitalization project) funded by the Government of Korea.
BIO
hwigangkim@etri.re.kr
Hwi Gang Kim received his BS and MS degrees in electrical engineering from Kyungpook National University, Daegu, Rep. of Korea, in 2008 and 2010, respectively. He is currently working at ETRI, Daegu, Rep. of Korea. His research interests include image processing, computer vision, and medical signal processing.
Corresponding Author twbae@etri.re.kr
Tae Wuk Bae received his BS, MS, and PhD degrees in electrical engineering from Kyungpook National University, Daegu, Rep. of Korea, in 2004, 2006, and 2010, respectively. He is currently working at ETRI, Daegu, Rep. of Korea. His research interests include medical devices and medical signal processing.
jaykim@etri.re.kr
Kyu Hyung Kim received his BS and MS degrees in electrical engineering, and his PhD degree in computer science and engineering from Kyungpook National University, Daegu, Rep. of Korea, in 1998, 2000, and 2014, respectively. He is currently working at ETRI, Daegu, Rep. of Korea. His research interests include IoT technology and medical devices.
hsulee@etri.re.kr
Hyung Soo Lee received his BS degree in electrical engineering from Kyungpook National University, Daegu, Rep. of Korea, in 1980 and his PhD degree in IT engineering from Sungkyunkwan University, Suwon, Rep. of Korea, in 1996. Since 1983, he has been a principal researcher at ETRI, Daegu, Rep. of Korea. His research interests are spectrum engineering, WPAN system design, and biomedical IT convergence devices.
silee@etri.re.kr
Soo In Lee received his MS and PhD degrees in electronics engineering from Kyungpook National University, Daegu, Rep. of Korea, in 1989 and 1996, respectively. Since 1990, he has been with ETRI, where he is currently a principal researcher with the Broadcasting System Research Department. His main research interests include digital broadcasting systems and IT convergence technology.
References
Okuda S. 1939 “The Study of Clinical Experiments of Hair Transplantation,” Japanese J. Dermatology 46 (135)
Unger W.P. 1994 “Delineating the Safe Donor Area for Hair Transplanting,” American J. Cosmetic Surgery 11 239 - 243
Orentreich N. 1959 “Autografts in Alopecias and Other Selected Dermatological Conditions,” Annals New York Academy Sci. 83 463 - 479
Bernstein R.M. , Rassman W.R. 2005 “Follicular Unit Transplantation,” Issue Adv. Cosmetic Surgery, Dermatologic Clinics 23 (3) 393 - 414
Rassman W.R. 2002 “Follicular Unit Extraction: Minimally Invasive Surgery for Hair Transplantation,” Dermatologic Surgery 28 (8) 720 - 728
“FUE – A Few Words about Transplant Methods,” Sure Hair International Blog http://surehairtransplants.com/fue-words-transplant-methods
“Hair Restoration for Men and Women,” Bauman Medical Group, P.A. http://baumanmedical.com
“Follicular Unit Extraction,” Hair Transplant Network http://www.hairtransplantnetwork.com/Hair-Loss-Treatments/follicular-unit-extraction.asp
Bicknell L.M. 2014 “Follicular Unit Extraction Hair Transplant Harvest: A Review of Current Recommendations and Future Considerations,” Dermatology Online J. 20 (3)
Rashid R.M. , Morgan B.L.T. 2012 “Follicular Unit Extraction Hair Transplant Automation: Options in Overcoming Challenges of the Latest Technology in Hair Restoration with the Goal of Avoiding the Line Scar,” Dermatology Online J. 18 (9)
Onda M. 2008 “Novel Technique of Follicular Unit Extraction Hair Transplantation with a Powered Punching Device,” Dermatology Surgery 34 (12) 1683 - 1688
Procedure Overview, NeoGraft http://neograftdocs.com/ procedure-overview/neograft-automated-fue-hair-transplant-System/
ARTAS Robotics System, RESTORATION ROBOTICS http://restorationrobotics.com/minimally-invasive-dissection/
Rashid R.M. 2014 “Follicular Unit Extraction with Artas Robotic Hair Transplant System: An Evaluation of FUE Yield,” Dermatology Online J. 20 (4)
Qureshi S.A. , Bodduluri M. 2009 System and Method for Classifying Follicular Units US Patent 7,477,782 B2 filed Aug. 25, 2006
Qureshi S.A. , Bodduluri M. 2009 System and Method for Classifying Follicular Units US Patent 7,627,157 B2 filed Aug. 24, 2007
Bodduluri M. , Gildenberg P.L. , Caddes D.E. 2011 System and Methods for Aligning a Tool with a Desired Location or Object US Patent 7,962,192 B2 filed Apr. 28, 2006
Qureshi S.A. , Canales M.G. 2011 System and Method for Harvesting and Implanting Hair Using Image-Generated Topological Skin Models US Patent 8,048,090 B2 filed Mar. 11, 2009
Qureshi S.A. , Canales M.G. 2013 System and Method for Harvesting and Implanting Hair Using Image-Generated Topological Skin Models US Patent 8,361,085 B2 filed Sept. 23, 2011
Bodduluri M. 2013 System and Method for Harvesting and Implanting Hair Using Image-Generated Topological Skin Models US Patent 8,454,627 B2 filed Jan. 6, 2012
Qureshi S.A. , Bodduluri M. 2012 System and Method for Counting Follicular Units US Patent 8,199,983 B2 filed Aug. 24, 2007
Qureshi S.A. , Bodduluri M. 2012 System and Method for Counting Follicular Units US Patent 8,290,229 B2 filed May 16, 2012
Shin J.W. 2015 “Characteristics of Robotically Harvested Hair Follicles in Koreans,” J. American Academy Dermatology 72 (1) 146 - 150    DOI : 10.1016/j.jaad.2014.07.058
MathWorks, “bwtraceboundary,” Image Processing Toolbox Documentation http://kr.mathworks.com/help/images/ref/bwtraceboundary.html