Advanced
An Evolutionary Optimized Algorithm Approach to Compensate the Non-linearity in Linear Variable Displacement Transducer Characteristics
Journal of Electrical Engineering and Technology. 2014. Nov, 9(6): 2142-2153
Copyright © 2014, The Korean Institute of Electrical Engineers
  • Received : June 16, 2014
  • Accepted : August 13, 2014
  • Published : November 01, 2014
About the Authors
S. Murugan
Corresponding Author: Dept. of Electrical and Electronics Engineering, Einstein College of Engineering, Tirunelveli, Tamil Nadu, India.(srivimurugan@gmail.com)
SP. Umayal
Dept. of Electrical and Electronics Engineering, ULTRA College of Engineering and Technology for Women, Madurai, Tamil Nadu, India (umayalbabu@gmail.com)

Abstract
Linearization of transducer characteristics plays a vital role in electronic instrumentation because all transducers produce outputs that are nonlinearly related to the physical variables they sense. A nonlinear transducer output gives rise to a whole assortment of problems. Transducers rarely possess a perfectly linear transfer characteristic and always exhibit some degree of nonlinearity over their range of operation. Many researchers have attempted to extend the linear range of transducers. This paper presents methods to compensate the nonlinearity of the Linear Variable Displacement Transducer (LVDT) based on the Extreme Learning Machine (ELM) method, the Differential Evolution (DE) algorithm, and an Artificial Neural Network (ANN) trained by a Genetic Algorithm (GA). Because of its mechanical structure, an LVDT often exhibits inherently nonlinear input-output characteristics, and the strong approximation capability of optimized ANN techniques is well suited to compensating them. The proposed methods are demonstrated through computer simulation with experimental data from two different LVDTs. The results reveal that the proposed methods compensate the nonlinearity of the displacement transducer with very low training time, low Mean Square Error (MSE) values, and better linearity. The approach involves little computational complexity, performs well for LVDT nonlinearity compensation, and has good application prospects.
1. Introduction
The Linear Variable Displacement Transducer (LVDT) was patented by G. B. Hoadley in 1940. It is built from two sets of coils: one primary and one secondary, the latter consisting of two coils connected differentially to provide the output. The coupling between the primary and secondary coils varies as the core plunger moves axially, and the differential output voltage varies linearly with this movement, so the displacement of the plunger is obtained by measuring the differential voltage. The LVDT is therefore widely used in measurement and control systems involving displacement. Because of its mechanical structure and other factors, an LVDT often exhibits inherently nonlinear input-output characteristics. Complicated and accurate winding machines are used to address this, yet it is difficult to make every LVDT equally linear. Nonlinearity also arises from changes in environmental conditions such as temperature and humidity. Such nonlinearities make direct digital readout impossible and restrict the usable range of the device. If a transducer is used over the full range of its nonlinear characteristic, the accuracy and sensitivity of the measurement are severely affected. The nonlinearity present is usually time-varying and unpredictable, as it depends on many uncertain factors.
A literature survey suggests that in [1] the authors proposed a Functional Link Artificial Neural Network (FLANN) with a practical setup for the development of a linear LVDT. In the conventional design, sophisticated and precise winding machines are used to achieve nonlinearity compensation [2 - 4]. Some digital signal processing techniques have been suggested to achieve better sensitivity and to implement the signal conditioning circuits [5, 6, 13]. It is reported in [7 - 9] that an artificial neural network (ANN)-based inverse model can effectively compensate the nonlinearity of sensors. LVDTs show nonlinear behavior when the core moves toward either of the secondary coils; in the primary coil (middle) region of the characteristic, the response to core movement is almost linear, so the range of operation is limited to the primary coil region. The nonlinearity estimation and its compensation for a capacitive pressure sensor and an LVDT using different ANNs are proposed in [7 - 9]. In [14], compensation of Capacitive Pressure Sensor (CPS) nonlinearities is done using neuro-fuzzy algorithms. In [15], calibration of CPS using circuits is discussed. In [16], calibration of CPS is done using least-squares support vector regression, and a second CPS is used for temperature compensation. In [17], extension of linearity is achieved using a Hermite neural network algorithm, and in [18] a Chebyshev neural network algorithm is used for the same purpose. In [19], the nonlinearity of CPS is compensated using a hybrid Genetic Algorithm-Radial Basis Function neural network (HGA-RBF). In [20], calibration of CPS is done using DSP algorithms. In [21], a Functional Link ANN (FLANN) algorithm is used for calibration of CPS. In [22], a Laguerre neural network is used for calibration of CPS. In [23], calibration of CPS is achieved using an ANN; adaptation to the physical properties of the diaphragm and to temperature is also discussed.
In [24] , relation between diaphragm properties and CPS output is discussed. In [25] , effect of dielectric properties on CPS output is discussed. In [26] , effect of temperature on CPS output is discussed. An intelligent pressure measurement technique is proposed as an improvement to the earlier reported works [23] . The technique is designed to obtain full scale linearity of input range and makes the output adaptive to variations in physical properties of diaphragm, dielectric constant, and temperature, all using the optimized ANN model.
This paper is organized as follows. After the introduction in Section 1, a brief description of the LVDT is given in Section 2, together with the specifications and experimental observations of two different LVDTs. Section 3 deals with the mathematical analysis of ELM, DE and GA; the computer simulation study of the proposed models using the experimental data of the two LVDTs is also carried out in this section. Results and discussion, with output performance curves before and after nonlinearity compensation using the specified algorithms, are given in Section 4. Finally, conclusions and future scope are discussed in Section 5.
2. Linear Variable Displacement Transducer (LVDT)
The LVDT consists of a primary coil and two secondary coils. The two secondary coils are connected differentially for providing the output. The secondary coils are located on the two sides of the primary coil on the bobbin or sleeve, and these two output windings (secondary coils) are connected in opposition to produce zero output at the middle position of the armature.
The lengths of the primary coil and of the two identical halves of the secondary coils are b and m, respectively. The coils have an inside radius ri and an outside radius ro. The spacing between the coils is d. Inside the coils, a ferromagnetic armature of length La and radius ri (neglecting the bobbin thickness) moves in an axial direction. The number of turns in the primary coil is np, and ns is the number of turns in each secondary coil. The cross-sectional view of the LVDT is shown in Fig. 1. With a primary sinusoidal excitation voltage Vp and a current Ip (RMS) of frequency f, the RMS voltage v1 induced in the secondary coil S1 is
[equation image not reproduced]
Fig. 1. Cross-sectional view of LVDT [image not reproduced]
and that in coil S2 is
[equation image not reproduced]
where
  • x1 − distance penetrated by the armature toward the secondary coil S1;
  • x2 − distance penetrated by the armature toward the secondary coil S2.
The differential voltage v = v1 − v2 is thus given by
[equation image not reproduced]
where
[equation image not reproduced]
is the armature displacement and
[equation images not reproduced]
k2 is a nonlinearity factor in (3), with the nonlinearity term ∈ being
[equation image not reproduced]
The nonlinearity factor and nonlinearity term of (6) and (7) are calculated from the core movement of the LVDT; both depend on the geometric parameters of the corresponding LVDT. For a given accuracy and maximum displacement, the overall length of the transducer is minimum for x1 = b, assuming that at maximum displacement the armature does not emerge from the secondary coils. Taking the armature length La = 3b + 2d, neglecting 2d compared with b, and using (4), (3) can be simplified as
[equation image not reproduced]
For a given primary sinusoidal excitation, the secondary output voltage v is nonlinear with respect to displacement x. This is shown in Fig. 2, in which the linear region of the plot is indicated as xm.
Fig. 2. Range of linear region of LVDT [image not reproduced]
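Since the exact coefficients of the characteristic in (3) depend on coil geometry, its shape can be illustrated with a hypothetical odd-symmetric model of the same general form, a linear term scaled by a cubic nonlinearity term. The constants k1 and eps below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def lvdt_output(x, k1=0.1, eps=0.02):
    # Hypothetical odd-symmetric characteristic: linear term scaled
    # by a cubic nonlinearity term (illustrative constants only).
    return k1 * x * (1.0 - eps * x**2)

x = np.linspace(-5.0, 5.0, 11)      # core displacement (illustrative units)
v = lvdt_output(x)
# deviation from the ideal straight line v = k1*x grows cubically with x
deviation = np.abs(v - 0.1 * x)
print(deviation.max())
```

The growing deviation at the ends of the stroke is exactly the behavior sketched in Fig. 2: the response is nearly linear near the null position and departs from the straight line as the core approaches either secondary coil.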
This limitation is inherent in all differential systems, and methods of nonlinearity compensation are proposed mainly by appropriate design and arrangement of the coils. Some of these are given as follows.
  • 1) Balanced linear tapered secondary coils: improvement in linearity range is not significant
  • 2) Overwound linear tapered secondary coils: linearity is improved to a certain range
  • 3) Balanced overwound linear tapered secondary coils: the range specification is similar to 2)
  • 4) Balanced profile secondary coils: helps in extending linearity range by proper profiling of the secondary coils
  • 5) Complementary tapered windings method: extends the linearity range as well, but the winding is quite complicated as sectionalized winding is done [1]
- 2.1 Linearity
One of the best characteristics of a transducer is considered to be linearity, that is, the output is linearly proportional to the input. The computation of linearity is done with reference to a straight line showing the relationship between output and input. This straight line is drawn by using the method of least squares from the given calibration data. This straight line is sometimes called an idealized straight line expressing the input-output relationship. The linearity is simply a measure of maximum deviation of any of the calibration points from this straight line.
Fig. 3 shows the actual calibration curve i.e., a relationship between input and output and a straight line drawn from the origin using the method of least squares.
Fig. 3. Actual calibration curve [image not reproduced]
[Eq. (9): image not reproduced]
Eq. (9) expresses the nonlinearity as a percentage of full scale reading. It is desirable to keep the nonlinearity as small as possible as it would in that case result in small errors.
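The computation described above can be sketched in code: fit the least-squares straight line to the calibration points, take the maximum deviation from it, and express it as a percentage of full-scale output. The calibration data below are hypothetical, not the paper's experimental readings:

```python
import numpy as np

def percent_nonlinearity(x, v):
    # Least-squares straight line through the calibration points,
    # then max deviation as a percentage of full-scale output.
    slope, intercept = np.polyfit(x, v, 1)
    deviation = np.abs(v - (slope * x + intercept))
    full_scale = np.abs(v).max()
    return 100.0 * deviation.max() / full_scale

# hypothetical calibration data with a mild cubic distortion
x = np.linspace(-5, 5, 11)
v = 0.1 * x - 0.002 * x**3
print(round(percent_nonlinearity(x, v), 2))
```

With a perfectly linear characteristic the function returns 0; the cubic distortion above yields a nonlinearity of roughly a quarter of full scale, of the same order as the figures reported for the two LVDTs below.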
- 2.2 Geometric parameters and experimental observations of LVDT
The performance of the LVDT is highly influenced by transducer geometry, the arrangement of the primary and secondary windings, the quality of the core material, variations in excitation current and frequency, and changes in ambient and winding temperatures. The geometric parameters and specifications of a conventional LVDT are listed in Table 1 below.
In this research work, the performance of two different LVDTs is examined. The experimental data are collected from two different LVDTs having the specifications listed in Table 1. The variable parameters of the conventional LVDT are chosen at the lowest range for LVDT-1 and at the highest range for LVDT-2. The data obtained by conducting experiments on the two LVDTs are given in Table 2 and Table 3, and the output response curves are shown in Figs. 4 and 5. It is clear that the output response of the two LVDTs shows the presence of nonlinearity.
Table 1. Geometric parameters and specifications of LVDT [table image not reproduced]
Table 2. Experimental observations of LVDT-1 [table image not reproduced]
Table 3. Experimental observations of LVDT-2 [table image not reproduced]
Fig. 4. Input-output response of LVDT-1 [image not reproduced]; % nonlinearity for LVDT-1 = 27.70%
Fig. 5. Input-output response of LVDT-2 [image not reproduced]; % nonlinearity for LVDT-2 = 21.66%
The percentage of nonlinearity is calculated using Eq. (9). The LVDT chosen at the lowest range (LVDT-1) has a higher percentage of nonlinearity than the highest-range LVDT (LVDT-2). It is therefore necessary to compensate the nonlinearity present in both LVDTs.
It has been observed from the graphs above (Figs. 4 and 5) that the relation between input displacement and output voltage of the LVDTs is nonlinear. The following algorithms are used in this work to compensate the nonlinearity of the two LVDTs.
  • AL-1: Extreme Learning Machine Method (ELM)
  • AL-2: ANN trained by Differential Evolution algorithm (ANN-DE)
  • AL-3: ANN trained by Genetic Algorithm (GA-ANN)
3. Nonlinearity Compensation using Soft Computing Techniques
- 3.1 Extreme learning machine based nonlinearity compensation
Extreme Learning Machine (ELM) is a simple, tuning-free, three-step algorithm whose learning speed is extremely fast. The hidden node parameters are independent not only of the training data but also of each other. Unlike conventional learning methods, which must see the training data before generating the hidden node parameters, ELM can generate the hidden node parameters before seeing the training data. Unlike traditional gradient-based learning algorithms, which only work for differentiable activation functions, ELM works for all bounded, nonconstant, piecewise-continuous activation functions. Unlike traditional gradient-based learning algorithms, which face issues such as local minima, improper learning rates and overfitting, ELM reaches its solutions directly without such complications. The ELM learning algorithm is much simpler than many learning algorithms based on neural networks or support vector machines. It is efficient for batch-mode, sequential and incremental learning; it provides a unified learning model for regression and for binary and multi-class classification; and it works with different hidden nodes, including random hidden nodes (random features) and kernels.
- 3.2 Single hidden layer feed-forward neural network
Recently, Huang et al. [31, 32] proposed a new learning algorithm for the Single Layer Feed-forward Neural network architecture, called Extreme Learning Machine (ELM), which overcomes the problems caused by gradient-descent-based algorithms such as backpropagation applied in ANNs and significantly reduces the time needed to train a neural network. It randomly chooses the input weights and analytically determines the output weights of the SLFN. It has much better generalization performance with much faster learning speed, requires less human intervention, and can run thousands of times faster than conventional methods. It determines all the network parameters analytically, which avoids trivial human intervention and makes it efficient for online and real-time applications. ELM thus offers several advantages: ease of use, faster learning speed, higher generalization performance, and suitability for many nonlinear activation and kernel functions.
A Single Hidden Layer Feed-forward Neural Network (SLFN) with L hidden nodes [34, 35], incorporating both additive and RBF hidden nodes in a unified way, can be represented mathematically as follows.
fL(x) = Σi=1…L βi G(ai, bi, x)
where ai and bi are the learning parameters of the hidden nodes and βi is the weight connecting the i-th hidden node to the output node. G(ai, bi, x) is the output of the i-th hidden node with respect to the input x. For an additive hidden node with activation function g(x): R → R (e.g., sigmoid or threshold), G(ai, bi, x) is given by
G(ai, bi, x) = g(ai · x + bi)
where ai is the weight vector connecting the input layer to the i-th hidden node and bi is the bias of the i-th hidden node; ai · x denotes the inner product of the vectors ai and x in Rn.
For N arbitrary distinct samples (xi, ti) ∈ Rn × Rm, where xi is an n×1 input vector and ti is an m×1 target vector, if an SLFN with L hidden nodes can approximate these N samples with zero error, then there exist βi, ai and bi such that
Σi=1…L βi G(ai, bi, xj) = tj,   j = 1, …, N.
The above equation can be written compactly as
Hβ = T
where H is the N × L matrix whose (j, i) entry is G(ai, bi, xj), i.e.,
H = [ G(a1, b1, x1) ⋯ G(aL, bL, x1) ; ⋮ ; G(a1, b1, xN) ⋯ G(aL, bL, xN) ],
with
β = [β1T, …, βLT]T (L × m)  and  T = [t1T, …, tNT]T (N × m).
H is the hidden layer output matrix of the SLFN, with the i-th column of H being the i-th hidden node's output with respect to the inputs x1, x2, …, xN.
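The ELM training step described above (fix the hidden parameters ai, bi at random, then solve Hβ = T for the output weights with the Moore-Penrose pseudoinverse) can be sketched as follows. The sample data, seed, and network size are illustrative assumptions, not the paper's experimental configuration:

```python
import numpy as np

def elm_train(X, T, L=20, seed=0):
    # Random hidden parameters a_i, b_i are fixed without seeing the data;
    # only the output weights beta are solved, analytically, from H beta = T.
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((X.shape[1], L))   # input weights (random, fixed)
    b = rng.standard_normal(L)                 # hidden biases (random, fixed)
    H = np.sin(X @ a + b)                      # sine activation, as in the paper
    beta = np.linalg.pinv(H) @ T               # Moore-Penrose solution of H beta = T
    return a, b, beta

def elm_predict(X, a, b, beta):
    return np.sin(X @ a + b) @ beta

# hypothetical inverse-model task: voltage in, true displacement out
x = np.linspace(-1, 1, 50).reshape(-1, 1)      # displacement (target)
t = x + 0.3 * x**3                             # nonlinear output voltage (illustrative)
a, b, beta = elm_train(t, x)                   # learn the inverse characteristic
rmse = np.sqrt(np.mean((elm_predict(t, a, b, beta) - x) ** 2))
print(rmse)
```

Because training reduces to one linear least-squares solve, the cost is a single pseudoinverse; this is why the CPU times reported in Tables 4 and 5 are near zero.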
From the observed readings of LVDT-1 and LVDT-2 shown in Tables 2 and 3 , the simulation study has been carried out and the following results have been obtained.
The results obtained by ELM-based nonlinearity compensation of the two LVDTs are listed in Table 4 and Table 5. Two different activation functions, sine and sigmoid, are used here, and the training time, testing time and Root Mean Square Error (RMSE) values are tabulated. There are 20 hidden nodes assigned for the ELM algorithm; 50 trials have been conducted and the average results are shown in Tables 4 and 5. It can be seen from Table 4 that the ELM learning algorithm spent 0 seconds of CPU time obtaining a testing RMSE of 0.0087 with the sine activation function, and 0.0156 seconds of CPU time obtaining an RMSE of 0.0088 with the sigmoid activation function. Similarly, from Table 5, the ELM algorithm spent 0 seconds of CPU time obtaining an RMSE of 0.0265 with the sine activation function, and 0.0156 seconds of CPU time obtaining an RMSE of 0.5513 with the sigmoid function. The new ELM runs 170 times faster than the conventional BP algorithms.
Table 4. ELM based nonlinearity compensation of LVDT-1 [table image not reproduced]
Table 5. ELM based nonlinearity compensation of LVDT-2 [table image not reproduced]
- 3.3 Differential evolution algorithm based nonlinearity compensation
The Differential Evolution (DE) algorithm is a stochastic, population-based optimization algorithm introduced by Storn and Price in 1996, developed to optimize real-parameter, real-valued functions. It is a population-based algorithm, like genetic algorithms, using similar operators: crossover, mutation and selection. The main difference in constructing better solutions is that genetic algorithms rely on crossover while DE relies on the mutation operation, which is based on the differences of randomly sampled pairs of solutions in the population. The algorithm uses mutation as a search mechanism and selection to direct the search toward the prospective regions in the search space. DE also uses a non-uniform crossover that can take child vector parameters from one parent more often than from the others. By using components of the existing population members to construct trial vectors, the recombination (crossover) operator efficiently shuffles information about successful combinations, enabling the search for a better solution space. An optimization task consisting of D parameters can be represented by a D-dimensional vector. In DE, a population of NP solution vectors is randomly created at the start and successively improved by applying the mutation, crossover and selection operators. The main steps of the DE algorithm are given as follows:
Fig. 6. General evolutionary algorithm procedure [image not reproduced]
The general problem formulation is: for an objective function
f : X ⊆ RD → R,
where the feasible region X ≠ ∅, the minimization problem is to find x* ∈ X such that f(x*) ≤ f(x) for all x ∈ X, where f(x*) ≠ −∞.
Suppose we want to optimize a function with D real parameters; we must select the size of the population N (it must be at least 4). The parameter vectors have the form
xi,G = [x1,i,G, x2,i,G, …, xD,i,G],   i = 1, 2, …, N,
where G is the generation number.
Initialization:
Define upper and lower bounds for each parameter:
xjL ≤ xj,i,1 ≤ xjU.
Randomly select the initial parameter values uniformly on the intervals [xjL, xjU].
Mutation:
Each of the N parameter vectors undergoes mutation, recombination and selection. Mutation expands the search space.
For a given parameter vector xi,G, randomly select three vectors xr1,G, xr2,G and xr3,G such that the indices i, r1, r2 and r3 are distinct.
Add the weighted difference of two of the vectors to the third:
vi,G+1 = xr1,G + F (xr2,G − xr3,G)
  • The mutation factor F is a constant from [0, 2]
  • vi,G+1 is called the donor vector
Recombination:
Recombination incorporates successful solutions from the previous generation. The trial vector ui,G+1 is developed from the elements of the target vector xi,G and the elements of the donor vector vi,G+1. Elements of the donor vector enter the trial vector with probability CR:
uj,i,G+1 = vj,i,G+1 if randj,i ≤ CR or j = Irand; otherwise uj,i,G+1 = xj,i,G
  • randj,i ~ U[0, 1]; Irand is a random integer from {1, 2, …, D}
  • Irand ensures that ui,G+1 ≠ xi,G
Selection:
The target vector xi,G is compared with the trial vector ui,G+1, and the one with the lower function value is admitted to the next generation:
xi,G+1 = ui,G+1 if f(ui,G+1) ≤ f(xi,G); otherwise xi,G+1 = xi,G.
Mutation, recombination and selection continue until some stopping criterion is reached.
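The mutation, recombination and selection steps above can be sketched as a minimal DE/rand/1/bin loop. The control-parameter values and the sphere test function below are illustrative only, not the settings tuned for the LVDT data:

```python
import numpy as np

def differential_evolution(f, bounds, NP=20, F=0.8, CR=0.5, Gmax=200, seed=1):
    # Minimal DE/rand/1/bin following Storn & Price's scheme.
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds, dtype=float).T
    D = len(low)
    pop = rng.uniform(low, high, (NP, D))          # random initial population
    cost = np.array([f(p) for p in pop])
    for _ in range(Gmax):
        for i in range(NP):
            # pick three distinct vectors, none equal to the target
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                    3, replace=False)
            donor = pop[r1] + F * (pop[r2] - pop[r3])      # mutation
            cross = rng.random(D) <= CR
            cross[rng.integers(D)] = True                  # I_rand guarantee
            trial = np.where(cross, donor, pop[i])         # recombination
            trial = np.clip(trial, low, high)
            fc = f(trial)
            if fc <= cost[i]:                              # greedy selection
                pop[i], cost[i] = trial, fc
    best = cost.argmin()
    return pop[best], cost[best]

# smoke test on the 2-D sphere function
x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)),
                                        bounds=[(-5, 5), (-5, 5)])
print(f_best)
```

In the paper's setting, f would be the MSE of the inverse-model network evaluated on the LVDT calibration data, and the D-dimensional vector would hold the network parameters being optimized.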
It has been observed from the graphs above (Figs. 4 and 5) that the relation between input displacement and output voltage of the LVDTs is nonlinear before compensation; after compensation by the DE algorithm, the nonlinearity is successfully compensated. The DE algorithm has a few control parameters: population size NP, scaling factor F, combination coefficient K, and crossover rate CR. The problem-specific parameters are the maximum generation number Gmax and the number of parameters defining the problem dimension D; the values of these two parameters depend on the problem to be optimized. From the observed readings of LVDT-1 and LVDT-2 shown in Table 2 and Table 3, the simulation study has been carried out and the following results have been obtained.
The DE algorithm has a few control parameters: population size NP, scaling factor F and crossover rate CR. In the simulations, it was observed that the value of the scaling factor significantly affected the performance of DE, as can be seen in Table 6 and Table 7. To get the best performance from DE, the scaling factor F and the crossover rate CR must be optimally tuned for each function, which is of course a time-consuming task. For simplicity and flexibility, the value of F was randomly chosen in [0, 2] and the value of CR in [0, 1] for each generation, instead of using constant values. The DE algorithm was run 1000 times for each function to obtain average results; for each run, the initial population was randomly created using a different seed number. The corresponding MSE values and average training times are calculated and listed in Table 6 and Table 7.
Table 6. DE based nonlinearity compensation for LVDT-1 [table image not reproduced]
Table 7. DE based nonlinearity compensation for LVDT-2 [table image not reproduced]
- 3.4 ANN trained by Genetic algorithm based nonlinearity compensation
To guide ANN learning, GA is employed to determine the best number of hidden layers and nodes, the learning rate, the momentum rate, and the weight optimization. With GA, the learning becomes faster and more effective. The flowchart of GANN weight optimization is shown in Fig. 7. In the first step, weights are encoded into chromosome format; the second step is to define a fitness function for evaluating each chromosome's performance. This function must estimate the performance of a given neural network; the measure usually used is the Mean Squared Error (MSE). The error can be transformed into a fitness value using one of the two equations below.
[fitness equations: images not reproduced]
Fig. 7. Flow chart of GANN weight optimization [image not reproduced]
In GANN for optimum topology, the neural network is defined by a "genetic encoding" in which the genotype is the encoding of the different characteristics of the MLP and the phenotype is the MLP itself. The genotype therefore contains the parameters related to the network architecture, i.e., the number of hidden layers (H) and the number of neurons in each hidden layer (NH), along with other genes representing the BP parameters. The most common parameters to be optimized are the learning rate (η) and the momentum (α), which are encoded as binary numbers. The parameter that seems to best describe the goodness of a network configuration is the number of epochs (ep) needed for the learning; the goal is to minimize ep. The fitness function is:
[equation image not reproduced]
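As an illustration of the binary encoding of BP parameters mentioned above, a learning rate or momentum value can be quantized to a fixed-length bit string and decoded back. The 8-bit width and the [0, 1] range below are assumptions for the sketch, not the paper's settings:

```python
def encode(value, bits=8, lo=0.0, hi=1.0):
    # Quantize a BP parameter (e.g. learning rate eta) to a bit string.
    level = round((value - lo) / (hi - lo) * (2**bits - 1))
    return format(level, f'0{bits}b')

def decode(chromosome, lo=0.0, hi=1.0):
    # Map the bit string back onto the [lo, hi] parameter range.
    bits = len(chromosome)
    return lo + int(chromosome, 2) / (2**bits - 1) * (hi - lo)

gene = encode(0.25)                    # learning rate eta as a chromosome gene
print(gene, round(decode(gene), 3))
```

The quantization error shrinks as the bit width grows, so the chromosome length trades search-space size against the resolution with which η and α can be tuned.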
The parameters of the GANN training algorithm are listed in Table 8 and Table 9. After several runs, the genetic search returns approximately the same best solution each time, despite the use of different randomly generated populations and different population sizes, reaching the lowest MSE value within very few generations. The maximum number of training cycles may be set relative to the size of the network.
Table 8. ANN trained by GA based nonlinearity compensation for LVDT-1 [table image not reproduced]
Table 9. ANN trained by GA based nonlinearity compensation for LVDT-2 [table image not reproduced]
The first step in developing a neural network is to create a database for its training, testing and validation. The output voltage of the LVDT is used to form the rows of the input data matrix, and the output matrix is the target matrix, consisting of data having a linear relation with the displacement. The process of finding the weights that achieve the desired output is called training. The optimized ANN is found by considering different algorithms with varying numbers of hidden layers, iterations and epochs. Mean Square Error (MSE) is the average squared difference between outputs and targets; lower values are better, and zero means no error. For the ANN trained by GA, the number of iterations is initially set to 10 and the corresponding MSE and training time are noted. The iterations are then increased to 20 and training is repeated, and the process continues up to 100 iterations, noting the MSE and training time at each step.
From the observed readings of LVDT-1 and LVDT-2, the simulation study has been carried out and the following results have been obtained.
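The GA-based weight optimization described in this section can be sketched as follows. The network size, the GA operators (tournament selection, uniform crossover, Gaussian mutation, elitism), and the synthetic inverse-model data are illustrative assumptions rather than the paper's exact configuration; fitness ranks chromosomes by MSE, which is equivalent to maximizing 1/(1 + MSE):

```python
import numpy as np

def mse(w, X, T, H=5):
    # Decode a flat chromosome into a 1-H-1 tanh network and score it.
    w1, b1 = w[:H].reshape(1, H), w[H:2*H]
    w2, b2 = w[2*H:3*H].reshape(H, 1), w[3*H]
    Y = np.tanh(X @ w1 + b1) @ w2 + b2
    return float(np.mean((Y - T) ** 2))

def ga_train(X, T, pop_size=40, gens=150, sigma=0.2, seed=3):
    rng = np.random.default_rng(seed)
    D = 3 * 5 + 1                                  # chromosome length for H=5
    pop = rng.standard_normal((pop_size, D))
    for _ in range(gens):
        cost = np.array([mse(p, X, T) for p in pop])
        new = [pop[cost.argmin()]]                 # elitism: keep the best
        while len(new) < pop_size:
            i, j = rng.integers(pop_size, size=2)  # tournament selection
            parent = pop[i] if cost[i] < cost[j] else pop[j]
            k, l = rng.integers(pop_size, size=2)
            other = pop[k] if cost[k] < cost[l] else pop[l]
            mask = rng.random(D) < 0.5             # uniform crossover
            child = np.where(mask, parent, other)
            # Gaussian mutation applied gene-wise with probability 0.3
            child = child + (rng.random(D) < 0.3) * sigma * rng.standard_normal(D)
            new.append(child)
        pop = np.array(new)
    cost = np.array([mse(p, X, T) for p in pop])
    return pop[cost.argmin()], cost.min()

x = np.linspace(-1, 1, 40).reshape(-1, 1)          # displacement (target)
t = x + 0.3 * x**3                                 # illustrative LVDT voltage
w, err = ga_train(t, x)                            # evolve the inverse model
print(err)
```

Unlike backpropagation, no gradient of the network error is needed, so the same loop works for non-differentiable fitness measures such as the epoch-count criterion discussed above.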
4. Results
A computer simulation is carried out in the MATLAB 12 environment using the experimental dataset. The experimental data are collected from two different LVDTs having the specifications shown in Table 1, and the data obtained by conducting experiments on the two LVDTs are given in Tables 2 and 3. The observed simulation results are shown in the figures listed below. It is observed that the ELM model yields the lowest training time, zero seconds, while obtaining better linearity in the overall response when compared to the others. At the same time, the DE algorithm produces the lowest MSE value of 0.000311 for F = 0.4, CR = 0.9 and NP = 100. The average values of training time and MSE are compared and listed in Table 10.
Table 10. Comparison of different methodologies for nonlinearity compensation of two different LVDTs (average best values) [table image not reproduced]
5. Conclusion and Future Scope
This paper has proposed the Extreme Learning Machine (ELM) method and two optimized ANN models to adaptively compensate the nonlinearity exhibited by two different LVDTs. On comparison, the ELM-based nonlinearity compensation requires less training time, while the Differential Evolution (DE)-based compensation yields a better mean square error value than the other methods. The results reveal that the ELM method gives the best linearization approximation and compensates the nonlinearity with the least training time, while the DE algorithm attains the lowest MSE among the proposed tools. The proposed algorithms have low structural complexity and are simple in their testing and validation procedures. This adaptive approach can be applied to any transducer having a nonlinear characteristic; it is used to make a transducer output as linear as possible and is also suitable for real-time implementation.
Output performance figures (images not reproduced):
  • ELM based nonlinearity compensation of LVDT-1 (sine function)
  • ELM based nonlinearity compensation of LVDT-1 (sigmoid function)
  • ELM based nonlinearity compensation of LVDT-2 (sine function)
  • ELM based nonlinearity compensation of LVDT-2 (sigmoid function)
  • DE algorithm based nonlinearity compensation of LVDT-1 (F=0.8, CR=0.5, NP=100)
  • DE algorithm based nonlinearity compensation of LVDT-1 (F=0.7, CR=0.4, NP=100)
  • DE algorithm based nonlinearity compensation of LVDT-2 (F=0.8, CR=0.5, NP=100)
  • GA-ANN based nonlinearity compensation of LVDT-1 (NP=10)
  • GA-ANN based nonlinearity compensation of LVDT-1 (NP=20)
  • GA-ANN based nonlinearity compensation of LVDT-1 (NP=100)
  • GA-ANN based nonlinearity compensation of LVDT-2 (NP=10)
  • GA-ANN based nonlinearity compensation of LVDT-2 (NP=30)
  • GA-ANN based nonlinearity compensation of LVDT-2 (NP=50)
Acknowledgements
The authors would like to thank the management of Einstein College of Engineering, Tirunelveli, and Ultra College of Engineering and Technology for Women, Madurai, for providing the support for the completion and publication of this research work.
BIO
S. Murugan received the B.E. degree in Electronics and Instrumentation Engineering from Manonmaniam Sundaranar University, Tamil Nadu, India, in 2002 and the M.E. degree in Control and Instrumentation Engineering from Anna University, Tamil Nadu, India, in 2005. From 2005 to 2006, he was a Lecturer with the Kamaraj College of Engineering and Technology, Virudhunagar. From 2007 to 2010, he was an Assistant Professor with the Department of Electrical and Electronics Engineering, Francis Xavier Engineering College. In July 2011, he joined the Ph.D. programme as a part-time research scholar at Anna University, Chennai, India. He is currently working as an Associate Professor with the Department of Electrical and Electronics Engineering, Einstein College of Engineering, Tirunelveli, India. His research interests include sensor design, measurement system design, and control systems. He was born in 1981 at Srivilliputhur, Tamil Nadu, India.
SP. Umayal is a Professor and Head of Electrical and Electronics Engineering at ULTRA College of Engineering and Technology for Women, Madurai. She completed her Bachelor of Engineering in Electrical and Electronics Engineering at Thiyagarajar College of Engineering in 1990 and her Master of Engineering in Power Systems at the same institution in 1999. She received her Ph.D. in 2008 in the field of power system optimization. Her areas of research include power system optimization, FACTS and power quality. She has 35 publications in her research area and is presently guiding 10 P.G. and 5 Ph.D. research scholars. She is a life member of IE(I) and ISTE.
References
Mishra Saroj Kumar , Panda Ganapati , Das Debi Prasad 2010 “A Novel Method of Extending the Linearity Range of Linear Variable Differential Transformer Using Artificial Neural Network” IEEE Transactions on Instrumentation and Measurement 59 (4) 947 - 953    DOI : 10.1109/TIM.2009.2031385
Kano Y. , Hasebe S. , Miyaji H. 1990 “New linear variable differential transformer with square coils” IEEE Trans. Magn. 26 (5) 2020 - 2022    DOI : 10.1109/20.104605
Saxena S. C. , Seksena S. B. L. 1989 “A self-compensated smart LVDT transducer” IEEE Trans. Instrum. Meas. 38 (3) 748 - 753    DOI : 10.1109/19.32186
Tian G. Y. , Zhao Z. X. , Baines R. W. , Zhang N. 1997 “Computational algorithms for linear variable differential transformers (LVDTs)” Proc. Inst. Elect. Eng. -Sci. Meas. Technol. 144 (4) 189 - 192    DOI : 10.1049/ip-smt:19971262
Crescini D. , Flammini A. , Marioli D. , Taroni A. 1998 “Application of an FFT-based algorithm to signal processing of LVDT position sensors” IEEE Trans. Instrum. Meas. 47 (5) 1119 - 1123    DOI : 10.1109/19.746567
Ford R.M. , Weissbach R. S. , Loker D. R. 2001 “A novel DSP-based LVDT signal conditioner” IEEE Trans. Instrum. Meas. 50 (3) 768 - 774    DOI : 10.1109/19.930452
Patra J. C. , Kot A. C. , Panda G. 2000 “An intelligent pressure sensor using neural networks” IEEE Trans. Instrum. Meas. 49 (4) 829 - 834    DOI : 10.1109/19.863933
Patra J. C. , van den Bos A. , Kot A. C. 2000 “An ANNbased smart capacitive pressure sensor in dynamic environment” Sens. Actuators A, Phys. 86 (12) 26 - 38    DOI : 10.1016/S0924-4247(00)00360-5
Mishra S. K. , Panda G. , Das D. P. , Pattanaik S. K. , Meher M. R. 2005 “A novel method of designing LVDT using artificial neural network” in Proc. IEEE Conf. ICISIP 223 - 227
Patra J. C. , Pal R. N. 1995 “A functional link artificial neural network for adaptive channel equalization” Signal Process 43 (2) 181 - 195    DOI : 10.1016/0165-1684(94)00152-P
Patra J. C. , Pal R. N. , Baliarshing R. , Panda G. 1999 “Nonlinear channel equalization for QAM signal constellation using artificial neural network” IEEE Trans. Syst., Man, Cybern. B, Cybern. 29 (2) 262 - 271    DOI : 10.1109/3477.752798
Widrow B. , Sterns S. D. 1985 Adaptive Signal Processing Prentice-Hall Englewood Cliffs, NJ
Flammini A. , Marioli D. , Sisinni E. , Taroni A. 2007 “Least mean square method for LVDT signal processing” IEEE Trans. Instrum. Meas. 56 (6) 2294 - 2300    DOI : 10.1109/TIM.2007.908248
Li Jun , Zhao Feng 2006 “Nonlinear Inverse Modeling of Sensor Characteristics Based on Compensatory Neurofuzzy Systems” Proc. 1st International Symposium on System and Control in Aerospace and Astronautics Harbin, China
Leng Yi , Zhao Genbao , Li Qingxia , Sun Chunyu , Li Sheng 2007 “A High Accuracy Signal Conditioning Method and Sensor Calibration System for Wireless Sensor in Automotive Tire Pressure Monitoring System” Proc. International Conference on Wireless Communications, Networking and Mobile Computing Shanghai, China
Wang Xiaoh 2008 “Non-linearity Estimation and Temperature Compensation of Capacitor Pressure Sensor Using Least Square Support Vector Regression” Proc. International Symposium on Knowledge Acquisition and Modeling Workshop Beijing, China
Patra Jagdish C. , Bornand Cedric , Chakraborty Goutam 2009 “Hermite Neural Network-based Intelligent Sensors for Harsh Environments” Proc. International Conference on Neural Networks Georgia, USA
Patra Jagadish C , Bornand Cedric 2010 “Development of Chebyshev Neural Network-based Smart Sensors for Noisy Harsh Environment” Proc. World Congress on Computational Intelligence Barcelona, Spain
Wang Zhiqiang , Chen Ping , Zhao Mingbo 2010 “Approaches to Non-Linearity Compensation of Pressure Transducer Based on HGA-RBFNN” Proc. 8th World Congress on Intelligent Control and Automation Jinan, China
Chuan Yang , Chen Li 2010 “The Intelligent Pressure Sensor System Based on DSP” Proc. 3rd International Conference on Advanced Computer Theory and Engineering Chengdu, China
Huang Jiaoying , Yuan Haiwen , Cui Yong , Zheng Zhiqiang 2010 “Nonintrusive Pressure Measurement With Capacitance Method Based on FLANN” IEEE Transactions on Instrumentation and Measurement 59 (11) 2914 - 2920    DOI : 10.1109/TIM.2010.2045933
Patra Jagadish Chandra , Meher Pramod Kumar , Chakraborty Goutam 2011 “Development of Laguerre Neural-Network- Based Intelligent Sensors for Wireless Sensor Networks” IEEE Transactions on Instrumentation and Measurement 60 (3)
Santhosh K V , Roy B K “An Improved Smart Pressure Measuring Technique” Proc. The International Joint Colloquiums on Computer Electronics Electrical Mechanical and Civil Muvatupuzha, India 20-21 September 2011
Soin Norhayati , Majids Burhanuddin Yeop 2002 “An Analytical Study on Diaphragm Behavior for Micro- Machined Capacitance Pressure Sensor” Proc. International Conference on Semiconductor Electronics Penang, Malaysia
Zhang Zongyang , Wan Zhimin , Liu Chaojun , Cao Gang , Lu Yun , Liu Sheng 2011 “Effects of Adhesive Material on the Output Characteristics of Pressure Sensor” Proc. 11th International Conference on Electronic Packaging Technology and High Density Packaging Shanghai, China
Cui Chunsheng , Ma Tiehua 2011 “The Research of Temperature Compensation Technology and High-temperature Pressure Sensor” Proc. International Conference on Electronic & Mechanical Engineering and Information Technology Harbin, China
Neubert H. K. P 1975 Instrument Transducers: an Introduction to their Performance and Design Clarendon Press Oxford
Liptak Bela G 2003 Instrument Engineers’ Handbook: Process Measurement and Analysis 4th Edition CRC Press
Li Jeng-Bin , Chung Yun-Kung 2005 “A Novel Backpropagation Neural Network Training Algorithm Designed by an Ant Colony Optimization” IEEE/PES Transmission and Distribution Conference & Exhibition: Asia and Pacific Dalian, China
Bianchi L. , Gambardella L.M. , Dorigo M. 2002 “An ant colony optimization approach to the probabilistic travelling salesman problem” Springer Verlag Proc. of PPSN-VII, Seventh International Conference on Parallel Problem Solving from Nature Berlin, Germany
Huang Guang-Bin , Zhu Qin-Yu , Siew Chee-Kheong 2004 Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks International Joint Conference on Neural Networks 2 985 - 990
Wang Dianhui , Huang Guang-Bin 2005 “Protein Sequence Classification Using Extreme Learning Machine” Proceedings of International Joint Conference on Neural Networks 3 1406 - 1411
Huang Guang-Bin , Zhu Qin-Yu , Siew Chee-Kheong 2006 Extreme Learning Machine: Theory and Applications Neurocomputing 70 489 - 501    DOI : 10.1016/j.neucom.2005.12.126
Huang Guang-Bin , Chen Lei , Siew Chee-Kheong 2006 Universal Approximation Using Incremental Constructive Feed forward Networks with Random Hidden Nodes IEEE Transactions on Neural Networks 17 (4) 879 - 892    DOI : 10.1109/TNN.2006.875977
Liang Nan-Ying , Huang Guang-Bin , Rong Hai-Jun , Saratchandran P. , Sundararajan N. 2006 A Fast and Accurate On-line Sequential Learning Algorithm for Feed forward Networks IEEE Transactions on Neural Networks 17 (6) 1411 - 1423    DOI : 10.1109/TNN.2006.880583