Situation-Invariant Face Recognition Using Neural Networks

Here we consider the human face as a biometric. An original method of feature extraction from image data is introduced, using a feed-forward neural network (multilayer perceptron) and PCA (principal component analysis). This method is used in a human face recognition system, and the results are compared to PCA applied directly, to a system with direct classification of input images by a network, and to a system using PCA in the role of feature extractor.

Each neuron j computes

    v_j = sum_{i=0..p} w_ji * x_i ,    y_j = psi_j(v_j),

where v_j is a linear combination of the inputs x_1, x_2, ..., x_p of neuron j, w_j0 = theta_j is the threshold weight connected to the special input x_0 = -1, y_j is the output of neuron j, and psi_j(.) is its activation function. Herein we use a special form of sigmoidal (non-constant, bounded, monotone-increasing) activation function. In a multilayer perceptron, the outputs of the units in one layer form the inputs to the next layer. The weights of the network are usually computed by training the network with the back-propagation (BP) algorithm. A multilayer perceptron represents a nested sigmoidal scheme [1]; its form for a single output neuron o is

    F(x, w) = psi( sum_j w_oj * psi( sum_k w_jk * psi( ... psi( sum_i w_li * x_i ) ... ) ) ),

where psi(.) is a sigmoidal activation function, w_oj is the synaptic weight from neuron j in the last hidden layer to the single output neuron o, and so on for the other synaptic weights; x_i is the i-th element of the input vector x. The weight vector w denotes the entire set of synaptic weights, ordered first by neurons within a layer and then by weight number within a neuron.
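As an illustrative sketch only (not the paper's actual implementation), the forward pass and a single back-propagation update for a multilayer perceptron with sigmoidal activations and the special threshold input x_0 = -1 might look like the following. All class names, layer sizes, and the learning rate are assumptions introduced for this example:

```python
import numpy as np

def sigmoid(v):
    """Non-constant, bounded, monotone-increasing activation."""
    return 1.0 / (1.0 + np.exp(-v))

class MLP:
    """Single-hidden-layer perceptron; thresholds handled via input x0 = -1."""
    def __init__(self, n_in, n_hidden, n_out, rng=None):
        rng = rng or np.random.default_rng(0)
        # column 0 holds the threshold weight w_j0 = theta_j
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in + 1))
        self.W2 = rng.normal(scale=0.1, size=(n_out, n_hidden + 1))

    @staticmethod
    def _augment(x):
        # prepend the special input x0 = -1
        return np.concatenate(([-1.0], x))

    def forward(self, x):
        h = sigmoid(self.W1 @ self._augment(x))   # hidden outputs y_j
        y = sigmoid(self.W2 @ self._augment(h))   # output neuron(s)
        return h, y

    def bp_step(self, x, target, lr=0.5):
        """One stochastic back-propagation update (squared-error loss)."""
        h, y = self.forward(x)
        # psi'(v) = y (1 - y) for the logistic sigmoid
        delta_o = (y - target) * y * (1.0 - y)
        # hidden-layer local gradients (skip the threshold column of W2)
        delta_h = (self.W2[:, 1:].T @ delta_o) * h * (1.0 - h)
        self.W2 -= lr * np.outer(delta_o, self._augment(h))
        self.W1 -= lr * np.outer(delta_h, self._augment(x))
        return y
```

Training on a toy problem (e.g. XOR) with repeated `bp_step` calls reduces the squared error, illustrating how BP fits the nested sigmoidal scheme described above.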

FACE DATABASE
We use the Yale face database, which consists of face images of 15 people. For the classifier we consider the nearest neighbor algorithm. One approach to improving its accuracy is the use of evolutionary algorithms to optimize feature scaling; another popular approach is to scale features by the mutual information of the training data with the training classes. The nearest neighbor algorithm has some strong consistency results: as the amount of data approaches infinity, it is guaranteed to yield an error rate no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data), and k-nearest neighbor is guaranteed to approach the Bayes error rate for some value of k (where k increases as a function of the number of data points). The k-NN algorithm can also be adapted for estimating continuous variables. One such implementation uses an inverse-distance-weighted average of the k nearest multivariate neighbors and functions as follows:
1. Compute the Euclidean distance from the target point to each sampled point.
2. Order the samples by the computed distances.
3. Choose a heuristically optimal k based on the RMSE obtained by cross-validation.
4. Compute an inverse-distance-weighted average of the k nearest multivariate neighbors.
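The steps above can be sketched as follows. This is a minimal illustration, not the system's actual code; the function names and the leave-one-out choice of cross-validation are assumptions made for the example:

```python
import numpy as np

def knn_inverse_distance_predict(X_train, y_train, x_query, k):
    """Steps 1, 2 and 4: inverse-distance-weighted average of the k nearest neighbors."""
    # 1. Euclidean distances from the target point to each sampled point
    d = np.linalg.norm(X_train - x_query, axis=1)
    # 2. order samples by the computed distances
    order = np.argsort(d)
    nearest, dist = order[:k], d[order[:k]]
    # exact match: return that sample's value directly (avoids division by zero)
    if dist[0] == 0.0:
        return float(y_train[nearest[0]])
    # 4. inverse-distance weights
    w = 1.0 / dist
    return float(np.dot(w, y_train[nearest]) / w.sum())

def choose_k_by_rmse(X, y, candidates=(1, 3, 5)):
    """Step 3: pick k minimising leave-one-out cross-validation RMSE (an assumed scheme)."""
    best_k, best_rmse = None, np.inf
    for k in candidates:
        errs = []
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            pred = knn_inverse_distance_predict(X[mask], y[mask], X[i], k)
            errs.append((pred - y[i]) ** 2)
        rmse = float(np.sqrt(np.mean(errs)))
        if rmse < best_rmse:
            best_k, best_rmse = k, rmse
    return best_k
```

For a query lying between two training points, the inverse-distance weighting interpolates between their target values, which is the behavior step 4 describes.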

CONCLUSION
The main limitation of the current system is that it only detects upright faces looking at the camera. Separate versions of the system could be trained for each lighting condition, and the results could be combined using arbitration methods similar to those presented here. Preliminary work in this area indicates that detecting profile views of faces is more difficult than detecting frontal views, because they have fewer stable features and because the input window will contain more background pixels. We have also applied the same algorithm to the detection of car tires and human eyes, although more work is needed. Even within the domain of detecting frontal views of faces, more work remains. When an image sequence is available, temporal coherence can focus attention on particular portions of the images: even as the lighting condition changes, the location of a face in one frame is a strong predictor of its location in the next frame. Standard tracking methods, as well as expectation-based methods, can be applied to focus the detector's attention.