Academic Open Internet Journal
www.acadjournal.com
Volume 1, 2000

 

Neural Network Technique Application for
the Identification of Aircraft Flight Manoeuvres

Boev J., Tagarev T., Stoianov Tz.


Abstract

This paper presents a method and a technique that make it possible to solve the problem of identifying aircraft spatial manoeuvres when the available information is insufficient. The neural network technique is used for this purpose. The application of the back-propagation algorithm is explained in detail. The validity of the method is demonstrated by a numerical experiment.
 
 

1. INTRODUCTION
 
 

One of the main tasks of express flight data processing is the identification of the aircraft flight manoeuvres after every flight. Each flight manoeuvre can be decomposed into separate images (parameters) that characterize it (for example, the change of altitude, acceleration, speed, roll, pitch, etc.). Some of these parameters can be recorded by aircraft flight recorders.

The methods and algorithms currently in use do not make it possible to identify aircraft flight manoeuvres with high precision and reliability using only one of these parameters. To identify a separate flight figure, these methods need more than one parameter (usually three or four).

In this work we propose to use the neural network technique for express identification of aircraft spatial manoeuvres when the available information is insufficient. To teach the neural network to identify the flight figures, the standard back-propagation learning algorithm (BPLA) for the multilayer perceptron (MLP) is used.
 
 

2. GIST OF THE METHOD
 
 

The behaviour of the network is determined on the basis of a set of input/output pairs. Each learning example is composed of n input signals x_i (i = 1, 2, ..., n) and m corresponding desired output signals d_j (j = 1, 2, ..., m). The input/output pairs are expressed as stable states of neurons, which are usually represented by +1 (ON) and -1 (OFF). Learning of the MLP consists in adjusting all weights such that the error measure between the desired output signals d_jp and the actual output signals y_jp, averaged over all learning examples p, becomes minimal (possibly zero). The standard back-propagation learning algorithm uses the steepest-descent gradient approach to minimize the mean-square error function.
 
 

The local error function for the p-th learning example can be formulated as

E_p = \frac{1}{2} \sum_{j=1}^{m} (d_{jp} - y_{jp})^2 ,
 
 

and the total error function as

E = \sum_{p=1}^{P} E_p ,
 
 

where d_jp and y_jp are the desired and the actual output signals of the j-th output neuron for the p-th pattern, respectively.
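The two error measures above can be sketched directly in Python; this is an illustration, with variable names of our own choosing rather than taken from the paper:

```python
import numpy as np

def local_error(d_p, y_p):
    """Local error E_p = 1/2 * sum_j (d_jp - y_jp)^2 for one pattern p."""
    return 0.5 * np.sum((d_p - y_p) ** 2)

def total_error(D, Y):
    """Total error E = sum over all P learning examples of E_p."""
    return sum(local_error(d_p, y_p) for d_p, y_p in zip(D, Y))

# Example: two patterns, two output neurons with states in {-1, +1}
D = np.array([[1.0, -1.0], [-1.0, 1.0]])   # desired outputs d_jp
Y = np.array([[0.8, -0.6], [-1.0, 1.0]])   # actual outputs y_jp
E = total_error(D, Y)
```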
 
 

To find the minimum of the global error function E we will use the on-line learning technique, in which the training patterns are presented sequentially, usually in random order. The architecture of the BPLA used for a three-layer perceptron is shown in Fig. 1.

For each learning example the synaptic weights w_{ji}^{(s)} (s = 1, 2, 3) are changed by an amount \Delta w_{ji}^{(s)} proportional to the respective negative gradient of the local error function E_p, which can be written mathematically as

\Delta w_{ji}^{(s)} = -\eta \frac{\partial E_p}{\partial w_{ji}^{(s)}} , \quad \eta > 0 .
 
 

It has been proved that, if the learning parameter \eta is sufficiently small, this procedure minimizes the global error function E = \sum_{p} E_p.

The updating formula (in the case of a three-layer perceptron) for the synaptic weights of the output layer is

\Delta w_{ji}^{(3)} = \eta \, \delta_j^{(3)} y_i^{(2)} ,

where \delta_j^{(3)} = (d_j - y_j) \, f'(u_j^{(3)}) .

Fig. 1.
 

The updating formulas for the hidden layers are

\Delta w_{ji}^{(s)} = \eta \, \delta_j^{(s)} y_i^{(s-1)} , \quad s = 1, 2 ,

where \delta_j^{(2)} = f'(u_j^{(2)}) \sum_k \delta_k^{(3)} w_{kj}^{(3)} ,

and \delta_j^{(1)} = f'(u_j^{(1)}) \sum_k \delta_k^{(2)} w_{kj}^{(2)} ; for s = 1, y_i^{(0)} \equiv x_i are the network inputs.
 
 

The chosen sigmoid activation function for all neurons may be a unipolar one, i.e. described by

f(u) = \frac{1}{1 + e^{-u}} ,

or a hyperbolic tangent function, i.e.

f(u) = \tanh(u) .
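Both activation choices, together with the derivatives needed for the local errors, can be sketched as follows. Expressing each derivative through the function value itself is a standard convenience, not something the paper prescribes:

```python
import numpy as np

def sigmoid(u):
    """Unipolar sigmoid f(u) = 1 / (1 + exp(-u))."""
    return 1.0 / (1.0 + np.exp(-u))

def sigmoid_prime(u):
    """Derivative of the unipolar sigmoid: f'(u) = f(u) * (1 - f(u))."""
    f = sigmoid(u)
    return f * (1.0 - f)

def tanh_prime(u):
    """Derivative of f(u) = tanh(u): f'(u) = 1 - tanh(u)**2."""
    return 1.0 - np.tanh(u) ** 2
```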
 

The major difference between the learning rule for the output layer and that for the hidden layers is the evaluation of the local error \delta_j^{(s)} (s = 1, 2, 3). In the output layer the error is a function of the desired and the actual outputs and of the derivative of the sigmoid activation function. For the hidden layers, the local errors are evaluated on the basis of the local errors in the upper layer.
 

The algorithm can be carried out in the following steps [ ]:
 

Step 1. Initialize all synaptic weights w_{ji}^{(s)} to small random values.

Step 2. Present an input from the class of learning examples and calculate the actual outputs of all neurons using the present values of w_{ji}^{(s)}.

Step 3. Specify the desired output and evaluate the local errors \delta_j^{(s)} for all layers.

Step 4. Adjust the synaptic weights according to the iterative formula

w_{ji}^{(s)}(t+1) = w_{ji}^{(s)}(t) + \eta \, \delta_j^{(s)} y_i^{(s-1)} , \quad s = 1, 2, 3 .

Step 5. Present another input pattern corresponding to the next learning example and go back to Step 2.
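Steps 1-5 can be sketched as an on-line training loop for a perceptron with three weight layers (s = 1, 2, 3). The layer sizes, the tanh activation, and the toy task below are our own assumptions for illustration, not the configuration used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: initialize all synaptic weights to small random values.
sizes = [4, 6, 6, 2]           # inputs, two hidden layers, outputs (assumed)
W = [rng.uniform(-0.1, 0.1, (sizes[s + 1], sizes[s])) for s in range(3)]

def forward(x):
    """Step 2: present an input and compute the outputs of all neurons."""
    ys, us = [x], []
    for Ws in W:
        us.append(Ws @ ys[-1])
        ys.append(np.tanh(us[-1]))
    return us, ys

def train_pattern(x, d, eta=0.1):
    """Steps 3-4: evaluate the local errors and adjust the weights."""
    us, ys = forward(x)
    # Output layer: delta = (d - y) * f'(u)
    delta = (d - ys[-1]) * (1.0 - np.tanh(us[-1]) ** 2)
    for s in reversed(range(3)):
        grad = np.outer(delta, ys[s])
        if s > 0:  # hidden layers: local errors come from the layer above
            delta = (W[s].T @ delta) * (1.0 - np.tanh(us[s - 1]) ** 2)
        W[s] += eta * grad                               # Step 4

def total_error(X, D):
    return sum(0.5 * np.sum((d - forward(x)[1][-1]) ** 2)
               for x, d in zip(X, D))

# Step 5: present the learning examples sequentially, in random order.
X = rng.uniform(-1.0, 1.0, (20, 4))
D = np.where(X[:, :2] > 0, 1.0, -1.0)    # toy targets in {-1, +1}
err_before = total_error(X, D)
for epoch in range(200):
    for i in rng.permutation(len(X)):
        train_pattern(X[i], D[i])
err_after = total_error(X, D)
```

Note that each pattern's backward pass computes the lower-layer error with the pre-update weights, matching the gradient of the local error function E_p.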

An improvement of the algorithm is possible by adding the so-called momentum term

\Delta w_{ji}^{(s)}(t) = \eta \, \delta_j^{(s)} y_i^{(s-1)} + \alpha \, \Delta w_{ji}^{(s)}(t-1) ,

\eta > 0 ; \quad 0 \le \alpha < 1 (typically \alpha = 0.9) ; \quad s = 1, 2, 3 .

The weights are now updated using the formula

w_{ji}^{(s)}(t+1) = w_{ji}^{(s)}(t) + \Delta w_{ji}^{(s)}(t) .
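The momentum modification only changes how the weight increment is formed; a minimal sketch (the function and variable names are ours):

```python
def momentum_update(w, grad_term, prev_dw, eta=0.1, alpha=0.9):
    """dw(t) = eta * grad_term + alpha * dw(t-1); w(t+1) = w(t) + dw(t).

    grad_term stands for delta_j * y_i from the plain gradient rule.
    """
    dw = eta * grad_term + alpha * prev_dw
    return w + dw, dw

# With a constant gradient the increments accumulate: the effective
# step grows toward eta / (1 - alpha) times the plain step.
w, dw = 0.0, 0.0
for _ in range(100):
    w, dw = momentum_update(w, 1.0, dw)
```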
 

3. NUMERICAL RESULTS

The method described in Sec. 2 is employed to identify aircraft manoeuvres using data on only one of the flight parameters. We use records of the altitude change, made by a flight recorder of type SARPP-12, for four types of manoeuvres: zoom (hump), take-off, combat turn and Immelmann. The data shown in Fig. 2 are normalized within the limits of (0, 1).
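Normalizing a recorded altitude channel into the (0, 1) range can be sketched as a linear min-max scaling; the paper does not give its exact formula, so this is an assumption:

```python
import numpy as np

def normalize(channel):
    """Scale a recorded flight parameter linearly into the [0, 1] range."""
    channel = np.asarray(channel, dtype=float)
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo)

# e.g. a short altitude record in metres (illustrative values)
h = [1200.0, 1500.0, 2100.0, 1800.0]
h_norm = normalize(h)
```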

Fig. 2.


To train the neural network (a three-layer perceptron), 50 random samples for each of the manoeuvres, 200 samples in total, are supplied in series to its input. The network performs the identification, calculates the error and adjusts the synaptic weights. The learning process is completed when the error reaches the admissible value or after a predetermined number of iterations (epochs).

After training, the multilayer perceptron is able to respond properly to input patterns not presented during the learning process.

The network was tested first with the same 200 samples, and then with 200 new samples that the network had not "seen". During the second test the initial values of the synaptic weights were the values stabilized during the training process. Fig. 3 presents the change of the total error, and Fig. 4 the learning rate, during the learning process.

Fig. 3.

Fig. 4.

The results of the numerical experiment show that when the number of learning epochs exceeds 4000, the learning examples are identified correctly and the total error is admissible. No more than four of the testing examples are identified incorrectly.
 

4. CONCLUSIONS

The described back-propagation algorithm can be applied to aircraft manoeuvre identification based on data recorded by simple flight recorder units. These results are encouraging, and future work will be devoted to improving the method. The derived algorithms can serve as a base for developing software systems for aircraft flight manoeuvre identification.
 

REFERENCES
 

[1] Cichocki A., Unbehauen R. Neural Networks for Optimization and Signal Processing, John Wiley & Sons, Stuttgart, 1993.

[2] Rodin E.Y., Wu Y. Artificial Intelligence Methodologies for Aerospace and Other Control Systems, Washington University, 1993.

[3] Schvaneveldt R.W., Goldsmith T.E. Neural Network Models of Air Combat Maneuvering, Brooks AFB, Texas, 1992.

[4] Tagarev T.D., Ivanova P.I., Moscardini A. Computing Methods for Early Warning of Violent Conflicts, Proceedings of the AFCEA Sofia Seminar, 1996, pp. 44-51.

Technical College - Bourgas,
All rights reserved, © August, 2000