Statistical Method to Extract Evoked Potentials from Noise

Evoked potentials (EPs) are induced by visual or auditory stimulation and represent transient electrical activity of limited brain regions. The signal-to-noise ratio (SNR) of the EPs is typically around -10 dB. In order to study the brain activities related to information processing, one has to extract the single EPs from the noise. We propose a method that requires no a priori information about the characteristics (time, frequency) of the signal and does not use a template. The proposed method combines the wavelet transform with a statistical test.


Introduction
Evoked potentials (EPs) are discrete signals embedded in the spontaneous electroencephalographic (EEG) activity. Their extraction from noise requires repeated recordings: the visual or auditory stimulus triggers the acquisition system, which then collects the evoked response. The evoked potential differs from spontaneous nerve activity (EEG) in that it is synchronous with a triggering event; in practical terms, the stimulation signal triggers the acquisition of the evoked signal. The evoked potential is defined as a transient variation of the electrical potential of a limited region of the brain relative to another, electrically neutral region. The EP is captured by an electrode placed in the electric field emitted by the active structure and compared with the potential detected by a so-called reference electrode. When the reference electrode also captures encephalic nerve activity, the sensor system is called bipolar. On the other hand, when the reference electrode is located on an area without cerebral activity, for example on the ear lobe, the sensor system is called monopolar.
Even in the best case, the evoked potential is captured far from its source, with an amplitude so small that it does not exceed ten microvolts. Moreover, it is embedded in continuous brain activity (the EEG can exceed 100 microvolts) captured by the same electrodes, and the EP is sometimes below the noise floor of the amplifiers. It is therefore necessary to extract the EP from the background noise before examining its characteristics. The classical method, used for 40 years, is ensemble averaging, which averages the successive stimulus-synchronous responses. However, the evoked potential is a brain activity that evolves with the attention of the subject, so the average is insufficient to study it satisfactorily.
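The classical averaging can be sketched in a few lines of Python. The waveform, amplitudes and trial count below are illustrative, not the paper's recordings: averaging M stimulus-synchronous trials leaves the time-locked EP intact while shrinking the noise standard deviation by a factor of the square root of M.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 100, 512
t = np.linspace(0.0, 0.5, n_samples)

# Hypothetical EP: a ~10 uV transient, identical in every trial
ep = 10e-6 * np.exp(-((t - 0.15) / 0.02) ** 2)

# Background EEG modeled as ~100 uV white noise, different in every trial
trials = ep + 100e-6 * rng.standard_normal((n_trials, n_samples))

# Ensemble average: noise std shrinks by sqrt(n_trials), EP survives
avg = trials.mean(axis=0)
```

Note that this only recovers the average response; the per-trial variability that motivates the present work is lost.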
The average is not representative of the reaction of the subject during the accomplishment of a task, nor of the difficulty of performing it. In order to extract the single-trial variations of the recorded signals, it is therefore necessary to consider alternative methods more relevant than averaging. Several techniques have been proposed for this purpose. Some consider the event-related potential as a stationary signal and use parametric modelling [1][2][3]. Other researchers [4,5] apply a digital filter to the EP signal in order to improve its SNR; the frequency band is optimized from a spectral analysis of a series of averaged response subsets. Benkherrat [6] applies a series of successive digital filters, called a filter bank, to improve the SNR of the single ERP signals. This method combines the properties of the signal in the frequency and time domains to build a reference signal: it first computes the average of the records, then divides the average signal into potentially useful segments, and finally computes the spectrum of each segment, which is used to define the pass-band of the filter associated with that segment. The method proposed in this work uses the wavelet transform associated with a statistical test. Unlike the methods mentioned above, it does not require a priori information about the characteristics (time, frequency) of the signal and does not use a template. The performance of the method is assessed on synthetic and real signals. First, we define the multiresolution wavelet transform; second, we describe our method, which combines a statistical test and the multiresolution wavelet transform.

Multiresolution Wavelet Transform
The aim of the multiresolution wavelet transform [7,8] is to describe the evolution of the signal over time at different time scales, by providing information on its local regularity. The multiresolution wavelet transform is based on a simple principle. We consider a function whose main characteristic is to be of limited duration in time; this basic function is called the analyzing wavelet. The wavelet transform consists, at a given time position, in compressing or expanding the analyzing wavelet by a scale factor, and then computing the product between the wavelet and the signal at each scale factor. The wavelet coefficient is larger when the variation occurring in the signal is at the same scale as the wavelet. Thus, a variation of very short duration will be detected at a very small scale and, conversely, a variation of long duration will be detected at a large scale.
The multiresolution wavelet transform is written as follows:

W(a, b) = (1/√a) ∫ x(t) ψ*((t − b)/a) dt

where ψ is the analyzing wavelet, a is the scale and b is the time position. The applications of the wavelet transform are numerous: multi-scale analysis, detection of singularities in a signal, signal compression and denoising. The last application is the one that interests us in this work.
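The scale-matching behaviour described above can be illustrated with a minimal continuous wavelet transform in Python. The Ricker (Mexican-hat) analyzing wavelet, the scales and the test signal are our own choices for illustration, not the paper's:

```python
import numpy as np

def ricker(n, a):
    """Mexican-hat (Ricker) wavelet of scale a, sampled on n points."""
    t = np.arange(n) - (n - 1) / 2.0
    x = t / a
    return (2 / (np.sqrt(3 * a) * np.pi ** 0.25)) * (1 - x ** 2) * np.exp(-x ** 2 / 2)

def cwt(signal, scales):
    """Naive CWT: correlate the signal with a dilated wavelet at each
    scale; rows index the scales, columns index the time position."""
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):
        w = ricker(min(10 * int(a), len(signal)), a)
        out[i] = np.convolve(signal, w[::-1], mode="same")
    return out

# A short transient responds most strongly at a comparable scale,
# and the coefficient peak localizes it in time.
t = np.arange(512)
sig = np.exp(-((t - 256) / 4.0) ** 2)   # transient a few samples wide
coeffs = cwt(sig, scales=[2, 8, 32])
```

Here the largest coefficients appear at the scale comparable to the transient's width, and their position along the time axis locates the event, which is the property the denoising method exploits.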

Algorithm of decomposition
We use the Mallat decomposition algorithm [9]. It allows a rapid decomposition of the signal at different scales, as well as a reconstruction of the signal. The decomposition is carried out by a cascading algorithm proceeding by successive filtering. The signal to be analyzed is projected on two orthogonal sub-spaces.
A_j(t) = A_{j−1}(t) + D_{j−1}(t)

where A_{j−1}(t) is called the approximation at level j−1 and D_{j−1}(t) is called the detail at level j−1. By noting a_j[k] (respectively d_j[k]) the sequence of the coefficients of the approximation (respectively of the detail) at level j, one passes to the next level by:

a_{j−1}[k] = Σ_n h[n − 2k] a_j[n]
d_{j−1}[k] = Σ_n g[n − 2k] a_j[n]

where h and g are defined by:

h[n] = √2 ∫ φ(t) φ(2t − n) dt,   g[n] = (−1)^n h[1 − n]

with φ the scaling function associated with the analyzing wavelet.
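One level of the cascade can be made concrete with the Haar filter pair, the simplest choice satisfying g[n] = (−1)^n h[1 − n]. The filter choice is ours for illustration; the paper does not specify which wavelet it uses:

```python
import numpy as np

# Haar analysis filters: h (low-pass) yields the approximation,
# g (high-pass) yields the detail, each followed by downsampling by 2.
h = np.array([1.0, 1.0]) / np.sqrt(2)   # scaling filter
g = np.array([1.0, -1.0]) / np.sqrt(2)  # wavelet filter, g[n] = (-1)^n h[1-n]

def dwt_level(a):
    """One Mallat decomposition step: a_{j-1}[k] = sum_n h[n-2k] a_j[n]."""
    approx = np.convolve(a, h[::-1])[1::2]
    detail = np.convolve(a, g[::-1])[1::2]
    return approx, detail

def idwt_level(approx, detail):
    """Exact reconstruction for the Haar pair."""
    a = np.zeros(2 * len(approx))
    a[0::2] = (approx + detail) / np.sqrt(2)
    a[1::2] = (approx - detail) / np.sqrt(2)
    return a

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
A, D = dwt_level(x)   # half-length approximation and detail
```

Because the two sub-spaces are orthogonal, `idwt_level(A, D)` restores `x` exactly, which is what allows denoising by modifying the coefficients and then reconstructing.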

Algorithm of denoising
The problem is to distinguish the noisy coefficients from the coefficients related to the useful waves. The proposed method is based on two principles. The first is that the standard deviation of the noise coefficients is larger than that of the signal coefficients. The second is that the orthogonal wavelet transform compresses the signal energy into a relatively small number of large coefficients, gathering the signal into a few compartments, whereas the energy of the noise is dispersed over the whole transform and gives small coefficients; in wavelet space, noise and signal therefore dissociate [10]. By averaging the coefficients of the wavelet decomposition, the coefficient of variation (CV) can be calculated. The coefficient of variation [11] is the ratio between the standard deviation of the coefficients of the wavelet decomposition and the average of these coefficients. Thresholding can then be applied as a function of the value of the coefficient of variation to separate the signal from the noise: a noisy coefficient gives a large CV, and conversely for a less noisy coefficient.
We denote the recorded signals x_1(t), x_2(t), ..., x_M(t), with M the total number of signals and N the number of samples per signal. Each signal is decomposed into five levels of detail D_j (j = 1 : 5) and the approximation A_5. We denote by C_j^i(k) the coefficients of the wavelet decomposition of signal i at level j, where k is the index of the coefficient. Let μ_j(k) be the average of the coefficients at the discrete instant k for level j over the M signals:

μ_j(k) = (1/M) Σ_{i=1}^{M} C_j^i(k)
with N_j the number of coefficients at level j. We can then estimate the standard deviation, for each level j and each coefficient index k, over the M trials:

σ_j(k) = sqrt( (1/(M − 1)) Σ_{i=1}^{M} (C_j^i(k) − μ_j(k))² )

When the standard deviation is high, the coefficients of index k at level j are noisy; conversely, less noisy coefficients contain useful signal. We assume that the distribution of the coefficients at each index k follows a normal distribution, hence a thresholding condition [12] based on the confidence interval μ_j(k) ± λ σ_j(k), where λ is fixed by the chosen confidence level of the normal distribution.
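The whole denoising scheme can be sketched under stated assumptions: a full-depth Haar transform stands in for the five-level decomposition, and we read the thresholding condition as a significance test on the across-trial mean of each coefficient under the normal assumption. Both choices are ours, not necessarily the authors' exact implementation:

```python
import numpy as np

def haar(x):
    """Orthonormal Haar transform, full depth (length a power of 2)."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    while n > 1:
        s = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)
        x[: n // 2], x[n // 2 : n] = s, d
        n //= 2
    return x

def ihaar(c):
    """Inverse of haar()."""
    c = np.asarray(c, dtype=float).copy()
    n = 1
    while n < len(c):
        s, d = c[:n].copy(), c[n : 2 * n].copy()
        c[0 : 2 * n : 2] = (s + d) / np.sqrt(2)
        c[1 : 2 * n : 2] = (s - d) / np.sqrt(2)
        n *= 2
    return c

# Hypothetical data: M trials of the same transient wave buried in noise
rng = np.random.default_rng(1)
M, N = 60, 512
t = np.arange(N) / 256.0
ep = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 1.0) ** 2) / 0.05)
trials = ep + 1.5 * rng.standard_normal((M, N))

# Across-trial mean and standard deviation of each wavelet coefficient
W = np.array([haar(tr) for tr in trials])
mu = W.mean(axis=0)
sigma = W.std(axis=0, ddof=1)

# Illustrative test: keep a coefficient position only if its mean over
# the M trials is significantly non-zero (2-sigma normal level);
# positions whose coefficients fluctuate around zero are treated as noise.
keep = np.abs(mu) > 2 * sigma / np.sqrt(M)
den = np.array([ihaar(w * keep) for w in W])
```

Each row of `den` is a denoised single trial: the reproducible coefficient positions are retained per trial, so trial-to-trial amplitude variations at those positions survive, unlike with simple averaging.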

Simulation data
In order to assess the performance of our method, we constructed 60 simulated signals. Each signal contains 512 samples and a sequence of waves. The latency and amplitude of these waves are varied to simulate the variability of the waves of the event-related potentials. To generate the noise, we use a random generator with uniform distribution and build 60 different noise sequences. This noise is added to the synthetic signal at different SNRs, varying between 0 dB and -10 dB. To evaluate the performance of the method described above, we compared it to low-pass digital filtering, using a Butterworth low-pass digital filter. Two parameters are used to compare the two methods: the improvement of the signal-to-noise ratio (SNR) and the mean squared error (MSE). The MSE is calculated between the noisy signals and the signals denoised by our method and by the digital low-pass filter. In figure 1, the improvement of the SNR is higher for the wavelet method (red curve) than for low-pass digital filtering (blue curve); the difference between the two curves is 4 dB. The maximum SNR enhancement is 6 dB for the wavelet method, while it is less than 0 dB for low-pass digital filtering. These results show the superiority of the wavelet method over digital filtering. Figure 2 compares the MSE of the low-pass digital filter (blue curve) and of the wavelet method (red curve): the MSE is lower for the wavelet method. This result confirms the previous findings and shows the superiority of our method.
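The two comparison parameters are straightforward to compute when, as in a simulation, the clean signal is known. In this sketch the "denoised" signal is a stand-in that simply halves the noise; it is used only to exercise the metrics, not to represent either method:

```python
import numpy as np

def snr_db(clean, observed):
    """SNR of `observed` with respect to the known clean signal, in dB."""
    noise = observed - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def mse(clean, observed):
    """Mean squared error against the known clean signal."""
    return np.mean((observed - clean) ** 2)

rng = np.random.default_rng(2)
clean = np.sin(2 * np.pi * np.arange(512) / 64)
noisy = clean + 1.0 * rng.standard_normal(512)

# Stand-in denoiser: halving the noise should give a +6 dB improvement
denoised = 0.5 * (noisy + clean)
improvement = snr_db(clean, denoised) - snr_db(clean, noisy)
```

Halving the noise amplitude divides its power by four, so the SNR improvement is exactly 20 log10(2), about 6 dB, and the MSE drops by the same factor of four.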

Real data
We have tested the wavelet method (standard deviation thresholding) and low-pass digital filtering on real signals. These signals were recorded using a visual stimulation protocol.
The signals are digitized at a sampling frequency of 256 Hz. Figure 3 shows the average of the signals denoised by our method (blue curve), by low-pass digital filtering (red curve), and the average of the noisy signals (black curve). The number of averaged signals is 103. The signal filtered by the digital low-pass filter is less smooth. We can also see that the pre-event interval (-0.5 s to -0.2 s), which contains only noise, has been removed, as has the post-event interval (0.4 s to 0.5 s). This shows that the wavelet method (standard deviation thresholding) retains only the useful signal between 0 s and 0.5 s.

Conclusion
Estimation of individual signals is a difficult problem because of the lack of information about the signal characteristics and the low signal-to-noise ratio. We have proposed a method that can be applied without a priori knowledge. This method is based on the coefficient of variation. In this study, we show that the wavelet method (standard deviation thresholding) is more robust than the low-pass filtering method in estimating single ERPs, using two comparison parameters: the mean squared error and the improvement of the signal-to-noise ratio. These two parameters demonstrate the superiority of the wavelet method over the classical digital filtering method. Low-pass digital filtering is not suitable for extracting ERP signals because the spectrum of the ERP signal overlaps with that of the noise (EEG). The enhancement of the SNR of the individual signals by the standard deviation thresholding method will allow us to estimate the average with a smaller number of signals. The temporal and frequency localization thresholding method can be applied to signals whose temporal characteristics are stable throughout a recording session. The standard deviation thresholding method is simple to implement and can be extended to other signal processing applications. Digital signal processing is an important stage for applications such as the automatic diagnosis of neurological diseases, and biosignal processing is very useful for the telemedicine of the future. Filtering and denoising neurophysiological signals is an integral part of systems such as the brain-computer interface (BCI). The BCI is useful for severely disabled people; this technology contributes to their social integration and therefore to building the city of the future with greater inclusion of these people.

Fig. 3. The average of the noisy signals (black curve), the average of the signals denoised by the wavelet method (blue curve) and by digital filtering (red curve); amplitude in µV.