Adaptive Detection of Unknown Binary Waveforms
J. J. Spilker, Jr.
Philco Western Development Laboratories, Palo Alto, California
This work was supported by the Philco WDL Independent Development Program. This paper, submitted after the Symposium, represents a more detailed presentation of some of the issues raised in the discussion sessions at the Symposium and hence, constitutes a worthwhile addition to the Proceedings.
One of the most important objectives in processing a stream of data is to determine and detect the presence of any invariant or quasi-invariant “features” in that data stream. These features are often initially unknown and must be “learned” from the observations. One of the simplest features of this form is a finite length signal which occurs repetitively, but not necessarily periodically with time, and has a waveshape that remains invariant or varies only slowly with time.
In this discussion, we assume that the data stream has been pre-processed, perhaps by a detector or discriminator, so as to exhibit this type of repetitive (but unknown) waveshape or signal structure. The observed signal, however, is perturbed by additive noise or other disturbances. It is desired to separate the quasi-invariance of the data from the truly random environment. The repetitive waveform may represent, for example, the transmission of an unknown sonar or radar, a pulse-position modulated noise-like waveform, or a repeated code word.
The problem of concern is to estimate the signal waveshape and to determine the time of each signal occurrence. We limit this discussion to the situation where only a single repetitive waveform is present and the signal sample values are binary. The observed waveform is assumed to be received at low signal-to-noise ratio so that a single observation of the signal (even if one knew precisely the arrival time) is not sufficient to provide a good estimate of the signal waveshape. The occurrence time of each signal is assumed to be random.
The purpose of this note is to describe very briefly a machine[2] which has been implemented to recover the noise-perturbed binary waveform. A simplified block diagram of the machine is shown in Figure 1. The experimental machine has been designed to operate on signals of 10³ samples in duration.
Each analog input sample enters the machine at left and may either contain a signal sample plus noise or noise alone. In order to permit digital operation in the machine, the samples are quantized in a symmetrical three-level quantizer. The samples are then converted to vector form, e.g., the previous 10³ samples form the vector components. A new input vector, Y⁽ⁱ⁾, is formed at each sample instant.
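As an illustration of this front end, the following sketch (in Python, with an assumed dead-zone width, since the paper does not specify the quantizer levels) shows a symmetric three-level quantizer and the formation of a new input vector from the previous 10³ samples at each sample instant.

import numpy as np
from collections import deque

N = 1000          # vector length: the previous 10^3 samples form the components
DEAD_ZONE = 0.5   # dead-zone width of the three-level quantizer (illustrative value)

def quantize3(x, dead_zone=DEAD_ZONE):
    # Symmetric three-level quantizer: map an analog sample to -1, 0, or +1.
    if x > dead_zone:
        return 1
    if x < -dead_zone:
        return -1
    return 0

window = deque(maxlen=N)   # most recent N quantized samples

def next_input_vector(sample):
    # Quantize one sample and return the current input vector Y(i),
    # or None until N samples have been observed.
    window.append(quantize3(sample))
    if len(window) < N:
        return None
    return np.array(window)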
Define the signal sample values as s₁, s₂, ..., sₙ. The observed vector Y⁽ⁱ⁾ is then either (a) perfectly centered signal plus noise, (b) shifted signal plus noise, or (c) noise alone.
At each sample instant, two measurements are made on the input vector: an energy measurement ‖Y⁽ⁱ⁾‖² and a polarity-coincidence cross-correlation with the present estimate of the signal vector stored in memory. If the weighted sum of the energy and cross-correlation measurements exceeds the present threshold value Γᵢ, the input vector is accepted as containing the signal (properly shifted in time), and the input vector is added to the memory. The adaptive memory has 2^Q levels: 2^(Q−1) positive levels, 1 zero level, and 2^(Q−1) − 1 negative levels. New contributions are made to the memory by normal vector addition, except that saturation occurs when a component value is at the maximum or minimum level.
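A minimal sketch of this saturating memory update, assuming the memory is held as an integer vector and using an illustrative word length Q (the paper does not give the value used in the experimental machine), is the following.

import numpy as np

Q = 4                        # memory word length in bits (illustrative value)
M_MAX = 2 ** (Q - 1)         # largest positive memory level
M_MIN = -(2 ** (Q - 1) - 1)  # most negative memory level

def dump_into_memory(memory, y):
    # Add an accepted input vector into the adaptive memory by ordinary
    # component-wise addition, clipping (saturating) at the extreme levels.
    return np.clip(memory + y, M_MIN, M_MAX)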
The acceptance or rejection of a given input vector is based on a hypersphere decision boundary. The input vector is accepted if the weighted sum γᵢ exceeds the threshold Γᵢ:
γᵢ = Y⁽ⁱ⁾∙M⁽ⁱ⁾ + α‖Y⁽ⁱ⁾‖² ⩾ Γᵢ.
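In code form, the acceptance test might read as follows; this is a sketch in which the energy weight α and the use of an ordinary dot product as the polarity-coincidence correlation over three-level samples are assumptions.

import numpy as np

ALPHA = 0.5   # weight of the energy term (illustrative value)

def accept(y, memory, threshold, alpha=ALPHA):
    # Compute the weighted sum gamma_i = Y(i).M(i) + alpha*||Y(i)||^2 and
    # accept the input vector if it meets or exceeds the threshold Gamma_i.
    gamma = float(np.dot(y, memory)) + alpha * float(np.dot(y, y))
    return gamma >= threshold, gamma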
Figure 1—Block diagram of the adaptive binary waveform detector
Geometrically, we see that the input vector is accepted if it falls on or outside of a hypersphere centered at −M⁽ⁱ⁾/2α and having radius squared Γᵢ/α + ‖M⁽ⁱ⁾‖²/4α².
Both the center and radius of this hypersphere change as the machine adapts. The performance and optimality of hypersphere-type decision boundaries have been discussed in related work by Glaser[3] and Cooper.[4]
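For completeness, the hypersphere form follows from the acceptance inequality by completing the square; a short derivation, assuming a positive energy weight α, is

$$
Y^{(i)} \cdot M^{(i)} + \alpha \,\lVert Y^{(i)} \rVert^{2} \;\ge\; \Gamma_i
\quad\Longleftrightarrow\quad
\left\lVert Y^{(i)} + \frac{M^{(i)}}{2\alpha} \right\rVert^{2} \;\ge\; \frac{\Gamma_i}{\alpha} + \frac{\lVert M^{(i)} \rVert^{2}}{4\alpha^{2}},
$$

which exhibits the center −M⁽ⁱ⁾/2α and the radius squared quoted above.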
The threshold value, Γᵢ, is adapted so that it increases if the memory becomes a better replica of the signal, with the result that γᵢ increases. On the other hand, if the memory is a poor replica of the signal (for example, if it contains noise alone), it is necessary that the threshold decay with time to the point where additional acceptances can modify the memory structure.
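The paper does not state the exact threshold-adaptation rule; a minimal sketch consistent with the behavior described (raise Γᵢ toward the weighted sum of the most recently accepted vector, and let it decay geometrically while nothing is accepted) might look like this.

DECAY = 0.999   # per-sample decay factor while no acceptances occur (illustrative)
MARGIN = 0.9    # fraction of the accepted gamma retained as the new threshold (illustrative)

def update_threshold(threshold, accepted, gamma):
    # Raise the threshold when the memory produces a large weighted sum,
    # and let it decay so that a poor memory can eventually be modified.
    if accepted:
        return max(threshold, MARGIN * gamma)
    return DECAY * threshold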
The experimental machine is entirely digital in operation and, as stated above, is capable of recovering waveforms of up to 10³ samples in duration. In a typical experiment, one might attempt to recover an unknown noise-perturbed, pseudo-random waveform of up to 10³ bits duration which occurs at random intervals. If no information is available as to the signal waveshape, the adaptive memory is blank at the start of the experiment.
In order to illustrate the operation of the machine most clearly, let us consider a repetitive binary waveform which is composed of 10³ bits of alternate “zeros” and “ones.” A portion of this waveform is shown in Figure 2a. The waveform actually observed is a noise-perturbed version of this waveform, shown in Figure 2b at -6 db signal-to-noise ratio. The exact sign of each of the signal bits obviously could not be accurately determined by direct observation of Figure 2b.
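A noisy test waveform of this kind can be generated as follows; the sketch assumes unit-amplitude signal bits and Gaussian noise, with the noise variance set from the paper's definition of SNR as peak signal power to average noise power.

import numpy as np

rng = np.random.default_rng(0)

N = 1000                                             # 10^3 bits of alternate zeros and ones
signal = np.where(np.arange(N) % 2 == 0, 1.0, -1.0)  # alternating binary pattern as +/-1

snr_db = -6.0                                        # peak signal power / average noise power
noise_sigma = np.sqrt(10 ** (-snr_db / 10.0))        # = 2 for unit-amplitude signal bits
observed = signal + rng.normal(0.0, noise_sigma, N)  # noise-perturbed waveform (cf. Figure 2b)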
Figure 2—Binary signal with additive noise at -6 db SNR: (a) binary signal; (b) binary signal plus noise
Figure 3—Adaption of the memory at -6 db SNR: (a) Blank initial memory; (b) Memory after first dump; (c) Memory after 12 dumps; (d) Memory after 40 dumps; (e) Perfect “checkerboard” memory for comparison
As the machine memory adapts to this noisy input signal, it progresses as shown in Figure 3. The signs of the 10³ memory components are displayed in a raster pattern in this figure. Figure 3a shows the memory in its blank initial state at the start of the adaption process. Figure 3b shows the memory after the first adaption of the memory. This first “dump” occurred after the threshold had decayed to the point where an energy measurement produced an acceptance decision. Figures 3c and 3d show the memory after 12 and 40 adaptions, respectively. These dumps, of course, are based on both energy and cross-correlation measurements. As can be seen, the adapted memory after 40 dumps is already quite close to the perfect memory shown by the “checkerboard” pattern of Figure 3e.
The detailed analysis of the performance of this type of machine vs. signal-to-noise ratio, average signal repetition rate, signal duration, and machine parameters is extremely complex. Therefore, it is not appropriate here to detail the results of the analytical and experimental work on the performance of this machine. However, several conclusions of a general nature can be stated.
(a) Because the machine memory is always adapting, there is a relatively high penalty for “false alarms.” False alarms can destroy a perfect memory. Hence, the threshold level needs to be set appropriately high for the memory adaption. If one wishes to detect signal occurrences with more tolerance to false alarms, a separate comparator and threshold level should be used.
(b) The present machine structure, which allows for slowly varying changes in the signal waveshape, exhibits a marked threshold effect in steady-state performance at an input signal-to-noise ratio (peak signal power-to-average noise power ratio) of about -12 db. Below this signal level, the time required for convergence increases very rapidly with decreasing signal level. At higher SNR, convergence to noise-like signals, having good auto-correlation properties, occurs at a satisfactory rate.
A more detailed discussion of performance has been published in the report cited in footnote reference 1.