Elsevier

Neurocomputing

Volume 34, Issues 1–4, September 2000, Pages 239–252

Letters

Blind separation of circularly distributed sources by neural extended APEX algorithm

https://doi.org/10.1016/S0925-2312(00)00161-2

Abstract

The aim of this work is to present a generalized Hebbian learning theory for a complex-weighted linear feed-forward network endowed with lateral inhibitory connections, and to show how it can be applied to blind separation from complex-valued mixtures. We start by stating an optimization principle for Kung–Diamantaras' network which leads to a generalized APEX-like learning theory relying on some non-linear functions, whose choice determines the network's ability. Then we recall the Sudjianto–Hassoun interpretation of Hebbian learning and show that it leads us to the choice of the right set of non-linear functions allowing the network to achieve blind separation. The proposed approach is finally assessed by numerical simulations.

Introduction

Blind Source Separation (BSS) by neural networks and adaptive algorithms has raised much interest in the scientific community (see for example [11], [13], [14], [15], [16], [17] and references therein). Among other approaches, it has been proven that adding non-linearity to linear Principal Component Analysis (PCA) neural networks makes them able to improve the independence of their outputs so as to allow blind separation of real-valued independent sources [13], [14], [15], [16], [17]. Moreover, attention has been paid to blind separation of complex-valued signals because of its useful applications to telecommunications, to array processing, and to blind separation of convolutive mixtures (see [1], [2], [6], [8], [18], [20], [25] and references therein). Consequently, as for instantaneous real-valued separation, the generalization of classical PCA techniques to complex-valued data processing seems interesting. Some examples of the usefulness of complex principal component analysis and principal subspace decomposition algorithms may also be found in [10], [19].

Recently, some attempts have been made to extend the best-known PCA algorithms to the complex case. In [3] Chen and Hou presented a heuristic complex version of the well-known APEX algorithm [9]; in [7] De Castro et al. gave a heuristic complex extension of the GHA algorithm [23]. In this work we formally derive a new learning theory as a generalized complex-valued Hebbian learning for a linear feed-forward network with lateral inhibitory connections [9], [21], [22], which relies on non-linear functions that affect the network's signal-processing ability. Then we discuss the choice of the set of non-linear functions under the theoretical framework proposed by Sudjianto and Hassoun [24], which we extend to the complex case. A particular non-linearity, having the shape of the Rayleigh probability distribution function, allows the neural network to separate out mixed independent complex-valued source signals, as illustrated by computer simulations.

Notation

In the paper we use the following notation: E_x[f(x)] denotes the mathematical expectation of f(x) with respect to the statistics of the multivariate random process x; the superscript T denotes the ordinary transpose, while the superscript H denotes the Hermitian (conjugate) transpose; the superscript ∗ denotes complex conjugation.
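As a quick illustration, these notational conventions map directly onto NumPy operations; the vectors below are hypothetical toy values, not data from the paper:

```python
import numpy as np

# Toy column vector in C^2 (hypothetical values, for illustration only)
x = np.array([[1 + 2j],
              [3 - 1j]])

x_T = x.T          # superscript T: ordinary transpose
x_H = x.conj().T   # superscript H: Hermitian (conjugate) transpose
x_c = x.conj()     # superscript *: complex conjugation

# E_x[f(x)] is, in practice, estimated by a sample mean over realizations of x
samples = np.array([1 + 1j, 2 - 1j, 3 + 0j])
mean_estimate = samples.mean()
```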


Extended APEX algorithm and blind separation of complex sources

Consider the complex-weighted neural network with input x ∈ C^p and output y ∈ C^m, with m ≤ p, described by the following relationships:

z_k = w_k^H x,  y_k = z_k + h_k^H y,  k = 1, …, m,

where z ∈ C^m, the vectors w_k ∈ C^p represent the network's direct connections, and the vectors h_k ∈ C^m represent the network's inhibitory lateral connections. A schematic of this topology is reported in Fig. 1. The original APEX learning rule for such a network was derived by Kung and Diamantaras by applying Oja's first principal component rule, which is a…
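A minimal sketch of the forward pass described by these relationships, assuming (as in the original APEX topology) that the lateral connections are hierarchical, i.e. neuron k receives lateral input only from neurons 1, …, k−1, so the outputs can be computed sequentially; all dimensions and weight values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m = 4, 3   # input dimension p, output dimension m, with m <= p

# Direct connections: row k of W is w_k in C^p
W = rng.standard_normal((m, p)) + 1j * rng.standard_normal((m, p))
# Lateral inhibitory connections: row k of H is h_k; strictly lower-triangular,
# so neuron k only sees the outputs of neurons 1, ..., k-1
H = np.tril(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)), k=-1)

def forward(x):
    """Compute z_k = w_k^H x and y_k = z_k + h_k^H y, k = 1, ..., m."""
    z = W.conj() @ x                   # z_k = w_k^H x for all k at once
    y = np.zeros(m, dtype=complex)
    for k in range(m):                 # sequential because of the hierarchy
        y[k] = z[k] + H[k].conj() @ y  # lateral input from earlier outputs only
    return z, y

x = rng.standard_normal(p) + 1j * rng.standard_normal(p)
z, y = forward(x)
```

With H strictly lower-triangular the first output equals its feed-forward term, y_1 = z_1, and each later output is inhibited by its predecessors.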

Computer simulations

Suppose the input x ∈ C^4 is formed by a linear mixture of four independent signals arranged in a vector s ∈ C^4. Signal s_1 is QAM4, s_2 is QAM16, and s_3 is PSK. Signal s_4 is Gaussian noise of variance σ^2 = 0.25. The first row of Fig. 2 depicts the independent signals, while the second row shows the four obtained mixtures. Also, the bar graphs reported in Fig. 3 represent the histograms of the four sources' moduli squashed by the function Q_R(u). A close examination of these graphs shows that the QAM4 and PSK…
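The simulation setup above can be reproduced in outline as follows; the PSK constellation order, the QAM16 scaling, the mixing matrix, and the exact form of the Rayleigh-pdf-shaped squashing function Q_R are assumptions here, since the snippet does not specify them:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000  # samples per source

# Four independent complex-valued sources (constellation details assumed)
qam4 = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), N)
levels = np.array([-3, -1, 1, 3])
qam16 = rng.choice(levels, N) + 1j * rng.choice(levels, N)
psk = np.exp(2j * np.pi * rng.integers(0, 8, N) / 8)  # 8-PSK, order assumed
# Complex Gaussian noise with total variance sigma^2 = 0.25
noise = np.sqrt(0.25 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

s = np.vstack([qam4, qam16, psk, noise])       # source vector s in C^4
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # mixing matrix
x = A @ s                                      # observed mixtures x in C^4

# Rayleigh-pdf-shaped squashing of the sources' moduli (assumed form of Q_R)
def Q_R(u, sigma=1.0):
    return (u / sigma**2) * np.exp(-u**2 / (2 * sigma**2))

squashed = Q_R(np.abs(s))
```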

Conclusion

In this work a new adaptation rule for linear neural networks has been presented as a generalization of APEX learning; it is a generalization in that it applies to complex-weighted neural networks and introduces non-linear functions into the classical Hebbian learning rule. A particular choice of the non-linearity has been discussed by recalling the Sudjianto–Hassoun interpretation of non-classical Hebbian learning, extended here to the complex case.

References (25)

  • M.C.F. De Castro et al., A complex valued Hebbian learning algorithm, Proceedings of IEEE-IJCNN, 1998.
  • G. Desodt, D. Muller, Complex ICA applied to the separation of radar signals, Proceedings of EUSIPCO, Vol. I, 1990, pp....

This research was financially supported by the Italian MURST.