Bayesian Source Separation
x(t) = A s(t)
Source separation is a ubiquitous problem in the sciences: multiple signal sources s(t) are recorded by multiple detectors, so that each detector records a mixture x(t) of the original signals. The goal is to recover estimates of the original signals.
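The instantaneous linear mixing model x(t) = A s(t) above can be illustrated with a small simulation; the signals and mixing matrix here are made up purely for illustration:

```python
import numpy as np

# Illustrative example of the linear mixing model x(t) = A s(t):
# two source signals recorded by two detectors through a mixing matrix A.
t = np.linspace(0.0, 1.0, 1000)

# Two original source signals s(t)
s = np.vstack([np.sin(2 * np.pi * 5 * t),             # 5 Hz sinusoid
               np.sign(np.sin(2 * np.pi * 3 * t))])   # 3 Hz square wave

# Mixing matrix A (2 detectors x 2 sources); in a real problem A is unknown
A = np.array([[1.0, 0.5],
              [0.6, 1.0]])

# Each detector records a different mixture x(t) of the original sources
x = A @ s
print(x.shape)  # (2, 1000): two detector recordings
```

Recovering s from x alone is underdetermined, which is why the prior information discussed below matters.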
The problem of source separation is by its very nature an inductive inference problem. There is not enough information to deduce the solution, so one must use any available information to infer the most probable solution. This information comes in two forms: the signal model and the probability assignments. By adopting a signal model appropriate for the problem, one can develop a specially-tailored algorithm. Many people like the idea of a general blind source separation algorithm that can be applied anywhere. However, since the quality of the results depends on the information put into the algorithm, one will do better with an algorithm that incorporates more specific knowledge.
What I appreciate about the Bayesian approach is that it requires one to make the assumptions explicit. This is not the case with ad hoc source separation algorithms, which are almost impossible to modify intelligently if they do not quite work for a particular application. With a Bayesian solution, one needs only to trace the problem back to the model, the probability assignments, or a simplifying assumption and modify it appropriately. While this is often easier said than done, it is still better than the situation one is in when dealing with an ad hoc algorithm, where the model and assumptions are implicit and often unknown.
Bayesian ICA (top)
Since our first papers introducing Bayesian source separation in 1997, 1998, and 1999, we have been developing new techniques for separating mixed signals and applying them to a variety of problems.
Our early work considered the Infomax Independent Component Analysis (ICA) algorithm developed by Bell and Sejnowski, a neural-network-based source separation algorithm. We worked to re-cast the problem as an inference problem, where the machinery of Bayesian inference could be employed to accommodate additional prior information.
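To make the connection concrete, here is a minimal sketch of the Infomax update in its natural-gradient form, viewed as maximum-likelihood estimation of an unmixing matrix W under a super-Gaussian source prior. The data, learning rate, and iteration count are all illustrative choices, not taken from the papers:

```python
import numpy as np

# Sketch: Infomax ICA (natural-gradient form) as maximum-likelihood
# estimation of an unmixing matrix W, assuming super-Gaussian sources.
rng = np.random.default_rng(1)

# Synthetic mixtures: two super-Gaussian (Laplacian) sources, mixed linearly
s = rng.laplace(size=(2, 5000))
A = np.array([[1.0, 0.4],
              [0.3, 1.0]])
x = A @ s

W = np.eye(2)     # initial unmixing matrix
lr = 0.05         # learning rate (illustrative)
for _ in range(500):
    y = W @ x     # current source estimates
    # Natural-gradient ascent on the log-likelihood:
    # dW ∝ (I - tanh(y) y^T / N) W, with tanh from the super-Gaussian prior
    grad = (np.eye(2) - np.tanh(y) @ y.T / x.shape[1]) @ W
    W += lr * grad

# W @ A should now approximate a scaled permutation matrix
print(np.round(W @ A, 2))
```

The Bayesian reading of this rule is that the tanh nonlinearity encodes a prior on the source amplitude histograms; replacing it changes the assumed source statistics, which is exactly the kind of assumption the Bayesian formulation makes explicit.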
Source Separation and Localization (top)
In this picture, the sensors around the bridge recorded sounds from each of the characters during one of the crew's weekly catastrophic events. Since we know that the Starship Enterprise officers won't wander far from their posts, we can use their approximate locations to help separate their recorded speech signals from the mayhem.
In the SPIE98 paper below, I considered an example where the source positions are known with some accuracy. This, combined with the propagation law of the signal (inverse square), leads to a prior probability on the values of the mixing matrix, which, in general, improves the separation. The results aren't perfect, however, because I use a prior on the source amplitude histograms that is inappropriate for some of the other recorded signals, such as the photon torpedo blast. These difficulties are discussed in the MaxEnt97 paper above, although in a different context. More detailed information can be found at my old BSE site, and in the papers below. I won't tell you who survived, but it's a sure bet that Ensign Jones is toast.
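The idea of turning approximate source positions into a prior on the mixing matrix can be sketched as follows; the geometry here is invented for illustration, and in practice the positional uncertainty would broaden this into a probability distribution over A rather than a single expected value:

```python
import numpy as np

# Sketch: approximate source positions plus the inverse-square propagation
# law give expected mixing coefficients, which can serve as the center of a
# prior probability on the mixing matrix A. Positions are made up.
sensors = np.array([[0.0, 0.0],
                    [4.0, 0.0],
                    [2.0, 3.0]])    # 3 sensor positions (arbitrary units)
sources = np.array([[1.0, 1.0],
                    [3.0, 2.0]])    # 2 approximate source positions

# Distances r_ij from each sensor i to each source j
r = np.linalg.norm(sensors[:, None, :] - sources[None, :, :], axis=2)

# Inverse-square law: expected attenuation A_ij ∝ 1 / r_ij^2
A_prior_mean = 1.0 / r**2
print(np.round(A_prior_mean, 3))
```

Nearby sources dominate their closest sensors, so even rough locations constrain the mixing matrix substantially.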
Neural Source Estimation (top)
The Neural Source Estimation problem
(And proof that I have a brain)
Neural activity in the brain results in the generation of both electric currents and magnetic fields. Electric currents flowing through the volume of the brain can be detected using electrodes on the scalp (or capacitors above the scalp) in a technique called electroencephalography (EEG). On the other hand, magnetic fields can be detected using superconducting quantum interference devices (SQUIDs) in a technique called magnetoencephalography (MEG).
For any given stimulus, multiple areas in the brain respond. This results in multiple neural sources each generating electric currents and magnetic fields, a linear superposition of which is recorded by the detectors (EEG and/or MEG). This linear mixing of source signals results in a classic source separation problem.
In our MaxEnt 1998 paper, we explored the possibility of simultaneously performing source separation and source localization by modeling the neural sources as current dipoles.
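The forward model underlying this kind of analysis is again linear: detector signals are a superposition x(t) = L q(t), where each column of a lead-field matrix L gives one dipole's projection onto the detectors. The sketch below uses a random L purely for illustration; a real lead field would come from a head model:

```python
import numpy as np

# Sketch of the linear forward model for dipolar neural sources:
# x(t) = L q(t) + noise, with L the lead-field matrix. L is random here
# for illustration only; real lead fields come from a head/conductor model.
rng = np.random.default_rng(2)
n_det, n_dip, n_time = 64, 3, 500

L = rng.standard_normal((n_det, n_dip))     # lead field (detectors x dipoles)
q = rng.standard_normal((n_dip, n_time))    # dipole moment time courses
noise = 0.1 * rng.standard_normal((n_det, n_time))

x = L @ q + noise                           # recorded EEG/MEG data
print(x.shape)  # (64, 500): detectors x time samples
```

Because the columns of L depend on the dipole positions and orientations, estimating them jointly with the time courses q(t) performs separation and localization at the same time.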
In later works, we considered the fact that neural sources exhibit some variability in the timing of their responses to stimuli. We realized that in some cases we could use the fact that different neural sources vary differently in latency (differential variability) to aid in separating the neural responses from different sources. This gave rise to a series of papers that resulted in an algorithm called differentially Variable Component Analysis (dVCA).
The dVCA algorithm is a highly specialized algorithm that takes into account the fact that EEG/MEG experiments record data in a finite number of experimental trials. Since the activity produced by neural ensembles varies from trial to trial, our signal model accounts for this by allowing the source waveshape to vary in both amplitude and latency. These effects are estimated for each trial along with the stereotypic source waveshape. The relevant publications are below:
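The trial-to-trial structure of this signal model can be sketched as a simulation: each component's stereotypic waveshape is scaled and latency-shifted per trial before being mixed into the recordings. All names, shapes, and parameter values below are illustrative, not the actual dVCA implementation:

```python
import numpy as np

# Sketch of the dVCA-style signal model: in trial r, component i's
# stereotypic waveshape s_i is scaled by alpha[r, i], shifted by a latency
# tau[r, i], and coupled into the channels through a mixing matrix C.
rng = np.random.default_rng(3)
n_trials, n_comp, n_time, n_chan = 50, 2, 300, 8

t = np.arange(n_time)
# Stereotypic component waveshapes (Gaussian bumps at different times)
s = np.vstack([np.exp(-0.5 * ((t - 100) / 15) ** 2),
               np.exp(-0.5 * ((t - 180) / 25) ** 2)])

C = rng.standard_normal((n_chan, n_comp))                     # coupling matrix
alpha = 1.0 + 0.2 * rng.standard_normal((n_trials, n_comp))   # trial amplitudes
tau = rng.integers(-10, 11, size=(n_trials, n_comp))          # trial latencies

x = np.zeros((n_trials, n_chan, n_time))
for r in range(n_trials):
    for i in range(n_comp):
        shifted = np.roll(s[i], tau[r, i])      # latency-shifted waveshape
        x[r] += alpha[r, i] * np.outer(C[:, i], shifted)
x += 0.05 * rng.standard_normal(x.shape)        # single-trial noise
print(x.shape)  # (50, 8, 300): trials x channels x time
```

Naive averaging over trials blurs the latency-jittered waveshapes; estimating alpha and tau per trial is what lets the stereotypic waveshapes be recovered sharply.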
Informed Source Separation (top)
The Bayesian approach to the source separation problem requires the designer to explicitly describe the signal model, in addition to any other information or assumptions that go into the problem description. This leads naturally to the concept of informed source separation, where the algorithm design incorporates relevant information about the specific problem. This approach, which is at the opposite end of the spectrum from blind source separation, promises to enable researchers to design their own high-quality algorithms that are specifically tailored to the problem at hand.