In this talk I will present a new stochastic model of speech that takes into account the sparseness of the speech signal's time-frequency representation. More specifically, we assume that each time-frequency coefficient can be factorized as the product of a Bernoulli (0/1) discrete random variable and a continuous (typically Gaussian or Laplacian) random variable. We approach the Blind Source Separation (BSS) problem with this model in mind. The (acoustic) BSS problem assumes that a number of voice signals are recorded simultaneously by a number (possibly different) of microphones; the task is to separate the speech signals from the microphone recordings alone (hence "blindly"). We consider the case of two microphones and more than two source signals. In this setting, we develop several estimators of the mixing parameters, as well as signal separation procedures. This is joint work at Siemens with J. Rosca and S. Rickard.
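To make the setting concrete, here is a minimal sketch of the generative model described above: sparse time-frequency coefficients drawn as a product of a Bernoulli indicator and a Gaussian amplitude, observed through two microphones. The instantaneous 2x3 mixing matrix `A` and the activity probability `p` are illustrative assumptions of mine, not parameters from the talk; real acoustic mixing is convolutive and the estimators discussed in the talk are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_gaussian(shape, p, sigma, rng):
    """Sample sparse coefficients as the product of a Bernoulli(p)
    activity indicator and a zero-mean Gaussian amplitude."""
    active = rng.random(shape) < p                 # Bernoulli 0/1 variable
    amplitude = rng.normal(0.0, sigma, shape)      # continuous Gaussian part
    return active * amplitude

# Three sparse sources, each with 10_000 time-frequency coefficients
# (p = 0.05 is a hypothetical activity probability).
n_sources, n_coeffs = 3, 10_000
S = bernoulli_gaussian((n_sources, n_coeffs), p=0.05, sigma=1.0, rng=rng)

# Hypothetical 2x3 instantaneous mixing matrix: one column per source,
# two rows for the two microphone observations.
A = np.array([[1.0, 0.8, 0.3],
              [0.2, 0.9, 1.0]])
X = A @ S  # microphone signals, shape (2, n_coeffs)

# With a small activity probability, most time-frequency slots have at
# most one active source -- the sparseness that separation can exploit.
active_counts = (S != 0).sum(axis=0)
frac_at_most_one = np.mean(active_counts <= 1)
```

The point of the sketch is the last quantity: under this model the sources rarely overlap in the time-frequency plane, which is what makes separating more sources than microphones tractable.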