# Using Brownian Motion Processes to Compute the Electroneutral Limit of Electrodiffusion

Wednesday, March 14, 2018 - 4:00pm - 4:25pm

Lind 305

Adam Stinchcombe (University of Toronto)

The Poisson-Nernst-Planck (PNP) equations can be used to describe cellular electrical activity. However, when the space-charge (Debye) layer is thin relative to the domain, these equations become numerically intractable, and it is therefore useful to assume that the ionic solution is everywhere electrically neutral. The much more manageable electroneutral model results from a boundary layer analysis of the PNP equations. The electroneutral model consists of a drift-diffusion equation for each ionic concentration and boundary conditions on the cell membranes that relate the ion flux to transmembrane currents and capacitive currents. An elliptic equation and a jump condition on the cell membranes determine the electrostatic potential.
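In standard notation, the bulk equations of such an electroneutral model can be sketched as follows (this is a generic formulation; the talk's exact equations, boundary conditions, and jump conditions may differ):

```latex
% Drift-diffusion for each ionic concentration c_i with valence z_i,
% diffusivity D_i, and electrostatic potential \phi:
\frac{\partial c_i}{\partial t}
  = \nabla \cdot \Big( D_i \nabla c_i
      + \frac{D_i z_i e}{k_B T}\, c_i \nabla \phi \Big),
\qquad i = 1, \dots, n.

% Electroneutrality replaces Poisson's equation as the constraint
% determining \phi:
\sum_{i=1}^{n} z_i c_i = 0.
```

Differentiating the electroneutrality constraint in time and substituting the drift-diffusion equations yields the elliptic equation for the potential mentioned in the abstract.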

In this talk, I detail a numerical method to simulate the electroneutral model for intricate cell membrane geometries. A backward differentiation formula for the drift-diffusion equations results in elliptic equations in space that are solved using Brownian motion processes and the Feynman-Kac formula. This approach is easily parallelized, works well in three dimensions, and can handle complicated boundaries. In a naive formulation, solution values are computed at each point in space independently, which is terribly inefficient. This is overcome with function approximation and temporal difference learning borrowed from the reinforcement learning literature.
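To illustrate the Feynman-Kac idea in its simplest setting, the sketch below estimates the solution of Laplace's equation on a disk at a single point by averaging the boundary data over the exit points of simulated Brownian motions: u(x) = E[g(B_tau)], where tau is the first exit time. This is a minimal sketch, not the talk's method; the function names, the Euler-Maruyama time step, and the test problem (boundary data g(x, y) = x, whose harmonic extension is u(x, y) = x) are all assumptions chosen for illustration.

```python
import numpy as np

def feynman_kac_laplace(x0, g, radius=1.0, dt=1e-3, n_walks=4000, seed=0):
    """Estimate u(x0) for Laplace's equation on a disk of the given radius
    with Dirichlet data u = g on the boundary, via u(x0) = E[g(B_tau)].

    Illustrative sketch only: uses a fixed-step Euler-Maruyama simulation
    of Brownian motion and projects onto the boundary at first crossing.
    """
    rng = np.random.default_rng(seed)
    pos = np.tile(np.asarray(x0, dtype=float), (n_walks, 1))
    exit_pos = np.zeros_like(pos)
    active = np.ones(n_walks, dtype=bool)
    while active.any():
        # One Brownian increment for every still-active walker.
        step = np.sqrt(dt) * rng.standard_normal((active.sum(), 2))
        pos[active] += step
        crossed = active.copy()
        crossed[active] = np.linalg.norm(pos[active], axis=1) >= radius
        if crossed.any():
            # Project overshooting walkers back onto the circle and stop them.
            p = pos[crossed]
            exit_pos[crossed] = radius * p / np.linalg.norm(p, axis=1, keepdims=True)
            active &= ~crossed
    # Monte Carlo average of the boundary data at the exit points.
    return g(exit_pos).mean()

# Boundary data g(x, y) = x; the exact harmonic extension is u(x, y) = x,
# so the estimate at (0.5, 0) should be close to 0.5.
estimate = feynman_kac_laplace((0.5, 0.0), lambda p: p[:, 0])
```

Each walker is independent, which is why the approach parallelizes trivially and extends to three dimensions and complicated boundaries; the inefficiency noted above is that this entire simulation yields the solution at only one point, motivating the function-approximation and temporal-difference ideas.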

This work is in collaboration with Mihai Nica.
