Inferring topologies and classifying interactions of spiking neural networks using sorted local transfer entropy
Felix Goetze1,2*, Pik-Yin Lai1, C. K. Chan1,3
1Department of Physics, National Central University, Chung-Li, Taiwan
2Taiwan International Graduate Program for Molecular Science and Technology, Institute for Atomic and Molecular Sciences, Academia Sinica, Taipei, Taiwan
3Institute of Physics, Academia Sinica, Taipei, Taiwan
* presenting author: Felix Goetze
An important step in understanding how the brain processes information is to understand the networks that neurons form. While multielectrode arrays allow the spiking of thousands of neurons to be measured simultaneously, it remains challenging to identify the underlying neural network topology that gives rise to the recorded spatiotemporal patterns.
Transfer entropy is an information-theoretic measure that quantifies the directed information transfer between two time series [1]. It can be used to measure the effective connectivity of a system of neurons; however, it does not distinguish between excitatory and inhibitory interactions [2].
We propose a complementary quantity, which we call sorted local transfer entropy, that additionally identifies the interaction type.
In simulations of a cortical network model, we show that applying the two measures to the spike trains of neurons identifies the synapses and labels them correctly as excitatory or inhibitory.
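To make the underlying quantity concrete, the following is a minimal plug-in estimate of transfer entropy (in bits) between two binary spike trains, using Schreiber's definition [1] with history length one. This is an illustrative sketch only; the function name and the history-length choice are our own, and it implements plain transfer entropy, not the sorted local variant proposed above.

```python
import numpy as np

def transfer_entropy(source, target):
    """Plug-in transfer entropy (bits) from a binary source spike train
    to a binary target spike train, with history length k = 1.
    Illustrative sketch of Schreiber's definition; not the sorted
    local variant."""
    x_next = target[1:]   # x_{t+1}
    x_past = target[:-1]  # x_t
    y_past = source[:-1]  # y_t
    n = len(x_next)
    # Joint histogram p(x_{t+1}, x_t, y_t) over the 2x2x2 binary state space.
    p_xyz = np.zeros((2, 2, 2))
    for a, b, c in zip(x_next, x_past, y_past):
        p_xyz[a, b, c] += 1
    p_xyz /= n
    p_yz = p_xyz.sum(axis=0)       # p(x_t, y_t)
    p_xz = p_xyz.sum(axis=2)       # p(x_{t+1}, x_t)
    p_z = p_xyz.sum(axis=(0, 2))   # p(x_t)
    te = 0.0
    for a in range(2):
        for b in range(2):
            for c in range(2):
                if p_xyz[a, b, c] > 0:
                    # p(x_{t+1}|x_t, y_t) / p(x_{t+1}|x_t), rewritten
                    # as a ratio of joint probabilities.
                    te += p_xyz[a, b, c] * np.log2(
                        p_xyz[a, b, c] * p_z[b] / (p_yz[b, c] * p_xz[a, b]))
    return te

# Synthetic check: the target copies the source with a one-step lag,
# so information flows only from source to target.
rng = np.random.default_rng(0)
src = rng.integers(0, 2, 10000)
tgt = np.roll(src, 1)
te_fwd = transfer_entropy(src, tgt)  # close to 1 bit
te_rev = transfer_entropy(tgt, src)  # close to 0 bits
```

With an independent and identically distributed source, the forward direction recovers roughly one bit per time step, while the reverse direction stays near zero, reflecting the measure's directionality.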

1. Schreiber T: Measuring information transfer. Physical Review Letters 2000
2. Ito S, Hansen ME, Heiland R, Lumsdaine A, Litke AM, Beggs JM, Zochowski M: Extending Transfer Entropy Improves Identification of Effective Connectivity in a Spiking Cortical Network Model. PLoS ONE 2011 

Keywords: transfer entropy, neural networks