Complex systems often contain feedback loops that can be described as cyclic causal models. Intervening in such systems may lead to counter-intuitive effects, which cannot be inferred directly from the graph structure. After establishing a framework for differentiable interventions based on Lie groups, we take advantage of modern automatic differentiation techniques and their application to implicit functions in order to optimize interventions in cyclic causal models. We illustrate the use of this framework by investigating scenarios of transition to sustainable economies.
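The core computational idea can be sketched in a few lines. Below is a minimal illustration, assuming a linear cyclic model whose equilibrium solves x = Ax + b: by the implicit function theorem, the sensitivity of the fixed point to a (shift) intervention on b is (I − A)⁻¹, so the intervention can be optimized by gradient descent without unrolling the feedback dynamics. All names and the quadratic target loss are illustrative, not the paper's notation.

```python
import numpy as np

# Hypothetical linear cyclic SCM: the equilibrium x solves x = A x + b,
# where A encodes feedback loops (spectral radius < 1, so a fixed point exists).
A = np.array([[0.0, 0.5],
              [0.3, 0.0]])
b = np.array([1.0, 0.0])            # exogenous input; we intervene on b

def equilibrium(b):
    # Closed form for the fixed point of x = A x + b.
    return np.linalg.solve(np.eye(2) - A, b)

def grad_b(loss_grad_x):
    # Implicit function theorem: dx*/db = (I - A)^{-1}, so the gradient of a
    # loss L(x*) w.r.t. b is (I - A)^{-T} dL/dx* -- no unrolled iteration.
    return np.linalg.solve((np.eye(2) - A).T, loss_grad_x)

# Gradient descent on the intervention b, steering x* toward a target state
# under the loss L = 0.5 * ||x* - target||^2.
target = np.array([2.0, 1.0])
for _ in range(200):
    x = equilibrium(b)
    b -= 0.1 * grad_b(x - target)
```

In higher dimensions the same pattern applies with the equilibrium computed by a fixed-point solver and the linear solve handled by an automatic-differentiation framework's implicit-function machinery.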

Independent component analysis provides a principled framework for unsupervised representation learning, with solid theory on the identifiability of the latent code that generated the data, given only observations of mixtures thereof. Unfortunately, when the mixing is nonlinear, the model is provably nonidentifiable, since statistical independence alone does not sufficiently constrain the problem. Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process. We investigate an alternative path and consider instead including assumptions reflecting the principle of independent causal mechanisms exploited in the field of causality. Specifically, our approach is motivated by thinking of each source as independently influencing the mixing process. This gives rise to a framework which we term independent mechanism analysis. We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation.
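One way to make the "each source independently influences the mixing" idea concrete is through the columns of the mixing Jacobian J. The sketch below, with illustrative names, uses the contrast c(J) = Σᵢ log‖J[:, i]‖ − log|det J|, which is nonnegative by Hadamard's inequality and vanishes exactly when the columns of J are orthogonal, i.e. when the influences of the sources are "independent" in this geometric sense.

```python
import numpy as np

def ima_contrast(J):
    # c(J) = sum_i log ||J[:, i]|| - log |det J|.
    # Nonnegative (Hadamard's inequality); zero iff columns are orthogonal.
    col_norms = np.linalg.norm(J, axis=0)
    return np.sum(np.log(col_norms)) - np.log(np.abs(np.linalg.det(J)))

J_orth = np.array([[3.0, 0.0],      # orthogonal columns: contrast = 0
                   [0.0, 2.0]])
J_skew = np.array([[1.0, 0.9],      # nearly collinear columns: contrast > 0
                   [0.0, 1.0]])
```

Evaluated on the local Jacobians of a candidate unmixing, a contrast of this form penalizes the "rotated" spurious solutions that plague nonlinear blind source separation, while leaving genuinely column-orthogonal mixings untouched.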

Self-supervised representation learning has shown remarkable success in a number of domains. A common practice is to perform data augmentation via hand-crafted transformations intended to leave the semantics of the data invariant. We seek to understand the empirical success of this approach from a theoretical perspective. We formulate the augmentation process as a latent variable model by postulating a partition of the latent representation into a content component, which is assumed invariant to augmentation, and a style component, which is allowed to change. Unlike prior work on disentanglement and independent component analysis, we allow for both nontrivial statistical and causal dependencies in the latent space. We study the identifiability of the latent representation based on pairs of views of the observations and prove sufficient conditions that allow us to identify the invariant content partition up to an invertible mapping in both generative and discriminative settings. We find that numerical simulations with dependent latent variables are consistent with our theory. Lastly, we introduce Causal3DIdent, a dataset of high-dimensional, visually complex images with rich causal dependencies, which we use to study the effect of data augmentations performed in practice.
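The generative side of this latent variable model is easy to simulate. In the toy sketch below (all names and the tanh mixing are illustrative assumptions, not the paper's construction), the latent splits into content c and style s; an augmentation keeps c fixed and resamples s, producing a pair of views that share content but differ in style — exactly the data the identifiability results reason about.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # fixed random mixing weights

def mix(c, s):
    # Smooth nonlinear mixing of content and style into an observation.
    return np.tanh(W @ np.concatenate([c, s]))

c = rng.normal(size=2)               # content: shared across both views
s1 = rng.normal(size=2)              # style of view 1
s2 = rng.normal(size=2)              # style of view 2 (resampled by augmentation)

x1, x2 = mix(c, s1), mix(c, s2)      # an augmented pair of observations
```

The theory then says that, under suitable conditions, an encoder trained on such pairs can recover the content block c up to an invertible mapping, even though c and s may be statistically or causally dependent.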

The hippocampus has a major role in encoding and consolidating long-term memories, and undergoes plastic changes during sleep [1]. These changes require precise homeostatic control by subcortical neuromodulatory structures [2]. The underlying mechanisms of this phenomenon, however, remain unknown. Here, using multi-structure recordings in macaque monkeys, we show that the brainstem transiently modulates hippocampal network events through phasic pontine waves known as ponto-geniculo-occipital (PGO) waves. Two physiologically distinct types of PGO wave appear to occur sequentially, selectively influencing high-frequency ripples and low-frequency theta events, respectively. The two types of PGO wave are associated with opposite patterns of hippocampal spike-field coupling, driving periods of high synchrony across neural populations during ripple and theta events. The coupling between PGO waves and ripples, which are classically associated with distinct sleep stages, supports the notion that a global coordination of hippocampal sleep dynamics by cholinergic pontine transients may promote both systems-level and synaptic memory consolidation as well as synaptic homeostasis.

Generative models can be trained to emulate complex empirical data, but are they useful to make predictions in the context of previously unobserved environments? An intuitive idea to promote such **extrapolation** capabilities is to have the architecture of such a model reflect the causal graph of the true data-generating process, so that one can intervene on each node of this graph independently of the others. However, the nodes of this graph are usually unobserved, leading to a lack of identifiability of the causal structure. We develop a theoretical framework to address this challenging situation by defining a weaker form of identifiability, based on the principle of **independence of mechanisms**.

The postulate of independence of cause and mechanism (ICM) has recently led to several new causal discovery algorithms. The interpretation of independence and the way it is utilized, however, varies across these methods. Our aim in this paper is to propose a group theoretic framework for ICM to unify and generalize these approaches. In our setting, the cause-mechanism relationship is assessed by perturbing it with random group transformations. We show that the group theoretic view encompasses previous ICM approaches and provides a very general tool to study the structure of data generating mechanisms with direct applications to machine learning.
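A minimal sketch of this perturbation idea, under illustrative assumptions (rotation group acting on the cause covariance; all names hypothetical): compare the mechanism's output power on the actual cause against its average over randomly group-transformed causes. A generic (independent) cause-mechanism pair scores near 1; a pair "tuned" to each other deviates.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

def random_rotation():
    # Haar-distributed orthogonal matrix via QR of a Gaussian matrix.
    q, r = np.linalg.qr(rng.normal(size=(d, d)))
    return q * np.sign(np.diag(r))

def icm_score(A, Sigma, n_perturb=500):
    # Ratio of the mechanism's output power on the actual cause covariance
    # Sigma to its average over group-perturbed causes g Sigma g^T.
    # Values far from 1 flag a dependent ("tuned") cause-mechanism pair.
    ref = np.mean([np.trace(A @ g @ Sigma @ g.T @ A.T)
                   for g in (random_rotation() for _ in range(n_perturb))])
    return float(np.trace(A @ Sigma @ A.T) / ref)

A = rng.normal(size=(d, d))              # mechanism, drawn independently
generic = icm_score(A, np.eye(d))        # isotropic cause: score ~ 1
u = np.linalg.svd(A)[2][0]               # top right-singular direction of A
tuned = icm_score(A, np.outer(u, u))     # cause aligned with A: score > 1
```

Different choices of group and contrast recover different existing ICM-based methods, which is what the unifying framework makes precise.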

We investigate how oscillations of cortical activity in the gamma frequency range (50–80 Hz) may dynamically influence the direction and strength of information flow across different groups of neurons. We found that the spatial arrangement of gamma phases across cortical locations indicated waves propagating along the cortical tissue, and that these waves travel along the direction of maximal information flow between neural populations. Our findings suggest that the propagation of gamma oscillations may dynamically reconfigure the directional flow of cortical information during sensory processing.

Michel Besserve 2015-2022