The free-energy FUQ (Frequently Unasked Questions)

A site where free-energy and related concepts are explored using the mnemonic medium developed by Orbit.

All models are wrong; some are illuminating and/or useful

...cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules. For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".

Getting better grades either by studying harder or by taking easier classes. Optimizing free energy is like optimizing for grades: free energy is a statistical quantity that the brain would optimize under the idea that it uses an internal generative model to predict incoming sensory data.

Anything that can sense and act on its environment using a generative model would do. For example, the brain could optimize probabilistic beliefs about the variables in the generative model (i.e. perceptual inference). Alternatively, by acting on the world, it could change the sensory data so that they become more consistent with the model. This implies a common objective function (variational free energy) for action and perception that scores the fit between an internal model and the world. Active inference: outcome orientation with a generative model. The graphic is from Active inference: demystified and compared, Neural Computation, 2021.
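To make the common-objective idea concrete, here is a minimal sketch (not from the site; all parameter values are made up) of a one-dimensional Gaussian agent in which perception and action descend the very same free energy, in the style of standard tutorial toy models:

```python
# Toy model: one scalar sensory channel s, one hidden cause mu,
# Gaussian generative model. Both perception (updating the belief mu)
# and action (changing the sensory sample s) descend the SAME free energy.
# All numbers here are invented for the demo.

PRIOR_MEAN, PRIOR_VAR = 0.0, 1.0   # prior over the cause: N(0, 1)
SENSE_VAR = 0.5                    # likelihood: s ~ N(cause, 0.5)

def free_energy(mu, s):
    """Variational free energy up to additive constants:
    sensory prediction error plus deviation from the prior."""
    return (s - mu) ** 2 / (2 * SENSE_VAR) + (mu - PRIOR_MEAN) ** 2 / (2 * PRIOR_VAR)

def perceive(mu, s, lr=0.1):
    """Perceptual inference: nudge the belief mu down the F gradient."""
    dF_dmu = -(s - mu) / SENSE_VAR + (mu - PRIOR_MEAN) / PRIOR_VAR
    return mu - lr * dF_dmu

def act(mu, s, lr=0.1):
    """Action: change the sensory data s toward the prediction mu."""
    dF_ds = (s - mu) / SENSE_VAR
    return s - lr * dF_ds

mu, s = 0.0, 2.0                   # initial belief and an initial observation
f0 = free_energy(mu, s)
for _ in range(50):                # perception only
    mu = perceive(mu, s)
f_perceive = free_energy(mu, s)
for _ in range(50):                # then action only
    s = act(mu, s)
f_act = free_energy(mu, s)
# Free energy falls at each stage: f0 > f_perceive > f_act
```

Perception pulls the belief toward a compromise between prior and data; action then pulls the data toward the prediction, so the same quantity decreases in both phases.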

There is a conceptual relationship but not a physical one. The first formulation in the supplementary material below expresses free energy as energy minus entropy. This formulation is important for three reasons.
  • it connects the concept of free energy as used in information theory with concepts used in statistical thermodynamics.
  • it shows that the free energy can be evaluated by an agent because the energy is the surprise about the joint occurrence of sensations and their perceived causes, whereas the entropy is simply that of the agent’s own recognition density.
  • it shows that free energy rests on a generative model of the world, which is expressed in terms of the probability of a sensation and its causes occurring together. This means that an agent must have an implicit generative model of how causes conspire to produce sensory data. It is this model that defines both the nature of the agent and the quality of the free-energy bound on surprise.
Supplementary math from the 2010 Nature Reviews Neuroscience article
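In symbols (following the notation of the 2010 review, with $q(\vartheta)$ the recognition density over causes $\vartheta$, $\tilde{s}$ the sensory data, and $m$ the agent's model), the energy-minus-entropy formulation reads:

```latex
F \;=\;
\underbrace{\mathbb{E}_{q(\vartheta)}\!\left[-\ln p(\tilde{s},\vartheta \mid m)\right]}_{\text{energy}}
\;-\;
\underbrace{H\!\left[q(\vartheta)\right]}_{\text{entropy}}
```

Both terms are evaluable by the agent itself: the energy is the expected surprise about the joint occurrence of sensations and their perceived causes under the generative model $p(\tilde{s},\vartheta \mid m)$, and the entropy is that of the agent's own recognition density $q$.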

Yes. In fact, most of the examples shared by Karl Friston in his presentations are simulations, and hence non-living.

It is a way of looking at and measuring things rather than a predictive model. Then again, Information Theory, Natural Selection, and String Theory may not qualify as theories in this sense either.

To make sense of a large body of experimental data and of the many different ways of looking at brain behavior, structure, and function.

We actually don't have a good answer for this one.

Active Inference is the corollary of the FEP that does seem to have a good number of applications, notably to Markov decision processes. See Generalised free energy and active inference, Biological Cybernetics, 2019.
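As a sketch of how active inference scores policies in a discrete Markov decision process, here is the standard risk-plus-ambiguity decomposition of expected free energy (all numbers below are hypothetical and not taken from the cited papers):

```python
from math import log

# Expected free energy of a one-step policy pi (discrete case):
#   G(pi) = KL[ q(o|pi) || p(o) ]           (risk: predicted vs preferred outcomes)
#         + E_{q(s|pi)} H[ p(o|s) ]         (ambiguity: expected observation entropy)
# All distributions below are invented for illustration.

def kl(p, q):
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    return -sum(pi * log(pi) for pi in p if pi > 0)

preferred = [0.9, 0.1]             # p(o): the agent "prefers" outcome 0

def efe(q_states, likelihood):
    """G(pi) = risk + ambiguity for a one-step policy with predicted
    hidden-state distribution q_states and likelihood p(o|s)."""
    # Predicted outcomes: q(o|pi) = sum_s q(s|pi) p(o|s)
    q_out = [sum(qs * likelihood[s][o] for s, qs in enumerate(q_states))
             for o in range(2)]
    risk = kl(q_out, preferred)
    ambiguity = sum(qs * entropy(likelihood[s]) for s, qs in enumerate(q_states))
    return risk + ambiguity

A = [[0.9, 0.1], [0.2, 0.8]]       # p(o|s) for two hidden states
g_good = efe([0.9, 0.1], A)        # policy expected to reach state 0
g_bad = efe([0.1, 0.9], A)         # policy expected to reach state 1
# The policy whose predicted outcomes match the preferences has lower G
```

Policies are then chosen (softmax-weighted) in proportion to how low their expected free energy is, which is how outcome orientation enters without a separate reward signal.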

Better accounting for language and other people.

Perhaps in the wild, where rewards change or are hard to model. For now, toy tests with changing environments are promising: an active-inference toy OpenAI Gym problem, and benchmarks of active inference versus other RL methods. See the Git repo for the paper Active inference: demystified and compared, Neural Computation, 2021.