Thermostats in Molecular Dynamics
Molecular Dynamics (MD) simulations are widely used to study the dynamics and properties of biomolecules. However, one of the main challenges of MD is to carry out simulations at different conditions or in different ensembles.
Maintaining the temperature of the system at a constant value throughout the simulation (canonical ensemble) requires the use of sophisticated algorithms called thermostats.
Understanding how thermostats work and how to use them effectively is crucial for running accurate and meaningful MD simulations. Here I provide a more rigorous mathematical description of how this condition can be achieved. Furthermore, I highlight the most popular thermostat algorithms you may encounter, including some that you can select in a GROMACS simulation by setting the proper parameters in the mdp file.
Velocity rescaling
Let’s start from the basics. In statistical mechanics, the temperature $T$ of an unconstrained system is related to the time average of the kinetic energy ($K$). For a system of $N$ atoms we have that:
$$ \langle K \rangle = \frac{3}{2}Nk_BT $$
where $k_B$ is the Boltzmann constant. To keep the temperature of the system constant, we need to make sure that the average kinetic energy is fixed, while the instantaneous $K$ is allowed to fluctuate. Keeping in mind that the kinetic energy can also be expressed as:
$$ K = \frac{1}{2}\sum_i m_iv_i^2 $$
one possible way to control the temperature of the system is to scale the velocities. In other words, if we want to implement a mathematical thermostat, we need to make sure that the velocities produce a constant average kinetic energy $\langle K \rangle$ over a certain period of time.
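To make this concrete, here is a minimal NumPy sketch of how the instantaneous temperature follows from these two relations. The function name, the SI units, and the array shapes are illustrative choices, not part of any particular MD package:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K

def instantaneous_temperature(masses, velocities):
    """Instantaneous temperature T(t) = 2K / (3 N k_B).

    masses:     shape (N,) array of atomic masses in kg
    velocities: shape (N, 3) array of velocities in m/s
    """
    kinetic = 0.5 * np.sum(masses * np.sum(velocities**2, axis=1))  # K = 1/2 sum m_i v_i^2
    return 2.0 * kinetic / (3.0 * len(masses) * K_B)
```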
In practice, this is achieved by multiplying the velocities by a scaling factor ($\lambda$):
$$ v_i^{new} = v_i^{old} \cdot \lambda $$
At each time step, if the computer realizes that the atoms are moving too fast, it scales the velocities by multiplying all of them by $\lambda<1$. On the contrary, if the atoms are moving too slowly, it multiplies the velocities by a number bigger than one ($\lambda>1$). The explicit expression for the scaling factor depends on the desired temperature ($T_{des}$) and on the actual temperature of the system at that specific time step, $T(t)$:
$$ \lambda = \sqrt{\frac{T_{des}}{T(t)}} $$
When the desired temperature is bigger than the actual one, the scaling factor is bigger than one ($\lambda >1$). As a result, the old velocities are scaled up and the atoms move faster, increasing the temperature so that it approaches $T_{des}$. When the desired temperature is lower than the actual temperature we have the opposite situation: the atoms are moving too fast and the scaling factor, which is lower than one ($\lambda<1$), scales the velocities down, decreasing the temperature. It is worth remembering once again that this operation is performed at each time step of our simulation.
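A minimal sketch of one such rescaling step, under the same illustrative assumptions as before (SI units, NumPy arrays), might look like this:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K

def rescale_velocities(masses, velocities, t_desired):
    """Plain velocity rescaling: multiply every velocity by
    lambda = sqrt(T_des / T(t)) so that the instantaneous temperature
    matches the target exactly after the call."""
    kinetic = 0.5 * np.sum(masses * np.sum(velocities**2, axis=1))
    t_current = 2.0 * kinetic / (3.0 * len(masses) * K_B)
    lam = np.sqrt(t_desired / t_current)
    return velocities * lam
```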
This is the general idea behind the velocity rescaling method, the most basic thermostat algorithm. Its main drawback is that forcing the kinetic energy to match the target at every step suppresses the natural temperature fluctuations, so the simulation does not sample a true canonical ensemble.
Berendsen thermostat
For the reason we just highlighted, more sophisticated thermostat models have been developed. Some of them are based on the same principle we previously saw, but they use more elaborate equations for the scaling factor. One example is the Berendsen thermostat, which aims to reproduce a system coupled to an external bath that is fixed at the desired temperature.
We can imagine that the external bath acts as a source of thermal energy, supplying heat to or removing heat from the system depending on the temperature we need. That being the case, we can suppose that the change in temperature will be proportional to the difference in temperature between the bath ($T_{bath}$) and the system:
$$ \frac{\partial T(t)}{\partial t} = \frac{1}{\tau} \left(T_{bath}- T(t)\right) $$
where $\tau$ is a parameter determining the strength of the coupling between the bath and the system.
The equation for the scaling factor in the Berendsen thermostat is the following:
$$ \lambda = \sqrt{1+\frac{\delta t}{\tau} \left( \frac{T_{bath}}{T(t)}-1\right)} $$
where $\delta t$ is the time step.
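As a sketch, one Berendsen rescaling step could be written as follows, again with illustrative function names and units rather than actual GROMACS code:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K

def berendsen_rescale(masses, velocities, t_bath, dt, tau):
    """One Berendsen step: lambda = sqrt(1 + (dt/tau) * (T_bath/T(t) - 1)).
    dt and tau must be expressed in the same time units."""
    kinetic = 0.5 * np.sum(masses * np.sum(velocities**2, axis=1))
    t_current = 2.0 * kinetic / (3.0 * len(masses) * K_B)
    lam = np.sqrt(1.0 + (dt / tau) * (t_bath / t_current - 1.0))
    return velocities * lam
```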
The main advantage of this method is that we can choose the value of the coupling according to our needs. For large $\tau$ values we have $\lambda \approx 1$ and therefore the old velocities are essentially not modified by the scaling factor; in other words, there is no coupling between the heat bath and the system. For smaller $\tau$ values we have the opposite situation: a strong coupling, a fast equilibration, and unrealistically small temperature fluctuations.
The value of $\tau$ needs to be selected with great care. For a typical time step of $1\,fs$ the default value is generally $\tau = 1\,ps$. With this choice, deviations of the system temperature from the bath temperature relax on a timescale of roughly $1\,ps$.
The Berendsen algorithm does not guarantee correct sampling of the canonical ensemble. For this reason, it should be avoided for production runs. However, it is extremely useful when we want to relax the system right after the energy minimization step of our MD simulation. This is mainly due to its ability to reach the desired temperature quickly and smoothly, even for systems that start far from equilibrium.
If you are not familiar with these terms, check out my article on the different steps involved in an MD simulation.
Andersen thermostat
Another possible approach was proposed by Andersen, who implemented a stochastic collision method. Generally speaking, a stochastic process is a random process. The Andersen thermostat is based on the simulation of random collisions acting on the particles of the system, ultimately altering its temperature. We can envision this as a heat bath in contact with our system, emitting random particles that collide with the atoms in our system. The strength of the coupling between the heat bath and the system is specified by the collision frequency ($\nu$).
From a practical point of view, the collisions are simulated by randomly selecting atoms and reassigning their velocities, which are drawn from a Maxwell-Boltzmann distribution at the desired temperature $T$.
The time interval ($t$) between two velocity reassignments of the same atom is given by a Poisson distribution:
$$ P(t) = \nu e^{-\nu t} $$
The collision frequency has to be chosen with particular care. Low $\nu$ values give poor temperature control, and the generation of a canonical ensemble is not guaranteed. On the contrary, if the collision frequency is too large the dynamics of the system is heavily perturbed.
In summary, the Andersen algorithm can be divided into three steps:
- Starting from an initial configuration we integrate the equations of motion
- Some atoms are randomly selected to undergo a collision with particles from the heat bath
- The “lucky” particles are assigned new velocities drawn from a Maxwell-Boltzmann distribution. The remaining atoms are not affected.
The algorithm is reiterated for as many time steps as we desire.
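A possible sketch of a single collision step is given below. Here each atom is selected with probability $\nu\,\delta t$ during a time step, which reproduces the Poisson statistics above when $\nu\,\delta t$ is small; function names and units are illustrative:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K

def andersen_collisions(masses, velocities, t_bath, nu, dt, rng=None):
    """One Andersen collision step: atoms selected with probability nu*dt
    get new velocities drawn from the Maxwell-Boltzmann distribution at
    T_bath; all other atoms keep their current velocities."""
    rng = np.random.default_rng() if rng is None else rng
    collide = rng.random(len(masses)) < nu * dt
    # Maxwell-Boltzmann: each velocity component is Gaussian with
    # standard deviation sqrt(k_B T / m_i)
    sigma = np.sqrt(K_B * t_bath / masses[collide])
    new_velocities = velocities.copy()
    new_velocities[collide] = rng.normal(size=(collide.sum(), 3)) * sigma[:, None]
    return new_velocities
```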
Nosé-Hoover thermostat
The Nosé-Hoover thermostat belongs to a different class of methods known as extended-system methods. It was introduced by Nosé and further developed by Hoover.
The main difference from the previous approaches is that, in this case, the thermal bath is directly incorporated into the system by adding an additional degree of freedom $s$ to the Hamiltonian. This heat reservoir has a given mass $Q$ as well as its own potential and kinetic energy.
The essential idea is that $Q$ represents the coupling between the system and the bath: a high $Q$ corresponds to weak coupling and vice versa. It can be tuned according to our needs to slow down or accelerate the particles until we reach the desired temperature.
As always, care must be taken in the choice of the parameter $Q$. High values lead to negligible heat transfer, while small values may cause high-frequency heat exchange and unwanted temperature oscillations. The energy of the physical system fluctuates as heat is exchanged with the reservoir. In practice, coupling several thermostats to specific parts of the system may facilitate an even repartition of heat throughout the system.
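For illustration only, a crude explicit-Euler sketch of the velocity and friction updates in Hoover's formulation is shown below; production codes use time-reversible integrators, and the variable names, units, and the choice of $3N$ degrees of freedom are assumptions of this sketch:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K

def nose_hoover_step(masses, velocities, forces, xi, q_mass, t_bath, dt):
    """One crude explicit-Euler update of the Hoover equations:
        dv/dt  = F/m - xi * v
        dxi/dt = (2K - N_f k_B T_bath) / Q
    xi is the friction variable associated with the extra degree of
    freedom and q_mass is its fictitious 'mass' Q."""
    n_f = 3 * len(masses)  # degrees of freedom, ignoring constraints
    kinetic = 0.5 * np.sum(masses * np.sum(velocities**2, axis=1))
    xi_new = xi + dt * (2.0 * kinetic - n_f * K_B * t_bath) / q_mass
    new_velocities = velocities + dt * (forces / masses[:, None] - xi * velocities)
    return new_velocities, xi_new
```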
Bussi-Donadio-Parrinello thermostat
The Bussi-Donadio-Parrinello thermostat, also known as stochastic velocity rescaling, is an extension of the Berendsen thermostat that properly samples the canonical ensemble.
The general idea is the same as in the previous thermostats, but with one key difference: instead of forcing the kinetic energy to be exactly equal to the desired value, the velocities are rescaled to a kinetic energy that is stochastically chosen from the canonical equilibrium distribution of the kinetic energy.
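In its simplest, instantaneous form this can be sketched as below: the target kinetic energy is drawn from its canonical distribution, which for $N_f$ degrees of freedom is a gamma distribution with shape $N_f/2$ and scale $k_BT$. The full algorithm also propagates the kinetic energy smoothly with a relaxation time $\tau$, which this sketch omits:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K

def stochastic_rescale(masses, velocities, t_bath, rng=None):
    """Instantaneous stochastic velocity rescaling: draw the target
    kinetic energy from its canonical equilibrium distribution and
    rescale all velocities to match it."""
    rng = np.random.default_rng() if rng is None else rng
    n_f = 3 * len(masses)
    kinetic = 0.5 * np.sum(masses * np.sum(velocities**2, axis=1))
    # Gamma distribution with shape N_f/2 and scale k_B * T_bath
    k_target = rng.gamma(n_f / 2.0, K_B * t_bath)
    return velocities * np.sqrt(k_target / kinetic)
```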
V-rescale
In GROMACS this thermostat corresponds to the V-rescale option: it is selected by setting tcoupl = V-rescale in the mdp file, together with the reference temperature (ref_t) and the coupling time constant (tau_t) for each temperature-coupling group (tc-grps).