Simulation Information Gathering
Once you have run your simulations, you need to extract the correct information in a meaningful way. Unfortunately, it is not as easy as simply running all the data through a master analysis program; you must first choose how you will analyze the data (e.g., using any of the analysis methods shown in the theory section), then ensure that your data is both independent and uncorrelated, as discussed previously.
Extracting the Correct Information
For this step, we must ensure that the data we extract from our simulation is meaningful to the analysis technique we will run. Recall that each method needs different information (a sketch of how this data might be organized follows the list):
- TI requires [math]\displaystyle{ \frac{dU}{d\lambda} }[/math] at each [math]\displaystyle{ \vec{q} }[/math] point.
- EXP needs either [math]\displaystyle{ \Delta U_{k,k+1}(\vec{q}) }[/math] or [math]\displaystyle{ \Delta U_{k,k-1}(\vec{q}) }[/math] depending on which direction EXP is being operated in.
- BAR must have both [math]\displaystyle{ \Delta U_{k,k+1}(\vec{q}) }[/math] and [math]\displaystyle{ \Delta U_{k,k-1}(\vec{q}) }[/math] between all pairs of states.
- WHAM and MBAR have to have the complete set of [math]\displaystyle{ \Delta U_{k,j}(\vec{q}) }[/math] with [math]\displaystyle{ j=1\ldots K }[/math] along the entire transformation path; WHAM must have this information binned.
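To make the bookkeeping concrete, the sketch below shows one possible way to lay these quantities out with NumPy; the array names and the values of K (number of states) and N (snapshots per state) are illustrative assumptions, not a required format.

<syntaxhighlight lang="python">
import numpy as np

K = 11    # number of states along the alchemical path (assumed)
N = 500   # number of stored snapshots per state (assumed)

# TI: one derivative dU/dlambda per snapshot, at every state k.
dudl = np.zeros((K, N))

# EXP and BAR: energy differences to the neighboring states only.
du_forward = np.zeros((K - 1, N))   # Delta U_{k,k+1} evaluated on snapshots of state k
du_reverse = np.zeros((K - 1, N))   # Delta U_{k,k-1} evaluated on snapshots of state k (k = 1..K-1)

# MBAR and WHAM: the full set Delta U_{k,j} for every snapshot of every state;
# WHAM additionally needs these differences binned into histograms.
du_full = np.zeros((K, K, N))       # du_full[k, j, n] = U_j(q_n) - U_k(q_n), q_n sampled at state k
</syntaxhighlight>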
The potential derivative required for TI must generally be calculated during the simulation; it cannot be post-processed by a code that does not evaluate the derivatives. The potential energy differences required for EXP, BAR, MBAR, and WHAM can be calculated either during the simulation or in post-processing. We recommend calculating the potential differences in code when possible to avoid the extra overhead and possible errors introduced by running the simulation twice, and to reduce the amount of stored information. Although TI must usually be calculated in code, as it requires the derivative, there is one condition under which it actually has the fastest computation time. If the alchemical path you have chosen is a linear alchemical path, then you get
- [math]\displaystyle{ \frac{dU}{d\lambda} =U_0(\vec{q}) - U_1(\vec{q}) }[/math]
which can be calculated with only the endpoint energies. However, because of the problems with linear paths, this simplification is rarely that useful.
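Because the linear-path derivative above is just a difference of the endpoint energies, this is one case where TI data can be recovered in post-processing. A minimal sketch, assuming hypothetical arrays u0 and u1 holding [math]\displaystyle{ U_0(\vec{q}) }[/math] and [math]\displaystyle{ U_1(\vec{q}) }[/math] evaluated on the same stored snapshots:

<syntaxhighlight lang="python">
import numpy as np

def linear_path_dudl(u0, u1):
    """dU/dlambda for the linear path U(lambda) = lambda*U_0 + (1 - lambda)*U_1.

    For this path the derivative is independent of lambda, so one value per
    snapshot suffices; u0 and u1 are the stored endpoint potential energies.
    """
    return np.asarray(u0) - np.asarray(u1)
</syntaxhighlight>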
Correlation
Once we have extracted the information, we need to make sure that the data we process to extract free energies is independent. As mentioned in the discussion of running simulations (Simulating_States_of_Interest#Independent_and_Uncorrelated_Samples), samples close together in time are correlated with each other in all but the simplest systems, and we must have uncorrelated samples for our data to be meaningful.
The autocorrelation time is a measure of the time between effectively uncorrelated samples, and a number of approaches exist for calculating it. The most common starting point is to compute the normalized autocorrelation function of an observable X over the duration of the whole simulation, [math]\displaystyle{ \mathcal{T} }[/math]. We first define
- [math]\displaystyle{ \delta X(t) = X(t) - \mathcal{T}^{-1}\int_{t'=0}^\mathcal{T} X(t')\, dt' }[/math]
which defines [math]\displaystyle{ \delta X }[/math] as the instantaneous value of X less its average over the simulation. We now compute the quantity:
- [math]\displaystyle{ C_X(\Delta t) = \frac{\int_{\tau=0}^{\mathcal{T}} \delta X(\tau) \delta X(\tau+\Delta t)\, d\tau}{\int_{\tau=0}^{\mathcal{T}} \delta X(\tau)^2\, d\tau} }[/math]
where [math]\displaystyle{ \tau }[/math] is the integration variable and [math]\displaystyle{ \Delta t }[/math] is the lag time. If [math]\displaystyle{ C_X(\Delta t)=0 }[/math] both at and beyond a given [math]\displaystyle{ \Delta t }[/math], then two samples separated by [math]\displaystyle{ \Delta t }[/math] are uncorrelated and can be treated as independent.
Given that we usually have N samples taken [math]\displaystyle{ \delta t }[/math] apart in time, [math]\displaystyle{ C_X }[/math] becomes a discrete function evaluated at sample indices [math]\displaystyle{ i }[/math], requiring us to rewrite our two equations:
- [math]\displaystyle{ \delta X(i) = X(i) - \frac{1}{N}\sum_{j=1}^N X(j) }[/math]
and
- [math]\displaystyle{ C_X(i) = \frac{\sum_{j=1}^{N-i} \delta X(j) \delta X(j+i)}{\sum_{j=1}^N \delta X(j)^2} }[/math]
where the autocorrelation time, [math]\displaystyle{ \tau }[/math], is now the integral under [math]\displaystyle{ C_X }[/math]. One must be careful when integrating this function, though, as the noise becomes rather substantial, especially at lag times beyond half the simulation length. Often the autocorrelation function can be fit to an exponential, which makes [math]\displaystyle{ \tau }[/math] the relaxation time of the exponential function. A good rule of thumb is that simulations should run a minimum of 50[math]\displaystyle{ \tau }[/math], since longer correlation times may not be detected in short simulations. Once you have [math]\displaystyle{ \tau }[/math], under standard statistical assumptions, samples can be considered independent if they are spaced by 2[math]\displaystyle{ \tau }[/math]. If you do not calculate the correlation times, your estimated statistical uncertainty will be lower than the true uncertainty. Fortunately, many simulation packages come with methods, some more sophisticated than the one presented here, to calculate the autocorrelation time.
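As a concrete illustration of the discrete formulas above, here is a minimal Python/NumPy sketch that computes the normalized autocorrelation function and estimates [math]\displaystyle{ \tau }[/math] by integrating it. Truncating the integral at the first zero crossing is one simple heuristic (an assumption made here) for handling the noisy tail; fitting an exponential, as mentioned above, is another option.

<syntaxhighlight lang="python">
import numpy as np

def normalized_autocorrelation(x):
    """Discrete normalized autocorrelation function C_X(i) of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    dx = x - x.mean()                       # delta X(i): deviation from the mean
    n = len(dx)
    norm = np.dot(dx, dx)                   # sum_j delta X(j)^2
    c = np.empty(n)
    for i in range(n):
        c[i] = np.dot(dx[:n - i], dx[i:]) / norm
    return c

def autocorrelation_time(x, dt=1.0):
    """Estimate tau as the integral under C_X, truncated at its first zero crossing.

    dt is the time between stored samples, so tau is returned in time units.
    """
    c = normalized_autocorrelation(x)
    below = np.where(c <= 0.0)[0]
    cutoff = below[0] if below.size else len(c)
    return np.trapz(c[:cutoff], dx=dt)
</syntaxhighlight>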
Applying Correlation Corrections
Once the autocorrelation time is calculated, you still must apply it to your data. If your free energy method computes single-state averages, like TI, then the average over all samples, including the correlated ones, can be used for your mean. The statistical uncertainty in that mean, however, must be inflated: the variance is multiplied by [math]\displaystyle{ 2\tau/\Delta t }[/math], or equivalently the standard error by [math]\displaystyle{ \sqrt{2\tau/\Delta t} }[/math], where [math]\displaystyle{ \Delta t }[/math] is the time difference between samples. Alternatively, or when your averages are not computed from single states, you can subsample the data by analyzing only the set of samples separated by 2[math]\displaystyle{ \tau }[/math]. Consider the following simple example with BAR, where we want the free energy difference between states 1 and 2.
- 5 ns of simulation time
- 10 ps between each snapshot (500 snapshots total)
- Assume an autocorrelation time of 20 ps for 1 → 2
- Assume an autocorrelation time of 40 ps for 2 → 1
Under these conditions, when we go to analyze [math]\displaystyle{ \Delta U_{1,2} }[/math], we need only every fourth sample, as the correlation time is 20 ps and we want samples every 2[math]\displaystyle{ \tau }[/math] = 40 ps. Similarly, we should take every eighth sample of [math]\displaystyle{ \Delta U_{2,1} }[/math], since the correlation time is 40 ps and [math]\displaystyle{ 2\tau = 80\,\mathrm{ps} }[/math]. If this is done correctly, we will not have discarded any unique information, as the samples we ignored are correlated with the samples we kept.
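A minimal sketch of this subsampling, assuming the forward and reverse energy differences are stored one value per 10 ps snapshot in hypothetical files du_12.dat and du_21.dat:

<syntaxhighlight lang="python">
import numpy as np

def subsample(values, tau, dt):
    """Keep only samples spaced by at least 2*tau, the independence spacing used above."""
    stride = max(1, int(np.ceil(2.0 * tau / dt)))
    return np.asarray(values)[::stride]

dt = 10.0                                # ps between stored snapshots
du_12 = np.loadtxt("du_12.dat")          # hypothetical file: Delta U_{1,2}, one value per snapshot
du_21 = np.loadtxt("du_21.dat")          # hypothetical file: Delta U_{2,1}, one value per snapshot

du_12_ind = subsample(du_12, tau=20.0, dt=dt)   # 2*tau = 40 ps -> every 4th sample kept
du_21_ind = subsample(du_21, tau=40.0, dt=dt)   # 2*tau = 80 ps -> every 8th sample kept
# The subsampled arrays can then be passed to a BAR implementation as
# effectively independent samples.
</syntaxhighlight>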
Truly independent samples are achieved only if all coordinates, not just the energies, are uncorrelated between samples. Although independent energies usually imply independent samples, there are situations where the energy appears independently sampled to within the noise while configuration space is not as well sampled. This may occur, for example, when a ligand has a second conformation with comparable binding affinity but rarely visits that conformer. If one were looking only at the energies, this lack of sampling would be hard to detect.