As you learn fairly early in chemistry, for some particles A and B that are able to form a complex AB, we have the equilibrium equation

$$K_d = \frac{[A][B]}{[AB]}$$

where $K_d$ is the dissociation constant, $[A]$ is the concentration of unbound A particles, $[B]$ the concentration of unbound B particles, and $[AB]$ the concentration of the AB complex.

All good. Until you try to take the limit $\lim_{[B] \to \infty} K_d$, as I did in an attempt to solve an exercise in a simpler way. Then the equality breaks down, and $K_d$ is no longer a constant. Of course, an infinite concentration is very unphysical, but I figured some strange behaviour should already be visible at less extreme concentrations. After some discussion with my supervisor, we concluded that at sufficiently high concentration B effectively becomes the solvent, and that perhaps a simple unimolecular equilibrium equation comes into effect

$$K_r = \frac{[A]}{[AB]}$$

While these equations nicely describe what happens in their linear regimes, I got very interested in the concentration range in between. So I used an overly simple statistical physics model to investigate it. I've studied chemistry, not physics, so take this with a grain of salt.
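To make the tension concrete (my own one-line rearrangement, not from the exercise): if $K_d$ really stayed constant while $[B]$ grew, the bimolecular law would force the unimolecular ratio $K_r = \frac{[A]}{[AB]}$ to vanish rather than settle at a constant,

$$K_r = \frac{[A]}{[AB]} = \frac{K_d}{[B]} \;\xrightarrow{\;[B]\to\infty\;}\; 0,$$

so the two descriptions can only coexist if $K_d$ itself varies with $[B]$ somewhere in between.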

# The model

The simplest possible model I could think of that should display such behaviour is a model on a grid, where every particle or complex takes up one lattice site. The actual topology doesn't matter, as long as all system states are reachable (e.g. no disconnected sites). Each site on the grid can be occupied by a particle A, a particle B, a complex C, or nothing at all.

Counting from the state without any complexes, I defined $A$ as the total number of sites occupied by A and $B$ as the total number of sites occupied by B. Defining $C$ as the total number of sites occupied by a complex C, $A-C$ is then the amount of free A and $B-C$ the amount of free B.
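As a sanity check on this bookkeeping (a small script of my own, with my own helper names, not part of the model code below), one can brute-force enumerate every labeling of a tiny grid and compare the count of states with exactly $C$ complexes against the closed-form multinomial:

```python
from itertools import product
from math import factorial

def count_states(N, A, B, C):
    """Brute force: label each of the N sites 'empty', 'A', 'B' or 'AB'
    and count labelings with C complexes, A-C free A and B-C free B."""
    total = 0
    for sites in product(('empty', 'A', 'B', 'AB'), repeat=N):
        if (sites.count('AB') == C
                and sites.count('A') == A - C
                and sites.count('B') == B - C):
            total += 1
    return total

def multinomial(N, A, B, C):
    """Closed form: N! / (C! (A-C)! (B-C)! (N-A-B+C)!)."""
    return factorial(N) // (factorial(C) * factorial(A - C)
                            * factorial(B - C) * factorial(N - A - B + C))

# 6 sites, 2 A's, 2 B's, 1 complex: both count 120 states
print(count_states(6, 2, 2, 1), multinomial(6, 2, 2, 1))
```

The brute force is only feasible for a handful of sites ($4^N$ labelings), but it agrees with the multinomial everywhere I tried.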

On complex formation, the total energy changes by an amount $\mu$. For simplicity I gave A and B no internal energy (so $\mu$ is really $\mu_C - \mu_A - \mu_B$, and $\mathrm{H}$ is shifted by the constant $-A\mu_A - B\mu_B$):

$$\mathrm{H} = C\mu$$

Iterating over all possible energy levels of $\mathrm{H}$, each multiplied by the number of states at that energy, we obtain the partition function:

$$Z = \sum_{C=0}^{\min(A,B)} \binom{N}{C}\binom{N-C}{A-C}\binom{N-A}{B-C}\, e^{-\beta C \mu}$$

To calculate $K_r$ and $K_d$, we need to take averages of the particle concentrations over the entire phase space (all possible configurations of particles) of the system.
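Written out, this is what the code at the end computes, with $g(C)$ (my notation) denoting the multiplicity of the states with $C$ complexes:

$$\langle f \rangle = \frac{1}{Z} \sum_{C=0}^{\min(A,B)} f(C)\, g(C)\, e^{-\beta C \mu}, \qquad K_d = \frac{\langle [A] \rangle \langle [B] \rangle}{\langle [AB] \rangle} = \frac{\langle A-C \rangle \langle B-C \rangle}{N \langle C \rangle}$$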

The equation for Z looks a bit asymmetrical at first (as if A and B are handled differently), but the binomial coefficient satisfies the identity $\binom{n}{h}\binom{n-h}{k}=\binom{n}{k}\binom{n-k}{h}$, which resolves this. Filling in the definition of the binomial coefficient, we get the nicely symmetrical equation:

$$Z = \sum_{C=0}^{\min(A,B)} \frac{N!}{C!\,(A-C)!\,(B-C)!\,(N-A-B+C)!}\, e^{-\beta C \mu}$$

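The identity is easy to verify numerically (a quick check I added, using the standard library's `math.comb`):

```python
from math import comb

# C(n, h) * C(n - h, k) == C(n, k) * C(n - k, h):
# picking h sites and then k of the remaining ones counts
# the same configurations as picking them in the opposite order.
for n in range(15):
    for h in range(n + 1):
        for k in range(n - h + 1):
            assert comb(n, h) * comb(n - h, k) == comb(n, k) * comb(n - k, h)
print("identity holds for all n < 15")
```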
Then we can use Stirling's approximation, modified a bit so it becomes computationally tractable (I think the second approximation actually makes things worse, as in larger exponents, but it needs fewer operations):

$$n! \approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}, \qquad \frac{n!}{(n-x)!} \approx \frac{n^{\,n+1/2}}{(n-x)^{\,n-x+1/2}}\, e^{-x}$$

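To get a feel for the error this introduces (a quick check of my own, not part of the model code): the relative error of $\sqrt{2\pi n}(n/e)^n$ shrinks roughly like $1/(12n)$, so it is already below 2% at $n=5$:

```python
import math

def stirling(n):
    # n! ≈ sqrt(2*pi*n) * (n/e)**n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 20, 50):
    rel_err = abs(stirling(n) - math.factorial(n)) / math.factorial(n)
    print(f"n={n}: relative error {rel_err:.5f}")
```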
After trying for a while to get analytical solutions for $K_r$ and $K_d$, I gave up and just used this function directly to calculate statistical phase space averages. So no $\lim_{N\to\infty}$, but simply $N=10^6$ :).

# Outcome

Below, the phase space averaged $K_r$ and $K_d$ are plotted, each normalized by its maximum value. As you can see, the bimolecular dissociation constant $K_d$ has a fairly large linear regime, and then goes to 0 as all particles A are converted to complexes. In the limit of large B, the unimolecular dissociation constant $K_r$ seems to flatten out, but only because there are no free particles A anymore. The value of $K_d$ in the linear range is equal to $e^{\beta\mu}$ and independent of A in the limit of small A. This is not visible in the graph, but you can see it by playing around with the code. Again, I'm not certain this is in any way a correct model, so don't use it in production. For starters, I haven't found anything online about limits of dissociation constants. Maybe in a more realistic (continuous) model other effects balance out, maybe I forgot to add something required to the model, or maybe I just got statistical physics plain wrong.
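To check the linear-regime value without the Stirling error, here is a sketch (my own helper `kd_exact`, using exact log-factorials via `math.lgamma` instead of the approximations in the model code) that computes the same phase space average for a dilute case; it lands close to $e^{\beta\mu}$:

```python
import math

def kd_exact(N, A, B, beta, mu):
    """Phase-space averaged K_d for the lattice model, using exact
    log-factorials (lgamma) instead of Stirling's approximation."""
    # log-weight of the states with exactly C complexes
    logw = [math.lgamma(N + 1)
            - math.lgamma(C + 1) - math.lgamma(A - C + 1)
            - math.lgamma(B - C + 1) - math.lgamma(N - A - B + C + 1)
            - beta * C * mu
            for C in range(min(A, B) + 1)]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]  # rescale to avoid overflow
    avg = lambda f: sum(f(C) * w[C] for C in range(len(w))) / sum(w)
    a = avg(lambda C: (A - C) / N)  # free A per site
    b = avg(lambda C: (B - C) / N)  # free B per site
    c = avg(lambda C: C / N)       # complexes per site
    return a * b / c

beta, mu = 1.0, -1.0
print(kd_exact(10**5, 20, 1000, beta, mu), math.exp(beta * mu))
```

For these dilute parameters the two numbers agree to within about a percent; the residual deviation is a finite-size effect of order $(A+B)/N$.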

Anyway, I made a nice graph so I’m happy. If you have any opinions on this post I’d like to hear them!

# The code

```python
import math
from decimal import Decimal, getcontext
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# allow huge exponents: the weights contain factors like N^N
context = getcontext()
context.Emax = 999999999999999999

def fac(n):
    "Stirling's approximation: n! ~~ sqrt(2 pi n) (n/e)^n"
    if n == 0: return Decimal(1)
    return (Decimal(2 * math.pi) * n).sqrt() * context.power(n / Decimal(math.e), n)

def nfdnmxf(n, x):
    "approximate n! / (n - x)!, derived from Stirling's approximation"
    return context.power(n, n + Decimal(0.5)) / context.power(n - x, n - x + Decimal(0.5)) * (-x).exp()

def phase_space_average(N, A, B, beta, mu, f=(lambda C: 1)):
    "average of f(C) over all states, weighted by multiplicity and Boltzmann factor"
    top = Z = Decimal(0)
    for C in range(int(min(A, B) + 1)):
        p = nfdnmxf(N, A + B - C) / (fac(C) * fac(A - C) * fac(B - C)) * (-beta * C * mu).exp()
        top += f(C) * p
        Z += p
    return top / Z

df = []
N = Decimal(1e6)
A = Decimal(100)
beta = Decimal(1)
mu = Decimal(-1)

for B in np.logspace(1, math.log10(N)):
    B = Decimal(min(int(B), N - A - 1))
    a = phase_space_average(N, A, B, beta, mu, lambda C: (A - C) / N)
    b = phase_space_average(N, A, B, beta, mu, lambda C: (B - C) / N)
    c = phase_space_average(N, A, B, beta, mu, lambda C: C / N)
    df.append(dict(B=B, a=a, b=b, c=c, kr=float(a / c), kd=float(a * b / c)))

df = pd.DataFrame(df)

plt.title(f'N={N} A={A} beta={beta:.2f} mu={mu}')
plt.xlabel('Total of B')
plt.ylabel('Dissociation constant value')
plt.plot(df.B, df.kr / df.kr.max(), label=r'Unimolecular $K_r = \frac{[A]}{[AB]}$')
plt.plot(df.B, df.kd / df.kd.max(), label=r'Bimolecular $K_d = \frac{[A][B]}{[AB]}$')
plt.xscale('log')
plt.legend()
plt.show()
```