Kerley Technical Services
Gerald I. Kerley

 

Final Thoughts

The website for Kerley Technical Services is closed, and the materials formerly provided here will no longer be available. I’m not going to discuss my reasons for shutting down, except to repeat what I’ve already said many times: EOS modeling is my profession, not my hobby.

This message will remain in place for a few months, until the prepaid hosting period expires. In the meantime, I will leave you with some final thoughts.

Science vs. Engineering

EOS modeling involves a mixture of science and engineering, and I have willingly done both during my 46-year career. The boundary between these disciplines is perhaps arbitrary, but a few generalizations are useful. My definitions of these terms, which are consistent with those given in most dictionaries, are:

Science attempts to understand physical phenomena using experiments and theories or models.

Engineering attempts to apply the results of science to the solution of practical problems.

An imbalance between science and engineering can lead to difficulties in EOS modeling and similar endeavors.

—Some scientists seem to feel that they are “too good” to do engineering, that practical applications should be left up to “lesser mortals.” Besides being arrogant, this point of view fails to recognize that practical applications are valuable to scientific inquiry, offering challenges to theories and leading to greater insights.

—Some engineers employ the scientific tools without understanding them; this practice leads to unrealistic expectations and to shortcuts that avoid the hard work necessary for good results. Moreover, those focused only on applications often fail to recognize the need to develop and improve existing methods.

When it comes to funding, my experience has been that the engineers call the shots. Most of my supervisors and clients have taken a dim view of my requests to do research into better EOS modeling methods. So my strategy has been to deliver the best product I could with existing methods and siphon off some time from each project to make incremental improvements to the methods. I have been able to make a lot of progress using this approach, but there is still much work remaining to be done. (And my arguments for doing that work continue to fall on deaf ears.)

Research vs. Methodology

Scientists and engineers often differ in the way they approach problem solving.

—Engineers tend to prefer a “by the numbers” methodology that is founded on “linear logic”—i.e., one consisting of well-defined steps that can be spelled out in advance. (This approach also appeals to program managers, who want to design their programs in terms of milestones.)

—Scientists are better able to cope with a “research” approach, which involves unknown elements, where each step of the process depends upon the results of previous steps, and where the methods often have to be invented as one proceeds. (Unfortunately, program managers generally don’t like this approach.)

The engineering approach is appropriate and useful when a reliable methodology actually exists, especially when one is doing more or less the same thing over and over—like building a bridge, for example. That is not the case in EOS modeling. Yes, there are some routine tasks that can be handled in this way. But research is essential for a really good EOS (assuming one actually cares enough to put in the required effort).

A person attempting to approach EOS modeling from an engineering viewpoint might envision the following steps: 1—identify what input parameters and other data are needed to “calibrate” the model; 2—obtain data from existing databases; 3—carry out new experiments, where necessary; 4—construct an input file for the modeling code; 5—let the code compute and output the EOS. This simplistic scenario could be made more sophisticated by allowing for decision trees, iterations, etc., but I think the basic idea is clear.

This scenario may sound reasonable, but it has many limitations and weaknesses. EOS modeling is not just a matter of fitting data. Even if it were, it would be impossible to obtain all the data that would be required; many regions are inaccessible to experiment, and many quantities cannot be measured experimentally. A good model explicitly treats the chemical and physical phenomena that govern the material behavior, and an understanding of those phenomena cannot be directly obtained from experiments. Existing models also have limitations; they sometimes need to be modified or even replaced.

In short, a good EOS can only be created when the modeler devotes time and energy to research. He/she must study and learn something about the material to be modeled, explore different ideas about phase transitions and changes in chemical structure, try various options to see what gives the best results, and improve on, or even invent, theories. This process cannot be carried out “by the numbers.”

DFT—The Gold Standard?

Numerical calculations are all the rage in EOS modeling at the present time. Many people view them as the cutting edge of EOS work, a kind of “gold standard,” against which all other EOS models are to be measured. Some even appear to believe that my approach to EOS modeling will ultimately become obsolete.

I will conclude this essay by explaining why I disagree with others on this issue and why I think these numerical calculations are not as rigorous and trustworthy as they are perceived to be.

I will confine my remarks to DFT/MD, the most popular numerical method at the present time. It uses density functional theory (DFT) to calculate the electronic structure and free energy of a system (nuclei + electrons) as a function of the nuclear coordinates. This free energy function is then used as the potential energy surface for the motions of the nuclei, which are calculated using molecular dynamics (MD). This method is sometimes called “quantum molecular dynamics” (QMD). That term is more compact (and sounds more classy) than DFT/MD, but it is misleading, because the MD treatment of the nuclear motion is purely classical, with no quantum corrections. (However, some DFT calculations use lattice dynamics instead of MD.)
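To make the structure of the method concrete, here is a minimal sketch of a DFT/MD-style loop, written in Python purely for illustration. The routine names are mine, and the “electronic” force routine is a toy stand-in: in a real calculation, each force evaluation would require solving the Kohn-Sham equations at the current nuclear positions and differentiating the resulting electronic free energy. A simple Lennard-Jones pair potential is used here only so that the sketch is self-contained and runnable.

    import numpy as np

    def electronic_forces(positions):
        # Stand-in for the DFT step. In real DFT/MD, each call would solve the
        # Kohn-Sham equations at fixed nuclei and return the gradient of the
        # electronic free energy. A Lennard-Jones pair potential (reduced units,
        # epsilon = sigma = 1) is used here only to make the sketch run.
        forces = np.zeros_like(positions)
        n = len(positions)
        for i in range(n):
            for j in range(i + 1, n):
                rij = positions[i] - positions[j]
                r2 = np.dot(rij, rij)
                fij = 24.0 * (2.0 / r2**7 - 1.0 / r2**4) * rij
                forces[i] += fij
                forces[j] -= fij
        return forces

    def run_md(positions, velocities, mass=1.0, dt=0.002, steps=500):
        # Classical velocity-Verlet propagation of the nuclei on the single
        # potential surface supplied by electronic_forces. Note that nothing in
        # this loop applies quantum corrections to the nuclear motion.
        forces = electronic_forces(positions)
        for _ in range(steps):
            velocities += 0.5 * dt * forces / mass
            positions += dt * velocities
            forces = electronic_forces(positions)
            velocities += 0.5 * dt * forces / mass
        return positions, velocities

    if __name__ == "__main__":
        # Eight atoms on a small cubic lattice, given small random velocities.
        grid = [[i, j, k] for i in range(2) for j in range(2) for k in range(2)]
        pos = 1.5 * np.array(grid, dtype=float)
        vel = 0.05 * np.random.default_rng(0).standard_normal(pos.shape)
        pos, vel = run_md(pos, vel)
        print(pos)

The point of the sketch is simply that the expensive electronic-structure step sits inside an otherwise classical MD loop; every approximation made in computing that single free-energy surface is inherited by everything downstream.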

A misconception, commonly found in the literature, is that the density functional is the main source of error in these calculations. Efforts to improve the calculations typically focus on the density functional. Numerical issues and corrections to classical MD are also acknowledged as sources of error. However, there are other approximations that should also be considered and are virtually always ignored.

Single Potential Approximation

As noted above, the DFT/MD method assumes a single potential surface for the nuclear motion. This approximation is absolutely essential to the viability of the method. An exact treatment of the problem, even when the Born-Oppenheimer approximation is applied, would employ a separate potential surface and manifold of nuclear wave functions for every electronic configuration of the system—a completely intractable calculation, even with the fastest modern computers.

Where does this approximation come from, and how is it justified? It is usually attributed to a 1965 paper by David Mermin; but Mermin was not concerned with justifying this approximation, only with applying the DFT theorem to the electronic free energy. The justification can be traced to a 1957 paper by Robert Zwanzig, who showed it to be the leading term in a high temperature expansion; he took the classical limit of the nuclear motions while retaining the QM treatment of the electrons. Zwanzig also derived a first-order correction term; but this correction is never even mentioned in DFT papers, let alone tried out.
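For readers who want to see the structure of the approximation, here is a schematic sketch in LaTeX notation; the notation is mine, not Zwanzig's or Mermin's. Within the Born-Oppenheimer picture, an exact canonical partition function carries a separate nuclear problem on every electronic surface $E_i(\mathbf{R})$:

\[
Z \;=\; \sum_i \mathrm{Tr}_{\mathrm{nuc}}\, e^{-\beta\,[\,T_{\mathrm{nuc}} + E_i(\mathbf{R})\,]} .
\]

The single potential approximation, combined with the classical limit for the nuclear motion, replaces this by motion on one effective surface, the electronic (Mermin) free energy at fixed nuclei:

\[
Z \;\approx\; \frac{1}{N!\,\Lambda^{3N}} \int d^{3N}\!R \; e^{-\beta F_{\mathrm{el}}(\mathbf{R},T)},
\qquad
F_{\mathrm{el}}(\mathbf{R},T) \;=\; -\,k_B T \ln \sum_i e^{-\beta E_i(\mathbf{R})} ,
\]

where $\Lambda$ is the nuclear thermal de Broglie wavelength (a single nuclear species is assumed, for simplicity). As I read Zwanzig's result, this is the leading term of his expansion; the first-order correction mentioned above goes beyond it.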

It should be noted that “high temperature”—a qualitative term at best—does not guarantee the accuracy of the single potential approximation. Quantum corrections are likely to be important for the rotational and vibrational degrees of freedom, when molecules are present. Even more important: the nature of the electronic potential surface and the nuclear motions will depend markedly upon the state of dissociation in a hot fluid. Therefore, corrections to the single potential surface should be most important for systems undergoing changes in chemical structure with temperature and/or pressure.

Treatment of Localized States

Another problem with DFT is that it uses wave functions and statistical formulas that do not give correct results for localized states. I have discussed this issue elsewhere, and it is too complicated to treat in detail in this essay. So I will only make some brief remarks.

The problem with the wave functions can be illustrated by a simple case: the molecular orbital treatment of the H2 molecule does not give correct results for the separated atoms. The problem arises from the single-determinant wave function. This problem cannot be corrected by modifying the density functional or by using the unrestricted Kohn-Sham approximation. (See the book by Koch and Holthausen for further discussion.)
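To spell out the standard textbook illustration (in LaTeX notation; the notation is mine): writing the bonding orbital as $\sigma_g \propto 1s_A + 1s_B$, the restricted single-determinant ground state of H2 has the spatial part

\[
\sigma_g(1)\,\sigma_g(2) \;\propto\;
\underbrace{1s_A(1)\,1s_B(2) + 1s_B(1)\,1s_A(2)}_{\text{covalent: H + H}}
\;+\;
\underbrace{1s_A(1)\,1s_A(2) + 1s_B(1)\,1s_B(2)}_{\text{ionic: H}^{+} + \text{H}^{-}} ,
\]

so even at infinite separation the description remains half neutral atoms and half ion pairs, and the energy of the “separated atoms” comes out far too high. The ionic terms are built into the restricted single-determinant form itself, which is the point made above: the defect is not removed by modifying the density functional.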

The problem with the statistical formulas can be illustrated by a collection of isolated H atoms. It is easy to show that there are 2^N possible configurations for the ground state of this system. The standard treatment, in which band orbitals are constructed as linear combinations of the atomic orbitals, gives 4^N configurations, making the entropy too large by a factor of 2. (And the error becomes even larger for many other elements.)
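Spelling out the counting as I understand it (in LaTeX notation): in the localized picture, each of the N atoms keeps its single electron in its own 1s orbital, and only the spin is free, so

\[
\Omega_{\mathrm{loc}} = 2^{N}, \qquad S = k_B \ln \Omega_{\mathrm{loc}} = N k_B \ln 2 .
\]

In the band picture, the N degenerate atomic orbitals are replaced by N degenerate band orbitals (2N spin orbitals), and the N electrons can be distributed among them in

\[
\Omega_{\mathrm{band}} = \binom{2N}{N} \sim 4^{N} \quad (N \gg 1), \qquad S \approx 2 N k_B \ln 2 ,
\]

equally weighted ways, so the entropy comes out at twice the correct value of $N k_B \ln 2$.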

 

That’s all I wanted to say. I’m going to stop here and say goodbye and best wishes.

 

Gerald I. Kerley