Simulations: What Gets Modeled And What Doesn’t

When I’m not flogging away at code these days I’m thinking about continuous simulations and the details that get modeled within them. Specifically I’ve been reading and thinking about operations in petrochemical refineries, and even more specifically about certain classes of catalytic reactions, like those found in hydrodesulfurization processes. In such a process the (naphtha) feedstock is passed through a catalyst chamber where hydrogen is also injected. The idea is that sulfhydryl groups are split apart from their hydrocarbon bases and replaced with one hydrogen atom from an H2 molecule. The other hydrogen atom then bonds with the liberated sulfur to form H2S. In short, C2H5SH + H2 → C2H6 + H2S.
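As a quick sanity check on that equation (my own arithmetic, using standard atomic weights), the mass on each side balances:

```python
# Mass balance check for C2H5SH + H2 -> C2H6 + H2S.
# Atomic weights are standard values; this is just an arithmetic check.
C, H, S = 12.011, 1.008, 32.06

ethanethiol = 2 * C + 6 * H + S   # C2H5SH (i.e. C2H6S)
hydrogen    = 2 * H               # H2
ethane      = 2 * C + 6 * H       # C2H6
h2s         = 2 * H + S           # H2S

lhs = ethanethiol + hydrogen
rhs = ethane + h2s
print(f"reactants: {lhs:.3f} g/mol, products: {rhs:.3f} g/mol")
assert abs(lhs - rhs) < 1e-9  # mass is conserved
```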

In thinking about this I was less concerned with the chemical details than with how I would go about simulating such a process. The simulation I would create depends greatly on the purpose it is meant to serve.

If I did not know what was going to happen when I mixed certain materials together in the presence of a certain amount of energy and at a certain pressure, I would create an extremely low-level simulation that modeled the behavior of every atom. I know this is done with drug reactions of various types, and my understanding is that such models are often hard to get right in novel situations. They also require a lot of computing resources. If the chemical reactions to be modeled involve a catalyst, then there are that many more factors of chemistry and geometry that have to be worked out.

If, however, the purpose of the simulation is to train operators, to size plants, or to experiment with different operating concepts (batching, controls and instrumentation, heat recovery, safety, etc.) I would create a higher-level simulation that modeled the process in a more abstract way.

At some level, given the right conditions with respect to factors like feed chemistry, temperature, pressure, catalyst area, and the presence and mixing of sufficient reactant materials on a mass or molar basis, I would simply assume that the reaction works and generates or absorbs heat as designed. Within a reasonable range of operating conditions I would be able to correctly model reactions, heat transfers, flows, pressures, temperatures, and end products. I would be able to create a simulation that could be initialized to a steady running state, closed down and purged, and then restarted and returned to the original running state. Alternatively I might start at the shut-down state, ramp up to the operating state, and then shut down again. In either case I would have to know the efficiency of any planned reactions at given conditions of temperature, pressure, and catalyst, and I would have to know whether different reactions happened that also needed to be modeled.
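To make that concrete, here is a minimal sketch of what I mean. All the names, envelope limits, and heat numbers below are hypothetical, chosen only to show the shape of the idea: the model never touches molecules, it just checks whether conditions are inside the operating envelope and, if so, applies the design conversion and its heat effect.

```python
from dataclasses import dataclass

@dataclass
class ReactorState:
    temp_K: float        # lumped bulk temperature
    pressure_kPa: float
    feed_kmol: float     # ethanethiol available this step
    h2_kmol: float       # hydrogen available this step

# Hypothetical envelope and design values, not real HDS data.
T_MIN, T_MAX = 560.0, 680.0
P_MIN = 3000.0
DESIGN_CONVERSION = 0.95
DH_RXN = -59.0e3       # kJ/kmol, exothermic (illustrative number)
HEAT_CAPACITY = 2.0e5  # kJ/K for the whole lumped reactor (illustrative)

def step(state: ReactorState) -> ReactorState:
    """Advance one time step: react only if conditions are right."""
    in_envelope = (T_MIN <= state.temp_K <= T_MAX
                   and state.pressure_kPa >= P_MIN
                   and state.h2_kmol >= state.feed_kmol)  # 1:1 stoichiometry
    if not in_envelope:
        return state  # nothing modeled outside the envelope
    reacted = DESIGN_CONVERSION * state.feed_kmol
    # The exothermic reaction raises the lumped temperature.
    dT = -DH_RXN * reacted / HEAT_CAPACITY
    return ReactorState(state.temp_K + dT,
                        state.pressure_kPa,
                        state.feed_kmol - reacted,
                        state.h2_kmol - reacted)
```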

Considering the reaction described above, if different reactions happened at, say, a different temperature, then I would have to make sure those reactions were modeled in place of the ones I’d hoped for. The point is that in a macro-scale simulator I wouldn’t be modeling the reactions of the molecules from first principles, but would instead model conversions, reactions, and state changes as a function of conditions. This can be tricky for a number of reasons.
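One way to express that in a macro-scale model is a table that maps operating conditions to whichever reaction set applies. The temperature bands, conversions, and the cracking side reaction below are placeholders I made up for illustration:

```python
# Map temperature bands to the reaction set that should be applied.
# Bands, conversions, and heats here are placeholders, not real chemistry.
PATHWAYS = [
    # (T_low_K, T_high_K, name, conversion, heat_kJ_per_kmol)
    (560.0, 680.0, "desired hydrodesulfurization", 0.95, -59.0e3),
    (680.0, 800.0, "undesired cracking side reaction", 0.40, +12.0e3),
]

def active_pathway(temp_K: float):
    """Return the reaction that governs at this temperature, or None."""
    for t_lo, t_hi, name, conv, heat in PATHWAYS:
        if t_lo <= temp_K < t_hi:
            return name, conv, heat
    return None  # outside all modeled bands: nothing reacts

print(active_pathway(600.0))  # the desired reaction
print(active_pathway(720.0))  # the side reaction takes over
```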

This kind of simulation would be limited in its ability to handle widely varying feed compositions. You could not, for example, feed ice cream or cornmeal into such a process and expect it to do anything meaningful; the simulation can only model what it was built to allow for. The makeup of the initial petrochemical feed (for the whole refinery, not just the hydrodesulfurization process) would have to be completely defined for every component. The thermodynamic properties of all components would have to be known and specified over all applicable ranges of temperature, pressure, and so on, so the model would correctly represent state changes and the various separation, pumping, heat transfer, and other processes. (This turns out to be somewhat difficult. Resources like this would clearly be helpful, but such data are rarely available in as complete or granular a form as has been derived for water and steam.)
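In practice the property side of this often reduces to interpolation over tabulated data per component. The table below is a fabricated placeholder just to show the shape of the lookup, not real enthalpy data:

```python
from bisect import bisect_left

# Hypothetical enthalpy table for one component: (T in K, h in kJ/kg).
# A real model would carry tables like this for every component,
# over every applicable range of temperature and pressure.
ENTHALPY_TABLE = [(300.0, 20.0), (400.0, 230.0), (500.0, 455.0), (600.0, 690.0)]

def enthalpy(temp_K: float) -> float:
    """Linearly interpolate enthalpy; refuse to extrapolate."""
    temps = [t for t, _ in ENTHALPY_TABLE]
    if not temps[0] <= temp_K <= temps[-1]:
        raise ValueError(f"{temp_K} K is outside the tabulated range")
    i = bisect_left(temps, temp_K)
    if temps[i] == temp_K:
        return ENTHALPY_TABLE[i][1]
    (t0, h0), (t1, h1) = ENTHALPY_TABLE[i - 1], ENTHALPY_TABLE[i]
    return h0 + (h1 - h0) * (temp_K - t0) / (t1 - t0)

print(enthalpy(450.0))  # halfway between the 400 K and 500 K entries
```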

Such a model could handle reasonable variations in feedstock and still work as expected. It could also be made to handle changes in the efficiency of the catalytic reaction by changing some characteristic of the catalyst (expressed as an area or percentage) under user control. If the catalyst somehow became consumed or fouled, the reaction would proceed less completely or not at all.
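Catalyst condition can then be a single activity factor that scales the design conversion, something like this sketch (the decay constant is invented, and the override stands in for user control):

```python
import math

DESIGN_CONVERSION = 0.95
FOULING_RATE = 1e-4  # per hour of service; an invented decay constant

def effective_conversion(hours_in_service: float,
                         activity_override: float | None = None) -> float:
    """Conversion degraded by fouling, or set directly by the user."""
    activity = (activity_override if activity_override is not None
                else math.exp(-FOULING_RATE * hours_in_service))
    return DESIGN_CONVERSION * activity

print(effective_conversion(0.0))       # fresh catalyst: full design conversion
print(effective_conversion(5000.0))    # fouled: reaction proceeds less completely
print(effective_conversion(0.0, 0.0))  # user kills the catalyst entirely
```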

If the simulator were intended to support training, then the trainees could learn not only normal process operations and controls but also what happens during abnormal situations. Those can include things like leaks, instrument and other equipment failures, loss of utilities, and so on. In those cases it may be less important to represent the events exactly than to get them right in character, so the trainees learn what to look for and how to identify cause and effect.
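In a training simulator such abnormal events are typically injected as instructor-selected malfunctions that perturb either the model or the instruments. A minimal sketch of the idea, with faults and magnitudes I made up to be right in character rather than exact:

```python
# Instructor-activated malfunctions; each perturbs the model or a reading.
active_faults: set[str] = set()

def feed_flow(actual_kg_s: float) -> float:
    """Model-side fault: a leak quietly removes some of the feed."""
    if "feed_leak" in active_faults:
        return actual_kg_s * 0.90   # 10% lost upstream of the reactor
    return actual_kg_s

def indicated_temp(actual_K: float) -> float:
    """Instrument-side fault: a failed sensor reads stuck."""
    if "temp_sensor_stuck" in active_faults:
        return 611.0                # frozen at its last value
    return actual_K

active_faults.add("temp_sensor_stuck")
print(indicated_temp(650.0))  # trainees see 611 K and must spot the mismatch
```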

The simulator could be used to test novel operating concepts, as discussed above. It would be able to handle a range of variation in the feed material. It could also be used for operations research by performing parametric runs that test the effects of different changes in a systematic way.
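Parametric runs are just systematic sweeps over the model inputs. A sketch, with a stub standing in for a full simulation run and made-up parameter values:

```python
from itertools import product

def run_case(temp_K: float, feed_kg_s: float) -> float:
    """Stand-in for a full simulation run; returns a hypothetical yield."""
    conversion = 0.95 if 560.0 <= temp_K <= 680.0 else 0.40
    return conversion * feed_kg_s

# Systematic sweep over the two parameters of interest.
temps = [540.0, 580.0, 620.0, 660.0, 700.0]
feeds = [8.0, 10.0, 12.0]
results = {(t, f): run_case(t, f) for t, f in product(temps, feeds)}

for (t, f), y in sorted(results.items()):
    print(f"T={t:5.0f} K  feed={f:4.1f} kg/s -> product {y:5.2f} kg/s")
```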

As I’ve been reading about the different aspects of refinery operations I’ve been able to relate most of the processes to elements I simulated when I worked on nuclear power plant simulators. I got to know the steam cycle at a very deep level, and also learned how to deal with noncondensable gases, catalytic recombiners, phase changes, different kinds of absorption and release processes, and different kinds of filtration and separation processes as well. I look forward to continuing research on the subject.
