Conference papers and presentations

Previous papers

PSE presented papers at the following events.

ACHEMA 2012, (Frankfurt am Main, Germany, 18-22 June 2012)

1. Design and scale-up of a high-performance multitubular reactor for propylene oxide

Alejandro Cano, Process Systems Enterprise Inc., 2 Ridgedale Avenue, Suite 202, Cedar Knolls, NJ 07927-1118, USA

New advanced process modelling techniques, coupled with innovative experimental procedures, now make it possible to use high-fidelity reactor models validated against laboratory and pilot data for the detailed design of innovative high-performance exothermic multitubular reactors.

The approach allows the effects of changes to design variables (such as the catalyst characteristics, the catalyst/inert ratio, tube pitch, tube length, coolant velocity, feed reactant mass fraction, number of baffles, cooling water inlet temperature as well as the number of active reactors and numerous other quantities) on key performance indicators (such as throughput, conversion and yield, tube-side temperature profiles and catalyst lifetime) to be calculated to a very high degree of predictive accuracy.

Multitubular reactors are complicated units that have been very difficult to model in the past because of the complex catalytic reactions taking place in the tubes, the large number of tubes, and the interrelationship between exothermic reaction in the tubes and the shell-side cooling medium.

The approach takes into account the close coupling between the tube-side phenomena and the shell-side heat transfer and hydrodynamics. The main advances of the techniques described here over existing simulation approaches are that (1) tube models incorporate high-accuracy first-principles representation of catalytic reaction, species diffusion and bed heat transfer, including intra-particle and surface effects, (2) models are validated against companies' own laboratory and pilot plant data and (3) mathematical optimization techniques are used to determine the optimal values of multiple design variables simultaneously rather than by trial-and-error simulation.

The final design is verified using a computational fluid dynamics (CFD) model of the shell side to ensure that no mechanical constraints such as shell-side fluid velocities are violated.

An integrated modelling/experimental design methodology is presented which uses specially-designed experimental procedures to obtain accurate estimates of the key kinetic and heat transfer parameters from a limited number of carefully targeted experiments. Formal model-based parameter estimation techniques ensure that parameter interaction is taken into account and provide parameter confidence information for subsequent risk analysis.
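As a small, self-contained illustration of model-based parameter estimation with confidence information (not PSE's gPROMS workflow; the kinetic model, parameter values and "measurements" below are synthetic assumptions), consider fitting Arrhenius parameters to noisy rate data and examining the resulting parameter uncertainties and interaction:

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J/(mol K)

def rate(T, ln_A, Ea):
    """First-order rate constant in Arrhenius form (fitted as ln A for numerical stability)."""
    return np.exp(ln_A - Ea / (R * T))

# Synthetic "measurements": true ln A = 20, Ea = 80 kJ/mol, 3% multiplicative noise
rng = np.random.default_rng(0)
T = np.linspace(550.0, 650.0, 8)                       # temperatures, K
k_meas = rate(T, 20.0, 80e3) * (1 + 0.03 * rng.standard_normal(T.size))

popt, pcov = curve_fit(rate, T, k_meas, p0=[18.0, 70e3])
stderr = np.sqrt(np.diag(pcov))                        # 1-sigma confidence on each parameter
corr = pcov[0, 1] / (stderr[0] * stderr[1])            # parameter interaction (correlation)

print(f"ln A = {popt[0]:.2f} +/- {stderr[0]:.2f}")
print(f"Ea   = {popt[1] / 1e3:.1f} +/- {stderr[1] / 1e3:.1f} kJ/mol")
print(f"correlation(ln A, Ea) = {corr:.3f}")
```

The strong correlation between the pre-exponential factor and the activation energy is exactly the kind of parameter interaction that formal estimation techniques expose and that trial-and-error fitting hides.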

The approach is illustrated using the design of a high-performance new reactor for the manufacture of propylene oxide. Apart from reactor design, the techniques can be used for a wide range of applications including minimisation of design risk, new catalyst design and assessment, derivation of safe and effective start-up procedures, control design, and maximization of operational flexibility. The techniques described can also be used for operational decision support and troubleshooting. They apply to a variety of reactors, including those for the production of methanol, acrylic acid and Fischer-Tropsch synthesis for gas-to-liquid applications.

2. Efficient catalyst-to-reactor methodologies for novel chemical reactor design and scale-up

Zbigniew Urban, Process Systems Enterprise Limited, 5th Floor East, 26-28 Hammersmith Grove, London W6 7HA, United Kingdom

Advances in materials engineering are opening new possibilities in heterogeneous catalysis. Various forms of catalyst support (such as long ceramic membranes, mechanically durable tubes that can withstand temperatures up to 2000 °C, and multilayer pellets of complex geometry) now offer new ways of deploying the catalytically active ingredients within reactors, potentially leading to substantial improvements in selectivity, safety, and catalyst usability and lifetime.

On the other hand, the very existence of these new options often significantly enlarges the reactor design space, the effective and efficient exploration of which now presents new challenges. In particular, traditional approaches for reactor design and scale-up are often limited by the cost and time of pilot plant experimentation that is required to explore all these possibilities.

This paper describes a method whereby kinetic data measured for one form of catalyst support can be used to quantify the performance of that catalyst in any other geometry. The approach is illustrated with data gathered for a cylindrical-pellet catalyst, applied to the catalyst engineering of (i) hollow pellets, (ii) eggshell pellets and (iii) membrane reactors in various gas-flow arrangements.

3. Advanced model for operational optimization of steam crackers

Abduljelil Iliyas1, Munawar Saudagar1, Štepan Špatenka2, Zbigniew Urban2, Constantinos Pantelides2,*
1. Technology and Innovation Center, Saudi Basic Industries Corporation (SABIC), P.O. Box 42503, Riyadh 11551, Kingdom of Saudi Arabia
2. Process Systems Enterprise Limited, 5th Floor East, 26-28 Hammersmith Grove, London W6 7HA, United Kingdom

Steam cracking of light hydrocarbons to olefins has been a major contributor to the growth of the petrochemical industry for several decades. Over 20 million MTA of new ethylene capacity will be added in the Middle East by 2016. At the same time, given price fluctuations, the need for profitable operation of ethylene plants has never been more critical.

It is well known that the heart of an ethylene plant is the thermal cracking furnace - its operation dictates the overall plant profitability. Steam cracker performance optimization involves an optimal trade-off between run-lengths which are too short (unnecessarily reducing the availability of the cracker through overly frequent decoking operations) and run-lengths which are too long (reducing the efficiency of operation as coking causes reduced heat transfer and excessive pressure drops). As a result, achieving truly optimal operation requires the solution of a dynamic optimization problem that determines the optimal time-varying profiles of all controls at the operator's disposal while maintaining the cracker within safe and operable limits. Such an optimization problem needs to be based on a mathematical model that provides a sufficiently accurate description of the processes that take place within the cracker tubes and the firebox.
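The run-length trade-off can be sketched in a few lines with a toy model (not the SABIC/PSE furnace model; the yield decay law, rate constants and downtime below are assumed, illustrative values): yield decays as coke builds up, and the objective is time-averaged production over a full run-plus-decoke cycle.

```python
import numpy as np
from scipy.optimize import minimize_scalar

y0 = 100.0       # initial production rate, t/day (assumed)
a = 0.02         # yield decay constant due to coking, 1/day (assumed)
t_decoke = 2.0   # decoking downtime per cycle, days (assumed)

def avg_production(t_run):
    """Time-averaged production over one run-plus-decoke cycle."""
    produced = y0 / a * (1.0 - np.exp(-a * t_run))  # integral of y0*exp(-a*s) over the run
    return produced / (t_run + t_decoke)

res = minimize_scalar(lambda t: -avg_production(t), bounds=(1.0, 200.0), method="bounded")
print(f"optimal run length = {res.x:.1f} days")
```

Too short a run wastes time decoking; too long a run averages in heavily coked, low-yield operation. The full industrial problem replaces this scalar search with dynamic optimization of time-varying control profiles.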

The work described in this paper is part of a joint SABIC/PSE project aiming to establish a comprehensive capability for modeling and operational optimization of steam cracking technology within SABIC and its affiliates. A general model of a thermal cracker is developed using PSE's state-of-the-art high-fidelity modeling platform, gPROMS®. The tube side phenomena, including both chemistry and heat/mass transfer phenomena, are modeled in detail. The tube models are coupled with a detailed model of the firebox making use of geometrical and other information extracted from a Computational Fluid Dynamics (CFD) model. Advanced dynamic optimization techniques are applied to the combined model for the determination of an optimal operating policy over an entire run at a SABIC cracker.

4. Crystallization process models for the pharmaceutical industry: efficient workflows for validation against experiments and scale-up

Session: General Topics - Processes and Apparatus for Pharmaceutical Production

Sean Bermingham, Mark Pinto, Process Systems Enterprise Limited, 5th Floor East, 26-28 Hammersmith Grove, London W6 7HA, United Kingdom

This paper describes a branded pharmaceutical company's assessment of available tools and techniques for model-based design and optimization of crystallization processes and considers the potential for using the same models to quantify the design space of these processes.

The seeded batch cooling crystallization of an API from a solvent was used as a case study. Investigation of crystal images from the process revealed that the dominant mechanisms are activated nucleation, growth, attrition and to a lesser extent agglomeration. The key challenges from the process development perspective are to be able to predict how the rates of these mechanisms vary depending on operating conditions, crystallizer/agitator type and equipment scale.

A model of the batch cooling crystallization set-up used for the experimental work was developed using a commercially available tool, based on a population balance framework that supports both steady-state and dynamic applications and the following kinetic expressions: a classical primary nucleation model; a secondary nucleation model related to crystal attrition; a 2-step growth model that considers mass transfer and surface integration; and an agglomeration model that considers collision frequencies as well as the fraction of collisions resulting in a successful agglomeration event.
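For illustration only, a moment-based sketch of such a seeded batch cooling crystallization, with simple power-law nucleation and growth kinetics and made-up constants (the paper's kinetic expressions and fitted parameters are not reproduced here), might look like:

```python
import numpy as np
from scipy.integrate import solve_ivp

kb, b = 1e6, 2.0        # nucleation rate constant [#/(m3 s)] and order (assumed)
kg, g = 1e-7, 1.5       # growth rate constant [m/s] and order (assumed)
kv, rho = 0.5, 1300.0   # volume shape factor [-] and crystal density [kg/m3] (assumed)

def csat(T):
    """Solubility, kg solute per m3 solvent, at temperature T in degC (assumed)."""
    return 0.1 + 0.004 * T

def rhs(t, y):
    m0, m1, m2, m3, c = y
    T = 40.0 - 0.01 * t                      # linear cooling ramp (assumed)
    S = max(c / csat(T) - 1.0, 0.0)          # relative supersaturation
    B, G = kb * S**b, kg * S**g              # nucleation and growth rates
    return [B, G * m0, 2 * G * m1, 3 * G * m2,
            -3.0 * rho * kv * G * m2]        # solute consumed by crystal growth

# Seed PSD as moments m0..m3 (1e8 #/m3 seeds of 50 um) plus initial concentration
y0 = [1e8, 1e8 * 50e-6, 1e8 * (50e-6)**2, 1e8 * (50e-6)**3, 0.30]
sol = solve_ivp(rhs, (0.0, 3600.0), y0, method="LSODA")
L_mean = sol.y[1, -1] / sol.y[0, -1]
print(f"number-mean crystal size after 1 h: {L_mean * 1e6:.1f} um")
```

A full population balance model resolves the entire size distribution (and mechanisms such as attrition and agglomeration that moments alone cannot capture), but the moment form shows how supersaturation couples the solute balance to nucleation and growth.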

For the model validation, the unknown parameters in the kinetic equations were estimated against the solute concentration and PSD measurements of the performed experiments. The combination of model equations incorporating fundamental descriptions of the underlying phenomena and empirical parameters estimated from laboratory-scale experimental measurements gives the validated model a high degree of predictive accuracy.

The validated model was subsequently combined with flow information from a CFD model to construct a so-called Multizonal model that allowed prediction of local wall temperatures, primary and secondary nucleation rates, growth (and dissolution) rates and agglomeration rates, as well as gravity-induced particle segregation where required.

It is demonstrated that, using commercially available tools, it is now possible to develop, validate and apply crystallization models with an effort of 4-8 man-weeks. The advantage of a model-based approach over more traditional approaches (e.g. statistical analysis) is a reduction in the number of experiments required to understand the crystallization behaviour, and the ability to predict behaviour at different scales or in different configurations at the same or similar scale, thus reducing scale-up risk.

Overall, this approach provides a reliable mechanism for investigating system performance at any scale of operation, as well as support for design decisions such as impeller size and shape. Typical applications in the pharmaceutical sector include the design of crystallizers to achieve the required particle size distribution (PSD) for active pharmaceutical ingredients. The techniques also apply to the manufacture of fine and bulk chemicals, and can also be used for debottlenecking and troubleshooting of poor operation.

5. Model-based scale-up of impact milling

Session: Special sessions - Solids Process Engineering

Brian T. Gettelfinger1, Stephen R. Glassmeyer2, Mark Pinto*3, Sean Bermingham3
1. Chemical Systems Modeling Section, Modeling and Simulation, Corporate R&D, Procter & Gamble, West Chester, OH, USA
2. Particle Processing Section, Process Technologies, Corporate Engineering, Procter & Gamble, West Chester, OH, USA
3. Process Systems Enterprise Ltd, London, United Kingdom

The Vogel & Peukert model uses separate material properties and mill parameters determined from bench top experiments to predict the particle size distribution of the output of impact mills. This turns mill modelling into a tool that can be used every day at P&G.

This talk covers the implementation of this model and its use within the gSOLIDS environment (Process Systems Enterprise, UK). We determined the material properties for a representative material from sieving and single-impact milling experiments. We successfully deployed the Vogel and Peukert population balance model, which predicts the output of our bench-top pin mill, in gSOLIDS. The gSOLIDS tool allowed us to perform parameter estimation on this highly nonlinear model, giving us distinct material and mill properties. We then made successful model predictions of mill scale-up using these same parameters.

This method could potentially save millions annually in experimental costs as we can generalize this method to any powder that is broken in an impact mill. We have thus developed a model-based work process for impact mill scale-up that uses gSOLIDS at its core.
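The breakage-probability expression at the heart of the Vogel & Peukert model, as published in the reference below, has the general form S = 1 - exp(-f_mat * x * k * (W_kin - W_min)), where f_mat and the size-invariant product x*W_min are the two material parameters and k is the number of impacts. A hedged sketch with illustrative parameter values (not P&G's fitted values):

```python
import math

def breakage_probability(x, w_kin, k=1, f_mat=0.2, x_wmin=0.5):
    """Fraction of particles of size x broken after k impacts.

    x      : particle size [mm]
    w_kin  : mass-specific impact energy [J/g]
    f_mat  : material strength parameter (assumed value)
    x_wmin : size-invariant product x * W_min (assumed value)
    """
    w_min = x_wmin / x                    # threshold energy scales inversely with size
    excess = max(w_kin - w_min, 0.0)      # no breakage below the threshold energy
    return 1.0 - math.exp(-f_mat * x * k * excess)

print(breakage_probability(1.0, 5.0))     # moderate impact: partial breakage
print(breakage_probability(1.0, 0.4))     # below threshold: no breakage
```

Because the material parameters are separated from the mill parameters (impact energy and frequency), the same fitted constants carry over from the bench-top mill to the production scale, which is what makes the model-based scale-up work process possible.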

References: L. Vogel, W. Peukert (2005) From single impact behaviour to modelling of impact mills. Chemical Engineering Science 60, 5164-5176.

6. New techniques for high-fidelity dynamic modelling of depressurising vessels and flare networks to improve safety and reduce CAPEX

Session: General Topics - Safety

James Marriott*, Apostolos Giovanoglou, Zbigniew Urban, Process Systems Enterprise Limited, 5th Floor East, 26-28 Hammersmith Grove, London W6 7HA, United Kingdom

The use of dynamic modelling for relief system design can result in a considerable reduction in capital expenditure while simultaneously improving plant safety for oil & gas and refining processes. This paper considers the application of dynamic analysis to two areas, vessel depressurisation (or "blowdown") and flare network design, in order to accurately quantify relief loads and metal temperatures to enable informed safety and CAPEX decision support.

The detailed dynamic modelling and simulation of the rapid depressurisation ("blowdown") of high-pressure vessels is a key element of the safety analysis of oil & gas production plant and other high-pressure installations. This depressurisation not only determines the load imposed on the pressure relief system (e.g. flare network) but, more importantly, may result in significantly reduced temperatures of the vessel walls, which may lead to embrittlement and high thermal stresses. Models for blowdown have been proposed by several authors over the past two decades, and some of this work has been applied extensively in industrial applications.

This paper presents a next-generation model for blowdown calculations. In contrast to earlier models in the literature, the model incorporates (i) a 3-dimensional model of the metal walls taking account of the transfer of heat between regions of the wall in contact with different phases. This allows a more accurate estimation of the wall temperatures, and the direct computation of thermal stresses, and (ii) a more accurate description of the non-equilibrium interactions among the various phases which does not rely on the use of adjustable parameters. The model has been validated against the set of experimental data obtained from a full-scale vessel.
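By way of contrast with the high-fidelity model described above, even the simplest single-phase blowdown sketch (isothermal ideal gas, choked orifice flow; vessel size, orifice area and gas properties below are all assumed) conveys the transient character of the problem:

```python
import numpy as np
from scipy.integrate import solve_ivp

V = 10.0            # vessel volume, m3 (assumed)
T = 288.0           # gas temperature, K (isothermal simplification)
M = 0.016           # methane molar mass, kg/mol
R = 8.314           # gas constant, J/(mol K)
gamma = 1.3         # heat capacity ratio (assumed)
Cd, A = 0.84, 1e-4  # orifice discharge coefficient and area, m2 (assumed)

def mdot(P):
    """Choked mass flow through the orifice (valid while P is well above back-pressure)."""
    return Cd * A * P * np.sqrt(gamma * M / (R * T)) * \
           (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def rhs(t, y):
    return [-mdot(y[0]) * R * T / (V * M)]  # dP/dt from an ideal-gas mass balance

sol = solve_ivp(rhs, (0.0, 3600.0), [120e5], max_step=10.0)  # start at 120 bar
t_half = sol.t[np.argmax(sol.y[0] <= 60e5)]
print(f"time to half pressure = {t_half:.0f} s")
```

The real difficulty, which this sketch deliberately omits, lies in the multi-phase non-equilibrium behaviour and the wall heat transfer that determine metal temperatures and embrittlement risk.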

The flare networks for major plants also represent a non-negligible part of the overall capital investment. Current industrial practice for their design is primarily based on steady-state analysis, as described in standards such as API 521 and supported by widely-used software tools. However, it is widely recognized that applying steady-state considerations to what is fundamentally a dynamic system inevitably requires the use of conservative assumptions, which often result in significant oversizing of flare headers and other components of the network. Another major contributor to capital cost is the use of special materials for the parts of the system that may be exposed to low-temperature fluids - usually as a result of relief from high-pressure process vessels - and which are therefore at risk of embrittlement. The key to limiting this additional capital expenditure is the accurate estimation of the length of piping that is subject to "abnormal" temperatures.

The paper describes, in addition to the capabilities for blowdown, an advanced model-based system for flare system network design that addresses the above issues while being compatible with existing steady-state flare network technology. The system supports steady-state and dynamic analysis, wall temperature modelling and prediction of hydrate and ice formation within a single integrated framework.


7. Integrated design and optimization of a new HPPO process for reduced energy consumption

Session: Special sessions - Energy Efficiency by Integrated Processes

Alejandro Cano*1, Hilario Martín Rodríguez2
1. Process Systems Enterprise Limited, 5th Floor East, 26-28 Hammersmith Grove, London W6 7HA, United Kingdom [correspondence: [email protected]]
2. Repsol, Centro Tecnológico, Carretera de Extremadura A-5, km. 18, 28935 Móstoles, Madrid, Spain

During process design there are many trade-offs to consider. Some equipment decisions may improve the economics of the equipment being considered but have a negative impact on the economics, as well as the operability, of the plant as a whole. Integrated whole-plant design optimization techniques make it possible to design complex reactor and separation sections simultaneously, determining optimal values of design variables while taking all relevant constraints into consideration, and thereby maximizing overall energy efficiency and economics.

In the case presented in the paper the application of such optimization techniques to a new propylene oxide process resulted in the elimination of entire distillation columns from the original process design, saving significant capital and energy costs. The resulting design was then optimized for heat integration, utilising process streams to heat reboilers where possible.

The plant comprised a complex multitubular reactor and a separation section with many distillation columns (one an azeotropic distillation and two involving reaction), plus large recycles. A high-fidelity simulation model was built of the integrated reactor and separation flowsheet, which was then optimized using an economic objective function that represented annualised capital plus operating cost. The rigorous mathematical optimization considered 49 decision variables simultaneously, from both the reaction and separation sections.
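As a toy illustration of optimising decision variables simultaneously rather than one at a time (the cost terms and the two variables below are made up; the actual study used rigorous column and reactor models over 49 variables), consider an annualised cost that trades energy against capital:

```python
import numpy as np
from scipy.optimize import minimize

def annualised_cost(v):
    """Made-up economic objective: utility cost plus annualised capital cost."""
    reflux, dt_approach = v
    energy = 50.0 * reflux + 200.0 / dt_approach          # reboiler and exchanger duty terms
    capital = 30.0 / (reflux - 1.0) + 5.0 * dt_approach   # column and exchanger area terms
    return energy + capital

res = minimize(annualised_cost, x0=[2.0, 10.0],
               bounds=[(1.05, 5.0), (2.0, 30.0)])
print("optimal [reflux ratio, temperature approach]:", np.round(res.x, 2))
```

Even in this two-variable caricature, each variable's optimum depends on the cost structure of the whole objective; with 49 interacting variables and flowsheet-level constraints, sequential trial-and-error tuning cannot find the true optimum.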

Separation-section design variables included condenser reflux ratios, temperatures, pressures and temperature approaches, column top pressures, and reboiler boilup ratios and temperatures, as well as concentrations of various products in distillate and bottoms streams. The optimization also included configuration and topology decisions, such as the location of feed trays and column bypasses, which allowed flowsheet alternatives to be considered.

The optimal design represented large savings in operating and capital cost with respect to the base case. In addition to the elimination of the two columns (with their associated energy use), heat integration yielded significant operating cost savings with an attractive return on investment; payback was less than four months.

The methods presented in this paper are sufficiently general to be applied to any process plant, for both design and operation, and can be implemented using commercially available simulation and modelling tools.

8. Fully rigorous implementation of the Maxwell-Stefan diffusion model as part of a modern, phenomenological model platform: Towards a fully predictive capability for model-based design of aq. Alkanolamine/Caustic Soda CO2 removal processes

Session: Special Session - CO2 Separation and Utilisation

Praveen Lawrence, Maarten Nauta, Zbigniew Urban, Juan-Carlos Mani*, Process Systems Enterprise Limited, 5th Floor East, 26-28 Hammersmith Grove, London W6 7HA, United Kingdom

Post-combustion CO2 capture is currently recognised as the most promising technology for reducing CO2 emissions. CO2 removal is also a necessary step in a number of petrochemical processes, e.g. olefin production. Amongst the different technologies available for CO2 capture and removal, aqueous alkanolamine and caustic soda absorption processes represent the most mature and widely accepted techniques, and are commercially applied on large industrial scales. Process modeling has been widely utilised to design these absorption processes using different levels of model complexity, from simple approaches in flowsheeting packages based on Murphree efficiency calculations [1] to more rigorous custom models based on the Maxwell-Stefan mass-transfer/diffusion model [2]. However, even in the latter case, model simplifications are routinely introduced.

The work described in this paper presents typical results from a series of CO2 absorption process projects with industry players, based on a fully rigorous implementation of the Maxwell-Stefan diffusion model within a modular, phenomenological modeling environment available as the Advanced Model Library for Gas-Liquid Contactors (AML:GLC) [3] on the industrially tested gPROMS® modeling platform. This implementation simultaneously takes into consideration the concentration and potential gradients of the ionic species across the gas-liquid film, and comes with semi-automated initialisation procedures. Experimental data show a remarkable predictive capability for this approach. Furthermore, the new modeling paradigm allows any column configuration to be represented realistically, and a single model to be used for different tasks such as steady-state, dynamic or stochastic simulation, parameter estimation and dynamic optimization. Industrially relevant examples are presented.
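As a minimal illustration of the Maxwell-Stefan formulation (not the AML:GLC implementation, which additionally handles ionic species and electrical potential gradients), the film fluxes of a ternary gas can be obtained by inverting the matrix of the linearised flux relations; all compositions, diffusivities and gradients below are illustrative assumptions:

```python
import numpy as np

x = np.array([0.3, 0.4, 0.3])   # mole fractions of a 3-component gas mixture (assumed)
D = {(0, 1): 8.3e-5, (0, 2): 6.8e-5, (1, 2): 1.7e-5}   # binary MS diffusivities, m2/s (assumed)

def Dij(i, j):
    return D[(min(i, j), max(i, j))]

# Build the (n-1)x(n-1) matrix B of the linearised Maxwell-Stefan relations,
# with component n-1 taken as the reference species.
n = 3
B = np.zeros((n - 1, n - 1))
for i in range(n - 1):
    B[i, i] = x[i] / Dij(i, n - 1) + sum(x[k] / Dij(i, k) for k in range(n) if k != i)
    for j in range(n - 1):
        if j != i:
            B[i, j] = -x[i] * (1.0 / Dij(i, j) - 1.0 / Dij(i, n - 1))

c_t = 40.0                                  # total molar concentration, mol/m3 (assumed)
grad_x = np.array([-0.2, 0.15]) / 1e-3      # mole-fraction gradients across a 1 mm film
J = -c_t * np.linalg.solve(B, grad_x)       # J = -c_t * inv(B) * grad_x
print("independent molar fluxes [mol/(m2 s)]:", J)
```

The off-diagonal terms of B are what couple the species fluxes: a component can be driven against its own composition gradient by the gradients of the others, an effect that Fick-type and Murphree-efficiency simplifications cannot capture.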


  1. For example Øi L. E., "Aspen HYSYS Simulation of CO2 Removal by Amine Absorption from a Gas Based Power Plant", SIMS2007 Conference Gøteborg (2007)
  2. For example Kucka L., Müller I., Kenig E. Y., Górak A., "On the modelling and simulation of sour gas absorption by aqueous amine solutions", Chemical Engineering Science, 58 (2003), 3571-3578
  3.; last accessed on 29-Aug-2011

9a. Evaluating improved separation and energy efficiency with Heat Integrated Distillation Columns (HIDiC): Performing rigorous feasibility studies utilising newest modelling techniques

Session: General Topics - Mixing and Separation Technology or Special Session - Energy Efficiency by Integrated Processes

Praveen Lawrence, Zbigniew Urban, Christian Möllmann, Juan-Carlos Mani*, Process Systems Enterprise Limited, 5th Floor East, 26-28 Hammersmith Grove, London W6 7HA, United Kingdom

While conventional distillation columns are arguably the most mature unit operation, they are characterised by low thermodynamic efficiency. Over the years, a number of innovative distillation processes have been developed to improve separation efficiency [1]. It is now well known that "internal" heat integration between the rectifying and stripping sections along their (full) lengths is potentially the most promising approach. This principle is applied in the Heat Integrated Distillation Column (HIDiC) [2]. While HIDiCs are potentially very energy efficient, thanks to the heat transfer between the vaporising and condensing sections, it has proven very difficult to achieve design efficiency in practice because of narrow operating windows and complex control requirements. Hence, it is not surprising that only one pilot HIDiC column on a semi-industrial scale has been built so far, even though energy reductions of over 60% are possible and have been measured [3].

A rigorous, model-based approach for HIDiC design optimization and operational studies is presented, based on a modularized, phenomenological and generic modelling concept for all types of gas-liquid separations, which is available as the Advanced Model Library for Gas-Liquid Contactors (AML:GLC) [4] on the gPROMS® modelling platform. An integrated steady-state and dynamic simulation model structure, alongside dynamic and MINLP optimization capabilities, is required to assess the complex design and operational issues present, for instance the length of the heat-integrated portion between the stripping and rectifying sections, and inverse dynamic responses with widely differing and unexpected time constants. Specific examples are presented and discussed.
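A back-of-envelope sketch of the HIDiC energy trade-off (idealised ideal-gas compression, with assumed property values, not results from the work presented): compressing the rectifying section creates the temperature driving force for internal heat transfer, trading compressor work against the reboiler duty it displaces.

```python
R = 8.314        # gas constant, J/(mol K)
latent = 30e3    # heat of vaporisation, J/mol (assumed)
T_strip = 350.0  # stripping-section temperature, K (assumed)
eta = 0.75       # compressor isentropic efficiency (assumed)
gamma = 1.1      # heat capacity ratio of the overhead vapour (assumed)

def compressor_work(pressure_ratio):
    """Ideal-gas isentropic compression work per mole of vapour, J/mol."""
    exponent = (gamma - 1.0) / gamma
    return R * T_strip * (gamma / (gamma - 1.0)) * (pressure_ratio**exponent - 1.0) / eta

w = compressor_work(2.0)         # compress rectifying section to twice the pressure
saving = 1.0 - w / latent        # fraction of reboiler duty displaced per mole boiled
print(f"compression work {w / 1e3:.1f} kJ/mol vs latent heat {latent / 1e3:.0f} kJ/mol")
print(f"idealised net energy saving = {saving:.0%}")
```

The idealised number overstates what is achievable; real savings are eroded by finite temperature approaches, pressure drop and the operating-window and control difficulties described above, which is precisely why rigorous models are needed for feasibility studies.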


  1. Vapour recompression has become one of the industrial standards. See for example D. Bruisma, S. Spoelstra, "Heat Pumps in Distillation", Conference Proceedings Distillation Absorption 2010
  2. N. Asprion, S. Mollner, N. Poth, B. Rumpf, "Energy Management in Chemical Industry" in Ullmann's Encyclopedia of Industrial Chemistry, 2010
  3. K. Horiuchi, K. Yanagimoto, K. Kataoka, M. Nakaiwa, "Energy saving characteristics of heat-integrated distillation column technology applied to multi-component petroleum distillation", IChemE Symp. Series 152 (2006) 172-180
  4.; last accessed on 29-Aug-2011

10. Model-based scale-up in HDPE processes: from lab-scale experimentation to commercial plants

Session: General Topics - Advanced Reaction Technology

Zbigniew Urban, Constantinos C. Pantelides, Process Systems Enterprise Limited, 5th Floor East, 26-28 Hammersmith Grove, London W6 7HA, United Kingdom

The chemistry and kinetics of heterogeneous co-polymerisation for the production of high-density polyethylene (HDPE) using Ziegler-Natta or metallocene catalysts are best evaluated from semi-batch experiments, even though the commercial-scale reactor is (usually) a train of continuously operated CSTRs. This technique for developing recipes for new grades of polymers avoids the complications of imperfect mixing and residence time distribution, and is particularly useful for polymerisations aimed at multi-modal products. Any required step-changes in operating conditions can also easily be implemented over the batch duration.

This paper describes a comprehensive model of heterogeneous co-polymerisation based on the following approach:

- the co-polymerisation chemistry is described in terms of a multi-dimensional compositional distribution of the co-monomers, rather than the usual (1-dimensional) molecular weight distribution or its moments;

- the kinetics of co-polymerisation chemistry and catalyst deactivation are described for up to 4 distinct types of active sites;

- the reaction rates are formulated in terms of the liquid phase composition, the latter being predicted from measurable gas-phase composition, temperature and pressure via an equation of state validated on experimental data;

- a multi-grain model is employed for a single macro-particle, whose radius increases with time; intra-particle mass transfer limitations for monomers and chain-transfer agent are accounted for, resulting in a distribution of reaction rate and polymer MWD over the macro-particle radius; this is particularly important for very active catalysts (e.g. metallocenes).

The above model is used to estimate kinetic parameters using measurements from semi-batch reactor experiments. A particular challenge in this context is the identification of the initial state of the reactor, as the experiment preparation procedure involves a complex sequence of operations. Although the latter is well defined from the "operator" point of view, the state of the reactor at the end of this procedure, just before the start of the polymerisation, is neither known nor directly measurable. Instead, an appropriate model is used to translate the given sequence of operations into a prediction of the internal state of the system at the end of the experiment preparation phase.

Once the co-polymerisation chemistry is determined from the lab-scale experiments, the model can be used to scale-up the batch reactor recipes to the commercial-scale CSTR train with no loss of accuracy. A major complication arises from the fact that the residence time distribution in these reactors implies that the polymer particles are no longer identical. The complexity of the underlying model of each particle (typically involving tens of thousands of variables) precludes the use of standard population balance techniques. Instead, we have developed a proprietary computational algorithm that allows the exact determination and optimization of the steady-state behaviour of trains of CSTRs based on the above particle model.
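A toy illustration (not PSE's proprietary algorithm) of why the residence time distribution matters for scale-up: particle properties in a train of CSTRs must be averaged over the age distribution E(t) of the tanks in series, and the result differs markedly from a single tank. The growth law and time constants below are illustrative assumptions.

```python
import numpy as np
from math import factorial
from scipy.integrate import trapezoid

def age_density(t, n_tanks, tau):
    """Residence-time density E(t) for n_tanks equal CSTRs in series (mean time tau each)."""
    return t**(n_tanks - 1) * np.exp(-t / tau) / (tau**n_tanks * factorial(n_tanks - 1))

def particle_mass(t, k=0.05):
    """Assumed single-particle growth law: mass fraction grows and saturates with age t."""
    return 1.0 - np.exp(-k * t)

t = np.linspace(0.0, 200.0, 20001)
results = {}
for n in (1, 3):
    E = age_density(t, n, tau=10.0)
    results[n] = trapezoid(particle_mass(t) * E, t)   # age-averaged particle property
    print(f"{n} tank(s): mean particle mass fraction = {results[n]:.3f}")
```

With tens of thousands of variables per particle model, this kind of explicit averaging over particle ages is intractable by standard population balance techniques, which motivates the specialised steady-state algorithm described above.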

Overall, the above methodology represents a comprehensive model-based procedure for the effective scale-up of HDPE processes from lab-scale experiments to commercial-scale operations.