Conference papers and presentations
2015 AAPS Annual Meeting and Exposition (Orlando, Fla., USA, October 25-29, 2015)
Biopharmaceutics Risk Assessment of Solid Oral Dosage Forms via Mechanistic Gastrointestinal Performance Modelling
Dan Braido, Process Systems Enterprise
Understanding the absorption of orally-delivered drugs via the gastrointestinal system to support biopharmaceutics risk assessments is essential for new drug product development. The biopharmaceutics performance of drug products can depend on multiple factors, including drug substance physicochemical properties, particle size distribution, patient physiology and dosage form composition. Even for the most experienced investigators, the dynamic and quantitative interplay of these factors is difficult to visualize intuitively. In silico modelling allows for the low-cost, rapid identification of the risk factors affecting drug absorption and, subsequently, of risk mitigation strategies to ensure robust biopharmaceutical performance. Highly mechanistic models should allow this before clinical trial data are available, and enable modification of clinical study design to understand, confirm and develop practical solutions to overcome biopharmaceutics risks.
gCOAS (general Computational Oral Absorption Simulation), a mechanistic oral absorption modelling tool developed using the gPROMS mathematical modelling platform, was utilized in the investigation of several pharmaceutical compounds. The bottom-up modelling framework incorporates multiple features, including mass and charge balances of solute species, explicit tracking of all species in solution, partitioning of drug species into bile salt mixed micelles, surface-pH-dependent drug dissolution, kinetic co-existence of multiple drug solid forms in the GI tract, nucleation and particle growth kinetics, and fluid dynamics.
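The mechanisms listed above build on classical dissolution and absorption kinetics. As a minimal illustration only (this is not the gCOAS implementation, and all parameter names and values below are hypothetical), a Noyes-Whitney-type dissolution step feeding first-order intestinal absorption can be sketched as:

```python
# Illustrative toy model: solid drug dissolves toward saturation
# (Noyes-Whitney form) and dissolved drug is absorbed first-order.
# All parameter values are invented for demonstration.

def simulate_dissolution(dose=100.0, k_diss=0.05, c_sat=0.2,
                         volume=250.0, k_abs=0.03, dt=0.1, t_end=480.0):
    """Euler integration of solid -> dissolved -> absorbed mass (mg)."""
    solid, dissolved, absorbed = dose, 0.0, 0.0
    t = 0.0
    while t < t_end:
        conc = dissolved / volume
        # Dissolution rate scales with undissolved mass and the
        # distance of the local concentration from saturation
        r_diss = k_diss * solid * max(0.0, 1.0 - conc / c_sat)
        r_abs = k_abs * dissolved          # first-order absorption
        solid -= r_diss * dt
        dissolved += (r_diss - r_abs) * dt
        absorbed += r_abs * dt
        t += dt
    return solid, dissolved, absorbed

solid, dissolved, absorbed = simulate_dissolution()
```

In a mechanistic framework such as the one described, these terms would additionally be distributed along the GI tract and coupled to pH, micellar partitioning and gastric-emptying dynamics.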
Simulation outputs include full access to positional and temporal details of all variables, allowing for in-depth analysis of the absorption process and identification of the specific biopharmaceutics risk factors associated with the absorption of each compound. Detailed analyses of gCOAS simulations identified rate-limiting processes of absorption for four structurally diverse model drugs representing acidic salt, basic salt, neutral and ampholyte compounds. Effects of the digestive state on oral absorption of these model drugs were calculated on the basis of partitioning of un-ionized and ionized drug species into bile salt mixed micelles. In addition, simulations performed using transient physiological states, effected via the dynamic calculation of the human body's response to feeding, result in significantly different biopharmaceutics performance of the compounds when compared to simulations using static physiological parameters. Based on the results of the simulations, potential risk mitigation strategies for each model compound are proposed.
gCOAS possesses the flexibility and predictive capability to test various physiological and drug product properties to identify key performance risk factors influencing oral absorption. Specifically, it can identify risk factors contributing to poor or incomplete absorption, provide the necessary tools to explore risk mitigation strategies including solubilization, and provide guidance for confirmatory experiments with the goal of improving the drug product design process before clinical studies have been performed.
ECCE10 (Nice, France, September 27 - October 1, 2015)
1. Using modelling to improve filtration through PSD span reduction
Niall A. Mitchell, Sean K. Bermingham, Hassan S. Mumtaz, Process Systems Enterprise
The current production process of an active ingredient isolates the batch by a cooling crystallization process. A major bottleneck of this process is the isolation time of the product after crystallization, which can vary significantly between batches. From observation, the key variable that dictates the isolation time of the product is the width of the particle size distribution of the crystals obtained: the wider the particle size distribution, the longer the observed filtration time. Typically for this product, two distinct populations of fine particles and large single crystals are observed within a single batch. In this work an experimental scale-down study was carried out to replicate the plant process and to assess whether the filtration properties could be improved by modifying the conditions in the crystallizer. Seeding the batch with different quantities of seed with a narrow particle size distribution, and modification of the cooling profile, were carried out, with limited success in significantly narrowing the particle size distribution. Crystallization modeling using gCRYSTAL was employed to rationalize the experimental observations and to identify the key crystallization phenomena that are dominant in the process. It was observed that the controlling phenomena for this crystallization were the favorable kinetics for crystal growth and also attrition of the crystals once they had grown to a critical size. Having identified this from the modeling, it was then possible to devise additional experiments to reduce the width of the particle size distribution of this material, with the potential to improve plant operations.
2. Modelling, validation and optimization of a lab and bench scale batch crystallization process
Niall A. Mitchell, Sean K. Bermingham, Hassan S. Mumtaz, Process Systems Enterprise
Model-based scale-up and optimization is a powerful industrial technique for achieving the desired product quality, reducing the cost of experimentation as well as time to market. For complex crystallization processes, the population balance modelling technique is able to predict the effect of batch recipe on the particle size distribution (PSD) of the final product. gCRYSTAL enables the user to configure a population balance model focusing on the crystallization science and engineering rather than complex mathematical equations and the numerics of solution. Using an inorganic salt as the example compound, this study illustrates the application of gCRYSTAL in:
- Using lab-scale data to estimate parameters related to growth and secondary nucleation
- Predicting the PSD of a bench scale crystallizer
- Optimizing the recipe, with lab-scale validation, to maximize d50
It is found that the growth kinetics are well described by the 2-step growth model, while the attrition-based secondary nucleation kinetics are adequately described by the Evans et al. (1974) model, which includes a critical crystal size below which attrition is absent. The parameters are estimated using 5 seeded batch cooling crystallization lab experiments. The same parameters are then used in a model of the bench scale crystallizer (except the secondary nucleation rate constant, which is re-estimated as the tip speed is different), resulting in a good prediction of the PSD (see Figure 1). A recipe optimization is also carried out using gCRYSTAL and the optimized recipe is tested in the lab crystallizer, with reasonable agreement.
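The kinetic structure described above can be sketched in simplified form. The functional forms below are generic power-law/attrition expressions and all parameter values are invented for illustration; they are not the forms or values estimated in the study. Note how the tip-speed dependence of the attrition term motivates re-estimating the nucleation rate constant when the impeller tip speed changes between scales:

```python
# Simplified kinetic expressions of the kind referred to above
# (hypothetical parameter values; illustrative functional forms only).

def growth_rate(supersaturation, k_g=1e-7, g=1.5):
    """Power-law crystal growth rate (m/s)."""
    return k_g * supersaturation ** g

def secondary_nucleation_rate(supersaturation, crystal_size, tip_speed,
                              k_b=1e8, b=1.0, size_crit=50e-6):
    """Attrition-based secondary nucleation: zero below a critical
    crystal size, above which attrition (and hence nucleation) sets in;
    the rate also scales with impeller tip speed."""
    if crystal_size <= size_crit:
        return 0.0
    return (k_b * supersaturation ** b * tip_speed ** 2
            * (crystal_size - size_crit))
```

In a population balance model these rates would be evaluated across the full crystal size distribution rather than at a single representative size.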
3. Compartmental population balance modeling of wet granulation processes: Development of a flexible modeling tool for process validation and design
Dana Barrasso, David Slade, Sean K. Bermingham, Process Systems Enterprise
Wet granulation is a particle size enlargement process commonly used in the food, fertilizer, consumer products, and pharmaceutical industries to enhance flowability, reduce dust formation, and improve uniformity. Despite their widespread use, these processes are often operated empirically with low efficiency, in part due to a limited understanding of process behavior.
While recent research has advanced the understanding of the phenomena involved in granulation, quantitative tools describing these complex processes are less available. Population balance modeling techniques have been developed to describe changes in particle size during granulation. However, many of these models simplify the process, making assumptions that are not universally applicable, such as assuming ideal mixing, considering only a single, pure component, or excluding certain phenomena. A general modeling tool is desired that describes the detailed mechanisms of wet granulation on a flexible basis, quantitatively capturing the effects of material properties and process parameters on key product attributes, such as granule strength, composition, and size distributions. In this work, a flexible modeling framework is presented to consider the variety of regimes and phenomena found in high shear wet granulation, fluid bed granulation, and screw granulation processes. The model considers a multitude of possible phenomena, such as drop nucleation, granule wetting, growth through layering, consolidation, agglomeration, and breakage.
Using a compartmental approach, these phenomena are segregated into well-mixed zones that describe spatial behavior within the granulators. Spray zones and droplet addition zones represent regions where liquid initially contacts powder particles. Depending on the liquid addition mechanism, these zones result in particle surface wetting or nucleation of large wet granules. Within the circulation zone, the granules undergo consolidation, coalescence, and growth by layering. Breakage zones represent regions where the particles are subjected to high shear rates, as observed near a chopper or within the kneading elements of a screw. Residence time models are used to evaluate transfer between compartments. Further, the model allows for multiple solid and liquid components and is extended to a multi-dimensional population balance model to capture inhomogeneities and distributed particle properties. By tracking multiple components and multidimensional distributions in particle composition, mechanistic or semi-mechanistic rate expressions can be evaluated to consider the effects of particle and material properties on process behavior and product attributes.
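The compartmental idea can be sketched in miniature. The toy below (not the presented model, which tracks full multi-dimensional particle distributions rather than total mass) exchanges mass between two assumed well-mixed zones at rates set by hypothetical mean residence times, the same residence-time construction used to couple compartments:

```python
# Toy two-compartment exchange: mass cycling between a "spray" zone
# and a "circulation" zone, with transfer rates given by assumed mean
# residence times. All values are invented for illustration.

def simulate_zones(m_spray=0.0, m_circ=100.0, tau_spray=2.0,
                   tau_circ=20.0, dt=0.01, t_end=200.0):
    """Euler integration of inter-compartment mass transfer (kg, s)."""
    t = 0.0
    while t < t_end:
        flow_spray_out = m_spray / tau_spray   # spray -> circulation
        flow_circ_out = m_circ / tau_circ      # circulation -> spray
        m_spray += (flow_circ_out - flow_spray_out) * dt
        m_circ += (flow_spray_out - flow_circ_out) * dt
        t += dt
    return m_spray, m_circ

m_spray, m_circ = simulate_zones()
```

At steady state the holdups settle in proportion to the residence times (here m_spray/m_circ = tau_spray/tau_circ), which is why the compartment residence times directly control how much material is exposed to each zone's rate processes.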
Applications of this modeling tool for fluid bed granulation and twin screw granulation are presented to demonstrate the predictive capabilities of the model when calibrated to experimental data. Implications for process design and scale-up, flowsheeting, and process optimization are discussed.
4. Hierarchical fidelity modelling of co- and counter-current spray dryers: From psychrometric charts to hybrid CFD-population balance models
David Slade, Sean Bermingham, Hassan Mumtaz, Process Systems Enterprise
Spray drying is a commonly employed industrial process for producing powders with a wide variety of applications; such products include detergents, pharmaceuticals and food powders. The scale and design of the spray dryer is closely linked to the requirements of the product being manufactured, from large counter-current dryers used for detergents to much smaller, co-current designs used in pharmaceutical manufacture. Important critical quality attributes such as porosity, moisture content and size need to be closely controlled and monitored while maintaining an efficient manufacturing procedure.
The development of modelling tools of varying levels of detail allows for immediate assessment of operating conditions and ranges, as well as more exploratory investigation of possible operating spaces. These can be utilised at levels ranging from manufacturing to research and design.
Psychrometric charts can be modelled to quickly indicate the drying capacity at given operating conditions. An important addition is to consider the material's glass transition temperature, above which the material becomes sticky and can cause catastrophic fouling issues.
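The abstract does not name the correlation used for the glass transition; a common choice for solids/water mixtures is the Gordon-Taylor mixing rule, sketched below with hypothetical parameter values (the constant k is component-specific and would be fitted for the real material):

```python
# Gordon-Taylor mixing rule for the glass transition temperature of a
# solids/water mixture, plus a simple stickiness criterion. The value
# k=4.0 and the dry-solids Tg used in the test are hypothetical.

def gordon_taylor(w_solids, tg_solids, tg_water=136.0, k=4.0):
    """Mixture glass transition temperature (K).

    Tg = (w1*Tg1 + k*w2*Tg2) / (w1 + k*w2), with component 2 = water
    (Tg of water ~136 K), so moisture depresses the mixture Tg.
    """
    w_water = 1.0 - w_solids
    return ((w_solids * tg_solids + k * w_water * tg_water)
            / (w_solids + k * w_water))

def is_sticky(particle_temp, tg, offset=10.0):
    """Stickiness flag: particle temperature exceeds Tg plus a margin."""
    return particle_temp > tg + offset
```

Evaluated along the drying path on the psychrometric chart, such a check flags operating regions where particles risk fouling the dryer walls.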
At a higher level of detail, it is important to assess micro-scale phenomena as well as macro-scale implications. A first approach is to perform single-particle droplet drying experiments and use these to begin to configure a full model by characterising the drying behaviour of the material being dried. A well-mixed approximation of the dryer can then be utilised; this approach is often acceptable for smaller dryers, where the spatial gradients within the dryer can be ignored and the residence time of the particles estimated.
The ultimate goal is to capture the full complex dynamics of mixing, re-circulation, agglomeration and other phenomena in a comprehensive yet efficient manner. To achieve this, a novel, advanced process modelling approach can be adopted which couples CFD simulations of the spray drying tower to a population balance, multi-compartment model. Coalescence and drying are captured in the model, which makes use of the particle trajectories calculated within CFD to inform the multi-compartment model in terms of fluxes between zones. Quantifying the role the numerous process inputs have on the fluxes of particles between compartments reduces the need for repeated CFD runs, increasing R&D efficiency. Optimizing the drying process, while maintaining consistency in the product, may be achieved via a study of key parameters, thus enabling a reduction in the energy consumed during manufacture of the powder and giving further detail on approaches to control the qualities of the final product.
AIChE 2015 (Salt Lake City, UT, USA, November 8-13, 2015)
Dana Barrasso, David Slade, Sean K. Bermingham, Process Systems Enterprise
Although wet granulation processes are widely used in a range of industries, they are typically designed empirically and operated inefficiently. In order to improve these processes, predictive modeling tools are desired for process design, validation, and optimization. Recent advances have expanded the mechanistic understanding of these processes, enabling quantitative descriptions to be formed using population balance modeling techniques.
Although they share a common name, wet granulation processes in practice are operated using a range of configurations, resulting in different process outcomes. Further, various mechanisms drive wet granulation processes, including wetting, drying, particle flux, agglomeration, drop nucleation, breakage, consolidation, and layering. The dominant phenomena are often governed by equipment configuration and process conditions, and different types of granulators exhibit different behavior.
Consequently, there is no single population balance model for wet granulation processes. Instead, custom models must be configured to meet the needs of the application. Some considerations in model development include population balance dimensionality and grid resolution, representation of particle composition, and the use of compartmental models.
Oversimplified models are unable to capture realistic behavior and provide little additional insight into process outcomes, only representing idealized cases. In contrast, overly complex models are inefficient, and numerical challenges can mask the true results. These high-fidelity models can often be simplified by identifying and eliminating irrelevant details. Despite the broad range of wet granulation models developed in academia and industry, a unifying framework is needed to ensure consistency, establish best practices and facilitate technology and knowledge transfer.
In this study, various alternatives for modeling wet granulation are discussed. These configurations are unified in a flexible modeling framework that encompasses varying levels of fidelity. Through this framework, population balance dimensionality, compartmental configuration, and governing rate mechanisms can be customized to simulate a variety of wet granulation processes. Case studies of twin screw and fluid bed granulation processes are presented to demonstrate the implications of model alternatives and establish best practices for model-based design and operation of efficient and robust wet granulation processes.
Jun Zhang1, Frances Pereira2, Ravendra Singh1, Sean Bermingham2, Rohit Ramachandran1, Fernando J. Muzzio1 and Marianthi Ierapetritou1, (1)Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ, (2)Process Systems Enterprise Limited, London, United Kingdom
The pharmaceutical industry faces a "data rich" world, since data are continuously generated from the substantial experiments employed in process development activities. The rate of data generation has been estimated to double every month, and it continues to be driven by the development of rapid experimental techniques. Benefits of harnessing this accumulated data include: narrowing down the design space of the pharmaceutical process to be explored, reducing the number of experiments to be performed, and generating a clear landscape of target product and process performance. However, less attention has been given to data utilization, as indicated by a survey of the pharmaceutical industry that pointed to less than 10% of data being used. The reason for this inefficient data utilization is that the data lack systematic organization, and there is very little work on information extraction from such data sets. To address this issue, a systematic approach is developed which consists of material properties data representation, a search function and multivariate data analysis.
The data representation systematically represents material properties data generated by different characterization devices as a set of specifications, organized in an XML file (the extensible markup language defined by the W3C). Each data point of material properties is represented as an XML file with a unique filename as its ID, and in each XML file the measurements of material properties are organized in a hierarchical structure where each node represents a specification, i.e. a specific measurement's name with its associated value.
The search function, developed to allow the user to retrieve the desired material properties data, consists of a user interface, an ontology base and a comparison algorithm. The user interface allows the user to define the specifications of the data to be retrieved, e.g. API name, as well as the numerical criterion to be used for data searching, e.g. ±10% of a specified viscosity. The ontology base consists of a set of ontologies that describe the relationships of terminologies referred to in the materials properties data, e.g. paracetamol is a type of API. The ontology establishes logic links among the data that allow more related data to be explored, expanding the search space and helping the user to further understand the data. The comparison algorithm compares the user's specifications with each XML file, using the numerical criterion and the ontologies to return the data relevant to the user's specifications.
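The XML representation and the ±10% numerical criterion can be sketched with the standard library. The tag names, attribute layout and example record below are invented for illustration; the actual schema and ontology machinery of the presented framework are not specified in the abstract:

```python
# Minimal sketch of an XML material-property record and a tolerance-
# based numerical search over its specifications (stdlib only).
import xml.etree.ElementTree as ET

def make_record(record_id, properties):
    """Build an XML record: one <specification> node per measurement."""
    root = ET.Element("material", id=record_id)
    for name, value in properties.items():
        spec = ET.SubElement(root, "specification", name=name)
        spec.text = str(value)
    return root

def matches(root, name, target, tolerance=0.10):
    """True if the record's value for `name` lies within +/-tolerance
    (as a fraction of the target) of the target value."""
    for spec in root.iter("specification"):
        if spec.get("name") == name:
            value = float(spec.text)
            return abs(value - target) <= tolerance * target
    return False

# Hypothetical record: viscosity 95.0 matches a target of 100 at +/-10%
record = make_record("API-001", {"viscosity": 95.0, "d50": 120.0})
```

The described ontology base would sit in front of such a comparison step, expanding the query (e.g. from "API" to all materials known to be APIs) before the numerical matching runs.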
Based on the returned data, a partial least squares (PLS) regression algorithm is used to correlate the material properties information and process parameters with the process output in order to generate a predictive model. Such a predictive model ensures consistency of the information at each unit's input and output, and it greatly facilitates process simulation, especially for a whole process consisting of different unit models with inconsistent inputs and outputs.
As a case study, a specific blending process consisting of feeding, co-milling and blending, which has inconsistent model inputs and outputs, is used. gPROMS is selected as the simulation platform to demonstrate how this framework can be implemented and how the data can be fully and flexibly used for process simulation studies.
Zhang J, Hunter A, Zhou Y. A logic-reasoning based system to harness bioprocess experimental data and knowledge for design. Biochemical Engineering Journal, 2013, 74: 127-135.
Boukouvala F, Niotis V, Ramachandran R, Muzzio F, Ierapetritou M. An integrated approach for dynamic flowsheet modeling and sensitivity analysis of a continuous tablet manufacturing process. Computers & Chemical Engineering, 2012, 42: 30-47.
Pavol Rajniak1, Stefan Radl2, Johannes G. Khinast2, Michael Braun3, Daniela Steigmiller3, Alfred Fetscher3, Martin Maus3, Robert Schmidtke4, Matej Zadravec1, Sean Bermingham5, David Slade5, Maryam Askarishahi1 and Mohammadreza Ebrahimi1, (1)Research Center Pharmaceutical Engineering, Graz, Austria, (2)Institute of Process and Particle Engineering, Graz University of Technology, Graz, Austria, (3)Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach an der Riss, Germany, (4)Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach an der Riss, Germany, (5)Process Systems Enterprise Limited, London, United Kingdom
This presentation summarizes the ongoing activities of a collaboration project between four partnering organizations (see the list of co-authors and partners above). The aim of the project is to systematically investigate, develop, implement and validate the elements of a fluidised bed granulation (FBG) process model. The project thus considers the FBG model itself, its implementation in the gSOLIDS software, as well as the determination of the formulation-, process- and equipment-specific model parameters required for actual industrial FBG process simulations.
Despite being a widely-used unit operation, the application of FBG is still to some extent guided by empirical methods rather than by scientifically-based strategies. The development of realistic mathematical models, combined with suitable process measurements and evaluation, can yield powerful tools for knowledge-based control of process and product quality.
The complex interplay of various phenomena that govern the process dynamics of FBG at different scales poses a significant challenge in developing such models. Most importantly, a realistic FBG model has to incorporate phenomena associated with:
A/ Hydrodynamic modeling of the multi-phase mixture flow
B/ Heat and mass balances: Impact of process conditions on the granule moisture and growth
C/ Modeling of contact mechanics and granule formation
D/ Population balancing (PB) of agglomeration and breakage of different multi-component granules
It is understood that even in a highly detailed, mechanistic FBG model, some model parameters must be calibrated based on experimental data from the process. Clearly, carefully designed and executed experiments are important and necessary to support modelling activities. In summary, our contribution, aimed at mechanistic modelling of FBG, will discuss the following topics:
- Experimental studies of the FBG process for different formulations at different scales (BI)
- A novel experimental study of the temperature and humidity profiles in a laboratory granulator (BI)
- CFD and coupled CFD-DEM simulations of the two-phase (gas-solid) isothermal granular flow in different granulators (RCPE, IPPT)
- CFD simulations of the three-phase (gas-solid-droplet) flow and evaporation in a laboratory scale granulator (RCPE)
- Combination of the CFD models with population balancing (PB) in a laboratory scale granulator (RCPE)
- Development of a simple (ideal mixer approach) macroscopic heat and mass balances model (HMBM), its calibration by fitting to the BI experimental data, testing of different evaporation rate expressions to predict the particle moisture at different process conditions (RCPE)
- Development of a simple (ideal mixer approach) population balance model (PBM), fitting to the BI experimental data, testing of different agglomeration kernels (RCPE)
- Combination of the above HMBM and PBM to predict granule growth at different process conditions (RCPE)
- Comparison and combination of the RCPE models with existing gSOLIDS models and development of an optimal macroscopic model within the gSOLIDS environment (RCPE, PSE)
Partnering organizations realize that developing a detailed (CFD-based) model for the whole granulation process is still too ambitious a goal at present. The key strategy is therefore to use information from such detailed models for the development of more practical macroscopic models. For example, information about the flow patterns and the temperature and moisture profiles (both experimental and theoretical) should help define the "active zones" or "drying zones" of granulators (in which the agglomeration, drying and/or breakage take place) and their corresponding models.
John Hecht1, Vidyapati Vidyapati1 and Jianfeng Li2 (1)Process Technologies, Corporate Engineering, (2)Process Systems Enterprise, Cedar Knolls, NJ
This presentation describes a numerical simulation of segregation of a powder mixture when a storage bin is used in a powder process. A continuum framework tracks composition changes inside and exiting the bin during its dynamic operation. The simulation treats particle segregation on the moving free surfaces during filling and emptying, tracks the shape of the free surface, and includes velocity profiles within the powder during discharge. This simulation was programmed using gSOLIDS and therefore it can be connected to other unit operations for an overall powder process simulation. Results obtained by these simulations can help in mitigating segregation risk either by changing the equipment design, modifying the powder properties or by controlling the process parameters.
Richard Pattison1, Pieter Schmal2 and Constantinos C. Pantelides2, (1)McKetta Department of Chemical Engineering, University of Texas at Austin, Austin, TX, (2)Process Systems Enterprise
Periodic adsorption processes (PAPs) are attractive alternatives to traditional distillation and absorption separation systems due to their relatively low capital costs and energy requirements. PAPs have found many industrial applications, and recently have gained attention as an option for hydrogen purification for electricity generation and other purposes [3-4]. Optimizing the design of these processes is critical to realizing the potential cost and energy savings.
The modeling, simulation and optimization of PAPs is challenging for several reasons. The mathematical models of these systems are described by nonlinear partial differential algebraic equations (PDAEs), with variation in both temporal and spatial domains, and the boundary conditions change several times during each cycle simulation. A particularly challenging aspect is the need to compute the cyclic steady state (CSS) behavior of a given PAP, i.e., the state in which the variable trajectories are identical in successive cycles; it is the CSS that is of most practical interest for industrial purposes.
The conventional method for calculating the CSS is to simply integrate the process model until there is no change in the state trajectories between iterations. While this is an effective and proven method, it can often require the integration of many cycles (e.g., up to 4000 have been reported) to reach the CSS. Other methods have proposed formulating the problem as a system of nonlinear equations which require the values of the state variables at the beginning and end of each cycle to be identical. However, the relation between these two sets of variable values is determined by the temporal integration of a PDAE system over a cycle; calculating the Jacobian that is required for each Newton iteration is computationally expensive, and for large systems it may be practically intractable. An alternative approach is to simultaneously discretize both the temporal and spatial variations within the underlying PDAEs, thereby obtaining a very large set of nonlinear algebraic relations which can then be solved using a sparse solver. However, this imposes severe demands on both storage and computation; moreover, using a fixed discretization scheme may fail to result in the required accuracy of solution.
In this work, we propose an algorithm for computing the CSS in PAPs based on the Jacobian-Free Newton-Krylov (JFNK) method. The latter provides the benefit of super-linear convergence without the need to form or store the actual Jacobian. The linear system required for the Newton update is solved by an iterative linear solver which uses accurate approximations of Jacobian-vector products to minimize the linear residual. The Jacobian-vector product required at each iteration is obtained via the simulation of a single cycle of the PAP.
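The structure of the CSS problem can be illustrated on a toy scale (this is not the JFNK algorithm above, only the fixed-point problem it solves): one PAP "cycle" is stood in for by a linear map x_new = A x + b, so the CSS satisfies (I - A) x = b. The sketch compares the conventional cycle-after-cycle integration with a direct solve, which is what Newton-type methods accomplish for the real, nonlinear cycle map:

```python
# Toy CSS problem: the "cycle" is a linear map, whose fixed point is
# the cyclic steady state. Matrix and vector values are invented.
A = [[0.6, 0.2], [0.1, 0.7]]
B = [1.0, 0.5]

def cycle(x):
    """Stand-in for the temporal integration of one full PAP cycle."""
    return [A[0][0]*x[0] + A[0][1]*x[1] + B[0],
            A[1][0]*x[0] + A[1][1]*x[1] + B[1]]

def css_by_iteration(x, tol=1e-10, max_cycles=10000):
    """Conventional approach: integrate successive cycles until the
    state at the end of a cycle repeats the state at its start."""
    for n in range(max_cycles):
        x_new = cycle(x)
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new, n + 1
        x = x_new
    raise RuntimeError("CSS not reached")

x_css, n_cycles = css_by_iteration([0.0, 0.0])

# Direct solution of (I - A) x = B by Cramer's rule for the 2x2 case;
# a Newton/JFNK scheme achieves the analogous solve matrix-free.
det = (1 - A[0][0]) * (1 - A[1][1]) - A[0][1] * A[1][0]
x_direct = [((1 - A[1][1]) * B[0] + A[0][1] * B[1]) / det,
            ((1 - A[0][0]) * B[1] + A[1][0] * B[0]) / det]
```

For the real process, each evaluation of the cycle map costs a full PDAE cycle integration, which is why avoiding both the many iterations of the first approach and the explicit Jacobian of a naive Newton method pays off.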
Whilst JFNK methods are conceptually simple, their efficient operation is predicated on obtaining sufficiently fast convergence of the iterative linear solver. This is typically achieved via the choice of an appropriate preconditioner. We propose a novel preconditioner that can result in significant improvements in performance, both for single "one-off" CSS determination and in the context of parametric studies where a sequence of CSSs needs to be calculated for different values of process design and operation parameters.
We illustrate the benefits of the proposed method in the simulation of PAPs using a rapid pressure swing adsorption (RPSA) process and a multi-bed PSA process as case studies.
Nikolic, D., Giovanoglou, A., Georgiadis, M.C., and Kikkinides, E.S. Generic modeling framework for gas separations using multibed pressure swing adsorption processes. IECR, 47, 3156-3169, 2008.
Ruthven, D.M., Farooq, S., and Knaebel, K.M. Pressure Swing Adsorption. VCH Publishers: New York, 1994.
Bastos-Neto, M., Moeller, A., Staudt, R., Böhm, J., and Gläser, R. Dynamic bed measurements of CO adsorption on microporous adsorbents at high pressures for hydrogen purification processes. Separation and Purification Technology, 77, 251-260, 2011.
Delgado, J.A., Águeda, V.I., Uguina, M.Á., Sotelo, J.L., Brea, P., and Grande, C. Adsorption and diffusion of H2, CO, CH4 and CO2 in BPL activated carbon and 13X zeolite: evaluation of performance in PSA hydrogen purification by simulation. IECR, 53, 15414-15426, 2014.
Siettos, C.I., Pantelides, C.C., and Kevrekidis, I.G. Enabling dynamic process simulators to perform alternative tasks: A time-stepper-based toolkit for computer-aided analysis. IECR, 42, 6795-6801, 2003.
Vetukuri, S.R.R., Biegler, L.T., and Walther, A. An inexact trust-region algorithm for the optimization of periodic adsorption processes. IECR, 49, 12004-12013, 2010.
Nilchan, S. and Pantelides, C.C. On the optimization of periodic adsorption processes. Adsorption, 4, 113-147, 1998.
Knoll, D.A., and Keyes, D.E. Jacobian-free Newton-Krylov methods: a survey of approaches and applications. Journal of Computational Physics, 193, 357-397, 2004.
ACHEMA 2015 (Frankfurt, Germany, June 15-19, 2015)
1. Model-based Optimization of Batch Processes
Mayank Patel, Mark Matzopoulos, Bart de Groot, Hassan Mumtaz, Process Systems Enterprise Limited, UK
Batch and semi-batch processes contribute substantially to the global production of chemicals and consumer products, and have always dominated consumer product and pharmaceutical production. Now, as chemical companies move towards high value-added specialty chemicals, batch processes have achieved a renewed prominence throughout the chemical process industries. The rapid development of an efficient process to manufacture a new or modified product within an existing batch manufacturing facility is critical for success.
The dynamic nature of batch processing makes process optimization a challenging task unsuited to traditional steady-state simulation tools. The key challenge is appropriately translating recipes into models of operating procedures, which are the cornerstone of what defines a batch process. Such procedures specify what to do, when to do it and how to do it, and apply these actions to the dynamic models.
Advanced Process Modelling (APM) brings new capabilities to rigorous batch process optimization of reaction, crystallization and solids processes. Model-based engineering empowers the engineer to explore, evaluate and optimize alternative operating policies, making it possible to enhance product quality, minimise batch time to increase throughput subject to quality and other constraints, and optimize recipes for different equipment or product qualities.
This work advocates APM technology as the basis for an integrated batch process development methodology that can complement and enhance laboratory and pilot scale experimentation. We aim to demonstrate this through the steps required for model-based engineering of batch processes including:
- Dynamic modelling through high-fidelity model libraries
- Modelling complex operational sequences
- Optimizing batch recipes
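The recipe-optimization step above can be illustrated in miniature. The following is only a toy sketch, not the gPROMS libraries: a first-order batch reaction A → B with assumed Arrhenius kinetics, where the decision variable is the isothermal operating temperature and the objective is the batch time needed to reach a target conversion, subject to an upper temperature bound standing in for a quality or safety constraint.

```python
import numpy as np
from scipy.optimize import minimize

# Toy batch-recipe optimization: first-order reaction A -> B.
# All parameter values are assumed for illustration only.
A0, Ea, R = 1.0e7, 6.0e4, 8.314   # pre-exponential (1/s), activation energy (J/mol)
X_target = 0.95                   # required conversion at end of batch

def batch_time(T):
    """Analytic batch time to reach X_target for first-order kinetics at temperature T."""
    k = A0 * np.exp(-Ea / (R * T))          # rate constant at T
    return -np.log(1.0 - X_target) / k

# Minimise batch time over the allowed temperature window (upper bound = constraint)
res = minimize(lambda T: batch_time(T[0]), x0=[320.0], bounds=[(300.0, 360.0)])
T_opt = res.x[0]
print(f"Optimal temperature: {T_opt:.1f} K, batch time: {batch_time(T_opt)/60:.1f} min")
```

Because the batch time falls monotonically with temperature here, the optimizer drives the recipe to the temperature bound — the same qualitative behaviour (constraints becoming active at the optimum) that full dynamic recipe optimization exhibits.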
2. Design and scale-up of multitubular reactors
Bart de Groot, Mark Matzopoulos, Hassan Mumtaz, Mayank Patel, Process Systems Enterprise Limited, UK
Multitubular reactors are widely used for fixed-bed catalytic reactions, but good design and operation are challenging. A good design eliminates danger areas where hot spots can occur. This is achieved by adjusting reactor specifications to provide optimal heat control. Detailed modelling is the only reliable way to accurately predict heat transfer at all points throughout the reactor.
The approach allows the effects of changes to design variables (such as the catalyst characteristics, the catalyst/inert ratio, tube pitch, tube length, coolant velocity, feed reactant mass fraction, number of baffles, cooling water inlet temperature as well as the number of active reactors and numerous other quantities) on key performance indicators (such as throughput, conversion and yield, tube-side temperature profiles and catalyst lifetime) to be calculated to a very high degree of predictive accuracy.
Multitubular reactors are complicated units that have been very difficult to model in the past because of the complex catalytic reactions taking place in the tubes, the large number of tubes, and the interrelationship between exothermic reaction in the tubes and the shell-side cooling medium.
The approach takes into account the close coupling between the tube-side phenomena and the shell-side heat transfer and hydrodynamics. The main advances of the techniques described here over existing simulation approaches are that (1) tube models incorporate high-accuracy first-principles representation of catalytic reaction, species diffusion and bed heat transfer, including intra-particle and surface effects, (2) models are validated against companies' own laboratory and pilot plant data and (3) mathematical optimization techniques are used to determine the optimal values of multiple design variables simultaneously rather than by trial-and-error simulation.
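The essence of the tube-side model can be sketched with a much simpler stand-in than the first-principles models described above: a one-dimensional pseudo-homogeneous plug-flow balance for an exothermic first-order reaction with heat exchange to the shell-side coolant. All parameter values below are invented for demonstration; they are not taken from any plant or from the validated models.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 1-D pseudo-homogeneous tube model: exothermic first-order
# reaction A -> B with heat exchange to shell-side coolant at fixed Tc.
# All values are assumed for demonstration only.
k0, Ea, R = 1.0e8, 1.0e5, 8.314   # kinetics: pre-exponential (1/s), activation energy (J/mol)
dH = -1.2e5                       # heat of reaction (J/mol, exothermic)
rho_cp = 2.0e5                    # effective volumetric heat capacity (J/m3/K)
U_a = 2.0e4                       # wall heat-transfer coefficient x specific area (W/m3/K)
u = 1.0                           # superficial velocity (m/s)
C_in, T_in, Tc = 50.0, 620.0, 600.0

def rhs(z, y):
    C, T = y
    r = k0 * np.exp(-Ea / (R * T)) * C            # reaction rate (mol/m3/s)
    dC = -r / u                                   # species balance along the tube
    dT = (-dH * r - U_a * (T - Tc)) / (rho_cp * u)  # energy balance: heat release vs. cooling
    return [dC, dT]

sol = solve_ivp(rhs, (0.0, 5.0), [C_in, T_in], rtol=1e-8)  # 5 m tube
C_out, T_out = sol.y[:, -1]
T_hot = sol.y[1].max()                            # hot-spot temperature along the tube
print(f"Conversion: {1 - C_out/C_in:.2%}, hot spot: {T_hot:.1f} K")
```

Even this crude sketch reproduces the key design trade-off of the abstract: the axial temperature profile develops a hot spot whose height depends on the balance between heat release and shell-side cooling.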
The final design is verified using a computational fluid dynamics (CFD) model of the shell side to ensure that no mechanical constraints such as shell-side fluid velocities are violated.
An integrated modelling/experimental design methodology is presented which uses specially-designed experimental procedures to obtain accurate estimates of the key kinetic and heat transfer parameters from a limited number of carefully targeted experiments. Formal model-based parameter estimation techniques ensure that parameter interaction is taken into account and provide parameter confidence information for subsequent risk analysis.
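A simple stand-in for the parameter estimation step can be shown with Arrhenius kinetics: regressing the logarithm of synthetic rate data against reciprocal temperature yields the kinetic parameters together with approximate confidence intervals from the parameter covariance matrix. All numbers are invented for demonstration; formal model-based tools handle the nonlinear, multi-response case directly.

```python
import numpy as np

# Illustrative parameter estimation: recover Arrhenius parameters (k0, Ea)
# from synthetic noisy rate measurements via ln(r) = ln(k0) - (Ea/R)*(1/T).
# All values are assumed for demonstration only.
R = 8.314
true_k0, true_Ea = 2.0e6, 8.0e4   # "unknown" parameters used to generate the data

T = np.array([550.0, 575.0, 600.0, 625.0, 650.0])   # targeted experiment temperatures
rng = np.random.default_rng(1)
r = true_k0 * np.exp(-true_Ea / (R * T)) * (1 + 0.02 * rng.standard_normal(T.size))

# Linear least squares in (1/T, ln r); cov=True returns the parameter covariance
coef, cov = np.polyfit(1.0 / T, np.log(r), 1, cov=True)
Ea_hat = -coef[0] * R             # slope = -Ea/R
k0_hat = np.exp(coef[1])          # intercept = ln(k0)
Ea_ci = 1.96 * R * np.sqrt(cov[0, 0])   # approximate 95% confidence half-width
print(f"Ea = {Ea_hat:.3e} +/- {Ea_ci:.3e} J/mol, k0 = {k0_hat:.3e} 1/s")
```

The covariance matrix is what supplies the "parameter confidence information for subsequent risk analysis" mentioned above: wide intervals flag parameters that the targeted experiments have not yet pinned down.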
The approach is illustrated using the design of a high-performance new reactor for the manufacture of propylene oxide. Apart from reactor design, the techniques can be used for a wide range of applications including minimisation of design risk, new catalyst design and assessment, derivation of safe and effective start-up procedures, control design, and maximization of operational flexibility. The techniques described can also be used for operational decision support and troubleshooting. They apply to a variety of reactors, including those for the production of methanol, acrylic acid and Fischer-Tropsch synthesis for gas-to-liquid applications.
3. New techniques for optimizing periodic adsorption process operation
Mayank Patel, Mark Matzopoulos, Bart de Groot, Hassan Mumtaz, Process Systems Enterprise Ltd, London, UK
Periodic swing adsorption, be it of pressure, temperature or volume, is an attractive option for the final separation stage of a process, since it requires little energy input and is capable of producing a very pure product. There has been widespread development and application of pressure swing adsorption (PSA) systems, with applications expanding from drying and trace-component removal to bulk gas separations. Examples include hydrogen purification from steam methane reforming, with or without CO2 capture as a byproduct, air separation for the production of nitrogen, and dehydration of fermentation-derived ethanol as a fuel additive, to name a few. Given these extensive industrial applications, there is significant interest in an efficient modelling, simulation and optimization strategy.
Despite this interest, several challenges and major considerations must be taken into account before attempting to model these systems. Periodic swing adsorption systems are distributed in nature, with spatial and temporal variations both along and normal to the bulk flow. Characterising species diffusion through micropores and macropores adds to the level of detail required in the modelling effort. Swing adsorption processes are inherently dynamic and usually involve complex cyclic operating procedures that require the bed to undergo flow reversals. This is a particular concern, as flow reversals create major discontinuities in the system that can halt the progress of the numerical solution.
The use of mathematical optimization to improve processes is a well-understood concept; it can play a crucial role in optimizing design characteristics such as the overall length of the adsorption bed or the size of an adsorbent layer, as well as in determining the optimal durations of the steps that make up a cycle.
Results, however, are only meaningful once the system has reached cyclic steady state (CSS), whereby the state of the bed at the end of a cycle is identical to its state at the start of the cycle. Numerous efforts have been reported that exploit numerical techniques to reach CSS more efficiently. The gPROMS platform has long been a tool for modelling adsorption beds because it handles spatial variations, dynamics, complex operating procedures and the discontinuities inherent in PSA in an efficient manner.
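The CSS condition can be stated compactly: if one full cycle maps the start-of-cycle bed state y to an end-of-cycle state F(y), then CSS is the fixed point F(y*) = y*. The sketch below uses a contractive linear map as a stand-in for the (expensive) cycle simulation, and contrasts plain cycle-after-cycle simulation with solving the fixed-point equation directly — the kind of acceleration the efficiency efforts mentioned above pursue.

```python
import numpy as np

# Toy CSS illustration: F is a stand-in for simulating one full PSA cycle;
# in practice F(y) is the dynamic simulation of all steps in the cycle.
rng = np.random.default_rng(0)
A = 0.5 * rng.random((4, 4)) / 4      # contractive stand-in cycle map (assumed)
b = rng.random(4)
F = lambda y: A @ y + b

# Successive substitution: simulate cycle after cycle until start- and
# end-of-cycle states agree to tolerance.
y = np.zeros(4)
for n_cycles in range(1, 1000):
    y_next = F(y)
    if np.linalg.norm(y_next - y) < 1e-10:
        break
    y = y_next

# Accelerated alternative: solve F(y) - y = 0 directly. For this linear toy
# that is one linear solve; Newton-type methods play the same role in general.
y_direct = np.linalg.solve(np.eye(4) - A, b)
print(f"CSS reached after {n_cycles} cycles; fixed-point gap "
      f"{np.linalg.norm(y_next - y_direct):.1e}")
```

Successive substitution needs tens of cycle evaluations even for this mild toy map; for slowly-converging real beds the gap between cycle-by-cycle simulation and a direct fixed-point solve is far larger.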
Here we will present a modelling workflow describing the dynamic behaviour of a PSA unit, applicable to the study of both a single column and multiple columns of multi-layered beds undergoing a multitude of steps per cycle. We aim to present an approach both for those interested in increasing process understanding and for those aiming to reach CSS in a computationally efficient manner. The approach is illustrated by the separation of hydrogen from a five-component mixture (H2/CO2/CH4/CO/N2).
4. Whole plant optimization of ethylene production
Hassan Mumtaz, Mark Matzopoulos, Bart de Groot, Mayank Patel, Process Systems Enterprise Limited, UK
Natural gas and the petroleum fractions obtained after the primary fractionation of crude oil by distillation consist chiefly of saturated, paraffinic and naphthenic hydrocarbons, whose chemical reactivity is mediocre, precluding the development of diversified families of chemical compounds of varying complexity. This can only be achieved by using unsaturated aliphatic or aromatic hydrocarbons which, due to their many reactive potentialities, offer outstanding flexibility for organic synthesis. Acetylene, which was for many years the most widely used basic hydrocarbon in aliphatic chemistry, has gradually been superseded by ethylene, propylene and butadiene, according to the synthesis considered, owing to its high production cost. Despite the fourfold increase in the price of crude oil in 1973 and its subsequent steady increase, ethylene retained its economic advantage over acetylene from natural gas or from coal.
At the industrial level, this technique was first developed in the United States. As early as 1920, Union Carbide and Carbon Co. built a pilot plant operating on ethane and propane, and went on to create the first chemical complex using products derived from the pyrolysis of gas oil. In 1946, Shell Chemical built the first petrochemical complex at Stanlow, using refinery gases as the pyrolysis feedstock. During the 1940-1950 period, the minimum capacity of ethylene production plants grew progressively from 10 to 50 kton/year. Giant installations subsequently appeared, routinely producing 300 kton/year of ethylene from petrochemical naphtha.
Steam cracking primarily produces ethylene, but also propylene and, as secondary products, depending on the feedstock employed, a C4 cut rich in butadiene and a C5- cut with a high content of aromatics, particularly benzene.
This work comprises the detailed modelling and optimization of an ethylene plant processing fresh propane and recycle streams of ethane and propane.
Over 97% of the annual volume of ethylene produced is based on thermal cracking of petroleum hydrocarbons with steam, i.e., steam cracking or pyrolysis. The reactions are highly endothermic, so a high energy input is required. After this step, the stream is rapidly cooled in order to stop side reactions. The downstream processing starts with a water quench, followed by multistage compression during which the cracked gas is scrubbed for acid gas removal. The cracked gas is thereafter dehydrated by chilling and dried using molecular sieves. The water-free cracked gas then enters a distillation column train to separate and recover valuable products, such as ethylene and propylene.
Here we aim to understand both plant behaviour and its response to factors that may affect the recovery of propylene from the C3 splitter column: the rate of fouling over time, the reduction in propane feed flowrate required to compensate for fouling, and the role that margins available within the various unit operations in the flowsheet play in restoring throughput to fouling-free levels. Finally, formal mathematical optimization is applied to reduce the energy demand as well as the total annualized cost.
A. Chauvel and G. Lefebvre, Petrochemical Processes, Paris: Éditions Technip, 1989.
F. Ullman, Ullmann's Encyclopedia of Industrial Chemistry, Weinheim: VCH, 1985.
5. Modelling of fluid bed drying at different scales in drug product manufacture
Hassan Mumtaz, Process Systems Enterprise Limited, UK
An empirical approach to fluid bed drying scale-up is time-consuming and resource-intensive. In this case study, advanced process modelling is used to develop and validate a mechanism-based fluid bed drying model with placebo materials. The model is able to predict batch drying profiles for different initial moisture contents and granule sizes. In this work, we present the process of developing and validating the lab-scale model and explain the impact and importance of internally and externally limited drying on the final results. The model is then successfully applied to simulate the drying profile of a real drug product manufactured in a large-scale fluid bed dryer. The software package used in this work is gSOLIDS®, an advanced and rigorous modelling tool with flowsheeting capabilities (see Figure).
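The distinction between externally and internally limited drying can be sketched with a classic two-regime moisture balance: a constant-rate period while surface moisture is available (gas-side, i.e. externally limited) and a falling-rate period below a critical moisture content (diffusion, i.e. internally limited). All parameter values are assumed for illustration and are not taken from the validated gSOLIDS model described above.

```python
from scipy.integrate import solve_ivp

# Toy two-regime drying model; X is moisture content (kg water / kg dry solid).
# All values below are assumed for demonstration only.
X0 = 0.30          # initial moisture
Xc = 0.12          # critical moisture content: regime switch
Xeq = 0.01         # equilibrium moisture
k_ext = 2.0e-4     # externally limited (constant) drying rate, kg/kg/s
k_int = 1.0e-3     # internally limited rate constant, 1/s

def drying_rate(t, X):
    if X[0] > Xc:
        return [-k_ext]                 # constant-rate period (external limitation)
    return [-k_int * (X[0] - Xeq)]      # falling-rate period (internal limitation)

sol = solve_ivp(drying_rate, (0.0, 3600.0), [X0], max_step=5.0)  # 1 h of drying
X_final = sol.y[0, -1]
print(f"Moisture after 1 h: {X_final:.3f} kg/kg")
```

Which regime dominates the batch time is exactly what changes with initial moisture, granule size and scale — hence the emphasis in the abstract on identifying internally versus externally limited drying before scale-up.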
6. New techniques for urban and industrial wastewater systems optimization
Mark Matzopoulos, Process Systems Enterprise Limited, UK; Nicolas Descoins, Bluewatt Engineering, Lausanne, Switzerland; Leandro Salgueiro, Bluewatt Engineering, Lausanne, Switzerland
The operation of wastewater systems can consume significant amounts of energy, mostly in the form of electricity, although if plant design and operation are optimized, in many cases it is possible to become a net supplier of electricity to the grid. However, because of the complexity of the biological and chemical processes that make up treatment plants, and the continually varying influent loads, determining optimal design and operation is very challenging.
Wastewater systems optimization based on high-fidelity process models has been shown to reduce urban wastewater treatment plant energy consumption by up to 40% while maintaining water purity standards. This presentation demonstrates how high-fidelity models of biological and physico-chemical treatment reactors (for example, activated sludge reactors and anaerobic digesters), solid/liquid separation units and various mechanical process units are tuned against plant operating data and then deployed within an optimization framework to optimize key performance indicators (KPIs) such as plant energy efficiency or overall economics. Constraints typically include water quality measures such as N-NO3, COD and N-NH4 concentrations, and decision variables typically include dissolved air flow rates and O2 setpoints in the various reactors, recycle rates and the sludge extraction rate. It is possible at the same time to ensure that treatment plants can adapt to any anticipated change in incoming pollutant load, power and chemical costs, effluent purity standards or environmental regulations.
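The structure of such a constrained KPI optimization can be sketched with a single decision variable: choosing a dissolved-oxygen (DO) setpoint that minimises blower energy subject to an effluent ammonia limit. The two response curves below are invented toy correlations standing in for the calibrated plant model, not real plant behaviour.

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained optimization: one decision variable (DO setpoint, mg/L).
# Both response functions are assumed stand-ins for a calibrated plant model.
def energy_kwh(do_setpoint):
    return 120.0 * do_setpoint**1.5       # blower energy rises with DO setpoint

def effluent_nh4(do_setpoint):
    return 8.0 * np.exp(-1.2 * do_setpoint)  # effluent NH4 falls with aeration

nh4_limit = 1.0                            # mg/L effluent quality constraint

res = minimize(lambda x: energy_kwh(x[0]), x0=[2.0],
               bounds=[(0.5, 4.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: nh4_limit - effluent_nh4(x[0])}],
               method="SLSQP")
do_opt = res.x[0]
print(f"Optimal DO setpoint: {do_opt:.2f} mg/L, energy: {energy_kwh(do_opt):.0f} kWh")
```

As in the full plant problem, the energy optimum sits exactly where the water quality constraint becomes active: aerating any less violates the NH4 limit, any more wastes energy. The real formulation simply has many such decision variables and constraints at once.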
The approach is illustrated with reference to a Swiss urban wastewater treatment plant. The same approach can be applied to industrial biological treatment processes, and the model developed for such applications can be applied to accurately determine the effect on plant operation of upgrades and retro-fits.