Published Articles

2022

Jakeman, John; Friedman, Sam; Eldred, Michael; Tamellini, Lorenzo; Gorodetsky, Alex; Allaire, Douglas: Adaptive experimental design for multi-fidelity surrogate modeling of multi-disciplinary systems. Journal Article. In: International Journal for Numerical Methods in Engineering, vol. 123, iss. 12, pp. 2760-2790, 2022. doi:10.1002/nme.6958

Abstract: We present an adaptive algorithm for constructing surrogate models of multi-disciplinary systems composed of a set of coupled components. With this goal we introduce "coupling" variables with a priori unknown distributions that allow surrogates of each component to be built independently. Once built, the surrogates of the components are combined to form an integrated surrogate that can be used to predict system-level quantities of interest at a fraction of the cost of the original model. The error in the integrated surrogate is greedily minimized using an experimental design procedure that allocates the amount of training data used to construct each component surrogate based on the contribution of those surrogates to the error of the integrated surrogate. The multi-fidelity procedure presented is a generalization of multi-index stochastic collocation that can leverage ensembles of models of varying cost and accuracy, for one or more components, to reduce the computational cost of constructing the integrated surrogate. Extensive numerical results demonstrate that, for a fixed computational budget, our algorithm produces surrogates that are orders of magnitude more accurate than methods that treat the integrated system as a black box.

Zhang, Guanglu; Allaire, Douglas; Cagan, Jonathan: Reducing the Search Space for Global Minimum: A Focused Regions Identification Method for Least Squares Parameter Estimation in Nonlinear Models. Journal Article. In: ASME Journal of Computing and Information Science in Engineering, vol. 23, iss. 2, pp. 021006, 2022. doi:10.1115/1.4054440

Abstract: Important for many science and engineering fields, meaningful nonlinear models result from fitting such models to data by estimating the value of each parameter in the model. Since parameters in nonlinear models often characterize a substance or a system (e.g., mass diffusivity), it is critical to find the optimal parameter estimators that minimize or maximize a chosen objective function. In practice, iterative local methods (e.g., the Levenberg–Marquardt method) and heuristic methods (e.g., genetic algorithms) are commonly employed for least squares parameter estimation in nonlinear models. However, practitioners cannot know whether the parameter estimators derived through these methods are the optimal parameter estimators that correspond to the global minimum of the squared error of the fit. In this paper, a focused regions identification method is introduced for least squares parameter estimation in nonlinear models. Using expected fitting accuracy and derivatives of the squared error of the fit, this method rules out the regions in parameter space where the optimal parameter estimators cannot exist. Practitioners are guaranteed to find the optimal parameter estimators through an exhaustive search in the remaining regions (i.e., focused regions). The focused regions identification method is validated through two case studies in which a model based on Newton's law of cooling and the Michaelis–Menten model are fitted to two experimental data sets, respectively. These case studies show that the focused regions identification method can find the optimal parameter estimators and the corresponding global minimum effectively and efficiently.
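The initial-guess sensitivity that motivates this paper is easy to reproduce. The sketch below is not code from the paper; the substrate concentrations, noise level, and starting guesses are made up. It fits the Michaelis–Menten model v = Vmax*s/(Km + s) with SciPy's Levenberg–Marquardt least-squares routine from two different starting points; a poor guess can converge slowly or land on an inferior fit.

```python
# Illustrative only: Michaelis-Menten fitting with a local least-squares method,
# showing how the result depends on the initial guess.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# Hypothetical substrate concentrations and noisy rate measurements.
s = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
rng = np.random.default_rng(0)
v = michaelis_menten(s, vmax=2.0, km=1.5) + rng.normal(0, 0.05, s.size)

for guess in [(1.0, 1.0), (50.0, 100.0)]:   # a reasonable and a poor guess
    popt, _ = curve_fit(michaelis_menten, s, v, p0=guess, method="lm", maxfev=20000)
    sse = np.sum((v - michaelis_menten(s, *popt)) ** 2)
    print(f"start={guess} -> Vmax={popt[0]:.3f}, Km={popt[1]:.3f}, SSE={sse:.4f}")
```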
Molkeri, Abhilash; Khatamsaz, Danial; Couperthwaite, Richard; James, Jaylen; Arróyave, Raymundo; Allaire, Douglas; Srivastava, Ankit: On the importance of microstructure information in materials design: PSP vs PP. Journal Article. In: Acta Materialia, vol. 223, pp. 117471, 2022. doi:10.1016/j.actamat.2021.117471

Abstract: The focus of goal-oriented materials design is to find the necessary chemistry/processing conditions to achieve the desired properties. In this setting, a material's microstructure is either only used to carry out multiscale simulations to establish an invertible quantitative process-structure-property (PSP) relationship, or to rationalize a posteriori the underlying microstructural features responsible for the properties achieved. The materials design process itself, however, tends to be microstructure-agnostic: the microstructure only mediates the process-property (PP) connection and is, with some exceptions such as architected materials, seldom used for the optimization itself. While the existence of PSP relationships is the central paradigm of materials science, it would seem that for materials design one only needs to focus on PP relationships. In this work, we attempt to resolve whether 'PSP' is a superior paradigm for materials design in cases where the microstructure itself cannot be (directly) manipulated to optimize materials' properties. To this end, we formulate a novel microstructure-aware closed-loop multi-fidelity Bayesian optimization framework for materials design and rigorously demonstrate the importance of the microstructure information in the design process. The problem considered here involves finding the right combination of chemistry and processing parameters that maximizes a targeted mechanical property of a model dual-phase steel. Our results clearly show that an explicit incorporation of microstructure knowledge in the materials design framework significantly enhances the materials optimization process. We thus prove, in a computational setting, and for a particular representative problem where microstructure intervenes to influence properties of interest, that 'PSP' is superior to 'PP' when it comes to materials design.

2021

Couperthwaite, Richard; Khatamsaz, Danial; Molkeri, Abhilash; James, Jaylen; Srivastava, Ankit; Allaire, Douglas; Arróyave, Raymundo: The BAREFOOT Optimization Framework. Journal Article. In: Integrating Materials and Manufacturing Innovation, vol. 10, iss. 4, pp. 644-660, 2021. doi:10.1007/s40192-021-00235-2

Abstract: This work presents a description of the Batch Reification/Fusion Optimization Framework (BAREFOOT). BAREFOOT is a Bayesian optimization (BO) framework built specifically for materials optimization and design. The framework combines multi-fidelity model fusion with batch BO to enable accelerated materials design. The framework is built in Python and is available as open-source code. It offers the capability to run pure batch BO, pure multi-fidelity (sequential) BO, or a combination of both. Since BO relies on acquisition functions, many of the most commonly used acquisition functions are implemented in the framework. Finally, the framework is capable of both single- and multi-objective optimization. This work presents an overview of the framework and the calculation methods available, and demonstrates the framework's performance on generic optimization test functions.
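As a rough illustration of the loop that frameworks like BAREFOOT automate, the sketch below runs generic batch Bayesian optimization on a toy one-dimensional function: fit a Gaussian process to the evaluations so far, score candidates with expected improvement, and evaluate the top few candidates as a batch. This is a generic sketch under assumed settings (the kernel, batch rule, and test function are all made up), not the BAREFOOT API.

```python
# Generic batch Bayesian optimization sketch (not the BAREFOOT API).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):                      # assumed 1D test function
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, (4, 1))         # small initial design
y = objective(X).ravel()

for it in range(5):                    # five batch rounds
    gp = GaussianProcessRegressor(RBF(length_scale=0.5), alpha=1e-6).fit(X, y)
    cand = np.linspace(-2, 2, 400).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    batch = cand[np.argsort(ei)[-3:]]  # naive batch: the top-3 EI candidates
    X = np.vstack([X, batch])
    y = np.append(y, objective(batch).ravel())

print("best x =", X[y.argmax()].item(), "best y =", y.max())
```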
Khatamsaz, Danial; Molkeri, Abhilash; Couperthwaite, Richard; James, Jaylen; Arróyave, Raymundo; Srivastava, Ankit; Allaire, Douglas: Adaptive active subspace-based efficient multifidelity materials design. Journal Article. In: Materials & Design, vol. 209, pp. 110001, 2021. doi:10.1016/j.matdes.2021.110001

Abstract: Materials design calls for an optimal exploration and exploitation of the process-structure-property (PSP) relationships to produce materials with targeted properties. Recently, we developed and deployed a closed-loop multi-information source fusion (multi-fidelity) Bayesian optimization (BO) framework to optimize the mechanical performance of a dual-phase material by adjusting the material composition and processing parameters. While promising, BO frameworks tend to underperform as the dimensionality of the problem increases. Herein, we employ an adaptive active subspace method to efficiently handle the large dimensionality of the design space of a typical PSP-based material design problem within our multi-fidelity BO framework. Our adaptive active subspace method significantly accelerates the design process by prioritizing searches in the important regions of the high-dimensional design space. A detailed discussion of the various components, and a demonstration of three approaches to implementing the adaptive active subspace method within the multi-fidelity BO framework, are presented.
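The active subspace machinery referenced here reduces a high-dimensional design space to the few directions along which the objective actually varies: estimate C = E[grad f grad f^T] from sampled gradients and keep the leading eigenvectors. A minimal sketch on a toy function with an analytic gradient (the toy objective is an assumption, and the paper's contribution of adapting the subspace during optimization is not shown):

```python
# Minimal active-subspace estimate for a toy function (illustrative only).
import numpy as np

def grad_f(x):                 # f(x) = (a @ x)**2 varies only along direction a
    a = np.array([1.0, 0.5, 0.1, 0.0, 0.0])
    return 2 * (a @ x) * a

rng = np.random.default_rng(2)
G = np.array([grad_f(x) for x in rng.uniform(-1, 1, (500, 5))])
C = G.T @ G / len(G)           # Monte Carlo estimate of E[grad f grad f^T]
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
print("eigenvalues:", np.round(eigvals[order], 4))   # one dominant eigenvalue
W1 = eigvecs[:, order[:1]]     # leading eigenvector spans the active subspace
print("active direction:", np.round(W1.ravel(), 3))
```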
Khatamsaz, Danial; Peddareddygari, Lalith; Friedman, Samuel; Allaire, Douglas: Bayesian optimization of multiobjective functions using multiple information sources. Journal Article. In: AIAA Journal, vol. 59, iss. 6, pp. 1964-1974, 2021. doi:10.2514/1.J059803

Abstract: Multiobjective optimization is often a difficult task owing to the need to balance competing objectives. A typical approach to handling this is to estimate a Pareto frontier in objective space by identifying nondominated points. This task is typically computationally demanding owing to the need to incorporate information of high enough fidelity to be trusted in design and decision-making processes. In this work, we present a multi-information source framework for enabling efficient multiobjective optimization. The framework allows for the exploitation of all available information and considers both potential improvement and cost. The framework includes ingredients of model fusion, expected hypervolume improvement, and intermediate Gaussian process surrogates. The approach is demonstrated on a test problem and an aerostructural wing design problem.
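Expected hypervolume improvement builds on the hypervolume indicator: the volume of objective space dominated by the current nondominated set, measured against a reference point. For two minimized objectives it reduces to a sum of rectangle areas, as in this sketch (the front and reference point are made-up values):

```python
# 2D hypervolume (minimization) of a nondominated set w.r.t. a reference point.
# Expected hypervolume improvement scores a candidate by the expected growth
# of this quantity under the surrogate's posterior.
import numpy as np

def hypervolume_2d(points, ref):
    pts = points[np.argsort(points[:, 0])]        # sort by first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:                              # y decreases along the front
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

front = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.5]])
print(hypervolume_2d(front, ref=(4.0, 4.0)))      # 3*1 + 2*1 + 1*0.5 = 5.5
```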
Zhang, Guanglu; Allaire, Douglas; Cagan, Jonathan: Taking the Guess Work Out of the Initial Guess: A Solution Interval Method for Least-Squares Parameter Estimation in Nonlinear Models. Journal Article. In: ASME Journal of Computing and Information Science in Engineering, vol. 21, iss. 2, pp. 021011, 2021. doi:10.1115/1.4048811

Abstract: Fitting a specified model to data is critical in many science and engineering fields. A major task in fitting a specified model to data is to estimate the value of each parameter in the model. Iterative local methods, such as the Gauss–Newton method and the Levenberg–Marquardt method, are often employed for parameter estimation in nonlinear models. However, practitioners must guess the initial value for each parameter to initialize these iterative local methods. A poor initial guess can contribute to non-convergence of these methods or lead these methods to converge to a wrong or inferior solution. In this paper, a solution interval method is introduced to find the optimal estimator for each parameter in a nonlinear model that minimizes the squared error of the fit. To initialize this method, it is not necessary for practitioners to guess the initial value of each parameter in a nonlinear model. The method includes three algorithms that require different levels of computational power to find the optimal parameter estimators. The method constructs a solution interval for each parameter in the model. These solution intervals significantly reduce the search space for optimal parameter estimators. The method also provides an empirical probability distribution for each parameter, which is valuable for parameter uncertainty assessment. The solution interval method is validated through two case studies in which the Michaelis–Menten model and Fick's second law are fit to experimental data sets, respectively. These case studies show that the solution interval method can find optimal parameter estimators efficiently. A four-step procedure for implementing the solution interval method in practice is also outlined.

Khatamsaz, Danial; Molkeri, Abhilash; Couperthwaite, Richard; James, Jaylen; Arróyave, Raymundo; Allaire, Douglas; Srivastava, Ankit: Efficiently exploiting process-structure-property relationships in material design by multi-information source fusion. Journal Article. In: Acta Materialia, vol. 206, pp. 116619, 2021. doi:10.1016/j.actamat.2020.116619

Abstract: Materials design calls for the (inverse) exploitation of Process-Structure-Property (PSP) relationships to produce materials with targeted properties. Unfortunately, most materials design frameworks are not optimal, given resource constraints. Bayesian optimization (BO)-based frameworks are increasingly used in materials design as they balance the exploration and exploitation of design spaces. Most BO-based frameworks assume that the design space can be queried by a single information source (e.g., experiment or simulation). Recently, we demonstrated microstructure-sensitive design of alloys with a BO framework capable of exploiting multiple information sources. While promising, the previous framework is limited as it assumes that the optimal microstructure is always feasible and considers microstructural features as the design space. Herein, we sidestep this unwarranted assumption and instead consider that chemistry and processing conditions constitute the design space amenable to optimization. We demonstrate the efficacy of our expanded framework by optimizing the mechanical performance of a ferritic/martensitic dual-phase material by adjusting composition/processing parameters. The framework uses thermodynamic results to predict microstructural attributes, which are then used to predict the mechanical properties using a variety of micromechanical models and a microstructure-based finite element model. The final stage involves implementing model reification and information fusion, and a knowledge-gradient acquisition function to determine the next best design point and information sources to query. A detailed discussion of the various components and a demonstration of how the framework can be implemented under three sets of cost-based constraints is presented.

Couperthwaite, Richard; Allaire, Douglas; Arróyave, Raymundo: Utilizing Gaussian processes to fit high dimension thermodynamic data that includes estimated variability. Journal Article. In: Computational Materials Science, vol. 188, pp. 110133, 2021. doi:10.1016/j.commatsci.2020.110133

Abstract: CALPHAD-based thermodynamic modeling is an integral component of any ICME framework applied to the accelerated development of alloys. The utility of this type of analysis is that it provides knowledge about the impact of chemistry and (to some degree) processing on the phase stability of alloys. This information can later be passed on to other computational tools that can be used to narrow the experimental space that needs to be explored to arrive at optimal alloy designs. Two major challenges arise with these techniques: (1) it is difficult to interface the outputs of such models with other computational tools without significant overhead; (2) CALPHAD-based predictions tend to be agnostic with regard to uncertainty. The latter challenge arises because, in commercial thermodynamic packages, it is often not possible to access the model parameters as they tend to be encrypted, making the associated thermodynamic databases essentially 'black boxes'; methods that consider only the inputs to the models must therefore be used. In the current work, we develop surrogate models of CALPHAD-based phase stability predictions that fulfill two objectives: (1) they enable the offline evaluation of a component of the ICME model chain that can then be incorporated into a more complete alloy design scheme without the need to directly interface with a thermodynamic engine; (2) they allow for the consideration of uncertainty. We apply the framework to the investigation of the impact of chemistry and heat treatment on the phase constitution of commercial steel grades and evaluate the performance of this framework relative to direct thermodynamic calculations.
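At its core, a surrogate of the kind described above is a Gaussian process regression whose kernel carries a noise term for the estimated variability. A minimal sketch with synthetic one-dimensional data standing in for a CALPHAD output (the kernel choice, noise level, and sigmoid "phase fraction" are assumptions):

```python
# GP surrogate with a fitted noise term (illustrative stand-in for CALPHAD data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
X = np.linspace(900, 1200, 15).reshape(-1, 1)          # e.g., temperature in K
y = 1 / (1 + np.exp(-(X.ravel() - 1050) / 30))         # synthetic phase fraction
y += rng.normal(0, 0.02, y.size)                       # estimated variability

kernel = RBF(length_scale=50.0) + WhiteKernel(noise_level=4e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

Xq = np.array([[1000.0], [1100.0]])
mean, std = gp.predict(Xq, return_std=True)            # prediction + uncertainty
print(np.round(mean, 3), np.round(std, 3))
```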
2020

Couperthwaite, Richard; Molkeri, Abhilash; Khatamsaz, Danial; Srivastava, Ankit; Allaire, Douglas; Arróyave, Raymundo: Materials design through batch Bayesian optimization with multisource information fusion. Journal Article. In: JOM, vol. 72, iss. 12, pp. 4431-4443, 2020. doi:10.1007/s11837-020-04396-x

Abstract: Integrated computational materials engineering (ICME) calls for the integration of simulation tools and experiments to accelerate the development of materials. ICME approaches tend to be computationally costly, and recently, Bayesian optimization (BO) has been proposed as a way to make ICME more resource efficient. Conventional BO, however, is sequential (i.e., one-at-a-time) in nature, which makes it very time-consuming when the evaluation of a materials design choice is costly. While conventional high-throughput approaches enable the synthesis and characterization (or simulation) of materials in a parallel manner, they tend to be "open loop" and are unable to provide recommendations of what to try next once the parallel experiment/simulation has been carried out and analyzed. Here, we address this problem by introducing a batch BO framework that enables the exploration of the material's design space in a parallel fashion. We augment this approach by incorporating information fusion frameworks capable of integrating multiple information sources. Demonstrating the proposed approach in the computational design of dual-phase steel, we show that batch BO can result in a significant reduction in the time and resources needed to carry out the design task. The proposed approach has wider applicability, well beyond the ICME example used to demonstrate it.

Zhang, Guanglu; Morris, Elissa; Allaire, Douglas; McAdams, Daniel A: Research opportunities and challenges in engineering system evolution. Journal Article. In: ASME Journal of Mechanical Design, vol. 142, iss. 8, pp. 081401, 2020. doi:10.1115/1.4045908

Abstract: Research in engineering system evolution studies the technical performance (e.g., speed, capacity, and energy efficiency) and the functional and architectural changes of engineering systems (e.g., automobiles, aircraft, laptops, and smartphones) over time. The research results of engineering system evolution help designers, R&D managers, investors, and policy makers to generate innovative design concepts, set reasonable R&D targets, invest in promising technologies, and develop effective incentive policies. In this paper, we introduce engineering system evolution as an emerging research area. We develop a cyclic model to understand the general structure of engineering system evolution and summarize seven basic research questions accordingly. A review and analysis of prior research related to engineering system evolution is provided to identify the pioneering works in this promising research area. We also discuss the challenges and opportunities in the quantitative and qualitative study of engineering system evolution for future research.
Ghosh, Supriyo; Seede, Raiyan; James, Jaylen; Karaman, Ibrahim; Elwany, Alaa; Allaire, Douglas; Arroyave, Raymundo: Statistical modelling of microsegregation in laser powder-bed fusion. Journal Article. In: Philosophical Magazine Letters, vol. 100, iss. 6, pp. 271-282, 2020. doi:10.1080/09500839.2020.1753894

Abstract: Laser powder-bed fusion solidification of Ni–Nb alloys often results in cellular morphology. The solute microsegregation in these cells was determined using experiments and simulations, and the data obtained were utilised to explore the predictive capability of microsegregation models. The experimental 'ground truth' was compared with high-fidelity phase-field simulations as well as with analytical model predictions. Supervised statistical analyses, including linear regression, polynomial regression, and model reification, were employed to understand the merit of these approaches toward microsegregation estimation. The bias-variance and accuracy-interpretability trade-off limits were considered in the data analysis, which was consistent with our experimental findings.

Sanabria-Borbón, Adriana; Soto-Aguilar, Sergio; Estrada-López, Johan; Allaire, Douglas; Sánchez-Sinencio, Edgar: Gaussian-process-based surrogate for optimization-aided and process-variations-aware analog circuit design. Journal Article. In: Electronics, vol. 9, iss. 4, pp. 685, 2020. doi:10.3390/electronics9040685

Abstract: Optimization algorithms have been successfully applied to the automatic design of analog integrated circuits. However, many of the existing solutions rely on expensive circuit simulations or use fully customized surrogate models for each particular circuit and technology. Therefore, the development of an easily adaptable, low-cost, and efficient tool that guarantees resiliency to variations of the resulting design remains an open research area. In this work, we propose a computationally low-cost surrogate model for multi-objective optimization-based automated analog integrated circuit (IC) design. The surrogate has three main components: a set of Gaussian process regression models of the technology's parameters, a physics-based model of the MOSFET device, and a set of equations of the performance metrics of the circuit under design. The surrogate model is inserted into two different state-of-the-art optimization algorithms to prove its flexibility. The efficacy of our surrogate is demonstrated through simulation validation across process corners in three different CMOS technologies, using three representative circuit building blocks that are commonly encountered in mainstream analog/RF ICs. The proposed surrogate is 69X to 470X faster at evaluation compared with circuit simulations.

Burrows, B.; Allaire, D.: Nonlinear Kalman Filtering With Expensive Forward Models via Measure Change. Journal Article. In: ASME Journal of Dynamic Systems, Measurement, and Control, vol. 142, no. 2, pp. 021006, 2020. doi:10.1115/1.4045323

Abstract: Filtering is a subset of a more general probabilistic estimation scheme for estimating unobserved parameters from observed measurements. For nonlinear, high-speed applications, the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) are common estimators; however, expensive and strongly nonlinear forward models remain a challenge. In this paper, a novel Kalman filtering algorithm for nonlinear systems is developed, where the numerical approximation is achieved via a change of measure. The accuracy is identical in the linear case and superior in two nonlinear test problems: a challenging 1D benchmarking problem and a 4D structural health monitoring problem. This increase in accuracy is achieved without the need for tuning parameters, relying instead on a more complete approximation of the underlying distributions than the unscented transform. In addition, when expensive forward models are used, we achieve a significant reduction in computational cost without resorting to model approximation.
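To give a sense of what filtering by change of measure looks like in its simplest form, the sketch below performs a generic importance-weighted measurement update: samples drawn under the prior are reweighted by the measurement likelihood so that weighted moments approximate the posterior. This is only a schematic illustration with an assumed toy forward model, not the algorithm developed in the paper.

```python
# Generic importance-weighted measurement update (a schematic illustration of
# change of measure in filtering; NOT the algorithm from the paper).
import numpy as np

def forward_model(x):               # assumed nonlinear measurement operator
    return np.sin(x) + 0.1 * x**2

rng = np.random.default_rng(4)
prior_mean, prior_std, meas, meas_std = 1.0, 0.8, 1.2, 0.1

x = rng.normal(prior_mean, prior_std, 5000)                  # prior samples
logw = -0.5 * ((meas - forward_model(x)) / meas_std) ** 2    # log-likelihoods
w = np.exp(logw - logw.max())
w /= w.sum()                        # normalized importance weights

post_mean = np.sum(w * x)           # weighted posterior moments
post_var = np.sum(w * (x - post_mean) ** 2)
print(f"posterior mean={post_mean:.3f}, std={np.sqrt(post_var):.3f}")
```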
Attari, Vahid; Honarmandi, Pejman; Duong, Thien; Sauceda, Daniel; Allaire, Douglas; Arroyave, Raymundo: Uncertainty propagation in a multiscale CALPHAD-reinforced elastochemical phase-field model. Journal Article. In: Acta Materialia, vol. 183, pp. 452-470, 2020. doi:10.1016/j.actamat.2019.11.031

Abstract: ICME approaches provide decision support for materials design by establishing quantitative process-structure-property relations. Confidence in the decision support, however, must be achieved by establishing uncertainty bounds in ICME model chains. The quantification and propagation of uncertainty nevertheless remains a rather unexplored aspect of computational materials science approaches. Moreover, traditional uncertainty propagation frameworks tend to be limited in cases with computationally expensive simulations. A rather common and important model chain is that of CALPHAD-based thermodynamic models of phase stability coupled to phase-field models for microstructure evolution. Propagation of uncertainty in these cases is challenging not only due to the sheer computational cost of the simulations but also because of the high dimensionality of the input space. In this work, we present a framework for the quantification and propagation of uncertainty in a CALPHAD-based elastochemical phase-field model. We motivate our work by investigating the microstructure evolution in Mg2SixSn thermoelectric materials. We first carry out a Markov chain Monte Carlo-based inference of the CALPHAD model parameters for this pseudobinary system and then use advanced sampling schemes to propagate uncertainties across a high-dimensional simulation input space. Through high-throughput phase-field simulations, we generate 200,000 time series of synthetic microstructures and use machine learning approaches to understand the effects of propagated uncertainties on the microstructure landscape of the system under study. The microstructure dataset has been curated in the Open Phase-field Microstructure Database (OPMD), available at http://microstructures.net.

2019

Ghoreishi, S Fatemeh; Thomison, William; Allaire, Douglas: Sequential Information-Theoretic and Reification-Based Approach for Querying Multi-Information Sources. Journal Article. In: AIAA Journal of Aerospace Information Systems, vol. 16, iss. 12, pp. 575-587, 2019. doi:10.2514/1.I010753

Abstract: While the growing number of computational models available to designers can solve many problems, it complicates the process of properly using the information provided by each simulator. It may seem intuitive to select the model with the highest accuracy, or fidelity, as decision makers want the greatest degree of certainty to increase their efficacy. However, high-fidelity models often come at a high computational expense. While comparatively lacking in veracity, low-fidelity models do contain some degree of useful information that can be obtained at a low cost. We propose a sequential method to use this information to generate a fused model with superior predictive capability to any of its constituent models. Our methodology estimates the correlation between each model using a model reification approach that eliminates the observational data requirement. The correlation is then used in an updating procedure whereby uncertain outputs from multiple models may be fused together to better estimate some quantity or quantities of interest. These ingredients are used in a decision-theoretic manner to query multiple information sources sequentially, achieving the maximum knowledge about the fused model in as few information source evaluations as possible and at minimum cost.

Ghoreishi, Seyede Fatemeh; Molkeri, Abhilash; Arróyave, Raymundo; Allaire, Douglas; Srivastava, Ankit: Efficient use of multiple information sources in material design. Journal Article. In: Acta Materialia, vol. 180, pp. 260-271, 2019. doi:10.1016/j.actamat.2019.09.009

Abstract: We present a general framework for the design/optimization of materials that is capable of accounting for multiple information sources available to the materials designer. We demonstrate the framework through the microstructure-based design of multi-phase microstructures. Specifically, we seek to maximize the strength-normalized strain-hardening rate of a dual-phase ferritic/martensitic steel through a multi-information source Bayesian optimal design strategy. We assume that we have multiple sources of information with varying degrees of fidelity as well as cost. The available information from all sources is fused through a reification approach, and then a sequential experimental design is carried out. The experimental design seeks not only to identify the most promising region in the materials design space relative to the objective at hand, but also to identify the source of information that should be used to query this point in the decision space. The selection criterion for the source used accounts for the discrepancy between the source and the 'ground truth' as well as its cost. It is shown that when there is a hard constraint on the budget available to carry out the optimization, accounting for the cost of querying individual sources is essential.
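The fusion step shared by the two papers above combines correlated, approximately unbiased model estimates into a single minimum-variance estimate. A standard form of that rule, stated generically (the papers obtain the required covariance via reification rather than from observational data), is:

```latex
% Minimum-variance fusion of correlated, unbiased estimates
% \mu = (\mu_1, \dots, \mu_n)^\top with deviation covariance \Sigma;
% e denotes the vector of ones.
\mu_{\text{fused}} = \frac{e^{\top}\Sigma^{-1}\mu}{e^{\top}\Sigma^{-1}e},
\qquad
\sigma^{2}_{\text{fused}} = \frac{1}{e^{\top}\Sigma^{-1}e}.
```

For two independent sources this reduces to familiar inverse-variance weighting: the fused mean is (mu_1/sigma_1^2 + mu_2/sigma_2^2)/(1/sigma_1^2 + 1/sigma_2^2).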
Ghoreishi, Seyede Fatemeh; Friedman, Samuel; Allaire, Douglas. Adaptive dimensionality reduction for fast sequential optimization with Gaussian processes. Journal Article. In: ASME Journal of Mechanical Design, vol. 141, iss. 7, pp. 071404, 2019. doi:10.1115/1.4043202

Abstract: Available computational models for many engineering design applications are both expensive and of a black-box nature. This renders traditional optimization techniques difficult to apply, including gradient-based optimization and expensive heuristic approaches. For such situations, Bayesian global optimization approaches, which both explore and exploit a true function while building a metamodel of it, are applied. These methods often rely on a set of alternative candidate designs over which a querying policy is designed to search. For even modestly high-dimensional problems, such an alternative set approach can be computationally intractable, due to the reliance on excessive exploration of the design space. To overcome this, we have developed a framework for the optimization of expensive black-box models, which is based on active subspace exploitation and a two-step knowledge gradient policy. We demonstrate our approach on three benchmark problems and a practical aerostructural wing design problem, where our method performs well against traditional direct application of Bayesian global optimization techniques.
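The active-subspace ingredient follows the standard construction (Constantine-style): eigendecompose the empirical covariance of sampled gradients and keep the dominant eigenvectors. A minimal sketch on a toy ridge function, where all data and names are illustrative:

```python
import numpy as np

def active_subspace(grads, k):
    """Estimate a k-dimensional active subspace from sampled gradients:
    the leading eigenvectors of C = E[grad f grad f^T]."""
    C = grads.T @ grads / grads.shape[0]          # (d, d) gradient covariance
    eigval, eigvec = np.linalg.eigh(C)            # ascending eigenvalues
    order = np.argsort(eigval)[::-1]
    return eigvec[:, order[:k]], eigval[order]    # columns span the active subspace

# Toy function f(x) = exp(a . x): every gradient is parallel to a,
# so the active subspace is one-dimensional along a.
rng = np.random.default_rng(0)
d, n = 10, 200
a = rng.normal(size=d)
X = rng.uniform(-1, 1, size=(n, d))
grads = np.exp(X @ a)[:, None] * a[None, :]       # rows are gradient samples
W1, eigvals = active_subspace(grads, k=1)
print(np.abs(W1[:, 0] @ a) / np.linalg.norm(a))   # ~1: aligned with a
```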
Zhang, Guanglu; Allaire, Douglas; Shankar, Venkatesh; McAdams, Daniel A. A case against the trickle-down effect in technology ecosystems. Journal Article. In: PLoS One, vol. 14, iss. 6, pp. e0218370, 2019. doi:10.1371/journal.pone.0218370

Abstract: Technology evolution describes a change in a technology's performance over time. The modeling of technology evolution is crucial for designers, entrepreneurs, and government officials to set reasonable R&D targets, invest in promising technology, and develop effective incentive policies. Scientists and engineers have developed several mathematical functions, such as the logistic function and the exponential function (Moore's Law), to model technology evolution. However, these models focus on how a technology evolves in isolation and do not consider how the technology interacts with other technologies. Here, we extend the Lotka-Volterra equations from community ecology to model a technology ecosystem with system, component, and fundamental layers. We model the technology ecosystem of passenger aircraft using the Lotka-Volterra equations. The results show a limited trickle-down effect in the technology ecosystem, where we refer to the impact from an upper layer technology to a lower layer technology as a trickle-down effect. The limited trickle-down effect suggests that the advance of the system technology (passenger aircraft) is not able to automatically promote the performance of the component technology (turbofan aero-engine) and the fundamental technology (engine blade superalloy) that constitute the system. Our research warns that it may not be effective to maintain the prosperity of a technology ecosystem through government incentives on system technologies only. Decision makers should consider supporting the innovations of key component or fundamental technologies.
Swischuk, Renee; Allaire, Douglas. A machine learning approach to aircraft sensor error detection and correction. Journal Article. In: ASME Journal of Computing and Information Science in Engineering, vol. 19, iss. 4, pp. 041009, 2019. doi:10.1115/1.4043567

Abstract: Sensors are crucial to modern mechanical systems. The location of these sensors can often make them vulnerable to outside interferences and failures, and the use of sensors over a lifetime can cause degradation and lead to failure. If a system has access to redundant sensor output, it can be trained to autonomously recognize errors in faulty sensors and learn to correct them. In this work, we develop a novel data-driven approach to detect sensor failures and predict the corrected sensor data using machine learning methods in an offline/online paradigm. Autocorrelation is shown to provide a global feature of failure data capable of accurately classifying the state of a sensor to determine if a failure is occurring. Feature selection of the redundant sensor data in combination with k-nearest neighbors regression is used to predict the corrected sensor data rapidly, while the system is operational. We demonstrate our methodology on flight data from a four-engine commercial jet that contains failures in the pitot static system resulting in inaccurate airspeed measurements.
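A minimal sketch of the two ingredients named in the abstract, using synthetic stand-in data and an illustrative failure threshold (not the paper's): a lag-1 autocorrelation feature for detection, and k-nearest neighbors regression over redundant channels for correction.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def lag1_autocorr(window):
    """Lag-1 autocorrelation of a sensor window; failed or noise-dominated
    signals tend to decorrelate, which is usable as a detection feature."""
    x = window - window.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

# Offline: train on healthy flight data (redundant channels -> true airspeed).
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 3))             # stand-ins for redundant channels
y_train = X_train @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.normal(size=500)
knn = KNeighborsRegressor(n_neighbors=5).fit(X_train, y_train)

# Online: if the monitored channel looks failed, predict a corrected value.
window = rng.normal(size=64)                    # e.g., a noisy pitot trace
if lag1_autocorr(window) < 0.2:                 # threshold is illustrative
    corrected = knn.predict(rng.normal(size=(1, 3)))
    print("corrected reading:", corrected[0])
```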
Talapatra, Anjana; Boluki, Shahin; Honarmandi, Pejman; Solomou, Alexandros; Zhao, Guang; Ghoreishi, Seyede Fatemeh; Molkeri, Abhilash; Allaire, Douglas; Srivastava, Ankit; Qian, Xiaoning; Dougherty, Edward; Lagoudas, Dimitris; Arróyave, Raymundo. Experiment design frameworks for accelerated discovery of targeted materials across scales. Journal Article. In: Frontiers in Materials, vol. 6, pp. 82, 2019. doi:10.3389/fmats.2019.00082

Abstract: Over the last decade, there has been a paradigm shift away from labor-intensive and time-consuming materials discovery methods, and materials exploration through informatics approaches is gaining traction at present. Current approaches are typically centered around the idea of achieving this exploration through high-throughput (HT) experimentation/computation. Such approaches, however, do not account for the practicalities of resource constraints, which eventually result in bottlenecks at various stages of the workflow. Regardless of how many bottlenecks are eliminated, the fact that ultimately a human must make decisions about what to do with the acquired information implies that HT frameworks face hard limits that will be extremely difficult to overcome. Recently, this problem has been addressed by framing the materials discovery process as an optimal experiment design problem. In this article, we discuss the need for optimal experiment design, the challenges in its implementation, and finally discuss some successful examples of materials discovery via experiment design.
Ghoreishi, Seyede Fatemeh; Allaire, Douglas. Multi-information source constrained Bayesian optimization. Journal Article. In: Structural and Multidisciplinary Optimization, vol. 59, iss. 3, pp. 977-991, 2019. doi:10.1007/s00158-018-2115-z

Abstract: Design decisions for complex systems often can be made or informed by a variety of information sources. When optimizing such a system, the evaluation of a quantity of interest is typically required at many different input configurations. For systems with expensive-to-evaluate information sources, the optimization task can potentially be computationally prohibitive using traditional techniques. This paper presents an information-economic approach to the constrained optimization of a system with multiple available information sources. The approach rigorously quantifies the correlation between the discrepancies of different information sources, which enables the overcoming of information source bias. All information is exploited efficiently by fusing newly acquired information with that previously evaluated. Independent decision-making is achieved by developing a two-step look-ahead utility policy and an information gain policy for the objective function and constraints, respectively. The approach is demonstrated on a one-dimensional example test problem and an aerodynamic design problem, where it is shown to perform well in comparison to traditional multi-information source techniques.
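The paper's two-step look-ahead utility is not reproduced here, but the common baseline it relates to is constrained expected improvement: expected improvement weighted by the probability of feasibility under the constraint model. A sketch, assuming Gaussian-process posterior summaries (means and standard deviations at candidate points) are already in hand; the numbers are placeholders:

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sigma_f, f_best, mu_c, sigma_c):
    """Expected improvement (minimization) weighted by the probability
    that the constraint c(x) <= 0 is satisfied."""
    z = (f_best - mu_f) / sigma_f
    ei = sigma_f * (z * norm.cdf(z) + norm.pdf(z))
    pof = norm.cdf(-mu_c / sigma_c)          # P[c(x) <= 0]
    return ei * pof

# GP posterior summaries at three candidate points (placeholders).
mu_f = np.array([0.9, 0.5, 0.7]); sigma_f = np.array([0.2, 0.3, 0.1])
mu_c = np.array([-0.5, 0.4, -0.1]); sigma_c = np.array([0.2, 0.2, 0.2])
scores = constrained_ei(mu_f, sigma_f, f_best=0.8, mu_c=mu_c, sigma_c=sigma_c)
print("next query:", np.argmax(scores))
```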
Ghosh, Supriyo; Mahmoudi, Mohamad; Johnson, Luke; Elwany, Alaa; Arroyave, Raymundo; Allaire, Douglas. Uncertainty analysis of microsegregation during laser powder bed fusion. Journal Article. In: Modelling and Simulation in Materials Science and Engineering, vol. 27, iss. 3, pp. 034002, 2019. doi:10.1088/1361-651X/ab01bf

Abstract: Quality control in additive manufacturing can be achieved through variation control of the quantity of interest (QoI). We choose in this work the microstructural microsegregation to be our QoI. Microsegregation results from the spatial redistribution of a solute element across the solid–liquid interface that forms during solidification of an alloy melt pool during the laser powder bed fusion process. Since the process as well as the alloy parameters contribute to the statistical variation in microstructural features, uncertainty analysis of the QoI is essential. High-throughput phase-field simulations estimate the solid–liquid interfaces that grow for the melt pool solidification conditions that were estimated from finite element simulations. Microsegregation was determined from the simulated interfaces for different process and alloy parameters. Correlation, regression, and surrogate model analyses were used to quantify the contribution of different sources of uncertainty to the QoI variability. We found negligible contributions of thermal gradient and Gibbs–Thomson coefficient and considerable contributions of solidification velocity, liquid diffusivity, and segregation coefficient on the QoI. Cumulative distribution functions and probability density functions were used to analyze the distribution of the QoI during solidification. Our approach, for the first time, identifies the uncertainty sources and frequency densities of the QoI in the solidification regime relevant to additive manufacturing.
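One of the regression-based screenings of the kind mentioned above can be illustrated with standardized regression coefficients, whose squares approximate variance contributions when the response is near-linear. A toy sketch (inputs and response are synthetic stand-ins, not the paper's data):

```python
import numpy as np

def standardized_regression_coefficients(X, y):
    """SRCs from a linear fit of standardized inputs/outputs; their squares
    approximate fractional variance contributions for near-linear models."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

# Toy stand-in: a QoI dominated by two of four uncertain inputs.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))                 # e.g., velocity, diffusivity, ...
y = 2.0 * X[:, 0] + 0.8 * X[:, 1] + 0.05 * rng.normal(size=1000)
print(np.round(standardized_regression_coefficients(X, y) ** 2, 3))
```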
Zhang, Guanglu; Allaire, Douglas; McAdams, Daniel A; Shankar, Venkatesh. Generating Technology Evolution Prediction Intervals Using a Bootstrap Method. Journal Article. In: Journal of Mechanical Design, 2019. doi:10.1115/1.4041860

Abstract: Technology evolution prediction is critical for designers, business managers, and entrepreneurs to make important decisions during product development planning, such as R&D investment and outsourcing. In practice, designers want to supplement point forecasts with prediction intervals to assess future uncertainty and make contingency plans accordingly. However, prediction interval generation for technology evolution has received scant attention in the literature. In this paper, we develop a generic method that uses bootstrapping to generate prediction intervals for technology evolution. The method we develop can be applied to any model that describes technology performance incremental change. We consider parameter uncertainty and data uncertainty and establish their empirical probability distributions. We determine an appropriate confidence level to generate prediction intervals through a holdout sample analysis rather than specify that the confidence level equals 0.05, as is typically done in the literature. In addition, our method provides the probability distribution of each parameter in a prediction model. The probability distribution is valuable when parameter values are associated with the impact factors of technology evolution. We validate our method to generate prediction intervals through two case studies of central processing units and passenger airplanes. These case studies show that the prediction intervals generated by our method cover every actual data point in the holdout sample tests. We outline four steps to generate prediction intervals for technology evolution prediction in practice.
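A minimal residual-bootstrap sketch in the spirit of the method: resampling residuals captures data uncertainty, and refitting on each bootstrap sample captures parameter uncertainty. It assumes a linear trend in log performance (Moore's-law-like growth); the data are synthetic and the fixed 90% level is illustrative, whereas the paper calibrates the level by holdout analysis.

```python
import numpy as np

def bootstrap_prediction_interval(t, z, t_new, B=2000, alpha=0.1, rng=None):
    """Residual-bootstrap prediction interval for a linear trend in z
    (e.g., z = log performance, so the trend is exponential growth)."""
    rng = rng or np.random.default_rng()
    b1, b0 = np.polyfit(t, z, 1)
    resid = z - (b0 + b1 * t)
    preds = np.empty(B)
    for b in range(B):
        z_b = b0 + b1 * t + rng.choice(resid, size=len(z), replace=True)
        c1, c0 = np.polyfit(t, z_b, 1)              # refit: parameter uncertainty
        preds[b] = c0 + c1 * t_new + rng.choice(resid)  # add data uncertainty
    return np.quantile(preds, [alpha / 2, 1 - alpha / 2])

# Moore's-law-like data: capability doubles every two years, plus noise.
rng = np.random.default_rng(3)
t = np.arange(2000, 2015)
z = np.log(1e3) + np.log(2) * (t - 2000) / 2 + 0.05 * rng.normal(size=t.size)
lo, hi = bootstrap_prediction_interval(t, z, t_new=2016, rng=rng)
print(np.exp([lo, hi]))                             # interval on the original scale
```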
Isaac, Benson; Allaire, Douglas. Expensive Black-Box Model Optimization via a Gold Rush Policy. Journal Article. In: Journal of Mechanical Design, vol. 141, no. 3, pp. 031401, 2019. doi:10.1115/1.4042113

Abstract: The optimization of black-box models is a challenging task owing to the lack of analytic gradient information and structural information about the underlying function, and also due often to significant run times. A common approach to tackling such problems is the implementation of Bayesian global optimization techniques. However, these techniques often rely on surrogate modeling strategies that endow the approximation of the underlying expensive function with non-existent features. Further, these techniques tend to push new queries away from previously queried design points, making it difficult to locate an optimum point that rests near a previous model evaluation. To overcome these issues, we propose a gold rush policy that relies on purely local information to identify the next best design alternative to query. The method employs a surrogate constructed point-wise that adds no additional features to the approximation. The result is a policy that performs well in comparison to state-of-the-art Bayesian global optimization methods on several benchmark problems. The policy is also demonstrated on a constrained optimization problem using a penalty method.
Zhang, Guanglu; Allaire, Douglas; McAdams, Daniel A; Shankar, Venkatesh. System evolution prediction and manipulation using a Lotka–Volterra ecosystem model. Journal Article. In: Design Studies, vol. 60, pp. 103-138, 2019. doi:10.1016/j.destud.2018.11.001

Abstract: System evolution prediction is critical for designers to make R&D and outsourcing decisions. Many descriptive models are used for this purpose, but they have several limitations. In this paper, we extend the Lotka–Volterra equations as an ecosystem model to predict the performances of the system and its components. This model comprises a set of differential equations that describe symbiosis, commensalism, and amensalism relationships between a system and multiple components. We associate every parameter in the model with its causal factors, develop a three-step application of the model, and illustrate the application through a case study on passenger airplane fuel efficiency. Our model identifies the key components in a system. The identified components help designers generate strategies to boost system performance.
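A minimal sketch of a two-layer Lotka-Volterra model of this kind, assuming logistic self-growth plus pairwise interaction terms; the paper's exact parameterization and its mapping of parameters to causal factors are not reproduced here. Setting one interaction coefficient to zero yields commensalism (the component evolves independently while the system benefits from it).

```python
from scipy.integrate import solve_ivp

def lv_rhs(t, p, r, K, a12, a21):
    """Two-layer Lotka-Volterra model: p[0] is a system technology's
    performance, p[1] a component technology's; a12/a21 set how much
    each layer's growth feeds on the other."""
    s, c = p
    ds = r[0] * s * (1 - s / K[0]) + a12 * s * c
    dc = r[1] * c * (1 - c / K[1]) + a21 * c * s
    return [ds, dc]

# Commensalism: the system gains from the component (a12 > 0), but the
# component's evolution is unaffected by the system (a21 = 0).
sol = solve_ivp(lv_rhs, (0, 50), y0=[0.1, 0.1],
                args=([0.3, 0.2], [1.0, 1.0], 0.02, 0.0))
print(sol.y[:, -1])   # long-run performances of system and component
```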
Honarmandi, Pejman; Duong, Thien Chi; Ghoreishi, S Fatemeh; Allaire, Douglas; Arroyave, Raymundo. Bayesian uncertainty quantification and information fusion in CALPHAD-based thermodynamic modeling. Journal Article. In: Acta Materialia, vol. 164, pp. 636-647, 2019. https://arxiv.org/pdf/1806.05769.pdf

Abstract: Calculation of phase diagrams is one of the fundamental tools in alloy design—more specifically under the framework of Integrated Computational Materials Engineering. Uncertainty quantification of phase diagrams is the first step required to provide confidence for decision making in property- or performance-based design. As a manner of illustration, a thorough probabilistic assessment of the CALPHAD model parameters is performed against the available data for a Hf-Si binary case study using a Markov Chain Monte Carlo sampling approach. The plausible optimum values and uncertainties of the parameters are thus obtained, which can be propagated to the resulting phase diagram. Using the parameter values obtained from deterministic optimization in a computational thermodynamic assessment tool (in this case Thermo-Calc) as the prior information for the parameter values and ranges in the sampling process is often necessary to achieve a reasonable cost for uncertainty quantification. This brings up the problem of finding an appropriate CALPHAD model with a high level of confidence, which is a very hard and costly task that requires considerable expert skill. A Bayesian hypothesis testing based on Bayes' factors is proposed to fulfill the need for model selection in this case, which is applied to compare four recommended models for the Hf-Si system. However, it is demonstrated that information fusion approaches, i.e., Bayesian model averaging and an error correlation-based model fusion, can be used to combine the useful information existing in all the given models rather than just using the best selected model, which may lack some information about the system being modelled.
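The Bayesian model averaging half of the fusion strategy reduces to a weighted mixture, with weights proportional to each model's marginal likelihood (evidence). A sketch with toy numbers; in practice the evidences would come from the MCMC assessment.

```python
import numpy as np

def bayesian_model_average(mu, var, log_evidence):
    """Combine per-model predictive means/variances with weights
    proportional to each model's marginal likelihood."""
    w = np.exp(log_evidence - np.max(log_evidence))
    w /= w.sum()
    mu_bma = w @ mu
    var_bma = w @ (var + mu**2) - mu_bma**2   # within- + between-model spread
    return mu_bma, var_bma

# Four candidate parameterizations of the same phase boundary (toy numbers).
mu = np.array([1850.0, 1872.0, 1841.0, 1905.0])      # predicted transition temps, K
var = np.array([25.0, 16.0, 36.0, 49.0])
log_ev = np.array([-120.3, -118.9, -121.7, -125.2])  # hypothetical evidences
print(bayesian_model_average(mu, var, log_ev))
```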
2018

Allen, Richard C; El-Halwagi, M; Allaire, Douglas L. Capacity planning for modular and transportable infrastructure for shale gas production and processing. Journal Article. In: Industrial & Engineering Chemistry Research, 2018. doi:10.1021/acs.iecr.8b04255

Abstract: Shale gas wells typically have steep production decline curves in the first few years of operation. Therefore, if such reduction in production is not accounted for, much of the supporting infrastructure within the shale gas field owned by the exploration and production (E&P) company will be grossly oversized after only a few years of production. Instead of the conventional approach of utilizing spatially fixed processing facilities, this work proposes the use of modular and transportable processing plants. This in turn allows the processing facilities to be composed of multiple modular plants operating in parallel. These modular plants can be reallocated within the field to other processing facilities by the E&P company to combat the uncertainty in production that comes with developing a shale gas field. A superstructure is developed to aid in formulating the capacity planning and allocation problem as a multi-stage stochastic program with uncertain production forecasts. We incorporate a novel recourse function that allows the operator of the E&P company to quantify the effect of postponing the processing of the influent to a later time due to insufficient processing capacity. The proposed approach and solution technique are illustrated through a case study. For a set of randomly generated scenarios, the modular and transportable system shows major cost and operational benefits over the traditional permanent plants with fixed capacities.
Ghoreishi, Seyede Fatemeh; Molkeri, Abhilash; Srivastava, Ankit; Arroyave, Raymundo; Allaire, Douglas. Multi-information source fusion and optimization to realize ICME: Application to dual-phase materials. Journal Article. In: Journal of Mechanical Design, vol. 140, no. 11, pp. 111409, 2018. doi:10.1115/1.4041034

Abstract: Integrated Computational Materials Engineering (ICME) calls for the integration of computational tools into the materials and parts development cycle, while the Materials Genome Initiative (MGI) calls for the acceleration of the materials development cycle through the combination of experiments, simulation, and data. As they stand, both ICME and MGI do not prescribe how to achieve the necessary tool integration or how to efficiently exploit the computational tools, in combination with experiments, to accelerate the development of new materials and materials systems. This paper addresses the first issue by putting forward a framework for the fusion of information that exploits correlations among sources/models and between the sources and "ground truth." The second issue is addressed through a multi-information source optimization framework that identifies, given current knowledge, the next best information source to query and where in the input space to query it via a novel value-gradient policy. The querying decision takes into account the ability to learn correlations between information sources, the resource cost of querying an information source, and what a query is expected to provide in terms of improvement over the current state. The framework is demonstrated on the optimization of a dual-phase steel to maximize its strength-normalized strain hardening rate. The ground truth is represented by a microstructure-based finite element model while three low fidelity information sources—i.e., reduced order models—based on different homogenization assumptions—isostrain, isostress, and isowork—are used to efficiently and optimally query the materials design space.
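The three reduced-order information sources rest on standard homogenization rules; the isostrain and isostress assumptions, for instance, amount to simple phase-fraction-weighted averages (isowork instead equates the plastic work increments of the phases). A toy sketch with illustrative stress values, not numbers from the paper:

```python
def isostrain_stress(f_hard, sigma_hard, sigma_soft):
    """Isostrain (Voigt-type) rule: both phases carry the same strain, so
    composite stress is the phase-fraction-weighted mean of phase stresses."""
    return f_hard * sigma_hard + (1.0 - f_hard) * sigma_soft

def isostress_strain(f_hard, eps_hard, eps_soft):
    """Isostress (Reuss-type) rule: both phases carry the same stress, so
    composite strain is the weighted mean of phase strains."""
    return f_hard * eps_hard + (1.0 - f_hard) * eps_soft

# Toy dual-phase steel: 30% martensite at some applied strain level.
print(isostrain_stress(0.3, sigma_hard=1500.0, sigma_soft=400.0))  # 730.0 MPa
```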
Arroyave, Raymundo; Shields, Samantha; Chang, Chi-Ning; Fowler, Debra; Malak, Richard; Allaire, Douglas. Interdisciplinary Research on Designing Engineering Material Systems: Results From a National Science Foundation Workshop. Journal Article. In: Journal of Mechanical Design, vol. 140, no. 11, pp. 110801, 2018. http://mechanicaldesign.asmedigitalcollection.asme.org/article.aspx?articleid=2697906

Abstract: We present the results from a workshop on interdisciplinary research on design of engineering material systems, sponsored by the National Science Foundation. The workshop was prompted by the need to foster a culture of interdisciplinary collaboration between the engineering design and materials communities. The workshop addressed the following: (i) conceptual barriers between materials and engineering design research communities; (ii) research questions that the interdisciplinary field of materials design should focus on; (iii) processes and metrics to be used to validate research activities and outcomes on materials design; and (iv) strategies to sustain and grow the interdisciplinary field. This contribution presents a summary of the state of the field—elicited through extensive guided discussions between representatives of both communities—and a snapshot of research activities that have emerged since the workshop. Based on the increasing level of sophistication of interdisciplinary research programs on design of materials, it is apparent that the field is growing and has great potential to play a key role in a vibrant interdisciplinary materials innovation ecosystem. Sustaining such efforts will contribute significantly to the advancement of technologies that will impact many industries and will enhance society-wide health, security, and economic well-being.
Curran, Qinxian Chelsea; Allaire, Douglas; Willcox, Karen E. Sensitivity analysis methods for mitigating uncertainty in engineering system design. Journal Article. In: Systems Engineering, 2018. doi:10.1002/sys.21422

Abstract: For many engineering systems, current design methodologies do not adequately quantify and manage uncertainty as it arises during the design process, which can lead to unacceptable risks, increases in programmatic cost, and schedule overruns. This paper develops new sensitivity analysis methods that can be used to better understand and mitigate the effects of uncertainty in system design. In particular, a new entropy-based sensitivity analysis methodology is introduced, which apportions output uncertainty into contributions due to not only the variance of input factors and their interactions, but also to features of the underlying probability distributions that are related to distribution shape and extent. Local sensitivity analysis techniques are also presented, which provide computationally inexpensive estimates of the change in output uncertainty resulting from design modifications. The proposed methods are demonstrated on an engineering example to show how they can be used in the design context to systematically manage uncertainty budgets—which specify the allowable level of uncertainty for a system—by helping to identify design alternatives, evaluate trade-offs between available options, and guide decisions regarding the allocation of resources.
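One way to see what an entropy-based apportionment measures, as opposed to a variance-based one: the mutual information I(Xi; Y) is exactly the expected reduction in output entropy from learning an input. A crude plug-in estimate by binning, offered as an analog of the idea rather than the paper's methodology:

```python
import numpy as np

def entropy_sensitivity(xi, y, n_slices=10, bins=30):
    """Plug-in estimate of I(Xi; Y) = H(Y) - E[H(Y | Xi)]: the expected
    reduction in output entropy from learning input Xi (histogram-based)."""
    y_edges = np.histogram_bin_edges(y, bins=bins)
    def H(s):
        p, _ = np.histogram(s, bins=y_edges)
        p = p[p > 0] / p.sum()
        return -(p * np.log(p)).sum()
    inner = np.quantile(xi, np.linspace(0, 1, n_slices + 1))[1:-1]
    slice_id = np.digitize(xi, inner)              # equal-probability slices of Xi
    return H(y) - sum((slice_id == k).mean() * H(y[slice_id == k])
                      for k in range(n_slices))

rng = np.random.default_rng(4)
x1, x2 = rng.normal(size=100_000), rng.normal(size=100_000)
y = 3.0 * x1 + 0.3 * x2
print(entropy_sensitivity(x1, y), entropy_sensitivity(x2, y))  # x1 dominates
```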
2017

Ulker, Fatma; Allaire, Douglas; Willcox, Karen. Sensitivity-guided decision-making for wind farm micro-siting. Journal Article. In: International Journal for Numerical Methods in Fluids, vol. 83, no. 1, pp. 52-72, 2017. https://onlinelibrary.wiley.com/doi/10.1002/fld.4256

Abstract: This paper presents a quantitative risk assessment for design and development of a renewable energy system to support decision-making among design alternatives. Throughout the decision-making phases, resources are allocated among exploration and exploitation tasks to manage the uncertainties in design parameters and to adapt designs to new information for enhanced performance. The resource allocation problem is formulated as a sequential decision feedback loop for a quantitative analysis of exploration and exploitation trade-offs. We support decision-making by tracking the evolution of uncertainties, the sensitivity of design alternatives to the uncertainties, and the performance, reliability, and robustness of each design. This is achieved by analyzing the uncertainties in the wind resource, the turbine performance and operation, and the models that define the power curve and wake deficiency. Comparison of the performance, reliability, and robustness of aligned and staggered turbine layouts before and after wind assessment experiments aids in improving micro-siting decisions. The results demonstrate that design decisions can be supported by efficiently allocating resources towards improved estimates of achievable design objectives and by quantitatively assessing the risk in meeting those objectives.
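Wake-deficiency models of the kind referenced above are exemplified by the classic Jensen (Park) model, in which the fractional velocity deficit decays with the square of a linearly expanding wake radius. A sketch with illustrative turbine parameters; the paper's specific wake model is not reproduced here.

```python
import numpy as np

def jensen_wake_deficit(ct, r0, x, k=0.05):
    """Jensen (Park) wake model: fractional velocity deficit a distance x
    downstream of a turbine with rotor radius r0, thrust coefficient ct,
    and wake decay constant k."""
    return (1.0 - np.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

u_inf, ct, r0 = 8.0, 0.8, 40.0          # free stream (m/s), thrust coeff, rotor radius (m)
x = np.array([200.0, 400.0, 800.0])     # candidate downstream spacings to compare
u_wake = u_inf * (1.0 - jensen_wake_deficit(ct, r0, x))
print(u_wake)   # waked wind speeds; power scales roughly with u**3
```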
Amaral, Sergio; Allaire, Douglas; Blanco, Elena De La Rosa; Willcox, Karen E. A decomposition-based uncertainty quantification approach for environmental impacts of aviation technology and operation. Journal Article. In: AI EDAM, vol. 31, no. 3, pp. 251-264, 2017. https://www.cambridge.org/core/journals/ai-edam/article/decompositionbased-uncertainty-quantification-approach-for-environmental-impacts-of-aviation-technology-and-operation/3FC9383C0903C3C4927F08AE0D76EA98/share/121030d9a9bc75e1634088728ad47e090018ae3e

Abstract: As a measure to manage the climate impact of aviation, significant enhancements to aviation technologies and operations are necessary. When assessing these enhancements and their respective impacts on the climate, it is important that we also quantify the associated uncertainties. This is important to support an effective decision and policymaking process. However, such quantification of uncertainty is challenging, especially in a complex system that comprises multiple interacting components. The uncertainty quantification task can quickly become computationally intractable and cumbersome for one individual or group to manage. Recognizing the challenge of quantifying uncertainty in multicomponent systems, we utilize a divide-and-conquer approach, inspired by the decomposition-based approaches used in multidisciplinary analysis and optimization. Specifically, we perform uncertainty analysis and global sensitivity analysis of our multicomponent aviation system in a decomposition-based manner. In this work, we demonstrate how to handle a high-dimensional multicomponent interface using sensitivity-based dimension reduction and a novel importance sampling method. Our results demonstrate that the decomposition-based uncertainty quantification approach can effectively quantify the uncertainty of a feed-forward multicomponent system for which the component models are housed in different locations and owned by different groups.
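The importance-sampling idea at a multicomponent interface can be sketched in a few lines: samples generated under an assumed coupling-variable distribution are reweighted by a density ratio once the upstream analysis pins down the actual distribution, so the expensive component need not be re-run. The distributions and quantity of interest below are stand-ins.

```python
import numpy as np
from scipy.stats import norm

# A component was sampled offline assuming its input (the coupling
# variable) follows p_old; the upstream analysis later reveals the
# interface actually follows p_new. Importance weights let us reuse
# the expensive samples instead of re-running the coupled system.
rng = np.random.default_rng(5)
theta = rng.normal(0.0, 1.0, size=10_000)        # samples from p_old = N(0, 1)
q = 1.0 / (1.0 + np.exp(-theta))                 # expensive downstream QoI (toy)

p_old, p_new = norm(0.0, 1.0), norm(0.3, 0.8)
w = p_new.pdf(theta) / p_old.pdf(theta)
w /= w.sum()
print("reweighted E[q]:", w @ q)                 # estimate under p_new
print("effective sample size:", 1.0 / np.sum(w**2))
```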
Ghoreishi, S. F.; Allaire, D. L.: Adaptive uncertainty propagation for coupled multidisciplinary systems. Journal Article. In: AIAA Journal, pp. 1–11, 2017. https://arc.aiaa.org/doi/10.2514/1.J055893
Abstract: This paper presents a novel uncertainty propagation approach for multidisciplinary systems with feedback couplings, model discrepancy, and parametric uncertainty. The proposed method incorporates aspects of Gibbs sampling, importance resampling, and density estimation to ensure that, under mild assumptions, the method is provably convergent in distribution. The method applies sequential importance resampling to the samples available from previous simulations of the disciplines. The absence or scarcity of samples in a discipline is addressed by an adaptive greedy sample increment process that improves the efficiency of the uncertainty analysis at minimum computational cost. A key feature of the approach is that the disciplinary models are all synthesized independently from their available data, and no full coupled system-level evaluations are required. The proposed approach is illustrated on the propagation of uncertainty for an aerodynamics–structures system and is compared to a system-level Monte Carlo uncertainty analysis approach.
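The importance-resampling ingredient named in the abstract above can be illustrated in isolation. The following Python sketch is a generic sampling-importance-resampling (SIR) step under assumed one-dimensional Gaussian proposal and target densities; it is not the paper's coupled-discipline algorithm, only the reweight-and-resample building block it draws on.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Proposal: previously available samples, e.g. from an earlier discipline run.
proposal = stats.norm(loc=0.0, scale=2.0)
x = proposal.rvs(size=5000, random_state=rng)

# Target: the updated distribution we now need samples from (assumed known here).
target = stats.norm(loc=0.5, scale=1.0)

# Importance weights = target density / proposal density, self-normalized.
w = target.pdf(x) / proposal.pdf(x)
w /= w.sum()

# Resample with replacement according to the weights (the "R" in SIR).
resampled = rng.choice(x, size=5000, replace=True, p=w)

print(resampled.mean(), resampled.std())  # approx 0.5 and 1.0
```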
Burrows, Brian J.; Isaac, Benson; Allaire, Douglas: Multitask Aircraft Capability Estimation Using Conjunctive Filters. Journal Article. In: Journal of Aerospace Information Systems, pp. 1–12, 2017. https://arc.aiaa.org/doi/abs/10.2514/1.I010538
Abstract: In this paper, a data-driven approach to producing rapid, online estimates of aircraft capability is presented. The process involves using physics-based models to produce an offline library of various damage states and associated capabilities. This association is performed in-flight by an online Bayesian classification process, using single-maneuver sensor readings to predict capability. Previous literature focused on estimating capability for a single maneuver type; this work extends that capability to the simultaneous estimation of multiple maneuver types. Because of sensor noise, misclassifications can occur, and this is accounted for by incorporating uncertainty into the estimation. The ability to estimate capability for multiple maneuver types enables sequential information-gathering maneuvers, often resulting in more accurate, less uncertain estimates through information fusion. Information gained from sequential information-gathering maneuvers is fused using standard Bayesian fusion techniques, as well as a novel conjunctive fusion method developed in this work. The conjunctive filter is shown to achieve a lower mean squared error than both the Bayesian fusion technique and single-maneuver classification with no fusion step. Our methodology and demonstrations are developed in the context of a medium-altitude, long-endurance unmanned aerial vehicle.
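The "standard Bayesian fusion" baseline mentioned in the abstract above can be written down compactly. The sketch below fuses hypothetical posteriors over three damage states obtained from two maneuvers, under a conditional-independence assumption; the numbers and the three-state setup are invented for illustration, and the paper's conjunctive filter itself is not reproduced here.

```python
import numpy as np

# Hypothetical posteriors over three damage states from two
# independent information-gathering maneuvers (rows sum to 1).
post_m1 = np.array([0.60, 0.30, 0.10])
post_m2 = np.array([0.50, 0.15, 0.35])
prior   = np.array([1/3, 1/3, 1/3])

# Standard Bayesian fusion under conditional independence of the data:
# p(s | d1, d2) is proportional to p(s | d1) * p(s | d2) / p(s).
fused = post_m1 * post_m2 / prior
fused /= fused.sum()
print(fused)
```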
Amaral, Sergio; Allaire, Douglas; Willcox, Karen: Optimal L2-norm empirical importance weights for the change of probability measure. Journal Article. In: Statistics and Computing, vol. 27, no. 3, pp. 625–643, 2017. http://rdcu.be/ISwr
Abstract: This work proposes an optimization formulation to determine a set of empirical importance weights to achieve a change of probability measure. The objective is to estimate statistics from a target distribution using random samples generated from a (different) proposal distribution. This work considers the specific case in which the proposal distribution from which the random samples are generated is unknown; that is, we have the samples available but no explicit description of their underlying distribution. In this setting, the Radon–Nikodym theorem provides a valid but indeterminable solution to the task, since the distribution from which the random samples are generated is inaccessible. The proposed approach employs the well-defined and determinable empirical distribution function associated with the available samples. The core idea is to compute importance weights associated with the random samples such that the distance between the weighted proposal empirical distribution function and the desired target distribution function is minimized. The distance metric selected for this work is the L2-norm, and the importance weights are constrained to define a probability measure. The resulting optimization problem is shown to be a single linear equality and box-constrained quadratic program. This problem can be solved efficiently using optimization algorithms that scale well to high dimensions. Under some conditions restricting the class of distribution functions, the solution of the optimization problem is shown to result in a weighted proposal empirical distribution function that converges to the target distribution function in the L1-norm as the number of samples tends to infinity. Results on a variety of test cases show that the proposed approach performs well in comparison with other well-known approaches.
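The abstract above characterizes the optimization as a single linear equality and box-constrained quadratic program. A minimal numerical rendering of that idea follows, with assumed Gaussian proposal and target distributions and a general-purpose SLSQP solver standing in for a dedicated QP solver; this is a sketch of the formulation, not the paper's implementation.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Samples from an unknown proposal; in practice only the draws are available.
x = np.sort(rng.normal(loc=0.0, scale=2.0, size=100))

# Known target distribution whose statistics we want to estimate.
F_target = stats.norm(loc=0.5, scale=1.0).cdf(x)

# Weighted empirical CDF at the sorted sample points is A @ w,
# with A lower-triangular ones (cumulative sums of the weights).
n = x.size
A = np.tril(np.ones((n, n)))

def objective(w):
    r = A @ w - F_target
    return r @ r                                  # squared L2 distance

def grad(w):
    return 2.0 * A.T @ (A @ w - F_target)

res = minimize(
    objective,
    x0=np.full(n, 1.0 / n),                       # start from uniform weights
    jac=grad,
    bounds=[(0.0, 1.0)] * n,                      # box constraints
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    method="SLSQP",
)
w = res.x
print("weighted mean:", w @ x)  # pulled toward the target mean of 0.5
```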
2016

Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.: Variance-based sensitivity analysis to support simulation-based design under uncertainty. Journal Article. In: Journal of Mechanical Design, vol. 138, no. 11, pp. 111410, 2016. http://mechanicaldesign.asmedigitalcollection.asme.org/article.aspx?articleid=2537153
Abstract: Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation of the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
2015

Lecerf, Marc; Allaire, Douglas; Willcox, Karen: Methodology for dynamic data-driven online flight capability estimation. Journal Article. In: AIAA Journal, vol. 53, no. 10, pp. 3073–3087, 2015. https://arc.aiaa.org/doi/10.2514/1.J053893
Abstract: This paper presents a data-driven approach for the online updating of the flight envelope of an unmanned aerial vehicle subjected to structural degradation. The main contribution of the work is a general methodology that leverages both physics-based modeling and data to decompose tasks into two phases: expensive offline simulations to build an efficient characterization of the problem, and rapid data-driven classification to support online decision making. In the approach, physics-based models at the wing and vehicle level run offline to generate libraries of information covering a range of damage scenarios. These libraries are queried online to estimate vehicle capability states. The state estimation and associated quantification of uncertainty are achieved by Bayesian classification using sensed strain data. The methodology is demonstrated on a conceptual unmanned aerial vehicle executing a pullup maneuver, in which the vehicle flight envelope is updated dynamically with onboard sensor information. During vehicle operation, the maximum maneuvering load factor is estimated using structural strain sensor measurements combined with physics-based information from precomputed damage scenarios that consider structural weakness. Compared to a baseline case that uses a static as-designed flight envelope, the self-aware vehicle achieves both an increase in the probability of executing a successful maneuver and an increase in overall usage of the vehicle capability.
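The offline-library-plus-online-Bayesian-classification pattern in the abstract above can be sketched in a few lines. The strain signatures, noise covariance, and capability values below are hypothetical stand-ins for the paper's physics-based library, and the sketch is an illustration of the pattern rather than the paper's method.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)

# Hypothetical offline library: per damage state, a nominal strain
# signature (3 sensors) and the associated maximum maneuvering load factor.
signatures = np.array([[1.00, 0.80, 0.60],    # pristine
                       [1.30, 0.85, 0.55],    # wing-root damage
                       [1.05, 1.20, 0.70]])   # mid-span damage
capability = np.array([4.0, 2.5, 3.0])        # allowable load factor
prior = np.full(3, 1/3)
noise = 0.05 * np.eye(3)                      # assumed sensor-noise covariance

# Online step: classify a sensed strain vector and estimate capability.
sensed = signatures[1] + rng.multivariate_normal(np.zeros(3), noise)
like = np.array([multivariate_normal.pdf(sensed, mean=s, cov=noise)
                 for s in signatures])
post = prior * like
post /= post.sum()

print("posterior over damage states:", post)
print("expected capability:", post @ capability)
```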
2014

Amaral, Sergio; Allaire, Douglas; Willcox, Karen: A decomposition-based approach to uncertainty analysis of feed-forward multicomponent systems. Journal Article. In: International Journal for Numerical Methods in Engineering, vol. 100, no. 13, pp. 982–1005, 2014. https://onlinelibrary.wiley.com/doi/abs/10.1002/nme.4779
Abstract: To support effective decision making, engineers should comprehend and manage various uncertainties throughout the design process. Unfortunately, in today's modern systems, uncertainty analysis can become cumbersome and computationally intractable for one individual or group to manage. This is particularly true for systems comprised of a large number of components. In many cases, these components may be developed by different groups and even run on different computational platforms. This paper proposes an approach for decomposing the uncertainty analysis task among the various components comprising a feed-forward system and synthesizing the local uncertainty analyses into a system uncertainty analysis. Our proposed decomposition-based multicomponent uncertainty analysis approach is shown to be provably convergent in distribution under certain conditions. The proposed method is illustrated on quantification of uncertainty for a multidisciplinary gas turbine system and is compared to a traditional system-level Monte Carlo uncertainty analysis approach.
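One way to synthesize local analyses in a feed-forward chain is to reuse a downstream component's stored evaluations under a change of measure once the upstream output distribution becomes known. The sketch below uses a kernel density estimate of the upstream output density and a known broad proposal; the components and densities are invented for illustration and this is only in the spirit of the decomposition, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(3)

def component_B(y):                # downstream component (illustrative)
    return np.sin(y) + 0.1 * y**2

# Offline: component B evaluated on samples from a broad, known proposal q.
q = norm(loc=0.0, scale=3.0)
y_prop = q.rvs(size=4000, random_state=rng)
z_prop = component_B(y_prop)

# Later: component A's local uncertainty analysis delivers its output
# samples; estimate their density with a KDE (no closed form needed).
y_A = rng.normal(loc=1.0, scale=0.8, size=4000)
p_hat = gaussian_kde(y_A)

# Change of measure: reweight B's stored evaluations toward A's output law.
w = p_hat(y_prop) / q.pdf(y_prop)
w /= w.sum()

print("E[z] via reweighting:   ", w @ z_prop)
print("E[z] via direct re-run: ", component_B(y_A).mean())
```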
Allaire, Douglas; Willcox, Karen: A mathematical and computational framework for multifidelity design and analysis with computer models. Journal Article. In: International Journal for Uncertainty Quantification, vol. 4, no. 1, 2014. http://www.dl.begellhouse.com/journals/52034eb04b657aea,67004a5b6ddaf807,58efb4b506c3f902.html
Abstract: A multifidelity approach to design and analysis for complex systems seeks to exploit optimally all available models and data. Existing multifidelity approaches generally attempt to calibrate low-fidelity models or replace low-fidelity analysis results using data from higher fidelity analyses. This paper proposes a fundamentally different approach that uses the tools of estimation theory to fuse together information from multifidelity analyses, resulting in a Bayesian-based approach to mitigating risk in complex system design and analysis. This approach is combined with maximum entropy characterizations of model discrepancy to represent epistemic uncertainties due to modeling limitations and model assumptions. Mathematical interrogation of the uncertainty in system output quantities of interest is achieved via a variance-based global sensitivity analysis, which identifies the primary contributors to output uncertainty and thus provides guidance for adaptation of model fidelity. The methodology is applied to multidisciplinary design optimization and demonstrated on a wing-sizing problem for a high-altitude, long-endurance vehicle.
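The simplest estimation-theoretic fusion rule, for independent unbiased estimates, combines them by inverse-variance weighting. The sketch below applies it to two hypothetical fidelity levels; the paper's framework layers model-discrepancy characterization and sensitivity analysis on top of this kind of fusion.

```python
import numpy as np

def fuse(means, variances):
    """Inverse-variance fusion of independent, unbiased estimates."""
    w = 1.0 / np.asarray(variances)
    var = 1.0 / w.sum()
    mean = var * (w @ np.asarray(means))
    return mean, var

# Hypothetical lift-coefficient estimates from two fidelities, each with
# a variance encoding its model discrepancy.
mean, var = fuse(means=[0.520, 0.560], variances=[0.004, 0.001])
print(mean, var)   # fused estimate is pulled toward the tighter model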
Allaire, Douglas; Willcox, Karen: Uncertainty assessment of complex models with application to aviation environmental policy-making. Journal Article. In: Transport Policy, vol. 34, pp. 109–113, 2014. https://www.sciencedirect.com/science/article/abs/pii/S0967070X14000535
Abstract: Numerical simulation models that support decision-making and policy-making processes are often complex and involve many disciplines. These models typically have many factors of different character, such as operational, design-based, technological, and economics-based. Such factors generally contain uncertainty, which leads to uncertainty in model outputs. For such models, it is critical to both the application of model results and the future development of the model that a formal approach to the assessment of uncertainty in the model be established and carried out. In this paper, a comprehensive approach to the uncertainty assessment of complex models intended to support decision-making and policy-making processes is presented. The approach consists of seven steps: establishing assessment goals, documenting assumptions and limitations, documenting model factors and outputs, classifying and characterizing factor uncertainty, conducting uncertainty analysis, conducting sensitivity analysis, and presenting results. Highlights of the approach are demonstrated on a real-world model intended to estimate the impacts of aviation on climate change.
Allaire, Douglas; Noel, George; Willcox, Karen; Cointin, Rebecca: Uncertainty quantification of an aviation environmental toolsuite. Journal Article. In: Reliability Engineering & System Safety, vol. 126, pp. 14–24, 2014. https://www.sciencedirect.com/science/article/pii/S0951832014000039
Abstract: This paper describes uncertainty quantification (UQ) of a complex system computational tool that supports policy-making for aviation environmental impact. The paper presents the methods needed to create a tool that is "UQ-enabled," with a particular focus on how to manage the complexity of long run times and massive input/output datasets. These methods include a process to quantify parameter uncertainties via data, documentation, and expert opinion; the creation of certified surrogate models that accelerate run times while maintaining confidence in results; and the execution of a range of mathematical UQ techniques such as uncertainty propagation and global sensitivity analysis. The results and discussion address aircraft performance, aircraft noise, and aircraft emissions modeling.

2012
Allaire, Douglas L.; Willcox, Karen E.: A variance-based sensitivity index function for factor prioritization. Journal Article. In: Reliability Engineering & System Safety, vol. 107, pp. 107–114, 2012. https://www.sciencedirect.com/science/article/pii/S0951832011001712
Abstract: Among the many uses for sensitivity analysis is factor prioritization, that is, the determination of which factor, once fixed to its true value, on average leads to the greatest reduction in the variance of an output. A key assumption is that a given factor can, through further research, be fixed to some point on its domain. In general, this is an optimistic assumption, which can lead to inappropriate resource allocation. This research develops an original method that apportions output variance as a function of the amount of variance reduction that can be achieved for a particular factor. This variance-based sensitivity index function provides a main effect sensitivity index for a given factor as a function of the amount of variance of that factor that can be reduced. An aggregate measure of which factors would on average cause the greatest reduction in output variance given future research is also defined; it assumes the portion of a particular factor's variance that can be reduced is a random variable. An average main effect sensitivity index is then calculated by taking the mean of the variance-based sensitivity index function. A key aspect of the method is that the analysis is performed directly on the samples that were generated during a global sensitivity analysis using rejection sampling. The method is demonstrated on the Ishigami function and an additive function, where the rankings for future research are shown to differ from those of a traditional global sensitivity analysis.
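For reference, the classical main-effect (first-order) indices that the sensitivity index function above generalizes can be estimated on the Ishigami test function with a standard pick-freeze Monte Carlo estimator. The estimator choice below is ours, for illustration; it is not necessarily the one used in the paper.

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    x1, x2, x3 = X.T
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

rng = np.random.default_rng(4)
N, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, size=(N, d))
B = rng.uniform(-np.pi, np.pi, size=(N, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

# Saltelli-style estimator of the main-effect (first-order) indices.
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # replace column i with B's draws
    S_i = np.mean(fB * (ishigami(ABi) - fA)) / var
    print(f"S_{i+1} ~ {S_i:.3f}")  # analytically about 0.31, 0.44, 0.00
```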
Allaire, Douglas; He, Qinxian; Deyst, John; Willcox, Karen: An information-theoretic metric of system complexity with application to engineering system design. Journal Article. In: Journal of Mechanical Design, vol. 134, no. 10, pp. 100906, 2012. http://mechanicaldesign.asmedigitalcollection.asme.org/article.aspx?articleid=1484828
Abstract: System complexity is considered a key driver of the failure of current system design practices to recognize performance, cost, and schedule risks as they emerge. We present here a definition of system complexity and a quantitative metric for measuring that complexity based on information theory. We also derive sensitivity indices that indicate the fraction of complexity that can be reduced if more can become known about certain factors of a system. This information can be used as part of a resource allocation procedure aimed at reducing system complexity. Our methods incorporate Gaussian process emulators of expensive computer simulation models and account for both model inadequacy and code uncertainty. We demonstrate our methodology on a candidate design of an infantry fighting vehicle.

2010
Allaire, Douglas; Willcox, Karen: Surrogate modeling for uncertainty assessment with application to aviation environmental system models. Journal Article. In: AIAA Journal, vol. 48, no. 8, pp. 1791–1803, 2010. https://arc.aiaa.org/doi/10.2514/1.J050247
Abstract: Numerical simulation models to support decision-making and policy-making processes are often complex, involving many disciplines, many inputs, and long computation times. Inputs to such models are inherently uncertain, leading to uncertainty in model outputs. Characterizing, propagating, and analyzing this uncertainty is critical both to model development and to the effective application of model results in a decision-making setting; however, the many thousands of model evaluations required to sample the uncertainty space (e.g., via Monte Carlo sampling) present an intractable computational burden. This paper presents a novel surrogate modeling methodology designed specifically for propagating uncertainty from model inputs to model outputs and for performing a global sensitivity analysis, which characterizes the contributions of uncertainties in model inputs to output variance, while maintaining the quantitative rigor of the analysis by providing confidence intervals on surrogate predictions. The approach is developed for a general class of models and is demonstrated on an aircraft emissions prediction model that is being developed and applied to support aviation environmental policy-making. The results demonstrate how the confidence intervals on surrogate predictions can be used to balance the tradeoff between computation time and uncertainty in the estimation of the statistical outputs of interest.
Allaire, Douglas L.; Willcox, Karen E.: Distributional sensitivity analysis. Journal Article. In: Procedia - Social and Behavioral Sciences, vol. 2, no. 6, pp. 7595–7596, 2010. https://www.sciencedirect.com/science/article/pii/S1877042810012759
Abstract: Among the uses for global sensitivity analysis is factor prioritization. A key assumption for this is that a given factor can, through further research, be fixed to some point on its domain. For factors containing epistemic uncertainty, this is an optimistic assumption, which can lead to inappropriate resource allocation. Thus, this research develops an original method, referred to as distributional sensitivity analysis, that considers which factors would on average cause the greatest reduction in output variance, given that the portion of a particular factor's variance that can be reduced is a random variable. A key aspect of the method is that the analysis is performed directly on the samples that were generated during a global sensitivity analysis using acceptance/rejection sampling. In general, if N model runs per factor are required for a global sensitivity analysis, then those same N model runs are sufficient for a distributional sensitivity analysis.

Articles Under Review

Khatamsaz, Danial; Vela, Brent; Singh, Prashant; Johnson, Duane; Allaire, Douglas; Arroyave, Raymundo: Multi-Objective Materials Bayesian Optimization with Active Learning of Design Constraints: Application to Refractory Multi-Principal-Element Alloys. Unpublished. Submitted to Acta Materialia, under review, 2022.
James, J. R.; Gonzales, M.; Gerlt, A. R. C.; Payton, E. J.; John, R.; Arroyave, R.; Allaire, D.: Parameter Range Estimation and Uncertainty Analysis for High Strain Rate Material Response via Model Fusion. Unpublished. Submitted to the Journal of Applied Physics, under review, 2022.

James, Jaylen; Sanghvi, Meet; Gerlt, Austin; Allaire, Douglas; Arroyave, Raymundo; Gonzales, Manny: Optimized Uncertainty Propagation Across High Fidelity Taylor Anvil Simulation. Unpublished. Submitted to Frontiers in Materials Science, under review, 2022.

2022

Khatamsaz, Danial; Allaire, Douglas: Materials Design using an Active Subspace Batch Bayesian Optimization Approach. Conference. In: AIAA SciTech Forum, no. 2022-0075, 2022. doi:10.2514/6.2022-0075
Abstract: Integrated computational materials engineering (ICME) calls for integrating simulation tools and/or experiments to develop new materials and materials systems. However, implementation of ICME approaches is challenging, mainly due to the considerable computational expense of such frameworks and the large dimensionality of the design space. Addressing these challenges is thus critical to the success of ICME initiatives. We present here a specific Bayesian optimization framework designed to address these two challenges. In particular, we propose an active subspace batch Bayesian optimization framework. The framework makes use of dimension reduction via the active subspace method and of the ability to query in parallel via the batch Bayesian optimization approach. The integration of these techniques leads to significant efficiency improvements while maintaining accuracy.
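The core single-objective Bayesian optimization step that batch and active-subspace variants such as the one above build on can be sketched with a Gaussian process surrogate and an expected-improvement acquisition. The toy objective, kernel, and hyperparameters below are illustrative; neither batching nor the active subspace reduction is shown.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)
f = lambda x: np.sin(3 * x) + 0.5 * x          # toy objective (minimize)

X = rng.uniform(-2, 2, size=(6, 1))            # initial design
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
gp.fit(X, y)

Xc = np.linspace(-2, 2, 400).reshape(-1, 1)    # candidate grid
mu, sd = gp.predict(Xc, return_std=True)
best = y.min()

# Expected improvement for minimization.
z = (best - mu) / np.maximum(sd, 1e-12)
ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

x_next = Xc[np.argmax(ei)]                     # next point to query
print("next query point:", x_next)
```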
2021

Singh, Arjun; Allaire, Douglas: Dynamic data-driven sensor placement for enabling capability estimation of self-aware aerospace vehicles. Conference. In: AIAA Aviation Forum, no. 2021-3096, 2021. doi:10.2514/6.2021-3096
Abstract: Unmanned aerial vehicles (UAVs) are utilized in numerous industries, and in recent years, with the advent of learning techniques, the focus has turned to developing self-aware UAVs that rely on an array of environmental sensors to replace a pilot's awareness of the structural capability of the UAV and to provide time-critical analysis of the sensor data for complex decisions made in real time. A self-aware UAV is thus capable of dynamically and autonomously sensing its structural state and acting accordingly to perform the required task. This work proposes a data-driven approach to producing capability estimates for a self-aware UAV and optimizing the placement of sensors to minimize the error between true and predicted capability. The process uses high-fidelity physics-based models such as ASWING in tandem with the Akselos modeler, a cloud-based FEA solver, to produce an offline library comprising damage states and the capabilities corresponding to different kinematic states of a UAV. This generated information is then used to create a classification model that predicts capability from real-time data. The classification model enables the optimization algorithm to measure the error between the true and predicted capability of the UAV and thereby determine the optimum sensor placement. We demonstrate the improvement in performance through a comparison between the optimum and standard sensor placements, and we provide a proof of concept showing how dynamic sampling of information can improve capability estimation in self-aware aerospace vehicles.
Khatamsaz, Danial; Allaire, Douglas: A comparison of reification and cokriging for sequential multi-information source fusion. Conference. In: AIAA SciTech Forum, no. 2021-1477, 2021. doi:10.2514/6.2021-1477
Abstract: Many engineering tasks, such as optimization, analysis, model development, and model calibration, can potentially exploit information from many sources, including numerical models, expert opinion, and experimental data. Information fusion over these sources has the potential to provide a more complete quantitative picture of the current state of knowledge of a given ground truth quantity of interest. This state of knowledge can be updated as new information from any given source is acquired. In this work, we compare two information fusion approaches that both seek to combine all available information to form a surrogate model of the ground truth: model reification and cokriging. The comparison considers several test functions as well as a real-world NACA 0012 analysis. A quantity of interest is considered for each test case, as well as the derivative of the quantity of interest in some cases. Each fusion approach performs well generally, with each being superior to the other under certain conditions.
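For two correlated, unbiased estimates of the same quantity, a commonly quoted closed-form fusion rule (often attributed to Winkler, 1981, and frequently cited in the reification literature) is sketched below. Whether this exact rule is the formulation compared in the paper above is not asserted here; the numbers are illustrative.

```python
import numpy as np

def fuse_correlated(m1, m2, s1, s2, rho):
    """Fuse two correlated, unbiased estimates (Winkler-style rule)."""
    c = rho * s1 * s2
    denom = s1**2 + s2**2 - 2 * c
    mean = ((s2**2 - c) * m1 + (s1**2 - c) * m2) / denom
    var = (1 - rho**2) * s1**2 * s2**2 / denom
    return mean, var

# Two information sources predicting the same quantity of interest;
# rho encodes the estimated correlation between their deviations.
print(fuse_correlated(m1=0.52, m2=0.56, s1=0.06, s2=0.03, rho=0.4))
```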
Peddareddygari, Lalith; Allaire, Douglas: Time to failure prognosis of a gas turbine engine using predictive analytics. Conference. In: AIAA SciTech Forum, no. 2021-1355, 2021. doi:10.2514/6.2021-1355
Abstract: Maintenance costs and machine availability are among the most important concerns of any company that owns large machinery such as gas turbine engines. With the advent of the fourth wave of the industrial revolution, also known as the Industrial Internet of Things, the focus has shifted to identifying optimal utilization policies for equipment. A reduction in the installation costs of sensors has helped companies instrument their key equipment, so online sensor data can be utilized to monitor the health of the machines. This work proposes a prognostic approach using standard machine learning techniques for the prediction of time to failure for gas turbine engines. The approach provides accurate and dynamic estimates of the health of the machine. To enable this capability, the failure prognosis combines multiple recurrent neural network models that predict the sensor readings of the engine with support vector machines that classify these readings as safe or failure. Our approach is demonstrated on available simulated engine data from NASA that includes several different failure modes. The performance of this predictive-analytics-based failure prognosis technology is compared with current industry standards on land-based gas turbine engines, where it is shown to perform well.
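The safe/failure classification stage described in the abstract above can be sketched with a support vector machine on synthetic sensor features. The data generation below is entirely invented, and the recurrent-network forecasting stage is omitted; this shows the classification pattern only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)

# Synthetic stand-in for engine sensor features: healthy readings
# cluster low, degraded readings drift upward before failure.
healthy = rng.normal(0.0, 1.0, size=(200, 4))
failing = rng.normal(1.5, 1.0, size=(200, 4))
X = np.vstack([healthy, failing])
y = np.array([0] * 200 + [1] * 200)            # 0 = safe, 1 = failure

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Online use: classify a predicted future sensor vector as safe/failure;
# the first window flagged "failure" would give a time-to-failure estimate.
print(clf.predict(rng.normal(1.4, 1.0, size=(1, 4))))
```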
Peddareddygari, Lalith; Allaire, Douglas. "Time to failure prognosis of a gas turbine engine using predictive analytics." Conference paper, AIAA SciTech Forum, no. 2021-1355, 2021. doi:10.2514/6.2021-1355

Abstract: Maintenance costs and machine availability are some of the most important concerns of any company that owns large machinery such as gas turbine engines. With the advent of the fourth wave of the industrial revolution, also known as the Industrial Internet of Things, the focus has shifted to identifying optimal utilization policies for equipment. A reduction in the installation costs of sensors has helped companies install them on their key equipment, so online sensor data can be used to monitor the health of the machines. This work proposes a prognostic approach that uses standard machine learning techniques to predict time-to-failure for gas turbine engines, providing accurate and dynamic estimates of machine health. To enable this capability, the failure prognosis combines multiple recurrent neural network models, which predict the sensor readings of the engine, with support vector machines, which classify these readings as safe or failure. Our approach is demonstrated on available simulated engine data from NASA that includes several different failure modes. The performance of this predictive-analytics-based failure prognosis technology compares well with current industry standards on land-based gas turbine engines.
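The prognosis pipeline (forecast sensor readings, then classify each as safe or failure) can be illustrated compactly. In the hedged sketch below, a linear trend model stands in for the paper's recurrent neural networks, an SVM plays the classifier role, and all data and thresholds are synthetic assumptions.

```python
# Hypothetical sketch of the prognosis pipeline: forecast future sensor
# readings, classify each as safe or failure, and report the first predicted
# failure as the time-to-failure. A linear trend model stands in for the
# paper's recurrent neural networks; all data and thresholds are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC

rng = np.random.default_rng(1)
t = np.arange(300.0)
sensor = 1.0 + 0.01 * t + 0.05 * rng.normal(size=t.size)  # degrading signal

# Stand-in forecaster: fit the degradation trend and extrapolate it.
forecaster = LinearRegression().fit(t.reshape(-1, 1), sensor)

# Safe/failure classifier trained on labeled readings (threshold assumed).
readings = rng.uniform(0.0, 6.0, size=200).reshape(-1, 1)
clf = SVC().fit(readings, (readings.ravel() > 4.5).astype(int))

# Roll forward and find the first forecasted reading classified as failure.
future_t = np.arange(t.size, t.size + 500, dtype=float).reshape(-1, 1)
preds = forecaster.predict(future_t)
fail = np.nonzero(clf.predict(preds.reshape(-1, 1)) == 1)[0]
if fail.size:
    print(f"predicted failure in {fail[0] + 1} steps")
```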
2020

Zhang, Guanglu; Allaire, Douglas; Cagan, Jonathan. "An Initial Guess Free Method for Least Squares Parameter Estimation in Nonlinear Models." Conference paper, International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, ASME, 2020. doi:10.1115/DETC2020-22047

Abstract: Fitting models to data is critical in many science and engineering fields. A major task in fitting models to data is to estimate the value of each parameter in a given model. Iterative methods, such as the Gauss-Newton method and the Levenberg-Marquardt method, are often employed for parameter estimation in nonlinear models. However, practitioners must guess the initial value for each parameter in order to initialize these iterative methods, and a poor initial guess can contribute to non-convergence or lead these methods to converge to a wrong solution. In this paper, an initial-guess-free method is introduced to find the optimal parameter estimators in a nonlinear model that minimize the squared error of the fit. The method includes three algorithms that require different levels of computational power to find the optimal parameter estimators. The method constructs a solution interval for each parameter in the model; these solution intervals significantly reduce the search space for optimal parameter estimators. The method also provides an empirical probability distribution for each parameter, which is valuable for parameter uncertainty assessment. The initial-guess-free method is validated through a case study in which Fick's second law is fit to an experimental data set. This case study shows that the method can find the optimal parameter estimators efficiently. A four-step procedure for implementing the method in practice is also outlined.
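The payoff of this method (a guaranteed optimum via exhaustive search once each parameter is confined to a bounded region) is easy to illustrate. The sketch below assumes the reduced regions are already given as intervals and brute-forces a Newton's-law-of-cooling fit over them; the interval-construction step, the bounds, and the data are stand-ins, not the paper's algorithm.

```python
# Hypothetical sketch: once each parameter is confined to a bounded region,
# an exhaustive grid search finds the least-squares optimum with no initial
# guess. Bounds, data, and model (Newton's law of cooling) are assumed.
import numpy as np

t = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
T = np.array([90.0, 72.0, 60.0, 46.0, 35.0])   # synthetic cooling data
T_env = 25.0                                    # known ambient temperature

def sse(T0, k):
    """Squared error of the fit T(t) = T_env + (T0 - T_env) * exp(-k t)."""
    return np.sum((T - (T_env + (T0 - T_env) * np.exp(-k * t))) ** 2)

# Exhaustive search over the (assumed) reduced regions for each parameter.
T0_grid = np.linspace(80.0, 100.0, 201)
k_grid = np.linspace(0.01, 0.5, 200)
best = min((sse(T0, k), T0, k) for T0 in T0_grid for k in k_grid)
print(f"SSE={best[0]:.2f} at T0={best[1]:.2f}, k={best[2]:.3f}")
```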
Khatamsaz, Danial; Peddareddygari, Lalith; Friedman, Sam; Allaire, Douglas L. "Efficient multi-information source multiobjective Bayesian optimization." Conference paper, AIAA SciTech Forum, no. 2020-2127, 2020. doi:10.2514/6.2020-2127

Abstract: Multi-objective optimization is often a difficult task owing to the need to balance competing objectives. A typical approach to handling this is to estimate a Pareto frontier in objective space by identifying non-dominated points. This task is typically computationally demanding owing to the need to incorporate information of high enough fidelity to be trusted in design and decision-making processes. In this work, we present a multi-information source framework for enabling efficient multi-objective optimization. The framework allows for the exploitation of all available information and considers both potential improvement and cost. The framework includes ingredients of model fusion, expected hypervolume improvement, and intermediate Gaussian process surrogates. The approach is demonstrated on two test problems and an aerostructural wing design problem.
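Expected hypervolume improvement, one of the named ingredients, admits a short Monte Carlo illustration. The sketch below handles two minimization objectives; the Pareto set, reference point, and the Gaussian predictive distribution at the candidate design are invented for illustration and do not come from the paper.

```python
# Hypothetical Monte Carlo sketch of expected hypervolume improvement (EHVI)
# for two minimization objectives. The Pareto set, reference point, and the
# Gaussian prediction at the candidate design are invented for illustration.
import numpy as np

def hypervolume(points, ref):
    """Dominated hypervolume of a mutually non-dominated 2-D set (minimization)."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):              # ascending f1, descending f2
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def ehvi(pareto, ref, mean, std, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    base = hypervolume(pareto, ref)
    samples = np.minimum(rng.normal(mean, std, size=(n, 2)), ref)  # clamp to box
    gain = 0.0
    for y in samples:
        if any(p[0] <= y[0] and p[1] <= y[1] for p in pareto):
            continue                            # dominated: no improvement
        keep = [p for p in pareto if not (y[0] <= p[0] and y[1] <= p[1])]
        gain += hypervolume(keep + [tuple(y)], ref) - base
    return gain / n

pareto = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(f"EHVI = {ehvi(pareto, ref=(6.0, 6.0), mean=[1.5, 1.5], std=[0.3, 0.3]):.3f}")
```

Dividing such an acquisition value by the cost of querying each information source gives the cost-aware trade-off the abstract describes.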
2019

Imani, Mahdi; Ghoreishi, Seyede Fatemeh; Allaire, Douglas; Braga-Neto, Ulisses M. "MFBO-SSM: Multi-fidelity Bayesian optimization for fast inference in state-space models." Conference paper, Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, iss. 1, pp. 7858-7865, 2019. doi:10.1609/aaai.v33i01.33017858

Abstract: Nonlinear state-space models are ubiquitous in modeling real-world dynamical systems. Sequential Monte Carlo (SMC) techniques, also known as particle methods, are a well-known class of parameter estimation methods for this general class of state-space models. Existing SMC-based techniques rely on excessive sampling of the parameter space, which makes their computation intractable for large systems or tall data sets. Bayesian optimization techniques have been used for fast inference in state-space models with intractable likelihoods. These techniques aim to find the maximum of the likelihood function by sequential sampling of the parameter space through a single SMC approximator. Various SMC approximators with different fidelities and computational costs are often available for sample-based likelihood approximation. In this paper, we propose a multi-fidelity Bayesian optimization algorithm for the inference of general nonlinear state-space models (MFBO-SSM), which enables simultaneous sequential selection of parameters and approximators. The accuracy and speed of the algorithm are demonstrated by numerical experiments using synthetic gene expression data from a gene regulatory network model and real data from the VIX stock price index.
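The central idea, letting cheap but noisy likelihood approximators screen the parameter space before expensive, accurate ones are spent, can be sketched in a few lines. Below, synthetic stand-ins replace the SMC likelihood estimators, and a simple two-stage screen-then-refine scheme replaces the paper's joint Bayesian-optimization selection of parameters and approximators; everything here is an assumption for illustration.

```python
# Hypothetical sketch of the multi-fidelity idea: a cheap, noisy likelihood
# approximator screens the parameter space; an expensive, accurate one
# refines the shortlist. The approximators are synthetic stand-ins for SMC
# likelihood estimates at different particle counts, not the paper's method.
import numpy as np

rng = np.random.default_rng(2)

def approx_loglik(theta, n_particles):
    """Stand-in for an SMC log-likelihood estimate: noise shrinks with fidelity."""
    return -(theta - 1.3) ** 2 + rng.normal(scale=2.0 / np.sqrt(n_particles))

thetas = np.linspace(-2.0, 4.0, 61)

# Stage 1: screen every parameter value with the cheap approximator.
cheap = np.array([approx_loglik(th, n_particles=10) for th in thetas])
shortlist = thetas[np.argsort(cheap)[-5:]]        # five most promising values

# Stage 2: re-evaluate the shortlist with the expensive approximator.
accurate = [approx_loglik(th, n_particles=2000) for th in shortlist]
print(f"estimated MLE: theta = {shortlist[int(np.argmax(accurate))]:.2f}")
```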
Burrows, Brian; Allaire, Douglas. "Analysis of Uncertainty Quantification Techniques for Vehicle Capability in Damaged Composite Aircraft." Conference paper, AIAA Aviation Forum, no. 2019-3663, 2019. doi:10.2514/6.2019-3663

Sanghvi, Meet; Honarmandi, Pejman; Attari, Vahid; Duong, Thien; Arroyave, Raymundo; Allaire, Douglas L. "Uncertainty Propagation via Probability Measure Optimized Importance Weights with Application to Thermoelectric Materials." Proceedings article, AIAA Scitech 2019 Forum, pp. 0967, 2019. doi:10.2514/6.2019-0967

2018

Ghoreishi, Seyede Fatemeh; Allaire, Douglas L. "Gaussian process regression for Bayesian fusion of multi-fidelity information sources." Proceedings article, 2018 Multidisciplinary Analysis and Optimization Conference, pp. 4176, 2018. doi:10.2514/6.2018-4176

Zhang, Guanglu; Allaire, Douglas; McAdams, Daniel A; Shankar, Venkatesh. "Generating Technology Evolution Prediction Intervals With Bootstrap Method." Proceedings article, ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. V007T06A057, American Society of Mechanical Engineers, 2018. URL: http://mechanicaldesign.asmedigitalcollection.asme.org/article.aspx?articleid=2712590

Isaac, Benson; Allaire, Douglas. "Expensive Black-Box Model Optimization via a Gold Rush Policy." Proceedings article, ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. V02BT03A035, American Society of Mechanical Engineers, 2018. URL: https://proceedings.asmedigitalcollection.asme.org/proceeding.aspx?articleID=2713244
Ghoreishi, Seyede Fatemeh; Allaire, Douglas L. "A Fusion-Based Multi-Information Source Optimization Approach using Knowledge Gradient Policies." Conference paper, 2018 AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, pp. 1159, 2018. doi:10.2514/6.2018-1159

Swischuk, Renee C; Allaire, Douglas L. "A Machine Learning Approach to Aircraft Sensor Error Detection and Correction." Conference paper, 2018 AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, pp. 1164, 2018.

Isaac, Benson; Friedman, Sam; Allaire, Douglas L. "Efficient approximation of coupling variable fixed point sets for decoupling multidisciplinary systems." Conference paper, 2018 AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, pp. 1908, 2018. doi:10.2514/6.2018-1908

Friedman, Sam; Isaac, Benson; Ghoreishi, Seyede Fatemeh; Allaire, Douglas L. "Efficient Decoupling of Multiphysics Systems for Uncertainty Propagation." Conference paper, 2018 AIAA Non-Deterministic Approaches Conference, pp. 1661, 2018. doi:10.2514/6.2018-1661

2017

Friedman, Sam; Ghoreishi, Seyede Fatemeh; Allaire, Douglas L. "Quantifying the impact of different model discrepancy formulations in coupled multidisciplinary systems." Conference paper, 19th AIAA Non-Deterministic Approaches Conference, pp. 1950, 2017.

Burrows, Brian J; Allaire, Douglas L. "A Comparison of Naive Bayes Classifiers with Applications to Self-Aware Aerospace Vehicles." Conference paper, 18th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 3819, 2017.

Thomison, William D; Allaire, Douglas L. "A Model Reification Approach to Fusing Information from Multifidelity Information Sources." Conference paper, 19th AIAA Non-Deterministic Approaches Conference, pp. 1949, 2017.
2016

Li, Kaiyu; Allaire, Douglas. "A Compressed Sensing Approach to Uncertainty Propagation for Approximately Additive Functions." Conference paper, ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. V01AT02A027, American Society of Mechanical Engineers, 2016.

Burrows, Brian; Isaac, Benson; Allaire, Douglas L. "A Dynamic Data-Driven Approach to Multiple Task Capability Estimation for Self-Aware Aerospace Vehicles." Conference paper, 17th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 4125, 2016.

Isaac, Benson; Allaire, Douglas L. "A Dynamic Data-Driven Approach to Optimal Offline Learning for Online Flight Capability Estimation." Conference paper, 18th AIAA Non-Deterministic Approaches Conference, pp. 1444, 2016.

Rooney, Warren; Allaire, Douglas. "An Information-Theoretic Model of Project Schedule Overruns Caused by Task Rework: A Case for Newspeak." Conference paper, ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. V01BT02A022, American Society of Mechanical Engineers, 2016.

Ghoreishi, Seyede Fatemeh; Allaire, Douglas L. "Compositional uncertainty analysis via importance weighted Gibbs sampling for coupled multidisciplinary systems." Conference paper, 18th AIAA Non-Deterministic Approaches Conference, pp. 1443, 2016.

Korobenko, Artem; Pigazzini, Marco; Singh, Victor; Kim, Hyonny; Allaire, Douglas L; Willcox, Karen E; Marsden, Alison; Bazilevs, Yuri. "Dynamic-Data-Driven Damage Prediction in Aerospace Composite Structures." Conference paper, 17th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 4126, 2016.

Friedman, Samuel; Allaire, Douglas. "Quantifying Model Discrepancy in Coupled Multi-Physics Systems." Conference paper, ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. V01AT02A024, American Society of Mechanical Engineers, 2016.
2015

Lam, Rémi; Allaire, Douglas L; Willcox, Karen E. "Multifidelity optimization using statistical surrogate modeling for non-hierarchical information sources." Conference paper, 56th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, pp. 0143, 2015.

2014

Allaire, Douglas; Kordonowy, David; Lecerf, Marc; Mainini, Laura; Willcox, Karen. "Multifidelity DDDAS methods with application to a self-aware aerospace vehicle." Conference paper, Procedia Computer Science, vol. 29, pp. 1182-1192, Elsevier, 2014.

Allaire, Douglas L; Lecerf, Marc; Willcox, Karen E; Kordonowy, David N. "A Dynamic Data Driven Approach to Online Flight Envelope Updating for Self Aware Aerospace Vehicles." Conference paper, 16th AIAA Non-Deterministic Approaches Conference, pp. 1175, 2014.
2013

Allaire, D; Chambers, J; Cowlagi, R; Kordonowy, D; Lecerf, M; Mainini, Laura; Ulker, F; Willcox, Karen. "An offline/online DDDAS capability for self-aware aerospace vehicles." Conference paper, Procedia Computer Science, vol. 18, pp. 1959-1968, Elsevier, 2013.

2012

Allaire, Douglas; Willcox, Karen. "Fusing information from multifidelity computer models of physical systems." Conference paper, 15th International Conference on Information Fusion (FUSION 2012), pp. 2458-2465, IEEE, 2012.

Amaral, Sergio; Allaire, Douglas; Willcox, Karen. "A decomposition approach to uncertainty analysis of multidisciplinary systems." Conference paper, 12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference and 14th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 5563, 2012.

He, Qinxian; Allaire, Douglas; Deyst, John; Willcox, Karen. "A Bayesian Framework for Uncertainty Quantification in the Design of Complex Systems." Conference paper, 12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference and 14th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 5479, 2012.

2011

Allaire, Douglas; Willcox, Karen; Deyst, John. "On the Application of Estimation Theory to Complex System Design Under Uncertainty." Conference paper, SIAM Conference on Computational Science and Engineering, Reno, vol. 1, 2011.
2010

Allaire, Douglas; Willcox, Karen; Toupet, Olivier. "A Bayesian-based approach to multifidelity multidisciplinary design optimization." Conference paper, 13th AIAA/ISSMO Multidisciplinary Analysis Optimization Conference, pp. 9183, 2010.

2009

Noel, George; Allaire, Doug; Jacobson, Stuart; Willcox, Karen; Cointin, Rebecca; et al. "Assessment of the aviation environmental design tool." Conference paper, Eighth USA/Europe Air Traffic Management Research and Development Seminar (ATM2009), 2009.

Cointin, Rebecca; Noel, George; Allaire, Doug; Willcox, Karen; Jacobson, Stuart. "Assessing the uncertainty in FAA's Noise and Emissions Compliance Model." Proceedings article, INTER-NOISE and NOISE-CON Congress and Conference Proceedings, vol. 2009, no. 4, pp. 2516-2525, 2009.

2007

Allaire, Douglas L; Waitz, Ian A; Willcox, Karen E. "A comparison of two methods for predicting emissions from aircraft gas turbine combustors." Conference paper, ASME Turbo Expo 2007: Power for Land, Sea, and Air, pp. 899-908, American Society of Mechanical Engineers, 2007.

Invited Talks

International Invited Talks

“What Next? Sequentially value-optimal engineering tasking for analysis and design,” Computational Methods for Design and Control of Next-Generation Engineered Systems Workshop, Singapore University of Technology and Design (SUTD), Singapore (May, 2018).

“Offline/online data-driven approaches to engineering analysis,” National University of Singapore, Department of Mechanical Engineering Seminar Series, Singapore (May, 2016).

“Compositional uncertainty quantification for coupled multiphysics systems,” Society for Industrial and Applied Mathematics (SIAM) Uncertainty Quantification (UQ) Conference, Lausanne, Switzerland (April, 2016).
“Robustness, prevention, and resilience: design under uncertainty for complex engineering systems,” Complex Systems Digital Campus (CS-DC) World e-conference (October, 2015).

“A Bayesian-based approach to multifidelity multidisciplinary design optimization,” Uncertainty Quantification Workshop, International Centre for Mathematical Sciences (ICMS), Edinburgh, Scotland (May, 2010).

“Application of the Sobol’ method to large-scale aviation environmental policy-making,” The 5th Summer School on Sensitivity Analysis of Model Output, Venice, Italy (September, 2008).

National Invited Talks

“Efficient uncertainty propagation for coupled systems,” Army Research Laboratory Seminar, Aberdeen Proving Grounds, MD (September, 2018).

“Global sensitivity analysis via transductive measure transformation,” Society for Industrial and Applied Mathematics (SIAM) Annual Meeting, Portland, OR (July, 2018).

“Towards efficient value-gradient querying via subspace optimization,” Air Force Research Laboratory, Dayton, OH (June, 2018).

“Design for dynamic data-driven self-aware systems,” American Society of Mechanical Engineers International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE), 43rd Design Automation Conference KEYNOTE on Data-Driven Engineering Design, Cleveland, OH (August, 2017).

“A model reification approach to fusing information from multifidelity information sources,” Society for Industrial and Applied Mathematics (SIAM) Computational Science and Engineering (CSE), Atlanta, GA (February, 2017).

“Dynamic data-driven methods for self-aware aerospace vehicles,” Air Force Office of Scientific Research (AFOSR) Dynamic Data Driven Application Systems (DDDAS) Program Review, Arlington, VA (January, 2016).

“Offline learning for dynamic data-driven capability estimation for self-aware aerospace vehicles,” INFORMS Annual Meeting, Philadelphia, PA (November, 2015).

“An offline/online compositional approach to uncertainty quantification for coupled multidisciplinary systems,” Texas A&M University, Department of Mathematics, Numerical Analysis Seminar (October, 2015).

“A scalable compositional approach to uncertainty quantification for the optimization under uncertainty of multi-physics systems,” SIAM CSE, Salt Lake City, UT (March, 2015).

“Offline libraries and online classification for enabling a dynamic data-driven self-aware aerospace vehicle,” ASME IDETC Dynamic Data-Driven Application Systems Panel Session, Buffalo, NY (August, 2014).

“An offline/online approach to enabling a dynamic data-driven self-aware aerospace vehicle,” MIT DDDAS Workshop, Cambridge, MA (May, 2014).

“A composition-based approach to uncertainty analysis with application to multi-information source optimization,” Information Science and Technology Institute, Los Alamos National Laboratory, Los Alamos, NM (April, 2014).

“Multi-information source optimization: resource allocation,” Materials by Design Workshop, Los Alamos National Laboratory, Los Alamos, NM (July, 2013).

“Multifidelity model management for conceptual design,” Air Force Research Laboratory, Multidisciplinary Science and Technology Center Technical Interchange Meeting on Multifidelity Methods, Dayton, OH (February, 2013).

“Sensitivity analysis for model management and model fusion for design and analysis,” MultiSampler Optimization Workshop, Santa Fe, NM (July, 2012).
“An information-theoretic metric of system complexity with application to engineering design,” 7th Consortium for Multidisciplinary Design Optimization, West Lafayette, IN (July, 2012); 2012 Spring Research Conference: Enabling the Interface Between Statistics and Engineering, Cambridge, MA (June, 2012); 8th AIAA Multidisciplinary Design Optimization Specialist Conference, Honolulu, HI (April, 2012).

“An entropy-based uncertainty measure and importance indicator,” Institute for Operations Research and Management Science (INFORMS) Annual Meeting, Charlotte, NC (November, 2011).

“A multi-fidelity multidisciplinary conceptual design methodology,” 6th Consortium for Multidisciplinary Design Optimization, Ann Arbor, MI (July, 2011).

“Stochastic process decision methods for complex cyber-physical system design and development,” MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Cambridge, MA (November, 2010).

“A Bayesian-based approach to fidelity management for multidisciplinary design optimization,” Air Force Research Laboratory Seminar, Dayton, OH (April, 2010).