Academic Research

Our team comprises experts in building science, machine learning, and artificial intelligence. Our Research & Development team publishes its research in academic journals.

Adaptive in situ model refinement for surrogate-augmented population-based optimization

In surrogate-based optimization (SBO), the deception issues associated with the low fidelity of the surrogate model can be mitigated through in situ model refinement that uses infill points during optimization. However, there is a lack of model refinement methods that are both independent of the choice of surrogate model (neural networks, radial basis functions, Kriging, etc.) and provide a methodical approach to preserving the fidelity of the search dynamics, especially in the case of population-based heuristic optimization processes. This paper presents an adaptive model refinement (AMR) approach to fill this important gap. Therein, the question of when to refine the surrogate model is answered by a novel hypothesis-testing concept that compares the distribution of model error with the distribution of function improvement over iterations. These distributions are computed, respectively, via a probabilistic cross-validation approach and by leveraging the probabilistic improvement information uniquely afforded by population-based algorithms such as particle swarm optimization. Moreover, the AMR method identifies the size of the batch of infill points needed for refinement. Numerical experiments performed on multiple benchmark functions and an optimal (building energy) planning problem demonstrate AMR's ability to preserve the computational efficiency of the SBO process while providing solutions of more attractive fidelity than those obtained with a standard SBO approach.
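The refinement trigger can be pictured as a two-sample comparison between surrogate error and observed improvement. The sketch below is a minimal, hypothetical Python illustration under that reading: it assumes absolute cross-validation errors and recent per-iteration fitness improvements are already available, and it substitutes a two-sample Kolmogorov-Smirnov test plus a median check for the paper's specific hypothesis-testing procedure, which is not reproduced here.

```python
import numpy as np
from scipy.stats import ks_2samp

def should_refine(cv_errors, recent_improvements, alpha=0.05):
    # Hypothetical trigger in the spirit of AMR: refine the surrogate once
    # its error distribution (here, absolute cross-validation errors) is no
    # longer clearly smaller than the distribution of fitness improvements
    # observed over recent PSO iterations. The KS test is a stand-in for
    # the paper's hypothesis-testing step (assumption).
    cv_errors = np.asarray(cv_errors)
    recent_improvements = np.asarray(recent_improvements)
    _, p_value = ks_2samp(cv_errors, recent_improvements)
    # Refine if the two distributions are statistically indistinguishable,
    # or if the errors already dominate the improvements.
    comparable = p_value > alpha
    errors_dominate = np.median(cv_errors) >= np.median(recent_improvements)
    return comparable or errors_dominate
```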

Concurrent surrogate model selection (COSMOS): optimizing model type, kernel function, and hyper-parameters

This paper presents an automated surrogate model selection framework called Concurrent Surrogate Model Selection (COSMOS). Unlike most existing techniques, COSMOS coherently operates at three levels, namely: 1) selecting the model type (e.g., RBF or Kriging), 2) selecting the kernel function type (e.g., cubic or multiquadric kernel in RBF), and 3) determining the optimal values of the typically user-prescribed hyper-parameters (e.g., shape parameter in RBF). The quality of the models is determined and compared using measures of median and maximum error, given by the Predictive Estimation of Model Fidelity (PEMF) method; PEMF is a robust implementation of sequential k-fold cross-validation. The selection process undertakes either a cascaded approach over the three levels or a more computationally efficient one-step approach that solves a mixed-integer nonlinear programming problem. Genetic algorithms are used to perform the optimal selection. Application of COSMOS to benchmark test functions yielded optimal model choices that agree well with those obtained by analyzing the model errors on a large set of additional test points. For the four analytical benchmark problems and three practical engineering applications – airfoil design, window heat transfer modeling, and building energy modeling – diverse forms of models/kernels are observed to be selected as optimal choices. These observations further establish the need for automated multi-level model selection that is also guided by dependable measures of model fidelity.
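As a rough illustration of the selection idea (not the COSMOS implementation itself), the sketch below scores a handful of RBF kernel and shape-parameter combinations by their median k-fold cross-validation error and keeps the best one; COSMOS additionally spans multiple model types, uses PEMF error measures based on sequential cross-validation, and performs the search with a genetic algorithm rather than exhaustive enumeration. The candidate kernels, epsilon grid, and fold count here are illustrative assumptions.

```python
import numpy as np
from itertools import product
from scipy.interpolate import RBFInterpolator

def select_rbf_surrogate(X, y, kernels=("cubic", "multiquadric", "gaussian"),
                         epsilons=(0.5, 1.0, 2.0), n_folds=5, seed=0):
    # Exhaustive kernel/shape-parameter search by median k-fold CV error.
    # This is a simplified stand-in for COSMOS, restricted to RBF models.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    best = None
    for kernel, eps in product(kernels, epsilons):
        fold_errors = []
        for test_idx in folds:
            train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
            model = RBFInterpolator(X[train_idx], y[train_idx],
                                    kernel=kernel, epsilon=eps)
            pred = model(X[test_idx])
            fold_errors.append(np.median(np.abs(pred - y[test_idx])))
        med_err = float(np.median(fold_errors))
        if best is None or med_err < best[0]:
            best = (med_err, kernel, eps)
    return best  # (median CV error, kernel name, shape parameter)
```

Here X is an (n_samples, n_dims) array of training inputs and y the matching responses; the same pattern extends to other model families (e.g., Kriging) by swapping the constructor inside the inner loop.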
