Bibliography

[ARC+19]

Ahsan Alvi, Binxin Ru, Jan-Peter Calliess, Stephen Roberts, and Michael A Osborne. Asynchronous batch Bayesian optimisation with improved local penalisation. In International Conference on Machine Learning. 2019.

[ASSR19]

Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. Deep evidential regression. arXiv preprint arXiv:1910.02600, 2019.

[BGL+12]

Julien Bect, David Ginsbourger, Ling Li, Victor Picheny, and Emmanuel Vazquez. Sequential design of computer experiments for the estimation of a probability of failure. Statistics and Computing, 22(3):773–793, 2012.

[BES+08]

Barron J Bichon, Michael S Eldred, Laura Painton Swiler, Sankaran Mahadevan, and John M McFarland. Efficient global reliability analysis for nonlinear implicit performance functions. AIAA Journal, 46(10):2459–2468, 2008.

[BCO21]

Mickael Binois, Nicholson Collier, and Jonathan Ozik. A portfolio approach to massively parallel Bayesian optimization. arXiv preprint arXiv:2110.09334, 2021.

[BCKW15]

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In International Conference on Machine Learning, 1613–1622. PMLR, 2015.

[BRVDW19]

David Burt, Carl Edward Rasmussen, and Mark Van Der Wilk. Rates of convergence for sparse variational Gaussian process regression. In International Conference on Machine Learning, 862–871. PMLR, 2019.

[CZZ18]

Laming Chen, Guoxin Zhang, and Eric Zhou. Fast greedy MAP inference for determinantal point process to improve recommendation diversity. Advances in Neural Information Processing Systems, 2018.

[CG13]

Clément Chevalier and David Ginsbourger. Fast computation of the multi-points expected improvement with applications in batch selection. In International Conference on Learning and Intelligent Optimization, 59–69. Springer, 2013.

[CGE14]

Clément Chevalier, David Ginsbourger, and Xavier Emery. Corrected kriging update formulae for batch-sequential data assimilation. In Mathematics of Planet Earth, pages 119–122. Springer, 2014.

[CDD12]

Ivo Couckuyt, Dirk Deschrijver, and Tom Dhaene. Towards efficient multiobjective optimization: multiobjective statistical criterions. 2012 IEEE Congress on Evolutionary Computation, CEC 2012, pages 10–15, 2012. doi:10.1109/CEC.2012.6256586.

[DBB20]

Samuel Daulton, Maximilian Balandat, and Eytan Bakshy. Differentiable expected hypervolume improvement for parallel multi-objective Bayesian optimization. arXiv preprint arXiv:2006.05078, 2020.

[DTLZ02]

Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zitzler. Scalable multi-objective optimization test problems. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No. 02TH8600), volume 1, 825–830. IEEE, 2002.

[DPRP22]

Youssef Diouane, Victor Picheny, Rodolphe Le Riche, and Alexandre Scotto Di Perrotolo. TREGO: a trust-region framework for efficient global optimization. 2022. arXiv:2101.06808.

[DKvdH+17]

Vincent Dutordoir, Nicolas Knudde, Joachim van der Herten, Ivo Couckuyt, and Tom Dhaene. Deep Gaussian process metamodeling of sequentially sampled non-stationary response surfaces. In 2017 Winter Simulation Conference (WSC), 1728–1739. 2017. doi:10.1109/WSC.2017.8247911.

[EPG+19]

David Eriksson, Michael Pearce, Jacob Gardner, Ryan D Turner, and Matthias Poloczek. Scalable global optimization via local Bayesian optimization. In Advances in Neural Information Processing Systems, 5496–5507. 2019. URL: http://papers.nips.cc/paper/8788-scalable-global-optimization-via-local-bayesian-optimization.pdf.

[FF95]

C. M. Fonseca and P. J. Fleming. Multiobjective genetic algorithms made easy: selection sharing and mating restriction. In First International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications, 45–52. 1995. doi:10.1049/cp:19951023.

[GG16]

Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In International Conference on Machine Learning, 1050–1059. PMLR, 2016.

[GKZ+14]

Jacob Gardner, Matt Kusner, Zhixiang Xu, Kilian Weinberger, and John Cunningham. Bayesian optimization with inequality constraints. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research. PMLR, 22–24 Jun 2014. URL: http://proceedings.mlr.press/v32/gardner14.html.

[GT16]

Alan Genz and Giang Trinh. Numerical computation of multivariate normal probabilities using bivariate conditioning. In Monte Carlo and Quasi-Monte Carlo Methods, pages 289–302. Springer, 2016.

[GLRC10a]

David Ginsbourger, Rodolphe Le Riche, and Laurent Carraro. Kriging Is Well-Suited to Parallelize Optimization, pages 131–162. Springer Berlin Heidelberg, Berlin, Heidelberg, 2010. URL: https://doi.org/10.1007/978-3-642-10701-6_6, doi:10.1007/978-3-642-10701-6_6.

[GLRC10b]

David Ginsbourger, Rodolphe Le Riche, and Laurent Carraro. Kriging is well-suited to parallelize optimization. In Computational Intelligence in Expensive Optimization Problems, pages 131–162. Springer, 2010.

[GonzalezDHL16]

Javier González, Zhenwen Dai, Philipp Hennig, and Neil Lawrence. Batch Bayesian optimization via local penalization. In Artificial Intelligence and Statistics. 2016.

[GL12]

Robert B Gramacy and Herbert KH Lee. Cases for the nugget in modeling computer experiments. Statistics and Computing, 22(3):713–722, 2012.

[HBB+19]

Ali Hebbal, Loic Brevault, Mathieu Balesdent, El-Ghazali Talbi, and Nouredine Melab. Bayesian optimization using deep Gaussian processes. arXiv preprint arXiv:1905.03350, 2019.

[HernandezLHG14]

J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. Advances in Neural Information Processing Systems, 2014.

[HernandezLA15]

José Miguel Hernández-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, 1861–1869. PMLR, 2015.

[HernandezLRPKAG17]

José Miguel Hernández-Lobato, James Requeima, Edward O Pyzer-Knapp, and Alán Aspuru-Guzik. Parallel and distributed Thompson sampling for large-scale accelerated exploration of chemical space. In International Conference on Machine Learning. 2017.

[HHGL11]

Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian active learning for classification and preference learning. 2011. arXiv:1112.5745.

[HANZ06]

Deng Huang, Theodore T Allen, William I Notz, and Ning Zeng. Global optimization of stochastic black-box systems via sequential kriging meta-models. Journal of Global Optimization, 2006.

[JSW98]

Donald R Jones, Matthias Schonlau, and William J Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492, 1998.

[KLHG21]

Arlind Kadra, Marius Lindauer, Frank Hutter, and Josif Grabocka. Well-tuned simple nets excel on tabular datasets. Advances in Neural Information Processing Systems, 2021.

[KKSP18]

Kirthevasan Kandasamy, Akshay Krishnamurthy, Jeff Schneider, and Barnabas Poczos. Parallelised Bayesian optimisation via Thompson sampling. In Amos Storkey and Fernando Perez-Cruz, editors, Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, 133–142. PMLR, 2018. URL: https://proceedings.mlr.press/v84/kandasamy18a.html.

[KOHagan00]

M. C. Kennedy and A. O'Hagan. Predicting the output from a complex computer code when fast approximations are available. Biometrika, 87:1–13, 2000.

[LPB16]

Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. arXiv preprint arXiv:1612.01474, 2016.

[Mac92]

David J. C. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590–604, 1992.

[MOP23]

H. B. Moss, S. W. Ober, and V. Picheny. Inducing point allocation for sparse Gaussian processes in high-throughput Bayesian optimisation. In Proceedings of the Twenty-Fifth International Conference on Artificial Intelligence and Statistics. 2023.

[MLGR21]

Henry B Moss, David S Leslie, Javier Gonzalez, and Paul Rayson. GIBBON: general-purpose information-based Bayesian optimisation. Journal of Machine Learning Research, 22:1–49, 2021.

[MLR21]

Henry B Moss, David S Leslie, and Paul Rayson. MUMBO: multi-task max-value Bayesian optimization. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2020, Ghent, Belgium, September 14–18, 2020, Proceedings, Part III. 2021.

[MLR20]

Henry B. Moss, David S. Leslie, and Paul Rayson. BOSH: Bayesian optimization by sampling hierarchically. arXiv preprint, 2020.

[NR08]

Hannes Nickisch and Carl Edward Rasmussen. Approximations for binary Gaussian process classification. Journal of Machine Learning Research, 9(67):2035–2078, 2008. URL: http://jmlr.org/papers/v9/nickisch08a.html.

[OA09]

Manfred Opper and Cédric Archambeau. The variational Gaussian approximation revisited. Neural Computation, 2009.

[OWA+21]

Ian Osband, Zheng Wen, Mohammad Asghari, Morteza Ibrahimi, Xiyuan Lu, and Benjamin Van Roy. Epistemic neural networks. arXiv preprint arXiv:2107.08924, 2021.

[PRD+17]

P. Perdikaris, M. Raissi, A. Damianou, N. D. Lawrence, and G. E. Karniadakis. Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2017.

[PGR+10]

Victor Picheny, David Ginsbourger, Olivier Roustant, Raphael T Haftka, and Nam-Ho Kim. Adaptive designs of experiments for accurate approximation of target regions. Journal of Mechanical Design, 2010.

[PWG13]

Victor Picheny, Tobias Wagner, and David Ginsbourger. A benchmark of kriging-based infill criteria for noisy optimization. Structural and Multidisciplinary Optimization, 48, 2013. doi:10.1007/s00158-013-0919-4.

[RBM08]

Pritam Ranjan, Derek Bingham, and George Michailidis. Sequential experiment design for contour estimation from complex computer codes. Technometrics, 50(4):527–541, 2008.

[SEH18]

Hugh Salimbeni, Stefanos Eleftheriadis, and James Hensman. Natural gradients in practice: non-conjugate variational inference in Gaussian process models. International Conference on Artificial Intelligence and Statistics, 2018.

[SWJ98]

Matthias Schonlau, William J Welch, and Donald R Jones. Global versus local search in constrained optimization of computer models. Lecture Notes-Monograph Series, pages 11–25, 1998.

[SKSK10]

Niranjan Srinivas, Andreas Krause, Matthias Seeger, and Sham M. Kakade. Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. In Johannes Fürnkranz and Thorsten Joachims, editors, Proceedings of the 27th International Conference on Machine Learning (ICML-10), 1015–1022. Omnipress, 2010.

[TPD20]

Léonard Torossian, Victor Picheny, and Nicolas Durrande. Bayesian quantile and expectile optimisation. arXiv preprint arXiv:2001.04833, 2020.

[VMA+21]

Sattar Vakili, Henry Moss, Artem Artemev, Vincent Dutordoir, and Victor Picheny. Scalable Thompson sampling using sparse Gaussian process models. Advances in Neural Information Processing Systems, 2021.

[VVL99]

David A Van Veldhuizen and Gary B Lamont. Multiobjective evolutionary algorithm test suites. In Proceedings of the 1999 ACM symposium on Applied computing, 351–357. 1999.

[WJ17]

Zi Wang and Stefanie Jegelka. Max-value entropy search for efficient Bayesian optimization. arXiv preprint arXiv:1703.01968, 2017.

[WZH+13]

Ziyu Wang, Masrour Zoghi, Frank Hutter, David Matheson, and Nando de Freitas. Bayesian optimization in high dimensions via random embeddings. In IJCAI, volume 13, 1778–1784. 2013.

[WBT+20]

James Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, and Marc Deisenroth. Efficiently sampling functions from Gaussian process posteriors. In International Conference on Machine Learning. 2020.

[WHD18]

James Wilson, Frank Hutter, and Marc Deisenroth. Maximizing acquisition functions for Bayesian optimization. Advances in Neural Information Processing Systems, 2018.

[YEDBack19]

Kaifeng Yang, Michael Emmerich, André Deutz, and Thomas Bäck. Efficient computation of expected hypervolume improvement using box decomposition algorithms. Journal of Global Optimization, 75(1):3–34, 2019.