The High Energy Physics (HEP) community is facing a daunting computing challenge in the upcoming years, as upgrades to the Large Hadron Collider and new technologies such as liquid argon detectors will require vast amounts of simulation and processing. Additionally, the stochastic nature of research suggests that leveraging elastically available resources would increase efficiency and cost-effectiveness. At the same time, the decreasing cost of renting commercial cloud resources and the increasing scale of High Performance Computing (HPC) facilities make them attractive targets for HEP workflows. The HEPCloud program aims to integrate grid, cloud, and allocation-based facilities into a single virtual facility, as transparently to the user as possible. Recently, we have integrated Amazon Web Services, Google Cloud Platform, and the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC). We will discuss results from these studies as well as future directions involving machine learning and quantum computing.
Burt Holzman is Assistant Director of the Scientific Computing Division at Fermi National Accelerator Laboratory, where he oversees the HEPCloud program and coordinates cross-cutting initiatives and solutions across the facility. Previously, he served as manager of the Tier-1 computing facility for the CMS experiment, and as a group leader for grid services and middleware. Before joining Fermilab, he was a research scientist and postdoctoral researcher at Brookhaven National Laboratory, where he studied two-particle interferometry in heavy ion collisions and was the head of computing for the PHOBOS experiment. He holds a B.S. in Mechanical Engineering from Carnegie Mellon University and a Ph.D. in Physics from the University of Illinois at Chicago.