Datacenters consume a stunning amount of energy to power and cool generative artificial intelligence (genAI) compute and infrastructure hardware. Training a genAI artificial neural network model typically takes months, with thousands of multi-billion-transistor processors, high-bandwidth semiconductor and magnetic memories, and optical network processors operating continuously [1], [2]. The New York Times has reported that "In a middle-ground scenario, by 2027 AI servers could use between 85 to 134 terawatt hours (TWh) annually" [3]. GenAI model training thus presents a daunting and pressing power consumption challenge that is misaligned with societal net-zero and greenhouse gas reduction objectives.
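For scale, a simple conversion puts the reported figures in terms of average continuous power draw, as a rough estimate assuming uniform consumption over the roughly 8760 hours in a year:

\[
\frac{85\ \text{TWh/yr}}{8760\ \text{h/yr}} \approx 9.7\ \text{GW},
\qquad
\frac{134\ \text{TWh/yr}}{8760\ \text{h/yr}} \approx 15.3\ \text{GW},
\]

that is, on the order of the continuous output of ten to fifteen gigawatt-scale power plants.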