Cooling infrastructure encompasses the complete mechanical, hydraulic, and thermodynamic systems that manage heat in computational facilities. Data centers generate heat as an unavoidable byproduct of computation; if not continuously removed, the heat causes servers to throttle performance, suffer damage, or fail. Modern hyperscale data centers employ sophisticated cooling architectures: evaporative cooling circulates water through towers where evaporation carries heat to the atmosphere (thermodynamically efficient, water-intensive); air cooling uses fans and heat exchangers (eliminates water use, increases electricity consumption); liquid immersion cooling submerges servers in dielectric fluid (improves heat transfer, requires hardware redesign). Cooling systems consume 30-40% of total facility electricity—nearly as much as the computational load itself. A large data center campus can evaporate one to five million gallons of water daily through evaporative cooling, creating resource competition in water-scarce regions. The infrastructure is not ancillary but integral: without continuous cooling, computation halts.
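As a rough illustration, the cooling share of facility electricity maps onto the common PUE metric (power usage effectiveness: total facility power divided by IT power). The sketch below makes the simplifying assumption that cooling is the only non-IT load, which slightly understates real-world PUE:

```python
def pue_from_cooling_fraction(cooling_fraction: float) -> float:
    """Estimate PUE given the fraction of total facility electricity
    consumed by cooling, assuming (for illustration only) that cooling
    is the sole overhead beyond the IT load itself."""
    it_fraction = 1.0 - cooling_fraction  # remainder goes to IT load
    return 1.0 / it_fraction              # PUE = total power / IT power

# The 30-40% cooling overhead cited above implies roughly:
print(round(pue_from_cooling_fraction(0.30), 2))  # ~1.43
print(round(pue_from_cooling_fraction(0.40), 2))  # ~1.67
```

Even the lower figure sits well above the ideal PUE of 1.0, which is why thermodynamics, not operational slack, sets the floor on this overhead.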
The physics of heat removal is governed by heat transfer principles that have not changed since the nineteenth century: heat flows from hot to cold at rates proportional to temperature difference and surface area. Data centers must maintain server inlet temperatures below roughly 27°C (80°F) to ensure reliable operation; ambient air temperatures in many data center locations regularly exceed this threshold, and even when outside air is relatively cool, the narrow margin between outdoor temperature and the target inlet temperature still demands active cooling. Evaporative cooling achieves its efficiency by exploiting water's high latent heat of vaporization—roughly 2.26 megajoules per kilogram. Evaporating one liter of water removes about 2,260 kilojoules of heat from the facility; this is far more efficient than moving the same heat through air cooling, which must rely on sensible heat transfer (raising air temperature) rather than phase change. The efficiency advantage explains why evaporative cooling remains dominant despite water costs and environmental concerns.
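The latent-heat arithmetic above can be sketched directly. The calculation below is an idealized bound that assumes evaporation alone carries all of the heat (real systems also reject some heat sensibly); the 100 MW example load is an assumption for illustration:

```python
LATENT_HEAT_MJ_PER_KG = 2.26  # water's latent heat of vaporization, MJ/kg

def heat_removed_mj(liters_evaporated: float) -> float:
    """Heat (MJ) carried away by evaporating the given volume of water.
    Assumes 1 liter of water has a mass of ~1 kg."""
    return liters_evaporated * LATENT_HEAT_MJ_PER_KG

def water_needed_liters(it_load_mw: float, hours: float) -> float:
    """Liters that must evaporate to remove the heat of an IT load
    (MW) over a period (hours), if evaporation carries all the heat."""
    heat_mj = it_load_mw * hours * 3600  # 1 MW sustained = 1 MJ/s
    return heat_mj / LATENT_HEAT_MJ_PER_KG

# A hypothetical 100 MW IT load running for 24 hours:
liters = water_needed_liters(100, 24)   # ~3.82 million liters
gallons = liters / 3.785                # ~1.0 million US gallons
```

The result—on the order of a million gallons per day for a 100 MW load—is consistent with the one-to-five-million-gallon daily range cited for large campuses.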
Water consumption scales with computational load and local climate conditions. Humid climates reduce evaporation rates but also reduce cooling effectiveness; dry climates maximize evaporation but intensify water resource conflicts. A facility in Phoenix, Arizona—hot and dry—evaporates more water per megawatt of IT load than an identical facility in Oregon—cool and humid. The geographic variation means that water consumption cannot be specified from computational demand alone; it requires site-specific thermodynamic calculations incorporating local temperature, humidity, and cooling system design. Microsoft's 34% year-over-year water consumption increase (2022-2023) reflects both AI workload growth and geographic expansion into regions where cooling requires more water per megawatt. The aggregate figure—1.7 billion gallons annually—understates local impacts where multiple facilities concentrate in a single watershed.
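The climate dependence can be made concrete with the WUE metric (water usage effectiveness: liters of water consumed per kWh of IT energy), which folds the site-specific thermodynamics into a single coefficient. The WUE values below are illustrative assumptions, not measured figures for any actual facility:

```python
def daily_water_gallons(it_load_mw: float, wue_l_per_kwh: float) -> float:
    """Estimated daily water consumption (US gallons) for a given IT
    load, using a site-specific WUE coefficient (liters per kWh)."""
    it_kwh_per_day = it_load_mw * 1000 * 24  # MW -> kWh over 24 hours
    liters = it_kwh_per_day * wue_l_per_kwh
    return liters / 3.785                    # liters -> US gallons

# Hypothetical 100 MW facilities with assumed climate-dependent WUEs:
hot_dry = daily_water_gallons(100, wue_l_per_kwh=2.0)    # Phoenix-like
cool_humid = daily_water_gallons(100, wue_l_per_kwh=0.5) # Oregon-like
```

Under these assumed coefficients, the identical IT load consumes roughly four times as much water in the hot, dry site—the point being that aggregate consumption cannot be inferred from computational demand alone.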
Alternative cooling technologies are advancing but face adoption barriers. Liquid immersion cooling demonstrates superior heat transfer—servers can be packed more densely, reducing facility footprint and improving energy efficiency by eliminating fans and air handling. But immersion requires servers designed for the liquid environment; retrofitting existing hardware is impractical, meaning adoption must occur at hardware refresh cycles or new facility construction. The technology exists at pilot and small-deployment scale; achieving hyperscale deployment requires supply chain development for dielectric fluids, training for maintenance personnel unfamiliar with liquid-cooled systems, and confidence that long-term reliability matches air-cooled configurations. The transition is underway but measured in years, not quarters.
The cooling infrastructure's institutional dimension is as important as its technical dimension. Data center operators must negotiate water rights, discharge permits, and environmental reviews. In water-stressed regions, securing water supply for a multi-million-gallon-per-day facility requires demonstrating that the use serves public interest and does not irreparably harm other users or ecosystems. The negotiation process adds months to years to site development timelines. The Dalles, Oregon case (2022)—where Google faced community opposition over water consumption—demonstrates that water availability is not purely a hydrological question but a political one, requiring public deliberation that cannot be short-circuited by corporate capital or technical sophistication. The infrastructure's physical and social dimensions interlock: you cannot build the cooling system without water rights, and you cannot secure water rights without community acceptance, which requires time, transparency, and credible evidence that the facility's economic benefits justify its resource costs.
Data center cooling evolved from small server room air conditioning (1990s) through in-row cooling (2000s) to the industrial-scale evaporative systems that characterize hyperscale facilities (2010s-present). The transition reflects exponentially growing computational density: a modern server rack dissipates 10-40 kilowatts, compared to 2-5 kilowatts for racks in the early 2000s. The heat density exceeds what air cooling alone can manage economically, driving the shift to water-based systems despite their resource intensity.
Smil's analysis draws on his broader work on water resources (Harvesting the Biosphere, Global Catastrophes and Trends) and industrial energy systems. His specific warnings about data center water consumption appear in the Bankinter presentation and in How the World Really Works, where he notes that thermodynamics does not negotiate—the heat must go somewhere, and removing it costs energy and water in quantities determined by physics, not by corporate sustainability commitments. The cooling infrastructure section of Vaclav Smil—On AI applies his career-long insistence that resource constraints are physical before they are economic or political.
Thirty-to-forty percent overhead. Cooling consumes 30-40% of data center electricity, nearly matching computational load; PUE improvements reduce but cannot eliminate this overhead because thermodynamics requires heat removal.
Evaporative efficiency tradeoff. Water-based cooling is thermodynamically superior to air cooling but consumes millions of gallons daily per large facility—creating a resource choice between energy efficiency and water conservation with no perfect solution.
Climate-dependent consumption. Water evaporation rates vary with local temperature and humidity; identical facilities in different climates have different water footprints, complicating aggregate consumption estimates and making site selection a thermodynamic optimization problem.
Immersion cooling transition. Liquid cooling technology offers efficiency gains but requires hardware redesign, supply chain development, and workforce training—adoption timeline measured in years, limiting near-term impact on aggregate water consumption.
Political hydrology. Securing water rights for multi-million-gallon facilities requires public deliberation and environmental review; community opposition can delay or block construction regardless of technical feasibility or economic benefit, making the political timeline part of the physical constraint.