Taiwan's data center industry is facing unprecedented regulatory pressure on energy efficiency. With the Ministry of Economic Affairs officially incorporating PUE (Power Usage Effectiveness) into mandatory regulations, data centers with contracted capacity of 5 MW or above must achieve clearly defined PUE thresholds within specified deadlines[1]. For HVAC engineers, this is not merely a numerical target but a comprehensive engineering transformation spanning design philosophy, equipment selection, airflow management, and control strategies. As the largest non-IT contributor to PUE, the cooling system is the decisive battleground for achieving compliance.

1. Background of Taiwan's Data Center PUE Regulations

Global data center electricity consumption already accounts for 2-3% of total worldwide electricity usage, and with the explosion of AI computing demand, it is expected to double within the next five years[2]. Facing enormous energy consumption and carbon emission pressures, Taiwan's Bureau of Energy under the Ministry of Economic Affairs has established explicit PUE management regulations for large-scale data centers, elevating the energy efficiency metric from industry self-regulation to a legal obligation.

Under the current regulations, data centers are subject to different PUE threshold requirements based on their scale and type:

  • Hyperscale data centers: Self-built and self-operated facilities with contracted capacity of 5 MW or above must achieve an annual average PUE of 1.3 or below
  • Colocation data centers: Multi-tenant colocation facilities with contracted capacity of 5 MW or above must achieve an annual average PUE of 1.4 or below
  • Existing facilities: Data centers already in operation must submit improvement plans and progressively approach target values year by year

The PUE threshold for colocation data centers is slightly more relaxed than for hyperscale facilities because in multi-tenant environments, IT load variability is greater, cooling efficiency is harder to optimize across the entire facility, and operators have limited control over tenant equipment configurations[3]. This differentiated standard reflects the regulators' pragmatic consideration of operational challenges in practice.

2. PUE Definition and Calculation Methods

PUE was introduced by The Green Grid in 2007 and is the most widely adopted data center energy efficiency metric globally[4]. Its definition is intuitive and straightforward:

PUE = Total Facility Power / IT Equipment Power

Total Facility Power encompasses all electricity consumption within the data center boundary, including IT equipment, cooling systems, UPS and power distribution losses, lighting, security, and other auxiliary systems. IT Equipment Power accounts only for electricity directly used by servers, storage devices, networking equipment, and other computing and data processing equipment.

Cooling System's Share in PUE

In a typical data center energy consumption structure, the cooling system is the single largest auxiliary energy consumer, typically accounting for the majority (roughly 60-80%) of non-IT energy consumption[5]. Taking a traditional data center with PUE 1.5 as an example: if the IT load is 10 MW, total facility power is 15 MW, of which 5 MW is non-IT consumption. The cooling system accounts for approximately 3-4 MW of that, including chillers, cooling towers, pumps, precision air conditioning fans, and more. Reducing PUE from 1.5 to 1.3 means non-IT consumption must be compressed from 5 MW to 3 MW -- cooling system energy savings are the biggest lever for achieving this goal.
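
As a quick sanity check on those figures, here is a minimal Python sketch of the 10 MW example; the 3.5 MW cooling figure is an assumed value within the 3-4 MW range quoted above:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return total_facility_kw / it_kw

it_kw = 10_000        # 10 MW IT load (example above)
total_kw = 15_000     # PUE 1.5 facility
overhead_kw = total_kw - it_kw   # 5 MW of non-IT power
cooling_kw = 3_500               # assumed value within the 3-4 MW range

print(f"PUE: {pue(total_kw, it_kw):.2f}")                              # 1.50
print(f"Cooling share of overhead: {cooling_kw / overhead_kw:.0%}")    # 70%
```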

Standardization of Measurement Boundaries and Calculation Periods

While PUE calculation appears simple, the definition of measurement boundaries directly affects the accuracy and comparability of the values. The Green Grid has defined three measurement levels[4]:

  • Level 1 (Basic): Uses UPS output as the IT load measurement point, with an annual measurement cycle
  • Level 2 (Intermediate): Measures at the IT equipment power distribution unit (PDU) output, with a monthly measurement cycle
  • Level 3 (Advanced): Measures at individual IT equipment power input, with real-time continuous measurement

Taiwan's regulations require Level 2 or higher measurement precision, using the weighted average over a full 365-day year as the reporting basis. This means data centers cannot rely solely on excellent PUE during winter low-temperature months to lower the annual average -- cooling efficiency during summer high-temperature months is equally critical.
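
Because the annual figure is an energy-weighted average rather than a simple mean of monthly PUE values, the summer months count in proportion to the energy they consume. A short sketch with purely illustrative monthly data shows the difference:

```python
# Hypothetical monthly data: IT energy (MWh) and measured PUE for representative months.
months = {
    "Jan": (5_800, 1.24), "Apr": (6_500, 1.28),
    "Jul": (8_200, 1.40), "Oct": (7_000, 1.31),
}

it_total = sum(it for it, _ in months.values())
facility_total = sum(it * p for it, p in months.values())

annual_pue = facility_total / it_total                          # energy-weighted, as reported
naive_mean = sum(p for _, p in months.values()) / len(months)   # not what the regulation asks for

print(f"Energy-weighted annual PUE: {annual_pue:.3f}")
print(f"Naive mean of monthly PUEs: {naive_mean:.3f}")
```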

Analysis of the Seven Key Audit Items

To ensure the completeness and accuracy of PUE reporting data, the regulations simultaneously require data centers to conduct regular audits on the following seven items:

  1. Measurement equipment calibration records: Power measurement equipment must be calibrated annually, ensuring errors remain within ±2%
  2. Sub-metered energy consumption: Each subsystem including cooling, power distribution, and lighting must be independently metered -- estimation through total meter subtraction is not permitted
  3. IT load ratio records: Actual IT equipment load ratios must be recorded to prevent PUE figures from being inflated by no-load or low-load conditions
  4. Cooling system efficiency records: Chiller COP, cooling tower efficiency, pump energy consumption, and related data must be recorded monthly
  5. UPS efficiency and power distribution losses: UPS operating efficiency and transformer losses must be included in calculations
  6. Free cooling hours records: Facilities utilizing waterside or airside economizers must record actual operating hours and energy savings
  7. Anomalous event exclusion explanations: Data exclusions during power outages, equipment failures, and other non-standard operating periods must be supported by reasonable justification and documentation

3. Cooling System Impact Analysis on PUE

To systematically reduce PUE, one must first break down the cooling system's energy consumption structure. The cooling component of PUE, commonly expressed as the Cooling Load Factor (CLF), is defined as:

CLF = Cooling System Power / IT Equipment Power

For a traditional air-cooled data center, CLF typically ranges from 0.3 to 0.5. To achieve an overall PUE of 1.3, with power distribution losses (UPS, transformers, etc.) accounting for approximately 0.10-0.15, CLF must be compressed to below 0.15-0.20 -- this places extremely high demands on cooling system design.
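
A back-of-envelope sketch of that budget split; the 0.12 distribution-loss factor and 0.01 auxiliary factor are assumptions within the ranges quoted above:

```python
def required_clf(target_pue: float, power_loss_factor: float, aux_factor: float = 0.01) -> float:
    """PUE ≈ 1 + CLF + power distribution losses + other overhead (lighting, security, ...)."""
    return target_pue - 1.0 - power_loss_factor - aux_factor

# Assumed: 0.12 of UPS/transformer losses, 0.01 of lighting and auxiliaries.
print(f"Maximum allowable CLF: {required_clf(1.3, 0.12):.2f}")   # 0.17
```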

Relationship Between Chiller COP and PUE

The chiller is the single largest energy-consuming piece of equipment in the cooling system, with its energy efficiency expressed as COP (Coefficient of Performance) or kW/RT. Taking a data center with a 10 MW IT load as an example, and assuming the chillers remove the full IT heat load[6]:

  • Traditional screw chiller (COP 4.5): chiller power consumption is approximately 2,220 kW, contributing about 0.22 to PUE
  • High-efficiency centrifugal chiller (COP 6.5): chiller power consumption drops to approximately 1,540 kW, reducing the PUE contribution to about 0.15
  • Magnetic-bearing centrifugal chiller (COP 8.0-10.0): chiller power consumption can be reduced to 1,000-1,250 kW, with a PUE contribution of only 0.10-0.13

Chiller efficiency alone can therefore swing PUE by roughly 0.1. When targeting PUE 1.3, every 0.01 improvement is precious; the sketch below shows the underlying arithmetic.
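
A minimal Python sketch of that relationship, using the example COP values and the same full-heat-load assumption:

```python
def chiller_pue_contribution(it_load_kw: float, cop: float) -> tuple[float, float]:
    """Return (chiller power in kW, its contribution to PUE), assuming the
    chillers remove the full IT heat load."""
    chiller_kw = it_load_kw / cop
    return chiller_kw, chiller_kw / it_load_kw

for label, cop in [("Screw, COP 4.5", 4.5),
                   ("Centrifugal, COP 6.5", 6.5),
                   ("Magnetic-bearing, COP 9.0", 9.0)]:
    kw, d_pue = chiller_pue_contribution(10_000, cop)
    print(f"{label:<24} {kw:6.0f} kW   +{d_pue:.3f} PUE")
```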

Fan Energy Consumption: Precision AC vs. In-Row Cooling vs. Rear Door Cooling

Fan energy consumption at the air conditioning terminal is a frequently underestimated PUE factor. Fan power in traditional raised-floor precision air conditioning (CRAC/CRAH) accounts for 30-40% of total unit power, and due to long air delivery paths and high duct resistance, fan efficiency tends to be low. In-row cooling shortens the air delivery distance, reducing fan energy consumption by 20-30%. Rear Door Heat Exchangers push the cooling point further to the rack exhaust face, virtually eliminating air delivery path losses[5].

Pump Energy Consumption: Constant Flow vs. Variable Flow Systems

Chilled water and condenser water pump energy consumption accounts for approximately 10-15% of cooling system energy. Traditional constant flow systems operate at rated flow regardless of load level, causing significant energy waste during partial loads. Variable flow systems paired with Variable Frequency Drives (VFDs) can dynamically adjust flow based on actual cooling demand. According to pump affinity laws, when flow is reduced to 80%, pump power is only 51% of rated; when flow drops to 60%, power decreases to just 22%[7]. For data centers with an average annual load ratio of 60-70%, variable flow systems can save 40-60% of pump energy consumption.
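
Those figures follow directly from the cube law; a short sketch, with an assumed 75 kW rated pump for illustration:

```python
def pump_power_fraction(flow_fraction: float) -> float:
    """Ideal affinity law: pump power scales with the cube of flow (fixed system curve)."""
    return flow_fraction ** 3

rated_kw = 75.0   # assumed rated pump power, for illustration only
for flow in (1.0, 0.8, 0.6):
    frac = pump_power_fraction(flow)
    print(f"{flow:.0%} flow -> {frac:.0%} power ({rated_kw * frac:.1f} kW)")
# 100% -> 100% (75.0 kW), 80% -> 51% (38.4 kW), 60% -> 22% (16.2 kW)
```

Real systems deviate from the ideal cube law because of static head and VFD losses, so measured savings are typically somewhat lower than these theoretical values.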

4. Cooling Engineering Strategies for Achieving PUE 1.3

From a practical engineering perspective, PUE 1.3 is not achievable through any single technology but rather through the systematic layering of multiple strategies. The following analyzes key engineering strategies in order of impact magnitude and implementation priority.

Raising Chilled Water Supply Temperature

Traditional data centers set chilled water supply temperature at 7°C, a practice inherited from comfort air conditioning design conventions. However, ASHRAE TC 9.9 guidelines allow IT equipment inlet air temperatures up to 27°C (upper limit of the A1 class recommended range)[8], meaning there is enormous room for raising chilled water supply temperature:

  • 7°C → 12°C: Chiller COP improves by approximately 15-20%, while also increasing available hours for waterside economizer operation
  • 12°C → 18°C: COP improves by an additional 20-30%, and in Taiwan, waterside economizer hours can expand from less than 500 hours to over 1,500 hours
  • 18°C → 24°C (warm water cooling): Feasible with direct liquid cooling architectures, free cooling hours can exceed 4,000 hours

For every 1°C increase in supply water temperature, chiller COP can improve by approximately 2-3%. This is the most cost-effective and directly beneficial PUE improvement measure.
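
Treating that 2-3% per °C rule of thumb as a compounding factor gives a quick estimate of the COP gain from a supply-temperature reset; the 2.5%/°C value is an assumption, and real selections should rely on manufacturer performance curves:

```python
def cop_after_reset(base_cop: float, delta_t_c: float, gain_per_deg: float = 0.025) -> float:
    """Compound an assumed 2.5%/°C COP gain over a chilled-water supply temperature reset."""
    return base_cop * (1.0 + gain_per_deg) ** delta_t_c

print(f"7 °C -> 12 °C: COP {cop_after_reset(6.5, 5):.2f}")    # ~7.4
print(f"7 °C -> 18 °C: COP {cop_after_reset(6.5, 11):.2f}")   # ~8.5
```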

Waterside Economizer Application in Taiwan

Waterside economizers utilize cooling towers to produce sufficiently cold cooling water during periods of low outdoor wet-bulb temperature, cooling the chilled water through plate heat exchangers and bypassing chiller operation. In the Kaohsiung area of Taiwan, the annual average wet-bulb temperature is approximately 23-24°C. If the chilled water supply temperature is set at 7°C, waterside economizer operation has virtually zero available hours. However, raising the supply temperature to 18°C significantly increases available hours. Combined with partial free cooling mode (cooling tower and chiller operating in tandem), the annual energy savings can reach 10-15% of cooling energy consumption.
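
A rough screening check for waterside free-cooling availability at a given hour; the 4 °C cooling tower approach and 1.5 °C plate heat exchanger approach are assumptions for illustration, and a real assessment should run this against hourly weather data for the site:

```python
def free_cooling_available(wet_bulb_c: float, chws_setpoint_c: float,
                           tower_approach_c: float = 4.0, hx_approach_c: float = 1.5) -> bool:
    """Full waterside free cooling is feasible when the cooling tower can produce
    water cold enough to meet the chilled-water setpoint across the plate heat exchanger."""
    return wet_bulb_c + tower_approach_c + hx_approach_c <= chws_setpoint_c

# With a 12 °C setpoint, the wet-bulb must fall to 6.5 °C or below -- rare in Kaohsiung.
print(free_cooling_available(wet_bulb_c=14.0, chws_setpoint_c=12.0))   # False
# With an 18 °C setpoint, a 12.5 °C wet-bulb or below is enough -- common in winter.
print(free_cooling_available(wet_bulb_c=12.0, chws_setpoint_c=18.0))   # True
```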

Limitations of Airside Economizers and Indirect Evaporative Cooling

Airside economizers directly introduce outdoor air to cool the data hall. In dry, cool climates (such as the Pacific Northwest of the United States), this approach can reduce PUE to below 1.10. However, Taiwan's hot and humid environment severely limits airside economizer application: high humidity can cause server condensation, and salt-laden coastal air accelerates equipment corrosion.

Indirect Evaporative Cooling (IEC) is a compromise approach -- it uses evaporative cooling principles to lower supply air temperature, but isolates outdoor air from the data hall air through heat exchangers, preventing moisture and contaminant intrusion. In central and southern Taiwan, IEC can provide partial cooling capability during transitional seasons, but the high wet-bulb temperatures in summer still limit its effectiveness.

High-Efficiency Centrifugal and Magnetic-Bearing Chiller Selection

The chiller is the core equipment of the cooling system, and its selection directly determines the PUE baseline. For data centers targeting PUE 1.3, the following types should be prioritized[6]:

  • Magnetic-bearing centrifugal chillers: Oil-free bearing design with excellent part-load efficiency, IPLV can reach 0.30-0.35 kW/RT (COP 10-12), suitable for data centers with high load variability
  • Variable-speed centrifugal chillers: Compressor speed regulation via VFD, full-load COP can reach 6.5-7.5, with IPLV performance superior to fixed-speed models
  • Multi-chiller staged configuration: Large and small chiller combinations or N+1 redundancy configurations ensure that chillers operate within high-efficiency zones across all load conditions


Hot Aisle and Cold Aisle Containment Strategies

Airflow management quality directly determines the upper limit of cooling efficiency. The mixing of hot and cold air (bypass and recirculation) is one of the biggest sources of energy waste in traditional data halls[5]. Hot Aisle Containment (HAC) or Cold Aisle Containment (CAC) can eliminate over 80% of hot-cold air mixing, improving cooling efficiency by 20-30%. For facilities pursuing PUE 1.3, aisle containment is not optional but a fundamental requirement. HAC is generally favored in practice because it concentrates high-temperature return air and ducts it to the air conditioning return plenum, while keeping the rest of the data hall at a more comfortable temperature.

5. Breakthrough PUE Improvements Through Liquid Cooling Technology

When air cooling system optimizations have approached their physical limits, liquid cooling technology delivers a qualitative leap in PUE. The specific heat capacity and thermal conductivity of liquids far exceed those of air, enabling more heat removal with less energy consumption[9].

Direct Liquid Cooling (DLC) Can Achieve PUE 1.05-1.10

Direct Liquid Cooling (DLC) uses cold plates affixed to chip surfaces, with coolant temperatures typically set at 30-45°C. Since the coolant temperature is much higher than outdoor air temperature, year-round free cooling can be achieved under most climate conditions (using only cooling towers or dry coolers for heat rejection), completely bypassing chillers. DLC can remove 70-80% of IT equipment heat, with the remaining 20-30% (heat generated by memory, hard drives, network equipment, etc.) still requiring a small amount of supplementary air cooling. Under this architecture, PUE can be reduced to the 1.05-1.10 range.

Immersion Cooling Virtually Eliminates Cooling Energy Consumption

Immersion cooling submerges entire servers in a non-conductive dielectric coolant, achieving 100% heat removal by liquid. Since fans are entirely unnecessary (both server fans and air conditioning fans can be removed), fan-related energy consumption drops to zero. Combined with year-round free cooling using high-temperature coolant, the theoretical PUE for immersion cooling can approach 1.02-1.04[9]. However, adoption barriers are higher, including equipment warranty conditions, changed maintenance procedures, coolant costs, and structural load-bearing considerations.

PUE Calculation Methods for Hybrid Architectures

In practice, most data centers will adopt hybrid architectures where liquid and air cooling coexist -- high-density GPU cabinets use liquid cooling while general servers and network equipment maintain air cooling. PUE calculation for hybrid architectures must combine the energy consumption of both cooling systems in the numerator:

PUE = (IT_liquid + IT_air + Cooling_liquid + Cooling_air + Other) / (IT_liquid + IT_air)

The higher the proportion of IT load in the liquid-cooled zone, the more its low cooling energy consumption can pull down the overall PUE. During the planning phase, the capacity ratio between liquid cooling and air cooling should be clearly defined, and the target hybrid PUE should be calculated accordingly.
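
A sketch of that hybrid calculation; the loads and per-zone cooling factors below are illustrative assumptions, not measured figures:

```python
def hybrid_pue(it_liquid_kw: float, it_air_kw: float,
               clf_liquid: float, clf_air: float, other_kw: float) -> float:
    """PUE = (IT_liquid + IT_air + Cooling_liquid + Cooling_air + Other) / (IT_liquid + IT_air)."""
    it_total = it_liquid_kw + it_air_kw
    cooling_kw = it_liquid_kw * clf_liquid + it_air_kw * clf_air
    return (it_total + cooling_kw + other_kw) / it_total

# 6 MW of liquid-cooled GPU racks (CLF 0.05), 4 MW of air-cooled IT (CLF 0.25),
# plus 1 MW of power distribution losses, lighting and auxiliaries.
print(f"Hybrid PUE: {hybrid_pue(6_000, 4_000, 0.05, 0.25, 1_000):.2f}")   # 1.23
```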

6. Continuous PUE Optimization During Operations

There is often a gap of 0.1-0.2 between the design-phase PUE target and the actual operational PUE[10]. Continuous optimization during operations is the last line of defense for ensuring regulatory compliance.

Real-Time PUE Monitoring and Baseline Establishment

Establishing a comprehensive sub-metering system is the foundation of operational optimization. Real-time energy consumption data from every chiller, every pump set, and every row of precision air conditioning must be aggregated into the BMS (Building Management System) or DCIM (Data Center Infrastructure Management) platform. Through continuous data collection, baseline PUE curves under different outdoor conditions and IT load ratios can be established, enabling identification of anomalies and quantification of improvement opportunities.
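
One simple way to turn that sub-metered data into a baseline is to regress measured PUE against outdoor wet-bulb temperature and flag readings that sit well above the fitted line. A sketch with synthetic readings (a real deployment would pull these from the BMS/DCIM historian):

```python
import numpy as np

# Synthetic hourly samples: outdoor wet-bulb temperature (°C) and measured PUE.
wet_bulb = np.array([10, 14, 18, 22, 24, 26, 27, 28], dtype=float)
pue      = np.array([1.22, 1.24, 1.27, 1.30, 1.32, 1.35, 1.36, 1.45])

# Linear baseline: expected PUE as a function of wet-bulb temperature.
slope, intercept = np.polyfit(wet_bulb, pue, deg=1)
expected = slope * wet_bulb + intercept

# Flag hours more than 0.03 PUE above the baseline as candidates for investigation.
anomalies = wet_bulb[pue - expected > 0.03]
print(f"Baseline: PUE ≈ {intercept:.2f} + {slope:.4f} × wet-bulb")
print(f"Wet-bulb conditions of flagged hours: {anomalies}")
```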

Seasonal Adjustment Strategies

Taiwan's climate characteristics require cooling systems to have seasonal adjustment capabilities. During summer (June to September), outdoor wet-bulb temperatures frequently exceed 27°C, and chillers must bear the full cooling load. During transitional seasons (March to May, October to November), partial free cooling can be activated. In winter (December to February), a higher proportion of free cooling can be achieved in northern Taiwan. Operations teams should develop differentiated cooling strategies based on seasons, including chilled water supply temperature setpoints, cooling tower fan speeds, and chiller staging logic.

Proven PUE Improvements Through AI-Optimized Controls

Google pioneered the application of DeepMind's machine learning models to data center cooling control in 2016, reporting a 40% reduction in cooling energy consumption, equivalent to roughly a 15% reduction in PUE overhead. Since then, AI-based cooling optimization has become an industry trend. Core capabilities of AI control systems include: predicting cooling demand hours ahead based on weather forecasts, dynamically adjusting chilled water temperature setpoints, optimizing load distribution across multiple chillers, and identifying equipment efficiency degradation to trigger preventive maintenance. For facilities targeting PUE 1.3, AI-optimized controls can contribute an additional 0.05-0.10 PUE improvement beyond conventional control logic.

Conclusion

The mandatory enforcement of PUE regulations marks a pivotal transition for Taiwan's data center industry from "just build it" to "it must be efficient." For HVAC engineering professionals, this represents both a challenge and an opportunity. The strategy for achieving PUE 1.3 does not rely on any single breakthrough technology but rather on a systematic engineering approach encompassing chilled water temperature elevation, chiller efficiency optimization, refined airflow management, maximized free cooling, liquid cooling technology adoption, and AI-driven control optimization. Every 0.01 improvement in PUE stems from meticulous attention to detail and a deep understanding of thermodynamic fundamentals. Under the dual pressures of regulatory compliance and sustainable development, the professional value of cooling system design is being redefined.