One approach is to adopt a "fail in place" model where technicians never go into a production facility, but even Google has technicians adding and replacing individual servers in their containerized data centers.
Other approaches to consider:
- Localized spot cooling. A very small air conditioner could take the edge off the area in front of a rack.
- Perform service operations at night or when it's reasonably cool.
This last suggestion may seem too simplistic at first, but it's actually quite practical. In a facility with sufficient redundancy to ensure high availability, a server replacement should be able to wait up to 24 hours. Keep in mind that operating a data center at consistently high temperatures increases power consumption in the IT equipment itself, since server fans spin faster as inlet temperatures rise. Running at higher temperatures only makes sense when economizers are used to eliminate or substantially reduce HVAC CapEx and OpEx.
If a data center uses economizers, the temperature inside should drop when the outside temperature drops. Even in relatively warm regions during the summer months, there are substantial periods each day when the temperature falls to levels at which technicians can work comfortably.
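As a rough illustration of scheduling service work around cooler hours, the sketch below scans a day of hourly outside-temperature readings and reports the spans that stay under a comfort threshold. The threshold and the sample temperatures are illustrative assumptions, not measured values from any real facility.

```python
# Sketch: find candidate maintenance windows from hourly outside-temperature
# readings. COMFORT_MAX_C is an assumed comfort ceiling, not a standard.

COMFORT_MAX_C = 27  # assumed maximum comfortable working temperature (deg C)

def maintenance_windows(hourly_temps_c, limit_c=COMFORT_MAX_C):
    """Return (start_hour, end_hour_exclusive) spans where temps stay below limit_c."""
    windows = []
    start = None
    for hour, temp in enumerate(hourly_temps_c):
        if temp < limit_c:
            if start is None:
                start = hour  # a cool span begins
        elif start is not None:
            windows.append((start, hour))  # cool span just ended
            start = None
    if start is not None:
        windows.append((start, len(hourly_temps_c)))  # span ran to end of day
    return windows

# Hypothetical warm summer day that cools off overnight (hours 0-23).
day = [24, 23, 22, 22, 23, 24, 26, 28, 31, 33, 35, 36,
       37, 37, 36, 35, 33, 31, 29, 27, 26, 25, 25, 24]
print(maintenance_windows(day))  # -> [(0, 7), (20, 24)]
```

Even on this hypothetical 37-degree day, the overnight and late-evening hours offer comfortable windows for deferred server replacement.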