As we move to wide-scale deployment, we believe the UCS X-Series architectural message is resonating extremely well, as deployment feedback has been incredible.
In the first post of this blog series, we discussed how heterogeneous computing is causing a paradigm shift in computing that is shaping the UCS X-Series architecture. In this blog we will discuss the electromechanical aspects of the UCS X-Series architecture.
The form factor stays fixed for the life cycle of the product; therefore, the electromechanical aspects that shape the enclosure design are critical in the design phase.
Electromechanical Anchors
Some of those anchors are:
- Socket density per "RU"
- Memory density
- Backplane-less IO
- Mezzanine options
- Volumetric space for logic
- Power footprint & delivery
- Airflow (measured in CFM) for cooling
Socket and memory density are key when evaluating different vendors' products, and they are generally an indication of how well the platform mechanicals have been designed within a given "RU" envelope. The ratio of volumetric space required for mechanical integrity versus logic is another important criterion. These criteria helped us zero in on 7RU as the chassis height while offering more volumetric space for logic than comparable "RU" designs in the industry.
Earlier generations of compute platforms relied on a backplane for connectivity. UCS X-Series does not use a backplane, but rather direct-connect IO interconnects. As IO technology advances, nodes and the IO interconnect can be upgraded seamlessly, because the elements that need to change are modular and not fixed to a backplane. As IO interconnect speed increases, its reach decreases, making it harder and harder to scale in the electrical domain. UCS X-Series has been designed with a hybrid connector approach that supports electrical IO by default and is ready for optical IO in the future. This optical IO option is optimized for intra-chassis connectivity. Direct-connect IO without a backplane also reduces airflow resistance, helping move heat from inlet to outlet efficiently.
Power Distribution
Rack power density per "RU" is hitting 1kW and will soon go beyond that. The majority of existing server designs use the well-established 12V distribution to simplify down-conversion to CPU voltages. However, as current density increases, 12V distribution adds connector cost, PCB layer count, and routing challenges. UCS X-Series, seeing the needs of next-generation server power requirements, chose a higher distribution voltage of 54V instead of 12V. The higher distribution voltage reduces current density by 4.5 times and ohmic losses by 20 times compared to 12V. Moving from a 12V to a 54V DC output simplifies the primary PSU design and makes onboard power distribution more efficient.
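Those 4.5x and 20x figures fall directly out of Ohm's law: for the same delivered power, current scales inversely with bus voltage, and conduction (I²R) loss scales with the square of the current. A quick back-of-the-envelope sketch, using the 2.2kW server figure from this post and an assumed path resistance purely for illustration:

```python
# Back-of-the-envelope comparison of 12V vs 54V power distribution.
# Illustrative only: the path resistance is an assumed value, not a UCS spec.
power_w = 2200.0     # power delivered to a 2-socket node (figure from this post)
r_path_ohm = 0.001   # assumed distribution-path resistance (hypothetical)

for v_bus in (12.0, 54.0):
    current_a = power_w / v_bus            # I = P / V
    loss_w = current_a ** 2 * r_path_ohm   # ohmic loss = I^2 * R
    print(f"{v_bus:4.0f} V bus: {current_a:6.1f} A, {loss_w:5.1f} W conduction loss")

# 54/12 = 4.5x less current; (54/12)^2 = ~20x less ohmic loss in the same copper.
```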
Server Power Consumption
We are seeing CPU TDP (Thermal Design Power) rising by 75-100W on a roughly two-year cadence. Compute nodes will soon see 350W per socket, and they must be ready for 500W+ by 2024. A 2-socket server with GPU and storage requires close to 2.2kW of power, not accounting for any distribution losses. To cool this 2-socket server, fan modules alone will consume around 240W, or 11% of total power. Factoring in distribution efficiencies at each intermediate stage of conversion from the AC input, we are looking at around a 2.4kW power draw. So, in a rack with 20 x 2RU servers, fan power alone will consume 4800W! A modular blade platform like UCS X-Series, with its centralized cooling and bigger fans, offers much higher CFM at a lower power consumption. Nonetheless, fan power consumption is definitely becoming a substantial portion of the total power budget.
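To make that arithmetic explicit, here is a small sketch reproducing the rack-level numbers; the per-server figures are the ones quoted above, while the conversion efficiency is an assumption chosen only to show how 2.2kW at the node becomes roughly 2.4kW at the wall:

```python
# Reproducing the rack-level fan-power arithmetic quoted above.
servers_per_rack = 20      # 20 x 2RU servers per rack (from this post)
server_power_w = 2200.0    # 2-socket server with GPU and storage (from this post)
fan_power_w = 240.0        # fan modules per server (from this post)
conversion_eff = 0.92      # assumed AC-to-DC conversion efficiency (hypothetical)

print(f"Fan share of server power:  {fan_power_w / server_power_w:.0%}")         # ~11%
print(f"Per-server draw at the wall: {server_power_w / conversion_eff:,.0f} W")  # ~2.4 kW
print(f"Rack fan power:             {servers_per_rack * fan_power_w:,.0f} W")    # 4,800 W
```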
Cooling
Advances in semiconductors and magnetics allow us to deliver more power within the lifetime of a chassis. However, it is difficult to pull off a dramatic increase in airflow (measured as fan CFM), as technological advances there are slow and incremental. Moreover, cost economics dictate the use of passive heat-transfer techniques to cool the CPUs in a server. This makes defining fan CFM requirements for cooling the compute nodes a multi-variable problem.
Unlike a standard rack design, which uses spread-core CPU placement, UCS X-Series uses a shadow-core design principle, complicating cooling even further.
Banks of U.2/E3 storage drives drawing upwards of 25W each, along with accelerators on the front side of the blade, restrict the air reaching the CPUs and pre-heat that air.
The UCS X-Series design approached these challenges holistically. First and foremost is the development of a state-of-the-art fan module delivering class-leading CFM. The other is dynamic power management coupled with a fan-control algorithm that can adapt and evolve as cooling demand grows and ebbs. Compute nodes are designed with high- and low-CFM paths channeling appropriate airflow for cooling. Additionally, power management options give customers configurable knobs to optimize for fan power or for a high-performance mode.
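The internals of that fan-control algorithm are not published here, but conceptually such closed loops often react proportionally to the hottest observed sensor. A purely illustrative sketch, where every name, threshold, and gain is hypothetical rather than the UCS implementation:

```python
# Purely illustrative closed-loop fan control: proportional response to the
# hottest sensor. Names, thresholds, and gain are hypothetical, not UCS code.
def next_fan_duty(temps_c: list[float], duty: float,
                  target_c: float = 75.0, gain: float = 2.0) -> float:
    """Return the next fan duty cycle (percent) from current sensor readings."""
    error = max(temps_c) - target_c     # positive when running hotter than target
    duty += gain * error                # proportional correction toward target
    return max(20.0, min(100.0, duty))  # clamp: keep a 20% floor and 100% ceiling

duty = 35.0
for temps in ([70, 72, 68], [78, 81, 76], [74, 75, 73]):
    duty = next_fan_duty(temps, duty)
    print(f"hottest sensor {max(temps)}C -> fan duty {duty:.0f}%")
```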
Emergence of Alternate Cooling Technologies
Spot cooling of a CPU/GPU at 350W is approaching the limits of air cooling. Doubling airflow results in only about 30% more cooling, but it adds 6-8 times more fan power, a diminishing return.
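That 6-8x figure is consistent with the fan affinity laws, under which airflow scales linearly with fan speed while fan power scales with its cube. A minimal sketch, idealized and ignoring motor and duct losses, with a hypothetical baseline operating point:

```python
# Fan affinity laws (idealized): airflow ~ rpm, power ~ rpm^3.
# Doubling airflow therefore costs ~2^3 = 8x fan power; real fans land near 6-8x.
base_cfm, base_power_w = 100.0, 30.0  # hypothetical baseline fan operating point

for airflow_mult in (1.0, 1.5, 2.0):
    power_w = base_power_w * airflow_mult ** 3
    print(f"{airflow_mult:.1f}x airflow ({base_cfm * airflow_mult:.0f} CFM) "
          f"-> {power_w:.0f} W ({airflow_mult ** 3:.0f}x fan power)")
```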
Data centers are not yet ready for liquid cooling on a wholesale basis. Immersion cooling requires a full overhaul of the rack. Hyperscalers will lead the early adoption cycle, and eventually enterprise customers will get there, but the tipping point from air to liquid cooling is still unknown. Air cooling is not going away, as we still need to cool memory, storage, and other components that are operationally difficult to liquid-cool. We need to collect more data and answer the following critical questions before liquid cooling becomes attractive:
- Do we really need liquid cooling for all racks, or only for the few racks that host high-TDP servers?
- Is liquid cooling more for greenfield deployments, as a way to reduce fan power and acoustics, than for high-TDP CPU/GPU enablement?
- Are there compliance requirements or mandates targeting data center energy reduction by certain dates?
- What does a TCO analysis of fan power savings versus the full cost of a liquid cooling deployment show?
- Are customers OK spending more on fan power than on retrofitting the infrastructure for liquid cooling?
- Will liquid cooling help deploy more high-TDP servers without upgrading power to the rack?
- For example: saving 100W per 1U in fan power translates to 3.6kW (36 x 1U servers) of additional available power.
UCS X-Series, however, does support a hybrid model: a combination of air and liquid cooling for when air cooling alone is not sufficient. Watch for more details on liquid cooling in UCS X-Series in upcoming blogs.
In the next blog, we will elaborate on the trends that drove the UCS X-Series internal architecture.
Resources
UCS X-Series – The Future of Computing Blog Series – Part 1 of 3