For more than ten years, we have been a leader in designing, building, and operating data centers.

When we started building our first data center, we faced many challenges: How do we optimize the energy footprint of such a facility? How can we prevent power loss? How can we deliver proper cooling? How can we make it modular? During the first years of operation, these questions often kept us awake at night. Nevertheless, through trial and error, hard work, and perseverance, we gained the expertise to make our data centers efficient and scalable. Today, our data centers are business-proven facilities managed by professional, dedicated teams. We are now one of the largest data center operators in France, with more than 42,000 square meters and 31 MW in production.

Our brand-new facility, DC5, is one of the most significant data center projects in France. Offering more than 20 MW of IT power, DC5 is composed of 12 private suites, each providing between 900 and 1,800 kW of IT power, on a total surface area of 16,000 square meters. We designed this facility for massively scalable cloud computing and big data infrastructures.

In this blog post, we detail how we transformed a mail sorting building into an unprecedented hyper-scale data center. The post is composed of three sections:

  1. A Strategic Building Location
  2. An Efficient Cooling System
  3. Ultra-High Density by Design

A Strategic Building Location

Located in the northwest of Paris, the site was originally a mail sorting office, in operation until 2013. The building has many strategic advantages that reinforced our decision to transform it into a hyper-scale data center.

First of all, the building is located in Saint-Ouen-l'Aumône, in the biggest business park in Europe, close to the backbones of two Tier-1 Internet operators. Moreover, its location benefits from substantial power capacity. Finally, it is more than 50 km away from our other data centers, allowing us to offer, in the mid to long term, a separate availability zone for the Paris region.

In the picture above you can see the geographical locations of each of our data centers, and where DC5 is located compared to DC2, DC3, and DC4.

An Efficient Cooling System

At DC5, we use direct free cooling with evaporative cooling to cool the IT rooms, and direct free cooling with traditional chiller units to cool both the Meet Me rooms and the UPS rooms. There is, therefore, no conventional air conditioning for the IT rooms. This is a challenge in France, which has a temperate climate: unlike Finland, it is not cold enough to use outside air all year round, and unlike Spain, it is not hot enough to rely on hot-cold air exchange every day.

By using direct free cooling with evaporative cooling, the air entering the data center is cooled before reaching the IT rooms. This method offers an efficient alternative to mechanical cooling. At DC5, evaporative cooling is enabled when the outside air temperature exceeds 30°C. With this system, we maintain a constant temperature of 30°C ±1°C in the cold aisles.
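As a rough sketch, the mode selection described above might look like the following (the thresholds come from the figures quoted in this post; the function name and the simple three-way split are our own illustrative assumptions, not the actual PLC logic):

```python
SETPOINT_C = 30.0   # target cold-aisle temperature quoted above
TOLERANCE_C = 1.0   # allowed deviation: +/- 1 degree C

def cooling_mode(outside_temp_c: float) -> str:
    """Choose how to condition incoming outside air (simplified model)."""
    if outside_temp_c > SETPOINT_C:
        # Too hot: wet the adiabatic media so evaporation cools the air.
        return "evaporative"
    if outside_temp_c < SETPOINT_C - TOLERANCE_C:
        # Too cold: blend hot-aisle exhaust into the intake (mixing zone).
        return "mix_hot_aisle_air"
    # Within tolerance: plain direct free cooling is enough.
    return "direct_free_cooling"
```

In practice the real system blends these regimes continuously, but the sketch captures the three operating points the post describes.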

How it works

At DC5, for every two floors, one is an IT room and the other is a plenum dedicated to moving vast amounts of air. The cold air is injected into the cold aisles while the hot air is extracted from the hot aisles.

Cooling, electrical, and network distribution are all supplied from the ceiling, whereas in a traditional design, distribution generally runs under a raised floor.

The air intake grids in the picture above let the air in while keeping out rain, birds, and other adventurous animals.

Just behind the air intake grids, a ventilation grid zone is equipped with programmable logic controllers (PLCs) that regulate grid opening and control the airflow: the cooler the air, the more the grids close, and vice versa.

The PLCs rely on temperature and humidity probes to adjust the grid opening and the use of the adiabatic media. In total, we process over 400 pieces of information in real time, using complex algorithms to adjust and optimize the system.
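The rule "the cooler the air, the more the grid closes" can be sketched as a linear mapping from outside temperature to grid opening. The temperature bounds and the linearity here are our own illustrative assumptions; the real PLC algorithms process hundreds of inputs:

```python
def grid_opening_pct(outside_temp_c: float,
                     min_temp_c: float = -10.0,
                     max_temp_c: float = 30.0) -> float:
    """Hypothetical linear rule: colder air -> smaller grid opening.

    Returns the opening as a percentage, clamped to [0, 100].
    """
    fraction = (outside_temp_c - min_temp_c) / (max_temp_c - min_temp_c)
    return round(100.0 * min(1.0, max(0.0, fraction)), 1)
```

At the warm end the grids are fully open to admit as much air as possible; in freezing weather they close almost entirely and the mixing zone does the rest.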

Once the air is in, the airflow reaches a second series of grids. Their role is to ensure an even distribution of airflow across the whole filtration wall; without them, the airflow would be unequal between the bottom and the top of the wall.
To block airborne particles, we installed F7-grade air filters.

Just behind the air filters, a fan wall pushes the filtered air into the cold aisles from the ceiling.

To cool both the Meet Me rooms and the mechanical areas, we use direct free cooling with traditional chiller units. These chiller units run only when the outside temperature is higher than 20°C. Our design is innovative in that we also use an ice storage unit with a 6 MWh capacity. The ice storage unit offers multiple advantages:

  • It meets cooling demand quickly without starting the chiller units.
  • It avoids short cycling of the chiller units.
  • It covers the cooling demand in case of a chiller unit failure.
  • It is cost-efficient, as the ice is produced at night, when energy is less expensive.
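The failure-cover advantage is easy to quantify from the 6 MWh figure: autonomy is simply stored thermal energy divided by cooling load. The function name and the example load are illustrative assumptions:

```python
ICE_STORAGE_MWH = 6.0  # thermal capacity quoted in this post

def autonomy_hours(cooling_load_mw: float) -> float:
    """How long the ice storage alone can cover a given cooling load."""
    return ICE_STORAGE_MWH / cooling_load_mw

# For example, a hypothetical 2 MW cooling demand could be covered
# for 3 hours with no chiller running at all.
```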

Besides this, DC5's cooling system can be considered innovative because:

  • 100% of the outside air goes through an adiabatic process, in which water evaporation cools and humidifies the incoming air.
  • It recycles air coming from the hot aisles: waste heat is mixed with outside air in the mixing zone to meet the set temperature, warming the air before it enters the IT rooms and maintaining 30°C in the cold aisles.
  • We use ice storage for all the infrastructure that needs mechanical cooling and low ambient temperatures.
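The mixing-zone behavior is, to a first approximation, a mass-weighted blend of the two air streams. This is a simplified model with illustrative names, ignoring humidity and density effects:

```python
def mixed_air_temp_c(outside_temp_c: float,
                     hot_aisle_temp_c: float,
                     recycled_fraction: float) -> float:
    """Temperature of the blend of outside air and recycled hot-aisle air."""
    return (recycled_fraction * hot_aisle_temp_c
            + (1.0 - recycled_fraction) * outside_temp_c)

def recycled_fraction_for(target_c: float,
                          outside_temp_c: float,
                          hot_aisle_temp_c: float) -> float:
    """Fraction of hot-aisle air needed to reach the setpoint."""
    return (target_c - outside_temp_c) / (hot_aisle_temp_c - outside_temp_c)
```

On a hypothetical 10°C day with 50°C hot-aisle exhaust, a 50/50 blend already lands on the 30°C setpoint, which is why no heating plant is needed for the IT rooms in winter.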

Using direct free cooling with evaporative cooling presents many advantages over a traditional cooling system. The design is energy efficient and consumes significantly less water than chillers or cooling towers. In addition, its low complexity minimizes the risk of cooling equipment failure. Finally, the design's simplicity makes the system easier for the on-site team to maintain and operate.

Ultra-High Density by Design

We designed DC5 for ultra-high density and hyper-scale infrastructure. Each rack can draw up to 6 kW, and each room hosts up to 292 racks.
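These two figures line up with the per-suite power range quoted earlier; a quick back-of-the-envelope check:

```python
RACKS_PER_ROOM = 292
KW_PER_RACK = 6

# Maximum IT load of a fully populated room at full rack density.
room_capacity_kw = RACKS_PER_ROOM * KW_PER_RACK  # 1752 kW

# Consistent with the 900-1,800 kW per private suite quoted above.
print(room_capacity_kw)
```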

At DC5, racks are dual-powered via two different paths.
Only one path is protected by offline UPSes and generators; the UPSes run only when grid quality drops below our criteria. The other path is protected by generators only. As all our servers are dual-corded, this has no impact on availability.
This design gives us the right level of redundancy with a far more efficient infrastructure, and close to 100% of installed capacity usable at all times, in contrast to a traditional 2N architecture, where at most 50% of installed capacity is usable by servers.

In case of a power failure, Path A becomes unavailable until the generators kick in, as it is not protected by UPSes, while Path B ensures an uninterrupted power supply until the generators are fully operational. This takes less than 12 seconds. Our generators are rated for continuous running and are synchronized in phase with the grid; in other words, they can run at the same time as the grid. When grid power comes back, the load can be transferred from the generators to the grid without any interruption or power cut on Path A.
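The outage behavior of the two paths can be summed up in a tiny availability model. The function and path labels follow the description above; the boolean model is our own simplification:

```python
GENERATOR_START_S = 12  # worst-case generator start time quoted above

def path_power_ok(path: str, seconds_since_outage: float) -> bool:
    """Availability of each power path during a grid outage (simplified)."""
    if path == "B":
        # Path B is UPS-backed: it bridges the gap until generators run.
        return True
    # Path A has no UPS: it is dark until the generators are up.
    return seconds_since_outage >= GENERATOR_START_S
```

Because every server is dual-corded, the brief loss of Path A never translates into a loss of the server itself.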

Conclusion

The design of DC5 enabled us to build a very efficient, ultra-high density facility. By opting for simpler construction, we improved reliability: fewer components mean fewer parts that can fail.
