
China Hefei Coolnet Power Co., Ltd Company News

2024

11/26

Exciting Visit & Future Collaboration Alert!

Last week, Coolnet had the honor of hosting a delegation from a leading European telecommunications company at our Changsha production base. The visit was a significant step toward a future collaboration focused on building more sustainable and efficient digital infrastructure.

During the visit, the team explored our cutting-edge facilities and gained a deeper understanding of our integrated data center solutions. They were particularly impressed with our innovative approach to Modular Data Centers, which offer scalable, energy-efficient solutions for modern data center requirements. The delegation also learned about our advanced Precision Air Conditioning systems, which are designed to maintain optimal conditions for critical equipment, ensuring both performance and energy efficiency. These systems come in configurations tailored for different environments, from room-level cooling to rack-mounted solutions.

Looking Ahead: Building a Greener Future Together

The discussions were inspiring, with both teams sharing a strong commitment to sustainability in digital infrastructure. Our client praised Coolnet for integrating innovation with eco-friendly practices, particularly our energy-efficient designs and commitment to reducing carbon footprints. We are excited to announce that preliminary collaboration agreements have been reached, laying the groundwork for a greener and more resilient digital future. This partnership is a testament to Coolnet's vision that technology and sustainability must go hand in hand.

A big thank you to the visiting team for their trust and confidence in Coolnet. Together, we're powering progress responsibly.

2025

04/07

Why do we need data center interconnection?

What is a data center?

With the continuing digital transformation of industry, data has become a key production factor. The data center shoulders the responsibility of computing, storing, and forwarding that data, making it the most critical piece of digital infrastructure in the "new infrastructure" landscape. A modern data center comprises the following core components:

● Computing system: general-purpose computing modules that host business applications, and high-performance computing modules that deliver supercomputing power
● Storage system: mass storage modules, data management engines, a dedicated storage network, and so on
● Energy system: power supply modules, temperature control modules, IT management modules, and so on
● Data center network: the fabric linking the general-purpose computing, high-performance computing, and storage modules inside the data center; all data exchange between them travels over this network

Among these, it is the general-purpose computing module that directly carries user business, and its physical foundation is a large fleet of servers. If the servers are the body of the data center, then the data center network is its soul.

Why do we need data center interconnection?

Data center construction is now common across organizations and enterprises, but a single data center can no longer keep pace with the business needs of the new era, so interconnecting multiple data centers has become urgent. This is mainly reflected in the following aspects.

Rapid growth of business scale

Emerging businesses such as cloud computing and AI are developing rapidly, and the applications built on them are multiplying; these applications depend heavily on data centers. The scale of business carried by data centers is therefore growing quickly, and the resources of any single data center will soon be insufficient. Constrained by factors such as land area and energy supply, a single data center cannot expand indefinitely. Once business grows beyond a certain point, multiple data centers must be built in the same city or in different regions, and they must be interconnected so they can cooperate to support the business. Moreover, in the context of economic digital transformation, enterprises within the same industry and across industries need to share data and cooperate frequently, which also requires interconnection between the data centers of different enterprises.

Cross-geographic user access is becoming increasingly common

In recent years, data center workloads have shifted from Web services to cloud services and data services, and the user base of the organizations running them is no longer bounded by geography. With mobile Internet now ubiquitous, users expect high-quality service anytime, anywhere. To meet these demands and further improve user experience, enterprises with the means typically build multiple data centers in different geographic regions so that users can be served from a nearby site. This requires deploying business across data centers, which in turn requires multi-data-center interconnection (see the sketch below).
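As a minimal illustration of serving users from the nearest site, the sketch below picks the closest data center by great-circle distance. The site list, coordinates, and function names are hypothetical examples introduced here for illustration, not Coolnet products or deployments; real systems typically use DNS-based or anycast global load balancing rather than application-side selection.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical data center sites: name -> (latitude, longitude).
SITES = {
    "beijing":   (39.9042, 116.4074),
    "changsha":  (28.2282, 112.9388),
    "frankfurt": (50.1109, 8.6821),
}

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_site(user_pos):
    """Return the name of the site closest to the user's (lat, lon)."""
    return min(SITES, key=lambda name: haversine_km(user_pos, SITES[name]))

# A user in Munich is routed to Frankfurt; one in Wuhan to Changsha.
print(nearest_site((48.1351, 11.5820)))   # -> frankfurt
print(nearest_site((30.5928, 114.3055)))  # -> changsha
```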

2022

11/14

"Cold" and "Hot" in Data Centers

Immersion liquid cooling submerges servers, densely packed cabling and all, in a water-like liquid. The liquid is not ordinary water, however, but a special insulating coolant: the heat generated by the servers' computation is absorbed by the coolant and then carried into an external cooling loop.

A data center's energy consumption is roughly made up of communication and network equipment, the power supply and distribution system, lighting and auxiliary equipment, and the cooling system, with cooling accounting for about 40% of the total.

In the past, air cooling was widely used because of its low cost and simple deployment. In recent years, however, with the rise of high-density computing, chip and server performance and per-cabinet power density have kept climbing, and immersion liquid cooling has come into favor.

In the early days, data centers could rely on only a few mainframes for local computation; they could neither perform distributed computing nor provide services to the outside world. It was not until the Internet emerged in the mid-1990s, with its enormous impact on the market, that data centers became an accepted service model for most companies, driven by the need to support Internet business applications. After 2010, with the rise of cloud computing, cloud data centers returned to the spotlight. Compared with their predecessors, cloud data center infrastructure is more scaled, standardized, and intelligent, with lower construction costs and more services carried.

Looking back at this history, it is easy to see that data has gradually become a new production factor whose importance to productivity keeps rising. The physical carrier of the data center is its IT equipment, and the computing power of that equipment is determined by its chips. The power consumption of mainstream server chips keeps growing, and in recent years the curve has steepened considerably. As server chip power grows from 100 W and 200 W to 350 W and 400 W, server power consumption roughly doubles, which drives per-cabinet power density from the earliest 4 kW and 6 kW up to 15 kW or even 20 kW. Under such a trend, traditional air cooling can no longer meet the heat dissipation and cooling needs of data centers.

Air cooling is nevertheless still used at scale. By continuing to enhance it, increasing airflow volume and heat-exchange area, applying heat-pipe technology and other advanced techniques, and optimizing at the data center, rack, and node levels, the PUE of a data center can be brought down to about 1.2. Yet even with all these optimizations, up to 20% of energy consumption is still spent on data center cooling and maintenance, plus the energy consumed by system fans. That is still far from ideal for carbon reduction goals (a worked PUE example follows below).
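As a rough illustration of how the cooling share and the PUE figure above relate, the sketch below computes PUE (total facility power divided by IT power) for a hypothetical facility. The specific loads are made-up numbers for illustration only, not measurements from any Coolnet site.

```python
def pue(it_kw, cooling_kw, other_kw):
    """Power Usage Effectiveness: total facility power / IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Hypothetical 1 MW IT load with cooling at ~40% of total facility power,
# as in the breakdown above: solve total = it + 0.40 * total + other.
it = 1000.0    # kW of IT equipment
other = 150.0  # kW for power distribution, lighting, auxiliaries
total = (it + other) / (1 - 0.40)
cooling = 0.40 * total
print(f"cooling load: {cooling:.0f} kW, PUE = {pue(it, cooling, other):.2f}")

# With heavily optimized air cooling driving PUE down to ~1.2, the entire
# non-IT budget (cooling plus everything else) shrinks to roughly:
optimized_total = 1.2 * it
print(f"optimized non-IT budget: {optimized_total - it:.0f} kW")
```

Running this gives a PUE of about 1.92 at a 40% cooling share, versus a 200 kW total overhead budget at PUE 1.2, which is why the remaining ~20% spent on cooling and fans still matters for carbon reduction.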

2022

11/14

Application Reference: Mongolian Mining Corporation Data Center Recommended Solution

Mongolian Mining Corporation is a high-quality coking coal producer and exporter in Mongolia. The company owns and operates two open-pit coking coal mines, Ukhaa Khudag and Baruun Naran, both located in Umnugobi aimag of Mongolia. The primary requirement for energy and power data centers is reliability first, followed by economy and energy saving.

Recommended solution: CyberMaster medium and large computer room air conditioning series, in water-cooled, dual cold source, or chilled water dual-coil or single-coil configurations.

Data Center Features:
● Computer room area of more than 500 m²
● Single-rack load of about 5 kW
● A high heat density rack area with single-rack loads greater than 5 kW
● 365 × 24 hours uninterrupted operation
● Indoor requirements: temperature 23 °C ± 2 °C, humidity 50% ± 5%, cleanliness of no more than 18,000 particles ≥ 0.5 μm per liter

Main Requirements:
● Reliability: at least 99.999% availability; no single point of failure in the system; faults resolved within 10 minutes or covered by a backup switchover plan (see the availability sketch below)
● Energy saving: energy-efficient equipment, with system-level design, operational, and design energy savings all taken into account
● Economy: optimal overall investment, with the option to invest in batches to reduce the cost of capital
● Maintainability: the lowest maintenance cost, the lowest demands on maintenance expertise, and the least maintenance difficulty
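To make the 99.999% ("five nines") reliability target concrete, the sketch below converts an availability percentage into an annual downtime budget. The helper name and the comparison against the 10-minute fault-resolution window are illustrative additions, not part of the original specification.

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # non-leap year

def downtime_minutes_per_year(availability):
    """Annual downtime budget implied by an availability fraction."""
    return (1 - availability) * MINUTES_PER_YEAR

for availability in (0.999, 0.9999, 0.99999):
    budget = downtime_minutes_per_year(availability)
    print(f"{availability:.3%} availability -> {budget:6.1f} min/year downtime")

# Five nines allow only ~5.3 minutes of downtime per year, so a single fault
# taking the full 10 minutes to resolve would already exceed the annual
# budget unless a backup plan keeps the load running during the repair.
```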
