In today’s article, we’re going to talk about computing power.
Computing power has been a hot concept in the ICT industry in recent years. It appears in news reports and in speeches by celebrities.
So, what exactly is computing power? What are the categories of computing power and what are their uses? What is the current state of global computing development?
Next, DiskMFR will walk you through it in detail.
What exactly is computing power?
Put simply, computing power is the ability to process data and produce a target output.
We humans are, in fact, capable of this ourselves. We perform calculations all the time in daily life; the brain is itself a powerful computing engine.
Most of the time, we do mental arithmetic without any tools. But that kind of computing power is limited, so when we face complex problems, we turn to computing tools.
In ancient times, our earliest computing tools were knotted ropes and stones. Later, as civilization advanced, more practical tools appeared, such as counting rods (small sticks used for calculation) and the abacus, and our computing power steadily improved.
By the 1940s, a computing revolution had arrived.
In February 1946, the world’s first digital electronic computer, ENIAC, was born, marking humanity’s formal entry into the digital electronic era of computing.
Later, with the emergence and development of semiconductor technology, we entered the chip era. The chip became the main carrier of computing power.
Time went on.
By the 1970s and 1980s, chip technology, governed by Moore’s Law, had come a long way. Chips became faster and smaller. Eventually, computers were miniaturized, and the PC was born.
The birth of the PC was of profound significance. It signaled that IT computing power was no longer the preserve of a few large enterprises (with their mainframes) but was reaching ordinary households and small and medium-sized businesses. It opened the door to the information age for everyone and drove the popularization of information technology across society.
With the PC, people could directly experience the improvements in quality of life and production efficiency that IT computing power brings. The PC also laid the foundation for the later boom of the Internet.
Since the beginning of the 21st century, computing power has undergone great changes again.
This is marked by the advent of cloud computing.
Before cloud computing, humans relied on single-point computing (one mainframe or one PC completing all computing tasks independently), and had experimented with distributed architectures such as grid computing (where a huge computing task is broken into many smaller tasks and assigned to different computers to complete).
Cloud computing is a new attempt at distributed computing. Its essence is to aggregate large numbers of scattered computing resources into a single pool, achieving higher reliability, higher performance, and lower cost.
Specifically, in cloud computing, resources such as the central processing unit (CPU), memory, hard disk, and graphics card (GPU) are pooled by software into a virtual “computing power resource pool” that can be expanded elastically.
When a user has a computing demand, the pool dynamically allocates resources, and the user pays according to actual usage.
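The pooling and pay-as-you-go allocation described above can be sketched in a few lines of Python. This is a simplified illustration only; the class and parameter names are invented for this example and do not correspond to any real cloud API:

```python
class ResourcePool:
    """Toy model of a cloud 'computing power resource pool' (names invented)."""

    def __init__(self, total_vcpus: int):
        self.total = total_vcpus   # capacity aggregated from many physical servers
        self.used = 0

    def allocate(self, vcpus: int) -> bool:
        """Grant a slice of the pool on demand; refuse if capacity is exhausted."""
        if self.used + vcpus > self.total:
            return False
        self.used += vcpus
        return True

    def release(self, vcpus: int) -> None:
        """Pay-as-you-go: released resources return to the pool for other users."""
        self.used -= vcpus


pool = ResourcePool(total_vcpus=1024)   # pooled capacity of a small cluster
print(pool.allocate(8))                 # a user requests 8 vCPUs → True
pool.release(8)                         # usage ends; the user stops paying
```

The key design idea is that no user owns any particular machine: everyone draws from, and returns to, the same shared pool, which is what makes high utilization and per-use billing possible.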
Compared with users buying their own equipment, building their own server rooms, and handling their own operations and maintenance, cloud computing offers clear cost-performance advantages.
Once computing power moved to the cloud, the data center became its main carrier, and the scale of human computing power began a new leap.
Classification of computing power
The emergence of cloud computing and data centers stems from the deepening of informatization and digitalization, which has created strong demand for computing power across society.
These demands come not only from the consumer sector (mobile Internet, video streaming, online shopping, ride-hailing, O2O, etc.), but also from industry (industrial manufacturing, transportation and logistics, finance and securities, education, healthcare, etc.) and from urban governance (smart cities, Yizhutong, city brains, etc.).
Different applications and needs call for different algorithms, and different algorithms place different requirements on the characteristics of computing power.
Generally, we divide computing power into two major categories: general-purpose computing power and specialized computing power.
- General Computing Power: x86, ARM
- Specialized Computing Power: FPGA, ASIC
As you may have heard, the chips that deliver computing power are divided into general-purpose chips and specialized chips.
CPU chips such as x86 are general-purpose chips. The computing tasks they can handle are diverse and flexible, but their power consumption is higher.
Specialized chips mainly refer to FPGAs and ASICs.
An FPGA is a programmable integrated circuit: its internal logic can be reconfigured through hardware programming, allowing the chip to be deeply customized for specialized tasks.
An ASIC is an application-specific integrated circuit. As the name suggests, it is a chip custom-built for a particular purpose, with most of its algorithms hard-wired into the silicon.
ASICs can perform specific computing functions with low energy consumption, but their role is relatively narrow. FPGAs sit somewhere between general-purpose chips and ASICs.
Let’s take bitcoin mining as an example.
At first, people mined with PCs (x86 general-purpose chips). As mining difficulty rose, that computing power was no longer enough, so miners switched to graphics cards (GPUs). Later, when GPU power consumption grew too high to cover the electricity bill, FPGA and ASIC cluster arrays took over.
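To see why general-purpose chips lost this race, consider what mining actually measures: hashes per second (Hash/s). The Python sketch below is illustrative only; a real miner also searches for a nonce that meets a difficulty target, which is omitted here:

```python
import hashlib
import time


def double_sha256(data: bytes) -> bytes:
    # Bitcoin's proof of work applies SHA-256 twice to the block header
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()


def hash_rate(seconds: float = 0.5) -> float:
    """Count how many double-SHA-256 hashes one CPU core manages per second."""
    header = b"\x00" * 76          # fixed part of an 80-byte block header
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        double_sha256(header + count.to_bytes(4, "little"))  # vary the nonce
        count += 1
    return count / seconds
```

A single CPU core running code like this manages on the order of 10^5 to 10^6 Hash/s, while a modern mining ASIC performs more than 10^13. That gap of many orders of magnitude is exactly what drove the migration from PCs to GPUs to dedicated hardware.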
In data centers, computing tasks are likewise divided into basic general-purpose computing and HPC (high-performance computing).
HPC is further subdivided into three categories:
- Scientific computing: physical chemistry, meteorology, environmental protection, life science, petroleum exploration, astronomical exploration, etc.
- Engineering computing: computer-aided engineering, computer-aided manufacturing, electronic design automation, electromagnetic simulation, etc.
- Intelligent computing: AI (Artificial Intelligence) computing, including machine learning, deep learning, data analysis, and so on.
You have probably heard of scientific and engineering computing: these specialized research fields generate enormous amounts of data and place extremely high demands on computing power.
Take oil and gas exploration. Simply put, oil and gas exploration is a CT scan of the subsurface. In a single project, the raw data often exceeds 100 TB and may even exceed 1 PB. Such a huge amount of data requires a huge amount of computing power to process.
Intelligent computing deserves special attention.
AI is currently a focus across society; every field is studying how to apply and deploy artificial intelligence.
The three core elements of artificial intelligence are computing power, algorithms, and data.
As is well known, AI is extremely hungry for computing power. AI workloads involve large numbers of matrix and vector multiply-add operations. These operations are quite specialized, and the CPU is not well suited to them.
In practice, people mainly use GPUs and the specialized chips mentioned above for AI computing. The GPU, in particular, is the workhorse of AI computing power.
Although the GPU is a graphics processor, it has far more cores (logic units) than a CPU. It is well suited to sending the same instruction stream to many cores in parallel, each operating on different input data, to complete the massive numbers of simple operations found in graphics processing and big-data workloads.
Therefore, GPUs are more suitable for processing computationally intensive and highly parallel computing tasks (such as AI computing).
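A quick back-of-the-envelope calculation shows why these workloads are measured in FLOPS. Multiplying an m-by-k matrix with a k-by-n matrix takes roughly 2·m·k·n floating-point operations (one multiply and one add per term), so even a single neural-network layer racks up billions of them. The layer sizes below are illustrative, not taken from any particular model:

```python
def matmul_flops(m: int, k: int, n: int) -> int:
    # An (m x k) @ (k x n) product computes m*n dot products,
    # each needing k multiplications and k additions: 2*m*k*n FLOPs.
    return 2 * m * k * n


# One fully connected layer: batch of 64 inputs, 4096 -> 4096 features
flops = matmul_flops(64, 4096, 4096)
print(f"{flops / 1e9:.1f} GFLOPs per forward pass")  # → 2.1 GFLOPs per forward pass
```

A chip rated at around 10 TFLOPS could in principle sustain thousands of such passes per second, which is why an accelerator’s FLOPS rating translates directly into training and inference throughput.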
In recent years, driven by the strong demand for AI computing, China has also built a number of intelligent computing centers, that is, data centers dedicated to intelligent computing.
In addition to intelligent computing centers, there are many supercomputing centers. These house supercomputers such as Tianhe-1, which are dedicated to large-scale scientific and engineering computing tasks.
Computing facilities thus now come in three main types, and the data centers we usually encounter are mostly of the first kind:
- Cloud computing data center
- Smart Computing Center
- Supercomputing Center
In these facilities, the workloads are mixed, ranging from basic general-purpose computing to high-performance computing, along with a growing amount of heterogeneous computing (computing that mixes different instruction-set architectures). As demand for high performance grows, the share of specialized computing chips keeps increasing.
The TPUs, NPUs, and DPUs that have become popular in recent years are all specialized chips.
| Abbreviation | Full Name |
| --- | --- |
| APU | Accelerated Processing Unit |
| CPU | Central Processing Unit |
| DPU | Data Processing Unit |
| GPU | Graphics Processing Unit |
| NPU | Neural Network Processing Unit |
| TPU | Tensor Processing Unit |
The “computing power offloading” we often hear about does not mean deleting computing power. It means transferring computing tasks such as virtualization, data forwarding, compressed storage, and encryption/decryption from the CPU to chips such as the NPU and DPU, reducing the CPU’s computing burden.
In recent years, beyond basic general-purpose, intelligent, and supercomputing power, the scientific community has also raised the concept of frontier computing power, including quantum computing and photonic computing, which are worth watching.
Measures of computing power
Since computing power is an “ability,” there are naturally indicators and benchmark units to measure its strength. The most familiar are FLOPS, TFLOPS, and so on.
In fact, there are many indicators to measure the size of computing power, such as MIPS, DMIPS, OPS, and so on.
| Unit of Measurement | Full Name |
| --- | --- |
| MIPS | Million Instructions Per Second |
| DMIPS | Dhrystone Million Instructions executed Per Second |
| OPS | Operations Per Second |
| FLOPS | Floating-point Operations Per Second |
| Hash/s | Hashes Per Second |
MFLOPS, GFLOPS, TFLOPS, PFLOPS, and so on are FLOPS at different orders of magnitude. The specific relationship is as follows:

| Unit of Measurement | Magnitude |
| --- | --- |
| MFLOPS | 10^6 FLOPS |
| GFLOPS | 10^9 FLOPS |
| TFLOPS | 10^12 FLOPS |
| PFLOPS | 10^15 FLOPS |
| EFLOPS | 10^18 FLOPS |
| ZFLOPS | 10^21 FLOPS |
The difference in computing power between different carriers is enormous. To make the gap easier to grasp, DiskMFR has drawn up a comparison table:

| Computing Platform | Computing Power |
| --- | --- |
| Intel 8086 CPU | 710 KFLOPS |
| Intel Pentium 4 HT 3.6 GHz | 7 GFLOPS |
| Nvidia GeForce GTX 1080 Ti | 10.8 TFLOPS |
| Qualcomm Snapdragon 888 | 26 TFLOPS |
| Tianhe-1 (supercomputer) | 2.566 PFLOPS |
| Sunway TaihuLight (supercomputer) | 125 PFLOPS |
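Raw FLOPS counts quickly become unreadable at these scales. A small helper can render them with the usual metric prefixes by walking up factors of 1,000 (a sketch; the unit list simply mirrors the magnitude table above):

```python
def format_flops(value: float) -> str:
    """Render a raw FLOPS figure with the usual metric prefix."""
    units = ["FLOPS", "KFLOPS", "MFLOPS", "GFLOPS",
             "TFLOPS", "PFLOPS", "EFLOPS", "ZFLOPS"]
    i = 0
    while value >= 1000 and i < len(units) - 1:
        value /= 1000   # step up one metric prefix per factor of 1,000
        i += 1
    return f"{value:g} {units[i]}"


print(format_flops(710_000))                  # Intel 8086        → 710 KFLOPS
print(format_flops(125_000_000_000_000_000))  # Sunway TaihuLight → 125 PFLOPS
```

The same scheme explains the forecasts quoted below: a ZFLOPS figure is a 1 followed by 21 zeros worth of floating-point operations every second.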
Earlier we mentioned general-purpose computing, intelligent computing, and supercomputing. Intelligent computing power and supercomputing power are growing much faster than general-purpose computing power.
According to GIV forecasts, general-purpose computing power (FP32) will grow tenfold to 3.3 ZFLOPS by 2030, while AI computing power (FP16) will grow 500-fold to 105 ZFLOPS.
Current situation and future of computing power
As early as 1961, John McCarthy, the “father of artificial intelligence,” proposed the goal of utility computing. “One day, computing may be organized as a public utility, just as the telephone system is a public utility,” he argued.
Today, his vision has become a reality. In the digital wave, computing power has become a public basic resource like water and electricity, and data centers and communication networks have also become important public infrastructure.
This is the result of the IT industry and communication industry’s hard work for half a century.
For human society as a whole, computing power is no longer merely a technical concept. It has risen to the dimensions of economics and philosophy, becoming a core productive force of the digital economy era and the cornerstone of society-wide digital transformation.
The life of each of us, as well as the operation of factories and enterprises, and the operation of government departments, are inseparable from computing power. We also need massive computing power in key areas such as national security, national defense construction, and research in basic disciplines.
Computing power determines the speed of digital economy development and the height of social intelligence development.
According to data jointly released by IDC, Inspur Information, and Tsinghua University’s Institute for Global Industry, every 1-point increase in the computing power index drives growth of 3.5‰ in the digital economy and 1.8‰ in GDP.
In the field of computing power, the competition between countries is increasingly fierce.
In 2020, China’s total computing power reached 135 EFLOPS, up 55% year on year, about 16 percentage points above the global growth rate. China currently ranks second in the world in absolute computing power.
On a per capita basis, however, China is not ahead, sitting only in the middle tier of computing power nations.
In chips and other core computing technologies in particular, there is still a large gap with developed countries. Many chokepoint technologies remain unsolved, seriously affecting computing security and, in turn, national security.
So, there is still a long way to go, and we need to continue to work hard.
In the future, informatization, digitalization, and intelligentization will accelerate further. The arrival of the era of the intelligent connection of everything, the deployment of massive numbers of intelligent IoT terminals, and the landing of AI application scenarios will generate unimaginable amounts of data.
Such data will further stimulate the demand for computing power.
According to Roland Berger’s forecast, from 2018 to 2030, the demand for computing power for autonomous driving will increase 390 times, and the demand for smart factories will increase 110 times. The demand for computing power per capita in major countries will increase 20 times from less than 500 GFLOPS today to 10,000 GFLOPS in 2035.
Global computing power will reach 6.8 ZFLOPS by 2025, a 30-fold increase from 2020, according to Inspur AI Research Institute.
A new round of computing revolution is accelerating.
Computing power is such an important resource, but in fact, there are still a lot of problems with how we use computing power.
For example, there are problems of computing power utilization and of balanced distribution. According to IDC, the utilization rate of enterprises’ small, scattered computing resources is currently only 10%–15%, which is a great waste.
Moore’s Law began to slow around 2015, and the growth of computing power per unit of energy consumption has gradually fallen behind the growth of data volume. While we keep tapping the potential of chip computing power, we must also tackle the problem of computing power scheduling.
So, how should computing power be scheduled? Can existing communication network technology meet the demands of computing power scheduling?