Hardware and algorithms each account for roughly half of what matters in artificial intelligence, and at the chip level the industry is nearly unanimous on one point: for deep learning, the GPU matters far more than the CPU. That is why NVIDIA's profile in artificial intelligence has lately overshadowed even Intel's.
There is no doubt that the GPU is currently the most popular way to train deep learning neural networks, an approach favored by companies such as Google, Microsoft, IBM, Toyota, and Baidu. As a result, GPU makers have become the darlings of the market over the past two years.
As the undisputed leader in the GPU field, NVIDIA has been making frequent moves. Earlier this year the company launched the Tesla P100 GPU for deep neural networks and released the NVIDIA DGX-1, a single-chassis deep learning supercomputer built around that GPU.
Now that this deep learning supercomputer has shipped, NVIDIA CEO Jen-Hsun Huang recently delivered a DGX-1 to OpenAI, the artificial intelligence lab co-founded by Elon Musk. What OpenAI will use the DGX-1 for, and how, is not yet known, but we can still look at what this deep learning supercomputer is and what makes it special.
What is a deep learning supercomputer?

As the name implies, a deep learning supercomputer combines deep learning with supercomputing. The "Tianhe-1" and "Tianhe-2" we all know are supercomputers, and beyond those, high-performance computing (HPC) machines can also count as supercomputers, such as systems built on NVIDIA's Tesla accelerators.
Deep learning neural networks, especially very deep ones with many layers, demand enormous amounts of computation and data throughput, and GPUs have a natural advantage in handling such heavy workloads: their excellent floating-point performance delivers both the speed and the accuracy needed for classification and convolution.
GPU-powered supercomputers have therefore become the preferred choice for training all kinds of deep neural networks. In the Google Brain project, for example, 12 GPUs spread across 3 machines reached performance comparable to a cluster of roughly 1,000 CPU servers.
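To make that advantage concrete, here is a minimal, hypothetical sketch (not Google Brain's actual setup) that times a single convolution layer on CPU versus GPU with PyTorch; the layer sizes, batch size, and iteration count are illustrative assumptions, and a CUDA-capable device is assumed for the GPU run:

```python
# Minimal illustration: time 50 forward passes of one convolution layer
# on CPU and, if available, on GPU. Sizes are arbitrary assumptions.
import time
import torch

x = torch.randn(32, 3, 224, 224)                 # a batch of 32 RGB images
conv = torch.nn.Conv2d(3, 64, kernel_size=3, padding=1)

def bench(device: str) -> float:
    xb, net = x.to(device), conv.to(device)
    if device == "cuda":
        torch.cuda.synchronize()                 # wait for pending GPU work
    start = time.time()
    with torch.no_grad():
        for _ in range(50):
            net(xb)
    if device == "cuda":
        torch.cuda.synchronize()                 # make the timing honest
    return time.time() - start

print("CPU:", round(bench("cpu"), 3), "s")
if torch.cuda.is_available():
    print("GPU:", round(bench("cuda"), 3), "s")
```

On typical hardware the GPU run finishes many times faster, which is the gap the Google Brain comparison above points to, though the exact ratio depends entirely on the machines used.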
How does the NVIDIA DGX-1 perform?

Jen-Hsun Huang has said that it took 3,000 people three years to develop the DGX-1, which gives a sense of how difficult deep learning supercomputer R&D is.
According to the official NVIDIA introduction, the specifications of the DGX-1 are as follows:
Half-precision (FP16) peak performance up to 170 Teraflops;
8 Tesla P100 GPU accelerators, each with 16GB of memory;
NVLink hybrid cube mesh interconnect;
7TB of solid-state drive storage as a deep learning cache;
Dual 10-Gigabit Ethernet plus quad InfiniBand 100Gb network connections;
3U chassis with 3,200W power consumption.
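As a rough sanity check, the 170-teraflop headline figure lines up with eight P100s at their commonly cited FP16 peak of about 21.2 teraflops each; that per-GPU number comes from NVIDIA's published P100 (SXM2) specification rather than from this article, so treat the sketch below as an assumption-based back-of-the-envelope calculation:

```python
# Aggregate FP16 peak of the DGX-1, assuming ~21.2 TFLOPS FP16 per Tesla P100.
P100_FP16_TFLOPS = 21.2   # assumed per-GPU half-precision peak
NUM_GPUS = 8

dgx1_peak = P100_FP16_TFLOPS * NUM_GPUS
print(f"Aggregate FP16 peak: {dgx1_peak:.1f} TFLOPS")  # ~169.6, i.e. the quoted ~170
```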
Because NVIDIA packs all of this hardware into a single chassis, the DGX-1 is described as a single-chassis deep learning supercomputer.
The Tesla P100 packs 15.3 billion transistors on a 16nm FinFET process, with a die area of 610mm². According to Jen-Hsun Huang, it is the largest chip built to date.
The eight 16GB GPUs integrated in the DGX-1 deliver throughput equivalent to roughly 250 conventional servers, while the 7TB SSD stores the large volumes of raw data used to train neural networks.
In addition, the DGX-1 ships with a suite of deep learning software, including the Deep Learning GPU Training System (DIGITS™), which can be used to design deep neural networks (DNNs). The DGX-1 is said to accelerate deep learning training by 75 times and to deliver 56 times the performance of a CPU. What does that mean in practice?
An Intel Xeon system needs more than 250 nodes and 150 hours to train AlexNet, whereas the DGX-1 needs only a single node and 2 hours, a clear advantage in both performance and total node count. Of course, that performance comes at a cost: power consumption reaches 3,200W, and the price is a steep US$129,000.
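To put those quoted figures side by side, here is a small back-of-the-envelope sketch using only the numbers reported above:

```python
# Comparing the quoted AlexNet training figures: a 250-node Xeon cluster at
# 150 hours versus a single DGX-1 node at 2 hours (numbers as reported above).
xeon_nodes, xeon_hours = 250, 150
dgx_nodes, dgx_hours = 1, 2

print("Wall-clock speedup:", xeon_hours / dgx_hours)      # 75x, matching the 75x claim
print("Xeon node-hours:   ", xeon_nodes * xeon_hours)     # 37,500 node-hours
print("DGX-1 node-hours:  ", dgx_nodes * dgx_hours)       # 2 node-hours
```

The 75x wall-clock speedup falls directly out of the 150-hour versus 2-hour figures, which is consistent with the training acceleration claim quoted earlier.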
Is the GPU the only choice?

Although the GPU holds clear advantages over the CPU, it is far from unassailable when set against FPGAs and dedicated neural network chips.
Researchers have found that the FPGA's architecture is more flexible and offers better performance per watt than the GPU: deep learning algorithms can run faster and more efficiently on FPGAs while drawing even less power. Intel has even launched a hybrid architecture that pairs an FPGA with a CPU.
Another research direction is dedicated neural network chips, represented by IBM's TrueNorth and Cambricon's DianNao. According to simulation results, Cambricon's deep learning processor built on the DianNaoYu instruction set achieves an order-of-magnitude performance improvement over CPUs based on the x86 instruction set, while IBM's TrueNorth packs 5.4 billion transistors into a synapse chip yet keeps power consumption as low as 700 milliwatts, lifting both performance and power efficiency to a new level.
Chen Yunji, who leads the Cambricon neural network processor work and is a researcher at the Institute of Computing Technology of the Chinese Academy of Sciences, has said that "accelerator chips are the ultimate form of neural network chips."
Still, the ideal is one thing and reality another. For now, the GPU remains the only solution deployed at scale, and FPGAs and neural network chips have a long way to go before they can displace it.