
Intel will launch a discrete GPU in 2020, but the CPU overlord's road to high-end GPU R&D has been somewhat bumpy

via: 雷锋网 (Leifeng.com)     time: 2018/6/13 22:32:08

That Intel will launch a discrete GPU by 2020 is no surprise

It is not surprising that Intel will launch a discrete GPU, because it has poached Raja Koduri. Raja Koduri was named Intel's chief architect on November 8th last year and serves as senior vice president of the newly established Core and Visual Computing Group. Prior to joining Intel, Raja Koduri was senior vice president and chief architect of AMD's Radeon business unit, where he was responsible for a number of graphics-related products including APUs, discrete GPUs, semi-custom products, and GPU computing products. Earlier still, Raja Koduri was responsible for the Mac's graphics display system at Apple.

The addition of this expert with more than 25 years of GPU experience has allowed Intel to bring its discrete GPU plans forward. According to Intel, Raja Koduri was chosen for his experience in visual computing and accelerated computing across PCs, game consoles, professional workstations, computing devices, and other platforms; he is an expert in graphics hardware, software, and system architecture. With Raja Koduri on board, other GPU talent is likely to follow him to Intel, which will greatly strengthen Intel's GPU talent pool.

Shortly after Raja Koduri joined, there were rumors that Intel would launch a new GPU at CES 2019 in January next year, but for the design of such a complex chip that is clearly too aggressive and unrealistic. A typical GPU architecture and chip development cycle is about three years, so introducing a discrete GPU in 2020 is relatively reasonable; whether it can launch as planned, however, also depends on technical and staffing issues.

The goals are simply the gaming and data center markets

Intel's goal in developing high-end GPUs is nothing more than gaming and the rapidly growing AI market. Nvidia currently holds the leading position in the GPU market. Its first-quarter 2018 financial report shows that revenue from its gaming chip business, the pillar of its performance, grew 68% to $1.72 billion, while data center revenue grew 71% to $701 million. Beyond gaming and the data center, the huge demand for high-performance GPUs from cryptocurrencies and self-driving cars is also reflected in Nvidia's latest financial report.

Nvidia's recent strong earnings make AMD and Intel unwilling to miss the rapid growth of the high-performance GPU market. AMD unveiled the world's first Vega GPU built on a 7nm process at Computex 2018, saying sample shipments had begun and mass shipments were expected later this year. For AMD, which competes fiercely with Nvidia in the gaming market, the intent is clearly to be first to 7nm GPUs and grab share in the data center GPU market. AMD pointed out at the event that data will grow 50-fold by 2025: wearables, IoT, and 5G devices are becoming widespread, and these devices will generate massive amounts of data. As data volumes and algorithm complexity grow rapidly, so does the demand for computing power.

As we all know, processor demand in data centers today comes mainly from CPUs and GPUs. Intel already holds the advantage in data center CPUs; with its own high-end discrete GPU, it could pair the GPU with its CPUs to deliver more powerful computing and compete with Nvidia and AMD. Whether Intel will first target gaming GPUs or data center GPUs is unknown; analyst Ryan Shrout believes Intel will first launch a discrete GPU for gaming PCs.

The CPU overlord's discrete GPU R&D history

Intel currently has no discrete graphics solution of its own; its graphics are limited to integrated graphics, namely the Intel HD Graphics series. These integrated graphics have relatively weak image processing capability and are suitable only for client devices with modest graphics demands, such as notebooks. Few would question the chip design capability of Intel, the CPU overlord, yet its GPU development has in fact not gone smoothly. Intel's GPU R&D can be traced back to 1997, when Intel obtained 2D display core technology by acquiring Chips and Technologies (C&T) and developed 3D technology with the help of Real3D, in which it held a 20% stake. In February 1998, the Intel 740 (i740 for short) was officially released; it is the only display core Intel has developed for use on a discrete graphics card.

Intel 740, the only display core Intel developed for a discrete graphics card

The i740 was the first display core to use the Hyper Pipelined 3D architecture, with a 64-bit design, and was manufactured on a 0.35-micron process. Its core frequency was synchronized with the AGP (Accelerated Graphics Port) bus, meaning a default of 66MHz; raising the AGP frequency could overclock the core. Beyond 3D graphics, the i740 offered excellent 2D display and video playback. As for performance, the i740's gaming performance was roughly half that of the Voodoo2 (the hottest 3D graphics card of the day, which pushed 3dfx to its peak) and also below the original Voodoo. Yet in the standard 3D WinBench 98 benchmark its scores were on par with the Voodoo2, leading some to suspect that the graphics driver was fooling the benchmark. Even so, with Intel's dominant position and cheap pricing, many manufacturers introduced i740-based products; prices kept falling, which drove high i740 sales and helped Intel gain a good share of the low-end GPU market.

On April 27, 1999, Intel announced the follow-up to the Intel 740, the Intel 752 (codenamed Portola, i752 for short). Its core architecture was 128-bit, its core frequency 100MHz, its display memory frequency 133MHz, and it supported up to 16MB of display memory. However, Intel decided before launch to integrate the i752 into its motherboard chipsets and cancelled the discrete graphics card, so only engineering-sample discrete i752 cards reached the market. Later, the Intel 752 became the Intel 754 (codenamed Coloma), adding AGP 4X support and being integrated into the i810E chipset, with its other parameters the same as the i752.

Later, when most people's understanding of the GPU was still limited to graphics acceleration, Nvidia saw the GPU's potential in other areas, began its GPGPU (general-purpose GPU) strategy, and launched CUDA in 2007. After years of development and accumulation, CUDA has become the first choice for developers thanks to its stable performance, easy-to-use APIs, complete documentation, and long-running developer community, and it is now the de facto standard in data centers. AMD's attitude toward GPGPU was less enthusiastic, but after seeing CUDA's success it joined several other vendors, such as Qualcomm, in promoting OpenCL, which is similar to CUDA. In addition, AMD launched the Heterogeneous System Architecture (HSA) in 2014, hoping to unify the memory space of the CPU and GPU to eliminate the performance loss caused by memory transfers between them, but it did not make much of a splash.
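To give a concrete sense of what GPGPU programming with CUDA looks like, here is a minimal illustrative sketch (not from the article, and not tied to any product mentioned here): the canonical vector-add kernel, in which each GPU thread computes one element of the result.

```cuda
// Minimal CUDA GPGPU sketch: add two large vectors on the GPU.
// Assumes a CUDA-capable GPU and the CUDA toolkit (compile with nvcc).
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles exactly one array element.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // one million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    // Unified (managed) memory is visible to both CPU and GPU,
    // so no explicit host<->device copies are needed in this sketch.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);  // launch kernel on the GPU
    cudaDeviceSynchronize();                  // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);              // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same data-parallel pattern, scaled up to matrix multiplications and convolutions, is what makes GPUs attractive for the data center and AI workloads discussed above.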

After seeing Nvidia and AMD successively launch GPGPU products, Intel planned to return to discrete graphics with Larrabee in order to maintain its advantage. The Larrabee graphics project was completely different from all graphics processing technologies of the time (including Intel's own GMA integrated graphics cores). Unlike AMD's and Nvidia's graphics-only stream processing designs, Larrabee was based on the x86 architecture: besides adding some new graphics instructions, it retained a large number of x86 instructions, giving Larrabee more flexible programmability and stronger general-purpose computing capability. It was an extension of Intel's multi-core x86 parallel computing architecture.

Intel planned to introduce the Larrabee graphics core as a consumer graphics processor by 2010 at the latest. Larrabee, Intel's second discrete display core after the Intel 740, was entirely different from Intel's integrated graphics cores, with a different R&D team and a different design philosophy. However, after repeated delays, unsatisfactory R&D progress, poor graphics performance, and high power consumption, Intel announced in May 2010 that it was cancelling plans to release Larrabee graphics cards. In 2011, Intel admitted for the first time that the MIC project was in fact a follow-up to the Larrabee project.

Nvidia CEO Jensen Huang often criticized Larrabee as unreasonable. He believed Larrabee's performance could never stand out while dragged down by the aging x86 architecture. He also criticized Intel for failing to strike a reasonable balance between programmable and fixed-function hardware: it overemphasized programmability, yet not all graphics processing tasks can be implemented programmatically, and those that can would perform very poorly. He further argued that Intel's move was pure smoke and mirrors, an attempt to dazzle the industry with paper numbers on slides, and that even if Larrabee did ship it would be an underwhelming product. In response to these criticisms, Intel joked that he was Larrabee's unpaid public relations manager.

There were also doubts about Intel's use of x86 cores as a GPU: skeptics suspected that Intel still did not put graphics processing first, because common graphics APIs such as DirectX and OpenGL had no hardware support on Larrabee, only software support; even Intel's claimed plan to develop its own graphics API amounted to little more than making full use of multiple x86 cores, which is essentially multi-core optimization.

It should be pointed out, however, that although the Larrabee project was shelved and criticized, Intel carried a large number of design elements from the Larrabee research program over into the Intel MIC many-core architecture announced in 2010; the biggest difference is that MIC focuses on cooperative many-core computing for high-performance computing, whereas Larrabee was intended to be used as a GPU. Larrabee also had considerable influence on how Nvidia and AMD design GPUs. The "Fermi" architecture Nvidia used in the GeForce 400 series introduced in 2010 borrowed some of Larrabee's design ideas: it modularized the GPU's internals, with each module, called a "GPC", grouping multiple sets of stream processors (organized into "SM" units) together with some special-function units. Apart from lacking its own memory controller and display output unit, each GPC module is essentially a small GPU, and data sharing between GPC modules is handled by a newly added global L2 cache. The "Graphics Core Next" architecture AMD adopted for the Radeon HD 7000 series at the end of 2011 also drew on Larrabee's design ideas; its basic building block, combining stream processors with instruction dispatch units, is called the "CU".

Summary

For now there is no further key information about the discrete GPU Intel will launch in 2020, and it is hard to predict whether the experienced Raja Koduri and the newly established Core and Visual Computing Group can give Intel a competitive edge in high-end GPUs; judging from its bumpy discrete GPU R&D history, it will not be easy.

More importantly, against the backdrop of Intel's all-out shift to AI, a successfully developed high-end GPU would let Intel compete in AI with CPU + GPU + FPGA combinations. After all, the Jetson Xavier, the AI chip for robots that Nvidia unveiled for the first time at Computex 2018, uses a CPU + GPU + DSP design. If Intel's discrete GPU does not slip past 2020, the breadth of chips Intel owns will greatly benefit its dominance in the chip field, in both the AI cloud and at the edge.
