
Nvidia plans to sell tech to speed AI chip communication

Nvidia plans to sell a technology that ties chips together to speed up the chip-to-chip communication needed to build and deploy artificial intelligence tools, the company said on Monday.

Nvidia launched a new version of its NVLink tech called NVLink Fusion on Monday that it will sell to other chip designers to help build powerful custom AI systems with multiple chips linked together.

Marvell Technology and MediaTek plan to adopt NVLink Fusion for their custom chip efforts, the company said. Other partners include Alchip, Fujitsu and Qualcomm.

Nvidia’s NVLink is used to exchange massive amounts of data between various chips, such as in the company’s GB200, which combines two Blackwell graphics processing units with a Grace processor.

Nvidia CEO Jensen Huang made the announcement about NVLink Fusion at the Taipei Music Center, site of the Computex AI exhibition that runs from May 20 to 23.

In addition to the announcement on new tech production, Huang disclosed the company’s plan to build a Taiwan headquarters in the northern suburbs of Taipei.

His keynote speech discussed Nvidia’s history of building AI chips, systems and software to support them.

He said his presentations in the past had focused on the company’s graphics chips. Now Nvidia has grown beyond its roots as a video game graphics chip maker into the dominant producer of the chips that have powered the AI frenzy since ChatGPT’s launch in 2022.

Nvidia has been designing central processing units that would run Microsoft’s Windows operating system and use technology from Arm Holdings, Reuters has previously reported.

At Computex last year, Huang sparked “Jensanity” in Taiwan, as the public and media breathlessly followed the CEO, who was mobbed by attendees at the trade show.

During the company’s annual developer conference in March, Huang outlined how Nvidia would position itself to address the shift in computing needs from building large AI models to running applications based on them.

He made public several new generations of AI chips, including the Blackwell Ultra, which will be available later this year.

The company’s Rubin chips will be followed by Feynman processors, which are set to arrive in 2028.

Nvidia also launched a desktop version of its AI chips, called DGX Spark, targeting AI researchers. On Monday, Huang said the computer was in full production and would be ready in a “few weeks”.

Computex, expected to have 1,400 exhibitors, will be the first major gathering of computer and chip executives in Asia since U.S. President Donald Trump threatened to impose tariffs to push companies to increase production in the United States.

Source: Reuters
