New infrastructure aims to cut AI training time sharply as Korea races to scale domestic AI capabilities
Naver said on Thursday it has completed construction of South Korea's largest artificial intelligence computing cluster, built around 4,000 of Nvidia's next-generation B200 Blackwell graphics processing units. The company said the system significantly expands its AI computing capacity and brings its infrastructure closer to the scale operated by leading global technology firms.
According to Naver, the "B200 4K Cluster" delivers computing power comparable to supercomputers ranked among the world's top 500. The cluster will be used to accelerate development of the company's proprietary foundation models and to support broader AI deployment across its services.
Why the cluster matters
Naver said internal simulations show the new infrastructure could speed up AI model development by around twelve times. Its research team reported that training a 72-billion-parameter model, a task that previously required about 18 months on an Nvidia A100-based system with 2,048 GPUs, can now be completed in roughly six weeks.
While actual training times may vary depending on workloads and configurations, the company said the improvement enables more frequent experimentation and faster iteration cycles, which are increasingly important in large-scale AI development.
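As a rough sanity check, the reported figures can be compared directly. This minimal sketch uses only the numbers stated above (18 months on the A100 system versus about six weeks on the B200 cluster); the month-to-week conversion factor is an assumption, not a figure from Naver.

```python
# Back-of-the-envelope check of the reported speedup, using the
# article's figures: ~18 months on 2,048 A100s vs. ~6 weeks on the
# new B200 cluster. A month is approximated as 52.14 / 12 weeks.
old_weeks = 18 * 52.14 / 12   # ~78.2 weeks on the A100-based system
new_weeks = 6                 # ~6 weeks on the B200 cluster
speedup = old_weeks / new_weeks
print(f"Implied speedup: ~{speedup:.0f}x")
```

The result is roughly 13x, consistent with the company's "around twelve times" claim once rounding in the reported durations is taken into account.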
Focus on foundation and multimodal models
The cluster is expected to play a central role in advancing Naver's in-house foundation models, including its omni model, which can process text, images, video, and audio within a single system. Naver said it plans to expand large-scale training of these multimodal models, aiming for performance comparable to global peers, before rolling them out across its platforms and partner industries.
Industry observers note that such infrastructure is becoming a baseline requirement for companies seeking to compete in foundation model development, where access to large, stable computing resources often determines development speed and model quality.
Engineering choices behind the performance gains
Naver attributed the performance gains to large-scale parallel processing combined with high-speed networking, along with improvements in cooling and power management. The company said the cluster design draws on its experience operating high-performance GPU systems, including its early commercial deployment of Nvidia's SuperPod infrastructure in 2019.
By directly designing and operating its own large clusters, Naver said it has been able to optimize system efficiency beyond standard off-the-shelf configurations.
Strategic framing around AI sovereignty
Choi Soo-yeon, CEO of Naver, framed the investment as part of a broader national and strategic effort rather than a standalone technology upgrade.
"This infrastructure secures a core asset that supports national AI competitiveness and self-reliance," she said. "With an environment that enables rapid learning and repeated experimentation, we can apply AI technologies more flexibly across services and industrial fields."
Her comments reflect a growing emphasis among South Korean technology firms on building domestic AI capabilities amid intensifying global competition and rising dependence on advanced computing resources.
Data center expansion and future capacity
Naver has not disclosed the specific data center hosting the new cluster, saying only that it is housed in a leased facility in Seoul to allow faster scaling. Separately, the company has announced plans to expand its data center footprint, including a major project in Sejong aimed at reaching 270 megawatts of capacity.
The expansion suggests the B200 cluster is part of a longer-term infrastructure roadmap rather than a one-off deployment.
Part of a broader AI infrastructure push
The latest build adds to Naver's wider effort to scale AI computing across South Korea. In October 2025, the company said it would deploy around 60,000 Nvidia GPUs through partnerships with LG AI Research, SK Telecom, NC AI, and Upstage.
Taken together, these moves point to an intensifying race among Korean technology firms to secure large-scale computing resources, as access to advanced AI infrastructure becomes a defining factor in long-term competitiveness in foundation models and applied AI services.

