Edited by Brian Birnbaum
Between 2024 and 2025, hyperscalers have grown to command approximately 60–66% of the global cloud market, leveraging their extensive services and worldwide infrastructure. Neocloud providers, by contrast, focused on delivering high-performance, cost-efficient infrastructure tailored for AI and machine learning workloads, hold a smaller but expanding share of under 5%.
By 2030, hyperscalers are projected to maintain 55–65% market share, driven by their scale and versatility, whereas neoclouds could capture 5–10% overall and 15–20% of the AI-specific segment, fueled by their cost efficiencies and specialized offerings.
Neoclouds are poised to outpace the broader cloud market’s growth in AI infrastructure, propelled by surging demand for affordable, high-performance GPU compute.
According to Horizon Grand View Research, the AI cloud market is expected to soar from $87 billion in 2024 to $647 billion by 2030, reflecting a robust 39.7% CAGR, with neoclouds well-positioned to claim an increasing share.
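As a quick sanity check on that figure, the 2024–2030 window spans six compounding years, so the implied growth rate works out to:

\[
\left(\frac{647}{87}\right)^{1/6} - 1 \approx 0.397 \;\approx\; 39.7\% \ \text{CAGR}
\]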
If neoclouds expand their share of a market compounding at nearly 40% per year, the leading neocloud could sustain annual growth rates exceeding 50% for years.
Within this dynamic landscape, Nebius Group emerges as a standout among neocloud providers.
But does Nebius possess a competitive moat relative to its peers, particularly in a cloud computing market often perceived as commoditized?
Let’s find out.
Index
1. Nebius at a glance
2. All In on Verticalization
3. The Apple of Datacenters?
4. Elite management & top-tier AI Engineers
5. Conclusion
1. Nebius at a glance
Headquartered in Amsterdam, Netherlands, Nebius is a leading AI infrastructure provider, delivering a full-stack cloud platform optimized for AI workloads through custom-designed data centers, proprietary hardware, and advanced software tools like Nebius AI Studio.
In recent months, Nebius has significantly expanded its global presence, establishing hubs across the United States and Europe to enhance its capacity to serve customers worldwide. In December 2024, a $700 million strategic equity raise, backed by Accel, NVIDIA, and Orbis, provided not only capital but also strong validation of Nebius’s vision to create a unique, high-demand AI infrastructure. The company’s core AI business achieved remarkable growth, surging over 600% in 2024, with robust performance continuing into 2025, positioning Nebius as a dynamic player in the AI cloud market.
Its cloud business combines these purpose-built data centers, in-house-designed hardware, and software tools such as Nebius AI Studio to deliver cost-effective, energy-efficient solutions for AI model training and inference.
Serving a diverse clientele—from AI-native startups to enterprises—Nebius combines the scalability and reliability of hyperscalers with the precision of specialized AI infrastructure, driving transformative applications in industries like healthcare, robotics, and entertainment.
Beyond its primary AI cloud business, Nebius oversees several ventures, including Avride (autonomous driving technology), Toloka (AI data labeling), TripleTen (edtech for tech reskilling), and a 28% stake in ClickHouse (data warehousing). While these businesses hold significant potential, they are excluded from this analysis due to their limited synergies with Nebius’s core cloud operations, which remain the company’s strategic focus. Although these ventures could generate substantial cash flow in the future, management’s emphasis on potential divestitures or attracting new investors to fund them suggests they are primarily viewed as sources of capital. As such, they represent a valuable upside in cash reserves, likely to bolster Nebius’s core AI infrastructure business rather than drive its long-term strategy.
To support its rapid growth and global expansion, Nebius employs three distinct data center models, each balancing control, efficiency, and speed to meet the demands of AI workloads:
Greenfield: Nebius owns the land and designs the entire facility, optimizing energy efficiency and performance. The Finland data center, a greenfield example, achieves a leading Power Usage Effectiveness (PUE) and recycles server heat to warm local homes, positioning Nebius for regulatory advantages.
Build-to-Suit: Nebius partners with developers who provide land and power, while Nebius specifies custom designs for efficiency. The New Jersey facility exemplifies this, incorporating advanced cooling and power management.
Co-location: Nebius leases space in third-party data centers for rapid deployment, using custom racks to enhance efficiency. Sites in France, Iceland, and Kansas City leverage renewable energy and meet strict performance standards.
This strategic flexibility is amplified by Nebius’s special relationship with NVIDIA, as a Reference Platform NVIDIA Cloud Partner, one of only a few globally recognized for expertise in deploying NVIDIA’s GPUs, like the H100 and Blackwell series, within a fully integrated hardware and software stack. This partnership ensures Nebius optimizes NVIDIA’s cutting-edge hardware for AI workloads, enhancing performance and energy efficiency, and strengthens its competitive edge in delivering cost-effective, high-performance AI infrastructure.
The content of this analysis is for entertainment and informational purposes only and should not be considered financial or investment advice. Please conduct your own thorough research and due diligence before making any investment decisions and consult with a professional if needed.
2. All In on Verticalization
Nebius’s full-stack verticalization is its cornerstone advantage, delivering unmatched cost and energy efficiency in an AI compute market that is likely commoditizing, where NVIDIA GPUs level the performance playing field. Nebius’s full-stack approach encompasses data centers, in-house-designed hardware, and an intelligent software layer, granting full control over performance, reliability, and cost efficiency, and enabling Nebius to provide
the lowest pricing in the GPU cloud market,
as stated by SemiAnalysis, an independent research firm specialized in semiconductors and AI.
This cost leadership stems from bespoke server and rack designs optimized for AI workloads, which improve power and cooling efficiency, lower latency, and integrate seamlessly with its cloud platform. That efficiency gives Nebius flexibility on pricing and delivers cost savings for customers by maximizing resource utilization and minimizing hardware bottlenecks.
For example, Nebius’s servers operate at up to 40°C, exceeding the ASHRAE standard of 27°C, which lets the company rely on air cooling for high-density chips and slash energy costs, while toolless rack designs cut maintenance from hours to minutes, further lowering expenses, as stated in its annual report.
Nebius’s greenfield data center in Finland achieves “one of the world’s leading power usage effectiveness (PUE) levels” and recycles 15,000–20,000 MWh of server heat annually to warm local homes, according to the company.
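For readers less familiar with the metric, PUE is simply the ratio of a facility’s total energy draw to the energy that actually reaches the IT equipment, so a value of 1.0 would mean zero overhead for cooling, power conversion, and everything else. The figures below are hypothetical and purely illustrative, not Nebius’s disclosed numbers:

\[
\text{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}, \qquad \text{e.g. } \frac{11\ \text{MW}}{10\ \text{MW}} = 1.10
\]

The lower the ratio, the less energy is wasted on overhead; industry surveys typically report averages closer to 1.5, which is why a leading PUE translates directly into a structural cost advantage per GPU-hour.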
During the Q4 2024 earnings call, Daniel Bounds, Nebius’s Chief Marketing Officer, highlighted this edge, noting that the company’s
expertise in building high PUE… lays the foundation of our cost advantage amongst our neocloud peers.
This leadership in energy efficiency not only drives significant cost savings but also positions Nebius to potentially capitalize on tightening energy consumption regulations.
As AI computing demand surges and national power grids face increasing strain, governments are likely to impose stricter regulations, introducing incentive programs to reward energy-efficient providers and penalize energy-intensive ones. Much like Tesla profited early on by selling CO2 emissions credits to other automakers, Nebius could leverage its energy-efficient infrastructure to generate revenue through similar credit-based systems in the cloud computing sector, establishing a competitive edge in a regulated future.
Nebius’s AI software stack is a pivotal component of its vertical integration strategy, seamlessly tying together its custom hardware and data centers to deliver a cohesive, high-performance AI cloud platform.
Andrey Korolenko, Head of Infrastructure at Nebius, emphasizes this during the Q1 2025 earnings call, stating,
we started Nebius with a clear goal to build a full-stack AI cloud… our focus has been to create a software stack that is specifically built for the AI workloads,
structured in three layers: hardware management, a virtualized cloud platform, and pre-configured AI tools that “simplify the entire AI development process”. This stack enhances efficiency with features like Slurm-based cluster upgrades for “automatic recovery, proactive system checks, [and] issue detection before the actual jobs fail,” reducing downtime, and optimized object storage that “boost[s] the speed of read and write for compute nodes” to accelerate training.
Tom Blackwell, Chief Communication Officer at Nebius, underscores its strategic importance, noting,
our software stack, it’s a critical part of the offering… we’re relatively unique in having that full stack offering among the Neocloud in our space…it also just makes us very sticky with customers,
enabling rapid GPU cluster provisioning and providing tools to “manage their data models, and track their progress”. By integrating with platforms like Metaflow and SkyPilot, the stack ensures “minimal friction” for customers, while its potential to “become even the most significant standalone driver of higher-margin revenue” reinforces Nebius’s ability to attract and retain a broad customer base, solidifying its vertically integrated ecosystem.
The value-added services component of Nebius’s verticalized strategy, including tools like Nebius AI Studio for inference-as-a-service, is a critical differentiator: it extends the utility and profitability of its offerings beyond commoditized bare-metal solutions and potentially creates new revenue streams that could command software-like margins.
3. The Apple of Datacenters?
The recently launched Nebius AI Studio inference service expands the company’s offering to app builders, providing access to a range of state-of-the-art open-source models in a flexible, user-friendly environment at among the lowest prices per token on the market.
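To make the app-builder angle concrete, below is a minimal sketch of how an inference-as-a-service platform of this kind is typically consumed, assuming an OpenAI-compatible API, which Nebius AI Studio advertises. The base URL, environment variable, and model name are illustrative placeholders rather than confirmed values; check the official documentation for the real ones.

import os
from openai import OpenAI  # pip install openai

# Hypothetical, illustrative configuration -- not confirmed values.
client = OpenAI(
    base_url="https://api.studio.nebius.ai/v1/",  # assumed OpenAI-compatible endpoint
    api_key=os.environ["NEBIUS_API_KEY"],         # illustrative environment variable name
)

# A standard chat-completion call against an open-source model served by the platform.
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-70B-Instruct",    # placeholder open-source model ID
    messages=[{"role": "user", "content": "Summarize what a neocloud is in one sentence."}],
    max_tokens=100,
)

print(response.choices[0].message.content)

Because the interface mirrors the de facto industry standard, an application can switch to whichever provider offers the lowest price per token by changing little more than the base URL and model name, which is exactly the dynamic that lets a cost leader win inference workloads.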
Could Nebius emerge as the "Apple of Data Centers," redefining AI infrastructure much like Apple transformed consumer electronics?
By leveraging external computing hardware, such as NVIDIA GPUs, and integrating it into a proprietary ecosystem of custom-designed hardware and a sophisticated software stack, Nebius can achieve superior performance and industry-leading energy efficiency, as seen in its Finland data center’s low PUE and heat recovery system. This vertical integration is reminiscent of Apple’s approach to consumer electronics, applied instead to data centers: optimizing every layer for cost savings and reliability while fostering an attractive marketplace for AI developers.
Although substantial evidence supports this thesis, including SemiAnalysis’s Gold rating for Nebius in its ClusterMAX™ GPU Cloud Rating System—placing it alongside hyperscalers like Azure and Oracle, and just behind leading NeoCloud CoreWeave—clear financial indicators of this competitive edge have yet to emerge.
To assess Nebius’s competitive moat, KPIs such as revenue growth and gross margins should be closely monitored, particularly in comparison to the leading NeoCloud competitor CoreWeave.
Nebius, with an annualized run-rate revenue of $229 million, is significantly smaller than CoreWeave, trailing by roughly an order of magnitude. Both companies are achieving triple-digit year-over-year revenue growth, though Nebius is playing catch-up, having shifted focus to the AI cloud market in 2024, while CoreWeave launched its cloud services in 2019.
For Nebius to demonstrate a structural advantage, it must outpace CoreWeave’s revenue growth to capture market share and, at the same time, achieve gross margins equal to or higher than CoreWeave’s, validating its cost leadership in the AI infrastructure space.
During the Q1 2025 earnings call, Roman Chernin, Chief Business Officer at Nebius, outlined a robust financial outlook for the company:
Our base case plan calls for several billion dollars of revenue in the midterm over the next few years. While our base case assumes that we grow our capacity to support this type of revenue growth from 2025 levels of 100 megawatt. Our ambition is to grow much larger and much faster. For that, we are building a data center pipeline to provide scalability to more than 1 gigawatt of power.
He further elaborated on profitability, stating,
Our target of [ 20% to 30% ] EBIT margins is a function of two factors. Just greater mix of workloads where we can run our GPU fleet with a high level of utilization for a longer period of time. Second, I would say is software. We put a lot of efforts into developing our software, which allow us assumed contribution from high-margin software and services revenue over the long term.
If these projections are realized, Nebius could outperform CoreWeave’s current 17% EBIT margin, demonstrating superior operational efficiency and a competitive advantage in the AI infrastructure market.
4. Elite management & top-tier AI Engineers
A non-negotiable requirement for all my investment theses is the conviction that the company is led by an exceptional and trustworthy executive team with skin in the game, which not only drives the existence of a moat but also ensures its sustainability over time. Nebius seems to check this box.
Nebius’s management team, spearheaded by founder and CEO Arkady Volozh, combines deep expertise, reliability, and engineering excellence, drawing on the rich heritage of Yandex, often called the “Google of Russia.” Volozh, who launched Yandex in 1997, grew it into a $31 billion tech leader by 2021, excelling in search, e-commerce, and AI infrastructure before its 2024 restructuring into Nebius Group, prompted by geopolitical challenges. His return as CEO in August 2024, after navigating EU sanctions, highlights his steadfast commitment, backed by a significant stake in Nebius, holding 55% of voting power, ensuring substantial skin in the game.
Alongside seasoned executives like Chief Business Officer Roman Chernin and Chief Product Officer Andrey Korolenko, Volozh leads a team of over 850 top-tier AI engineers, many long-tenured Yandex alumni with over 15 years of collaboration, including experts in data center construction and operations, hardware R&D, AI cloud platform development, and AI research. This depth of talent positions Nebius to maintain complete control over its technology stack, ensuring seamless integration from infrastructure to AI services with 24/7 global support.
The team’s longstanding partnerships with leading chipmakers and OEMs further strengthen Nebius’s infrastructure capabilities.
Lauded by investors like Accel for being “some of the best hardware and software engineers globally,” this team’s proven track record in building proprietary infrastructure underpins Nebius’s reliable execution of its vertically integrated AI cloud strategy, positioning its leadership as a key competitive strength.
5. Conclusion
Although Nebius rose from the restructured foundation of Yandex, a proven tech leader driven by exceptional engineers and management, its AI cloud venture launched only in July 2024 and remains relatively nascent, lacking sufficient financial data to fully validate the thesis presented here.
Qualitative indicators are compelling: the management team, led by Arkady Volozh, inspires confidence, and independent research from SemiAnalysis, which awards Nebius a Gold rating in its ClusterMAX™ GPU Cloud Rating System, underscores its potential.
However, Nebius must demonstrate that its full-stack approach delivers a sustainable edge, particularly in its software component, which still trails competitors in user experience. SemiAnalysis notes, “Nebius still lags behind competitors regarding user experience… Despite offering on-demand NVIDIA H100 GPUs at roughly $1.50 per hour… Nebius struggles with customer adoption [due to] overly complex and unintuitive” UI/UX, though the company is actively addressing these issues.
While not yet profitable, Nebius has the financial strength, with $1.44 billion in cash reserves and minimal debt, to invest aggressively and take risks.
Currently serving independent developers and AI-native companies, Nebius aims to capture frontier AI labs and enterprise clients, as Chief Business Officer Roman Chernin emphasizes in the Q1 2025 earnings call:
how quickly we get there will be a function of how fast we can scale and capture demand through more enterprise-level customers and longer-term contracts.
To challenge CoreWeave, which benefits from elite clients like OpenAI (a significant revenue driver), Nebius must accelerate enterprise adoption.
In summary, while Nebius presents a promising investment with strong potential for gains, its unproven financial track record keeps it outside my concentrated portfolio’s ten holdings.
Should I gain conviction and validate my thesis, I believe Nebius could offer an investment opportunity akin to buying NVIDIA with amplified exposure.
Exclusively leveraging NVIDIA GPUs, Nebius serves as a critical conduit for enterprises and developers to access these chips, capitalizing on the surging demand for GPU-powered AI workloads.
As the user base for NVIDIA GPUs expands, Nebius’s revenue growth is poised to scale as a multiple of NVIDIA’s GPU revenue growth, driven by the market share it captures in the AI cloud infrastructure space.
I’ll closely monitor upcoming earnings for data supporting my thesis, potentially using options to capitalize on rapid stock appreciation if market conditions align, without the direct risk of ownership.
Follow me on X for real-time updates.