
NVIDIA DGX™ B300
Setting a new bar for AI factory performance, from training to inference.
AI adoption across industries has grown exponentially in a short period of time, signaling a fundamental shift in how businesses approach their AI transformation. Organizations that were once slow to adopt new technology are now racing to outfit their data centers with the right infrastructure and augment their teams with the right talent, but many are finding that deploying AI is not as simple as they had planned.
While the promise of generative AI is transformative, enterprises face several common challenges in adopting and scaling it: resolving integration complexity, filling critical gaps in expertise, and managing energy consumption and cost. Many of these organizations are discovering that they are not equipped to scale and operate the way hyperscalers do.
NVIDIA DGX™ B300, the building block of NVIDIA DGX SuperPOD, is a purpose-built AI infrastructure solution tailored to the computational demands of AI reasoning, with full-stack software that streamlines enterprise AI deployment. Powered by NVIDIA Blackwell Ultra GPUs, DGX B300 delivers 144 petaFLOPS for inference and 72 petaFLOPS for training, all in a new form factor designed to fit seamlessly into the modern data center and compatible with both NVIDIA MGX™ and traditional enterprise racks. With DGX B300, any enterprise can run training and inference on diverse AI workloads with an unprecedented level of efficiency.
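As a rough point of reference, the quoted figures imply a 2:1 ratio of inference to training throughput; the arithmetic below is a minimal sketch from the numbers above, assuming (this is an assumption, not stated here) that the inference figure is quoted at FP4 precision and the training figure at FP8.

\[
\frac{144\ \text{petaFLOPS (inference)}}{72\ \text{petaFLOPS (training)}} = 2
\]

For exact precision formats, sparsity assumptions, and per-GPU breakdowns, consult the official DGX B300 specification sheet.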