Learning Objectives Checklist

  • Indicate the key components and features of the NVIDIA data center platform
  • Identify the GPU and CPU requirements for AI data centers, the different products available, and their intended use cases
  • Understand the purpose and capabilities of multi-GPU systems
  • Describe the multi-node GPU interconnect technology
  • Determine the role of DPUs and DOCA in an AI data center
  • Evaluate the benefits of using NVIDIA-Certified Systems
  • Explain the basics of AI data center networks
  • Outline the networking requirements essential for AI data centers
  • List the main features of InfiniBand and Ethernet networking technologies employed in AI data centers
  • Describe the NVIDIA networking portfolio
  • Identify the storage requirements necessary for AI workloads
  • Explain the key concepts of storage file systems and apply them in relevant scenarios
  • Comprehend the benefits of using validated storage partners in an AI data center
  • Articulate what goes into planning data center deployments and how space, power, and cooling considerations affect these plans
  • Discuss how NVIDIA optimizes energy efficiency in data centers through reduced networking infrastructure combined with power-efficient GPUs
  • Describe the cooling architecture of GPUs in data centers
  • Understand how to improve efficiency through co-location
  • Explain the value of reference architectures
  • Describe the information found in reference architectures
  • Identify available NVIDIA reference architectures
  • Describe the components in the NVIDIA DGX BasePOD reference architecture