Supercomputing 22 takes place this year at the Kay Bailey Hutchison Convention Center in Dallas, TX. SC22 programs run from November 13-18, 2022 while exhibits are open November 14-17, 2022.
SC22 sessions provide a leading technical agenda for professionals and students in the supercomputing community, and are delivered to the highest academic and professional standards. Best practices are shared across algorithms, applications, architectures and networks, clouds and distributed computing, data analytics, visualization, storage, and machine learning. Other topics covered include programming systems, system software, and the state of the practice in large-scale deployment and integration. Birds of a Feather (BoF) sessions provide a dynamic, noncommercial venue for conference attendees to openly discuss current topics of interest to the HPC community – from programming models to big data to accelerators to education. BoF sessions connect you with other attendees with similar interests and help you drive the conversation.
In many ways, SC22 marks the era of Exascale supercomputing: systems capable of performing at least 10^18 IEEE 754 double-precision (64-bit, FP64) operations (multiplications and/or additions) per second, also known as one exaFLOPS. Exascale computing is a significant achievement in computer engineering. It will enable improved scientific applications and better predictions in fields such as weather forecasting, climate modeling, and personalized medicine. Exascale also reaches the estimated processing power of the human brain at the neural level, a target of the Human Brain Project. In 2022 the world’s first public Exascale computer, Frontier, was announced. As of June 2022, it is the world’s fastest supercomputer.
Exhibitors are Returning to SC22
After a pandemic-necessitated hiatus, exhibitors are returning to SC22 at essentially pre-pandemic levels. The role supercomputing and HPC played in making this possible – enabling 35+ years of mRNA vaccine research to culminate in safe and remarkably effective SARS-CoV-2 vaccines, in spite of an explosion of variants – should not be underestimated. A community reunion of supercomputing computer engineers, scientists, academics, students, and all industries of the HPC community is expected. SC22 will set the standard for the education and engagement that fuels continued global technological advancement. All indicators show that SC22 is going to be the place to be in November!
PNY and SC22
PNY is excited to be joining our exhibiting partners on the show floor at SC22. Our partners bring wide and deep expertise in GPU acceleration of HPC and AI at supercomputing scale (NVIDIA Data Center GPUs and select NVIDIA RTX boards), and in the advanced networking and switching fabrics required by Exascale computing (the entire NVIDIA Networking line, including the latest InfiniBand and Ethernet products and Data Processing Units, also known as DPUs). Deep knowledge of the NVIDIA HPC SDK, a comprehensive suite of compilers, libraries, and tools that maximizes developer productivity and the performance and portability of HPC applications, is also available within PNY’s partner ecosystem. The NVIDIA HPC SDK provides C, C++, and Fortran compilers that GPU-accelerate HPC modeling and simulation applications written with standard C++ and Fortran parallelism, OpenACC directives, or CUDA. Its GPU-accelerated math libraries maximize the performance of common HPC algorithms, while optimized communication libraries enable standards-based multi-GPU and scalable systems programming that takes full advantage of the latest NVIDIA networking offerings. Performance profiling and debugging tools simplify porting and optimization of HPC applications, and containerization tools enable easy deployment on premises or in the cloud. With support for NVIDIA GPUs and Arm, OpenPOWER, or x86-64 CPUs running Linux, the HPC SDK provides the tools you need to build NVIDIA GPU-accelerated applications.
NVIDIA H100 and the NVIDIA HPC SDK
The NVIDIA H100 Data Center Tensor Core GPU for PCIe, based on NVIDIA’s groundbreaking new Hopper architecture, provides an order-of-magnitude leap for accelerated computing, is designed for Exascale workloads, and includes a dedicated Transformer Engine to solve trillion-parameter language models.
NVIDIA H100 Tensor Core technology supports a broad range of math precisions, providing a single accelerator for every compute workload. The NVIDIA H100 PCIe supports double precision (FP64), single precision (FP32), half precision (FP16), and integer (INT8) compute tasks. NVIDIA H100 Tensor Core GPUs for mainstream (PCIe) servers include an NVIDIA AI Enterprise five-year software subscription, including enterprise support, simplifying AI adoption at HPC performance levels. This ensures your next project can access the AI frameworks and tools required to bring unprecedented AI capabilities to HPC and supercomputing tasks. NVLink support delivers 900 GB/s of bidirectional bandwidth, 5x the performance of PCIe Gen5, to maximize application performance for large workloads.
The NVIDIA H100 PCIe card features Multi-Instance GPU (MIG) capability. This can be used to partition the GPU into as many as seven hardware-isolated GPU instances, providing a unified platform that enables elastic HPC data centers to adjust dynamically to shifting workload demands. Administrators can also allocate right-sized resources for everything from the smallest job to the largest multi-GPU workload. NVIDIA H100’s versatility means that IT managers can maximize the utility of every GPU in their data center. NVIDIA H100 PCIe cards use three NVIDIA® NVLink® bridges, the same bridges used with NVIDIA A100 PCIe cards. This allows two NVIDIA H100 PCIe cards to be connected to deliver 900 GB/s of bidirectional bandwidth, or 5x the bandwidth of PCIe Gen5, to maximize application performance for large workloads.
When used with NVIDIA’s HPC SDK, the H100 and other NVIDIA products such as the A100, A100X, A30, and A30X deliver the following benefits:
Widely used HPC applications, including VASP, Gaussian, ANSYS Fluent, GROMACS, and NAMD, use CUDA, OpenACC, and GPU-accelerated math libraries to deliver breakthrough performance. These same software tools can be used to GPU-accelerate your applications and achieve dramatic speedups and power efficiency using NVIDIA GPUs.
Build and optimize applications for over 99 percent of today’s Top500 systems, including those based on NVIDIA GPUs or x86-64, Arm, or OpenPOWER CPUs. You can use drop-in libraries, C++17 parallel algorithms, and OpenACC directives to GPU-accelerate your code and ensure your applications are fully portable to other compilers and systems.
Maximize science and engineering throughput and minimize coding time with a single integrated suite that allows you to quickly port, parallelize, and optimize for GPU acceleration, including industry-standard communication libraries for multi-GPU and scalable computing, plus profiling and debugging tools for analysis.
PNY Partners Participating at SC22
The following PNY partners will be exhibiting at SC22:
ACE | Booth 2013 | Mission-Driven Technology Solutions
AMAX | Booth 439 | Liquid Cooled Servers and HPC Solutions
ASA Computers | Booth 4246 | HPC Servers
GRAID Technology | Booth 4341 | Enterprise Data Storage
Inspur | Booth 2233 | Cloud Computing and Big Data Services Provider
Microway | Booth 2213 | HPC Systems
One Stop Systems (OSS) | Booth 1628 | Server for GPU Accelerated Computing, Dual NVIDIA RTX A6000 with NVLink® and NVIDIA Networking Solutions
Penguin Solutions | Booth 2400 | AI, HPC and Edge
Silicon Mechanics | Booth 1422 | HPC
Advanced Clustering | Booth 3643 | HPC
ASPEN Systems | Lounge Event (11/16) | HPC
ATIPA Technologies | Booth 3022 | User Group Research
LIQID INC. | Booth 2306 | Composable Design (NVIDIA H100)
When visiting the exhibit hall, be sure to check out the solutions and expertise of the PNY partners noted above. If you need additional information on NVIDIA Data Center or select NVIDIA RTX™ solutions for HPC, visit www.pny.com or email firstname.lastname@example.org.