Data is fundamentally changing the way companies do business, driving demand for data scientists and increasing the complexity of their workflows. To meet these challenges, data scientists (and others) need sophisticated hardware, development, and software platforms. Traditionally, that meant IT staff or high-cost data science professionals spending many hours configuring an engineering workstation to meet their daily demands. With the launch of Quadro RTX, a new class of system was introduced – the NVIDIA-Powered Data Science Workstation – which delivers a fully integrated hardware and software solution for data science. This is just one of the reasons why Digital Engineering (DE) magazine selected the NVIDIA-Powered Data Science Workstation as its Editor’s Pick of the Week.
This is going to be a long blog post, but by the end, you will have an Ubuntu environment connected to the NVIDIA GPU Cloud platform, pulling a TensorFlow container and ready to start benchmarking GPU performance.
Let's split this into four phases:
1) Install Ubuntu 18.04 LTS and the NVIDIA graphics driver
2) Install Docker CE and NVIDIA Docker v2.0
3) Set up NVIDIA GPU Cloud and pull down GPU-optimized Docker containers
4) Run the TensorFlow benchmark
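To give a feel for what each phase involves, here is a rough command-line sketch. The driver package name and the container tag shown below are examples, not prescriptions; the exact driver version and NGC image tag you use will depend on your hardware and when you set up the system, so treat this as an outline rather than a copy-paste script.

```shell
# Phase 1: install the NVIDIA graphics driver on Ubuntu 18.04 LTS
# (driver version shown is illustrative; `ubuntu-drivers devices` lists the recommended one)
sudo apt update
sudo apt install -y nvidia-driver-430

# Phase 2: install Docker CE and NVIDIA Docker v2.0
# (nvidia-docker2 comes from NVIDIA's own apt repository, which must be added first)
sudo apt install -y docker.io
sudo apt install -y nvidia-docker2
sudo systemctl restart docker

# Phase 3: log in to NVIDIA GPU Cloud with your NGC API key and pull a
# GPU-optimized TensorFlow container (tag is an example)
docker login nvcr.io
docker pull nvcr.io/nvidia/tensorflow:19.03-py3

# Phase 4: start the container with GPU access, ready to run the TensorFlow benchmark
docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tensorflow:19.03-py3
```

Each of these phases is covered in detail in the sections that follow, so don't worry if a step above is unfamiliar.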
It's time to get started!
In this blog post, I will go over the hardware considerations I used when putting together a system for benchmarking GPU performance for Deep Learning using Ubuntu 18.04, NVIDIA GPU Cloud (NGC) and TensorFlow. Keep in mind that everyone will have different budgets and requirements for their own systems, which can and will result in a wide range of configurations. My particular list should serve only as a reference; your system will likely be different based on your own requirements.
Before we dive into the details, let’s go over what we are seeking to accomplish. Our goal is to build a system for comparing the compute performance of different GPUs; therefore, the GPU should be the only variable that changes between test runs. To keep our tests consistent, we will remove any potential bottlenecks that could negatively impact GPU performance.
As a PC enthusiast, I love pitting hardware solutions against each other to determine their relative performance at a particular task. This process is known as “benchmarking,” and benchmarking results are usually considered the best tool for evaluating the merits of competing systems when making a purchase decision.
In this 3-part blog series, we’ll discuss how to build a system, with an emphasis on benchmarking GPU performance for Deep Learning using Ubuntu 18.04, NVIDIA GPU Cloud (NGC) and TensorFlow.