Don’t miss an NVIDIA CUDA Python Workshop Dec. 5

Do you want to run your Python code 2X, 5X, or even 10X faster?
Do you want to learn how to leverage GPUs to accelerate your Python code?
Do you want to learn a new skill and be 10X smarter?

If Yes, then this workshop is for you!

In coordination with NVIDIA, CRC is excited to host the workshop Fundamentals of Accelerated Computing with CUDA Python. This full-day workshop, led by NVIDIA Deep Learning Institute instructors, will teach you the fundamental tools and techniques for running GPU-accelerated Python applications. Here are the logistics:


Location: University of Pittsburgh, 311A/B Schenley Place (see map at: https://crc.pitt.edu/about-us)
Date: Thursday Dec. 5, 8:30am-5pm
Power Source: We will provide coffee and snacks in the morning, as well as lunch.
Prerequisites: Attendees will need to bring their own laptops. Attendees are expected to have basic Python competency, including familiarity with variable types, loops, conditional statements, functions, and array manipulation, as well as basic NumPy competency, including the use of ndarrays and ufuncs.

Register at: https://crc.pitt.edu/nvidia19-reg. Attendance is capped at 30.

Workshop Outline

  • Introduction (15 mins)
    • Meet the instructor.
    • Create an account at courses.nvidia.com/join
  • Introduction to CUDA Python with Numba (120 mins; a brief code sketch for this module follows the outline)
    • Begin working with the Numba compiler and CUDA programming in Python.
    • Use Numba decorators to GPU-accelerate numerical Python functions.
    • Optimize host-to-device and device-to-host memory transfers.
  • Break (60 mins)
  • Custom CUDA Kernels in Python with Numba (120 mins; see the second sketch after the outline)
    • Learn CUDA’s parallel thread hierarchy and how to use it to extend parallel program possibilities.
    • Launch massively parallel custom CUDA kernels on the GPU.
    • Utilize CUDA atomic operations to avoid race conditions during parallel execution.
  • Break (15 mins)
  • RNG, Multidimensional Grids, and Shared Memory for CUDA Python with Numba (120 mins; see the third sketch after the outline)
    • Use the xoroshiro128+ RNG to support GPU-accelerated Monte Carlo methods.
    • Learn multidimensional grid creation and how to work in parallel on 2D matrices.
    • Leverage on-device shared memory to promote memory coalescing while reshaping 2D matrices.
  • Final Review (15 mins)
    • Review key learnings and wrap up with questions.
    • Complete the assessment to earn a certificate.
    • Take the workshop survey.
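
To give a flavor of the first module, here is a minimal sketch of GPU-accelerating an element-wise NumPy-style function with Numba’s @vectorize decorator while managing data transfers explicitly. It assumes a CUDA-capable GPU and the numba package; the function name add_ufunc and the array sizes are illustrative, not taken from the workshop materials.

    import numpy as np
    from numba import vectorize, cuda

    # Compile an element-wise function into a CUDA ufunc (illustrative example).
    @vectorize(['float32(float32, float32)'], target='cuda')
    def add_ufunc(x, y):
        return x + y

    n = 1_000_000
    a = np.arange(n, dtype=np.float32)
    b = 2 * a

    # Copy the inputs to the GPU once so the call below does not trigger
    # implicit host-to-device transfers.
    d_a = cuda.to_device(a)
    d_b = cuda.to_device(b)

    d_out = add_ufunc(d_a, d_b)    # runs on the GPU and returns a device array
    result = d_out.copy_to_host()  # one explicit device-to-host copy
    print(result[:5])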
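
The second module moves to writing custom kernels. The sketch below, again an illustration under the same assumptions (numba plus a CUDA GPU), builds a simple histogram: cuda.grid(1) flattens the block and thread indices into one global index, and cuda.atomic.add keeps the counts correct when many threads update the same bin.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def histogram_kernel(x, xmin, xmax, out):
        nbins = out.shape[0]
        bin_width = (xmax - xmin) / nbins
        i = cuda.grid(1)  # global index from the block/thread hierarchy
        if i < x.size:
            b = int((x[i] - xmin) / bin_width)
            if b >= 0 and b < nbins:
                cuda.atomic.add(out, b, 1)  # atomic update avoids a race condition

    x = np.random.normal(size=100_000).astype(np.float32)
    counts = np.zeros(16, dtype=np.int32)

    threads_per_block = 128
    blocks_per_grid = (x.size + threads_per_block - 1) // threads_per_block
    histogram_kernel[blocks_per_grid, threads_per_block](
        x, np.float32(-4.0), np.float32(4.0), counts)
    print(counts, counts.sum())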
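
The third module adds GPU random number generation, 2D grids, and shared memory; the sketch below previews only the RNG piece. It estimates pi by Monte Carlo, drawing uniform samples from per-thread xoroshiro128+ states provided by numba.cuda.random. The kernel name monte_carlo_pi and the sample counts are illustrative.

    import numpy as np
    from numba import cuda
    from numba.cuda.random import (create_xoroshiro128p_states,
                                   xoroshiro128p_uniform_float32)

    @cuda.jit
    def monte_carlo_pi(rng_states, samples_per_thread, out):
        tid = cuda.grid(1)
        if tid < out.size:
            inside = 0
            for _ in range(samples_per_thread):
                x = xoroshiro128p_uniform_float32(rng_states, tid)
                y = xoroshiro128p_uniform_float32(rng_states, tid)
                if x * x + y * y <= 1.0:
                    inside += 1
            out[tid] = 4.0 * inside / samples_per_thread

    threads_per_block = 64
    blocks = 32
    n_threads = threads_per_block * blocks

    rng_states = create_xoroshiro128p_states(n_threads, seed=42)
    estimates = np.zeros(n_threads, dtype=np.float32)
    monte_carlo_pi[blocks, threads_per_block](rng_states, 10_000, estimates)
    print(estimates.mean())  # should land near 3.14159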