CUDA Programming


    Moreno Marzolla
    Dip. di Informatica—Scienza e Ingegneria (DISI)
    Università di Bologna

    moreno.marzolla@unibo.it
Copyright © 2014, 2017–2022
Moreno Marzolla, Università di Bologna, Italy
https://www.moreno.marzolla.name/teaching/HPC/

This work is licensed under the Creative Commons Attribution-ShareAlike 4.0
International License (CC BY-SA 4.0). To view a copy of this license, visit
http://creativecommons.org/licenses/by-sa/4.0/ or send a letter to Creative Commons,
543 Howard Street, 5th Floor, San Francisco, California, 94105, USA.




                                  CUDA Programming                                     2
                 Acknowledgments
●
    Most of the content of this presentation is from Mark
    Harris (Nvidia Corporation), “CUDA C/C++ BASICS”
    –   http://developer.download.nvidia.com/compute/developertrainingmaterials/presentations/cuda_language/Introduction_to_CUDA_C.pptx
●
    Salvatore Orlando (Univ. Ca' Foscari di Venezia)
●
    Tim Mattson (Intel Labs)
    –   “Hands-on Intro to CUDA for OpenCL programmers”
●
    CUDA C programming guide
    –   http://docs.nvidia.com/cuda/cuda-c-programming-guide/



                            CUDA Programming                     3
                                 CUDA
●
    CUDA = Compute Unified Device Architecture
●
    Exposes a data-parallel programming model
●
    CUDA vs OpenCL
    –   OpenCL is a portable and open alternative for programming
        GPUs, CPUs and other architectures
         ●
             OpenCL is based on standard C/C++ plus libraries
         ●
             No vendor lock-in
    –   However, OpenCL is being adopted slowly
         ●
             Some implementations not as efficient as CUDA (up to 30% slower)
         ●
             Some implementations are buggy
         ●
             OpenCL programs are much more verbose than CUDA ones
    –   Eventually the problems above will be solved

                                  CUDA Programming                              4
Hello, world: CUDA

    [figure: "Hello, world" source listing in CUDA, shown as an image in the
    original slides]

       CUDA Programming   5
Hello, world: OpenCL

    [figure: "Hello, world" source listing in OpenCL, noticeably longer than
    the CUDA version, shown as an image in the original slides]

        CUDA Programming   6
                       Terminology
●
    Host
    –   The CPU and its memory (host memory)
●
    Device
    –   The GPU and its memory (device memory)

    [figure: the host (CPU + host memory) and the device (GPU + device
    memory) connected through the PCIe bus]

                            CUDA Programming                   7
© NVIDIA corp.
                     Basic concepts
●
    CUDA separates a program into
    –   a CPU program (the host program), which includes all I/O
        operations and operations for user interaction, and
    –   a GPU program (the device program), which contains all
        computations to be executed on the GPU.
●
    The simplest case of interaction between host
    program and device program is implemented by
    –   a host program that first copies data into the global memory
        of the GPU
    –   the same host program calls device functions to initiate the
        processing on the GPU


                             CUDA Programming                          8
                 Simple Processing Flow

                              PCI Bus




  1. Copy input data from CPU memory
     to GPU memory




                                    CUDA Programming   9
© NVIDIA corp.
                 Simple Processing Flow

                              PCI Bus




  1. Copy input data from CPU memory
     to GPU memory
  2. Load GPU program and execute,
     caching data on chip for
     performance




                                    CUDA Programming   10
© NVIDIA corp.
                 Simple Processing Flow

                              PCI Bus




  1. Copy input data from CPU memory
     to GPU memory
  2. Load GPU program and execute,
     caching data on chip for
     performance
  3. Copy results from GPU memory to
     CPU memory



                                    CUDA Programming   11
© NVIDIA corp.
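
    A skeletal sketch of the three steps above, using the CUDA calls
    introduced in the following slides (the buffer names d_in, d_out, h_in,
    h_out and the kernel name process are illustrative, not taken from these
    slides):

       float *d_in, *d_out;
       cudaMalloc((void **)&d_in,  size);
       cudaMalloc((void **)&d_out, size);
       /* 1. Copy input data from CPU memory to GPU memory */
       cudaMemcpy(d_in, h_in, size, cudaMemcpyHostToDevice);
       /* 2. Load GPU program and execute */
       process<<<blocks, threads>>>(d_in, d_out);
       /* 3. Copy results from GPU memory to CPU memory */
       cudaMemcpy(h_out, d_out, size, cudaMemcpyDeviceToHost);
       cudaFree(d_in); cudaFree(d_out);
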
                 Anatomy of a GPU

    [figure: a GPU contains many Streaming Multiprocessors (SM), which share
    an L2 cache, the GPU global memory and the interface to the host; each
    Streaming Multiprocessor contains many Streaming Processors (SP), a warp
    scheduler and execution control logic, a register file, an L1 cache and
    a shared memory]

                                   CUDA Programming                                 12
         A Programmer's view of a GPU
●
    A THREAD is the minimal unit of
    work.
    –   All the threads execute the same
        KERNEL function
    –   Threads are grouped in WARPs of 32
        for scheduling, i.e. WARPs are the
        minimal units of scheduling
●
    A BLOCK is an independent
    subpiece of work that can be
    executed in any order by a SM
    –   A 3D array of threads
    –   Max no. of threads in a block: 512
        threads (compute capability 1.x) or 1024
        threads (compute capability ≥ 2.0)
●
    A GRID is a piece of work that can
    be executed by the GPU
    –   A 2D array of BLOCKs (3D since
        compute capability 2.0)

    [figure: a grid of blocks (Block 0,0 … Block 2,1); each block is an
    array of threads (Thread 0,0 … Thread 4,2)]

                                         CUDA Programming                                                            13
                           Blocks and Grid
●
    Threads within a block are identified by threadIdx.{x,y,z}
●
    Blocks within a grid are identified by blockIdx.{x,y,z}
    (Compute Capability ≥ 2.x) or blockIdx.{x,y}
    (Compute Capability < 2.x)

    [figure: a 3D block of threads inside a grid of blocks; the grid is 3D
    for Compute Capability ≥ 2.x and 2D for Compute Capability < 2.x]

                                CUDA Programming                   14
               Automatic scalability
●
    Do we need to take care
    of the device computing
    power?
    –   No, because the block
        scheduler can re-arrange
        blocks
    –   A Grid contains a set of
        independent blocks,
        which can be executed in
        any order


                                   http://docs.nvidia.com/cuda/cuda-c-programming-guide/

                            CUDA Programming                                         15
            CUDA thread scheduling
●
    A CUDA warp is a group of 32 CUDA threads that
    execute simultaneously
    –   Identifiable uniquely by dividing the Thread Index by 32
    –   The hardware is most efficiently utilized when all threads in a
        warp execute instructions from the same program address
    –   If threads in a warp diverge, then some execution pipelines
        go unused
    –   If threads in a warp access aligned, contiguous blocks of
        DRAM, the accesses are coalesced into a single high-
        bandwidth access
●
    A CUDA warp represents the minimum granularity of
    efficient SIMD execution
                              CUDA Programming                       16
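
    A hedged illustration of the warp/lane structure described above (the
    kernel and array names are made up for this sketch, and it uses the
    threadIdx built-in variable introduced later in these slides):

       __global__ void warp_info( int *warp_of, int *lane_of )
       {
           const int warp = threadIdx.x / 32;  /* warp index within the block */
           const int lane = threadIdx.x % 32;  /* position within the warp    */
           warp_of[threadIdx.x] = warp;        /* threads with the same warp  */
           lane_of[threadIdx.x] = lane;        /* index execute in lockstep   */
       }
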
        Single Instruction Multiple Data
●
    Individual threads of a warp start
    together at the same program
    address
●
    Each thread has its own program
    counter and register state
    –   Each thread is free to branch and
        execute independently
    –   This provides, at the abstraction level,
        MIMD-like branch behavior
●
    Each branch will be executed
    serially
    –   Threads not following the current
        branch will be disabled

                                                     int x;
                                                     AAA
                                                     if (x > 0) {
                                                         XXX
                                                     } else {
                                                         YYY
                                                     }
                                                     BBB

    [figure: the warp executes AAA, then XXX (threads that took the else
    branch are disabled), then YYY (threads that took the if branch are
    disabled), and finally BBB]

                                 CUDA Programming                         17
Hands-on introduction to CUDA programming




                CUDA Programming            18
                          Hello World!
   ●
       Standard C that runs on the host
   ●   The NVIDIA compiler (nvcc) can be used to compile
       programs, even with no device code

        /* cuda-hello0.cu */
        #include <stdio.h>
        int main(void)
        {
             printf("Hello World!\n");
             return 0;
        }

        $ nvcc cuda-hello0.cu
        $ ./a.out
        Hello World!




                               CUDA Programming                       19
© NVIDIA corp.
        Hello World! with Device Code
                  /* cuda-hello1.cu */
                  #include <stdio.h>
                  __global__ void mykernel(void) { }

                  int main(void)
                  {
                     mykernel<<<1,1>>>( );
                     printf("Hello World!\n");
                     return 0;
                  }

   Two new syntactic elements: the __global__ qualifier and the
   <<<…>>> kernel launch




                         CUDA Programming             20
© NVIDIA corp.
           Hello World! with Device Code
                    __global__ void mykernel(void) {   }


   ●   CUDA/C keyword __global__ indicates a function that:
       –   Runs on the device
       –   Is called from host code
       –   __global__ functions must return void
   ●   nvcc separates source code into host and device
       components
       –   Device functions (e.g., mykernel()) are processed by the
           NVIDIA compiler
       –   Host functions (e.g., main()) are processed by the standard
           host compiler (e.g., gcc)

                                CUDA Programming                         21
© NVIDIA corp.
           Hello World! with Device Code
                           mykernel<<<1,1>>>( );


   ●
       Triple angle brackets mark a call from host code to
       device code
       –   Also called a “kernel launch”
       –   We’ll return to the parameters (1,1) in a moment
   ●
       That’s all that is required to execute a function on the
       GPU!




                                CUDA Programming                  22
© NVIDIA corp.
         Hello World! with Device Code
        #include <stdio.h>
        __global__ void mykernel(void)
        { }

        int main(void) {
           mykernel<<<1,1>>>( );
           printf("Hello World!\n");
           return 0;
        }

        $ nvcc cuda-hello1.cu
        $ ./a.out
        Hello World!


   ●   mykernel() does nothing



                               CUDA Programming                           23
© NVIDIA corp.
       Parallel Programming in CUDA/C
   ●
       But wait… GPU computing is about massive
       parallelism!
   ●
       We need a more interesting example…
   ●
       We’ll start by adding two integers and build up to
       vector addition

    [figure: scalar addition a + b = c on the left, element-wise vector
    addition a[ ] + b[ ] = c[ ] on the right]
                            CUDA Programming                24
© NVIDIA corp.
                 Addition on the Device
   ●
       A simple kernel to add two integers

                 __global__ void add(int *a, int *b, int *c)
                 {
                    *c = *a + *b;
                 }


   ●   As before __global__ is a CUDA C/C++ keyword
       meaning
       –   add() will execute on the device
       –   add() will be called from the host


                                CUDA Programming               25
© NVIDIA corp.
                 Addition on the Device
   ●
       Note the use of pointers for the variables

                 __global__ void add(int *a, int *b, int *c)
                 {
                    *c = *a + *b;
                 }


   ●   add() runs on the device, so a, b and c must point to
       device memory
   ●
       We need to allocate memory on the GPU



                               CUDA Programming                26
© NVIDIA corp.
                    Memory Management
   ●
       Host and device memory are separate entities
       –   Device pointers point to GPU memory
            ●
                May be passed to/from host code
            ●
                May not be dereferenced in host code
       –   Host pointers point to CPU memory
            ●
                May be passed to/from device code
            ●
                May not be dereferenced in device code
   ●
       Simple CUDA API for handling device memory
       –   cudaMalloc(), cudaFree(), cudaMemcpy()
       –   Similar to the C equivalents malloc(), free(), memcpy()



                                    CUDA Programming             27
© NVIDIA corp.
       Addition on the Device: main()
       /* cuda-vecadd0.cu */
       int main(void) {
            int a, b, c;             /* host copies of a, b, c */
            int *d_a, *d_b, *d_c;    /* device copies of a, b, c */
            const size_t size = sizeof(int);
            /* Allocate space for device copies of a, b, c */
            cudaMalloc((void **)&d_a, size);
            cudaMalloc((void **)&d_b, size);
            cudaMalloc((void **)&d_c, size);
            /* Setup input values */
            a = 2; b = 7;
            /* Copy inputs to device */
            cudaMemcpy(d_a, &a, size, cudaMemcpyHostToDevice);
            cudaMemcpy(d_b, &b, size, cudaMemcpyHostToDevice);
            /* Launch add() kernel on GPU */
            add<<<1,1>>>(d_a, d_b, d_c);
            /* Copy result back to host */
            cudaMemcpy(&c, d_c, size, cudaMemcpyDeviceToHost);
            /* Cleanup */
            cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
            return 0;
       }
                              CUDA Programming                        28
© NVIDIA corp.
            Coordinating Host & Device
   ●
       Kernel launches are asynchronous
       –   Control returns to the CPU immediately
   ●
       CPU needs to synchronize before consuming the
       results
   cudaMemcpy()                  Blocks the CPU until the copy is complete;
                                 the copy begins when all preceding CUDA
                                 calls have completed
   cudaMemcpyAsync()             Asynchronous, does not block the CPU
   cudaDeviceSynchronize()       Blocks the CPU until all preceding CUDA
                                 calls have completed




                               CUDA Programming                                  29
© NVIDIA corp.
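
    A minimal sketch of the distinction above, reusing the names of the
    previous example (the explicit cudaDeviceSynchronize() is redundant here,
    because the blocking cudaMemcpy() already waits, but it makes the point
    explicit):

       add<<<1,1>>>(d_a, d_b, d_c);   /* asynchronous: returns immediately   */
       cudaDeviceSynchronize();       /* block until the kernel has finished */
       cudaMemcpy(&c, d_c, size, cudaMemcpyDeviceToHost); /* blocking copy   */
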
                 Running in parallel




                      CUDA Programming   30
© NVIDIA corp.
                     Moving to Parallel
   ●
       GPU computing is about massive parallelism
       –   So how do we run code in parallel on the device?
                           add<<< 1, 1 >>>();


                           add<<< N, 1 >>>();


                        # of blocks         # of threads per block

   ●   Instead of executing add() once, execute N times in
       parallel



                                  CUDA Programming                   31
© NVIDIA corp.
          Vector Addition on the Device
●   With add() running in parallel we can do vector addition
●   Terminology: each parallel invocation of add() is referred
    to as a block
    –   The set of blocks is referred to as a grid
    –   Each invocation can refer to its block index using blockIdx.x
           __global__ void add(int *a, int *b, int *c)
           {
              c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
           }


●   By using blockIdx.x to index into the array, each block
    handles a different index
                              CUDA Programming                     32
© NVIDIA corp.
           Vector Addition on the Device
   ●
       On the device, each block can execute in parallel

              __global__ void add(int *a, int *b, int *c)
              {
                 c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
              }


        blockIdx.x = 0   blockIdx.x = 1       blockIdx.x = 2   blockIdx.x = 3

    [figure: four CUDA blocks (Block 0 … Block 3), each containing a single
    CUDA thread (Thr 0); block k computes c[k] = a[k] + b[k]]
                                      CUDA Programming                            33
© NVIDIA corp.
/* cuda-vecadd1.cu */
#define N 1024
int main(void) {
     int *a, *b, *c;        /* host copies of a, b, c */
     int *d_a, *d_b, *d_c; /* device copies of a, b, c */
     const size_t size = N * sizeof(int);
     /* Alloc space for device copies of a, b, c */
     cudaMalloc((void **)&d_a, size);
     cudaMalloc((void **)&d_b, size);
     cudaMalloc((void **)&d_c, size);
     /* Alloc space for host copies of a,b,c and setup input values */
     a = (int *)malloc(size); vec_init(a, N);
     b = (int *)malloc(size); vec_init(b, N);
     c = (int *)malloc(size);
     /* Copy inputs to device */
     cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
     cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);
     /* Launch add() kernel on GPU with N blocks */
     add<<<N,1>>>(d_a, d_b, d_c);
     /* Copy result back to host */
     cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);
     /* Cleanup */
     free(a); free(b); free(c);
     cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
     return 0;
}
                              CUDA Programming                      34
© NVIDIA corp.
                               Review
  ●
      Difference between host and device
      –   Host ↔ CPU,    device ↔ GPU
  ●   Using __global__ to declare a function as device code
      –   Executes on the device
      –   Called from the host
  ●
      Passing parameters from host code to a device function
  ●
      Basic device memory management
      –   cudaMalloc()
      –   cudaMemcpy()
      –   cudaFree()
  ●
      Launching parallel kernels
      –   Launch N copies of add() with add<<<N,1>>>(…);
      –   Use blockIdx.x to access block index
                                   CUDA Programming            35
© NVIDIA corp.
Introducing threads




     CUDA Programming   36
                      CUDA Threads
   ●
       Terminology: a block can be split into parallel threads
   ●   Let’s change add() to use parallel threads instead of
       parallel blocks
         __global__ void add(int *a, int *b, int *c)
         {
            c[threadIdx.x] = a[threadIdx.x] + b[threadIdx.x];
         }


   ●   We use threadIdx.x instead of blockIdx.x
   ●   Need to make one change in main()…



                              CUDA Programming                   37
© NVIDIA corp.
                                  CUDA Threads
         __global__ void add(int *a, int *b, int *c)
         {
            c[threadIdx.x] = a[threadIdx.x] + b[threadIdx.x];
         }

                                                                   blockIdx.x = 0

    [figure: a single block (Block 0) containing eight threads,
    threadIdx.x = 0 … 7; thread k computes c[k] = a[k] + b[k]]
                                                                  CUDA Programming                                                                    38
© NVIDIA corp.
/* cuda-vecadd2.cu */
#define N 1024
int main(void) {
     int *a, *b, *c;        /* host copies of a, b, c */
     int *d_a, *d_b, *d_c; /* device copies of a, b, c */
     const size_t size = N * sizeof(int);
     /* Alloc space for device copies of a, b, c */
     cudaMalloc((void **)&d_a, size);
     cudaMalloc((void **)&d_b, size);
     cudaMalloc((void **)&d_c, size);
     /* Alloc space for host copies of a,b,c and setup input values */
     a = (int *)malloc(size); random_ints(a, N);
     b = (int *)malloc(size); random_ints(b, N);
     c = (int *)malloc(size);
     /* Copy inputs to device */
     cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
     cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);
     /* Launch add() kernel on GPU with N threads */
     add<<<1,N>>>(d_a, d_b, d_c);
     /* Copy result back to host */
     cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);
     /* Cleanup */
     free(a); free(b); free(c);
     cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
     return 0;
}
                              CUDA Programming                     39
© NVIDIA corp.
Combining threads and blocks




          CUDA Programming     40
        Combining Blocks and Threads
   ●
       We have seen parallel vector addition using:
       –   Many blocks with one thread each
       –   One block with many threads
   ●
       Let’s adapt vector addition to use both blocks and
       threads
       –   Why? We’ll come to that…
   ●
       First let’s discuss data indexing…




                               CUDA Programming             41
© NVIDIA corp.
                    Combining Blocks and Threads
●
           We must somehow assign to each thread a different
           array element to operate on
                     blockIdx.x = 0              blockIdx.x = 1              blockIdx.x = 2

    [figure: 12 array elements processed by 3 blocks (Block 0 … Block 2) of
    4 threads each; within each block threadIdx.x = 0 … 3, and each thread
    computes one element c[i] = a[i] + b[i]]

                                                                                        CUDA Programming                                                                                                42
       Indexing Arrays with Blocks and
                   Threads
   ●   No longer as simple as using blockIdx.x or
       threadIdx.x alone
   ●
       Consider indexing an array with one element per
       thread, 8 threads per block
                 threadIdx.x    threadIdx.x        threadIdx.x

        0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7

          blockIdx.x = 0       blockIdx.x = 1     blockIdx.x = 2

   ●
       With M threads per block a unique index for each
       thread is given by:
             int index = threadIdx.x + blockIdx.x * M;
                               CUDA Programming                    43
© NVIDIA corp.
                 Indexing arrays: Example
   ●
       Which thread will operate on the red element?
                                                  threadIdx.x = 4
        Thread Blocks
        0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7

          blockIdx.x = 0      blockIdx.x = 1        blockIdx.x = 2

        Array elements
        0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23



             int index = threadIdx.x + blockIdx.x * M;
                       =      4      +     2      * 8;
                       = 20;

                               CUDA Programming                         44
© NVIDIA corp.
        Vector Addition with Blocks and
                   Threads
   ●   Use the built-in variable blockDim.x to get the
       number of threads per block
          int index = threadIdx.x + blockIdx.x * blockDim.x;
   ●   Combined version of add() to use parallel threads
       and parallel blocks:
        __global__ void add(int *a, int *b, int *c) {
            int index = threadIdx.x + blockIdx.x * blockDim.x;
            c[index] = a[index] + b[index];
        }


   ●   What changes need to be made in main()?

                               CUDA Programming                  45
© NVIDIA corp.
       Addition with blocks and threads
             #define N (2048*2048)
             #define BLKDIM 1024

             int main(void) {
                …
                /* Launch add() kernel on GPU */
                add<<<N/BLKDIM, BLKDIM>>>(d_a, d_b, d_c);
                …
             }
                   Number of      Number of threads
                     blocks           per block

   ●
        However, the problem size might not be a multiple of the
        block size...



                               CUDA Programming             46
© NVIDIA corp.
         Handling arbitrary vector sizes
   ●
       Avoid accessing beyond the end of the array
       __global__ void add(int *a, int *b, int *c, int n) {
           int index = threadIdx.x + blockIdx.x * blockDim.x;
           if (index < n) {
               c[index] = a[index] + b[index];
           }
       }


   ●
       Update kernel launch
       add<<<(N + BLKDIM-1)/BLKDIM, BLKDIM>>>(d_a, d_b, d_c, N);


   ●
       See cuda-vecadd3.cu

                              CUDA Programming                     47
© NVIDIA corp.
                             Review
   ●
       Launching parallel kernels
       –   Launch ~N copies of add() with
           add<<<(N + BLKDIM-1)/BLKDIM, BLKDIM>>>(…);
       –   Use blockIdx.x to access block index
       –   Use threadIdx.x to access thread index within block
   ●
       Assign array elements to threads:

           int index = threadIdx.x + blockIdx.x * blockDim.x;




                              CUDA Programming                   48
© NVIDIA corp.
             Why Bother with Threads?
   ●
       Threads seem unnecessary
       –   They add a level of complexity
       –   What do we gain?
   ●
       Unlike parallel blocks, threads have mechanisms to:
       –   Communicate
       –   Synchronize
   ●
       To look closer, we need a new example…




                                CUDA Programming             49
© NVIDIA corp.
Cooperating threads




     CUDA Programming   50
                              1D Stencil
   ●
       Consider applying a stencil to a 1D array of elements
       –   Each output element is the sum of input elements within a
           given radius
   ●
       If RADIUS is 3, then each output element is the sum of
       7 input elements
       –   The first and last RADIUS elements of the output array are
           not computed
    [figure: a 1D input array; each output element is the sum of the input
    elements within distance RADIUS of it (RADIUS elements on each side,
    plus the element itself)]
                                CUDA Programming                           51
© NVIDIA corp.                                      See cuda-stencil1d.c
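
    A serial sketch of this computation in plain C (only an illustration of
    the problem statement, not the code of cuda-stencil1d.c; RADIUS is
    assumed to be a compile-time constant):

       #define RADIUS 3

       void stencil_1d_serial( const int *in, int *out, int n )
       {
           int i, offset;
           /* the first and last RADIUS elements of out[] are not computed */
           for (i = RADIUS; i < n - RADIUS; i++) {
               int result = 0;
               for (offset = -RADIUS; offset <= RADIUS; offset++) {
                   result += in[i + offset];
               }
               out[i] = result;
           }
       }
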
           Implementing Within a Block
●
    Each thread
    processes one
    output element
    –   blockDim.x
        elements per block
●
    Input elements are
    read several times
    –   With radius 3, each
        input element is
        read seven times
    –   With radius R, each
        input element is
        read (2R+1) times
                              CUDA Programming   52
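
    A hedged sketch of this per-thread scheme using global memory only (one
    possible baseline; the actual cuda-stencil1d.c may differ). As in the
    shared-memory version shown later, gindex is offset by RADIUS because
    the input array includes a halo of RADIUS elements at each end:

       __global__ void stencil_1d_naive( const int *in, int *out )
       {
           const int gindex = threadIdx.x + blockIdx.x * blockDim.x + RADIUS;
           int result = 0, offset;
           /* each thread re-reads its 2*RADIUS+1 inputs from global memory */
           for (offset = -RADIUS; offset <= RADIUS; offset++) {
               result += in[gindex + offset];
           }
           out[gindex] = result;
       }
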
        Sharing Data Between Threads
   ●
       Global memory accesses are likely to cause a
       bottleneck due to the limited memory bandwidth
   ●
       Within a block, threads can share data via shared
       memory
       –   Extremely fast on-chip memory, user-managed
       –   Think of it as a user-managed local cache
   ●   Declare using __shared__, allocated per thread-
       block
   ●
       Data is not visible to threads in other blocks



                              CUDA Programming             53
© NVIDIA corp.
                 CUDA memory model

    [figure: each thread has its own registers and local memory; the threads
    of a thread block share a per-block shared memory; all threads access
    the global memory, the constant memory and the texture memory]

                               CUDA Programming                          54
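
    A minimal sketch of how these memory spaces appear in CUDA C (the
    variable names are invented for this illustration; it assumes at most
    256 threads per block):

       __device__   float d_global[256];   /* global memory, device-wide      */
       __constant__ float d_coeff[16];     /* constant memory, read-only      */

       __global__ void memory_spaces_demo( float *p ) /* p → global memory   */
       {
           __shared__ float tile[256];     /* shared memory, per thread block */
           float x = d_coeff[0];           /* x lives in a register           */
           tile[threadIdx.x] = p[threadIdx.x] + d_global[threadIdx.x] + x;
           p[threadIdx.x] = tile[threadIdx.x];
       }
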
       Implementing with shared memory
   ●
       Cache data in shared memory
         –   Read (blockDim.x + 2 × radius) input elements from
             global memory to shared memory
         –   Compute blockDim.x output elements
         –   Write blockDim.x output elements to global memory
         –   Each block needs a ghost area (halo) of radius elements at
             each boundary

    [figure: the stripe of in[] read by one block, with a halo on the left
    and a halo on the right, and the blockDim.x output elements written
    to out[]]
                                          CUDA Programming                        55
© NVIDIA corp.
    Implementing with shared memory
●
    Let us make a few simplifying assumptions
     –   The array length is a multiple of the thread block size
     –   The input (and output) array already includes a halo of
         2*RADIUS elements
          ●
              The halo is ignored for the output array
●
    Idea
     –   Each thread block keeps a local cache of
         blockDim.x + 2*RADIUS elements
     –   Each thread copies one element from the global array to the
         local cache
     –   The first RADIUS threads also take care of filling the halo


                                    CUDA Programming                  56
        Implementing with shared memory

    [figure: in[] is divided into chunks of blockDim.x elements; each block
    copies its chunk, plus RADIUS halo elements on each side, into the local
    array temp[]; gindex is the index into in[], lindex the index into
    temp[]; threads 0 … RADIUS-1 also fill the two halos]

             __shared__ int temp[BLKDIM + 2 * RADIUS];
             const int gindex = threadIdx.x + blockIdx.x * blockDim.x + RADIUS;
             const int lindex = threadIdx.x + RADIUS;
             /* … */
             temp[lindex] = in[gindex];
             if (threadIdx.x < RADIUS) {
                  temp[lindex - RADIUS] = in[gindex - RADIUS];
                  temp[lindex + BLKDIM] = in[gindex + BLKDIM];
             }

                                        CUDA Programming                      57
         Stencil kernel (does not work!)
    __global__ void stencil_1d(int *in, int *out) {
       __shared__ int temp[BLKDIM + 2 * RADIUS];
       int gindex = threadIdx.x + blockIdx.x * blockDim.x + RADIUS;
       int lindex = threadIdx.x + RADIUS;
       int result = 0, offset;
       /* Read input elements into shared memory */
       temp[lindex] = in[gindex];
       if (threadIdx.x < RADIUS) {
           temp[lindex - RADIUS] = in[gindex - RADIUS];
           temp[lindex + blockDim.x] = in[gindex + blockDim.x];
       }
       /* Apply the stencil -- Wrong! some threads may read elements of
          temp[] that other threads have not yet written */
       for (offset = -RADIUS ; offset <= RADIUS ; offset++) {
           result += temp[lindex + offset];
       }
       /* Store the result */
       out[gindex] = result;
    }
                             CUDA Programming                        60
© NVIDIA corp.
                           The problem
   ●
       All threads are not necessarily fully synchronized
   ●   Suppose that thread (blockDim.x - 1) reads the halo
       before thread 0 has fetched it
       –   Data race!

    [figure: each warp of the thread block copies a slice of in[] into
    temp[]; the figure shows only a few threads per warp, while CUDA warps
    actually contain 32 threads]

                                CUDA Programming                            61
© NVIDIA corp.
       The solution: __syncthreads()
   ●
       Synchronizes all threads within a block
       –   Used to prevent RAW / WAR / WAW hazards
   ●
       All threads must reach the barrier
       –   In conditional code, the condition must be uniform across the
           block




                                CUDA Programming                      62
© NVIDIA corp.
              Stencil kernel that works
   __global__ void stencil_1d(int *in, int *out) {
       __shared__ int temp[BLKDIM + 2 * RADIUS];
       int gindex = threadIdx.x + blockIdx.x * blockDim.x + RADIUS;
       int lindex = threadIdx.x + RADIUS;
       int result = 0, offset;
       /* Read input elements into shared memory */
       temp[lindex] = in[gindex];
       if (threadIdx.x < RADIUS) {
           temp[lindex - RADIUS] = in[gindex - RADIUS];
           temp[lindex + blockDim.x] = in[gindex + blockDim.x];
       }
       __syncthreads();
       /* Apply the stencil */
       for (offset = -RADIUS ; offset <= RADIUS ; offset++) {
           result += temp[lindex + offset];
       }
       /* Store the result */
       out[gindex] = result;
   }
                                See cuda-stencil1d-shared.c
                                CUDA Programming                               63
© NVIDIA corp.
                           Review
●   Use __shared__ to declare a variable/array in
    shared memory
    –   Data is shared between threads in a block
    –   Not visible to threads in other blocks
●   Use __syncthreads() as a barrier
    –   Use to prevent data hazards




                             CUDA Programming       64
Managing the device




     CUDA Programming   65
             Timing CUDA kernels
●   You can use the timing routines provided in hpc.h
●
    However, since kernel invocations are asynchronous,
    you must call cudaDeviceSynchronize() to wait for the
    kernel to complete execution

      #include "hpc.h"
      …
      double tstart, tend;

      tstart = hpc_gettime();
      mykernel<<<X, Y>>>( );   /* kernel invocation         */
      cudaDeviceSynchronize(); /* wait for kernel to finish */
      tend = hpc_gettime();

      printf("Elapsed time %f\n", tend – tstart);
                             CUDA Programming                    66
          The __device__ qualifier
●   The __device__ qualifier defines functions that
    –   execute on the device
    –   can be called from device code only
●   __device__ functions are inlined, so they can return
    a value
         __device__ float cuda_fmaxf(float a, float b)
         {
            return (a>b ? a : b);
         }

         __global__ void my_kernel( float *v, int n )
         {
            int i = threadIdx.x + blockIdx.x * blockDim.x;
            if (i<n) {
                v[i] = cuda_fmaxf(1.0, v[i]);
            }
          }
                              CUDA Programming                67
             The __host__ qualifier
●
    (Optional) denotes a function that
    –   is executed on the host
    –   can be called from host code only
    –   default behavior when neither __global__ nor
        __device__ are specified
●   You can use both the __host__ and __device__
    qualifiers for the same function
    –   The compiler produces two versions of the function: one for
        the GPU and one for the CPU

        __host__ __device__ float my_fmaxf(float a, float b)
        {
           return (a>b ? a : b);
        }
                             CUDA Programming                         68
      Recap function declaration

                                           Executed on:    Only callable from:

 __device__ float deviceFunc()                Device              Device

 __host__ float hostFunc()                     Host                Host

 __global__ void kernelFunc()                 Device               Host




                       CUDA Programming                              69
          The __device__ qualifier
●   The __device__ qualifier can be applied to variables
    –   easy way to statically allocate variables on the device
    –   can be applied to global variables only
    –   cudaMemcpyToSymbol() / cudaMemcpyFromSymbol()
        must be used to copy data from/to __device__ variables
    –   will see an example in the lab
#define BLKDIM 1024

__device__ float d_buf[BLKDIM];

int main( void )
{
   float buf[BLKDIM];
   …
   /* copy host buf (source) into device d_buf (dest) */
   cudaMemcpyToSymbol(d_buf, buf, BLKDIM*sizeof(float));
   …
   /* copy device d_buf (source) into host buf (dest) */
   cudaMemcpyFromSymbol(buf, d_buf, BLKDIM*sizeof(float));
}
                          CUDA Programming                        70
                       Reporting Errors
   ●
       All CUDA API calls return an error code
       (cudaError_t)
       –   Error in the API call itself, OR
       –   Error in an earlier asynchronous operation (e.g., kernel)
       –   cudaSuccess means no error
   ●
       Get the error code for the last error:
       cudaError_t cudaGetLastError(void)
   ●
       Get a string to describe the error:
        const char *cudaGetErrorString(cudaError_t)
       printf("%s\n", cudaGetErrorString(cudaGetLastError()));


                                 CUDA Programming                      72
© NVIDIA corp.
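
    A minimal sketch of the idiom, using the two calls above together with
    cudaDeviceSynchronize() (kernel and launch parameters are illustrative):

       mykernel<<<grid, block>>>( );
       cudaError_t err = cudaGetLastError();  /* error from the launch itself */
       if (err != cudaSuccess) {
           fprintf(stderr, "Launch failed: %s\n", cudaGetErrorString(err));
       }
       err = cudaDeviceSynchronize();         /* error from kernel execution  */
       if (err != cudaSuccess) {
           fprintf(stderr, "Kernel failed: %s\n", cudaGetErrorString(err));
       }
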
                        Reporting Errors
 ●   Some useful macros defined in hpc.h
      –   cudaSafeCall(Exp) executes Exp and checks its return
          status
          ●
              Exp can be any CUDA function returning an error code, e.g.,
              cudaMalloc, cudaMemcpy, …
     –   cudaCheckError() checks error code from last CUDA
         operation
          ●
              Usually, a kernel call
          ●   cudaCheckError() calls cudaDeviceSynchronize()

cudaSafeCall( cudaMemcpy(d_a, h_a, size, cudaMemcpyHostToDevice) );
my_kernel<<< 1, 1 >>>(d_a); cudaCheckError();
cudaSafeCall( cudaMemcpy(h_a, d_a, size, cudaMemcpyDeviceToHost) );


                                   CUDA Programming                         73
         Limits on the lab machine
●   Use the deviceQuery command
●   isi-raptor03 has three identical GPUs with the
    following features
         CUDA capability                      6.1
         Global memory                        8114 MB
         CUDA cores                           1920
         Warp size                            32
         Shared memory per block              49152 B
         Constant memory                      65536 B
         Max number of threads per block      1024
         Max size of a thread block           (1024, 1024, 64)
         Max grid size                        (2147483647, 65535, 65535)
                                 CUDA Programming                          76
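
    These limits can also be queried programmatically; a minimal sketch
    using cudaGetDeviceProperties() of the CUDA runtime API:

       #include <stdio.h>

       int main( void )
       {
           cudaDeviceProp prop;
           cudaGetDeviceProperties(&prop, 0);   /* query device 0 */
           printf("CUDA capability:       %d.%d\n", prop.major, prop.minor);
           printf("Global memory:         %lu MB\n",
                  (unsigned long)(prop.totalGlobalMem / (1024*1024)));
           printf("Warp size:             %d\n", prop.warpSize);
           printf("Shared mem per block:  %lu B\n",
                  (unsigned long)prop.sharedMemPerBlock);
           printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
           printf("Max thread block size: (%d, %d, %d)\n", prop.maxThreadsDim[0],
                  prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
           printf("Max grid size:         (%d, %d, %d)\n", prop.maxGridSize[0],
                  prop.maxGridSize[1], prop.maxGridSize[2]);
           return 0;
       }
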
Higher Block Dimensions




       CUDA Programming   77
               Matrix-Matrix multiply
●
    Multiply two n × n matrices p and q
void matmul( float *p, float* q, float *r, int n)
{   int i, j, k; float v;
    for (i=0; i<n; i++) {
        for (j=0; j<n; j++) {
            v = 0.0;
            for (k=0; k<n; k++) {
                v += p[i*n + k] * q[k*n + j];
            }
            r[i*n + j] = v;
        }
    }
}

    [figure: element (i, j) of r is the dot product of row i of p and
    column j of q]
                               CUDA Programming               78
          Matrix-Matrix multiply with
                 thread blocks
●
    Decompose the result matrix r into
    square blocks
●
    Assign each block to a thread block
    –   Each thread of the thread block computes
        one element of the result matrix

    [figure: each CUDA thread block computes one square block of r, reading
    a stripe of rows of p and a stripe of columns of q]
                             CUDA Programming                  79
         Setting up the thread blocks
●
    The dim3 data type is used to define a one-, two-, or
    three-dimensional “size” for a thread block or grid
    –   dim3 blk(3)
         ●
             defines a variable blk representing a 3x1x1 block
    –   dim3 blk(3, 4)
         ●
             defines a variable blk representing a 3x4x1 block
    –   dim3 blk(3, 4, 7)
         ●
             defines a variable blk representing a 3x4x7 block




                                   CUDA Programming                80
                      Examples
●
    Launch 3 blocks, 16 threads per block (1D)
                mykernel<<<3, 16>>>( );

●
    The same
                dim3 grid(3);
                dim3 block(16);
                mykernel<<<grid, block>>>( );

●
    Launch (16 × 4) blocks, (8 × 8 × 8) threads per block
                dim3 grid(16, 4);    /* 2D */
                dim3 block(8, 8, 8); /* 3D */
                mykernel<<<grid, block>>>( );


                         CUDA Programming               81
                     Kernel invocation
●
    Setup and kernel invocation
    /* cuda-matmul.cu */
    #define BLKDIM 32

    int main( void )
    {
       …
       dim3 block(BLKDIM, BLKDIM);
       dim3 grid((N+BLKDIM-1)/BLKDIM, (N+BLKDIM-1)/BLKDIM);
       …
       /* Launch matmul() kernel on GPU */
       matmul<<<grid, block>>>(d_p, d_q, d_r, N);
       …
    }




                            CUDA Programming                  82
                The matmul kernel
●
    Each thread computes a single element r[i][j] of the
    result matrix r
         const int i = blockIdx.y * blockDim.y + threadIdx.y;
         const int j = blockIdx.x * blockDim.x + threadIdx.x;

    [figure: the row index i of r is obtained from blockIdx.y, blockDim.y
    and threadIdx.y; the column index j from blockIdx.x, blockDim.x and
    threadIdx.x]
                                                 CUDA Programming                             83
                 The matmul kernel
●
    The kernel function

    __global__ void matmul( float *p, float *q, float *r, int n )
    {
        const int i = blockIdx.y * blockDim.y + threadIdx.y;
        const int j = blockIdx.x * blockDim.x + threadIdx.x;
        int k;
        float val = 0.0;
        if ( i < n && j < n ) {
            for (k=0; k<n; k++) {
                val += p[i*n + k] * q[k*n + j];
            }
            r[i*n + j] = val;
        }
    }



                            CUDA Programming                        84
                Matrix-Matrix multiply
●
    Multiply two n × n matrices p and q
     void matmul( float *p, float* q, float *r, int n)
     {   int i, j, k; float v;
         for (i=0; i<n; i++) {
             for (j=0; j<n; j++) {
                 v = 0.0;
                 for (k=0; k<n; k++) {
                     v += p[i*n + k] * q[k*n + j];
                 }
                 r[i*n + j] = v;
             }
         }
     }

 How many times are the
 elements of matrix p accessed?

 How many times are the
 elements of matrix q accessed?

 (Each element of p, and each element of q, is read n times from global
 memory: once for every element of r in the same row or column,
 respectively.)
                                CUDA Programming                 85
     Reducing the memory pressure
●
    To reduce the number of read
    operations to global memory, we use
    __shared__ memory to cache the data
    needed by each block to compute its
    portion of the result
●
    This requires (2 × BLKDIM × n)
    elements, which might exceed the
    amount of shared memory allowed by
    the device

    [figure: the stripe of rows of p and the stripe of columns of q needed
    to compute one block of r]

                      CUDA Programming           86
     Reducing the memory pressure
●
    The solution: divide the (BLKDIM × n)
    stripes of p and q into square blocks of
    (BLKDIM × BLKDIM) elements each
●
    Operate on two blocks (for p and q) at a
    time

        R = P1 × Q1 + P2 × Q2 + P3 × Q3

    [figure: the stripe of p is split into blocks P1, P2, P3 and the stripe
    of q into blocks Q1, Q2, Q3; the result block R accumulates the products
    of corresponding blocks]
                          CUDA Programming                      87
           How the new kernel works
●
    For each m:
    –   Copy blocks from p and q into shared memory
        local_p and local_q in parallel
    –   compute the matrix product local_p ×
        local_q in parallel
    –   m ← m + BLKDIM

    [figure: at each step, the current BLKDIM × BLKDIM block of p and of q
    is loaded into the shared arrays local_p and local_q and multiplied;
    the blocks slide along the row stripe of p and the column stripe of q
    until the whole stripes have been processed]

                             CUDA Programming                 88
                         The new matmul kernel
       __global__ void matmulb( float *p, float *q, float *r, int n )
       {
           __shared__ float local_p[BLKDIM][BLKDIM];
           __shared__ float local_q[BLKDIM][BLKDIM];
           const int bx = blockIdx.x; const int by = blockIdx.y;
           const int tx = threadIdx.x; const int ty = threadIdx.y;
           const int i = by * BLKDIM + ty;
           const int j = bx * BLKDIM + tx;
           float v = 0.0; int m, k;
           for (m = 0; m < n; m += BLKDIM) {
               local_p[ty][tx] = p[i*n + (m + tx)];
               local_q[ty][tx] = q[(m + ty)*n + j];
               __syncthreads();
               for (k = 0; k < BLKDIM; k++) {
                   v += local_p[ty][k] * local_q[k][tx];
               }
               /* Wait for all threads in the block to complete before
                  starting the next iteration (overwriting local_p
                  and local_q) */
               __syncthreads();
           }
           r[i*n + j] = v;
       }
                                  CUDA Programming                            97
Credits: Salvatore Orlando
           The new matmul kernel
●
    The setup and kernel invocation remain the same (a typical launch is sketched below)
●
    See cuda-matmul.cu
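    A minimal sketch of the host-side launch, assuming n is a multiple of
    BLKDIM and that d_p, d_q and d_r have already been allocated on the
    device and filled with the input matrices (names are illustrative;
    the actual code is in cuda-matmul.cu):

        dim3 block(BLKDIM, BLKDIM);          /* BLKDIM × BLKDIM threads per block */
        dim3 grid(n / BLKDIM, n / BLKDIM);   /* one block per tile of r           */
        matmulb<<<grid, block>>>(d_p, d_q, d_r, n);
        cudaDeviceSynchronize();             /* wait for the kernel to finish     */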




                        CUDA Programming              98
Reduction on the GPU




      CUDA Programming   99
                 Problem statement
●
    Given a nonempty array of floats or ints, compute the
    sum of all elements of the array
●
    Basic idea
    –   Decompose the problem across thread blocks
    –   Each thread block computes a partial reduction
    –   The CPU completes the reduction




                             CUDA Programming               100
                      Solution #0 (naive)
  ●
       Thread 0 of each block computes the local sum
    [Figure: the host array h_a[] is copied to the device array d_a[]
    with cudaMemcpy; each block of BLKDIM elements is loaded into the
    shared array temp[], and thread 0 of each block (blockIdx.x =
    0, 1, 2, …) sums temp[] into d_sums[blockIdx.x]; d_sums[] is then
    copied back to h_sums[] and the CPU adds the per-block sums]

                            CUDA Programming                        101
             Solution #0 kernel
#define BLKDIM 512
#define N_OF_BLOCKS 1024
#define N ((N_OF_BLOCKS)*(BLKDIM))

__device__ int d_sums[N_OF_BLOCKS];
int h_sums[N_OF_BLOCKS];

/* Shared memory is not useful here; it will be useful in the other
   versions of this kernel */
__global__ void sum( int *a, int n )
{
    __shared__ int temp[BLKDIM];
    int lindex = threadIdx.x;
    int bindex = blockIdx.x;
    int gindex = threadIdx.x + blockIdx.x * blockDim.x;
    temp[lindex] = a[gindex];
    __syncthreads();
    if ( 0 == lindex ) {
        int i, my_sum = 0;
        for (i=0; i<blockDim.x; i++) {
            my_sum += temp[i];
        }
        d_sums[bindex] = my_sum;
    }
}

See cuda-reduction0.cu

                            CUDA Programming                        102
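    The kernel above only produces one partial sum per block; a minimal
    sketch of how the host can finish the reduction, using the names
    declared above (not the literal code of cuda-reduction0.cu):

        /* copy the per-block partial sums back to the host … */
        cudaMemcpyFromSymbol(h_sums, d_sums, N_OF_BLOCKS * sizeof(int));
        /* … and let the CPU add them up */
        int i, s = 0;
        for (i = 0; i < N_OF_BLOCKS; i++) {
            s += h_sums[i];
        }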
                      Solution #1 (better)
  ●
       All threads within each block cooperate to compute the local sum
    [Figure: as before, h_a[] is copied to d_a[] and each block loads
    BLKDIM elements into temp[]; the threads of each block then add
    pairs of elements in parallel, halving the number of partial sums
    at every step (tree reduction), until temp[0] holds the block's
    sum, which is stored in d_sums[blockIdx.x]; d_sums[] is finally
    copied back to h_sums[] and summed by the CPU]

                            CUDA Programming                        103
                  Solution #1 kernel
/* This kernel only works if:
   - BLKDIM is a power of two;
   - n is a multiple of BLKDIM */
__global__ void sum( int *a, int n )
{
    __shared__ int temp[BLKDIM];
    const int lindex = threadIdx.x;
    const int bindex = blockIdx.x;
    const int gindex = threadIdx.x + blockIdx.x * blockDim.x;
    int bsize = blockDim.x / 2;

    temp[lindex] = a[gindex];
    __syncthreads();
    while ( bsize > 0 ) {
        if ( lindex < bsize ) {
            temp[lindex] += temp[lindex + bsize];
        }
        bsize = bsize / 2;
        __syncthreads();
    }
    if ( 0 == lindex ) {
        d_sums[bindex] = temp[0];
    }
}

See cuda-reduction1.cu

                            CUDA Programming                        104
(Optional material)




            Atomic operations with CUDA




                     CUDA Programming     105
(Optional material)
                     Atomic functions




   ●
       Perform read-modify-write operations on one 32- or
       64-bit word in global or shared memory
   ●
       The operation is guaranteed to be atomic
   ●
       Example:
       –   Each thread adds 3 to the content of the memory location a
       –   a must point to a valid address in global or shared memory

                  __global__ void my_kernel( int *a )
                  {
                     atomicAdd(a, 3);
                  }
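    A hedged usage sketch (host-side names are illustrative): if the
    kernel above is launched with, say, 1000 threads on a location
    initialized to zero, the final value is 3 × 1000 = 3000 regardless
    of the order in which the threads execute:

        int *d_a, result;
        cudaMalloc((void **)&d_a, sizeof(int));
        cudaMemset(d_a, 0, sizeof(int));             /* *d_a = 0         */
        my_kernel<<<1, 1000>>>(d_a);                 /* 1000 atomic adds */
        cudaMemcpy(&result, d_a, sizeof(int), cudaMemcpyDeviceToHost);
        /* result == 3000 */
        cudaFree(d_a);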



                                CUDA Programming                    106
(Optional material)
                     Atomic functions




   ●
       Many atomic functions are polymorphic
       –   i.e., they work for items of different types
       –   atomicAdd() works for items of type int, unsigned int,
           float, double, and others
   ●
       Not all functions support the same types
   ●
       Check the documentation:

        https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#atomic-functions



                               CUDA Programming                     107
(Optional material)
                     Atomic functions

 ●   int atomicAdd(int* addr, int val);
     –   int old = *addr; *addr += val; return old;
 ●   int atomicSub(int* addr, int val);
     –   int old = *addr; *addr -= val; return old;
 ●   int atomicExch(int* addr, int val);
     –   int old = *addr; *addr = val; return old;
 ●   int atomicMin(int* addr, int val);
     –   int old = *addr; *addr = min(*addr, val); return old;
 ●   int atomicMax(int* addr, int val);
     –   int old = *addr; *addr = max(*addr, val); return old;
 ●   int atomicCAS(int* addr, int cmp, int val);
     –   int old = *addr; *addr = (old == cmp ? val : old); return old;
 ●
     ...and others
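    atomicCAS() can be used to synthesize atomic operations that are not
    provided natively. A hedged sketch (not part of the course material)
    of an atomic multiplication on a float, following the
    compare-and-swap loop pattern documented in the CUDA C programming
    guide; atomicMul is a hypothetical helper name:

        /* Atomically compute *addr = *addr * val and return the old value.
           Uses the __float_as_int / __int_as_float device intrinsics to
           reinterpret the bit pattern. */
        __device__ float atomicMul( float *addr, float val )
        {
            int *addr_as_int = (int *)addr;
            int old = *addr_as_int, assumed;
            do {
                assumed = old;
                /* try to replace the old bits with the bits of old*val;
                   atomicCAS returns the value found at *addr_as_int */
                old = atomicCAS(addr_as_int, assumed,
                                __float_as_int(val * __int_as_float(assumed)));
            } while (assumed != old);   /* retry if another thread intervened */
            return __int_as_float(old);
        }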
                                   CUDA Programming                       108
(Optional material)
         Reduction with atomic operations



   /* This kernel works if:
      - BLKDIM is a power of two;
      - n is a multiple of BLKDIM;
      - *result is initially ZERO */
   __global__ void sum( int *a, int n, int *result )
   {
       __shared__ int temp[BLKDIM];
       int lindex = threadIdx.x;
       int gindex = threadIdx.x + blockIdx.x * blockDim.x;
       int bsize = blockDim.x / 2;

       temp[lindex] = a[gindex];
       __syncthreads();
       while ( bsize > 0 ) {
           if ( lindex < bsize ) {
               temp[lindex] += temp[lindex + bsize];
           }
           bsize = bsize / 2;
           __syncthreads();
       }
       if ( 0 == lindex ) {
           atomicAdd(result, temp[0]);
       }
   }

   See cuda-reduction3.cu
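   A minimal host-side sketch (names are illustrative, not necessarily
   those of cuda-reduction3.cu), assuming d_a already holds the N input
   values on the device: the accumulator must be zeroed before the
   launch, since every block atomically adds its partial sum to it; a
   single cudaMemcpy then retrieves the final result, with no per-block
   array and no reduction loop on the CPU:

       int *d_result, result;
       cudaMalloc((void **)&d_result, sizeof(int));
       cudaMemset(d_result, 0, sizeof(int));     /* *result is initially zero */
       sum<<<N_OF_BLOCKS, BLKDIM>>>(d_a, N, d_result);
       cudaMemcpy(&result, d_result, sizeof(int), cudaMemcpyDeviceToHost);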
                                  CUDA Programming                         109
Memory Access Optimization Techniques for
                GPUs




                CUDA Programming            110
                CUDA memory model
    [Figure: each thread block has its own shared memory; each thread
    has private registers and local memory; all blocks access the
    device-wide global memory, constant memory and texture memory]

                            CUDA Programming                        111
             Memory access patterns
●
    Each memory access moves 32 or 128 consecutive
    bytes
    –   So, if a thread just needs a single float (4B), this results in
        32B or 128B being moved
●
    The GPU can pack together (coalesce) memory
    accesses when consecutive threads access
    consecutive memory addresses
    –   Examples follow
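    A minimal code-level sketch (not from the course material) of the two
    situations; the following slides then look at the specific cases
    pictorially:

        /* Consecutive threads read consecutive addresses: the accesses of a
           warp coalesce into few memory transactions. */
        __global__ void copy_coalesced( const float *in, float *out, int n )
        {
            const int i = threadIdx.x + blockIdx.x * blockDim.x;
            if ( i < n ) out[i] = in[i];
        }

        /* Consecutive threads read addresses `stride` elements apart: the
           warp touches many different cache lines, so many transactions are
           needed and most of the data moved is unused. */
        __global__ void copy_strided( const float *in, float *out, int n, int stride )
        {
            const int i = (threadIdx.x + blockIdx.x * blockDim.x) * stride;
            if ( i < n ) out[i] = in[i];
        }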




                               CUDA Programming                           112
                                          Caching load
      ●
          Warp requests 32 aligned, consecutive 4-byte words
      ●
          Addresses fall within 1 cache-line
            –   Warp needs 128 bytes
            –   128 bytes move across the bus on a miss
            –   Bus utilization: 100%
            –   Transactions: 1

                   [Figure: the 32 consecutive addresses requested by the
                   warp all fall within a single 128-byte cache line]
                                                     CUDA Programming   113

Slide credits: Justin Luitjens, NVIDIA Corporation
                                          Caching load
      ●
          Warp requests 32 aligned, permuted 4-byte words
      ●
          Addresses fall within 1 cache-line
            –   Warp needs 128 bytes
            –   128 bytes move across the bus on a miss
            –   Bus utilization: 100%
            –   Transactions: 1

                   [Figure: the 32 permuted addresses still fall within a
                   single 128-byte cache line]
                                                     CUDA Programming   114

Slide credits: Justin Luitjens, NVIDIA Corporation
                                          Caching load
      ●
          Warp requests 32 misaligned, consecutive 4-byte
          words
      ●
          Addresses fall within 2 cache-lines
            –   Warp needs 128 bytes
            –   256 bytes move across the bus on misses
            –   Bus utilization: 50%
            –   Transactions: 2

                   [Figure: the 32 consecutive but misaligned addresses
                   straddle two 128-byte cache lines]
                                                     CUDA Programming   115

Slide credits: Justin Luitjens, NVIDIA Corporation
                                          Caching load
      ●
          All threads in a warp request the same 4-byte word
      ●
          Addresses fall within a single cache-line
            –   Warp needs 4 bytes
            –   128 bytes move across the bus on a miss
            –   Bus utilization: 3.125%

                   [Figure: all 32 addresses point to the same 4-byte word
                   within a single cache line]
                                                     CUDA Programming   116

Slide credits: Justin Luitjens, NVIDIA Corporation
                                          Caching load
      ●
          Warp requests 32 scattered 4-byte words
      ●
          Addresses fall within N cache-lines
            –   Warp needs 128 bytes
            –   N × 128 bytes move across the bus on a miss
            –   Bus utilization: 128 / (N × 128)

                   [Figure: the 32 addresses are scattered over N different
                   cache lines]
                                                     CUDA Programming   117

Slide credits: Justin Luitjens, NVIDIA Corporation
                        Example
●
    See cuda-image-rotation.cu
    [Figure: a 3×3 image with elements A…I stored in device memory is
    rotated 90° clockwise into a second device buffer: row A B C of the
    input becomes the last column of the output (output rows G D A,
    H E B, I F C)]
                                CUDA Programming                   118
                        Example
●
    See cuda-image-rotation.cu
    [Figure: each tile of the input image (here the element A) is first
    staged from device memory into shared memory]
                         CUDA Programming   119
                        Example
●
    See cuda-image-rotation.cu
    [Figure: the tile staged in shared memory is then written to its
    rotated destination position in the output buffer in device memory]
                         CUDA Programming                   120
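A hedged sketch of a tiled rotation kernel in the spirit of the figures
above (this is not the literal code of cuda-image-rotation.cu; it
assumes a square n×n image with n a multiple of BLKDIM and a 2D grid of
BLKDIM×BLKDIM blocks). Each block stages one input tile in shared
memory, then writes it out rotated; both the read and the write touch
consecutive addresses, so they can be coalesced:

    /* 90° clockwise rotation: out[j][n-1-i] = in[i][j] */
    __global__ void rotate_clockwise( const float *in, float *out, int n )
    {
        __shared__ float tile[BLKDIM][BLKDIM+1];  /* +1 avoids bank conflicts */
        const int x = blockIdx.x * BLKDIM + threadIdx.x;  /* input column */
        const int y = blockIdx.y * BLKDIM + threadIdx.y;  /* input row    */

        /* coalesced read: consecutive threads read consecutive addresses */
        tile[threadIdx.y][threadIdx.x] = in[y * n + x];
        __syncthreads();

        /* the input tile (blockIdx.x, blockIdx.y) lands at block row
           blockIdx.x, block column (gridDim.y - 1 - blockIdx.y) of the output */
        const int ox = (gridDim.y - 1 - blockIdx.y) * BLKDIM + threadIdx.x;
        const int oy = blockIdx.x * BLKDIM + threadIdx.y;

        /* coalesced write: consecutive threads write consecutive addresses;
           the element comes from the "rotated" position inside the tile */
        out[oy * n + ox] = tile[BLKDIM - 1 - threadIdx.x][threadIdx.y];
    }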
           Example
●
    Array of Structures (AoS) containing three floats
    –   Think of x, y, z as the coordinates of a set of points
●
    Each memory transaction reads data that is not used
    –   Low efficiency

        typedef struct {
            float x;
            float y;
            float z;
        } point3d;

        __global__ void
        compute( point3d *points, int n )
        {
            int i = threadIdx.x;
            float x = points[i].x;
            float y = points[i].y;
            float z = points[i].z;
            /* use x, y, z … */
        }

    [Figure: the addresses touched by a warp are spread over the
    interleaved x y z x y z … layout starting at points[0], so each
    transaction also moves fields that are not used]
                                    CUDA Programming                           121
           Example
●
    Structure of Arrays (SoA) containing three arrays of floats
●
    Memory accesses can now be coalesced

        typedef struct {
            float *x;
            float *y;
            float *z;
        } points3d;

        __global__ void
        compute( points3d *points, int n )
        {
            int i = threadIdx.x;
            float x = points->x[i];
            float y = points->y[i];
            float z = points->z[i];
            /* use x, y, z … */
        }

    [Figure: the addresses touched by a warp now fall on consecutive
    elements of points.x[] (and likewise for y[] and z[]), so the
    accesses coalesce]
                               CUDA Programming                           124
                     GPU memory optimization
                           guidelines
      ●
          Try to coalesce memory accesses
            –   Align starting address (may require padding; see the sketch below)
            –   A warp should access contiguous memory locations
            –   Restructure your data so that it is a Structure of Arrays (SoA)
      ●
          Have enough concurrent accesses to saturate the bus
            –   Launch enough threads to maximize throughput
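       A minimal sketch of address alignment through padding for 2D data,
       using cudaMallocPitch (illustrative, not taken from the course
       samples; width and height are hypothetical variables). Each row of
       the 2D array is padded so that it starts at a properly aligned
       address; pitch is the padded row size in bytes:

           float *d_img;
           size_t pitch;   /* bytes per padded row, chosen by the runtime */
           cudaMallocPitch((void **)&d_img, &pitch, width * sizeof(float), height);

           /* inside a kernel, row r then starts at byte offset r * pitch:
              float *row = (float *)((char *)d_img + r * pitch);
              float v = row[c];                                            */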




                                                     CUDA Programming        127

Slide credits: Justin Luitjens, NVIDIA Corporation
               Performance Evaluation
●
    The usual concept of speedup cannot be used
    –   The program has no control over the number of CUDA cores used
    –   The hardware multiplexes CUDA threads to CUDA cores
●
    We need different metrics, e.g.
    –   Throughput: number of data items processed per second, as a
        function of the input size
    –   Speedup vs. an efficient (i.e., parallel) CPU implementation

    [Figure: throughput plot; source: https://moderngpu.github.io/scan.html]
●
    See examples
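    A minimal timing sketch using CUDA events (illustrative; some_kernel
    and its arguments are placeholders): the elapsed time of the kernel
    is measured on the device, and the throughput is the number of
    processed items divided by that time:

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        some_kernel<<<grid, block>>>(d_data, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);      /* wait until the kernel has finished */

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);      /* time in milliseconds */
        printf("Throughput: %f items/s\n", n / (ms / 1000.0));

        cudaEventDestroy(start); cudaEventDestroy(stop);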
                                 CUDA Programming                                 128