(py)opencl on graphics cards: an introduction to GPGPU. kolodziejj.info
Only this kind of GPU? (images: CSIRO, nvidia, benchmark.pl)
Object detection in a 36 Mpix photo, CPU-bound. Implementations: Matlab: 6 hours; Python + OpenCL: 1 minute.
kolodziejj.info @unit03
GPGPU; applications; what do I need to know?; terminology; a walkthrough of example code; what next?
GPGPU General-Purpose computing on GPU
Applications: stream processing of images, and of large arrays in a similar fashion; images, video; cryptography; physics (from astrophysics to quantum physics); biology, medicine; databases; machine learning...
GPGPU platforms: nvidia CUDA, Microsoft's F# + DirectCompute, AMD's FireStream, C++ AMP, OpenACC
What you need to know: from Python: NumPy; from C: functions, the basic data types ([unsigned] integer, float, double), arrays and pointers, and those strange { } ;
A parallelizable example: vector addition!
A parallelizable example: C[0] = A[0] + B[0]; C[1] = A[1] + B[1]; C[2] = A[2] + B[2]; ...; C[n-1] = A[n-1] + B[n-1]
A parallelizable example:

    // Serial CPU version: one loop iteration per element.
    void add(float * a, float * b, unsigned int n, float * c) {
        for (unsigned int i = 0; i < n; ++i) {
            c[i] = a[i] + b[i];
        }
    }
CPU, serially: one core performs all N operations.
A parallelizable example: every element is independent, so each addition C[i] = A[i] + B[i] can be done in parallel.
CPU, in parallel: with 4 cores, each core performs N / 4 operations.
A few numbers. Addition: N = 24, one thread: 24 steps; N = 24, 4 threads: 6 steps; N = 2^20, 4 threads: 2^18 steps. We need more threads!
OpenCL: kernels, work items, work groups, the memory model.
Kernels: roughly, C functions with the __kernel keyword:

    __kernel void add(__global const float * a,
                      __global const float * b,
                      const unsigned int n,
                      __global float * c) {
        int gid = get_global_id(0);
        if (gid < n) {
            c[gid] = a[gid] + b[gid];
        }
    }
Work items: instances of a kernel; very limited private memory (on the order of KB); they should live very briefly, but there can be very many of them.
Work items: each element-wise addition C[i] = A[i] + B[i] becomes its own work item (e.g. one work item computes C[2] = A[2] + B[2]).
Work groups: groups of work items; the group size is the local work size; the maximum size depends on the device; a work group has its own local memory.
Work groups: the work items computing C[0], C[1], C[2], ..., C[n-1] are split into groups. Example: N = 3000, local work size = 1024, so global work size = 3 * 1024 = 3072, i.e. the global size is rounded up to a multiple of the group size (a pyopencl sketch of computing these sizes follows below).
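A minimal pyopencl sketch of where these numbers come from; device.max_work_group_size is pyopencl's wrapper around the device-info query, and the rounding keeps the global size a multiple of the local size (the kernel's gid < n guard then skips the padding work items):

    import pyopencl

    context = pyopencl.create_some_context()
    device = context.devices[0]

    # Upper bound for the local work size on this device.
    print(device.max_work_group_size)

    n = 3000
    local_size = 1024
    # Round the global size up to a multiple of the local size.
    global_size = ((n + local_size - 1) // local_size) * local_size
    print(global_size)   # -> 3072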
The memory model: each work item has its own private memory; the work items in a work group share that group's local memory; all work groups on a device share its global memory; and the host has separate host memory, so data must be copied between host and device.
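These sizes can be inspected from pyopencl; a small sketch, using pyopencl's standard device-info attributes (the actual numbers depend on your GPU):

    import pyopencl

    context = pyopencl.create_some_context()
    for device in context.devices:
        print(device.name)
        print("  global memory:", device.global_mem_size // 1024 ** 2, "MB")
        print("  local memory :", device.local_mem_size // 1024, "KB")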
Time for code!

    // vectors_cl.cl
    __kernel void add(__global const float * a,
                      __global const float * b,
                      const unsigned int n,
                      __global float * c) {
        int gid = get_global_id(0);
        if (gid < n) {
            c[gid] = a[gid] + b[gid];
        }
    }
    // vectors_cl.c
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <CL/cl.h>

    #define ARRAY_SIZE 4096
    #define MAX_SOURCE_SIZE (0x100000)

    int main(void) {
        const size_t ARRAY_BYTES = ARRAY_SIZE * sizeof(float);

        // Generate the input arrays on the host.
        float h_a[ARRAY_SIZE];
        float h_b[ARRAY_SIZE];
        for (int i = 0; i < ARRAY_SIZE; i++) {
            h_a[i] = (float)i;
            h_b[i] = (float)(2 * i);
        }
        float h_c[ARRAY_SIZE];

        // Load the kernel source.
        FILE *fp = fopen("vectors_cl.cl", "r");
        if (!fp) {
            fprintf(stderr, "Failed to load kernel.\n");
            exit(1);
        }
        char *source_str = (char *)malloc(MAX_SOURCE_SIZE);
        size_t source_size = fread(source_str, 1, MAX_SOURCE_SIZE, fp);
        fclose(fp);

        // Get a platform and a device.
        cl_platform_id platform_id = NULL;
        cl_device_id device_id = NULL;
        cl_uint ret_num_devices;
        cl_uint ret_num_platforms;
        cl_int ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
        ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1,
                             &device_id, &ret_num_devices);

        // Create a context and a command queue within it.
        cl_context context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);
        cl_command_queue command_queue = clCreateCommandQueue(context, device_id, 0, &ret);

        // Create two readable buffers in device memory and copy the input data there,
        // plus one writeable buffer for the result.
        cl_mem a_mem_obj = clCreateBuffer(context, CL_MEM_READ_ONLY, ARRAY_BYTES, NULL, &ret);
        cl_mem b_mem_obj = clCreateBuffer(context, CL_MEM_READ_ONLY, ARRAY_BYTES, NULL, &ret);
        cl_mem c_mem_obj = clCreateBuffer(context, CL_MEM_WRITE_ONLY, ARRAY_BYTES, NULL, &ret);
        ret = clEnqueueWriteBuffer(command_queue, a_mem_obj, CL_TRUE, 0,
                                   ARRAY_BYTES, h_a, 0, NULL, NULL);
        ret = clEnqueueWriteBuffer(command_queue, b_mem_obj, CL_TRUE, 0,
                                   ARRAY_BYTES, h_b, 0, NULL, NULL);

        // Build the "program" from source.
        cl_program program = clCreateProgramWithSource(
            context, 1, (const char **)&source_str, (const size_t *)&source_size, &ret);
        if (ret != 0) {
            printf("clCreateProgramWithSource returned non-zero status %d\n\n", ret);
            exit(1);
        }
        ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);
        if (ret != 0) {
            printf("clBuildProgram returned non-zero status %d: ", ret);
            if (ret == CL_INVALID_PROGRAM) { printf("invalid program\n"); }
            else if (ret == CL_INVALID_VALUE) { printf("invalid value\n"); }
            else if (ret == CL_INVALID_DEVICE) { printf("invalid device\n"); }
            else if (ret == CL_INVALID_BINARY) { printf("invalid binary\n"); }
            else if (ret == CL_INVALID_BUILD_OPTIONS) { printf("invalid build options\n"); }
            else if (ret == CL_INVALID_OPERATION) { printf("invalid operation\n"); }
            else if (ret == CL_COMPILER_NOT_AVAILABLE) { printf("compiler not available\n"); }
            else if (ret == CL_BUILD_PROGRAM_FAILURE) {
                printf("build program failure\n");
                size_t log_size;
                clGetProgramBuildInfo(program, device_id, CL_PROGRAM_BUILD_LOG,
                                      0, NULL, &log_size);
                char *log = (char *)malloc(log_size);
                clGetProgramBuildInfo(program, device_id, CL_PROGRAM_BUILD_LOG,
                                      log_size, log, NULL);
                printf("%s\n", log);
            }
            else if (ret == CL_OUT_OF_HOST_MEMORY) { printf("out of host memory\n"); }
            exit(1);
        }

        // Create the kernel and set its arguments.
        cl_kernel kernel = clCreateKernel(program, "add", &ret);
        ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&a_mem_obj);
        ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&b_mem_obj);
        cl_uint array_size = ARRAY_SIZE;  // the kernel expects an unsigned int
        ret = clSetKernelArg(kernel, 2, sizeof(cl_uint), (void *)&array_size);
        ret = clSetKernelArg(kernel, 3, sizeof(cl_mem), (void *)&c_mem_obj);

        // Run the kernel over the whole index space.
        size_t global_item_size = ARRAY_SIZE;  // process the entire arrays
        size_t local_item_size = 1;            // work-group size
        ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
                                     &global_item_size, &local_item_size,
                                     0, NULL, NULL);

        // Copy the result from the device back to the host.
        ret = clEnqueueReadBuffer(command_queue, c_mem_obj, CL_TRUE, 0,
                                  ARRAY_BYTES, h_c, 0, NULL, NULL);

        // Clean up.
        ret = clFlush(command_queue);
        ret = clFinish(command_queue);
        ret = clReleaseKernel(kernel);
        ret = clReleaseProgram(program);
        ret = clReleaseMemObject(a_mem_obj);
        ret = clReleaseMemObject(b_mem_obj);
        ret = clReleaseMemObject(c_mem_obj);
        ret = clReleaseCommandQueue(command_queue);
        ret = clReleaseContext(context);

        return 0;
    }
How do you run it? Install an OpenCL ICD (Installable Client Driver). Compile: $ gcc -std=c99 vectors_cl.c -o vectors_cl -l OpenCL. Run: $ ./vectors_cl
pyopencl
Time for nice code! (the same vectors_cl.cl kernel as above, now driven from Python)
    import numpy
    import os

    import pyopencl


    def add(a, b):
        # Create context.
        context = pyopencl.create_some_context()
        # Create command queue within it.
        queue = pyopencl.CommandQueue(context)

        # Build the "program".
        with open(os.path.join(
                os.path.dirname(os.path.abspath(__file__)),
                "vectors_cl.cl")) as kernel_file:
            program = pyopencl.Program(
                context, kernel_file.read()
            ).build()
        # Create two readable buffers on the device memory and
        # copy the input data there.
        a_in = pyopencl.Buffer(
            context,
            pyopencl.mem_flags.READ_ONLY | pyopencl.mem_flags.COPY_HOST_PTR,
            hostbuf=a)
        b_in = pyopencl.Buffer(
            context,
            pyopencl.mem_flags.READ_ONLY | pyopencl.mem_flags.COPY_HOST_PTR,
            hostbuf=b)
        # Create one writeable buffer on the device memory for
        # the result.
        c_out = pyopencl.Buffer(
            context,
            pyopencl.mem_flags.WRITE_ONLY,
            a.nbytes  # Size.
        )
        # Execute the kernel.
        program.add(queue, a.shape, None,
                    a_in, b_in, numpy.uint32(a.size), c_out)

        # Create an empty numpy array on the host for the result.
        c = numpy.empty_like(a)
        # Copy the result from the device to the host.
        pyopencl.enqueue_copy(queue, c, c_out)

        return c
    ARRAY_SIZE = 4096


    def test_add():
        # Generate the input arrays on the host.
        a = numpy.empty(ARRAY_SIZE, dtype=numpy.float32)
        b = numpy.empty(ARRAY_SIZE, dtype=numpy.float32)
        for i in range(ARRAY_SIZE):
            a[i] = i
            b[i] = 2 * i

        c = add(a, b)

        assert c[0] == 0
        assert c[1] == 3
        assert c[-2] == 12282
        assert c[-1] == 12285
    $ py.test examples/test_vectors_cl.py
    .
    ================= 1 passed in 0.21 seconds =================
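Outside of the test, a quick sanity check is to compare the GPU result against plain NumPy; a minimal sketch, assuming the add() function from the listing above:

    import numpy

    a = numpy.random.rand(4096).astype(numpy.float32)
    b = numpy.random.rand(4096).astype(numpy.float32)

    # The GPU result should match NumPy's elementwise addition;
    # allclose() is the safe comparison for floating-point data.
    assert numpy.allclose(add(a, b), a + b)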
More about work items: the index space can have up to three dimensions: get_global_id(0); get_global_id(1); get_global_id(2);
    __kernel void add(__global const float * in,
                      const int width,
                      const int height,
                      __global float * c) {
        int x = get_global_id(0);
        int y = get_global_id(1);
        int gid = y * width + x;

        if (x < width && y < height) {
            // ...
        }
    }
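Launching such a 2D kernel from pyopencl only requires a two-dimensional global size. A minimal sketch, assuming a program built from the kernel above and hypothetical device buffers in_buf and c_buf created like the earlier a_in and c_out:

    import numpy

    width, height = 1920, 1080
    # Dimension 0 of the index space maps to get_global_id(0) (x),
    # dimension 1 to get_global_id(1) (y).
    program.add(queue, (width, height), None,
                in_buf, numpy.int32(width), numpy.int32(height), c_buf)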
Optimization: maximizing the utilization of the GPU's compute units vs. maximizing data-transfer throughput.
Optimization: work-group size; sequential memory access; launching many kernels at once... (a sketch of passing an explicit group size follows below)
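A minimal sketch of setting the work-group size explicitly instead of leaving it to the runtime (None); the value 256 is only an illustrative assumption to be tuned per device, and the buffers are the ones from the earlier listing:

    import numpy

    local_size = 256
    n = a.size
    global_size = -(-n // local_size) * local_size  # round up to a multiple of local_size

    # Same kernel call as before, but with an explicit local work size instead of None.
    program.add(queue, (global_size,), (local_size,),
                a_in, b_in, numpy.uint32(n), c_out)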
Patterns: map, reduction, scan, histogram, scatter, gather, sort.
Patterns in pyopencl: pyopencl.elementwise, pyopencl.reduction, pyopencl.scan, pyopencl.algorithm, pyopencl.bitonic_sort (a short sketch follows below).
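For example, pyopencl.elementwise can generate the whole vector-addition kernel for us; a minimal sketch using pyopencl.array for the device-side vectors:

    import numpy
    import pyopencl
    import pyopencl.array
    from pyopencl.elementwise import ElementwiseKernel

    context = pyopencl.create_some_context()
    queue = pyopencl.CommandQueue(context)

    # Elementwise "pattern": the loop over i is generated for us.
    add = ElementwiseKernel(
        context,
        "float *a, float *b, float *c",
        "c[i] = a[i] + b[i]",
        "add")

    a = pyopencl.array.to_device(
        queue, numpy.arange(4096, dtype=numpy.float32))
    b = pyopencl.array.to_device(
        queue, 2 * numpy.arange(4096, dtype=numpy.float32))
    c = pyopencl.array.empty_like(a)

    add(a, b, c)
    print(c.get()[:4])   # -> [0. 3. 6. 9.]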
from pyopencl import ... Image, array, clmath, clrandom, characterize, tools
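A small sketch of what pyopencl.array and clmath give you: NumPy-like arrays living in device memory, with operators and math functions executed as OpenCL kernels:

    import numpy
    import pyopencl
    import pyopencl.array
    import pyopencl.clmath

    context = pyopencl.create_some_context()
    queue = pyopencl.CommandQueue(context)

    a = pyopencl.array.to_device(
        queue, numpy.linspace(0, 1, 4096).astype(numpy.float32))

    # Arithmetic and math functions run on the device.
    b = 2 * a + 1
    c = pyopencl.clmath.sqrt(b)

    print(c.get()[:3])   # copy the result back to the host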
OpenCL support: OpenCV; clBLAS, ViennaCL, clFFT; Rivertrail, WebCL; ViNN; Go, Haskell, Lua, Rust, Java; PgOpenCL.
Random remarks: data types (how many bytes are in a float? always set the dtype in numpy.array; see the sketch below); data sizes; and, as usual, tests :)
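Why the dtype remark matters: NumPy defaults to 64-bit element types, while the kernel above expects 32-bit float, so buffer and element sizes would silently disagree; a minimal illustration:

    import numpy

    a = numpy.array([1.0, 2.0, 3.0])                       # float64 by default: 8 bytes per element
    b = numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32)  # matches the kernel's float: 4 bytes per element

    print(a.dtype, a.nbytes)   # float64 24
    print(b.dtype, b.nbytes)   # float32 12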
Thanks :) Questions? kolodziejj.info/talks/gpgpu/