

Code from 《GPU高性能编程CUDA实战》 (CUDA by Example)

Published: 2023/11/27

The CUDA architecture introduced a new module designed specifically for GPU computing, with the goal of removing some of the limitations of early GPU computing that had kept GPUs from being widely used for general-purpose computation.

Prerequisites for writing code in CUDA C: (1) a CUDA-capable graphics processor, i.e., an NVIDIA GPU with more than 256MB of graphics memory; (2) an NVIDIA device driver, which handles communication between applications and CUDA-capable hardware; install the latest driver, choosing the one that matches your graphics card and operating system; (3) the CUDA Toolkit, which includes a compiler for GPU code; (4) a standard C compiler, i.e., a CPU compiler. A CUDA C application performs computation on two different processors, so two compilers are needed: one compiles code for the GPU, the other for the CPU.

By convention, the CPU and the system's memory are called the host, and the GPU and its memory are called the device. A function that executes on the device is called a kernel.

Restrictions on pointers returned by cudaMalloc(): (1) you can pass them to functions that execute on the device; (2) you can read and write memory through them in device code; (3) you can pass them to functions that execute on the host; (4) you cannot read or write memory through them in host code.

Memory allocated with cudaMalloc() must not be released with the standard C free() function; release it with cudaFree() instead.

Device pointers are used exactly like standard C pointers. Host pointers can only access memory from host code, and device pointers can only access memory from device code.

Host code can access device memory by calling cudaMemcpy().

A single card may contain two or more GPUs.

Code running on an integrated GPU can share memory with the CPU.

Only cards with compute capability 1.3 or higher support double-precision floating-point arithmetic.

The first parameter inside the angle brackets is the number of parallel thread blocks the device uses when executing the kernel.

A collection of parallel thread blocks is called a grid. A grid can be a one-dimensional or a two-dimensional collection of thread blocks.

The GPU has a robust memory-management mechanism: it forcibly terminates any process that violates memory-access rules.

When launching an array of thread blocks, no single dimension of the array may exceed 65,535 blocks. This is a hardware limit; if a launch exceeds it, the program fails.

The CUDA runtime decomposes each thread block into threads. To launch multiple parallel blocks, change the first parameter in the angle brackets from 1 to the desired number of blocks. The second parameter in the angle brackets specifies the number of threads the CUDA runtime creates in each block.

The hardware limits the number of blocks to 65,535 per grid dimension. It likewise limits the number of threads per block when launching a kernel: the thread count cannot exceed the maxThreadsPerBlock field of the device-properties structure. This value is not fixed: on some devices it is 512, on others 1,024.

The built-in variable blockDim is a constant for all thread blocks; it holds the number of threads in each dimension of a block.

The built-in variable gridDim is a constant for all thread blocks; it holds the size of each dimension of the grid, i.e., the number of blocks in the grid.

The built-in variable blockIdx contains the index of the thread block currently executing the device code.

The built-in variable threadIdx contains the index of the thread currently executing the device code.

The CUDA runtime allows launching a two-dimensional grid in which each thread block is a three-dimensional array of threads.

CUDA C supports shared memory: adding the CUDA C keyword __shared__ to a variable declaration makes the variable reside in shared memory. The CUDA C compiler treats variables in shared memory differently from ordinary variables.

The CUDA architecture guarantees that no thread in a block executes an instruction after __syncthreads() until every thread in that block has executed __syncthreads().

Because a GPU contains hundreds of arithmetic units, the performance bottleneck is usually not the chip's arithmetic throughput but its memory bandwidth.

Constant memory holds data that does not change during kernel execution. NVIDIA hardware provides 64KB of constant memory and handles it differently from standard global memory; in some cases, replacing global memory with constant memory effectively reduces memory-bandwidth usage. To use constant memory, prefix the variable declaration with the __constant__ keyword.

In the CUDA architecture, a warp is a set of 32 threads that are "woven together" and execute in lockstep. At each line of the program, every thread in the warp executes the same instruction, but on different data.

Texture memory is another kind of read-only memory available to CUDA C programs. Like constant memory, it is cached on chip, so in some cases it reduces requests to off-chip memory and delivers more effective memory bandwidth. Texture caches were designed for graphics applications whose memory-access patterns exhibit a great deal of spatial locality.

NVIDIA refers to the set of features a GPU supports as its compute capability. A higher compute capability is a superset of a lower one.

Only GPUs with compute capability 1.1 or higher support atomic operations on global memory, and only compute capability 1.2 or higher supports atomic operations on shared memory. CUDA C supports a variety of atomic operations.

The C library function malloc allocates standard, pageable host memory, whereas cudaHostAlloc allocates page-locked host memory. Page-locked memory, also called pinned memory or non-pageable memory, has an important property: the operating system will never page it out to disk, guaranteeing that it stays resident in physical memory. The OS can therefore safely let an application use the memory's physical address, since the buffer will never be evicted or relocated.

Pinned memory is a double-edged sword. When you use it, you lose all the benefits of virtual memory. In particular, every page-locked buffer in your application must be backed by physical memory, because these pages cannot be swapped to disk. This means the system runs out of memory much faster than with standard malloc calls, so the application may fail on machines with little physical memory, and it may degrade the performance of other applications running on the system.

The recommendation is to use page-locked memory only for buffers that serve as the source or destination of cudaMemcpy() calls, and to free it as soon as it is no longer needed rather than waiting until the application shuts down.

A CUDA stream represents a queue of GPU operations that execute in a specified order.

Zero-copy memory makes it possible to avoid explicit copies between the CPU and GPU.

Zero-copy memory yields different performance gains on discrete and integrated GPUs. On an integrated GPU, using zero-copy memory is usually a win, because the memory is physically shared with the host; declaring a buffer as zero-copy simply avoids an unnecessary copy. But all kinds of pinned memory have limitations, and zero-copy memory is no exception: every pinned buffer consumes available physical memory, which eventually degrades system performance. On a discrete GPU, zero-copy memory is a win when the input and output buffers are each used exactly once. Because the GPU does not cache zero-copy memory, reading the same buffer multiple times ends up costing more than copying the data to the GPU in the first place.

The CUDA Toolkit includes two important libraries: (1) CUFFT, a fast Fourier transform library; (2) CUBLAS, a linear-algebra library implementing the BLAS (Basic Linear Algebra Subprograms) interface.

NPP (NVIDIA Performance Primitives) is a library of CUDA-accelerated data-processing routines; its core feature set focuses on image and video processing.

Create a CUDA test project named testCUDA. Besides the common files, the project adds three more files, testCUDA.cu, funset.cu, and funset.cuh, which together contain most of the test code from the book:

testCUDA.cu:

#include "funset.cuh"
#include <iostream>
#include "book.h"
#include "cpu_bitmap.h"
#include "gpu_anim.h"

using namespace std;

int test1();  // simple addition of two numbers
int test2();  // query GPU device properties
int test3();  // vector addition using the block index
int test4();  // CUDA implementation of the Julia set
int test5();  // vector addition using the thread index
int test6();  // vector addition using both block and thread indices
int test7();  // CUDA implementation of the ripple animation
int test8();  // CUDA implementation of the dot product
int test9();  // Julia set, with the thread-synchronization function __syncthreads()
int test10(); // ray tracing without constant memory, timed with events
int test11(); // ray tracing with constant memory, timed with events
int test12(); // heat-transfer simulation using texture memory (has some issues)
int test13(); // heat-transfer simulation using 2D texture memory (has some issues)
int test14(); // ripple with CUDA + OpenGL
int test15(); // heat-transfer simulation with CUDA + OpenGL (has some issues)
int test16(); // histogram computation using the atomic function atomicAdd
int test17(); // using pinned (page-locked) memory
int test18(); // using a single stream
int test19(); // using multiple streams
int test20(); // dot product via zero-copy memory
int test21(); // dot product using multiple GPUs

int main(int argc, char* argv[])
{
	test21();
	cout << "ok!" << endl;
	return 0;
}

int test1()
{
	int a = 2, b = 3, c = 0;
	int* dev_c = NULL;
	HANDLE_ERROR(cudaMalloc((void**)&dev_c, sizeof(int)));
	// the angle brackets pass parameters to the CUDA compiler and runtime system;
	// these parameters are not passed to the device code, they tell the runtime
	// how to launch the device code; parameters for the device code itself are
	// passed in the parentheses, just like a standard function call
	add<<<1, 1>>>(a, b, dev_c);
	HANDLE_ERROR(cudaMemcpy(&c, dev_c, sizeof(int), cudaMemcpyDeviceToHost));
	printf("%d + %d = %d\n", a, b, c);
	cudaFree(dev_c);
	return 0;
}

int test2()
{
	int count = -1;
	HANDLE_ERROR(cudaGetDeviceCount(&count));
	printf("device count: %d\n", count);
	cudaDeviceProp prop;
	for (int i = 0; i < count; i++) {
		HANDLE_ERROR(cudaGetDeviceProperties(&prop, i));
		printf("   --- General Information for device %d ---\n", i);
		printf("Name:  %s\n", prop.name);
		printf("Compute capability:  %d.%d\n", prop.major, prop.minor);
		printf("Clock rate:  %d\n", prop.clockRate);
		printf("Device copy overlap:  ");
		if (prop.deviceOverlap) printf("Enabled\n");
		else printf("Disabled\n");
		printf("Kernel execution timeout :  ");
		if (prop.kernelExecTimeoutEnabled) printf("Enabled\n");
		else printf("Disabled\n");
		printf("   --- Memory Information for device %d ---\n", i);
		printf("Total global mem:  %ld\n", prop.totalGlobalMem);
		printf("Total constant Mem:  %ld\n", prop.totalConstMem);
		printf("Max mem pitch:  %ld\n", prop.memPitch);
		printf("Texture Alignment:  %ld\n", prop.textureAlignment);
		printf("   --- MP Information for device %d ---\n", i);
		printf("Multiprocessor count:  %d\n", prop.multiProcessorCount);
		printf("Shared mem per mp:  %ld\n", prop.sharedMemPerBlock);
		printf("Registers per mp:  %d\n", prop.regsPerBlock);
		printf("Threads in warp:  %d\n", prop.warpSize);
		printf("Max threads per block:  %d\n", prop.maxThreadsPerBlock);
		printf("Max thread dimensions:  (%d, %d, %d)\n", prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
		printf("Max grid dimensions:  (%d, %d, %d)\n", prop.maxGridSize[0], prop.maxGridSize[1], prop.maxGridSize[2]);
		printf("\n");
	}
	int dev;
	HANDLE_ERROR(cudaGetDevice(&dev));
	printf("ID of current CUDA device:  %d\n", dev);
	memset(&prop, 0, sizeof(cudaDeviceProp));
	prop.major = 1;
	prop.minor = 3;
	HANDLE_ERROR(cudaChooseDevice(&dev, &prop));
	printf("ID of CUDA device closest to revision %d.%d:  %d\n", prop.major, prop.minor, dev);
	HANDLE_ERROR(cudaSetDevice(dev));
	return 0;
}

int test3()
{
	int a[NUM] = {0}, b[NUM] = {0}, c[NUM] = {0};
	int *dev_a = NULL, *dev_b = NULL, *dev_c = NULL;
	// allocate the memory on the GPU
	HANDLE_ERROR(cudaMalloc((void**)&dev_a, NUM * sizeof(int)));
	HANDLE_ERROR(cudaMalloc((void**)&dev_b, NUM * sizeof(int)));
	HANDLE_ERROR(cudaMalloc((void**)&dev_c, NUM * sizeof(int)));
	// fill the arrays 'a' and 'b' on the CPU
	for (int i = 0; i < NUM; i++) {
		a[i] = -i;
		b[i] = i * i;
	}
	// copy the arrays 'a' and 'b' to the GPU
	HANDLE_ERROR(cudaMemcpy(dev_a, a, NUM * sizeof(int), cudaMemcpyHostToDevice));
	HANDLE_ERROR(cudaMemcpy(dev_b, b, NUM * sizeof(int), cudaMemcpyHostToDevice));
	// the first parameter in the angle brackets is the number of parallel
	// thread blocks the device uses when executing the kernel
	add_blockIdx<<<NUM, 1>>>(dev_a, dev_b, dev_c);
	// copy the array 'c' back from the GPU to the CPU
	HANDLE_ERROR(cudaMemcpy(c, dev_c, NUM * sizeof(int), cudaMemcpyDeviceToHost));
	// display the results
	for (int i = 0; i < NUM; i++) {
		printf("%d + %d = %d\n", a[i], b[i], c[i]);
	}
	// free the memory allocated on the GPU
	HANDLE_ERROR(cudaFree(dev_a));
	HANDLE_ERROR(cudaFree(dev_b));
	HANDLE_ERROR(cudaFree(dev_c));
	return 0;
}

int test4()
{
	// globals needed by the update routine
	struct DataBlock {
		unsigned char* dev_bitmap;
	};
	DataBlock data;
	CPUBitmap bitmap(DIM, DIM, &data);
	unsigned char* dev_bitmap;
	HANDLE_ERROR(cudaMalloc((void**)&dev_bitmap, bitmap.image_size()));
	data.dev_bitmap = dev_bitmap;
	// declare a two-dimensional grid; the type dim3 represents a
	// three-dimensional tuple used to specify the number of blocks to launch;
	// when a dim3 variable is initialized with two values, the CUDA runtime
	// automatically sets the third dimension to 1
	dim3 grid(DIM, DIM);
	kernel_julia<<<grid, 1>>>(dev_bitmap);
	HANDLE_ERROR(cudaMemcpy(bitmap.get_ptr(), dev_bitmap, bitmap.image_size(), cudaMemcpyDeviceToHost));
	HANDLE_ERROR(cudaFree(dev_bitmap));
	bitmap.display_and_exit();
	return 0;
}

int test5()
{
	int a[NUM], b[NUM], c[NUM];
	int *dev_a = NULL, *dev_b = NULL, *dev_c = NULL;
	// allocate memory on the GPU
	HANDLE_ERROR(cudaMalloc((void**)&dev_a, NUM * sizeof(int)));
	HANDLE_ERROR(cudaMalloc((void**)&dev_b, NUM * sizeof(int)));
	HANDLE_ERROR(cudaMalloc((void**)&dev_c, NUM * sizeof(int)));
	// fill the arrays 'a' and 'b' on the CPU
	for (int i = 0; i < NUM; i++) {
		a[i] = i;
		b[i] = i * i;
	}
	// copy the arrays 'a' and 'b' to the GPU
	HANDLE_ERROR(cudaMemcpy(dev_a, a, NUM * sizeof(int), cudaMemcpyHostToDevice));
	HANDLE_ERROR(cudaMemcpy(dev_b, b, NUM * sizeof(int), cudaMemcpyHostToDevice));
	add_threadIdx<<<1, NUM>>>(dev_a, dev_b, dev_c);
	// copy the array 'c' from the GPU back to the CPU
	HANDLE_ERROR(cudaMemcpy(c, dev_c, NUM * sizeof(int), cudaMemcpyDeviceToHost));
	// display the results
	for (int i = 0; i < NUM; i++) {
		printf("%d + %d = %d\n", a[i], b[i], c[i]);
	}
	// free the memory allocated on the GPU
	cudaFree(dev_a);
	cudaFree(dev_b);
	cudaFree(dev_c);
	return 0;
}

int test6()
{
	int a[NUM], b[NUM], c[NUM];
	int *dev_a = NULL, *dev_b = NULL, *dev_c = NULL;
	// allocate memory on the GPU
	HANDLE_ERROR(cudaMalloc((void**)&dev_a, NUM * sizeof(int)));
	HANDLE_ERROR(cudaMalloc((void**)&dev_b, NUM * sizeof(int)));
	HANDLE_ERROR(cudaMalloc((void**)&dev_c, NUM * sizeof(int)));
	// fill the arrays 'a' and 'b' on the CPU
	for (int i = 0; i < NUM; i++) {
		a[i] = i;
		b[i] = i * i / 10;
	}
	// copy the arrays 'a' and 'b' to the GPU
	HANDLE_ERROR(cudaMemcpy(dev_a, a, NUM * sizeof(int), cudaMemcpyHostToDevice));
	HANDLE_ERROR(cudaMemcpy(dev_b, b, NUM * sizeof(int), cudaMemcpyHostToDevice));
	add_blockIdx_threadIdx<<<128, 128>>>(dev_a, dev_b, dev_c);
	// copy the array 'c' from the GPU back to the CPU
	HANDLE_ERROR(cudaMemcpy(c, dev_c, NUM * sizeof(int), cudaMemcpyDeviceToHost));
	// verify that the GPU did the work we asked for
	bool success = true;
	for (int i = 0; i < NUM; i++) {
		if ((a[i] + b[i]) != c[i]) {
			printf("error: %d + %d != %d\n", a[i], b[i], c[i]);
			success = false;
		}
	}
	if (success)
		printf("we did it!\n");
	// free the memory allocated on the GPU
	cudaFree(dev_a);
	cudaFree(dev_b);
	cudaFree(dev_c);
	return 0;
}

int test7()
{
	DataBlock data;
	CPUAnimBitmap bitmap(DIM, DIM, &data);
	data.bitmap = &bitmap;
	HANDLE_ERROR(cudaMalloc((void**)&data.dev_bitmap, bitmap.image_size()));
	bitmap.anim_and_exit((void(*)(void*, int))generate_frame, (void(*)(void*))cleanup);
	return 0;
}

void generate_frame(DataBlock *d, int ticks)
{
	dim3 blocks(DIM / 16, DIM / 16);
	dim3 threads(16, 16);
	ripple_kernel<<<blocks, threads>>>(d->dev_bitmap, ticks);
	HANDLE_ERROR(cudaMemcpy(d->bitmap->get_ptr(), d->dev_bitmap, d->bitmap->image_size(), cudaMemcpyDeviceToHost));
}

// clean up memory allocated on the GPU
void cleanup(DataBlock *d)
{
	HANDLE_ERROR(cudaFree(d->dev_bitmap));
}

int test8()
{
	float *a, *b, c, *partial_c;
	float *dev_a, *dev_b, *dev_partial_c;
	// allocate memory on the CPU side
	a = (float*)malloc(NUM * sizeof(float));
	b = (float*)malloc(NUM * sizeof(float));
	partial_c = (float*)malloc(blocksPerGrid * sizeof(float));
	// allocate the memory on the GPU
	HANDLE_ERROR(cudaMalloc((void**)&dev_a, NUM * sizeof(float)));
	HANDLE_ERROR(cudaMalloc((void**)&dev_b, NUM * sizeof(float)));
	HANDLE_ERROR(cudaMalloc((void**)&dev_partial_c, blocksPerGrid * sizeof(float)));
	// fill in the host memory with data
	for (int i = 0; i < NUM; i++) {
		a[i] = i;
		b[i] = i * 2;
	}
	// copy the arrays 'a' and 'b' to the GPU
	HANDLE_ERROR(cudaMemcpy(dev_a, a, NUM * sizeof(float), cudaMemcpyHostToDevice));
	HANDLE_ERROR(cudaMemcpy(dev_b, b, NUM * sizeof(float), cudaMemcpyHostToDevice));
	dot_kernel<<<blocksPerGrid, threadsPerBlock>>>(dev_a, dev_b, dev_partial_c);
	// copy the partial sums back from the GPU to the CPU
	HANDLE_ERROR(cudaMemcpy(partial_c, dev_partial_c, blocksPerGrid * sizeof(float), cudaMemcpyDeviceToHost));
	// finish up on the CPU side
	c = 0;
	for (int i = 0; i < blocksPerGrid; i++) {
		c += partial_c[i];
	}
	// the dot product should equal twice the sum of the squares of 0..NUM-1;
	// closed-form solution:
#define sum_squares(x)  (x * (x + 1) * (2 * x + 1) / 6)
	printf("Does GPU value %.6g = %.6g?\n", c, 2 * sum_squares((float)(NUM - 1)));
	// free memory on the GPU side
	HANDLE_ERROR(cudaFree(dev_a));
	HANDLE_ERROR(cudaFree(dev_b));
	HANDLE_ERROR(cudaFree(dev_partial_c));
	// free memory on the CPU side
	free(a);
	free(b);
	free(partial_c);
	return 0;
}

int test9()
{
	DataBlock data;
	CPUBitmap bitmap(DIM, DIM, &data);
	unsigned char *dev_bitmap;
	HANDLE_ERROR(cudaMalloc((void**)&dev_bitmap, bitmap.image_size()));
	data.dev_bitmap = dev_bitmap;
	dim3 grids(DIM / 16, DIM / 16);
	dim3 threads(16, 16);
	julia_kernel<<<grids, threads>>>(dev_bitmap);
	HANDLE_ERROR(cudaMemcpy(bitmap.get_ptr(), dev_bitmap, bitmap.image_size(), cudaMemcpyDeviceToHost));
	HANDLE_ERROR(cudaFree(dev_bitmap));
	bitmap.display_and_exit();
	return 0;
}

int test10()
{
	DataBlock data;
	// capture the start time
	cudaEvent_t start, stop;
	HANDLE_ERROR(cudaEventCreate(&start));
	HANDLE_ERROR(cudaEventCreate(&stop));
	HANDLE_ERROR(cudaEventRecord(start, 0));
	CPUBitmap bitmap(DIM, DIM, &data);
	unsigned char *dev_bitmap;
	Sphere *s;
	// allocate memory on the GPU for the output bitmap
	HANDLE_ERROR(cudaMalloc((void**)&dev_bitmap, bitmap.image_size()));
	// allocate memory for the Sphere dataset
	HANDLE_ERROR(cudaMalloc((void**)&s, sizeof(Sphere) * SPHERES));
	// allocate temp memory, initialize it, copy to memory on the GPU, then free our temp memory
	Sphere *temp_s = (Sphere*)malloc(sizeof(Sphere) * SPHERES);
	for (int i = 0; i < SPHERES; i++) {
		temp_s[i].r = rnd(1.0f);
		temp_s[i].g = rnd(1.0f);
		temp_s[i].b = rnd(1.0f);
		temp_s[i].x = rnd(1000.0f) - 500;
		temp_s[i].y = rnd(1000.0f) - 500;
		temp_s[i].z = rnd(1000.0f) - 500;
		temp_s[i].radius = rnd(100.0f) + 20;
	}
	HANDLE_ERROR(cudaMemcpy(s, temp_s, sizeof(Sphere) * SPHERES, cudaMemcpyHostToDevice));
	free(temp_s);
	// generate a bitmap from our sphere data
	dim3 grids(DIM / 16, DIM / 16);
	dim3 threads(16, 16);
	RayTracing_kernel<<<grids, threads>>>(s, dev_bitmap);
	// copy our bitmap back from the GPU for display
	HANDLE_ERROR(cudaMemcpy(bitmap.get_ptr(), dev_bitmap, bitmap.image_size(), cudaMemcpyDeviceToHost));
	// get stop time, and display the timing results
	HANDLE_ERROR(cudaEventRecord(stop, 0));
	HANDLE_ERROR(cudaEventSynchronize(stop));
	float elapsedTime;
	HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, start, stop));
	printf("Time to generate:  %3.1f ms\n", elapsedTime);
	HANDLE_ERROR(cudaEventDestroy(start));
	HANDLE_ERROR(cudaEventDestroy(stop));
	HANDLE_ERROR(cudaFree(dev_bitmap));
	HANDLE_ERROR(cudaFree(s));
	// display
	bitmap.display_and_exit();
	return 0;
}

int test11()
{
	DataBlock data;
	// capture the start time
	cudaEvent_t start, stop;
	HANDLE_ERROR(cudaEventCreate(&start));
	HANDLE_ERROR(cudaEventCreate(&stop));
	HANDLE_ERROR(cudaEventRecord(start, 0));
	CPUBitmap bitmap(DIM, DIM, &data);
	unsigned char *dev_bitmap;
	// allocate memory on the GPU for the output bitmap
	HANDLE_ERROR(cudaMalloc((void**)&dev_bitmap, bitmap.image_size()));
	// allocate temp memory, initialize it, copy to constant memory on the GPU, then free temp memory
	Sphere *temp_s = (Sphere*)malloc(sizeof(Sphere) * SPHERES);
	for (int i = 0; i < SPHERES; i++) {
		temp_s[i].r = rnd(1.0f);
		temp_s[i].g = rnd(1.0f);
		temp_s[i].b = rnd(1.0f);
		temp_s[i].x = rnd(1000.0f) - 500;
		temp_s[i].y = rnd(1000.0f) - 500;
		temp_s[i].z = rnd(1000.0f) - 500;
		temp_s[i].radius = rnd(100.0f) + 20;
	}
	HANDLE_ERROR(cudaMemcpyToSymbol(s, temp_s, sizeof(Sphere) * SPHERES));
	free(temp_s);
	// generate a bitmap from our sphere data
	dim3 grids(DIM / 16, DIM / 16);
	dim3 threads(16, 16);
	RayTracing_kernel<<<grids, threads>>>(dev_bitmap);
	// copy our bitmap back from the GPU for display
	HANDLE_ERROR(cudaMemcpy(bitmap.get_ptr(), dev_bitmap, bitmap.image_size(), cudaMemcpyDeviceToHost));
	// get stop time, and display the timing results
	HANDLE_ERROR(cudaEventRecord(stop, 0));
	HANDLE_ERROR(cudaEventSynchronize(stop));
	float elapsedTime;
	HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, start, stop));
	printf("Time to generate:  %3.1f ms\n", elapsedTime);
	HANDLE_ERROR(cudaEventDestroy(start));
	HANDLE_ERROR(cudaEventDestroy(stop));
	HANDLE_ERROR(cudaFree(dev_bitmap));
	// display
	bitmap.display_and_exit();
	return 0;
}

int test12()
{
	Heat_DataBlock data;
	CPUAnimBitmap bitmap(DIM, DIM, &data);
	data.bitmap = &bitmap;
	data.totalTime = 0;
	data.frames = 0;
	HANDLE_ERROR(cudaEventCreate(&data.start));
	HANDLE_ERROR(cudaEventCreate(&data.stop));
	int imageSize = bitmap.image_size();
	HANDLE_ERROR(cudaMalloc((void**)&data.output_bitmap, imageSize));
	// assume float == 4 chars in size (ie rgba)
	HANDLE_ERROR(cudaMalloc((void**)&data.dev_inSrc, imageSize));
	HANDLE_ERROR(cudaMalloc((void**)&data.dev_outSrc, imageSize));
	HANDLE_ERROR(cudaMalloc((void**)&data.dev_constSrc, imageSize));
	HANDLE_ERROR(cudaBindTexture(NULL, texConstSrc, data.dev_constSrc, imageSize));
	HANDLE_ERROR(cudaBindTexture(NULL, texIn, data.dev_inSrc, imageSize));
	HANDLE_ERROR(cudaBindTexture(NULL, texOut, data.dev_outSrc, imageSize));
	// initialize the constant data
	float *temp = (float*)malloc(imageSize);
	for (int i = 0; i < DIM * DIM; i++) {
		temp[i] = 0;
		int x = i % DIM;
		int y = i / DIM;
		if ((x > 300) && (x < 600) && (y > 310) && (y < 601))
			temp[i] = MAX_TEMP;
	}
	temp[DIM * 100 + 100] = (MAX_TEMP + MIN_TEMP) / 2;
	temp[DIM * 700 + 100] = MIN_TEMP;
	temp[DIM * 300 + 300] = MIN_TEMP;
	temp[DIM * 200 + 700] = MIN_TEMP;
	for (int y = 800; y < 900; y++) {
		for (int x = 400; x < 500; x++) {
			temp[x + y * DIM] = MIN_TEMP;
		}
	}
	HANDLE_ERROR(cudaMemcpy(data.dev_constSrc, temp, imageSize, cudaMemcpyHostToDevice));
	// initialize the input data
	for (int y = 800; y < DIM; y++) {
		for (int x = 0; x < 200; x++) {
			temp[x + y * DIM] = MAX_TEMP;
		}
	}
	HANDLE_ERROR(cudaMemcpy(data.dev_inSrc, temp, imageSize, cudaMemcpyHostToDevice));
	free(temp);
	bitmap.anim_and_exit((void (*)(void*, int))Heat_anim_gpu, (void (*)(void*))Heat_anim_exit);
	return 0;
}

int test13()
{
	Heat_DataBlock data;
	CPUAnimBitmap bitmap(DIM, DIM, &data);
	data.bitmap = &bitmap;
	data.totalTime = 0;
	data.frames = 0;
	HANDLE_ERROR(cudaEventCreate(&data.start));
	HANDLE_ERROR(cudaEventCreate(&data.stop));
	int imageSize = bitmap.image_size();
	HANDLE_ERROR(cudaMalloc((void**)&data.output_bitmap, imageSize));
	// assume float == 4 chars in size (ie rgba)
	HANDLE_ERROR(cudaMalloc((void**)&data.dev_inSrc, imageSize));
	HANDLE_ERROR(cudaMalloc((void**)&data.dev_outSrc, imageSize));
	HANDLE_ERROR(cudaMalloc((void**)&data.dev_constSrc, imageSize));
	cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
	HANDLE_ERROR(cudaBindTexture2D(NULL, texConstSrc2, data.dev_constSrc, desc, DIM, DIM, sizeof(float) * DIM));
	HANDLE_ERROR(cudaBindTexture2D(NULL, texIn2, data.dev_inSrc, desc, DIM, DIM, sizeof(float) * DIM));
	HANDLE_ERROR(cudaBindTexture2D(NULL, texOut2, data.dev_outSrc, desc, DIM, DIM, sizeof(float) * DIM));
	// initialize the constant data
	float *temp = (float*)malloc(imageSize);
	for (int i = 0; i < DIM * DIM; i++) {
		temp[i] = 0;
		int x = i % DIM;
		int y = i / DIM;
		if ((x > 300) && (x < 600) && (y > 310) && (y < 601))
			temp[i] = MAX_TEMP;
	}
	temp[DIM * 100 + 100] = (MAX_TEMP + MIN_TEMP) / 2;
	temp[DIM * 700 + 100] = MIN_TEMP;
	temp[DIM * 300 + 300] = MIN_TEMP;
	temp[DIM * 200 + 700] = MIN_TEMP;
	for (int y = 800; y < 900; y++) {
		for (int x = 400; x < 500; x++) {
			temp[x + y * DIM] = MIN_TEMP;
		}
	}
	HANDLE_ERROR(cudaMemcpy(data.dev_constSrc, temp, imageSize, cudaMemcpyHostToDevice));
	// initialize the input data
	for (int y = 800; y < DIM; y++) {
		for (int x = 0; x < 200; x++) {
			temp[x + y * DIM] = MAX_TEMP;
		}
	}
	HANDLE_ERROR(cudaMemcpy(data.dev_inSrc, temp, imageSize, cudaMemcpyHostToDevice));
	free(temp);
	bitmap.anim_and_exit((void (*)(void*, int))anim_gpu, (void (*)(void*))anim_exit);
	return 0;
}

void Heat_anim_gpu(Heat_DataBlock *d, int ticks)
{
	HANDLE_ERROR(cudaEventRecord(d->start, 0));
	dim3 blocks(DIM / 16, DIM / 16);
	dim3 threads(16, 16);
	CPUAnimBitmap *bitmap = d->bitmap;
	// since tex is global and bound, we have to use a flag to
	// select which is in/out per iteration
	volatile bool dstOut = true;
	for (int i = 0; i < 90; i++) {
		float *in, *out;
		if (dstOut) {
			in = d->dev_inSrc;
			out = d->dev_outSrc;
		} else {
			out = d->dev_inSrc;
			in = d->dev_outSrc;
		}
		Heat_copy_const_kernel<<<blocks, threads>>>(in);
		Heat_blend_kernel<<<blocks, threads>>>(out, dstOut);
		dstOut = !dstOut;
	}
	float_to_color<<<blocks, threads>>>(d->output_bitmap, d->dev_inSrc);
	HANDLE_ERROR(cudaMemcpy(bitmap->get_ptr(), d->output_bitmap, bitmap->image_size(), cudaMemcpyDeviceToHost));
	HANDLE_ERROR(cudaEventRecord(d->stop, 0));
	HANDLE_ERROR(cudaEventSynchronize(d->stop));
	float elapsedTime;
	HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, d->start, d->stop));
	d->totalTime += elapsedTime;
	++d->frames;
	printf("Average Time per frame:  %3.1f ms\n", d->totalTime / d->frames);
}

void anim_gpu(Heat_DataBlock *d, int ticks)
{
	HANDLE_ERROR(cudaEventRecord(d->start, 0));
	dim3 blocks(DIM / 16, DIM / 16);
	dim3 threads(16, 16);
	CPUAnimBitmap *bitmap = d->bitmap;
	// since tex is global and bound, we have to use a flag to
	// select which is in/out per iteration
	volatile bool dstOut = true;
	for (int i = 0; i < 90; i++) {
		float *in, *out;
		if (dstOut) {
			in = d->dev_inSrc;
			out = d->dev_outSrc;
		} else {
			out = d->dev_inSrc;
			in = d->dev_outSrc;
		}
		copy_const_kernel<<<blocks, threads>>>(in);
		blend_kernel<<<blocks, threads>>>(out, dstOut);
		dstOut = !dstOut;
	}
	float_to_color<<<blocks, threads>>>(d->output_bitmap, d->dev_inSrc);
	HANDLE_ERROR(cudaMemcpy(bitmap->get_ptr(), d->output_bitmap, bitmap->image_size(), cudaMemcpyDeviceToHost));
	HANDLE_ERROR(cudaEventRecord(d->stop, 0));
	HANDLE_ERROR(cudaEventSynchronize(d->stop));
	float elapsedTime;
	HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, d->start, d->stop));
	d->totalTime += elapsedTime;
	++d->frames;
	printf("Average Time per frame:  %3.1f ms\n", d->totalTime / d->frames);
}

void Heat_anim_exit(Heat_DataBlock *d)
{
	cudaUnbindTexture(texIn);
	cudaUnbindTexture(texOut);
	cudaUnbindTexture(texConstSrc);
	HANDLE_ERROR(cudaFree(d->dev_inSrc));
	HANDLE_ERROR(cudaFree(d->dev_outSrc));
	HANDLE_ERROR(cudaFree(d->dev_constSrc));
	HANDLE_ERROR(cudaEventDestroy(d->start));
	HANDLE_ERROR(cudaEventDestroy(d->stop));
}

// clean up memory allocated on the GPU
void anim_exit(Heat_DataBlock *d)
{
	cudaUnbindTexture(texIn2);
	cudaUnbindTexture(texOut2);
	cudaUnbindTexture(texConstSrc2);
	HANDLE_ERROR(cudaFree(d->dev_inSrc));
	HANDLE_ERROR(cudaFree(d->dev_outSrc));
	HANDLE_ERROR(cudaFree(d->dev_constSrc));
	HANDLE_ERROR(cudaEventDestroy(d->start));
	HANDLE_ERROR(cudaEventDestroy(d->stop));
}

int test14()
{
	GPUAnimBitmap bitmap(DIM, DIM, NULL);
	bitmap.anim_and_exit((void (*)(uchar4*, void*, int))generate_frame_opengl, NULL);
	return 0;
}

int test15()
{
	DataBlock_opengl data;
	GPUAnimBitmap bitmap(DIM, DIM, &data);
	data.totalTime = 0;
	data.frames = 0;
	HANDLE_ERROR(cudaEventCreate(&data.start));
	HANDLE_ERROR(cudaEventCreate(&data.stop));
	int imageSize = bitmap.image_size();
	// assume float == 4 chars in size (ie rgba)
	HANDLE_ERROR(cudaMalloc((void**)&data.dev_inSrc, imageSize));
	HANDLE_ERROR(cudaMalloc((void**)&data.dev_outSrc, imageSize));
	HANDLE_ERROR(cudaMalloc((void**)&data.dev_constSrc, imageSize));
	HANDLE_ERROR(cudaBindTexture(NULL, texConstSrc, data.dev_constSrc, imageSize));
	HANDLE_ERROR(cudaBindTexture(NULL, texIn, data.dev_inSrc, imageSize));
	HANDLE_ERROR(cudaBindTexture(NULL, texOut, data.dev_outSrc, imageSize));
	// initialize the constant data
	float *temp = (float*)malloc(imageSize);
	for (int i = 0; i < DIM * DIM; i++) {
		temp[i] = 0;
		int x = i % DIM;
		int y = i / DIM;
		if ((x > 300) && (x < 600) && (y > 310) && (y < 601))
			temp[i] = MAX_TEMP;
	}
	temp[DIM * 100 + 100] = (MAX_TEMP + MIN_TEMP) / 2;
	temp[DIM * 700 + 100] = MIN_TEMP;
	temp[DIM * 300 + 300] = MIN_TEMP;
	temp[DIM * 200 + 700] = MIN_TEMP;
	for (int y = 800; y < 900; y++) {
		for (int x = 400; x < 500; x++) {
			temp[x + y * DIM] = MIN_TEMP;
		}
	}
	HANDLE_ERROR(cudaMemcpy(data.dev_constSrc, temp, imageSize, cudaMemcpyHostToDevice));
	// initialize the input data
	for (int y = 800; y < DIM; y++) {
		for (int x = 0; x < 200; x++) {
			temp[x + y * DIM] = MAX_TEMP;
		}
	}
	HANDLE_ERROR(cudaMemcpy(data.dev_inSrc, temp, imageSize, cudaMemcpyHostToDevice));
	free(temp);
	bitmap.anim_and_exit((void (*)(uchar4*, void*, int))anim_gpu_opengl, (void (*)(void*))anim_exit_opengl);
	return 0;
}

void anim_gpu_opengl(uchar4* outputBitmap, DataBlock_opengl *d, int ticks)
{
	HANDLE_ERROR(cudaEventRecord(d->start, 0));
	dim3 blocks(DIM / 16, DIM / 16);
	dim3 threads(16, 16);
	// since tex is global and bound, we have to use a flag to select which is in/out per iteration
	volatile bool dstOut = true;
	for (int i = 0; i < 90; i++) {
		float *in, *out;
		if (dstOut) {
			in = d->dev_inSrc;
			out = d->dev_outSrc;
		} else {
			out = d->dev_inSrc;
			in = d->dev_outSrc;
		}
		Heat_copy_const_kernel_opengl<<<blocks, threads>>>(in);
		Heat_blend_kernel_opengl<<<blocks, threads>>>(out, dstOut);
		dstOut = !dstOut;
	}
	float_to_color<<<blocks, threads>>>(outputBitmap, d->dev_inSrc);
	HANDLE_ERROR(cudaEventRecord(d->stop, 0));
	HANDLE_ERROR(cudaEventSynchronize(d->stop));
	float elapsedTime;
	HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, d->start, d->stop));
	d->totalTime += elapsedTime;
	++d->frames;
	printf("Average Time per frame:  %3.1f ms\n", d->totalTime / d->frames);
}

void anim_exit_opengl(DataBlock_opengl *d)
{
	HANDLE_ERROR(cudaUnbindTexture(texIn));
	HANDLE_ERROR(cudaUnbindTexture(texOut));
	HANDLE_ERROR(cudaUnbindTexture(texConstSrc));
	HANDLE_ERROR(cudaFree(d->dev_inSrc));
	HANDLE_ERROR(cudaFree(d->dev_outSrc));
	HANDLE_ERROR(cudaFree(d->dev_constSrc));
	HANDLE_ERROR(cudaEventDestroy(d->start));
	HANDLE_ERROR(cudaEventDestroy(d->stop));
}int test16()
{unsigned char *buffer = (unsigned char*)big_random_block(SIZE);//capture the start time starting the timer here so that we include the cost of//all of the operations on the GPU.  if the data were already on the GPU and we just //timed the kernel the timing would drop from 74 ms to 15 ms.  Very fast.cudaEvent_t start, stop;HANDLE_ERROR( cudaEventCreate( &start ) );HANDLE_ERROR( cudaEventCreate( &stop ) );HANDLE_ERROR( cudaEventRecord( start, 0 ) );// allocate memory on the GPU for the file's dataunsigned char *dev_buffer;unsigned int *dev_histo;HANDLE_ERROR(cudaMalloc((void**)&dev_buffer, SIZE));HANDLE_ERROR(cudaMemcpy(dev_buffer, buffer, SIZE, cudaMemcpyHostToDevice));HANDLE_ERROR(cudaMalloc((void**)&dev_histo, 256 * sizeof(int)));HANDLE_ERROR(cudaMemset(dev_histo, 0, 256 * sizeof(int)));//kernel launch - 2x the number of mps gave best timingcudaDeviceProp prop;HANDLE_ERROR(cudaGetDeviceProperties(&prop, 0));int blocks = prop.multiProcessorCount;histo_kernel<<<blocks*2, 256>>>(dev_buffer, SIZE, dev_histo);unsigned int histo[256];HANDLE_ERROR(cudaMemcpy(histo, dev_histo, 256 * sizeof(int), cudaMemcpyDeviceToHost));//get stop time, and display the timing resultsHANDLE_ERROR(cudaEventRecord(stop, 0));HANDLE_ERROR(cudaEventSynchronize(stop));float elapsedTime;HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, start, stop));printf("Time to generate:  %3.1f ms\n", elapsedTime);long histoCount = 0;for (int i=0; i<256; i++) {histoCount += histo[i];}printf("Histogram Sum:  %ld\n", histoCount);//verify that we have the same counts via CPUfor (int i = 0; i < SIZE; i++)histo[buffer[i]]--;for (int i = 0; i < 256; i++) {if (histo[i] != 0)printf("Failure at %d!\n", i);}HANDLE_ERROR(cudaEventDestroy(start));HANDLE_ERROR(cudaEventDestroy(stop));cudaFree(dev_histo);cudaFree(dev_buffer);free(buffer);return 0;
}float cuda_malloc_test(int size, bool up)
{cudaEvent_t start, stop;int *a, *dev_a;float elapsedTime;HANDLE_ERROR(cudaEventCreate(&start));HANDLE_ERROR(cudaEventCreate(&stop));a = (int*)malloc(size * sizeof(*a));HANDLE_NULL(a);HANDLE_ERROR(cudaMalloc((void**)&dev_a,size * sizeof(*dev_a)));HANDLE_ERROR(cudaEventRecord(start, 0));for (int i=0; i<100; i++) {if (up)HANDLE_ERROR(cudaMemcpy(dev_a, a, size * sizeof( *dev_a ), cudaMemcpyHostToDevice));elseHANDLE_ERROR(cudaMemcpy(a, dev_a, size * sizeof(*dev_a), cudaMemcpyDeviceToHost));}HANDLE_ERROR(cudaEventRecord(stop, 0));HANDLE_ERROR(cudaEventSynchronize(stop));HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, start, stop));free(a);HANDLE_ERROR(cudaFree(dev_a));HANDLE_ERROR(cudaEventDestroy(start));HANDLE_ERROR(cudaEventDestroy(stop));return elapsedTime;
}float cuda_host_alloc_test(int size, bool up) 
{cudaEvent_t start, stop;int *a, *dev_a;float elapsedTime;HANDLE_ERROR(cudaEventCreate(&start));HANDLE_ERROR(cudaEventCreate(&stop));HANDLE_ERROR(cudaHostAlloc((void**)&a,size * sizeof(*a), cudaHostAllocDefault));HANDLE_ERROR(cudaMalloc((void**)&dev_a, size * sizeof(*dev_a)));HANDLE_ERROR(cudaEventRecord(start, 0));for (int i=0; i<100; i++) {if (up)HANDLE_ERROR(cudaMemcpy(dev_a, a,size * sizeof(*a), cudaMemcpyHostToDevice));elseHANDLE_ERROR(cudaMemcpy(a, dev_a,size * sizeof(*a), cudaMemcpyDeviceToHost));}HANDLE_ERROR(cudaEventRecord(stop, 0));HANDLE_ERROR(cudaEventSynchronize(stop));HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, start, stop));HANDLE_ERROR(cudaFreeHost(a));HANDLE_ERROR(cudaFree(dev_a));HANDLE_ERROR(cudaEventDestroy(start));HANDLE_ERROR(cudaEventDestroy(stop));return elapsedTime;
}int test17()
{float elapsedTime;float MB = (float)100 * SIZE * sizeof(int) / 1024 / 1024;//try it with cudaMallocelapsedTime = cuda_malloc_test(SIZE, true);printf("Time using cudaMalloc:  %3.1f ms\n", elapsedTime);printf("\tMB/s during copy up:  %3.1f\n", MB/(elapsedTime/1000));elapsedTime = cuda_malloc_test(SIZE, false);printf("Time using cudaMalloc:  %3.1f ms\n", elapsedTime);printf("\tMB/s during copy down:  %3.1f\n", MB/(elapsedTime/1000));//now try it with cudaHostAllocelapsedTime = cuda_host_alloc_test(SIZE, true);printf("Time using cudaHostAlloc:  %3.1f ms\n", elapsedTime);printf("\tMB/s during copy up:  %3.1f\n", MB/(elapsedTime/1000));elapsedTime = cuda_host_alloc_test(SIZE, false);printf("Time using cudaHostAlloc:  %3.1f ms\n", elapsedTime);printf("\tMB/s during copy down:  %3.1f\n", MB/(elapsedTime/1000));return 0;
}int test18()
{
    cudaDeviceProp prop;
    int whichDevice;
    HANDLE_ERROR(cudaGetDevice(&whichDevice));
    HANDLE_ERROR(cudaGetDeviceProperties(&prop, whichDevice));
    if (!prop.deviceOverlap) {
        printf("Device will not handle overlaps, so no speed up from streams\n");
        return 0;
    }

    cudaEvent_t start, stop;
    float elapsedTime;
    cudaStream_t stream;
    int *host_a, *host_b, *host_c;
    int *dev_a, *dev_b, *dev_c;

    // start the timers
    HANDLE_ERROR(cudaEventCreate(&start));
    HANDLE_ERROR(cudaEventCreate(&stop));
    // initialize the stream
    HANDLE_ERROR(cudaStreamCreate(&stream));
    // allocate the memory on the GPU
    HANDLE_ERROR(cudaMalloc((void**)&dev_a, NUM * sizeof(int)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_b, NUM * sizeof(int)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_c, NUM * sizeof(int)));
    // allocate host locked memory, used to stream
    HANDLE_ERROR(cudaHostAlloc((void**)&host_a, FULL_DATA_SIZE * sizeof(int), cudaHostAllocDefault));
    HANDLE_ERROR(cudaHostAlloc((void**)&host_b, FULL_DATA_SIZE * sizeof(int), cudaHostAllocDefault));
    HANDLE_ERROR(cudaHostAlloc((void**)&host_c, FULL_DATA_SIZE * sizeof(int), cudaHostAllocDefault));

    for (int i = 0; i < FULL_DATA_SIZE; i++) {
        host_a[i] = rand();
        host_b[i] = rand();
    }

    HANDLE_ERROR(cudaEventRecord(start, 0));
    // now loop over full data, in bite-sized chunks
    for (int i = 0; i < FULL_DATA_SIZE; i += NUM) {
        // copy the locked memory to the device, async
        HANDLE_ERROR(cudaMemcpyAsync(dev_a, host_a+i, NUM * sizeof(int), cudaMemcpyHostToDevice, stream));
        HANDLE_ERROR(cudaMemcpyAsync(dev_b, host_b+i, NUM * sizeof(int), cudaMemcpyHostToDevice, stream));

        singlestream_kernel<<<NUM/256, 256, 0, stream>>>(dev_a, dev_b, dev_c);

        // copy the data from device to locked memory
        HANDLE_ERROR(cudaMemcpyAsync(host_c+i, dev_c, NUM * sizeof(int), cudaMemcpyDeviceToHost, stream));
    }
    // copy result chunk from locked to full buffer
    HANDLE_ERROR(cudaStreamSynchronize(stream));
    HANDLE_ERROR(cudaEventRecord(stop, 0));
    HANDLE_ERROR(cudaEventSynchronize(stop));
    HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, start, stop));
    printf("Time taken:  %3.1f ms\n", elapsedTime);

    // cleanup the stream and memory
    HANDLE_ERROR(cudaFreeHost(host_a));
    HANDLE_ERROR(cudaFreeHost(host_b));
    HANDLE_ERROR(cudaFreeHost(host_c));
    HANDLE_ERROR(cudaFree(dev_a));
    HANDLE_ERROR(cudaFree(dev_b));
    HANDLE_ERROR(cudaFree(dev_c));
    HANDLE_ERROR(cudaStreamDestroy(stream));

    return 0;
}

int test19()
{
    cudaDeviceProp prop;
    int whichDevice;
    HANDLE_ERROR(cudaGetDevice(&whichDevice));
    HANDLE_ERROR(cudaGetDeviceProperties(&prop, whichDevice));
    if (!prop.deviceOverlap) {
        printf("Device will not handle overlaps, so no speed up from streams\n");
        return 0;
    }

    // start the timers
    cudaEvent_t start, stop;
    HANDLE_ERROR(cudaEventCreate(&start));
    HANDLE_ERROR(cudaEventCreate(&stop));

    // initialize the streams
    cudaStream_t stream0, stream1;
    HANDLE_ERROR(cudaStreamCreate(&stream0));
    HANDLE_ERROR(cudaStreamCreate(&stream1));

    int *host_a, *host_b, *host_c;
    int *dev_a0, *dev_b0, *dev_c0; // GPU memory allocated for stream 0
    int *dev_a1, *dev_b1, *dev_c1; // GPU memory allocated for stream 1

    // allocate the memory on the GPU
    HANDLE_ERROR(cudaMalloc((void**)&dev_a0, NUM * sizeof(int)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_b0, NUM * sizeof(int)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_c0, NUM * sizeof(int)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_a1, NUM * sizeof(int)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_b1, NUM * sizeof(int)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_c1, NUM * sizeof(int)));

    // allocate host locked memory, used to stream
    HANDLE_ERROR(cudaHostAlloc((void**)&host_a, FULL_DATA_SIZE * sizeof(int), cudaHostAllocDefault));
    HANDLE_ERROR(cudaHostAlloc((void**)&host_b, FULL_DATA_SIZE * sizeof(int), cudaHostAllocDefault));
    HANDLE_ERROR(cudaHostAlloc((void**)&host_c, FULL_DATA_SIZE * sizeof(int), cudaHostAllocDefault));

    for (int i = 0; i < FULL_DATA_SIZE; i++) {
        host_a[i] = rand();
        host_b[i] = rand();
    }

    HANDLE_ERROR(cudaEventRecord(start, 0));
    // now loop over full data, in bite-sized chunks
    for (int i = 0; i < FULL_DATA_SIZE; i += NUM*2) {
        // enqueue copies of a in stream0 and stream1
        // (copy the page-locked memory to the device asynchronously)
        HANDLE_ERROR(cudaMemcpyAsync(dev_a0, host_a+i, NUM * sizeof(int), cudaMemcpyHostToDevice, stream0));
        HANDLE_ERROR(cudaMemcpyAsync(dev_a1, host_a+i+NUM, NUM * sizeof(int), cudaMemcpyHostToDevice, stream1));
        // enqueue copies of b in stream0 and stream1
        HANDLE_ERROR(cudaMemcpyAsync(dev_b0, host_b+i, NUM * sizeof(int), cudaMemcpyHostToDevice, stream0));
        HANDLE_ERROR(cudaMemcpyAsync(dev_b1, host_b+i+NUM, NUM * sizeof(int), cudaMemcpyHostToDevice, stream1));
        // enqueue kernels in stream0 and stream1
        singlestream_kernel<<<NUM/256, 256, 0, stream0>>>(dev_a0, dev_b0, dev_c0);
        singlestream_kernel<<<NUM/256, 256, 0, stream1>>>(dev_a1, dev_b1, dev_c1);
        // enqueue copies of c from device to locked memory
        HANDLE_ERROR(cudaMemcpyAsync(host_c+i, dev_c0, NUM * sizeof(int), cudaMemcpyDeviceToHost, stream0));
        HANDLE_ERROR(cudaMemcpyAsync(host_c+i+NUM, dev_c1, NUM * sizeof(int), cudaMemcpyDeviceToHost, stream1));
    }

    float elapsedTime;
    HANDLE_ERROR(cudaStreamSynchronize(stream0));
    HANDLE_ERROR(cudaStreamSynchronize(stream1));
    HANDLE_ERROR(cudaEventRecord(stop, 0));
    HANDLE_ERROR(cudaEventSynchronize(stop));
    HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, start, stop));
    printf("Time taken:  %3.1f ms\n", elapsedTime);

    // cleanup the streams and memory
    HANDLE_ERROR(cudaFreeHost(host_a));
    HANDLE_ERROR(cudaFreeHost(host_b));
    HANDLE_ERROR(cudaFreeHost(host_c));
    HANDLE_ERROR(cudaFree(dev_a0));
    HANDLE_ERROR(cudaFree(dev_b0));
    HANDLE_ERROR(cudaFree(dev_c0));
    HANDLE_ERROR(cudaFree(dev_a1));
    HANDLE_ERROR(cudaFree(dev_b1));
    HANDLE_ERROR(cudaFree(dev_c1));
    HANDLE_ERROR(cudaStreamDestroy(stream0));
    HANDLE_ERROR(cudaStreamDestroy(stream1));

    return 0;
}

float malloc_test(int size)
{
    cudaEvent_t start, stop;
    float *a, *b, c, *partial_c;
    float *dev_a, *dev_b, *dev_partial_c;
    float elapsedTime;

    HANDLE_ERROR(cudaEventCreate(&start));
    HANDLE_ERROR(cudaEventCreate(&stop));

    // allocate memory on the CPU side
    a = (float*)malloc(size * sizeof(float));
    b = (float*)malloc(size * sizeof(float));
    partial_c = (float*)malloc(blocksPerGrid * sizeof(float));

    // allocate the memory on the GPU
    HANDLE_ERROR(cudaMalloc((void**)&dev_a, size * sizeof(float)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_b, size * sizeof(float)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_partial_c, blocksPerGrid * sizeof(float)));

    // fill in the host memory with data
    for (int i = 0; i < size; i++) {
        a[i] = i;
        b[i] = i * 2;
    }

    HANDLE_ERROR(cudaEventRecord(start, 0));
    // copy the arrays 'a' and 'b' to the GPU
    HANDLE_ERROR(cudaMemcpy(dev_a, a, size * sizeof(float), cudaMemcpyHostToDevice));
    HANDLE_ERROR(cudaMemcpy(dev_b, b, size * sizeof(float), cudaMemcpyHostToDevice));

    dot_kernel<<<blocksPerGrid, threadsPerBlock>>>(size, dev_a, dev_b, dev_partial_c);

    // copy the array 'c' back from the GPU to the CPU
    HANDLE_ERROR(cudaMemcpy(partial_c, dev_partial_c, blocksPerGrid * sizeof(float), cudaMemcpyDeviceToHost));
    HANDLE_ERROR(cudaEventRecord(stop, 0));
    HANDLE_ERROR(cudaEventSynchronize(stop));
    HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, start, stop));

    // finish up on the CPU side
    c = 0;
    for (int i = 0; i < blocksPerGrid; i++) {
        c += partial_c[i];
    }

    HANDLE_ERROR(cudaFree(dev_a));
    HANDLE_ERROR(cudaFree(dev_b));
    HANDLE_ERROR(cudaFree(dev_partial_c));

    // free memory on the CPU side
    free(a);
    free(b);
    free(partial_c);

    // free events
    HANDLE_ERROR(cudaEventDestroy(start));
    HANDLE_ERROR(cudaEventDestroy(stop));

    printf("Value calculated:  %f\n", c);
    return elapsedTime;
}

float cuda_host_alloc_test(int size)
{
    cudaEvent_t start, stop;
    float *a, *b, c, *partial_c;
    float *dev_a, *dev_b, *dev_partial_c;
    float elapsedTime;

    HANDLE_ERROR(cudaEventCreate(&start));
    HANDLE_ERROR(cudaEventCreate(&stop));

    // allocate the memory on the CPU
    HANDLE_ERROR(cudaHostAlloc((void**)&a, size * sizeof(float), cudaHostAllocWriteCombined | cudaHostAllocMapped));
    HANDLE_ERROR(cudaHostAlloc((void**)&b, size * sizeof(float), cudaHostAllocWriteCombined | cudaHostAllocMapped));
    HANDLE_ERROR(cudaHostAlloc((void**)&partial_c, blocksPerGrid * sizeof(float), cudaHostAllocMapped));

    // find out the GPU pointers
    HANDLE_ERROR(cudaHostGetDevicePointer(&dev_a, a, 0));
    HANDLE_ERROR(cudaHostGetDevicePointer(&dev_b, b, 0));
    HANDLE_ERROR(cudaHostGetDevicePointer(&dev_partial_c, partial_c, 0));

    // fill in the host memory with data
    for (int i = 0; i < size; i++) {
        a[i] = i;
        b[i] = i * 2;
    }

    HANDLE_ERROR(cudaEventRecord(start, 0));

    dot_kernel<<<blocksPerGrid, threadsPerBlock>>>(size, dev_a, dev_b, dev_partial_c);

    HANDLE_ERROR(cudaThreadSynchronize());
    HANDLE_ERROR(cudaEventRecord(stop, 0));
    HANDLE_ERROR(cudaEventSynchronize(stop));
    HANDLE_ERROR(cudaEventElapsedTime(&elapsedTime, start, stop));

    // finish up on the CPU side
    c = 0;
    for (int i = 0; i < blocksPerGrid; i++) {
        c += partial_c[i];
    }

    HANDLE_ERROR(cudaFreeHost(a));
    HANDLE_ERROR(cudaFreeHost(b));
    HANDLE_ERROR(cudaFreeHost(partial_c));

    // free events
    HANDLE_ERROR(cudaEventDestroy(start));
    HANDLE_ERROR(cudaEventDestroy(stop));

    printf("Value calculated:  %f\n", c);
    return elapsedTime;
}

int test20()
{
    cudaDeviceProp prop;
    int whichDevice;
    HANDLE_ERROR(cudaGetDevice(&whichDevice));
    HANDLE_ERROR(cudaGetDeviceProperties(&prop, whichDevice));
    if (prop.canMapHostMemory != 1) {
        printf("Device can not map memory.\n");
        return 0;
    }

    HANDLE_ERROR(cudaSetDeviceFlags(cudaDeviceMapHost));

    // try it with malloc
    float elapsedTime = malloc_test(NUM);
    printf("Time using cudaMalloc:  %3.1f ms\n", elapsedTime);

    // now try it with cudaHostAlloc
    elapsedTime = cuda_host_alloc_test(NUM);
    printf("Time using cudaHostAlloc:  %3.1f ms\n", elapsedTime);

    return 0;
}

void* routine(void *pvoidData)
{
    DataStruct *data = (DataStruct*)pvoidData;
    HANDLE_ERROR(cudaSetDevice(data->deviceID));

    int size = data->size;
    float *a, *b, c, *partial_c;
    float *dev_a, *dev_b, *dev_partial_c;

    // allocate memory on the CPU side
    a = data->a;
    b = data->b;
    partial_c = (float*)malloc(blocksPerGrid * sizeof(float));

    // allocate the memory on the GPU
    HANDLE_ERROR(cudaMalloc((void**)&dev_a, size * sizeof(float)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_b, size * sizeof(float)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_partial_c, blocksPerGrid * sizeof(float)));

    // copy the arrays 'a' and 'b' to the GPU
    HANDLE_ERROR(cudaMemcpy(dev_a, a, size * sizeof(float), cudaMemcpyHostToDevice));
    HANDLE_ERROR(cudaMemcpy(dev_b, b, size * sizeof(float), cudaMemcpyHostToDevice));

    dot_kernel<<<blocksPerGrid, threadsPerBlock>>>(size, dev_a, dev_b, dev_partial_c);

    // copy the array 'c' back from the GPU to the CPU
    HANDLE_ERROR(cudaMemcpy(partial_c, dev_partial_c, blocksPerGrid * sizeof(float), cudaMemcpyDeviceToHost));

    // finish up on the CPU side
    c = 0;
    for (int i = 0; i < blocksPerGrid; i++) {
        c += partial_c[i];
    }

    HANDLE_ERROR(cudaFree(dev_a));
    HANDLE_ERROR(cudaFree(dev_b));
    HANDLE_ERROR(cudaFree(dev_partial_c));

    // free memory on the CPU side
    free(partial_c);

    data->returnValue = c;
    return 0;
}

int test21()
{
    int deviceCount;
    HANDLE_ERROR(cudaGetDeviceCount(&deviceCount));
    if (deviceCount < 2) {
        printf("We need at least two compute 1.0 or greater devices, but only found %d\n", deviceCount);
        return 0;
    }

    float *a = (float*)malloc(sizeof(float) * NUM);
    HANDLE_NULL(a);
    float *b = (float*)malloc(sizeof(float) * NUM);
    HANDLE_NULL(b);

    // fill in the host memory with data
    for (int i = 0; i < NUM; i++) {
        a[i] = i;
        b[i] = i * 2;
    }

    // prepare for multithread
    DataStruct data[2];
    data[0].deviceID = 0;
    data[0].size = NUM/2;
    data[0].a = a;
    data[0].b = b;

    data[1].deviceID = 1;
    data[1].size = NUM/2;
    data[1].a = a + NUM/2;
    data[1].b = b + NUM/2;

    CUTThread thread = start_thread(routine, &(data[0]));
    routine(&(data[1]));
    end_thread(thread);

    // free memory on the CPU side
    free(a);
    free(b);

    printf("Value calculated:  %f\n", data[0].returnValue + data[1].returnValue);
    return 0;
}

funset.cuh:

#ifndef _FUNSET_CUH_
#define _FUNSET_CUH_

#include <stdio.h>
#include "cpu_anim.h"

#define NUM  (33 * 1024 * 1024) //1024*1024//33 * 1024//10
#define DIM 1024//1000
#define PI 3.1415926535897932f
#define imin(a, b) ((a) < (b) ? (a) : (b))
const int threadsPerBlock = 256;
const int blocksPerGrid = imin(32, (NUM/2+threadsPerBlock-1) / threadsPerBlock);//imin(32, (NUM + threadsPerBlock - 1) / threadsPerBlock);
#define rnd(x) (x * rand() / RAND_MAX)
#define INF 2e10f
#define SPHERES 20
#define MAX_TEMP 1.0f
#define MIN_TEMP 0.0001f
#define SPEED 0.25f
#define SIZE (100*1024*1024)
#define FULL_DATA_SIZE (NUM*20)

// The __global__ qualifier tells the compiler the function should be compiled to run on the device rather than on the host
__global__ void add(int a, int b, int* c);
__global__ void add_blockIdx(int* a, int* b, int* c);
__global__ void add_threadIdx(int* a, int* b, int* c);
__global__ void add_blockIdx_threadIdx(int* a, int* b, int* c);

struct cuComplex {
    float r, i;
    __device__ cuComplex(float a, float b) : r(a), i(b) {}
    __device__ float magnitude2(void) { return r * r + i * i; }
    __device__ cuComplex operator*(const cuComplex& a) { return cuComplex(r*a.r - i*a.i, i*a.r + r*a.i); }
    __device__ cuComplex operator+(const cuComplex& a) { return cuComplex(r+a.r, i+a.i); }
};

__device__ int julia(int x, int y);
__global__ void kernel_julia(unsigned char *ptr);
__global__ void ripple_kernel(unsigned char *ptr, int ticks);
struct Sphere;

struct DataBlock {
    unsigned char *dev_bitmap;
    CPUAnimBitmap *bitmap;
    Sphere *s;
};

void generate_frame(DataBlock *d, int ticks);
void cleanup(DataBlock *d);

__global__ void dot_kernel(float *a, float *b, float *c);
__global__ void julia_kernel(unsigned char *ptr);

// model a sphere with a data structure
struct Sphere {
    float r, b, g;
    float radius;
    float x, y, z;
    __device__ float hit(float ox, float oy, float *n)
    {
        float dx = ox - x;
        float dy = oy - y;
        if (dx*dx + dy*dy < radius*radius) {
            float dz = sqrtf(radius*radius - dx*dx - dy*dy);
            *n = dz / sqrtf(radius * radius);
            return dz + z;
        }
        return -INF;
    }
};

// declared in constant memory; __constant__ restricts access to the variable to read-only
__constant__ Sphere s[SPHERES];

__global__ void RayTracing_kernel(Sphere *s, unsigned char *ptr);
__global__ void RayTracing_kernel(unsigned char *ptr);

// these exist on the GPU side
texture<float> texConstSrc, texIn, texOut;
texture<float, 2> texConstSrc2, texIn2, texOut2;

// this kernel takes in a 2-d array of floats; it updates the value-of-interest by a
//scaled value based on itself and its nearest neighbors
__global__ void Heat_blend_kernel(float *dst, bool dstOut);
__global__ void blend_kernel(float *dst, bool dstOut);
__global__ void Heat_copy_const_kernel(float *iptr);
__global__ void copy_const_kernel(float *iptr);

struct Heat_DataBlock {
    unsigned char *output_bitmap;
    float *dev_inSrc;
    float *dev_outSrc;
    float *dev_constSrc;
    CPUAnimBitmap *bitmap;
    cudaEvent_t start, stop;
    float totalTime;
    float frames;
};

// globals needed by the update routine
struct DataBlock_opengl {
    float *dev_inSrc;
    float *dev_outSrc;
    float *dev_constSrc;
    cudaEvent_t start, stop;
    float totalTime;
    float frames;
};

void Heat_anim_gpu(Heat_DataBlock *d, int ticks);
void anim_gpu(Heat_DataBlock *d, int ticks);
//clean up memory allocated on the GPU
void Heat_anim_exit(Heat_DataBlock *d);
void anim_exit(Heat_DataBlock *d); 
void generate_frame_opengl(uchar4 *pixels, void*, int ticks);
__global__ void ripple_kernel_opengl(uchar4 *ptr, int ticks);
__global__ void Heat_blend_kernel_opengl(float *dst, bool dstOut);
__global__ void Heat_copy_const_kernel_opengl(float *iptr);
void anim_gpu_opengl(uchar4* outputBitmap, DataBlock_opengl *d, int ticks);
void anim_exit_opengl(DataBlock_opengl *d);
__global__ void histo_kernel(unsigned char *buffer, long size, unsigned int *histo);
__global__ void singlestream_kernel(int *a, int *b, int *c);
__global__ void dot_kernel(int size, float *a, float *b, float *c);

struct DataStruct {
    int deviceID;
    int size;
    float *a;
    float *b;
    float returnValue;
};

#endif //_FUNSET_CUH_

funset.cu:

#include "funset.cuh"
#include <stdio.h>

__global__ void add(int a, int b, int* c)
{
    *c = a + b;
}

// __global__: called from the host and runs on the device
__global__ void add_blockIdx(int* a, int* b, int* c)
{
    // compute the data at this index;
    // blockIdx is a built-in variable, predefined by the CUDA runtime,
    // holding the index of the thread block currently executing the device code
    int tid = blockIdx.x; // this thread handles the data at its thread id
    if (tid < NUM)
        c[tid] = a[tid] + b[tid];
}

// __device__: the code runs on the GPU rather than the host;
// since this function is declared as a __device__ function, it can only be
// called from other __device__ functions or from __global__ functions
__device__ int julia(int x, int y) 
{
    const float scale = 1.5;
    float jx = scale * (float)(DIM/2 - x)/(DIM/2);
    float jy = scale * (float)(DIM/2 - y)/(DIM/2);

    cuComplex c(-0.8, 0.156);
    cuComplex a(jx, jy);

    int i = 0;
    for (i = 0; i < 200; i++) {
        a = a * a + c;
        if (a.magnitude2() > 1000)
            return 0;
    }
    return 1;
}

__global__ void kernel_julia(unsigned char *ptr)
{
    // map from blockIdx to pixel position
    int x = blockIdx.x;
    int y = blockIdx.y;
    // gridDim is a built-in variable; for all thread blocks it is a constant
    // holding the size of each grid dimension; here gridDim is (DIM, DIM)
    int offset = x + y * gridDim.x;

    // now calculate the value at that position
    int juliaValue = julia(x, y);
    ptr[offset*4 + 0] = 255 * juliaValue;
    ptr[offset*4 + 1] = 0;
    ptr[offset*4 + 2] = 0;
    ptr[offset*4 + 3] = 255;
}

__global__ void add_threadIdx(int* a, int* b, int* c)
{
    // index the data by thread index rather than by block index (blockIdx.x)
    int tid = threadIdx.x;
    if (tid < NUM)
        c[tid] = a[tid] + b[tid];
}

__global__ void add_blockIdx_threadIdx(int* a, int* b, int* c)
{
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    if (tid == 0) {
        printf("blockDim.x = %d, gridDim.x = %d\n", blockDim.x, gridDim.x);
    }
    while (tid < NUM) {
        c[tid] = a[tid] + b[tid];
        tid += blockDim.x * gridDim.x;
    }
}

__global__ void ripple_kernel(unsigned char *ptr, int ticks)
{
    // map from threadIdx/blockIdx to pixel position;
    // linearize the x and y values to get an offset into the output buffer
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    // now calculate the value at that position:
    // a sinusoidal "ripple" that varies with time
    float fx = x - DIM/2;
    float fy = y - DIM/2;
    float d = sqrtf(fx * fx + fy * fy);
    unsigned char grey = (unsigned char)(128.0f + 127.0f * cos(d/10.0f - ticks/7.0f) / (d/10.0f + 1.0f));
    ptr[offset*4 + 0] = grey;
    ptr[offset*4 + 1] = grey;
    ptr[offset*4 + 2] = grey;
    ptr[offset*4 + 3] = 255;
}

__global__ void dot_kernel(float *a, float *b, float *c)
{
    // declare a shared-memory buffer that holds each thread's running sum
    __shared__ float cache[threadsPerBlock];

    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    int cacheIndex = threadIdx.x;

    float temp = 0;
    while (tid < NUM) {
        temp += a[tid] * b[tid];
        tid += blockDim.x * gridDim.x;
    }

    // set the cache values
    cache[cacheIndex] = temp;

    // synchronize threads in this block: every thread in the block must finish
    // the statements before __syncthreads() before any thread executes the next statement
    __syncthreads();

    // for reductions, threadsPerBlock must be a power of 2 because of the following code
    int i = blockDim.x/2;
    while (i != 0) {
        if (cacheIndex < i)
            cache[cacheIndex] += cache[cacheIndex + i];
        // the loop iteration updates the shared variable cache; all threads'
        // updates must complete before the next iteration begins
        __syncthreads();
        i /= 2;
    }

    if (cacheIndex == 0)
        c[blockIdx.x] = cache[0];
}

__global__ void julia_kernel(unsigned char *ptr)
{
    // map from threadIdx/blockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    __shared__ float shared[16][16];

    // now calculate the value at that position
    const float period = 128.0f;
    shared[threadIdx.x][threadIdx.y] = 255 * (sinf(x*2.0f*PI/period) + 1.0f) * (sinf(y*2.0f*PI/period) + 1.0f) / 4.0f;

    // removing this __syncthreads() shows graphically what happens
    // when it doesn't exist; this is an example of why we need it
    __syncthreads();

    ptr[offset*4 + 0] = 0;
    ptr[offset*4 + 1] = shared[15 - threadIdx.x][15 - threadIdx.y];
    ptr[offset*4 + 2] = 0;
    ptr[offset*4 + 3] = 255;
}

__global__ void RayTracing_kernel(Sphere *s, unsigned char *ptr)
{
    // map from threadIdx/blockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;
    float ox = (x - DIM/2);
    float oy = (y - DIM/2);

    float r = 0, g = 0, b = 0;
    float maxz = -INF;
    for (int i = 0; i < SPHERES; i++) {
        float n;
        float t = s[i].hit(ox, oy, &n);
        if (t > maxz) {
            float fscale = n;
            r = s[i].r * fscale;
            g = s[i].g * fscale;
            b = s[i].b * fscale;
            maxz = t;
        }
    }

    ptr[offset*4 + 0] = (int)(r * 255);
    ptr[offset*4 + 1] = (int)(g * 255);
    ptr[offset*4 + 2] = (int)(b * 255);
    ptr[offset*4 + 3] = 255;
}

__global__ void RayTracing_kernel(unsigned char *ptr)
{
    // map from threadIdx/blockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;
    float ox = (x - DIM/2);
    float oy = (y - DIM/2);

    float r = 0, g = 0, b = 0;
    float maxz = -INF;
    for (int i = 0; i < SPHERES; i++) {
        float n;
        float t = s[i].hit(ox, oy, &n);
        if (t > maxz) {
            float fscale = n;
            r = s[i].r * fscale;
            g = s[i].g * fscale;
            b = s[i].b * fscale;
            maxz = t;
        }
    }

    ptr[offset*4 + 0] = (int)(r * 255);
    ptr[offset*4 + 1] = (int)(g * 255);
    ptr[offset*4 + 2] = (int)(b * 255);
    ptr[offset*4 + 3] = 255;
}

__global__ void Heat_blend_kernel(float *dst, bool dstOut)
{
    // map from threadIdx/blockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    int left = offset - 1;
    int right = offset + 1;
    if (x == 0) left++;
    if (x == DIM-1) right--;

    int top = offset - DIM;
    int bottom = offset + DIM;
    if (y == 0) top += DIM;
    if (y == DIM-1) bottom -= DIM;

    float t, l, c, r, b;
    if (dstOut) {
        // tex1Dfetch is a compiler intrinsic that fetches the texture from device memory
        t = tex1Dfetch(texIn, top);
        l = tex1Dfetch(texIn, left);
        c = tex1Dfetch(texIn, offset);
        r = tex1Dfetch(texIn, right);
        b = tex1Dfetch(texIn, bottom);
    } else {
        t = tex1Dfetch(texOut, top);
        l = tex1Dfetch(texOut, left);
        c = tex1Dfetch(texOut, offset);
        r = tex1Dfetch(texOut, right);
        b = tex1Dfetch(texOut, bottom);
    }
    dst[offset] = c + SPEED * (t + b + r + l - 4 * c);
}

__global__ void blend_kernel(float *dst, bool dstOut)
{
    // map from threadIdx/blockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    float t, l, c, r, b;
    if (dstOut) {
        t = tex2D(texIn2, x, y-1);
        l = tex2D(texIn2, x-1, y);
        c = tex2D(texIn2, x, y);
        r = tex2D(texIn2, x+1, y);
        b = tex2D(texIn2, x, y+1);
    } else {
        t = tex2D(texOut2, x, y-1);
        l = tex2D(texOut2, x-1, y);
        c = tex2D(texOut2, x, y);
        r = tex2D(texOut2, x+1, y);
        b = tex2D(texOut2, x, y+1);
    }
    dst[offset] = c + SPEED * (t + b + r + l - 4 * c);
}

__global__ void Heat_copy_const_kernel(float *iptr)
{
    // map from threadIdx/blockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    float c = tex1Dfetch(texConstSrc, offset);
    if (c != 0)
        iptr[offset] = c;
}

__global__ void copy_const_kernel(float *iptr)
{
    // map from threadIdx/blockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    float c = tex2D(texConstSrc2, x, y);
    if (c != 0)
        iptr[offset] = c;
}

void generate_frame_opengl(uchar4 *pixels, void*, int ticks)
{
    dim3 grids(DIM / 16, DIM / 16);
    dim3 threads(16, 16);
    ripple_kernel_opengl<<<grids, threads>>>(pixels, ticks);
}

__global__ void ripple_kernel_opengl(uchar4 *ptr, int ticks)
{
    // map from threadIdx/blockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    // now calculate the value at that position
    float fx = x - DIM / 2;
    float fy = y - DIM / 2;
    float d = sqrtf(fx * fx + fy * fy);
    unsigned char grey = (unsigned char)(128.0f + 127.0f * cos(d/10.0f - ticks/7.0f) / (d/10.0f + 1.0f));
    ptr[offset].x = grey;
    ptr[offset].y = grey;
    ptr[offset].z = grey;
    ptr[offset].w = 255;
}

__global__ void Heat_blend_kernel_opengl(float *dst, bool dstOut)
{
    // map from threadIdx/blockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    int left = offset - 1;
    int right = offset + 1;
    if (x == 0) left++;
    if (x == DIM-1) right--;

    int top = offset - DIM;
    int bottom = offset + DIM;
    if (y == 0) top += DIM;
    if (y == DIM-1) bottom -= DIM;

    float t, l, c, r, b;
    if (dstOut) {
        t = tex1Dfetch(texIn, top);
        l = tex1Dfetch(texIn, left);
        c = tex1Dfetch(texIn, offset);
        r = tex1Dfetch(texIn, right);
        b = tex1Dfetch(texIn, bottom);
    } else {
        t = tex1Dfetch(texOut, top);
        l = tex1Dfetch(texOut, left);
        c = tex1Dfetch(texOut, offset);
        r = tex1Dfetch(texOut, right);
        b = tex1Dfetch(texOut, bottom);
    }
    dst[offset] = c + SPEED * (t + b + r + l - 4 * c);
}

__global__ void Heat_copy_const_kernel_opengl(float *iptr)
{
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    float c = tex1Dfetch(texConstSrc, offset);
    if (c != 0)
        iptr[offset] = c;
}

__global__ void histo_kernel(unsigned char *buffer, long size, unsigned int *histo)
{
    // clear out the accumulation buffer called temp; since we are launched with
    // 256 threads, it is easy to clear that memory with one write per thread
    __shared__ unsigned int temp[256]; // shared-memory buffer
    temp[threadIdx.x] = 0;
    __syncthreads();

    // calculate the starting index and the offset to the next block that each thread will be processing
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    int stride = blockDim.x * gridDim.x;
    while (i < size) {
        atomicAdd(&temp[buffer[i]], 1);
        i += stride;
    }

    // sync the data from the above writes to shared memory, then add the shared memory
    // values to the values from the other thread blocks using global memory atomic adds;
    // same as before, since we have 256 threads, updating the global histogram is just one write per thread!
    __syncthreads();
    atomicAdd(&(histo[threadIdx.x]), temp[threadIdx.x]);
}

__global__ void singlestream_kernel(int *a, int *b, int *c)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx < NUM) {
        int idx1 = (idx + 1) % 256;
        int idx2 = (idx + 2) % 256;
        float as = (a[idx] + a[idx1] + a[idx2]) / 3.0f;
        float bs = (b[idx] + b[idx1] + b[idx2]) / 3.0f;
        c[idx] = (as + bs) / 2;
    }
}

__global__ void dot_kernel(int size, float *a, float *b, float *c)
{
    __shared__ float cache[threadsPerBlock];

    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    int cacheIndex = threadIdx.x;

    float temp = 0;
    while (tid < size) {
        temp += a[tid] * b[tid];
        tid += blockDim.x * gridDim.x;
    }

    // set the cache values
    cache[cacheIndex] = temp;

    // synchronize threads in this block
    __syncthreads();

    // for reductions, threadsPerBlock must be a power of 2 because of the following code
    int i = blockDim.x / 2;
    while (i != 0) {
        if (cacheIndex < i)
            cache[cacheIndex] += cache[cacheIndex + i];
        __syncthreads();
        i /= 2;
    }

    if (cacheIndex == 0)
        c[blockIdx.x] = cache[0];
}


The code above is organized from the contents of the book 《GPU高性能编程CUDA实战》 (CUDA by Example).

GitHub:https://github.com/fengbingchun/CUDA_Test

