CUDA Programming/Terminology
The following is a list of terms specific to the field of GPU parallel computing:
Terms
Kernel
CUDA Definition
A kernel is a function that, when called, is executed N times in parallel by N different CUDA threads, as opposed to only once like regular C functions.[1]
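As an illustration, here is a minimal sketch in the spirit of the vector-addition example from the CUDA Programming Guide; the names VecAdd, A, B, C and the size N are placeholders, not part of the definition:

#include <cuda_runtime.h>
#include <stdio.h>

// Kernel definition: executed N times in parallel, once per CUDA thread.
__global__ void VecAdd(const float* A, const float* B, float* C)
{
    int i = threadIdx.x;   // each thread handles one element
    C[i] = A[i] + B[i];
}

int main(void)
{
    const int N = 8;
    float hA[N], hB[N], hC[N];
    for (int i = 0; i < N; ++i) { hA[i] = (float)i; hB[i] = 2.0f * i; }

    float *dA, *dB, *dC;
    cudaMalloc(&dA, N * sizeof(float));
    cudaMalloc(&dB, N * sizeof(float));
    cudaMalloc(&dC, N * sizeof(float));
    cudaMemcpy(dA, hA, N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, N * sizeof(float), cudaMemcpyHostToDevice);

    // Kernel invocation: one block of N threads, each executing VecAdd once.
    VecAdd<<<1, N>>>(dA, dB, dC);

    cudaMemcpy(hC, dC, N * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < N; ++i) printf("%g ", hC[i]);
    printf("\n");

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}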
OpenCL Definition
A kernel is a function declared in a program and executed on an OpenCL device.[2]
See also: Kernel Object definition
Kernel Object
OpenCL Definition
A kernel object encapsulates a specific kernel function declared in a program and the argument values to be used when executing this function.[2]
Compute Capability
CUDA Definition
The compute capability of a device is defined by a major and a minor revision number. Devices with the same major revision number share the same core architecture. The minor revision number corresponds to an incremental improvement to the core architecture, possibly including new features.
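For example, a host program can query a device's compute capability through the CUDA runtime; the following is a minimal sketch using cudaGetDeviceProperties:

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // prop.major and prop.minor hold the compute capability revision numbers.
        printf("Device %d: %s, compute capability %d.%d\n",
               dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}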
References
[1] NVIDIA CUDA Programming Guide, Version 3.1
[2] The OpenCL Specification, Version 1.1, Revision 36