1. Using Thrust:
http://stackoverflow.com/questions/13185221/cuda-host-object-to-device
http://blog.csdn.net/shenlan282/article/details/8237576
http://blog.csdn.net/shenlan282/article/details/8237586
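The linked threads come down to the same idea: rather than passing a host object to the device, copy its underlying data into a Thrust device container and hand the kernel a raw pointer. A minimal sketch, assuming a CUDA toolchain with Thrust available (the `scale` kernel and all names here are illustrative, not from the linked posts):

```
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>

// Illustrative kernel: multiply every element by k.
__global__ void scale(float* data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main() {
    thrust::host_vector<float> h(256, 1.0f);   // host-side data
    thrust::device_vector<float> d = h;        // assignment copies host -> device

    // Kernels take raw pointers, not thrust containers.
    scale<<<1, 256>>>(thrust::raw_pointer_cast(d.data()),
                      static_cast<int>(d.size()), 2.0f);

    h = d;                                     // copy results device -> host
    return 0;
}
```

The key point is that `thrust::device_vector` owns device memory and handles the `cudaMemcpy` on assignment, so no manual allocation or copying code is needed.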
2. Using conditional compilation
http://stackoverflow.com/questions/6978643/cuda-and-classes
Define the class in a header that you #include, just like in C++. Any method that must be called from device code should be defined with both __device__ and __host__ declspecs, including the constructor and destructor if you plan to use new/delete on the device (note: new/delete require CUDA 4.0 and a compute capability 2.0 or higher GPU).
You probably want to define a macro like
#ifdef __CUDACC__
#define CUDA_CALLABLE_MEMBER __host__ __device__
#else
#define CUDA_CALLABLE_MEMBER
#endif
Then use this macro on your member functions:
class Foo {
public:
CUDA_CALLABLE_MEMBER Foo() {}
CUDA_CALLABLE_MEMBER ~Foo() {}
CUDA_CALLABLE_MEMBER void aMethod() {}
};
The reason for this is that only the CUDA compiler knows __device__ and __host__ -- your host C++ compiler will raise an error.
Original source: https://www.cnblogs.com/Vulkan/archive/2012/11/30/7530193.html
Reposted from: https://www.ccppcoding.com/archives/71076