Changes#
Version 2020.1#
Removes support for Python 2.7.
Version 2019.1#
Build improvements.
Bug fixes.
Version 2018.1#
Update Boost.Python for better PyPy support.
Add pycuda.elementwise.ElementwiseKernel.get_texref().
Bug fixes.
Version 2017.2#
zeros_like() and empty_like() now have dtype and order arguments as in numpy. Previously these routines always returned a C-order array. The new default behavior follows the numpy default, which is to match the order and strides of the input as closely as possible. (See the sketch after this list.)
A ones_like() gpuarray function was added.
The methods GPUArray.imag, GPUArray.real, and GPUArray.conj() now all return Fortran-ordered arrays when the GPUArray is Fortran-ordered.
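A minimal sketch of the behavior described above, assuming a context set up via pycuda.autoinit; the shapes and dtypes are arbitrary illustration values, and the dtype override on empty_like() follows the numpy-style keyword described in this entry:

    import numpy as np
    import pycuda.autoinit  # noqa: F401 -- sets up a CUDA context
    import pycuda.gpuarray as gpuarray

    # A Fortran-ordered float32 array on the device.
    a = gpuarray.zeros((4, 3), dtype=np.float32, order="F")

    b = gpuarray.zeros_like(a)                    # follows a's order and strides by default
    c = gpuarray.empty_like(a, dtype=np.float64)  # dtype can be overridden, numpy-style
    d = gpuarray.ones_like(a)                     # the newly added ones_like()

    print(b.strides, c.dtype, d.get().ravel()[0])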
Version 2016.2#
Note
This version is the current development version. You can get it from PyCUDA's version control repository.
Version 2016.1#
Bug fixes.
Global control of caching.
Matrix/array interop.
Version 2014.1#
Add PointerHolderBase.as_buffer() and DeviceAllocation.as_buffer().
Support for device_attribute values added in CUDA 5.0, 5.5, and 6.0.
Support for Managed Memory. (contributed by Stan Seibert)
Version 2013.1.1#
Windows fix for PyCUDA on Python 3 (Thanks, Christoph Gohlke)
Version 2013.1#
Python 3 support (large parts contributed by Tomasz Rybak)
Add pycuda.gpuarray.GPUArray.__getitem__(), supporting generic slicing. It is possible to create non-contiguous arrays using this functionality. Most operations (elementwise etc.) will not work on such arrays. (See the sketch after this list.)
More generators in pycuda.curandom. (contributed by Tomasz Rybak)
Many bug fixes.
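A short sketch of the slicing support mentioned in the first item, illustrating the contiguity caveat; the array contents are arbitrary:

    import numpy as np
    import pycuda.autoinit  # noqa: F401
    import pycuda.gpuarray as gpuarray

    a = gpuarray.to_gpu(np.arange(10, dtype=np.float32))

    b = a[2:8]   # contiguous slice: still usable with elementwise operations
    c = a[::2]   # strided slice: non-contiguous, most operations will refuse it

    print((b + 1).get())
    print(c.shape, c.strides)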
Note
The addition of pycuda.gpuarray.GPUArray.__getitem__() has an unintended consequence due to numpy bug 3375. For instance, this expression:
numpy.float32(5) * some_gpu_array
may take a very long time to execute. This is because numpy first builds an object array of (compute-device) scalars (!) before it decides that that's probably not such a bright idea and finally calls pycuda.gpuarray.GPUArray.__rmul__().
Note that only left arithmetic operations of pycuda.gpuarray.GPUArray by numpy scalars are affected. Python's number types (float etc.) are unaffected, as are right multiplications.
If a program that used to run fast suddenly runs extremely slowly, it is likely that this bug is to blame.
Here's what you can do: use a Python scalar instead of a numpy scalar, or write the operation as a right multiplication by the scalar; both avoid the slow path. A short sketch follows.
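A brief sketch of the affected pattern and the two workarounds, assuming some_gpu_array is an ordinary pycuda.gpuarray.GPUArray:

    import numpy as np
    import pycuda.autoinit  # noqa: F401
    import pycuda.gpuarray as gpuarray

    some_gpu_array = gpuarray.to_gpu(np.arange(1024, dtype=np.float32))

    # Affected pattern (avoid): a numpy scalar on the left takes the slow numpy path.
    # slow = np.float32(5) * some_gpu_array

    # Workarounds: use a Python scalar, or put the numpy scalar on the right.
    fast1 = 5.0 * some_gpu_array
    fast2 = some_gpu_array * np.float32(5)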
Version 2012.1#
Numerous bug fixes. (including shipped-boost compilation on gcc 4.7)
Version 2011.2#
Fix a memory leak when using pagelocked memory. (reported by Paul Cazeaux)
Fix complex scalar argument passing.
Fix pycuda.gpuarray.zeros() when used on complex arrays.
Add pycuda.tools.register_dtype() to enable scan/reduction on struct types.
More improvements to CURAND.
Add support for CUDA 4.1.
Version 2011.1.2#
Various fixes.
Version 2011.1.1#
Various fixes.
Version 2011.1#
When you update code to run on this version of PyCUDA, please make sure to have deprecation warnings enabled, so that you know when your code needs updating. (See the Python docs. Caution: As of Python 2.7, deprecation warnings are disabled by default.)
Add support for CUDA 3.0-style OpenGL interop. (thanks to Tomasz Rybak)
Add range and slice keyword arguments to pycuda.elementwise.ElementwiseKernel.__call__(). (See the sketch after this list.)
Document the preamble constructor keyword argument to pycuda.elementwise.ElementwiseKernel.
Add vector types, see pycuda.gpuarray.vec.
Add pycuda.scan.
Add support for new features in CUDA 4.0.
Add pycuda.gpuarray.GPUArray.strides and pycuda.gpuarray.GPUArray.flags. Allow the creation of arrays in C and Fortran order.
Adopt the stateless launch interface from CUDA, deprecate the old one.
Add CURAND wrapper. (with work by Tomasz Rybak)
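A short sketch of an ElementwiseKernel invocation, following the documented linear-combination pattern; the commented line hints at the new range= keyword and is an assumption about its exact spelling rather than a tested call:

    import numpy as np
    import pycuda.autoinit  # noqa: F401
    import pycuda.gpuarray as gpuarray
    from pycuda.elementwise import ElementwiseKernel

    lin_comb = ElementwiseKernel(
        "float a, float *x, float b, float *y, float *z",
        "z[i] = a*x[i] + b*y[i]",
        "lin_comb")

    x = gpuarray.to_gpu(np.random.rand(50000).astype(np.float32))
    y = gpuarray.to_gpu(np.random.rand(50000).astype(np.float32))
    z = gpuarray.empty_like(x)

    lin_comb(2, x, 3, y, z)   # runs over all indices i
    # lin_comb(2, x, 3, y, z, range=slice(0, 25000))  # assumed form of the new range= keyword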
Version 0.94.2#
Fix the pesky Fermi reduction bug. (thanks to Tomasz Rybak)
Version 0.94.1#
Support for CUDA debugging. (see FAQ for details.)
Version 0.94#
Support for CUDA 3.0. (but not CUDA 3.0 beta!) Search for "CUDA 3.0" in Device Interface to see what's new.
Support for CUDA 3.1 beta. Search for "CUDA 3.1" in Device Interface to see what's new.
Support for CUDA 3.2 RC. Search for "CUDA 3.2" in Device Interface to see what's new.
Add sparse matrix-vector multiplication and linear system solving code, in pycuda.sparse.
Add pycuda.gpuarray.if_positive(), pycuda.gpuarray.maximum(), pycuda.gpuarray.minimum(). (See the sketch after this list.)
Deprecate pycuda.tools.get_default_device().
Use pycuda.tools.make_default_context() in pycuda.autoinit, which changes its behavior.
Remove previously deprecated features:
- pycuda.driver.Function.registers, pycuda.driver.Function.lmem, and pycuda.driver.Function.smem, which had been deprecated in favor of the attribute mechanism (see pycuda.driver.Function.num_regs for more).
- The three-argument forms (i.e. with streams) of pycuda.driver.memcpy_dtoh() and pycuda.driver.memcpy_htod(). Use pycuda.driver.memcpy_dtoh_async() and pycuda.driver.memcpy_htod_async() instead.
- pycuda.driver.SourceModule.
Add pycuda.tools.context_dependent_memoize(), use it for context-dependent caching of PyCUDA's canned kernels.
Add attributes of pycuda.driver.CompileError. (requested by Dan Lepage)
Add preliminary support for complex numbers. (initial discussion with Daniel Fan)
Add pycuda.gpuarray.GPUArray.real, pycuda.gpuarray.GPUArray.imag, pycuda.gpuarray.GPUArray.conj().
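A minimal sketch of the new comparison helpers and the preliminary complex-number support listed above; the input data is random and purely illustrative:

    import numpy as np
    import pycuda.autoinit  # noqa: F401
    import pycuda.gpuarray as gpuarray

    a = gpuarray.to_gpu(np.random.randn(1000).astype(np.float32))
    b = gpuarray.to_gpu(np.random.randn(1000).astype(np.float32))

    hi = gpuarray.maximum(a, b)           # elementwise maximum
    lo = gpuarray.minimum(a, b)           # elementwise minimum
    sel = gpuarray.if_positive(a, b, a)   # where a > 0 take b, otherwise a

    z = gpuarray.to_gpu(
        (np.random.randn(1000) + 1j * np.random.randn(1000)).astype(np.complex64))
    print(z.real.get()[0], z.imag.get()[0], z.conj().get()[0])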
Version 0.93#
Warning
Version 0.93 makes some changes to the PyCUDA programming interface. In all cases where documented features were changed, the old usage continues to work, but results in a warning. It is recommended that you update your code to remove the warning.
OpenGL interoperability in pycuda.gl.
Document pycuda.gpuarray.GPUArray.__len__(). Change its definition to match numpy.
Let pycuda.gpuarray.GPUArray operators deal with generic data types, including type promotion.
Fix thread handling by making the internal context stack thread-local.
Add pycuda.gpuarray.sum(), pycuda.gpuarray.dot(), pycuda.gpuarray.subset_dot().
Synchronous and asynchronous memory transfers are now separate from each other, the latter having an _async suffix. The now-synchronous forms still take a pycuda.driver.Stream argument, but this practice is deprecated and prints a warning.
pycuda.gpuarray.GPUArray no longer has an associated pycuda.driver.Stream. Asynchronous GPUArray transfers are now separate from synchronous ones and have an _async suffix.
Support for features added in CUDA 2.2.
pycuda.driver.SourceModule has been moved to pycuda.compiler.SourceModule. It is still available by the old name, but will print a warning about the impending deprecation.
pycuda.driver.Device.get_attribute() with a pycuda.driver.device_attribute attr can now be spelled dev.attr, with no further namespace detours. (Suggested by Ian Cullinan) Likewise for pycuda.driver.Function.get_attribute(). (See the sketch after this list.)
pycuda.driver.Function.registers, pycuda.driver.Function.lmem, and pycuda.driver.Function.smem have been deprecated in favor of the mechanism above. See pycuda.driver.Function.num_regs for more.
Add a PyCUDA version query mechanism, see pycuda.VERSION.
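A small sketch of the shorthand attribute spelling and the new SourceModule import location; max_threads_per_block is used here only as one representative device_attribute, and the lowercased-attribute spelling follows the description above:

    import pycuda.autoinit  # noqa: F401 -- also exposes the chosen device
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule  # noqa: F401 -- new home; the old pycuda.driver name still works but warns

    dev = pycuda.autoinit.device

    # Long spelling ...
    n = dev.get_attribute(drv.device_attribute.MAX_THREADS_PER_BLOCK)
    # ... and the shorthand: the attribute name, lowercased, directly on the Device.
    n_short = dev.max_threads_per_block

    print(n == n_short)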
Version 0.92#
Note
If you're upgrading from prior versions, you may delete the directory $HOME/.pycuda-compiler-cache to recover now-unused disk space.
Note
During this release time frame, I had the honor of giving a talk on PyCUDA for a class that a group around Nicolas Pinto was teaching at MIT. If you're interested, the slides for it are available.
Make pycuda.tools.DeviceMemoryPool official functionality, after numerous improvements. Add pycuda.tools.PageLockedMemoryPool for pagelocked memory, too. (See the sketch after this list.)
Properly deal with automatic cleanup in the face of several contexts.
Fix compilation on Python 2.4.
Fix 3D arrays. (Nicolas Pinto)
Improve error message when nvcc is not found.
Automatically run Python GC before throwing out-of-memory errors.
Allow explicit release of memory using pycuda.driver.DeviceAllocation.free(), pycuda.driver.HostAllocation.free(), pycuda.driver.Array.free(), pycuda.tools.PooledDeviceAllocation.free(), and pycuda.tools.PooledHostAllocation.free().
Add the configure switch ./configure.py --cuda-trace to enable API tracing.
Add a documentation chapter and examples on Metaprogramming.
Add pycuda.gpuarray.empty_like() and pycuda.gpuarray.zeros_like().
Add and document pycuda.gpuarray.GPUArray.mem_size in anticipation of stride/pitch support in pycuda.gpuarray.GPUArray.
Merge Jozef Vesely's MD5-based RNG.
Document pycuda.driver.from_device() and pycuda.driver.from_device_like().
Various documentation improvements. (many of them from Nicholas Tung)
Move PyCUDA's compiler cache to the system temporary directory, rather than the user's home directory.
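A brief sketch of the memory-pool and explicit-free functionality described above; the 1 MiB sizes are arbitrary, and pycuda.tools.PageLockedMemoryPool offers the analogous pooling for pagelocked host memory:

    import pycuda.autoinit  # noqa: F401
    import pycuda.driver as drv
    from pycuda.tools import DeviceMemoryPool

    pool = DeviceMemoryPool()

    buf = pool.allocate(1 << 20)    # 1 MiB of device memory drawn from the pool
    buf.free()                      # returns the block to the pool right away

    plain = drv.mem_alloc(1 << 20)  # an ordinary allocation ...
    plain.free()                    # ... can now also be released explicitly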
Version 0.91#
Add support for compiling on CUDA 1.1. Added version query pycuda.driver.get_version(). Updated documentation to show 2.0-only functionality.
Support for Windows and MacOS X, in addition to Linux. (Gert Wohlgemuth, Cosmin Stejerean, Znah on the Nvidia forums, and David Gadling)
Support more arithmetic operators on pycuda.gpuarray.GPUArray. (Gert Wohlgemuth)
Add pycuda.gpuarray.arange(). (Gert Wohlgemuth)
Add pycuda.curandom. (Gert Wohlgemuth)
Add pycuda.cumath. (Gert Wohlgemuth)
Add pycuda.autoinit.
Add pycuda.tools.
Add pycuda.tools.DeviceData and pycuda.tools.OccupancyRecord.
pycuda.gpuarray.GPUArray parallelizes properly on GTX200-generation devices.
Make pycuda.driver.Function resource usage available to the program. (See, e.g., pycuda.driver.Function.registers.)
Cache kernels compiled by pycuda.compiler.SourceModule. (Tom Annau) (A sketch combining several of these additions follows this list.)
Allow for faster, prepared kernel invocation. See pycuda.driver.Function.prepare().
Added memory pools, at pycuda.tools.DeviceMemoryPool, as experimental, undocumented functionality. For some workloads, this can cure the slowness of pycuda.driver.mem_alloc().
Fix the memset family of functions.
Improve error reporting.
Add an order parameter to pycuda.driver.matrix_to_array() and pycuda.driver.make_multichannel_2d_array().
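A minimal sketch that ties several of these additions together: pycuda.autoinit for context setup, a SourceModule kernel (whose compiled result is cached), pycuda.curandom for input data, and pycuda.cumath for elementwise math. The kernel and launch sizes are illustrative only; the prepared-call path via Function.prepare() is mentioned above but not shown here.

    import numpy as np
    import pycuda.autoinit  # noqa: F401 -- one-line context setup
    import pycuda.cumath as cumath
    import pycuda.curandom as curandom
    from pycuda.compiler import SourceModule

    mod = SourceModule("""
    __global__ void double_them(float *a)
    {
        int i = threadIdx.x + blockIdx.x * blockDim.x;
        a[i] *= 2.0f;
    }
    """)
    double_them = mod.get_function("double_them")

    a = curandom.rand((512,), dtype=np.float32)   # uniform random numbers on the device
    double_them(a, block=(128, 1, 1), grid=(4, 1))

    b = cumath.sin(a)                             # elementwise math on GPUArrays
    print(b.get()[:4])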
Acknowledgments#
Gert Wohlgemuth ported PyCUDA to MacOS X and contributed large parts of pycuda.gpuarray.GPUArray.
Alexander Mordvintsev contributed fixes for Windows XP.
Cosmin Stejerean provided multiple patches for PyCUDA's build system.
Tom Annau contributed an alternative SourceModule compiler cache as well as Windows build insight.
Nicholas Tung improved PyCUDAâs documentation.
Jozef Vesely contributed a massively improved random number generator derived from the RSA Data Security, Inc. MD5 Message Digest Algorithm.
Chris Heuser provided a test case for multi-threaded PyCUDA.
The reduction templating is based on code by Mark Harris at Nvidia.
Andrew Wagner provided a test case and contributed the port of the convolution example. The original convolution code is based on an example provided by Nvidia.
Hendrik Riedmann contributed the matrix transpose and list selection examples.
Peter Berrington contributed a working example for CUDA-OpenGL interoperability.
Maarten Breddels provided a patch for "flat-egg" support.
Nicolas Pinto refactored pycuda.autoinit for automatic device finding.
Ian Ozsvald and Fabrizio Milo provided patches.
Min Ragan-Kelley solved the long-standing puzzle of why PyCUDA did not work on 64-bit CUDA on OS X (and provided a patch).
Tomasz Rybak solved another long-standing puzzle of why reduction failed to work on some Fermi chips. In addition, he provided a patch that updated PyCUDA's OpenGL interoperability to the state of CUDA 3.0.
Martin Bergtholdt of Philips Research provided a patch that made PyCUDA work on 64-bit Windows 7.
Licensing#
PyCUDA is licensed to you under the MIT/X Consortium license:
Copyright (c) 2009,10 Andreas Klöckner and Contributors.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
PyCUDA includes derivatives of parts of the Thrust computing package (in particular the scan implementation). These parts are licensed as follows:
Copyright 2008-2011 NVIDIA Corporation
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Note
If you use Apache-licensed parts, be aware that these may be incompatible with software licensed exclusively under GPL2. (Most software is licensed as GPL2 or later, in which case this is not an issue.)
Frequently Asked Questions#
The FAQ is now maintained collaboratively in the PyCUDA Wiki.
Citing PyCUDA#
We are not asking you to gratuitously cite PyCUDA in work that is otherwise unrelated to software. That said, if you do discuss some of the development aspects of your code and would like to highlight a few of the ideas behind PyCUDA, feel free to cite this article:
Andreas Klöckner, Nicolas Pinto, Yunsup Lee, Bryan Catanzaro, Paul Ivanov, Ahmed Fasih, PyCUDA and PyOpenCL: A scripting-based approach to GPU run-time code generation, Parallel Computing, Volume 38, Issue 3, March 2012, Pages 157-174.
Here's a BibTeX entry for your convenience:
@article{kloeckner_pycuda_2012,
   author = {{Kl{\"o}ckner}, Andreas
             and {Pinto}, Nicolas
             and {Lee}, Yunsup
             and {Catanzaro}, B.
             and {Ivanov}, Paul
             and {Fasih}, Ahmed},
   title = "{PyCUDA and PyOpenCL: A Scripting-Based Approach to GPU Run-Time Code Generation}",
   journal = "Parallel Computing",
   volume = "38",
   number = "3",
   pages = "157--174",
   year = "2012",
   issn = "0167-8191",
   doi = "10.1016/j.parco.2011.09.001",
}