[PERF]: Faster void * conversion #1616
Open
We currently accept an `int`, a `CUdeviceptr`, or a buffer-providing object as convertible to `void *`. This is handled with a class, `_HelperInputVoidPtr`, which mainly exists to manage the lifetime of the input when it exposes a buffer. This object (like all PyObjects) is allocated on the heap and gets freed implicitly by Cython at the end of the function. Since it only exists to manage lifetimes when the object exposes a buffer, we pay this heap-allocation penalty even in the common case where the input is a simple integer.
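For illustration, the accepted inputs behave roughly like the following pure-Python sketch. The function name is hypothetical, and `ctypes` is only a stand-in for the C-level `Py_buffer` handling the real Cython code performs:

```python
import ctypes

def as_void_ptr(obj):
    # Fast path: a plain Python int is already the address value and
    # needs no lifetime management (CUdeviceptr similarly carries an
    # integer handle in the real code).
    if isinstance(obj, int):
        return obj
    # Otherwise the object must expose the buffer protocol (writable,
    # in this simplified stand-in); the void* is the address of its
    # underlying memory.  The real code fills a Py_buffer via
    # PyObject_GetBuffer and must keep it alive for the duration of
    # the call -- that is the helper's job.
    view = (ctypes.c_char * 1).from_buffer(obj)
    return ctypes.addressof(view)
```

Note that the caller must keep the buffer-providing object alive for as long as the resulting pointer is in use; managing exactly that lifetime is what `_HelperInputVoidPtr` exists for.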
This changes the code to statically allocate the `Py_buffer` on the stack, and so is faster for reasons similar to #1545. This means we are trading some stack space (88 bytes) for speed, but given that CUDA Python API calls can't recursively call themselves, I'm not concerned. This improves the overhead time in the benchmark in #659 from 2.97 µs/call to 2.67 µs/call.
The old `_HelperInputVoidPtr` class stays around here because it is still useful when the input is a list of `void *`-convertible things and we can't statically determine how much space to allocate.
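The list case can be sketched in pure Python like this (hypothetical name; `ctypes` stands in for the dynamically sized allocation the helper owns):

```python
import ctypes

def as_void_ptr_array(objs):
    # The number of pointers is only known at runtime, so the backing
    # array of void* slots must be allocated dynamically -- a helper
    # object that owns this storage (and keeps it, plus any buffer
    # views, alive until the API call returns) is still the natural
    # fit here, unlike the single-pointer fast path above.
    addrs = [
        o if isinstance(o, int)
        else ctypes.addressof((ctypes.c_char * 1).from_buffer(o))
        for o in objs
    ]
    return (ctypes.c_void_p * len(addrs))(*addrs)
```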