
Conversation

@mdboom (Contributor) commented Feb 12, 2026

We currently accept an int, a CUdeviceptr, or a buffer-providing object as convertible to a void *. This is handled by a class, _HelperInputVoidPtr, which mainly exists to manage the lifetime of the underlying buffer when the input exposes one.

This object (like all PyObjects) is allocated on the heap and is freed implicitly by Cython at the end of the function. Since it only exists to manage lifetimes when the input exposes a buffer, we pay this heap-allocation penalty even in the common case where the input is a simple integer.
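
For illustration only, a minimal sketch of that heap-allocated pattern in Cython (this is not the actual cuda-python source; the class name follows the description above, everything else is hypothetical, and the CUdeviceptr branch is omitted):

```cython
from cpython.buffer cimport PyObject_GetBuffer, PyBuffer_Release, PyBUF_SIMPLE
from libc.stdint cimport uintptr_t

cdef class _HelperInputVoidPtr:
    # One of these is heap-allocated per call; its lifetime keeps an
    # exported Py_buffer alive until Cython drops the reference.
    cdef Py_buffer _view
    cdef void* cptr
    cdef bint _has_view

    def __cinit__(self, obj):
        self._has_view = False
        if isinstance(obj, int):
            # Plain integer address (the common case) -- but we still
            # paid for this object's heap allocation.
            self.cptr = <void*><uintptr_t>obj
        else:
            # Buffer-protocol path: hold the view for our lifetime.
            PyObject_GetBuffer(obj, &self._view, PyBUF_SIMPLE)
            self.cptr = self._view.buf
            self._has_view = True

    def __dealloc__(self):
        if self._has_view:
            PyBuffer_Release(&self._view)
```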

This change allocates the Py_buffer statically on the stack instead, and so is faster for reasons similar to #1545. We trade some stack space (88 bytes) for speed, but given that CUDA Python API calls can't recursively call themselves, I'm not concerned.
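
A minimal sketch of the stack-based approach, under the same assumptions and imports as above (hypothetical function names; the real code is generated Cython bindings):

```cython
cdef int _get_void_ptr(obj, void** out, Py_buffer* view) except -1:
    # `view` lives in the caller's stack frame, so the common integer
    # case performs no heap allocation at all.
    if isinstance(obj, int):
        out[0] = <void*><uintptr_t>obj
        return 0  # no buffer acquired
    PyObject_GetBuffer(obj, view, PyBUF_SIMPLE)
    out[0] = view.buf
    return 1  # caller must PyBuffer_Release(view) afterwards

def some_api_wrapper(ptr_arg):
    cdef void* cptr
    cdef Py_buffer view          # the 88 bytes of stack space
    cdef int acquired = _get_void_ptr(ptr_arg, &cptr, &view)
    try:
        pass  # ... call the driver API with cptr here ...
    finally:
        if acquired:
            PyBuffer_Release(&view)
```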

This improves the overhead time in the benchmark in #659 from 2.97 µs/call to 2.67 µs/call.

The old _HelperInputVoidPtr class is kept because it is still useful when the input is a list of void *-convertible objects, where we can't statically determine how much space to allocate.
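
Again a hypothetical sketch rather than the real code, reusing the _HelperInputVoidPtr sketch from above: with a list input, the number of Py_buffers is only known at runtime, so each element gets its own lifetime-managing helper object:

```cython
from libc.stdlib cimport malloc, free

def call_with_ptr_list(objs):
    cdef void** ptrs = <void**>malloc(len(objs) * sizeof(void*))
    if ptrs == NULL:
        raise MemoryError()
    cdef _HelperInputVoidPtr helper
    helpers = []  # keeps every element's buffer alive until the call returns
    try:
        for i, obj in enumerate(objs):
            helper = _HelperInputVoidPtr(obj)
            helpers.append(helper)
            ptrs[i] = helper.cptr
        # ... call the driver API with ptrs here ...
    finally:
        free(ptrs)
```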

@copy-pr-bot (bot) commented Feb 12, 2026

Auto-sync is disabled for ready for review pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@mdboom (Contributor, Author) commented Feb 12, 2026

/ok to test
