
Conversation

@caseyonit

Summary

This PR improves LSP attribute completion performance for large modules (e.g. `import polars as pl`, then completing `pl.`) by avoiding eager per-attribute type/docstring resolution when the completion candidate set is large. For smaller completion sets, we still compute and return rich completion metadata (kinds, type details, and documentation) to preserve existing behavior.

Fixes #2296
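The gating strategy described in the summary can be sketched as below. This is an illustrative sketch, not Pyrefly's actual code: the `Completion` struct, `resolve_metadata` helper, and the 200-item cap are all assumptions drawn from this thread.

```rust
// Assumed cap on candidate-set size above which rich metadata is skipped.
const RICH_METADATA_CAP: usize = 200;

#[derive(Debug, Clone)]
struct Completion {
    label: String,
    detail: Option<String>,        // type info, only for small sets
    documentation: Option<String>, // docstring, only for small sets
}

// Always return the candidate labels; only pay for per-attribute
// type/docstring resolution when the candidate set is small.
fn complete_attributes(candidates: Vec<String>) -> Vec<Completion> {
    let rich = candidates.len() <= RICH_METADATA_CAP;
    candidates
        .into_iter()
        .map(|label| {
            let (detail, documentation) = if rich {
                resolve_metadata(&label)
            } else {
                (None, None)
            };
            Completion { label, detail, documentation }
        })
        .collect()
}

// Stand-in for the expensive per-attribute type/docstring lookup.
fn resolve_metadata(label: &str) -> (Option<String>, Option<String>) {
    (
        Some(format!("type of {label}")),
        Some(format!("docs for {label}")),
    )
}
```

The key property is that the expensive lookup runs zero times for a module like `polars` with thousands of attributes, while small completion sets keep the existing rich behavior.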

Test Plan

cargo fmt --all
cargo test -p pyrefly completion -- --nocapture

meta-cla bot added the cla signed label on Feb 3, 2026
@migeed-z
Contributor

migeed-z commented Feb 3, 2026

cc @yangdanny97 @kinto0

@yangdanny97
Contributor

I'm not sure this is the code path that's hit in the originally linked issue; my profiling does not pick up `solver.completions()`.

yangdanny97 self-assigned this on Feb 4, 2026
Contributor

yangdanny97 left a comment


I think we should decouple the types part from the docstrings part.

I think not fetching the docstring eagerly when we have a ton of completions makes sense, maybe with an even lower cap than 200 (say, 50)

However, I don't think getting completions, then potentially getting completions again with include_types=true makes sense.

For the vast majority of completions, we're doing essentially double the amount of work, and based on my understanding of the code, getting the type is not that much extra work. Do you have profiling data that suggests otherwise?
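The decoupling suggested in this review could be expressed as two independent gates computed in a single pass, rather than re-running completions with `include_types=true`. A sketch with illustrative names, where the 50-item docstring cap is the reviewer's suggested value, not a real constant in the codebase:

```rust
// Docstrings are the expensive part; gate them with a low cap
// (50 per the review, instead of the PR's 200).
const DOCSTRING_CAP: usize = 50;

// Returns (include_types, include_docstrings) for a completion request.
// Types stay on unconditionally, since per the review they add little
// extra work; only docstring resolution is deferred for large sets.
fn metadata_flags(candidate_count: usize) -> (bool, bool) {
    let include_types = true;
    let include_docstrings = candidate_count <= DOCSTRING_CAP;
    (include_types, include_docstrings)
}
```

Computing both flags up front means the candidate set is walked once, with each item's metadata filled in (or skipped) as it is produced.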

@kinto0
Contributor

kinto0 commented Feb 10, 2026

> I think we should decouple the types part from the docstrings part.
>
> I think not fetching the docstring eagerly when we have a ton of completions makes sense, maybe with an even lower cap than 200 (say, 50)
>
> However, I don't think getting completions, then potentially getting completions again with include_types=true makes sense.
>
> For the vast majority of completions, we're doing essentially double the amount of work, and based on my understanding of the code getting the type is not really that much extra work. Do you have profiling data that suggests otherwise?

I wonder if we should add `resolveSupport` to completions instead of picking an arbitrary number.

It might also be worthwhile to performance-test this. This PR shows an example performance test that can be used to benchmark speed, but it will have to be modified for the completion in this GitHub issue.
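For context on the `resolveSupport` suggestion: since LSP 3.16, a client can advertise `textDocument.completion.completionItem.resolveSupport`, listing properties (e.g. `"documentation"`) it will lazily fetch via `completionItem/resolve`, so the server can omit them from the initial list without any size cap. A minimal sketch of the server-side gating check; the struct and function are illustrative, not Pyrefly's code:

```rust
// Mirrors the LSP 3.16 client capability
// textDocument.completion.completionItem.resolveSupport,
// whose `properties` lists fields the client can resolve lazily.
struct ResolveSupport {
    properties: Vec<String>,
}

// True if the server may omit `property` (e.g. "documentation") from the
// initial completion list and fill it in on completionItem/resolve.
fn can_defer(resolve_support: Option<&ResolveSupport>, property: &str) -> bool {
    resolve_support
        .map_or(false, |rs| rs.properties.iter().any(|p| p.as_str() == property))
}
```

With this approach the decision follows the client's declared capability rather than a tuned threshold, which is the trade-off kinto0 is raising.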



Successfully merging this pull request may close these issues.

Profile autocompletion speed
