Merged
Conversation
lutter requested changes on Jan 21, 2026
9eba1e8 to 719cc97
lutter approved these changes on Feb 18, 2026
lutter (Collaborator) left a comment
Nice! This will be very useful.
Some comments from Claude - not entirely convinced they are necessary; feel free to address or ignore before merging:
3. nested_current_include_count test is redundant
File: aggregation.rs
This test verifies the exact same thing as nested_current_include (6 total rows, 3 per token). It provides no additional signal. Consider replacing it with a more useful test — e.g., testing pagination (first/skip) with current: include, or testing what happens
when the current argument is omitted.
4. Missing test coverage for key scenarios
File: aggregation.rs, runner_tests.rs
- current argument omission: No test verifies that omitting current defaults to exclude behavior through the full GraphQL → store path. The unwrap_or_default() in prefetch.rs handles this, but it's not tested end-to-end.
- first/last with multiple data points in current bucket: Store tests have only 1 data point per token in the current bucket, so first == last trivially. The runner test checks sum only.
- Empty current bucket (all data rolled up, no new data): Not tested — does current: include gracefully return the same as exclude?
- Error paths: The NotSupported error for AttributeNames::All and the ParentLink rejection are untested.
5. max_num_rows() could theoretically overflow
File: relational_queries.rs
fn max_num_rows(&self) -> u32 {
    self.range.0.first.unwrap_or(EntityRange::FIRST) + self.range.0.skip
}
If both first and skip are near u32::MAX, this overflows. In practice the values are small (default first=100), but saturating_add would be defensive.
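A minimal sketch of the saturating variant suggested here, reusing only the fields visible in the snippet above (an illustration, not the merged code):

fn max_num_rows(&self) -> u32 {
    // saturating_add caps the result at u32::MAX instead of wrapping in
    // release builds (or panicking in debug builds) when first + skip
    // would overflow.
    self.range
        .0
        .first
        .unwrap_or(EntityRange::FIRST)
        .saturating_add(self.range.0.skip)
}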
out.push_sql(") union all (select ");
write_column_names(&wh.column_names, wh.table, Some("c."), out)?;
out.push_sql(" from (");
let mut current_sql_split = rollup.select_current_sql.split("--FILTERS;");
Collaborator
It might be good to make sure the split results in exactly 2 components (maybe as a debug_assert!). Or, select_current_sql could be stored pre-split as (String, String).
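A hedged sketch of the two options: select_current_sql and the --FILTERS; marker come from the diff above, while the helper function itself is hypothetical.

// Hypothetical helper: returns the SQL before and after the single --FILTERS; marker.
fn split_current_sql(select_current_sql: &str) -> (&str, &str) {
    // Option 1: keep the split at query time but assert the expected shape.
    debug_assert_eq!(
        select_current_sql.matches("--FILTERS;").count(),
        1,
        "select_current_sql must contain exactly one --FILTERS; marker"
    );
    // Option 2 (alternative): build this (String, String) pair once, when the
    // rollup is constructed, so query generation never has to re-split the SQL.
    select_current_sql
        .split_once("--FILTERS;")
        .expect("select_current_sql contains the --FILTERS; marker")
}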
This makes sure that, when the default order is applied, the column used in the ORDER BY is also included in the SELECT projection.
Add expected SQL constants and check_eqv assertions for select_current_sql in the existing rollup() test function, covering all 5 aggregation types: Stats (with dimensions), TotalStats (no dimensions), OpenClose (first/last), Lifetime (cumulative), and CountOnly (count-only).
Fix select_current_bucket to always include GROUP BY timestamp in the generated SQL, regardless of whether dimensions are present.
Added 4 test functions exercising the current aggregation bucket feature through the store's find method: current_include (verifies 6 rows with rolled-up + on-the-fly current bucket data), current_exclude (4 rolled-up rows only), current_include_with_filter (dimension filter on TOKEN1), and current_include_cumulative (totalValue cumulative aggregates). Added TestEnv helper methods (aggregation_query, find_aggregation) for constructing EntityQuery with explicit column selection and configurable AggregationCurrent. Adjusted TIMES[3] from minutes(120) to minutes(121) to ensure source data falls past the last_rollup boundary (max_agg_ts + interval + 1s).
…ueries

Extend the current aggregation bucket querying to work for nested aggregation fields accessed through parent entities (SingleWindow path).

Changes:
- Remove forced aggregation_current = None for nested queries in prefetch.rs
- Extend find_rollup to handle FilterCollection::SingleWindow
- Extend query_window_one_entity to UNION ALL current bucket data with windowed aggregation data, using the dimension/link column to map current bucket rows to parent IDs
Added 4 test functions for nested aggregation current bucket queries using EntityCollection::Window (Token parent, Stats_hour child):
- nested_current_include: 6 rows with aggregate value verification
- nested_current_exclude: 4 rolled-up rows only
- nested_current_include_empty_bucket: TOKEN3 with no data returns 0 rows
- nested_current_include_count: 3 rows per token

Extended SCHEMA with Token entity, changed Data.token and Stats.token from Bytes! to Token!, added insert_entities helper for multi-type entity insertion, and inserted TOKEN1/TOKEN2/TOKEN3 entities in test data.
Documentation does not cover the Parent-linked nested query limitation or the SELECT * constraint.
They were treated as doctests
719cc97 to e9077d1
This PR adds support for querying the current, partially filled aggregation bucket with GraphQL.
How does it work?
When current: include is specified in the GraphQL request for an aggregation, graph-node loads the most recent time series entities that do not yet form a complete bucket and calculates their aggregates on the fly. The results are then merged with the relevant complete buckets to produce one response list.
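For illustration only, a query of roughly this shape would opt into the partial bucket; the collection field name (stats_collection) and the selected fields are assumptions based on the tests mentioned above, not taken from a specific subgraph schema:

{
  stats_collection(interval: hour, current: include) {
    id
    timestamp
    totalValue
  }
}

As noted in the review comments above, omitting the current argument defaults to the exclude behavior, which returns only fully rolled-up buckets.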