
Conversation

jemc
Contributor

@jemc jemc commented Jul 14, 2025

Prior to this commit, obtaining the job ID via the gh CLI would fail in a workflow whose job count exceeds 100. For example, in a workflow with 105 jobs, the last 5 jobs fail to obtain their IDs, because the single 100-item page didn't include them.

This PR fixes the issue by using the `--paginate` flag to list all of the jobs even when there are more than 100. It relies on the CLI's built-in pagination, which makes multiple requests under the hood when necessary while still surfacing the output JSON as a single list (not split by pages).

This allows for workflows with arbitrarily many jobs, at least up to the theoretical limit of how many JSON bytes we can pass around as a string inside our bash commands here.
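The failure mode and the fix can be sketched with a tiny simulation. Here `fetch_page` is a hypothetical stand-in for one page of the GitHub jobs API (it just emits job numbers), not the action's actual code; it only illustrates why a single 100-item page misses jobs beyond 100, and how following pages until an empty one recovers them all:

```shell
# Hypothetical stand-in for the jobs API: serves at most 100 job IDs per page
# out of a 105-job workflow.
fetch_page() {
  local page=$1 per_page=100 total=105
  local start=$(( (page - 1) * per_page + 1 ))
  local end=$(( page * per_page ))
  [ "$end" -gt "$total" ] && end=$total
  [ "$start" -gt "$total" ] && return 0   # past the last page: emit nothing
  seq "$start" "$end"
}

# Old behavior: one request, capped at 100 results -- the last 5 jobs are missing.
single=$(fetch_page 1 | grep -c .)

# Paginated behavior: keep fetching pages until an empty one, merging the results.
total=0
page=1
while batch=$(fetch_page "$page"); [ -n "$batch" ]; do
  total=$((total + $(printf '%s\n' "$batch" | grep -c .)))
  page=$((page + 1))
done

echo "single=$single total=$total"   # single=100 total=105
```

This loop-until-empty-page merge is essentially what `--paginate` does for you inside the CLI.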

@jemc jemc requested a review from rdhar as a code owner July 14, 2025 19:15
@jemc
Contributor Author

jemc commented Jul 14, 2025

Context: we have a repository where we manage all our GCP projects as separate, individual Terraform workspaces. We recently crossed the 100 threshold, reaching 101 projects. Without the fix in this PR, the last project consistently fails to get its job ID in a PR that touches all 101 projects.

@rdhar
Member

rdhar commented Jul 15, 2025

Over 100 jobs!? You've gone too far! 😄

Almost goes without saying, huge thanks for another significant contribution of yours to this project. Now you've got me thinking: why not replace all `per_page=100` with `--paginate` to account for edge cases at scale?

I've also considered using `--cache <duration>` (e.g., `60s` or `5m`), but I'm not entirely sure that has any effect in the context of GitHub Actions executing on ephemeral runners. Anyway, that's for another day.
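For reference, the kind of invocation being discussed might look like the sketch below. This is not the action's actual command; the jobs endpoint path, the `--jq` filter, and the use of the standard `GITHUB_REPOSITORY`/`GITHUB_RUN_ID` Actions environment variables are assumptions for illustration:

```shell
# Old shape: a single request, capped at 100 jobs per page.
gh api "repos/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}/jobs?per_page=100" \
  --jq '.jobs[].id'

# With --paginate, gh follows the pagination links and emits every page, so the
# jq filter sees all jobs. --cache (e.g. 5m) would additionally reuse a recent
# response, though its value on ephemeral runners is unclear, as noted above.
gh api --paginate --cache 5m \
  "repos/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}/jobs" \
  --jq '.jobs[].id'
```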

@rdhar rdhar added this pull request to the merge queue Jul 15, 2025
Merged via the queue into OP5dev:main with commit f1571df Jul 15, 2025
10 checks passed
@rdhar
Member

rdhar commented Jul 15, 2025

Happy to see this shipped with v13.6.0 (v13), where your contribution has been credited!

Please consider ⭐ this project, if you or your team find it useful.



Many, MANY thanks for sharing this improvement, and I'm all ears if you happen to have any thoughts/feedback, as it sounds like you're a heavy user of this project!
