release: 0.1.0-alpha.23 #120

Open · wants to merge 8 commits into base `main`
3 changes: 2 additions & 1 deletion .github/workflows/ci.yml
@@ -17,6 +17,7 @@ jobs:
timeout-minutes: 10
name: lint
runs-on: ${{ github.repository == 'stainless-sdks/codex-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
steps:
- uses: actions/checkout@v4

@@ -35,7 +36,7 @@ jobs:
run: ./scripts/lint

upload:
if: github.repository == 'stainless-sdks/codex-python'
if: github.repository == 'stainless-sdks/codex-python' && (github.event_name == 'push' || github.event.pull_request.head.repo.fork)
timeout-minutes: 10
name: upload
permissions:
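The reworked `if:` expressions gate CI so jobs run only for pushes or for pull requests coming from forks, and the `upload` job additionally runs only in the Stainless mirror repository. A rough Python model of the combined `upload` condition (the function and argument names are illustrative, not part of the workflow):

```python
def should_run_upload(repository: str, event_name: str, head_repo_is_fork: bool) -> bool:
    """Mirror of the GitHub Actions expression:
    github.repository == 'stainless-sdks/codex-python'
      && (github.event_name == 'push' || github.event.pull_request.head.repo.fork)
    """
    return repository == "stainless-sdks/codex-python" and (
        event_name == "push" or head_repo_is_fork
    )

# Internal (non-fork) pull requests in the mirror repo no longer trigger the job:
print(should_run_upload("stainless-sdks/codex-python", "pull_request", False))  # False
print(should_run_upload("stainless-sdks/codex-python", "push", False))          # True
```

The net effect is that duplicate runs for internal PR branches (which also fire a `push` event) are avoided, while fork PRs, which do not fire a `push` in the base repo, still get checked.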
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.1.0-alpha.22"
".": "0.1.0-alpha.23"
}
2 changes: 1 addition & 1 deletion .stats.yml
@@ -1,3 +1,3 @@
configured_endpoints: 65
openapi_spec_hash: 80696dc202de8bacc0e43506d7c210b0
openapi_spec_hash: f63d4542b4bd1530ced013eb686cab99
config_hash: 14b2643a0ec60cf326dfed00939644ff
21 changes: 21 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,26 @@
# Changelog

## 0.1.0-alpha.23 (2025-06-30)

Full Changelog: [v0.1.0-alpha.22...v0.1.0-alpha.23](https://github.com/cleanlab/codex-python/compare/v0.1.0-alpha.22...v0.1.0-alpha.23)

### Features

* **api:** api update ([31096f4](https://github.com/cleanlab/codex-python/commit/31096f4820a7bfdd204b0a2d1d84ab1e36e32d0c))
* **api:** api update ([be06884](https://github.com/cleanlab/codex-python/commit/be06884d321ca5009c9d82346c1b74c7429f82fa))
* **api:** api update ([41b210d](https://github.com/cleanlab/codex-python/commit/41b210dc69c2b9c45eeab01a0afac6a4563d41f2))


### Bug Fixes

* **ci:** correct conditional ([45d3bc0](https://github.com/cleanlab/codex-python/commit/45d3bc05ab56d3e67d036ce84b2c9a1f2d8cfd69))
* **ci:** release-doctor — report correct token name ([1a5e444](https://github.com/cleanlab/codex-python/commit/1a5e444226c829392181d98bc06f8cfb8bf13bd9))


### Chores

* **ci:** only run for pushes and fork pull requests ([6b590bd](https://github.com/cleanlab/codex-python/commit/6b590bd454e939b8453d95c239ee85be1a326909))

## 0.1.0-alpha.22 (2025-06-24)

Full Changelog: [v0.1.0-alpha.21...v0.1.0-alpha.22](https://github.com/cleanlab/codex-python/compare/v0.1.0-alpha.21...v0.1.0-alpha.22)
2 changes: 1 addition & 1 deletion bin/check-release-environment
Expand Up @@ -3,7 +3,7 @@
errors=()

if [ -z "${PYPI_TOKEN}" ]; then
errors+=("The CODEX_PYPI_TOKEN secret has not been set. Please set it in either this repository's secrets or your organization secrets.")
errors+=("The PYPI_TOKEN secret has not been set. Please set it in either this repository's secrets or your organization secrets.")
fi

lenErrors=${#errors[@]}
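The fixed error message now names the secret the script actually reads (`PYPI_TOKEN` rather than `CODEX_PYPI_TOKEN`), matching the "report correct token name" entry in the changelog. The check pattern, sketched in Python purely for illustration of what the bash script does:

```python
import os

def check_release_environment() -> list:
    """Collect human-readable errors for missing release secrets,
    mirroring the accumulate-then-report style of bin/check-release-environment."""
    errors = []
    if not os.environ.get("PYPI_TOKEN"):
        errors.append(
            "The PYPI_TOKEN secret has not been set. Please set it in either "
            "this repository's secrets or your organization secrets."
        )
    return errors

os.environ.pop("PYPI_TOKEN", None)
print(check_release_environment())  # one error, and it names PYPI_TOKEN
os.environ["PYPI_TOKEN"] = "dummy"
print(check_release_environment())  # []
```

Reporting the wrong secret name is exactly the kind of bug this style invites: the message string and the variable lookup are maintained separately, so they can drift apart.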
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "codex-sdk"
version = "0.1.0-alpha.22"
version = "0.1.0-alpha.23"
description = "Internal SDK used within cleanlab-codex package. Refer to https://pypi.org/project/cleanlab-codex/ instead."
dynamic = ["readme"]
license = "MIT"
2 changes: 1 addition & 1 deletion src/codex/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "codex"
__version__ = "0.1.0-alpha.22" # x-release-please-version
__version__ = "0.1.0-alpha.23" # x-release-please-version
20 changes: 10 additions & 10 deletions src/codex/resources/projects/projects.py
@@ -212,10 +212,10 @@ def update(
self,
project_id: str,
*,
config: project_update_params.Config,
name: str,
auto_clustering_enabled: bool | NotGiven = NOT_GIVEN,
auto_clustering_enabled: Optional[bool] | NotGiven = NOT_GIVEN,
config: Optional[project_update_params.Config] | NotGiven = NOT_GIVEN,
description: Optional[str] | NotGiven = NOT_GIVEN,
name: Optional[str] | NotGiven = NOT_GIVEN,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
# The extra values given here take precedence over values defined on the client or passed to this method.
extra_headers: Headers | None = None,
@@ -241,10 +241,10 @@ def update(
f"/api/projects/{project_id}",
body=maybe_transform(
{
"config": config,
"name": name,
"auto_clustering_enabled": auto_clustering_enabled,
"config": config,
"description": description,
"name": name,
},
project_update_params.ProjectUpdateParams,
),
@@ -820,10 +820,10 @@ async def update(
self,
project_id: str,
*,
config: project_update_params.Config,
name: str,
auto_clustering_enabled: bool | NotGiven = NOT_GIVEN,
auto_clustering_enabled: Optional[bool] | NotGiven = NOT_GIVEN,
config: Optional[project_update_params.Config] | NotGiven = NOT_GIVEN,
description: Optional[str] | NotGiven = NOT_GIVEN,
name: Optional[str] | NotGiven = NOT_GIVEN,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
# The extra values given here take precedence over values defined on the client or passed to this method.
extra_headers: Headers | None = None,
Expand All @@ -849,10 +849,10 @@ async def update(
f"/api/projects/{project_id}",
body=await async_maybe_transform(
{
"config": config,
"name": name,
"auto_clustering_enabled": auto_clustering_enabled,
"config": config,
"description": description,
"name": name,
},
project_update_params.ProjectUpdateParams,
),
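This change makes every `projects.update` parameter optional: each field is now typed `Optional[...] | NotGiven = NOT_GIVEN`, so callers can update just the fields they care about. The `NOT_GIVEN` sentinel lets the SDK distinguish "omitted" from an explicit `None`. A minimal, self-contained sketch of that pattern (not the SDK's actual internals, which live in its transform helpers):

```python
class NotGiven:
    """Sentinel distinguishing 'argument omitted' from an explicit None."""
    def __repr__(self) -> str:
        return "NOT_GIVEN"

NOT_GIVEN = NotGiven()

def build_update_body(**params):
    """Drop NOT_GIVEN entries; keep explicit None (which could clear a field)."""
    return {k: v for k, v in params.items() if not isinstance(v, NotGiven)}

body = build_update_body(
    name="my-project",
    description=NOT_GIVEN,          # omitted -> not sent at all
    auto_clustering_enabled=None,   # explicit None -> sent as null
)
print(body)  # {'name': 'my-project', 'auto_clustering_enabled': None}
```

With a plain `= None` default, the server could not tell "leave `description` alone" apart from "set `description` to null"; the sentinel preserves that distinction for partial updates.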
88 changes: 88 additions & 0 deletions src/codex/resources/projects/query_logs.py
@@ -92,9 +92,12 @@ def list(
created_at_end: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
created_at_start: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
custom_metadata: Optional[str] | NotGiven = NOT_GIVEN,
failed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
guardrailed: Optional[bool] | NotGiven = NOT_GIVEN,
limit: int | NotGiven = NOT_GIVEN,
offset: int | NotGiven = NOT_GIVEN,
order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
passed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
primary_eval_issue: Optional[
List[Literal["hallucination", "search_failure", "unhelpful", "difficult_query", "unsupported"]]
]
@@ -118,6 +121,12 @@

custom_metadata: Filter by custom metadata as JSON string: {"key1": "value1", "key2": "value2"}

failed_evals: Filter by evals that failed

guardrailed: Filter by guardrailed status

passed_evals: Filter by evals that passed

primary_eval_issue: Filter logs that have ANY of these primary evaluation issues (OR operation)

was_cache_hit: Filter by cache hit status
@@ -144,9 +153,12 @@
"created_at_end": created_at_end,
"created_at_start": created_at_start,
"custom_metadata": custom_metadata,
"failed_evals": failed_evals,
"guardrailed": guardrailed,
"limit": limit,
"offset": offset,
"order": order,
"passed_evals": passed_evals,
"primary_eval_issue": primary_eval_issue,
"sort": sort,
"was_cache_hit": was_cache_hit,
@@ -164,9 +176,13 @@ def list_by_group(
created_at_end: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
created_at_start: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
custom_metadata: Optional[str] | NotGiven = NOT_GIVEN,
failed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
guardrailed: Optional[bool] | NotGiven = NOT_GIVEN,
limit: int | NotGiven = NOT_GIVEN,
needs_review: Optional[bool] | NotGiven = NOT_GIVEN,
offset: int | NotGiven = NOT_GIVEN,
order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
passed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
primary_eval_issue: Optional[
List[Literal["hallucination", "search_failure", "unhelpful", "difficult_query", "unsupported"]]
]
@@ -191,6 +207,14 @@ def list_by_group(

custom_metadata: Filter by custom metadata as JSON string: {"key1": "value1", "key2": "value2"}

failed_evals: Filter by evals that failed

guardrailed: Filter by guardrailed status

needs_review: Filter logs that need review

passed_evals: Filter by evals that passed

primary_eval_issue: Filter logs that have ANY of these primary evaluation issues (OR operation)

remediation_ids: List of groups to list child logs for
@@ -219,9 +243,13 @@ def list_by_group(
"created_at_end": created_at_end,
"created_at_start": created_at_start,
"custom_metadata": custom_metadata,
"failed_evals": failed_evals,
"guardrailed": guardrailed,
"limit": limit,
"needs_review": needs_review,
"offset": offset,
"order": order,
"passed_evals": passed_evals,
"primary_eval_issue": primary_eval_issue,
"remediation_ids": remediation_ids,
"sort": sort,
@@ -240,9 +268,13 @@ def list_groups(
created_at_end: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
created_at_start: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
custom_metadata: Optional[str] | NotGiven = NOT_GIVEN,
failed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
guardrailed: Optional[bool] | NotGiven = NOT_GIVEN,
limit: int | NotGiven = NOT_GIVEN,
needs_review: Optional[bool] | NotGiven = NOT_GIVEN,
offset: int | NotGiven = NOT_GIVEN,
order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
passed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
primary_eval_issue: Optional[
List[Literal["hallucination", "search_failure", "unhelpful", "difficult_query", "unsupported"]]
]
@@ -267,6 +299,14 @@ def list_groups(

custom_metadata: Filter by custom metadata as JSON string: {"key1": "value1", "key2": "value2"}

failed_evals: Filter by evals that failed

guardrailed: Filter by guardrailed status

needs_review: Filter log groups that need review

passed_evals: Filter by evals that passed

primary_eval_issue: Filter logs that have ANY of these primary evaluation issues (OR operation)

was_cache_hit: Filter by cache hit status
@@ -293,9 +333,13 @@ def list_groups(
"created_at_end": created_at_end,
"created_at_start": created_at_start,
"custom_metadata": custom_metadata,
"failed_evals": failed_evals,
"guardrailed": guardrailed,
"limit": limit,
"needs_review": needs_review,
"offset": offset,
"order": order,
"passed_evals": passed_evals,
"primary_eval_issue": primary_eval_issue,
"sort": sort,
"was_cache_hit": was_cache_hit,
@@ -406,9 +450,12 @@ async def list(
created_at_end: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
created_at_start: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
custom_metadata: Optional[str] | NotGiven = NOT_GIVEN,
failed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
guardrailed: Optional[bool] | NotGiven = NOT_GIVEN,
limit: int | NotGiven = NOT_GIVEN,
offset: int | NotGiven = NOT_GIVEN,
order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
passed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
primary_eval_issue: Optional[
List[Literal["hallucination", "search_failure", "unhelpful", "difficult_query", "unsupported"]]
]
@@ -432,6 +479,12 @@

custom_metadata: Filter by custom metadata as JSON string: {"key1": "value1", "key2": "value2"}

failed_evals: Filter by evals that failed

guardrailed: Filter by guardrailed status

passed_evals: Filter by evals that passed

primary_eval_issue: Filter logs that have ANY of these primary evaluation issues (OR operation)

was_cache_hit: Filter by cache hit status
@@ -458,9 +511,12 @@
"created_at_end": created_at_end,
"created_at_start": created_at_start,
"custom_metadata": custom_metadata,
"failed_evals": failed_evals,
"guardrailed": guardrailed,
"limit": limit,
"offset": offset,
"order": order,
"passed_evals": passed_evals,
"primary_eval_issue": primary_eval_issue,
"sort": sort,
"was_cache_hit": was_cache_hit,
@@ -478,9 +534,13 @@ async def list_by_group(
created_at_end: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
created_at_start: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
custom_metadata: Optional[str] | NotGiven = NOT_GIVEN,
failed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
guardrailed: Optional[bool] | NotGiven = NOT_GIVEN,
limit: int | NotGiven = NOT_GIVEN,
needs_review: Optional[bool] | NotGiven = NOT_GIVEN,
offset: int | NotGiven = NOT_GIVEN,
order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
passed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
primary_eval_issue: Optional[
List[Literal["hallucination", "search_failure", "unhelpful", "difficult_query", "unsupported"]]
]
@@ -505,6 +565,14 @@ async def list_by_group(

custom_metadata: Filter by custom metadata as JSON string: {"key1": "value1", "key2": "value2"}

failed_evals: Filter by evals that failed

guardrailed: Filter by guardrailed status

needs_review: Filter logs that need review

passed_evals: Filter by evals that passed

primary_eval_issue: Filter logs that have ANY of these primary evaluation issues (OR operation)

remediation_ids: List of groups to list child logs for
@@ -533,9 +601,13 @@ async def list_by_group(
"created_at_end": created_at_end,
"created_at_start": created_at_start,
"custom_metadata": custom_metadata,
"failed_evals": failed_evals,
"guardrailed": guardrailed,
"limit": limit,
"needs_review": needs_review,
"offset": offset,
"order": order,
"passed_evals": passed_evals,
"primary_eval_issue": primary_eval_issue,
"remediation_ids": remediation_ids,
"sort": sort,
@@ -554,9 +626,13 @@ async def list_groups(
created_at_end: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
created_at_start: Union[str, datetime, None] | NotGiven = NOT_GIVEN,
custom_metadata: Optional[str] | NotGiven = NOT_GIVEN,
failed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
guardrailed: Optional[bool] | NotGiven = NOT_GIVEN,
limit: int | NotGiven = NOT_GIVEN,
needs_review: Optional[bool] | NotGiven = NOT_GIVEN,
offset: int | NotGiven = NOT_GIVEN,
order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
passed_evals: Optional[List[str]] | NotGiven = NOT_GIVEN,
primary_eval_issue: Optional[
List[Literal["hallucination", "search_failure", "unhelpful", "difficult_query", "unsupported"]]
]
@@ -581,6 +657,14 @@ async def list_groups(

custom_metadata: Filter by custom metadata as JSON string: {"key1": "value1", "key2": "value2"}

failed_evals: Filter by evals that failed

guardrailed: Filter by guardrailed status

needs_review: Filter log groups that need review

passed_evals: Filter by evals that passed

primary_eval_issue: Filter logs that have ANY of these primary evaluation issues (OR operation)

was_cache_hit: Filter by cache hit status
@@ -607,9 +691,13 @@ async def list_groups(
"created_at_end": created_at_end,
"created_at_start": created_at_start,
"custom_metadata": custom_metadata,
"failed_evals": failed_evals,
"guardrailed": guardrailed,
"limit": limit,
"needs_review": needs_review,
"offset": offset,
"order": order,
"passed_evals": passed_evals,
"primary_eval_issue": primary_eval_issue,
"sort": sort,
"was_cache_hit": was_cache_hit,
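The 88 added lines give every `list*` method (sync and async) new filters: `failed_evals`, `passed_evals`, and `guardrailed`, plus `needs_review` on the grouped listings. The SDK just forwards these as query parameters, so the matching semantics are decided server-side. A toy in-memory model of how such filters might combine (the exact semantics, e.g. whether the eval lists are matched ANY-of or ALL-of, are an assumption here, not documented in this diff):

```python
from typing import List, Optional

def filter_logs(
    logs: List[dict],
    failed_evals: Optional[List[str]] = None,
    passed_evals: Optional[List[str]] = None,
    guardrailed: Optional[bool] = None,
) -> List[dict]:
    """Keep logs matching every supplied filter; None means 'no filter'.
    Assumes the eval-list filters require all named evals to have failed/passed."""
    out = []
    for log in logs:
        if guardrailed is not None and log["guardrailed"] != guardrailed:
            continue
        if failed_evals and not set(failed_evals) <= set(log["failed_evals"]):
            continue
        if passed_evals and not set(passed_evals) <= set(log["passed_evals"]):
            continue
        out.append(log)
    return out

logs = [
    {"id": 1, "guardrailed": True,  "failed_evals": ["hallucination"], "passed_evals": []},
    {"id": 2, "guardrailed": False, "failed_evals": [],                "passed_evals": ["hallucination"]},
]
print([l["id"] for l in filter_logs(logs, guardrailed=True)])                 # [1]
print([l["id"] for l in filter_logs(logs, failed_evals=["hallucination"])])  # [1]
```

Note the contrast with `primary_eval_issue`, whose docstring explicitly states OR semantics ("ANY of these primary evaluation issues"); the new eval filters carry no such statement, which is why the combination rule above is hedged.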