[generative_ai] Missed regressions in the code samples because of targeted test runs #12990

Open
@Valeriy-Burlaka

Description

Describe the issue

We're starting to miss regressions in the generative_ai code samples because CI was migrated to targeted (per-feature-directory) test runs. When a feature directory has no modifications, its tests never run, so a regression there can live silently, potentially for a long time.

We need to run the entire test suite regularly to ensure all code samples remain functional.
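One way to catch these silently broken samples is a scheduled full-suite run that ignores which files changed. Below is a minimal sketch in Python, assuming feature directories sit under a common root and tests follow a *_test.py naming convention; both the layout and the helper names are assumptions for illustration, not the repo's confirmed setup:

```python
import subprocess
from pathlib import Path


def discover_feature_dirs(root: Path) -> list[Path]:
    """Return every directory under `root` that contains test files.

    Assumes the (hypothetical) convention that tests are named *_test.py.
    """
    return sorted(
        d for d in root.iterdir()
        if d.is_dir() and any(d.glob("*_test.py"))
    )


def run_full_suite(root: Path) -> dict[str, int]:
    """Run pytest in each feature directory and collect exit codes.

    Unlike a targeted run, this covers every directory on every
    invocation, so it would be suitable for a nightly scheduled job.
    """
    results = {}
    for feature_dir in discover_feature_dirs(root):
        proc = subprocess.run(["pytest", str(feature_dir)])
        results[feature_dir.name] = proc.returncode
    return results
```

A job like this could run on a cron schedule (e.g. nightly) in CI, with any nonzero exit code in the results opening or flagging an issue, so a broken sample surfaces within a day instead of waiting for the next edit to its directory.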

Examples

  1. function_calling/basic_example.py no longer works with the Gemini Flash model. I discovered this only when I updated a different sample in this directory (see the comment).
  2. Two OpenAI function-calling samples, function_calling/chat_function_calling_basic.py and function_calling/chat_function_calling_config.py, are also broken. Same story here: I discovered the issue only because I updated a different file in the same feature directory. See this comment.

Metadata

Labels

  - api: vertex-ai (Issues related to the Vertex AI API)
  - samples (Issues that are directly related to samples)
  - triage me (I really want to be triaged)
  - type: bug (Error or flaw in code with unintended results or allowing sub-optimal usage patterns)
