Should Automation Reuse Cached OpenAI Responses or Request Fresh Ones Each Time? #98
Closed
BavithiranOneMindIndia started this conversation in General
Replies: 1 comment
I haven't updated the documentation, but caching is already there, and I think it should be on by default. It is automatically invalidated whenever the UI changes; see 43ceacf for more details. The only change you need to make is to ensure the cache is saved after each successful test. For example, this is how it can be done with Pytest:

```python
from pytest import hookimpl


@hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # Persist cached responses only if the test that produced them passed;
    # otherwise drop them so a failing run cannot poison the cache.
    if report.when == "call" and "al" in item.funcargs:
        if report.passed:
            item.funcargs["al"].cache.save()
        else:
            item.funcargs["al"].cache.discard()
```
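To illustrate why the save/discard split in the hook above matters, here is a minimal sketch of a prompt-response cache with those semantics. The `ResponseCache` class and its file format are hypothetical, written for this example only, and are not the library's actual implementation: new responses are buffered in memory and only persisted by `save()`, so `discard()` after a failed test prevents responses that led to a bad result from being reused.

```python
import json
from pathlib import Path


class ResponseCache:
    """Sketch of a persisted prompt-response cache (hypothetical, for illustration).

    Responses recorded during a run are buffered in `pending` and only
    written to disk by save(); discard() drops them instead.
    """

    def __init__(self, path="cache.json"):
        self.path = Path(path)
        # Load previously persisted responses, if any.
        self.saved = json.loads(self.path.read_text()) if self.path.exists() else {}
        self.pending = {}

    def get(self, prompt):
        # Prefer a response recorded during this run, then the persisted one.
        return self.pending.get(prompt) or self.saved.get(prompt)

    def put(self, prompt, response):
        self.pending[prompt] = response

    def save(self):
        # Promote this run's responses and persist everything.
        self.saved.update(self.pending)
        self.pending = {}
        self.path.write_text(json.dumps(self.saved))

    def discard(self):
        # Forget this run's responses without touching the persisted cache.
        self.pending = {}
```

In the pytest hook, `al.cache.save()` would correspond to `save()` on a passing test and `al.cache.discard()` to `discard()` on a failing one.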
Original question:
Once the automation is set up and runs repeatedly, should we cache OpenAI prompt responses or request fresh ones each time? Looking for input on the best trade-off between performance and accuracy.