Replies: 1 comment
Seems like adding this to the
I'm wondering if there's a way to create a "generic CPU" compilation for the BudgetOptimizer, i.e. to have PyTensor compile the objective and grad functions in a way that isn't tied to the build machine's CPU (pymc-marketing/pymc_marketing/mmm/budget_optimizer.py, lines 209 to 211 in a87f1fa).
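For context, here is a minimal sketch of what I mean by "generic CPU": passing PyTensor an explicit, portable `-march` target before anything gets compiled, so it doesn't tune the generated code to the build machine. This is only a sketch under assumptions; the `gcc__cxxflags` option and the `-march=x86-64` value are things I haven't verified end to end and should be checked against your own configuration.

```python
import os

# Must be set before pytensor is imported; an explicit -march should keep
# PyTensor from adding its own -march=native for the build machine
# (assumption, not verified on my side).
os.environ["PYTENSOR_FLAGS"] = "gcc__cxxflags=-march=x86-64"

import pytensor
import pytensor.tensor as pt

# Compile a toy graph just to check that compilation works with these flags
# and to populate the cache with (hopefully) portable object code.
x = pt.dvector("x")
f = pytensor.function([x], (x**2).sum())
print(f([1.0, 2.0, 3.0]))
```

The same flags could presumably be set via `ENV PYTENSOR_FLAGS=...` in the Dockerfile so that both the build-time snippet and the runtime service see them.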
I trained my model on a MacBook Pro M1 and created a Docker container with a precompilation snippet to reduce cold starts at runtime. Locally, this precompilation reduces the run time from 3 minutes to about 15 seconds, which works perfectly for my use case.
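To be concrete about the precompilation snippet: it is essentially a small warm-up script run during `docker build`, roughly like the sketch below. The `base_compiledir` path, the saved-model filename, and the `optimize_budget(...)` keyword arguments are placeholders and may not match the current pymc-marketing API exactly.

```python
import os

# Pin the PyTensor cache to a path that ships inside the image, so the modules
# compiled at build time are baked into the image (set before any import that
# pulls in pytensor).
os.environ["PYTENSOR_FLAGS"] = "base_compiledir=/app/.pytensor"

from pymc_marketing.mmm import MMM

# Load the already-fitted model that was saved earlier with mmm.save(...).
mmm = MMM.load("model.nc")

# One throwaway call whose only purpose is to trigger compilation of the
# objective and gradient functions; the arguments are placeholder values and
# should mirror whatever the serving code actually passes.
mmm.optimize_budget(budget=1_000, num_periods=8)
```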
However, when deploying to the cloud with Google Cloud Build and serving via Google Cloud Run, it seems the CPU architecture mismatch between the build environment (where the precompile snippet runs) and the Cloud Run instance forces PyTensor to recompile the model at runtime. This results in a 3-minute delay before the first optimized response is returned from the self.optimize_budget function, and unfortunately this recompilation is not persistent.

Here is what my Dockerfile looks like:
I hope someone with more experience with PyTensor than me has a workaround.
cc: @cetagostini @ricardoV94