When using model.evaluate(), the metric values displayed in the progress bar differ from the values returned by the method. There appears to be double averaging: the per-batch values are already averages, yet the progress bar seems to average those averages again.
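To make the suspected mechanism concrete, here is a toy sketch of how an "average of averages" diverges from the true mean. This is only an illustration of the class of bug; it does not claim to reproduce Keras's internal accumulation (and it does not yield the exact 3.4545 reported below):

```python
import numpy as np

# With x = 1..10, y = 0, and batch_size=1, each batch's MAE is simply 1..10.
batch_mae = np.arange(1, 11, dtype=float)

# Correct aggregation: one global mean over all samples.
print(batch_mae.mean())  # 5.5 -- matches the value evaluate() returns

# Hypothetical double averaging: take the running mean after each batch,
# then average those running means at the end.
running_means = np.cumsum(batch_mae) / np.arange(1, 11)
print(running_means.mean())  # 3.25 -- an average of averages, not the true mean
```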
Code to reproduce:
```python
import tensorflow as tf
import numpy as np

# Model that passes its input straight through to the output
model = tf.keras.Sequential([
    tf.keras.layers.Lambda(lambda x: x)  # Lambda layer to pass input directly to output
])

# Compile the model with MAE as the metric
model.compile(optimizer='adam', loss='mse', metrics=['mae'])

# Dummy data for evaluation
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
y = np.zeros_like(x)  # Dummy target values

results = model.evaluate(x, y, verbose=1, batch_size=1)
print("Evaluation results:", results)
```
Expected behavior:
The metric values shown in the progress bar should match the final returned results (or at least be clearly documented if this difference is intentional).
Issue:
- Progress bar shows loss: 17.8182, MAE: 3.4545
- Returned values show loss: 38.5, MAE: 5.5
- The correct values are the returned ones (38.5 and 5.5 respectively); they match the manual calculation shown below
- The progress bar appears to be averaging already-averaged batch values
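For reference, the manual calculation: with predictions equal to the inputs and all-zero targets, MSE = (1² + 2² + … + 10²) / 10 = 385 / 10 = 38.5 and MAE = (1 + 2 + … + 10) / 10 = 55 / 10 = 5.5, matching the returned values exactly.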
Environment:
TensorFlow 2.19
Additional notes:
This discrepancy can be confusing for users who rely on the progress bar metrics during evaluation.
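Until this is resolved, one possible cross-check (a sketch, not an official workaround) is to trust the values returned by evaluate() and, if needed, recompute the metric with a stand-alone stateful metric object, which accumulates over all samples and is unaffected by whatever re-averaging the progress bar display performs:

```python
import numpy as np
import tensorflow as tf

x = np.arange(1, 11, dtype=float)
y = np.zeros_like(x)

# A stateful metric accumulates error and sample count across all updates,
# so its result() is the true per-sample mean.
mae = tf.keras.metrics.MeanAbsoluteError()
mae.update_state(y, x)  # identity model: predictions == inputs
print(mae.result().numpy())  # 5.5, matching evaluate()'s returned value
```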
I have tested your code with the latest versions of Keras (3.9.2) and TensorFlow (2.19.0) in this gist, and I was able to reproduce the mismatch between the metric values shown in the progress bar and the values returned by the evaluate() method. However, when I tested with Keras 2.15.0 and TensorFlow 2.15.0 using this gist, the results were consistent between the progress bar and the final evaluation output. We will look into this and update you. Thanks!