Hello everyone,
The metrics printed in the logs are only validation metrics, not training metrics. The training metrics are not aggregated, and one should only monitor the validation metric. You can find the per-batch training loss in the results folder, in the _train.txt file.

Note that overfitting in modern neural networks does not follow the classical trend, so the training loss is not a good indicator of it. One should instead check whether the validation error increases or plateaus, which is handled automatically by the patience mechanism.

Finally, we already plot the validation curves at the end of training; they are stored in the results folder too.
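For intuition, here is a minimal sketch of what patience-based early stopping typically looks like. This is illustrative only, not this project's actual implementation; all function and parameter names below are hypothetical:

```python
import math

def train_with_patience(train_one_epoch, validate, max_epochs=100, patience=10):
    """Stop training once the validation error has not improved for
    `patience` consecutive epochs (patience-based early stopping).
    Hypothetical sketch, not this project's API."""
    best_val = math.inf
    epochs_since_improvement = 0
    for epoch in range(1, max_epochs + 1):
        train_one_epoch()           # run one epoch of training
        val_error = validate()      # validation metric (lower is better)
        if val_error < best_val:    # improvement: reset the counter
            best_val = val_error
            epochs_since_improvement = 0
        else:                       # plateau or increase: count it
            epochs_since_improvement += 1
            if epochs_since_improvement >= patience:
                print(f"Early stop at epoch {epoch}: "
                      f"no improvement in {patience} epochs (best={best_val:.4f})")
                break
    return best_val

# Toy usage with a fake validation curve that plateaus after a few epochs.
if __name__ == "__main__":
    errors = iter([0.9, 0.7, 0.6, 0.55, 0.56, 0.57, 0.56, 0.58, 0.57, 0.59])
    train_with_patience(train_one_epoch=lambda: None,
                        validate=lambda: next(errors),
                        max_epochs=10, patience=5)
```

The key point is that the counter only resets when the validation error actually improves, so both an increasing and a plateauing validation curve trigger the stop once the patience budget is exhausted.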