Hi,
Thanks for the remarkable work.
I would like to know more about the operations in the defined loss. Since the mask has been normalized by `mask /= torch.mean(mask)`, should we use the sum operation `torch.sum(loss)` rather than the mean operation `torch.mean(loss)` when returning the loss?
```python
def masked_mse_torch(preds, labels, null_val=np.nan):
    labels[torch.abs(labels) < 1e-4] = 0
    if np.isnan(null_val):
        mask = ~torch.isnan(labels)
    else:
        mask = labels.ne(null_val)
    mask = mask.float()
    mask /= torch.mean(mask)
    mask = torch.where(torch.isnan(mask), torch.zeros_like(mask), mask)
    loss = torch.square(torch.sub(preds, labels))
    loss = loss * mask
    loss = torch.where(torch.isnan(loss), torch.zeros_like(loss), loss)
    return torch.mean(loss)
```
I am not sure whether my understanding is correct, due to my limited knowledge. If you could respond, that would be greatly appreciated.
(The snippet is from Bigscity-LibCity/libcity/model/loss.py, lines 75 to 87 at commit 38ff383.)
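To make my question concrete, here is a small numeric sketch (toy values of my own, not from the repository) of what the normalization does. If I work it through, `mask /= torch.mean(mask)` scales the mask by `num_total / num_valid`, so `torch.mean(loss)` over all entries already equals the average squared error over only the valid entries, whereas `torch.sum(loss)` would not:

```python
import torch

# Hypothetical toy tensors: one entry equals the null value (0.0 here)
labels = torch.tensor([1.0, 2.0, 0.0, 4.0])
preds = torch.tensor([1.5, 2.0, 9.0, 3.0])
null_val = 0.0

mask = labels.ne(null_val).float()   # [1, 1, 0, 1]
mask /= torch.mean(mask)             # mean is 3/4, so mask becomes [4/3, 4/3, 0, 4/3]
loss = torch.square(preds - labels) * mask

# torch.mean divides by all 4 elements; the 4/3 rescaling cancels that,
# leaving the average over the 3 valid entries.
valid = torch.square(preds - labels)[labels.ne(null_val)]
print(torch.mean(loss))   # tensor(0.4167) == mean over valid entries (5/12)
print(torch.mean(valid))  # tensor(0.4167)
print(torch.sum(loss))    # tensor(1.6667) -- not the per-element average
```

So if I read this correctly, `torch.mean(loss)` is the operation that yields the mean over non-null entries, and switching to `torch.sum(loss)` would instead scale with the number of valid entries. I may well be missing something, though.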