TORCH_CHECK used within torch.compile does not throw legible errors #126691
cc @jgong5
It doesn't happen for every input. The following input throws a nice error:

```python
@torch.compile
def fn(x, y):
    b = torch.arange(x.size(0) - 1, device=x.device) + 3
    return x[b] + y[b]

x = torch.rand(1024, 128, device="cpu")
y = torch.rand(1024, 128, device="cpu")
fn(x, y)
```

Note that the indexing error in the OP is much more egregious than the one in this second example.
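For contrast, the same out-of-bounds indexing pattern in plain eager mode raises a legible `IndexError` that names the offending index and the dimension size. A minimal sketch (the smaller shapes here are illustrative, not taken from the issue):

```python
import torch

# Smaller shapes than the repro above, purely for illustration.
x = torch.rand(8, 4)

# Indices 3..9: the last few are out of bounds for dim 0 (size 8).
b = torch.arange(x.size(0) - 1) + 3

try:
    x[b]
except IndexError as e:
    # Eager mode reports which index overran which dimension.
    print(f"legible error: {e}")
```

The issue is that under `torch.compile` the equivalent failure can surface as an opaque abort instead of an exception like this one.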
How do I reproduce it? It cannot be reproduced on master.
Patch in that PR. There are quite a few issues in master when it comes to issuing device asserts. Note that in the repro in the OP we don't even generate a
I have compared the difference with
I will submit a PR for it. |
Submitted a PR here: #127868
🐛 Describe the bug
At best, you get a
At worst, you just get "terminate called recursively" with no mention of `c10::Error`.
To repro, you can trigger any out-of-bounds error. I got it with
when testing #114471 (may not repro on master).
Versions
#114471