Dear KernelAgent team,
Thanks for sharing your work with the community. After carefully examining some results from the artifact repository, I found that the solution to 31_VisionAttention in KernelBench Level 3 appears to be wrong.
https://github.com/Laurawly/kernelagent-artifacts/blob/main/L3/31_VisionAttention/final_kernel.py
The solution does not implement the attention mechanism at all. It only implements LayerNorm, yet it passes the (flawed) test anyway.
I also tried generating a solution with the full Fuser pipeline, and it gave me a false success as well: there is no Triton-based attention in the solution, because the test allows the agent to fall back on torch.bmm() and Tensor.matmul().
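To illustrate the failure mode, here is a minimal, hypothetical sketch (not the artifact's actual code; the function name and signature are my own) of how such a "solution" can pass an output-comparison test with no Triton attention kernel at all, since the attention math is assembled entirely from stock PyTorch ops:

```python
import torch
import torch.nn.functional as F

def attention_via_torch_ops(x, qkv_w, qkv_b, out_w, out_b, num_heads):
    # Hypothetical sketch: multi-head self-attention built purely from
    # stock PyTorch ops (no Triton anywhere). A test that only compares
    # outputs cannot distinguish this from a real custom kernel.
    seq, batch, embed = x.shape
    head_dim = embed // num_heads
    qkv = F.linear(x, qkv_w, qkv_b)            # fused QKV projection
    q, k, v = qkv.chunk(3, dim=-1)
    # (seq, batch, embed) -> (batch*heads, seq, head_dim) for bmm
    def to_heads(t):
        return t.reshape(seq, batch * num_heads, head_dim).transpose(0, 1)
    q, k, v = to_heads(q), to_heads(k), to_heads(v)
    scores = torch.bmm(q, k.transpose(1, 2)) / head_dim ** 0.5
    ctx = torch.bmm(torch.softmax(scores, dim=-1), v)
    ctx = ctx.transpose(0, 1).reshape(seq, batch, embed)
    return F.linear(ctx, out_w, out_b)         # output projection

# Sanity check against the PyTorch reference the test compares with:
mha = torch.nn.MultiheadAttention(embed_dim=64, num_heads=4)
x = torch.randn(16, 2, 64)                     # (seq, batch, embed)
ref, _ = mha(x, x, x, need_weights=False)
out = attention_via_torch_ops(x, mha.in_proj_weight, mha.in_proj_bias,
                              mha.out_proj.weight, mha.out_proj.bias, 4)
print(torch.allclose(ref, out, atol=1e-5))     # True
```

As long as the harness only checks numerical agreement with the reference model and does not restrict which ops the generated kernel may call, code like this counts as a pass.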
Would you release all of the artifacts so we can better understand the situation? Thanks.