Hard Mining for Robust Classification
Published:
In this blog post we explore whether training mostly on the hardest examples allows us to fit robust networks on CIFAR-10 faster.
Published:
In our recent CVPR paper, “1-Lipschitz Layers Compared: Memory, Speed and Certifiable Robustness”, we compared different methods of constructing 1-Lipschitz convolutions. In this blog post we give some additional background on why 1-Lipschitz methods are an interesting research topic and discuss some results from the paper.
Published:
In this blog post we investigate whether Large Multimodal Models (LMMs) are robust to small perturbations of input images. We find that this is not the case: we manage to fool Phi-4-multimodal-instruct with small changes to its inputs.