Description

In the past decades, computer vision has developed rapidly and unexpectedly, and tremendous applications such as face recognition, image recognition, and object detection have demonstrated their power to make life more convenient for humans. Deep neural networks (DNNs) play an essential role in this trend, and related applications are being deployed in safety-critical fields such as autonomous driving and authentication. However, many adversarial attacks can cause severe degradation of model performance, so deploying robust and reliable DNNs has become a crucial and necessary step for such applications. In this work, we introduce SmoothBlock, a novel regularization method that improves model robustness against adversarial attacks. It can be used directly as a defense mechanism in the inference phase to protect a pre-trained model. In addition, SmoothBlock can be applied during both standard training and adversarial training to further improve robustness against various adversarial attacks. Furthermore, we combine SmoothBlock with a self-ensemble method to improve the robustness of the overall system. We conduct extensive experiments and detailed analysis on CIFAR-10 with a ResNet-20 model. The results show that our method significantly improves model robustness against FGSM, PGD, and C&W L2 attacks in white-box scenarios.
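For context, FGSM (one of the attacks evaluated above) perturbs an input along the sign of the loss gradient to increase the loss within an L-infinity budget. The sketch below is a minimal, illustrative NumPy example on a toy logistic-regression model; the weights, input, and epsilon are hypothetical values, not from the paper, and this is not the authors' SmoothBlock method.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: step the input along the sign of the
    loss gradient, staying inside an L-infinity ball of radius epsilon."""
    return x + epsilon * np.sign(grad)

# Toy logistic-regression model (illustrative values only).
w = np.array([0.5, -1.2, 0.8])   # fixed weights
x = np.array([1.0, 0.3, -0.6])   # clean input
y = 1.0                          # true label

def loss_and_grad(x):
    z = w @ x
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid probability of class 1
    loss = -np.log(p) if y == 1.0 else -np.log(1.0 - p)
    grad_x = (p - y) * w                  # d(loss)/d(input) for logistic loss
    return loss, grad_x

clean_loss, g = loss_and_grad(x)
x_adv = fgsm_perturb(x, g, epsilon=0.1)
adv_loss, _ = loss_and_grad(x_adv)
assert adv_loss > clean_loss  # the single gradient-sign step raises the loss
```

PGD can be viewed as iterating this step with a projection back onto the epsilon-ball, which is why it is the stronger of the two attacks.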