Algorithms for sensor processing and autonomous navigation of robotic systems in complex, unstructured environments are attracting growing research interest with the emergence of more accurate and cheaper sensors, faster embedded computing, large datasets, and machine learning (ML) approaches. ML is widely used both for perception and for end-to-end control in robust autonomy. Such systems cannot be trained in all possible environments and conditions. They are also vulnerable to adversarial attacks, including training-time attacks (e.g., backdoors, in which adversaries embed triggers in the training data to cause incorrect classifications), data poisoning of online/lifelong learning systems (e.g., adversarial modifications of the environment that cause the system to learn spurious correlations), and inference-time adversarial perturbations. Guaranteeing or certifying the performance of ML systems under these threats remains challenging.

In this project, we provide both white-box and black-box defenses against backdoor attacks, and we consider training-time methods for certifying robustness. In particular, we study Lipschitz networks and their application to tasks such as perceptual similarity scoring, which plays a significant role in computer vision for capturing the underlying semantics of images, as well as in robotics applications such as simultaneous localization and mapping (SLAM) and semantic SLAM. We will also consider concrete instances of attacks and defenses in robotic applications using various sensors, such as cameras and LIDAR. Furthermore, to increase the robustness of control and robotic systems, we will develop control systems based on control barrier functions and robust nonlinear adaptive control for provably safe dynamic operation. The project will include experimental implementations on various platforms (e.g., manipulators, autonomous vehicles, quadrupeds).
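To illustrate why Lipschitz networks support certified robustness, the following is a minimal sketch (not the project's actual architecture) of a small MLP whose weight matrices are divided by their largest singular value. Each linear layer is then 1-Lipschitz, and composing with the 1-Lipschitz ReLU keeps the whole network 1-Lipschitz, so any output change is provably bounded by the input change. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_normalize(W):
    # Divide by the operator 2-norm (largest singular value) so that
    # ||Wx - Wy|| <= ||x - y|| for all x, y.
    return W / np.linalg.norm(W, 2)

class LipschitzMLP:
    # Hypothetical 1-Lipschitz network: every layer is spectrally
    # normalized, and ReLU is itself 1-Lipschitz.
    def __init__(self, dims):
        self.weights = [spectral_normalize(rng.standard_normal((m, n)))
                        for n, m in zip(dims[:-1], dims[1:])]

    def __call__(self, x):
        for W in self.weights[:-1]:
            x = np.maximum(W @ x, 0.0)  # ReLU
        return self.weights[-1] @ x

net = LipschitzMLP([8, 16, 16, 1])
x = rng.standard_normal(8)
y = x + 0.1 * rng.standard_normal(8)
# Certified bound: |f(x) - f(y)| <= ||x - y||_2
assert abs(net(x) - net(y)) <= np.linalg.norm(x - y) + 1e-9
```

Such a bound is what makes a perceptual-similarity score based on a Lipschitz network certifiable: a small (possibly adversarial) input perturbation can shift the score only by a provably small amount.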
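As a toy illustration of the control-barrier-function idea (a sketch under simplifying assumptions, not the controllers to be developed), consider a 1-D single integrator x' = u with safe set {x >= 0}, i.e., barrier h(x) = x. The CBF condition h' + alpha*h >= 0 reduces to u >= -alpha*x, so the safety filter that minimally modifies a desired control has a closed form; the gain `ALPHA` is an assumed value.

```python
# Hypothetical CBF safety filter for x' = u with h(x) = x (safe: x >= 0).
ALPHA = 2.0  # class-K gain (assumed)

def cbf_filter(x, u_des, alpha=ALPHA):
    # Enforce the CBF condition u + alpha*x >= 0 with the smallest
    # possible change to the desired control u_des.
    return max(u_des, -alpha * x)

# Simulate: the nominal controller constantly pushes toward the unsafe
# region, but the filtered closed loop never leaves the safe set.
x, dt = 1.0, 0.01
for _ in range(1000):
    u = cbf_filter(x, u_des=-5.0)
    x += dt * u
assert x >= 0.0  # state remains in the safe set
```

In higher dimensions the same condition becomes a linear constraint on u and the filter is typically solved online as a small quadratic program; the scalar case above admits the closed-form clamp.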