The GuardAI project seeks to enhance the security of edge AI systems by addressing their critical vulnerabilities. Emphasis is placed on the high-stakes domains outlined in the EU's AI Act, where such systems, including drones, connected and autonomous vehicles, and network edge infrastructure, are becoming widespread and will play a pivotal role in critical decision-making. These applications rely heavily on real-time decision-making and the processing of sensitive data, rendering them susceptible to a range of security threats and adversarial attacks. The overarching objective of GuardAI is therefore to develop the next generation of resilient AI algorithms tailored to edge applications. Leveraging cutting-edge technological advances, the project will deliver innovative solutions that ensure the integrity, security, and resilience of these systems, fostering trust and accelerating the safe adoption of AI-driven technologies. Holistic contextual understanding will be integrated, enabling systems to adapt and make informed decisions in dynamic environments. Through a multidisciplinary, multifaceted approach that brings together researchers, industry experts, government agencies, and AI practitioners, and combines advanced threat analysis methods with robust AI algorithms, GuardAI aspires to create a paradigm shift in AI security. The development of standardized evaluation criteria paves the way toward certification frameworks, while real-world insights support a systematic, security-by-design approach. Embracing a holistic perspective, GuardAI also examines the ethical considerations of AI technology development to promote an ethically sound digital landscape. Overall, the project strives to raise the standard of secure AI systems through cutting-edge advances and a comprehensive, collaborative approach.