In recent years, the digital transformation of enterprises of all sizes has made AI-based solutions vital to mission-critical operations. AI-based systems are used in virtually every technical field, including smart cities, self-driving cars, autonomous ships, 5G/6G, and next-generation intrusion detection systems. The industry's widespread adoption of AI exposes early adopters to undiscovered vulnerabilities such as data corruption, model theft, and adversarial samples, since most organisations lack the tactical and strategic capabilities to defend against, identify, and respond to attacks on their AI-based systems. Adversaries now exploit this new attack surface, targeting Machine Learning (ML) and Deep Learning (DL) systems to impair their functionality and performance. Adversarial AI is thus an emerging threat with potentially serious consequences in critical sectors such as finance and healthcare, where AI is widely deployed.

The AIAS project aims to perform in-depth research on adversarial AI in order to design and develop an innovative AI-based security platform that protects the AI systems and AI-based operations of organisations. The platform relies on adversarial AI defence methods (e.g., adversarial training, adversarial AI attack detection), deception mechanisms (e.g., high-interaction honeypots, digital twins, virtual personas), and explainable AI (XAI) solutions that empower security teams to materialise both “AI for Cybersecurity” (i.e., AI/ML-based tools that enhance attack detection, defence, and response) and “Cybersecurity for AI” (i.e., the protection of AI systems against adversarial AI attacks).
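To make one of the defence methods named above concrete, the sketch below illustrates adversarial training using the Fast Gradient Sign Method (FGSM) in PyTorch: each training step crafts perturbed inputs that maximise the loss and then optimises the model on a mix of clean and adversarial samples. This is a minimal, generic illustration of the technique, not an AIAS platform component; the model, optimiser, and the epsilon and loss-weighting values are illustrative assumptions.

    # Minimal FGSM adversarial-training sketch (illustrative; not AIAS code).
    import torch
    import torch.nn as nn

    def fgsm_perturb(model, x, y, epsilon, loss_fn):
        """Craft an FGSM adversarial example: x_adv = x + epsilon * sign(grad_x L)."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        # Step in the direction that maximises the loss, then detach from the graph.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
        """One training step on an equal mix of clean and adversarial samples."""
        loss_fn = nn.CrossEntropyLoss()
        x_adv = fgsm_perturb(model, x, y, epsilon, loss_fn)
        optimizer.zero_grad()
        # Equal weighting of clean and adversarial losses is a common heuristic;
        # the 0.5/0.5 split here is an assumption, not a project-specified value.
        loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Hardening a model this way typically trades a small amount of clean-data accuracy for robustness against the perturbation type seen during training, which is why adversarial training is usually combined with complementary defences such as the attack detection and deception mechanisms listed above.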