Prioritizing Security in Artificial Intelligence: A Call for a Secure by Design Approach

Discussions surrounding artificial intelligence (AI) often center on intricate technical details, but it is important to recognize that AI is fundamentally a type of software system. In light of this, experts from the National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) are underscoring the importance of applying a Secure by Design approach to AI systems.

Christine Lai, AI Security Lead, and Dr. Jonathan Spring, Senior Technical Advisor, emphasize that AI, like any software, must adhere to Secure by Design principles. Manufacturers of AI systems should treat the security of customers as a core business requirement rather than merely a technical aspect. This approach entails prioritizing security across the entire lifecycle of the AI product, from conception to end-of-life. Additionally, AI systems should be secure out of the box, requiring minimal configuration changes and imposing no additional cost on users.

Although AI systems have distinct characteristics, the basic tenets of Secure by Design should be applied. AI, at its core, processes data to generate predictions, recommendations, or decisions based on statistical reasoning, especially within machine learning-based AI. Similar to any other software, AI should be developed with security as a foundational consideration.

The authors stress that existing security practices and policies should be adapted to AI engineering. These practices encompass AI software design, development, data management, deployment, integration, testing, vulnerability management, incident management, product security, and end-of-life management. As AI continues to gain prominence, it’s crucial to avoid technical debt and implement Secure by Design principles from the outset.

The Secure by Design approach mandates that AI systems be built to defend against cyber threats throughout their lifecycle. This includes safeguarding devices, data, and connected infrastructure. To achieve this, AI engineering should integrate existing security practices, assign vulnerability identifiers to AI flaws, prioritize memory-safe languages, and employ software bills of materials (SBOMs) to document dependencies.
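As a concrete illustration of the last point, a software bill of materials for an AI system can record not just library dependencies but the trained model artifact itself. The sketch below builds a minimal CycloneDX-style SBOM in Python; the component names and versions are illustrative placeholders, not taken from the source.

```python
import json

# Hedged sketch: a minimal CycloneDX-style SBOM for an AI system.
# CycloneDX 1.5 adds a "machine-learning-model" component type, which lets
# the model artifact be tracked alongside ordinary software dependencies.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            # The trained model itself, versioned like any other dependency.
            "type": "machine-learning-model",
            "name": "fraud-classifier",      # illustrative name
            "version": "2.1.0",
        },
        {
            # A conventional library dependency the model relies on.
            "type": "library",
            "name": "numpy",
            "version": "1.26.4",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

Keeping the model in the same inventory as its libraries means a vulnerability in either can be traced through the same dependency-management process.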

While AI presents unique challenges, including adversarial inputs that force misclassification, it’s essential to ensure that AI-enabled systems remain secure. The focus on AI system assurance is vital, especially as AI becomes integral to various aspects of society. CISA urges technology providers to adopt Secure by Design principles for every AI model and system to ensure that AI’s transformative potential is harnessed securely.

As AI’s role in shaping our world increases, embracing Secure by Design becomes paramount to building a robust and secure digital landscape.