IACyC Proceedings - Adapting Cybersecurity Governance Frameworks to Manage Risks in Generative AI Systems

Conference papers

Authors

Adeopatoye Remilekun Jakobs, Knut Haufe, Reiner Creutzburg, and Izuchukwu Patrick Udechukwu

Abstract

The accelerated adoption of Generative AI (GenAI) systems, particularly large language models (LLMs), has introduced cybersecurity risks that exceed the scope of traditional governance frameworks such as NIST CSF and ISO/IEC 27001. This paper presents a comprehensive evaluation of these governance gaps and proposes the Minimal AI Information Security Control Set (M-AI-ISCS), a hybrid control set that integrates AI-specific safeguards with established cybersecurity standards. Developed through Design Science Research (DSR), the control set addresses emerging threats including prompt injection, data leakage, model inversion, and adversarial manipulation. Scenario-based testing in the healthcare and fintech domains assesses the control set's effectiveness, feasibility, and regulatory alignment. Results indicate that while certain controls, such as continuous monitoring and incident response, are universally critical, others require domain-specific adaptation: ethical guardrails and privacy protection in healthcare, and adversarial detection and API security in fintech. The findings demonstrate that M-AI-ISCS improves organizational preparedness, enhances regulatory compliance, and strengthens operational resilience in GenAI deployments.

Keywords

Generative AI (GenAI), Cybersecurity Governance, Compliance, Risk Management Frameworks, ISO/IEC 27001, NIST CSF, NIST AI RMF, ISO/IEC 42001