#010 Cyber AI Chronicle - AI Regulation: EU AI Act
Cyber AI Chronicle
By Simon Ganiere · 10th March 2024
Welcome back! This week's newsletter is a 15-minute read.
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
What I learned this week
TL;DR
I had less time to play with code this week ☹️ but I really appreciate the few of you who pinged me to give me some guidance, share some insights and mention a few new tools! Will do my best to find some more time very soon to continue this exploration!
I decided to cover (some of) the AI regulation landscape this week. Not that I find it the most exciting topic, but reading up on the EU AI Act I really liked the risk-based approach it takes. It makes a lot of sense to me and can potentially be used in a corporate environment to shape your AI risk management framework.
A short note, which I will need to expand at some point, on API security. I still believe this is a critical part of securing an AI ecosystem. The near future is more about connecting APIs together to enhance an LLM than anything else, so I would 100% ensure your API security strategy is where it should be. Be careful, though: vendors are already jumping on this with the release of so-called "Firewall for AI" products…the marketing teams are at full speed here, as it's basically a WAF with rate limiting and sensitive data detection (that good old DLP control)!
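To make that point concrete, here is a minimal, purely illustrative Python sketch of what such a "Firewall for AI" control boils down to: a per-client rate limit plus a regex-based sensitive-data pass over the prompt. The client identifiers, patterns and thresholds are my own assumptions for the sketch, not any vendor's actual product.

```python
import re
import time
from collections import defaultdict, deque

# Hypothetical illustration: client IDs, patterns and thresholds are assumptions
# for this sketch, not any vendor's actual "Firewall for AI" implementation.
RATE_LIMIT = 30          # max requests per client per sliding window
WINDOW_SECONDS = 60      # sliding window size in seconds
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

_request_history = defaultdict(deque)  # client_id -> timestamps of recent requests

def check_request(client_id: str, prompt: str) -> list:
    """Return the list of policy violations for one inbound LLM API call."""
    violations = []

    # 1. Rate limiting: drop timestamps outside the window, then count.
    now = time.monotonic()
    window = _request_history[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    window.append(now)
    if len(window) > RATE_LIMIT:
        violations.append("rate_limit_exceeded")

    # 2. Sensitive-data detection: the good old DLP-style regex pass.
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"sensitive_data:{label}")

    return violations

# Example: a prompt that leaks an email address trips the DLP check.
print(check_request("client-42", "Summarise the ticket opened by alice@example.com"))
```

Nothing in there is new technology: it is classic API security hygiene applied in front of an LLM endpoint, which is exactly why the existing API security strategy matters more than the new branding.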
Incident response is not easy…and Microsoft is proving it again. The latest on the Midnight Blizzard hack, which occurred in November and was disclosed in January, is that it is still ongoing, including against Microsoft customers…what I would call a persistent adversary. This reads a little bit like the Cloudflare story, which saw the exploitation of one access token and three service account credentials that had not been remediated…double-check those playbooks and your eradication steps!
AI Regulation - The EU AI Act in a Nutshell
This week let’s talk about AI regulation! I know, not the most exciting topic, but it’s important, and I believe the EU AI Act’s risk-based approach is nicely done and can potentially serve as the basis for your own AI risk management framework.
The EU's AI Act is the first comprehensive legal framework on AI worldwide. The first version of the Act was proposed in April 2021, and in early 2024 the European Parliament agreed on a final version. The AI Act will likely be published in the Official Journal of the EU between May and July, once it has been approved by both the European Parliament and the Council. This means compliance with the Act will start in 2026 - there are exceptions for unacceptable-risk systems, which will need to be removed six months after the Act becomes law.
The Act takes a risk-based approach, defining different levels of risk based on the type of AI system:
Unacceptable-risk AI systems are considered a threat to people and will be banned. This includes AI systems for biometric identification and categorisation of people, social scoring, etc.
AI systems that negatively affect safety or fundamental rights will be considered high risk. This category is split into two: 1) AI systems used in products falling under the EU’s product safety legislation; 2) AI systems falling into specific areas (such as management of critical infrastructure, education, law enforcement, etc. - the full list is defined in Annex III of the Act), which will need to be registered in an EU database. All high-risk AI systems will be assessed before being put on the market and throughout their lifecycle - I will come back to this.
Limited-risk AI systems, which cover the likes of ChatGPT and other chatbot-type systems, are subject to transparency obligations that allow individuals interacting with the system to make informed decisions.
Last but not least, the fourth category covers the so-called low-risk AI systems. They don’t use personal data or make any predictions that influence human beings. Most AI systems should fall under this category; examples include spam filters and other processing workflows.
One of the latest revisions includes compliance elements for “general-purpose AI models”. The stricter rules will apply to models trained with computing power exceeding 10^25 FLOPs. That basically means that models like GPT-4 or Gemini will be in scope of the strictest requirements.
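For a sense of scale, here is a back-of-the-envelope sketch using the common "training FLOPs ≈ 6 × parameters × training tokens" rule of thumb. The model sizes and token counts below are illustrative assumptions, not disclosed figures for any real model.

```python
# Back-of-the-envelope check against the 10^25 FLOPs threshold, using the common
# "training FLOPs ≈ 6 × parameters × training tokens" rule of thumb. The model
# sizes and token counts below are illustrative assumptions, not real disclosures.
THRESHOLD_FLOPS = 1e25

def training_flops(parameters: float, tokens: float) -> float:
    """Rough estimate of total training compute."""
    return 6 * parameters * tokens

examples = [
    ("7B-parameter model, 2T tokens", 7e9, 2e12),
    ("hypothetical 1T-parameter model, 10T tokens", 1e12, 1e13),
]
for name, params, tokens in examples:
    flops = training_flops(params, tokens)
    verdict = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({verdict} the threshold)")
```

The point is simply that today only the very largest frontier training runs clear that bar; the vast majority of models a company builds or fine-tunes will sit well below it.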
Zooming into high-risk AI systems and their requirements: most of the list below falls on the provider of the AI system. Importers, distributors and, of course, users also have their respective responsibilities. Back to our list of requirements:
Risk Management System (article 9): High-risk AI systems are required to have appropriate risk management processes to identify, evaluate and mitigate potential risks of an AI system throughout its lifecycle.
Data and data governance (article 10): The datasets that support the training of models have to meet high quality standards. Training, validation and testing data must be relevant, representative, free of errors and complete. Special attention must be paid to biases in the data.
Technical documentation (article 11): Annex IV includes a detailed list of the technical documentation that is required. The objective is to have documentation that supports compliance with the Act.
Record-keeping (article 12): AI systems must automatically record events (said differently: logs) to ensure a level of traceability of the AI system’s functioning throughout its lifecycle - a minimal logging sketch follows after this list.
Transparency (article 13): high-risk AI systems must be designed so that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.
Human oversight (article 14): high-risk AI systems must provide human-machine interface tools that prevent or minimise risks upfront, enabling users to understand, interpret, and confidently use these systems.
Accuracy, robustness and security (article 15): High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle. High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use or performance by exploiting the system vulnerabilities.
The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.
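As a concrete illustration of the record-keeping requirement mentioned above, here is a minimal sketch of an append-only, JSON-lines event log for an AI system’s decisions. The field names, file layout and example values are my own assumptions; the Act does not prescribe a specific log format.

```python
import json
import time
import uuid
from pathlib import Path

# Illustrative sketch only: the field names, file layout and example values are
# assumptions; the Act does not prescribe a specific log format.
LOG_FILE = Path("ai_system_events.jsonl")

def record_event(model_id: str, model_version: str, input_ref: str,
                 output_ref: str, decision: str) -> str:
    """Append one traceability record (article 12-style event log) as a JSON line."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),        # when the system produced the output
        "model_id": model_id,            # which model was involved
        "model_version": model_version,  # which version, for lifecycle traceability
        "input_ref": input_ref,          # pointer to the input, not the raw data
        "output_ref": output_ref,        # pointer to the produced output
        "decision": decision,            # e.g. "approved" or "flagged_for_human_review"
    }
    with LOG_FILE.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(event) + "\n")
    return event["event_id"]

# Example: record one scoring decision that was routed to a human reviewer.
record_event("credit-scoring", "2024.03.1", "inputs/request-123.json",
             "outputs/request-123.json", "flagged_for_human_review")
```

Storing references rather than the raw inputs and outputs keeps the audit trail useful without turning the log itself into another sensitive-data store.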
The EU AI Act is indeed a very comprehensive framework for AI. Time will tell if the Act finds the right balance between a harmonised set of rules focusing on safety, legal compliance, governance, etc. and fostering innovation and a single market for AI applications.
Conclusion
The reality is that the strictest requirements in the Act will only apply to a very small set of companies and models. In a way, this is what you want from regulation: don’t put everybody under the strictest rules, as it would just be counterproductive.
So what can we learn from the Act? My view is that the risk-based approach can be easily applied within a company. Where to start?
Define your risk approach based on your business model. This is about your context. You can apply this to your application criticality as well: if you plan to leverage AI for your critical business processes, then you want to be sure those AI models get the right level of scrutiny…or you can decide that some of those critical business processes should never rely on an AI model.
Define your checklist! Define that minimum bar so you have consistency. The list of articles in the Act is pretty clear and can make a really good starting point.
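To show what that could look like in practice, here is a minimal sketch of a tiered checklist inspired by the Act’s categories. The tier names mirror the Act, but the classification questions and control names are assumptions you would tailor to your own organisation.

```python
from enum import Enum

# Illustrative sketch: the tier names mirror the Act's categories, but the
# classification questions and control names are assumptions to tailor to your org.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright, do not build or buy
    HIGH = "high"                  # full set of controls before go-live
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # baseline hygiene only

MINIMUM_CONTROLS = {
    RiskTier.HIGH: [
        "risk_assessment",          # article 9-style risk management
        "data_governance_review",   # article 10
        "technical_documentation",  # article 11
        "event_logging",            # article 12
        "human_oversight_plan",     # article 14
        "security_testing",         # article 15
    ],
    RiskTier.LIMITED: ["transparency_notice", "event_logging"],
    RiskTier.MINIMAL: ["asset_inventory_entry"],
}

def classify(use_case: dict) -> RiskTier:
    """Toy classifier: map a use case to a tier based on a few yes/no questions."""
    if use_case.get("prohibited_practice"):       # e.g. social scoring
        return RiskTier.UNACCEPTABLE
    if use_case.get("critical_business_process") or use_case.get("affects_individual_rights"):
        return RiskTier.HIGH
    if use_case.get("user_facing_genai"):         # chatbots and similar
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer-facing chatbot lands in the limited tier.
tier = classify({"user_facing_genai": True})
print(tier.value, MINIMUM_CONTROLS.get(tier, []))
```

The value is not in the code itself but in agreeing on the questions and the minimum bar per tier, so that every new AI use case gets a consistent level of scrutiny.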
Worth a full read
Financial Services and AI: Leveraging the Advantages, Managing the Risks
Key Takeaways
The FS-ISAC AI Risk Working Group provides frameworks to help financial institutions manage AI risks effectively.
Cybersecurity strategies include understanding AI threats, leveraging AI in defense, and ensuring responsible AI use.
Key threats include deepfake attacks, misuse of generative AI by employees, and new phishing techniques.
Trust is paramount; misapplied or insecure AI can erode trust among customers, regulators, and investors.
Obviously, there is much more, as it’s difficult to summarise six white papers and a tool in a few bullet points. I highly recommend having a look at the details!
Interview with JPMorgan’s Global CISO
Key Takeaways
The industry has improved in identifying supply chain vulnerabilities but struggles with detecting sophisticated backdoors and logic issues.
Security is viewed as the study of failure, focusing on protecting confidentiality, integrity, and availability.
Evolving infrastructure demands that security teams adapt and integrate more closely with engineering teams.
Embedding security engineers within teams is crucial for anticipating and designing against cybersecurity threats.
A strong engineering background is deemed essential for CISOs to effectively contribute to security and innovation.
Geopolitical risks and the protection of critical infrastructure are top concerns for maintaining financial stability.
Ransomware and sophisticated cyber actors pose significant threats, necessitating advanced threat intelligence and proactive measures.
The importance of automating security compliance and risk assessment in software development is highlighted.
Some more reading
Huntr, the first bug bounty program for AI/ML » READ
HackerGPT, a specialized AI assistant for ethical hackers » READ
Not new but interesting read: Cars Are the Worst Product Category We Have Ever Reviewed for Privacy » READ
FS-ISAC published six white papers related to Financial Services and AI » READ
Microsoft customers are being targeted after Redmond's source code, secrets were stolen » READ
Insider risk is never far away…including for AI engineers » READ
Aligned with my previous comments…data security is key if you leverage third-party training data » READ
Wisdom of the week
[…] As you know, security is designed around defence in depth and not every weakness that we can discover is meaningful. So we try to contextualise this data within the risks that we're managing, such that we can really differentiate between those things that are compliance issues, and those things that are true security, or resiliency issues. It's a pretty hard problem; there’s still a lot of room for growth across the industry. […]
Contact
Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!
Thanks! See you next week! Simon