#009 Cyber AI Chronicle - (Trying to) Build a Cyber RAG
Cyber AI Chronicle
By Simon Ganiere · 4th March 2024
Welcome back! This week’s newsletter is a 15-minute read.
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
What I learned this week
TL;DR
Building your own AI RAG is difficult - at least for me, as I have not coded anything significant in years 😆 I spent most of last weekend and several nights trying to build it without the level of success I was hoping for. That being said, it was a good exercise and allowed me to extend my understanding of some of the key concepts related to LLMs.
In the end, my failed attempts at coding made me think about the usual build vs. buy debate and what it means, including from a security perspective.
You have probably read about the issues with Google Gemini and its problematic (to say the least) text and image responses. In short: it’s great to have a new AI product, but it can burn you really badly if it goes wrong…and that applies to every company! Really curious to see how this will play out for Google.
(Trying to) Build a Cyber RAG
So I wanted to create a Retrieval Augmented Generation (RAG) tool to look at some cyber reports. My objective was basically to take a dozen cyber reports from known vendors and build a tool to query those reports and extract key trends, topics, etc. The idea was obviously to go beyond basic questions and do some real analysis work: things like show me the key points mentioned in only one report, show me the key elements mentioned in more than X reports, etc.
It’s definitely been a few years since my last attempt to do some real coding, so as usual not everything went as planned! I started by watching a few tutorials on YouTube (like this one, this one or this one)…I’ll skip the numerous bugs, the dozens of ‘pip install’ for dependencies (don’t forget that Python virtual environment if you don’t want to mess up your system), the deprecated functions (yeah, even YouTube tutorials from like 2 weeks ago use deprecated functions, as AI libraries are getting updated real fast), etc. but I didn’t really succeed in building what I wanted ☹️
A few things that I came to realise along the way:
If you are a casual coder, like me, you will struggle…and yes, even with ChatGPT trying to help you out.
Stitching together some scripts is one thing…understanding some of the key AI stack concepts is super important: tokenisation, vectors, context windows, temperature, etc. (see the sketch after this list).
Don’t forget, your code might be all good but it’s still an AI model so you need to get your prompt right. Prompt engineering matters…a lot!
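To make those moving parts concrete, here is a minimal sketch of the retrieval side of a RAG pipeline: chunking the reports, embedding the chunks, and ranking them against a question. It assumes the pypdf and sentence-transformers packages and a hypothetical reports/ folder; treat it as an illustration of the concepts, not the code I actually fought with.

```python
# Minimal RAG retrieval sketch (illustrative only): chunk PDF reports,
# embed the chunks, and return the most relevant ones for a question.
# Assumes `pip install pypdf sentence-transformers numpy` and a ./reports folder.
from pathlib import Path

import numpy as np
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer


def load_chunks(folder: str = "reports", chunk_size: int = 1000) -> list[str]:
    """Extract text from every PDF and split it into fixed-size chunks."""
    chunks = []
    for pdf in Path(folder).glob("*.pdf"):
        text = " ".join(page.extract_text() or "" for page in PdfReader(pdf).pages)
        chunks += [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return chunks


def top_chunks(question: str, chunks: list[str], k: int = 5) -> list[str]:
    """Embed question and chunks, rank by cosine similarity, keep the top k."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = model.encode(chunks, normalize_embeddings=True)
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # cosine similarity, since vectors are normalised
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]


if __name__ == "__main__":
    chunks = load_chunks()
    for chunk in top_chunks("What are the key ransomware trends?", chunks):
        print(chunk[:200], "\n---")
```

The top chunks then get pasted into the prompt of whatever LLM you query, which is exactly where the points above (tokens, context windows, prompt wording) start to bite.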
After all of my failed attempts, I decided to take a step back. As much as my ego was telling me I could build something on my own, I was also wondering if that was the right approach. The infamous build vs. buy debate starts kicking in! So you have a few options:
Train your own LLM from scratch: Might be needed in a corporate environment, but apart from the excitement a geek gets from doing it on her/his own, it makes more or less zero sense to do it yourself. This is probably the most “secure” way as you are in charge…even though that comes at a cost from a security perspective as well. It takes significant knowledge and a huge amount of compute power. Not to mention that your traditional security needs to be strong (e.g. third-party components, vulnerabilities, data security, access controls, etc.)
Fine-tuning an existing LLM: You can take an existing LLM and fine-tune it. Tweak some of the parameters of the base model and feed it only specific data (a lot less data than when you train your own LLM from scratch). Here you open yourself up to another round of security challenges. You need to assess the underlying model for accuracy and robustness. Let’s assume you are using Google Gemini as a base model to support your HR recruitment process…well, if Gemini has a significant bias in the model, you can do whatever you want with your fine-tuning…your HR recruitment process will be biased…and that’s not what you want.
Prompt-tuning an existing LLM: Customise your prompt! Basically, add a lot more information in the prompt itself. People tend to use ChatGPT for basic questions like it’s a Google Search type of tool…when an extended prompt with full input and output instructions can generate brilliant results. That’s where the context window helps…GPT-4 Turbo has a 128k context window…that’s approx. 300 pages of text that you can put in the prompt! Google Gemini 1.5 has a standard 128k token context window…but they have announced that a limited group of people have access to a 1 million token context window!
Back to security: the need to assess the model’s accuracy and robustness still exists…and if you put sensitive text in your prompt, you need to ensure that the model is going to keep that information secure and private. Don’t paste those candidate CVs into that prompt…it might create a whole new set of issues!
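To illustrate the prompt-tuning option, here is a minimal sketch that stuffs a full report plus explicit input and output instructions into one large prompt. It assumes the openai package and an API key; the model name, file name and instructions are just examples, not a recommendation.

```python
# Prompt-tuning sketch (illustrative only): instead of training or fine-tuning,
# put the whole report plus explicit instructions into a single large prompt.
# Assumes `pip install openai`, an OPENAI_API_KEY env var, and a local report.txt;
# the model name below is just an example of a large-context model.
from openai import OpenAI

report = open("report.txt", encoding="utf-8").read()  # fits in a 128k context window

prompt = f"""You are a cyber threat intelligence analyst.
Read the report below and return:
1. The five key findings, one sentence each.
2. Any finding that appears to be unique to this vendor.
3. A one-paragraph summary for an executive audience.

Report:
{report}
"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-turbo",  # example only: any large-context model would do
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # keep the output factual rather than creative
)
print(response.choices[0].message.content)
```

And, per the point above, only do this with reports you are comfortable sending to the model provider.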
Conclusion
So where does that leave me with my initial objective? Well, I made the decision not to train my own model (no surprise), nor to fine-tune one…so prompt-tuning it is 😃
I haven’t built the tool I was aiming for (yet) but I now have a better idea about how to approach this. The two best tools I have found are:
Fabric from Daniel Miessler. Check the patterns and the prompt details: he has built prompts that can analyse a full cyber security report and extract the key information. No secret here, I’m definitely using this to parse some of the news and other long reports.
CrewAI. I mentioned this tool already; the latest version provides some great new features to create your team of agents.
Towards the end of the week, I did find this example with CrewAI, which looks interesting. Again, not necessarily what I wanted to create, but the ability to get a crew of agents working together is key here.
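For reference, a crew for the report-analysis use case could look something like the minimal sketch below. It assumes the crewai package; the roles, goals and tasks are made up for illustration, and the library is evolving quickly, so check the current documentation.

```python
# CrewAI sketch (illustrative only): two agents cooperating on report analysis.
# Assumes `pip install crewai`; roles, goals and tasks are hypothetical examples.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="CTI researcher",
    goal="Extract the key findings from a set of cyber threat reports",
    backstory="You read vendor threat reports all day and spot recurring themes.",
)
writer = Agent(
    role="Analyst",
    goal="Turn raw findings into a short trend summary for executives",
    backstory="You translate technical findings into clear business language.",
)

extract = Task(
    description="List the findings that appear in more than one of the provided reports.",
    expected_output="A bullet list of recurring findings with the reports they appear in.",
    agent=researcher,
)
summarise = Task(
    description="Write a one-page trend summary based on the recurring findings.",
    expected_output="A short executive summary in plain language.",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[extract, summarise])
print(crew.kickoff())
```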
Definitely planning another busy week playing with all of this! If you know of any useful tools or any other tutorial on RAG please let me know!
PS: on the Google Gemini story…I’m really wondering how this will turn out. The damage in terms of reputation is huge for Google. They played the safety and responsible AI card when OpenAI went ahead of them with the release of ChatGPT…and now this issue is clearly showing some critical underlying problems in their model…not a good look at all! I was already concerned about some of the search and ads models (how do you integrate ads into the AI world?) but this might turn ugly for them.
Worth a full read
SoSafe: Cybercrime Trends 2024
Key Takeaway
AI-driven innovation since ChatGPT's launch has significantly impacted cyber security and cyberattacks.
Cybercriminals are exploiting AI for deepfakes and voice cloning, raising concerns about disinformation and social manipulation.
Generative AI advancements enable malicious use, such as prompt injection attacks and bypassing CAPTCHA codes.
AI's limitations, like code reliability issues and "hallucinations," pose new security threats.
Cybercriminals are increasingly targeting new technologies, including cloud systems and quantum computing.
The professionalization of cybercrime, with ransomware-as-a-service, poses a growing threat to various sectors.
Hacktivism is on the rise, with political and social motivations driving targeted cyberattacks.
Disinformation-as-a-service is becoming a major tool for manipulating public opinion and destabilizing entities.
Public sector and critical infrastructure face heightened cyber threats, with healthcare and education sectors particularly vulnerable.
Burnout among cyber security professionals is worsening, exacerbated by a global shortage of skilled labor.
Mandiant’s Cyber Threat Intelligence Program Maturity Assessment
Key Takeaway
Mandiant's Intelligence Capability Discovery (ICD) tool aims to improve cybersecurity by evaluating CTI program maturity.
The ICD is based on the NIST Cybersecurity Framework and includes feedback from industry peers.
It assesses six capability areas: Organizational Role of CTI, Intelligence Services, Analyst Capability, Intelligence Process Lifecycle, Analytic Practices, and Technology Integration.
The tool uses a self-assessment format with 42 questions to provide a maturity score and improvement recommendations.
Recommendations are linked to Mandiant's whitepapers and service offerings for further CTI program development.
The ICD framework is adapted from the Capability Maturity Model Integration (CMMI), assessing from Initial to Adaptive levels.
Mandiant's decade-long experience with public and private sectors informed the ICD's development.
Each capability area includes complex and simplistic measures for comprehensive evaluation.
The tool is free and designed for organizations to self-assess before considering third-party intelligence assessments.
Google Cloud hosts other assessments, including the Security & Resilience Framework (SRF), aligned with the NIST Cybersecurity Framework.
Data Scientist Targeted by Malicious Hugging Face ML Model with Silent Backdoor
Key Takeaway
JFrog Security Research team discovered malicious ML models on Hugging Face enabling attackers to execute code remotely.
Malicious models can compromise user environments by executing arbitrary code upon loading, particularly through pickle files (a harmless illustration of the mechanism follows after this list).
The compromised models allow attackers to gain full control over victims' machines via a silent backdoor.
Hugging Face has implemented security measures like malware, pickle, and secrets scanning to mitigate risks.
Despite security efforts, a model was found that bypasses these measures, demonstrating platforms' vulnerability to threats.
PyTorch and Tensorflow Keras models are identified as having the highest risk for code execution attacks.
A specific PyTorch model by user "baller423" contained a payload for a reverse shell connection to an external server.
The incident with "baller423/goober2" and similar cases underline the potential for large-scale data breaches or espionage.
JFrog's HoneyPot setup aimed to attract and analyze attacker behaviors, though no significant activity was observed.
The discovery of these vulnerabilities underscores the importance of continuous vigilance and proactive security in AI ecosystems.
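For context on why merely loading a pickled model is enough to get compromised, here is a harmless sketch of the mechanism JFrog describes: unpickling can invoke arbitrary callables via __reduce__, so the attacker's code runs at load time. The class name and payload are made up; a real payload would establish a reverse shell instead of printing.

```python
# Why pickle-based model files are risky (harmless illustration):
# unpickling can run arbitrary callables via __reduce__, so simply *loading*
# a malicious model file is enough to execute the attacker's code.
import pickle


class NotAModel:
    def __reduce__(self):
        # A real payload would open a reverse shell; this one just prints.
        return (print, ("code ran at load time, before any model was used",))


malicious_bytes = pickle.dumps(NotAModel())
pickle.loads(malicious_bytes)  # the print fires here, during deserialisation
```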
Some more reading
FTC hits Avast with $16.5 million fine over allegations of selling users’ browsing data…or when security companies don’t know how to do security » READ
X-Force Threat Intelligence Index 2024 reveals stolen credentials as top risk, with AI attacks on the horizon » READ
Apple introduces PQ3, a new post-quantum cryptographic protocol that will be used in iMessage » READ
Python Risk Identification Tool for generative AI (PyRIT) » READ
Living off the False Positive is an autogenerated collection of false positives sourced from some of the most popular rule sets. The information is categorized along with ATT&CK techniques, rule sources, and data sources » READ
AI can ‘disproportionately’ help defend against cybersecurity threats, Google CEO Sundar Pichai says » READ
Toward Better Patching — A New Approach with a Dose of AI » READ
Products on your perimeter considered harmful (until proven otherwise) » READ
NIST Releases Version 2.0 of Landmark Cybersecurity Framework » READ
Judge orders NSO to cough up Pegasus super-spyware source code » READ
The story of a company that used AI for its marketing and then failed to deliver anything close to it…not directly about AI security, but it highlights the challenges AI-generated images are bringing » READ
Wisdom of the week
I never lose. I either win or learn
Contact
Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!
Thanks! See you next week! Simon