#006 Cyber AI Chronicle - Deepfake in the news
By Simon Ganiere · 11th February 2024
Welcome back! This week's newsletter is a 10-minute read.
I'm trying something new and changing the format a little. The objective is to focus on sharing what I learned, read, watched, and listened to during the week. Let me know what you think!
Table of Contents
What I learned this week
TLDR;
Deepfake threats are not new, but they are picking up pace given the speed at which AI technology is progressing.
That being said, good old common sense and basic controls will go a long way! The usual triggers show up in deepfake attacks: a sense of urgency, odd requests, threats of consequences if you don't comply, etc.
Now is the time to do your threat modeling on how AI technologies will improve, scale, and enable both old and new attacks.
Ransomware is still a significant problem across all industries.
I might need to write this every week, but the basics, people! The basics are what matter. Don't fall for the latest shiny product; focus on what matters most. This week's lesson is the same as in January: if a technology is internet facing, you must be able to patch it within 24 hours.
Deepfake
Are financial scams going to be the first adopter of the AI frenzy?
You have most probably heard about the significant financial loss suffered by a multinational company in Hong Kong: a whopping US$25.6 million! All of this was due to a scam where the threat actor (unknown at this point) leveraged deepfake technology to make an employee believe he was speaking with the CFO and team members of another company, convincing him to make 15 transfers to five different Hong Kong bank accounts.
Hopefully, we will get more technical details on how it happened, as not much has actually been revealed so far. The reporting is also a bit sketchy, with details varying between news websites. We will see which details get confirmed…or not, but in any case the story has the added benefit of raising awareness on the topic.
Threat actors have not just discovered the ability to create fake photos, videos, or voices. Some of the oldest forms of fraud and scams date back to 300 BC (!). Technology has been enabling scams for a long time, from “pump-and-dump” schemes to Business Email Compromise (BEC) and other techniques. That being said, the latest developments in AI technologies enable threat actors (or anyone, for that matter) to create deepfakes, from fake photos to videos and voices, at low cost and with pretty good quality. Think about those AI-generated headshots based on a few dozen pictures of you…it is not that difficult to get a fake picture of you anymore.
The threat is taken very seriously, as it can lead to some horrible situations, and not only from a financial-scam perspective. The EU Commission adopted a proposal to update the criminal law rules on child sexual abuse. In particular, the new rules update the definitions of the crime to include child sexual abuse material in deepfakes or AI-generated material. Obviously, deepfakes can also be used for other nefarious purposes such as fake news, disinformation, and mass manipulation.
So whilst the creation of deepfakes seems to have improved significantly, what can we do about detecting them? It is not easy, but common sense can most certainly help. Here is a high-level list of what you can look for; this article has a few additional tips as well.
Visual inspection: sometimes deepfakes contain artefacts or other abnormalities that are not present in real videos. Look for strange flickering, distorted images, or mismatched lip movements.
Metadata analysis: Digital files often contain metadata that can be used to trace their origin and authenticity. By analyzing the metadata of a video, it may be possible to determine whether it has been edited or manipulated.
Forensic analysis: There are several forensic techniques that can be used to analyze videos and detect deepfakes. These techniques might include analyzing patterns in the video, examining the audio track, or comparing the video to other sources to look for discrepancies.
Machine learning: It is also possible to use machine-learning algorithms that can be trained on a large dataset of real and fake videos, and then used to classify new videos as either real or fake.
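To make the machine-learning bullet above concrete, here is a minimal, hypothetical sketch: a tiny logistic-regression classifier trained on made-up feature vectors. The feature names (flicker score, lip-sync error, blink-rate anomaly) and all the values are illustrative assumptions; a real detector would extract such features from video frames with computer-vision tooling and train on a large labeled dataset.

```python
import math

# Toy feature vectors: [flicker_score, lip_sync_error, blink_rate_anomaly].
# All values are invented for illustration; they are not from a real dataset.
REAL = [[0.10, 0.05, 0.10], [0.20, 0.10, 0.05], [0.15, 0.08, 0.12], [0.05, 0.12, 0.08]]
FAKE = [[0.80, 0.70, 0.90], [0.90, 0.85, 0.70], [0.75, 0.90, 0.80], [0.85, 0.60, 0.95]]

def train(real, fake, epochs=2000, lr=0.5):
    """Fit a tiny logistic-regression classifier with stochastic gradient descent."""
    data = [(x, 0.0) for x in real] + [(x, 1.0) for x in fake]  # 0 = real, 1 = fake
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid: P(fake)
            err = p - y                              # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def fake_probability(w, b, x):
    """Score a new feature vector: probability the video is fake."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

w, b = train(REAL, FAKE)
print(fake_probability(w, b, [0.9, 0.8, 0.85]))  # fake-like sample: high probability
print(fake_probability(w, b, [0.1, 0.1, 0.10]))  # real-like sample: low probability
```

The point is not the algorithm (production systems use deep networks) but the workflow: label examples, fit a model, then score unseen videos.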
The future might also include more advanced levels of authentication in corporate environments. Securely exchanged encryption keys can be used to ensure that you are really speaking with whom you think you are speaking. This adds another layer of complexity for threat actors.
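One way such key-based authentication could work is a challenge-response check: before acting on a sensitive request made on a call, one party issues a random challenge that only a holder of a pre-shared secret can answer. This is a minimal sketch assuming the secret is distributed out of band (e.g. through an enterprise key-management system, which is not shown):

```python
import hashlib
import hmac
import secrets

# Placeholder: in practice this secret would be provisioned out of band
# to both parties, never sent over the channel being authenticated.
SHARED_SECRET = secrets.token_bytes(32)

def issue_challenge() -> bytes:
    """Fresh random nonce, so a recorded response cannot be replayed later."""
    return secrets.token_bytes(16)

def respond(secret: bytes, challenge: bytes) -> str:
    """Prove possession of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, response: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(respond(secret, challenge), response)

# The real "CFO" can answer the challenge; a deepfake impersonator,
# who only mimics face and voice, cannot.
challenge = issue_challenge()
print(verify(SHARED_SECRET, challenge, respond(SHARED_SECRET, challenge)))   # True
print(verify(SHARED_SECRET, challenge, respond(secrets.token_bytes(32), challenge)))  # False
```

The design choice here is that authentication binds to possession of a key rather than to anything a deepfake can imitate (appearance, voice, mannerisms).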
Yes, I know some of those are not easy to do, especially at speed. I can't wait to start a Teams meeting by asking people to blink furiously or move their heads around to spot a blip in the video. That being said, common sense is most probably going to come to the rescue, as threat actors still use the same approach:
Create a sense of urgency to encourage you to do something quickly
They often present an offer that really is too good to be true
They ask for things you would not normally do.
Going back to the Hong Kong scam at the beginning of the newsletter: making 15 different transfers to 5 different bank accounts should probably have triggered some other controls. If a request does not follow the normal process, it's probably because it is not legitimate.
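The kind of basic control that could have caught this is easy to sketch. The rules, thresholds, and field names below are hypothetical, made up for illustration rather than taken from any real payment policy:

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount_usd: float
    beneficiary_account: str
    initiated_via: str  # e.g. "payment_system" or "video_call"

def flag_anomalies(requests, max_transfers=5, max_accounts=2):
    """Return the reasons a batch of transfer requests should be escalated.

    Illustrative thresholds: more than `max_transfers` transfers or more than
    `max_accounts` distinct beneficiary accounts in one batch is suspicious,
    as is any request initiated outside the normal payment process.
    """
    reasons = []
    if len(requests) > max_transfers:
        reasons.append(f"{len(requests)} transfers in one batch exceeds limit of {max_transfers}")
    accounts = {r.beneficiary_account for r in requests}
    if len(accounts) > max_accounts:
        reasons.append(f"{len(accounts)} distinct beneficiary accounts exceeds limit of {max_accounts}")
    if any(r.initiated_via != "payment_system" for r in requests):
        reasons.append("request initiated outside the normal payment process")
    return reasons

# The Hong Kong scenario: 15 transfers to 5 accounts, requested on a video call.
batch = [TransferRequest(1_700_000.0, f"HK-ACCT-{i % 5}", "video_call") for i in range(15)]
for reason in flag_anomalies(batch):
    print("ESCALATE:", reason)
```

Whatever the deepfake looks like, a rule like this forces the request back into a process the attacker does not control.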
As a brief conclusion, since I'm sure we will talk about this topic again: the age-old wisdom of pausing to question anomalies remains one of our most reliable shields. One of the key questions for the future lies in our ability to ensure authenticity. With all the progress and value that AI technologies can give us, if we have to continuously challenge the authenticity of any content, it's going to be a real problem. A start of an answer could be the work initiated by C2PA on watermarking…but even that still raises a lot of questions as to whether it can be part of the answer.
Worth a full read
Inside the Underground Site Where ‘Neural Networks’ Churn Out Fake IDs
Key Takeaway
OnlyFake's use of neural networks represents a significant advancement in the creation of fake IDs, posing a challenge to cybersecurity.
The ease and speed of generating these IDs democratize access to fraudulent documents, potentially increasing criminal activities.
Financial institutions and cryptocurrency exchanges are particularly vulnerable to these sophisticated fraud methods.
The ongoing development of countermeasures by companies indicates a cat-and-mouse game between fraudsters and security teams.
The case underscores the urgent need for improved identity verification methods in the face of advancing AI technologies.
Note: the OnlyFake website was taken down after the publication of that article.
Ransomware Hit $1 Billion in 2023
Key Takeaway
The resurgence of ransomware in 2023 highlights the adaptability and persistence of cybercriminals despite law enforcement efforts.
The exploitation of software vulnerabilities, such as MOVEit, underscores the importance of cybersecurity vigilance across all sectors.
The significant role of Ransomware as a Service (RaaS) and Initial Access Brokers (IABs) in facilitating ransomware attacks points to the need for enhanced cybersecurity measures and awareness.
Some more reading
Secure AI agents necessitate novel technologies, including hardened sandboxes, agent-specific authentication, and the API-ification of legacy systems for enhanced capabilities » READ
The rise of the Internet of Agents (IoA) necessitates a revolutionary approach to cybersecurity to ensure a secure, AI-powered future » READ
FCC bans AI-generated voice cloning in robocalls » READ
No, toothbrushes did not cause a massive DDoS attack » READ
Mastercard leverages its AI capabilities to fight real-time payment scams » READ
Google Bard becomes Gemini » READ
Ivanti is still in the news, disclosing a 5th vulnerability after one hot-fix and two patches…if you run Ivanti, go patch it now or just remove it from your environment » READ
CISA, FBI warn of China-linked hackers pre-positioning for ‘destructive cyberattacks’ against US critical infrastructure » READ
ONCD is studying ‘liability regimes’ for software flaws » READ
Quote of the week
A breach alone is not a disaster, but mishandling it is.
Contact
Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!
Thanks! See you next week! Simon