A.I. Deepfake Posing as the CFO Scams $25 Million: How to Protect Your Organization from the Exploding Deepfake AI Cyber Scam

Deepfakes use Artificial Intelligence (A.I.) to create hyper-realistic fake audio and video, generally designed to manipulate the viewer’s perception of reality. In most deepfakes, a real person’s face or body is digitally altered to appear to be someone else’s. Well-known deepfakes have been created using movie stars and even poorly produced videos of world leaders.

Setting aside the malicious uses, deepfake technology has been used in the film industry for quite some time to de-age actors (think Luke Skywalker in The Mandalorian) or digitally recreate actors for roles or voiceovers (think Carrie Fisher in Rogue One – okay, can you tell I’m a Star Wars geek?). Cybercriminals have latched onto the technology, using AI-generated deepfakes in conjunction with business email compromise (also known as whaling and CEO fraud) to scam organizations out of massive amounts of money.

Just recently, a finance worker at an international firm was tricked into paying out $25 million to cybercriminals who used deepfake technology to pose as the company’s Chief Financial Officer during a video conference. And it wasn’t just one deepfake! The fraudsters generated deepfakes of several other staff members, erasing any red flags that the virtual meeting might not be legitimate. As a subordinate, would you refuse a request from your boss made face-to-face (albeit virtually)? You might be savvy enough, but most employees aren’t willing to risk upsetting their boss.

Simply routing suspicious emails to spam is no longer adequate. Our Spidey Sense (the B.S. Reflex I talk about in my keynotes) must be attuned to more than business email and phone compromise. We have entered the age of Business Communication Compromise, which encompasses email, video conferences, phone calls, FaceTime, texts, Slack, WhatsApp, Instagram, Snap and every other form of communication. It takes a rewiring of the brain: DO NOT BELIEVE EVERYTHING YOU SEE. A.I. is so effective and believable that workers may even feel silly or paranoid for questioning a video’s validity. But as the employee who lost their organization $25M can surely attest, it’s far less expensive to be safe than sorry.

The defense against deepfake scams relies on the same tools used to detect and deter any type of social engineering or human manipulation. Empowering your employees, executives and customers with a sophisticated but simple reflex is the most powerful way to avoid huge losses to fraud. When you build such a fraud reflex, people are less likely to ignore their gut feeling when something is “off.” And that moment of pause, that willingness to verify before sharing information or sending money, is like gold. These are the skills that I emphasize and flesh out in my newly crafted keynote speech, Savvy Cybersecurity in a World of Weaponized A.I.

Get in touch if you’d like to learn more about how I will customize a keynote for your organization to prepare your people for the whole new world of AI cybercrime. Contact Us or call 303.777.3221.

How Hackers Use A.I. to Make Fools of Us (& Foil Security Awareness Training)

In a bit of cybercrime jujitsu, A.I.-enabled hackers are using our past security awareness training to make us look silly. Remember the good old days when you could easily spot a phishing scam by its laughable grammar, questionable spelling and odd word choice? 

“Kind Sir, we a peel to your better nurture for uhsistance in accepting $1M dollhairs.” 

Or how about fear-based emails with an utter lack of context from a Gmail account linking to suspicious “at-first-glance-it-looks-real” URLs: 

“Your recent paycheck was rejected by your bank! Please click on definitely-not-a-scam.com [disguised as your employer] and give us the entirety of your sensitive financial information”  

Well, those telltale signs no longer give the scams away.

Here’s the deal: Hackers use A.I., or more specifically Gen A.I. (Generative Artificial Intelligence), to turn outdated phishing-detection habits on their heads, tailoring perfectly crafted, error-free, emotionally convincing emails that appear to come from a trusted source and reference actual events in your life. Giving A.I. to cybercriminals is like handing your five-year-old a smartphone – they’re better at it than you will ever be.

A.I.-augmented phishing emails are designed to trigger your trust hormone (oxytocin, not to be confused with OxyContin) by systematically eliminating all of the red flags you learned about during your organization’s cybersecurity awareness training. So, when an employee receives a well-crafted, error-free email from a friend that references recent personal events, past cybersecurity awareness training actually encourages them to click on it.

To make matters worse, if the hacker happens to have access to breached data about you, like emails compromised during a Microsoft 365 attack, they become the Frank Abagnale of phishing (the world’s most famous impersonator, if you don’t know who he is). Criminals can easily dump breached data into a Large Language Model (LLM) and then ask A.I. to compose a phishing campaign based on your past five emails.

A.I. software allows even novice cybercriminals to scrape your relationships, life events and location from social media, combine them with personally identifiable information purchased on the dark web, and serve it all up via email or text as if it originated from someone you trust. It’s like having your own personal stalker, but it’s a cyborg that understands your love of blueberry cruffins and ornamental garden gnomes. (Okay, maybe those are my loves, not yours.)

The reality is that hackers are no longer crafting these emails one by one; artificially intelligent software now does millions of times per day what nation-state hackers used to spend months doing to prepare spear-phishing campaigns. It means that phishing and business email compromise campaigns will eventually appear in your inbox as often as spam. And that threatens your bottom line.

Let’s get serious for a hot minute. For those of you who have attended one of my cybersecurity keynotes, here is an organized approach to the steps your organization should begin taking, as outlined in the Blockbuster Cyber Framework:

  1. HEROES (Your people): Immediately retrain your people to identify and verify communications, distinguishing harmful phishing and social engineering schemes from legitimate messages. This requires new thinking applied to old reflexes.
  2. STAKES (What you have to lose): Identify which data is the most sensitive, profitable, and targeted by ENEMIES, and prioritize its defense. You can’t protect everything, so protect the right things first. 
  3. SETTING (Your technology): First, implement defensive software tools like A.I.-enhanced spam filtration that help detect phishing emails; generative A.I. is brilliant at detecting patterns, which makes identifying even the most well-crafted phishing campaigns somewhat easier (see the sketch just after this list). Second, properly segment and segregate your network so that access to one area of your data doesn’t expose the others.
  4. GUIDES (Experts in the field): Hire an external security assessment team (not your I.T. provider) to evaluate your technological and human defenses and known vulnerabilities. Internal teams have less incentive to discover their own mistakes.
  5. PLAN (Pre-attack and post-attack next steps): Develop a prevention roadmap before the ATTACK and an Incident Response Plan that lets you know exactly who to call and how to respond when a successful phishing attack occurs (because it will). Preparation is the greatest form of mitigation. 
  6. VICTORY (When you don’t end up on the front page): When nothing bad happens, reward your people. Throw a party for your team, because nothing says “thank you for not clicking on that profit-destroying scam” like a rowdy office shindig. Incentivizing good behavior is just as critical to your culture of security as retraining after someone mistakenly clicks on a phishing email. 
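
For readers who want to see what the “pattern detection” in item 3 looks like in practice, here is a minimal sketch of a toy phishing-email classifier in Python using scikit-learn. The sample emails, labels and model choice are illustrative assumptions, not a production filter:

```python
# A toy phishing classifier: TF-IDF features + logistic regression.
# The inline dataset is illustrative; a real filter trains on
# thousands of labeled messages and far more signals than text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your paycheck was rejected, verify your bank details immediately",
    "Urgent: the CEO needs gift cards purchased before end of day",
    "Here are the meeting notes from Tuesday's budget review",
    "Lunch Thursday? The new place on 5th has great cruffins",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF turns each email into word-frequency features; logistic
# regression learns which word patterns tend to signal phishing.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

suspect = "Urgent: wire $25M to this account today, will explain later"
probability = model.predict_proba([suspect])[0][1]
print(f"Phishing probability: {probability:.2f}")  # high score = flag for review
```

Tools like this should supplement, never replace, the human verification habits described in item 1.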

Cybercrime is constantly changing, and A.I. now enables every attack type to scale. Make sure your cyber defenses and your people don’t end up being the fool.

John Sileo is a cybersecurity author, expert and keynote speaker fascinated by how A.I. accelerates everything, including crime. His clients range from the Pentagon to Amazon, from small businesses to large associations. John has been featured on 60 Minutes and Fox & Friends, and has even cooked meatballs with Rachael Ray. His latest keynote speech is Savvy Cybersecurity in a World of Weaponized A.I. Contact Us or call for details: 303.777.3221.

ChatGPT: What is Generative AI & Why Does it Matter? (As Explained by a Bot)

ChatGPT is a Type of Generative AI (and this post is an example)

(The best way to understand ChatGPT and Generative AI is to show what it can do. So, everything below this line was written by ChatGPT based on the prompt “Write an article about Generative AI and ChatGPT that explains what it is and how it can be used for good or evil.” I added the subtitles and a sentence, in red text, on the cybersecurity implications of ChatGPT that we are already witnessing. Other than that, everything was written by a bot. – John)

Artificial Intelligence (AI) is a rapidly growing field that has the potential to revolutionize the way businesses operate. From automating repetitive tasks to making complex decisions, AI is already being used in various industries to improve efficiency, reduce costs, and enhance the customer experience. However, one area of AI that has gained particular attention in recent years is Generative AI. This technology, which allows AI systems to create new content, has the potential to change the way businesses work in ways that we can’t yet fully anticipate.

One of the most significant changes that generative AI will bring to businesses is the ability to automate the creation of content. For example, a generative AI system can be trained to write news articles, create social media posts, or even compose music. This will save businesses time and money while also allowing them to create more content than ever before.

An Example of Generative AI

ChatGPT is an example of generative AI applied to content creation; it is a type of language model developed by OpenAI. It is based on a technology called the “transformer,” which allows it to understand and generate human language. Essentially, ChatGPT is a program that can understand and respond to natural-language text input: it can answer questions, summarize information, generate text and even complete a passage given some initial context.

For example, if you ask ChatGPT to write a news article on a specific topic, it can use the information it has learned from a large dataset of text to write an article on that topic. Similarly, it can answer a question you might ask, such as “What is the capital of France?” or “What is the definition of AI?” Additionally, it can generate text such as poetry, song lyrics, stories, and so on.

It is used in various applications such as chatbots, virtual assistants, automated content generation, language translation, and more. It can be fine-tuned for a specific task by training it on a smaller dataset of text related to that task.

It is a powerful tool for natural language processing: it is capable of understanding and generating human-like text and can be useful in various industries. However, like any AI technology, it has its limitations, and it’s important to keep in mind that it won’t always produce perfect results.
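
To make that concrete, here is a minimal sketch (my addition, not the bot’s) of asking ChatGPT a question programmatically through OpenAI’s Python library; the model name is an illustrative assumption, and you would need your own API key for it to run:

```python
# Minimal sketch: querying ChatGPT through OpenAI's Python SDK.
# Assumes the OPENAI_API_KEY environment variable is set; the
# model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)  # e.g., "The capital of France is Paris."
```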

Other Uses of Generative AI

In addition to creating content, generative AI also has the potential to improve the decision-making process. This is because it can analyze large amounts of data and identify patterns that humans may not be able to see. For example, a generative AI system can be used to predict future market trends, identify potential fraud, or even recommend personalized products to customers.
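
As a small illustration of what pattern-based fraud detection looks like, here is a sketch using a classic machine-learning technique (scikit-learn’s IsolationForest) rather than a generative model, since that is how anomaly detection is typically implemented; the transaction amounts are made-up toy data:

```python
# Flagging anomalous transactions with an Isolation Forest.
# The amounts below are toy data; real systems use many features
# (merchant, location, time of day) rather than amount alone.
import numpy as np
from sklearn.ensemble import IsolationForest

amounts = np.array([[25.0], [31.5], [18.2], [27.9], [22.4], [9400.0]])

detector = IsolationForest(contamination=0.2, random_state=42)
flags = detector.fit_predict(amounts)  # -1 marks an outlier

for amount, flag in zip(amounts.ravel(), flags):
    if flag == -1:
        print(f"Review transaction of ${amount:,.2f}")  # catches the $9,400 charge
```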

The Risks of ChatGPT and Generative AI

However, with any new technology comes risk. One of the main risks of generative AI is the potential for it to produce fake or biased content. For example, a generative AI system that is trained on biased data could produce content that is racist, sexist, or just plain inaccurate. This could lead to serious damage to a company’s reputation and credibility.

Another risk of generative AI is the potential for it to be used for malicious purposes. For example, a generative AI system could be used to create deepfake videos or fake news articles that spread misinformation and cause chaos. This could have serious consequences, not just for businesses but for society as a whole. It can also create phishing emails for cybercriminals who don’t speak English, write malicious code for malware, ransomware and other criminal tools, and do it all for free in a matter of minutes.

In conclusion, Generative AI is a powerful technology that has the potential to change the way businesses work in many ways. However, as with any new technology, it also poses risks that need to be carefully considered. Businesses should be aware of these risks and work to mitigate them in order to fully harness the power of Generative AI.

___________________________

Cybersecurity keynote speaker John Sileo’s newest keynote speech, Hacking A.I. – Cybersecurity in the Age of Artificial Intelligence, explores the changing landscape of technology and cyber threats due to tectonic shifts fostered by ChatGPT, Generative AI, cloud computing, deepfakes, and adaptive ransomware. For every good use of technology, there is a corresponding evil intention exploited by cybercriminals, corporate spies and rogue nation-states. Your awareness, response and resilience have become even more vital to your organization’s performance and reputation. John is offering only 24 Hacking A.I. keynotes this year due to advance bookings of his other keynote speeches. Bring him in for this business-oriented, non-technical, cutting-edge cybersecurity update by calling us directly at 303.777.3221 or filling out our Contact Us form.