Disinformation Campaigns Are Coming for Your Bottom Line 

The rise of disinformation campaigns could put the reputation of your company at risk

Imagine waking up to find the internet flooded with fake news that one of your products was killing hordes of people or that your company had been implicated in a human trafficking ring. Imagine if there were a deepfake video of you or one of your company executives engaging in criminal activity: purchasing illegal drugs, bribing an official or defrauding the company and its shareholders.

Welcome to the age of disinformation campaigns.

These types of campaigns are increasingly being used to target businesses and executives. For centuries, they’ve been used as a political tool for one simple reason: They work. There’s ample evidence that Russia manipulated the 2016 presidential election through fake news. In July, a European Commission analysis found that Russia targeted the European parliamentary elections, and just last week, Facebook and Twitter had to take action against China after it orchestrated numerous coordinated social media campaigns to undermine political protests in Hong Kong. 

From Italy to Brazil, Nigeria to Myanmar, governments or individuals are sowing division, discrediting an opponent or swaying an election with false information — often with deadly consequences.

Here at home, there have been numerous disinformation campaigns aimed at politicians and other individuals. Earlier this summer, a video of House Speaker Nancy Pelosi, doctored to make it appear that she was drunk, went viral. Last July, the Conservative Review network (CRTV) posted an interview to Facebook with Congresswoman Alexandria Ocasio-Cortez (then a candidate) in which she appeared generally confused and seemed to think Venezuela was in the Middle East. It turned out the “interview” was a mashup of an interview Ocasio-Cortez gave on the show Firing Line spliced with staged questions from CRTV host Allie Stuckey. The post was viewed over a million times within 24 hours and garnered derisive comments from viewers who thought it was real — before Stuckey announced that it was meant as satire.

Republican politicians have also been targeted (though to a lesser degree). Last year, North Dakota Democrats ran a Facebook ad under a page titled “Hunter Alerts.” The ad warned North Dakotans that they could lose their out-of-state hunting licenses if they voted in the midterm elections, a claim that was unsubstantiated and refuted by the state’s GOP.

Regardless of the targets, disinformation campaigns are designed to leave you wondering what information to trust and who to believe. They succeed when they sow any sense of doubt in your thinking.

The same technology that makes the spread of false information in the political arena so dangerous and effective is now being aimed at the business sector. 

Earlier this year, the Russian network RT America — which was identified as a “principal meddler” in the 2016 presidential election by U.S. intelligence agencies — aired a segment spooking viewers by claiming 5G technology can cause problems like brain cancer and autism. 

There’s no scientific evidence to back up the claims. Many observers believe Russia sees the success of America’s 5G network as a threat, and that it will use every weapon in its arsenal to create doubt and confusion in countries it deems competitors or enemies.

Whether for political gain (to help elect a U.S. President sympathetic to Russia) or to sabotage technological progress that threatens Russia’s place in the world economic hierarchy (as with 5G), Russia has developed and deployed a sophisticated disinformation machine that can be pointed like a tactical missile at our underlying democratic and capitalistic institutions. 

Economic warfare on a macro level is nothing new, and fake news and “pump and dump” tactics have long been used in stock manipulation. But more and more, individual companies are being targeted simply because the perpetrator has an axe to grind. 

Starbucks was a target in 2017, when a group on the anonymous online bulletin board 4chan created a fake campaign offering discounted items to undocumented immigrants. Creators of the so-called “Dreamer Day” promotion produced fake ads and the hashtag #borderfreecoffee to lure unsuspecting undocumented immigrants to Starbucks stores. The company took to Twitter to set the record straight after it became the target of angry tweets.

Tesla, Coca-Cola, Xbox and Costco are among numerous companies or industries that have also been targeted by orchestrated rumors.

The threat to American companies is so severe that earlier this month, Moody’s Investors Service released a report with a dire warning: Disinformation campaigns can harm a company’s reputation and creditworthiness.

How would you respond to a fake but completely believable viral video of you as a CEO, employee (or even as a parent) admitting to stealing from your clients, promoting white supremacy or molesting children? The consequences to your reputation, personally and professionally, would be devastating — and often irreparable, regardless of the truth behind the claims. As I explored in Deepfakes: When Seeing May Not Be Believing, advances in artificial intelligence and the declining cost of deepfake videos make highly credible imposter videos an immediate and powerful reality.

Preparing your organization for disinformation attacks is of paramount importance, as your speed of response can make a significant financial and reputational difference. Just as you should develop a Breach Response Plan before cybercriminals penetrate your systems, you would also be wise to create a Disinformation Response Plan that:

  • Outlines your public relations strategy
  • Defines potential client and stakeholder communications 
  • Prepares your social media response
  • Predetermines the legal implications and appropriate response

Disinformation campaigns are here to stay, and advances in technology will ensure they become more prevalent and believable. That’s why it’s vital that you put a plan in place before you or your company are victimized — because at this point in the game, the only way to fight disinformation is with the immediate release of accurate and credible information. 


About Cybersecurity Keynote Speaker John Sileo

John Sileo is an award-winning author and keynote speaker on cybersecurity, identity theft and tech/life balance. He energizes conferences, corporate trainings and main-stage events by making security fun and engaging. His clients include the Pentagon, Schwab and organizations of all sizes. John got started in cybersecurity when he lost everything, including his $2 million business, to cybercrime. Since then, he has shared his experiences on 60 Minutes, Anderson Cooper, and even while cooking meatballs with Rachael Ray. Contact John directly to see how he can customize his presentations to your audience.

FaceApp is Fun, But Putin Will Own Your Privacy

FaceApp quite literally owns your face forever (or at least the image of your face).

It’s funny how we spend billions of dollars a year on health and beauty products and treatments designed to keep us looking, as Carrie Underwood sings, “young and beautiful,” but when a fun app comes along that gives us a goofy look or makes us look 30 years older, we jump at the chance to see it and share it with all of our friends on social media. That’s exactly the case with FaceApp, an app that alters photos to make you look years older or to change your facial expressions, looks and more. Thanks in part to use by celebrities such as Underwood, the Jonas Brothers and LeBron James, more than 150 million users have uploaded their photos to the app, and it is now the top-ranked app on the iOS App Store in 121 countries. Free, fun and harmless, right? Maybe, maybe not…

Nearly every app uploads your data, daily habits and location, combines it with your social media profile and exploits or sells it. That’s the profit model of the internet, not just FaceApp’s, and it’s not what makes this particular app unique or noteworthy. Wireless Lab, the creator of FaceApp, is based in St. Petersburg, Russia, which means that by default, Vladimir Putin has a picture of you someplace on his hard drive. Let’s be clear: Russia can get into any centralized database of facial recognition photos it wants to – this just makes it easier for them.

Not only that, but FaceApp retains a perpetual license to use your photo in any way it sees fit. In its own words, you are granting FaceApp “a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you”.

This makes it not just a privacy issue but also a security issue, as there is no guarantee that your photos and device data are stored securely. In fact, there is almost no chance that they are. In addition to your photo, some other personal information is transmitted, and you are never alerted to the fact that either is being uploaded.

For now, it seems that the app only uploads the photo you choose to submit, but I see no reason why it won’t slyly begin uploading every photo in your album; its terms of service don’t preclude that evolution. Facebook didn’t always collect and sell our information the way it does now, but that changed once profit was on the line. Information collection companies start by collecting very little until we stop paying attention, and then they transmit everything. They love the slippery slope of boiling the privacy frog!

So what can you do about it?

  • The Democratic National Committee recently sent a warning to campaigns telling staffers to delete the app from their phones. It’s a start, but deleting the app doesn’t remove your data from FaceApp’s cloud, and getting that data deleted is time-consuming and confusing.
  • For the fastest processing of a data-deletion request, send it from the FaceApp mobile app using ‘Settings->Support->Report a bug’ with the word ‘privacy’ in the subject line.
  • If it’s not too late, resist the urge to download the app!  Maybe look at a picture of your parents instead.

Most importantly, the next time you are giving away access to your photos or allowing any app to access data on your phone, read their privacy or data use policy first. You will be amazed at what you are giving away for free that makes them gobs of money.

John Sileo loves his role as an “energizer” for cybersecurity at conferences, corporate trainings and industry events. He specializes in making security fun so that it sticks. His clients include the Pentagon, Schwab and many organizations so small (and security conscious) that you won’t even have heard of them. John has been featured on 60 Minutes, recently cooked meatballs with Rachael Ray and got started in cybersecurity when he lost everything, including his $2 million software business, to cybercrime. Call 303.777.3221 if you would like to bring John to speak to your members.

Are Alexa, Google & Siri Eavesdropping on You?

Amazon and Google have both come out with wildly popular digital assistants, loosely known as smart speakers. Amazon’s is called Alexa, and Google’s is called, well, Google.

“Hey Alexa, would you say you are smarter than Google?”

Apple’s digital assistant is Siri, which can be found on all new Apple devices, including the HomePod, a less popular answer to Alexa. For the time being, Siri isn’t quite as smart or popular as the other kids, so I’m leaving her out of this conversation for now. Sorry, Siri.

Just the fact that Alexa, Google and any other digital assistant answer you the minute you mention their name shows that they are ALWAYS LISTENING! Once you have triggered them, they record the requests you make just as if you had typed them into a search engine. So they know when you order pizza, what songs you like and what’s on your calendar for the week. They can also have access to your contacts and your location, and they can combine that information with your buying and surfing habits on their websites.

To be fair, Amazon and Google both say that their digital assistants only process audio after we trigger them with a phrase like “Hey, Alexa” or “OK, Google.” So they aren’t listening to EVERY conversation… YET. Why do I say YET? Because The New York Times dug a little deeper and took a look at the patents that Amazon and Google are filing for future makeovers of their digital assistants. In one set of patent applications, Amazon describes, and I’m quoting here, a “voice sniffer algorithm” that can analyze audio in real time when it hears words like “love,” “bought” or “dislike.” It went on to illustrate how a phone call between two friends could result in one receiving an offer for the San Diego Zoo and the other seeing an ad for a wine club, based on the passive conversation the two of them were having.
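
To make that distinction concrete, here is a minimal sketch, in Python, of how a wake-word-gated assistant is generally described as working. The helper functions are stand-in stubs I’ve invented for illustration, not Amazon’s or Google’s actual code; the point is the control flow: the microphone samples constantly, a small on-device check looks for the trigger phrase, and only audio captured after the trigger is shipped to the cloud.

```python
# Minimal, hypothetical sketch of a wake-word-gated assistant loop.
# The helpers below are invented stubs, not any vendor's real API; only the
# control flow matters: audio is sampled constantly, but it is only recorded
# and sent off-device after the trigger phrase is detected.

import random

def capture_audio_frame() -> bytes:
    return bytes(random.getrandbits(8) for _ in range(160))   # fake 10 ms of audio

def detect_wake_word(frame: bytes) -> bool:
    return random.random() < 0.01     # stand-in for a small on-device wake-word model

def end_of_utterance(frames: list) -> bool:
    return len(frames) > 300          # stand-in for silence detection (~3 seconds here)

def send_to_cloud(frames: list) -> None:
    print(f"uploading {len(frames)} frames for transcription, storage and profiling")

def assistant_loop(max_frames: int = 10_000) -> None:
    command_audio, awake = [], False
    for _ in range(max_frames):
        frame = capture_audio_frame()          # the microphone is always sampling locally
        if not awake:
            # In the advertised design, nothing leaves the device at this stage.
            if detect_wake_word(frame):
                awake, command_audio = True, []
        else:
            # After the trigger, audio IS captured and shipped off-device.
            command_audio.append(frame)
            if end_of_utterance(command_audio):
                send_to_cloud(command_audio)
                awake = False

if __name__ == "__main__":
    assistant_loop()
```

A “voice sniffer” of the kind described in the patent filings would effectively remove that wake-word gate and analyze every frame for keywords like “love” or “bought,” which is exactly the shift that turns a convenience feature into always-on surveillance.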

In other words, no one had invited Alexa to the conversation, but she, or he, or they were there listening, analyzing and selling your thoughts anyway. That’s just creepy! It gets worse. The Times found another patent application showing how a digital assistant could “determine a speaker’s MOOD using the volume of the user’s voice, detected breathing rate, crying and so forth as well as determine their medical condition based on detected coughing, sneezing and so forth”. And so forth, and so forth. To that, I have only two words: Big Brother!

Let’s call these future digital assistants exactly what they are: audio-based spyware used for profit-making surveillance that treats us users like tasty soundbites at the advertising watering hole. Our private conversations will one day drive their advertisements, profits and product development. They are data mining what we say, turning it into a quantitative model and selling it to anyone who will buy it. Well, I don’t buy it. And I won’t buy one until I am sure, in writing, that it isn’t eavesdropping on everything said in my home.

Granted, these are all proposed changes to be made in the future, but they are a clear sign of where smart speakers and digital assistants are going. Their intention is to eavesdrop on you. Your One Minute Mission is to ask yourself how comfortable you are having a corporation like Amazon or Google eventually hearing, analyzing and sharing your private conversations.

I have to be forthright with you: many people will say they don’t care, and that really is their choice. We are all allowed to make our own choices when it comes to privacy. But the vitally important distinction here is that you make an educated, informed choice to intentionally invite Alexa or Google into your private conversations.

I hope this episode of Sileo On Security has helped you do just that.

Delete Your Facebook After Cambridge Analytica?

I’ve written A LOT about Facebook in the past.

  • What not to post
  • What not to like
  • What not to click on
  • How to keep your kids safe
  • How to keep your data protected
  • How to delete your account

ETC! You can search the blog for specific topics.

And personally, I’m ashamed of myself for knowing exactly how social networks like Facebook take advantage of users and our data, and yet still keeping a Facebook profile. I’m not just sharing my own information; through me, Facebook is also sharing every one of my “friends’” information. I’m currently thinking that the only way to protest this gross misuse of data is to delete my profile (which still won’t purge my historical data, but will stop future leakage).

And yes, I’ve written several times about how Facebook is allowed to sell your privacy. Now it turns out the practices I have warned about for years are taking over our headlines, with a “little” news item about how Cambridge Analytica used data obtained from Facebook to influence the 2016 U.S. presidential election.

Here’s a brief timeline:

  • In 2014, a Soviet-born researcher and professor, Aleksandr Kogan, developed a “personality quiz” app for Facebook.
  • When a user took the quiz, it also granted the app access to scrape his or her profile AND the profiles of any Facebook friends. (Incidentally, I was writing about why you shouldn’t take those quizzes right about the time all of this data was being gathered! And it was totally legal at the time!)
  • About 270,000 people took the quiz. Between these users and all of their friend connections, the app harvested the data of about 50 million people. (For a quick look at how that multiplication works, see the back-of-the-envelope sketch just after this timeline.)
  • This data was then used by Cambridge Analytica to help them target key demographics while working with the Trump campaign during the 2016 presidential election.
  • Facebook learned of this in late 2015 and asked everyone in possession of the data to destroy it. (They did not, however, tell those affected that their data had been harvested.)
  • Cambridge Analytica said it had destroyed the data, and Facebook apparently left it at that.
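
The jump from 270,000 quiz takers to roughly 50 million profiles comes entirely from the friend-graph multiplier, and it’s worth making the math explicit because it shows how little each individual user had to “contribute.” Here’s a quick back-of-the-envelope sketch; the average-friends figure is simply the reported totals worked backwards, not a number from the reporting itself.

```python
# Back-of-the-envelope: how ~270,000 quiz takers become ~50 million harvested profiles.
# avg_friends_exposed is an assumed illustrative number, not a figure from the reporting.
quiz_takers = 270_000
avg_friends_exposed = 185                      # assumed friends scraped per quiz taker
harvested = quiz_takers * avg_friends_exposed  # friends' profiles pulled in via the app
print(f"{harvested:,} profiles")               # 49,950,000, roughly the reported 50 million
```

In other words, each quiz taker only had to expose a couple hundred friends, on average, for the dataset to balloon into the tens of millions, which is why granting an app access to your friend list was never just about you.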

That takes us up to recent days, when The Guardian and The New York Times wrote articles claiming that the firm still has copies of the data and used it to influence the election.

What’s happening now?

  • Facebook has suspended Cambridge Analytica from its platform, banning the company from buying ads or running its Facebook pages.
  • The Justice Department’s special counsel, Robert S. Mueller III, has demanded the emails of Cambridge Analytica employees who worked for the Trump team as part of his investigation into Russian interference in the election.
  • The European Union wants data protection authorities to investigate both Facebook and Cambridge Analytica. The UK’s information commissioner is seeking a warrant to access Cambridge Analytica’s servers.

And what should you be doing?

Consider deleting your profile. I am. I’ve written about how to do that before and how to weigh deactivating your account versus deleting it. Consider carefully before making that choice.

Remember that the real illusion about Facebook is the idea that there is anything significant we can actually do to protect our privacy. Facebook provides a handy privacy checkup tool, but it does nothing to limit the data that Facebook itself sees, that Facebook decides to share with organizations willing to buy it, or that hackers decide to target.

The data you’ve already shared on Facebook, from your profile to your posts and pictures, is already out of your control. There is nothing you can do to protect it now. The only data you can protect is the data you choose not to share on Facebook in the future. Here are my suggestions for a few proactive steps you can take right now:

  • Delete or deactivate your Facebook profile
  • Reread my post about Facebook Privacy from 2013—unfortunately, all of it still applies today!
  • Memorize this phrase: “Anything I put on Facebook is public, permanent and exploitable.”
  • Tell some little white lies on your profile.
  • And stop taking those quizzes!

John Sileo is an award-winning author and keynote speaker on cybersecurity, identity theft and online privacy. He specializes in making security entertaining, so that it works. John is CEO of The Sileo Group, whose clients include the Pentagon, Visa, Homeland Security & Pfizer. John’s body of work includes appearances on 60 Minutes, Rachael Ray, Anderson Cooper & Fox Business. Contact him directly at 800.258.8076.