Is WhatsApp Privacy a Big Fat Facebook Lie? What You Need to Know.


WhatsApp Privacy: Facebook’s New “Data Use” Policy

I have been getting a ton of questions about the privacy of the personal data you send through WhatsApp. Is Facebook, which owns WhatsApp, sharing everything you write, including all of your contacts, messages and behaviors? It's not quite that simple, but neither is Facebook.

Facebook recently announced a new WhatsApp privacy policy that created A LOT of confusion and user backlash. The changes caused such an uproar that Facebook ultimately decided to delay the release of the new WhatsApp privacy agreement from Feb. 8 to May 15 while they sort themselves out. So let me give you a head start!

Behind all of this, WhatsApp is trying to break into the world of messaging for businesses (to compete with Slack and other programs). That way, when you communicate with a business, Facebook will see what you’re saying and use that information for advertising purposes.

Your Data That Can Be Accessed By Facebook

Facebook contends that your private messages will remain end-to-end encrypted (unreadable even by Facebook), but Facebook and WhatsApp will continue to have access to everything they've had access to since 2014:

  • Phone numbers being used
  • How often the app is opened
  • The operating system and resolution of the device screen
  • An estimation of your location at time of usage based on your internet connection

Purportedly, Facebook won't keep records on whom people are contacting in WhatsApp, and WhatsApp contacts aren't shared with Facebook. Given Facebook's miserable history with our personal privacy, I don't actually believe that they will limit information sharing to the degree that they promise. I think this is one of those cases where they will secretly violate our privacy until it is discovered, then ask forgiveness and lean on the fact that we have no legislation protecting us as consumers. But please be aware that if you use Facebook, you are already sharing a massive amount of information about yourself and your contacts; WhatsApp may just add another piece of data to your profile. Watch The Social Dilemma on Netflix if you'd like to learn more about how you are being used to power their profits.

Highly Private Messaging Alternatives to WhatsApp

So, while it is mostly a “cosmetic change” to the WhatsApp privacy policy, if you are uncomfortable using it, you may want to consider the following:

    • There are alternative messaging apps, including Signal and Telegram, both of which have seen huge new user sign-ups since the announcement. I personally use Apple Messages (daily communications) and Signal (highly confidential communications).
    • WhatsApp says it clearly labels conversations with businesses that use Facebook’s hosting services. Be on the lookout for those.
    • The feature that allows your shopping activity to be used to display related ads on Facebook and Instagram is optional, and when you use it, WhatsApp “will tell you in the app how your data is being shared with Facebook.” Monitor it and opt out.
    • If you don’t want Facebook to target you with more ads based on your WhatsApp communication with businesses, just don’t use that feature.
    • Trust the WhatsApp messaging app as much as you trust Facebook, because ultimately, they are the same company.

John Sileo is a cybersecurity expert, privacy advocate, award-winning author and media personality as seen on 60 Minutes, Anderson Cooper and Fox & Friends. He keynotes conferences virtually and in person around the world. John is the CEO of The Sileo Group, a business think tank based in Colorado.

Telemedicine: Are Virtual Doctor Visits a Cyber & Privacy Risk?

The Trump administration has relaxed privacy requirements for telemedicine, or virtual doctor visits: medical staff treating patients over the phone and using video apps such as FaceTime, Zoom, Skype and Google Hangouts. The move raises the chances that hackers will be able to access patients’ highly sensitive medical data, using it, for example, to blackmail patients into paying a ransom to keep their personal health information (PHI) private.

This relaxation of the privacy regulations around telemedicine is necessary: treating coronavirus patients in quick, safe, virtual ways is a more critical short-term priority than protecting the data. That may sound contradictory coming from the keyboard of a cybersecurity expert, but it exposes a common misconception about how security works.

Security is not about eliminating all risk, because there is no such thing. Security is about prioritizing risk and controlling the most important operations first. Diagnosing and treating patients affected by Covid-19 is a higher priority than keeping every last transmission private.

Put simply, the life of a patient is more important than the patient’s data. With that in mind, protecting the data during transmission and when recordings are stored on the medical practice’s servers is still important.

  • Doctors should utilize audio/video services that provide full encryption between the patient and the medical office during all telemedicine visits
  • If the doctor’s office keeps a copy of the recording, it should be stored and backed up only on encrypted servers
  • Not all employees of the doctor’s office should have the same level of access to telemedicine recordings; all patient data should be protected with user-level access
  • Employees of the doctor’s office should be trained to repel the social engineering attacks (mostly by phone and phishing email) that criminals use to gain access to telemedicine recordings

Telemedicine and virtual doctor visits are just one way that the government is willing to accept increased risks during the pandemic. Many federal employees are also now working remotely, accessing sensitive data, often on personal computers that haven’t been properly protected by cybersecurity experts. This poses an even greater problem than putting patient data at risk, because nearly every government (and corporate) employee is working remotely for the foreseeable future. I will address those concerns in an upcoming post.

In the meantime, stay safe in all ways possible.

About Cybersecurity Keynote Speaker John Sileo

John Sileo is the founder and CEO of The Sileo Group, a privacy and cybersecurity think tank, in Lakewood, Colorado, and an award-winning author, keynote speaker, and expert on technology, surveillance economy, cybersecurity and tech/life balance.

Private Eyes Are Watching You: What it Means to Live (and Be Watched) in the Surveillance Economy


What Is the Surveillance Economy?

How do you feel about the fact that Facebook knows your weight, your height, your blood pressure, the dates of your menstrual cycle, when you have sex and maybe even whether you got pregnant? Even when you’re not on Facebook, the company is still tracking you as you move across the internet. It knows what shape you’re in from the exercise patterns on your fitness device, when you open your Ring doorbell app and which videos you check out on YouTube — or on more salacious sites.

Welcome to the surveillance economy — where our personal data and online activity are not only tracked but sold and used to manipulate us. As Shoshana Zuboff, who coined the term surveillance capitalism, recently wrote, “Surveillance capitalism begins by unilaterally staking a claim to private human experience as free raw material for translation into behavioral data. Our lives are rendered as data flows.” In other words, in the vast world of internet commerce, we are the producers and our digital exhaust is the product. 

It didn’t have to be this way. Back when the internet was in its infancy, the government could have regulated the tech companies but instead trusted them to regulate themselves. Over two decades later, we’re just learning about the massive amounts of personal data these tech giants have amassed, but it’s too late to put the genie back in the bottle. 

The game is rigged. We can’t live and compete and communicate without the technology, yet we forfeit all our rights to privacy if we take part. It’s a false choice. In fact, it’s no choice at all. You may delete Facebook and shop at the local mall instead of Amazon, but your TV, fridge, car and even your bed may still be sharing your private data. 

As for self-regulation, companies may pay lip service to a public that is increasingly fed up with the intrusiveness, but big tech and corporate America continue to quietly mine our data. And they have no incentive to reveal how much they’re learning about us. In fact, the more they share the knowledge, the lower their profits go. 

This is one of those distasteful situations where legislation and regulation are the only effective ways to balance the power. Because as individuals, we can’t compete with the knowledge and wallet of Google, Facebook and Amazon. David versus Goliath situations like this were the genesis of government in the first place. But in 2020, can we rely on the government to protect us? 

Unlikely. At least for now. For starters, federal government agencies and local law enforcement use the same technology (including facial recognition software) to collect data and track our every move. And unfortunately, those who make up the government are generally among the new knowledge class, whose 401(k)s benefit directly when the tech giants grow, which gives them every incentive to keep quiet. Plus, there are some real benefits to ethical uses of the technology (think tracking terrorists), making regulation a difficult beast to tackle. But it’s well worth tackling anyway, just as we’ve done with nuclear submarines and airline safety.

In a recent Pew study, 62% of Americans said it was impossible to go through daily life without companies collecting data about them, and 81% said the risks of companies collecting data outweigh the benefits. The same number said they have little or no control over the data companies collect. 

At some stage, consumers will get fed up and want to take back control from the surveillance economy, and the pendulum will swing, as it already has in Europe, where citizens have a toolbox full of privacy tools to prevent internet tracking, including the right to be forgotten by businesses. Europe’s General Data Protection Regulation (GDPR) is a clear reminder that consumers do retain the power, but only if they choose to use it. It’s not inevitable that our every move and piece of personal data be sold to the highest bidder. We’ve happily signed on, logged in and digitized our way to this point.

When consumers (that means you) are outraged enough, the government will be forced to step in. Unfortunately, at that point, the regulation is likely to be overly restrictive, and both sides will wish we’d come to some compromise before we wrecked the system. 

In the meantime, you have three basic choices: 

  1. Decrease your digital exhaust by eliminating or limiting the number of social media sites, devices and apps you use. (I know, I know. Not likely.)
  2. Change your privacy and security defaults on each device, app and website that collects your personal information. (More likely. But it takes a time investment and doesn’t fully solve privacy leakage.)
  3. Give in. Some people are willing to bet that a loss of privacy will never come back to haunt them. That’s exactly the level of complacency big tech companies have instilled in us using neuroscience for the past decade.  

Loss of privacy is a slippery slope, and it’s important to take the issue seriously before things get worse. Left unchecked, the private eyes watching your every move could go from tracking your exercise habits and sex life (as if that’s not creepy enough) to meddling with your ability to get health insurance or a mortgage. And suddenly it won’t seem so harmless anymore.



Google Isn’t Just Buying Fitbit, They’re Tracking Your Donut Habit


Spinning Wildly on the Hamster Wheel of the Surveillance Economy

You’re heading to the gym for a workout when you decide to surprise your coworkers with a treat. You search for the nearest bagel shop on your Google Maps app. The app directs you to its closest advertiser, Donut Feel Good?, which is actually a donut shop just short of the bagel place. Your heart pounds from the joy of anticipation — your team will LOVE you (and the sugar rush).

Just as you’re leaving the donut place, your phone alerts you to a coupon at your favorite coffee shop. “Why not?” you think, as Google nudges your behavior just a bit more. As you bite into your first donut and bask in coworker glory, Google is busy sharing your lack of exercise and poor eating habits with your health insurance company, which also has an app on your phone.  

Welcome to the surveillance economy, where the product is your data.

Acquiring Fitbit Moves Google Out of Your Pocket and Into Your Body 

Thanks to Google’s purchase of Fitbit, Google doesn’t just know your location, your destination and your purchases, it now knows your resting heart rate and increased beats per minute as you anticipate that first donut bite. Google is at the forefront of the surveillance economy — making money by harvesting the digital exhaust we all emit just living our lives. 

Google already has reams of data on our internet searches, location data (Maps and Android phones), emails and contacts (Gmail), home conversations and digital assistant searches (Google Home), video habits (YouTube), smart-home video footage and thermostat settings (Nest) and document contents (Docs, Sheets, etc.). The sheer volume of our digital exhaust that they’re coalescing, analyzing and selling is phenomenal.

Combine that psychographic and behavioral data with the health data of 28 million Fitbit users, and Google can probably predict when you’ll need to use the toilet. 

Fitbit tracks what users eat, how much they weigh, how much they exercise, the duration and quality of their sleep and their heart rate. With advanced devices, women can log menstrual cycles. Fitbit scales keep track of body mass index and what percentage of a user’s weight is fat. And the app (no device required) tracks all of that, plus blood sugar.

It’s not a stretch of the imagination to think Fitbit and other health-tracking devices also know your sexual activity and heart irregularities by location (e.g., your heart rate goes up when you pass the Tesla dealership, a car you’ve always wanted). Google wants to get its hands on all that information, and if past behavior is any indicator, they want to sell access to it. 

As Reuters noted, much of Fitbit’s value “may now lie in its health data.”

Can We Trust How Google Uses Our Health Data? 

Regarding the sale, Fitbit said, “Consumer trust is paramount to Fitbit. Strong privacy and security guidelines have been part of Fitbit’s DNA since day one, and this will not change.” 

But can we trust that promise? This is a common tactic of data-use policy scope creep: Once we stop paying attention and want to start using our Fitbit again, the company will change its policies and start sharing customer data. They’ll notify us in a multipage email that links to a hundred-page policy that we’ll never read. And even if we do take the time to read it, are we really going to give up our Fitbit? We’ve seen this tactic play out again and again with Google, Facebook and a host of other companies.

Google put out its own statement, assuring customers the company would never sell personal information and that Fitbit health and wellness data would not be used in its advertising. The statement said Fitbit customers had the power to review, move or delete their data, but California is the only U.S. state that can require the company to do so by law — under the California Consumer Privacy Act, set to go into effect next year.

Tellingly, Google stopped short of saying the data won’t be used for purposes other than advertising. Nor did they say they won’t categorize you into a genericized buyer’s profile (Overweight, Underfit & Obsessed with Donuts) that can be sold to their partners.

And advertisements are just the tip of the iceberg. Google can use the data for research and to develop health care products, which means it will have an enormous influence on the types of products that are developed, including pharmaceuticals. If that isn’t troubling to you, remember that Google (and big pharma) are in business to make money, not serve the public good. 

Google Has Demonstrated Repeatedly That It Can’t Be Trusted with Our Data

Just this week, we learned that Google has been quietly working with St. Louis-based Ascension, the second-largest health system in the U.S., collecting and aggregating the detailed health information of millions of Americans in 21 states. 

Code-named Project Nightingale, the secret collaboration began last year and, as the Wall Street Journal reported, “The data involved in the initiative encompasses lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.”

The Journal also reported that neither the doctors nor patients involved were notified, and at least 150 Google employees have access to the personal health data of tens of millions of patients. Google is reportedly using the data to develop software (that uses AI and machine learning) “that zeroes in on individual patients to suggest changes to their care.” It was originally reported that the arrangement was entirely legal under a 1996 law that allows hospitals to share data with business partners without patients’ consent.

However, the day after the story broke, a federal inquiry was launched into Project Nightingale. The Office for Civil Rights in the Department of Health and Human Services is looking into whether HIPAA protections were fully implemented in accordance with the 1996 law.

Your Health Insurance Could Be at Stake

Likewise, Fitbit has been selling devices through corporate wellness programs for years and has teamed up with health insurers, including UnitedHealthcare, Humana and Blue Cross Blue Shield.

Even if individual data from Fitbit users isn’t shared, Google can use it to deduce all sorts of health trends. It’s also possible that “anonymous” information can be re-identified, meaning data can be matched with individual users. This sets up a scenario where we can be denied health care coverage or charged higher premiums based on data gathered on our eating or exercise habits. 

Now couple that with data on what foods we buy, where we go on vacation and our most recent Google searches, and companies will not only be able to track our behavior, they’ll be able to predict it. This kind of digital profile makes a credit report look quaint by comparison.

Get Off the Hamster Wheel

For the time being, you control many of the inputs that fuel the surveillance economy. You can choose to take off your Fitbit. You can change the default privacy settings on your phone. You can delete apps that track your fitness and health, buy scales that don’t connect to the internet and opt out of information sharing for the apps and devices you must use. Your greatest tool in the fight for privacy is your intentional use of technology.

In other words, you do have a measure of control over your data. Donut Feel Good?


Disinformation Campaigns Are Coming for Your Bottom Line 


The rise of disinformation campaigns could put the reputation of your company at risk

Imagine waking up to find the internet flooded with fake news that one of your products was killing hordes of people or your company had been implicated in a human trafficking ring. Imagine if there was a deepfake video of you or one of your company executives engaging in criminal activity: purchasing illegal drugs, bribing an official or defrauding the company and its shareholders. 

Welcome to the age of disinformation campaigns.

These types of campaigns are increasingly being used to target businesses and executives. For centuries, they’ve been used as a political tool for one simple reason: They work. There’s ample evidence that Russia manipulated the 2016 presidential election through fake news. In July, a European Commission analysis found that Russia targeted the European parliamentary elections, and just last week, Facebook and Twitter had to take action against China after it orchestrated numerous coordinated social media campaigns to undermine political protests in Hong Kong. 

From Italy to Brazil, Nigeria to Myanmar, governments or individuals are sowing division, discrediting an opponent or swaying an election with false information — often with deadly consequences.

Here at home, there have been numerous disinformation campaigns aimed at politicians and other individuals. Earlier this summer, a video of House Speaker Nancy Pelosi, doctored to make it appear that she was drunk, went viral. Last July, the Conservative Review network (CRTV) posted an interview to Facebook with Congresswoman Alexandria Ocasio-Cortez (who was then a candidate) in which she appeared generally confused and seemed to think Venezuela was in the Middle East. It turned out the “interview” was a mashup of an interview Ocasio-Cortez gave on the show Firing Line, spliced with staged questions from CRTV host Allie Stuckey. The post was viewed over a million times within 24 hours and garnered derisive comments from viewers who thought it was real — before Stuckey announced that it was meant as satire.

Republican politicians have also been targeted (though to a lesser degree). Last year, North Dakota Democrats ran a Facebook ad under a page titled “Hunter Alerts.” The ad warned North Dakotans that they could lose their out-of-state hunting licenses if they voted in the midterm elections, a claim that was unsubstantiated and refuted by the state’s GOP.

Regardless of the targets, disinformation campaigns are designed to leave you wondering what information to trust and who to believe. They succeed when they sow any sense of doubt in your thinking.

The same technology that makes the spread of false information in the political arena so dangerous and effective is now being aimed at the business sector. 

Earlier this year, the Russian network RT America — which was identified as a “principal meddler” in the 2016 presidential election by U.S. intelligence agencies — aired a segment spooking viewers by claiming 5G technology can cause problems like brain cancer and autism. 

There’s no scientific evidence to back up the claims, and many observers believe the success of America’s 5G network is seen as a threat by Russia, which will use every weapon in its arsenal to create doubt and confusion in countries it deems competitors or enemies.

Whether for political gain (to help elect a U.S. President sympathetic to Russia) or to sabotage technological progress that threatens Russia’s place in the world economic hierarchy (as with 5G), Russia has developed and deployed a sophisticated disinformation machine that can be pointed like a tactical missile at our underlying democratic and capitalistic institutions. 

Economic warfare on a macro level is nothing new, and fake news and “pump and dump” tactics have long been used in stock manipulation. But more and more, individual companies are being targeted simply because the perpetrator has an axe to grind. 

Starbucks was a target in 2017, when a group on the anonymous online bulletin board 4Chan created a fake campaign offering discounted items to undocumented immigrants. Creators of the so-called “Dreamer Day” promotion produced fake ads and the hashtag #borderfreecoffee to lure unsuspecting undocumented immigrants to Starbucks. The company took to Twitter to set the record straight after it was targeted in angry tweets.

Tesla, Coca-Cola, Xbox and Costco are among numerous companies or industries that have also been targeted by orchestrated rumors.

The threat to American companies is so severe that earlier this month, Moody’s Investors Service released a report with a dire warning: Disinformation campaigns can harm a company’s reputation and creditworthiness.

How would you respond to a fake but completely believable viral video of you as a CEO, employee (or even as a parent) admitting to stealing from your clients, promoting white supremacy or molesting children? The consequences to your reputation, personally and professionally, would be devastating — and often irreparable regardless of the truth behind the claims. As I explored in Deepfakes: When Seeing May Not Be Believing, advances in artificial intelligence and the declining cost of deepfake videos make highly credible imposter videos an immediate and powerful reality.

Preparing your organization for disinformation attacks is of paramount importance, as your speed of response can make a significant financial and reputational difference. Just as you should develop a Breach Response Plan before cybercriminals penetrate your systems, you would also be wise to create a Disinformation Response Plan that:

  • Outlines your public relations strategy
  • Defines potential client and stakeholder communications 
  • Prepares your social media response
  • Predetermines the legal implications and appropriate response

Disinformation campaigns are here to stay, and advances in technology will ensure they become more prevalent and believable. That’s why it’s vital that you put a plan in place before you or your company are victimized — because at this point in the game, the only way to fight disinformation is with the immediate release of accurate and credible information. 

About Cybersecurity Keynote Speaker John Sileo

John Sileo is an award-winning author and keynote speaker on cybersecurity, identity theft and tech/life balance. He energizes conferences, corporate trainings and main-stage events by making security fun and engaging. His clients include the Pentagon, Schwab and organizations of all sizes. John got started in cybersecurity when he lost everything, including his $2 million business, to cybercrime. Since then, he has shared his experiences on 60 Minutes, Anderson Cooper, and even while cooking meatballs with Rachael Ray. Contact John directly to see how he can customize his presentations to your audience.