Google Isn’t Just Buying Fitbit, They’re Tracking Your Donut Habit

John Sileo: Google Fitbit to Track Your Health Data

Spinning Wildly on the Hamster Wheel of the Surveillance Economy

You’re heading to the gym for a workout when you decide to surprise your coworkers with a treat. You search for the nearest bagel shop on your Google Maps app. The app directs you to their closest advertiser, Donut Feel Good?, which is actually a donut shop just short of the bagel place. Your heart pounds from the joy of anticipation — your team will LOVE you (and the sugar rush). 

Just as you’re leaving the donut place, your phone alerts you to a coupon at your favorite coffee shop. “Why not?” you think, as Google nudges your behavior just a bit more. As you bite into your first donut and bask in coworker glory, Google is busy sharing your lack of exercise and poor eating habits with your health insurance company, which also has an app on your phone.  

Welcome to the surveillance economy, where the product is your data.

Acquiring Fitbit Moves Google Out of Your Pocket and Into Your Body 

Thanks to Google’s purchase of Fitbit, Google doesn’t just know your location, your destination and your purchases; it now knows your resting heart rate and the increased beats per minute as you anticipate that first donut bite. Google is at the forefront of the surveillance economy — making money by harvesting the digital exhaust we all emit just living our lives.

Google already has reams of data on our internet searches (Google.com), location data (maps and Android phones), emails and contacts (Gmail), home conversations and digital assistant searches (Google Home), video habits (YouTube), smart-home video footage and thermostat settings (Nest) and document contents (Docs, Sheets, etc.). The sheer volume of our digital exhaust that they’re coalescing, analyzing and selling is phenomenal.

Combine that psychographic and behavioral data with the health data of 28 million Fitbit users, and Google can probably predict when you’ll need to use the toilet. 

Fitbit tracks what users eat, how much they weigh and exercise, the duration and quality of their sleep and their heart rate. With advanced devices, women can log menstrual cycles. Fitbit scales keep track of body mass index and what percentage of a user’s weight is fat. And the app (no device required) tracks all of that, plus blood sugar.  

It’s not a stretch of the imagination to think Fitbit and other health-tracking devices also know your sexual activity and heart irregularities by location (e.g., your heart rate goes up when you pass the Tesla dealership, a car you’ve always wanted). Google wants to get its hands on all that information, and if past behavior is any indicator, they want to sell access to it. 

As Reuters noted, much of Fitbit’s value “may now lie in its health data.”

Can We Trust How Google Uses Our Health Data? 

Regarding the sale, Fitbit said, “Consumer trust is paramount to Fitbit. Strong privacy and security guidelines have been part of Fitbit’s DNA since day one, and this will not change.” 

But can we trust that promise? This is a common tactic of data use policy scope creep: Once we stop paying attention and want to start using our Fitbit again, the company will change its policies and start sharing customer data. They’ll notify us in a multipage email that links to a hundred-page policy that we’ll never read. Even if we do take the time to read it, are we going to be able to give up our Fitbit? We’ve seen this tactic play out again and again with Google, Facebook and a host of other companies.

Google put out its own statement, assuring customers the company would never sell personal information and that Fitbit health and wellness data would not be used in its advertising. The statement said Fitbit customers had the power to review, move or delete their data, but California is the only U.S. state that can require the company to do so by law — under the California Consumer Privacy Act, set to go into effect next year.

Tellingly, Google stopped short of saying the data won’t be used for purposes other than advertising. Nor did they say they won’t categorize you into a genericized buyer’s profile (Overweight, Underfit & Obsessed with Donuts) that can be sold to their partners.

And advertisements are just the tip of the iceberg. Google can use the data for research and to develop health care products, which means it will have an enormous influence on the types of products that are developed, including pharmaceuticals. If that isn’t troubling to you, remember that Google (and big pharma) are in business to make money, not serve the public good. 

Google Has Demonstrated Repeatedly That It Can’t Be Trusted with Our Data

Just this week, we learned that Google has been quietly working with St. Louis-based Ascension, the second-largest health system in the U.S., collecting and aggregating the detailed health information of millions of Americans in 21 states. 

Code-named Project Nightingale, the secret collaboration began last year and, as the Wall Street Journal reported, “The data involved in the initiative encompasses lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.”

The Journal also reported that neither the doctors nor the patients involved were notified, and at least 150 Google employees have access to the personal health data of tens of millions of patients. Google is reportedly using the data to develop software (using AI and machine learning) “that zeroes in on individual patients to suggest changes to their care.” The arrangement was originally reported to be legal under a 1996 law that allows hospitals to share data with business partners without patients’ consent.

However, the day after the story broke, a federal inquiry was launched into Project Nightingale. The Office for Civil Rights in the Department of Health and Human Services is looking into whether HIPAA protections were fully implemented in accordance with the 1996 law.

Your Health Insurance Could Be at Stake

Likewise, Fitbit has been selling devices to employees through corporate wellness programs for years and has teamed up with health insurers, including UnitedHealthcare, Humana and Blue Cross Blue Shield.

Even if individual data from Fitbit users isn’t shared, Google can use it to deduce all sorts of health trends. It’s also possible that “anonymous” information can be re-identified, meaning data can be matched with individual users. This sets up a scenario where we can be denied health care coverage or charged higher premiums based on data gathered on our eating or exercise habits. 
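The re-identification risk described above can be sketched with a toy example. Everything below — the names, ZIP codes and step counts — is hypothetical; it is just a minimal illustration of a linkage attack, where quasi-identifiers shared between an "anonymized" dataset and a public one pin a record to a single person.

```python
# Illustrative sketch (hypothetical data): how "anonymized" records can be
# re-identified by linking quasi-identifiers (ZIP code, birth year, sex)
# against a public dataset -- the classic linkage attack.

# "Anonymized" fitness records: name removed, quasi-identifiers kept.
anon_records = [
    {"zip": "80226", "birth_year": 1984, "sex": "F", "avg_daily_steps": 2100},
    {"zip": "80202", "birth_year": 1991, "sex": "M", "avg_daily_steps": 11400},
]

# Public directory (think voter rolls) with names and the same attributes.
public_directory = [
    {"name": "Alice Example", "zip": "80226", "birth_year": 1984, "sex": "F"},
    {"name": "Bob Example",   "zip": "80202", "birth_year": 1991, "sex": "M"},
]

def reidentify(records, directory):
    """Match each anonymous record against every directory entry sharing its
    quasi-identifiers. A unique match de-anonymizes the record."""
    hits = []
    for rec in records:
        matches = [p["name"] for p in directory
                   if (p["zip"], p["birth_year"], p["sex"]) ==
                      (rec["zip"], rec["birth_year"], rec["sex"])]
        if len(matches) == 1:  # unique combination -> identity recovered
            hits.append((matches[0], rec["avg_daily_steps"]))
    return hits

print(reidentify(anon_records, public_directory))
```

When a combination of attributes is rare enough to be unique, the "anonymous" health data snaps right back onto a name — no hacking required.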

Now couple that with data on what foods we buy, where we go on vacation and our most recent Google searches, and companies will not only be able to track our behavior, they’ll be able to predict it. This kind of digital profile makes a credit report look quaint by comparison.

Get Off the Hamster Wheel

For the time being, you control many of the inputs that fuel the surveillance economy. You can choose to take off your Fitbit. You can change the default privacy settings on your phone. You can delete apps that track your fitness and health, buy scales that don’t connect to the internet and opt-out of information sharing for the apps and devices you must use. Your greatest tool in the fight for privacy is your intentional use of technology.

In other words, you do have a measure of control over your data. Donut Feel Good?


About Cybersecurity Keynote Speaker John Sileo

John Sileo is the founder and CEO of The Sileo Group, a privacy and cybersecurity think tank, in Lakewood, Colorado, and an award-winning author, keynote speaker, and expert on technology, cybersecurity and tech/life balance.

Disinformation Campaigns Are Coming for Your Bottom Line 

The rise of disinformation campaigns could put the reputation of your company at risk

Imagine waking up to find the internet flooded with fake news that one of your products was killing hordes of people or your company had been implicated in a human trafficking ring. Imagine if there was a deepfake video of you or one of your company executives engaging in criminal activity: purchasing illegal drugs, bribing an official or defrauding the company and its shareholders. 

Welcome to the age of disinformation campaigns.

These types of campaigns are increasingly being used to target businesses and executives. For centuries, they’ve been used as a political tool for one simple reason: They work. There’s ample evidence that Russia manipulated the 2016 presidential election through fake news. In July, a European Commission analysis found that Russia targeted the European parliamentary elections, and just last week, Facebook and Twitter had to take action against China after it orchestrated numerous coordinated social media campaigns to undermine political protests in Hong Kong. 

From Italy to Brazil, Nigeria to Myanmar, governments or individuals are sowing division, discrediting an opponent or swaying an election with false information — often with deadly consequences.

Here at home, there have been numerous disinformation campaigns aimed at politicians and other individuals. Earlier this summer, a video of House Speaker Nancy Pelosi, doctored to make it appear that she was drunk, went viral. Last July, the Conservative Review network (CRTV) posted to Facebook an interview with Congresswoman Alexandria Ocasio-Cortez (then a candidate) in which she seemed generally confused and appeared to think Venezuela was in the Middle East. It turned out the “interview” was a mashup of an interview Ocasio-Cortez gave on the show Firing Line spliced with staged questions from CRTV host Allie Stuckey. The post was viewed over a million times within 24 hours and garnered derisive comments from viewers who thought it was real — before Stuckey announced that it was meant as satire.

Republican politicians have also been targeted (though to a lesser degree). Last year, North Dakota Democrats ran a Facebook ad under a page titled “Hunter Alerts.” The ad warned North Dakotans that they could lose their out-of-state hunting licenses if they voted in the midterm elections, a claim that was unsubstantiated and refuted by the state’s GOP.

Regardless of the targets, disinformation campaigns are designed to leave you wondering what information to trust and who to believe. They succeed when they sow any sense of doubt in your thinking.

The same technology that makes the spread of false information in the political arena so dangerous and effective is now being aimed at the business sector. 

Earlier this year, the Russian network RT America — which was identified as a “principal meddler” in the 2016 presidential election by U.S. intelligence agencies — aired a segment spooking viewers by claiming 5G technology can cause problems like brain cancer and autism. 

There’s no scientific evidence to back up the claims. The success of America’s 5G network, however, is seen as a threat to Russia, which will use every weapon in its arsenal to create doubt and confusion in countries it deems competitors or enemies.

Whether for political gain (to help elect a U.S. President sympathetic to Russia) or to sabotage technological progress that threatens Russia’s place in the world economic hierarchy (as with 5G), Russia has developed and deployed a sophisticated disinformation machine that can be pointed like a tactical missile at our underlying democratic and capitalistic institutions. 

Economic warfare on a macro level is nothing new, and fake news and “pump and dump” tactics have long been used in stock manipulation. But more and more, individual companies are being targeted simply because the perpetrator has an axe to grind. 

Starbucks was a target in 2017, when a group on the anonymous online bulletin board 4chan created a fake campaign offering discounted items to undocumented immigrants. Creators of the so-called “Dreamer Day” promotion produced fake ads and the hashtag #borderfreecoffee to lure unsuspecting undocumented immigrants to Starbucks. The company took to Twitter to set the record straight after it was targeted in angry tweets.

Tesla, Coca-Cola, Xbox and Costco are among numerous companies or industries that have also been targeted by orchestrated rumors.

The threat to American companies is so severe that earlier this month, Moody’s Investors Service released a report with a dire warning: Disinformation campaigns can harm a company’s reputation and creditworthiness.

How would you respond to a fake but completely believable viral video of you as a CEO, employee (or even as a parent) admitting to stealing from your clients, promoting white supremacy or molesting children? The consequences to your reputation, personally and professionally, would be devastating — and often irreparable regardless of the truth behind the claims. As I explored in Deepfakes: When Seeing May Not Be Believing, advances in artificial intelligence and the declining cost of deepfake videos make highly credible imposter videos an immediate and powerful reality.

Preparing your organization for disinformation attacks is of paramount importance, as your speed of response can make a significant financial and reputational difference. Just as you should develop a Breach Response Plan before cybercriminals penetrate your systems, you would also be wise to create a Disinformation Response Plan that:

  • Outlines your public relations strategy
  • Defines potential client and stakeholder communications 
  • Prepares your social media response
  • Predetermines the legal implications and appropriate response

Disinformation campaigns are here to stay, and advances in technology will ensure they become more prevalent and believable. That’s why it’s vital that you put a plan in place before you or your company are victimized — because at this point in the game, the only way to fight disinformation is with the immediate release of accurate and credible information. 


About Cybersecurity Keynote Speaker John Sileo

John Sileo is an award-winning author and keynote speaker on cybersecurity, identity theft and tech/life balance. He energizes conferences, corporate trainings and main-stage events by making security fun and engaging. His clients include the Pentagon, Schwab and organizations of all sizes. John got started in cybersecurity when he lost everything, including his $2 million business, to cybercrime. Since then, he has shared his experiences on 60 Minutes, Anderson Cooper, and even while cooking meatballs with Rachael Ray. Contact John directly to see how he can customize his presentations to your audience.

FaceApp is Fun, But Putin Will Own Your Privacy

FaceApp quite literally owns your face forever (or at least the image of your face).

It’s funny how we spend billions of dollars a year on health and beauty products and treatments designed to keep us looking, as Carrie Underwood sings, “young and beautiful.” Yet when a fun app comes along that gives us a goofy look or makes us appear 30 years older, we jump at the chance to see it and share it with all of our friends on social media. That’s exactly the case with FaceApp, an app that alters photos to make you look years older or change your facial expressions, looks and more. Thanks in part to use by celebrities such as Underwood, the Jonas Brothers and LeBron James, more than 150 million users have uploaded their photos to the app, and it is now the top-ranked app on the iOS App Store in 121 countries. Free, fun and harmless, right? Maybe, maybe not…

Every app is uploading your data, daily habits and locations, combining them with your social media profile and exploiting or selling them. That’s the profit model of the internet, not just FaceApp, and it’s not what makes this particular app unique or noteworthy. Wireless Lab, the creator of FaceApp, is based in St. Petersburg, Russia, which means that by default, Vladimir Putin has a picture of you someplace on his hard drive. Let’s be clear: Russia can get into any centralized database of facial recognition photos it wants to; this just makes it easier.

Not only that, but FaceApp retains a perpetual license to use your photo in any way it sees fit. In their words, you are granting FaceApp “a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you”.

This makes it not just a privacy issue but also a security issue, as there is no guarantee that your photos and device data are stored securely. In fact, there is almost no chance that they are. In addition to your photo, other personal information is transmitted, and you are never alerted to the fact that either is being uploaded.

For now, it seems that they are only uploading the photo you choose to upload, but I see no reason why they won’t slyly begin uploading every photo in your album, as their terms of service don’t preclude that evolution. Facebook didn’t always collect and sell our information the way it does now, but that didn’t stop them once profit was involved. Information collection companies start by collecting very little until we stop paying attention, and then they transmit everything. They love slowly boiling the privacy frog!

So, what can you do about it?

  • The Democratic National Committee recently sent a warning to campaigns telling staff to delete the app from their phones. It’s a start, but deleting the app doesn’t remove your data from the cloud, and requesting that removal is time-consuming and confusing.
  • For the fastest processing, try sending the requests from the FaceApp mobile app using ‘Settings->Support->Report a bug’ with the word ‘privacy’ in the subject line.
  • If it’s not too late, resist the urge to download the app!  Maybe look at a picture of your parents instead.

Most importantly, the next time you are giving away access to your photos or allowing any app to access data on your phone, read their privacy or data use policy first. You will be amazed at what you are giving away for free that makes them gobs of money.

John Sileo loves his role as an “energizer” for cyber security at conferences, corporate trainings, and industry events. He specializes in making security fun so that it sticks. His clients include the Pentagon, Schwab and many organizations so small (and security conscious) that you won’t have even heard of them. John has been featured on 60 Minutes, recently cooked meatballs with Rachael Ray and got started in cyber security when he lost everything, including his $2 million software business, to cybercrime. Call if you would like to bring John to speak to your members – 303.777.3221.

Are Alexa, Google & Siri Eavesdropping on You?

Amazon and Google have both come out with wildly popular digital assistants that are loosely known as smart speakers. Amazon’s is called Alexa and Google’s is called, well, Google.

“Hey Alexa, would you say you are smarter than Google?”

Apple’s digital assistant is Siri, which can be found on all new Apple devices, including the HomePod, Apple’s less popular answer to Alexa. For the time being, Siri isn’t quite as smart or popular as the other kids, so I’m leaving her out of this conversation for now. Sorry, Siri.

Just the fact that Alexa, Google and any digital assistant answer you the minute you mention their name shows that they are ALWAYS LISTENING! Once you have triggered them, they are recording the requests you make just as if you had typed them into a search engine. So they know when you order pizza, what songs you like and what’s on your calendar for the week. They can also have access to your contacts, your location and even combine that information with your buying and surfing habits on their website.  

To be fair, Amazon and Google both say that their digital assistants only process audio after we trigger them with a phrase like “Hey, Alexa” or “OK, Google”. So they aren’t listening to EVERY conversation… YET. Why do I say, YET? Because the New York Times dug a little deeper and took a look at the patents that Amazon and Google are filing for future makeovers of their digital assistants. In one set of patent applications, Amazon describes, and I’m quoting here, a “voice sniffer algorithm” that can analyze audio in real time when it hears words like “love”, “bought” or “dislike”. It went on to illustrate how a phone call between two friends could result in one receiving an offer for the San Diego Zoo and the other seeing an ad for a wine club based on the passive conversation that the two of them were having.
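To see how little machinery the "voice sniffer" idea actually requires, here is a deliberately simplified sketch. This is not Amazon's actual code, and the keyword-to-ad mapping is invented; it just shows the concept the patent describes — scan a transcript for trigger words and emit matching ad categories.

```python
# Illustrative sketch (hypothetical mapping, not Amazon's implementation):
# keyword "sniffing" over a conversation transcript to target ads.

KEYWORD_ADS = {
    "love": "romance offers",
    "bought": "shopping deals",
    "dislike": "competitor ads",
    "zoo": "San Diego Zoo tickets",
    "wine": "wine club membership",
}

def sniff(transcript: str) -> list:
    """Return the ad categories triggered by keywords heard in a transcript."""
    words = transcript.lower().split()
    return [ad for keyword, ad in KEYWORD_ADS.items() if keyword in words]

# A passive conversation yields targeted ads without any explicit request.
print(sniff("We bought wine after the zoo trip and I love it"))
```

A production system would run this over speech-to-text output continuously; the point is that once audio is transcribed, turning overheard words into ad targeting is trivial.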

In other words, no one had invited Alexa to the conversation, but she, or he, or they were there listening, analyzing and selling your thoughts anyway. That’s just creepy! It gets worse. The Times found another patent application showing how a digital assistant could “determine a speaker’s MOOD using the volume of the user’s voice, detected breathing rate, crying and so forth as well as determine their medical condition based on detected coughing, sneezing and so forth”. And so forth, and so forth. To that, I have only two words: Big Brother!

Let’s call these future digital assistants exactly what they are: audio-based spyware used for profit-making surveillance that treats us users like tasty soundbites at the advertising watering hole. Our private conversations will one day drive their advertisements, profits and product development. They are data mining what we say, turning it into a quantitative model and selling it to anyone who will buy it. Well, I don’t buy it. And I won’t buy one, until I am sure, in writing, that it’s not eavesdropping on everything said in my home.

Granted, these are all proposed changes to be made in the future, but they are a clear sign of where smart speakers and digital assistants are going. Their intention is to eavesdrop on you. Your One Minute Mission is to ask yourself how comfortable you are having a corporation like Amazon or Google eventually hearing, analyzing and sharing your private conversations.

I have to be forthright with you, many people will say they don’t care, and this really is their choice. We are all allowed to make our own choices when it comes to privacy. But the vitally important distinction here is that you make a choice, an educated, informed choice, and intentionally invite Alexa or Google into your private conversations.

I hope this episode of Sileo On Security has helped you do just that.