Google Buys Fitbit

Google Isn’t Just Buying Fitbit, They’re Tracking Your Donut Habit

You’re heading to the gym for a workout when you decide to surprise your coworkers with a treat. You search for the nearest bagel shop on your Google Maps app, which directs you to their closest advertiser, Donut Feel Good? Your heart pounds from the joy of anticipation — your team will LOVE you (or at least the sugar rush). Just as you’re leaving Donut Feel Good, your phone dings you with a coupon for coffee across the street. “Why not?” you think, as Google subtly nudges your behavior just a bit more. While you’re in the office, basking in coworker glory, Google is busy sharing your lack of exercise and trans-fat consumption with your health insurance company.  

Welcome to the surveillance economy, where your data is the product. I'm John Sileo, and privacy and security are my jam (as my kids like to say). My goal is to make sure you're being intentional about how you allow technology to track and share your private information, especially as you consider buying a tracker for someone you love.

Put simply, Google is moving out of our pockets and into our bodies. Thanks to their purchase of Fitbit, the health-tracking device maker, Google can now combine our health data with what they already know about us: the content of our internet searches (Google.com), location data (Maps and Android phones), emails and contacts (Gmail), conversations at home and smart-speaker searches (Google Home), video-watching habits (YouTube), video footage and thermostat settings (Nest), and document contents (Docs, Sheets, etc.). Google is at the forefront of the surveillance economy — making money by collecting, analyzing and selling the phenomenal volume of digital exhaust we all emit just living our connected lives.

Fitness devices and apps can track what we eat, how much we weigh, and when we exercise, sleep and have low blood sugar. They know that your heart rate increases when you shop at your favorite store; they can predict menstrual cycles, record body mass index and interpret your intimate cuddling habits. And you thought that gift you were about to buy benefited the recipient. In reality, you're paying Google to improve a personalized tracking profile they can sell to advertisers. You might be okay with that, but you deserve to know enough to have the choice.

Google and Fitbit say that our data will be anonymized, secured and kept private. Blah, blah, blah. This is a common tactic I call PPSS: the Privacy Policy Slippery Slope. When we stop paying attention, the tech company emails us an "updated" 100-page privacy policy that they know we will never read and could never understand. They love taking advantage of our defeatist attitude – oh, there's nothing I can do about it anyway. That attitude resigns you to being categorized into a highly profitable behavioral profile, whether that's Healthy, Happy and Rich or Overweight, Underpaid & Obsessed with Donuts.

In a related story, Google has been quietly working with St. Louis-based Ascension, the second-largest health system in the U.S., collecting and aggregating detailed health information on millions of Americans.

Code-named Project Nightingale, the secret collaboration began last year and, according to the Wall Street Journal, “encompasses lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.” The Journal also reported that neither the doctors nor patients involved have been notified.

Now couple that with data on what foods we buy, where we go on vacation and our most recent Google searches, and companies will not only be able to track our behavior, they’ll be able to predict it. And behavior prediction is the holy grail of the surveillance economy. 

For the time being, you control many of the inputs that fuel the surveillance economy – but changing behavior is hard. I know because even I have to make intentional choices about how I share my health data. The keyword in that sentence is intentional.

For example, you can choose to take off your Fitbit or trust your data to Apple, which is a hardware and media company, whereas Google is an information-aggregation company. You can change the default privacy settings on your phone, your tracker and your profile. You can delete apps that track your fitness and health, buy scales that don't connect to the internet and opt out of information sharing for the apps and devices you must use. Your greatest tool in the fight for privacy and security is your intentional use of technology.

In other words, you do have a measure of control over your data. Donut Feel Good?

About Cybersecurity Keynote Speaker John Sileo

John Sileo is the founder and CEO of The Sileo Group, a privacy and cybersecurity think tank in Lakewood, Colorado, and an award-winning author, keynote speaker and expert on technology, cybersecurity and tech/life balance.

 


What if Putin Had an Army of Killer Artificial Intelligence Robots?


The New Frontier: How Science Fiction Distorts Our Next Move on Artificial Intelligence and Cybersecurity

It's been 51 years since a computer named HAL terrorized astronauts in the movie 2001: A Space Odyssey. And it's been more than three decades since "The Terminator" featured a stone-faced Arnold Schwarzenegger as a cyborg terrorizing "Sarah Connah." Yet dark, dystopian civilizations — where computers or robots control humans — are often what come to mind when we think of the future of artificial intelligence. And that is misleading.

I was happily raised on a healthy diet of science fiction, from the “Death Star” to “Blade Runner.” But increasingly, as we approach the AI-reality threshold, Hollywood’s technological doomsday scenarios divert the conversation from what we really need to focus on: the critical link between human beings, artificial intelligence and cybersecurity. In other words, it’s not AI we need to fear; it’s AI in the hands of autocrats, cybercriminals and nation-states. Fathom, for a moment, Darth Vader, Hitler or even a benevolent U.S. president in charge of an army of robots that always obey their leader’s command. In this scenario, we wouldn’t avert a nuclear showdown with a simple game of tic-tac-toe (yes, I loved “War Games,” too). 

As I noted in my post about deepfakes, not only is AI getting more sophisticated, but it's increasingly being used in nefarious ways, and we recently crossed a new frontier. Last March, the CEO of a U.K. energy firm received a call from the German CEO of the parent company, who told him to immediately transfer $243,000 to the bank account of a Hungarian supplier — which he did. After the transfer, the money was moved to a bank in Mexico and then to multiple locations.

In fact, the U.K. executive was talking to a bot, a computer-generated "digital assistant" — much like Siri or Alexa — designed by criminals using AI technology to mimic the voice of the German CEO. The only digital assistance the crime-bot gave was to digitally separate the company from a quarter-million dollars.

Rüdiger Kirsch of Euler Hermes Group SA, the firm's insurance company, told the Wall Street Journal that the U.K. executive recognized the slight German accent and "melody" in his boss's voice. Kirsch also said he believes commercial software was used to mimic the CEO's voice — meaning this may be the first instance of AI voice mimicry used for fraud.

Trust me, it won’t be the last. We’re at the dawn of a whole new era of AI-assisted cybercrime.

What's ironic about the prevailing wisdom around AI, however, is that the capabilities of criminals and bad actors are often underestimated, while those on the cybersecurity side are overestimated. At every security conference I attend, the room is filled with booths of companies claiming to use "advanced" AI to defend data and otherwise protect organizations. But buyer beware, because at this stage it's more a marketing strategy than a viable product.

That’s because artificial intelligence is more human than we think. From my experience peering under the hood of AI-enabled technology like internet-enabled TVs, digital assistants and end-point cybersecurity products, I’m constantly amazed by how much human input and monitoring is necessary to make them “smart.” In many ways, this is a comforting thought, as it makes human beings the lifeblood of how the technology is applied. People, at least, have a concept of morality and conscience; machines don’t. 

In a sense, AI is really just an advanced algorithm (which, by the way, can build better algorithms than humans). The next stage is artificial general intelligence (AGI), which is the ability of a machine to perform any task a human can (some experts refer to this as singularity or consciousness). This is an important distinction because current AI can perform certain tasks as well as or even better than humans, but not every task — and humans still need to provide the training. 
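To make that distinction concrete, here is a toy sketch in Python (illustrative only, not drawn from any real AI system) of what "just an advanced algorithm" looks like: a few lines of optimization that learn exactly one narrow task and nothing else.

    # Toy "narrow AI": gradient descent learns a single task
    # (fit the line y = 2x + 1) and is useless for anything else.
    # No generality, no understanding -- just error reduction.
    data = [(x, 2 * x + 1) for x in range(10)]  # the only "world" it knows

    w, b, lr = 0.0, 0.0, 0.01  # parameters and learning rate
    for _ in range(2000):      # repeat small corrections many times
        for x, y in data:
            err = (w * x + b) - y  # how wrong the current guess is
            w -= lr * err * x      # nudge parameters to shrink the error
            b -= lr * err

    print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=2.00, b=1.00

Ask this "intelligence" to caption a photo or translate a sentence and it has nothing to offer. AGI would be the ability to move between such tasks the way a person does.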

We’ll achieve artificial general intelligence when we’re able to replicate the functions of the human brain. Experts say it’s not only theoretically possible, but that we’ll most likely develop it by the end of the century, if not much sooner. 

The U.S., China and Russia are all pursuing the technology with a vengeance, each vying for supremacy. In 2017, China released a plan to be the leader by 2030, and that same year Russian President Vladimir Putin said, “Whoever becomes the leader in this sphere will become the ruler of the world.” Darth Putin, anyone?

And this brings us back to those doomsday scenarios, but I’m not talking about cyborgs roaming American cities with modern weaponry. The real threat is to American industry and infrastructure. So, instead of worrying about a future where bots are our overlords, it’s time we focus on the technological and legislative conversations we need to have before AGI becomes ubiquitous.

Cybercriminals using AI were able to swindle an energy company out of a quarter-million dollars without breaking a sweat.

They’ll be back.


About Cybersecurity Keynote Speaker John Sileo

John Sileo is the founder and CEO of The Sileo Group, a cybersecurity think tank in Lakewood, Colorado, and an award-winning author, keynote speaker and expert on technology, cybersecurity and tech/life balance. He energizes conferences, corporate trainings and main-stage events by making security fun and engaging. His clients include the Pentagon, Schwab, and organizations of all sizes. John got started in cybersecurity when he lost everything, including his $2 million business, to cybercrime. Since then, he has shared his experiences on 60 Minutes, Anderson Cooper, and even while cooking meatballs with Rachael Ray. Contact John directly to see how he can customize his presentations to your audience.

Going Analog: Tom Kellermann on Emerging Cyberthreats

John speaks with Tom Kellermann, Chief Cybersecurity Officer at Carbon Black, about emerging cyberthreats and what we can do to protect ourselves.

About Tom Kellermann

Tom Kellermann is the Chief Cybersecurity Officer for Carbon Black Inc. Prior to joining Carbon Black, Tom was the CEO and founder of Strategic Cyber Ventures. On January 19, 2017, Tom was appointed the Wilson Center's Global Fellow for Cyber Policy.

Tom previously held the positions of Chief Cybersecurity Officer for Trend Micro; Vice President of Security for Core Security; and Deputy CISO for the World Bank Treasury.

In 2008, Tom was appointed a commissioner on the Commission on Cyber Security for the 44th President of the United States. In 2003, he co-authored the book "Electronic Safety and Soundness: Securing Finance in a New Age."

Kellermann believes, "In order to wage the counter-insurgency we must spin the chess board. The killchain is obsolete – we must measure success of disruption of attacker behavior. Understanding root cause is paramount. Combination of TTPs defines intent. Cyber is all about context and intent/cognition."

About Cybersecurity Author & Expert John Sileo

John Sileo is an award-winning author and Hall of Fame Speaker who specializes in providing security awareness training that's as entertaining as it is educational. John energizes conferences, corporate trainings and main-stage events by interacting with the audience throughout his presentations. His clients include the Pentagon, Schwab and organizations of all sizes. John got started in cybersecurity when he lost everything, including his $2 million business, to cybercrime. Since then, he has shared his experiences on 60 Minutes, Anderson Cooper, and even while cooking meatballs with Rachael Ray. Contact John directly to see how he can customize his presentations to your audience.

How to Turbocharge your Cybersecurity Awareness Training

Security awareness training can’t be a boring afterthought if it’s going to work. 

Own IT. Secure IT. Protect IT.

Those are the key themes for this year's National Cybersecurity Awareness Month, coming up in October, and the advice is sound. Unfortunately, it's the same message your trainees have been hearing for years and, at this point, they've largely tuned it out.

The challenge isn’t creating a pithy slogan. It’s turning advice into action and an enduring “culture of security.” At this point, cybersecurity is on the radar for most companies, and the smart ones make it a priority. To achieve their cybersecurity goals, many organizations implement cybersecurity awareness training sessions, which seek to educate the rank and file on threats and how to thwart them. When done well, these initiatives can be a way to focus the entire organization — and can greatly reduce the risk of data breach, cyberextortion or damaging disinformation campaigns.

When not done well, you’ll be lucky if your team remembers the words cybersecurity awareness as they shuffle out the door — no doubt refreshed after scrolling through Facebook or watching the latest Taylor Swift video.

The problem is that many security programs are actually less than the sum of their parts for the simple reason that they don’t have an overarching end goal. Sure, the objective is to educate your team on emerging threats so your company is more secure, but that’s a nebulous goal. And because it’s a nebulous goal lacking tangible motivation, your team doesn’t buy in. 

That’s not to say they don’t care about the company’s security. Of course they do — but it’s not personal. 

Unfortunately, when it comes to cybersecurity in the corporate sector, the human element is usually overlooked. This is a mistake. I often hear companies refer to humans as the “weakest link” in cybersecurity, which of course becomes a self-fulfilling prophecy. Enlightened organizations understand that security is a highly effective competitive differentiator (think Apple) and that humans, when properly trained, are the strongest defense against cyberthreats. Consequently, an effective program must start by getting people — from the top down — invested in the goal and the process. 

I've been the opening keynote speaker for hundreds of security awareness programs around the world, many of them outside the bounds of National Cybersecurity Awareness Month, and most of them leave me hungering for more: more engagement, more interaction and more actionable information. In short, more substance.

Here are a few tips for designing a cybersecurity awareness program that will engage your team and get results.

Ownership

Don’t focus on the CISO, CRO, CIO or CTO. That would just be preaching to the choir. The missing but crucial link in cybersecurity awareness programs tends to be a security “believer” from the executive team or board of directors. Successful programs are clearly led, repeatedly broadcast and constantly emphasized from the very top of the organization — with an attitude of authenticity and immediacy. Whether it’s your CEO at an annual gathering or a board member kicking off National Cybersecurity Awareness Month, your security champion must not only become an evangelist but also have the authority and budget to implement change.

Strategy

Approach your program strategically, and devise a plan to protect your intellectual property, critical data and return on information assets. You're competing for resources, so build a compelling business case that demonstrates ROI in plain business terms, not technobabble.

  • What did it cost your competitor when ransomware froze its operation for a week? (e.g., FedEx: $300 million)
  • How much would training have cost compared with the millions a similar-sized company lost to a CEO whaling scheme?
  • What do the directors of compliance, HR and IT have to add to the defense equation? 

The most successful cybersecurity awareness programs have a budget, a staff (however small) and cross-departmental support. Involve the business team and other stakeholders up front to leverage their expertise before rollout.
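To put that business case in concrete terms, here is a minimal back-of-envelope sketch in Python. Every figure is a hypothetical placeholder, not a benchmark; the point is how expected-loss arithmetic turns a training budget into ROI language a board will respond to.

    # Back-of-envelope ROI for awareness training.
    # Every number here is a hypothetical placeholder -- substitute
    # your own incident data, insurance figures and vendor quotes.
    annual_incident_probability = 0.28   # est. chance of a material incident this year
    expected_incident_cost = 1_500_000   # downtime + recovery + reputation ($)
    training_cost = 60_000               # annual awareness program budget ($)
    risk_reduction = 0.40                # est. drop in incident likelihood after training

    loss_before = annual_incident_probability * expected_incident_cost
    loss_after = annual_incident_probability * (1 - risk_reduction) * expected_incident_cost
    annual_savings = loss_before - loss_after             # $168,000 with these inputs

    roi = (annual_savings - training_cost) / training_cost
    print(f"Expected annual savings: ${annual_savings:,.0f}")
    print(f"ROI on training spend:   {roi:.0%}")          # 180% with these inputs

Swap in your own numbers; what matters is putting training spend and expected loss in the same sentence.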

Methodology

Here's a litmus test for the potential effectiveness of your security awareness program: Does it begin by focusing on the critical information assets and devices inside your organization? If so, it's probably doomed. Why? Because your employees are human beings, and they want to know how this affects them personally before they invest time to protect the organization's coffers.

Excellent security awareness kicks off by making data protection personal — building ownership before education. From there, the training must be engaging (dare I say fun?) and interactive (live social engineering) so your audience members pay attention and apply what they learn. Death-by-PowerPoint will put behavioral change to sleep permanently. Highly effective programs build a foundational security reflex (proactive skepticism) and are interesting enough to compete against cute puppy videos and our undying desire for a conference-room snooze.

Sustenance

Best practice security awareness training, like a five-course meal, doesn’t end with the appetizer. Yes, kickoff is best achieved with a high-energy, personally relevant, in-person presentation that communicates the emotional and financial consequences of data loss — but that’s only the beginning. 

From there, your team needs consistent, entertaining follow-up education to keep the fire alive. For example, we've found short, funny, casual video tips on the latest cyberthreats to be highly effective (once your team takes ownership of their own data, and yours). Then add lunch workshops on protecting personal devices, incentive programs for safe behavior, and so on. A culture matures when you feed it consistently.

Measurement

If you don’t measure your progress (and actually demonstrate some), no one will fund next year’s training budget. Here are a few questions I ask when facilitating board retreats on cybersecurity: 

  • What are your cybersecurity awareness training KPIs, your key metrics? 
  • How did successful phishing or social engineering attacks decline as a byproduct of your program? (See the sketch below.)
  • Has user awareness of threats, policy and solutions increased? 
  • How many employees showed up for the Cybersecurity Awareness Month keynote and fair? 
  • Do your events help employees protect their own data as well?
  • How department-specific are your training modules? 

When you can show quantitative progress, you’ll have the backing to continue building your qualitative culture of security.
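As one illustration of that quantitative progress (using made-up numbers, and assuming you run quarterly phishing simulations), the click-rate question flagged in the list above can be tracked with something as simple as:

    # Quarter-over-quarter phishing-simulation click rate: a simple,
    # defensible KPI. All numbers below are made up for illustration.
    quarterly_results = {
        "Q1": (500, 95),   # (simulated phishing emails sent, clicks)
        "Q2": (500, 61),
        "Q3": (500, 38),
    }

    baseline = None
    for quarter, (sent, clicks) in quarterly_results.items():
        rate = clicks / sent
        if baseline is None:
            baseline = rate
            print(f"{quarter}: click rate {rate:.1%} (baseline)")
        else:
            decline = (baseline - rate) / baseline
            print(f"{quarter}: click rate {rate:.1%}, down {decline:.0%} from baseline")

The tooling matters far less than choosing a few such metrics and reporting them consistently.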

Over the long term, a culture of security that reinvents itself as cyberthreats evolve will be far less costly than a disastrous cybercrime that lands your company on the front page. National Cybersecurity Awareness Month is a great catalyst to get your organization thinking about its cybersecurity strategy. Now it’s time to take action.


About Cybersecurity Author & Expert John Sileo

John Sileo is an award-winning author and Hall of Fame Speaker who specializes in providing security awareness training that's as entertaining as it is educational. John energizes conferences, corporate trainings and main-stage events by interacting with the audience throughout his presentations. His clients include the Pentagon, Schwab and organizations of all sizes. John got started in cybersecurity when he lost everything, including his $2 million business, to cybercrime. Since then, he has shared his experiences on 60 Minutes, Anderson Cooper, and even while cooking meatballs with Rachael Ray. Contact John directly to see how he can customize his presentations to your audience.

Deepfakes: When Seeing May Not Be Believing

Deepfake Bill Hader

How deepfake videos can undermine our elections, the stock markets and our belief in truth itself

Last weekend, attendees at a conference in Las Vegas got quite a surprise. They were waiting for a presentation by Democratic National Committee Chairman Tom Perez and instead saw his image on a big screen via Skype. During his brief video appearance, the chairman apologized that he was unable to be there in person. In fact, the voice coming out of Perez's mouth belonged to the DNC's chief security officer, Bob Lord, and the audience had just been treated to a deepfake video — video that's been manipulated to make it look like people say or do things they actually didn't.

The video was shown in the AI Village at DEF CON — one of the world's largest hacker conventions — to demonstrate the threat deepfakes pose to the 2020 election and beyond. It was made with the cooperation of the DNC; artificial intelligence (AI) experts altered Perez's face to make it look as if he were apologizing, and Lord supplied his voice.

Watch carefully as Bill Hader turns into Seth Rogen & Tom Cruise!


Remember when Forrest Gump shook hands and spoke with Presidents Kennedy and Nixon? That was an early example of manipulated video. A more recent, and less innocuous, example was a video of House Speaker Nancy Pelosi, altered to make it appear that she was drunk and slurring her words. The video went viral and was viewed more than 3 million times. For some viewers, it confirmed their disdain for Pelosi; for others, it simply confirmed that they can't trust the other side of the political spectrum. It's troubling that neither of these reactions is based in fact.

In the 25 years since Forrest Gump introduced manipulated video, sophisticated AI has been developed to create nearly undetectable deepfakes. Not only has the technology improved, but the nefarious uses have proliferated. Take deepfake porn, where a victim's face is superimposed on the body of a porn actor. It's often used as a weapon against women. Actress Scarlett Johansson, whose face was grafted onto dozens of pornographic scenes using free online AI software, is one famous example, but it happens to ordinary women, too. Just for a second, imagine your daughter or son, husband or wife being targeted by deepfake porn to destroy their reputation or settle an old score. As the technology becomes less expensive and more available, that is what we face.

Until recently, the warnings about deepfakes in the U.S. have focused on political ramifications, most notably their expected use in the 2020 election. Imagine a doctored video of a candidate saying they’re changing their position on gun control, for example. In June, during a House Intelligence Committee hearing on the issue, experts warned that there are multiple risks, including to national security: What if our enemies post a video of President Trump saying he’s just launched nuclear weapons at Russia or North Korea? 

The business community and financial markets may also be targeted. A video of Jeff Bezos warning of low quarterly Amazon results could cause a sell-off of Amazon stock, for instance. Bezos has the platform to quickly respond, but by the time he’s corrected the record, real damage could be done. Similarly, CEOs of lesser-known companies could be targeted, say the night before an IPO or new product launch. 

There’s currently a kind of arms race occurring between those developing deepfake technology and those developing ways to detect the altered videos — and the good guys are losing. 

In an interview with The Washington Post in June, Hany Farid, a computer science professor and digital forensics expert at the University of California at Berkeley, said, "We are outgunned. The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1."

Soon, we may all be awash in deepfake video, unable to tell truth from fiction, and this is perhaps the most worrying aspect. We already live in an age where more than half the U.S. population distrusts the media, the government and their neighbors, and belief in conspiracy theories is on the rise. A staggering one in 10 Americans doesn't believe we landed on the moon — 18% of the nonbelievers are between the ages of 18 and 34 — and a 2018 poll found that one-third of Americans don't believe that 6 million Jews were murdered in the Holocaust. That's to say nothing of the people worldwide who deny the Holocaust ever happened.

Historically, the best way to refute conspiracy theorists has been video proof. The countless hours of footage of the Allies liberating concentration camps, the 9/11 attacks and that grainy film of Neil Armstrong planting the American flag on the moon couldn’t be denied. Until now. 

In a statement, the House Intelligence Committee said, “Deep fakes raise profound questions about national security and democratic governance, with individuals and voters no longer able to trust their own eyes or ears when assessing the authenticity of what they see on their screens.”

In other words, we’re entering an age when seeing is no longer believing. 

This is all part of a larger movement where technology is used to erode trust, and in the hands of foreign enemies like Russia, it can and will be used to undermine our belief in the free press, in our leadership and in democracy. It is, in essence, the use of the First Amendment to undermine the First Amendment, and unethical corporations and cybercriminals will hop on board as soon as the AI technology is affordable to the mass market.

So where is the hope in all of this? Our regulatory response to the malicious tools that came before — from viruses to spyware, botnets to ransomware — was weak. This time, we must combine comprehensive legislative oversight and control with the ethical use of technology to proactively minimize the problem before it becomes mainstream. Our senators and representatives must take the lead in setting standards for how AI technology, including the technology used to produce deepfakes, is released, utilized and policed.

Bruce Schneier’s book, “Click Here to Kill Everybody,” includes an excellent primer on the regulatory framework that would start us down the path. We, as voters, must directly express our concern to congressional leadership and urge them to act before a proliferation of deepfake videos destroys reputations — along with our ability to believe our own eyes.


About Cybersecurity Keynote Speaker John Sileo

John Sileo is an award-winning author and keynote speaker on cybersecurity, identity theft and tech/life balance. He energizes conferences, corporate trainings and main-stage events by making security fun and engaging. His clients include the Pentagon, Schwab and organizations of all sizes. John got started in cybersecurity when he lost everything, including his $2 million business, to cybercrime. Since then, he has shared his experiences on 60 Minutes, Anderson Cooper, and even while cooking meatballs with Rachael Ray. Contact John directly to see how he can customize his presentations to your audience.