This is part of our Road Trip 2017 summer series “The Smartest Stuff,” about how innovators are thinking up new ways to make you — and the world around you — smarter.
“Are you a hacker?”
A Las Vegas driver asks me this after I tell him I’m headed to Defcon at Caesars Palace. I wonder if his sweat isn’t just from the 110-degree heat blasting the city.
All week, a cloud of paranoia looms over Las Vegas, as hackers from around the world swarm Sin City for Black Hat and Defcon, two back-to-back cybersecurity conferences taking place in the last week of July. At Caesars Palace, where Defcon is celebrating its 25th anniversary, the UPS store posts a sign telling guests it won’t accept printing requests from USB thumb drives. You can’t be too careful with all those hackers in town.
Everywhere I walk I see hackers — in tinfoil fedoras, wearing biker vests for the Telephreak Defcon party. Mike Spicer, a security researcher, carries a 4-foot-high backpack holding a “Wi-Fi cactus.” Think wires, antennas, colored lights and 25 Wi-Fi scanners that, in seven hours, captured 75 gigabytes of data from anyone foolish enough to use public Wi-Fi. I see a woman thank him for holding the door open for her, all while his backpack sniffs for unencrypted passwords and personal information it can grab literally out of thin air.
You’d think that, with all the potential threats walking about town, Vegas’ director of technology and innovation, Mike Sherwood, would be stressed out. It’s his job to protect thousands of smart sensors around the city that could jam traffic, blast water through pipes or cause a blackout if anything goes haywire.
And yet he’s sitting right in front of me at Black Hat, smiling.
His entire three-person team, in fact, is at Black Hat so they can learn how to stave off future attacks. Machine learning is guarding Las Vegas’ network for them.
Broadly speaking, artificial intelligence refers to machines carrying out jobs that we would consider smart. Machine learning is a subset of AI in which computers learn and adapt for themselves.
Now a number of cybersecurity companies are turning to machine learning in an attempt to stay one step ahead of attackers working to steal industrial secrets, disrupt national infrastructures, hold computer networks for ransom and even influence elections. Las Vegas, which relies on machine learning to keep the bad guys out, offers a glimpse into a future when more of us will turn to our AI overlords for protection.
AI gets smart about cybersecurity
Cyberattacks have become more sophisticated and more dangerous. Here’s how Las Vegas stays safe.
by Alfred Ng
Man and machine
At its most basic, machine learning for security involves feeding massive amounts of data to the AI program, which the software then analyzes to spot patterns and recognize what is, and isn’t, a threat. If you do this millions of times, the machine becomes smart enough to prevent intrusions and malware on its own.
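To make that idea concrete, here is a deliberately tiny sketch of learning from labeled examples — not any vendor’s actual system. The features (payload entropy, outbound connections per minute) and all the numbers are invented for illustration; real products train on millions of samples and far richer signals.

```python
# Toy version of "feed the program examples, let it find the pattern":
# average the benign and malicious training samples into two centroids,
# then label new samples by whichever centroid they sit closest to.

def centroid(samples):
    """Average each feature across a list of feature vectors."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical training data: [payload entropy, outbound connections/min]
benign    = [[3.1, 0.2], [2.8, 0.5], [3.4, 0.1]]
malicious = [[7.6, 9.0], [7.9, 12.0], [6.8, 8.5]]

centroids = {"benign": centroid(benign), "malicious": centroid(malicious)}

def classify(sample):
    """Label a sample by the nearest learned centroid."""
    return min(centroids, key=lambda label: distance(sample, centroids[label]))

print(classify([7.2, 10.0]))  # a high-entropy, chatty sample → "malicious"
```

The point of the sketch is only the shape of the process: the program is never told the rule, it recovers one from the examples — and the more (and better) examples it sees, the finer the rule becomes.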
Machine learning naysayers argue that hackers can write malware to trick AI. Sure, the software can learn really fast, but it stumbles when it encounters data its creators didn’t anticipate. Remember how trolls turned Microsoft’s Tay machine-learning chatbot into a mindless racist? That makes a good case against relying on AI alone for cybersecurity, where the stakes are so high.
Even so, Sherwood says he trusts the Darktrace software that has protected Las Vegas’ network and thousands of sensors for the last 18 months.
Since last February, Darktrace has defended the city from cyberattacks, around the clock. That comes in handy when you have only three staffers handling cybersecurity for a city with more than 630,000 people, 3,000 employees and thousands of online devices. It was worse when Sherwood joined two years ago.
“That was the time where we only had one security person on the team,” Sherwood tells me. “That was when I thought, ‘I need help and I can’t afford to hire more people.'”
It’s really easy for AI to miss things.
David Brumley, Carnegie Mellon University
He’d already used Darktrace in his previous job as deputy director of public safety and city technology in Irvine, California, and he thought the software could help in Las Vegas. Within two weeks, Darktrace found malware on Las Vegas’ network that was sending out data.
“We didn’t even know,” Sherwood says. “Traditional scanners weren’t picking it up.”
I’m standing in front of a tattoo parlor in Las Vegas’ Arts District, a little more than 4 miles from Caesars Palace. Across the street, I see three shuttered stores next to two bail bonds shops.
I’m convinced the taxi driver dropped me off at the wrong location.
This is supposed to be Vegas’ $1 million Innovation District project? Where are the self-driving cars the city said it’s testing in the area? Or the solar-powered streetlights that also generate energy from our footsteps?
I look again at the Innovation District map on my phone. I’m in the right place. Despite the rundown stores, trailer homes and empty lots, this corner of downtown Vegas is much smarter than it looks.
That’s because hidden on the roads and inside all the streetlights, traffic signals and pipes are thousands of sensors. They’re tracking the air quality, controlling the lights and water, counting the cars traveling along the roads, and providing Wi-Fi.
Officials chose the city’s rundown area to serve as its Innovation District because they wanted to redevelop it, with help from technology, Sherwood says. There’s just one problem: All those connected devices are potential targets for a cyberattack. That’s where Darktrace comes in.
Sherwood willingly banks on Darktrace to protect the city’s entire network because the software comes at machine learning from a different angle. Most machine learning tools rely on brute force: cramming themselves with thousands of terabytes of data so they can learn through plenty of trial and error. That’s how IBM’s Deep Blue computer beat Garry Kasparov, the world chess champion, in their best-of-seven rematch in 1997. In the security world, that data describes malware signatures — essentially digital fingerprints that identify specific viruses or worms.
Darktrace, in contrast, doesn’t look at a massive database of malware that’s come before. Instead, it looks for patterns of human behavior. It learns within a week what’s considered normal behavior for users and sets off alarms when things fall out of pattern, like when someone’s computer suddenly starts encrypting loads of files.
Rise of the machines?
Still, it’s probably too soon to hand over all security responsibilities to artificial intelligence, says David Brumley, a security professor and director of Carnegie Mellon University’s CyLab Security and Privacy Institute. He predicts it’ll take at least 10 years before we can safely use AI to keep bad things out.
“It’s really easy for AI to miss things,” Brumley tells me over the phone. “It’s not a perfect solution, and you still need people to make important choices.”
Brumley’s team last year built an AI machine that won DARPA’s Cyber Grand Challenge, beating out other AI entries. A few days later, their contender took on some of the world’s best hackers at Defcon. They came in last.
Sure, machines can help humans fight the scale and speed of attacks, but it’ll take years before they can actually call the shots, says Brumley.
That’s because the model for AI right now is still data cramming, which — by today’s standards — is actually kind of dumb.
But it was still good enough to outwit Kasparov 20 years ago, making him the de facto poster child for man outsmarted by machine.
“I always remind people it was a rematch, because I won the first one,” he tells me, chuckling, while sitting in a room at Caesars Palace during Defcon. Today Kasparov, 54, is the ambassador for security software company Avast, which is why he’s been giving talks around the country on why humans need to work with AI in cybersecurity.
He tells me machines can now learn too fast for humans to keep up, no matter if it’s chess or cybersecurity. “The vigilance and the precision required to beat the machine — it’s virtually impossible to reach in human competition,” Kasparov says.
About two months before Defcon, I’m at Darktrace’s headquarters in New York, where company executives show me how the system works.
On a screen, I see connected computers and printers sending data to Darktrace’s network as it monitors for behavior that’s out of the ordinary.
“For example, Sue doesn’t usually access this much internal data,” Nancy Karches, Darktrace’s sales manager, tells me. “This is straying from Sue’s normal pattern.” So Darktrace shuts down an attack most likely waged by another machine.
“When you have machine-based attacks, the attacks are moving at a machine speed from one to the other,” says Darktrace CEO Nicole Eagan. “It’s hard for humans to keep up with that.”
But what happens when AI becomes the norm? When everyone’s using AI, says Brumley, hackers will turn all their attention on finding the machines’ flaws — something they’re not doing yet.
“We’ve seen again and again, the reason new solutions work better is because attackers aren’t targeting its weaknesses,” he says. “As soon as it became popular, it started working worse and worse.”
About 60 percent of cybersecurity experts at Black Hat believe hackers will use AI for attacks by 2018, according to a survey from the security company Cylance.
“Machine learning security is not foolproof,” says Hyrum Anderson, principal data scientist at cybersecurity company Endgame, who leads research on machine learning to detect hackers and their tools. Anderson expects AI-based malware will rapidly make thousands of attempts to find code that the AI-based security misses.
“The bad guy can do this with trial and error, and it will cost him months,” Anderson says. “The bot can learn to do this, and it will take hours.”
Anderson says he expects cybercriminals will eventually sell AI malware on darknet markets to wannabe hackers.
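The trial-and-error probing Anderson describes can be sketched in a few lines — nothing here is real malware, and both the detector and its threshold are invented. The point is only the speed gap he cites: each probe is a cheap automated test against the defender’s model.

```python
# Toy automated evasion loop: blindly tweak one "feature" of a payload
# until a simplistic detector stops flagging it, counting the attempts.

def toy_detector(api_calls):
    """Pretend scanner: flags samples that import too many suspicious APIs."""
    return api_calls > 40

def probe(start_calls):
    """Machine-speed trial and error: strip features until undetected."""
    calls, attempts = start_calls, 0
    while toy_detector(calls):
        calls -= 1        # e.g. replace one suspicious call with a stub
        attempts += 1
    return attempts

print(probe(100))  # → 60 automated attempts before slipping past the scanner
```

A human running those 60 probes by hand would take days; a bot runs them in milliseconds — which is exactly why Anderson expects attackers to automate the search.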
For now, Sherwood feels safe having the city protected by an AI machine, which has shielded Las Vegas’ network for the past year. But he also realizes a day will come when hackers could outsmart the AI. That’s why Sherwood and his Las Vegas security team are at Black Hat: to learn how to use human judgment and creativity while the machine parries attacks as rapidly as they come in.
Kasparov has been trying to make that point for the last 20 years. He sees machines doing about 80 percent to 90 percent of the work, but he believes they’ll never get to what he calls “that last decimal place.”
“You will see more and more advanced destruction on one side, and that will force you to become more creative on the positive side,” he tells me.
“Human creativity is how we make the difference.”