I recently happened upon a short video by Frank W. Abagnale on the Big Think website on the subject of hackable technology. In it, Abagnale – a frequent lecturer at the FBI Academy – ponders: "We develop a lot of technology but we never go to the final step and that is the last question of the development in how would someone misuse this technology and let’s make sure it can’t be done."
It’s a thought-provoking statement that led me down a deep internet rabbit hole. What I discovered would be enough to keep even the most level-headed of us awake at night. But, first, let’s start at the beginning...
Abagnale’s statement instantly evoked memories of the classic line delivered by Jeff Goldblum’s character in Jurassic Park:
Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.
Of course, he was referring to the questionable practice of growing dinosaurs in test tubes for corporate profit, but the sentiment rings true for a lot of modern-day technological advances. Whilst GPS, webcams and pacemakers are all clever, useful – even life-saving – innovations, the potential for corruption also makes them dangerous.
Right now, cybercrime is basically a financial crime. It’s a business of stealing money or stealing data. Data is money. However, I think it’s going to turn a lot blacker than that. We have the ability right now as I speak to you to shut someone’s pacemaker off, but we’re limited by distance. We have to be within 35 feet of the victim. I could walk by you on the sidewalk, turn off your pacemaker, speed it up, or any bodily device you have on you controlled by a chip or a computer programme…
We have the ability now for law enforcement to pull over a vehicle on the interstate if they can get within 35 feet of the vehicle… So, the question is five years from now will that be 35 miles, 350 miles, 3,500 miles away?... The ability to kidnap the person in the car and lock them in the car. The ability to just take over the car and crash the car. Those are the things that haven’t been answered that no one has figured out yet.
We develop those technologies, we want to make them inexpensive, number one, so they're not encrypted. They don't have a lot of technology in them. And two, we want a return on investment. We want our money back right away.
And marketing is saying, "Hey, this is great! We’ve got to get this out right away!" Without asking the question: what if someone were to do this with it, and how could we stop that now, before we ever put it in the marketplace? Very few companies do that. Most of the technology out there can be hacked, can be manipulated, because we don’t do those things.
It’s not unreasonable to suggest that many companies rely on an “ignorance is bliss” attitude among their consumers – those who willingly welcome potentially hackable devices into their homes…
Sure, there is a small possibility that an Amazon Echo can be hacked, but what are the chances of that happening to me? I’m not an interesting person – why would anyone want to listen in on my conversations or watch my family?
Erm, maybe to steal your bank details? To obtain inside information about the company you work for? Perhaps to watch your teenage daughter undress at night… It doesn’t bear thinking about. But that’s the crux of the issue – somebody does need to think about it, in minute detail and before the product ever goes to market.
In April 2019, it was reported that 7,000 iTrack and 20,000 ProTrack accounts – GPS tracking tools – were accessed by a hacker going by the name of L&M. Having realized that all users were given the same default password on sign-up, the hacker was able to access internal vehicle systems, allowing him to track vehicles’ locations and even turn off their engines remotely. In an online chat interview with Motherboard, L&M stated: “My target was the company, not the customers.
Customers are at risk because of the company… They need to make money, and don't want to secure their customers.”
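The flaw L&M reportedly exploited is worth spelling out, because it is so simple: when every account ships with the same default password, an attacker doesn’t need to crack anything – they only need to enumerate account IDs. Here is a minimal sketch of that logic (every account name and password below is invented for illustration; it is not the real service’s API):

```python
# Why a shared default password is dangerous: the attacker guesses
# account IDs, not passwords. All data here is invented.

DEFAULT_PASSWORD = "123456"

# Simulated account database: most users never changed the default.
accounts = {
    "user1001": "123456",
    "user1002": "123456",
    "user1003": "correct horse battery staple",  # one user changed theirs
    "user1004": "123456",
}

def try_login(user_id: str, password: str) -> bool:
    """Stand-in for the tracking service's login endpoint."""
    return accounts.get(user_id) == password

def enumerate_defaults(candidate_ids):
    """Return every account still using the vendor default."""
    return [uid for uid in candidate_ids if try_login(uid, DEFAULT_PASSWORD)]

compromised = enumerate_defaults(accounts)
print(compromised)  # three of the four accounts fall immediately
```

The fix is equally simple to state: force a password change on first login, or generate a unique random password per account at the factory.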
A recent report also shone a light on the fallibility of peer-to-peer technology. Security journalist Brian Krebs flagged concerns around the iLnkP2P software made by Shenzhen Yunni Technology – software that resides in millions of IoT devices, including doorbells, cameras and baby monitors. Paul Marrapese, a security researcher, identified a lack of encryption or authentication within the software that left more than two million devices vulnerable to attack. Krebs’ advice? Avoid purchasing or using IoT devices that advertise P2P capabilities. Simple! If you know what P2P technology is in the first place, that is…
Scarily, cybercrime isn’t confined to virtual realms either. House burglaries, car thefts and even murders have all been made possible by hackable technology.
As tech gets smarter, so do criminals.
In July 2019, TechCrunch shared research highlighting serious issues with smart hub door locks. Security researchers Chase Dardaman and Jason Wheeler had identified three security flaws which, when combined, could be abused to open any door with a so-called “smart-lock”. In the accompanying article, TechCrunch explained:
Smart home technology has come under increasing scrutiny in the past year. Although convenient to some, security experts have long warned that adding an internet connection to a device increases the attack surface, making the devices less secure than their traditional counterparts. The smart home hubs that control a home’s smart devices, like water meters and even the front door lock, can be abused to allow landlords entry to a tenant’s home whenever they like.
Elaborating on Dardaman and Wheeler’s research, they added:
The researchers found they could extract the hub’s private SSH key for “root” — the user account with the highest level of access — from the memory card on the device. Anyone with the private key could access a device without needing a password… They later discovered that the private SSH key was hardcoded in every hub sold to customers — putting at risk every home with the same hub installed.
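What makes a hardcoded key so severe is the blast radius: extract it from one memory card and you hold the root credential for every hub ever sold. A toy sketch of that fleet-wide sameness (the key material and device names below are invented, and the fingerprinting is only analogous in spirit to `ssh-keygen -lf`):

```python
import hashlib

# A single root SSH key baked into the firmware image at the factory.
# (Key material invented for illustration.)
FACTORY_PRIVATE_KEY = b"-----BEGIN KEY----- not-a-real-key -----END KEY-----"

def fingerprint(key: bytes) -> str:
    """Short SHA-256 fingerprint of the key material."""
    return hashlib.sha256(key).hexdigest()[:16]

# Every hub sold ships with the same key...
fleet = {f"hub-{n:04d}": FACTORY_PRIVATE_KEY for n in range(1, 4)}
prints = {name: fingerprint(key) for name, key in fleet.items()}

# ...so every hub has the identical root credential. An attacker who
# pulls the key from ONE memory card can log in to ALL of them.
print(len(set(prints.values())))  # 1
```

The safer design generates a unique key pair per device at first boot, so compromising one unit tells you nothing about the rest.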
Thankfully, Dardaman and Wheeler aren’t the only code-loving crusaders using their powers for good, although I think I’d have a heart attack if I were the one receiving “help” from a certain Hank Fordham…
Vice reported in late 2018 that the then 22-year-old Fordham logged into an Arizona man’s Nest security camera from his home in Calgary, Alberta, and began broadcasting his voice to alert the owner to his insecure device. And apparently, this wasn’t his first rodeo! Vice stated:
In the last year, Fordham says he and his colleagues in the Anonymous Calgary Hivemind—a collective of white hat hackers—have hacked into between five and 10 different smart home security camera accounts and communicated with people on the other end.
He purportedly gained access through a technique known as “credential stuffing” – a subject I have recently covered in the video section of this site. If you don’t do anything else today, give it a watch. It could save you from a lot of anguish in the future. After all, it’s one thing to know your home and family might be at risk. It’s another thing to know what you can do about it…