Spooky Spyware

Here is my grand finale blog post for 2019’s Cybersecurity Awareness Month.

The History of Spyware

In 1999, Steve Gibson of Gibson Research detected advertising software on his computer and suspected it was actually stealing his confidential information. The so-called adware had been covertly installed and was difficult to remove, so he decided to counter-attack and develop the first ever anti-spyware program, OptOut.

Gibson had found something running on his computer called TSAdbot, and it freaked him out:

“A couple weeks later, there was news of something really bad that was installing itself in people’s machines. People were finding it by DLL name. It was called Aureate. So I created something called OptOut that was the very first anti-spyware tool. And that’s where the name got coined, or the term “spyware” came out of that work.”

Security Now Episode #007

In the same way that you have ads on websites, people were finding ways to put ads in freeware applications. This let freeware authors get paid by a central advertising agency. The problem was that the software also profiled what you did, monitored your connection, and sent the collected data back to that agency, all without the user’s permission.

While you could uninstall the software, the spyware would remain. Back then, when times were simpler, it could be resolved (most of the time) by deleting the DLLs, which is exactly what Gibson’s OptOut program was designed to do.
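
As a rough sketch of what a removal tool like OptOut did, the snippet below walks a directory tree and flags files whose names match a small list of known spyware DLLs. The filenames here are placeholders for illustration, not Gibson’s actual signature list:

```python
from pathlib import Path

# Illustrative only: these names stand in for known spyware DLLs;
# they are placeholders, not a real signature database.
KNOWN_SPYWARE_DLLS = {"advert.dll", "tsadbot.dll"}

def find_spyware_dlls(root: str) -> list:
    """Walk a directory tree and return paths whose filenames match the list."""
    hits = []
    for path in Path(root).rglob("*.dll"):
        if path.name.lower() in KNOWN_SPYWARE_DLLS:
            hits.append(path)
    return hits

# A removal tool would then delete the flagged files.
```

This works only because early spyware lived in ordinary, deletable files; as we’ll see, rootkits ended that era.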

Fast forward to 2005

Spyware began adopting what is known as rootkit technology. Rootkits date back to the early days of hacking, when attackers surreptitiously installed software on people’s computers in ways designed not to be found.

A rootkit exploits the fundamental way computers function. An operating system (OS) is the foundation on which everything else runs, and applications sit on top of it. There is an inherent division between user space, which applications occupy, and kernel space, which is down below doing the work on behalf of requests from those applications.

Rootkits embed their code down into this kernel space, essentially becoming a part of the operating system and altering its fundamental behavior. This changed the playing field completely when it came to the detection and removal of spyware. 
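
As a user-space analogy (purely illustrative; a real rootkit patches the kernel itself, below every tool you could run), the sketch below replaces Python’s directory-listing routine so that one file simply never appears. Any code that relies on the patched routine is lied to, which is exactly why detection from inside a compromised system is so hard:

```python
import os

# Analogy only: a user-space monkey-patch standing in for a kernel hook.
# HIDDEN_NAME is a hypothetical file the "rootkit" conceals.
HIDDEN_NAME = "rootkit.bin"

_real_listdir = os.listdir

def hooked_listdir(path="."):
    # Call the real routine, then filter our file out of the results,
    # so any tool relying on os.listdir simply never sees it.
    return [n for n in _real_listdir(path) if n != HIDDEN_NAME]

os.listdir = hooked_listdir
```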

Spyware doesn’t just get onto your machine by piggybacking on freeware. As websites became more dynamic, they became part of the malware threat landscape as well. Netscape brought us JavaScript, which introduced the notion of scripting to web pages.

Microsoft, following suit (as they were ferociously competing with Netscape), took the technology they had (ActiveX) and brought it to their browser. So if you were using Internet Explorer (IE), a website you visited could use scripting (JavaScript or Microsoft’s Active Scripting) to download ActiveX code without asking you and run it on your machine.

In other words, I’m not even downloading anything malicious, or installing dubious freeware, I’m merely visiting websites and I get infected! Fortunately, the popular modern browsers are light years ahead of where IE and Netscape were back in 2005.

How is this legal?

In short, by the precedent set by the 1970s third-party doctrine.

In the 1970s, the Supreme Court handed down Smith v. Maryland and United States v. Miller. These two cases have turned out to be the most important Fourth Amendment decisions in recent history. The Court held that people are not entitled to an expectation of privacy in information they voluntarily provide to third parties. This legal proposition, commonly known as the third-party doctrine, permits the government access to vast amounts of information about individuals (e.g., websites they’ve visited, who they’ve emailed, phone records, utility, banking, and education records, etc.). An important question is whether the doctrine still holds up in light of all of our technological and social advances since its original conception in the ’70s.

Here is a short summary of how the Smith case came about: In 1976, a Baltimore woman, Patricia McDonough, was robbed in her home. She was able to give the police a description of the robber and of the 1975 Monte Carlo she thought he was driving. After a few days, she began receiving creepy, threatening phone calls from an unknown man. In one of the calls, he asked her to go stand on her front porch, where she observed the same Monte Carlo drive past. The following week, police observed that car in McDonough’s neighborhood. By running the plates, they learned that it was registered to Michael Lee Smith.

The police went to the telephone company and requested that a pen register be installed on Smith’s landline to record the numbers dialed from the telephone at Smith’s home. The following day, the police obtained a warrant to search Smith’s house, where they discovered a phone book with the corner turned down on the page where McDonough’s name was found. Smith was then arrested and placed in a line-up, where McDonough identified him as the man who robbed her.

In pretrial, Smith filed a motion to suppress the information derived from the installation of the pen register because it was obtained without a warrant. The motion was eventually denied by the trial court. The question posed to the Supreme Court was: did the warrantless pen register violate Smith’s Fourth Amendment protection against unreasonable searches and seizures? The Court reached a 5-3 majority decision: no.

The Supreme Court’s 1979 decision in Smith v. Maryland established the third-party doctrine. It is important to note that this doctrine was created in the context of telephones: since Smith voluntarily shared information (the numbers he dialed) with a third party (the telephone company), there was no expectation of privacy attached to that data. There IS an expectation of privacy in the content of communications (Smith’s phone conversations), but the police only needed proof that he was making the phone calls to the victim.

The argument was (and still is): these aren’t actually his records; they don’t belong to him. They were the company’s records and belong to them!

So what does this mean? It means that this collection of information without a warrant didn’t violate his Fourth Amendment rights, and if one person doesn’t have a Fourth Amendment interest in records held by a company, then no one does. In other words, the company that holds data collected through its business with us has absolute proprietary ownership of all of it.

Flash forward 40 years

…and we are still relying on this precedent set around the case of Mr. Smith.

Who has a privacy right for the data that’s held by a company?

Data collections are records about people, about you and me. It’s not data that is being exploited; it’s people that are being exploited. It’s not data that’s being manipulated; it’s you and me.

So, who is enabling spyware?

We know that commercial spyware has existed and been trafficked in the U.S. for well over a decade — and this extends way beyond our earlier examples of hidden adware on our systems: we’re talking about the type of spyware used as a personal surveillance tool by governments and jealous spouses alike. In a poll by the National Network to End Domestic Violence, 54% of domestic abuse victims in America were being tracked by their abusers. (NNEDV Survey)

Spyware isn’t being used only by totalitarian regimes monitoring targets of interest; there are many, many cases where Western democracies employ the same surveillance techniques. One example is how El Chapo was spied on by the Mexican government using the surveillance software known as Pegasus (more on Pegasus below). It’s fairly obvious why governments around the world all have a vested interest in keeping spyware in a legal gray area.

Companies also have a vested interest in keeping spyware legal. Consider the extent of data analytics and targeted ad services.

Nowadays these things are constantly running discreetly in our pockets, on our smartphones. Apple has made it essentially impossible to observe the network traffic going on in the background of your iPhone. There is an entire industry supporting data collection that is built on keeping this invisible. (Side note: I’m not saying there is some grand conspiracy here, I’m simply trying to point out that “big data” is big money.)

We need to make the activities of our devices more visible and understandable to the average person. And then, we need to give them direct control over it. Oh, you don’t want Facebook Ad services reporting back to their servers from your smartphone every 5 seconds? Well, first, it should be obvious to you that it’s happening, and second, you should be able to deny it. You may have paid for the device, but are you the one who really owns it?

So really, what is the benefit for government entities or companies to make spyware illegal, when there is such benefit from the growth of a data collection industry? Sometimes it’s best to turn a blind eye: “You can’t awaken someone who’s pretending to sleep” (Navajo proverb).

A look into a modern iteration of spyware: Pegasus, the flying horse!

Photo from Kaspersky

Pegasus was discovered thanks to a UAE human rights activist named Ahmed Mansoor. He was the target of a spear-phishing attack in which the Pegasus software was employed. He sent the questionable links to security experts at Citizen Lab, who in turn brought in another cybersecurity firm, Lookout, to help investigate.

The malware had been in operation for more than a year before it was discovered, probably as long as four years, which enabled it to develop a high degree of maturity. The software was capable of exploiting multiple iOS versions: essentially every version from iOS 7 all the way up through 9.3.3.

It dynamically and remotely jailbreaks a phone, giving itself carte blanche access across the device.

According to the IT security company Lookout’s report on Pegasus, “it is professionally developed and highly advanced in its use of zero-day vulnerabilities, code obfuscation, and encryption. It uses sophisticated function hooking to subvert OS- and application-layer security in voice/audio calls and apps including remotely accessing text messages, iMessages, calls, emails, logs, and more from apps including Gmail, Facebook, Skype, WhatsApp, Viber, FaceTime, Calendar, Line, Mail.ru, WeChat, Surespot, Tango, Telegram, and others.”

They know this about all of these apps because embedded within Pegasus are individual DLLs for each of them. Technically they’re not called DLLs (as we’re talking about iOS), but that’s essentially what they are: dynamically loaded libraries. Pegasus uses the phone’s jailbroken status to turn off code-signature checking, and then injects these modules into the respective apps. It’s got a Telegram hooking module, a WhatsApp hooking module, an iMessage hooking module, et cetera. And the key here is that it hooks pre-encryption.
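
To illustrate why hooking pre-encryption is so devastating, here is a minimal Python sketch. The ChatApp class and its toy XOR “cipher” are invented stand-ins, not any real app’s API: the injected wrapper records the plaintext before the app’s own encryption ever runs, so the traffic on the wire stays encrypted while the spy sees everything.

```python
captured = []  # what the spyware module would exfiltrate

class ChatApp:
    KEY = 0x5A  # toy single-byte XOR key, for illustration only

    def encrypt(self, message: str) -> bytes:
        return bytes(b ^ self.KEY for b in message.encode())

    def send(self, message: str) -> bytes:
        return self.encrypt(message)  # ciphertext goes over the wire

# The injected module replaces send() with a wrapper that records the
# plaintext first, then calls the original. Encryption on the wire is
# untouched; the hook simply sits above it.
_original_send = ChatApp.send

def hooked_send(self, message: str) -> bytes:
    captured.append(message)          # plaintext, pre-encryption
    return _original_send(self, message)

ChatApp.send = hooked_send
```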

Pegasus Delivery

The attack is very simple in its delivery and silent in dropping its payload.
It all starts when the attacker sends a website URL through SMS, email, social media, or any other message. The exploit targets WebKit, the general-purpose HTML display engine used not only by Safari but by many other web browsers. It’s what gets invoked when an HTTP link is accessed.

Once the user clicks the link, the software silently carries out a series of exploits against the victim’s device, remotely jailbreaking it so that the espionage software packages can be downloaded and installed.

A remote server sends the exploit via one-time-use links in order to prevent any kind of post-infection analysis (which also serves as a kind of assured licensing model).
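
The one-time-link idea can be sketched as follows (the class name, domain, and payload are hypothetical, not NSO’s real infrastructure): the server burns each token on first use, so a researcher revisiting a captured URL gets nothing back to analyze.

```python
import secrets
from typing import Optional

class ExploitServer:
    """Toy model of a server that serves each exploit link exactly once."""

    def __init__(self, payload: bytes):
        self._payload = payload
        self._live_tokens = set()

    def make_link(self) -> str:
        token = secrets.token_urlsafe(16)
        self._live_tokens.add(token)
        return "https://sms.example/" + token  # placeholder domain

    def fetch(self, url: str) -> Optional[bytes]:
        token = url.rsplit("/", 1)[-1]
        if token not in self._live_tokens:
            return None                   # unknown or already-used token
        self._live_tokens.discard(token)  # burn it: one use only
        return self._payload
```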

For example, when some entity says “we want to purchase this for use” (see Citizen Lab’s great write-up), they receive a Pegasus workstation from NSO Group, which connects them into this infrastructure. But NSO runs the infrastructure and sources the exploits from their own servers. So if your check bounces or your money doesn’t come through, you don’t get anything.

Stages of Pegasus

Stage 1 is the delivery and the WebKit vulnerability. This stage comes down over the initial URL in the form of an HTML file that exploits the WebKit vulnerability.

Stage 2 is the jailbreak stage. It is downloaded by the Stage 1 code and is tailored to the device type: the user agent in the web query tells the remote server who is asking and what device and technology is making the query, and Pegasus then sends a customized attack package for that device. One thing I find very interesting about Stage 2 is that it comes down as an obfuscated and encrypted package, encrypted with a unique key at each download, making traditional network-based controls ineffective. By always using a new key, you’re never going to be able to establish any pattern for matching, and every instance of the same thing ends up looking different.
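
A minimal sketch of that per-download encryption idea, using XOR with a fresh random pad as a stand-in for whatever cipher Pegasus actually uses: the same payload produces unrelated bytes on every delivery, so there is no stable signature for a network filter to match.

```python
import os

def deliver(payload: bytes) -> tuple:
    """Encrypt the payload under a brand-new random key for this download."""
    key = os.urandom(len(payload))               # new key every download
    ciphertext = bytes(p ^ k for p, k in zip(payload, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so applying the key again recovers the payload.
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

Two downloads of the identical Stage 2 code thus look completely different on the wire, even though each device decrypts them to the same bytes.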

Stage 3 is where the espionage software is initialized. This stage is downloaded by Stage 2 (and is also based on the device type). It is in this stage where Pegasus installs the hooks into the applications the attacker wishes to spy on.