SVR Attacks on Microsoft 365

FireEye is reporting the current known tactics that the SVR used to compromise Microsoft 365 cloud data as part of its SolarWinds operation:

Mandiant has observed UNC2452 and other threat actors moving laterally to the Microsoft 365 cloud using a combination of four primary techniques:

  • Steal the Active Directory Federation Services (AD FS) token-signing certificate and use it to forge tokens for arbitrary users (sometimes described as Golden SAML). This would allow the attacker to authenticate into a federated resource provider (such as Microsoft 365) as any user, without the need for that user’s password or their corresponding multi-factor authentication (MFA) mechanism.
  • Modify or add trusted domains in Azure AD to add a new federated Identity Provider (IdP) that the attacker controls. This would allow the attacker to forge tokens for arbitrary users and has been described as an Azure AD backdoor.
  • Compromise the credentials of on-premises user accounts that are synchronized to Microsoft 365 that have high privileged directory roles, such as Global Administrator or Application Administrator.
  • Backdoor an existing Microsoft 365 application by adding a new application or service principal credential in order to use the legitimate permissions assigned to the application, such as the ability to read email, send email as an arbitrary user, access user calendars, etc.

Lots of details here, including information on remediation and hardening.
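
Two of those techniques, the rogue federated identity provider and the backdoored application credential, leave artifacts that a tenant administrator can look for. Here's a rough audit sketch (my illustration, not FireEye's tooling): it assumes a Microsoft Graph access token with sufficient read permission (for example, Directory.Read.All) in a GRAPH_TOKEN environment variable, and it simply lists federated domains and service-principal credentials so that unexpected entries stand out.

```python
# Rough audit sketch (illustrative, not FireEye's remediation tooling): list
# federated domains and service-principal credentials in an Azure AD tenant
# so that an attacker-added IdP or application credential is easier to spot.
# Assumes the GRAPH_TOKEN environment variable holds a Microsoft Graph access
# token with sufficient read permission (e.g., Directory.Read.All).
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

# Technique 2: a trusted domain configured for federation is suspicious if it
# isn't one of your known AD FS endpoints.
domains = requests.get(f"{GRAPH}/domains", headers=headers).json()["value"]
for d in domains:
    if d.get("authenticationType") == "Federated":
        print("federated domain:", d["id"])

# Technique 4: a newly added key or password credential on an existing
# service principal grants the holder that application's permissions.
url = f"{GRAPH}/servicePrincipals?$select=displayName,keyCredentials,passwordCredentials"
while url:
    page = requests.get(url, headers=headers).json()
    for sp in page["value"]:
        creds = (sp.get("keyCredentials") or []) + (sp.get("passwordCredentials") or [])
        for cred in creds:
            print(sp["displayName"], "credential added:", cred.get("startDateTime"))
    url = page.get("@odata.nextLink")  # follow paging until exhausted
```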

The more we learn about this operation, the more sophisticated it becomes.

In related news, Malwarebytes was also targeted.

Posted on January 21, 2021 at 6:31 AM

Sophisticated Watering Hole Attack

Google’s Project Zero has exposed a sophisticated watering-hole attack targeting both Windows and Android:

Some of the exploits were zero-days, meaning they targeted vulnerabilities that at the time were unknown to Google, Microsoft, and most outside researchers (both companies have since patched the security flaws). The hackers delivered the exploits through watering-hole attacks, which compromise sites frequented by the targets of interest and lace the sites with code that installs malware on visitors’ devices. The booby-trapped sites made use of two exploit servers, one for Windows users and the other for users of Android.

The use of zero-days and complex infrastructure isn’t in itself a sign of sophistication, but it does show above-average skill by a professional team of hackers. Combined with the robustness of the attack code — which chained together multiple exploits in an efficient manner — the campaign demonstrates it was carried out by a “highly sophisticated actor.”

[…]

The modularity of the payloads, the interchangeable exploit chains, and the logging, targeting, and maturity of the operation also set the campaign apart, the researcher said.

No attribution was made, but the list of countries likely to be behind this isn’t very large. If you were to ask me to guess based on available information, I would guess it was the US — specifically, the NSA. It shows a care and precision that it’s known for. But I have no actual evidence for that guess.

All the vulnerabilities were fixed by last April.

Posted on January 20, 2021 at 6:00 AM

Injecting a Backdoor into SolarWinds Orion

CrowdStrike is reporting on a sophisticated piece of malware that was used to inject the SUNBURST backdoor into the SolarWinds Orion build process:

Key Points

  • SUNSPOT is StellarParticle’s malware used to insert the SUNBURST backdoor into software builds of the SolarWinds Orion IT management product.
  • SUNSPOT monitors running processes for those involved in compilation of the Orion product and replaces one of the source files to include the SUNBURST backdoor code.
  • Several safeguards were added to SUNSPOT to avoid the Orion builds from failing, potentially alerting developers to the adversary’s presence.

Analysis of a SolarWinds software build server provided insights into how the process was hijacked by StellarParticle in order to insert SUNBURST into the update packages. The design of SUNSPOT suggests StellarParticle developers invested a lot of effort to ensure the code was properly inserted and remained undetected, and prioritized operational security to avoid revealing their presence in the build environment to SolarWinds developers.

This, of course, reminds many of us of Ken Thompson’s thought experiment from his 1984 Turing Award lecture, “Reflections on Trusting Trust.” In that talk, he suggested that a malicious C compiler might add a backdoor into programs it compiles.

The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.

That’s all still true today.
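
To make the idea concrete, here's a deliberately toy sketch of a trojaned compiler wrapper, in the spirit of both Thompson's thought experiment and what SUNSPOT did to the Orion build. Everything in it, including the marker string and the backdoor code, is made up for illustration; it is not SUNSPOT's actual logic.

```python
# Toy illustration of the "trusting trust" idea: a wrapper around the real C
# compiler that silently splices a backdoor into one target source file before
# compiling it. The marker string and backdoor are invented for illustration.
import subprocess
import sys
import tempfile

TARGET_MARKER = "int check_password("   # how the wrapper recognizes the file it cares about
BACKDOOR = '\nint backdoor_password(const char *p) { return 0 == __builtin_strcmp(p, "letmein"); }\n'

def evil_cc(source_path: str, out_path: str) -> None:
    code = open(source_path).read()
    if TARGET_MARKER in code:
        # Modify the code only in memory; the file on disk, and anything a
        # reviewer audits, stays unchanged.
        code += BACKDOOR
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as tmp:
        tmp.write(code)
        patched = tmp.name
    # Hand the (possibly modified) source to the real compiler.
    subprocess.run(["cc", "-c", patched, "-o", out_path], check=True)

if __name__ == "__main__":
    evil_cc(sys.argv[1], sys.argv[2])
```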

Posted on January 19, 2021 at 6:16 AM

Friday Squid Blogging: China Launches Six New Squid Jigging Vessels

From Pingtan Marine Enterprise:

The 6 large-scale squid jigging vessels are normally operating vessels that returned to China earlier this year from the waters of Southwest Atlantic Ocean for maintenance and repair. These vessels left the port of Mawei on December 17, 2020 and are sailing to the fishing grounds in the international waters of the Southeast Pacific Ocean for operation.

I wonder if the company will include this blog post in its PR roundup.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on January 15, 2021 at 4:03 PM

Click Here to Kill Everybody Sale

For a limited time, I am selling signed copies of Click Here to Kill Everybody in hardcover for just $6, plus shipping.

Note that I have had occasional problems with international shipping. The book just disappears somewhere in the process. At this price, international orders are at the buyer’s risk. Also, the USPS keeps reminding us that shipping — both US and international — may be delayed during the pandemic.

I have 500 copies of the book available. When they’re gone, the sale is over and the price will revert to normal.

Order here.

EDITED TO ADD: I was able to get another 500 from the publisher, since the first 500 sold out so quickly.

Please be patient on delivery. There are already 550 orders, and that’s a lot of work to sign and mail. I’m going to be doing them a few at a time over the next several weeks. So all of you people reading this paragraph before ordering, understand that there are a lot of people ahead of you in line.

EDITED TO ADD (1/16): I am sold out. If I can get more copies, I’ll hold another sale after I sign and mail the 1,000 copies that you all purchased.

Posted on January 15, 2021 at 12:26 PM

Cell Phone Location Privacy

We all know that our cell phones constantly give our location away to our mobile network operators; that’s how they work. A group of researchers has figured out a way to fix that. “Pretty Good Phone Privacy” (PGPP) protects both user identity and user location using the existing cellular networks. It protects users from fake cell phone towers (IMSI-catchers) and surveillance by cell providers.

It’s a clever system. The players are the user, a traditional mobile network operator (MNO) like AT&T or Verizon, and a new mobile virtual network operator (MVNO). MVNOs aren’t new. They’re intermediaries like Cricket and Boost.

Here’s how it works:

  1. One-time setup: The user’s phone gets a new SIM from the MVNO. All MVNO SIMs are identical.
  2. Monthly: The user pays their bill to the MVNO (credit card or otherwise) and the phone gets anonymous authentication tokens (using Chaum blind signatures) for each time slice (e.g., hour) in the coming month.
  3. Ongoing: When the phone talks to a tower (run by the MNO), it sends a token for the current time slice. This is relayed to an MVNO backend server, which checks the Chaum blind signature of the token. If it’s valid, the MVNO tells the MNO that the user is authenticated, and the user receives a temporary random ID and an IP address. (Again, this is how MVNOs like Boost already work.) A minimal sketch of this token flow follows the list.
  4. On demand: The user uses the phone normally.
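
Here's a minimal sketch of the blind-signature token exchange, using toy textbook-RSA numbers rather than the paper's actual construction or parameters. The point is that the MVNO signs a token it cannot read at billing time, and later verifies that same token without being able to link it back to the billing event.

```python
# Toy sketch of a Chaum blind-signature token flow (textbook RSA with tiny,
# insecure parameters, for readability only; not the paper's construction).
import hashlib
import secrets

# Hypothetical MVNO signing key.
p, q = 61, 53
n = p * q      # public modulus (3233)
e = 17         # public exponent
d = 2753       # private exponent (e*d = 1 mod 3120)

def h(msg: bytes) -> int:
    """Hash a token into the RSA group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. At billing time the user blinds a token for one future time slice.
token = b"time-slice:2021-02-01T10:00Z"
m = h(token)
while True:
    r = secrets.randbelow(n - 2) + 2        # blinding factor
    try:
        r_inv = pow(r, -1, n)               # must be invertible mod n
        break
    except ValueError:
        continue
blinded = (m * pow(r, e, n)) % n

# 2. The MVNO signs the blinded value; it never sees the token itself.
blinded_sig = pow(blinded, d, n)

# 3. The user unblinds, yielding an ordinary RSA signature on the token.
sig = (blinded_sig * r_inv) % n

# 4. Later, when the phone presents (token, sig) via a tower, the MVNO
#    backend verifies the signature but cannot link it to step 2.
assert pow(sig, e, n) == m
print("token accepted:", token.decode())
```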

The MNO doesn’t have to modify its system in any way. The PGPP MVNO implementation is in software. The user’s traffic is sent to the MVNO gateway and then out onto the Internet, potentially even using a VPN.

All connectivity is data connectivity in cell networks today. The user can choose to be data-only (e.g., use Signal for voice), or use the MVNO or a third party for VoIP service that will look just like normal telephony.

The group prototyped and tested everything with real phones in the lab. Their approach adds essentially zero latency, and doesn’t introduce any new bottlenecks, so it doesn’t have performance/scalability problems like most anonymity networks. The service could handle tens of millions of users on a single server, because it only has to do infrequent authentication, though for resilience you’d probably run more.

The paper is here.

Posted on January 15, 2021 at 6:36 AM

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m speaking (online) as part of Western Washington University’s Internet Studies Lecture Series on January 20, 2021.
  • I’m speaking (online) at ITU Denmark on February 2, 2021. Details to come.
  • I’m being interviewed by Keith Cronin as part of The Center for Innovation, Security, and New Technology’s CSINT Conversations series, February 10, 2021 from 11:00 AM – 11:30 AM CST.
  • I’ll be speaking at an Informa event on February 28, 2021. Details to come.

The list is maintained on this page.

Posted on January 14, 2021 at 11:42 AM

Finding the Location of Telegram Users

Security researcher Ahmed Hassan has shown that spoofing locations in the “People Nearby” feature of Telegram’s Android app allows him to pinpoint the physical location of Telegram users:

Using readily available software and a rooted Android device, he’s able to spoof the location his device reports to Telegram servers. By using just three different locations and measuring the corresponding distance reported by People Nearby, he is able to pinpoint a user’s precise location.

[…]

A proof-of-concept video the researcher sent to Telegram showed how he could discern the address of a People Nearby user when he used a free GPS spoofing app to make his phone report just three different locations. He then drew a circle around each of the three locations with a radius of the distance reported by Telegram. The user’s precise location was where all three intersected.

[…]

Fixing the problem — or at least making it much harder to exploit it — wouldn’t be hard from a technical perspective. Rounding locations to the nearest mile and adding some random bits generally suffices. When the Tinder app had a similar disclosure vulnerability, developers used this kind of technique to fix it.
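
The geometry here is plain trilateration. Below is a minimal sketch with made-up positions and distances, on a flat x/y plane rather than real latitude/longitude coordinates: three spoofed observer positions plus the three reported distances pin the target down to a single point, which is also why reporting only coarse, rounded distances blunts the attack.

```python
# Minimal trilateration sketch: recover a target's position from three
# observer positions and the distance a service reports from each one.
# Coordinates are a flat x/y plane (e.g., kilometers); all numbers made up.

def trilaterate(p1, p2, p3, d1, d2, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise leaves two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Attacker spoofs three positions; the service reports a distance from each.
print(trilaterate((0.0, 0.0), (4.0, 0.0), (0.0, 6.0), 5.0, 3.0, 5.0))
# -> (4.0, 3.0): the target's position. If the reported distances were
# rounded to the nearest mile and jittered, the three circles would overlap
# in a large region instead of intersecting at a point.
```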

Posted on January 14, 2021 at 6:08 AM

On US Capitol Security — By Someone Who Manages Arena-Rock-Concert Security

Smart commentary:

…I was floored on Wednesday when, glued to my television, I saw police in some areas of the U.S. Capitol using little more than those same mobile gates I had — the ones that look like bike racks that can hook together — to try to keep the crowds away from sensitive areas and, later, push back people intent on accessing the grounds. (A new fence that appears to be made of sturdier material was being erected on Thursday.) That’s the same equipment and approximately the same amount of force I was able to use when a group of fans got a little feisty and tried to get backstage at a Vanilla Ice show.

[…]

There’s not ever going to be enough police or security at any event to stop people if they all act in unison; if enough people want to get to Vanilla Ice at the same time, they’re going to get to Vanilla Ice. Social constructs and basic decency, not lightweight security gates, are what hold everyone except the outliers back in a typical crowd.

[…]

When there are enough outliers in a crowd, it throws the normal dynamics of crowd control off; everyone in my business knows this. Citizens tend to hold each other to certain standards — which is why my 40,000-person town does not have 40,000 police officers, and why the 8.3 million people of New York City aren’t policed by 8.3 million police officers.

Social norms are the fabric that make an event run smoothly — and, really, hold society together. There aren’t enough police in your town to handle it if everyone starts acting up at the same time.

I like that she uses the term “outliers,” and I make much the same points in Liars and Outliers.

Posted on January 13, 2021 at 6:06 AM

Cloning Google Titan 2FA Keys

This is a clever side-channel attack:

The cloning works by using a hot air gun and a scalpel to remove the plastic key casing and expose the NXP A700X chip, which acts as a secure element that stores the cryptographic secrets. Next, an attacker connects the chip to hardware and software that take measurements as the key is being used to authenticate on an existing account. Once the measurement-taking is finished, the attacker seals the chip in a new casing and returns it to the victim.

Extracting and later resealing the chip takes about four hours. It takes another six hours to take measurements for each account the attacker wants to hack. In other words, the process would take 10 hours to clone the key for a single account, 16 hours to clone a key for two accounts, and 22 hours for three accounts.

By observing the local electromagnetic radiations as the chip generates the digital signatures, the researchers exploit a side channel vulnerability in the NXP chip. The exploit allows an attacker to obtain the long-term elliptic curve digital signature algorithm private key designated for a given account. With the crypto key in hand, the attacker can then create her own key, which will work for each account she targeted.
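
To see why extracting that key is sufficient, here's a minimal sketch using Python's cryptography library. The private scalar and the bare challenge are made up, and real FIDO/U2F signing covers authenticator data and a client-data hash rather than a raw challenge; the point is simply that anyone holding the recovered scalar can produce signatures the relying party will accept.

```python
# Illustrative sketch: a recovered ECDSA private scalar lets an attacker mint
# valid assertions for the registered credential. The scalar and challenge
# are made up; real FIDO/U2F signing covers more structured data.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Suppose the side channel yielded this P-256 private scalar (hypothetical).
recovered_scalar = 0x1F2E3D4C5B6A79881726354433221100FFEEDDCCBBAA99887766554433221101
cloned_key = ec.derive_private_key(recovered_scalar, ec.SECP256R1())

# The relying party's stored public key matches, because it was derived from
# the same scalar the genuine Titan key holds.
public_key = cloned_key.public_key()

# The "clone" answers an authentication challenge just like the real token.
challenge = b"relying-party-challenge-bytes"
signature = cloned_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Verification succeeds, so the forged assertion is indistinguishable here.
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("signature verified: clone accepted")
```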

The attack isn’t free, but it’s not expensive either:

A hacker would first have to steal a target’s account password and also gain covert possession of the physical key for as many as 10 hours. The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography. That means the key cloning — were it ever to happen in the wild — would likely be done only by a nation-state pursuing its highest-value targets.

That last line about “nation-state pursuing its highest-value targets” is just not true. There are many other situations where this attack is feasible.

Note that the attack isn’t against the Google system specifically. It exploits a side-channel vulnerability in the NXP chip. Which means that other systems are probably vulnerable:

While the researchers performed their attack on the Google Titan, they believe that other hardware that uses the A700X, or chips based on the A700X, may also be vulnerable. If true, that would include Yubico’s YubiKey NEO and several 2FA keys made by Feitian.

Posted on January 12, 2021 at 6:16 AM
