Reflected API Cyber Attacks: An Emerging Supply Chain Headache
By Barak Engel, Chief “Geek” at EAmmune, author of “Why CISOs Fail” and “The Security Hippie”, CISO at StubHub and Bond, PLP, and Skytop Contributor / March 14th, 2022
Barak Engel’s company, EAmmune, manages security programs across many companies and verticals. He was CISO and Chief Privacy Officer for MuleSoft just prior to its $6.5 billion acquisition by Salesforce, and, as StubHub’s CISO, saw the company through its $4B acquisition by ViaGoGo and its massive digital transformation project.
Barak holds multiple board advisory positions, providing critical technology, risk, product and market insights to boards and CEOs. First gaining exposure to the intersection of business and security at Netvision, Israel’s largest ISP during the dotcom boom, he became focused on how security would inevitably become an essential business discipline. Barak has built, managed and counseled security departments across countless and diverse organizations, including MuleSoft, Amplitude Analytics, Live Nation/Ticketmaster, StubHub, Barnes and Noble, bebe Stores and many others.
Barak’s first book “Why CISOs Fail – The Missing Link In Security Management” has been widely praised for its brutally honest assessment of why so many security leaders fail to connect effectively with their CEO and board, why that failure has created unnecessary additional risks for enterprises, and how it can be fixed. In recognition of its critical contribution to the field, the book was nominated for and accepted into the Cybersecurity Canon in 2021. Barak’s second book, “The Security Hippie”, published in January 2022, aims to make the field of information security more accessible to the average reader.
Mistaken Identity
Everybody remembers at least one action movie (or MacGyver episode) where the protagonist uses a mirror to reflect a laser in order to bypass a protective grid. They often do it by redirecting the beam to fry some sensor around a corner, taking down the system in the process and gaining unauthorized access to something desirable.
It’s such an old Hollywood shtick that it’s practically a cliché by now, but we still love it because it’s so easy to understand.
Turns out that very same thing is now happening all over the world of IT. And you should at least know about it, because what it implies is that in the future we will begin to see cases of a new kind of mistaken identity: one corporation erroneously believing, based on all the evidence at its disposal, that another corporation is attacking it.
This is going to provide all sorts of amusement for, in particular, corporate lawyers. Just imagine this: the CISO goes to the General Counsel (GC) and tells them, factually, that they have incontrovertible proof, with logs and audit trails, that some other company has been attacking their own systems. Now that I think about it, GCs might want to rewatch Ghostbusters, because they will be chasing plenty of apparitions.
RAPI Attacks
Welcome to RAPI attacks. The API part is what it looks like, and the R stands for Reflected; in other words, Reflected API attacks.
Technobabble aside, this is what it means: in our modern cloud-based world, there are many cloud-based software providers that serve as a sort of enabler or sometimes middleman (we call those middleware) between other organizations. They perform an enormous number of technology functions, and are seemingly growing exponentially in number.
Practically all of them operate with a common set of hidden assumptions. For example, that they are authorized to fulfill their intended function as requested by their users. Sure, it’s obvious, but we don’t actually spell it out very often. In this case, the hidden assumption has a crucial side-effect: to do their job, these systems are granted explicit authorization to act on our behalf. Once onboarded, they are trusted.
As an aside, there is an entire world in information security and privacy called “Zero Trust” that applies here, and there are some phenomenally bright people (like Richard Bird) in that world that you can learn from. But for our purposes, that’s all I am going to say about it.
Our Trust in the Cloud: A Chink in the Armor
Back to our narrative: we have cloud systems that are trusted to perform a function with our own corporate systems. Time to discuss the second hidden assumption: we accept that the functionality they provide is, by and large, automated. After all, our purpose in spending money with the vendors is to increase productivity by relying on computers and software to do all sorts of stuff more efficiently than humans can. Just like the first assumption, this is an expectation that isn’t explicitly spelled out, but the end result is that these systems are now trusted and automated.
And now we turn to the third hidden assumption: incentives. By and large, the cloud vendors are indemnified against misuse of their own systems, and are clearly incentivized to avoid examining what their customers are doing. Liability drives the latter part. By calling themselves a “platform” and leaving responsibility for proper use of the platform to the customers, the vendors hand off the liability of misuse to the customers. There is nothing nefarious about this. It is the only sane approach for any cloud vendor to take, especially when customers sometimes do horrible and illegal things with free accounts (oh yes, it happens. A lot). Privacy rules also point in the same direction, as the vendor is directed not to assume ownership of PII that it doesn’t need. All of this is usually captured in the commercial paper underlying the engagement.
Which leads us to the finished, three-legged stool: middleware cloud platforms are, by and large, trusted, automated, and blind.
Let’s get to the point behind this entire piece, which I am certain you already see.
How These Attacks Occur
A malicious actor now has a pretty convenient way to attack any entity indirectly. All they need to do is “reflect” harmful code through one of these players towards a target and, if they are successful, gain access, while the target believes that the attacker is the legitimate cloud vendor. Just as our protagonist holds up a mirror to reflect the laser beam and defeat the security system around the corner, so does our bad guy bounce the attack off a trusted intermediary.
Let me illustrate this by outlining a benign and trivial, utterly hypothetical, very easy to understand example.
We’re all familiar with calendar automation tools – you know, the ones where you connect your corporate calendar to a cloud system that then adds an interface for anyone to book meetings with you at pre-specified time windows. They are very useful, and save a tremendous amount of effort for a lot of people.
Here is what they also do: connect directly to corporate groupware (like Google Calendar), with read/write permissions, fully authorized by the user (trusted), and often without the knowledge of corporate IT. They always provide a way for the booking party to describe what they wish to discuss… in a field… on the booking vendor’s web page. Heck, some of them actually allow attachments. The systems don’t usually inspect this description, for all the reasons mentioned above; they simply book (automated) while handing off the description to the calendaring system of the user (blind).
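To make the three-legged stool concrete, here is a minimal sketch of what that vendor-side pass-through might look like. Everything in it is hypothetical (the function and client names are invented for illustration, not taken from any real product), but the shape is the point: the description field travels, uninspected, from a stranger’s web form into the corporate calendar under the vendor’s standing authorization.

```python
# Hypothetical sketch of a booking vendor's handler: trusted, automated, and blind.
# Names like handle_booking and calendar_client are illustrative, not any real product's API.

def handle_booking(booking: dict, calendar_client) -> dict:
    """Create an event on the customer's corporate calendar from a public booking form."""
    event = {
        "summary": f"Meeting with {booking['requester_name']}",
        "start": booking["start_time"],
        "end": booking["end_time"],
        # The free-text description is copied verbatim, never inspected,
        # sanitized, or size-limited, into the downstream calendar system.
        "description": booking["description"],
        "attachments": booking.get("attachments", []),
    }
    # The vendor holds a standing read/write grant to the user's calendar, so this
    # write is fully authorized (trusted), happens with no human in the loop
    # (automated), and involves no review of the payload (blind).
    return calendar_client.insert_event(calendar_id="primary", event=event)
```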
Here’s the Headache
The thing is that trusting input from users is pretty much the origin of every information security threat. Every successful hack at some point depends on fooling a system into doing something it wasn’t intended to do in the first place, via malformed input. Even social engineering can be thought of this way, if one thinks of the human operator as the “system” and the con as the fooling part (duh).
But what if the corporate calendaring system (not the vendor!) has an unknown exposure in its internal booking engine that can be compromised via a malicious payload in the description field? Then one could “reflect” that payload via the automated external booking vendor and compromise the target system, all the while making it appear as though the vendor did it. That’s certainly what the audit trails will show, if they show anything at all.
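Here is a hedged sketch of how that reflection might look from the attacker’s side. The endpoint, field names, and the nature of the flaw are all invented for illustration; the point is simply that the request goes to the trusted middleware, not to the target, so the resulting write into the victim’s calendar arrives over the vendor’s authorized connection.

```python
# Hypothetical attacker-side sketch: the payload is "reflected" off the booking
# vendor's public API toward the target's internal calendaring engine.
# The URL, fields, and flaw are made up for illustration.
import requests

payload = "<input crafted to trigger the hypothetical flaw in the target's booking engine>"

requests.post(
    "https://scheduler.example/api/v1/bookings",  # the trusted middleware, not the target
    json={
        "host": "victim@target-corp.example",
        "slot": "2022-03-21T10:00:00Z",
        "requester_name": "Totally Legitimate Prospect",
        "description": payload,  # carried through, unread, into the corporate calendar
    },
    timeout=10,
)
# From the target's perspective, the calendar write originates from the vendor's
# servers and the vendor's authorized grant; that is what the audit trail records.
```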
Fun, Right?
I’ll also add that, because APIs are always customized (each one has its own special “language” that it uses), there is no easy way to monitor these interactions – you have to have a behavioral system designed explicitly to learn how each API behaves, and then be able to determine if a malicious payload is being transferred through an authorized connection. Tricky stuff.
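To give a flavor of what “learning how each API behaves” means, here is a toy sketch of per-field baselining: learn what normal values of one field look like for a single integration, then flag outliers before they reach the corporate system. Real behavioral products are far more sophisticated; the class name, signals, and thresholds below are purely illustrative assumptions.

```python
# Toy sketch of per-API behavioral baselining: learn what "normal" values of one
# field look like for a single integration, then flag outliers. Not a real product.
import math
import re

class FieldBaseline:
    def __init__(self):
        self.lengths = []

    def learn(self, value: str) -> None:
        # Record the length of every value seen during normal operation.
        self.lengths.append(len(value))

    def _mean_std(self):
        n = len(self.lengths)
        mean = sum(self.lengths) / n
        var = sum((x - mean) ** 2 for x in self.lengths) / n
        return mean, math.sqrt(var)

    def is_anomalous(self, value: str) -> bool:
        # Two crude signals: a length far outside what this API normally carries,
        # or markup/script-like content in a field that is normally plain text.
        mean, std = self._mean_std()
        too_long = abs(len(value) - mean) > 3 * max(std, 1.0)
        looks_like_code = bool(re.search(r"<script|{{|\$\(", value))
        return too_long or looks_like_code

# Usage: train on descriptions historically seen on this one integration,
# then evaluate new bookings before they land in the corporate calendar.
baseline = FieldBaseline()
for d in ["Intro call re: pricing", "Quarterly sync", "Demo of the new dashboard"]:
    baseline.learn(d)

print(baseline.is_anomalous("Catch-up over coffee"))                            # expected: False
print(baseline.is_anomalous("<script>fetch('https://evil.example')</script>"))  # expected: True
```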
As far as I can tell, this is already happening a fair amount(*), but I don’t see it discussed much at all. Instead, it’s sort of buried in larger, more complex conversations about supply-chain kill chains, and zero trust, and this and that, which makes it harder to understand unless you’re a total nerd. Seemed like a bit of explanation was in order.
And I find that naming things often helps, hence my suggestion of RAPI, or Reflected API attacks.
My good friend Ivan Novikov at Wallarm, whom I asked to review this so that I don’t make a bigger fool of myself than I typically do, pointed me to a couple of recent live examples of RAPIs: Google “hacking” Dropbox, and Flickr being “hacked by” AWS.
In both cases, these issues were reported by hackers inside of a bug bounty program, but that just tells you this stuff is real.