Thales Blog

Server Wasn’t Patched (Yada, Yada, Yada), All Your Data Has Been Stolen

October 12, 2017

Citing an unpatched server as the cause of a breach tells only a small part of the story. Protecting personal and confidential information is not as simple as keeping patches on a server up to date. The responsibility for securing data goes well beyond a single administrator's duty to keep systems patched.

Just like the Seinfeld television episode (Season 8, Episode 19) where many of the most important pieces of information in a story are left out by using the phrase “yada, yada, yada”, reporting on the root cause of a breach tells us little about why the sensitive data was not properly protected. Yes, a hacker might have gained access to an enterprise due to a single vulnerability on a web server, and yada, yada, yada, they lost a lot of confidential data. Why do we let the explanation of a single vulnerability pass as the cause of the breach? As a security or operations professional, why would anyone accept an architecture where a single breach of an internet-facing server reveals all the data in your network? “Defense in depth” was such a popular buzzword a couple of years back. Where is defense in depth, and why did it fail?

It is standard practice to separate your data tier from your presentation tier when designing any modern web application (in fact, PCI compliance requires that each server implement only a single primary function). I won't get into the details of a common three-tier architecture, but it's safe to say that the database containing the records isn't the typical entry point of a breach; the web server is. Why is that important? To get to the actual data, the attacker has to move through the network from the compromised web server to the database tier where the data lives.
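That tier separation can be expressed as a reachability policy: the internet may only reach the web tier, and only the application tier may open connections to the database. A minimal sketch, assuming made-up tier names and ports (real enforcement would live in firewalls or security groups, not application code):

```python
# Toy network policy: which tier may open connections to which, on what port.
# Tier names, ports, and rules are illustrative assumptions only.
POLICY = {
    ("internet", "web"): {443},   # public HTTPS terminates at the web tier
    ("web", "app"): {8080},       # web tier forwards requests to the app tier
    ("app", "db"): {5432},        # only the app tier may reach the database
}

def allowed(src: str, dst: str, port: int) -> bool:
    """Return True if the policy permits src -> dst on the given port."""
    return port in POLICY.get((src, dst), set())

# The internet reaches the web tier, but can never touch the database:
assert allowed("internet", "web", 443)
assert not allowed("internet", "db", 5432)
assert not allowed("web", "db", 5432)  # even the web tier is blocked
assert allowed("app", "db", 5432)
```

With a policy like this in place, a compromised web server alone cannot open a session to the database; the attacker is forced into additional, noisier steps.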

The breach of a web server typically gives hackers root privileges to run commands only on that system. This raises questions: How is the attacker able to move through the network? How is the data actually protected on the database server? With this in mind, here are some of the unanswered questions I have when a breach occurs, and the same questions anyone looking to protect their own servers should be asking:

  • How did the attacker get credentials for other servers once they breached the initial server or workstation? Did they use the same login credentials for the web server and the database server, allowing the attacker to move laterally throughout the environment?
  • Did they have any network monitoring to indicate that data was being exfiltrated?
  • Did they encrypt the data at a field or file level in the database?
  • If they encrypted, how was the attacker able to decrypt? Did they use proper key management and strong encryption (something more secure than just disk encryption)?
  • Did they steal the actual database or a backup copy?
  • Did they use SSH keys to secure connections to servers? If so, why were the database server's keys accessible from a web server?
  • Do they monitor the web server for unauthorized changes? If you have a whitelist of files on the web server, a database file showing up and being exfiltrated should be something you can detect.
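That last point, detecting unexpected files on a web server, can be sketched as a simple baseline comparison: hash everything at deploy time, then re-scan and report anything added, removed, or modified. This is illustrative only; a real deployment would use a host intrusion-detection or file-integrity tool, and the paths here are assumptions:

```python
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict[str, str]:
    """Map each file under root to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def diff(baseline: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Report files added, removed, or modified since the baseline."""
    return {
        "added": [f for f in current if f not in baseline],
        "removed": [f for f in baseline if f not in current],
        "modified": [f for f in current if f in baseline and current[f] != baseline[f]],
    }
```

Take a snapshot of the webroot at deploy time, re-run it on a schedule, and alert on any non-empty diff: a database dump appearing in the webroot shows up immediately under "added".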

In order to move the security field forward and understand how we can better protect our infrastructure, we need to start asking smarter, more thoughtful questions when a breach does occur. We need to ensure our data is protected, not just with encryption but also with access control. We need to monitor the network. We need to control login access and credentials. These should be considered the bare minimum.
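Field-level encryption with the key held apart from the data is one concrete form of that bare minimum. A minimal sketch using the Fernet recipe from the third-party `cryptography` library (the record and field names are made up; a production setup would fetch the key from a key-management service or HSM, never store it alongside the database):

```python
from cryptography.fernet import Fernet

# Assumption: in production this key comes from a KMS/HSM, not from
# generate_key() in the same process that stores the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical customer record; only the sensitive field is encrypted.
record = {"name": "Alice Example", "ssn": "123-45-6789"}
record["ssn"] = cipher.encrypt(record["ssn"].encode())

# A stolen copy of the database now contains only ciphertext for that
# field; without the separately stored key, it cannot be decrypted.
assert cipher.decrypt(record["ssn"]).decode() == "123-45-6789"
```

The point of the design is the separation: stealing the database, or a backup of it, yields ciphertext unless the attacker also compromises the key service, which is exactly the kind of second barrier defense in depth is supposed to provide.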

If we are only relying on a single admin to manually patch a server when vulnerabilities are discovered, we will all be fighting a losing battle.