Don't let SSL be used against you
Secure Sockets Layer is a double-edged sword that can protect your Internet traffic but hide malicious activity. Controlling and monitoring SSL is an important part of a security architecture, says Netronome co-founder David Wells.
Secure Sockets Layer can protect Internet traffic by providing encryption and authentication. But it is a double-edged sword that also can hide malware and other malicious traffic. That creates the need to inspect or otherwise control traffic that uses SSL.
David Wells, co-founder and vice president of technology at Netronome, is responsible for the technology behind the company’s SSL Inspector Appliance. Wells also has contributed to standards development for Frame Relay and Asynchronous Transfer Mode and has served on the board of directors of the ATM Forum. He has a doctorate from Cranfield Institute of Technology and holds a number of patents relating to telecommunications.
Wells talked recently with GCN’s William Jackson about SSL and network security.
GCN: What is SSL?
Wells: SSL stands for Secure Sockets Layer, which is a security mechanism for traffic over the Internet. It was originally invented by Netscape back in the mid-’90s and went through various versions. Eventually, the idea was formalized in the [Internet Engineering Task Force] version called TLS, for Transport Layer Security, but everyone commonly refers to both as SSL.
The way most people encounter it is in a Web session. When you go to a site with a URL that has HTTPS, that means it is HTTP over SSL or TLS and is therefore secure. SSL is generic in the sense that it runs on top of TCP, and you can run other applications on top of it. It doesn’t have to be HTTP.
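As a concrete illustration of that layering, here is a minimal Python sketch (the host name is just an assumed example): it opens a TCP connection, wraps it in TLS, and then speaks HTTP over the encrypted channel.

import socket
import ssl

context = ssl.create_default_context()

# TCP layer: a plain socket to the server.
with socket.create_connection(("example.com", 443)) as tcp_sock:
    # SSL/TLS layer: wrap the TCP socket in an encrypted, authenticated session.
    with context.wrap_socket(tcp_sock, server_hostname="example.com") as tls_sock:
        # Application layer: ordinary HTTP carried over SSL/TLS -- in other words, HTTPS.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(4096).decode(errors="replace"))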
How mature is SSL as a security technology?
It has been around since the mid-’90s, and it is very stable. Nobody should be running the early versions of SSL; almost anything you look at today is running TLS. Commonly, people talk about it as encryption, and SSL does include encryption, but the first thing it does is provide authentication of at least one of the parties in the conversation. When you set up an SSL session to a secure Web site, part of the process is that you are provided with information that guarantees that the Web site you are talking to is really the one you think you’re talking to. It authenticates the server you are going to. That is the first part of security, being able to trust that you are talking to the right person at the other end. Once you’ve established that trusted connection, it encrypts the data.
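A short sketch of those two steps, using Python’s standard ssl module with an assumed host name: the handshake authenticates the server first, and application data is encrypted only if that authentication succeeds.

import socket
import ssl

context = ssl.create_default_context()   # certificate and host-name checks are on by default

try:
    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            # The handshake completed, so the server's certificate checked out.
            subject = dict(item[0] for item in tls.getpeercert()["subject"])
            print("Authenticated server:", subject.get("commonName"))
            print("Encrypting with:", tls.version(), tls.cipher()[0])
except ssl.SSLCertVerificationError as err:
    # Authentication failed, so the handshake aborts and no application data is ever sent.
    print("Server could not be authenticated:", err)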
What is the trade-off in using SSL? Why not use it for all traffic?
It is one of life’s ironies that by doing something good, you create other problems at the same time. SSL gives you the security of knowing the traffic is encrypted so that nobody can intercept it, but unfortunately, that “nobody” doesn’t just mean the bad guys. The good guys can’t intercept it either. We’ve built security defenses to protect against all of the threats that were common 10 years ago, and that security infrastructure typically relies on an in-line network appliance being able to see the content of traffic and detect a problem before that traffic reaches the target.
The other thing that is increasingly important for enterprises is that they need to know what is going out of the enterprise. There are a lot of security solutions that will sit in the network and monitor outgoing traffic. So we’ve built this security architecture in the enterprise, and it works fine until you have traffic that is SSL. The very fact that the traffic is now secure means that the security infrastructure is now blind.
When should SSL be used, and when should it be avoided?
In general, using it is a good thing to do for most things because otherwise anyone who can intercept the traffic can read it. For a lot of the cloud applications or Web 2.0 applications that are hosted in the Internet, the standard interface to talk to all of those things is HTTPS. For a lot of things, it is not a matter of balancing the risk versus the reward. If SSL isn’t there, you aren’t going to do it. The thing that people are waking up to is that you should use it, but you need to be aware that you limit the ability of the security infrastructure to a degree, and you want a way to mitigate that.
How do you inspect encrypted traffic?
There are a number of options. There are some environments — a locked down, very secure agency — where you could insist that nobody use SSL. But that is the extreme. So you can use either a traditional proxy or a transparent proxy to gain access to the data.
The traditional proxy is very limited. The client has to set up a non-SSL session to the proxy. You have to configure all of your clients to send all of the traffic you are interested in to the proxy rather than going direct. That is not really the ideal solution. The security applications have to be in-line, and they typically are appliances that run at line rates. So they need to run at gigabit-plus, and they can’t afford the latency that the proxy function involves. A transparent proxy does what the security community calls a man-in-the-middle attack. If you have a secure connection end-to-end, you insert something the traffic will flow through, and it fools both ends into thinking it is not there, decrypting and re-encrypting the traffic.
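A heavily simplified Python sketch of that transparent-proxy idea: terminate the client’s TLS session with an enterprise-issued certificate, open a second TLS session to the real server, and relay the decrypted bytes between the two, which is where inspection would happen. The port, upstream host and certificate files are assumptions for illustration; a real appliance mints per-site certificates and does this work in hardware at line rate.

import socket
import ssl
import threading

LISTEN_PORT = 8443                   # assumed port where intercepted traffic arrives
UPSTREAM = ("example.com", 443)      # assumed real destination

# Context presented to the client: a certificate signed by the enterprise's own CA.
toward_client = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
toward_client.load_cert_chain("proxy-cert.pem", "proxy-key.pem")   # assumed files

# Context used toward the real server: ordinary client-side validation.
toward_server = ssl.create_default_context()

def relay(src, dst, label):
    # Copy decrypted data one way; this loop is where an inspection engine would look.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            print(f"inspecting {label}: {len(data)} bytes")
            dst.sendall(data)
    except OSError:
        pass

def handle(raw_client):
    with toward_client.wrap_socket(raw_client, server_side=True) as client_tls, \
         toward_server.wrap_socket(socket.create_connection(UPSTREAM),
                                   server_hostname=UPSTREAM[0]) as server_tls:
        inbound = threading.Thread(target=relay, args=(server_tls, client_tls, "inbound"))
        inbound.start()
        relay(client_tls, server_tls, "outbound")
        inbound.join()

listener = socket.create_server(("0.0.0.0", LISTEN_PORT))
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,)).start()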
What is the downside to inspecting encrypted traffic this way?
If you had a security protocol in which it was easy to do a man-in-the-middle attack, it would be a pretty poor protocol. SSL was designed to avoid the risk of those attacks. In order to use those attacks, you have to create a specific set of circumstances. You can do it in an enterprise environment, but you can’t do it in a public environment. So inspecting for threats within an enterprise is eminently achievable. But doing it on the Internet is virtually impossible.
During the handshake phase in an SSL session, the server authenticates itself by sending back a signed digital certificate. The client can validate that certificate. Then they exchange the keys needed for encryption. So the transparent proxy has to be in-line to see that handshake phase and have the ability to persuade the client to trust it, and that is not that difficult to do in an enterprise, where the entity that owns the private keys to the SSL server is also running the SSL inspection device.
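On the client side, that trust is typically arranged by distributing the enterprise’s CA certificate. A brief sketch, with an assumed CA bundle and internal host name:

import socket
import ssl

ctx = ssl.create_default_context(cafile="enterprise-ca.pem")   # assumed enterprise CA bundle

with socket.create_connection(("intranet.example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="intranet.example.com") as tls:
        # Validation succeeds only because the certificate presented by the
        # inspection device chains to the enterprise CA the client was told to trust.
        issuer = dict(item[0] for item in tls.getpeercert()["issuer"])
        print("Certificate issued by:", issuer.get("commonName"))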
How is encrypted traffic being used today by the bad guys?
There have been a number of examples of SSL being misused in the last six months or a year. A large bank had an employee who was leaking the source code of its automated trading system, which he sent out over an SSL session to a Web server in Germany. The bank had data loss mechanisms in place, but they couldn’t see what the content was. In the last two or three years there has been a big increase in people worried about data loss. It doesn’t require any malicious software to leak data over SSL.
Once malware has made it onto the desktop in the enterprise, it needs to report back home to send information out of the enterprise or pick up new commands. Increasingly, you are seeing that communication channel using SSL, so it is not visible. One of the ways you can improve security in an enterprise is by applying policy to SSL. There are two kinds of server certificates used by SSL servers to authenticate themselves. One is self-signed. That kind of certificate has no real authentication power at all. There is no way, when you receive that certificate, of validating it. What any commercial SSL site would use are signed certificates from a recognized authority. You can use the authority’s public key to authenticate it. So your policy should not allow an SSL session with a self-signed certificate.
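A small sketch of that policy check, using Python’s ssl module plus the third-party cryptography package (host name assumed): fetch the certificate a server presents and flag it if the issuer and subject are the same entity, meaning no recognized authority has vouched for it.

import ssl
from cryptography import x509   # third-party package: pip install cryptography

# Fetch the certificate the server presents (no validation is attempted here).
pem = ssl.get_server_certificate(("suspect.example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

if cert.issuer == cert.subject:
    # Self-signed: nobody else vouches for it, so policy says block the session.
    print("Self-signed certificate -- blocking this SSL session")
else:
    print("Certificate signed by:", cert.issuer.rfc4514_string())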
What developments do you see coming in SSL and SSL inspection?
It is only in the last few years that the need for SSL inspection has been widely understood. Even in the late ’90s, people knew how to use a transparent proxy to do SSL inspection, but it was slow. It wasn’t something you could deploy in the network. It is computationally very intensive. You have to do all the public-key handling for the authentication phase and then do the decrypt and re-encrypt. That means specialist hardware-accelerated appliances, and that is the trend.