For network security, the devil is in the complexity
Edward Amoroso, AT&T Services' chief security officer, talks about public/private security efforts; the powerful elements of the Trusted Internet Connections initiative; and the importance of simplicity, elegance and control of complexity.
MTIPS, the Managed Trusted Internet Protocol Services program, is part of the move toward managed, network-based security services that providers such as AT&T are offering. As chief security officer at AT&T Services, Edward Amoroso oversees the security of the company's infrastructure and supports customer security operations as well.
Amoroso began his career with AT&T 24 years ago at Bell Laboratories, where he worked on Unix security and government security initiatives. He is involved in the company's network-based security strategies. He is a graduate of the Columbia Business School's senior executive program and holds a master's degree and doctorate from the Stevens Institute of Technology, where he also is an adjunct professor of computer science. He spoke recently with GCN's William Jackson about government security issues.
GCN: Despite long awareness of cyber threats, the number of exploits against government systems continues to grow. Why is security so hard?
AMOROSO: The complexity of systems that the global community has been building over the last 20 years has grown significantly. If there is one word that is considered a fundamental issue in the security community, it is complexity. The control and management of complexity is Job One in building secure systems.
Is that complexity inevitable if we are going to have the functionality and versatility we want?
The goal of engineering is to build things and manage complexity. The word "elegant" is one that is not heard enough in engineering circles. An elegant design is one with simple elements that accomplishes its functional objectives in a cost-effective manner. An inelegant solution is one that involves spaghetti code and an overly complex design with all sorts of connections. That, by definition, is insecure. So simplicity, elegance and the control of complexity would be, at the foundational level, the types of things that would help to deal with security.
Government and industry have acknowledged for years the need for cooperation to improve security. Why are we still struggling to achieve it?
I don’t think anyone is intentionally trying to avoid cooperation. I think everyone agrees that there is quite a bit to be gained from cooperation. It is more about what to share and how one uses that shared information. For example, vulnerability data is pretty easy to scoop up and share with antivirus companies. It has become so prevalent that, in some cases, the antivirus companies can barely keep up with all the reports coming in. But often, a company or an agency knows about a particular vulnerability and has to make a decision whether to disseminate that information or whether that dissemination would put its users or constituents at risk. So it is not that anybody has been paying lip service to cooperation. The models are difficult and need to be worked out, and I think we’ve seen some progress.
What is the proper role of network and service providers in securing their customers' networks?
The role of the service provider has evolved. Everyone in the service provider industry would say that this is an enormous focus for us moving forward. What we have had until now is a situation where service-level agreements between service providers and agencies involve requirements about latency and packet drop, and we have metrics associated with that. The service provider mentality for years has been simply to move traffic from Point A to Point B reliably, with great availability and dependability. The problem with that model is that embedded in that traffic is often a lot of spam, viruses and other malware.
Why shouldn’t service providers be extending their contractual relationships with customers via the service-level agreements to include dependable policy enforcement, filtering, antivirus processing and redirection of denial-of-service attacks? This is becoming an important part of the relationship between businesses and service providers, and I think it is gradually beginning to extend to the relationship with consumers as well.
Should government be outsourcing more of its security efforts?
From the perspective of the service provider, we often think "outsource" is the wrong term. We have features in our network that we believe ought to be employed. If a government agency uses our VPN services and we have the ability to join that VPN with firewall capability, that is a feature. Call it what you will, but really that is just turning on and using features that are available to you, and they can complement the kinds of things that are being done now. As for whether defense in depth is an important enough requirement that an agency should do things in the enterprise, at the perimeter and also in the network, we think it is. Typically, the role of the service provider is what is codified in an SLA, and we think those SLAs should include these features.
It is complementary and not necessarily mutually exclusive?
It becomes a budget and financial decision. Everyone would agree that adding layers of defense makes sense. I can’t imagine anyone saying that is a bad idea. The only time it becomes a question is when financial considerations enter into the picture and you can only afford to do A and B and not A, B and C.
What is the status of the Trusted Internet Connections initiative?
The realization of TIC is in the MTIPS offering. You can almost use it as a programmatic synonym for TIC. It is a collection of nodes used to coalesce and aggregate secure processing for agencies. There are two powerful components in TIC. One is enhancing the integrity of Internet connections and rethinking their functionality. That is a good idea. The second is the network-based element, namely that a service provider can embed into MTIPS controls some combined protections that you might not get in an individual gateway. Take a distributed denial-of-service attack, for example. DDoS attacks are becoming more prevalent. The advantage of a standardized gateway solution is that these protections can be embedded and provide downstream protection to anybody using that gateway. You can't put the protection against a denial-of-service attack at the door; you have to go out into the perimeter and stop that traffic buildup before it gets to the door.
Are agencies seeing the benefit of TIC and MTIPS yet?
We have to go through certification to get authority to operate. We have been through the [Homeland Security Department] one, and we have not announced authority to operate yet, but we're getting close. Once we have that final stamp, we can begin to operate our MTIPS service. We have announced two customers. In the certification, we demonstrate compliance with the requirements in each of our security operations centers, and then the government accreditation authority comes in.
It's a lot of work, and it is a huge accomplishment to get through it, and we're champing at the bit.
Is there any one piece of advice you would give to improve cybersecurity?
The one point we always make is that in designing systems, operating them and running an enterprise, controlling complexity is the most important thing. If the description of your system looks too complicated, then it probably is, and it is going to be insecure. You have to fight the urge to add complexity to systems, because once you do, you are heading down a path of insecurity.