Information Security for Systems
Brook S. E. Schoenfield
One definition of security architecture might be, "applied information security." Or perhaps, more to the point of this work, security architecture applies the principles of security to system architectures. It should be noted that there are (at least) two uses of the term, "security architecture." One of these is, as defined above, to ensure that the correct security features, controls, and properties are included in an organization's digital systems and to help implement these through the practice of system architecture. The other branch, or common usage, of "security architecture" is the architecture of the security systems of an organization. In the absence of the order provided through architecture, organizations tend to implement various security technologies "helter-skelter," that is, ad hoc. Without security architecture, the intrusion detection system (IDS) might be distinct and independent from the firewalls (perimeter). Firewalls and IDS would then be unconnected and independent of anti-virus and anti-malware on the endpoint systems and entirely independent of server protections. The security architect first uncovers the intentions and security needs of the organization: open and trusting or tightly controlled, the data sensitivities, and so forth. Then, the desired security posture (as it's called) is applied through a collection of coordinated security technologies. This can be accomplished very intentionally when the architect has sufficient time to strategize before architecting, to let the architecture feed a design, and to have a sound design support implementation and deployment. (Of course, most security architects inherit an existing set of technologies. If these have grown up piecemeal over a significant period of time, there will be considerable legacy, never properly architected, with which to contend. This is the far more common case.)
[I]nformation security solutions are often designed, acquired and installed on a tactical basis…. [T]here is no strategy that can be identifiably said to support the goals of the business. An approach that avoids these piecemeal problems is the development of an enterprise security architecture which is business-driven and which describes a structured inter-relationship between the technical and procedural solutions to support the long-term needs of the business.¹
Going a step further, the security architect who is primarily concerned with deploying security technologies will look for synergies between technologies such that the sum of the controls is greater than any single control or technology. And, there are products whose purpose is to enhance synergies. The purpose of the security information and event management (SIEM) products is precisely to create this kind of synergy between the event and alert flows of disparate security products. Depending upon needs, this is exactly the sort of synergistic view of security activity that a security architect will try to enhance through a security architecture (this second branch of the practice). The basic question the security architect implementing security systems asks is, "How can I achieve the security posture desired by the organization through a security infrastructure, given time, money, and technology constraints?"
Contrast the foregoing with the security architect whose task it is to build security into systems whose function has nothing to do with information security. The security architecture of any system depends upon and consumes whatever security systems have been put into place by the organization. Oftentimes, the security architecture of nonsecurity systems assumes the capabilities of those security systems that have been put into place. The security systems themselves are among the tools that the system security architect will employ, the "palette" from which she or he draws, as systems are analyzed and security requirements are uncovered through the analysis. You may think of the security architect concerned with security systems, the designer of security systems, as responsible for the coherence of the security infrastructure. The architect concerned with non-security systems will be utilizing the security infrastructure in order to add security into or underneath the other systems that the organization will deploy. In smaller organizations, there may be no actual distinction between these two roles: the security architect will design security systems and will analyze the organization's other systems in light of the security infrastructure. The two, systems and security systems, are intimately linked and, typically, tightly coupled. Indeed, as stated previously, at least a portion of the security infrastructure will usually provide security services such as authentication and event monitoring for the other systems. And, firewalls and the like will provide protections that surround the non-security systems.
Ultimately, the available security infrastructure gives rise to an organization's technical standards. Although an organization might attempt to create standards and then build an infrastructure to those standards, the dictates of resources, technology, skill, and other constraints will limit "ivory tower" standards; very probably, the ensuing infrastructure will diverge significantly from standards that presume a perfect world and unlimited resources.
When standards do not match what can actually be achieved, the standards become empty ideals. In such a case, engineers' confidence will be shaken; system project teams are quite likely to ignore standards, or make up their own. Security personnel will lose considerable influence. Therefore, as we shall see, it's important that standards match capabilities closely, even when the capabilities are limited. In this way, all participants in the system security process will have more confidence in analysis and requirements. Delivering ivory tower, unrealistic requirements is a serious error that must be avoided. Decision makers need to understand precisely what protections can be put into place and have a good understanding of any residual, unprotected risks that remain.
From the foregoing, it should be obvious that the two concentrations within security architecture work closely together when these are not the same person. When the roles are separate disciplines, the architect concerned with the infrastructure must understand what other systems will require, the desired security posture, perimeter protections, and security services. The architect who assesses the non-security systems must have a very deep and thorough understanding of the security infrastructure such that these services can be applied appropriately. I don't want to overspecify: if an infrastructure provides strong perimeter controls (firewalls), there is no need to duplicate those controls locally. However, the firewalls may have to be updated for new system boundaries and inter-trust zone communications.
In other words, these two branches of security architecture work very closely together and may even be fulfilled by the same individual.
No matter how the roles are divided or consolidated, the art of security analysis of a system architecture is the art of applying the principles of information security to that system architecture. A set of background knowledge domains is applied to an architecture for the purpose of discovery. The idea is to uncover points of likely attack: "attack surfaces." The attack surfaces are analyzed with respect to active threats that have the capabilities to exercise the attack surfaces. Further, these threats must have access in order to apply their capabilities to the attack surfaces. And the attack surfaces must present a weakness that can be exploited by the attacker, which is known as a "vulnerability." This weakness will have some kind of impact, either to the organization or to the system. The impact may be anywhere from high to low.
We will delve into each of these components later in the book. When all the requisite components of an attack come together, a "credible attack vector" has been discovered. It is possible that the architecture includes security controls that protect against the exercise of a credible attack vector. Each attack vector is paired with existing (or proposed) security controls; the combination of attack vector and mitigation indicates the risk of exploitation of that attack vector. If the risk that remains after application of the mitigation is low enough, then that credible attack vector will receive a low risk rating. Those attack vectors with a significant impact are then prioritized.
The enumeration of the credible attack vectors, their impacts, and their mitigations can be said to be a "threat model," which is simply the set of credible attack vectors and their prioritized risk rating.
Since there is no such thing as perfect security, nor are there typically unlimited resources for security, the risk rating of credible attack vectors allows the security architect to focus on meaningful and significant risks.
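The moving parts just enumerated, attack surface, threat, impact, mitigation, and residual risk, can be sketched as a toy data model. Everything below is purely illustrative: the book prescribes no particular schema, and the "one control lowers the rating by one step" scoring rule is an invented simplification, not a real risk-rating method.

```python
from dataclasses import dataclass, field

# Ordinal scale for the illustration; real risk-rating schemes vary widely.
RATING = {"low": 1, "medium": 2, "high": 3}

@dataclass
class CredibleAttackVector:
    """A threat with capability and access, an exposed attack surface,
    and an exploitable weakness -- all components must be present."""
    surface: str                  # the point of likely attack
    threat: str                   # who can exercise the surface
    impact: str                   # "low" | "medium" | "high"
    mitigations: list = field(default_factory=list)

    def residual_risk(self) -> int:
        # Toy rule: each existing control knocks the impact down one step,
        # but risk never drops below the floor of the scale.
        return max(RATING[self.impact] - len(self.mitigations), 1)

def threat_model(vectors):
    """A threat model is simply the set of credible attack vectors,
    prioritized by their residual risk rating."""
    return sorted(vectors, key=lambda v: v.residual_risk(), reverse=True)

vectors = [
    CredibleAttackVector("login form", "internet attacker", "high", ["WAF"]),
    CredibleAttackVector("admin API", "malicious insider", "high"),
    CredibleAttackVector("debug log", "malicious insider", "low"),
]
for v in threat_model(vectors):
    print(v.surface, v.residual_risk())
# admin API 3
# login form 2
# debug log 1
```

Because resources are finite, the point of the sort is exactly the prioritization the text describes: the unmitigated, high-impact vector surfaces first.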
Securing systems is the art and craft of applying information security principles, design imperatives, and available controls in order to achieve a particular security posture. The analyst must have a firm grasp of the basic computer security objectives of confidentiality, integrity, and availability, commonly referred to as "CIA"; computer security has long been described in these terms. These are the attributes that will result from appropriate security "controls." "Controls" are those functions that help to provide some assurance that data will only be seen or handled by those allowed access, that data will remain or arrive intact as saved or sent, and that a particular system will continue to deliver its functionality. Some examples of security controls would be authentication, authorization, and network restrictions. A system-monitoring function may provide some security functionality, allowing the monitoring staff to react to apparent attacks. Even validation of user inputs into a program may be one of the key controls in a system, preventing misuse of data handling procedures for the attacker's purposes.
The first necessity for secure software is specifications that define secure behavior exhibiting the security properties required. The specifications must define functionality and be free of vulnerabilities that can be exploited by intruders. The second necessity for secure software is correct implementation meeting specifications. Software is correct if it exhibits only the behavior defined by its specification, not, as today is often the case, exploitable behavior not specified, or even known to its developers and testers.²
The process that we are describing is the first "necessity" quoted above, from the work of Redwine and Davis (2004)²: "specifications that define secure behavior exhibiting the security properties required." Architecture risk assessment (ARA) and threat modeling are intended to deliver these specifications such that the system architecture and design include properties that describe the system's security.
The assurance that the implementation is correct (that the security properties have been built as specified, actually protect the system, and that vulnerabilities have not been introduced) is a function of many factors. That is, this is the second "necessity" given above by Redwine and Davis (2004).² These factors must be embedded into processes and into the behaviors of the system implementers, and the system must be tested for them. Indeed, a fair description of my current thinking on a secure development lifecycle (SDL) can be found in Chapter 9 of Core Software Security: Security at the Source, written by Dr. James Ransome and Anmol Misra, and is greatly expanded throughout that entire book.³ Architecture analysis for security fits within a mature SDL. Security assessment will be far less effective standing alone, without all the other activities of a mature and holistic SDL or secure project development lifecycle. However, a broad discussion of the practices that lead to assurance of implementation is not within the scope of this work. Together, we will limit our exploration to ARA and threat modeling, solely, rather than attempting to cover an entire SDL.
A suite of controls implemented for a system becomes that system's defense. If well designed, these become a "defense-in-depth," a set of overlapping and somewhat redundant controls. Because, of course, things fail. One security "principle" is that no single control can be counted upon to be inviolable. Everything may fail. Single points of failure are potentially vulnerable.
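The defense-in-depth idea fits in a few lines of code: several overlapping, independent checks guard a single action, and the failure (or outright crash) of any one layer denies rather than allows. The check functions and request fields below are hypothetical, a sketch of the principle rather than a real access-control implementation.

```python
# Defense-in-depth sketch: no single control is trusted to be inviolable.
# Every layer must agree before the action is permitted, and a *broken*
# layer (one that raises) also denies -- the system fails closed.

def network_allows(request) -> bool:
    # Hypothetical perimeter/zone check.
    return request.get("source_zone") == "internal"

def user_authenticated(request) -> bool:
    # Hypothetical session check.
    return bool(request.get("session_valid", False))

def user_authorized(request) -> bool:
    # Hypothetical role check.
    return "admin" in request.get("roles", [])

LAYERS = [network_allows, user_authenticated, user_authorized]

def permit(request) -> bool:
    for check in LAYERS:
        try:
            if not check(request):
                return False      # any single failing layer denies
        except Exception:
            return False          # a crashing layer also denies (fail closed)
    return True

print(permit({"source_zone": "internal", "session_valid": True, "roles": ["admin"]}))  # True
print(permit({"source_zone": "internal", "session_valid": True, "roles": []}))         # False
```

The redundancy is deliberate: if the zone check is misconfigured, the authentication and authorization layers still stand between the attacker and the action.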
I drafted the following security principles for the enterprise architecture practice of Cisco Systems, Inc. We architected our systems to these guidelines.
- Risk Management: We strive to manage our risk to acceptable business levels.
- Defense-in-Depth: No one solution alone will provide sufficient risk mitigation. Always assume that every security control will fail.
- No Safe Environment: We do not assume that the internal network or that any environment is "secure" or "safe." Wherever risk is too great, security must be addressed.
- CIA: Security controls work to provide some acceptable amount of Confidentiality, Integrity, and/or Availability of data (CIA).
- Ease Security Burden: Security controls should be designed so that doing the secure thing is the path of least resistance. Make it easy to be secure, make it easy to do the right thing.
- Industry Standard: Whenever possible, follow industry standard security practices.
- Secure the Infrastructure: Provide security controls for developers, not by them. As much as possible, put security controls into the infrastructure. Developers should develop business logic, not security, wherever possible.
The foregoing principles were used as intentions and directions for architecting and design. These principles are still in use by Enterprise Architecture at Cisco Systems, Inc., though they have gone through several revisions. National Cyber Security Award winner Michele Guel and Security Architect Steve Acheson are coauthors of these principles.
As we examined systems falling within Cisco's IT development process, we applied specific security requirements in order to achieve the goals outlined through these principles. Requirements were not only technical; gaps in technology might be filled through processes, and staffing might be required in order to carry out the processes and build the needed technology. We drove toward our security principles through the application of "people, process, and technology." It is difficult to architect without knowing what goals, even ideals, one is attempting to achieve. Principles help to consider goals as one analyzes a system for its security: The principles are the properties that the security is supposed to deliver.
These principles (or any similar, very high-level guidance) may seem too general to be helpful. But experience taught me that once we had these principles firmly communicated and agreed upon by most, if not all, of the architecture community, discussions about security requirements were much more fruitful. The other architects had a firmer grasp on precisely why security architects had placed particular requirements on a system. And the principles helped security architects remember to analyze more holistically, more thoroughly, for all the intentions encapsulated within the principles.
ARAs are a security "rubber-meets-the-road" activity. The following is a generic statement about what the practice of information security is about; a definition, if you will.
Information assurance is achieved when information and information systems are protected against attacks through the application of security services such as availability, integrity, authentication, confidentiality, and nonrepudiation. The application of these services should be based on the protect, detect, and react paradigm. This means that in addition to incorporating protection mechanisms, organizations need to expect attacks and include attack detection tools and procedures that allow them to react to and recover from these unexpected attacks.⁴
This book is not a primer in information security. It is assumed that the reader has at least a glancing familiarity with CIA and the paradigm, "protect, detect, react," as described in the quote above. If not, it may be useful to review an introduction to computer security before proceeding. It is precisely this paradigm whereby:
- Security controls are in-built to protect a system.
- Monitoring systems are created to detect attacks.
- Teams are empowered to react to attacks.
The Open Web Application Security Project (OWASP) provides a distillation of several of the most well-known sets of computer security principles:
- Apply defense-in-depth (complete mediation).
- Use a positive security model (fail-safe defaults, minimize attack surface).
- Fail securely.
- Run with least privilege.
- Avoid security by obscurity (open design).
- Keep security simple (verifiable, economy of mechanism).
- Detect intrusions (compromise recording).
- Don't trust infrastructure.
- Don't trust services.
- Establish secure defaults.⁵
Some of these principles imply a set of controls (e.g., access controls and privilege sets). Many of these principles, such as "Avoid security by obscurity" and "Keep security simple," are guides to be applied during design, approaches rather than specific demands to be applied to a system. When assessing a system, the assessor examines for attack surfaces, then applies specific controls (technologies, processes, etc.) to realize these principles.
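Two of the principles above, "use a positive security model" and "fail securely," can be contrasted in a few lines. The blocklist patterns and the username rule below are invented for illustration; real input validation depends entirely on the input's actual grammar.

```python
import re

# Negative (blocklist) model: try to enumerate bad input. Brittle, because
# the attacker only needs one pattern the list happens to miss.
BAD_PATTERNS = [re.compile(p) for p in (r"<script", r"';\s*--", r"\.\./")]

def blocklist_ok(value: str) -> bool:
    return not any(p.search(value) for p in BAD_PATTERNS)

# Positive (allowlist) model: define exactly what good input looks like
# and reject everything else by default -- the secure default.
USERNAME = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

def allowlist_ok(value: str) -> bool:
    return USERNAME.fullmatch(value) is not None

# The blocklist happily passes hostile input it has never heard of:
print(blocklist_ok("alice; DROP TABLE users"))   # True -- missed!
print(allowlist_ok("alice; DROP TABLE users"))   # False -- rejected by default
print(allowlist_ok("alice_01"))                  # True
```

The positive model also "keeps security simple": one short rule states everything that is permitted, and anything the rule's author never anticipated fails safely.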
These principles (and those like the ones quoted) are the tools of computer security architecture. Principles comprise the palette of techniques that will be applied to systems in order to achieve the desired security posture. The prescribed requirements fill in the three steps enumerated above:
- Protect a system through purpose-built security controls.
- Attempt to detect attacks with security-specific monitors.
- React to any attacks that are detected.
In other words, securing systems is the application of the processes, technologies, and people that "protect, detect, and react" to systems. Securing systems is essentially applied information security. Combining computer security with information security risk comprises the core of the work.
The output of this "application of security to a system" is typically security "requirements." There may also be "nice-to-have" guidance statements that may or may not be implemented. However, there is a strong reason to use the word "requirement." Failure to implement appropriate security measures may very well put the survival of the organization at risk.
Typically, security professionals are assigned a "due diligence" responsibility to prevent disastrous events. There's a "buck stops here" part of the practice: Untreated risk must never be ignored. That doesn't mean that security's solution will be adopted. What it does mean is that the security architect must either mitigate information security risks to an acceptable, known level or make the appropriate decision maker aware that there is residual risk that either cannot be mitigated or has not been mitigated sufficiently.
Just as a responsible doctor must follow a protocol that examines the whole health of the patient, rather than only treating the presenting problem, so too must the security architect thoroughly examine the "patient," any system under analysis, for "vital signs," that is, for security health.
The requirements output from the analysis are the collection of additions to the system that will keep the system healthy as it endures whatever level of attack is predicted for its deployment and use. Requirements must be implemented or there is residual risk. Residual risk must be recognized because of due diligence responsibility. Hence, if the analysis uncovers untreated risk, the output of that analysis is the necessity to bring the security posture up and risk down to acceptable levels. Thus, risk practice and architecture analysis must go hand-in-hand.
So, hopefully, it is clear that a system is risk analyzed in order to determine how to apply security to the system appropriately. We can then define architecture risk assessment (ARA) as the process of uncovering system security risks and applying information security techniques to the system to mitigate the risks that have been discovered.
Applying Security to Any System
Some of a system's security needs will be provided by an existing security infrastructure. Some of the features that have been specified through the analysis will be services consumed from the security infrastructure. And there may be features that need to be built solely for the system at hand. There may be controls that are specific to the system that has been analyzed. These will have to be built into the system itself or added to the security architecture, depending upon whether these features, controls, or services will be used only by this system, or whether future systems will also make use of them.
A typical progression of security maturity is to start by building one-off security features into systems during system implementation. During the early periods, there may be only one critical system that has any security requirements! It will be easier and cheaper to simply build the required security services as a part of the system as it's being implemented. As time goes on, perhaps as business expands into new territories or different products, there will be a need for common architectures, if for no other reason than maintainability and shared cost. It is typically at this point that a security infrastructure comes into being that supports at least some of the common security needs for many systems to consume. It is characteristically a virtue to keep complexity to a minimum and to reap economies of scale.
Besides, it's easier to build and run a single security service than to maintain many different ones whose function is more or less the same. Consider storage of credentials (passwords and similar).
Maintaining multiple disparate stores of credentials requires each of these to be held at stringent levels of security control. Local variations in any one of the stores may lower the overall security posture protecting all credentials, perhaps enabling a loss of these sensitive tokens through attack. Maintaining a single repository at a very high level, through a select set of highly trained and skilled administrators (with carefully controlled boundaries and flows), will be far easier and cheaper. Security can be held at a consistently high level that can be monitored more easily; the security events will be consistent, allowing automation rules to be implemented for raising any alarms. And so forth.
An additional value from a single authentication and credential storing service is likely to be that users may be much happier in that they have only a single password to remember! Of course, once all the passwords are kept in a single repository, there may be a single point of failure. This will have to be carefully considered. Such considerations are precisely what security architects are supposed to provide to the organization.
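As a concrete illustration of the discipline a single, well-run credential repository enforces, here is a sketch of salted, memory-hard password storage using only Python's standard library. The in-memory dictionary stands in for a hardened store, and the scrypt parameters are illustrative assumptions, not a tuning recommendation.

```python
import hashlib
import hmac
import os

# Sketch of a centrally controlled credential store. Passwords are never
# kept in the clear: each is salted and run through a memory-hard key
# derivation function (scrypt, available in Python's stdlib).

_STORE = {}  # username -> (salt, derived key); a real store is a hardened DB

def _derive(password: str, salt: bytes) -> bytes:
    # Illustrative scrypt cost parameters (n = CPU/memory cost, r = block
    # size, p = parallelism); production values should follow current guidance.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

def set_password(user: str, password: str) -> None:
    salt = os.urandom(16)            # unique salt per credential
    _STORE[user] = (salt, _derive(password, salt))

def verify(user: str, password: str) -> bool:
    if user not in _STORE:
        return False                 # fail closed on unknown users
    salt, key = _STORE[user]
    # Constant-time comparison, so timing doesn't leak partial matches.
    return hmac.compare_digest(key, _derive(password, salt))

set_password("alice", "correct horse battery staple")
print(verify("alice", "correct horse battery staple"))  # True
print(verify("alice", "hunter2"))                       # False
```

The point of the consolidation argument is that getting every detail here right (unique salts, a memory-hard KDF, constant-time comparison, monitored access) is feasible once, for one repository, and very hard to repeat correctly across many local variants.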
It is the application of security principles and capabilities that is the province and domain of security architecture as applied to systems.
The first problem that must be overcome is one of discovery.
- What risks are the organization's decision makers willing to undertake?
- What security capabilities exist?
- Who will attack these types of systems, why, and to attain what goals?
Without the answers to these formative questions, any analysis must either treat every possible attack as equally dangerous, or miss accounting for something important. In a world of unlimited resources, perhaps locking everything down completely may be possible. But I haven't yet worked at that organization; I don't practice in that world. Ultimately, the goal of a security analysis isn't perfection. The goal is to implement just enough security to achieve the surety desired and to allow the organization to take those risks for which the organization is prepared. It must always be remembered that there is no usable perfect security.
A long-time joke among information security practitioners remains that all that's required to secure a system is to disconnect the system, turn the system off, lock it into a closet, and throw away the key. But of course, this approach disallows all purposeful use of the system. A connected, running system in purposeful use is already exposed to a certain amount of risk. One cannot dodge taking risks, especially in the realm of computer security. The point is to take those risks that can be borne and avoid those which cannot. This is why the first task is to find out how much security is "enough." Only with this information in hand can any assessment and prescription take place.
Erring on the side of too much security may seem safer, more reasonable. But, security is expensive. Taken among the many things to which any organization must attend, security is important but typically must compete with a host of other organizational priorities. Of course, some organizations will choose to give their computer security primacy. That is what this investigation is intended to uncover.
Beyond the security posture that will further organizational goals, an inventory of what security has been implemented, what weaknesses and limitations exist, and what security costs must be borne by each system is critical.
Years ago, when I was just learning system assessment, I was told that every application in the application server farm creating a Secure Sockets Layer (SSL) tunnel was required to implement bidirectional, SSL certificate authentication. This was before the standard became Transport Layer Security (TLS). Such a connection presumes that, at the point at which the SSL is terminated on the answering (server) end, the SSL "stack" (the implementing software) will be tightly coupled to, usually even controlled by, the application that is providing functionality over the SSL tunnel. In the SSL authentication exchange, first, the server (listener) certificate is authenticated by the client (caller). Then, the client must respond with its certificate to be authenticated by the server. Where many different and disparate, logically separated applications coexist on the same servers, each application would then have to be listening for its own SSL connections. You typically shouldn't share a single authenticator across all of the applications. Each application must have its own certificate. In this way, each authentication will be tied to the relevant application. Coupling authenticator to application then provides robust, multi-tenant application authentication.
I dutifully provided a requirement to the first three applications that I analyzed to use bidirectional, SSL authentication. I was told to require this. I simply passed the requirement to project teams when encountering a need for SSL. Case closed? Unfortunately not.
I didn't bother to investigate how SSL was terminated for our application server farms.
SSL was not terminated at the application, at the application server software, or even at the operating system upon which each server was running. SSL was terminated on a huge, specialized SSL adjunct to the bank of network switches that routed network traffic to the server farm. The receiving switch passed all SSL to the adjunct, which terminated the connection and then passed the normal (not encrypted SSL) connection request onwards to the application servers.
The key here is that this architecture separated the network details from the application details. Further, and most importantly, SSL termination was quite a distance (in an application sense) from any notion of application. There was no coupling whatsoever between application and SSL termination. That is, SSL termination was entirely independent of the server-side entities (applications) that must offer the connecting client an authentication certificate. The point being that the infrastructure had designed "out," and had not accounted for, any need for application entities to have individual SSL certificate authenticators. The three applications couldn't "get there from here"; there was no capability to implement bidirectional SSL authentication. I had given each of these project teams a requirement that couldn't be accomplished without an entire redesign of a multi-million dollar infrastructure. Oops!
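Bidirectional SSL authentication corresponds to what is now usually called mutual TLS. A sketch of the server-side setup, using Python's ssl module with hypothetical certificate paths, makes the coupling problem concrete: this configuration can only live where TLS actually terminates, which in the server farm above was a network appliance far removed from any application.

```python
import ssl

# Server-side mutual ("bidirectional") TLS sketch. The certificate and key
# file paths are hypothetical placeholders. Note that everything below must
# run at the TLS termination point -- exactly what a farm-wide termination
# appliance takes away from individual applications.

def make_mutual_tls_context(certfile: str, keyfile: str, client_ca: str) -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # The server's own certificate, which the caller authenticates first:
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    # Then require the client to present a certificate we can verify
    # against a trusted CA -- the "bidirectional" half of the exchange:
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.load_verify_locations(cafile=client_ca)
    return ctx

# Hypothetical usage, wrapping a listening socket at the termination point:
# ctx = make_mutual_tls_context("server.pem", "server.key", "client_ca.pem")
# with socket.create_server(("0.0.0.0", 8443)) as srv:
#     tls_srv = ctx.wrap_socket(srv, server_side=True)
```

Each application owning its own `certfile` is what ties an authentication to the relevant application; when termination happens once, upstream, for everyone, there is no per-application certificate to offer.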
Before rushing full steam ahead into the analysis of any system, the security architect must be sure of what can be implemented and what cannot, what has been designed into the security infrastructure, and what has been designed out of it. There are usually at least a few different ways to "skin" a security problem, a few different approaches that can be applied. Some of the approaches will be possible and some difficult or even impossible, just as my directive to implement bidirectional SSL authentication was impossible given the existing infrastructure for those particular server farms and networks. No matter how good a security idea may seem on the face of it, it is illusory if it cannot be made real, given the limits of what exists or accounting for what can be put into place. I prefer never to assume; time spent understanding existing security infrastructure is always time well spent. This will save a lot of time for everyone involved. Some security problems cannot be solved without a thorough understanding of the existing infrastructure.
Almost every type and size of a system will have some security needs. Although it may be argued that a throw-away utility, written to solve a singular problem, might not have any security needs, if that utility finds a useful place beyond its original problem scope, the utility is likely to develop security needs at some point. Think about how many of the UNIX command line programs gather a password from the user. Perhaps many of these utilities were originally written without the need to prompt for the user's credentials and subsequently to perform an authentication on the user's behalf. Still, many of these utilities do so today. And authentication is just one security aspect out of many that UNIX system utilities perform. In other words, over time, many applications will eventually grapple with one or more security issues.
Complex business systems typically have security requirements up front. In addition, either the implementing organization or the users of the system or both will have security expectations of the system. But complexity is not the determiner of security. Consider a small program whose sole purpose is to catch central processing unit (CPU) memory faults. If this software is used for debugging, it will probably have to, at the very least, build in access controls, especially if the software allows more than one user at a time (multiuser). Alternatively, if the software catches the memory faults as a part of a security system preventing misuse of the system through promulgation of memory faults, preventing say, a privilege escalation through an executing program via a memory fault, then this small program will have to be self-protective such that attackers cannot turn it off, remove it, or subvert its function. Such a security program must not, under any circumstances, open a new vector of attack. Such a program will be targeted by sophisticated attackers if the program achieves any kind of broad distribution.
Thus, the answer as to whether a system requires an ARA and threat model is tied to the answers to a number of key questions:
- What is the expected deployment model?
- What will be the distribution?
- What language and execution environment will run the code?
- On what operating system(s) will the executables run?
These questions are placed against probable attackers, attack methods, network exposures, and so on. And, of course, as stated above, the security needs of the organization and users must be factored against these.
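To make the triage concrete, the four scoping questions above can be captured in a structured record and combined into a rough exposure estimate. The following Python sketch is purely illustrative: the field names, the scoring rubric, and the threshold are all my assumptions, not part of any published methodology.

```python
from dataclasses import dataclass


@dataclass
class SystemProfile:
    """Hypothetical triage record capturing the four scoping questions."""
    deployment: str            # e.g., "internal", "endpoint", "internet-facing"
    distribution: str          # e.g., "single-team", "enterprise", "public"
    runtime: str               # e.g., "python", "jvm", "c"
    operating_systems: list    # e.g., ["linux", "windows"]


def exposure_score(p: SystemProfile) -> int:
    """Rough, additive exposure estimate; higher means more attack surface."""
    score = 0
    # Broader deployment and distribution expose the system to more attackers.
    score += {"internal": 1, "endpoint": 2, "internet-facing": 3}.get(p.deployment, 2)
    score += {"single-team": 1, "enterprise": 2, "public": 3}.get(p.distribution, 2)
    # Memory-unsafe languages widen the set of attack methods to consider.
    score += 2 if p.runtime in ("c", "cpp", "asm") else 1
    # Each target operating system adds platform-specific exposure.
    score += len(p.operating_systems)
    return score


def needs_threat_model(p: SystemProfile, threshold: int = 6) -> bool:
    """Flag systems whose combined exposure crosses an (arbitrary) threshold."""
    return exposure_score(p) >= threshold
```

A widely distributed, internet-facing C program on multiple operating systems scores well above the threshold and is flagged for analysis; a single-team internal Python tool on one platform falls below it. Real triage, as the text goes on to argue, weighs more dimensions than these four.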
The answer to whether a system will benefit from an ARA/threat model is a function of the dimensions outlined above, and perhaps others, depending upon the domains on which the analysis depends. The assessment preprocess, or triage, will be outlined in a subsequent chapter. The simple answer to "which systems?" is: systems of any size, shape, or complexity, but certainly not all systems. A part of the art of security architecture assessment is deciding which systems must be analyzed, which will benefit, and which may pass. That is, unless in your practice you have unlimited time and resources. I've never had this luxury. Most importantly, even the smallest application may open a vulnerability, an attack vector, into a shared environment.
Unless every application and its side effects are safely isolated from every other application, each set of code can have effects upon the security posture of the whole. This is particularly true in shared environments. Even an application destined for an endpoint (a Microsoft Windows™ application, for instance) can contain a buffer overflow that allows an attacker an opportunity, perhaps, to execute code of the attacker's choosing. In other words, an application doesn't have to be destined for a large, shared server farm in order to affect the security of its environment. Hence, a significant step that we will explore is the security triage assessment of the need for analysis.
Size, business criticality, expense, and complexity, among others, are dimensions that may have a bearing, but none of them is solely deterministic. I have seen many enterprise IT efforts fail simply because there was an attempt to reduce this early decision to a set of binary, yes/no questions. These simplifications invariably attempted to achieve efficiencies at scale. Unfortunately, in practice today, the decision to analyze the architecture of a system for security is a complex, multivariate problem. That is why this decision will have its own section in this book. It takes experience (and usually more than a few mistakes) to ask the determining questions that are relevant to the system under discussion.
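The failure mode of a binary filter can be sketched in a few lines. In this invented example, a single yes/no gate ("is it internet-facing?") skips a system that a multivariate score would flag; the systems, attributes, and weights are all hypothetical, chosen only to illustrate the point.

```python
def binary_gate(system: dict) -> bool:
    """Naive single-question rule: review only internet-facing systems."""
    return system["internet_facing"]


def multivariate(system: dict) -> bool:
    """Weigh several dimensions; no single attribute decides alone."""
    weights = {
        "internet_facing": 3,
        "sensitive_data": 3,
        "shared_environment": 2,
        "third_party_code": 1,
    }
    score = sum(w for key, w in weights.items() if system[key])
    return score >= 3  # arbitrary illustrative threshold


# An internal HR portal: not internet-facing, but it handles sensitive
# data, runs in a shared environment, and embeds third-party code.
hr_portal = {
    "internet_facing": False,
    "sensitive_data": True,
    "shared_environment": True,
    "third_party_code": True,
}
```

The binary gate passes the HR portal without review, while the weighted score flags it: exactly the kind of misclassification that a two-dimensional, yes/no reduction invites.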
The answer to "Systems? Which systems?" cannot be overly simplified. Depending upon use cases and intentions, analyzing almost any system may produce significant security return on time invested. And, concomitantly, in a world of limited resources, some systems and, certainly, certain types of system changes may be passed without review. The organization may be willing to accept a certain amount of unknown risk as a result of not conducting a review.
- Sherwood, J., Clark, A., and Lynas, D. "Enterprise Security Architecture." SABSA White Paper, SABSA Limited, 1995–2009.
- Redwine, S. T., Jr., and Davis, N., eds. (2004). "Processes to Produce Secure Software: Towards more Secure Software." Software Process Subgroup, Task Force on Security across the Software Development Lifecycle, National Cyber Security Summit, March 2004.
- Ransome, J. and Misra, A. (2014). Core Software Security: Security at the Source. Boca Raton (FL): CRC Press.
- NSA. "Defense in Depth: A practical strategy for achieving Information Assurance in today's highly networked environments." National Security Agency, Information Assurance Solutions Group STE 6737.
- Open Web Application Security Project (OWASP) (2013). Some Proven Application Security Principles.
Certain names and logos on this page and others may constitute trademarks, servicemarks, or tradenames of Taylor & Francis LLC. Copyright © 2008–2015 Taylor & Francis LLC. All rights reserved.