In-Depth
Risky Business
The work of risk analysis—evaluating security threats, alerts and all-out panic attacks—is vital to keeping your network safe and sound and you sane.
- By Roberta Bragg
- 09/01/2001
Some days I just want to put my head down on my
desk and give up. I'm talking about those days—just
as I'm starting to feel proud to be evangelizing
the many built-in security features of Windows
2000—when a really dumbfounding security alert
from Microsoft arrives in my inbox and shatters
my beliefs. On those days, becoming a Wal-Mart
greeter sounds appealing.
Oh, I know that every operating system has flaws
(See "Severity Metrics"), and I'm glad that Microsoft
is telling me about its latest. But, sometimes,
the cumulative effect of what I'm hearing just
bruises my soul. Nevertheless, those days are
few and far between, and I've yet to lose my faith.
In fact, I've often been asked how I can respond
so calmly to the screaming meemies out there who
are constantly raising some alert about this or
that. Why haven't I joined them in parading around
on the virtual street corner with placard raised
proclaiming the end of the world? How do I provide
a calm, measured response that dispels fears and
prevents panic among the masses?
It's not that I don't feel the need to consider
their messages. It's just that I don't believe
they're all of equal value and, thus, don't require
an equal response. You see, I've got a secret
approach to these doomsdayers: Most of the time
they're not telling all of the truth! And, when
they are, it may not apply to you or may not apply
if you've taken serious charge of your systems
and use sound security practices. A greater risk
may come from a hurricane, a power outage or
a disgruntled employee.
You already knew that, right? OK. Then all you
have to do is figure out which security alerts
you should be paying attention to and calmly go
about making sure your systems are protected.
How do you do this? Simple. Learn to apply the
principles of risk analysis to all of the potential
threats to your systems and promise Sister Roberta
that you'll stop jerking your knee up into your
chin every time you see the words "Microsoft"
and "security bug" in the same sentence.
While keeping that second promise is something
you'll have to do, I can help you understand the
basic principles of risk analysis and how to apply
them. Once you've started down this path by identifying
the more mundane threats to your network, you'll
be better able to evaluate Chicken Little's frantic
cries.
The Basics of Risk Analysis
While esoteric definitions of risk analysis exist
("Risk analysis is the science of putting the
future at the service of the present," from the
Journal of Risk Analysis), a more straightforward
approach can be found in Will Ozier's article,
"Risk Analysis and Assessment," from Auerbach
Press' Handbook of Information Security Management.
His article defines risk as, "…the potential for
harm or loss," and risk analysis as, "…the process
of analyzing a target environment and the relationships
of its risk-related attributes."
The target environment, in our case, is our information
systems. The risk-related attributes are numerous,
but certainly include exposure to hostile networks
such as the Internet, employee misuse, acts of
nature, bugs in software and human stupidity.
Risk analysis, when understood and practiced properly,
provides a way to organize your understanding
of the what-if and put it into perspective by
judging the financial consequences against how often
an event might happen and how likely it is to happen at
all. Thus, you can look at all of the possibilities
and weigh them on a scale that allows you to put
your efforts and anxiety where they'll do the
most good.
Severity Metrics
Ever wonder which products
pose more of a threat to your
network? Take a look at the
Computer Emergency Response
Team's list of vulnerabilities
by severity. A large number
of known vulnerabilities to
information systems (OSes, applications,
routers, modems and so on) is
ordered and listed at www.kb.cert.org/vuls/bymetric.
It makes for really interesting
reading. In the top 10, only
three Microsoft-related vulnerabilities
are ranked; there are more Unix-related
issues detailed.
So what's the top vulnerability?
With a metric of 108.16, it's a
flaw in BIND T_NXT record processing
that may cause a buffer overflow
(repaired in version 8.2.2) with
the potential to give
an attacker control over the
name server, potentially at
the root (administrator) level.
IIS' superfluous file name decoding
flaw comes in fourth at 79.31.
This vulnerability might allow
an attacker to gain access to
the IIS server with the privileges
of IUSR_computername. This account
is limited by the privileges
and file access assigned to
it. A patch exists to correct
this issue.
The point here isn't to pit
the relative security of one
thing against another—it's merely
to give you some perspective
(and, possibly, ammunition).
The metrics, by the way, are
calculated in a relative manner
(using 0 as the lowest and 180
as the highest) and consider
factors such as how widely the
vulnerability is known, whether
there's knowledge of the vulnerability
being exploited, whether Internet
infrastructure (think globally,
act locally) is at risk, how
many systems are at risk, the
impact and ease of exploitation,
and any preconditions that must
exist before the vulnerability
can be exploited.
The results may assist you
in identifying very serious
vulnerabilities. CERT advises
that any vulnerability with
a rating above 40 is generally
severe enough to be considered
serious. As a note, CERT cautions
that ratings aren't linear.
The Microsoft IIS vulnerability
above with its rating close
to 80 isn't twice as bad as
the "Compaq Web-enabled management
software buffer overflow in
user authentication name" vulnerability
with a metric of almost 40.
—Roberta Bragg
Properly done, risk analysis allows you to ignore
threats that can't affect you ("I don't use that
product so I don't need to be concerned," or "I'm
not connected to the Internet so the worm can't
get me"); spend less time on that which carries
little chance of happening ("Someone's going to
capture my SSL-encrypted, changed-frequently password,
decrypt it and use it to obtain access to my network
resources"); and devote your time to those things
that stand a definite chance of affecting you
(hostile code in e-mail attachments, weak protection
of sensitive files and folders, and penetration
through paths, such as modems, that go around
your firewall).
While no numeric system, gut feeling or even
statistical analysis can be a hundred percent
correct, we can come up with a methodology that
allows you to compare calculated threats. One
such process is the Annualized Loss Expectancy
(ALE) calculation. This is derived by multiplying
the Single Loss Expectancy (SLE)—the expected
monetary loss for a single occurrence of some
event—times the Annualized Rate of Occurrence
(ARO)—how many times the event might occur in
a single year.
For example, to determine the merits of properly
monitoring your network's Internet traffic and
how much money you can expend on that process,
you might have to calculate the price tag should
each threat occur. How much would it cost if the
next Internet worm or macro virus or other hostile
attachment breached your defenses and was able
to attack your network? Don't forget the cost
of lost productivity and the time it takes to
repair systems, clean e-mail servers and restore
data. How many times per year might this happen?
Once you have these two answers, simply multiply
them to arrive at the ALE.
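To make the arithmetic concrete, here's a minimal
sketch of that calculation in Python. The dollar
figure and occurrence rate below are purely
hypothetical placeholders; only the SLE-times-ARO
relationship comes from the definition above.

    def annualized_loss_expectancy(sle, aro):
        """ALE = Single Loss Expectancy (SLE) x Annualized Rate of Occurrence (ARO)."""
        return sle * aro

    # Hypothetical example: one successful e-mail worm outbreak costs $12,000
    # (cleanup labor, lost productivity, restores) and might happen twice a year.
    sle = 12000.00   # expected loss per occurrence, in dollars (assumed)
    aro = 2          # expected occurrences per year (assumed)
    print("ALE = $%.2f per year" % annualized_loss_expectancy(sle, aro))   # ALE = $24000.00 per year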
Would purchasing a new virus-checking product
that works on your entry points, frequent updating
of desktop virus patterns, or training of administrators
and users reduce the number of effective attacks
and/or reduce the amount of loss? How much would
it cost to put these solutions into place? How
does this threat rank as opposed to others? Once
you know the answers to these questions, not only
can you determine the threats that could harm
your company the most, but you can identify which
ones would be best alleviated by your efforts.
The benefit of risk analysis isn't in exposing
the enormous costs that may result if some threat
becomes reality. It's that you can put all risks
in proper perspective and find those you can effectively
reduce by applying your resources. This process
is called risk assessment; risk management is
the process of doing something about it.
To use the results of your analysis, the cost
of solutions must also be calculated. By identifying
the problem and what it may cost should it occur,
and then calculating the cost of a solution that will
reduce the risk (and, thus, the cost), you can
compare the two in relative terms.
Does it cost more to protect the systems than
you'd lose if the risk became reality? Can some
intermediate solution obtain most of the benefits
at a reduced cost? This is the kind of analysis
that'll result in the effective use of your current
resources. It can also help you obtain the budget
to put solutions into production.
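One common way to frame that comparison, sometimes
called return on security investment, is to weigh
the reduction in ALE a safeguard buys against its
annualized cost. A rough sketch, with every figure
below invented for illustration:

    def safeguard_net_benefit(ale_before, ale_after, annual_cost):
        """Net annual benefit of a safeguard: the ALE it eliminates minus what it costs per year."""
        return (ale_before - ale_after) - annual_cost

    # Hypothetical figures: gateway virus filtering cuts expected worm losses
    # from $24,000 to $6,000 a year and costs $5,000 a year to buy and maintain.
    print(safeguard_net_benefit(24000, 6000, 5000))    # 13000 -> worth doing
    # An intermediate solution: desktop scanners plus user training.
    print(safeguard_net_benefit(24000, 10000, 1500))   # 12500 -> nearly as much benefit at far less cost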
Study Threats to Your Network
Does all this non-technical mumbo-jumbo sound
like a lot of work? Where will you get the statistics?
How can you calculate the numbers? Does anybody
really know the probability or frequency that
some risk will occur?
Risk analysis, like any other business activity,
has many approaches. The first thing to determine
is which type of assessment (assignment of value
to assets, threat frequency, consequence and so
on) you wish to do. There are two major divisions:
quantitative and qualitative. Quantitative assessment
is exhaustive and applies hard-dollar values to
every aspect (how much will the lack of a firewall
cost?). Qualitative assessment uses non-numeric
results (you don't have a firewall, therefore
you're at risk). Most risk analysis sits somewhere
between these extremes. We could even argue, as
Ozier does, that a purely quantitative risk assessment
is impossible, as we're attempting to apply hard numbers
to qualitative properties (how much will loss
of esteem cost?). Or you could insist, as many
practical managers do, that using the quantitative
approach is a bunch of hooey. They argue that
it costs a ton to get those figures and complete
the analysis; when you're done, a qualitative
analysis would give you the same ordering of threats
and identify the same practices that can provide
the most benefit. In other words, here's one place
where gut feeling can outrank pure science.
If you'd like to investigate the quantitative
approach further, you should realize that most
companies that use such techniques have long since
abandoned paper and pencil and either have automated
their calculations or purchased a risk-analysis
package such as C&A Systems COBRA (www.pcorp.u-net.com),
Crystal Ball (www.decisioneering.com/index.html)
or Pertmaster (www.pertmaster.com).
These programs still require extensive work, but
remove the tedium of calculations. They outline
the steps and allow you to focus on the technical
and business issues particular to your systems.
Remember that even a sophisticated, vetted program
won't produce good results unless you realize
the impact of GIGO (garbage in, garbage out).
To get accurate results, you may need to enlist
the support and assistance of business entities
beyond IT.
Any good risk analysis considers more than the
impact of threats to information systems. To do
this right, you'll want to enlist the aid of non-IT
departments. As a techie, however, you can gain
a lot of the benefit of risk analysis by taking
the first look at IT issues.
The approach I usually recommend is primarily
qualitative. There are three parts
to this process.
Know Your Enemy, Your Strengths
and Your Weaknesses
Instead of spending countless hours attempting
to apply a dollar amount to every risk, use your
experience and the knowledge of your peers to
catalog and then apply a weighted factor to each
threat. You'll still have to estimate relative
cost, chance and frequency, as well as mitigating
factors. I use a scale of 1 to 10, with 10 being
highest. Compiling and ordering these results
will tell you where to spend your energy and money.
The process starts simply. List the threats and
determine the information you want to accumulate
about each. You may want to prepare a chart, Excel
spreadsheet or Word table, like Table 1. Along
the left side of the table is a partial list of
threats to a typical small business network with
Internet connectivity. (For this to work, you
must list all of the potential threats to your
systems.) Across the top of the table, in columns,
are designations for information necessary to
start the process. These represent assets, frequency,
certainty, mitigating factors and space for an
overall rating. Table 2 details a partial list
of the process. Remember these are representative
ratings for a fictitious company. Only you know
your systems and can compile an appropriate list
of threats and their ratings.
Threat | Asset/Process | Value of Loss | Frequency (number of times annually) | Weight | Certainty | Mitigating Factor | Rating
Hostile code in e-mail attachment | x | x | x | x | x | x | x
Hostile code from Web site | x | x | x | x | x | x | x
Port scan | x | x | x | x | x | x | x
DDOS attack | x | x | x | x | x | x | x
Power spike/brown out | x | x | x | x | x | x | x
Malicious employee | x | x | x | x | x | x | x
Mistake (data entry in product pricing table) | x | x | x | x | x | x | x
Misuse of system | x | x | x | x | x | x | x
Data theft (stolen customer credit card numbers from Web site) | x | x | x | x | x | x | x

Table 1. Stage one of risk analysis is to develop
a table to weigh the risks to your network in
specific areas and give each aspect of the risk
a weighting.
Table 2. Once you've determined the most serious
risks, the next step is to identify solutions or
products and practices that can reduce the threat
and, thus, the cost.
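If a spreadsheet isn't handy, the same chart fits
in a few lines of code. This sketch mirrors Table
1's columns; every 1-to-10 rating is a made-up
placeholder for a fictitious shop, and the way the
mitigating factor is folded in (here it simply
subtracts from the total) is my assumption, not a
prescribed formula.

    # Each threat gets a 1-10 rating per Table 1 column (10 = highest).
    # All numbers are illustrative placeholders, not recommendations.
    threats = {
        "Hostile code in e-mail attachment": {"loss": 7, "frequency": 9, "weight": 8, "certainty": 9, "mitigation": 4},
        "DDOS attack":                       {"loss": 6, "frequency": 3, "weight": 5, "certainty": 4, "mitigation": 5},
        "Power spike/brown out":             {"loss": 5, "frequency": 4, "weight": 4, "certainty": 6, "mitigation": 8},
    }

    def rating(r):
        # Higher loss, frequency, weight and certainty raise the rating;
        # strong mitigating factors pull it back down.
        return r["loss"] + r["frequency"] + r["weight"] + r["certainty"] - r["mitigation"]

    for name, r in sorted(threats.items(), key=lambda item: rating(item[1]), reverse=True):
        print("%3d  %s" % (rating(r), name))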
Assets represent the computer systems, data,
public confidence, productivity and so on that
may be affected by such a threat. In some cases
you might not see the loss in replacement systems,
but in the labor required to fix the problem.
Record this as well. Provide a value column, but
don't record an exact value; rather, rate each asset's
relative importance. For example, it's obviously more
serious if you lose the financial database server
than a desktop system. You might, using my scale
or one you develop, rate the loss of desktops
as a three, while loss of the database server
might represent a 10. When you're done, you can
very easily see the threats that represent catastrophic
scenarios.
But you're not finished. The next step is to
determine the frequency of each potential threat.
You want to know how many times a year this might
occur. (For instance, you might like to know how
often a new e-mail attachment containing hostile
code is released on the Internet.) A good source
for some of these statistics is the CERT site.
CERT is the Computer Emergency Response Team at
Carnegie Mellon University. You can find quarterly
reports at www.cert.org/summaries/.
In addition to releasing security alerts, CERT
keeps statistics on the nature of information
system threats. Practical notes might also be
obtained from the Honeynet Project at http://project.honeynet.org/.
These folks leave a test network open on the Internet
and record the techniques used to attack their
systems. You won't find quantified statistics,
but you will find—in numerous "Know Your Enemy"
white papers—various factoids, such as the 524
TCP port 139 (NetBIOS Session Service) scans in
one month.
Brothers and Sisters, It's Time
It's time to stand up and be
counted. It's time to look in
the mirror and say, "I'm mad
as hell and I'm not going to
take it anymore." It's time
to leave your office cubicle
and repeat the refrain.
I say this not because the
Code Red Worm infected thousands
of unpatched IIS servers and
denied service to other systems
and networks in July. While
this evil piece of code spread
havoc like some Old Testament
devil, it's only a symptom of
our disinterest, a symptom of
our ability to hide behind fences
and run our networks as if they
were appendages of our self-centered
selves. Like some frontier explorers,
at the first sign of attack,
we circle the wagons and make
it about them and us.
But it's not time to point
the finger. Yeah, Microsoft
should have written the product
better in the first place. OK,
the installer should have followed
good security practices and
removed the mappings. Sure,
the admin should be keeping
the system patched. But is management
providing the resources to do
this? Or do they only listen
to the security mantra when
big bucks have already been
lost because their unpatched
and unhardened systems have
gone down? More than "non-patchers"
are at fault here.
All righty, then. What can
we do? How can we protect our
network from untrained admins,
unsophisticated users, stingy
managers and well-meaning executives
who don't have a clue? Patch
your servers. Participate in
user and home user education.
Focus on keeping the rest of
cyber-connected space safe as
well. Your properly patched
servers won't participate in
DDOS attacks; but what about
the ones your neighbors have?
Doing your part is good, but
that's only the first step.
What we also need, brothers
and sisters, is activity that
preserves the entire Internet
and associated computing environments.
That means we have an enormous
educational, emotional and technical
job on our hands. We need security
awareness training for all levels
of computer users: managers,
executives, IT, end users and
home users. We need procedures
focused on setting up secure
systems, evaluating risks and
responding to security alerts,
and then we need to follow them. We need
to get enough people to do the
job that needs to be done and
enough money to do it with.
We need to join with others
for the good of all. Educate
them, honor them and provoke
them into doing what's right.
As it went in the '60s, "You're
either part of the solution
or part of the problem." Which
one are you?
—Roberta Bragg
While neither of these can tell you exactly how
many times the threat will occur for your network,
they can give you an idea. They help validate
gut feelings like "there will be a larger number
of new hostile code type attacks spread via e-mail
or via connection with questionable Internet sites
than all-out attacks directed at your specific
network" (unless, of course, you have a very public
presence or there's some clear benefit to attacking
you that's well known).
By now, you should realize that even the frequency
of a threat can't be generalized or, probably,
even calculated very accurately. The number of
attacks each year must be filtered through
knowledge of your company and its systems.
Nevertheless, public figures can give you averages
as a starting point and help you begin to judge
how likely a particular event is. You'll want
to estimate whether a particular threat is likely,
or even certain, to become reality this year.
Once again, this is a relative rating. While no
one can predict natural disasters, your data center
is more likely to be flattened by a tornado in
the Midwest, a hurricane in Florida, a tsunami
in Japan or an earthquake along the West Coast.
Thus, depending on your geographic location, the
certainty of one of these events varies. However,
if you allow file sharing on systems directly
connected to the Internet, the certainty of your
system being compromised is very high.
If you're really interested in how this process
can be quantified, you might take a look at actuarial
science. This is the process of determining—through
statistical analysis using large amounts of data—the
relative certainty that a particular event will
occur to a particular class of individuals or
companies. It's the science behind the variance
in insurance rates, say, between teenage male
drivers and 30-something female drivers. While
I haven't seen many publications on work done
on the types of threats we face in cyberspace,
there are many statistics on how likely it is
that a storm will threaten e-commerce site warehouses
or how likely it is that an earthquake will shut
down the power to server farms in California.
Start your studies by checking out www.actuary.com
or turn to risk-analysis sites such as www.sra.com
and www.risk-analysis-center.com,
which quantify the risk to public health from
chemical spills and biological hazards as well
as natural disasters.
Finally, you need to list mitigating factors.
These are systems or practices already in place
that can reduce the loss presented by the threat.
They can also be specific characteristics of your
business that naturally limit the loss. Properly
configured and maintained firewalls, for example,
can reduce the risk of being connected to untrusted
networks.
Pick the 10 Percent
It's often said that 10 percent of the threats
cause 90 percent of the damage. Your job is to
identify that 10 percent and reduce the risk.
Simply take each threat and add up the weighted
responses to each category. Those threats with
the largest results represent your worst nightmares.
Next, take a quantitative approach to determine
what to do about them. At this time you must attempt
to apply a real-world dollar figure to the loss
and to the products and processes that can reduce
that cost. This will help you identify the processes
that will be of the greatest benefit. You might
find, for example, that it would cost an enormous
sum to reduce the risk only a tiny amount. On
the other hand, you might find that funds spent
on a security-awareness program or providing security
training for network and systems administrators
would reduce risk substantially. You might be
surprised by how much good, well-known security
practices can mitigate risk. Choose projects that
will realize large benefits with little effort.
Presenting this information to management will
often obtain the support and financial backing
needed to put your solutions in place.
In Table 2, you can see that, comparatively speaking,
the risk at our hypothetical company is higher
from hostile code in e-mail systems than anything
else. The next step would be to identify solutions
or products and practices that can reduce the
risk and, thus, the cost should the risk become
reality. By now you're thinking this company needs
to scan attachments for known hostile code, and
there are products that can do this. While this
won't eliminate all risks, it'll counter many
of them, especially if the code signature database
is frequently updated. Justification for its purchase
may seem evident. But wait! You must first attempt
to determine a real-dollar loss for this company
and compare it to the cost of purchasing, installing
and maintaining the filtering product. If I told
you that this company comprises 10 users, I think
you'd agree that most products that do this at
the network entry point would cost more than the
potential loss. (That would depend, of course,
on the value of the data, existence of backups
and so on. That's why we attempt to assign a dollar
value at this point.) Alternatively, the cost
of providing some protection—in the form of a
desktop security application and training users
in its proper use—can be justified.
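To see why head count matters so much here, the
same cost-versus-loss test can be applied to the
hypothetical 10-user shop. Every dollar figure
below is invented purely for illustration:

    users = 10

    # Hypothetical annual figures for the 10-user company.
    expected_annual_loss = 4000           # ALE from e-mail-borne hostile code (assumed)
    gateway_filtering_cost = 6500         # buy, install and maintain scanning at the entry point (assumed)
    desktop_av_cost = 40 * users + 500    # per-seat scanner licenses plus a short training session (assumed)

    for name, cost in [("Gateway filtering", gateway_filtering_cost),
                       ("Desktop AV plus training", desktop_av_cost)]:
        verdict = "justified" if cost < expected_annual_loss else "costs more than the expected loss"
        print("%s: $%d per year -> %s" % (name, cost, verdict))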
Filtering Alerts Through
Wise Eyes
Finally, use this knowledge each time a new threat
is identified. You'll find that—because you've
done your homework—you can calmly approach and
even dismiss many of those alerts that previously
precipitated panic attacks. (As an example, consider
those systems administrators who applied patches
to IIS systems when Microsoft provided them; they
weren't bothered by several highly successful
attacks that have been crippling some sites this
year.) Like M*A*S*H's Hawkeye, when the wounded are
brought in, you'll dismiss the minor, place aside
that which can wait, and concentrate on life-threatening
emergencies. You'll also sleep a lot better at
night.
Keep the faith, baby.