VizSec and RAID Wrap-Up

Last week I attended VizSec 2008 and RAID 2008. I'd like to share a few thoughts about each event.

I applaud the conference organizers for scheduling these conferences in the same city, back-to-back. That decision undoubtedly improved attendance and helped justify my trip. Thank you to John Goodall for inviting me to join the VizSec program committee.

I enjoyed the VizSec keynote by Treemap inventor Ben Shneiderman. I liked attending a non-security talk that had security implications. Sometimes I focus so strictly on security issues that I miss the wider computing field and opportunities to see what non-security peers are developing.

I must admit that I did not pay as much attention to the series of speakers that followed Prof Shneiderman as I would have liked. Taking advantage of the site's wireless network, I was connected to work the entire day doing incident handling. I did manage to speak with Raffy Marty during lunch, which was (as always) enlightening.

One theme I noticed at VizSec was the limitations of tools and techniques when handling large data sets. Some people attributed this to the Prefuse visualization toolkit used by many tools. Several attendees said they turn to visualization approaches because their manual analysis methods fail for large data sets. They don't need visualization tools that also croak when analyzing more than a few hundred thousand records.
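One mitigation that came up in hallway conversation is pre-aggregation: collapse the raw records into weighted talker pairs before drawing anything, so the visualization layer handles thousands of nodes instead of millions of records. As a rough sketch (the flow records here are hypothetical, not from any tool shown at the conference):

```python
from collections import Counter

# Hypothetical flow records: (src_ip, dst_ip, dst_port) tuples.
flows = [
    ("10.0.0.1", "192.168.1.5", 80),
    ("10.0.0.1", "192.168.1.5", 80),
    ("10.0.0.2", "192.168.1.9", 443),
]

# Aggregate before drawing: one edge per talker pair, weighted by
# record count, instead of one visual element per raw record.
edges = Counter((src, dst) for src, dst, _port in flows)

for (src, dst), count in edges.items():
    print(f"{src} -> {dst}: {count} flows")
```

The same idea scales to any toolkit: the renderer only ever sees the aggregated edge list, however many raw records fed it.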

I also noticed that much of the visualization work for security tends to focus on IP addresses and ports. That is fine if you are limited to analyzing NetFlow records or other session data, but most of the excitement these days lives in log files, URLs, and layer 7 content. Perhaps just when the researchers have figured out a great way to show who is talking to whom, it won't matter much anymore. Clients will all be talking to the cloud, and the action will be within the cloud -- beyond the inspection of most clients.

One presentation which I really liked was Improving Attack Graph Visualization through Data Reduction and Attack Grouping (.pdf) by John Homer, Xinming Ou, Ashok Varikuti and Miles McQueen. I thought their paper addressed a really practical problem, namely reducing the number of attack paths to those most likely (and logically) used by an intruder. I believe the speaker was unnecessarily criticized by several participants. I could see this approach being used in operational networks to assist security staff make defensive and detective decisions.

At the end of the day I participated in a poster session by virtue of being a co-author of Towards Zero-Day Attack Detection through Intelligent Icon Visualization of MDL Model Proximity with Scott Evans, Stephen Markham, Jeremy Impson and Eric Steinbrecher. Scott and Stephen work at GE Research, and I plan to collaborate with them for our internal security analysis.

Following VizSec I attended two days of RAID, or the 11th Recent Advances in Intrusion Detection conference. Five years ago I participated in the 6th RAID conference and posted my thoughts. In that post I noted comments by Richard Stiennon, months after his 2003 comments that IDS was "dead":

"Gateways and firewalls are finally plugging the holes... we are winning the arms race with hackers... the IDS is at the end of life."

I found those comments funny both on their own and in light of the recent story Intrusion-prevention systems still not used full throttle: survey:

Network-based intrusion-prevention systems are in-line devices intended to detect and block a wide variety of attacks, but the equipment still is often used more like an intrusion-detection system to passively monitor traffic, new research shows...

[Richard] Stiennon -- who created some controversy five years ago while a Gartner analyst when he declared IDSs "dead" -- says this Infonetics survey gives him fuel to fan the flames of criticism once again.

“IDS should be dead because it’s still a failed technology,” Stiennon says, expressing the view that simply logging alerts about attacks is almost always a pointless exercise. “IPS equipment should be doing more to block attacks.”


The fundamental problem was, is, and will continue to be, the following:

If you can detect an attack with 100% accuracy, of course you should try to prevent it. If you can't, what else is left? Detection.

I continue to consider so-called "intrusion detection systems" to really be attack indication systems. It's important to try to prevent what you can, but to also have a system to let you know when something bad might be happening. This subject is worthy of a whole chapter in a new book, so I'll have to wait to write that argument.

Overall, I felt that a lot of the RAID talks were divorced from operational reality. Several attendees addressed this subject with questions. Too many researchers appear to be working on subjects that would never see the light of day in real networks.

Comments

Richard, I would just point out that attacks are constant. I am telling you now. There, I saved you millions in spending on IDS.

The RBN is attacking, the Chinese government is attacking.

Are IDS logs something to investigate Monday morning after 48 hours of attacks? Do you advocate 24X7 monitoring as well?

I remember addressing the RAID conference at Carnegie Mellon a month after finally publishing my thoughts on the ineffectiveness of IDS. The audience seemed like they wanted to throw tomatoes at me!

Where is the "Recent Advances in Intrusion Prevention" conference? That would be worth going to.

-Stiennon
I would venture the following assumption: almost everyone who criticizes the need for IDS, or what I would call attack indication systems, is not doing any active network defense, dealing with intrusions, or trying to detect whether their protective/resistive measures have failed. In other words, the critics are disconnected managers, pundits, analysts, reporters, certifiers/accreditors, auditors, and so on. Does anyone really think it is possible to sit back and expect your defensive systems to stop everything?
Anonymous said…
Richard,

I think you have made a good assumption. It is highly unlikely that pundits, disconnected managers, product marketers, etc. have any insight into conditions in the operational "trenches". But from their perspective, who wants to know what the defender/responder thinks? They rarely have any influence on architecture or policy decisions.
Hey Alex, please email me (taosecurity at gmail dot com). I'd like to talk Splunk on FreeBSD 7.x and integrating NSM data into Splunk. Thank you.
orekdm said…
I smell blood in the water....

I'll try to be brief, but this has always been a pet peeve of mine. IPS is subject to the same criticisms as IDS, but it's worse, because it's inline. Tuning is always the crux of the problem. If enterprises find it too difficult to appropriately staff their SOCs with people capable of tuning IDS, how can anyone claim it can be done for IPS?

If you don't tune out the false positives on an IDS, you have extra noise that impacts the performance of the whole system and prevents its maximum utility. The stakes are higher for IPS: you can actually *introduce* harm to your environment by not actively analyzing and tuning out false positives that block legitimate traffic.
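To make the tuning work concrete, here is a hedged sketch in Snort-style threshold.conf syntax (the SID and subnet are made up for illustration): rather than disabling a noisy rule outright, an analyst can suppress or rate-limit it for known-good sources, which is exactly the labor IPS deployments can't skip.

```
# Silence a hypothetical noisy rule (gen_id 1, sig_id 2001219)
# only when triggered by a trusted internal subnet.
suppress gen_id 1, sig_id 2001219, track by_src, ip 10.1.1.0/24

# Or rate-limit instead of silencing: at most one alert
# per source per 60 seconds for the same rule.
threshold gen_id 1, sig_id 2001219, type limit, track by_src, count 1, seconds 60
```

On an IDS a mistake here just costs noise; inline, a missed suppression means dropped production traffic.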

How many old uricontent matching rules are floating around out there in closed source rulesets that are matching modern legitimate traffic? What about the liability factors associated with closed source IPS rules generating false positives that are dropped by the default policies? I'd love to hear the lawyers start discussing that issue. I imagine the big players that hang their hat on ever increasingly sized IPS's will deny all responsibility and just point to their contract language.

I consider this the unspoken Achilles heel of IPS that some analysts haven't had the courage to address.

Perhaps it's time to trumpet the Hippocratic Oath for Network Security Management?

"First, do no harm."

p.s. I was disappointed with RAID this year.
Tomas said…
Hi,
There seems to be a lot of skepticism toward academic security research amongst the security industry and practitioners. However, I don't think only the academics are to blame. For instance, when working on intrusion detection it is awkwardly hard to get real data to use for testing. Most often companies don't want to share their real traffic data for any reason, even if it can be anonymized.

Since there is only one open dataset that I know of (MIT DARPA) for testing IDSs, it is almost impossible to compare with previous research. If anybody could provide an up-to-date, realistic test dataset on a continuous basis, I think a lot of research in intrusion detection could be improved.

To improve the state of intrusion prevention, I think we would also need some kind of common test bed where people could test their algorithms/systems, maybe online.

Just some thoughts.

By the way, Richard, since RAID was a disappointment, what research would you like to see done in intrusion detection?
toby said…
I didn't realize you were there. I would have enjoyed meeting in person. I'm guessing your comments about when people turn to visualization were based on the panel I ran regarding applied uses. I'm curious if you agreed or disagreed with our comments.
Your comment on NetFlow seems to map to some of the things Lurene, Rich, and Ron said.
