Thoughts on Software Assurance

Last night I attended a talk at my local ISSA chapter. The speaker was Joe Jarzombek, Director for Software Assurance for the National Cyber Security Division of the Department of Homeland Security. Mr Jarzombek began his talk by pointing out that the proposed DHS reorganization creates an Assistant Secretary for Cyber Security and Telecommunications working for the Under Secretary for Preparedness.

This is supposed to be an improvement over the previous job held by Amit Yoran, where he led the National Cyber Security Division, under the Information Analysis and Infrastructure Protection Directorate. According to this story, "Yoran had reported to Robert P. Liscouski, assistant secretary for infrastructure protection, and was not responsible for telecommunication networks, which are the backbone of the Internet." Mr Jarzombek said that people who are not Assistant Secretaries are "not invited to the table" on serious matters.

Turning to the main points of his presentation, Mr Jarzombek said the government worries about "subversions of the software supply chain" by developers who are not "exercising a minimum level of responsible practice." He claimed that "business intelligence is being acquired because companies are not aware of their supply chains."

The government wants to "strengthen operational resiliency" by "building trust into the software acquired and used by the government and critical infrastructure." To that end, software assurance is supposed to incorporate "trustworthiness, predictable execution, and conformance." Mr Jarzombek wants developers to "stop making avoidable mistakes." He also wants those operating critical infrastructure to realize that "if software is not secure, it's not safe. If software can be changed remotely by an intruder, it's not reliable." Apparently infrastructure providers think in terms of safety and reliability, but believe security is "someone else's problem."

I applaud Mr Jarzombek's work in this area, but I think the problem set is too difficult. For example, the government appears to worry about two separate problems. First, they are concerned that poor programming practices will introduce vulnerabilities. To address this issue Mr Jarzombek and associates promote a huge variety of standards that are supposed to "raise the bar" for software development. To me this sounds like the argument for certification and accreditation (C&A). Millions of dollars and thousands of hours are spent on C&A, and C&A levels are used to assess security. In reality C&A is a 20-year-old paperwork exercise that does not yield improved security. The only real way to measure security is to track the numbers and types of compromises over time, and work to see those numbers decrease.

Second, the government is worried about rogue developers (often overseas and outsourced) introducing back doors into critical code. No amount of paperwork is going to stop this group. Whatever DHS and friends produce will be widely distributed in the hopes of encouraging its adoption. This means rogue developers can code around the checks performed by DHS diagnostic software. Even if given the entire source code to a project, skilled rogue developers can obfuscate their additions.
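For a sense of how little code a back door needs, consider a contrived sketch (the function, flag values, and uid handling are all invented for illustration) of the assignment-for-comparison trick, similar in spirit to the back door attempt caught in the Linux kernel source repository in 2003:

#include <stdio.h>

/* Hypothetical request handler. The single '=' below (where '=='
   was intended) silently resets uid to 0, i.e. root, whenever a
   caller passes the magic flag combination. The branch body itself
   is dead code, since the assignment evaluates to 0 (false). */
static int handle_request(int uid, int flags)
{
    if (flags == (0x40 | 0x80) && (uid = 0))   /* '=' is the back door */
        return -1;                             /* never reached */
    return uid;                                /* 0 (root) for magic flags */
}

int main(void)
{
    printf("normal caller, uid 1000 -> %d\n", handle_request(1000, 0x01));
    printf("magic flags,   uid 1000 -> %d\n", handle_request(1000, 0x40 | 0x80));
    return 0;
}

A scanner looking for known-bad function calls finds nothing here, and a reviewer skimming a large diff can easily read the '=' as '=='.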

In my opinion the government spends way too much time on the vulnerability aspect of the risk equation. Remember that risk = threat × vulnerability × asset value (measure that last term as impact, cost, or whatever suits you). Instead of devoting so much effort to vulnerabilities, I think the government should divert resources to deterring and prosecuting threats.
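As a toy illustration of that equation (every number below is invented), halving the threat term reduces risk exactly as much as halving the vulnerability term, which is why deterrence deserves at least equal billing:

#include <stdio.h>

/* Multiplicative risk model: risk = threat x vulnerability x cost.
   Threat and vulnerability are illustrative scores from 0.0 to 1.0;
   cost is the asset value in dollars. */
static double risk(double threat, double vulnerability, double cost)
{
    return threat * vulnerability * cost;
}

int main(void)
{
    printf("baseline:            %.0f\n", risk(0.8, 0.50, 1000000.0));
    printf("halve vulnerability: %.0f\n", risk(0.8, 0.25, 1000000.0));
    printf("halve threat:        %.0f\n", risk(0.4, 0.50, 1000000.0));
    return 0;
}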

Consider the "defense" of a city from thieves. Do city officials devote huge amounts of resources to shoring up doors, windows, locks, and so forth on citizens' homes? That problem is too large, and thieves would find other ways to steal anyway. Instead, police deter crime when possible and catch thieves who do manage to steal property. Of course "proactive" measures to prevent crime are preferred, so the police work with property owners to make improvements to homes and businesses where possible.

I asked Mr Jarzombek a question along these lines. He essentially said the threat problem is too difficult to address, so the government concentrates on vulnerabilities. That's not much of an answer, since his approach has to defend all of the nation's targets. My threat-based approach focuses on deterring and capturing the much smaller groups of real threats.

Mr Jarzombek then said that the government does pursue threats, but he "can't talk about that." Why not? I understand he and others can't reveal operational details, but why not say "Federal, state and local law enforcement are watching carefully and we will have zero tolerance for these kinds of crimes." Someone actually said those words, but not about attacking infrastructure. These words were spoken by Alberto Gonzales, US Attorney General, with respect to Katrina phishers.

This approach would have more effect against domestic intruders, since foreign governments would not be scared by the threat of prosecution. However, if foreign groups knew we would pursue them with means other than law enforcement, we might be able to deter some of their activities. At the very least we could devote more resources to intelligence and infiltration, thereby learning about groups attacking infrastructure and preventing damaging attacks.

While on the subject of software assurance, I found a few interesting sites hosted by Fortify Software. The Taxonomy of Coding Errors that Affect Security looks very cool. The Fortify Extra is a newsletter, which among other features includes a "Who's Winning in the Press?" count of "good guy" and "bad guy" citations. In October DHS will launch buildsecurityin.us-cert.gov, though the site is not yet live. The Center for National Software Studies was mentioned last night.

Also, the 2nd Annual US OWASP Conference will be held in Gaithersburg, MD 11-12 October 2005.

Comments

Anonymous said…
I work for a federal department which recently completed a C&A review. Very little of what sysadmins routinely do was changed as a result. Getting the information in all the blocks of an 80-page Word document exactly right was the hard part. The inspection was also MS-specific. Not one inspector even looked at my BSD and Linux systems. I was looking forward to the opportunity to objectively compare open-source systems with Microsoft, but it never happened.

It was only a paperwork exercise. Too bad it's lasted 20 years.
John Ward said…
The government should be more concerned with catching threats. It is the vendor’s responsibility to minimize the number of vulnerabilities in their product. And some ridiculous certification process is not the answer. A product may have passed some smoke-and-mirrors C&A process, but still be a piece of crap. Mr. Jarzombek is not the only one who wants developers to stop making “avoidable mistakes”. The issue is with education, not certification. A certification is nothing more than a piece of paper; it doesn't mean that the developer is proficient in anything. Take the MCP for example. I have worked with tons of MCPs who basically went through the motions for certification, but for the life of them couldn't actually solve any sort of problem, develop solutions, or write any “real” code. Developers typically do not learn how to write secure I/O to prevent buffer overflows, proper memory management, or even something as basic as not using C for tasks where a higher-level language with type-safe strings and array bounds checking would be more appropriate. Granted, languages like Pascal are not immune to buffer overflows, but overflows are much less frequent there thanks to bounds checking and variable-length strings. If developers would learn something other than C and add more languages to their development arsenal, they would learn how to pick the proper language for the task, and applications would be much more secure.
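As a minimal sketch of the habit he describes (the buffer size and input string are invented for illustration), compare C's unbounded string copy with a bounded one:

#include <stdio.h>

int main(void)
{
    char name[16];
    const char *input = "a string much longer than sixteen bytes";

    /* strcpy(name, input) would write past the end of name[].
       snprintf never writes more than sizeof name bytes and always
       NUL-terminates, truncating the input instead of overflowing. */
    snprintf(name, sizeof name, "%s", input);
    printf("%s\n", name);
    return 0;
}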

And as far as "rogue programmers" go, if companies allow their internal or outsourced programmers to write obfuscated code for any reason, it’s a reflection of the quality of the company and their product. If they outsource code to be merged into their product without review from a senior developer, then they are asking for trouble. The standards for development need to be changed across the board. For example, I require all of my developers to write their code in easy-to-follow logical blocks that are clearly commented, and I require code to undergo peer review by another developer. If those requirements are not met, I make them do it over. There is no place for obfuscated code in production, because the coders who write it are show-offs, and I have no tolerance for “cowboy programmers”.

The biggest issue is with the PHBs who make the product decisions in their organizations. Too often these jackasses pick a product based on what the sales force has sold them on. A company can have a first-string, world-class sales force but a third-rate product. Managers are not knowledgeable enough to know what to look for in products, and as a result they usually pick garbage with potential vulnerabilities.

These are all issues that the industry needs to address within itself. I don’t agree that these problems are too difficult to deal with. The industry used to apply certain standards to software back in the day. The problem now is that programmers are a dime a dozen and lazy to boot. We have too many of the Gen-X and Gen-Y kids who decided to hop on the IT bandwagon when it was the booming industry, and they did not have the proper mental aptitude to be programmers. As a result, the market is saturated with garbage programmers, even more so after the bubble burst.
Anonymous said…
I think your 'thoughts' and the comments are very relevant, but what about the [security] quality of the code? I know that the threat angles are covered by other govt orgs and yes, of course it's too piecemeal and uncoordinated, much like other intelligence efforts. And I too have a low opinion of how well standards and particularly certifications can improve the situation.

I'm still of the opinion that building high quality code is better from a security viewpoint than building poor code, and even that can be vastly improved by refocusing on particular flaws that have definite security impacts. And we certainly could use new ideas for proper testing (resources are always a problem). I think the software assurance angle is correct in addressing this area.
John Ward said…
I totally agree with the Anonymous poster. The problem is most programmers are not introduced to secure programming practices from the very beginning. For example, when someone is learning C, the first thing they learn for I/O is something like:

char input_buffer[20];
gets(input_buffer);   /* unbounded read: anything past 19 characters overflows */

Instead, what they should learn is:

char input_buffer[20];
fgets(input_buffer, sizeof input_buffer, stdin);   /* reads at most 19 characters plus the NUL */

First impressions go a long way, and if secure programming is introduced from the get-go then it will stay with developers. Better quality coding = more secure code. Certifications do not teach this either.
