
The Devil's in the Design
General
Written by Daniel   
Wednesday, 07 November 2007 10:47

Are your software developers sabotaging your company's application code? How do you know?

NOVEMBER 5, 2007 | 5:35 PM
By Tim Wilson
Site Editor, Dark Reading

ARLINGTON, Va. -- Computer Security Institute 2007 -- What if a software developer wants to put a security flaw in your enterprise's applications?
That was the foreboding question posed by Dawn Cappelli and Andrew Moore, two experts from CERT and the Software Engineering Institute at Carnegie Mellon University, in a session held at the CSI show here today. The answers had some security pros in attendance worried.
"This is a little scary," said one attendee, who asked to remain anonymous. "This could happen to us."



Most enterprises trust their developers, and they generally assume that security flaws found in enterprise software are the result of accidents, oversights, or sloppy coding. But enterprises should also be watching for that small, dangerous fraction of developers who create backdoors or other exploits that might let them steal or damage data later on, according to the CERT experts. (See Security's New School.)

CERT, which has been doing research on insider threats for several years in conjunction with the U.S. Secret Service, found one company that lost $691 million over a five-year period through modified source code introduced by an employee in applications development.

"You wonder, how could a company lose that much money over such a long period of time and not catch it," said Cappelli. "But he was in a position where he could not only reroute the funds, but he could also change the reports."

Companies should ask themselves whether their policies, tools, and processes might make them vulnerable to such insider attacks, Cappelli said. Given the right circumstances, a seemingly harmless developer could leave openings for theft, plant a logic bomb to destroy information, or wipe out backup data that is crucial to the company, she observed.

"In the research we've done with the Secret Service, we've seen more logic bombs and malicious code than we expected," Cappelli said. "Some of them aren't terribly destructive at first and some of them don't go off right away. We saw one developer put malicious code in software that didn't begin to operate until a year after he put it in."

Cappelli and Moore took attendees through a range of scenarios in which a developer might have the leeway to intentionally create security vulnerabilities. Shared passwords, insufficient separation of duties, and a lack of adequate access controls are among the environmental weaknesses that might tempt a greedy or disgruntled worker, they said....
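
For readers wondering what "separation of duties" looks like in practice, here is a minimal sketch of one such control: a release gate that refuses to deploy a change approved by its own author. The names and structures (ChangeRequest, violates_separation_of_duties, the author/approver fields) are invented for illustration and are not taken from the CERT presentation.

# Hypothetical illustration only: a deployment gate enforcing a simple
# separation-of-duties rule -- the person who wrote a change may not be
# the person who approves it for release.

from dataclasses import dataclass


@dataclass
class ChangeRequest:
    change_id: str
    author: str    # developer who committed the code
    approver: str  # reviewer who signed off on the release


def violates_separation_of_duties(change: ChangeRequest) -> bool:
    """Return True if the same person both wrote and approved the change."""
    return change.author.strip().lower() == change.approver.strip().lower()


if __name__ == "__main__":
    changes = [
        ChangeRequest("CR-1001", author="alice", approver="bob"),
        ChangeRequest("CR-1002", author="carol", approver="carol"),  # red flag
    ]
    for change in changes:
        if violates_separation_of_duties(change):
            print(f"{change.change_id}: BLOCKED - author approved their own change")
        else:
            print(f"{change.change_id}: ok")

A check this simple obviously won't stop a determined insider, but it is the kind of process control the speakers argue is missing in many shops.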
 
