Ira Winkler, CISSP, is President of Secure Mentem and Co-Host of the Irari Report. He is considered one of the world’s most influential security professionals and has been named a “Modern Day James Bond” by the media. He won the Hall of Fame award from ISSA, and most recently, CSO magazine named him a CSO Compass Award winner as “The Awareness Crusader.” Winkler has designed, implemented, maintained and assessed awareness programs for decades, in organizations throughout most industries. He has performed empirical research on the success of awareness efforts. His espionage simulations examine the success of security programs in general, and also provide a base of knowledge for responding to incidents, many involving human exploitation.
Christopher P. Skroupa: To introduce some of the ideas covered in your book, Advanced Persistent Security, let’s start off with your recounting your time at NSA, when you came across an obvious cyber vulnerability, and how that spoke to the importance of detection in addition to prevention.
Ira Winkler: Right. So when I worked at NSA, I worked with a woman whose last name was Kirk. I was training her how to use the system and I told her, “Ok, log on to the system. Now log on to the database. Ok, the database ID is going to be your last name: Kirk. Now enter your password, Captain, C-A-P-T-A-I-N.” She looked at me in horror and said, “How do you know what my password is?” And I’m like, “You’ve got to be kidding me.”
With that in mind, the issue is that a system at NSA allowed her to have “captain” as her password on an account named Kirk. Now, could a script-kiddie have guessed her password? A teenager with nothing better to do with his time? Yeah, he would have guessed “captain” in a heartbeat. Could a cyber-terrorist have guessed this? Yes. Could Russia have guessed this? Yes. However, if I go ahead and enforce that she can’t have the password “captain,” it doesn’t matter who the threat is—I have taken away the vulnerability they would attempt to exploit.
So in that case, that’s the key—if you can mitigate the underlying vulnerabilities, you can stop the bad guys from potentially getting in. So now, we have to assume failure is going to happen—protection will fail. Once protection fails, you need to ensure that you’ve put detection in place, and when you put detection in place, you look for where your protection has been breached. When you find it, you implement a reaction plan.
That’s how good security programs should be designed. They should always be looking for failure in protection, and too many people don’t look for failure in protection. There have been some studies, and this is not the exact number, but it was along the lines of 80% of security money was spent on protection and not on detection and reaction. Detection and reaction were just afterthoughts.
Initially, [detection] was just a side issue. Security was always bolted on, not built in, which meant it was just kind of like, “I don’t know what I’m going to secure, I’ll just figure that out after the system is built.” Now, people have started to build security into things, because security is recognized as a problem, but what they’re building in is protection. Then they’ll be like, “Ok, let’s bolt on a detection tool after the fact. I’ll put a network monitoring tool or a log tool on.” Again, if they’re bolting detection on after the fact, it’s usually inadequate because they’re not looking in the right places. They’re just buying a tool that looks broadly, not looking for specific targets.
Here’s a key factor: If you have the right detection in place, you can save a lot of money on protection as well. Security does not fail until the bad guy gets out with the targeted information. It’s ok if they get in, as long as you stop them from achieving their ultimate goals; whether their goal is stealing information, deleting information, or modifying information.
But if you can detect them when they get in, and detect them when they’re sifting through the information, you can stop them from achieving their goals. It can be both more effective and less expensive to implement detection than to attempt unlimited protection.
Skroupa: So, it’s a “one without the other” kind of thing.
Winkler: Right. And this is not just hypothetical: a lot of security managers are in this position. I was talking to one guy who said, “I have more than a million computers. How am I supposed to protect all of them?” And I replied, “You’re not.”
Not all computers are created equal. Some computers have minimal value, and some computers—like the CEO’s computer—would obviously have a lot of value. Email servers have a lot of value. Looking at SONY, the movie databases clearly had a lot of value that they weren’t adequately protecting. So, if you have an immense system such as in an organization like SONY, you can’t stop everybody from getting in—it would be impossible. But you could have implemented intrusion detection systems on the movie storage system, and you could have stopped a lot of the problem—you know that’s where the bad guys want to go. Again, you’re trying to protect value and detect when it is inevitably being compromised.
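Winkler’s point about putting detection on the systems the bad guys actually want can be sketched in miniature. The following is a hypothetical file-integrity monitor, not anything from the book: it records a hash baseline for a high-value directory (say, a movie storage system), then re-scans and flags anything added, removed, or modified. Real intrusion detection is far richer, but the shape of “detect when protection has been breached” is the same.

```python
import hashlib
from pathlib import Path


def snapshot(root: str) -> dict[str, str]:
    """Record a SHA-256 hash for every file under the watched directory."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*")
        if p.is_file()
    }


def detect_changes(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return paths that were added, removed, or modified since the baseline."""
    alerts = []
    for path in baseline.keys() | current.keys():
        # A path missing from one side, or hashing differently, is a change.
        if baseline.get(path) != current.get(path):
            alerts.append(path)
    return sorted(alerts)
```

In practice the baseline would be stored somewhere the attacker cannot rewrite, and an alert would trigger the reaction plan rather than just return a list.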
Same thing with Target. Obviously, with all of the aspects of Target’s infrastructure, some are more valuable than others. Detection should have been on the point-of-sale systems to see if they were improperly modified or if there was malware on them. Likewise, the Target backend servers were a clear target for the hacker/criminal. Had they put detection there, Target could have stopped the activity. You want to implement enough detection.
There’s also the 80/20 rule where you solve 80% of your problem with 20% of the effort, though with most protection, you can solve 95% of the problem with 5% of the effort. For example, the most devastating attacks lately have been attributed to phishing messages. The SONY hack was attributed to phishing messages against the administrators. The OPM hack, where 22 million federal employees’ records were hacked, was attributed to phishing. The DNC hack has been attributed to phishing.
Phishing is a nice, simple, really low-tech way of getting in. Clearly, once the bad guys had their foot in the door, they were able to completely get in. But what if I were to tell you that there is inexpensive and frequently free protection that would have protected people’s credentials? Multifactor authentication would have prevented the initial compromises at SONY, OPM, and the DNC, and might have stopped the attacks in their entirety.
Skroupa: In your book, Advanced Persistent Security, do you cite any solid statistics on how many companies on average have these security measures in place already, or have this type of digital risk management in place?
Winkler: Down to the level of a specific protection tool, the answer is no—it’s hard to know. That was not one of the things we included in the book. But again, the previous example, something available for free that you just have to turn on, could have stopped some of the most notorious attacks. Had they implemented multi-factor authentication, the attacks would not have been impossible, but they would have been exponentially more difficult to start.
Another thing we talk about in the book is how to create a culture of security. Companies don’t understand their culture. There’s also a problem in believing that younger generations know how to secure themselves better. There’s also a distinction between knowing how to secure yourself better and actually doing it—that’s a very important distinction.
Despite the fact that younger people have been exposed to the technology longer, they don’t know how to secure it any better. They’re more comfortable using it, but it doesn’t mean they’ve looked at the right things to know how to secure it better.
One of my friends, a former CIA operative who went on to teach information sciences at the University of Texas, conducted a study. He asked his students what they thought the definition of privacy is, and they said they believed privacy is defined as controlling what you “put out there.”
He assigned his students a term project where he wanted them to put together an intelligence dossier on themselves—in other words, how much information they could find through open sources. He told me, “I can tell you exactly when the people started the term project, because the day after they started, they came to class early and asked me, ‘How do I get this stuff off the internet?’” This was even though they thought they were all protecting their privacy. And remember, they were all graduate students studying information sciences.
There’s another concept: Just because people know something, it doesn’t mean they behave in the way they know they should. For example, if I tell you what a good password is, you repeat back, “Yeah, I know what a good password is: numbers, letters, at least eight characters, do not reuse your password on multiple accounts and all that sort of stuff.” And a lot of people know that, but the reality is that even when people know that, they don’t always use a good password.
Skroupa: Exactly—you can have the knowledge, but you’re not always going to implement it in an effective way, because you don’t understand exactly the risk.
Winkler: Exactly. It’s like being healthy. Everybody knows to be healthy—you need to eat right and exercise. But you look at the average person, even though they know what it takes to be healthy and know they should do it, they don’t.
The first thing we recommend in our protection section is proper governance of security. Governance as a whole is not just security related, it’s financially related—it’s how you run the business.
If you do not have proper specification as to what is important to protect, and then details on how it is to be protected, your security program is an accident. Sometimes it is accidentally good, but usually, it’s accidentally bad. For example, I could accidentally have a CEO who values security and properly funds security, and they find a chief security officer who knows what they’re doing, and that chief security officer uses his budget in a proper way. That’s great, but it’s a complete accident.
Governance acknowledges that security is important, determines how it should be budgeted, explains the behaviors to implement, and how to protect the technology. There should be guidelines and procedures that say how computers should be secured, how information should be released, how people should be trained for awareness, and so on.
Security should be driven by proper governance. It has nothing to do with people being aware individually, and it has nothing to do with industry. Either an enterprise has proper governance or it doesn’t. If they don’t have proper governance and are still secure, that’s awesome to a certain extent. But if the security manager leaves, or a new CEO comes in who doesn’t want to spend money on security anymore, the whole security program can be blown away, unless of course it’s documented and made a part of the corporate DNA. Governance is what makes it a part of the corporate DNA.
Skroupa: So it’s really governance training. And then I imagine you go over the details of that in the book as well.
Winkler: Yes. I would even say the governance chapter is the most important chapter in the book. Think about it this way—how you handle money, that’s documented. Orders come in and are verified, money is processed, money is logged, and all that sort of stuff. Companies have all their important business processes and procedures properly documented. It is the same with security: Employees should be told step-by-step how they should behave. The technology people should provide specific guidance on how the technology is to be implemented. Security guards should be told exactly what to look for and how to look for it, and so on. And if it’s not written down, your security is an accident, and usually it is a bad accident.
Skroupa: I think the governance section seems to be where all of this can be implemented on a grand scale. We all have the tools, but we don’t all have the implementation plan, and so that’ll be the next step before anyone can really create a formidable advanced persistent security (APS) infrastructure.
Winkler: Professionally, my primary purpose is to implement awareness programs, which is valuable to my clients. However, without proper governance, awareness is basically just telling people what best practices are. We like to work with companies to define their governance, and use the awareness to promote the established governance. It is infinitely better to tell people how to behave, instead of what they should be afraid of and how not to behave.
Skroupa: And that’s why these things exist, so we can bring it to a public conversation, which is not only why you wrote the book, but why we’re hoping to cover it.
Winkler: I appreciate that.
Skroupa: So this has all been really interesting. Do you feel like there is anything we haven’t touched on that is really relevant to the thesis?
Winkler: Proper governance should be the key driver of your security program. Governance essentially makes a should into a must, because most people think they should be secure, but proper governance says specifically how they must be secure, and if something is a should they basically “should all over themselves,” and that’s a problem. So that’s number one.
The second concept is that a good APS program should include detection and reaction as part of a comprehensive security protocol. Most security programs fail because they are not security programs—they are just protection programs. Also, security doesn’t fail when the bad guy gets in; security fails when the bad guy gets out and achieves his goal.
If the book has a mantra, it is, “Protection, detection, reaction.” These principles come from information warfare, and security is not about security—security is about risk management. In risk management, failure is acceptable—the goal is to ensure you plan for those failures, which, again, is part of governance. What losses are you willing to accept?
Security seems to be the only discipline in business where people think any failure is unacceptable. Clearly it’s embarrassing, but the loss is usually minimal compared to the amount of money companies lose elsewhere, such as in credit card transactions, where billions are lost each year. People can shoplift and that’s accepted. They call it shrinkage—they have a term for it to make it sound like a business element. But for some reason, when a hacker breaks in, they think, “That’s bad, so let’s fire the security people.” Again, unless you give the security people an unlimited budget, there will be security failures; the best you can do is let them manage the risk intelligently.