This is a transcription of our Interview with Ted Harrington, author of "HACKABLE: How to Do Application Security Right".

You can watch the original video interview here or tune in to the podcast episode here

or via iTunes, Spotify, and other podcast apps by searching "Risk Management Show"


Boris: Welcome to our interview with Ted Harrington. Ted is the author of “HACKABLE: How to Do Application Security Right” and the Executive Partner at Independent Security Evaluators, or ISE, a company of ethical hackers famous for being the first to hack the iPhone and for hacking cars, medical devices, and password managers.

It has helped hundreds of companies fix tens of thousands of security vulnerabilities, including Google, Amazon, Netflix, and more.

Ted, thank you for coming to our interview today.

Ted: Thanks for having me.

Boris: Absolutely. I think it will be a very interesting interview for us. With your background, I believe we will have a really thoughtful conversation about cybersecurity, emerging threats, and how risk and security managers should stop thinking like defenders and start thinking like hackers.

Ted, can you tell us a short story about your unique path in the cyber field and what you and your colleagues at Independent Security Evaluators have been up to?

Ted: Sure. So I think the simplest way to think about us is that we're the good guy hackers. When companies are looking to understand the security issues inherent in their system, like how an attacker would actually exploit it and what they should do to improve it, that's when they come to us to help them solve it. Because of that, I've been really fortunate to be involved with some exciting, dynamic companies and some really crazy scenarios.

There are so many stories I could think of, but one short and crazy one I've been involved with has to do with cryptocurrency. Let me give you a little context. We first need to understand an idea that cryptographers call statistical improbability. It's more complicated than this, but it essentially means that a private key can't be predicted.

You can't guess it. It's like if you went to the beach, Boris, picked up a grain of sand, and threw it back. And then the next day I went back to that same beach, how likely is it that I would pick up the same grain of sand? It's possible, but it's pretty much statistically improbable. Then you multiply that by every beach on Earth, and then multiply that by a gazillion Earths. That's the likelihood that someone could guess a private key that protects a cryptocurrency wallet. You just can't guess them. Or can you?
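The grain-of-sand math can be made concrete in a minimal Python sketch. It relies on one assumption not stated in the interview, the standard fact that an Ethereum private key is a 256-bit number:

```python
import secrets

# An Ethereum private key is a 256-bit number, so the keyspace holds
# 2**256 possible keys, roughly 1.16e77 of them.
KEYSPACE = 2**256

# Proper wallet software samples uniformly from that whole space.
key = secrets.randbits(256)
assert 0 <= key < KEYSPACE

# The chance of guessing any one specific key is 1 / 2**256: one grain
# of sand, multiplied across every beach on a gazillion Earths.
print(f"keyspace size: {float(KEYSPACE):.2e}")
```

A uniform draw from a space that large is exactly why a properly provisioned key is unguessable; the story that follows hinges on keys that were not drawn uniformly.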

We've published some research that in fact showed that you could. We were looking at Ethereum wallets; the price of Ethereum is certainly going up right now, though this was a few years ago. We were really curious to understand the way the software provisions keys, and what we were able to do as a result of this research was successfully predict the key 732 times. Now, that's like picking up your same grain of sand 732 times, right?

It shouldn't be able to happen once, let alone hundreds of times. That alone is interesting, but it gets more interesting, because cryptocurrency wallets leverage a blockchain, which is essentially publicly available information, so you can actually see the transactions. We could answer the question: how much money is at stake in those 732 wallets? How much money are we really talking about? It turns out it wasn't a small amount. At the time it was worth about $54 million US dollars; today it's worth considerably more than that.
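The interview doesn't detail the actual flaw ISE found, but the general failure mode, a key generator with far less entropy than the keyspace implies, can be sketched. The 16-bit `weak_keygen` below is purely hypothetical, chosen so the brute-force step runs instantly:

```python
import random

def weak_keygen(rng: random.Random) -> int:
    # Hypothetical flawed provisioning: only 16 bits of the key ever
    # vary, so the effective keyspace is 2**16, not 2**256.
    return rng.getrandbits(16)

rng = random.Random(42)
victim_key = weak_keygen(rng)

# An attacker who knows the flaw just enumerates the tiny keyspace,
# which is how hundreds of "unguessable" keys become guessable.
guessed = next(k for k in range(2**16) if k == victim_key)
assert guessed == victim_key
```

The point of the sketch: once the effective keyspace shrinks, guessing a key stops being a statistical impossibility and becomes a loop.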

So that was interesting. Then it got even more interesting, because we said, well, think about the idea that you've got these vulnerable wallets, someone can actually guess the key to get into them, and there's $54 million US dollars worth of currency there.

That's kind of like a pile of cash just sitting on the sidewalk, right? Someone's going to steal it eventually. And someone did. Because we could see all the transactions on the blockchain, we could see that every single unit of currency in those 732 vulnerable wallets had been funneled to a single destination wallet.

So clearly we had stumbled across a hacking campaign in progress, where someone was actively stealing from vulnerable wallets using the same vulnerability that our research had found. And maybe the final detail to mention about that story, which I think is kind of nuts: we wanted to know how quickly vulnerable wallets get looted. The thing about cryptocurrency is that it's anonymous, so we couldn't contact somebody to say, hey, you know your wallet is vulnerable, and there was no one to partner with on this.

So we said, all right, let's just put a dollar's worth of our own Ethereum in one of these wallets and see what happens. And I mean, it was almost instant: that money went right to that same central wallet. So clearly this was not just something that had happened in the past; it was actively happening. And the reason I think that story is interesting, besides the fact that it's just kind of wild (you know, what are the chances of something like that happening?), is that it tells us some really important facts.

The first fact it shows is that security vulnerabilities in software exist. The second fact it shows is that attackers exploit them. And the reason I think those two facts are really important, especially for your audience of risk professionals, as well as for me and everybody who comes more from the ethical hacking and security viewpoint, is that this is why what we do matters. These aren't hypothetical issues.

And I think that's often a struggle for those who work in risk. It's like, hey, I'm trying to communicate to the business: this is the risk of what might happen. And they kind of dismiss it because it doesn't feel real. It's like, this is what might happen, this is what it might cost, and here's how we quantify that "might". But here's an example that shows: no, this stuff actually happens. We actually have to take action.

Boris: Fantastic. That's a very strong start to the podcast. Ted, could you tell us a little bit about your recent book? What is the main idea behind “HACKABLE”?

Ted: Sure. So the book is called “HACKABLE: How to Do Application Security Right”. The thrust of the book, what I set out to do, is to really help people understand how to approach securing software systems differently, because I noticed two things.

The first thing that I noticed was that I kept having the same conversations over and over and over again. As you mentioned, I'm one of the partners in this security consulting company called ISE and our customers are constantly trying to solve these challenges. 

And what I noticed was that whether I was talking to our customers, prospective customers, or people I met after giving a keynote, it seemed like everybody had the same common problems. There were roughly ten of those problems that sort of everybody had, no matter how large or small the company was, what geographic region it was in, what industry sector it was in, or how mature its security program was.

They all kind of had these same issues. And once I noticed that, I thought, of course, that's interesting. Then I started thinking about, well, how do you solve those problems that everybody has?

I really started thinking through that. And that was the moment, the swift kick in the butt that said: you must write this book. Because I recognized that the conventional approaches, the ways most people talk about dealing with these ten or so common problems that everybody has, are pretty universally backwards. There are so many misconceptions, and I know the right way to do it because I help companies with it every single day.

And I said, well, I can't let that continue to happen, because think about what that really means, right? That means you've got someone who is building a business in some way. They recognize that security is an important part of that mission, and they recognize they have some challenges in order to do security right. So they go out and try to solve and address those challenges, and the answer they get is incorrect. And I said, no, I can't let that condition exist anymore. So I sat down to write the book. It addresses the misconceptions that hold most organizations back.

It replaces them with: here's what you should do instead. Of course, it's filled with stories like the one I just told about the Blockchain Bandit, which is what we named that thief. It's all real-world examples from the front lines of ethical hacking, with the goal of saying we can do this better. And for that reason, from page one to the last page, the book is filled with the exact same type of advice that we give to our clients and prospective clients.

So that's the goal of it: to help organizations build better, more secure software systems.

Boris: Let's discuss a scenario where you are a hacker, or an ethical hacker, and you are trying to find vulnerabilities in a company. Without going into a lot of detail, can you describe in a few steps what your sequence of actions would be?

Ted: Well, I can explain it in a story. It's sort of a metaphorical story. I mean, it's a true story, but it's a social engineering story as opposed to a technical exploit story. Social engineers are ethical hackers too, and I think it very vividly paints the picture.

So a few years ago I was going to meet up with some friends at a bar near my house here in California. I can't remember exactly why we needed to go to this specific bar; maybe it was someone's birthday, but we had to go to this one.

It wasn't like, hey, let's go to this neighborhood; it was, we're going to this specific place. I showed up a little bit later than everybody else, and by the time I got there, there was a really long line to get in. This is a popular place. And once you waited through that line, you had to pay a $20 cover charge to get in. I really wanted nothing to do with either of those things. So what I did was apply the mindset that any hacker, whether an ethical hacker or an attacker, applies, and I said: how can I make this system behave differently than it's supposed to?

That's the fundamental thing about everyone who falls under the hacker moniker. Some are good and some are bad, but they're both hackers. Anyone who is a hacker looks at a system and says, how can I make this system behave differently?

The first thing that I did, and this is what any hacker would do, is set a goal. So, step one: set a goal. My goal was to find a way to bypass the line and bypass the cover charge. So I have my goal. The second thing I did was evaluate how the system works: actually understand the mechanics of the system and how the components work together. In this case, I recognized that there's a VIP line, and if you're on the VIP list, or you can associate yourself with that list, you don't have to wait in line and you don't have to pay cover.

So my goal now was to bypass the line by associating myself with the VIP list. I wanted to use that functionality in furtherance of the attack. So I walked up to the VIP hostess and I said, hi, I'm on the list. Now, I wasn't on the list, and I didn't know anybody on the list, but I needed her to believe that I was. So what I needed to happen was for her to give me information. This is the next step, and it's what really any attacker does: start to probe the system to find information.

And so when she asked what my name was, instead of giving my name or guessing a name (because the chance of guessing right is just about zero), I said: I'm with the group.

Now, that's what's called a specially crafted input. A specially crafted input is what an attacker or hacker will use in order to see how the system behaves. And so the specially crafted input, "I'm with the group," got her to respond: great, which group? Now, again, I'm not on the list.

I don't know anyone on the list, and I don't know what the groups are, so guessing isn't going to help. I need her to tell me who the groups are. So again I issued a specially crafted input, which in this case was: I'm with the big one. And with that, she looks down at her clipboard, flips a couple of pages, and says, oh, the Smith party? And I said, yes, I'm with the Smith party. And that was the next step: once you identify the vulnerability, you see whether you can actually exploit it.

And it turns out that I could. She raised the velvet rope and escorted us past the cashier and into the bar. That was a very successful process, and it mirrors the exact steps a hacker would go through: identify the goal, evaluate the system, issue specially crafted input, see how it reacts. Of course, woven in there was identifying assumptions, like the assumption the VIP hostess made that if I could produce a group's name, I was with that group.

So that was an assumption I was probing to see if it was valid. Ultimately, once you find an issue, you determine whether it is exploitable or not, because some vulnerabilities may be impossible or very difficult to exploit.

So that's more or less the process. There are obviously way more levels of complexity to it than that, but that simplifies it. I think everybody has waited in a line before, or had to pay for access to something they didn't want to pay for. We've all had that experience as human beings.

I'll end with the note that I more than made up for that cover charge with my bar tab. I over-tipped everybody; it was part of the research. So everyone got taken care of, but that's essentially the process.
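The loop Ted walks through (set a goal, evaluate the system, issue specially crafted inputs, observe the responses, then exploit) can be sketched in Python against a toy target. The `VipHostess` class and its exact responses are invented for illustration:

```python
class VipHostess:
    """Toy system that leaks information when probed, like the real hostess."""

    def __init__(self):
        self._guest_list = {"Smith party": 12, "Jones party": 4}

    def respond(self, claim: str) -> str:
        if claim == "I'm with the group":
            return "Great, which group?"
        if claim == "I'm with the big one":
            biggest = max(self._guest_list, key=self._guest_list.get)
            return f"Oh, the {biggest}?"          # leaks a valid group name
        group = claim.removeprefix("I'm with the ")
        if group != claim and group in self._guest_list:
            return "Right this way."              # the exploitable assumption
        return "Sorry, I can't help you."

hostess = VipHostess()
# Step 1: goal -- get in without being on the list.
# Steps 2-3: probe with specially crafted inputs and observe responses.
leak = hostess.respond("I'm with the big one")    # leaks "Smith party"
group = leak.removeprefix("Oh, the ").rstrip("?")
# Step 4: exploit -- replay the leaked name as if it were our own.
result = hostess.respond(f"I'm with the {group}")
assert result == "Right this way."
```

The exploitable assumption is the same one the real hostess made: anyone who can produce a group's name must belong to it.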

Boris: I can share that I had almost the same story at an airport lounge. I didn't want to stay in the common area because it was too crowded, so I invented a story like this and they gave me access. This story is also in your book, right?

Ted: Yes, I break that story down in even more detail in the book and outline the exact steps. That's one of the things I wanted the book to do: not only to inform and entertain through stories, but also to say, okay, now that you have the concept, do this, then this, then this. And that's exactly what it does.

And then there's a whole other part of the book too. I've just laid out the process, the components of how you might go about exploiting a system, but there's a whole other method for what the different aspects of security testing are that should happen, and where different companies come up short.

We can get into that later if you want, but that's also in the book: walking people through those specifics.

Boris: Fantastic. So tell us, what is a commonly held belief in the cybersecurity field, or probably the biggest misconception, that you strongly disagree with?

Ted: I cover a lot of them in the book. There are so many that are problematic, but why don't we focus on one that is very surprising. Even today, in the year we live in, people are still surprised by this. It's the question of how we should share information. To set the context: companies who are building software systems already know that they need to engage some sort of outside security expert to help them with security testing.

They either know that inherently, because they understand the benefits, or because their customers are requiring it. Most people already recognize that. But where the misconception comes in is how to actually work with those organizations. Most people hold on to a misconception, and it's pretty bad, actually, called black box testing. What they should do instead is what's called white box testing. Here's the distinction. It boils down to information; it's purely about information.

In black box testing, information is intentionally withheld. So for example, a company might hire us at Independent Security Evaluators and say, okay, here's the system, and we want you to test it. We want you to do a penetration test (there are a lot of issues with that term, because a lot of people don't know what it means), and we want to do it black box. What they mean to say is: we don't want to give you any information about how this system works.

We want you to emulate real-world conditions, right? We want you to act like an attacker will act. Now, the motivation behind that is reasonable; I understand where they're coming from. They're saying, well, if the attacker doesn't have this information, my security evaluators shouldn't have it either, because I want to see what the real world looks like. But it's incredibly flawed. What you actually want to do instead is what's called white box, where you actually share information: access to source code, design documents, access to engineers.

And the reason is that if the goal is to actually find security vulnerabilities, fix them, and improve the security posture, sharing information is an incredibly powerful way to do that. And when you withhold information, all kinds of bad things happen.

The metaphor here: imagine a medieval castle. There are certainly plenty of those across Europe, so it shouldn't be hard for your audience to picture one. Now imagine the king back in medieval times, and the king wants to know: can my enemies assassinate me?

So the king orders one of his loyal nobles to send some knights to try to break into the castle. Now, in a black box format, the king would say, well, I'm not going to tell the knights anything about the castle; they just need to try to break in. So what do the knights do? The knights show up at the castle and see there's a moat. They start counting alligators: oh, there's one, there's another. Okay, we've got this many turrets. Okay, I see archers on the turrets. Oh, I see the guy right there with the hot oil ready to spill it over.

Okay, I can see a bunch of these things. Does that help the king? It doesn't help the king at all, because the king already knew those things. All the time and effort they just spent counting alligators, the king could have skipped by just saying: there are seven alligators. Okay, cool, we now have that information, and with it we can understand what potential vulnerabilities we should be looking for. That's really one of the big drawbacks of black box.

You think you're emulating real world conditions, but really what you're doing is you are reducing the value of the input that your security partners will give you. 

Ultimately, you're not testing your system; you're testing the partner. You're asking, can this security company get into this system within this amount of time? Whether they can or they can't, it doesn't really tell you anything about your enemies. And that's one of the real big misconceptions. We've got to move away from this misapplied adherence to black box testing.

Black box testing does have its uses. If you want to test whether a system leaks information, okay, it's good for that. But when the goal is to actually find vulnerabilities and fix them, you really want to do white box.

And that's where the king would take the knights, walk them across the drawbridge, and say: hey, in that moat there are seven alligators; here's how the interior compartments of the castle work; here's our evacuation plan in the event of a siege. Those kinds of things. Now the knights can say, okay, this part right here is potentially an issue.

So let's focus on whether or not someone could exploit that.

Boris: That is a very vivid story and explanation of all these possibilities. With regard to risk management, what can ethical hackers do to help mitigate risks and help organizations, specifically risk managers and security officers, in real situations?

Ted: Really, what ethical hackers do with regard to risk is two things. Number one, actually understand it and be able to measure it; number two, be able to reduce it.

And one of the big challenges that I see really across industries is that when it comes to security, the actual risk of a certain situation isn't really well understood by the business. And what's crazy about that is that the business is making a decision either way, whether they have information or they don't have the information, they are making a decision.

So my advocacy is: well, let's have the information so we can make the right decision. The difference between black box and white box that I just mentioned is an example of where the information actually delivers results that help you understand risk and decide what to do about it.

The thing about risk is that it's a whole profession unto itself, and I didn't attempt to redefine risk in my book. What I attempted to do with my book was to help inform how to think about risk.

One of the points I make in the book is: let's accept this condition. The condition is that security vulnerabilities exist; the only difference is whether or not an organization has discovered them. They exist either way, so was enough effort put into actually finding them? Because once you've found them, once you understand that the issues exist, you can actually make risk-based decisions. If you're able to look at a system and say, we have these 30 issues, you can actually make the decision to do nothing about them.

You're going to just accept those 30, and while I maybe don't agree with that in every situation, it's okay, because you're making that decision actively. The problem I see is when organizations try to either significantly cut, or in some cases eliminate, their investment in security and in doing security testing the right way.

As a result, the only thing that happens is that they don't find the vulnerabilities. The vulnerabilities still exist; they just don't know about them. So they're still making a decision, they're accepting those vulnerabilities, but they don't realize they're accepting them.

And that's the real problem. So how security, ethical hacking, and risk all intersect is around that idea: the issues exist. Are we investing the appropriate amount of time, effort, and money to identify and understand those issues so we can make decisions based on risk?

Boris: Based on what you said, how should organizations, or even just the small guys like individual software engineers, go about finding issues so they can build more secure solutions or software?

Ted: Ultimately, the way I'd describe it is that security is a team sport. It requires collaboration between the business itself and the technology leadership, whether that's the CTO or the vice president of engineering or whoever it is, all the way down to the engineers and developers themselves, in partnership with outside security partners.

So, on the question of what an engineer should do: ultimately, the decisions about who to partner with, how to partner with them, how much money to spend, and what the goals are, are going to be made at the leadership level.

The individual engineer's job is, of course, to build the thing, and they're going to react to the marching orders handed to them. But that doesn't mean they're powerless. It does mean, to a certain extent, that an engineer listening to this isn't going to be able to say, well, I guess I'll go free up a couple hundred thousand dollars of budget to go do something.

That's not going to happen. But what an engineer can do is think about how to approach security in the actual building process.

I talk about this at length in the book, but there are some really clear moments in the development process where an engineering activity is happening and there's a relevant security action that should happen then too. For example, say an organization is designing what they're going to build. They've already established requirements, and now, because the system needs to do whatever that is, they're deciding that it's going to look like this.

So have a security conversation around that. The engineers are, of course, in the room saying these are the components and here's how they're going to interact in order to achieve those requirements. Well, there should be a security voice in the room saying, okay, based on that design, here are what would hypothetically be the security issues. And the beauty of having that conversation at that time is that, imagine, we're talking about it on a whiteboard, right?

So it's like, this circle goes to this square, and when you realize that relationship maybe has some security issues, what are you able to do? You're able to say, well, why don't we instead make the circle go to this triangle and then go to the square.

We've just changed the design, and that didn't require any effort other than five minutes to talk about it and redraw it on the whiteboard. But we've now set it up so that when the actual building happens, when development is underway, you're building the right thing in the right way. That's so powerful. And so that's my big advocacy: at every stage of the process there is a security action to be taken. Don't push security off to the end.

Boris: So we are now living in a work-from-home situation. I know that it adds more challenges for security officers; they have to set up secure access for all the people who work from home. How do you see this situation naturally evolving? What do you see from your position as a consultant to the industry?

Ted: I always try to find a silver lining in everything. You know, most security people are like, the sky is falling, doom and gloom, and I'm like, well, wait, there's a ray of sunshine coming through. Obviously, I don't mean to be cavalier about what the entire human race has been through over the last year with the pandemic. It's obviously been a real tragedy in so many ways, but there are some really good things that have come out of it. One of them is that companies are now more aware of security.

The fact that you've even asked the question, and the way you did: that kind of question is being asked every day in companies all over the world. And it's so exciting, because those conversations weren't really happening to the same degree, at least at the same scale, all across the world.

I mean, there were companies of course asking those questions, but for the most part, I think a lot of companies still think of security as a niche topic.

To put this into context, I'm a keynote speaker. And when you look at the topics that organizations and associations tend to look for in their keynote speakers, cybersecurity, the entire field of cybersecurity, I'm not even talking about ethical hacking, is still considered a niche topic.

I'm like, no, this impacts every company, every academic institution, every government in every place in the world. But people still think it's a niche, and it's not. So what the pandemic has done is force those conversations, where people are like, hmm, our model has just changed. We no longer control the infrastructure here at our physical site, at our office.

Our people are now distributed. How do we deal with that? Now, the answer to that question is important too, but it's the question that matters. It's so important to security that we're constantly asking: how are we dealing with change? Fundamentally, that's what the pandemic forced to the front of everyone's minds. Things have changed; how do we deal with that? And that's what we as security professionals, and risk professionals as well, are doing. We are trying to drive awareness of the fact that, hey, things have changed.

That means the risk profile has changed. We have to think about how we're going to adapt.

Boris: Maybe to finalize: if someone listening to this interview would like to walk away with one or two major takeaways, what would those be?

Ted: I think the big thing we haven't talked about yet, but that weaves all of these things together, would be this takeaway. Most people think about security and risk in terms of: how do we avoid a bad thing? Suffering a security breach is a bad thing; what do we need to do to minimize the likelihood of that happening, or minimize the impact if it does? And that's a very valid way to think about security: how do we avoid a bad thing?

But what's not talked about enough, and this is one of the main arguments I make in my book and to anyone who listens to me speak about it, and it's the reason so many of our customers actually hire us, is that there's a second way to think about security too.

In addition to how do we avoid the bad thing, it's: how do we pursue a good thing? This is the very often overlooked benefit that security can deliver: security delivers a competitive advantage. Take a company that is building a software system, and even if it's not a commercial business, right?

Even nonprofits and academic institutions: ultimately, when technology is at the heart of the decision, we're trying to convince somebody else that they should use it. Even if it's not money, maybe it's time or other resources they allocate toward it. So we're trying to convince people that this technology should be used. Well, when you can, first, secure it, and second, prove it, that's enormously differentiating, because most organizations really struggle to secure their systems.

And even those who do secure them really struggle to prove it. That's an enormous opportunity, because think about the person on the other side of that transaction, the person who is going to buy or use that solution. They want the solutions they use to be secure. Let's call that X. The buyer wants X, yet almost nobody can give them X. If you can give them X, that's a competitive advantage. That's enormously differentiating. It's what the buyer wants.

And I talk in the book about exactly how to do that, how to actually prove it. Of course, you have to secure it first; you can't try to prove something that you know isn't actually secure. But once you secure it, here's how you prove it.

And that's the big thing I definitely want people to take away from this: hey, here's this untapped opportunity. It's a totally different way to think about investing in security, because now it's no longer just how do we minimize the dollars we spend in order to minimize the risk we're accepting.

That's the way most people think about risk. Instead, now we're saying: that's valid, of course, so let's keep thinking about it that way, but let's also ask, how can we convert a certain number of dollars invested into a certain number of dollars earned? Because when you can give the customer what they want, that leads to sales closing faster. It's a competitive advantage, like I said.

It's something you can very much market. And so that's really, I think, the big thing anybody listening can and should take away from this.

Boris: Fantastic. I would like to speak with you more and more, but we have to stay within the format. Maybe in a few months we will have another session. But from our perspective as the Global Risk Community, what would you suggest? How can we contribute to a better understanding of this complex world of risk?

Ted: Well, anybody who's listening to this show is already doing the first step, which is to be a continuous learner. All of us are driving to be better, to learn more, to level up our skills. And I definitely want to offer myself to everybody who's listening: use me as a resource. Not only by listening to this, but if you want more advice on any of the ideas I talked about here today, if you want to learn more about the book, or if you want to follow me on social media.

Or you want to contact me about security testing needs at your company, or about coming to speak at your company or organization. However I can help, I'm a resource. Just go to TedHarrington.com; all of that information is there. Reach out to me. My commitment to you is to help. So just keep doing what you're doing, keep pushing for continuous learning, and use the resources around you. That's the big thing: keep pushing.

Boris: Okay, thank you. I will put links to your book and your website in the show notes so people will know where to look. Thank you for your time and for this fantastic interview. I wish you great success with your ethical hacking work. I love it.

Thank you so much.
