QCon SF 2018: Security Panel


Summary

The panelists discuss current security issues and ways to mitigate them.

Bio

Werner Schuster focuses on languages, VMs and compilers, Wolfram Language, performance tuning, and cloud taming. Marshall Kuypers is the Director of Cyber Risk at Qadium. Will Bengtson is senior security engineer at Netflix. Travis McPeak is a Senior Cloud Security Engineer at Netflix. Jarrod Overson works at Shape Security. Mike Ruth is a security engineer at Cruise Automation.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Schuster: Thank you for coming to the security panel. We have a very illustrious crowd here. How about we do a very quick introduction of who you are?

McPeak: I'm Travis McPeak. I'm from Netflix, and I'm very excited to be here. Thank you for having me.

Overson: I'm Jarrod Overson, director of engineering at Shape Security, and likewise, very excited.

Kuypers: My name is Marshall Kuypers. I'm the director of cyber risk at Expanse, which is a San Francisco-based startup.

Ruth: My name is Mike [Ruth]. I'm a security engineer at Cruise Automation in the infrastructure security team. Also excited to be here. Yes, let's get this party started.

Bengtson: I'm William Bengtson. I'm at Netflix as well with Travis. We're on the same team doing infrastructure security. If you're still around after this, Travis and I are talking about building a pizza, but excited to be here.

Getting into Security

Schuster: How we're going to do this is, I'm going to start off with a few questions, and then it's open to you all and you can just ask questions and hammer them, insult them, whatever you want. To kick things off - security is an interesting area, right? How did you get into security? Did you all go through army training? Did you have to deal with a tough mother every morning? How does it work?

McPeak: Yes, let's go with army training. That seems pretty plausible. Take a look at me, I obviously went through army training. No, I was just kind of born with a security mindset. I've always really liked the idea that there are certain controls that you're not supposed to get around, and then clever people can figure out how to get around them, and then even more clever people can prevent those first clever people from doing that. It's always appealed to me. I was doing weird stuff like hacking into my parents' computer because they password protected it; they didn't want me to use it all the time, and I'd try and find a way around that, doing stuff like that, even as a kid. And then I just got more into it in high school and college, doing cool projects, experimenting with stuff like that. So yes, I've always been interested.

Overson: I still don't consider myself a security guy, despite me working there for about four and a half, almost five years now. I guess at the start, I was a developer from very early on. I cut my teeth reverse engineering StarCraft and Total Annihilation and Fallout way back in the day, making map packs and trainers. But then I went into the web, full stack engineer for ages. Worked at Napster, Riot, and then kind of got sucked into Shape Security because of a connection. Then I realized that a lot of the stuff that I did like was actually security-related, like reverse engineering and analysis. It's like, "Oh, this is actually good work." But still, now I talk about this stuff and I still don't consider myself a true security guy. I feel like a fraud constantly, and that's okay. So if you want to get into security and you don't feel ready, it's all right, because a bunch of us probably don't.

Ruth: You got to fake it until you make it.

Kuypers: Before I give my answer, a quick show of hands. Did anybody come to my talk? I apologize to the few people that did, because I'm going to repeat a couple of aspects of what I said there. I'm not a developer or anything like that. I actually got into security because I joined a program for my Ph.D. and I joined a risk analysis group. All the different people in my group had different applications of risk. One person was looking at the risk of nuclear deterrence failure. Somebody was looking at the risk of the power grid failing. Somebody else was looking at the risk of asteroids hitting the Earth and killing a bunch of people. So I got to choose an application. I literally made a list of what seemed to be promising topics. At the time, cyber seemed to be something that had a lot of low-hanging fruit. So I just kind of dove into it. Within a couple of years, I was able to do some really solid research that has been sort of well received. For me, what's really exciting about cyber security in general is that the learning curve is so steep that you can become an expert really, really quickly if you just dive into it.

Ruth: Yes, that's a fair point, in that you can dive right into one area and just instantly become an expert because no one has ever bothered to look at it, right? And it's like, "Oh, hey, look at all these problems." Everything has been so interesting for years and years and years. For me, I started off as a software engineer and spent a handful of years doing that, and actually kind of bounced around. I was in this rotational program right out of college for Engineering Leadership, where they kind of threw you all over the place. I was in product management at first and that was not for me. Then I went to doing systems engineering and QA-type stuff. That was hard for me. So I ended up landing on software engineering.

Then there was a role that was open that was focused on vulnerability response. So really identifying, okay, you have all these CVEs, all these vulnerabilities that are being identified from open source components and all those types of things. How do they impact the systems that you're working with, the systems and products that you're creating? How do you do an analysis based on that? There's quantitative and qualitative analysis you can do there. I thought that was pretty interesting, but not quite what I wanted to do, I think, for the rest of my life. So then I took the effort to go through the whole cert game, and get certifications from SANS and all these other places, to really get a foundational knowledge. I really loved everything that I learned from that. And then I just kind of went full steam ahead on that.

Bengtson: Yes, I actually wanted to be a doctor first. And funny enough, I saw a movie called "The Recruit," where Colin Farrell plays this computer hacker recruited by the CIA. And I was like, "Oh, that's what I want to do." I want to be a hacker for the government. I applied to get the NSA student scholarship and got denied. But I was really good with computers, so I still just went through the track of, like, everything needs to be programmed. So I just went to be a software dev and kind of lost track of security for many years, and then somehow found my way back in, and was doing reverse engineering and anti-tamper for the government. That was really cool.

Then I just was like, "There's such a big world out there in security," and I just went and learned everything I could, and eventually found my way to Silicon Valley, and now I'm here at Netflix. The advice I give is, if you want to do security, get a good foundation in something, and then dive into security from there. Software was the foundation for me, and I was able to branch into many different areas of security from there.

Bad Security Practices

Schuster: One question I have is, you're all in the industry, you all have lots of experience in security and in dealing with developers who don't know anything about security and all they do is create bugs. What are the bad security practices that piss you off the most? The things where, when you see them, you really want to go over to the developer and politely say, "Don't do that," or shake them by the neck.

McPeak: That's a good question. I'll take a little bit of issue with "all they do is create bugs," because they produce value. They're the reason I have a job. I'm not in the money-making business. I'm helping developers to do the right things so we can have business value. I have a lot of, I guess, sympathy for developers because they do crazy stuff, these nutty algorithms that I don't want to learn about. I just want them to kind of give me the TLDR. That's the way that I feel security people should be with developers: "Hey, I don't care about the intricacies of command injection, how that works, just tell me the things that I need to know to not screw this up." I think that's sort of where we can add security value.

I would say that the one thing that does sort of upset me is common, obvious things that developers should know naturally, like hard coding a terrible password in code. That's the sort of thing everybody knows not to do. I understand it can happen accidentally. But when I see that, then I start to think, "Okay, what is it we need to do to prevent that?" But generally speaking, security vulnerabilities are very complex. I don't want developers to spend a lot of time on that. I want them to spend time making awesome software so I can still have a job.

Overson: The thing that gets me the most, and I fall victim to it, I don't have a good answer for it, is just the general invincibility complex that a lot of developers have: "I know enough about computers that nothing bad will happen, despite all these awful things that I'm doing on a day-to-day basis." People throw their hands in the air when instructed to curl a script from the web and pipe it through Bash, but they will npm install all day and run code from who knows where without concern. There are so many practices that can leave people vulnerable, and can cause major problems for internal company networks. I think a lot of people just kind of hope that nothing bad happens. It's scary because these are a lot of the people who influence others and set best practices, and we just have, in general, pretty poor practices all around.

Kuypers: So my main frustration, I would say, is not so much a specific thing but more a class of thing, a way of thinking. My belief is that you should be thinking very probabilistically, doing these cost-benefit analyses for all the different decisions that you're making when you're choosing to implement something, or deciding how much time you're spending from a security standpoint. So what gets me is when it's very clear that folks haven't done that simple calculus.

As an example, in my personal life, I don't lock the sliding glass door on my balcony because I'm on the second floor. If somebody wants to break into my house, they can rappel off the roof and get in. But if they do that, they can have my TV. I don't really care that much. Whereas the low-lift stuff that addresses more risk, like locking your front door, because that's where most people are going to get in, is the important thing to go through and do. So just getting people to think in a data-driven or probabilistic framework about the decisions that they're making and what they're choosing to implement or not is, I think, something the industry should move towards more.

Ruth: Yes, we're never going to make something completely secure, right? It's really about how much effort you need to put in to get to a certain point of mitigation where that's an acceptable risk. That's really the point that you want to get to. I think that's what security is all about: mitigating risk to an acceptable degree. One of the things that frustrates me is, I guess, when we're not practicing defense in depth. We're really kind of pointing at one thing and saying, "Hey, that thing is a security aspect, and as a result, we're secure."

Then there's settling on these ideas, or trying to come up with scenarios that might reinforce a certain way of thinking, regardless of whether that way of thinking is good or not. One example is not having authentication and authorization on your applications because you're on an internal network and it's private: only the people on that network are people that are trusted. I've heard that once or twice before. I don't think that's a good mentality to have. So it's those types of things that frustrate me.

Bengtson: Did all four go? That's the best part of going last. Yes, I can relate to everything that everyone said. I remember being at my first startup and seeing that the default database password was pink. But I think what really frustrates me are two things. One, developers, if you give them the right context, know what the right things to do are. But sometimes they hide information because they know they'll have to go do work. That to me is the terrible thing, the thinking of, "If I actually voice my concern in this area, I'll actually have to go do things."

Sometimes program management will kind of shield that from security as well. But I think what also really irks me is, as security professionals, we're also developers because we're running tooling. When we're not dog-fooding and doing the things that we're telling others to do, that really gets me, because why trust what a security person is telling me to do if we don't do it ourselves? So I think before we can actually tell developers to do things, we have to do it ourselves to show, "Hey, we do it as well." When I see that, that really gets me.

Integrating Security People with Developers

Schuster: Well, that brings us to an interesting question. How do you integrate the security people with developers? The security team is kind of like the evil ogres in the basement that yell at you if you reuse your password or something. Is there a better way to do that? I think it was at Etsy where Smith was talking about putting the security team at the center of the company, making them look friendly, giving demos to developers. Is there something that you guys do to make yourselves look less like evil ogres?

McPeak: Yes, that's an awesome question. I like that a lot because I've seen multiple ways of doing it. I've been at a company, I'm not going to name which one, where the security team required that developers fill out a five-page, literally a five-page checklist, of things to do every single time they would ship code. And guess what? They'd ship without doing the checklist. They're like, "Oh, yes, this isn't a new version. We're just doing this slightly different thing." So they'd fill it out one time. They'd kind of check the boxes and BS it and then never do it again.

The reason they were doing that is because we weren't providing them value. We weren't acting as advisors. We weren't their ally. We were there to slap them on the wrist and tell them to go back and do more work. The way that I've seen this be really effective is the exact opposite of that. Instead of this security team that's going to block them from doing what they need to do, you are the advisory security team, you're their ally. You're the person that they know, like, "Oh, crap, this thing needs a security eye, please help me with this thing." And you help them to do it as quickly as possible and get back to delivering the business value that they want to deliver.

Overson: Yes, that's good. It's a good question. Good answer. I think the best, or at least an effective, tactic for dealing with anybody else in the company, not necessarily just developers, is to understand that security is a choice on a gradient, and it's okay to push that up or down as necessary, as the business deems appropriate, as long as it's a conscious decision and the cost and value are discussed appropriately all the way up and down the chain, from executives down to developers. Then no one feels like they're pulling the wool over anyone's eyes, and no one feels like they're the only one who has the burden of making sure the company is safe. It's a conscious decision the company is making to do something, or to not do something, and then it's properly time-boxed, or the effort and team of staff necessary are put in place. It's a conversation that often doesn't happen, because security teams either want too much or make it a binary decision: either you are secure or you're dead, and no one likes having those choices.

Kuypers: Yes, I think that there could be a lot of benefit in having higher quality conversations between the security folks and the developers on why security is implementing certain things. I got the chance to work with the Jet Propulsion Lab. One of the systems that they have is the Voyager spacecraft, which is the farthest man-made object away from Earth. It's still operating, it was launched back in the 1970s. And you can go into this room at JPL and the computers that are used to communicate with it are from the 1970s. You can imagine that they haven't been updated and patched. All of the researchers are going to the security folks and are like, "Don't you dare touch those. If you try to update anything you're going to break it.” This is really a priceless thing. There's no other spacecraft that is that far away from Earth.

So what we tried to do is change the conversation a little bit. We said, "Look, the reason we want to patch this is because we believe it's in an insecure state. And yes, there is a 5% chance that if we try to update the software, that you're going to lose contact with your spacecraft. However, if we don't, we think that there is a 10% chance per year that somebody is going to find this and hack into it and you're going to lose your spacecraft anyway." So changing that and framing this in the right way, of the cost and the benefit, really can get the researchers and the developers on the same page of, almost clawing to try to get some of these things updated. Because then they really see, "Oh, this is really a risk reward thing and I can see why the security team is trying to implement this."
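To make this kind of calculus concrete, here is a minimal worked example in Python using the 5% and 10%-per-year figures from the story above; the asset value and the five-year horizon are purely illustrative assumptions, not figures from the panel.

# Figures from the example above (5% chance the patch breaks contact,
# 10% per year chance of compromise if left unpatched); the asset value
# and horizon are made-up numbers for illustration only.
ASSET_VALUE = 1_000_000
HORIZON_YEARS = 5

p_break_if_patched = 0.05
p_hack_per_year = 0.10

# Probability of at least one compromise over the horizon if nothing is done.
p_hack_over_horizon = 1 - (1 - p_hack_per_year) ** HORIZON_YEARS

expected_loss_patch = p_break_if_patched * ASSET_VALUE
expected_loss_no_patch = p_hack_over_horizon * ASSET_VALUE

print(f"Expected loss if we patch:  ${expected_loss_patch:,.0f}")
print(f"Expected loss if we don't:  ${expected_loss_no_patch:,.0f}")
# With these numbers, the unpatched risk (~41% over 5 years) dominates the 5% patch risk.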

Ruth: Yes, I think framing this around an us-versus-them mentality is problematic too. I actually think that maybe these three questions that we've had so far are perhaps even reinforcing that, in that it doesn't necessarily need to be security teams and non-security teams. This might be a scalability challenge, but actually embedding folks from the security organization into all of your teams so that they can be advocates and part of that team shows, "Hey, this is perhaps how some security implementation may need to be done. And, by the way, here I am taking that story off your backlog and doing it for you." So you add value immediately. I think that's super important. But it is definitely a scaling challenge to have this one-to-N ratio of security engineers to developers, or whatever the individuals are.

Bengtson: Yes, I think it's very powerful to build that trust and those relationships with the development teams, but also understand that sometimes your security team isn't large enough to build relationships with everyone. I really love the approach that Netflix has taken, in that they've built building blocks that make taking advantage of things very easy. If you want to enforce auth everywhere in your environment, it's like installing a Debian package. All of a sudden, you have auth in front of your app, which I think is the ultimate way to scale. And it shows the developers that it's just super easy to take advantage of these security features, instead of just saying, "Hey, we want you to go do auth, go figure it out. I don't have a sample package for you. But you're smart, you can go write it yourself, right?"

That, and I've also found when I work with development teams and want changes, me being willing to actually push code to them for review has been received very well. It's not just me always throwing things over the fence to them; being willing to actually go make changes to their code and get that understanding of things and that relationship has gone a long way as well. So the next time that I go asking for something, they're like, "Oh, man, you actually wrote code last time for us, let us do it for you this time."

Schuster: At this point, any questions or comments?

Participant 1: [inaudible 00:20:11]

Kuypers: I believe that they didn't end up patching it. They're still in this really bizarre state where, whenever something breaks, they have to replace the part, and they have to go on eBay to find old computers from landfills to be able to replace these parts, because they don't make them anymore. But I think that there was a compromise in the sense of, "Hey, we're going to build some secure environment around this so that it kind of reduces the probability, whereas we're not actually going to touch the system itself." So it's almost that defense-in-depth frame.

Participant 2: Yes. Well, you had mentioned that you guys try to make it very easy for developers to, let's say, put authentication and authorization in front of a service. What technology do they use to abstract that away from every service?

Bengtson: That's a good question. Luckily at Netflix we all operate off of a base image in the cloud. A good example for authentication: we have an internal SSO system that our identity team built that is based off of Google and PingFederate. For you to actually take advantage of that, it's language agnostic, in that as long as you have Apache in front of your web app, you can install an Apache module that speaks our SSO language. And then it just passes headers with a JWT to your application.

So as long as you can read the headers, you're fine. If you're Python, Java, whatever language you're writing your web framework in, you don't have to worry about having a library to actually talk that language. So the next time someone comes up with the new hotness of language and they want to use it, they just need to make sure that they're behind our Apache module, and then they're fine.

It's been a really easy way for us to scale and adapt to all languages. The converse to that is, if you want to tightly integrate with mutual TLS for your given language, you either have to accept these are the languages that we support, or request us to adopt the new language, or "Here's how we did it in these languages." Basically your mileage may vary. But for the very basic "I just need SSO in front of my app as an auth gate," it's as easy as just installing this Debian package, putting your secret key here, and then you're good to go. That's been pretty powerful, especially for when I do my own web application - I was like, "Oh, I don't even have to write this. This is great." So we, as security, can take advantage of it just as much as the developers can.
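To make the pattern concrete, here is a minimal sketch in Python, with Flask, of an application sitting behind such an SSO proxy. The header names, the permissive identity check, and the localhost-only binding are illustrative assumptions, not Netflix's actual module or header contract.

from flask import Flask, request, abort

app = Flask(__name__)

@app.route("/")
def home():
    # The fronting proxy (Apache/nginx SSO module) terminates authentication
    # and forwards the identity; the app just reads headers, whatever its language.
    user = request.headers.get("X-Forwarded-User")        # hypothetical header name
    groups = request.headers.get("X-Forwarded-Groups", "")
    if not user:
        # Never trust traffic that bypassed the proxy; in practice you would
        # also verify the accompanying JWT signature rather than the header alone.
        abort(401)
    return f"Hello {user}, groups: {groups}"

if __name__ == "__main__":
    # Listen only on localhost so all traffic must come through the proxy.
    app.run(host="127.0.0.1", port=8080)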

Overson: Are you open sourcing that?

Bengtson: I actually think the Apache - there's an nginx variant of the module that's already open source. It just speaks OpenID, OIDC, I guess. But I don't know how open source it will be in the future from our end.

Kuypers: I have a random question for the other folks on the panel. Whenever I talk with security professionals, they always have this teamwork attitude, yet there still seem to be quite a few examples in media where there will be a bug reporting program or something like that. And somebody is trying to be helpful and reaches out to the security team, and is threatened with legal action. What do you attribute this disconnect to? Do you guys see this in your day-to-day life? What do you think we can do about it?

McPeak: I attribute that to an old school mindset. I think that most organizations that are forward thinking and understand that researchers are there to help them would not react to a researcher in that way. Some of the best organizations that I've seen have come out with very researcher-friendly programs and statements so that they attract the best researchers. Not only are you scaring them away by taking this old school mentality, but you're not getting the best people that you could get. Organizations that want to lean into that have adopted really researcher-friendly policies. I can't think of any reason why you would want to treat a researcher that way, except for just an old school way of thinking.

Overson: Yes, I think it's very much aligned with what you said. The teams who respond that way have responded that way for years or decades. Without any motivation to change, they will continue to respond in such ways. I think it's a conversation about the risk and reward of responding one way versus another, and then encouraging companies to be a little bit more open and forthcoming with discussing things and working with researchers. Because no one company is going to win the security game.

Ruth: Yes. That's almost a fear response. You're trying to instill fear in that researcher so that they don't go on to disclose. I don't think that that's necessarily the right way to go about things. Everything that's been said already, I think, is on point.

Bengtson: You can almost say that nowadays, with social media, the minute that you treat a researcher badly, that steers everyone away from your program. The idea of having these bug bounty programs is to have them open out there for everyone in the world to take advantage of, and report those vulnerabilities to you before they go sell them on the black market. I've seen tweets about various programs acting negatively toward a researcher who was just trying to do the right thing, and maybe came across the wrong way. Now, I know I would never want to participate in that program.

I can only assume that others out there catch wind of a bad interaction with a program and then would steer clear from there. But I have seen progress in adopting the safe harbor approach, and actually putting policies in place that say, "We will not take legal action against you if you report a vulnerability to us." I've seen that trend picking up. So hopefully, it catches on.

Ruth: Not only that, but bug bounties as a service even. You take a look at Bugcrowd or HackerOne or all those; they make it even easier, and they have all the policies in place, and it really removes the need for a company to come up with those ideas itself. I think that helps a lot, too.

Bug Bounty Programs

Schuster: What do you think of bug bounty programs in general? Because they're kind of popular. Everybody says, "Let's go to wherever." Is it HackerOne or Bugcrowd? What would you recommend to a company? At what size? At what complexity level? When would you use a bug bounty program?

McPeak: That's an awesome question. I think that a lot of companies tend to, if anything, underestimate how much work it's going to be to launch a successful program. You have to be at a certain maturity level to get value out of a program. If you have very low-hanging fruit and you open up your program, it's just going to get clobbered. You're going to end up paying a bunch of money for things where one security person, part-time, could probably have found most of the issues.

If you've never had any penetration test before I don't think it's the right time to open a program. That being said, if you have had multiple tests, if you've had people internally look at it, you feel like it's in pretty good shape and you're ready to dedicate the time and resources to make it successful, I think that there's probably nothing equivalent to the success that you could get with a well-run bug bounty program.

Overson: Yes, 100% agree. It's something you do when you are mature, and you've done everything possible to find all the bugs you possibly can, and you cannot find any more, or you have so many people looking at it that the only way to get new bugs is to open it up to the world and encourage other people to contribute. But they're becoming so popular that companies are thinking of them as more of a social media type exercise, which just opens up so many issues.

Ruth: I guess I want to change the question or topic just slightly, in that I think it's absurd that we can go to certain companies' bug bounty programs where they'll offer thousands and thousands of dollars for something simple like a cross-site scripting vulnerability, whereas we use open source tools that are everywhere and there's no one looking at those. There's no bug bounty funding at all for looking at those things. I think that is a huge skew, and it's problematic, because for open source the question is no longer whether you're mature enough; that'll never be the case, since there's no backing behind it.

I think it's incredibly important that we have similar bug bounty type things for everything, really. For any open source, especially, I think it's super important, because it's not being looked at enough. And selfish plug - I guess, not selfish for me - but BountyGraph is actually something that has been doing that recently, where they're trying to get crowdfunding for open source components to do security audits. So, definitely take a look at that if that's something that interests you.

Schuster: These are audits done by security professionals or automated?

Ruth: Yes. It's a pool of security professionals who will then take on that work, because it is being crowdfunded through whatever sponsorship. Max Justice is a great guy. He put that together. He's over on the East Coast. I can't say enough great things about what he's trying to do there.

Schuster: Isn't Google doing this with its fuzzing program in an automated way, fuzzing open source projects at a grand scale?

Ruth: I guess I can't speak to the open source projects, I don't know, but they certainly do have a grand-scale fuzzing project at hand. I think that feeds into their bug bounty program as well. If the fuzzer or library that you've provided can find something before theirs does within 24 hours, you get paid. Something along those lines.

Bengtson: Yes, I would say, I'm all for bug bounties. I would almost encourage folks to think about it this way: you could start a bug bounty with a super small scope and kind of release things one by one if you want to just feel it out. You can do what we did. We started private and just kind of invited a handful of researchers. That way, you don't open it up to the world from the beginning. But definitely, I guess I wouldn't put one of my services online until I've actually done some sort of internal contest or something. There are definitely different means to get your applications looked at by a multitude of different research professionals.

But yes, it's definitely something that is going to take more time than you originally think. It's important to think about how fast you actually triage and remediate vulnerabilities reported to you, because if you're slow, that will detract from researchers wanting to participate in the program as well. I know that's something our appsec team at Netflix has a pretty high bar for: how quickly they respond, and consistency in response, so that it doesn't vary from team member to team member and everyone gets that same feel when working with us.

Kuypers: For the open source versus private software question, this is something that academia loves to debate, the two theories being, you'll have something that's open source, so anybody can add to it and put something in there, but there are a ton more people looking at it, versus you've got a smaller group of developers. So sort of less risky if it's private, but nobody is really looking at it. There's not any validation. What's your opinion on the trade-off there of which one is better?

McPeak: I love that question. Supply chain scares the crap out of me; there's so much stuff. You look at all of the open source components that the average software project pulls in. You just use the thing because that's what everybody is using, you don't want to write your own, and that's good. It adds to velocity. But you have no idea who the developers are, what their motivations may be, who's reviewing that code, how carefully they're reviewing it. Even for something like the Python Package Index, what is the password strength that all those developers have for their accounts? Because if one gets compromised, a malicious developer can push a bad package and you're going to run code when you install it.

That stuff scares me so much. There's not really a good way of handling it. There's not a good way of automatically auditing any updates that are made to the packages to find anything malicious. And I agree, because you don't have the money behind it, people aren't incentivized to put as much attention as you would to the in-house software. So I think it's the ultimate disaster scenario in my opinion.

Overson: On the supply chain problem, we talk a lot about developers being exploited or whatever else. But as a developer, if you have a package out there and you don't particularly care about it, if somebody offered you $10,000 to adopt that package, would you say no? Stuff like that happens, and it might be the stupid two-line package that is consumed by thousands to hundreds of thousands of people on a monthly basis. And that's an instant in to corporate networks and applications, and we don't really have good answers for that. This is not just a nightmare scenario; this stuff happens.

Ruth: Yes, I can't stop nodding my head, it's so true. There's no guarantee, for example, when you pull in a package that, even if you have access to a GitHub repository for the source, they're even the same thing. What sort of pinning is going on there, or signature signing, or protection against any sort of squatting? There are a lot of concerns there that I don't think have been solved. It doesn't necessarily answer your question of whether open source is perhaps more secure because of the crowdsourced model, versus something proprietary that isn't being looked at. I think there are security vulnerabilities in both of them, and there always will be. I don't necessarily know if one is more or less secure, per se. It may even be unfalsifiable, or I don't know if there's necessarily an answer to that question. But certainly, the supply chain elements that have been talked about are super disconcerting.
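As a rough illustration of the pinning idea mentioned here, the sketch below (in Python) verifies that a downloaded artifact matches a hash recorded ahead of time; the URL and hash are placeholders, and for Python dependencies specifically, pip's --require-hashes mode provides the same guarantee.

import hashlib
import urllib.request

# Placeholder values; in practice the pinned hash comes from a lockfile or
# requirements file you control, not from the same place you fetch the artifact.
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"
ARTIFACT_URL = "https://example.com/some-package-1.0.tar.gz"

def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    data = urllib.request.urlopen(url).read()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        # Refuse to proceed if the artifact differs from what was pinned.
        raise RuntimeError(f"Hash mismatch: expected {expected_sha256}, got {actual}")
    return data

# fetch_and_verify(ARTIFACT_URL, PINNED_SHA256)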

Bengtson: Yes, I would say, even though something is open source and people are looking at it, it's not really scrutinized until someone shifts their focus to actually pick apart that application, or that piece of software. I forget what it was last year that Tavis [Ormandy] from Google started picking apart, and then it was vulnerability after vulnerability after vulnerability for that single package. Just one came out, then another, and then another. It's just like, "Oh, someone found a vulnerability, there might be more," and that shifted everyone's focus to go do that.

But on the supply chain attack side, to me, it's pretty scary. I've been running a typosquatting project and own a crap ton of PyPI packages. I might be the number one author of Python packages on PyPI, because I have thousands now. I can't tell you how many times I get requests from folks that are like, "I can't install your package properly, can you help?" I'm like, "Did you read the error and go to the right package?" Or, one time I got asked if I could grant a license to an actual really big company. They wrote me asking if they could use my package. So I was like, "Yes, for sure. Go ahead." If you can get it to work, you can use it, because it doesn't work.

And Travis was really great. The biggest problem with open source is you depend on the person writing the code. Is the code good or not? How does the documentation work? I wrote a tool that seemed to be pretty popular, and Travis was helping me actually test it out. I didn't write any instructions for him. He just started running it, and it actually racked up a huge AWS bill, because I didn't tell him, "Oh, by the way, don't run this with permissions. You should run it without permissions." He actually had to file support tickets to get a bunch of stuff canceled. It was like, "Oh, I just subscribed to a 12-month contract for Shield." So that's always a big risk too: "I gave you a really cool tool, but have I given you the tools to use that tool effectively or successfully?" A common misconfiguration can take a really good tool and make it really crappy.

Third Party Risks

Kuypers: One of the other trends that we're seeing in the security sphere right now is a huge concentration on third party risk. There are companies that have sprung up that will give you a credit score for other entities that are out there. And there seems to be a huge concentration on, "Well, I'm going to give other people my data or allow them access to my systems, and that presents me risk." Do you guys have any faith in any of these risk scores? From a security practitioner standpoint, how do you think you can actually effectively evaluate some other system that you might partner with?

McPeak: Yes, it's so hard to assign a risk number from 0 to 10 for something. I think a lot of the time you're looking at broad indicators. For example, we've looked at IAM policies for AWS, and tried to come up with a risk score for them. It's so context-dependent, because you might have something fairly benign, you know, the ability to read an object from a given S3 bucket. But without knowing how sensitive the data in that S3 bucket is, you don't know if that's "meh" or if that's, "Oh, how can we have this in our environment?" So automated risk scoring, to me, has never carried a ton of weight. If anybody comes up with a great way of doing it, I'd love to have it, because I think that's one of the holy grails of security: "Oh, that's a four. I need to pay attention to it." Even CVSS scores are totally context-dependent, too. So yes, I've never seen a great one that I would love to point to and say, "Yep, that's the one." But if you do, please send it my way.

Kuypers: There have even been some cool papers that have looked at it. If you go and patch everything that's a 10 for CVSS, versus if you just go and patch everything that has an exploit in Metasploit, you do way better with the latter than with the former. So the way that they go through and come up with these risk rankings, I think, is often quite dubious. But then I would say, my guess is that everybody in security at this point has come across a vendor questionnaire, where you've got to fill out a form or a bunch of questions like, "Do you guys use two-factor authentication?" And everybody kind of knows that these are just bad questions and a waste of time, and yet everybody still keeps doing it.
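As a small illustration of the prioritization heuristic being referenced, the sketch below orders findings by whether a public exploit exists before falling back to the raw CVSS score; the data structure is hypothetical, not any particular scanner's output.

# Hypothetical findings; the CVE names and fields are illustrative only.
findings = [
    {"cve": "CVE-A", "cvss": 9.8, "public_exploit": False},
    {"cve": "CVE-B", "cvss": 6.5, "public_exploit": True},
    {"cve": "CVE-C", "cvss": 7.2, "public_exploit": True},
]

# Exploit-available findings first, then higher CVSS as a tie-breaker.
prioritized = sorted(findings, key=lambda f: (not f["public_exploit"], -f["cvss"]))

for f in prioritized:
    marker = "exploit available" if f["public_exploit"] else ""
    print(f["cve"], f["cvss"], marker)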

Bengtson: I'd say one of the hardest things with doing risk, and trying to loop it all together and come up with the bigger picture, is that most of the time programs are approaching it single risk by single risk, and they don't actually put them all together. You think of a CVSS score of 10 or a Metasploit module, but oftentimes vulnerabilities come out that are just like, "Oh, I only have four lows, and maybe one medium, but when I chain all those together, I have a super-duper critical." Oftentimes I've seen programs just not take a step back and look at it holistically, all together, and see how they all relate, especially when you're given a very complex system and you don't necessarily understand how the components interact with one another. It can become very, very difficult. But yes, likewise with what Travis said, if there's anything that is really good at contextual, automatic analysis of these things, that would be awesome.

Ruth: There's one part of that question you put in there - it sounded like you were talking about letting companies or products into your environment to do that scanning. That's not so much about whether they give you a number and that'll solve all your problems, but I would say that that's particularly dangerous too. You have a lot of SaaS vendors where you can save time and money if they just have admin access into your entire AWS environment, and now you've opened up your threat landscape to that entire company, right? So I think there's definitely a strong consideration of, is it worth it? Are you getting enough value to justify increasing your risk by doing those types of things?

Schuster: I think we have about five minutes. Any final questions or comments? Last chance.

Social Media & Security

Participant 3: Hey, so what do you think about social media in terms of security?

Bengtson: That's where I learned about security.

McPeak: I was going to say, I'm a big consumer of social media for security news. I think that people, on a personal level, disclose a lot of stuff. What is the site - pleaserobme.com or whatever, where people would disclose that they're on vacation, and it knows where your house is and how far away you are, and it'll tell you, "This person - you have like 55 minutes to rob them before they come back home." Some people are clearly oversharing some data. I think companies might tend to do that too; they're giving away typical OPSEC failures on social media. Hopefully not too much, but I think that there are definitely mistakes that you can make if you're not careful about what you disclose on the internet.

Bengtson: Where are you finding these sites?

Schuster: He built it.

Overson: Yes, that is a loaded question for the five minutes till the end. Everything is horrible. Don't sign up for anything.

Kuypers: I've got no comment.

Ruth: The first thing I thought of was, I think it was back in 2016, there was a DEFCON talk about how they were trying to create sort of this ML, AI type of bot that would go through your entire Twitter history and then try to craft some sort of phishing link tailored directly to you, to be able to, whatever, do a thing. Yes, we overshare and there's a risk associated with that, for better or worse. Maybe you get something out of it, though. Maybe you have a community and you enjoy all that oversharing. But maybe you'll suffer for it because of that. I don't know. "Black Mirror," if anyone watches that? People were able to recreate others because...

Bengtson: What's that?

Ruth: Is that right?

Bengtson: Damn it.

Ruth: No. It's on Netflix, by the way. So anyway, it's a pretty open-ended question.

Bengtson: Yes, I don't know what else to add to that. But I do have a funny social media story, in that my number one Twitter fan is my mom, and she knows nothing about security. But she has so many people in the security field that follow her. It is amazing. That just leads to the whole fake news thing on social media. My mom knows nothing about it, but she's followed by some of the best security people. That's pretty hilarious.

Overson: From the standpoint of where I work, we deal with a lot of account takeovers, and one of the things that every account has, regardless of any monetary value, is just trust associated with it. That trust can be exploited in so many different creative ways, depending on how the criminal wants to target individuals. Social networks, especially the popular ones, have such an incredible amount of trust associated with every individual account that it turns the exploitation level up to just an astronomical amount. There's so much risk around what you do on a social network, how the social network can be used against you, or how your network can be used against you, or how you can use it against your network. There are so many terrifying, horrible things on the internet right now.

Schuster: So on that positive note, I think the thing I got from this panel is that we should all follow William's mom on Twitter to stay safe.

Bengtson: @eplizard.

 


 

Recorded at:

Apr 12, 2019
