
Perspectives on Trust in Security & Privacy


Summary

The panelists discuss balancing adjustments to security posture against the user experience.

Bio

Clint Gibler is Head of Security Research @r2cdev. Stephanie Olsen is Customer Trust, Abuse & Fraud @Netflix. Cassie Clark is Security Awareness Lead Engineer @brexHQ. Ellen Nadeau is Privacy Analysis Engineer @Cruise.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.

Transcript

Westelius: Welcome to this panel on mutual trust in security and privacy. The theme for this year's QCon security track has been establishing and maintaining customer trust. Trust can mean very different things in different aspects of security. Trust is often the measure we weigh against when introducing controls, policies, or processes: higher levels of trust often mean more lenient security or less friction, regardless of who our customer is. In today's track session, we've explored trust in systems and code through security chaos engineering, lack of trust in an organization's employees through the implementation of zero trust, and dynamic authorization for external users through a centralized IAM platform. As we evolve security practices, being able to treat users differently depending on trust levels offers a lot of flexibility, both when we're considering external users and when we're considering employees.

Background, and How Trust Plays Into Different Roles

On our panel, we have four experts and leaders in very different areas, with a broad range of customers. We have Clint Gibler, head of security research at r2c. We have Stephanie Olsen, leader for customer trust, fraud, and abuse at Netflix. We have Ellen Nadeau, lead for privacy engineering at Cruise. And we have Cassie Clark, security awareness lead at Brex.

I would love for each and every one of you to introduce yourself and tell us: how does trust play into your role and your area of expertise?

Nadeau: I'm Ellen Nadeau. I've been at Cruise for almost two years, working to develop self-driving ride-hailing and delivery services. I lead our privacy engineering team. Before Cruise, I was at the National Institute of Standards and Technology, as part of their privacy engineering program. I'm not representing Cruise; I'm speaking in my personal capacity. There are so many drivers for doing good privacy: partly because it's the right thing to do, partly to meet compliance requirements. The trust that individuals have in one's company is also one of the main drivers for doing good privacy. It's so crucial for your customers and employees to trust how you're managing their data: what you're collecting, why you're collecting it, what you're doing with it. If an individual doesn't trust a company, if they're a customer, it could impact their experience with that company. They may choose to avoid interacting with the company altogether, and it absolutely could impact employee trust in their employer if the same care is not taken internally. User trust is really crucial to business goals, and employee trust is really crucial to culture internally. I think engineering our systems, and the role privacy engineering plays in that, is really important to protecting individuals' privacy and is a core component of trust.

Westelius: I think that there is a clear alignment between the privacy component and trust.

Olsen: I'm Stephanie Olsen. I manage customer trust at Netflix. I've been in the fraud and abuse space for about a decade, trying to figure out how to stop the fraudsters at various companies like Google and Pinterest, and now here at Netflix. I'm also here representing my own thoughts, not representing Netflix. In terms of trust: several years ago, at the beginning of my fraud-fighting career, we as an industry were very much fighting fraudsters through loss management. It was really about, how do we just shut down as much of the fraud as possible and manage the bottom line through losses? There was an extreme reaction from consumers, broadly across industries, that that wasn't working very well: they were not excited about losing access to their accounts, having money stolen from them, and not having meaningful, happy paths to recovery. That transition in the fraud industry as a whole began about five to seven years ago. For me personally, at the same time, that's really when I started to focus as a security professional on the balance between consumer and customer trust, versus protecting and navigating the security controls that need to be in place on a given platform or product. It's an essential part of our decision making all the time, and we measure it in various ways.

Westelius: Absolutely. I think the balance between the friction, or the controls that are put in place, versus usability or even growth is hugely important, especially in the area of fraud.

Clark: My name is Cassie. I am the security awareness lead engineer at Brex. We're a company that focuses on financial systems, really bringing all of those disparate financial systems together into one place for our customers. It depends on the audience, but trust is always going to be the same for us, and for security awareness in general. We're really focused on connecting that trust to behavior change. If you don't have trust from whatever audience you have, whether that's internal or customers, you will not get them to do the things that you need them to do, or that you want them to do. It's really crucial for my own role, and really for all of security and privacy teams, to be able to secure that trust and have employees recognize that we're not asking you to make changes because we just want to be difficult; it's really what's best for the company. We're not asking you to go out of your way to do all of these things; we're going to help you along the way. We're going to be a partner to you. We're going to try to make it simpler for you to do the right thing, the secure thing. That trust is one of the lenses we apply to all of our thinking when we look at awareness work in the industry.

Westelius: I'm really looking forward to seeing some of the common threads or similarities in how we're thinking about trust across all these different aspects. Clint, you deal a lot with development, the SDLC in engineering, and trust in code.

Gibler: I think unlike some of the other speakers, I am fully speaking for my entire company, and all things should be considered legally. I'm the head of security research at r2c, a small San Francisco based startup working on building security tools and giving them directly to developers. Rather than having a separate security team that is very friction-full and slows down engineering, we try to approach it from the mindset of: how can we be customer centric as security professionals, treating engineers and developers as our customers? How do we have a good developer user experience? How do we enable them to do their job better and faster, and ideally make security as orthogonal to their day-to-day job as we can, so they can just ship features and be happy?

One thing I really liked that Ellen said is security and trust as a business enabler. We've definitely seen that as well. Specifically, we're building an open source static analysis tool called Semgrep. Basically, it's a tool that analyzes source code for vulnerabilities, best practices, antipatterns, things like that. If you're running in a CI/CD pipeline, that's a pretty high measure of trust. You have access to source code, and probably to environment variables and secrets that, if stolen, could lead to severe compromises. We saw Codecov and a number of other supply chain attacks recently, so obviously this is on a lot of people's minds. For us, having a very high security bar is very important, because we sit in a sensitive place for many companies. Trust between us and our customers is one interesting aspect.
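To make the static analysis idea concrete, here is a rough sketch in Python (purely illustrative, not an actual Semgrep rule) of the kind of antipattern such tools commonly flag, alongside the safer pattern they suggest:

    import subprocess

    def archive_logs_unsafe(filename):
        # Antipattern a static analysis tool would typically flag:
        # concatenating input into a shell command enables command
        # injection (e.g. filename = "x; rm -rf /").
        subprocess.run("tar -czf backup.tar.gz " + filename, shell=True)

    def archive_logs_safer(filename):
        # Safer pattern: pass an argument list and avoid the shell,
        # so the filename is treated as a single literal argument.
        subprocess.run(["tar", "-czf", "backup.tar.gz", filename], check=True)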

Then the other side is, how do we maintain and build trust between security teams and development teams at various companies? At least from what I've seen, a number of security teams are a bit hesitant to introduce significant new processes or tools, because if a tool is very noisy, or just makes developers angry, that damages the relationship, and it becomes difficult to get the organizational action you need to improve security posture. We're very cognizant of that as well. So there's trust between us and customers using our tool in CI, as well as, how do we help security teams build trust with the developers they support?

How to Establish Trust with End Users

Westelius: I so much agree with that. I do think that, to some extent, the trust between the security team and the engineering team might be similar to what we see when we're introducing processes or controls for our employees, or the same negotiation and compromise as when we're introducing new controls that our end users face. I think there are a lot of similarities there. At a very high level, how do you go about establishing trust with your end users or your intended customers, the ones who would be impacted by any friction you put in place or a change that affects them?

Nadeau: In the privacy space, I think in terms of establishing trust with individuals, notice and consent, the transparency piece comes up quite often. I do think that that plays an important role, making sure individuals know how we're processing their data, so we don't do anything that's unpredictable to them, and lose that trust. I think also, thoughtful word choice and expectation setting is so important. If you set the expectation that somebody's data is 100% anonymous, when in reality, you mean you've done a really great job deidentifying this data and reidentification risks are low but present, that's a very different statement. I think the latter sets expectations much more appropriately. I think that honesty can help build trust. Of course, consent plays a role in privacy and user trust.

I think there's also a lot of privacy work that can be done behind the scenes from an engineering perspective, without asking the customer to accept risks that you could avoid or mitigate entirely by the way you design your systems. If individuals know you're doing that work, considering their privacy as you're building your systems, even if that piece is less public, I think that's so crucial. I've been really fascinated watching the privacy space evolve over the last several years. Laws and policies have always been so crucial to privacy, and I think we're also seeing privacy engineering develop more as a field. That's so important, so we can make sure we're actually building our systems in ways that reflect the statements we're making. There are so many ways to technically build in privacy and help achieve that trust.

One way that I'll just touch on, which I think is so foundational, is really knowing where individuals' data is. You can't protect individuals' data and build that trust if you don't know that the data exists, or where it exists. It sounds so basic, but I remember a Gemalto study from years ago showing that 46% of executives believed their companies didn't even know where sensitive data was. Just having that understanding of, and that precaution around, the data you're handling, so that you can build the appropriate protections on top of it, is so important in legitimizing a company and establishing that trust.
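As a minimal sketch of what "knowing where your data is" can look like in practice (the dataset names, classifications, and retention values below are hypothetical), a machine-readable data inventory lets you systematically find every field that needs protection:

    # Hypothetical data inventory: map each dataset field to a
    # classification and retention period so protections (encryption,
    # access control, deletion) can be applied systematically.
    DATA_INVENTORY = {
        "customers.email": {"classification": "pii", "retention_days": 365},
        "customers.viewing_history": {"classification": "sensitive", "retention_days": 90},
        "metrics.page_load_ms": {"classification": "public", "retention_days": 30},
    }

    def fields_requiring_protection(inventory):
        # Surface every field carrying personal or sensitive data so the
        # privacy team can verify that controls actually exist for it.
        return sorted(field for field, meta in inventory.items()
                      if meta["classification"] in ("pii", "sensitive"))

    print(fields_requiring_protection(DATA_INVENTORY))
    # ['customers.email', 'customers.viewing_history']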

Westelius: Does providing transparency and insight into why you're putting certain controls in place play a large part in your world as well? How would you go about establishing trust with your users in your area of expertise?

Olsen: Interestingly, Ellen and I deal with the same end user. We're dealing with the consumers of our products and platforms. I do think that transparency is an important aspect when you're handling fraud. The tricky balance, however, is that there's only so much information you can share about a given fraud attack, or what's occurring in a given situation. A lot of building trust in the world of fraud is really about helping educate consumers on the responsibility both they and we carry. Not all fraud can be stopped by a company alone. We can't put in place controls and mitigations that would stop 100% of the actions that have negative effects on our customers. It's important that customers understand what they can do to take the right actions, and also understand our commitment to doing right by them to the best of our ability.

We actually do a lot of communication directly with our customers to understand their interpretation of a fraud incident. If your account is taken over, how does that make you feel? That helps us balance our controls, because there are several means by which you can address fraud: you can handle things in very blunt ways or in less blunt ways, and knowing where to turn that dial is really about understanding both the risk and the perception of the risk. What does it actually do to your brand, to the overall comfort and trust in your company as a whole? Those things aren't easily measured, because in a given instance they may not seem that valuable. People may think, what's the big deal? It's a Netflix account, start another one. Who cares? But people have a lot of passion. They've been curating their browsing history for a long time. They've got all their things in all the right spots. It's important to be able to manage through a lot of that. Transparency, communication, education: these are all things that lay the foundation for a strategy that builds trust broadly.

Westelius: I could not agree more. I think it's interesting: we've seen a bit of a shift in the past 10 years between the perception that something is secure because it has all these controls, versus the need for a frictionless experience, and people's understanding of that risk.

Cassie, in your world, how do you establish trust with the employees or the company that you're working with? What does that look like for you?

Clark: I do want to respond to what Stephanie said, because I love that approach. It's such an empathetic approach. I will focus on internal employees for the remainder of the panel, because while I do some work with customers, most awareness people focus purely internally. There's something to how you communicate a message. If we were to go to a customer and say, "You can help yourself be secure, all you have to do is set up multi-factor authentication, and make sure you SSO in, and then have this thing running in the background," they would have no idea what we meant, and it would drop their trust. They would slowly move on and try to find some competitor who might handle them better, regardless of whether that competitor was more secure, because they wouldn't understand that.

Going back to the internal side, I'd echo everything the others have said: having really clear communications, very clear guidance, and being very transparent about the things we can share, because there will always be things that, for whatever reason, we can't share. We would never share all of the inner workings of an incident, for example, because it's a need-to-know type of situation. Having that is really important, and so is having really clear processes. All of my work is informed by human behavior and the way we operate as people, whether through our biology or our social learning over time. One of the things we know is that humans really don't like change. Let's say we're changing a process: if we don't have a very clear understanding of what action we want taken and what we want the end goal to be, and if we make four changes to get to the overall change instead of one, that will drop user trust. The more we can simplify things for people and lay out a very clear process they can follow, whether that's documentation or walking through it with them, the better partners we can be.

I think another really key thing is to own it when we make a mistake, or when we're not making things simple for people, or whatever our side of the accountability is. Equally important is listening to feedback. Those two go hand in hand: owning and being accountable, and listening to feedback. I love that you keep saying frictionless, because actually, behaviorally, friction is not always bad. It's something I've really had to work through with some people when they say they want a frictionless experience. A lot of behavioral science research shows that some friction actually helps people maintain a long-term understanding of the positive impact a change will have on them, and helps them make a long-term commitment to that change. So there's some friction we do want to keep. Then there are areas where we may want to add friction to deter people from taking a step. Thinking about how trust plays a role in that is actually part of how you decide whether you do what's called a nudge, pushing people in the right direction, or a sludge, adding friction to deter something. Thinking about that really carefully is important, because obviously trust is very easily broken and very hard won. It's a fun game to play, in a way.

Westelius: It's an interesting balance, too, between how you think about nudging and sludging versus direct, open, and transparent communication around the reasoning behind putting a certain control in place. Thus far, a lot of the theme here is transparency and communication with our end users. Clint, as you're thinking about establishing trust within your context, how does that play into your decision making?

Gibler: I've generally thought of friction as, how do we discourage the bad paths we don't want people to take? What Cassie just said about having friction be purposeful, as a way of maintaining habits or just helping things sink in, I found really insightful.

From my perspective, taking on the persona of a product security engineer or an application security engineer who is supporting engineering teams, I think it comes down ultimately to understanding their worldview. How are they generally writing code? What are their workflows? Basically, how can we insert these security controls and mechanisms in a way that uses the systems they already use, rather than making them go to a separate dashboard? To be a little more concrete: if you're writing code on GitHub, and you're doing pull requests before merging things in, you might want security or other feedback as a pull request comment, rather than an alert in a separate system that they need to create a separate account for, log into separately, and maintain separate credentials for. Ideally it's SSO or something, but really, how can we give as much security-related feedback as possible to the people we're supporting, in the systems and processes they already use?
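As a rough sketch of that pattern (the repository, finding text, and token variable are hypothetical; on GitHub, pull request comments go through the issues comments endpoint), a CI job might push a scanner finding back onto the pull request like this:

    import os
    import requests

    def post_security_feedback(owner, repo, pr_number, finding):
        # Post a finding as a pull request comment so developers see it
        # in the workflow they already use, not a separate dashboard.
        url = f"https://api.github.com/repos/{owner}/{repo}/issues/{pr_number}/comments"
        resp = requests.post(
            url,
            headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
            json={"body": f"Security finding: {finding}"},
            timeout=10,
        )
        resp.raise_for_status()

    # Hypothetical usage inside a CI job:
    # post_security_feedback("acme", "webapp", 42,
    #     "subprocess call with shell=True; prefer an argument list")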

Another thing we've found pretty effective is selling the security value from the perspective of, here's something you already care about. For example, developers ideally care about security, but perhaps even more about code quality, robustness, correctness, or performance: things their benchmarks are often measured against. Can we show how a security tool or process can actually help all of these metrics or performance characteristics they already care about? It's like, you're going to be more secure, but also there are these five other wins you get for free. They're like, this is nice.

Then, to mirror points made by a number of other people, I think building in feedback mechanisms is super important. If you're adding friction, adding something new, having an easy, streamlined way for people to say, "This is annoying," or, "You just told me something but I don't understand what it means," is a nice way to gather feedback, so you can learn that maybe your messaging needs to be different. Or A/B testing copy, for example, on "This is what's wrong, and this is how you should fix it." I think Laksh, who is head of product at PayPal, gave an AppSec USA talk a few years ago, where they built a continuous code scanning platform internally. They actually A/B tested various types of messaging for "This is what's wrong, and this is how to fix it," including an educational video. In one case, the message was, "Please watch this video." In the second case, it was, "Watch this video. It's only 3 minutes and 27 seconds." They found that the action rate was something like 60% higher, just from adding a few words. I think the fundamental principle is, how can we take in feedback, iterate, and improve? Small tweaks will probably make things surprisingly more effective.
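A minimal sketch of that kind of experiment (the variants echo the talk's example, but the assignment scheme and telemetry format here are invented for illustration):

    import hashlib

    VARIANTS = {
        "a": "Please watch this video.",
        "b": "Watch this video. It's only 3 minutes and 27 seconds.",
    }

    def assign_variant(user_id):
        # Deterministic bucketing: hashing the user ID means each user
        # always sees the same copy across sessions.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
        return "a" if bucket == 0 else "b"

    def action_rates(events):
        # events: list of (variant, acted) tuples from telemetry, where
        # acted is True if the user took the requested action.
        rates = {}
        for variant in VARIANTS:
            shown = [acted for v, acted in events if v == variant]
            rates[variant] = sum(shown) / len(shown) if shown else 0.0
        return rates

    print(assign_variant("user-123"))
    print(action_rates([("a", True), ("a", False), ("b", True), ("b", True)]))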

Westelius: Really important to establish what to expect, I think, and to offer that feedback mechanism. I would absolutely have pressed play on the one that told me how long the video was going to be. Otherwise, I don't know if I could commit.

The Role Security Controls Play in Users' Trust

What role do security controls play in users' trust in us? I remember a time when it felt like telling people, you're going to have to get a certificate, you're going to have to add all these security features in order to access your data or your accounts, was what established a level of trust with end users. Now we're moving toward this more frictionless experience. Do additional security measures, like being required to badge into buildings, or higher levels of friction or expectation when I'm logging into my accounts, help establish that trust? From your perspective, where is the balance between that trust, that sense of safety, and higher levels of usability or a frictionless experience?

Nadeau: I think this question is a perfect PSA for bringing privacy engineering teams into your system design discussions very early. The earlier we start thinking about what that usable privacy piece could look like, which does have an impact on user trust, the better. A big part of this is recognizing what risks we're actually working to manage. For any system design I look at, I might have any number of privacy remediations I could throw at it, but at what point are there diminishing returns? What is the risk we're really trying to manage here? What are the controls that actually make sense for this system, so that, first of all, we're not over-engineering it?

I'll go back to my comment earlier about not always burdening the customer if there are things we can just do to protect individuals' privacy and build that trust. For instance, let's say there's a team that is looking for information about repeat customers. It could be really easy to just give them access to a customer database, but just because you can do something doesn't mean you should. What does the team actually need? What are the actual goals? Are there ways you can deidentify that data? Can you tokenize the customer ID, so that you're propagating that token instead of real, identifiable customer information? All of these things mean you also have to burden the individual less with, here are all the things we're doing with your identifiable information.
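As a minimal sketch of that tokenization idea (key handling is elided; the environment variable and ID format are hypothetical), a keyed hash yields a stable pseudonym, so repeat-customer analysis still works without exposing the real ID:

    import hashlib
    import hmac
    import os

    # Illustration only: in practice the key would come from a secrets
    # manager with rotation, not a raw environment variable or default.
    TOKEN_KEY = os.environ.get("CUSTOMER_ID_TOKEN_KEY", "dev-only-key").encode()

    def tokenize_customer_id(customer_id: str) -> str:
        # HMAC-SHA256 gives a deterministic pseudonym: the same customer
        # always maps to the same token, so joins and repeat-customer
        # counts still work, but the token cannot be reversed to the
        # real ID without the key.
        return hmac.new(TOKEN_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

    print(tokenize_customer_id("cust-12345"))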

I also just want to flag that I think there's a lot of opportunity for innovation here in the privacy space. We are seeing companies move beyond lengthy privacy policies that somebody might never read. Those are an important part of the puzzle, but there's also opportunity for just-in-time notice about data collection, using sound or light and other cues: all of these ways to build that trust in new and innovative ways.

Westelius: I think that's super interesting, especially with everything that is currently happening around data privacy and user data. I think a lot of people now understand that better and have a lot more concern about what might happen to their personal information. I'm very interested to see where that innovation goes.

Stephanie, from your perspective, how do friction, transparency, and communication play into establishing trust within your space?

Olsen: It's really interesting, having been in this space for a while. Years ago, when we tried to introduce some friction, it was not well received by consumers. There was a lot of questioning of why: what's happening here? Why do I need to provide this additional piece of information to you? I think a little bit of that was, of course, implementation. I think there was also the perception around, what does this security control give me in terms of additional benefits? Thankfully, there has been a lot of development in more seamless security controls that allow users to engage with products in really independent ways without feeling that it's overbearing. In some ways, in the fraud world, I actually feel like some consumers now expect certain levels of control.

I think the important aspects to factor in are, again, the perceived value of whatever it is you're protecting and how consumers will react to that piece of friction, along with the innovation or smartness you actually apply to the control itself. Using something that's risk based, rather than a flat, everybody-has-to-have-this control that we just apply to everyone, is certainly a much smarter way. Because then, if you're traveling or using a new device, if you're in a hotel room and you hit some piece of friction, you think: they don't recognize me, they don't know it's me. People have a little more understanding at that point.
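A toy sketch of that risk-based idea (the signals, weights, and threshold below are invented for illustration; production systems use many more signals, often with a trained model rather than hand-set weights): only add friction when the login looks unfamiliar.

    from dataclasses import dataclass

    @dataclass
    class LoginContext:
        known_device: bool
        usual_country: bool
        recent_password_change: bool

    def requires_step_up(ctx: LoginContext) -> bool:
        # Score a few illustrative risk signals and only trigger extra
        # friction (e.g. an MFA challenge) above a threshold.
        risk = 0
        if not ctx.known_device:
            risk += 2
        if not ctx.usual_country:
            risk += 2
        if ctx.recent_password_change:
            risk += 1
        return risk >= 3  # threshold chosen arbitrarily for the sketch

    # Familiar device and location: no extra friction.
    print(requires_step_up(LoginContext(True, True, False)))    # False
    # New device in a new country (the hotel-room case): step up.
    print(requires_step_up(LoginContext(False, False, False)))  # True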

At the same time, having those baseline controls in place marries the trust and expectations of consumers, so that if there is a bad incident, you don't get the consumer reaction of: you don't even have 2FA on your site, you don't have MFA. You didn't ask me for anything. You didn't give me the ability to protect my information, my account, any of these things. That, again, turns around and becomes more your problem than mine. I think those different factors are really important. Friction has a negative connotation; even the word feels like, friction, who likes that? But I think it has an important role, even in the future.

To Cassie's point earlier, in this complex world of all of these different products, and all of these different adversaries out there trying to get at everything, I don't think we'll ever be able to create a seamless experience where no one hits any friction at any time. At the same time, it's about balancing risk and the customer experience, and making sure there are components that pull it all together, so customers understand why you're doing it, what they need to do, and how to enhance that experience from their own side: what can they share with us to make that experience better? Lots of companies design by saying, there's this bad thing happening, now let's make handling it the best experience possible.

There's also the flip side of that, which is that there are lots of very good experiences that can happen. You can use that good side of information and security controls to create seamless experiences, so that users can actually tell when bad things are occurring as part of their experience. It's super seamless if I'm always using the same phone, and I have 2FA and all of these controls in place, and everything's good, because I'm on the happy path. Then if something jars, like a bump in the road, you can see there's a differentiation. It allows users to differentiate their experience, which I think fundamentally builds trust in the company's capabilities.

Westelius: I love that approach. It is a lot of heavy lifting on companies, though, if we're going to create these dynamic access patterns based on all of the different users, their profiles, and their history. That might eventually even turn into a privacy situation, or something we need to be considerate of. It also plays together: that combination of being flexible in the level of controls we put in place, versus the identity or the scenario in which our users show up. With a lot of work, clearly, and a lot of investment. Cassie, from your perspective, when dealing with behaviors and humans, what makes people feel safer when it comes to security?

Clark: I think some of it is around transparent messaging. I don't necessarily even mean, "Hello, company, we're going to be putting a security control in place." If we were looking at customers, it might be infusing the idea that our customers' trust in us includes the fact that we will protect them, and we will help them protect themselves; just making that part of the overall message, even when we're not talking about security. Having that as the baseline expectation is part of how we start to build some of that, regardless of who your audience is. It's also really crucial that any time we make a change to a security control, put something in place, or update something, we want people to understand why. I love the people I have worked with before; many of them have been really talented, really brilliant. But when they write an announcement saying they're about to make this change or update, they often leave out: why is this happening? How will this affect you? That impact statement, even just one sentence, can be so crucial to helping people go, ok. That's all you need them to do. You don't need them to be excited about it. You don't need them to care beyond that. You just need them to do it. Having that impact statement is probably the most important thing you could put in that entire announcement.

The other thing I always really like to make clear, and this gets into behaviors, specifically motivation and habits, is that you don't necessarily need to have humans be responsible for every choice of theirs. What I mean by that is, there are plenty of security controls we put into place that end users never actually interface with or deal with. That's not a bad thing; it's just about being mindful of when we want to implement those controls behind the scenes. For example, phishing is obviously a very large human risk, but we would never remove someone's access to email just because of that risk. We would set up a whole bunch of controls, like multi-factor authentication or enforcing the use of a password manager: things that simplify both their ability to use the system and our ability to have security controls in place without impacting them too much. Those are maybe not the best examples, because there is slight friction there, but generally it's not huge, or it's something they were indoctrinated into early on. So I think there are a couple of different ways to think about user trust: when it's applicable for us to just take the reins and guide people through a process, and when it's applicable for us to literally carve out a process, walk them through it, and hold that user trust much more carefully.

Gibler: We're a startup, so we're trying to move fast and deliver new things to users as quickly as possible. From a security best-practice point of view, some protections we might want to implement internally can be friction-full unless you have the time to build them right. There is a lot of business-case tradeoff, which fundamentally involves the security and privacy team working with the senior executive leadership team to say: this is the current state. We could do these things, which will give us these risk reductions. However, it's going to cause these frictions unless we put in this amount of person-time to build some nice automated processes. Ultimately, some of this is a business decision in terms of how much risk you can reduce by when, and what person-time it takes to build. As you mature as an organization, you'll probably make a different decision.

Key Takeaways

Westelius: I think some of the takeaways here are, clearly, open and transparent communication to establish trust with our users; utilizing security controls, and visibility into them, as a means of helping users understand why we make certain decisions or put certain controls in place; and being thoughtful about which controls we use, being a little more flexible or dynamic in how we present them, and not treating everybody the same way.

 


 

Recorded at:

Oct 21, 2022
