Panel: Secure Systems


Summary

The panelists discuss security for the software supply chain and the measurement of software security risk.

Bio

Shannon Morrison is Senior Security Engineer - Detection Engineering @Netflix. Michael Fagan is Computer Scientist @NIST. Matt Jones is Vice President, Global Engineering @WindRiver.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.

Transcript

Potoczny-Jones: We have Shannon Morrison. Shannon is a senior security engineer on the detection engineering team at Netflix. Then we have Michael Fagan. He is a computer scientist at NIST, the National Institute of Standards and Technology, working with the cybersecurity for IoT program. Matt Jones is Vice President of Global Engineering, responsible for the research and development teams at Wind River. I'm Isaac Potoczny-Jones, the founder and CEO of Tozny, with a focus on using encryption for privacy.

Work Overview, Industry Trends, and Recap

I'll pick on Matt to go first with an overview of your work, what's important to you, trends you're seeing in the industry, whatever you see fit to go over in your part of the talk.

Jones: I'm not sure how many people here would know Wind River. We have a 40-year history of creating these mission-critical systems. In reality, I've only been with the company a few years. What I've seen in my experience in automotive, in aerospace, in rail, is that we've created more complexity in these future devices, and more complexity in the supply chain through open source and through challenges in understanding the pedigree of the software that we rely on for our everyday tasks as users. We've found more need to aid our customers' developers in their mission to make security super simple. It's not as though you can have a security team that does everything at this point, because really security touches every line of code, every API call, every integration that we do.

On a day-to-day basis, I spend my time wrangling open source communities in GENIVI, with Yocto, Linux, and these other places, and collaborating with our customers. I think anybody who knows customer-supplier relationships has an understanding of what collaborating can mean at times. As well as that, I'm really trying to focus on the future of developer experience, especially when it comes to these all-critical secure elements where security is no longer an afterthought, because it's essential. You can have a secure system, like an ATM, that's not safety critical. There's no way you could have a connected autonomous car traveling at 55 miles an hour down a street with security vulnerabilities. That's the future that we're heading into.

Fagan: My name is Michael Fagan. I'm a computer scientist with the cybersecurity for IoT program. Our work has been focused on a specific cut of the IoT cybersecurity problem. One major perspective we deal with, by necessity, is the idea that IoT introduces a bunch of new risks to a system as you use it. We help the federal government through our guidance directly, and our guidance is also useful to private sector companies, both as customers and, in some of our more recent work, as manufacturers. From the federal government side, even there, security has to be simple. Our work is trying to cut through a lot of the complexity: the way processes are set up to deal with a new device being introduced to a system makes it very complicated to mitigate new risks, or even understand them. The knee-jerk reaction of a lot of IT professionals, even in the government, that I hear anecdotally is, no, we don't want to connect that. An immediate no to any IoT device in particular, partly out of this fear of not being able to truly understand the risks. That introduces liability to the organization, particularly the IT part of the house, who are expected to keep security top-notch.

The other cut that we look at, and that's not to say these are the only problems or considerations for IoT, just where our work has been focused recently, is devices as components of larger systems. A lot of discussion of IoT cybersecurity is at that system level, completely legitimately. How a smart city can be used against the populace, or how internet-connected pipelines can go awry, are certainly large-scale challenges for the nation, for the world, for all users of the internet who are part of our economy. At the same time, there's also this more on-the-ground, everyday problem of even a home consumer understanding what they are connecting. Other parts of the government in particular say: think, then connect, think before you connect. It's a campaign to get folks to be more aware of the cybersecurity of what they're connecting to their home networks and enterprise networks. We found that with IoT, even determining that is very difficult, particularly for home users. That's one problem, one track. Where we've been really focused most recently is the enterprise space: what information, and really what else, is required from a manufacturer? Then what capabilities are required from an end device, an IoT device, to support the controls and the cybersecurity that might exist at a higher level?

Morrison: I work on the detection engineering team at Netflix. I build detections, and retire old ones, to reduce risk. I spoke about some of the work we're doing to model application risk. That's useful to my team for detection engineering to figure out what detections to build. We like to prioritize detections that will have the most impact, but since detections are about finding something after it already happened, it can be hard to figure out which ones are the most important. We use our application risk model to see which applications are carrying the most risk. That can happen because of the type of data that an application can access, or because some of our controls can't be implemented.

I think something I heard Matt and Mike both mention is keeping security simple. At Netflix, we have this concept of a paved road. That's where it's meant to be easier, and secure by default, for users to deploy an application. In some places, these paved road controls that we apply can't be implemented. Gaps where controls don't work can be good places for detections: if we can't prevent a loss, then we can detect it early and at least reduce the magnitude. In addition to a single app that carries a lot of risk, our model also gives us good insight into which controls reduce risk the most. We have an asset inventory. From that, we can see which risks are present and which controls are applied. Then we can see where controls are missing. This can help us prioritize detections around missing controls. Instead of one big app carrying one big risk, we have many small events that could add up. That helps us prioritize that way.
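A minimal sketch of how that prioritization could work, given an asset inventory along the lines Shannon describes. The control names, sensitivity weights, and classes here are hypothetical, not Netflix's actual model:

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical paved-road controls an app is expected to have.
REQUIRED_CONTROLS = {"mTLS", "auth_proxy", "secrets_manager", "patched_base_image"}

@dataclass
class Asset:
    name: str
    data_sensitivity: int  # e.g. 1 (public) .. 5 (highly sensitive)
    applied_controls: set = field(default_factory=set)

def detection_priorities(inventory: list[Asset]) -> Counter:
    """Weight each missing control by the sensitivity of the assets lacking it,
    so detections covering the biggest aggregate gap come first."""
    gaps = Counter()
    for asset in inventory:
        for control in REQUIRED_CONTROLS - asset.applied_controls:
            gaps[control] += asset.data_sensitivity
    return gaps

inventory = [
    Asset("billing", 5, {"mTLS", "auth_proxy"}),
    Asset("playback-stats", 2, {"mTLS"}),
]
for control, weight in detection_priorities(inventory).most_common():
    print(f"build detection around missing control: {control} (weight {weight})")
```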

In addition to detections about applications, I'm working with my colleague Martin on building a library that detects sensitive data in plain text. We're implementing this library in Java and Python, and it's a way for us to prevent sensitive data from being logged. It can also detect data spills. Sometimes we don't have control over the data that we get, because somebody else is sending it to us; this way, we can detect sensitive data before it's in our systems. Then there's also detecting sensitive data in databases automatically. That way we can figure out where we have sensitive data: instead of just relying on user-provided schemas, which is pretty manual, we can automate some of that data detection. We're hoping to open source this library, maybe later this year or early next year. If we're able to, we should have a tech blog post, and the library will be available to anyone who is interested in contributing features, sensitive data detections, or improvements. Hopefully we'll have that available later this year.
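As a rough illustration of what plain-text sensitive data detection looks like, here's a hedged sketch in Python, one of the two languages mentioned. The Netflix library isn't public yet, so these patterns and names are purely illustrative:

```python
import re

# Illustrative detectors only; a production library would have many more,
# plus validation logic to cut false positives.
DETECTORS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_sensitive(text: str) -> list[tuple[str, str]]:
    """Return (detector_name, match) pairs for every hit in the text."""
    hits = []
    for name, pattern in DETECTORS.items():
        hits.extend((name, m.group()) for m in pattern.finditer(text))
    return hits

# e.g. scan a log line before it is written
log_line = "user=jane.doe@example.com key=AKIAABCDEFGHIJKLMNOP"
for detector, match in find_sensitive(log_line):
    print(f"possible {detector} in log output: {match!r}")
```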

Potoczny-Jones: I'm Isaac. My background is really in cybersecurity, particularly around the use of cryptography at the application layer, as opposed to just the infrastructure layer, so using cryptography in the application. That can be related to identity management. It can also be about using cryptography for access control. Access control, meaning: I have the key, you have the key, no one else has the key. That's how we control what happens to a piece of data. That works in concert with, not in contrast to, traditional access control, which is just rules on a server somewhere. Typical access control says, is this user allowed to access this data? That might be based on their role. That might be based on their username. Fine, here you go. That's a rule on the server.

When people break access control, the attacker has access to the data. The idea of using cryptography for access control is that in addition to breaking the server rules and tricking the server into giving them something, they also have to have captured the keys somehow. We can make that harder or easier. You can derive a key from a password, and there's a lot of merit in that. You can use it for logging into a system. You can use it on the web, and it's great. You can generate a key in a piece of hardware local to your system that no one else is ever going to get unless they break into your room. There's a whole range of ways you can help enforce and do good things with access control.
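For instance, here's a minimal sketch of deriving a key from a password using only Python's standard library. The salt size and iteration count are illustrative assumptions, not a recommendation:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Derive a 32-byte key from a password with PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)  # store the salt alongside the ciphertext
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return key, salt

key, salt = derive_key("correct horse battery staple")
print(key.hex())
```

The derived key can then feed a symmetric cipher, so decrypting the data requires knowing the password, not just convincing a server to hand the data over.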

For us, it really connects strongly with identity and access management. The way those things work together is basically: now that we know who a user is, how do we know what they're allowed to do? How do we know what their public key is, for instance, so that we can share data between users, and between systems and users, and that kind of thing?
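Public-key sharing of that kind can be sketched in a few lines, assuming the PyNaCl library (pip install pynacl). This illustrates the pattern, not Tozny's actual implementation:

```python
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key; only Bob's private key can decrypt.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"record shared with bob only")

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"record shared with bob only"
```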

I've done a lot of open source and programming languages work over the years, though not all of that is relevant here. I think we have really interesting overlap among the panelists in the areas of IoT and critical systems, risk management, and machine learning.

Self-Sovereign Identity Initiative

What are your thoughts on the self-sovereign identity initiative? Would it also help to solve internal enterprise security challenges, reduce fraud, and optimize onboarding and entitlement processes?

Jones: Self-sovereign, just to level set, is where the person who creates the data has rights over it. In the past, in previous missions at other companies, I've looked at this in some ways, especially when it comes to the EU GDPR. The way I typically describe it, usually to my boss when I'm trying to request justification to go off and do it, is: how could Shannon have rights to control the data that she's generating? Maybe you would implement that where Shannon sends some data to Michael, and it's encrypted with Shannon's key, and Michael gets access to it for a given period of time. At any point, Shannon could recall that, and Michael can no longer get access. We can think about that as humans, about the data we generate through Netflix or anything else. Nobody else should ever see my Netflix viewing habits; Shannon's been told that many times before, I'm sure. How does this extend to what I'm looking at now, mission-critical systems? Or maybe it's not human data as such: I'm driving down the street in an autonomous car that I own, and Isaac's traveling in an autonomous Uber. We want them to be able to communicate, to share data for a period of time, but I don't want Isaac's Uber to know who I am beyond that block. How can we start to use SSI not necessarily to constrain data, but to start thinking about how it could be anonymized? How can I retain ownership? How can I stop people learning what I don't want them to learn?
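As a toy sketch of the revocable, time-limited sharing Matt describes, here is one hypothetical shape for it, assuming the cryptography package: data stays encrypted under the owner's key, and a grant service controls who can fetch that key and for how long. Real SSI systems are far more involved; this only illustrates the idea.

```python
import time
from cryptography.fernet import Fernet

class GrantService:
    """Holds data keys and hands them out only while a grant is live."""
    def __init__(self):
        self._grants = {}  # (owner, grantee) -> (key, expires_at)

    def grant(self, owner, grantee, key, ttl_seconds):
        self._grants[(owner, grantee)] = (key, time.time() + ttl_seconds)

    def revoke(self, owner, grantee):
        self._grants.pop((owner, grantee), None)

    def fetch_key(self, owner, grantee):
        key, expires_at = self._grants.get((owner, grantee), (None, 0.0))
        if key is None or time.time() > expires_at:
            raise PermissionError("grant missing, revoked, or expired")
        return key

service = GrantService()

# Shannon encrypts her data and grants Michael access for an hour.
shannon_key = Fernet.generate_key()
ciphertext = Fernet(shannon_key).encrypt(b"viewing history")
service.grant("shannon", "michael", shannon_key, ttl_seconds=3600)

# Michael can decrypt while the grant is live...
print(Fernet(service.fetch_key("shannon", "michael")).decrypt(ciphertext))

# ...and loses access the moment Shannon revokes.
service.revoke("shannon", "michael")
```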

Fagan: I'll fill in from the IoT perspective, not knowing a ton about self-sovereign identity specifically, other than what Matt just described and a little bit I knew before. In IoT in general, identity of devices is a challenge. One of the big reasons is that there are so many of them. Another reason, or at least a complication for monitoring and identifying devices, is the sheer breadth of companies that create IoT devices relative to laptops, servers, smartphones, and so on. That creates a challenge on the customer side, where a customer in a smart city is using large numbers of devices, well sourced from a supply chain standpoint in that case, but still many devices. They have to manage, in the layers you just described, Matt, the complexities of privacy and cybersecurity for many different kinds of users, in a situation where ownership and use of devices become mutual and complex.

A good example our group used for a while, in trying to think through some of these issues and talk them out, and an applicable one here, is an IoT device that may be carried over from one user to the next, maybe in an apartment. A smart fridge in a condominium or an apartment that somebody uses for a long time accumulates a large amount of data about them. Then they move on, but they don't take the fridge with them; it's probably just offered as part of the apartment. It's up to the apartment manager to reprovision it. All the different ins and outs of IoT reprovisioning are an interesting edge case, especially when you imagine that the fridge could be tied into the broader home or apartment network, down to the door lock itself. The fridge knows when you're home, so on and so forth.

All that data could be accumulated broadly, even outside of the single IoT device. How could I, as the person moving out, feel confident that, one, the apartment manager wasn't able to access that data improperly, as per our lease or whatever I signed in the beginning? Then also, the next tenant may be a tech-savvy individual, and there could be some easy ways around whatever walls are there. I doubt that it's going to be as simple as it still saying, "Hello, Michael," when I walk in. How gone is that data? How much control do I have? New technologies that present novel solutions to issues like these may be very valuable for IoT. That's one area our group is always looking to. It doesn't have to be an IoT technology to benefit IoT in the challenges we see.

Jones: I really liked your point about the fridge. For a long time, back when we could travel, I've been wondering about the Hertz rental cars, the Avis rental cars. I'm seeing these lines of Hyundai and Toyota and all these different cars that I know full well are capturing data on a driver. It does entertain me every time I get into one of those cars to look at the names of the Bluetooth pairings and go through the contacts that have never been cleared. Then I think as well that, in a way, all of these cars have got huge amounts of open source. As a temporary user of that car, I almost have a right to access all of that open source. I could write to Honda and say, "Show me the code. Show me what you're doing. What are you doing with my data?" They're not set up for that. All of these car companies are really focused, Michael, just as you said about the fridge, on a two-and-a-half-ton thing that gets sold once and has one user.

Think of the converse as well. We bought a new Hyundai last year; apparently nobody was buying cars, so we got a fantastic deal. If I go to another one in three years' time, how can I port my data? I don't want the thing to be retrained. I want the same experience I had on the last day of my last car on the first day of my new car. How can I migrate that data securely? How can I even see it? I have that ability with Google, where you go to the website, you wait three hours, and it gives you this giant tarball of all the stuff it's ever collected on you. How can I get that from my fridge, from my smart TV, from Apple? I don't think we've got the answers to that yet.

Morrison: I think that's really interesting. We don't want people seeing other people's viewing history. Yes, our privacy team has been really focused on protecting data. We do have also a process where you can go to Netflix and get all of your data that we have about you. I think it'd be really interesting to see that from more companies. I haven't rented a car in so long. I hadn't thought about that rental car data. I really like that scenario. I think that'd be pretty horrifying. Hopefully they don't send my driving record over to the insurance company or anything. That would not be good for me. User privacy has been super important and a big area that we focus on, for sure.

Potoczny-Jones: I'll offer a parting shot on self-sovereign identity, which I do like and believe in. I think that it does cover attributes about the user. A lot of times, though, what we really want from identity is: what's this user allowed to do, and who says so? From an enterprise perspective, especially as the question alludes to, the enterprise usually wants to decide those answers itself. In a lot of ways, the user ID could be the number 12345. I don't care who they are; I care, if they log into this server, are they going to break it? Are they going to hack into it? Are they going to reboot it? What's going to happen when they do that? When you're talking about self-sovereign identity, where you can take it around with you across different organizations, I think there is benefit in all the areas the question mentioned, reducing fraud and so on. But each of those organizations will probably come from the standpoint of saying, this is what this user is allowed to do, and that's what's important from my perspective. So users are probably somewhat limited in the ability to really take those rights and attributes across organizational boundaries.

Can Non-Data Scientists Leverage Anomaly Detection?

Do you need to be a machine learning expert or a data scientist to get benefits from anomaly detection? I think some of your work is in this area. Can software developers use this technique, either in securing their systems or in other aspects of the software development lifecycle, without being a full-blown data scientist?

Morrison: I was a data scientist at my last job. In my opinion, you don't have to be a data scientist to contribute to anomaly detection models. One of the models I worked on at my last job was looking at network anomalies in proxy logs. Definitely the most important person on our project was not one of the data scientists; it was our security engineer, who was an expert on the data. When you're doing anomaly detection, there can be a lot of noise, and finding something that's anomalous and important can be difficult. It's easy to find things in the data that are just different, and figuring out which ones matter is not something the data scientists can really help you with. Having somebody who's an expert in the data, someone who knows how the process should work, who knows how the users use the system, or at least how they should be using it, I think that's really important. So not only can non-data scientists contribute, I think they definitely should.

If you're interested in getting started, Facebook has an open source library called Prophet. There are Python and R options, and it does some nice outlier detection on time series data. That's a nice way to get started, at least if you're looking at data that's structured over time, where you can see outliers, places where things are different. That'd be easy for a software developer to get started with. I think then having that expertise around the data and what's happening is really important. Michael, maybe you do a little bit with machine learning as well, has that been your experience, or am I off track here?
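For reference, a minimal sketch of the kind of time-series outlier detection Shannon mentions, assuming the prophet package (pip install prophet); the input file and interval width are illustrative:

```python
import pandas as pd
from prophet import Prophet

# Prophet expects a frame with columns ds (timestamp) and y (value),
# e.g. requests per hour pulled from a proxy log.
df = pd.read_csv("requests_per_hour.csv")  # hypothetical input
df["ds"] = pd.to_datetime(df["ds"])

model = Prophet(interval_width=0.99)  # wide interval: flag only strong outliers
model.fit(df)

forecast = model.predict(df[["ds"]])
merged = df.merge(forecast[["ds", "yhat_lower", "yhat_upper"]], on="ds")

# Points outside the prediction interval are candidate anomalies.
outliers = merged[(merged.y < merged.yhat_lower) | (merged.y > merged.yhat_upper)]
print(outliers[["ds", "y"]])
```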

Fagan: I have some experience around machine learning, and NIST has work going on in big data and AI as well; there's been growing traction around AI work on the other side of the house at NIST, so we're aware of it. There are of course connections with IoT, the biggest one being IoT piping big data, a large amount of it, into machine learning systems. What I think you're describing, Shannon, is machine learning as more of a tool: some machine learning algorithms, particularly the more mature ones, can be used by a software developer to find a solution to some problem they may be dealing with, security or otherwise. Much like a hammer: a carpenter may be able to swing that hammer better than everyone else, and a data scientist may be able to manipulate that model and get crisp, great results. But software developers are competent people putting together the code; this would be using machine learning as any other tool they might use, maybe one they use in their IDEs already.

Another thing to think about is problems outside the box: when you're doing testing and all of that, there's a lot of benefit to it. There are cybersecurity risks with AI, of course; if you give too much power to AI, humans need to be in the loop. A lot of NIST's work is trying to understand how to describe AI and the kinds of models there are in ways that are on the road to standardization, a little less standard at this point. For IoT developers, especially when you're thinking about developing IoT products that end up in large systems, data modeling may help you understand some of those cybersecurity risks, like the bigger-picture ones I was discussing earlier: where the data is going to flow in the systems you have, and the challenges in that alone, traffic analysis and all that. Then even beyond that, there are pressure points for cybersecurity where certain command and control structures have a nexus. I'm thinking of something like a hub in a complex system; that hub may be of particular interest. How much interest, from a risk standpoint, is exactly the gap that some of the work Shannon's been doing can fill in.

I'll make one final point, just broadly from our work. This is one of the most direct lessons I've learned recently, because we really branched out into what we call non-technical capabilities: the information or policies that need to exist around a device to support cybersecurity, beyond just what the device does. The clearest example, because this was a telling moment in our work, was that we had a software update capability highlighted in our core baseline, which implied that software updates were actually being developed. If not, including a software update capability is possibly a bad move: it creates an opening for software to be changed that might not have been there before. If the manufacturer has no intention of actually using that update capability to deliver updates, then adding it is more likely bad from a security standpoint than if there's a non-technical supporting capability in addition to the technical one.

Another follow-on insight I've had since, and machine learning can bridge this, is that these days, particularly with IoT and AI, what used to be a non-technical capability is becoming technical. Your work, Shannon, is in and of itself an example. That risk assessment, all that stuff you discussed: in the government, it's all done by hand, in the calculator-and-pencils, 1920s, a-million-desks-in-a-giant-factory way. Not to disparage what the government does; it's just a very complex job in the government. The resources are sometimes spread very thin in agencies where the mission is not always cyber. At NIST we have a very strong cyber mission, particularly in our lab. The Social Security Administration has a large base of services to deliver outside of protecting data, not to say that's not one of them; it's just one of many services. They have to go about things the most pragmatic way they can.

That said, as new tools become available, like data modeling tools, or data modeling tools specifically for risk analysis, that creates the ability to take something that's now non-technical, which is open to its own problems in how you design that non-technical capability, and make it technical. Then there's a standardization that can happen with a technical capability like that. I think it's really important for software developers to keep going down this path of automating things that are right now tedious, rudimentary tasks, in enterprises in particular.

Jones: Effectively what I'm seeing at the moment is that systems are becoming more complicated, and in some ways systems are becoming sparsely connected. At the moment, a lot of the anomaly detection is being done in central cloud systems, where you can pretty much guarantee power and network, that seven nines of uptime, because that's what they're built for. Some of the challenges we're having: think about the emerging 5G network, and the edge compute that will give you on every cell tower. Then think about those connected autonomous things that are part of your daily lives. If I can't reliably make a cell call, there's no way they can be connected 24/7. How do we increase the ability for this machine learning, AI, deep-learning-based anomaly detection to operate in all of those different parts of the system? How do we take the models that we trained en masse in the central cloud and bring some of that to the edge cloud, maybe on the 5G network? How do we bring a subset of that to the device, but do it in a way that you're not retraining from scratch, so you can have these pieces of learning?

Then on the device especially, getting data off is expensive if you're not on Wi-Fi at home; there's a huge overhead to that. How can some of these anomaly detection systems help you understand what data needs to be extracted, and when, to retrain and improve those models in the central cloud at given points in time? I think my call to action would be: when people are creating their thing that's going to run on AWS, their thing that's going to hyperscale in a central cloud, how can they think about how it would be deployed across these sparsely connected, safety-critical systems in the future? Because really, that's where our next challenge will be. If you look at Gartner's stats, they're saying 80% of data by 2023 will not be processed in the central cloud. That makes sense if you consider the power of the screen you're probably looking at now, the power of the CPUs in your car, in your fridge. We're only going to get there in a secure way if we start thinking about those emerging problems now.

Data Ethics

Morrison: I like that a lot. The other thing you made me think of is that as we're thinking about anomaly detection with device data, a place where a data scientist, and maybe just other teammates, could help you for sure is thinking about the ethics of the data that you're pulling and how you're using it, and user privacy. There are a lot of really good studies about ethics, the right way to use data, and the way to anonymize data. That's something that is really important to consider, and hopefully a lot of software developers are working with that. I think that might be a good place where a data scientist can look at it from a different angle, rather than just the intended use case. It could be good to talk to security too, and see what the abuse cases are, but also just to think about the ethics of that data.

Jones: It's going to be most secure if we don't take the data off the device. How could we get that anonymized learning? How could we get what we need from it with that piece running on the device? It's always going to be more secure if we don't take it out into the wild.

The NIST Privacy Framework

Fagan: A good moment to plug the NIST Privacy Framework as well. It's not a project I'm involved in, but it may be helpful in what you just described, Shannon: understanding, even from the development standpoint, the kinds of privacy that, I won't even say are accepted, but that you want as an ethical developer, what your goal might be. It helps you understand that in a standardized way, a way you can then communicate to other stakeholders, like maybe your data scientist, to tune those models to meet the privacy goals you've set for your organization. Another great piece of work coming out of NIST.

Potoczny-Jones: I have some experience with that framework. It's definitely really great. I think anybody can pick it up, learn something from it, and advocate inside your organization to say, here are some tools for assessing privacy. They don't have to be super formal. You can have a spreadsheet and say, what's our data? Who really needs it? Where's it going?

Large Scale Data and Privacy

I don't work much in it, but one of the fascinating areas to me is just what happens with such large-scale data. Every developer wants to do a good job: if somebody's health condition is coming over the wire, we're probably going to say, this should be encrypted. What we don't really see, and what the data scientist can potentially help with, is the aggregate collection of data being potentially a lot more sensitive than any one piece of data. When there's so much data, as a developer you're probably not even seeing it at all. You certainly can't get the statistical understanding of how collecting all of this and making it public, or doing anything with it, could violate the privacy of a large number of people, or target some class of people. I don't know if anyone has any reaction to that. I always think it's a fascinating area.

Fagan: There's a similar cybersecurity, or just general security and safety, risk perspective for IoT, and even in network security: the risk due to an individual device. Our work focuses on one component. The example our team throws around a lot is a fish tank thermometer in a casino, which, in a noteworthy hack, allowed access to the broader network. No one in a risk assessment would have thought that it was even of interest to an attacker. The threat sources would have been identified as much lower, to use some terminology from risk assessment, because who cares, it's a thermometer. It doesn't do things that we care about. That is one example of a lateral attack on a network, one of many.

The more direct one, though, is the smart city example: the ability to influence a large number of sensors, to then make an automated decision come out differently. A nation-state level attack, or maybe a large criminal syndicate level attack, where all the traffic lights have been impacted, the sensors have been messed up to make them all think that there's never anyone waiting, or there's always someone waiting, to keep the light green one way and red the other way. That alone would cause some amount of chaos in many cities, and whatever your goal may be beyond that chaos, dominoes fall. That's how attacks work. That individual catalyst attack involves a similar idea: one piece of data being altered, the integrity of that is no biggie, we can just drop it and ignore it. But if that keeps happening over and over, is there fault tolerance within the system to handle that attack? That's the complexity that requires coordination between at least manufacturers and customers.
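As a toy illustration of that kind of fault tolerance, an aggregation step can be made robust to a few tampered sensors by using a median instead of a mean; the sensor counts below are made up:

```python
from statistics import median

def cars_waiting(readings: list[int]) -> int:
    """The median tolerates up to half the sensors reporting falsified values."""
    return int(median(readings))

honest = [4, 5, 5, 6, 4]    # five sensors, roughly five cars waiting
tampered = [4, 5, 5, 0, 0]  # two sensors spoofed to report "nobody waiting"

print(cars_waiting(honest))    # 5
print(cars_waiting(tampered))  # 4 -> the decision stays roughly correct
```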

On either side, there's a certain amount of heads in the sand about wanting to understand that really huge problem: to what you said earlier, Matt, the complexity that these systems are gathering, not just in their network topology, but even in how data flows and is used. The reactions that happen, and the new data generated from them, can be really challenging to understand from a risk standpoint. The federal government is keen on this in particular, I think, given our position: large networks that have a lot of impact. These situations, these events, would have very large magnitude, and protecting against them is a complex task. It can be complex to protect privacy when there's so much data being thrown around sometimes.

 


 

Recorded at: May 06, 2022
