The Common Pitfalls of Cloud Native Software Supply Chains

Summary

Daniel Shapira talks about some of the common security vulnerabilities found in cloud-native environments, and why it is important to take security measures immediately to protect instances in the cloud.

Bio

Daniel Shapira is a Sr. Staff Researcher at PANW, currently involved in security research of CNCF projects with a focus on OS implementations. For the past 11 years, he has found and fixed critical security problems for various enterprises, government agencies, healthcare organizations, and open-source projects in the US, Europe, and Israel.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Shapira: They're debugging it, building it with various tools, and then submitting it to some kind of version control system. After that, our CI pulls that code from the version control system, performs various tests on it, compiles it, and produces some kind of binary artifact. Those binary artifacts are later pushed to the universal package managers, where they are stored; these universal package managers serve as one single source of truth for our binary packages. After that, the artifacts are pushed to the publishing infrastructure, which is Docker, Kubernetes, OpenShift, or the public clouds that you know: AWS, GCP, and so on.

We'll cover the problems that I encountered throughout these components, but before that, let me talk about supply chains in general, and specifically about the supply chain of a whole nation.

Supply Chains

I'd like to introduce you to this fellow. His name is Vladimir Vetrov, and he was an electrical engineer in the early '50s in the U.S.S.R. In the early '60s, he became a KGB agent, and in '65, he was sent to France with one mission on his mind: to recruit agents who were already deployed in various R&D centers throughout the NATO countries. He didn't have any specific task to assign them; he just wanted to recruit as many agents as he possibly could. He spent five years in France, and then he was sent back to the U.S.S.R., where he got promoted. Now he is the commander of a new initiative called Line X. Line X was a Soviet initiative whose sole mission was to steal information and R&D projects from the NATO countries. They used the agents he had recruited to assemble a supply chain that provided them with artifacts from the NATO countries.

After he is promoted, he is sent to Canada. There, Canadian intelligence is quick to pick up on him, and they uncover that he is, in fact, a KGB agent. He cannot perform his duties anymore, he is burnt, and he is sent back to the U.S.S.R. against his will, because he loves the Western lifestyle. He loves the money, he loves the blue jeans, and he doesn't really want to go back. He is mad at the KGB, because he wants to stay. Nevertheless, they don't just pull him back, they also demote him, because now he is burnt; everybody knows about him. They send him to some remote village near Moscow, and he begins to drink. He drinks, and he spends five years brewing his anger against the KGB. Then he decides to plan and execute his revenge on them.

He turns to his French friends and exposes the names of the Line X agents. French intelligence quickly shares this information with the CIA. At first, the CIA thought about simply arresting these agents. In the end, they decided to think about it a little more and play a bigger game. What they did was filter out which agents were responsible for stealing the most important information, the most important technology: for instance, technology developed for the automation of utilities, such as gas pipelines or electricity, as well as space programs. They selected these few agents, and they planted buggy software for these agents to steal.

The Russians, or rather the Soviets, stole these infected artifacts and implemented them in their products. This led to one of the most spectacular explosions ever to happen on Soviet territory, in 1982: the Trans-Siberian gas pipeline project blew up. The pressure in the pipes rose to levels they had no way to manage, and it ended in an explosion. To summarize what actually happened here and how this explosion occurred: the Soviets deployed agents throughout the NATO countries who actively infiltrated those countries' supply chains. They stole artifacts and built their products with them, and because the artifacts were buggy, the products eventually broke.

Common Problems of the Modern Software Supply Chain

Now I will cover the most common problems of the modern software supply chain, and we'll see that the issues that led to this gas explosion are not unique to supply chains in general; they are also present in modern software supply chains. As I said before, we begin our journey at the source code, where we basically have two components: the developers who actually write the code, and the various tools used to compile the code, plus the version control systems that we commit the code to.

When we talk about version control and build tools, we are basically talking about a few tools that are considered mature and safe today, because they have already passed their testing phases and have been reviewed by many security researchers who uncovered the vulnerabilities in them. That's not to say they are completely safe (no tool is completely safe), but they are considered mature and relatively safer than other tools. This is true only as long as they are actually authentic. If I take some tool that you're using and modify it in some way, say I take your IDE and modify it so that it injects malicious code into each and every project you compile with it, then there is nothing you can do about it. That tool is not authentic anymore, and I will infect any product you compile with it.

Now we'll move to the developers. The devs basically do three main things: they write the code, they debug and build it with the various tools, and then they commit it to our version control system. When we use our build tools, as I said before, we need to make sure they are actually authentic. For example, there was a very well-known incident in 2015 that involved fake tools, specifically a fake distribution of Xcode, the IDE for OS X and iOS. It was called XcodeGhost, and it alone led to the infection of over 600 million users, because it spread widely and big companies got this tool into their toolkits. One of those companies was the company behind WeChat, and WeChat alone had around 500 million users. They compiled WeChat with that tool, so every user of WeChat was infected through this incident.

How can we battle this situation and solve it? The first thing we can do is utilize some kind of single source of truth. We have our universal package managers, or whatever kind of repository you use today. These repositories can store your binary artifacts, but not only that: they can store your tools, your installers, the tools you want your developers to use. You can create a repository of tools for your developers and then supply your developers only from that source, and not allow them to go out over the internet and download possibly malicious tools.

The other thing you can do is educate the devs to manually verify the things they download. Today, it's common practice to publish some kind of hash sum on the download page that you can verify against when you download the package. This is provided because things can be modified in transit, even while you download them, so it's very important to verify that what you downloaded is what you expected to download. Last but not least, if you are creating this kind of repository for your tools, you have to make sure it is completely secure. If you are storing all of your binary artifacts and your build tools in such a repository and it is not secure, because it is open to the public or you grant anonymous access to it, then anybody could go in and modify your tools, and everybody who pulls those tools from the repository will be infected. We will talk about that in a couple of minutes.
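
To make that advice concrete, here is a minimal sketch of the kind of manual check being described: recompute a file's SHA-256 locally and compare it with the value published on the download page. Nothing here is specific to any particular vendor; the file path and expected digest come from you.

```python
import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python verify.py <downloaded-file> <sha256-from-download-page>
    path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(path)
    if actual != expected:
        sys.exit(f"MISMATCH: got {actual}, expected {expected}. Do not install this file.")
    print("OK: checksum matches the published value")
```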

We move on to the coding stage. There are three main activities our developers are involved in. When we code, we sometimes end up with bugs in our code, and some of these bugs may lead to very serious, critical security issues down the line. To battle that, I propose implementing fuzzing in your continuous integration, continuous testing, and so on.

What exactly is fuzzing? Fuzzing is when you take a program and feed it various unexpected, invalid, or, I would say, malicious inputs. Then you monitor the program and look for crashes, memory leaks, and any other misbehavior. Once you find one, you know there is a possibility that someone on the outside will be able to craft a more elaborate input that leads to code execution or some other kind of vulnerability. What is more important for you is that now, in the cloud-native ecosystem, there is the possibility to implement continuous fuzzing.
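
As a toy illustration of the idea (real fuzzers such as AFL or ClusterFuzz are coverage-guided and far more effective), here is a minimal random-input harness; parse_record is a hypothetical stand-in for whatever function in your codebase consumes untrusted input.

```python
import random
import traceback

def parse_record(data: bytes) -> dict:
    """Hypothetical function under test: any parser that takes untrusted input."""
    text = data.decode("utf-8")        # may raise UnicodeDecodeError (documented)
    key, _, value = text.partition("=")
    if not key:
        raise ValueError("empty key")  # documented, handled failure
    return {key: value}

EXPECTED = (UnicodeDecodeError, ValueError)

def fuzz(iterations: int = 100_000) -> None:
    for _ in range(iterations):
        data = bytes(random.randrange(256) for _ in range(random.randrange(64)))
        try:
            parse_record(data)
        except EXPECTED:
            pass                        # known, handled failure modes are fine
        except Exception:
            print(f"crash on input {data!r}:")
            traceback.print_exc()       # anything else is a bug worth triaging

if __name__ == "__main__":
    fuzz()
```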

Continuous fuzzing is when you run fuzz tests continuously and only restart them when you add new code to your codebase. By doing this, you gain higher code coverage, and you gain the ability to uncover vulnerabilities and other kinds of bugs in your product before they land in my hands, before you release them to the public. This means less work for me, because I will have fewer vulnerabilities to uncover; it makes your customers safer; and eventually, it makes you happier.

There are some tools you can utilize for that. Probably the most famous of them is ClusterFuzz, a Google tool that they use to fuzz almost all of the OSS projects you rely on today. There are others, including tools built for specific languages, like go-fuzz, which is used only to fuzz Go code, or american fuzzy lop (AFL), with which you compile C code; the point of that compilation is to instrument the code and gain coverage statistics from the fuzzing process.

We move on to the third stage, which is committing the code to our version control system. Usually, we commit to private repositories, which are secure; they're on-prem, and no one can access them, which is good. But sometimes private repositories become public repositories, and sometimes we commit to public repositories in the first place. In any case, committing secrets to a repository is a big problem. Why is this a problem? Because once you commit something to a repository with public access, you should consider it compromised.

For example, if you commit a password or a key, you should generate a new key and change the password. Why is that? Because version control tools save history; they keep a record of your commits. Even if you commit a password and later delete that commit or replace it with something else, the history will still contain the secret you committed. Anybody with access to that history will be able to obtain these credentials and later use them, maybe to log into your Git account, maybe to log into your CI, and so on.

What you actually need to do to prevent that is actively monitor what is committed to your version control systems, and there are various tools that give you the ability to do exactly that. You will be able to monitor whether secrets are being committed: API keys, credentials, anything you want to keep out of your commits. Some of these tools are git-secrets, Git-Hound, and truffleHog. All of these tools can monitor your commits in real time. Nevertheless, it's also important to scan what your history already contains, because your repositories may already hold a lot of secrets. You should actively scan the history you already have to keep old secrets from leaking.
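
A history scan can be as simple as running the repository's full patch log through a set of secret-shaped regexes. This is a bare-bones sketch of the approach; dedicated tools like the ones above ship far larger pattern sets plus entropy checks.

```python
import re
import subprocess

# A few illustrative patterns; real scanners ship hundreds of these.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password assignment": re.compile(r"""password\s*=\s*['"][^'"]{8,}['"]""", re.I),
}

def scan_history(repo_path: str = ".") -> None:
    """Scan every patch in the repository's full history, not just the current tree."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--all"],
        capture_output=True, text=True, errors="replace", check=True,
    ).stdout
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(log):
            print(f"possible {name}: {match.group(0)[:40]}...")

if __name__ == "__main__":
    scan_history()
```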

Tools for Continuous Integration

Now, let's assume we have our source code secured and our version control secured. We move on to our continuous integration tools, which pull the code from version control and use it for various things. You can see some of these tools on the screen right now. I will cover some of the most common problems with these tools, and how I found a lot of exposed instances and a lot of companies in a very dangerous situation.

The first thing is CVEs. For those of you who don't know what a CVE is, it's an ID for a known vulnerability. For instance, once I uncover a vulnerability in one of your products, I write an email to an organization called MITRE, which is funded by the U.S. Department of Homeland Security and maintains a database of known vulnerabilities. Once I report to them, they give me back a CVE ID, which from then on identifies this vulnerability. Anybody with this ID can go online, put it into Google or any other search engine, find exploits for it and very detailed technical analyses of the problem, and then use that material to go out and exploit instances that are still vulnerable.

These tools have over 300 known vulnerabilities as of today, and almost 60% of those were uncovered this year alone. You saw the tools I showed you; you know for a fact they were not released yesterday, and you know for a fact they were not released in 2018. That tells you two things. One is that these tools are only now gaining attention from security researchers. We were not interested in these tools before; that is why you see vulnerabilities in them right now. Those vulnerabilities were there all along, and someone with knowledge of them could have hacked you at any moment. The other thing, which I saw personally and which worries me a great deal, is that people and companies tend not to have a good upgrade process, which leaves their instances exposed with various known vulnerabilities.

A CVE is your cue to upgrade your product. I know that sometimes when you upgrade a product, it may break something; maybe something will not work anymore. But I urge you to do it anyway. If you leave your instances outdated, you're basically opening your door and letting anybody walk in.

The other thing I saw in these tools is secrets. I mentioned that your secrets may leak through your version control system, but if they are not leaking there, they might leak through your CI. The CI tools work in the same manner as version control systems: they also collect logs about your build processes and about the credentials you use, so anybody with access to your build history will be able to uncover the credentials used throughout your CI processes. We saw numerous instances that allowed anonymous access to the system's logs, from which you could pick out these credentials and log into other components of the software supply chain.

Jenkins, specifically, is much more secure now than it was two years ago, but still, anybody with access to create a job on Jenkins will be able to uncover all of the secrets that Jenkins stores. You should really think about what access you're granting to which users, because we saw a lot of instances where users were granted over-permissive access. In some cases, personnel who really didn't need it got the ability to build things, or to read things they were not supposed to read.

The last thing we saw is misconfigurations throughout these tools, and we saw tens of thousands of them across the internet. The most common problem I saw is anonymous access. Most of these tools allow anonymous access by default, and you have to actively disable it. What happens most of the time is that people forget to disable it and leave their instance out in the open. Maybe they think their IP address is secret information or something like that, but let me emphasize just how little protection an IP address gives you.

Let's say you are spinning up an instance in the cloud to test your product, and you're giving it maybe five hours of lifetime. You're thinking, "In these five hours, nobody is going to learn this IP address; nobody is going to hack me." In reality, hackers are not looking for you, and they are not looking for your IP address. What they do is pick a product, for example, Jenkins. They study this product, they work out which port it listens on, and they understand what vulnerabilities it contains. Then they scan the whole internet for that product: they go through the entire IPv4 range looking for the specific port that responds with the specific header identifying Jenkins, for example. If they scan the internet inside that five-hour window, they will hit your IP address, and they will hack your instance in less than 10 minutes, because everything is automated today.

The other thing is that instances are needlessly publicly exposed. If you really don't need an instance to be publicly accessible, block it. Hide it behind a VPN. Don't let unknown users access it; there is no need for that. You are just opening another door for people to attack you. As I said before, over-permissive privileges are being handed out. You should really consider what you are giving your users. If a user really doesn't need something, take it away.

Another huge problem in these products is that authentication is not enforced across all components of the product. When we talk about these products, it's not just the web UI. Sometimes there is an API; sometimes there is more than one API. Sometimes they expose four different ports with four different APIs, yet provide authentication only on the web interface. You should actively check whether any product you use is listening on anything other than the web interface and whether authentication is actually enforced on those interfaces, because a lot of products do not do that, and it gives you a false sense of security.
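
A quick way to start that check is to probe your own instance for listening ports beyond the one the web UI uses. A minimal sketch, assuming a host you are responsible for; the hostname and the candidate port list below are placeholders you would replace with values from the product's documentation.

```python
import socket

HOST = "ci.example.internal"            # placeholder: one of your own instances
PORTS = [443, 8080, 8443, 50000, 2375]  # placeholder list of documented ports

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    state = "OPEN" if is_open(HOST, port) else "closed"
    print(f"{HOST}:{port} {state}")
```

Anything that shows up open beyond the interface you knew about deserves the authentication check described above.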

All you have to do is follow a few steps. Hide your instance behind a VPN if you don't really need public access. Regularly update and upgrade your systems to eliminate the danger of CVEs; then you'll only be exposed to unknown vulnerabilities. That's not to say you'll be fully secure, but the people with knowledge of unknown vulnerabilities are a very small group, and you will cut something like 90% of your attack surface just by updating your products. Try to avoid using secrets throughout your CI projects and your build systems. What I would advise is pushing the use of secrets to the latest possible step, which is the runtime. There are tools that can help you with that; I'm not at liberty to name them, but you can look them up. Limit the access scope. Last but not least, you should expect to be hacked: actively monitor your systems and understand that the danger is real.
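
Pushing secrets to the runtime step can be as simple as refusing to accept them anywhere else. A minimal sketch of that idea: the application reads its secret from the environment at startup (injected at deploy time by the orchestrator or a secret store, both assumptions here), and fails fast if the secret is missing, so nothing needs to live in source, build configuration, or images.

```python
import os
import sys

def require_secret(name: str) -> str:
    """Fetch a secret from the runtime environment; fail fast if it is missing."""
    value = os.environ.get(name)
    if not value:
        sys.exit(f"{name} is not set; refusing to start")
    return value

# Injected by the deployment environment at runtime, never committed or baked
# into an image. The variable name is a placeholder.
DB_PASSWORD = require_secret("DB_PASSWORD")
```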

Universal Package Managers

We covered our CIs and secured them, and we move on to our universal package managers. The universal package managers consist of various tools; some run on-prem, and some run in the cloud as a managed service. I'm concentrating on the tools that run on-prem, as I didn't have the opportunity to test the managed services. What I found in these tools is that authentication is still their Achilles' heel. Most of these tools ship with dangerously over-permissive default settings, and I mean really dangerous. For example, a lot of these tools come with default accounts, and some of them come with hidden default accounts that you are not aware of. You go to the UI of the product, look at the user list, and delete the default accounts. Let's say you have deleted all the default accounts you can see; in reality, the product's database may still store an internal user, and someone with that knowledge will be able to log into your product.

Ask yourself: is there a universal package manager tool you're using that is publicly accessible today? Yes. Did you get the chance to change the default accounts? No. Maybe you just enabled SSO and thought you were safe, but in reality, that's not true. If this is your situation, you should consider your artifacts infected, and you should consider your artifacts stolen. You should check the logs of these systems and see whether there has been unknown access to them. This actually happens a lot today; for these tools alone, I found 25,000 instances exposed to the public. And it's not only that the users of these products made bad decisions, because a lot of them did enable SSO; the products themselves didn't provide adequate security once SSO was enabled, because if you knew the default accounts, you could bypass the SSO entirely by firing direct requests at the products' APIs. In that case, SSO no longer provides any security at all.

Publishing Infrastructures

We move on to our publishing infrastructure, which is the final stage our artifacts end up in. You can see a lot of these products on the screen: Docker, Kubernetes, and so on, plus the public clouds we use.

In 2017, I published a paper about Docker and escaping Docker containers. It was the first paper to show that it is possible to escape from a Docker container by exploiting a kernel vulnerability in the underlying Linux kernel. From that point on, we have uncovered at least a dozen other vulnerabilities that can be used to escape from a Docker container, whether in the Linux kernel or in the Docker engine itself.

With these products, or these infrastructures, I would like you to think about a few specific points. The first one is what is actually listening in these products, and my example is, again, Docker. If you are familiar with Docker, you are aware that there is the Docker daemon behind Docker. Today it listens on a UNIX socket, and you can execute Docker commands through that socket. Before the UNIX socket, for the first four years of the Docker product's lifetime, it listened on all available interfaces on your system; it would listen on any possible IP your system provides. In addition, it didn't require any kind of authentication on that socket. Anyone who discovered that the Docker daemon was listening on its TCP port (by convention, 2375) would be able to execute any kind of container in your Docker environment, and I mean any kind, including privileged containers, which can lead to a full host takeover.
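
If you want to verify that none of your own hosts still expose the daemon this way, an unauthenticated Docker Engine API answers plain HTTP on that TCP port. A hedged sketch; the hostname is a placeholder, and this should only ever be pointed at machines you are responsible for.

```python
import json
import urllib.request

def docker_api_exposed(host: str, port: int = 2375, timeout: float = 2.0) -> bool:
    """Return True if the Docker Engine API answers unauthenticated on host:port."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/version", timeout=timeout) as resp:
            info = json.load(resp)
            print(f"exposed daemon on {host}:{port}, Docker version {info.get('Version')}")
            return True
    except OSError:
        return False  # refused, timed out, or not speaking the API

if __name__ == "__main__":
    docker_api_exposed("my-build-host.internal")  # placeholder hostname
```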

The thing is, Docker is secure today from that point of view: they changed the daemon to listen only on the UNIX socket, which is local to the host. Other products, however, are still happily listening on all available interfaces, and some products are composed of various tools and components. You might have a product from this list that is built from 13 other products. For example, there may be some kind of interface to execute containers, then an interface to monitor those containers, then another product to provide logging for these systems, and so on. You would expect such products to be secure from every angle: you would expect the logging to be secure, and you would expect the execution environment to be secure.

In reality, they provide security only for their main component and forget about all the other components in the product. Some of these products are in a situation where the main window, the main UI, is secure (you can enable SSO, you can lock down the API, and so on), but six other components expose an unauthenticated API through which you can execute code. What this leads to is a situation where companies take such a product, deploy it throughout their systems, and think they are secure, because the main component is secure, and that is the component they interact with and manage everything through, so they forget about all the others.

In reality, all the other components are still listening on all available interfaces without authentication. I can search for the secure component in order to find your instances; then I just need to know about all the other API ports that are exposed without authentication. There are thousands of these instances online; you can check that yourself. It's very easy to find instances like that. Even a seven-year-old would be able to hack them, and I'm serious.

The other point is who is actually talking inside these environments. By default, when you execute two Docker containers, for example, they can communicate with each other. Sometimes you actually need that; sometimes you have containers communicating with each other, passing messages. If you are doing that, you should make sure this communication is secure, because if it isn't, you may leak a lot of sensitive information. I might take over one of your containers and then just listen to who it's talking to. Just by listening on the Docker interface, I will get a lot of other information that gives me the ability to move laterally through your container environment, maybe break into other containers, and later take over the whole Kubernetes cluster. It all depends on the information that leaks through these communications.

Another thing is that the metadata in these instances actively leaks sensitive information, and you should check that. It would be very interesting to take a look: spin up an instance in the cloud, for example, just run tcpdump, and see what is going on. You will see that even if you are not exposing any application, even if you are not running any application, there will still be a lot of communication going on. This communication leaks information related to your clusters and your instances. That information, again, is something I can collect and later use to move from one exposed instance that I have hacked across your whole cluster. What you should do is explicitly allow the communications you need and block everything else. If you don't need it, just block it.
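
With Docker specifically, "allow what you need, block the rest" can be expressed as an internal user-defined network that only the cooperating containers join. A sketch using the Docker SDK for Python; the network and image names are hypothetical.

```python
import docker  # assumption: the Docker SDK for Python (pip install docker)

client = docker.from_env()

# An internal network: containers on it can reach each other, but the network
# has no route out, and containers elsewhere cannot reach it.
backend = client.networks.create("backend", driver="bridge", internal=True)

# Only the two services that genuinely need to talk join the network.
client.containers.run("redis:7", name="cache", network="backend", detach=True)
client.containers.run("my-api:latest", name="api", network="backend", detach=True)  # hypothetical image
```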

The last thing about these tools is who is actually accessing them. The most common hacking activity in the cloud today is crypto-mining. That is because of the power of the cloud: hackers are aware that when you spin up an instance, it is running on a cluster that can run maybe 1,000 more instances like it. They'll take over a Kubernetes cluster and spin up a service that launches 500 instances of a miner. Today they will do that rather than hack for other gains, because it is very fertile ground for collecting money from you without you realizing it.

Another thing we noticed is that some attackers are more intelligent than just spinning up miners. For example, we had an instance that we used for testing and exposed to the internet, and it caught a miner after four hours of being live. Once we noticed, we tried executing various tools to see what would happen. The attacker noticed our activity, probably by analyzing the CPU utilization, and once he did, he reduced the number of instances he used for mining so that we would not pay attention to the CPU utilization. This is very interesting: they are actually thinking about how you look at the system, how you use it, and how you are going to discover whether you've been hacked.

Another thing about who is accessing is the configuration of the systems. One tricky configuration we have seen is in AWS, specifically, and it concerns authenticated users. There is a group in AWS called Authenticated Users. This group represents any AWS user: not just the users in your organization, but any authenticated AWS user. If you grant any access to this group, it means I can open a new AWS account with no relation whatsoever to your organization, log into your resources, and use them. You should really pay attention to that. Today it is documented in the AWS docs, but if you go back about a year and a half, it was not documented, and a lot of people used it because the name is suggestive of security. We saw a lot of instances exposed that way; people could just open accounts and log into other organizations' systems.
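
In S3, for example, that group shows up in bucket ACLs as a grantee URI ending in AuthenticatedUsers, alongside AllUsers for fully anonymous access. A small boto3 sketch that flags buckets granting anything to either group; it assumes AWS credentials are already configured in the environment.

```python
import boto3  # assumption: credentials and region configured in the environment

# The two S3 ACL grantee groups that effectively mean "everyone": anonymous
# users, and *any* authenticated AWS account, not just accounts you control.
RISKY_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in RISKY_GROUPS:
            print(f"{bucket['Name']}: grants {grant['Permission']} to {grantee['URI']}")
```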

What you should do to avoid these situations is actively monitor your clusters and your execution environments: see what is getting executed and what services are running. Most of the time, you will see a new service created, a new service in your cluster that spins out the miner instances. Most of the time, attackers will use a service as opposed to executing a single container, because a single container is not much for mining bitcoin, but 500 containers, that's a nice number.

The cloud providers give you the ability to monitor CPU utilization, so you can set up alerts on it. If you are sure that your product never utilizes more than, let's say, 40% of the CPU, then set up an alert that fires once CPU utilization rises to 60% or 70%. Anything unusual for your environment gives you the chance to stop an attack from going further, maybe at just the right time, before it consumes your resources and earns money at your expense.
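
On AWS, for instance, such an alert is a CloudWatch alarm on the EC2 CPUUtilization metric. A sketch of the 60-70% suggestion via boto3; the instance ID and SNS topic ARN are placeholders, and the thresholds should reflect what is normal for your own workload.

```python
import boto3  # assumption: credentials and region configured in the environment

cloudwatch = boto3.client("cloudwatch")

# Alert when average CPU stays above 70% for two consecutive 5-minute periods,
# a level this hypothetical workload should never reach in normal operation.
cloudwatch.put_metric_alarm(
    AlarmName="unexpected-cpu-spike",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```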

What to Ask Yourself

We have covered the components of the software supply chain, and I would like you to leave this talk with a few questions on your mind. The first question is: can anyone access anything, and I mean anything? You should know what "anything" means here, because some of you are probably not aware of what components are actually deployed as part of the tools you use. First of all, you should check which interfaces are actually exposed, and then check whether anyone can access them without authentication at all. We saw too many instances exposed that way, without any authentication, without anything that would provide any kind of security.

Another thing: you should check what permissions you are granting to your users, your testers, your developers. You should apply the principle of least privilege. If someone does not need a specific privilege, do not grant it; there is no need for that. I would also advise categorizing your privilege groups: for example, one set of privileges for devs and one for QA personnel.

You should check whether there is any sensitive information in your commit history, build history, or any kind of logging system you expose. I have talked about version control systems and CI tools, but there are other components that may expose logs and other potentially sensitive information. You should actually scan for this stuff.

You should check how and where your secrets are stored, because we have seen a lot of instances of secrets stored in cleartext, or maybe in Base64 encoding, which is not secure. You should really try to utilize some kind of tool, such as HashiCorp Vault or Google's Secret Manager, any tool that can provide you with a secure environment for managing these secrets. Do not manage them by yourself, because you will probably do it the wrong way.
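
As one example of delegating secret management, here is a minimal sketch of reading a secret from HashiCorp Vault's KV v2 engine with the hvac client. The Vault address, token, path, and key are all placeholders, and in production you would prefer a short-lived auth method over a static token.

```python
import os
import hvac  # assumption: the HashiCorp Vault client (pip install hvac)

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],     # e.g. https://vault.example.internal:8200
    token=os.environ["VAULT_TOKEN"],  # placeholder: prefer short-lived auth in production
)

# Read a secret from the KV v2 engine; the path and key are hypothetical.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/database")
db_password = secret["data"]["data"]["password"]
```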

Last but not least: is there any internet-accessible component anywhere in your infrastructure? You should really go and check that, because this is the most common problem today. I see some badges over here, and I can tell you that some of the badges, some of the companies, are names I have encountered in my research, so I would advise you to look into that.

Questions and Answers

Participant 1: I have a question regarding the fuzzing. [inaudible 00:42:46] Will it be on the integration side or will it be on the unit testing side?

Shapira: On the testing side.

Participant 1: When we accept our build, when our build passes all the checks and all the steps in our workflow, and then we move to the integration level where we accept our product, is it right that we have to introduce the fuzzing at level one, where we are doing our unit testing?

Shapira: I would introduce the fuzzing where you do the unit testing, or right after the unit testing that you perform. That gives you a kind of unified methodology, rather than pushing the testing into different phases.

Participant 1: My next question relates to CircleCI. When we introduce session tokens for our builds, is that secure?

Shapira: Do they expire?

Participant 1: They do expire, but we have to renew them every time they expire, and we have jobs running that can tell us whether a token has expired or not. But within our internal company resources, when we are building and releasing our product, we have session tokens in our CircleCI steps, and we encrypt them at our own product level. Is this the right way to do it?

Shapira: That's a bit of a tricky question. Once you use tokens, you still need to make sure those tokens are secure. The thing with tokens is that if you give a short-lived token a very short expiry date, it becomes a much safer token. For example, I talked about exposing secrets in log systems and things like that. You could be exposing tokens through those logs, but the timing of the exposure is very sensitive in that case. I might need to actively sit on your system and wait until you execute a build and the token actually gets used; then I would need to rush to use that token to hack you. If you give the token a lifetime of less than, let's say, five minutes, I will not have enough time to use it effectively.

Participant 1: TTL really matters here.

Shapira: Yes.

Participant 1: Got it. My last question: when we are setting up our instances, is a security group the right option for everything? When you mention the Authenticated Users group, I know it's a default pool, and people use it very often. At the enterprise level, we care about our security groups. Are security groups the current solution for making things secure?

Shapira: Yes.

Participant 1: So there is no other way, as of now, to secure our instances?

Shapira: There are other ways; you can go further and be more explicit about it, I think.

Participant 1: I'm not asking about one specific cloud; I'm talking about multi-cloud.

Shapira: Yes, I understand. You could be more specific with your configurations and not just depend on the defaults that are provided, whether they are secure or not.

Participant 2: One of the challenges that we have in our company is transitive dependencies. Developers just like seeing [inaudible 00:46:22], seeing an NPM package, whatever package, and that package comes with other dependencies. All those things get introduced into the supply chain. Would you have a word of advice for that, or some sort of guidance?

Shapira: Sure. I didn't cover package vulnerability management, because I felt that topic has already been covered a lot. To answer your question, there are various tools that can scan the packages you're using, both free and paid. There are various options you could utilize for that.

Participant 2: We do use those tools. You talked about upgrade [inaudible 00:47:08]. It's not that easy.

Shapira: Yes, but it is out of your control in that case. All you can do is hope for the maintainers to upgrade and fix it for you in the end. What we do in such cases is actively talk to the maintainers and provide them with guidelines on how to fix the problems, and so on. There are cases where you cannot do anything about it. What I can tell you is that, most of the time, these vulnerabilities in the packages you use will not make your product actually vulnerable. Most of the time, these vulnerabilities depend heavily on how you actually use the package, and we saw throughout our research that, most of the time, the usage is OK.

 


Recorded at: Jan 22, 2020
