
GitHub CodeQL Code Scanning Now Supports Setting a Threat Model

GitHub has recently extended its CodeQL-based code scanner by adding the possibility to specify the desired threat model. The new feature is available in beta for the Java language.

The new feature is implemented as a setting that allows users to select which threat model should be used to decide what input data can be trusted and what data should be considered as a potential source of risk for the system.

In its default configuration, CodeQL uses a threat model that considers any remote source, including HTTP requests, as tainted, i.e., untrusted. According to GitHub, this is adequate for the vast majority of codebases, but some projects will want to extend the set of tainted input sources to include local files, command-line arguments, environment variables, and databases:

You can enable the local threat model option in code scanning to help security teams and developers uncover and fix more potential security vulnerabilities in their code.
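To illustrate the difference, consider a hypothetical Java snippet (the class name and environment variable are invented for this sketch) in which a file path is built from an environment variable. Under the default threat model only remote data is tainted, so this flow goes unreported; with the local threat model enabled, the environment variable becomes a taint source that a path-injection query could flag.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ConfigLoader {
    // Builds a path from an untrusted local source. With the default
    // threat model, CodeQL does not treat System.getenv as tainted;
    // with threat-models: local, this becomes a reportable flow.
    static Path configPath() {
        String dir = System.getenv("APP_CONFIG_DIR"); // local taint source
        if (dir == null) dir = "/etc/app";
        return Paths.get(dir, "app.conf");            // potential path-injection sink
    }

    public static void main(String[] args) {
        System.out.println(configPath());
    }
}
```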

The new option can be enabled in the GitHub UI, where the Threat model setting appears alongside the Query suite setting, which lets you select the group of CodeQL queries to run against your codebase.

Alternatively, you can enable it by specifying threat-models: local in an Actions workflow file.
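In a workflow, the setting can be passed to the CodeQL init action as inline configuration. A minimal sketch, assuming the `github/codeql-action/init` step of a standard code scanning workflow (only the init step is shown; the language list is illustrative):

```yaml
- uses: github/codeql-action/init@v3
  with:
    languages: java
    config: |
      threat-models: local
```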

Finally, if you run CodeQL scanning through the command line or in a third-party CI/CD, you can provide the --threat-model=local flag.
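When using the CodeQL CLI, the flag goes on the analyze step. A sketch, assuming a database named `java-db` has already been created and the output flags shown here (SARIF format and output path) match your pipeline's needs:

```shell
# Run the analysis with local sources treated as tainted
codeql database analyze java-db \
    --threat-model=local \
    --format=sarif-latest \
    --output=results.sarif
```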

By letting users specify which threat model to apply, GitHub makes its code scanning solution more adaptable to different codebases, since the analysis can now take into account the specific context in which the scanned code runs.

Understanding the threat model associated with a system or codebase is a key step toward ensuring its security. According to the Threat Modeling Manifesto, this kind of analysis starts with identifying what can go wrong and listing all possible threats. Threats are usually specific to each system and depend on how it was designed and implemented.

As is the case with many security-related practices, the earlier you identify threats, the better you can address them. Code scanning can be seen as a "shift-left" approach to improving a system's security: threats that are not caught early must instead be mitigated in later phases of the system's lifetime.

While adding support for a local threat model is undoubtedly a step forward in GitHub's offering, there are many additional dimensions to code scanning threat modeling it does not cover yet, including authentication, frequency of execution, accessed resources, protected assets, and so on.
