
Leveraging Diversity to Enhance Cybersecurity


Key Takeaways

  • Cybersecurity isn’t just coding. There is more to the development, implementation and sustainability of an effective cybersecurity program than can be achieved with technology solutions alone. 
  • Group-think has real consequences. Attackers only need to find one way into a system; defenders must try to find them all, and that can't be done effectively without diverse mindsets building the defenses. 
  • Non-technical voices make for more cyber-resilient systems. Cybersecurity risks keep growing and extend beyond any specific tech stack; people, processes and technologies must all work together to mitigate them.  
  • How we define "qualified" must change. Sticking to old-school criteria and strict educational requirements means missing great team members, and makes bridging the resource gap harder than it already is.
  • We won't get it right immediately, but we have to start. Efficacy is difficult to measure in cybersecurity, and this is no exception. There will likely be missteps along the way, but it's evident that the strategy to date has been insufficient and we must try something new.  

2020 & 2021 have shaken the foundations of every facet of our lives. From our health, our jobs and finances, to our governments and our law enforcement. These systems will be rebuilt in our lifetime - and we all need to ask ourselves: what can we do to ensure cybersecurity alleviates the bias that exists today? Ensure there is a diverse mindset applied to cybersecurity you and your organizations face. This means including non-technical people, those from non-traditional backgrounds, and being intentional about avoiding herd mentality. 

If we as an industry proclaim security in depth as a best practice, we must equally pursue diversity in depth to most effectively mitigate the risks that abound. 

Problems caused by homogeneity in security

Whenever a system or process is designed, it can involve multiple data components and various users performing different tasks. As a user engages with the system, information can be shared across multiple people, processes and technologies. The developers and defenders of a system must attempt to predict these behavior patterns and build the system to be resilient to potential weaknesses. This can sometimes require making assumptions. 

These assumptions about how a user or process is expected to engage can leave threat vectors unidentified or unseen. When an unmitigated threat vector exists in a system, it can become a mechanism for an attacker to gain unauthorized access. 

Imagine a hospital setting for a moment: it’s filled with devices made by a manufacturer, installed by a technician, operated by a clinician and monitored by a hospital IT organization. 

For a patient to be "processed" requires data to move across various hospital functions: from the medical device to the hospital record system, to the billing system, to an invoice created by billing and ultimately sent to collections. A successful transition between departments requires a common understanding of the end objective and of who is responsible for what. If there is a misalignment, for example, it becomes possible for sensitive data to be disseminated or accessed inappropriately. 

Assumptions aren't always made intentionally; sometimes the ambiguity of a requirement results in assuming what that requirement means. Or perhaps a plan is made by borrowing a requirement from a similar past project, without knowing that certain qualities may be preferred in the new context. 

This concept is frequently discussed in design, but it applies directly to security as well. If everyone on the security team thinks the same way and follows the same way of working, you will keep assuming a user will interact with a system in a specific set of ways. The absence of creativity in thinking about how users behave with a system can mean missed threat vectors. 

As defenders, we are best poised for success when we understand the universe of threats we face and can plan accordingly. The majority of security breaches that occur today are attributed to human error and social engineering scams. These attacks and techniques exploit and manipulate human behavior to trick users. To prevent them, defenders must understand the psychology and behavior of all users, not just those from a single background. 

Building a cyber-resilient strategy means understanding more than the technology behind it.  Getting a wider range of people into security isn’t just equitable; diversity is the best chance we have to make a real difference with security.

Biases and assumptions about how individuals and organizations deploy and use technology

The absence of diversity at all levels makes things harder across the board, from identifying and addressing threats to innovating and meaningfully collaborating with partners. For example, a more seasoned population working in security may assume a digitally native generation innately understands cyber threats, while a mostly young team, well aware of phishing attacks, may assume that baby boomers, who are frequently victims of these attacks, are equally aware of them.

When designing a security program, there is a seemingly never-ending list of things that could derail its success, and they must all be mitigated. Seasoned executives will confirm that protecting assets used to be relatively straightforward, with amateur or opportunistic attackers being the most probable threat. 

Today, the situation is different. Cybercriminals are organized, motivated, funded, and possess a wide range of skills. In an assessment of the SolarWinds attack in the winter of 2020, Microsoft estimated at least 1000 engineers were involved in creating the attack. Is there any non-government entity that has an equivalent amount of resources defending their ecosystem? 

It’s a common trope in cybersecurity, and healthcare, to say that people are the weakest link. This is often followed by a statistic from the 2020 Cost of a Data Breach Report published annually by IBM and Ponemon stating 23% of breaches were attributed to human error or negligence.  

But maybe that statistic should instead be interpreted as 23% of use cases where a human's behavior was misunderstood, and where technology failed such that the human became the expected last line of defense. Email is a great example: we've all sat through training that tells us to check various features of a message before clicking anything, to avoid falling for a phishing scam. In reality, most email providers already run ML/AI-trained filters that identify potential scams and filter suspicious emails out. If those filters cannot identify a phishing email, is it really fair to expect an end user to do so? 
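To make that division of labor concrete, here is a minimal sketch of the kind of automated screening a provider might run before a message ever reaches a user. The phrase list, scoring weights and threshold are invented for illustration; real providers use trained models over far richer signals such as sender reputation, URL analysis and authentication results.

```python
import re

# Hypothetical indicators for illustration only; production filters rely on
# trained models and infrastructure signals, not a hand-written phrase list.
SUSPICIOUS_PHRASES = ["verify your account", "urgent action required", "password expired"]
URL_PATTERN = re.compile(r"https?://\S+")
IP_URL_PATTERN = re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}")

def phishing_score(subject: str, body: str) -> float:
    """Return a crude 0..1 score; higher means more phishing-like."""
    text = f"{subject} {body}".lower()
    phrase_hits = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Links that point at a bare IP address are a classic phishing tell.
    ip_links = sum(bool(IP_URL_PATTERN.match(u)) for u in URL_PATTERN.findall(text))
    return min(0.2 * phrase_hits + 0.3 * ip_links, 1.0)

# A message like this would be quarantined long before a user has to judge it.
print(phishing_score("Urgent action required",
                     "Please verify your account at http://192.0.2.1/login"))  # 0.7
```

If even automated screening of this kind misses a message, the question above stands: expecting the end user to catch it is asking them to outperform the filter.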

There is also a larger societal responsibility we face as security practitioners. 

The 2016 U.S. Presidential Election saw Russian disinformation heavily target black communities, using fake accounts on all the major social media platforms to sow seeds of discord by sharing racially charged posts. 

How did the authentication model fail in identifying fake accounts? What aspect of spoofing was missed? Or perhaps it was never expected that such politically sensitive data would be shared through social media?  

And it didn't stop with the elections; it has continued in the harassment of BLM activists by cybercriminals. In the spring of 2020 we saw DDoS attackers target BLM groups. Cloudflare reports that organizations classed as advocacy groups were subject to a much higher rate of attack than other organizations: May attack volumes were 1,120 times the April figure.

NIST found examples of age, gender, and racial bias in facial recognition: several widely deployed systems were 10 to 100 times more likely to misidentify African American, Alaskan Indian, Pacific Islander, and Asian American faces compared with their Caucasian counterparts.   

Bias breeds distrust in systems and institutions, and as noted above, there are multiple examples of how technology has furthered this problem.  Technology and policy mitigations need to be implemented where society, systems and institutions have weaknesses. 

Increasing participation of those who aren't engineers in cybersecurity

Cybersecurity encompasses a variety of functions, from pen testing to incident response to training & awareness. But one of the common issues security faces as a community is how it is perceived by outsiders. Media depicts all security professionals as hoodie-wearing figures coding in front of a computer screen. To an outsider with no previous experience and no technical background, cybersecurity does not seem very appealing.  

But according to a cybersecurity workforce study led by Frost & Sullivan, 30% of all cybersecurity roles are filled by people with non-technical backgrounds. According to the (ISC)² Cybersecurity Workforce Study in 2020, there is a cybersecurity workforce gap of over 3 million people globally, meaning the workforce needs to grow by 145 percent to close that gap. That's a lot of non-technical jobs. 

We're never going to be done, but starting and showing progress is already a step forward. There is no single way to pursue this, and it takes more than one skill; it's a big set of activities, just like security itself.  

In practical steps, this begins with identifying the non-technical cybersecurity areas of focus for the various business functions at a company. As noted in the table below, aligning cybersecurity insight to each function engages the relevant constituents, who have a vested interest in their processes operating effectively.   

For those trying to determine what a non-technical cybersecurity role may entail or need to address, the table below maps where cybersecurity is relevant at the intersection of a non-technical support area and a business function. For example, where governance and production intersect, the topic of procurement criteria is identified. This could include supporting third-party software penetration testing and remediation requirements, or sharing the software bill of materials for vulnerability management. Each of these intersections requires some cybersecurity consideration that does not necessarily require technical expertise. 

| Business Function ↓ / Non-technical support → | Policy | Governance | People | Management |
| --- | --- | --- | --- | --- |
| Product Development | security integration in development framework | gate checks to ensure security enforcement | culture | documentation; customer/vendor |
| Production | supply chain management | procurement criteria | proactive behaviors | traceability |
| Finance | preventative controls | system-driven controls | privacy and regulatory requirements | impact on bottom line |
| Marketing | transparency to customers | procurement/sales support | culture of collaboration | learn from sales insights to inform decisions |
| People Management | UX-considerate system controls | procedure for decision making (e.g., incident response) | training & awareness | regular exercises |
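To make cells such as "procurement criteria" and "traceability" concrete, a non-technical role can own a software bill of materials review like the one sketched below. The SBOM layout and advisory list are invented for illustration; a real program would parse a standard format such as SPDX or CycloneDX and query an actual vulnerability feed.

```python
import json

# Hypothetical advisory data; in practice this would come from a vulnerability feed.
KNOWN_VULNERABLE = {
    ("openssl", "1.0.1"): "known-vulnerable release, upgrade required",
    ("log4j-core", "2.14.1"): "affected by Log4Shell",
}

def review_sbom(sbom_path):
    """Flag SBOM components that match a known-vulnerable (name, version) pair."""
    with open(sbom_path) as f:
        # Assumed shape: {"components": [{"name": "...", "version": "..."}, ...]}
        sbom = json.load(f)
    findings = []
    for component in sbom.get("components", []):
        key = (component["name"].lower(), component["version"])
        if key in KNOWN_VULNERABLE:
            findings.append(f"{component['name']} {component['version']}: {KNOWN_VULNERABLE[key]}")
    return findings
```

The policy questions (what gets flagged, and whether a finding blocks a purchase) sit with procurement and governance, even when the parsing itself is automated.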

Benefits of increased participation

With the new United States presidential administration and its commitment to prioritizing cybersecurity, as shown in the Executive Order on Improving the Nation's Cybersecurity, it is anticipated that the security of critical infrastructure will get a major overhaul. This includes the FDA ranking the finalization of its medical device premarket cybersecurity guidance as a priority for 2021.  

This reinforces that we cannot persist "as we have" in dealing with cybersecurity threats. Instead, with a diverse team we have a chance to design our new systems with the intention of proactively protecting our users from threats. Proactive measures run the gamut, but can include cryptographically signing commands that must be verified prior to being executed, reviewing software bills of materials to identify known vulnerabilities, and proactively performing digital forensics to identify potential vulnerabilities. Being proactive, as shown by the cybersecurity resource allocation and efficacy index, results in higher confidence in how a system operates.  
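As a minimal sketch of the first of those measures, the snippet below signs a device command and verifies the signature before acting on it, using Ed25519 from the third-party cryptography package. The command format and the execute step are invented for illustration, and a real deployment would also need key provisioning, rotation, and replay protection.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key stays with the manufacturer or operator and only
# the public key is provisioned onto the device.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

command = b"infusion_rate=5ml/h"        # hypothetical device command
signature = private_key.sign(command)   # issued alongside the command

def execute_if_verified(cmd: bytes, sig: bytes) -> None:
    """Act on a command only if its signature checks out against the trusted key."""
    try:
        public_key.verify(sig, cmd)
    except InvalidSignature:
        print("rejected: signature check failed")
        return
    print(f"executing: {cmd.decode()}")  # stand-in for the real device action

execute_if_verified(command, signature)                   # executes
execute_if_verified(b"infusion_rate=50ml/h", signature)   # rejected: payload altered
```

The design point is that the device never has to guess whether a command is legitimate; anything unsigned or tampered with is rejected before a user is ever involved.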

With a well-structured team that includes diverse perspectives, our systems will evolve to reduce the extent to which we rely on users to defend against unknown threats. Note the nuance: I'm not saying the user doesn't know how to use the device. I'm saying that with tech there will always be unknowns and there will always be weaknesses. The best systems are those that do not rely on the user for detection.  

In healthcare this is especially relevant as we cannot be in a situation where a patient or provider questions the integrity of data from devices. As demonstrated in research around modifying CT scans, malware could be used to add realistic growths to CT or MRI scans, or remove real nodules and lesions without detection. This could lead to misdiagnoses and possibly negative patient outcomes.  

We must be intentional and prioritize designing user-considered security into devices if we are to ever change the landscape of cyberthreats in healthcare. 

How people without a technical background can demonstrate the value they bring

One of the most impactful skills is the ability to translate cybersecurity into business risks. I’m not talking about FUD (fear, uncertainty or doubt) or crying wolf. I am talking about a methodical and intentional way to understand the impact of cybersecurity on business operations. 

This ties in perfectly with the growing trend of implementing risk-based cybersecurity strategies. In essence, risk-based means aligning technological decisions with risk, and it's helpful regardless of the maturity of existing programs. As noted in a McKinsey & Company article discussing the evolution of risk-based cybersecurity, one organization that reorganized its priorities based on risk increased its projected risk reduction 7.5 times above the original program at no added cost. 

This can begin with a cybersecurity team asking the businesses about the processes they regard as valuable and the risks that they most worry about. This doesn’t require any technical skills! Just a willingness to learn. Making the connection between the cybersecurity team and the businesses is a highly valuable step in itself. It motivates the businesses to care more deeply about security, appreciating the bottom-line impact of a recommended control. 
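One lightweight way to turn those conversations into decisions is to rank candidate controls by expected risk reduction per unit of cost. The controls and numbers below are invented for illustration and are not the methodology from the McKinsey article; in practice the estimates would come from the business conversations described above.

```python
# Hypothetical controls with rough risk-reduction and cost estimates.
controls = [
    {"name": "MFA for the billing system",  "risk_reduction": 0.40, "cost": 50_000},
    {"name": "Network segmentation",        "risk_reduction": 0.25, "cost": 120_000},
    {"name": "Phishing-resistant email",    "risk_reduction": 0.30, "cost": 30_000},
]

# Rank by risk reduction bought per dollar: the core move of a risk-based allocation.
for c in sorted(controls, key=lambda c: c["risk_reduction"] / c["cost"], reverse=True):
    per_million = c["risk_reduction"] / c["cost"] * 1_000_000
    print(f'{c["name"]}: {per_million:.1f} risk points per $1M spent')
```

The arithmetic is trivial; the valuable, non-technical work is eliciting credible estimates of what the business stands to lose and what each control actually changes.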

Bringing a wider range of people into security

Deloitte conducted a study focused on closing the cybersecurity gap that delivered multiple interesting insights. My favorite part is that they titled the study "the changing faces of cybersecurity," focusing on the changing skill sets required to be successful in this space. 

We need to think about how we encourage a more diverse workforce to consider working in security. That requires also thinking about different ways into the security field, as well as about training people moving into the field, and accessible tools that support the widest possible workforce. 

Specific trends identified included job descriptions moving away from narrow technical disciplines and becoming more "esoteric." The report also emphasized that future cybersecurity needs expertise in privacy and security regulation. 

A few suggestions to get started on building a more diverse and inclusive cybersecurity team are included below: 

  1. Change the definition of qualified 

Often a specific set of criteria is requested when sourcing candidates for cybersecurity roles. The formal paths typically used in recruitment don't accommodate different experiences very easily.  

It takes effort to understand people who come from different backgrounds, different education and different experiences.  But there is a growing population of technology companies that are changing this “requirement.” 

Salesforce, for example, released cybersecurity training for everyone - an attempt to address the systemic access issues in technical education and change how a candidate can demonstrate their qualifications. 

  2. Target different populations 

One way to attract a more diverse cross-section is to work with organizations that are focusing on attracting underrepresented groups such as The Diana Initiative. And if the oversubscribed women-only Blackhoodie workshops are any indicator, there are plenty of women interested in cybersecurity opportunities.

  3. Retain talent 

With all these efforts to attract diverse talent, it would be remiss not to think about retaining talent. 

How do we keep teams from burning out and protect them from stress? Security can be so demanding that a report found more than half of practitioners have considered switching to different jobs entirely.  

We need better support systems; not just managing and mentoring staff to keep learning and progressing in their career, but also giving them resilience training to deal with being on the front line defending their organization.

To be successful as a cybersecurity community, we need to work to find pathways into the security industry that mean we accept all who want to take part, hiring for skill and passion rather than just the right certifications or college degrees. 

About the Author

Vidya Murthy is fascinated by the impact of cybersecurity on the healthcare space.  Beginning her career in consulting, she realized a passion for healthcare and worked for global medical device manufacturer Becton Dickinson. She serves as chief operating officer for MedCrypt, a company focused on bringing cybersecurity leading practices to medical device manufacturers. Murthy holds an MBA from the Wharton School. 
