Addressing the Cybersecurity Skills Gap

Are More Defined Parameters the Key to Addressing the Cybersecurity Skills Gap? (Security Intelligence)

...the skill sets required tend to be more diverse than those of other IT-related jobs. In addition to tech skills, cybersecurity jobs also require skills that align with liberal arts and humanities fields, such as communications and psychology. This has the potential to open the door to a wide range of candidates.

What’s missing is an accurate job description, said Wesley Simpson, chief operating officer with (ISC)2, during a conversation at the company’s Security Congress in October. Hiring managers who write up job descriptions often don’t have a complete understanding of the actual skill needs for these cybersecurity careers. There is a tendency to become enamored with certifications, which a person often can’t qualify for until they have years of job experience.

However, many of these jobs that “require” certifications are essentially entry-level jobs, so the people who should be applying for them don’t because they don’t carry certifications. On the other hand, people who do apply may be over-qualified and see the position as a lateral move, which could lead them to turn an offer down.

Is an inability to define security the main cause of the cybersecurity skills gap? If we can't truly define what security is, how can organizations design the right cybersecurity jobs for their needs?

As part of the interview team, I sometimes interview individuals with less experience who are clearly enthusiastic about the field. Some end up on my shortlist of recommended hires. Many times, however, the rest of the interview team and the hiring manager want someone with more experience. Everyone wants a unicorn.

How do we fix this?

Machine Learning Threat Taxonomy

Failure Modes in Machine Learning - Security documentation

In the last two years, more than 200 papers have been written on how Machine Learning (ML) can fail because of adversarial attacks on the algorithms and data; this number balloons if we incorporate non-adversarial failure modes. The spate of papers has made it difficult for ML practitioners, let alone engineers, lawyers and policymakers, to keep up with the attacks against and defenses of ML systems. However, as these systems become more pervasive, the need to understand how they fail, whether by the hand of an adversary or due to the inherent design of a system, will only become more pressing. The purpose of this document is to jointly tabulate both of these failure modes in a single place.

Understanding this threat becomes important as more cyber-security functions, especially security operations, become dependent on machine learning algorithms.
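As a minimal illustration of the adversarial failure mode the excerpt describes (not any specific attack from the papers it surveys), the sketch below flips the decision of a toy linear classifier with an FGSM-style input perturbation. All weights and values here are hypothetical, chosen only to make the effect visible:

```python
import numpy as np

# Toy linear classifier: predict class 1 when w.x + b > 0.
# Weights and bias are made up for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return 1 if np.dot(w, x) + b > 0 else 0

# A sample the model classifies as class 1 (score = 1.6 > 0).
x = np.array([2.0, 0.5, 1.0])

# FGSM-style perturbation: move each feature a fixed step against
# the class-1 score, along the sign of its gradient (for a linear
# model, the gradient is just w). The step size is exaggerated here.
epsilon = 1.5
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # original prediction: 1
print(predict(x_adv))  # after perturbation: 0 (score = -3.65)
```

The point mirrored from the taxonomy: the model is not "broken" in the ordinary software sense; it behaves exactly as designed, yet a small, targeted change to the input reverses its output, which is why security operations built on ML need to treat inputs as potentially adversarial.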

Are Cryptographers Being Denied Entry into the US?

Why Are Cryptographers Being Denied Entry into the US? (Schneier on Security)

In March, Adi Shamir -- that's the "S" in RSA -- was denied a US visa to attend the RSA Conference. He's Israeli.

This month, British citizen Ross Anderson couldn't attend an awards ceremony in DC because of visa issues. (You can listen to his recorded acceptance speech.) I've heard of two other prominent cryptographers who are in the same boat. Is there some cryptographer blacklist? Is something else going on? A lot of us would like to know.

It certainly seems that way on the surface.