Thoughts on the Impact of Generative AI on Security Engineering Careers
It will be OK for most people but ya gotta get to learnin'
I know everyone has a hot take or a prediction out there, and I’m writing mine down simply to be able to reference my guess later on. This isn’t an exhaustive list; it’s more of a post-Jiu-Jitsu, morning-coffee reflection on the current state of the job market. This is more for me than it is for you. As such, and like most of my posts, this is an unpolished view into my thought stream, so enjoy the ride. For those who don’t want to read the whole thing (and I don’t blame you), here’s the elevator pitch.
AI is going to dry up the quantity of security engineering roles, especially entry-level ones like SOC analyst, but it won’t remove them entirely. AI will create a new subset of security engineering (the security automation engineer), and AI experience will become a must-have on any Security Engineering resume. Finally, AI will not make important security decisions inside a magic box; it will be guided by humans, even if its accuracy can be proven.
1. AI will reduce Security Engineer headcount requirements (and already has).
Every security team I have worked on is overwhelmed by the quantity of work in front of it. No-fail situations, tight deadlines, tech debt, and the constant pressure of leaders asking “are we secure?” and getting an uncertain answer do not make for a “chill” work environment. Burnout is high, people are stressed, and it’s not getting easier. However, many security engineers are burdened by relatively similar work patterns and constraints: assessing risk in system architectures or designs, writing queries to look for things, creating dashboards, responding to the same alert, tuning that alert, filling out spreadsheets and other data-entry tasks, and reading giant swaths of information to infer risk or determine whether a threat exists.
This reduction will shrink security roles that accomplish low-risk, repetitive tasks.
All of these things take tons of time and mental energy, but generative AI can dramatically assist with making them more efficient, freeing security engineers to offload low-skill tasks and focus on higher-order thinking and, more importantly, on the harder part of the job (the part AI can’t do, by the way): cross-functional collaboration to get shit done. So think of a world where your SOC receives only half the alerts it used to, and as a security engineer you’re sampling instead of triaging every one. Same thing for detection engineering: every alert triaged is feedback into the model, tuning it to your environment. Before everyone says “but Scott, AI hAlLuCiNaTeS,” let me remind you that humans make errors ALL the time too, and humans are REALLY hard to fix. We have egos, we let emotions make our decisions most of the time, we get tired, we get burnt out, we get frustrated, we have families, the coffee is low. All of these things impact our ability to make decisions and learn. AI does not have those same constraints, and it learns REALLY well when you give it feedback.
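To make the “sampling instead of triaging every one” idea concrete, here is a minimal sketch of what that workflow could look like. Everything here is illustrative and hypothetical — the threshold, the sample rate, and the function names are assumptions, not a real SOC tool’s API.

```python
import random

# Hypothetical AI-assisted triage flow: a model scores each alert,
# high-risk alerts go to a human, low-risk alerts are auto-resolved,
# and a human spot-checks a sample of the auto-resolutions so their
# verdicts can feed back into model tuning. Numbers are illustrative.

ESCALATE_THRESHOLD = 0.7   # model risk score at or above this goes to a human
QA_SAMPLE_RATE = 0.10      # fraction of auto-resolved alerts a human reviews

def triage(alerts, score_fn):
    """Split alerts into a human queue, auto-resolved set, and QA samples."""
    human_queue, auto_resolved, qa_samples = [], [], []
    for alert in alerts:
        if score_fn(alert) >= ESCALATE_THRESHOLD:
            human_queue.append(alert)
        else:
            auto_resolved.append(alert)
            if random.random() < QA_SAMPLE_RATE:
                # The human verdict on this sample becomes tuning feedback.
                qa_samples.append(alert)
    return human_queue, auto_resolved, qa_samples
```

The point of the sketch is the shape of the loop, not the numbers: humans see only escalations plus a sample, and every sampled verdict is feedback.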
2. AI experience (using it and protecting it) as a Security Engineer will become a minimum job requirement.
Incorporating AI into your workflow, tuning it to your needs, and monitoring it will become an ESSENTIAL part of being an efficient, modern Security Engineer. Period. There used to be jokes about 10x engineers, and that joke is no longer a joke: you can be a 10x engineer with the help of your AI buddy. If you deny its existence or its impact on the modern world, or say it’s not helpful, you’re just lying to yourself, and the only one hurt by that is you. Sorry for the cold truth there.
While AI will shrink the Security Engineering job market in some roles, it will create new jobs as well.
When I started in security 15+ years ago, the only security engineering role was network security analyst, and your job was to write Snort rules for the IDS and manage the firewall. Then IR came along. Five years later you managed and responded to endpoint logs, then a data lake, then a SIEM, then a SOAR, then case management, then intel, then threat hunting, and the list goes on and on. What was once hard (SOC) became routine, IR took the title of the “hard” job, and SOC became entry level.
Entry level roles will change from SOC analyst to Incident Responder (and other “higher” skilled jobs)
This is the same shift I’m seeing today. Security Engineering will take on “Automation Engineering” as another responsibility, and as efficiencies in the “easier” tasks come online, someone will still need to manage, tune, and automate the AI accomplishing all of these security tasks. Which brings me to my last point.
3. AI will not govern AI, at least not in security.
Look, we’ve had playbooks and the ability to automate many security tasks for well over a decade. Those actions are TIGHTLY bound, prescriptive, and most times unimaginative, e.g., if an IP or domain is on the bad-IOC list, make a firewall rule. HOW MANY OF YOU ARE RUNNING THIS? No one. OK, let’s extrapolate, shall we? The AI determines whether an IOC is bad; if it is, block it. In the previous example you controlled the list, you knew what got into it, and yet you and your executives were terrified of the result. Google DNS accidentally gets added to the bad-IOC list? Congrats, you brought the company to its knees. Security is a no-fail job in a lot of ways; the tolerance for a false negative or a mistake is near-zero. LLMs are inaccurate, to say the least, and that discomfort will likely take at least a decade to emotionally overcome. This is another way of saying: I think AI accuracy will improve dramatically, but people will not be comfortable handing over control of security actions for a long time. So who will govern the AI? Security engineers.
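For clarity, the “tightly bound, prescriptive” playbook described above might look like the sketch below. This is a toy illustration, not a real SOAR or firewall API — every name and indicator here is made up — but it shows why humans stayed nervous even when they controlled the list, and why a guardrail like an allowlist matters.

```python
# Toy version of the prescriptive playbook: block an indicator only if it
# is on the human-controlled bad-IOC list AND not on an explicit allowlist.
# No model judgment anywhere; the logic is fully deterministic.

BAD_IOCS = {"203.0.113.7", "evil.example.com"}   # example indicators only
# Guardrail that would have prevented the Google DNS scenario:
ALLOWLIST = {"8.8.8.8", "8.8.4.4"}

def should_block(indicator: str) -> bool:
    """Prescriptive rule: on the bad list and not explicitly allowlisted."""
    return indicator in BAD_IOCS and indicator not in ALLOWLIST

def playbook(indicator: str) -> str:
    if should_block(indicator):
        return f"firewall: block {indicator}"   # stand-in for a firewall API call
    return f"no action for {indicator}"
```

Even in this fully human-controlled version, one bad entry on the list means an outage unless the allowlist catches it — which is exactly why handing the “is this IOC bad?” decision to a model raises the stakes rather than lowering them.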
