ChatGPT is one of the hottest tech topics in 2023. There are plenty of people who welcome it, while others are hesitant. Some even believe it could become sentient, as conversations about AI always tend to venture there. Its use cases have primarily been in customer support, creating content, and generating code. However, other applications, not part of its initial design, are evolving. Several of those impact the cybersecurity ecosystem, including using it as a solution for incident response, payloads for penetration testing, compliance and security documentation, and anti-phishing, to name a few.
The question is: can it be effective in triage? Should it be? How will it impact your human workers? Let's dive into these questions and more.
What Is ChatGPT?
First, let's talk about what this technology is. ChatGPT is an AI chatbot developed by OpenAI. It was initially developed for use in online customer service. It's a generative, pretrained chat model that leverages natural language processing (NLP). Its "training" materials include all types of available data, from websites to articles to textbooks, and consist of both biased and unbiased data. As a result, its responses are much more humanlike than those of traditional chatbots.
Like any technology, ChatGPT has both positives and negatives, and each has implications for cybersecurity.
ChatGPT in Cybersecurity Incident Response: Friend or Foe?
ChatGPT has grown beyond its original use cases. Users can prompt it for more specific tasks that could be helpful to your team, and it can generate incident reports on its own. How accurate are they?
Here are some highlights from a recent experiment using ChatGPT for incident response.
ChatGPT Attempts to Deliver an Incident Report
The platform used in the experiment already captured forensic data related to an incident, including elements like root causes, compromised systems and roles, and other important information. In a traditional workflow, a human would look at the events around the incident and analyze the situation. But could ChatGPT do this on its own? Or at least augment human analysis?
After inputting some prompts, ChatGPT delivered a triage report that appeared legitimate. Digging deeper into the storyline, however, revealed inaccuracies. The team running the experiment noted that it added some things that weren't true, stating that it can "hallucinate" and does so with confidence. The scenario also raised privacy concerns: because ChatGPT is API-driven, data must pass through a third party.
The team tried a second test with less information provided. This time the output was more accurate and helpful. These demonstrations show the tool's potential to augment human analysis and accelerate response time, but ChatGPT likely has a way to go before it can make a meaningful impact.
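A workflow like the one described above could be sketched in a few lines. This is a hypothetical illustration, not the experiment's actual code: the function names, event fields, and use of the OpenAI Python client are all assumptions for this example.

```python
# Hypothetical sketch: turning captured forensic events into a triage
# report via a chat model. Names and fields here are illustrative only.
import json

def build_triage_prompt(events: list[dict]) -> str:
    """Turn structured forensic events into a detailed, specific prompt.

    Vague prompts invite hallucination; including concrete fields such as
    timestamps and hostnames, and telling the model not to guess, helps
    keep the output grounded in the supplied evidence.
    """
    lines = [json.dumps(e, sort_keys=True) for e in events]
    return (
        "You are assisting with incident triage. Using ONLY the events "
        "below, write a short report covering: suspected root cause, "
        "compromised systems, and recommended next steps. If a detail is "
        "not present in the events, say 'unknown' rather than guessing.\n\n"
        + "\n".join(lines)
    )

def summarize_incident(events: list[dict], client) -> str:
    """Send the prompt to a chat model (assumes an OpenAI-style client;
    note that the data leaves your environment, which is the privacy
    concern mentioned above)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": build_triage_prompt(events)}],
    )
    return response.choices[0].message.content

events = [
    {"time": "2023-03-01T10:02Z", "host": "web-01",
     "event": "suspicious PowerShell spawned by IIS worker process"},
]
print(build_triage_prompt(events))
```

The useful part is the prompt discipline: feeding the model structured evidence and an explicit "say unknown" instruction, which mirrors the experiment's finding that detail-poor prompts produce confident fabrications.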
Another possible cybersecurity use case is identifying malicious hashes and domains.
What Does ChatGPT Know About Well-Known Threats?
Could ChatGPT be an effective tool for identifying known threats? Some of the content poured into its training model would be threat research. In testing, the chatbot did identify many known adversary tools. However, it missed several, including WannaCry. For the tools it recognized, it produced a list of associated malicious hashes and domains.
A second part of this assessment was determining whether ChatGPT could recognize mimicked versions of well-known websites. The result was not a complete success, and more research is needed. What the team did conclude was that ChatGPT performed well on host-based artifacts. The next step was asking ChatGPT to write code to extract metadata from a test system to see if it could find indicators of compromise (IoCs).
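The kind of host-based IoC check described above can be illustrated with a minimal sketch: hash files on disk and compare them against a known-bad list. The hash set below is a placeholder (the SHA-256 of empty content), not real threat intelligence, and the function names are this example's own.

```python
# Minimal illustration of host-based IoC matching: hash files under a
# directory and flag any whose SHA-256 appears in a known-bad list.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    # Placeholder standing in for a real threat-intel feed; this value
    # is the SHA-256 of empty content, used here only for demonstration.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_for_iocs(root: Path) -> list[Path]:
    """Return files under `root` whose SHA-256 is in the bad list."""
    return [p for p in root.rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256]
```

A plain lookup like this is deterministic, which is exactly what the experiment found the chatbot is not: the value of asking a model is triage context around the match, not the match itself.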
The final conclusions for the experiments yielded these learnings:
- ChatGPT was able to identify and characterize some IoCs.
- It was able to detect code obfuscation.
- There were false positives and negatives.
- Asking the right question matters. If it's a basic question without details, the answers won't be accurate.
Let’s look at other key areas where ChatGPT could positively influence cybersecurity.
Payloads for Penetration Testing
Penetration testing is one of the most critical parts of a cybersecurity program; ethical hacking makes you aware of vulnerabilities and weaknesses. ChatGPT is trainable on datasets of payloads used in penetration testing, and it can also generate new ones, which could be beneficial in testing.
Compliance and Security Documentation
Most organizations have to adhere to compliance and security regulations and requirements. Along with this comes the need for documentation, which you may have to update often. It's an area your technical folks may loathe, so ChatGPT could take on these tasks.
Anti-Phishing

ChatGPT can train on existing phishing messages. From its learnings, it could develop new sets that are very convincing. You could use these for testing and training employees. It's another task area where your team would likely welcome the help.
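Generating phishing-simulation content usually starts with a structured prompt. The sketch below is one hypothetical way to build such a prompt for awareness training; the function name and parameters are assumptions for this example, not a documented workflow.

```python
# Illustrative sketch: building a prompt that asks a chat model for a
# *simulated* phishing email for security-awareness training, seeded
# with deliberate red flags trainers can debrief on afterward.
def build_phishing_sim_prompt(theme: str, red_flags: list[str]) -> str:
    """Compose a training-email prompt with teachable red flags
    (e.g., urgency, lookalike sender domains) spelled out up front."""
    flags = "; ".join(red_flags)
    return (
        f"Write a simulated phishing email for employee training on the "
        f"theme '{theme}'. Include these detectable red flags so trainers "
        f"can debrief on them: {flags}. Label the output 'SIMULATION'."
    )

print(build_phishing_sim_prompt(
    "password reset", ["urgent deadline", "lookalike sender domain"]))
```

Requiring an explicit SIMULATION label and enumerating the red flags keeps the exercise useful for training rather than just producing a convincing lure.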
Overall, the possibilities are growing, but we can’t forget that hackers could also use this tool.
ChatGPT and AI-Powered Attacks
We've talked about the pros and cons of AI in cybersecurity. As with anything, there's always a dark side, and ChatGPT may become a favorite for hackers. They're doing the same things as the good guys, using it to create polymorphic malware, which mutates to avoid detection.
Even though ChatGPT's code-writing skills are average, the program's ability to evolve is concerning. It will improve, and cybercriminals will be there to exploit it, as they are with any opportunity. ChatGPT could also act as an automation facilitator for specialized attacks like phishing. The convincing emails it could create for you can also be what hackers use in their next attempt.
So, what does all this mean for your cybersecurity team? Will it be a tool they want to adopt? What pushback should you foresee?
ChatGPT and Your Cybersecurity Team
As I stated earlier, your technical folks will respond differently to this tool. Some may see it as something that could benefit their efforts and remove manual work they don't want to do. Others will dismiss it immediately. They're certainly welcome to their opinions and to bring their questions and concerns to the table.
Introducing anything new into your environment means change, and technical people aren't big fans of it. So the conflict is less about whether the chatbot can bring value and more about wanting things to stay the same. And that's the real problem, one you face daily as a cyber leader. You know all too well that cybersecurity is dynamic and constantly changing, so it's ironic how many people who fear change end up on cybersecurity staff.
Most of your people are bright and have excellent technical skills and aptitude. What they tend to lack are soft skills: being adaptive, flexible, open-minded, and communicative. Yet those are the very things that make the human component so valuable in cybersecurity.
ChatGPT will never have these human traits, but it could be an excellent resource for multiple areas. I suggest having open conversations about the possibilities and asking people to give their feedback. You may want to put some parameters around this to avoid posturing, or all you’ll hear are fears. This could be a great topic to apply some aspects of Secure Methodology™, a seven-step guide and framework for developing technical people into better communicators and collaborators.
Here are some ideas:
- Changing perspective: In step one, Awareness, the methodology explores the notion of broadening perspective from one that’s self-centered to more inclusive. With ChatGPT, your team could ponder how such a tool would fit into their current mindset and how it might expand it. It’s a lot about reframing. ChatGPT would change their existing workflows in some ways, and they can embrace it instead of only viewing it as an unwelcome disruptor.
- Communication: Could ChatGPT help your employees become better communicators? Its conversational qualities are supposed to be its best attributes. If ChatGPT takes a role in documenting or delivering reporting, could they learn something from it? If they struggle with these tasks, it could ease that stress and free them to add their own analysis.
- Monotasking: This step in the Secure Methodology is about focusing on one task at a time, which is challenging in cybersecurity. If ChatGPT could take over some of this work, they could have more time to focus on more meaningful things in a dedicated manner. In presenting the idea, ask them what they think ChatGPT could do for them. Get them thinking about the possibilities.
The story of ChatGPT and cybersecurity is just beginning. We’ll likely see more applications and refinement this year and beyond. If it’s on your radar, you have a sense of what it could do and tips on adopting it into your organization. It will bring about change, and you can work through it and any other transition with the Secure Methodology. Check out Christian Espinosa’s book, The Smartest Person in the Room, and the class on it to learn more.