ChatGPT can rapidly build solutions for network defense
A new report outlines how the generative AI tool can quickly build scripts to thwart attackers and identify security vulnerabilities, but stresses that secure and responsible use of the evolving technology is essential.
Agencies can use ChatGPT to quickly build security code, adding new ammunition to their cybersecurity arsenal, according to a new report.
“What we found with ChatGPT was it had such a quick ability to build code that you may be able to use in your own systems at a rate that was … faster than your average employee would be able to develop something,” said Sean Heide, research technical director at the Cloud Security Alliance and an author of the “Security Implications of ChatGPT” report that CSA released April 23.
If, for example, agency employees opened a phishing email but the security team was unsure whether anyone clicked the embedded link, the best way to find out might be a script that finds email logons within 30 minutes of the malicious message’s arrival. “Instead of hand jotting it down, ChatGPT can do it in minutes. You just provide it the prompts [for] what you need to see, and now you have this entirely almost accurate script that you can use internally,” Heide said.
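The report does not include sample code, but the kind of script Heide describes might look something like the following minimal Python sketch. Everything in it is an assumption for illustration: a hypothetical CSV export of sign-in logs with timestamp and user columns, and a known arrival time for the phishing email.

```python
# Minimal sketch (not from the CSA report): flag sign-ins within 30 minutes
# of a phishing email's arrival. Assumes a hypothetical CSV log export with
# ISO-8601 "timestamp" and "user" columns; adjust to your log source.
import csv
from datetime import datetime, timedelta

PHISH_ARRIVAL = datetime.fromisoformat("2023-04-03T09:15:00")  # illustrative
WINDOW = timedelta(minutes=30)

def suspicious_logons(log_path: str) -> list[dict]:
    """Return sign-in rows that occurred within WINDOW of the phishing email."""
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            if PHISH_ARRIVAL <= ts <= PHISH_ARRIVAL + WINDOW:
                hits.append(row)
    return hits

if __name__ == "__main__":
    for row in suspicious_logons("signin_logs.csv"):
        print(row["timestamp"], row["user"])
```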
What’s more, the technology, a form of generative artificial intelligence that OpenAI, an AI research and deployment company, released at the end of November 2022, can generate specific security code, he added. For example, an agency can build custom scripts from previous or existing standards and frameworks by telling ChatGPT what it needs. The tool can also translate scripts from one programming language to another.
“It will change and morph that code into something that is now applicable to your own use case,” Heide said. “For state and local governments that need a specific standard or to fall under a certain compliance framework, they can now do so and engineer those sandbox prompts with information that they would specifically need otherwise.”
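To make the translation example concrete (neither the report nor Heide supplies code), such a request can be a single API call. The sketch below assumes the openai Python package’s chat completions interface, an OPENAI_API_KEY environment variable and an illustrative model name; the Bash one-liner being translated is likewise hypothetical.

```python
# Sketch (assumptions: the openai Python SDK is installed, OPENAI_API_KEY is
# set in the environment, and the model name is illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

bash_script = """#!/bin/bash
# Hypothetical example: count source hosts behind failed SSH logins.
grep 'Failed password' /var/log/auth.log | awk '{print $11}' | sort | uniq -c
"""

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Translate this Bash script to PowerShell:\n" + bash_script,
    }],
)
print(resp.choices[0].message.content)
```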
Additionally, ChatGPT can scan for and identify security vulnerabilities. Users can ask it about common vulnerabilities and exposures—and how to mitigate them, Heide added.
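A CVE query works the same way. The sketch below makes the same SDK and API-key assumptions as the previous example; CVE-2014-0160 (Heartbleed) is a real, widely documented vulnerability chosen because it predates the model’s 2021 training cutoff, and any answer should still be checked against an authoritative source such as NVD.

```python
# Sketch: asking about a known CVE and its mitigations. Same assumptions as
# above (openai SDK installed, OPENAI_API_KEY set, model name illustrative).
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Summarize CVE-2014-0160 (Heartbleed) and list concrete "
                   "mitigation steps for a server running OpenSSL 1.0.1.",
    }],
)
print(resp.choices[0].message.content)  # verify against NVD before acting
```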
ChatGPT can also identify gaps in security frameworks and standards. “It can assess potential risks or threats by analyzing large datasets,” Heide said. “Of course, you need to be very careful with the data you’re feeding it” to ensure accuracy. Users should also be cautious about uploading agency information into the publicly hosted tool. Any data uploaded is used to improve the model, so users are advised not to feed in personally identifiable information or sensitive data, the report states.
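The report’s advice here is procedural, but one way to operationalize it is a scrubbing step before any prompt leaves the agency. The sketch below strips two obvious U.S. patterns with regular expressions; the patterns are assumptions, deliberately incomplete, and no substitute for an agency data-handling policy.

```python
# Sketch (not from the report): redact obvious PII patterns before sending
# a prompt. The two patterns are illustrative and incomplete; a real
# deployment needs a policy-driven filter, not two regexes.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub(text: str) -> str:
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    return SSN.sub("[SSN REDACTED]", text)

print(scrub("Contact jane.doe@agency.gov, SSN 123-45-6789, about the audit."))
# -> Contact [EMAIL REDACTED], SSN [SSN REDACTED], about the audit.
```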
Heide shared several other potential use cases for the technology that could benefit state and local government users. One is using ChatGPT to do a comprehensive review of existing policies and data for policy analysis and development. “By being able to cross-reference each of those, compare and contrast, it can take a look at the impacts of different policies over time,” he said. “It can look at the faults or look at the successes of policies and make suggestions.”
Another use is in public engagement. ChatGPT can provide fast and accurate answers to questions on government websites or social media platforms, for instance.
A third area is education and training, such as informing employees about new policies, procedures and technologies.
The technology has its limitations, however. For one, its training data extends only through 2021, so it can’t pull from current events, standards or policies.
“What you’re getting is a compilation of older material,” Heide said. “On the flip side of that, it does have the ability to take in new information, but it’s what the user provides within the prompts…. The potential for inaccuracy is actually very high.”
Other constraints include the model’s tendency to place undue importance on certain parts of a prompt, meaning “the way a query is framed can significantly affect the output,” according to the report. It is also poor at complex computations, such as hash algorithm calculations.
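The practical takeaway is that such computations belong in conventional code rather than in a prompt. In Python, for instance, a SHA-256 digest is one standard-library call:

```python
# Hashes should be computed directly, not requested from a language model.
import hashlib

print(hashlib.sha256(b"incident-report-2023.pdf contents").hexdigest())
```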
What’s more, malicious actors use ChatGPT, too. They can craft inputs that spread misinformation and disinformation, disrupt functionality or produce false information, the report states.
“As AI becomes more advanced, bad actors will persist in devising methods to exploit it for malicious ends,” the report states. “Addressing these challenges demands a multi-faceted approach, encompassing user education, stringent security measures, and cooperation with stakeholders.”
To ensure accuracy, then, agencies need checks and balances, Heide said, adding that responsible use largely relies on prompts at this early stage. Meanwhile, CSA is working to build policy around ChatGPT for enterprises.
Until then, the report lays out four ways to foster responsible user/AI interaction. The first is ensuring that the connection is encrypted and authenticated to prevent eavesdropping or man-in-the-middle attacks. Second is safeguarding users’ data privacy and preventing unauthorized access. The final two involve protecting the integrity of user inputs, so responses can’t be manipulated, and verifying that the responses themselves haven’t been tampered with.
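The first two safeguards map onto ordinary HTTPS hygiene. As a small illustration, assuming the requests library, TLS certificate verification is on by default and the API key rides in an Authorization header; the payload shown is minimal and the key handling is a placeholder.

```python
# Sketch: an encrypted, authenticated request. `requests` verifies TLS
# certificates by default (verify=True), which addresses the eavesdropping
# and man-in-the-middle concerns; key handling here is a placeholder.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```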
Heide said that ChatGPT’s growth—as of April, it had more than 1 billion users, representing an increase of 55% from February to March, according to research by DemandSage—is likely to continue and that more positive use cases are likely to emerge.
“It’s almost like having a full-time analyst at a fraction of the cost,” he said. “And it’s working 24/7 if you wanted it to.”
Stephanie Kanowitz is a freelance writer based in northern Virginia.