Dec 9, 2022
OpenAI's ChatGPT is turning lots of heads on the internet. A technologist considers the implications of this type of AI on cybersecurity. And makes it talk like a pirate.
OpenAI's ChatGPT is turning lots of heads on the internet. It grew to a million users in five days, and people are already putting it to all sorts of uses, from writing gear reviews to sales emails to movie scripts. Pundits are widely proclaiming the academic essay finished. According to headlines, it’s both smarter and dumber than we think.
Like many people interested in technology, I’ve recently wondered about the broader implications of this type of AI.
Before we dive into that question, what is ChatGPT? Simply put, it is a program from the artificial intelligence research lab OpenAI, built on an AI language model and trained using a technique known as Reinforcement Learning from Human Feedback (RLHF). The training drew on a large set of text data, including books, articles, and other web content. It focuses on providing conversational interaction via a simple chatbot interface.
To converse with ChatGPT is to suddenly have a friend who can read and remember every book in your library, knows every language – including every programming language – and can even help you debug code that’s already been written. What would you ask such a friend?
Oh, and your friend is happy to respond like a pirate, or in the form of a screenplay using characters from any TV or movie you could imagine. Why would you search for answers on Google when you could just ask your friend? (Here, I’m not the only one wondering.)
Since it has read nearly every book and its breadth of knowledge is encyclopedic, I asked ChatGPT to explain the first and second laws of thermodynamics in a manner a sixth-grader would understand, and to use humor doing it. It did an outstanding job.
It doesn't just give answers, either. Many people ask it to write a short story or a scene from a movie. It's happy to provide an essay on just about any topic you can imagine, in whatever style you specify. It also writes programs on request. Endless examples of ChatGPT's amazing output are appearing on the internet. It's similar to the release of DALL-E and the splash of AI-generated art earlier this year, only much more profound.
To keep up, Google has been refining its ability to provide specific answers to questions. You can see this when searching for specific answers on the web or talking to its assistant, but it hasn't achieved the same level of conversational and programming capability as ChatGPT. Search and language AI models are not the same thing.
Of course, ChatGPT does have limitations. It isn't looking things up on the internet; it only knows things it has already digested. Its training data ends in 2021, and if you ask it what time it is, it doesn't know. These limitations could be removed. Imagine the combination of a ChatGPT-style AI with a Google search tool and a continuous learning environment in which new data constantly refines the model and makes it smarter.
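To make that idea concrete, here's a rough sketch in Python of how a search tool could feed fresh context to a language model. The functions `search_web` and `generate_answer` are made-up stand-ins for illustration, not any real OpenAI or Google API.

```python
# Hypothetical sketch: augmenting a language model with live search results.
# search_web() and generate_answer() are illustrative stubs, not real APIs.

def search_web(query: str) -> list[str]:
    """Stand-in search API: return text snippets for a query."""
    # A real system would call a search engine's API here.
    return ["snippet one about the query", "snippet two about the query"]

def generate_answer(prompt: str) -> str:
    """Stand-in language model call: return a completion for the prompt."""
    # A real system would call the model's completion endpoint here.
    return "model-generated answer based on the prompt"

def answer_with_fresh_context(question: str) -> str:
    # Retrieve current information the model's training data lacks...
    snippets = search_web(question)
    # ...then hand it to the model as context alongside the question.
    prompt = "Context:\n" + "\n".join(snippets)
    prompt += f"\n\nQuestion: {question}\nAnswer:"
    return generate_answer(prompt)

print(answer_with_fresh_context("What time is it in New York right now?"))
```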
Pandora’s box for AI-enabled cybercrime?
The ability to hold a conversation with an AI bot, or to have it write your college essay, sounds interesting, but what about the security implications? ChatGPT can write programs, including malware. If it were able to learn about zero-day exploits from a companion web search tool, we would have the makings of a dangerous program.
I also see an unsettling potential for social engineering attacks. The bot insists it won’t conduct a spear phishing attack (as any good cybercriminal would). It refused to explain why I should buy it a Target gift card, and asking for an email in the style of a Nigerian prince scam was a violation of OpenAI’s content policy. (It did it anyway.) It also eagerly answered a prompt to write a generic phishing email – complete with the placeholders [insert phishing link here] and [your bank name].
Of course, it could also learn to write code that avoids vulnerabilities. ChatGPT's programming ability is basic today, but it is learning, and automation built on tools like ChatGPT could quickly escalate the cyberwar we all fight today.
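To show what "code that avoids vulnerabilities" means in practice, here's a small example of my own (not ChatGPT output) contrasting a SQL-injectable query with a parameterized one in Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable: attacker-controlled input is spliced into the SQL string.
rows = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()
print("vulnerable query returned:", rows)  # leaks the admin row

# Safe: a parameterized query treats the input as data, not as SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # returns nothing
```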
Whether it’s generating malicious scripts or crafting more believable pretexts for phishing attacks, the concern is not so much about what ChatGPT is capable of in its current form, but rather what the technology behind it could be hijacked to do. ChatGPT is not designed to generate malicious scripts, but I don't look forward to the day someone trains a model on all of the exploit code ever written.
For now, ChatGPT is (I think) admirably open about what it is and isn’t well suited for. But, as someone who writes about technology and the future, I can’t help but think about a time when that’s no longer the case.
In the meantime, I recommend you continue to double down on the security basics and implement a zero trust network design. And, in case you were curious, here's what ChatGPT had to say when I asked it to explain zero trust security in the voice of a pirate.
Arrr, matey! Zero trust security be a way of protectin' yer sensitive information from scurvy cyber-thieves. It be based on the idea that ye can't trust anyone, even if they be part of yer own crew. All users and devices must be authenticated and authorized before they be granted access to yer treasure chest of data. And even then, they only be allowed to access what they be needin', not the whole booty. It be a smart way to keep yer information safe, even in the treacherous waters of the internet. Yarr!
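Pirate brogue aside, that answer captures the essentials, and the idea fits in a few lines of code. Here's a minimal sketch of a per-request zero trust check in Python; the token table and permissions are hypothetical examples, not a real product's API. Every request is authenticated, then authorized against only the specific resource it asks for.

```python
# Minimal zero trust sketch: authenticate and authorize every request,
# and grant access only to the specific resource requested (least privilege).
# The tokens and permissions below are hypothetical examples.

TOKENS = {"token-alice": "alice", "token-bob": "bob"}           # who you are
PERMISSIONS = {"alice": {"treasure-map"}, "bob": {"ship-log"}}  # what you may touch

def handle_request(token: str, resource: str) -> str:
    user = TOKENS.get(token)
    if user is None:  # authenticate: no valid identity, no entry
        return "401 Unauthorized"
    if resource not in PERMISSIONS.get(user, set()):  # authorize per resource
        return "403 Forbidden"  # even crew members get least privilege
    return f"200 OK: {user} may read {resource}"

print(handle_request("token-alice", "treasure-map"))  # 200 OK
print(handle_request("token-alice", "ship-log"))      # 403: not the whole booty
print(handle_request("token-mallory", "ship-log"))    # 401: not on the crew
```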
Yarr! Stay safe out there.