ChatGPT and cyber security: What AI means for digital security


Johannesburg, 31 Mar 2023
Steve Flynn, Sales and Marketing Director at ESET Southern Africa.

As AI technology like ChatGPT evolves, so do the strategies and tactics used by cyber criminals. Steve Flynn, Sales and Marketing Director at ESET Southern Africa, says ongoing awareness is crucial to understanding and managing the cyber security challenges posed by these developing tools.

As artificial intelligence (AI) technology becomes a new reality for individuals and businesses, its potential impact on cyber security cannot be ignored. OpenAI and its language model, ChatGPT, are no exception and, while these tools offer significant benefits to almost every industry, they also present new challenges for digital security. ChatGPT raises concerns due to its natural language processing capabilities, which could be used to create highly personalised and sophisticated cyber attacks.

The impact of AI on cyber security

  1. The potential for more sophisticated cyber attacks: AI and ChatGPT can be used to develop cyber attacks that are challenging to detect and prevent, as natural language processing capabilities may bypass traditional security measures.
  2. Automated spear phishing: With the ability to generate highly personalised content, AI can be used to send convincing, targeted messages that trick users into revealing sensitive information.
  3. More convincing social engineering attacks: AI and ChatGPT can also be used to create fake social media profiles or chatbots, which can be used to engage in social engineering attacks. These attacks can be difficult to detect, as the chatbots can mimic human behaviour.
  4. Malware development: AI can be used to develop and enhance malware, making it more difficult to detect and remove.
  5. Fake news and propaganda: ChatGPT can be used to generate fake news and propaganda, which can manipulate public opinion and create panic and confusion.

Weapon or tool: it’s in the user’s hands

However, as with any other tool, the use (or misuse) depends on the hand that wields it. Organisations like OpenAI are visibly committed to ensuring their technology is used ethically and responsibly and have implemented safeguards to prevent misuse. Businesses can do the same. To protect their digital assets and people from harm, it is essential to implement strong cyber security measures, and to develop ethical frameworks and regulations to ensure that AI is used for positive purposes and not for malicious activities.

Eight steps organisations can take to enhance safety:

  1. The implementation of multi-factor authentication (MFA): MFA adds an extra layer of security, requiring users to provide multiple forms of identification to access their accounts. This can help prevent unauthorised access, even if a hacker has compromised a user's password (a brief verification sketch follows this list).
  2. Educating users about security dos and don'ts: Continuous awareness training about cyber security best practices, such as avoiding suspicious links, updating software regularly and being wary of unsolicited e-mails or messages, can help prevent people from falling victim to cyber attacks.
  3. Leveraging advanced machine learning algorithms: Machine learning can be used to detect and prevent attacks that leverage OpenAI and ChatGPT by identifying patterns and anomalies that traditional security measures might miss (see the anomaly detection sketch after this list).
  4. Implementing network segmentation: Network segmentation involves dividing a network into smaller, isolated segments, which can help isolate the spread of an attack if one segment is compromised.
  5. Developing ethical frameworks for the use of AI: Developing ethical frameworks and regulations can help ensure that ChatGPT is used for positive purposes and not for malicious activities.
  6. Increasing monitoring and analysis of data: Regular monitoring and analysis of data can help identify potential cyber security threats early and prevent attacks from unfolding.
  7. Establishing automated response systems: These systems can detect and respond to attacks quickly, minimising damage.
  8. Updating security software regularly: Ensuring that security software is up to date can help protect against the latest cyber security threats.
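To illustrate how the second factor in point 1 is typically checked, the short Python sketch below verifies a time-based one-time password (TOTP) after the user's password has already been validated. It uses the open source pyotp library; the in-memory secret and the login flow are simplified placeholders for illustration, not a production design.

    import pyotp

    # One-time enrolment: generate a per-user secret and store it securely
    # (shown here as a plain variable purely for illustration).
    user_secret = pyotp.random_base32()

    def second_factor_ok(otp_code: str) -> bool:
        """Check the one-time code against the current 30-second time window."""
        return pyotp.TOTP(user_secret).verify(otp_code)

    # Called only after the password check has already succeeded.
    if second_factor_ok(input("Enter the code from your authenticator app: ")):
        print("Access granted")
    else:
        print("Access denied")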
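Point 3 can be made concrete with a minimal sketch of unsupervised anomaly detection. The example below trains scikit-learn's IsolationForest on a handful of assumed login-telemetry features (hour of day, failed attempts, megabytes transferred) and flags events that deviate from the learned baseline; a real deployment would use far richer signals and much more data.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Assumed features per login event: [hour_of_day, failed_attempts, mb_transferred]
    baseline_events = np.array([
        [9, 0, 12.0], [10, 1, 8.5], [14, 0, 20.0], [16, 0, 15.2],
        [11, 0, 9.8], [13, 1, 14.1], [15, 0, 11.3], [17, 0, 18.7],
    ])

    # Fit the detector on historical activity that is presumed normal.
    detector = IsolationForest(contamination=0.1, random_state=42)
    detector.fit(baseline_events)

    # predict() returns 1 for points that look normal and -1 for anomalies.
    new_events = np.array([
        [12, 0, 13.5],   # routine daytime login
        [3, 12, 950.0],  # off-hours login, many failures, large transfer
    ])
    for event, label in zip(new_events, detector.predict(new_events)):
        print(event, "anomalous" if label == -1 else "normal")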

Safeguard against misuse

By leveraging the power of AI technology, businesses and individuals can drive innovation and improve productivity and business outcomes with powerful new solutions. However, it is important to balance the potential benefits of AI technology with the potential risks and ensure that AI is used ethically and responsibly. By taking a proactive approach to AI governance, we can help minimise the potential risks associated with AI technology and maximise the benefits for business and humanity. As AI technology evolves, so too must our cyber security strategies.

ESET

For more than 30 years, ESET® has been developing industry-leading IT security software and services to protect businesses, critical infrastructure and consumers worldwide from increasingly sophisticated digital threats. From endpoint and mobile security to endpoint detection and response, encryption and multifactor authentication, ESET’s high-performing, easy-to-use solutions unobtrusively protect and monitor 24/7, updating defences in real-time to keep users safe and businesses running without interruption. Evolving threats require an evolving IT security company that enables the safe use of technology. This is backed by ESET’s R&D centres worldwide, working in support of our shared future. For more information, visit www.eset.com/za or follow us on LinkedIn, Facebook, and Instagram.

Editorial contacts

Aloma Swanepoel
GinjaNinja
(082) 652 3398
aloma@ginjaninja.co.za