
AI and cyber security

At our recent cyber seminar co-hosted with PR crisis specialists from FGS Global, a panel made up of distinguished professionals including Ryan Rubin, Head of Cyber for EMEA at Ankura; Oliver Sherwood, Managing Director at FGS Global; and our very own Senior Associate Oz Watson, discussed key topics relating to AI, including:

  • the challenges posed by AI to cyber security, including emerging risks such as deepfakes;
  • what organisations should do to fortify cyber defences in light of such threats;
  • the opportunities that AI provides for threat detection and response; and
  • an overview of the current legal and regulatory landscape.

In summary, the key points you should consider are as follows:

What threats do deepfakes pose and what are the possible legal responses?

  • Though deepfakes are known to have been used around election campaigns to influence voting, threat actors have increasingly turned the same technology against organisations. This typically takes the form of impersonating senior individuals, a progression of classic phishing techniques, in which individuals within the organisation are targeted with money transfer or data transfer requests.
  • A recent example involved a payment fraud of USD 15 million. The target was an employee in the finance department of a large company based in Hong Kong. She received a video call appearing to be from the CEO, who requested her assistance with conducting 'due diligence' on the acquisition of a business in another jurisdiction. During the call, the 'CEO' promised that if the deal finalised she would be promoted to MD of the new entity. He said that a law firm conducting the due diligence would be in touch with her to arrange everything, and he asked her to keep the matter confidential and not tell other employees. The firm made contact and, over the following months, submitted invoices for 'work done', which the employee arranged to be paid.
  • This level of sophistication makes identifying deepfakes more difficult for the individuals who are targeted.
  • Tackling these threats is challenging from a legal perspective. Threat actors are typically unknown (or would not respond to legal threats or claims), so pursuing legal recourse against them may be a waste of time.

Does the addition of AI to a cyber-attack change communication with stakeholders?

  • Anything involving deepfakes can be more newsworthy, so mentioning the involvement of deepfakes in comms can backfire and create a bigger story. 
  • The mechanism by which a breach has occurred is irrelevant: a breach is a breach.
  • The involvement of AI in the attack should not affect the way you communicate with stakeholders. 

What can businesses do to mitigate AI threats?

  • Awareness and training for employees is vital. It's all well and good having sophisticated defence mechanisms in place, but if people cannot recognise these threats, those defences can be undermined.
  • Think about utilising AI tools to combat threats. AI can be used to detect attacks, carry out analysis and respond more quickly.
  • However, exercise caution: AI itself can be vulnerable to attack.

What are the issues with acquiring AI services?

  • Procuring an AI service from a third party carries some risk. 
  • Do your due diligence on the product before rolling it out across your organisation.
  • What data will be put in by your employees once rolled out? Where does it end up? Will you be disclosing sensitive data to a third party by using the AI tool?
  • Ensure that staff are fully trained on, and aware of, the best use of the AI tool.

What other AI threats exist?

  • Rogue platforms available on the Dark Web can be used by threat actors to write malware or develop other attack methods.

Can AI play a role in managing communications with stakeholders after an attack?

  • There is definitely a role for AI in crisis communications.  
  • AI is good for analysing social media and media coverage during a crisis. This can help you shape your communications strategy. 
  • AI can also help to identify the spread of misinformation so that you know how to tackle it.
  • However, AI is not an alternative for human judgment and empathy, and should not be used to draft crisis communications!

Tags

technology media & communications, data protection & cyber, artificial intelligence & machine learning, cyber security & data breaches