Depending on where you stand, artificial intelligence (AI) might either be the great equalizer of the Veterinary industry—or one of the most divisive elements to disrupt the industry in decades. From diagnostics to voice dictation to SOAP notes, there isn’t an aspect of Vet Med that is not currently sitting in the shadow of AI, and its benefits and potential drawbacks must be considered at every turn to ensure this technology is used ethically, legally, and with an eye towards true patient care.

This same nuanced understanding of AI must apply to the technology infrastructure that serves as the foundation for every Veterinary hospital, especially to the ways this infrastructure is both made more vulnerable AND protected by the rising sophistication of AI tools. Simply put, AI is both a blessing and a curse when it comes to cybersecurity. As generative AI tools like Bard or ChatGPT have become more commonplace, threat actors have used these and similar tools to enhance the authenticity of their social engineering tactics, making businesses of every size more susceptible to ransomware and data breaches. At the same time, AI tools are being used by cybersecurity professionals to identify and defend against a broader array of digital threats.

How Can AI Be Used by Cyber Criminals?

Social engineering is a method by which cyber criminals try to deceive or manipulate their target to gain access to personal information, financial data, or an entire network. While there are many attack vectors for social engineering, one of the most prevalent in the Veterinary community is the phishing email, which tries to trick the recipient into clicking a link that deploys malicious software. In the past, these attacks were easy to spot: they were rife with spelling and grammar mistakes, contained obvious alterations to URLs (such as an email from Netfl1x instead of Netflix asking you to reset a password), or were simply unrelated to the target’s work functions.

Enter AI. Cyber criminals have used generative AI tools to refine their language, creating phishing emails that “feel” as though they were written by a real human. Moreover, these phishing attempts are tailored to situations where clinic staff are most likely to let their guard down and click a link they would otherwise ignore. One such technique is to send phishing links disguised as FedEx or UPS tracking confirmations, specifically targeting clinic owners, hospital managers, and accounting staff who often receive several legitimate tracking numbers per week. Another technique, unfortunately popular during the 2024 tax season, was the phishing email posing as a tax preparation software company and asking accountants and hospital managers to “update their software.”

Scary. And even scarier is the fact that these tools allow cyber criminals to scale their operations, since AI can automate steps that were previously done manually. Even worse, machine learning is especially good at identifying trends: techniques that fail get discarded or refined, while successful techniques are iterated upon. This means cyber threats will only become more difficult to identify as time passes, making ransomware and data breaches increasingly likely for many would-be victims.

And these aren’t the only ways the bad guys are using AI to target Veterinary hospitals:

  • Deep Fake Videos—These videos use machine learning to “swap” faces onto existing or digitally generated footage, creating a convincing fake. Even the most security-conscious employee might be tempted to let their guard down and click a link if it arrives alongside a manufactured video featuring the image of their boss, a coworker, or someone else they trust.
  • Voice Emulation—Machine learning can be applied to replicate the vocal tone and mannerisms of a speaker, especially if ample recordings of that speaker’s voice exist. Sometimes this can feel harmless or even fun; there is currently a swarm of fake songs on YouTube being passed off as the work of popular artists, which is legally and ethically sketchy but doesn’t impact the Veterinary industry directly. More concerning is when scammers pose as Veterinary business owners and ask staff to wire money, send cryptocurrency, or purchase gift cards and read the numbers and PINs back to the scammer. The FTC has warned that these telephone-oriented attack delivery (TOAD) phishing tactics are on the rise.
  • Thread Hijacking—Message boards are a popular way for hospital managers to share business tips and veterinarians to share treatment approaches with peers. Cyber criminals have realized this and have begun utilizing compromised business accounts to steer conversations towards infected URLs or downloads. Proofpoint, a reputable email security firm, has tracked more than 90 million such malicious messages in the past five years alone.
  • Business Email Compromises—As in the example above, AI tools are being paired with compromised email accounts to lure victims into a false sense of security—after all, why would my friend or colleague send me a malicious link? Often, this method is used either to deploy ransomware on a network or, when the compromised account belongs to a vendor, to trick the victim into installing a Remote Access Tool that opens a portal directly into the victim’s network.

How Can AI Be Used to Protect You?

It is perhaps unfair that I lead with the risks and concerns caused by AI, but a healthy dose of fear and paranoia can keep one’s defenses sharp. That said, AI can also be deployed to identify the techniques above with higher accuracy than the human eye can. With proper safeguards, you can “fight fire with fire” in your Veterinary hospital, using powerful AI tools to protect you against AI-based phishing tactics.

The tools you can use include (but are nowhere near limited to) the following:

  • Anti-Phishing Tools—These tools use AI to inspect messages, looking specifically for suspicious misalignment in content, metadata, and originating email addresses or URLs, and can reduce the human error involved in identifying phishing attempts (a simplified sketch of one such check appears after this list).
  • Isolating Attacks—AI can block suspicious messages before they are ever delivered, and when applied across an entire domain, can protect every user on the network from mass-targeted phishing scams.
  • 24/7 Security Operations Center (SOC)—A Security Operations Center combines human monitoring and AI detection tools to protect a Veterinary hospital’s network around the clock. If a breach does happen, the AI tools can flag suspicious behavior (such as movement or installation of files at the server/root level or lateral movement between devices on a network), and can then isolate potentially infected machines until a robust review can be done to ensure there is no infection.
  • Post-Event—If an attack does occur, AI tools deployed at the network level can help notify administrators of the defensive actions taken, generate logs, and restore systems from backup to reduce clinic downtime, rescheduled appointments, and lost revenue.
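
To make the “suspicious misalignment” idea above a little more concrete, here is a minimal, simplified sketch in Python of one signal an anti-phishing filter might look at: comparing a sender’s domain against a short list of domains the clinic actually does business with and flagging near-misses such as “netfl1x.com.” The domain list and similarity threshold are hypothetical, and real commercial filters weigh dozens of signals like this one alongside message content, metadata, and sender reputation rather than relying on any single check.

```python
from difflib import SequenceMatcher

# Hypothetical list of domains this clinic legitimately receives mail from.
TRUSTED_DOMAINS = ["netflix.com", "fedex.com", "ups.com", "abcvet.com"]

def flag_lookalike(sender_domain, threshold=0.8):
    """Return the trusted domain a sender appears to be imitating, if any.

    A domain that is very similar to, but not exactly, a trusted domain
    (e.g. 'netfl1x.com' vs. 'netflix.com') is the classic lookalike trick
    described earlier in this article.
    """
    sender_domain = sender_domain.lower().strip()
    if sender_domain in TRUSTED_DOMAINS:
        return None  # exact match: the domain itself is not suspicious
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, sender_domain, trusted).ratio()
        if similarity >= threshold:
            return trusted  # close enough to be a deliberate lookalike
    return None

print(flag_lookalike("netfl1x.com"))  # netflix.com (suspicious lookalike)
print(flag_lookalike("netflix.com"))  # None (exact, trusted)
print(flag_lookalike("example.org"))  # None (not similar to anything trusted)
```

This is only one narrow check; it would not catch a phishing email sent from a genuinely compromised account at a trusted domain, which is exactly why the layered tools described in this list matter.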

Even with AI Tools at Your Disposal, User Training Is as Important as Ever

Take a moment and think of your most gullible, technologically impaired co-worker. Without strong cyber defense tools protecting you, your practice is only as safe as that person. Fortunately, raising the awareness and skill level of all staff members makes your practice less likely to be a target. It is highly advised that you build regular, documented training and SOPs around cybersecurity awareness. A study by KnowBe4 found that organizations’ average phish-prone percentage (the percentage of users who fall prey to social engineering scams) drops from 32.4% to 5% after a year of monthly training.

Training/policies you should adopt include the following:

  • Zero-Fault Policies—These policies encourage team members to come forward if they have a question about a link or if they fear they have accidentally clicked something they were not supposed to. Historically, cyber criminals took 4.5 days from initial compromise to ransomware deployment, but that window has dropped dramatically (to under 24 hours) within the past year. Time is of the essence, and you want your team members to feel comfortable raising their hand.
  • Social Engineering—This is training on the variety of tactics that team members may face. Such topics should include:
    • Phishing
    • Telephone-oriented attack delivery (TOAD)
    • Vendor impersonation
    • Gift card scams
    • Seasonal phishing tactics, such as tax season or holiday-centric attacks
  • Security Awareness Training—All staff should receive regular refresher training at least once a quarter, including simulated or blind phishing email audits where possible. Tracking how many staff members click during those simulations (the phish-prone percentage mentioned above) shows whether the training is paying off; a short sketch of that calculation follows this list.
  • Domain-Based Email—The strongest AI defense tools are those that can be deployed across an entire network, which is only possible with domain-based email, such as the accounts provided through Microsoft O365 or Google Workspace. These addresses use a custom business domain after the @, such as @abcvet.com. Individual, “free” mailboxes, such as those offered by Yahoo!, Hotmail, Google, AOL, etc., can only receive box-specific protections, making them a source of vulnerability for Veterinary practices that have not updated their email structure.
  • Multi-Factor Authentication—Veterinary hospitals should train staff in strong password hygiene, including complex, unique passwords. Wherever it is available, multi-factor authentication adds an extra layer of protection, reducing the likelihood that a compromised password turns into a compromised account.
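
As a concrete illustration of the phish-prone percentage mentioned earlier, here is a small sketch of how a practice might track that metric across its own simulated phishing audits. The staff counts and quarterly results below are invented for the example; only the formula (users who clicked divided by users tested) comes from the definition above.

```python
def phish_prone_percentage(clicked, tested):
    """Percentage of tested users who fell for a simulated phishing email."""
    if tested == 0:
        raise ValueError("At least one user must be tested.")
    return 100.0 * clicked / tested

# Hypothetical quarterly audit results for a 20-person practice.
audits = {
    "Q1 (baseline)": phish_prone_percentage(clicked=6, tested=20),  # 30.0%
    "Q2": phish_prone_percentage(clicked=4, tested=20),             # 20.0%
    "Q3": phish_prone_percentage(clicked=2, tested=20),             # 10.0%
    "Q4": phish_prone_percentage(clicked=1, tested=20),             # 5.0%
}

for quarter, rate in audits.items():
    print(f"{quarter}: {rate:.1f}% phish-prone")
```

A downward trend like this one mirrors the improvement reported in the KnowBe4 study and is a simple, board-friendly way to demonstrate that the training budget is working.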

AI tools are a part of the world in which we live. We cannot pretend they don’t exist, no matter how scary some of the threats may feel. Instead, the safest, most successful Veterinary practices are those that learn the risks and learn which AI tools they can use to “fight fire with fire.”

No AI tools were harmed (or used) in the creation of this article.

Resources:

https://www.forbes.com/sites/forbestechcouncil/2023/05/26/how-ai-is-changing-social-engineering-forever/?sh=25ec551e321b
https://krebsonsecurity.com/2024/03/thread-hijacking-phishes-that-prey-on-your-curiosity/
https://www.proofpoint.com/us/corporate-blog/post/five-ways-prevent-social-engineering-attacks
https://consumer.ftc.gov/consumer-alerts/2023/03/scammers-use-ai-enhance-their-family-emergency-schemes
https://www.pipelinepub.com/cybersecurity-assurance-2023/single-sign-on-SSO-security