June 26, 2022

Volume XII, Number 177


June 24, 2022


Privacy Tip #333 – Chatbots Used to Steal Credentials

I am not a huge fan of using chatbots, as I never end up getting my questions fully answered. I get the efficiency of using a chatbot for simple questions, but my questions are usually not so easily resolved, so I end up completely frustrated with the process and trying to find a human being to help. This happens a lot with my internet service provider. I start with the chatbot, don’t get very far and then yell, “Can’t you just let me talk to someone who can fix my problem?”

At any rate, it seems that lots of people use chatbots and are quite comfortable giving chatbots all sorts of information. That is probably not a great idea, judging by a summary of research done by Trustwave.

Bleeping Computer obtained research from Trustwave before publication which shows that threat actors are deploying phishing attacks “using automated chatbots to guide visitors through the process of handing over their login credentials to threat actors.” Using a chatbot “gives a sense of legitimacy to visitors of the malicious sites, as chatbots are commonly found on websites for legitimate brands.”

According to Bleeping Computer, the process begins with a phishing email claiming to have information about the delivery of a package (it’s an old trick that still works) from a well-known delivery company. After clicking on “Please follow our instructions” to figure out why your package can’t be delivered, the victim is directed to a PDF file that contains links to a malicious phishing site. When the page loads, a chatbot appears to explain why the package couldn’t be delivered – the explanation usually being that the label was damaged – and shows the victim a picture of the parcel. Then the chatbot requests that the victim provide their personal information and confirms the scheduled delivery of the package.

The victim is then directed to a phishing page where the victim enters account credentials to pay for the shipping, including credit card information. The threat actors lend legitimacy to the process by sending a one-time password via SMS to the victim’s mobile phone number (which the victim gave the chatbot), so the victim believes the transaction is legitimate.

The moral of this story: continue to be suspicious of any emails, texts, or telephone calls (phishing, smishing, and vishing) – and now chatbots – asking for your personal or financial information.

Copyright © 2022 Robinson & Cole LLP. All rights reserved. National Law Review, Volume XII, Number 146

About this Author

Linn F. Freedman, Robinson Cole Law Firm, Cybersecurity and Litigation Law Attorney, Providence
Partner

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She provides guidance on data privacy and cybersecurity compliance to a full range of public and private clients across all industries, such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine, and charitable organizations. Linn is a member of the firm's Business Litigation Group and chairs its Data Privacy + Cybersecurity Team. She is also a member of the Financial Services Cyber-Compliance Team (CyFi ...

401-709-3353