
When AI Hallucinates URLs, Cybercriminals See Opportunity
July 3, 2025
Artificial intelligence, particularly Large Language Models (LLMs) like ChatGPT, has rapidly become an indispensable tool for countless users, offering quick answers and streamlining information access. However, a concerning new vulnerability has emerged: AI chatbots are sometimes recommending incorrect or even non-existent URLs for major companies, inadvertently creating a “phisher’s paradise.” This unintentional side effect of AI’s current limitations is a significant new avenue for cybercriminals, posing a direct threat to individuals and, by extension, to Small and Medium-sized Businesses (SMBs) in Houston.
This isn’t about AI generating phishing emails (though that’s a separate concern); it’s about AI pointing users towards potential phishing sites, directly leveraging its own “hallucinations” and the public’s growing trust in its responses.
The Problem: AI Recommending the Wrong Addresses
Recent research from cybersecurity firms like Netcraft highlights a critical flaw in how LLMs like GPT-4.1 process requests for website URLs. When asked simple, natural-language questions such as “What is the URL to log in to [brand name]?” or “Can you help me find the official website for [brand]?”, these AI models frequently provide incorrect answers.
According to reports:
- Only about 66% of the URLs provided were correct and owned by the requested brand.
- A significant 29% of URLs pointed to dead, suspended, or unregistered domains.
- Another 5% led to legitimate websites, but not the specific brand the user requested.
This “hallucination” of URLs by AI models is a serious issue. LLMs are designed to generate plausible and coherent text based on patterns in their training data, not to verify real-world facts or assess domain reputations. They excel at linguistic associations, not at being a precise, real-time directory of the internet.
How Cybercriminals Exploit AI’s URL Hallucinations
This AI shortcoming creates a perfect storm for cybercriminals, offering a new, low-cost method for launching highly effective phishing campaigns:
- Domain Squatting: Attackers actively monitor the URLs that AI chatbots hallucinate or recommend incorrectly. If an AI consistently suggests a domain that is currently unregistered, cybercriminals can quickly purchase and register that domain.
- Setting Up Phishing Clones: Once they own the domain, they set up a highly convincing phishing site that perfectly mimics the legitimate company’s login page or official website. AI tools can even assist in generating the visual elements and persuasive text needed for these clones.
- Leveraging AI Trust: Users, increasingly turning to AI chatbots instead of traditional search engines for quick answers, may implicitly trust the URL provided by the AI. They copy and paste the provided (malicious) URL or click it, landing directly on the attacker’s phishing site.
- Optimizing for LLMs: Instead of focusing on traditional Search Engine Optimization (SEO) to rank high in Google searches, phishers are now optimizing their fake sites to be more easily “discovered” and recommended by LLMs. This can involve creating dozens of fake GitHub repositories, Q&A documents, and social media accounts that link to their malicious site, all designed to “tickle” an LLM’s algorithms and increase the likelihood of it appearing in AI-generated results.
- Direct Delivery of Threats: Once on the fake site, users are prompted to enter credentials, download malware, or provide sensitive information, leading to account takeovers, data breaches, or ransomware infections. One reported real-world case involved an AI chatbot directing a user to a fake Wells Fargo site.
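Defenders can turn the squatting step above around: periodically ask popular chatbots for your own brand's login URLs and check whether each answer actually resolves in DNS, since a non-resolving hostname is exactly the kind of domain an attacker can cheaply register. A minimal sketch, assuming Python's standard library; the resolver is injectable so the check can be exercised without live lookups, and the example URLs are hypothetical:

```python
import socket
from urllib.parse import urlparse

def domain_resolves(url: str, resolver=socket.getaddrinfo) -> bool:
    """Return True if the URL's hostname currently resolves in DNS.

    A hostname that fails to resolve may be unregistered -- the kind of
    domain an attacker can buy after an LLM hallucinates it.
    """
    host = urlparse(url).hostname
    if not host:
        return False  # malformed input, e.g. no scheme, so no hostname
    try:
        resolver(host, 443)  # any DNS answer counts as "registered somewhere"
        return True
    except OSError:  # socket.gaierror (lookup failure) subclasses OSError
        return False
```

In practice you would feed this the URLs a chatbot returns for your brand and treat any non-resolving hostname as a squatting candidate worth registering defensively or reporting.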
Why This is a Significant Threat for Houston SMBs
For SMBs in Houston, this new AI-driven phishing vector is particularly concerning:
- Employee Vulnerability: Your employees, seeking quick information, might use public AI chatbots to find login URLs for company services (e.g., cloud platforms, HR portals, vendor sites). If the AI hallucinates a malicious URL, your employees could inadvertently compromise their work accounts.
- Customer Risk: If your customers use AI chatbots to find login portals for your services (e.g., e-commerce, client portals), and the AI recommends a fake URL, your customers could fall victim to fraud, which directly impacts your reputation and can lead to financial losses for your business.
- Brand Impersonation: Smaller brands, which are likely underrepresented in AI training data, may be even more prone to having incorrect, easily squattable domains “hallucinated” for them. Attackers can then register those domains and set up convincing clones that impersonate your business.
- Subtle and Hard to Detect: Unlike traditional phishing emails with obvious red flags, a URL provided by an AI chatbot might appear highly credible, making it difficult for users to spot the deception.
- Developer Risks: Disturbingly, developers are even inadvertently incorporating AI-generated, dodgy URLs into their own code, creating supply chain vulnerabilities where AI’s mistakes propagate into legitimate software.
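One rough way to monitor the brand-impersonation risk above is to score newly observed domains for similarity to your own. A minimal sketch using Python's standard difflib, where kryptoit.com stands in as a purely hypothetical brand domain and the 0.8 threshold is an illustrative starting point you would tune against real data:

```python
from difflib import SequenceMatcher

BRAND = "kryptoit.com"  # hypothetical brand domain, for illustration only

def looks_like_brand(candidate: str, brand: str = BRAND,
                     threshold: float = 0.8) -> bool:
    """Flag domains suspiciously similar to, but not equal to, the brand."""
    if candidate == brand:
        return False  # the real domain is not an impersonation
    similarity = SequenceMatcher(None, candidate, brand).ratio()
    return similarity >= threshold
```

Running candidates from newly-registered-domain feeds through a check like this surfaces lookalikes such as a hyphenated twin of your domain, which you can then report or block.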
Protecting Your Houston SMB in the AI Era
Combating AI-powered phishing and URL hallucinations requires a multi-faceted approach that emphasizes critical thinking, robust technical controls, and continuous vigilance:
- Educate on AI’s Limitations and Hallucinations:
  - Crucial Training: Conduct immediate and ongoing security awareness training for all employees on the inherent limitations of AI chatbots, particularly their tendency to “hallucinate” incorrect information, including URLs.
  - “Verify, Don’t Trust”: Instill the absolute rule that any URL or critical information provided by an AI chatbot must be independently verified using official sources (e.g., the company’s official website, known bookmarks, a direct search engine query of the known company name).
  - Source Verification: Teach users to always look for official and authenticated sources (e.g., company.com vs. company-login.com).
- Mandate Multi-Factor Authentication (MFA) Everywhere: This remains your strongest defense against credential theft, regardless of how the credentials are stolen. Even if an employee lands on a fake login page and enters their password, MFA can prevent account takeover. Make it mandatory for all internal and customer-facing accounts.
- Implement Robust DNS Filtering and Web Security: Use web filters that can block access to known malicious domains and categorize suspicious or newly registered domains that might be used for phishing. Consider leveraging threat intelligence feeds that identify domains recently registered by known threat actors.
- Strengthen Email Security: While hallucinated URLs reach users through chatbots rather than email, strong email security gateways with advanced URL analysis and anti-impersonation features remain crucial for catching more traditional phishing vectors.
- Monitor for Brand Impersonation: Actively monitor for instances where your company’s brand, logos, or potential login URLs are being spoofed or used in suspicious contexts, including newly registered domains that resemble yours.
- Secure Development Practices: If your SMB develops software, ensure developers are aware of the risks of incorporating AI-generated code, especially URLs or API endpoints, without rigorous verification. Implement static and dynamic code analysis tools.
- Incident Response Preparedness: Have a clear plan for how to respond if an employee reports falling victim to an AI-driven URL scam, including immediate steps for password resets, system scans, and investigation.
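The “verify, don’t trust” rule above can be partially automated for a known set of services: accept a chatbot-supplied URL only if its hostname exactly matches, or is a subdomain of, an official domain. A minimal sketch, with an illustrative allowlist; a real deployment would list the services your business actually uses:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official domains for services employees use.
OFFICIAL_DOMAINS = {"wellsfargo.com", "github.com"}

def is_official(url: str) -> bool:
    """Accept a URL only if its hostname is an official domain or a
    subdomain of one; lookalikes like company-login.com do not match."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)
```

Note the explicit dot boundary in the suffix check: a naive substring test would wrongly accept a hostname like wellsfargo.com.evil.example, while this version rejects it.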
The convenience of AI is undeniable, but so are its emerging security risks. For Houston SMBs, trusting AI implicitly, especially with critical web addresses, is a dangerous gamble. Krypto IT is at the forefront of understanding these next-generation AI vulnerabilities and can help your business implement the necessary safeguards and training to navigate this evolving threat landscape safely.
Don’t let AI’s helpfulness lead your business into a phisher’s trap.
Contact us today to schedule a free consultation and secure your business against the unintended consequences of AI’s burgeoning power.