Effects of Phishing Attacks on Large Language Models

Authors

Altunay, H. C.

DOI:

https://doi.org/10.5281/zenodo.18213660

Keywords:

LLM-based threat mitigation, Phishing attacks, AI-enabled phishing

Abstract

The widespread adoption of large language models (LLMs) has raised new security challenges and ethical concerns that have attracted significant academic and societal attention. An analysis of LLM vulnerabilities and their misuse in cybercrime shows that the advanced text generation capabilities of these models pose serious threats to personal privacy, data security, and information integrity. The effectiveness of existing LLM-based defense strategies is also examined and evaluated. This study discusses the social implications of phishing attacks involving LLMs and suggests directions for strengthening security practices and ethical governance, aiming to inform the future development of the field.

References

Afane, K., Wei, W., Mao, Y., Farooq, J., & Chen, J. (2024). Next-generation phishing: How LLM agents empower cyber attackers. In 2024 IEEE International Conference on Big Data (Big Data) (pp. 2558–2567). IEEE. https://doi.org/10.1109/BigData62323.2024.10825316

Alabdan, R. (2020). Phishing attacks survey: Types, vectors, and technical approaches. Future Internet, 12(10), 168. https://doi.org/10.3390/fi12100168

Altunay, H. C. (2024a). Detection of SQL Injection attacks using machine learning algorithms based on NLP-based feature extraction. In 9th International Conference on Computer Science and Engineering (UBMK) (pp. 468–472). IEEE. https://doi.org/10.1109/UBMK62933.2024.10756715

Altunay, H. C. (2024b). Analysis of cyber attacks using honeypot. Black Sea Journal of Engineering and Science, 7(5), 954–959. https://doi.org/10.52704/bsscience.1481075

Altunay, H. C., & Albayrak, Z. (2024). SMS spam detection system based on deep learning architectures for Turkish and English messages. Applied Sciences, 14(24), 11804. https://doi.org/10.3390/app142411804

Annepaka, Y., & Pakray, P. (2025). Large language models: A survey of their development, capabilities, and applications. Knowledge and Information Systems, 67(3), 2967–3022. https://doi.org/10.1007/s10115-024-02264-0

Bethany, M., Galiopoulos, A., Bethany, E., Karkevandi, M. B., Beebe, N., Vishwamitra, N., & Najafirad, P. (2025). Lateral phishing with large language models: A large organization comparative study. IEEE Access, 13, 60684–60701. https://doi.org/10.1109/ACCESS.2025.3526685

Bossetta, M. (2018). The weaponization of social media: Spear phishing and cyberattacks on democracy. Journal of International Affairs, 71(2), 97–106.

Boyd, S. W., & Keromytis, A. D. (2004). SQLrand: Preventing SQL injection attacks. In Applied Cryptography and Network Security (ACNS 2004) (pp. 292–302). Springer. https://doi.org/10.1007/978-3-540-24852-1_23

Das, B. C., Amini, M. H., & Wu, Y. (2025). Security and privacy challenges of large language models: A survey. ACM Computing Surveys, 57(6), 1–39. https://doi.org/10.1145/3690650

Geren, C., Board, A., Dagher, G. G., Andersen, T., & Zhuang, J. (2025). Blockchain for large language model security and safety: A holistic survey. ACM SIGKDD Explorations Newsletter, 26(2), 1–20. https://doi.org/10.1145/3706497.3706500

Hadnagy, C. (2010). Social engineering: The art of human hacking. John Wiley & Sons.

Khonji, M., Iraqi, Y., & Jones, A. (2013). Phishing detection: A literature survey. IEEE Communications Surveys & Tutorials, 15(4), 2091–2121. https://doi.org/10.1109/SURV.2013.032213.00001

Kulkarni, A., Balachandran, V., Divakaran, D. M., & Das, T. (2024). From ML to LLM: Evaluating the robustness of phishing webpage detection models against adversarial attacks. Digital Threats: Research and Practice, 6(2), 1–25. https://doi.org/10.1145/3696455

Quinn, T., & Thompson, O. (2024). Applying large language model (LLM) for developing cybersecurity policies to counteract spear phishing attacks on senior corporate managers [Preprint]. Research Square. https://doi.org/10.21203/rs.3.rs-4405206/v1

Clarke, J. (2009). SQL injection attacks and defense. Syngress (Elsevier).

Wang, S., Zhao, Y., Hou, X., & Wang, H. (2025). Large language model supply chain: A research agenda. ACM Transactions on Software Engineering and Methodology, 34(5), 1–46. https://doi.org/10.1145/3702995

Xiao, X., Zhang, Y., Xu, J., Ren, W., & Zhang, J. (2025). Assessment methods and protection strategies for data leakage risks in large language models. Journal of Industrial Engineering and Applied Science, 3(2), 6–15.

Zheng, J., Qiu, S., Shi, C., & Ma, Q. (2025). Towards lifelong learning of large language models: A survey. ACM Computing Surveys, 57(8), 1–35. https://doi.org/10.1145/3703155

Zheng, X., Pang, T., Du, C., Liu, Q., Jiang, J., & Lin, M. (2024). Improved few-shot jailbreaking can circumvent aligned language models and their defenses. Advances in Neural Information Processing Systems (NeurIPS), 37, 32856–32887.

Zhu, X., Zhou, W., Han, Q. L., Ma, W., Wen, S., & Xiang, Y. (2025). When software security meets large language models: A survey. IEEE/CAA Journal of Automatica Sinica, 12(2), 317–334. https://doi.org/10.1109/JAS.2024.410762

Published

2025-06-15

How to Cite

Altunay, H. C. (2025). Effects of phishing attacks on large language models. Black Sea Journal of Artificial Intelligence, 1(1), 11–14. https://doi.org/10.5281/zenodo.18213660

Issue

Vol. 1, No. 1 (2025)

Section

Original Research Article