Human-targeted cyber threats exploit how humans perceive and interact with systems to bypass security defenses, rather than targeting technical vulnerabilities alone. Many attacks rely on social engineering, shaping users' trust, attention, and decision-making to achieve their goals. This includes a wide range of phenomena such as phishing, scams, misinformation/disinformation, and other forms of online manipulation. Recent advances in AI have made these threats more scalable and more convincing, while also changing how systems are built and used. At the same time, understanding human behavior has become increasingly important for designing effective protections.
We welcome submissions that develop practical approaches to improve system robustness and help users interact with AI-driven systems more safely (e.g., system design, organizational practices, governance and policy), as well as studies that evaluate and mitigate cyber threats from a human perspective. We also welcome Systematization of Knowledge (SoK) papers and other empirical research related to the topics below.
Important Dates
Submission deadline: June 25, 2026 11:59PM AOE
Notification: July 21, 2026 11:59PM AOE
Camera-ready deadline: July 29, 2026 11:59PM AOE
Topics of Interest (but not limited to)
Understanding, Measuring, and Characterizing Human-Targeted Cyber Threats
Human-subjects studies (e.g., surveys) on online fraud, scams, phishing, misinformation/disinformation, harassment, and online abuse
Measurement studies that yield new insights into Human-Targeted Cyber Threats (e.g., bottlenecks)
Analysis of attack infrastructure (e.g., phishing kit ecosystems)
AI-driven generation of human-targeted attacks
Emerging human-centric threats that exploit psychological triggers (e.g., urgency, fear, curiosity)
Studies identifying gaps between existing defenses and real-world threats
Governance, policy, and ethical challenges in human-centric cybersecurity
Countermeasures to Mitigate Human-Targeted Cyber Threats
AI-powered defense mechanisms against human-targeted attacks
Machine Learning or other advanced techniques for detecting and mitigating human-targeted threats (e.g., phishing detectors)
Human factors in the design, usability and effectiveness of defense mechanisms
Security and privacy in human-centric systems
Adversarial robustness of defense mechanisms
Security education and training
Submission Guidelines
Submissions must not substantially overlap with previously published papers or with works that are simultaneously submitted to a journal or a conference/workshop with proceedings.
Submission. Please submit your papers via EasyChair.
Format. Papers must be written in English, submitted as a single PDF file, anonymized for double-blind review, and must follow the official LNCS template.
Length. Long papers are limited to 16 pages and short papers to 8 pages, excluding references and appendices. Note that reviewers are not required to read the appendices.
Publication & Presentation. Accepted papers will be published by Springer in the LNCS collection. At least one author of each accepted paper will be required to register for the workshop and present the work orally or as a poster.
Open Science Expectations
We encourage authors to release the code, data, and other materials needed to reproduce their work on a public platform (e.g., GitHub or Zenodo) under an open-source license. However, we acknowledge that sharing is sometimes not possible, for example when the work involves malware samples, human-subjects data that must be protected, or proprietary data obtained under an agreement that precludes publication. In those cases, authors should provide a clear explanation of why the data cannot be released in the "Open Science" appendix (which is not subject to the page limit at submission time).
Use of AI
The use of AI-generated content, including but not limited to text, figures, images, and code, must be disclosed in the acknowledgements section, which does not count toward the page limit at the time of submission. The use of AI tools solely for language editing or grammar improvement is considered common practice and is not covered by this policy. In such cases, disclosure is not required.
Downloads
Download the workshop poster: Poster image.
General Chairs
Ying Yuan, Örebro University, Sweden
Eugenio Nemmi, Sapienza University of Rome, Italy
PC Chairs
Ying Yuan, Örebro University, Sweden
Eugenio Nemmi, Sapienza University of Rome, Italy
Qingying Hao, ShanghaiTech University, China
Program Committee
Giovanni Apruzzese, Reykjavik University, Iceland
Alessandro Brighente, University of Padua, Italy
Mauro Conti, University of Padua, Italy & Örebro University, Sweden
Federico Cernera, Sapienza University of Rome, Italy
Zilong Lin, University of Missouri-Kansas City, USA
Ruofan Liu, National University of Singapore, Singapore
Luigi V. Mancini, Sapienza University of Rome, Italy
Alberto Maria Mongardini, Technical University of Denmark, Denmark
Margie Ruffin, Spelman College, USA
Angelo Spognardi, Sapienza University of Rome, Italy
Zhibo (Eric) Sun, Drexel University, USA
Francesco Sassi, Sapienza University of Rome, Italy
HumSec 2026 will be a half-day, in-person workshop featuring paper presentations, posters, and related activities. We look forward to receiving contributions from academia, industry, and government that share cutting-edge ideas on human-centric cyber threats. For sponsorship inquiries, please contact: ying.yuan@oru.se or eugenio.nemmi@uniroma1.it.
For questions, please contact: ying.yuan@oru.se and eugenio.nemmi@uniroma1.it