Mathematics and Natural Sciences
Wei Jin, PhD
ASSISTANT PROFESSOR, EMORY COLLEGE OF ARTS AND SCIENCES, COMPUTER SCIENCE
Toward Next-Generation Graph Anomaly Detection: Characterization, Robustness, and Contextual Integration
Anomalies in graph-structured data pose critical challenges to modern industries and public sectors, impacting areas such as finance, cybersecurity, and healthcare. For instance, undetected anomalies contribute to billions in annual losses due to fraud, increased vulnerabilities in national infrastructure, and delayed responses to health crises. The ability to detect these anomalies with precision and reliability is essential for protecting economic stability, ensuring the security of critical infrastructure, and safeguarding public health. This proposal addresses the limitations of current graph anomaly detection (GAD) techniques by introducing an advanced, robust, and contextually rich detection framework to meet the demands of real-world applications. Our proposal aims to provide a comprehensive solution by focusing on three key objectives. First, we will develop a systematic characterization of graph anomalies at the node, edge, and subgraph levels, which will inform advanced detection tools. Second, we will rigorously assess the robustness of existing detectors and develop detection models resilient to various perturbations through a dual-sharpness optimization strategy. Lastly, by integrating external knowledge sources such as Large Language Models, we will enable context-aware detection, improving accuracy for context-sensitive anomalies that existing models often overlook.
The proposed framework has transformative potential to enable proactive anomaly detection across critical domains. This research promises to advance GAD capabilities to meet the demands of dynamic, high-stakes environments. The success of this project will help address societal imperatives such as strengthening cybersecurity, reducing financial fraud, and enhancing public health monitoring through a next-generation GAD system that is both technically rigorous and socially impactful.
Kai Shu, PhD
ASSISTANT PROFESSOR, EMORY COLLEGE OF ARTS AND SCIENCES, COMPUTER SCIENCE
Towards Reliable Authorship Attribution in the Era of LLMs: Assessment and Enhancement
Authorship attribution has long been an important problem and is widely used in applications pertaining to web security and integrity, such as misinformation intervention, fraud tracking, plagiarism detection, and forensics. The advent of Large Language Models (LLMs) has brought two major challenges for authorship attribution models: (1) LLMs can be used to conceal the authorship of texts and thus fool Human-Authorship (HA) Attribution Models; and (2) LLMs can generate human-like texts that are hard for Neural-Authorship (NA) Attribution Models to distinguish from human-written texts. Malicious users could exploit LLMs to produce harmful content online without the risk of being identified. Thus, we study two pressing tasks to assess and enhance the reliability of HA and NA authorship attribution models, respectively, in the era of LLMs.