A curated list of academic papers on LLM security applications. Papers are sorted by conference name and publication year.
Developers and researchers are welcome to add more published papers to this list.
The cryptocurrency donation address: 0xCC28B05fE858CDbc8692E3272A4451111bDCf700.
Welcome to visit my homepage and Google Scholar.
| Security & Crypto | Networking & Database | Software Engineering & Programming Language | Machine Learning |
|---|---|---|---|
| IEEE S&P | SIGMETRICS | ICSE | AAAI |
| ACM CCS | ICDE | ESEC/FSE | ACL |
| USENIX Security | VLDB | ASE | ICML |
| NDSS | ACM SIGMOD | ACM PLDI | NeurIPS |
| IEEE DSN | IEEE INFOCOM | ACM OOPSLA | |
| SRDS | IMC | ISSTA | |
| RAID | WWW | ACM POPL | |
| | | CAV | |
Large Language Models for Blockchain Security: A Systematic Literature Review.
A survey on large language model (llm) security and privacy: The good, the bad, and the ugly.
Large language models for software engineering: A systematic literature review.
Securing Large Language Models: Threats, Vulnerabilities and Responsible Practices.
Unveiling security, privacy, and ethical concerns of chatgpt.
Combing for Credentials: Active Pattern Extraction from Smart Reply.
DrSec: Flexible Distributed Representations for Efficient Endpoint Security.
Moderating New Waves of Online Hate with Chain-of-Thought Reasoning in Large Language Models.
TROJANPUZZLE: Covertly Poisoning Code-Suggestion Models.
Transferable Multimodal Attack on Vision-Language Pre-Training Models.
SMARTINV: Multimodal Learning for Smart Contract Invariant Inference.
LLMIF: Augmented Large Language Model for Fuzzing IoT Devices.
Examining zero-shot vulnerability repair with large language models.
Analyzing Leakage of Personally Identifiable Information in Language Models.
Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions.
Spinning language models: Risks of propaganda-as-a-service and countermeasures.
Privacy risks of general-purpose language models.
Stealing the Decoding Algorithms of Language Models.
Large Language Models for Code: Security Hardening and Adversarial Testing.
Protecting intellectual property of large language model-based code generation apis via watermarks.
Dp-forward: Fine-tuning and inference on language models with differential privacy in forward pass.
Lost at C: A user study on the security implications of large language model code assistants.
CodexLeaks: Privacy Leaks from Code Generation Language Models in GitHub Copilot.
Two-in-One: A Model Hijacking Attack Against Text Generation Models.
Extracting Training Data from Large Language Models.
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion.
LMSanitator: Defending Prompt-Tuning Against Task-Agnostic Backdoors.
MASTERKEY: Automated Jailbreaking of Large Language Model Chatbots.
DeGPT: Optimizing Decompiler Output with LLM.
DEMASQ: Unmasking the ChatGPT Wordsmith.
Large Language Model guided Protocol Fuzzing.
Facilitating Threat Modeling by Leveraging Large Language Models.
Enhancing Static Analysis for Practical Bug Detection: An LLM-Integrated Approach.
PyDex: Repairing Bugs in Introductory Python Assignments using LLMs.
Large Language Models are Edge-Case Fuzzers: Testing Deep Learning Libraries via FuzzGPT.
Fuzz4All: Universal Fuzzing with Large Language Models.
LLMParser: An Exploratory Study on Using Large Language Models for Log Parsing.
Exploring the Potential of ChatGPT in Automated Code Refinement: An Empirical Study.
UniLog: Automatic Logging via LLM and In-Context Learning.
Prompting Is All You Need: Automated Android Bug Replay with Large Language Models.
Large Language Models for Test-Free Fault Localization.
Large language models are few-shot testers: Exploring llm-based general bug reproduction.
GPTScan: Detecting Logic Vulnerabilities in Smart Contracts by Combining GPT with Program Analysis.
Automated Program Repair in the Era of Large Pre-trained Language Models.
Does data sampling improve deep learning-based vulnerability detection? Yeas! and Nays!.
An Empirical Study of Deep Learning Models for Vulnerability Detection.
RepresentThemAll: A Universal Learning Representation of Bug Reports.
Contrabert: Enhancing code pre-trained models via contrastive learning.
On the robustness of code generation techniques: An empirical study on github copilot.
Two sides of the same coin: Exploiting the impact of identifiers in neural code comprehension.
Automated repair of programs from large language models.
Cctest: Testing and repairing code completion systems.
CodaMosa: Escaping Coverage Plateaus in Test Generation with Pre-trained Large Language Models.
Impact of Code Language Models on Automated Program Repair.
ReCode: Robustness Evaluation of Code Generation Models.
Better Patching Using LLM Prompting, via Self-Consistency.
Towards Autonomous Testing Agents via Conversational Large Language Models.
Let's Chat to Find the APIs: Connecting Human, LLM and Knowledge Graph through AI Chain.
Log Parsing: How Far Can ChatGPT Go?.
Robust Learning of Deep Predictive Models from Noisy and Imbalanced Software Engineering Datasets.
How Effective Are Neural Networks for Fixing Security Vulnerabilities.
InferFix: End-to-End Program Repair with LLMs.
Getting pwn'd by ai: Penetration testing with large language models.
Llm-based code generation method for golang compiler testing.
Assisting static analysis with large language models: A chatgpt experiment.
Assess and Summarize: Improve Outage Understanding with Large Language Models.
Generating realistic vulnerabilities via neural code editing: an empirical study.
You see what I want you to see: poisoning vulnerabilities in neural code search.
Vulnerability detection with fine-grained interpretations.
Not the end of story: An evaluation of chatgpt-driven vulnerability description mappings.
Understanding Programs by Exploiting (Fuzzing) Test Cases.
Backdooring Neural Code Search.
Membership inference attacks against language models via neighbourhood comparison.
Knowledge unlearning for mitigating privacy risks in language models.
Contamination attacks and mitigation in multi-party machine learning.
Adversarial Robustness of Deep Code Comment Generation.
Bag of tricks for training data extraction from language models.
Deduplicating training data mitigates privacy risks in language models.
Recovering private text in federated learning of language models.
ZipZap: Efficient Training of Language Models for Large-Scale Fraud Detection on Blockchain.
Coprotector: Protect open-source code against unauthorized training usage with data poisoning.
(Security) Assertions by Large Language Models.
A Performance-Sensitive Malware Detection System Using Deep Learning on Mobile Devices.
PrivacyAsst: Safeguarding User Privacy in Tool-Using Large Language Model Agents.
CD-VulD: Cross-Domain Vulnerability Discovery Based on Deep Domain Adaptation.
Software Testing with Large Language Models: Survey, Landscape, and Vision.
An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation.
Deep Learning Based Vulnerability Detection: Are We There Yet?.
On the Value of Oversampling for Deep Learning in Software Defect Prediction.
Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains.
LLM4Fuzz: Guided Fuzzing of Smart Contracts with Large Language Models.
CHEMFUZZ: Large Language Models-assisted Fuzzing for Quantum Chemistry Software Bug Detection.
Attack Prompt Generation for Red Teaming and Defending Large Language Models.