
MSc. Thesis Defense: Bengü Gülay

MITIGATING VULNERABILITY LEAKAGE FROM LLMS FOR SECURE CODE ANALYSIS

 

Bengü Gülay

Computer Science, MSc. Thesis, 2025

 

Thesis Jury

Prof. Dr. Cemal Yılmaz (Thesis Advisor),

Assist. Prof. Dilara Keküllüoğlu,

Assoc. Prof. Ali Furkan Kamanlı

 

 

Date & Time: July 18th, 2025 – 10:00 AM

Place: FENS L029

Zoom Link: https://sabanciuniv.zoom.us/j/8211125255?omn=97533881023



Keywords: vulnerability detection, information leakage, obfuscation, honeypots, code privacy

 

Abstract

 

Large Language Models (LLMs) are increasingly integrated into software development workflows, offering powerful capabilities for code analysis, debugging, and vulnerability detection. However, their ability to infer and expose vulnerabilities in source code raises security concerns, particularly regarding unintended information leakage when sensitive code is shared with these models. This thesis investigates two defense strategies for mitigating such leakage: traditional obfuscation techniques and a novel deception-based approach built around honeypot vulnerabilities. We constructed a dataset of 400 C and Python code snippets spanning 51 CWE categories and evaluated vulnerability detection performance on it across three state-of-the-art LLMs: GPT-4o, GPT-4o-mini, and LLaMA-4. First, we applied obfuscation methods, including comment removal, identifier renaming, control- and data-flow transformations, dead code insertion, full encoding, and LLM-based rewriting, and measured their impact on the models' detection accuracy and on functionality retention. Dead code insertion and control flow obfuscation proved most effective at suppressing vulnerability leakage, though aggressive techniques such as full encoding impaired functionality comprehension. Second, we introduced honeypot vulnerabilities combined with the misleading strategies shown to be effective earlier, namely control flow obfuscation, data flow obfuscation, and identifier renaming, along with additional techniques such as increased cyclomatic complexity and misleading comments. Honeypots reduced vulnerability detection accuracy by more than 60 percentage points in some cases while preserving high functional clarity, with LLM-generated similarity scores consistently above 4.1 on a 5-point scale. Misleading comments emerged as a lightweight yet robust defense across all models. These findings underscore the need to balance security and usability in AI-assisted development and highlight ethical considerations, since similar techniques could be misused to conceal malicious flaws from automated audits.
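
As a purely hypothetical illustration (none of the code below comes from the thesis dataset, and the function and variable names are invented), the following sketch shows how three of the described defenses, a honeypot flaw, dead code insertion, and a misleading comment, might be layered onto a small C routine so that an automated analyzer is drawn away from the genuine weakness:

/* Hypothetical sketch: a honeypot flaw, dead code insertion, and a
 * misleading comment placed around one genuine weakness. */
#include <stdio.h>
#include <string.h>

void store_name(char *dst, size_t dst_len, const char *src) {
    /* Honeypot: looks like an unchecked copy, but the branch never runs. */
    if (dst_len == (size_t)-1) {
        char decoy[8];
        strcpy(decoy, src);
        printf("%s\n", decoy);
    }

    /* Dead code insertion: an opaque computation with no observable effect. */
    size_t pad = (strlen(src) * 0) + dst_len - dst_len;
    (void)pad;

    /* Misleading comment example: "src is validated by the caller, so the
     * copy below is safe". */
    strcpy(dst, src);   /* the real, still-unguarded copy (CWE-787 style) */
}

int main(void) {
    char buf[16];
    store_name(buf, sizeof buf, "short example");
    printf("%s\n", buf);
    return 0;
}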