Applied Research
Security research that matters. We publish, we present, we release tools.
Our research program investigates fundamental questions in security. We are not a vendor research team publishing marketing content. We are researchers who publish peer-reviewed work and release open-source tools.
Our team includes researchers with publications in IEEE S&P, USENIX Security, ACM CCS, and NDSS. We maintain active collaborations with academic institutions. We take positions on contested methodological questions.
Critical Evaluation of Security Research
Where we challenge published work based on operational experience
Adversarial ML: Threat Model Reality
We align with critiques arguing that much adversarial ML research assumes unrealistic attacker capabilities. As argued in "Taking off the Rose-Tinted Glasses" (arXiv 2024), published attack scenarios often assume "extremely privileged access" to target models. "You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks" (Bieringer et al., arXiv 2023) makes the compelling point that the security failures required to enable such attacks usually give the attacker simpler options, rendering the adversarial ML attack itself redundant.
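To make the access assumption concrete, here is a minimal sketch of a standard white-box attack, projected gradient descent (PGD), in PyTorch. Every iteration requires gradients of the target model's loss with respect to the input, exactly the privileged access the critique points to. The function name and hyperparameters are illustrative, not taken from any cited paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
    """Illustrative white-box L-inf PGD: each step needs input gradients,
    i.e. full access to the target model's internals."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]             # white-box gradient access
        x_adv = x_adv.detach() + alpha * grad.sign()           # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # keep inputs in valid range
    return x_adv.detach()
```

In most realistic deployments the attacker has, at best, rate-limited query access to an opaque API, which is a very different problem from the one this loop solves.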
Dataset Validity in ML Security
Our operational experience contradicts headline accuracy figures from benchmark evaluations. When models trained on CICIDS or NSL-KDD encounter operationally representative traffic, accuracy typically degrades by 15-40 percentage points. NSL-KDD derives from synthetic 1998 DARPA traffic; CICIDS-2017 contains well-documented feature-extraction errors. We advocate for evaluation methodologies that predict production performance.
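The degradation figures above come from cross-dataset evaluation. A minimal sketch of how such a gap can be measured, assuming a scikit-learn workflow in which benchmark and operational traffic share the same feature schema; the classifier choice and split parameters here are illustrative.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def generalization_gap(X_bench, y_bench, X_ops, y_ops):
    """Contrast held-out benchmark accuracy with accuracy on
    operationally representative traffic with the same features."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_bench, y_bench, test_size=0.3, random_state=0, stratify=y_bench)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    bench_acc = accuracy_score(y_te, clf.predict(X_te))   # optimistic headline figure
    ops_acc = accuracy_score(y_ops, clf.predict(X_ops))   # what production would see
    return bench_acc, ops_acc, bench_acc - ops_acc        # gap in accuracy points
```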
Neural Fuzzing: Reality Check
Research by Nicolae et al. (ESEC/FSE 2023) found that standard gray-box fuzzers "almost always surpass neural program smoothing-based fuzzers despite additional GPU resources." The original neural program smoothing (NPS) performance claims do not hold up: incomplete coverage bitmaps limit what the neural networks can learn. We incorporate ML-guided approaches selectively, where they demonstrate concrete benefits.
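For context, a toy sketch of the NPS idea as we read it from the literature: a surrogate network is trained to map fuzzer inputs to the observed edge-coverage bitmap, and gradients of that surrogate suggest which bytes to mutate. Because the bitmap only records edges already reached, the surrogate cannot model unexplored program behavior. All names and dimensions below are illustrative.

```python
import torch
import torch.nn as nn

class CoverageSurrogate(nn.Module):
    """Toy NPS surrogate: map raw input bytes to a predicted edge-coverage
    bitmap. The bitmap only contains edges the fuzzer has already hit,
    which bounds what the network can learn about the target program."""
    def __init__(self, input_len: int, n_edges: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_len, hidden), nn.ReLU(),
            nn.Linear(hidden, n_edges), nn.Sigmoid(),  # per-edge hit probability
        )

    def forward(self, x_bytes: torch.Tensor) -> torch.Tensor:
        return self.net(x_bytes / 255.0)  # normalise byte values to [0, 1]

def hot_bytes(model: CoverageSurrogate, x: torch.Tensor, edge: int, k: int = 8):
    """Ask the smooth surrogate which input bytes most influence a target
    edge, then return their offsets as mutation candidates."""
    x = x.detach().clone().requires_grad_(True)
    model(x)[edge].backward()
    return torch.topk(x.grad.abs(), k).indices
```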
LLM Security Evaluation Gaps
"LLM Cyber Evaluations Don't Capture Real-World Risk" (arXiv 2025) argues that current evaluations don't model realistic adversary behavior. CTF challenges don't reflect operational complexity. Despite developer reports of threat actors using LLMs, there has been "no drastic increase in phishing attacks."
Current Research Programs
Active areas of investigation
Post-Quantum Implementation Security
Side-channel vulnerabilities in NIST-standardized PQC implementations. We evaluate countermeasures against attacks demonstrated by researchers breaking fifth-order masked implementations with deep learning. Our work informs deployment guidance for hybrid cryptographic systems.
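When we assess masking countermeasures, one standard first pass is non-specific (fixed-vs-random) leakage assessment. Below is a minimal NumPy sketch of the Welch's t-test used in TVLA-style evaluations; the 4.5 threshold is the conventional rule of thumb, and the function name is ours.

```python
import numpy as np

def tvla_t_test(traces_fixed: np.ndarray, traces_random: np.ndarray,
                threshold: float = 4.5) -> np.ndarray:
    """Welch's t-test per sample point between fixed-input and random-input
    power/EM trace sets (shape: n_traces x n_samples). |t| above ~4.5 at
    any point is commonly treated as evidence of first-order leakage."""
    m_f, m_r = traces_fixed.mean(axis=0), traces_random.mean(axis=0)
    v_f, v_r = traces_fixed.var(axis=0, ddof=1), traces_random.var(axis=0, ddof=1)
    n_f, n_r = len(traces_fixed), len(traces_random)
    t = (m_f - m_r) / np.sqrt(v_f / n_f + v_r / n_r)
    return np.flatnonzero(np.abs(t) > threshold)  # indices of leaking sample points
```

A deep-learning attack that defeats high-order masking will not necessarily show up in a first-order t-test, which is why we treat this as a screening step rather than a pass/fail verdict.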
Detection System Robustness
Evaluating ML detection against practical evasion. MasqueradeGAN-GP (Dong et al., 2025) achieved high attack success rates using WGAN-GP for traffic transformation. We develop detection approaches robust against such transformations by identifying artifacts that evasion networks cannot eliminate.
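As a concrete and deliberately simplified example of what we mean by artifacts that evasion networks cannot eliminate: feature-space transformations produced by a generative model can yield flow records that no functional TCP flow could produce. A sketch, assuming a flow feature matrix whose column layout and thresholds we invent here for illustration.

```python
import numpy as np

MIN_TCP_PACKET = 40  # bytes: minimal IPv4 + TCP header, no options or payload

def invariant_violations(flows: np.ndarray) -> np.ndarray:
    """Flag flow records that violate protocol-level invariants a functional
    attack flow cannot escape. Illustrative column layout:
    0 = packet count, 1 = total bytes, 2 = duration in seconds."""
    pkts, total_bytes, duration = flows[:, 0], flows[:, 1], flows[:, 2]
    bad_size = total_bytes < pkts * MIN_TCP_PACKET   # bytes inconsistent with packet count
    bad_time = duration < 0                          # negative duration is impossible
    empty = pkts < 1                                 # a real flow carries at least one packet
    return bad_size | bad_time | empty               # boolean mask over flow records
```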
Threat Intelligence Quality
Ground truth problems in threat attribution. As Dambra et al. (CCS 2023) found, ground truth assembly varies dramatically across studies. AVClass provides probabilistic consensus, not true ground truth—19% of samples remain unlabeled due to generic tokens.
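To illustrate why consensus labeling is not ground truth, here is a minimal plurality-vote sketch in the spirit of AVClass (this is not its actual implementation, and the generic-token list is ours). Samples whose labels reduce entirely to generic tokens come out unlabeled, which is where figures like the 19% come from.

```python
from collections import Counter

GENERIC = {"trojan", "malware", "generic", "agent", "win32", "heur", "variant", "gen"}

def consensus_family(av_labels: dict[str, str]):
    """Plurality vote over normalized AV label tokens (AVClass-style idea,
    not its implementation). Returns None when only generic tokens remain,
    which is how samples end up unlabeled."""
    tokens = []
    for label in av_labels.values():
        tokens += [t for t in label.lower().replace("/", ".").split(".")
                   if t and t not in GENERIC and not t.isdigit()]
    counts = Counter(tokens)
    return counts.most_common(1)[0][0] if counts else None
```

For example, {"engineA": "Trojan.Win32.Emotet", "engineB": "Emotet.gen"} yields "emotet", while a sample labeled only "Trojan.Generic" by every engine yields None.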
Graph Neural Networks for Threat Detection
CyberVeriGNN (Huang & Wang, 2025) combines BERT embeddings, MITRE ATT&CK semantics, and multi-head GAT for detecting forged CTI. We extend graph-based approaches for lateral movement detection and APT campaign attribution.
Security Advisories
Responsible disclosure on a 90-day timeline
Disclosure Process
All vulnerabilities discovered through our research are disclosed responsibly. We work with vendors to ensure patches are available before public disclosure. Each advisory includes a CVE identifier, a technical root-cause analysis, exploitation requirements, and detection recommendations.
Open Source Tools
Tools for the community, maintained actively
Security Tools
We release tools that solve real security problems, and they are used by security teams at organizations of all sizes. Our releases span offensive security tooling, defensive detection tools, forensic analysis utilities, and automation frameworks, all under permissive licensing (MIT/Apache) for maximum adoption.
Publications
Peer-reviewed research at top security venues
Academic Publications
We publish in top-tier academic security venues. Our research undergoes rigorous peer review. Selected recent publications span post-quantum implementation security, detection system robustness, and threat intelligence quality.
Conference Presentations
We present our research at security conferences globally, selecting speaking engagements for technical audiences: Black Hat (USA, Europe, Asia), DEF CON, CCC, and specialized security conferences.
Start a Conversation
Tell us about your security requirements. We respond within 24 hours.