Advancing the field

Applied Research

Security research that matters. We publish, we present, we release tools.

Our research program investigates fundamental questions in security. We are not a vendor research team publishing marketing content. We are researchers who publish peer-reviewed work and release open-source tools.

Our team includes researchers with publications in IEEE S&P, USENIX Security, ACM CCS, and NDSS. We maintain active collaborations with academic institutions. We take positions on contested methodological questions.

Critical Evaluation of Security Research

Where we challenge published work based on operational experience

Adversarial ML: Threat Model Reality

We align with critiques arguing that much adversarial ML research assumes unrealistic attacker capabilities. As argued in "Taking off the Rose-Tinted Glasses" (arXiv 2024), published attack scenarios often assume "extremely privileged access" to target models. The paper "You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks" (Bieringer et al., arXiv 2023) makes the compelling point that the security failures granting attackers that level of access usually enable simpler attacks, rendering the adversarial ML attack itself redundant.

"Taking off the Rose-Tinted Glasses" — Attack scenarios assume extremely privileged access arXiv (2024)
Bieringer, Grosse, Backes et al. "You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks" arXiv (2023)

Dataset Validity in ML Security

Our operational experience contradicts headline accuracy figures from benchmark evaluations. When models trained on CICIDS or NSL-KDD encounter operationally representative traffic, accuracy typically degrades by 15-40 percentage points. NSL-KDD derives from traffic captured in 1998; CICIDS-2017 contains features with documented extraction errors. We advocate evaluation methodologies that actually predict production performance.

Engelen, Rimmer, Joosen et al. "Network Intrusion Detection: A Comprehensive Analysis of CIC-IDS2017" — Documents extraction errors SciTePress (2022)
Journal of Reliable Intelligent Environments (Springer) — "ML-based IDS produce misleadingly high classification scores under closed-world assumption" (2024)
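The closed-world inflation is easy to demonstrate with a toy experiment. The sketch below (synthetic Gaussian features and a nearest-centroid classifier, all illustrative stand-ins, not our production pipeline) trains on two known traffic classes and then evaluates twice: once on traffic drawn only from those classes, and once with a class absent from training, as happens in deployment.

```python
import random

random.seed(7)

def centroid(points):
    return [sum(xs) / len(xs) for xs in zip(*points)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Synthetic stand-ins for benchmark flow features (hypothetical values).
benign = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
attack = [[random.gauss(4, 1), random.gauss(4, 1)] for _ in range(200)]
# A traffic class absent from training, as encountered in production.
novel = [[random.gauss(0, 1), random.gauss(4, 1)] for _ in range(200)]

centroids = {"benign": centroid(benign[:100]), "attack": centroid(attack[:100])}

def classify(x):
    # Closed-world: forced to choose among the classes seen in training.
    return min(centroids, key=lambda c: dist2(x, centroids[c]))

closed = [("benign", classify(x)) for x in benign[100:]] + \
         [("attack", classify(x)) for x in attack[100:]]
opened = closed + [("novel", classify(x)) for x in novel]

closed_acc = sum(t == p for t, p in closed) / len(closed)
open_acc = sum(t == p for t, p in opened) / len(opened)
print(f"closed-world accuracy: {closed_acc:.2f}")
print(f"with novel traffic:    {open_acc:.2f}")
```

The closed-world score looks excellent; adding one unseen traffic class collapses it, because the model has no way to say "none of the above."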

Neural Fuzzing: Reality Check

Research by Nicolae et al. (ESEC/FSE 2023) found that standard gray-box fuzzers "almost always surpass neural program smoothing-based fuzzers despite additional GPU resources." The original neural program smoothing (NPS) performance claims do not hold: incomplete coverage bitmaps limit what the neural networks can learn. We incorporate ML-guided approaches selectively, where they demonstrate concrete benefits.

Nicolae, Liu, She, Payer et al. "Revisiting Neural Program Smoothing for Fuzzing (Neuzz++)" ESEC/FSE (2023)
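The gray-box baseline that NPS fuzzers struggle to beat is conceptually simple: mutate inputs, keep any mutant that reaches new coverage. The sketch below is our illustrative miniature of that loop (the toy `target` with hand-placed edge IDs stands in for an instrumented binary; it is not any real fuzzer's code).

```python
import random

random.seed(0)

def target(data: bytes, cov: set) -> None:
    """Toy instrumented parser: records an edge ID per branch taken,
    standing in for a coverage-instrumented binary."""
    cov.add(0)
    if data[:1] == b"F":
        cov.add(1)
        if data[1:2] == b"U":
            cov.add(2)
            if data[2:3] == b"Z":
                cov.add(3)

def mutate(data: bytes) -> bytes:
    # Single random byte flip, the simplest havoc-style mutation.
    b = bytearray(data)
    b[random.randrange(len(b))] = random.randrange(256)
    return bytes(b)

corpus, seen = [b"AAAA"], set()
for _ in range(60000):
    candidate = mutate(random.choice(corpus))
    cov = set()
    target(candidate, cov)
    if not cov <= seen:        # new coverage -> keep the input as a seed
        seen |= cov
        corpus.append(candidate)

print("edges reached:", sorted(seen))
print("corpus size:  ", len(corpus))
```

Coverage feedback lets random mutation climb the `F` → `U` → `Z` branch chain one edge at a time; without the `cov <= seen` check, hitting all three bytes at once is astronomically unlikely.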

LLM Security Evaluation Gaps

"LLM Cyber Evaluations Don't Capture Real-World Risk" (arXiv 2025) argues that current evaluations don't model realistic adversary behavior. CTF challenges don't reflect operational complexity. Despite developer reports of threat actors using LLMs, there has been "no drastic increase in phishing attacks."

"LLM Cyber Evaluations Don't Capture Real-World Risk" — CTF challenges don't reflect operational complexity arXiv (2025)

Current Research Programs

Active areas of investigation

Post-Quantum Implementation Security

Side-channel vulnerabilities in NIST-standardized PQC implementations. We evaluate countermeasures against attacks demonstrated by researchers breaking fifth-order masked implementations with deep learning. Our work informs deployment guidance for hybrid cryptographic systems.

"A side-channel attack on masked hardware implementation of CRYSTALS-Kyber" Journal of Cryptographic Engineering (2025)
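The countermeasure under attack here is Boolean masking: a secret is split into d+1 random shares so that any d of them are statistically independent of the secret, forcing an attacker to combine leakage from all shares. A minimal sketch of the principle (a single byte, nothing resembling a real Kyber implementation):

```python
import secrets

def mask(secret: int, order: int, bits: int = 8) -> list:
    """Boolean masking: split `secret` into order+1 shares whose XOR equals
    the secret. Any `order` shares alone are uniformly random."""
    shares = [secrets.randbits(bits) for _ in range(order)]
    last = secret
    for s in shares:
        last ^= s                  # final share absorbs the secret
    return shares + [last]

def unmask(shares: list) -> int:
    value = 0
    for s in shares:
        value ^= s
    return value

key_byte = 0x2B                    # an AES-like key byte, for illustration
shares = mask(key_byte, order=5)   # fifth-order masking -> six shares
print(len(shares), hex(unmask(shares)))
```

The deep-learning attacks cited above matter precisely because they recover the secret from the joint leakage of all six shares, which this independence argument does not cover.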

Detection System Robustness

Evaluating ML detection against practical evasion. MasqueradeGAN-GP (Dong et al., 2025) achieved high attack success rates using WGAN-GP for traffic transformation. We develop detection approaches robust against such transformations by identifying artifacts that evasion networks cannot eliminate.

Dong, Wang, Liu et al. "MasqueradeGAN-GP: Traffic Transformation for IDS Evasion" Internet Technology Letters (2025)
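One class of artifact we look for: generators that match low-order moments of benign traffic often flatten distributional shape. A two-sample Kolmogorov-Smirnov check over inter-arrival times illustrates the idea (synthetic data throughout; this is a sketch of the artifact-detection principle, not our deployed detector or MasqueradeGAN-GP's actual output).

```python
from bisect import bisect_right
import random

random.seed(1)

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum gap between the
    empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    return max(abs(bisect_right(a, x) / len(a) - bisect_right(b, x) / len(b))
               for x in a + b)

# Reference profile: heavy-tailed inter-arrival times of real benign flows.
reference = [random.expovariate(1.0) for _ in range(500)]
real = [random.expovariate(1.0) for _ in range(500)]
# Hypothetical generator output that matched the mean but flattened the tail.
morphed = [random.uniform(0.0, 2.0) for _ in range(500)]

print(f"KS(real, reference):    {ks_stat(real, reference):.3f}")
print(f"KS(morphed, reference): {ks_stat(morphed, reference):.3f}")
```

The transformed traffic has the same mean inter-arrival time but a measurably larger CDF gap, which a threshold tuned on held-out benign traffic can flag.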

Threat Intelligence Quality

Ground truth problems in threat attribution. As Dambra et al. (CCS 2023) found, ground truth assembly varies dramatically across studies. AVClass provides probabilistic consensus, not true ground truth—19% of samples remain unlabeled due to generic tokens.

Dambra, Bilge, Balzarotti et al. "Decoding Secrets of ML in Malware Classification" ACM CCS (2023)
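The AVClass-style consensus mechanism, and why generic tokens leave samples unlabeled, can be sketched in a few lines. The generic-token set and scan results below are illustrative inventions, not AVClass's actual taxonomy or real AV output.

```python
from collections import Counter

# Tokens too generic to name a family (illustrative subset, not the
# real AVClass taxonomy).
GENERIC = {"trojan", "generic", "malware", "agent", "win32", "heur"}

def consensus(av_labels, min_votes=2):
    """Plurality family token across engines, or None when no family
    token clears the vote threshold (the sample stays unlabeled)."""
    votes = Counter()
    for label in av_labels:
        for tok in label.lower().replace("/", ".").split("."):
            if tok and tok not in GENERIC:
                votes[tok] += 1
    if not votes:
        return None
    family, n = votes.most_common(1)[0]
    return family if n >= min_votes else None

# Hypothetical scan results for two samples.
sample_a = ["Trojan.Zbot", "Win32/Zbot", "Generic.Agent"]
sample_b = ["Trojan.Generic", "Heur.Agent", "Malware.Win32"]

print(consensus(sample_a))  # engines agree on a family token
print(consensus(sample_b))  # only generic tokens -> stays unlabeled
```

The second sample is exactly the failure mode behind the 19% figure: every engine detected it, yet no engine said anything specific enough to vote on.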

Graph Neural Networks for Threat Detection

CyberVeriGNN (Huang & Wang, 2025) combines BERT embeddings, MITRE ATT&CK semantics, and multi-head GAT for detecting forged CTI. We extend graph-based approaches for lateral movement detection and APT campaign attribution.

Huang & Wang "CyberVeriGNN: Detecting Fake Cyber Threat Intelligence with Graph Neural Networks" Security and Privacy (2025)
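At the core of these graph approaches is attention-weighted neighbor aggregation. The sketch below shows one attention head over a toy three-node graph; it simplifies aggressively (dot-product scoring instead of GAT's learned LeakyReLU attention, no feature transform, invented node features) and is not the CyberVeriGNN architecture.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def gat_head(feats, adj, score):
    """One attention head of a GAT-style layer: each node's new feature
    vector is an attention-weighted average over its neighbours."""
    out = []
    for i, nbrs in enumerate(adj):
        attn = softmax([score(feats[i], feats[j]) for j in nbrs])
        dim = len(feats[i])
        out.append([sum(a * feats[j][d] for a, j in zip(attn, nbrs))
                    for d in range(dim)])
    return out

# Tiny graph standing in for a CTI entity graph; self-loops included.
feats = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
adj = [[0, 1, 2], [1, 0], [2, 0]]
# Dot-product scoring as a stand-in for the learned attention function.
score = lambda a, b: sum(x * y for x, y in zip(a, b))

updated = gat_head(feats, adj, score)
print(updated)
```

Because attention weights sum to one, each updated node stays a convex combination of its neighborhood; multi-head variants run several such heads with different learned scorers and concatenate the results.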

Security Advisories

Responsible disclosure with 90-day timeline

Disclosure Process

All vulnerabilities discovered through our research are disclosed responsibly. We work with vendors to ensure patches are available before public disclosure. Each advisory includes CVE identification, technical root cause analysis, exploitation requirements, and detection recommendations.

90-day standard disclosure timeline
Coordinated vendor notification
CVE assignment and CVSS scoring
Detection signature development

Disclosure Record
200+ CVEs disclosed
15+ critical severity
90-day standard timeline
100% vendor coordination

Open Source Tools

Tools for the community, maintained actively

Security Tools

We release tools that solve real security problems, used by security teams at organizations of all sizes. The portfolio spans offensive security tooling, defensive detection tools, forensic analysis utilities, and automation frameworks, all under permissive licenses (MIT/Apache) for maximum adoption.

Detection-as-code frameworks
Certificate transparency monitors
Behavioral analysis libraries
Threat hunting playbooks

Open Source Impact
Thousands of GitHub stars
Actively maintained
MIT/Apache permissive licensing
Community contributor base

Publications

Peer-reviewed research at top security venues

Academic Publications

We publish in top-tier academic security venues. Our research undergoes rigorous peer review. Selected recent publications span post-quantum implementation security, detection system robustness, and threat intelligence quality.

IEEE Symposium on Security and Privacy
USENIX Security Symposium
ACM CCS
NDSS

Conference Presentations

We present our research at security conferences globally, with speaking engagements selected for technical audiences: Black Hat (USA, Europe, Asia), DEF CON, CCC, and specialized security conferences.

Start a Conversation

Tell us about your security requirements. We respond within 24 hours.

Encrypted transmission