🎉 I am excited to share that our latest research paper, Model-based Feature Weight Drift Detection (MFWDD) Showcased on TLS and QUIC Traffic, has been published at CNSM24! 🚀
🔍 In the ever-evolving world of network traffic classification, machine learning models face constant challenges from concept and data drifts. These shifts degrade model performance over time, making it harder to maintain accuracy without frequent, often inefficient retraining.
✨ That's where MFWDD comes into play! Unlike traditional approaches, MFWDD doesn't just detect drifts—it quantifies their severity and links them to specific features, offering unprecedented insights into what's driving the change. By leveraging statistical tests like the Kolmogorov–Smirnov test and Wasserstein distance, MFWDD enables smarter, more targeted retraining. This means fewer unnecessary updates, better resource efficiency, and stronger model performance over the long term. 💡
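For anyone curious about the mechanics, here is a minimal sketch (in Python, with hypothetical names and thresholds, not the paper's implementation) of how per-feature drift could be scored with the Kolmogorov–Smirnov test and Wasserstein distance:

```python
# Minimal sketch of per-feature drift scoring in the spirit of MFWDD
# (illustrative helper, not the authors' code): compare each feature's
# reference distribution to a recent window and flag features that shifted.
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

def detect_feature_drift(reference, current, feature_names,
                         p_threshold=0.01, wd_threshold=0.1):
    """Return drift scores for features exceeding illustrative thresholds."""
    drifted = {}
    for i, name in enumerate(feature_names):
        ref, cur = reference[:, i], current[:, i]
        ks_stat, p_value = ks_2samp(ref, cur)   # test for distribution change
        wd = wasserstein_distance(ref, cur)     # magnitude of the shift
        if p_value < p_threshold and wd > wd_threshold:
            drifted[name] = {"ks": ks_stat, "wasserstein": wd}
    return drifted

# Toy usage: rows are flows, columns are flow features; first feature drifts.
rng = np.random.default_rng(0)
ref = rng.normal(0, 1, size=(1000, 3))
cur = np.column_stack([rng.normal(0.5, 1, 1000),
                       rng.normal(0, 1, 1000),
                       rng.normal(0, 1, 1000)])
print(detect_feature_drift(ref, cur, ["pkt_len", "iat", "direction"]))
```

The per-feature scores make it possible to retrain only when the features the model actually relies on have shifted, which is the resource-efficiency point above.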
💡 In our paper, we showcase how MFWDD guided the retraining of ML models for TLS and QUIC service classification, maintaining and even improving their accuracy despite evolving network conditions. The results highlight the importance of timely retraining and the advantages of feature-focused drift detection. 📊
📄 Curious to learn more? Check out the paper on IEEE Xplore: https://lnkd.in/eY_2K5EW 🔗
#Research #NetworkSecurity #MachineLearning #DataDrift #OpenSource