Authors: Kun Huang (1) and Dafang Zhang (2)
Affiliations: (1) School of Computer and Communication, Hunan University, China; (2) School of Software, Hunan University, China
Keywords: Intrusion detection, Evaluation data, Network traffic, Self-similarity.
Related Ontology Subjects/Areas/Topics: Information and Systems Security; Intrusion Detection & Prevention; Network Security; Reliability and Dependability; Security in Information Systems; Security Metrics and Measurement
Abstract:
While intrusion detection systems (IDSs) are becoming a ubiquitous line of defence, no comprehensive and scientifically rigorous benchmark is available to evaluate their performance. In 1998 and again in 1999, MIT Lincoln Laboratory conducted a comprehensive evaluation of IDSs and produced the DARPA off-line evaluation data for training and testing IDSs. However, detailed characterizations of the DARPA/Lincoln Laboratory evaluation data are still lacking. This paper examines the self-similarity of the 1999 DARPA/Lincoln Laboratory training data sets and shows that the evaluation data clearly exhibit self-similarity during an initial period of tens of hours, but not during other time periods. The likely causes of the loss of self-similarity are also explored. These findings can help evaluators understand the 1999 DARPA/Lincoln Laboratory evaluation data and use it properly when evaluating IDSs.
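Self-similarity in network traffic is commonly quantified by the Hurst exponent H, with H near 0.5 indicating no long-range dependence and H between 0.5 and 1 indicating self-similar behaviour. The paper does not specify its estimation method; below is a minimal sketch of one standard technique, the aggregated-variance method, in which the variance of the block-averaged series X^(m) scales as m^(2H-2), so a log-log regression of variance against block size m yields H. All function names here are illustrative, not taken from the paper.

```python
import math
import random

def aggregate(series, m):
    """Average the series over non-overlapping blocks of size m."""
    n = len(series) // m
    return [sum(series[i * m:(i + 1) * m]) / m for i in range(n)]

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def hurst_aggregated_variance(series, block_sizes):
    """Estimate H: the slope of log var(X^(m)) vs log m is 2H - 2."""
    xs = [math.log(m) for m in block_sizes]
    ys = [math.log(variance(aggregate(series, m))) for m in block_sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 + slope / 2.0

# Sanity check on i.i.d. Gaussian noise, which should give H close to 0.5
# (no long-range dependence); genuinely self-similar traffic would give H > 0.5.
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(100_000)]
h = hurst_aggregated_variance(noise, [10, 20, 50, 100, 200, 500])
print(f"estimated H = {h:.2f}")
```

Applied to per-interval packet or byte counts from the evaluation traces, an estimator like this would distinguish the self-similar hours the paper reports from the periods where self-similarity fails.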