As illustrated in Figure 3, the performance of
different models varies significantly under complex
scenarios. In the Normal scenario, all three models
are able to accurately detect lane lines, with
predictions closely matching the ground truth.
LaneATT, in particular, demonstrates smoother
fitting at lane curvature points, reflecting superior
detail recovery capabilities. In the Crowd scenario, all
three models successfully detect the primary lane
lines with minimal prediction error, showcasing good
robustness.
In contrast, in the Shadow scenario, where
lighting conditions change drastically, the models
show noticeable differences. SCNN exhibits
significant deviations and broken lines in its
predictions, leading to reduced accuracy. PINet
detects only two lane lines, but they align well with
the ground truth. LaneATT successfully identifies all
lane lines with predictions almost fully overlapping
the annotations, demonstrating the best overall
performance in this setting.
Under the Night scenario, both LaneATT and
PINet maintain high detection accuracy, whereas
SCNN shows missed detections under low-light
conditions, failing to identify the rightmost lane line
and exhibiting a notable performance drop.
In conclusion, the visual results further support
the quantitative findings presented in Section 3.2.
LaneATT demonstrates stronger robustness and
generalization in complex scenarios, with more stable
and accurate predictions. PINet maintains high
localization accuracy in curved or partially occluded
environments. SCNN, while stable in scenarios with
clear lane continuity, exhibits limited performance
under strong environmental interference.
4 CONCLUSIONS
This study focused on the task of lane detection by
selecting three representative deep learning models
(SCNN, PINet, and LaneATT) for systematic
reproduction and performance comparison under a
unified dataset (CULane) and evaluation framework.
By standardizing the input-output settings, evaluation
metrics, and visualization analysis, the study aimed to
explore the detection effectiveness of these models
under various driving scenarios and provide an
empirical foundation for future research.
Experimental results reveal significant
differences in overall performance and detailed
behavior among the three models. LaneATT achieved
the highest F1 scores across all scenarios,
demonstrating superior robustness and generalization
capabilities, particularly in complex environments
such as nighttime, crowded traffic, and variable
lighting conditions. PINet performed well in handling
curved roads and lane-dense scenes, making it
suitable for recognizing structurally complex lane
patterns. While SCNN maintained stable detection in
standard scenarios with good lane continuity, its
performance declined under more challenging
conditions. The visual analyses further confirmed
these quantitative findings, showcasing the prediction
differences on specific test images.
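The F1 comparisons above follow the standard CULane protocol, in which a predicted lane counts as a true positive when its IoU with a ground-truth lane exceeds 0.5, and F1 is the harmonic mean of precision and recall. As a minimal illustration (the function name and the example counts below are hypothetical, not taken from the study), the metric can be sketched as:

```python
def culane_f1(tp: int, fp: int, fn: int) -> float:
    """F1 from true-positive, false-positive, and false-negative lane counts.

    Under the CULane protocol, a predicted lane is a true positive when
    its IoU with a ground-truth lane exceeds 0.5. Counts are hypothetical.
    """
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)   # fraction of predicted lanes that match
    recall = tp / (tp + fn)      # fraction of ground-truth lanes recovered
    return 2 * precision * recall / (precision + recall)

# Hypothetical scenario: 90 matched lanes, 10 spurious, 10 missed.
print(round(culane_f1(90, 10, 10), 3))  # 0.9
```

Because F1 penalizes both spurious and missed lanes symmetrically, a model that over-predicts in shadowed regions (extra false positives) and one that misses the rightmost lane at night (extra false negatives) are both pulled below the top score, which is why the metric separates the three models cleanly across scenarios.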
Despite the comprehensive comparative analysis
conducted in this study, some limitations remain.
First, the evaluation focused solely on the inference
stage without including the full training process.
Second, only the CULane dataset was used, lacking
cross-dataset generalization analysis. Third, practical
deployment factors such as detection speed and
resource consumption were not addressed.
Future research can be extended in several
directions: further optimizing model architectures to
improve adaptability in complex scenes; expanding
evaluation to include diverse urban environments and
varying weather conditions; incorporating
lightweight network designs to enhance inference
efficiency and promote real-world deployment in
autonomous driving systems; and exploring multi-
task learning approaches to integrate lane detection
with other perception tasks.
This study holds practical relevance and reference
value. On the one hand, reproducing and comparing
typical models within a unified evaluation
framework clarifies the applicability and strengths of
current mainstream lane detection methods under
different scenarios, providing a basis for industrial
model selection. On the other hand, the standardized
comparison procedure and multi-perspective
visualization analysis proposed in this work serve as
an experimental paradigm and evaluation reference
for future model improvements and academic studies.