as a means to take advantage of pre-trained facial
expression models and subsequently fine-tunes them
on crowd surveillance data for more accurate emotion
detection. It also makes use of Maximum Mean Discrepancy (MMD), a metric for comparing distribution similarity in the feature domain that accounts for changes in angle and illumination across the input images (an illustrative formulation is given after this paragraph). FEDM has shown significant advancements
in the early identification of suspicious behavior,
leading to reduced response times across public safety
cases. This involves analysing and tracking foot and head motion to yield an accurate and detailed overview of what a person is doing, which proves extremely useful for detecting expressions of fear or aggression in crowded settings such as airports or stadiums where security is critical. Extensive experiments demonstrate that
FEDM achieves an 18% improvement in performance over state-of-the-art facial recognition models, making it a viable solution for real-time crowd monitoring in high-security areas such as airports or stadiums. Further tests demonstrated that
FEDM performs uniformly well in different lighting
scenarios and at different angles, thus affirming its
dependability in surveillance applications. It also
gives security staff critical, real-time insight into crowd
behavior that helps avert incidents before they
escalate. The study emphasizes that FEDM needs to be understood with respect to a much broader platform of other CCTV analytics tools, supporting a closed-loop understanding of the data and forming the skeleton of a crowd management system. Such integration would facilitate a multi-layered security mechanism, thus enriching situational awareness in overcrowded locations.
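For reference (our notation, not drawn from the cited work), the squared MMD between a source feature distribution P (features from the pre-training expression data) and a target distribution Q (features from the crowd surveillance footage), under a kernel k, is commonly written as
\[
\mathrm{MMD}^2(P,Q) \;=\; \mathbb{E}_{x,x' \sim P}\big[k(x,x')\big] \;+\; \mathbb{E}_{y,y' \sim Q}\big[k(y,y')\big] \;-\; 2\,\mathbb{E}_{x \sim P,\, y \sim Q}\big[k(x,y)\big],
\]
where a Gaussian kernel is a common choice for k; a small value indicates that the fine-tuned features remain close to the pre-trained distribution despite changes in angle and illumination.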
Kim et al. proposed a Vehicle Identification and Alert System (VIAS) to detect stolen vehicles in parking lots and secured areas, providing timely information on vehicle theft. The system uses license plate recognition along with attribute matching to identify vehicles and detect stolen ones (a simplified sketch of this matching step is given after this paragraph). It incorporates an SDT layer to
provide secure data transmission between the CCTV
cameras and the central monitoring server, protecting sensitive information. The system also incorporates a
Clustering-Based Alert (CBA) algorithm that
processes recorded footage and matches the
characteristics of the vehicle against police-issued
bulletins about stolen vehicles. If a match is found,
security is immediately alerted. The system has
achieved 20% better detection, while also greatly
reducing false alerts. This study highlights the
promise that VIAS holds for making places where stolen vehicles are frequently encountered, such as parking garages and secured-access facilities, safer for bystanders. Moreover, VIAS was validated in real-
time environments in various urban scenarios, where
it detected flagged vehicles moments after they
entered the premises. It aligns with privacy
regulations by securely storing sensitive vehicle and
owner data through the use of encrypted data storage.
They suggest extending VIAS to leverage multi-
camera streams and improve the algorithm for use in
larger parking structures. Future work will involve
the integration of VIAS with traffic management
systems, enabling city-wide monitoring of high-risk vehicles.
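The cited work does not reproduce its implementation here; the following minimal Python sketch only illustrates the general idea of checking a recognized plate and coarse vehicle attributes against a watchlist built from police-issued bulletins. All field names, normalization rules, and example data are assumptions for illustration, not details of VIAS or its CBA algorithm.

import re
from dataclasses import dataclass

@dataclass(frozen=True)
class Bulletin:
    plate: str       # plate text from a police-issued bulletin (hypothetical field)
    color: str       # coarse attributes reported with the bulletin
    body_type: str

def normalize_plate(text: str) -> str:
    """Strip separators and common OCR confusions so plates compare reliably."""
    text = re.sub(r"[^A-Z0-9]", "", text.upper())
    return text.replace("O", "0").replace("I", "1")

def match_bulletin(detected_plate: str, detected_attrs: dict, bulletins: list[Bulletin]) -> Bulletin | None:
    """Return the first bulletin whose plate and attributes agree with the detection."""
    plate = normalize_plate(detected_plate)
    for b in bulletins:
        if normalize_plate(b.plate) != plate:
            continue
        # Attribute matching guards against an OCR error triggering a plate-only false alarm.
        if detected_attrs.get("color") == b.color and detected_attrs.get("body_type") == b.body_type:
            return b
    return None

# Example: one camera detection checked against a two-entry watchlist.
watchlist = [Bulletin("AB12 CDE", "red", "sedan"), Bulletin("XY-99-ZZZ", "black", "suv")]
hit = match_bulletin("ab12cde", {"color": "red", "body_type": "sedan"}, watchlist)
if hit is not None:
    print(f"ALERT: possible stolen vehicle, bulletin plate {hit.plate}")

In a deployed system, the exact-match lookup would typically be replaced by fuzzy matching tolerant to OCR errors, and the watchlist would be refreshed continuously from the police bulletin feed.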
3 BACKGROUND
In the practical world of video surveillance, video footage analysis has always been the core process. The task becomes even more difficult with the massive volume of video produced by CCTV cameras in public places, on roads, in parking lots, and in high-security areas. There
is a growing necessity for an intelligent system
capable of improving video quality, detecting
hazards, and ensuring secure monitoring. This is
especially relevant in situations where one needs to
keep an eye on large groups of people, identify
suspicious activity, or monitor particular features,
such as license plates or facial features, for security purposes. Standard video analytics systems tend not to
work as well under low-light conditions, with blurry footage, or with objects that are hard to identify. This limits their ability to accurately identify and track critical objects such as vehicle registration plates, facial features, and abnormal activity patterns in real time, which can result in security breaches. The technological gap also extends to data security: because modern encryption technologies are not incorporated, the video is vulnerable to malicious access or tampering. Thanks to deep learning and artificial intelligence (AI), innovative methods to upgrade video surveillance are on the rise. With the right deep
learning models, video footage can be enhanced so
that vehicles, people, and other objects are easier to
recognize and classify, even under poor conditions.
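As a general illustration rather than a method from the surveyed systems, the short Python sketch below applies contrast-limited adaptive histogram equalization (CLAHE) with OpenCV to the luminance channel of a single low-light frame, a common pre-processing step before detection models are run; the file path and CLAHE parameters are assumed values.

import cv2

# Illustrative pre-processing only; path and parameters are assumptions.
frame = cv2.imread("frame.jpg")                       # one low-light CCTV frame
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)          # work on luminance, keep colour intact
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)                                 # boost local contrast in dark regions
enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("frame_enhanced.jpg", enhanced)           # enhanced frame passed to later detectors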
Advanced image enhancement techniques can be
employed to enhance license plates, identify faces