models forecast resource usage. For precise schedule optimization, advanced fuzzy logic systems draw on environmental data, deadline constraints, and real-time resource availability. The hybrid method balances efficiency and fairness by assigning critical tasks to resources without overburdening them. A major benefit of the hybrid architecture is its energy efficiency: the high energy usage of cloud data centers carries both financial and environmental consequences. The system applies ML predictions together with fuzzy logic rules to reduce resource underutilization and select the right resources for each task (Chen et al., 2022).
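To make the selection step concrete, the following minimal Python sketch combines a fuzzy urgency score with a predicted load; the triangular membership function, the cost weights, and the persistence-style load predictor are illustrative assumptions, not the exact design of the hybrid architecture.

# Minimal sketch of a hybrid fuzzy/ML resource selector.
# Membership functions, weights, and the load predictor are
# illustrative assumptions, not the architecture's exact design.

def tri(x, a, b, c):
    """Triangular fuzzy membership of x over (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def urgency(deadline_s):
    """Fuzzy 'urgent' degree: tighter deadlines score higher."""
    return tri(deadline_s, -1, 0, 300)  # fully urgent near 0 s, fades by 300 s

def predicted_load(resource):
    # Stand-in for an ML forecast (e.g., a regression model);
    # here: naive persistence of the current utilization.
    return resource["util"]

def pick_resource(task, resources, max_util=0.85):
    """Choose the resource with the lowest combined cost
    while refusing to overload any single machine."""
    best, best_cost = None, float("inf")
    for r in resources:
        load = predicted_load(r)
        if load + task["demand"] > max_util:  # fairness guard
            continue
        # Urgent tasks weight speed; relaxed tasks weight balance.
        u = urgency(task["deadline_s"])
        cost = u * r["latency"] + (1 - u) * load
        if cost < best_cost:
            best, best_cost = r, cost
    return best

resources = [
    {"name": "vm-a", "util": 0.70, "latency": 0.2},
    {"name": "vm-b", "util": 0.30, "latency": 0.5},
]
task = {"demand": 0.10, "deadline_s": 30}
print(pick_resource(task, resources)["name"])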
Energy consumption falls because energy-aware scheduling algorithms incorporate real-time energy profiles of resources into their decision-making. The suggested framework adapts dynamically to operational changes, whereas static systems operate exclusively on fixed criteria. Combining fuzzy logic systems with machine learning models allows flexible adjustment of priority levels and prediction of delays during busy periods, which supports timely resource replacement. This adaptability keeps completion rates high under task uncertainty and yields substantial stability improvements (Rahman et al., 2021).
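A hedged sketch of how such energy-aware, dynamically re-prioritized selection might be wired together is given below; the energy-profile fields, the queueing-delay estimate, and the weighting are assumptions introduced purely for illustration.

# Sketch of energy-aware scheduling with dynamic priorities.
# The energy profile fields, weights, and delay model are
# illustrative assumptions, not the framework's actual rules.

def est_delay(host):
    """Crude queueing-delay estimate; flags busy periods."""
    free = host["service_rate"] - host["arrival_rate"]
    return float("inf") if free <= 0 else 1.0 / free

def dynamic_priority(task, now):
    """Priority grows as the deadline approaches (dynamic, not static)."""
    slack = max(task["deadline"] - now, 1e-6)
    return 1.0 / slack

def energy_aware_choice(task, hosts, now, w_energy=0.05):
    """Pick the host minimizing predicted delay plus an energy
    penalty, scaled by the task's current priority."""
    pri = dynamic_priority(task, now)
    def cost(h):
        energy = h["watts_per_unit"] * task["work"]  # real-time energy profile
        return pri * est_delay(h) + w_energy * energy
    return min(hosts, key=cost)

hosts = [
    {"name": "h1", "service_rate": 10, "arrival_rate": 9, "watts_per_unit": 2.0},
    {"name": "h2", "service_rate": 10, "arrival_rate": 4, "watts_per_unit": 3.5},
]
task = {"work": 5.0, "deadline": 120.0}
print(energy_aware_choice(task, hosts, now=100.0)["name"])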
The implementation of scheduling systems that combine ML and fuzzy logic generates important advantages, though it comes with certain implementation challenges. Computational overhead rises due to both sophisticated ML model design and massive training data needs. The development of fuzzy logic rules requires domain knowledge together with continuous refinement to capture actual situations accurately. New developments in automatic fuzzy rule generation together with ML model optimization have effectively reduced this challenge (Li and Wang, 2023).
This paper advances the exploration of scheduling by examining intelligent scheduling methods that combine fuzzy logic with machine learning. Test results demonstrate the approach's strong suitability for dynamic cloud systems: it boosts resource utilization and decreases energy usage while upholding task execution timing guarantees.
2 RELATED WORKS
In Saad et al. (2023), the authors combined K-Means clustering with fuzzy logic for the effective organization of fog nodes by their resource characteristics and workload patterns. The resulting method distributes work in real time by linking K-Means clustering with the adaptability of fuzzy logic. Their approach demonstrated that distributing job placement across fog nodes using machine learning reduced execution times, response times, and network utilization rates. Extensive testing confirms that the proposed solution remains versatile in changing fog scenarios. Detecting VM work clusters is time-consuming, but the process as a whole is very efficient. They developed and evaluated their proposed approach using iFogSim. In the simulation results, it shows distinct improvements over both ML-based and non-ML-based scheduling methods inside the iFogSim framework in terms of response time, execution time, and network utilization.
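As a rough illustration of this two-stage idea, the sketch below clusters fog nodes with scikit-learn's K-Means and then places a job via a simple fuzzy suitability score; the node features and the scoring rule are assumptions, not the authors' actual formulation.

# Sketch of the Saad et al. (2023) idea: cluster fog nodes by
# resource profile with K-Means, then use a fuzzy suitability
# score to place jobs. Features and scoring are assumed here.
import numpy as np
from sklearn.cluster import KMeans

# Each fog node: [cpu_capacity, current_load]
nodes = np.array([
    [8.0, 0.2], [8.0, 0.3],   # strong, lightly loaded
    [2.0, 0.7], [2.0, 0.8],   # weak, busy
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(nodes)

def fuzzy_fit(node, cpu_needed):
    """Fuzzy 'suitable' degree: capacity headroom tempered by load."""
    headroom = min(max((node[0] - cpu_needed) / node[0], 0.0), 1.0)
    return headroom * (1.0 - node[1])

def place(cpu_needed):
    # Restrict the search to the cluster whose centroid fits best,
    # then pick the best-fitting node inside it.
    best_cluster = int(np.argmax(
        [fuzzy_fit(c, cpu_needed) for c in km.cluster_centers_]))
    idx = [i for i, lab in enumerate(km.labels_) if lab == best_cluster]
    return max(idx, key=lambda i: fuzzy_fit(nodes[i], cpu_needed))

print("job placed on node", place(cpu_needed=4.0))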
In Thapliyal et al. (2024), the authors proposed an optimized approach based on fuzzy logic (FL) and best-fit-decreasing (BFD) for the job scheduling process in a cloud computing environment. Together, these components make FL-BFD economical in time, money, power, and resources. FL-BFD reallocates cloud VMs according to user demand. The FL component handles uncertainty and missing information, while BFD matches the provisioned VMs to what the user requires. The proposed FL-BFD inspects multiple factors, including makespan, computational time, degree of imbalance, power consumption, and SLA violations. In their evaluation over 1000 jobs, FL-BFD reached a makespan of 9.2 ms when compared against IWHOLF-TSC and MCT-PSO.
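The core BFD mechanic with a fuzzy hedge for uncertain demands can be sketched as follows; the demand-inflation rule is an assumed stand-in for the paper's FL component, not its actual rule base.

# Sketch of an FL-BFD-style allocator: best-fit-decreasing bin
# packing, with a fuzzy inflation of uncertain job demands.
# The inflation rule is an assumption, not the paper's FL design.

def effective_demand(job):
    """Fuzzy hedge: the less certain the demand estimate,
    the more headroom is reserved for the job."""
    return job["demand"] * (1.0 + 0.5 * (1.0 - job["certainty"]))

def fl_bfd(jobs, vm_capacities):
    free = list(vm_capacities)
    placement = {}
    # Decreasing order over the fuzzy-adjusted demand (the 'D' in BFD).
    for job in sorted(jobs, key=effective_demand, reverse=True):
        need = effective_demand(job)
        # Best fit: the tightest VM that still accommodates the job.
        fits = [(free[v] - need, v) for v in range(len(free)) if free[v] >= need]
        if not fits:
            placement[job["id"]] = None  # would raise an SLA alarm in practice
            continue
        _, v = min(fits)
        free[v] -= need
        placement[job["id"]] = v
    return placement

jobs = [
    {"id": "j1", "demand": 3.0, "certainty": 1.0},
    {"id": "j2", "demand": 2.0, "certainty": 0.5},  # uncertain: inflated to 2.5
    {"id": "j3", "demand": 1.0, "certainty": 0.9},
]
print(fl_bfd(jobs, vm_capacities=[4.0, 4.0]))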
In Radhika (2022), the author presented a dynamic cloud task-scheduling scheme that considers big data analysis processing in the cloud environment. The work employs multiple methods, including a machine learning classifier and an optimization approach. To classify the various virtual machine tasks, a machine-learning classifier known as a Support Vector Machine (SVM) is used; classifying incoming requests with it effectively reduces makespan and execution time. The jobs classified by the SVM are then assigned using moth flame optimization. The proposed system thus classifies virtual machine (VM) tasks and evaluates a decision-making methodology for resource allocation. Tested in a cloud modeling environment to improve VM classification, the method showed that makespan can be reduced while load balancing also benefits.
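A minimal sketch of the classification stage is shown below using scikit-learn's SVC; the toy features, labels, and the trivial routing stand-in for the moth flame optimization stage are all assumptions for illustration.

# Sketch of the first stage of the Radhika (2022) pipeline: an SVM
# classifies incoming requests by type before an optimizer (moth
# flame optimization in the paper; omitted here) assigns them to VMs.
from sklearn.svm import SVC

# Toy training data: [cpu_demand, io_demand] -> task class
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
y = ["compute", "compute", "io", "io"]

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

incoming = [[0.85, 0.15], [0.15, 0.95]]
classes = clf.predict(incoming)

# A trivial stand-in for the optimization stage: route each
# class to the VM pool tuned for it.
pools = {"compute": "cpu-optimized-pool", "io": "io-optimized-pool"}
for req, c in zip(incoming, classes):
    print(req, "->", c, "->", pools[c])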
In Alam et al. (2021), the authors introduce a new static