to the total number of samples. This metric provides
a straightforward indication of the model's
effectiveness in recognizing the Lontara characters
(Pratama, Nurtanio, and Paundu 2024).
To evaluate the model's performance in
recognizing Lontara handwriting, a confusion matrix
was used to provide a detailed overview of the
prediction accuracy rate, classification error
distribution, and model performance for each
character class (Ullah et al. 2023). The matrix
presents the distribution of true positives, false
positives, true negatives, and false negatives,
allowing us to identify specific classes that were
frequently misclassified and to assess the model's
strengths and weaknesses across all categories.
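As a minimal sketch of how the confusion matrix and the per-class precision, recall, and F1 statistics reported later can be derived, the following uses NumPy on a small illustrative three-class example (the labels and predictions are synthetic, not the study's data):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # cm[i, j] = number of samples whose true class is i
    # and whose predicted class is j
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_metrics(cm):
    tp = np.diag(cm).astype(float)          # correct predictions per class
    fp = cm.sum(axis=0) - tp                # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp                # belongs to the class, but missed
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1

# Illustrative 3-class example
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
prec, rec, f1 = per_class_metrics(cm)
```

Off-diagonal entries of the matrix directly expose which classes absorb the misclassifications of others, which is how frequently confused character pairs can be identified.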
3 RESULTS AND DISCUSSION
Image augmentation was applied to artificially
increase the variety of the training dataset by applying
several transformations to the original images,
including rotation, inversion, and scaling, enhancing
the model's robustness to different handwriting
variations. Training was run for a total of 50 epochs.
Normalization played a crucial role in this
study by scaling the image pixel values to a range
between 0 and 1. This step facilitated the model's
learning process, accelerating training and
improving the stability of the model.
Normalization enabled the CNN to focus on essential
features without being influenced by large pixel value
differences. Furthermore, normalization helped
prevent common training issues, such as exploding or
vanishing gradients, ensuring that the model could
effectively capture complex patterns and perform
well in recognizing new, unseen data.
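The preprocessing steps above can be sketched with plain NumPy. The 8-bit grayscale input range (0-255), the batch shape, and the specific augmentations shown (horizontal flip for inversion, a 90-degree rotation) are illustrative assumptions; real pipelines typically use small-angle rotations and rescaling as well:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative batch of four 28x28 8-bit grayscale images (values 0-255)
images = rng.integers(0, 256, size=(4, 28, 28)).astype(np.float32)

# Normalization: scale pixel values into the [0, 1] range
normalized = images / 255.0

# Simple augmentations of the kind described in the text:
# inversion (horizontal flip) and rotation
flipped = normalized[:, :, ::-1]
rotated = np.rot90(normalized, k=1, axes=(1, 2))
```

Keeping inputs in [0, 1] keeps gradient magnitudes on a comparable scale across pixels, which is what stabilizes and speeds up training.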
The CNN is employed for both feature extraction and
classification, capturing spatial features such
as edges, textures, and stroke variations. Among the
techniques evaluated, the CNN achieved the best
recognition performance.
Table 2 shows the accuracy results of the Lontara
character recognition system evaluation. The CNN
method proposed in this study has demonstrated high
accuracy in recognizing variations in Lontara
handwriting. The CNN effectively learns spatial
features, building increasingly complex
representations in deeper network layers for
classification.
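To illustrate how a convolutional layer captures spatial features such as edges and strokes, the following minimal sketch applies a hand-written "valid" 2-D convolution with a Sobel-style kernel to a synthetic vertical stroke (the image and kernel are illustrative; in the trained network the kernels are learned, not fixed):

```python
import numpy as np

def conv2d(image, kernel):
    # 'Valid' 2-D cross-correlation (no padding), as computed
    # by a convolutional layer
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical stroke (one ink column) on a blank 5x5 background
image = np.zeros((5, 5))
image[:, 2] = 1.0

# Sobel-style kernel that responds strongly to vertical edges
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

response = conv2d(image, kernel)
```

The response is strongly positive just left of the stroke and strongly negative just right of it, i.e. the filter localizes the stroke's edges; stacking such layers yields the increasingly complex representations mentioned above.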
Table 2: Test results for each class.

Class    Accuracy (%)    Precision    Recall    F1-Score
A        100.00          0.7812       1.0000    0.8772
Ba        96.15          1.0000       0.9615    0.9804
Ca        91.66          0.9429       0.9167    0.9296
Da        95.65          0.9565       0.9565    0.9565
Ga       100.00          1.0000       1.0000    1.0000
Ha        93.10          0.9310       0.9310    0.9310
Ja       100.00          0.9667       1.0000    0.9831
Jo       100.00          1.0000       1.0000    1.0000
Ka       100.00          0.9615       1.0000    0.9804
La        88.88          0.8889       0.8889    0.8889
Lo       100.00          1.0000       1.0000    1.0000
Ma        92.85          0.9630       0.9286    0.9455
Mpa       93.75          0.9375       0.8824    0.9091
Na       100.00          0.9722       1.0000    0.9859
Nca      100.00          0.9688       1.0000    0.9841
Nga       93.33          1.0000       0.9333    0.9655
Ngka     100.00          0.9615       1.0000    0.9804
No       100.00          1.0000       1.0000    1.0000
Nra       96.42          0.9643       0.9643    0.9643
Nya       94.11          0.9697       0.9412    0.9552
Pa       100.00          1.0000       1.0000    1.0000
Ra       100.00          0.9394       1.0000    0.9688
Sa        96.87          1.0000       0.9688    0.9841
Ta       100.00          0.8571       1.0000    0.9231
To        92.85          1.0000       0.9286    0.9630
Wa        96.87          0.9688       0.9688    0.9688
Wo       100.00          1.0000       1.0000    1.0000
Ya        78.78          1.0000       0.7879    0.8814
Total     96.19          0.9619       0.9619    0.9619
A comparison between Hidayat's research (2019)
and our research (2025) shows clear differences
in methods and results, as summarized in Table 3.
Hidayat's study used contour-feature-based
segmentation with sliding windows, followed by
character recognition using a Convolutional Neural
Network (CNN), achieving an accuracy of 96%.
However, that study had difficulty distinguishing
very similar characters, such as "Ta" and the
diacritic "O". Our study integrates zoning feature
extraction with a CNN, dividing images into small
zones to capture local features. This approach
raises recognition accuracy to 96.19% and
overcomes the segmentation problems faced by
Hidayat, particularly in distinguishing similar
characters.
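The zoning idea can be sketched as follows: the image is partitioned into a grid of small zones and a local feature is computed per zone. The specific choices below (a 4x4 grid and mean ink density as the per-zone feature) are illustrative assumptions, not necessarily the exact configuration used in this study:

```python
import numpy as np

def zoning_features(image, grid=(4, 4)):
    # Split the image into grid zones and compute the mean ink
    # density of each zone, yielding a compact local-feature vector
    h, w = image.shape
    gh, gw = grid
    zh, zw = h // gh, w // gw
    feats = []
    for i in range(gh):
        for j in range(gw):
            zone = image[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw]
            feats.append(zone.mean())
    return np.array(feats)

# Illustrative 16x16 binary character image with ink
# confined to the top-left quadrant
image = np.zeros((16, 16))
image[:8, :8] = 1.0

features = zoning_features(image, grid=(4, 4))
```

Because each feature is tied to a fixed region of the image, the resulting vector preserves where the ink lies, which is what helps separate characters that differ only in a localized stroke.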