Intelligent Classification of Different Types of Plastics using Deep
Transfer Learning
Anthony Ashwin Peter Chazhoor¹, Manli Zhu¹, Edmond S. L. Ho¹, Bin Gao² and Wai Lok Woo¹
¹Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, NE1 8ST, U.K.
²School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China
Keywords: Deep Learning, Transfer Learning, Image Classification, Recycling.
Abstract: Plastic pollution has affected millions of people globally. Research shows tiny plastic particles in the food we eat, the water we drink, and even in the air we breathe. An average human ingests around 74,000 microplastic particles every year, which significantly affects the health of living beings. This pollution must be addressed before it severely impacts the world. We have comprehensively compared three state-of-the-art models on the WaDaBa dataset, which contains images of different types of plastics. These models are capable of classifying different types of plastic waste that can be reused or recycled, thus limiting wastage.
1 INTRODUCTION
Plastics refer to a wide range of materials that can be formed, cast, spun, or applied as a coating at some point during the manufacturing process. Synthetic polymers are ordinarily made by polymerizing monomers obtained from oil or gas, and plastics are often manufactured by adding different chemical additives to them, improving manufacturing and material performance such as flexibility, longevity, and aesthetics (Thompson et al., 2009). Plastic has various uses in day-to-day life and is used abundantly around the globe, as it is affordable, lightweight and suitable for a wide range of applications. Depending on the application, some types of plastics are recyclable while others are disposed of after a single use (Bonifazi et al., 2018). Approximately 359 million tonnes of plastics are produced every year, and this number is set to increase in the coming years due to their excessive use (Ferdous et al., 2021). Textiles, industrial machinery, consumer and institutional products, building and construction, and the electrical and electronic industries use plastic on a large scale (Geyer et al., 2017).
Worldwide, 6.3 billion tonnes of plastic waste have been generated to date (Mazhandu and Muzenda, 2019), which has become a global environmental issue. Moreover, only 19 percent of this waste is recycled; the rest is dumped into landfills or incinerated. Plastic biodegradation is a prolonged process. Almost 40 percent of the plastic waste generated comes from the packaging industry, which is the highest waste generator in its segment (Balwada et al., 2021). If one tonne of plastic is recycled, around 5,774 kilowatt-hours of energy is generated, and approximately 16.3 barrels of oil and 22.9 cubic meters of landfill space can be saved, together with avoiding the environmental impact of its incineration, which releases toxic gases into the atmosphere (Ferdous et al., 2021). Plastic has polluted the marine ecosystem and is found everywhere from seafood to the deepest ocean trenches; most of the plastic that enters the ocean is sourced from land (Harris et al., 2021). This massive volume of plastics can be reused and recycled. The main challenge is to reduce plastic pollution by minimizing plastic use, reusing existing plastic materials, and recycling suitable types of plastic. Domestic and industrial plastics can be segregated and categorized according to their respective types, which helps minimize their impact by differentiating recyclable and reusable plastics from dead-end plastics, and this can be done with modern image classification methods. However, it is challenging and time-consuming to classify these waste plastics manually, which motivates the automation of plastic waste segregation based on type. With the advance of computer vision and deep neural networks, the classification of objects in images and their localization has become commercially accessible and available at a lower price, with its accuracy increasing every day (Eitel et al., 2015).
This paper aims to benchmark three widely used architectures on the WaDaBa dataset and identify the best-performing model with the support of transfer learning. To ease the recycling process worldwide, seven types of plastic have been categorized based on their chemical composition, as detailed in Table 1. PET, HDPE, PP and PS dominate household waste, and segregating them into their respective types allows certain types to be reused and others to be recycled (Bobulski and Kubanek, 2021). This is the first benchmark paper aimed at classifying different types of plastics from images using deep learning models, and it can stimulate research in this area and serve as a baseline for future work.

Table 1: Types of plastics and examples.

Types of plastic | Examples
1. Polyethylene Terephthalate (PET or PETE) | Beverage bottles, food bottles
2. High-Density Polyethylene (HDPE) | Milk cartons, detergent bottles
3. Polyvinyl Chloride (PVC or Vinyl) | Plumbing pipes, credit cards
4. Low-Density Polyethylene (LDPE) | Plastic wrap, sandwich and bread bags
5. Polypropylene (PP) | Straws, bottle caps, prescription bottles
6. Polystyrene (PS or Styrofoam) | Cups, takeout food containers
7. Other | Baby bottles, electronics, CDs, DVDs
2 RELATED WORK
Plastics can be sorted manually or by using sophisticated technologies based on differences in their chemical, optical, electrical, and physical properties.
In 1994, Inculet et al. patented the separation of waste plastic materials using electrostatics. In this method, the waste plastic is shredded into small pieces and then separated electrostatically after being charged by suitable means. The waste materials are separated based on the different rates of contact charge picked up by the plastic materials (Inculet et al., 1994).
Safavi et al. proposed the use of visible reflectance spectroscopy, which was fast and accurate, to separate polypropylene resins based on their colour using the "Three-Filter" identification algorithm; however, it was limited to only a single type of plastic (Safavi et al., 2010).
In 2012, Masoumi et al. proposed sorting different types of plastics using near-infrared (NIR) spectroscopy (Masoumi et al., 2012). Infrared is used to detect a wide range of materials, including plastics and metals (Gao et al., 2017); (Gao et al., 2014). Using NIR spectroscopy with two specific wavelengths, plastic resins can be correctly identified, but its use is limited to light-coloured plastics only (Feng et al., 2018). The NIR light reflects from the plastic surface and is received by a receiver, and the plastic is categorized based on the intensity of the reflection (Masoumi et al., 2012).
The presence and amount of many elements can be identified by a spectroscopic technique called X-ray fluorescence (XRF). The energy irradiated by XRF can classify plastics based on their chemical composition accurately, but at a relatively high cost and with health concerns (Chaqmaqchee et al., 2017); (Ahmed et al., 2020).
Agarwal et al. achieved an accuracy of 99.7% in differentiating five types of plastics using supervised deep learning on the WaDaBa database. Triplet loss and Siamese network architectures were used to obtain the results (Agarwal et al., 2020).
With their deep architectures and capacity to learn more complex models, deep neural networks (DNN) have a clear advantage over traditional approaches to classification. Robust training techniques make it possible to learn complex object representations without having to design features by hand, which has been clearly demonstrated on the challenging ImageNet classification task over a wide range of classes (Szegedy et al., 2013). These properties make the DNN a natural fit for the classification of different types of plastic waste.
This paper benchmarks existing models, namely ResNet-50, AlexNet and ResNeXt, using transfer learning, under-sampling and weight balancing to classify the WaDaBa dataset.
3 METHODOLOGY
3.1 Database
The WaDaBa dataset is used for the experiments; it can be requested from its creator via the WaDaBa website after signing a consent form. The dataset consists of 4000 images, the majority of which are PET images (2200), followed by PP images (640), PE-HD images (600), PS images (520) and other images (40). Each image contains a single object that has been deformed to varying degrees to mimic natural settings (Bobulski and Piatkowski, 2017).
3.2 Convolutional Neural Network
With the advancement of convolutional neural networks, deep learning has become the primary tool for classification problems. Deep learning is an end-to-end method based on neural networks (Koh et al., 2021). CNNs have achieved unprecedented success in the field of image processing (Ruan et al., 2020), and because of its superior performance in computer vision, deep learning has transformed a variety of sectors (Zhao et al., 2021). The Convolutional Neural Network (CNN) is a popular deep learning model for image classification (Fadli and Herlistiono, 2020). A convolutional layer, a pooling layer, and a fully connected layer are used in a CNN to extract features and recognize targets (Luo et al., 2019). The convolutional layer and pooling layer are the foundation of a CNN. The network is trained using a back-propagation algorithm (Yang et al., 2021).
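As an illustration of this convolution, pooling and fully connected structure, a minimal sketch in PyTorch is given below. PyTorch is an assumed framework (the paper does not name its implementation), and the channel counts and 224x224 input size are illustrative choices rather than values from the paper.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Minimal conv -> pool -> fully connected classifier (illustrative only)."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer extracts local features
            nn.ReLU(),
            nn.MaxPool2d(2),                               # pooling layer halves the spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # fully connected layer for class scores

    def forward(self, x):
        # expects 3 x 224 x 224 inputs; 224 is halved twice by the pooling layers
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))
```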
3.3 Deep Transfer Learning
CNNs are excellent at image recognition, but they require a large amount of training data and take considerable time to train. By using transfer learning, we can overcome these limitations. Transfer learning helps to train on new data with the help of knowledge learned from previous data; a pre-trained model is generally utilized and fine-tuned.
The pre-trained model is a deep learning model trained on a large benchmark dataset such as ImageNet and typically excels at extracting image features. Transfer learning also helps to effectively avoid over-fitting. Thus, pre-trained models tend to perform better during training (Zeng et al., 2021).
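In practice, this fine-tuning amounts to loading ImageNet weights and replacing the final classification layer so that it outputs the number of plastic classes. The sketch below shows this under the assumption that torchvision supplies the pre-trained backbone; the paper does not state which library was used.

```python
import torch.nn as nn
from torchvision import models

num_classes = 5                                  # PET, PE-HD, PP, PS, Other
model = models.resnet50(pretrained=True)         # weights learned on ImageNet
model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace the 1000-way ImageNet head
```

The same pattern applies to the other backbones; only the name of the final layer changes between architectures.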
3.4 Classification Models
3.4.1 ResNet-50
ResNet-50 is a convolutional neural network with 50 layers. ResNet employs residual blocks, built around skip connections, which provide a more direct gradient flow. Even when the network is very deep, this reduces complications such as the vanishing gradient problem (He et al., 2016).
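A minimal sketch of a residual block with its skip connection follows; the actual ResNet-50 uses bottleneck blocks with batch normalisation (He et al., 2016), so this is a simplified illustration only.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Simplified residual block: two convolutions plus an identity skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)   # skip connection: gradients flow directly through the identity path
```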
3.4.2 AlexNet
AlexNet is a neural network with five convolutional layers and three fully connected layers and was introduced by Alex Krizhevsky in 2012. By expanding network depth and employing multi-parameter optimization techniques, AlexNet improves learning capacity. After AlexNet's outstanding performance on the ImageNet dataset in 2012, CNN-based applications became popular (Krizhevsky, 2014).
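For completeness, the same fine-tuning idea applied to AlexNet (torchvision assumed): AlexNet's head is a Sequential classifier, so its last linear layer is replaced rather than a single .fc attribute.

```python
import torch.nn as nn
from torchvision import models

alexnet = models.alexnet(pretrained=True)
alexnet.classifier[6] = nn.Linear(alexnet.classifier[6].in_features, 5)  # five plastic classes
```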
3.4.3 ResNeXt
Facebook proposed the ResNeXt model, which ranked second in the ILSVRC 2016 classification competition and improved COCO detection performance. The ResNeXt model introduced a new essential parameter, cardinality, alongside width and depth. When expanding model capacity, increasing cardinality has been shown to be more effective than going deeper or wider, especially where going deeper or wider yields diminishing returns (Hitawala, 2018).
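Cardinality corresponds to the number of parallel transformation paths in a block, which in practice is realised with grouped convolutions. The one-line sketch below (PyTorch assumed) splits 128 channels into 32 groups, matching the default cardinality of the ResNeXt-50 (32x4d) variant.

```python
import torch.nn as nn

# 32 groups = cardinality 32: each group convolves its own 4-channel slice independently
grouped_conv = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=32)
```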
3.5 Experimental Settings
The WaDaBa dataset was requested from its creator by signing a consent form; the dataset has been described in Section 3.1.
3.5.1 Imbalance in the Data
The classes in the dataset have an unequal number of images: the first class (PET) has 2200 images, while the last class (Others) has only 40. It is quite challenging to obtain images for certain types of plastic due to their size and cost. Due to this class imbalance, an under-sampling approach was adopted along with a balanced weight distribution over the WaDaBa classes. Five hundred images were selected from each of the first four classes and split 80 percent for training and the remaining 20 percent for testing. From the last class, 32 images were taken for training and the remaining 8 for testing.
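The class-weighting side of this setup can be sketched as below. The per-class training counts follow the split just described, while the inverse-frequency weighting formula is an assumption about how the "balanced weight distribution" was computed; the paper does not give the exact scheme.

```python
import torch
import torch.nn as nn

# training images per class after under-sampling: PET, PE-HD, PP, PS, Other
train_counts = torch.tensor([400.0, 400.0, 400.0, 400.0, 32.0])

# inverse-frequency ("balanced") weights: rare classes get proportionally larger weights
class_weights = train_counts.sum() / (len(train_counts) * train_counts)

criterion = nn.CrossEntropyLoss(weight=class_weights)
```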
3.5.2 Model Parameters
Before training, the images were normalized and passed through a series of augmentation techniques, namely random horizontal flip and center crop. The training images were then fed to the ResNet-50, AlexNet and ResNeXt architectures, and the dataset was run through each of these models for 20 epochs. The optimizer used was Stochastic Gradient Descent (SGD) with a learning rate of 0.001 and a momentum of 0.9, and cross-entropy loss was used for all experiments.
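A sketch of this training configuration is given below. PyTorch/torchvision are assumed, and the 224x224 crop size and ImageNet normalisation statistics are conventional defaults rather than values reported in the paper.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models, transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),                   # augmentation: random horizontal flip
    transforms.CenterCrop(224),                          # augmentation: center crop to a fixed size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],     # normalization (ImageNet statistics)
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)                 # one of the three benchmarked backbones
model.fc = nn.Linear(model.fc.in_features, 5)            # five WaDaBa classes
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()                        # cross-entropy loss
num_epochs = 20
```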
Figure 1: Accuracy and loss curves of training and testing for ResNet-50.
Figure 2: Accuracy and loss curves of training and testing for AlexNet.
Figure 3: Accuracy and loss curves of training and testing for ResNeXt.
Figure 4: ROC and AUC comparison between different models.
Once the training was completed, the testing accuracy was computed; the results are given in Table 2.
4 EXPERIMENTAL RESULTS
4.1 Accuracy Results
The ResNeXt architecture shows the highest testing accuracy at 91 percent, followed by ResNet-50 with an accuracy of 89 percent and AlexNet with an accuracy of 88 percent. The accuracy and loss curves over the training epochs for the ResNet-50, AlexNet and ResNeXt architectures are given in Figures 1, 2 and 3 respectively. From the graphs, we can see that the training and testing accuracies increase with the number of epochs and, once they reach a certain threshold, remain stable. Similarly, the loss decreases as the number of epochs increases. We can also infer from the accuracy-versus-epoch curves that there is no over-fitting.
From the ROC curves and AUC values in Fig. 4, we can see that all three models have very high AUC. ResNeXt achieves the best performance, with ResNet-50 attaining a very similar AUC, followed by AlexNet. The reason is that both ResNet-50 and ResNeXt-50 are deeper models than AlexNet.
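The per-class ROC curves and AUC values in Fig. 4 can be derived from the softmax scores on the test set. The short sketch below shows one possible way to compute a macro-averaged multi-class AUC with scikit-learn; this is an assumption about tooling rather than the paper's stated procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def macro_auc(y_true: np.ndarray, y_score: np.ndarray) -> float:
    """y_true: integer labels (n_samples,); y_score: softmax probabilities (n_samples, 5)."""
    return roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")
```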
Table 2: Accuracy comparison between different models.

Pre-trained Network | Accuracy
ResNeXt | 91 %
ResNet-50 | 89 %
AlexNet | 88 %
5 CONCLUSIONS
In this paper, we have benchmarked the accuracy of three different models on the WaDaBa dataset, which helps in automatically classifying different types of plastic waste that can be reused or recycled. This will help increase the overall recycling of plastic products and thus reduce plastic waste. The approach can be used in the recycling industry to classify different plastic wastes and in further research to segregate them into their respective classes. From the results, we can see that the ResNeXt model achieved the highest accuracy.
Once the plastic is correctly classified, it can be segregated with the help of an air nozzle or a robotic arm. Future work includes the localization and detection of plastic objects.
REFERENCES
Agarwal, S., Gudi, R., and Saxena, P. (2020). One-shot learning based classification for segregation of plastic waste. In 2020 Digital Image Computing: Techniques and Applications (DICTA), pages 1–3.
Ahmed, J., Gao, B., Woo, W. L., and Zhu, Y. (2020). Ensemble joint sparse low-rank matrix decomposition for thermography diagnosis system. IEEE Transactions on Industrial Electronics, 68(3):2648–2658.
Balwada, J., Samaiya, S., and Mishra, R. P. (2021). Packaging plastic waste management for a circular economy and identifying a better waste collection system using analytical hierarchy process (AHP). Procedia CIRP, 98:270–275.
Bobulski, J. and Kubanek, M. (2021). Deep learning for plastic waste classification system. Applied Computational Intelligence and Soft Computing, 2021.
Bobulski, J. and Piatkowski, J. (2017). PET waste classification method and plastic waste database WaDaBa. In International Conference on Image Processing and Communications, pages 57–64. Springer.
Bonifazi, G., Capobianco, G., and Serranti, S. (2018). A hierarchical classification approach for recognition of low-density (LDPE) and high-density polyethylene (HDPE) in mixed plastic waste based on short-wave infrared (SWIR) hyperspectral imaging. Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, 198:115–122.
Chaqmaqchee, F. A. I., Baker, A. G., and Salih, N. F. (2017). Comparison of various plastics wastes using X-ray fluorescence. American Journal of Materials Synthesis and Processing, 5(2):24–27.
Eitel, A., Springenberg, J. T., Spinello, L., Riedmiller, M., and Burgard, W. (2015). Multimodal deep learning for robust RGB-D object recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 681–687.
Fadli, V. F. and Herlistiono, I. O. (2020). Steel surface defect detection using deep learning. Int. J. Innov. Sci. Res. Technol, 5:244–250.
Feng, Q., Gao, B., Lu, P., Woo, W. L., Yang, Y., Fan, Y., Qiu, X., and Gu, L. (2018). Automatic seeded region growing for thermography debonding detection of CFRP. NDT & E International, 99:36–49.
Ferdous, W., Manalo, A., Siddique, R., Mendis, P., Zhuge, Y., Wong, H. S., Lokuge, W., Aravinthan, T., and Schubel, P. (2021). Recycling of landfill wastes (tyres, plastics and glass) in construction: a review on global waste generation, performance, application and future opportunities. Resources, Conservation and Recycling, 173:105745.
Gao, B., Bai, L., Woo, W. L., and Tian, G. (2014). Thermography pattern analysis and separation. Applied Physics Letters, 104(25):251902.
Gao, B., Li, X., Woo, W. L., and Tian, G. Y. (2017). Physics-based image segmentation using first order statistical properties and genetic algorithm for inductive thermography imaging. IEEE Transactions on Image Processing, 27(5):2160–2175.
Geyer, R., Jambeck, J. R., and Law, K. L. (2017). Production, use, and fate of all plastics ever made. Science Advances, 3(7):e1700782.
Harris, P., Westerveld, L., Nyberg, B., Maes, T., Macmillan-Lawler, M., and Appelquist, L. (2021). Exposure of coastal environments to river-sourced plastic pollution. Science of The Total Environment, 769:145222.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Hitawala, S. (2018). Evaluating ResNeXt model architecture for image classification. arXiv preprint arXiv:1805.08700.
Inculet, I. I., Castle, G., and Brown, J. D. (1994). Electrostatic separation of mixed plastic waste. US Patent 5,289,922.
Koh, B. H. D., Lim, C. L. P., Rahimi, H., Woo, W. L., and Gao, B. (2021). Deep temporal convolution network for time series classification. Sensors, 21(2):603.
Krizhevsky, A. (2014). One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997.
Luo, Q., Gao, B., Woo, W. L., and Yang, Y. (2019). Temporal and spatial deep learning network for infrared thermal defect detection. NDT & E International, 108:102164.
Masoumi, H., Safavi, S. M., and Khani, Z. (2012). Identification and classification of plastic resins using near infrared reflectance. Int. J. Mech. Ind. Eng, 6:213–220.
Mazhandu, Z. S. and Muzenda, E. (2019). Global plastic waste pollution challenges and management. In 2019 7th International Renewable and Sustainable Energy Conference (IRSEC), pages 1–8. IEEE.
Ruan, L., Gao, B., Wu, S., and Woo, W. L. (2020). DeftectNet: Joint loss structured deep adversarial network for thermography defect detecting system. Neurocomputing, 417:441–457.
Safavi, S., Masoumi, H., Mirian, S., and Tabrizchi, M. (2010). Sorting of polypropylene resins by color in MSW using visible reflectance spectroscopy. Waste Management, 30(11):2216–2222.
Szegedy, C., Toshev, A., and Erhan, D. (2013). Deep neural networks for object detection.
Thompson, R. C., Swan, S. H., Moore, C. J., and Vom Saal, F. S. (2009). Our plastic age.
Yang, X., Zhang, Y., Lv, W., and Wang, D. (2021). Image recognition of wind turbine blade damage based on a deep learning model with transfer learning and an ensemble learning classifier. Renewable Energy, 163:386–397.
Zeng, F., Li, X., Deng, X., Yao, L., and Lian, G. (2021). An image classification model based on transfer learning for ulcerative proctitis. Multimedia Systems, pages 1–10.
Zhao, W., Chen, F., Huang, H., Li, D., and Cheng, W. (2021). A new steel defect detection algorithm based on deep learning. Computational Intelligence and Neuroscience, 2021.