DL-CNN: Double Layered Convolutional Neural Networks 
Lixin Fu and Rohith Rangineni  
Department of Computer Science, University of North Carolina at Greensboro, Greensboro, NC 27401, U.S.A. 
Keywords: Convolutional Layers, Double Layers, Neural Networks, Classification, Image Processing. 
Abstract: We studied traditional convolutional neural networks and developed a new model that uses double layers instead of a single layer. Our instance of this model uses five convolutional layers and four fully connected layers. The dataset contains four thousand human face images in two classes: open eyes and closed eyes. In this project, we dissected the original source code of the standard package into several components and changed some of the core parts to improve accuracy. In addition to using both the current layer and the prior layer to compute the next layer, we also explored skipping the current layer. We changed the original convolution window formula. We also proposed a multiplicative bias in place of the bias that is normally added to the linear combination. Though the rationale is hard to explain, the multiplicative bias produced better results in our example. For our new double-layer model, simulation results showed that accuracy increased from 60% to 95%.
1 INTRODUCTION 
Convolutional Neural Networks (CNNs) have long been the main classification algorithm for image processing, yet their accuracy can be further improved. To this end, we dissected the CNN source code from the popular PyTorch Python package. We then substantially changed some core parts of the algorithm by applying multiple connected layers and skip layers, generating the input from the prior layer, and observed whether the newly developed algorithms improve accuracy over the original algorithm.
In our research we modified and implemented a new CNN classifier called DL-CNN (Double-Layer CNN), which computes the current layer from the two previous layers. As our experiments show, the model's performance is significantly better on the test cases in terms of classification accuracy.
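The double-layer idea can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the linear combination rule, the weight shapes, and the ReLU activation are all assumptions made for the sketch, since the exact formula is presented later in the paper.

```python
import numpy as np

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(x, 0.0)

def double_layer_forward(h_prev, h_curr, W_prev, W_curr):
    """Compute the next layer from the TWO preceding layers.

    A standard feed-forward network would use only h_curr;
    in the double-layer scheme the prior layer h_prev also
    contributes (here via an illustrative linear rule).
    """
    return relu(h_curr @ W_curr + h_prev @ W_prev)

# Toy example: two 4-dimensional layer outputs feed a 3-unit layer.
rng = np.random.default_rng(0)
h0 = rng.standard_normal(4)        # output of layer k-1
h1 = rng.standard_normal(4)        # output of layer k
W0 = rng.standard_normal((4, 3))   # weights applied to the prior layer
W1 = rng.standard_normal((4, 3))   # weights applied to the current layer
h2 = double_layer_forward(h0, h1, W0, W1)
print(h2.shape)  # (3,)
```

Setting W_prev to zero recovers an ordinary single-layer update, which is one way to see that the double-layer model strictly generalizes the standard one.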
The remainder of the paper is structured as follows. The next section presents related work, the implementation of a convolutional neural network, and recent developments. Section 3 deals with the architecture, various parameters, activation functions, FC layers, forward and backward propagation, and the topology of convolutional neural networks. Section 4 explains our network implementation with our new methods. Section 5 covers simulations and results, including our models and the original model. Section 6 gives a conclusion and suggests future improvements.
2 RELATED WORK 
The earliest neural model was proposed by Walter Pitts and Warren McCulloch in their seminal paper (McCulloch, 1943). They introduced the concept of a set of neurons and synapses. Then, Frank Rosenblatt invented a single-layer neural network called the "perceptron", which uses a simple step function as its activation function (Rosenblatt, 1957). In 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams published a paper on "backpropagation" (Rumelhart, 1986), which started the training of multi-layered networks. Yann LeCun et al. proposed the Convolutional Neural Network (CNN) (Lecun, 1989).
Convolutional neural networks are widely used in the field of computer vision. Their structure consists of hidden layers: convolutional layers, pooling layers, fully connected layers, and normalization layers. In convolutional neural networks, convolution and pooling operations are applied, typically followed by a nonlinear activation function.
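The convolution, activation, and pooling stages described above can be sketched in plain NumPy. This is a generic illustration of the standard CNN building blocks, not the network used in this paper; the kernel size, averaging kernel, and pooling window are arbitrary choices for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1) on a single channel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size-by-size window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
k = np.ones((3, 3)) / 9.0                         # 3x3 averaging kernel
feat = np.maximum(conv2d(img, k), 0)              # convolution + ReLU activation
pooled = max_pool(feat)                           # 2x2 max pooling
print(feat.shape, pooled.shape)  # (4, 4) (2, 2)
```

Note the shape arithmetic: a 3x3 kernel over a 6x6 input yields a 4x4 feature map (6 - 3 + 1 = 4), which 2x2 pooling then halves to 2x2. The fully connected layers of a CNN consume the flattened pooled features.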
In the field of Natural Language Processing, Recurrent Neural Networks (RNNs) are used. RNNs are widely applied to hand-writing