Neurosymbolic Spike Concept Learner towards Neuromorphic
General Intelligence
Ahmad Najiy Wahab, Khaled Mahbub and Abdel-Rahman Tawil
School of Computing and Digital Technology, Birmingham City University, Birmingham, U.K.
Keywords: Neuromorphic General Intelligence, Spiking Neural Networks, Functional Plasticity, Structural Plasticity,
Neurosymbolic, Representation Learning, Concept Learning.
Abstract: Current research in the area of concept learning makes use of deep learning and ensemble methods to learn
concepts. Concept learning allows us to combine heterogeneous entities in data which could collectively
identify as individual concepts. Heterogeneity and compositionality are crucial areas to explore in machine
learning, as they have the potential to contribute profoundly to artificial general intelligence. We investigate the
use of spiking neural networks for concept learning. Spiking neurones model the temporal
properties observed in biological neurones. One benefit of spike-based neurones is that they allow localised learning
rules that adapt only the connections between relevant neurones. In this position paper, we propose a technique
allowing the dynamic formation of synapses (connections) in spiking neural networks, the basis of structural
plasticity. Achieving dynamic synapse formation allows for a unique approach to concept learning with a
malleable neural structure. We call this technique the Neurosymbolic Spike-Concept Learner (NS-SCL). The
limitations of NS-SCL can be overcome with the neuromorphic computing paradigm. Furthermore,
introducing NS-SCL as a technique on neuromorphic platforms should motivate a new direction of research
towards Neuromorphic General Intelligence (NGI), a term we define to some extent.
1 INTRODUCTION
Neuromorphic computing (NC) is introducing a new computing paradigm, along with a class of processors
that differ from conventional computers, to simulate neural networks. The neural networks that NC adopts
behave as closely as possible to the spike-based neurones found in biology. Initially, the goal of NC was to simulate
the brain with large-scale integrations of hardware-based neurones. With the prominent advancement of
deep learning algorithms, dedicated processors or Neural Processing Units (NPUs) are being developed
to accelerate machine learning algorithms.
To mention a few NPUs and their use cases: the Tensor Processing Unit (TPU) by Google has been
developed to accelerate deep learning algorithms for consumers through Google’s cloud AI compute
services (Sengupta, Kubendran, Neftci, & Andreou, 2020); Vision Processing Units (VPUs) have been
developed to serve as co-processors that accelerate vision compute tasks, and a few can perform image
inference tasks (Barry & Riordan, 2015; Rivas-Gomez, Pena, Moloney, Laure, & Markidis, 2018);
Field Programmable Gate Arrays (FPGAs) have been used to conduct research on neural networks and their
various learning mechanisms (Lammie, Hamilton, Van Schaik, & Azghadi, 2019; Perez-Peña, Cifredo-
Chacon, & Quiros-Olozabal, 2020; Rosado-Muñoz, Bataller-Mompeán, & Guerrero-Martínez, 2012), though
these applications are demonstrated mostly on visual pattern recognition (Liu & Yue, 2019).
It is important to note that the majority of NPUs are currently being developed to accelerate existing
neural network algorithms. Only a few are being developed at the frontier of brain-inspired computing
research, which focuses on biologically plausible neural models. Plausible neural models consist of
neurones that function much closer to the neurones observed in our central nervous system (CNS).
These plausible neurones are the spike-based neurones that make up the 3rd generation of neural
networks. Intel’s Loihi, IBM’s TrueNorth, SpiNNaker and Neurogrid (Boahen, 2017; Davies et
al., 2018; Debole et al., 2019; Painkras et al., 2013) are a few examples that could be considered NPUs
with 3rd-generation neural networks but are distinctively regarded as Neuromorphic Processors.
The defining aspect of Neuromorphic Processors is
their scalability and massively parallel compute
potential to simulate spiking neurones at grand scales.
With neuromorphic devices on the rise, more specialised devices and hardware are being developed to
host and accelerate various neural networks. We acknowledge that these devices increase the
performance of such networks but, from a much broader perspective in the domain of neuromorphic
computing, advancement has been relatively slow on the algorithmic side, on what
could potentially become machine learning techniques unique to neuromorphic platforms.
It is highly probable, perhaps inevitable, that the future of artificial intelligence will bring mankind to
an age where general intelligence machines contribute to society much like human beings: reasoning
with and learning from their environment and causing real-world effects. To reach such an age, cognition and
behavioural dynamics in general intelligence machines should exhibit a likeness to the behaviours of
human beings. This may be achieved by somewhat imitating the underlying cognitive or
neurological processes observed in our central nervous systems. Alternatively, purely speculative
developments that do not imitate nature may also lead to general AI; this raises the question of which is
more favourable for the future of general AI. We may perhaps place more trust in general AI that works closer to
our biology than in more obscure forms of general AI whose rationality we cannot relate to.
Artificial general intelligence (AGI) has been considered the holy grail of AI for decades and has been the core
motivation of machine learning since the birth of the field. AGI is still anticipated to bring about
revolutionary advancement to science and technology, with a wave of machine reasoning.
Initially, AGI was considered as intelligence expressed by machines in contrast to the natural intelligence
expressed by humans. Nevertheless, general intelligence can also be considered, loosely yet more
specifically, as a form of intelligence that possesses cross-domain expertise.
Considering general intelligence in the context of neuromorphic computing, it is reasonable to define a
very specific branch of artificial general intelligence that is achieved through neuromorphic means as a
form of Neuromorphic General Intelligence (NGI). General intelligence can be achieved through
cognitive models consisting of various machine learning techniques, formal methods and algorithms
like State, Operator, And Result (SOAR), Adaptive Control of Thought – Rational (ACT-R) and the Learning
Intelligent Decision Agent (LIDA) (Bruckner, Zeilinger, & Dietrich, 2012). We can differentiate
NGI as an approach to general intelligence emerging from Neuromorphics, i.e. spike-based neural computing
platforms.
In this work, we investigate the possibility of delivering general intelligence on the neuromorphic
computing paradigm. Specifically, we propose to investigate a method of structural and functional
plasticity for learning on a neurosymbolic spike-based network, to achieve concept learning on
neuromorphic platforms. Concept learning is a crucial element in inaugurating generality in machine
intelligence.
Key questions this work will address towards NGI:
How can spike-based networks achieve concept
learning?
How will such spike-based concept learners be
employed in future neuromorphic platforms?
Will spike-based concept learners inform any
design specification for neuromorphic hardware?
The real-world use cases for concept learning emphasising structural plasticity lie in situations
where intelligent dynamical systems are required that are extensible, flexible and function in real time. The
world of the Internet of Things (IoT) presents such situations, as new sensors are constantly added to extend a
system across its wide variety of uses. Applying concept learning in IoT situations allows us to form real-time
associations between various sensors regardless of sensor type. Such intelligent dynamical
systems require algorithms that are unconstrained; we have proposed structural plasticity for our approach
so that it adapts with extensibility, as associations can form with new components whilst existing associations
remain unaffected yet functional. The intelligence aspect of our approach as a concept learner lies in the
neurosymbolic space. Concepts could consist of entities in the symbolic space, representing
composites derived from heterogeneous data streamed from different IoT sensors. The universality of the
neurosymbolic space, owing to this heterogeneity, enables the inter-correlation of information across different
sensors in an IoT ecosystem. IoT sensor data requires spike encoding for spike-based neural
processing. The encoding provides a standard mechanism for processing data, further promoting the
universality of the symbolic space.
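To make the encoding step concrete, the sketch below rate-codes a single sensor reading into a spike train, with the normalised reading setting the per-tick firing probability. This is a minimal sketch under our own assumptions: the function name, parameters and the choice of rate coding are illustrative, as the paper does not prescribe a particular encoder.

```python
import numpy as np

def rate_encode(value, v_min, v_max, n_ticks=100, max_rate=0.5, rng=None):
    """Encode one sensor reading as a Boolean spike train of n_ticks steps.

    The reading is normalised to [0, 1]; higher readings yield a higher
    probability of a spike at each simulation tick (simple rate coding).
    """
    rng = rng or np.random.default_rng()
    level = np.clip((value - v_min) / (v_max - v_min), 0.0, 1.0)
    return rng.random(n_ticks) < level * max_rate

# Example: a hypothetical temperature sensor bounded to [0, 50] degrees C.
spikes = rate_encode(21.5, v_min=0.0, v_max=50.0)
print(int(spikes.sum()), "spikes in", spikes.size, "ticks")
```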
The rest of this paper is presented as follows. In Section 2, we cover related work on inter-domain
knowledge, representation and concept learning. In Section 3, we briefly describe the different aspects of our
algorithm and their rationale, to realise our take on concept learning with spiking neurones. In Section 4,
we discuss the potential implications of our
techniques and the prospect of our approach to further
research in NGI.
2 RELATED WORK
Within the domain of computer science, machine learning has given machines the ability to exhibit
intelligence and, in some cases, to surpass humans in certain intelligence tasks. However, modern AI, from the current
academic standpoint, is considered narrow AI (weak AI), excelling at the very specific intelligence
task for which it was purposefully trained (Fjelland, 2020). Narrow AI is domain-specific and
requires human intervention to develop and utilise. Strong AI is not necessarily domain-specific
because it is, as it should be, domain-neutral (multi-domain). Strong AI should fundamentally
learn in an unsupervised, general manner and thus be educated into the domains it has been exposed to, as
opposed to being trained objectively and specifically.
With cross-domain knowledge in intelligent systems, agents could tackle problems in a more
rational way by addressing new information in light of patterns acquired from past
observations and inter-correlated across domains. This feat mirrors the inductive process facilitating
reasoning: making conclusions based on one or several pieces of evidence, in all their various forms, from past
observations and experiences. Concept learning is a strategy of combining features or attributes
(fundamental pieces of information) which collectively identify as one distinct concept. The goal
of concept learning, we assume in our context, is the ability to capture complex patterns based on past
observations. The complexity of the constituents making up complex patterns may be derived from the
heterogeneity of data. Concept learning is crucial to achieving the domain-unconstrained learning
complementary to general forms of intelligence.
Neural network algorithms are a sophisticated development that has given us the techniques to
perform classifications and predictions. However, a neural network as a technique alone is obscure, in the
sense that knowing the basis on which it reached its conclusions is near impossible. Its reasoning sits
behind a ‘wall of matrices’ with little to no semantic grounding. This has introduced a branch of AI
known as explainable AI (XAI) (Barredo Arrieta et al., 2020), in which the goal is to make AI as
transparent as possible. A further goal is to make AI that can explain the reasoning behind its own
conclusions or, failing that, be as transparent as possible for humans to interpret and understand. As humans,
we understand each other through the shared mental models and social context we innately possess and
grow up with. XAI aims to endow intelligent systems with a communicative style grounded in that shared
model and context, so that users can understand them with a minimal comprehension barrier.
Neuro-symbolic AI is one approach that can achieve reasoning with generality and yet with
interpretability; it couples the transparency of old-school rule-based symbolic AI with the
obscure internal workings of neural networks. Symbols are established to give fundamental meaning
to individual neurones in neural networks. Compositionality through hierarchical attribution is a
symbolic method to represent concepts.
Techniques for concept learning have been explored in machine learning, but the approaches
often employ ensemble methods, using multiple AI algorithms to accomplish concept learning. Existing
literature on concept learning features deep neural networks coupled with natural language processing,
taking visual and question-answer pairs as inputs in order to learn concepts in a joint visual-linguistic
space. This technique is known as grounded learning: using a joint representational space for both visual
and linguistic compositions. The shared space further benefits as the semantic interface for users and
interactors to understand the reasoning behind the AI’s conclusions. Outcomes on this line of
research include the Visual Concept-Metaconcept Learner (VCML) (Han, Mao, Gan,
Tenenbaum, & Wu, 2019), the Neuro-Symbolic Concept Learner (NS-CL) (Mao, Gan, Kohli, Tenenbaum, &
Wu, 2019) and Neuro-Symbolic Visual Question Answering (NS-VQA) (Yi et al., 2018).
Further research in the area of concept compositionality and semantic representation that
utilises spiking neural networks is the Semantic Pointer Architecture: Unified Network (SPAUN)
(Stewart, Choo, & Eliasmith, 2012). SPAUN is among the most accurate cognitive models built on
the spiking-neurone framework. The primary component of SPAUN is the Semantic Pointer
Architecture (SPA) (Blouw, Solodkin, Thagard, & Eliasmith, 2016), which has demonstrated
compositionality and symbolic induction. Compositionality allows items to form associations
making composites, in symbolic terms making up concepts. Symbolic induction in this case is the
predictive process based on sequences of temporally presented visual inputs, or more precisely, the temporal
sequences of activities in the symbolic space in correlation with the visual inputs.
For advanced concept learners, a certain set of input patterns could influence and affect other sets of
patterns regardless of sensory and perception modalities. Associations can be formed from data from
disparate sources such as sight, sound and touch; in computing this can extend to sensors beyond the common
sensory modalities of biology. Symbolically, a set of description-patterns can be regarded as individual
concepts through compositionality and symbolic induction. Conveniently, concepts could consist of
description-patterns fragmented across different modalities in a unified symbolic space.
For this position paper, we propose a structurally unconstrained concept learner that can learn
dynamically from heterogeneous data streams: the Neuro-Symbolic Spiking Concept Learner (NS-SCL).
3 SPIKE CONCEPT LEARNER
In artificial neural network (ANN) and deep learning (DL) frameworks, neural networks are substantially
non-spiking, and during the training phases the models undergo global weight changes. Spike-based
networks can be feasibly unique in this respect, as learning could involve making localised changes only
between relevant neurones. These localised changes are mediated and determined by spike activities and
are known to play essential roles in learning as observed in biology. Several learning rules have been
postulated due to the variety of learning dynamics observed between neurones in various regions of
our central nervous system. We will cover these learning rules further on in this section.
Figure 1: NS-SCL overlapping aspects.
Our proposed concept learner (NS-SCL) is based on the spiking neural framework, but we
deterministically and dynamically structure the network in a neuro-symbolic way based on incoming
spike-encoded data. In this work, we will exploit the localised learning of spike-based networks and their
temporal properties to learn neuro-symbolic constructs in an unsupervised way. We investigate
further the synaptic enhancements described in neuroscience as functional forms of plasticity:
how synapses (connections) between neurones adapt during neural processing and learning. We have
devised a learning mechanism for NS-SCL to achieve experiential learning. The learning mechanism is
inspired by functional plasticity observed in the central nervous system with regard to changes to
synapses (connections). The novelty lies in how we incorporate many synaptic enhancement profiles;
this is achieved through synapse manifolds, briefly covered in Section 3.3.
Furthermore, we investigate the creation and formation of synapses between neurones based on
synaptic enhancement conditions, as a form of structural plasticity. A new synapse indicates a new
association between neurones; in the symbolic space we can form associations between items, resulting in
the composition of higher symbolic constructs. The conditions for when a synaptic association should form
depend on the spikes (of data). Two neurones spiking together may be spuriously correlated, yet we still form
the synapse, treating such a synaptic association as a latent connection with no functional effect on the network.
A latent synapse will cease to exist after a given period if not subjected to further stimulation. However, if
further stimulation occurs, the latent connection will persist and should then have a functional effect
in the network. Persistent stimulation indicates a deterministically coordinated activity; it is then
reasonable to treat the synapse as overt, since the frequency of firings would satisfy some synaptic
enhancement conditions and could be part of a genuine association. We will briefly cover the
mechanism of structural plasticity within a neuro-symbolic space in Section 3.4.
Figure 2: Essential algorithms of NS-SCL.
The essential algorithms in Figure 2 are required to realise spike-based concept learning. The
functional plasticity aspect of the network is very much like that of ANNs and DL, with connections between
neurones strengthening or weakening based on the training data given. The difference is our adoption of
the spiking neurone paradigm, and so our approach to functional plasticity differs, being unique to the
spike-based paradigm. Structural plasticity is our approach to introducing a symbolic
space with spiking neural networks; this symbolic space allows us to examine and make sense of the
network, representing an unconstrained ontology-like space that holds relationships and concepts.
Introducing a neural-symbolic space with live learning mechanisms as concept learners opens the
possibility of a self-learning NGI agent that can learn through experience (with structural and functional
plasticity).
Figure 3: NS-SCL framework.

We will employ NS-SCL for concept learning in an IoT space, as such a space is rich with data of various
forms, with varying breadth and length. Here, NS-SCL will generate concepts constituting patterns
present in data across several sensor streams. Figure 3 illustrates the framework of NS-SCL. At process A,
data from sensor streams are encoded into spike-trains. At process B, we form new neurones and
synapses in the symbolic space where associations are non-existent, based on spike timings in the spike-
train evaluation space. At process C, we apply our manifold algorithm to process spikes in the evaluation
space, reinforcing associations where relevant in the symbolic space. In summary, the framework is a real-
time learning model with characteristics of functional and structural neuroplasticity.
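A minimal control-loop sketch of these three stages is given below. Every class, method and threshold here is a hypothetical placeholder standing in for processes A, B and C; it illustrates the flow of the framework rather than the authors' implementation.

```python
class NSSCLNetwork:
    """Toy stand-in for the NS-SCL neuro-symbolic network."""

    def __init__(self):
        self.neurones = {}   # symbolic items keyed by stream name
        self.synapses = {}   # associations keyed by (pre, post) pairs

    def form_structures(self, spike_trains, t):
        # Process B: create neurones (and, eventually, latent synapses)
        # for streams where no association exists yet.
        for stream in spike_trains:
            self.neurones.setdefault(stream, {"last_spike": t})

    def apply_manifolds(self, spike_trains, t):
        # Process C: reinforce existing associations via the manifold
        # rules (stubbed out here).
        pass

def encode(reading):
    # Process A: stand-in encoder; one tick, spike if reading > 0.5.
    return [reading > 0.5]

network = NSSCLNetwork()
for t, readings in enumerate([{"temp": 0.7, "light": 0.2}]):
    spike_trains = {name: encode(r) for name, r in readings.items()}  # A
    network.form_structures(spike_trains, t)                          # B
    network.apply_manifolds(spike_trains, t)                          # C
```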
3.1 Learning with Neuroplasticity
In ANNs and DL, the core algorithm for learning is
founded on functional plasticity. Functional plasticity
refers to the changes made to weightings of
connections between neurones as training takes place.
We will not go into further details about functional
plasticity regarding ANNs and DL since the direction
of this paper is towards spike-based functional
plasticity.
Functional plasticity is derived from synaptic enhancements in neuroscience. Synaptic
enhancements are the changes made to the neurotransmitter release probability observed at
the synapse between neurones. Effectively, a higher probability indicates more influence a neurone has,
through such a synapse, in causing another neurone to fire. Furthermore, short-lived synaptic enhancements
have been classified as paired-pulse facilitation, synaptic augmentation and post-tetanic potentiation
(Regehr, 2012). The variation between these classifications is the magnitude and duration for
which the synapse can influence subsequent neurones; these durations range from milliseconds to
minutes. For our concept learner, NS-SCL, we propose a novel mechanism for learning that caters for all
scopes of synaptic influence (facilitation, augmentation and potentiation) in one unified
neuro-symbolic spiking model. The method by which we achieve this is a technique we call
Temporal Scope Synapse Manifolds (TSSM), further covered in Section 3.3.
In biology, spiking activities and synaptic enhancements function on millisecond timescales.
With the compute performance of current general-purpose computers, simulating spiking
activities at natural speeds is unachievable; this is a core motivation of Neuromorphic Computing.
Nevertheless, we can exploit these mechanisms in simulation by altering the duration of spikes and the
simulation speed to a larger scale, in order to demonstrate the feasibility and potential of our
algorithm.
Structural plasticity is another form of neuroplasticity, regarding changes to the structure
of the network. It has been observed that neural structures are continuously growing and rearranging.
Dendrites are the parts of neurones that allow connections from and to other neurones. Dendritic spines
have been observed to appear and disappear depending on their relevance. Spines can last from
months down to a few days or less (Trachtenberg et al., 2002). It has been revealed that dendritic spines allow
for the formation of new synapses and are considered to play a role in learning. The dendritic spine evidently
implies that neural networks adapt not only by changes to connections but also by establishing new connections.
In ANNs and DL, the structure of the network is often defined at the beginning, during initialisation:
the number of layers, neurones and connection configurations. The structure of networks in ANNs
and DL does not change once initialised and remains constant, but the weightings are subject to
alteration during training. In spike-based networks, it is plausible to allow structural change, since in
biology this phenomenon happens continuously and frequently. The structural plasticity method for NS-
SCL will be briefly covered in Section 3.4.
3.2 Functional Plasticity Fundamentals
There is a well-known postulate by Donald Hebb from 1949 regarding the activities of neurones during
learning (Hebb, 1949). Hebbian learning, as it is known, has led to significant advancement in
machine learning over the past decades, specifically in the area of neural networks across its generations. The
most recent technological advancements of the past decade to emerge from this long-credited
postulate are the sophisticated deep learning algorithms.
Hebb's rule is a learning rule regarding the activities neurones exhibit during learning:
specifically, persistent stimulation of a neurone results in a rise in synaptic efficacy (influence on
subsequent neurones). With higher synaptic efficacy, the pre-synaptic neurone has more influence on
the post-synaptic neurones. Since the date of Hebb's postulate, the field of neuroscience has made
further progress through new observations of the biological processes of neurones and spike activities.
Later observations found that the timing of neurones firing is a critical component in the process
of learning.
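In its common textbook form (our formulation for illustration; the paper gives no equation), Hebb's rule makes the weight change proportional to the coincidence of pre- and post-synaptic activity:

```latex
\Delta w_{ij} = \eta \, x_i \, y_j
```

where x_i is the pre-synaptic activity, y_j the post-synaptic activity, w_ij the synaptic efficacy and η a learning rate.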
Synaptic plasticity is the observed process demonstrating that synaptic efficacy only rises when
connected neurones fire within a very short time-window. Spike-Timing-Dependent Plasticity (STDP) is
the biological process by which neurones change the synaptic efficacy exclusively depending on the
timings of spikes between neurones, the inter-spike interval (Shrestha, Ahmed, Wang, & Qiu, 2017).
Spike-Rate-Dependent Plasticity (SRDP) is another extension, by which a persistent number of spikes
leads to more pronounced adjustments to synaptic efficacy (He et al., 2014). SRDP uses spike averages
to temporally sum the potentials for synaptic enhancements.
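As a concrete reference point, the standard exponential STDP window from the literature can be sketched as below: the synapse is potentiated when the pre-synaptic spike precedes the post-synaptic spike and depressed otherwise, with the magnitude decaying in the inter-spike interval. The amplitudes and time constants are illustrative textbook values, not NS-SCL parameters.

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation/depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:    # post fires before pre: depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0    # simultaneous spikes: no change in this formulation

print(stdp_delta_w(10.0, 15.0))   # small potentiation for a +5 ms interval
```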
For NS-SCL, we will investigate how STDP can be used with our discrete-time spiking neural model
for synapse formation. Spiking neural networks in Neuromorphics embrace learning rules that adapt
with local changes to achieve synaptic plasticity (Liu & Yue, 2019; Moraitis et al., 2017; Shrestha et al.,
2017). The localised nature of the adaptation conveniently allows us to extend the structure of the
network without affecting the entire network functionally. This kind of dynamic structural
characteristic is perhaps not so different from our own central nervous systems.
3.3 Functional Plasticity with
Temporal Scope Synapse Manifolds
In order to cater for all scopes of synaptic influence, we can assume having one connection between neurones,
but functionally we can compute the connection with many different synaptic enhancement profiles:
facilitation, augmentation and potentiation.
Figure 4: Temporal Scope Synapse Manifold (TSSM).
Figure 4 illustrates four synaptic enhancements (SE), each with a different profile, varying in
the duration and magnitude of influence. All enhancement profiles share the same vector-direction
component between neurones. Since each SE functions over a different duration, we have imposed
temporal boundaries: for spikes satisfying a given boundary, only the corresponding synapse is made
subject to adaptation.
Increasing the base speed at which the network operates could yield abnormally different results with
the same SE profile. For temporal manifolds, we can introduce and establish our own parameters that do
not necessarily correspond to the natural synaptic enhancement properties observed in neuroscience.
We can adjust them to find parameters optimal for certain base network speeds. We consider defining
SE parameters to be the optimisation aspect of our algorithm.
Table 1: A crude temporal upscaling of SE profiles.

SE Profile     | Mag. | Dur.   | Condition
Facilitation   | 0.8  | 1 s    | 0 < s < 1 s
Augmentation   | 0.2  | 10 s   | 1 s < s < 10 s
Potentiation   | 0.05 | 5 min  | 10 s < s < 5 min
Long-Term P.   | 0.05 | 1 hr   | 5 min < s < 1 hr
Table 1 is an example of such SE profiles for synapse manifolds. The magnitude represents the
weight of influence a synapse has on the post-neurone at each spike simulation tick. The duration is how long
the influence lasts on the post-neurone. The condition is what must be satisfied for the influence to take
place. Our algorithm will not treat the SE profile parameters as definitive constants but as modifiable
parameters to fine-tune the behaviour of our spike-symbolic network. We assume that different
applications of our algorithm will benefit from different sets of SE profiles.
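The Table 1 profiles translate directly into data plus a selection rule: given the inter-spike interval s, only the profile whose temporal boundary s satisfies is made subject to adaptation. The sketch below is our illustrative reading of the table; the class and function names are assumptions, and the parameters are the crude upscaled values, intended to be tuned per application.

```python
from dataclasses import dataclass

@dataclass
class SEProfile:
    name: str
    magnitude: float   # influence on the post-neurone per simulation tick
    duration_s: float  # how long the influence persists, in seconds
    lo_s: float        # temporal boundary: lo_s < s <= hi_s
    hi_s: float

TABLE_1 = [
    SEProfile("facilitation", 0.80, 1.0, 0.0, 1.0),
    SEProfile("augmentation", 0.20, 10.0, 1.0, 10.0),
    SEProfile("potentiation", 0.05, 300.0, 10.0, 300.0),
    SEProfile("long-term potentiation", 0.05, 3600.0, 300.0, 3600.0),
]

def matching_profile(inter_spike_s):
    """Return the SE profile whose temporal boundary the interval satisfies."""
    for profile in TABLE_1:
        if profile.lo_s < inter_spike_s <= profile.hi_s:
            return profile
    return None  # interval too long: no enhancement applies

print(matching_profile(4.2).name)  # -> "augmentation"
```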
3.4 Structural Plasticity for the Neurosymbolic Space
This aspect of NS-SCL allows the generation of new neurones and synapses. We will form a new
synapse between neurones when they fire in a way that satisfies the manifold rules and there is no existing
synaptic connection between them. In the neuro-symbolic space, new neurones and synapses
effectively form a new symbolic representation of an item. Manifold rules and an unconstrained structure
result in deterministic yet dynamic behaviour as unsupervised learning.
Initially, a generated synapse will be treated as a latent synapse, causing no real functional effect in the
neuro-symbolic network but remaining subject to functional plasticity. Latent properties are applied to individual
SE profiles. The latent synapse is our mechanism to structurally regulate the neuro-symbolic space. We
will give generated synapses a probationary period: if no subsequent changes are made to a synapse
within a set period, we will discard the associated SE profile, as we can conclude that the spike activities
leading to its generation were spuriously grounded.
A latent synapse could become progressively enhanced through our plasticity learning mechanics;
it can then be promoted to an overt state. In an overt state, synapses are effectively active in the network.
Progressive synaptic enhancement can only occur with well-coordinated spikes across spike-trains;
therefore, we can conclude that the synapse is well-grounded, even if the cause is not necessarily known.
However, the cause can be traced by observing neurones that fire depending on the input spike-trains
(data) and would fire somewhat deterministically.
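The lifecycle described above can be sketched as a small state machine: a new synapse starts latent, lapses if it is not re-stimulated within its probationary period, and is promoted to overt once accumulated enhancement crosses a threshold. The probation length, threshold and field names below are our assumptions for illustration, not NS-SCL constants.

```python
PROBATION_TICKS = 1000   # assumed probationary period (simulation ticks)
OVERT_THRESHOLD = 1.0    # assumed enhancement level needed for promotion

class Synapse:
    def __init__(self, created_at):
        self.last_update = created_at
        self.enhancement = 0.0
        self.overt = False   # latent by default: no functional effect

    def stimulate(self, t, magnitude):
        """Apply an SE-profile magnitude; promote to overt past threshold."""
        self.enhancement += magnitude
        self.last_update = t
        if self.enhancement >= OVERT_THRESHOLD:
            self.overt = True   # now functionally active in the network

    def expired(self, t):
        """Latent synapses lapse if not re-stimulated during probation."""
        return (not self.overt) and (t - self.last_update > PROBATION_TICKS)

syn = Synapse(created_at=0)
for t in (100, 300, 600):            # three well-coordinated stimulations
    syn.stimulate(t, magnitude=0.4)
print(syn.overt, syn.expired(2000))  # -> True False (promoted, not lapsed)
```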
NS-SCL requires each neurone to atomically represent a concept or construct in the neuro-symbolic
space. When a neurone fires, it indicates that a learned pattern or concept is relevant in the present
moment, which can be considered a short-term recall. Two unrelated neurones, having no direct
connections, can form a synapse under the temporal manifold conditions. Thus, NS-SCL can form
complex hierarchical structures determined by temporal activity.
4 DISCUSSION
Concept learners such as VCML, NS-CL and NS-VQA are examples of work aiming to
couple different types of learning spaces. These approaches have demonstrated that learning can be done
in a joint visual-semantic space. NS-SCL is our approach where the joint space is universal. We use a
spike-based neural framework, which allows us to temporally encode any information into a shared
universal symbolic space, allowing for information such as that originating from visual, semantic,
auditory and other sources. Hence, NS-SCL should be able to relate information from one
sensory mode to other sensory modes. This approach is inspired by how our central nervous system handles
information from various sources.
The major constraints of our approach to concept learning relate to compute and memory resources.
Since new synapses and neurones can be formed dynamically, a machine with
considerable processing capabilities and storage volumes is required to handle large NS-SCL networks. On
general-purpose computers with the Von Neumann architecture, the processor would need to process the
dynamically formed neurones in under milliseconds, along with the synapses, which could be
much greater in number. This is impracticable: there is a point at which exceeding a certain number of
neurones in the NS-SCL neuro-symbolic space will render the whole algorithm incomputable. We
proposed upscaling the algorithm’s timing-related mechanisms and slowing the spiking simulation
speed to avoid such scenarios, allowing the processor time to compute. Though reducing the spiking
simulation speed allows the system to function under heavy loads, the feasibility depends on the use
case of the algorithm. For the IoT use case, we can adapt sensors to function more slowly, to better accommodate
large numbers of sensor data streams given the limitations of the NS-SCL algorithm on the Von Neumann computing
paradigm.
Neuromorphic Computing is a broad field and requires contributions from many different disciplines.
The motivation of Neuromorphic Computing is to allow for extremely parallel processing of neurones at
grand scales. Developing algorithms for NC can also inform the design requirements for neuromorphic
processors. Adapting NS-SCL for neuromorphic platforms is the ideal solution, as it would eliminate
the Von Neumann compute and memory constraints that impede neural processing. NS-SCL requires
the dynamic creation of neurones and synapses; in neuromorphic hardware, we require a reserved pool of
unused neurones that can be utilised spontaneously at runtime, in addition to forming latent synapses.
Further algorithmic developments should be made in neuromorphic computing, as they have the
potential to influence future developments of neuromorphic hardware. Future improvements
regarding concept learning on such platforms could reach a level of sophistication where spike-
based concept learners exhibit a degree of general intelligence functioning in real time. There have also
been emerging concerns as to the level of sophistication AI could reach on the intelligence
spectrum. A valid proposition for maintaining AI is to contain the general forms of AI within isolated
computing mediums like Neuromorphics. Thus, it is plausible to define a specific branch of artificial
general intelligence that emphasises neuromorphic approaches, where intelligence is
coupled to hardware. We identify this specific branch as Neuromorphic General Intelligence, NGI.
REFERENCES
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J.,
Bennetot, A., Tabik, S., Barbado, A., … Herrera, F.
(2020). Explainable Artificial Intelligence (XAI):
Concepts, taxonomies, opportunities and challenges
toward responsible AI. Information Fusion, 58
(October 2019), 82–115.
https://doi.org/10.1016/j.inffus.2019.12.012
Barry, B., & Riordan, M. O. (2015). Always-on Vision
Processing Unit for Mobile Applications. IEEE
Computer Society, 56–66.
Blouw, P., Solodkin, E., Thagard, P., & Eliasmith, C.
(2016). Concepts as Semantic Pointers: A Framework
and Computational Model. Cognitive Science, 40(5),
1128–1162. https://doi.org/10.1111/cogs.12265
Boahen, K. (2017). A Neuromorph’s Prospectus. Computing in Science & Engineering, 19(2), 14–28.
Bruckner, D., Zeilinger, H., & Dietrich, D. (2012, May).
Cognitive automation-survey of novel artificial general
intelligence methods for the automation of human
technical environments. IEEE Transactions on
Industrial Informatics, Vol. 8, pp. 206–215.
https://doi.org/10.1109/TII.2011.2176741
Davies, M., Srinivasa, N., Lin, T.-H., Chinya, G., Cao, Y.,
Choday, H., … Wang, H. (2018). Loihi: A
Neuromorphic Manycore Processor with On-Chip
Learning. IEEE Micro, 38(1), 82–99.
Debole, M. V., Taba, B., Amir, A., Akopyan, F.,
Andreopoulos, A., Risk, W. P., … Modha, D. S. (2019).
TrueNorth: Accelerating From Zero to 64 Million
Neurons in 10 Years. Computer, 52(5), 20–29.
https://doi.org/10.1109/MC.2019.2903009
Fjelland, R. (2020). Why general artificial intelligence will
not be realized. Humanities and Social Sciences
Communications, 7(1), 1–9.
https://doi.org/10.1057/s41599-020-0494-4
Han, C., Mao, J., Gan, C., Tenenbaum, J. B., & Wu, J.
(2019). Visual Concept-Metaconcept Learning.
Advances in Neural Information Processing Systems (NeurIPS).
He, W., Huang, K., Ning, N., Ramanathan, K., Li, G., Jiang,
Y., … Pei, J. (2014). Enabling an integrated rate-
temporal learning scheme on memristor. Scientific
Reports, 4, 1–6. https://doi.org/10.1038/srep04755
Hebb, D. O. (1949). The Organization of Behaviour. New
York: Wiley & Sons.
Lammie, C., Hamilton, T. J., Van Schaik, A., & Azghadi,
M. R. (2019). Efficient FPGA Implementations of Pair
and Triplet-Based STDP for Neuromorphic
Architectures. IEEE Transactions on Circuits and
Systems I: Regular Papers, 66(4), 1558–1570.
https://doi.org/10.1109/TCSI.2018.2881753
Liu, D., & Yue, S. (2019). Event-driven continuous STDP
learning with deep structure for visual pattern
recognition. IEEE Transactions on Cybernetics, 49(4),
1377–1390.
https://doi.org/10.1109/TCYB.2018.2801476
Mao, J., Gan, C., Kohli, P., Tenenbaum, J. B., & Wu, J.
(2019). The neuro-symbolic concept learner:
Interpreting scenes, words, and sentences from natural
supervision. 7th International Conference on Learning
Representations, ICLR 2019, 1–28.
Moraitis, T., Sebastian, A., Boybat, I., Le Gallo, M., Tuma,
T., & Eleftheriou, E. (2017). Fatiguing STDP: Learning
from spike-timing codes in the presence of rate codes.
Proceedings of the International Joint Conference on
Neural Networks, 2017-May, 1823–1830.
https://doi.org/10.1109/IJCNN.2017.7966072
Painkras, E., Plana, L. A., Garside, J., Temple, S., Galluppi,
F., Patterson, C., … Furber, S. B. (2013). SpiNNaker:
A 1-W 18-core system-on-chip for massively-parallel
neural network simulation. IEEE Journal of Solid-State
Circuits, 48(8), 1943–1953.
https://doi.org/10.1109/JSSC.2013.2259038
Perez-Peña, F., Cifredo-Chacon, M. A., & Quiros-
Olozabal, A. (2020). Digital neuromorphic real-time
platform. Neurocomputing, 371, 91–99.
https://doi.org/10.1016/j.neucom.2019.09.004
Regehr, W. G. (2012). Short-term presynaptic plasticity.
Cold Spring Harbor Perspectives in Biology, 4(7), 1–19.
https://doi.org/10.1101/cshperspect.a005702
Rivas-Gomez, S., Pena, A. J., Moloney, D., Laure, E., &
Markidis, S. (2018). Exploring the vision processing
unit as co-processor for inference. Proceedings - 2018
IEEE 32nd International Parallel and Distributed
Processing Symposium Workshops, IPDPSW 2018,
589–598. https://doi.org/10.1109/IPDPSW.2018.00098
Rosado-Muñoz, A., Bataller-Mompeán, M., & Guerrero-
Martínez, J. (2012). FPGA implementation of Spiking
Neural Networks. In IFAC Proceedings Volumes
(IFAC-PapersOnline) (Vol. 45).
https://doi.org/10.3182/20120403-3-DE-3010.00074
Sengupta, J., Kubendran, R., Neftci, E., & Andreou, A.
(2020). High-Speed, Real-Time, Spike-Based Object
Tracking and Path Prediction on Google Edge TPU.
Proceedings - 2020 IEEE International Conference on
Artificial Intelligence Circuits and Systems, AICAS
2020, 134–135.
https://doi.org/10.1109/AICAS48895.2020.9073867
Shrestha, A., Ahmed, K., Wang, Y., & Qiu, Q. (2017).
Stable spike-timing dependent plasticity rule for
multilayer unsupervised and supervised learning.
Proceedings of the International Joint Conference on
Neural Networks, 2017-May, 1999–2006.
https://doi.org/10.1109/IJCNN.2017.7966096
Stewart, T. C., Choo, F.-X., & Eliasmith, C. (2012). Spaun:
A Perception-Cognition-Action Model Using Spiking
Neurons. Proceedings of the 34th Annual Meeting of
the Cognitive Science Society, CogSci 2012, 1018–1023.
Retrieved from
http://palm.mindmodeling.org/cogsci2012/papers/0184/paper0184.pdf
Trachtenberg, J. T., Chen, B. E., Knott, G. W., Feng, G.,
Sanes, J. R., Welker, E., & Svoboda, K. (2002). Long-
term in vivo imaging of experience-dependent synaptic
plasticity in adult cortex. Nature, 420(6917), 788–794.
https://doi.org/10.1038/nature01273
Yi, K., Torralba, A., Wu, J., Kohli, P., Gan, C., &
Tenenbaum, J. B. (2018). Neural-symbolic VQA:
Disentangling reasoning from vision and language
understanding. Advances in Neural Information
Processing Systems, 2018-December (NeurIPS), 1031–1042.