Learning Spatial Relations with a Standard Convolutional Neural Network

Kevin Swingler, Mandy Bath

Abstract

This paper shows that a standard convolutional neural network (CNN) without recurrent connections is able to learn general spatial relationships between different objects in an image. A dataset was constructed by placing objects from the Fashion-MNIST dataset onto a larger canvas in various relational locations (for example, trousers left of a shirt, both above a bag). CNNs were trained to perform two types of task: the first was to name the objects and their spatial relationships, and the second was to answer relational questions such as “Where is the shoe in relation to the bag?”. The models achieved above 80% accuracy on test data and were also capable of generalising to spatial combinations that had been intentionally excluded from the training data.
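The abstract does not give implementation details for the dataset construction. As an illustration only, a minimal sketch of composing two items onto a larger canvas under a named spatial relation might look like the following; the canvas size, placement coordinates, and relation names are assumptions, and random 28×28 patches stand in for actual Fashion-MNIST images.

```python
import numpy as np

rng = np.random.default_rng(0)

def place(canvas, item, top, left):
    """Paste a 28x28 item onto the canvas at (top, left)."""
    h, w = item.shape
    canvas[top:top + h, left:left + w] = np.maximum(
        canvas[top:top + h, left:left + w], item
    )

def make_example(item_a, item_b, relation, canvas_size=96):
    """Compose two items on a blank canvas according to a spatial relation.

    Relation names and coordinates are illustrative assumptions,
    not the paper's actual scheme.
    """
    canvas = np.zeros((canvas_size, canvas_size), dtype=np.float32)
    if relation == "left_of":
        place(canvas, item_a, top=34, left=5)
        place(canvas, item_b, top=34, left=canvas_size - 33)
    elif relation == "above":
        place(canvas, item_a, top=5, left=34)
        place(canvas, item_b, top=canvas_size - 33, left=34)
    else:
        raise ValueError(f"unknown relation: {relation}")
    return canvas, relation

# Stand-ins for Fashion-MNIST items (random 28x28 grayscale patches).
a = rng.random((28, 28)).astype(np.float32)
b = rng.random((28, 28)).astype(np.float32)
canvas, label = make_example(a, b, "left_of")
```

A generated pair like `(canvas, label)` could then serve as one training example, with the label encoding the relation (and, for the question-answering task, the item identities as well).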

Paper Citation