Novelty and Objective-based Neuroevolution of a Physical Robot Swarm

Forrest Stonedahl, Susa H. Stonedahl, Nelly Cheboi, Danya Tazyeen, David Devore

Abstract

This paper compares the use of novelty search and objective-based evolution to discover motion controllers for an exploration task wherein mobile robots search for immobile targets inside a bounded polygonal region and stop to mark target locations. We evolved the robots' neural-network controllers in a custom 2-D simulator, selected the best performing neurocontrollers from both novelty search and objective-based search, and compared performance relative to an unevolved (baseline) controller and a simple human-designed controller. The controllers were also transferred onto physical robots, and the real-world tests provided good empirical agreement with simulation results, showing that both novelty search and objective-based search produced controllers that were comparable or superior to the human-designed controller, and that objective-based search slightly outperformed novelty search. The best controllers had surprisingly low genotypic complexity, suggesting that this task may lack the type of deceptive fitness landscape that has previously favored novelty search over objective-based search.



Paper Citation


in Harvard Style

Stonedahl F., Stonedahl S., Cheboi N., Tazyeen D. and Devore D. (2017). Novelty and Objective-based Neuroevolution of a Physical Robot Swarm. In Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-220-2, pages 382-389. DOI: 10.5220/0006118303820389


in Bibtex Style

@conference{icaart17,
author={Forrest Stonedahl and Susa H. Stonedahl and Nelly Cheboi and Danya Tazyeen and David Devore},
title={Novelty and Objective-based Neuroevolution of a Physical Robot Swarm},
booktitle={Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2017},
pages={382-389},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006118303820389},
isbn={978-989-758-220-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Novelty and Objective-based Neuroevolution of a Physical Robot Swarm
SN - 978-989-758-220-2
AU - Stonedahl F.
AU - Stonedahl S.
AU - Cheboi N.
AU - Tazyeen D.
AU - Devore D.
PY - 2017
SP - 382
EP - 389
DO - 10.5220/0006118303820389