Optimized Non-visual Information for Deep Neural Network in Fighting Game

Nguyen Duc Tang Tri, Vu Quang, Kokolo Ikeda

Abstract

Deep Learning has become a highly popular research topic because of its ability to learn from huge amounts of data. Recent research on Atari 2600 games showed that a Deep Convolutional Neural Network (Deep CNN) can learn abstract information from 2D pixel data, and VizDoom subsequently demonstrated the effectiveness of 3D pixel data for learning to play games. In all of these cases, however, the games are perfect-information games and such images are readily available. For imperfect-information games we have no such bitmap; moreover, if we want to optimize our model by using only important features, will a Deep CNN still work? In this paper, we confirm that a Deep CNN shows better performance than a usual Neural Network (usual NN) in modeling a game agent. By grouping important features, we increase the accuracy of modeling a strong AI from 25.58% with a usual neural network to 54.24% with our best CNN structure.
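The key idea in the abstract — arranging grouped, non-visual game features into a 2D layout so a CNN's local receptive fields see semantically related values side by side — can be sketched as follows. This is a minimal illustration, not the paper's actual feature set or grouping: the feature names (`p1_x`, `p2_hp`, etc.) and the two-row layout are assumptions made for the example.

```python
import numpy as np

# Hypothetical non-visual features from a fighting-game state
# (names and values are illustrative, not the paper's feature set).
state = {
    "p1_x": 120.0, "p1_y": 0.0, "p1_hp": 350.0, "p1_energy": 40.0,
    "p2_x": 300.0, "p2_y": 0.0, "p2_hp": 290.0, "p2_energy": 10.0,
}

# Group related features so that adjacent cells in the 2D input are
# semantically related -- the locality a CNN's filters can exploit.
groups = [
    ["p1_x", "p1_y", "p1_hp", "p1_energy"],  # player-1 row
    ["p2_x", "p2_y", "p2_hp", "p2_energy"],  # player-2 row
]

def to_grid(state, groups):
    """Arrange a flat feature dict into a (rows, cols, 1) float array."""
    grid = np.array([[state[k] for k in row] for row in groups],
                    dtype=np.float32)
    return grid[..., np.newaxis]  # trailing channel axis for the CNN

grid = to_grid(state, groups)
print(grid.shape)  # (2, 4, 1)
```

A usual NN would consume the same eight numbers as a flat vector; the grouping step only matters because convolution assumes nearby inputs are related, which is exactly the property the paper engineers into its non-visual input.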

References

  1. Clark, C. and Storkey, A. (2014). Teaching deep convolutional neural networks to play go. arXiv preprint arXiv:1412.3409.
  2. Karpathy, A. (2016). Deep reinforcement learning: Pong from pixels. Technical report.
  3. Kempka, M., Wydmuch, M., Runc, G., Toczek, J., and Jaskowski, W. (2016). ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097.
  4. Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105.
  5. Lu, F., Yamamoto, K., Nomura, L. H., Mizuno, S., Lee, Y., and Thawonmas, R. (2013). Fighting game artificial intelligence competition platform. In 2013 IEEE 2nd Global Conference on Consumer Electronics (GCCE), pages 320-323. IEEE.
  6. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529-533.
  7. Nielsen, M. (2015). Neural Networks and Deep Learning. Determination Press.
  8. Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016). Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489.


Paper Citation


in Harvard Style

Duc Tang Tri N., Quang V. and Ikeda K. (2017). Optimized Non-visual Information for Deep Neural Network in Fighting Game. In Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-220-2, pages 676-680. DOI: 10.5220/0006248106760680


in Bibtex Style

@conference{icaart17,
author={Nguyen Duc Tang Tri and Vu Quang and Kokolo Ikeda},
title={Optimized Non-visual Information for Deep Neural Network in Fighting Game},
booktitle={Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2017},
pages={676-680},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006248106760680},
isbn={978-989-758-220-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Optimized Non-visual Information for Deep Neural Network in Fighting Game
SN - 978-989-758-220-2
AU - Duc Tang Tri N.
AU - Quang V.
AU - Ikeda K.
PY - 2017
SP - 676
EP - 680
DO - 10.5220/0006248106760680