ppbench - A Visualizing Network Benchmark for Microservices

Nane Kratzke, Peter-Christian Quint

2016

Abstract

Companies like Netflix, Google, Amazon, and Twitter have successfully exemplified elastic and scalable microservice architectures for very large systems. Microservice architectures are often realized by deploying services as containers on container clusters. Containerized microservices typically communicate via lightweight, REST-based mechanisms. However, this lightweight communication is often routed by container clusters through heavyweight software-defined networks (SDN). Moreover, services are often implemented in different programming languages, which adds complexity to a system and can degrade performance. Surprisingly, it is quite hard to assess these impacts up front in a microservice design process because specialized benchmarks are missing. This contribution proposes a benchmark intentionally designed for this microservice setting. We argue that it is more useful to reflect on fundamental design decisions and their performance impacts at the start of microservice architecture development rather than in the aftermath. We present findings on the performance impact of several TIOBE TOP 50 programming languages (Go, Java, Ruby, Dart), containers (Docker as type representative), and SDN solutions (Weave as type representative).
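
To illustrate the kind of communication such a benchmark exercises, the following is a minimal sketch of a REST "ping-pong" style endpoint in Go (one of the evaluated languages). The /ping/<size> route, the port 8080, and the handler name are illustrative assumptions and are not taken from the paper or the ppbench implementation; the sketch merely shows a service that returns payloads of a requested size, so that round-trip times can be compared across bare-metal, Docker, and SDN-based deployments.

// Illustrative sketch only (not the authors' ppbench code): a minimal REST
// endpoint that answers a request for /ping/<n> with an n-byte payload.
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"strings"
)

func pingHandler(w http.ResponseWriter, r *http.Request) {
	// Extract the requested payload size from the URL, e.g. /ping/1024.
	sizePart := strings.TrimPrefix(r.URL.Path, "/ping/")
	n, err := strconv.Atoi(sizePart)
	if err != nil || n < 0 {
		http.Error(w, "usage: /ping/<message size in bytes>", http.StatusBadRequest)
		return
	}
	// Answer with an n-byte dummy payload.
	w.Header().Set("Content-Type", "text/plain")
	fmt.Fprint(w, strings.Repeat("x", n))
}

func main() {
	http.HandleFunc("/ping/", pingHandler)
	// Port 8080 is an arbitrary choice for this sketch.
	http.ListenAndServe(":8080", nil)
}

A benchmark client would then request increasing message sizes against this endpoint (deployed natively, in a Docker container, and behind an SDN such as Weave) and record the resulting transfer rates and latencies.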



Paper Citation


in Harvard Style

Kratzke N. and Quint P. (2016). ppbench - A Visualizing Network Benchmark for Microservices. In Proceedings of the 6th International Conference on Cloud Computing and Services Science - Volume 2: CLOSER, ISBN 978-989-758-182-3, pages 223-231. DOI: 10.5220/0005732202230231


in Bibtex Style

@conference{closer16,
author={Nane Kratzke and Peter-Christian Quint},
title={ppbench - A Visualizing Network Benchmark for Microservices},
booktitle={Proceedings of the 6th International Conference on Cloud Computing and Services Science - Volume 2: CLOSER},
year={2016},
pages={223-231},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005732202230231},
isbn={978-989-758-182-3},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 6th International Conference on Cloud Computing and Services Science - Volume 2: CLOSER
TI - ppbench - A Visualizing Network Benchmark for Microservices
SN - 978-989-758-182-3
AU - Kratzke N.
AU - Quint P.
PY - 2016
SP - 223
EP - 231
DO - 10.5220/0005732202230231