My last performance experiment with docker.io showed some overhead, but I was a bit concerned about the environment setup, since I had it running on a VM — it could be that docker.io doesn't play well with the VM. So I converted a workstation to Ubuntu 12.04 and reran the JMeter test.
Here's the setup:
- Control: Node.js 0.10.24, cluster2, Express 3.1.x, 2 worker processes, 10 concurrent threads, 1k loop (approx. 300ms+ request time)
- Docker: Docker 0.7.2, Ubuntu 12.04 image, Node.js 0.10.24, Express 3.1.x, http-proxy for routing, 2 Docker containers, 10 concurrent threads, 1k loop
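To make the Docker side of the setup concrete, here is a hypothetical sketch of how the two containers might be launched; the image name, app path, and ports are my assumptions, not the exact ones used in the test.

```shell
# Sketch only: 'node-express-image', /app/server.js, and the ports are
# placeholder names, not the actual artifacts from this experiment.

# Start two containers, each running one Express app instance
docker run -d -p 3001:3000 node-express-image node /app/server.js
docker run -d -p 3002:3000 node-express-image node /app/server.js

# An http-proxy on the host then routes requests across localhost:3001 and
# localhost:3002 — playing the role cluster2 plays in the control setup.
```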
Well, both TPS and median time showed hardly any difference — less than 5% (last time, on the VM setup, it was 40%) — and this time we actually added http-proxy for routing on the Docker side. (The worst-case time did seem noticeably higher, though.)
In fact, I ran a simpler test using a single process without Docker, and the performance was even lower!
The implication is huge: given how simple Docker makes build/deploy/provision, and that it scales without any application coding at all (no pm2, no cluster2 in the code for the app developer to figure out), it enables a different level of scalability!
By the way, I hit a DNS resolution problem: the Docker process couldn't access anything in the corp domain. You can verify it by doing:
vi /etc/resolv.conf inside the Docker container — it probably says nameserver 8.8.8.8 and 8.8.4.4 (Docker's fallback defaults). Then check your host machine's /etc/resolv.conf: if you see nameserver 127.0.0.1 there, you probably have the dnsmasq feature turned on, which is what caused the different /etc/resolv.conf inside the Docker image.
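The check above can be done non-interactively; 'ubuntu' here stands in for whatever base image you are using.

```shell
# Compare DNS config inside a container vs. on the host.
docker run ubuntu cat /etc/resolv.conf   # container view; Google DNS suggests Docker fell back to defaults
cat /etc/resolv.conf                     # host view; 'nameserver 127.0.0.1' suggests dnsmasq is in use
```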
You can comment out the dns=dnsmasq line in /etc/NetworkManager/NetworkManager.conf and do 'sudo restart network-manager'. After restarting the Docker process, you should see /etc/resolv.conf in the container match your host machine's from then on.
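The fix might look like the following on Ubuntu 12.04; the Docker daemon's upstart job name is an assumption and may differ depending on how Docker was packaged.

```shell
# Comment out the dns=dnsmasq line so NetworkManager stops pointing
# /etc/resolv.conf at the local dnsmasq instance (127.0.0.1).
sudo sed -i 's/^dns=dnsmasq/#dns=dnsmasq/' /etc/NetworkManager/NetworkManager.conf
sudo restart network-manager   # upstart-style restart on Ubuntu 12.04
sudo restart docker            # restart the Docker daemon (job name may differ)

# A fresh container should now inherit the host's /etc/resolv.conf:
docker run ubuntu cat /etc/resolv.conf
```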