Due to the way that NRPE/Nagios reads output from this fine plugin, the web interface will show only the first running container when asked for check_docker --status running. Python's print() ends every call with a newline suffix, so here's a quick diff for the plugin so it can display all of the running containers on one line:
--- check_docker 2016-09-06 13:16:50.396425436 +0200
+++ /opt/nagios-plugins/check_docker/check_docker-master/check_docker.py 2016-05-16 03:20:13.000000000 +0200
@@ -1,5 +1,4 @@
-from __future__ import print_function
 __author__ = 'Tim Laurence'
 __copyright__ = "Copyright 2016"
 __credits__ = ['Tim Laurence']
@@ -15,6 +14,7 @@
 Note: I really would have preferred to have used requests for all the network connections but that would have added a
 from sys import argv
 from http.client import HTTPConnection
 from urllib.request import AbstractHTTPHandler, HTTPHandler, HTTPSHandler, OpenerDirector
@@ -25,6 +25,7 @@
 DEFAULT_SOCKET = '/var/run/docker.sock'
 DEFAULT_TIMEOUT = 10.0
 DEFAULT_PORT = 2375
@@ -273,15 +274,15 @@
     if len(messages) > 0:
         if len(performance_data) > 0:
-            print(messages + '|' + performance_data, end=' ')
+            print(messages + '|' + performance_data)
-            print(messages, end=' ')
     for message in messages[1:]:
-        print(message, end=' ')
     if len(performance_data) > 1:
         for data in performance_data[1:]:
-            print(data, end=' ')
 if __name__ == '__main__':
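The idea behind the patch can be sketched independently of the plugin's real internals (the function and sample messages below are illustrative, not the plugin's actual code): NRPE effectively passes a single line to the web interface, so every status message has to be joined onto that one line instead of being printed with a trailing newline.

```python
# Minimal sketch: NRPE shows only the first line of a plugin's output,
# so join all status messages with spaces (rather than letting print()
# append a newline after each one) to display every running container.
def format_status(messages, performance_data):
    line = ' '.join(messages)
    if performance_data:
        # Nagios separates the human-readable status from perfdata with '|'
        line += ' | ' + ' '.join(performance_data)
    return line

print(format_status(['OK: web running', 'OK: db running'], ['containers=2']))
```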
I had a rough time a few hours ago with Docker's "connectivity on endpoint" messages (AKA "your port is already used by something else"). The log on the upgraded Ubuntu looked somewhat like this:
Aug 2 00:28:30 continuum rc.local: /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint apache (0d941233cf3651b560252498b3be9cdf8fe0e5c89ab2c6443a44ece3a3ee27d1): Error starting userland proxy: listen tcp 192.168.43.31:9000: bind: cannot assign requested address.
Aug 2 00:40:10 continuum dockerd: time="2016-08-02T00:40:10.227947585+02:00" level=error msg="Handler for POST /v1.24/containers/6e818fdc7e048321b0afd1b5e2355772a3bc488deb95bc26d94e25b3ca7a867e/start returned error: driver failed programming external connectivity on endpoint confluence (ee8261d117a81f8ad1af2214f693c04eb3ec3749bb91ff339f11c9366eb38c69): Error starting userland proxy: listen tcp 192.168.43.114:9909: bind: cannot assign requested address"
Aug 2 00:40:10 continuum rc.local: /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint confluence (ee8261d117a81f8ad1af2214f693c04eb3ec3749bb91ff339f11c9366eb38c69): Error starting userland proxy: listen tcp 192.168.43.114:9909: bind: cannot assign requested address.
Aug 2 00:40:12 continuum dockerd: time="2016-08-02T00:40:12.211933605+02:00" level=error msg="Handler for POST /v1.24/containers/26d7d02a14fe0b037dd9099edf49f4365432023b03af1e7bb68994185a36976b/start returned error: driver failed programming external connectivity on endpoint jira (4651cf8fca76c6c3ce3a0eee28877bc9d996250918127a41ffbe95222c74684c): Error starting userland proxy: listen tcp...
I thought the upgrade from 14.04 LTS to 16.04 LTS had done it, because it was a hell of its own kind, but apparently everything was in place, and the forums weren't clear enough.
It turned out to be my tainted /etc/network/interfaces file, where my aliased IPs wouldn't come up normally for a reason still unknown to me. I did rewrite those aliases to make sure the file was tidy, but they still wouldn't come up until I moved the aliases of an interface directly beneath the "auto p4p3…" directive for the parent interface.
As soon as I cleared those, the next reboot again took only 20-ish seconds, and the docker containers started as expected.
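For illustration, the layout that works looks along these lines (the interface name comes from the log above; the addresses, netmask and gateway are a sketch, not my real configuration):

```
# /etc/network/interfaces — aliases listed right after the parent's
# "auto" line, so ifup brings them up together with the interface.
auto p4p3 p4p3:0 p4p3:1
iface p4p3 inet static
    address 192.168.43.10
    netmask 255.255.255.0
    gateway 192.168.43.1

iface p4p3:0 inet static
    address 192.168.43.31
    netmask 255.255.255.0

iface p4p3:1 inet static
    address 192.168.43.114
    netmask 255.255.255.0
```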
New Confluence 5.9.7 is now available to the members of the appropriate LDAP groups. There was an issue with Confluence-in-Docker during the installation phase, where Confluence would reach the "Insert license key" step and then simply spin in a vicious circle.
Found a workaround for that: simply do not attempt to add SSL keys to Confluence during the installation; instead, reach it through an SSH tunnel (make sure you reach it as "127.0.0.1"), finish the installation, and then add SSL, LDAP and the other necessities.
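The tunnel can look something like this (the host name is illustrative; 8090 is Confluence's default HTTP port):

```shell
# Forward local port 8090 to Confluence's port on the Docker host,
# then finish the setup in a browser at http://127.0.0.1:8090
ssh -N -L 8090:127.0.0.1:8090 admin@docker-host
```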
The jenkins.simulakrum.org server is now running from within a Docker container. Migration was flawless and done from scratch in less than 30 minutes. The reason was that Ubuntu would fail to restart the native Jenkins service if another Docker container was using a port, albeit on a different IP. After being fed up with constantly juggling solutions for that, I decided it would be faster to simply "dockerise" Jenkins, too, and have it confined in a container for good.
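Binding each container to its own address sidesteps the port clashes, since docker run's port syntax accepts an IP prefix (the address and container name here are illustrative):

```shell
# Publish Jenkins' port 8080 only on one specific host address,
# leaving 8080 free on the other aliased IPs of the same host.
docker run -d --name jenkins -p 192.168.43.31:8080:8080 jenkins
```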
Had to move 389-ds from a Docker container running under CentOS 7 to a Docker container running under Ubuntu 14.04 LTS. An exotic message appeared once I tried to run dirsrv from within the new container:
Running a private docker-registry behind a few proxies took me a while to configure, because there were several things I couldn't move. In particular, there is an nginx in front of everything, and I wanted the docker-registry as a "real" service, because I am still learning the Docker ways and don't want it as a container, yet.
I installed the docker-registry in a KVM VM, on a CentOS 7 – a standard business requirement one might say.
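The nginx side can be sketched roughly like this (the server name and certificate handling are illustrative; 5000 is the registry's default port, and the /v2/ path and unlimited body size follow the usual registry-behind-nginx recipe):

```
# Illustrative nginx sketch: terminate TLS in front and pass
# requests through to the registry service behind it.
server {
    listen 443 ssl;
    server_name registry.example.com;      # hypothetical name
    client_max_body_size 0;                # image layers can be huge

    location /v2/ {
        proxy_pass http://127.0.0.1:5000;  # registry's default port
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```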
Running Docker containers behind firewalld can be a routing nightmare. I had to use CentOS 7 Docker images on a customised CentOS 7 host, and the situation turned into an incompatibility fest pretty soon after I figured out the following:
The CentOS host came with no firewall enabled, yet systemctl listed dbus-org.fedoraproject.FirewallD1.service,
Dockerised CentOS containers have no systemd,
Docker's internal routing isn't exactly the shiniest piece of documentation on Docker,
iptables-services and firewalld shouldn't run simultaneously, and the use of iptables-services is strongly discouraged on the new hats, in favour of the new interface, firewalld,
Docker's daemon uses its own interface to write to Netfilter, which is clearly visible from an "iptables -L" inspection,
Docker (apparently) picks random RFC 1918 addresses for new containers,
Docker assigns two IPs to each container regardless of the third IP you might ask for on the command line during "docker run…".
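A quick way to see the last two points for yourself (the container name is illustrative):

```shell
# List what Docker's daemon wrote into Netfilter behind firewalld's back
iptables -L -n -v
# Show the address Docker actually assigned to a container
docker inspect -f '{{ .NetworkSettings.IPAddress }}' mycontainer
```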
After a trillion attempts, here is the sanest and simplest solution I have come by for now: