Google’s sign-in verification is a nightmare on Evolution 3.26.6 (3.26.6-1.fc27), which I still want to use because it works like a charm with the Exchange server I was recently pushed onto, with GPG, and with almost anything else you throw at it. Evolution is great, and has been for a long time now.
Yet a recent “feature” in GMail caused real drama at first, preventing me from adding my GMail accounts at all, until I figured out that I should turn off Google’s OAuth2 login in Account Settings and use plain login-and-password authentication for imaps and ssmtp respectively.
This leaves me with a usable Evolution for my GMail accounts, although I have to dismiss the annoying pop-up “feature” every time I start the client.
I lost quite a few hours enabling the JMX console on JBoss 6.4.13 (tested on the 7 series, too) until I figured out the winning combination of JAVA_OPTS and other settings that make JMX remotely accessible. Here’s a bin/standalone.conf recipe for insecure access; once you have this sorted, move on to securing JMX access:
Somewhere at the top of the file, add this:

JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.port=9934"   # pick a port; you can use the same one for jmxremote.rmi.port

At the end of the file, set the rest:

JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.rmi.port=9934"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.ssl=false"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.local.only=false"
JAVA_OPTS="$JAVA_OPTS -Djava.rmi.server.hostname=220.127.116.11"   # put your IP here, not your hostname
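Before firing up JConsole, it’s worth checking that the freshly opened JMX port is actually reachable. A minimal sketch (the host and port are the example values from the recipe above, not any kind of default):

```python
import socket

# Quick TCP reachability check for the JMX port configured above.
# The host/port you pass in are whatever you set in standalone.conf.
def jmx_port_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # port 1 on localhost is almost certainly closed -> False
    print(jmx_port_open("127.0.0.1", 1))
```

If that returns True, pointing JConsole at the server with `jconsole <your-ip>:9934` should connect straight away, since this insecure setup uses neither SSL nor authentication.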
Users in the appropriate group in the Simulakrum directory can now use Rocket.Chat. Rocket.Chat works from within a browser and allows for very fast, high-quality multi-user video conferences, desktop sharing, and many other useful functions. It uses a Jitsi-based server in the background.
Rocket.Chat is the second multi-user video-conferencing tool available for Simulakrum – HipChat, Atlassian’s proprietary commercial solution that integrates well with Jira and Confluence, is also currently available at Simulakrum’s. Ask for access to those tools if you don’t have it already.
Due to the way NRPE/Nagios reads output from this fine plugin, the web interface will show only the first running container when asked for check_docker --status running. Python’s print() appends a newline by default, and NRPE reports only the first line of a plugin’s output; printing with end=' ' keeps everything on that one line. So here’s a quick diff for the plugin so it can display all of the running containers:
--- /opt/nagios-plugins/check_docker/check_docker-master/check_docker.py	2016-05-16 03:20:13.000000000 +0200
+++ check_docker	2016-09-06 13:16:50.396425436 +0200
@@ -1,4 +1,5 @@
+from __future__ import print_function
 __author__ = 'Tim Laurence'
 __copyright__ = "Copyright 2016"
 __credits__ = ['Tim Laurence']
@@ -14,7 +15,6 @@
 # Note: I really would have preferred to have used requests for all the network connections but that would have added a
 from sys import argv
 from http.client import HTTPConnection
 from urllib.request import AbstractHTTPHandler, HTTPHandler, HTTPSHandler, OpenerDirector
@@ -25,7+25,6 @@
 DEFAULT_SOCKET = '/var/run/docker.sock'
 DEFAULT_TIMEOUT = 10.0
 DEFAULT_PORT = 2375
@@ -274,15 +273,15 @@
     if len(messages) > 0:
         if len(performance_data) > 0:
-            print(messages + '|' + performance_data)
+            print(messages + '|' + performance_data, end=' ')
         else:
-            print(messages)
+            print(messages, end=' ')
     for message in messages[1:]:
-        print(message)
+        print(message, end=' ')
     if len(performance_data) > 1:
         for data in performance_data[1:]:
-            print(data)
+            print(data, end=' ')
 if __name__ == '__main__':
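Why end=' ' makes the difference can be shown with a few lines on their own (a standalone illustration, not part of the plugin):

```python
import io
from contextlib import redirect_stdout

# print() appends '\n' by default, and NRPE reports only the first line of a
# plugin's output; printing with end=' ' keeps every message on that line.
def render(messages, end):
    buf = io.StringIO()
    with redirect_stdout(buf):
        for m in messages:
            print(m, end=end)
    return buf.getvalue()

messages = ["OK: web up", "OK: db up"]
multi = render(messages, end="\n")   # two lines; NRPE would show only the first
single = render(messages, end=" ")   # one line; NRPE shows both messages

print(multi.splitlines()[0])  # what NRPE reports without the patch
print(single)                 # what NRPE reports with it
```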
I had a rough time with Docker’s “connectivity on endpoint” (AKA “your port is already used by something else”) messages a few hours ago. The log on the upgraded Ubuntu looked somewhat like:
Aug 2 00:28:30 continuum rc.local: /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint apache (0d941233cf3651b560252498b3be9cdf8fe0e5c89ab2c6443a44ece3a3ee27d1): Error starting userland proxy: listen tcp 192.168.43.31:9000: bind: cannot assign requested address.
Aug 2 00:40:10 continuum dockerd: time="2016-08-02T00:40:10.227947585+02:00" level=error msg="Handler for POST /v1.24/containers/6e818fdc7e048321b0afd1b5e2355772a3bc488deb95bc26d94e25b3ca7a867e/start returned error: driver failed programming external connectivity on endpoint confluence (ee8261d117a81f8ad1af2214f693c04eb3ec3749bb91ff339f11c9366eb38c69): Error starting userland proxy: listen tcp 192.168.43.114:9909: bind: cannot assign requested address"
Aug 2 00:40:10 continuum rc.local: /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint confluence (ee8261d117a81f8ad1af2214f693c04eb3ec3749bb91ff339f11c9366eb38c69): Error starting userland proxy: listen tcp 192.168.43.114:9909: bind: cannot assign requested address.
Aug 2 00:40:12 continuum dockerd: time="2016-08-02T00:40:12.211933605+02:00" level=error msg="Handler for POST /v1.24/containers/26d7d02a14fe0b037dd9099edf49f4365432023b03af1e7bb68994185a36976b/start returned error: driver failed programming external connectivity on endpoint jira (4651cf8fca76c6c3ce3a0eee28877bc9d996250918127a41ffbe95222c74684c): Error starting userland proxy: listen tcp...
I thought the upgrade from 14.04 LTS to 16.04 LTS had done it, because that upgrade was a hell of its own kind, but apparently everything had survived it, and the forums weren’t clear enough.
It turned out to be my tainted /etc/network/interfaces file, in which my aliased IPs wouldn’t come up normally, for a reason still unknown to me. I rewrote those aliases to make sure the file was tidy, but they still wouldn’t come up until I moved the alias stanzas of an interface directly beneath the “auto p4p3…” directive for the parent interface.
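For the record, the layout that finally worked looks roughly like this – a sketch only, with the interface name from above and the aliased addresses from the logs; netmasks are assumptions:

```
# /etc/network/interfaces -- alias stanzas kept directly beneath
# the parent interface's "auto" block
auto p4p3
iface p4p3 inet static
    address 192.168.43.1
    netmask 255.255.255.0

auto p4p3:0
iface p4p3:0 inet static
    address 192.168.43.31
    netmask 255.255.255.0

auto p4p3:1
iface p4p3:1 inet static
    address 192.168.43.114
    netmask 255.255.255.0
```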
As soon as I cleared those, the next reboot again took only 20-ish seconds, and the docker containers started as expected.
Running through the hostapd and dhcpd logs recently gave me an interesting puzzle to have fun with – amongst the various MAC addresses in my syslog I noticed a repeating pattern: an Android device (a Samsung phone) kept registering itself under two different MAC addresses. The phone would offer its ID in a DHCPREQUEST, and the strange thing – at the moment I saw it – was that the DHCP server would lease different IPs to the two different MACs, both of them presenting the same device ID.
I started looking for the MACs of the devices I use, trying to eliminate them from the log one by one, and at first it made no sense at all – either someone had broken into my WLAN, or my phone really did have more than one MAC. To confuse me even more, turning on the phone’s Bluetooth would spawn yet another MAC in “Settings >> About device >> Status >> Wi-Fi MAC address”, and I couldn’t figure out what that had to do with the situation found in the logs, nor find any proof. Could there be more MACs, activated and/or assigned under similar conditions, such as when using HSDPA or 4G?
Then I made an educated guess that it had something to do with my WiFi range extender, a TP-Link TL-WA850RE that I use to cover some of the less accessible parts of the apartment. An experiment proved the guess a good one – walking away from the main WiFi AP and approaching the extender made the phone quickly disassociate and then associate again, borrowing a MAC from the extender, but keeping its ID!
So, if you notice the same thing in your logs – IPs strangely leased to the same device ID that suddenly appears to have a ghost MAC – there is a slim chance that you have not been hacked. Yet… 😛
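If you want to hunt for this pattern yourself, here is a small sketch that groups client MACs by the identifier they announce, assuming ISC dhcpd’s standard DHCPREQUEST log line format; the MACs and hostnames below are made up:

```python
import re
from collections import defaultdict

# Group DHCP client MACs by announced hostname, assuming ISC dhcpd's format:
#   DHCPREQUEST for <ip> from <mac> (<hostname>) via <iface>
LINE = re.compile(r"DHCPREQUEST for \S+ from ([0-9a-f:]{17}) \((\S+)\) via", re.I)

def macs_per_hostname(lines):
    seen = defaultdict(set)
    for line in lines:
        m = LINE.search(line)
        if m:
            mac, hostname = m.groups()
            seen[hostname].add(mac.lower())
    # hostnames that showed up with more than one MAC are the suspects
    return {h: sorted(s) for h, s in seen.items() if len(s) > 1}

if __name__ == "__main__":
    sample = [
        "Aug  2 10:00:01 gw dhcpd: DHCPREQUEST for 192.168.0.23 from 5c:f6:dc:11:22:33 (android-f00) via wlan0",
        "Aug  2 10:05:42 gw dhcpd: DHCPREQUEST for 192.168.0.41 from 14:cc:20:aa:bb:cc (android-f00) via wlan0",
        "Aug  2 10:06:10 gw dhcpd: DHCPREQUEST for 192.168.0.50 from 28:c2:dd:44:55:66 (laptop) via wlan0",
    ]
    print(macs_per_hostname(sample))
```

Feed it your syslog and anything it returns is either an extender playing MAC games – or an actual visitor.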