The Keychron K2 and K3 have dual-purpose media/function keys. The accompanying card says to use fn+x+l to change modes, but when I tried it, it didn’t work. I need my function keys as function keys; I’m already used to pressing fn on the occasions when I do need the multimedia functionality.
I found this repo which explains how to set up a systemd service to configure the keys by writing a value to the driver’s fnmode parameter. This works, but I was also able to apply the change immediately (though not persistently) by doing:
# Set the keys to operate in Fx mode
echo 0 | sudo tee /sys/module/hid_apple/parameters/fnmode
# Set the keys to operate in multimedia mode
echo 1 | sudo tee /sys/module/hid_apple/parameters/fnmode
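If you want the change to survive reboots without a systemd service, setting the module parameter at load time should also work. This is a minimal sketch (the file name is arbitrary; update-initramfs is a Debian/Ubuntu convention and may not be needed on other distros):
# Persist the setting as a hid_apple module option
echo "options hid_apple fnmode=0" | sudo tee /etc/modprobe.d/hid_apple.conf
# Rebuild the initramfs in case hid_apple gets loaded early at boot
sudo update-initramfs -u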
We added a new device which can expose a connected USB drive via DLNA; internally it uses minidlna, which uses SSDP for service discovery. For some strange reason this rendered my *existing* minidlna server (hosted on a Raspberry Pi) invisible. When researching the problem, it turned out that peer discovery (which didn’t happen before, as there were no other devices) uses a multicast address in 239.0.0.0/8, which my Raspberry Pi was blocking due to reasons (its firewall only allows traffic via the local network and a VPN gateway). My theory is that the new minidlna device took over as “primary” and then couldn’t find other peers, so the old server wasn’t visible anymore. The solution was to allow the specific multicast address used by SSDP.
#!/bin/bash
iptables -F
#Tunnel interface
iptables -A INPUT -i tun+ -j ACCEPT
iptables -A OUTPUT -o tun+ -j ACCEPT
#Localhost and local networks
iptables -A INPUT -s 127.0.0.0/16 -j ACCEPT
iptables -A OUTPUT -d 127.0.0.0/16 -j ACCEPT
iptables -A INPUT -s 192.168.0.0/16 -j ACCEPT
iptables -A OUTPUT -d 192.168.0.0/16 -j ACCEPT
#Multicast for minidlna/SSDP
iptables -I OUTPUT -d 239.255.255.250 -j ACCEPT
iptables -I INPUT -d 239.255.255.250 -j ACCEPT
#Allow VPN establishment, this is the port in the config's #remote
iptables -A OUTPUT -p udp --dport 1198 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p udp --sport 1198 -m state --state ESTABLISHED,RELATED -j ACCEPT
#Drop everything else
iptables -A INPUT -j DROP
iptables -A OUTPUT -j DROP
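To confirm the new rules actually match the SSDP traffic, the per-rule packet counters are an easy check (not part of the original script, just a sanity check):
# The pkts/bytes counters for the 239.255.255.250 rules should increase
# when a DLNA client browses the network
sudo iptables -L INPUT -v -n | grep 239.255.255.250
sudo iptables -L OUTPUT -v -n | grep 239.255.255.250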
These MKV files have H.265/HEVC video which my media player can’t read, so I’d like to convert only the video stream to H.264, while leaving all other streams (2 AAC audio tracks, 2 subtitle tracks) intact.
ffmpeg -i some-x265-video.mkv -map 0 -c:v libx264 -c:a copy -c:s copy /tmp/x264-version.mkv
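If quality or file size matters, it may also be worth setting the encoder parameters explicitly instead of relying on libx264 defaults; here’s a sketch with commonly used values (not from the original command):
# -c copy copies every stream, then -c:v overrides just the video;
# -crf/-preset are illustrative starting points, not tuned values
ffmpeg -i some-x265-video.mkv -map 0 -c copy -c:v libx264 -crf 20 -preset medium /tmp/x264-version.mkv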
Working remotely for a timezone-distributed company poses an interesting challenge: that of having to figure out dates and times for people in different timezones. This involves not only the relatively trivial “what time is it now in A_FARAWAY_PLACE”, but “what time, in FARAWAY_PLACE_X, will it be in FARAWAY_PLACE_Z” and other fun things.
There are a handful of websites that have handy tools to do these conversions for you; but a problem I’ve found is that the web is going to the crapper, and these sites often have confusing UIs concocted by some javascript-crazed, CSS-infected webmonkey; and often they are completely swamped and rendered unusable by a rising tide of ads and other aggressive content (oh and some won’t let you do anything until you agree to them storing information in cookies in your browser – which they then bafflingly don’t use to store the PREFERENCE you have selected, so like a forgetful vampire, they ask you every single time if you want to accept their silly cookies).
I’ve known how to use the date command to show the date in a different place/timezone, which is already a huge timesaver:
$ TZ="Taiwan/Taipei" date
Fri Apr 12 19:25:31 Taiwan 2019
but today I was trying to answer “what time, in America/Chicago, will it be at 1 PM next Tuesday in Europe/London?”. This is interesting because it’s a conversion between two timezones, neither of which is the one I’m in, of a date/time in the future. So I was checking date’s man page for “how to convert a specific point in time”, when I realized date can do this for you! Right in the man page there’s this example:
Show the local time for 9AM next Friday on the west coast of the US
$ date --date='TZ="America/Los_Angeles" 09:00 next Fri'
so then I combined that with the earlier one to come up with:
$ TZ="America/Chicago" date --date='TZ="UK/London" 1:00 PM next Tue'
Tue Apr 16 08:00:00 CDT 2019
This combines:
- The TZ argument to calculate dates for a specific timezone, not the current one
- The --date parameter to “display time described by STRING, not ‘now’”
- Descriptive time specifications (1:00 PM next Tuesday – a pseudo-human-readable format which is not entirely intuitive – info date has the specifics)
- TZ support inside the descriptive specification
And a list of known timezones can be obtained with timedatectl list-timezones.
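For example, to find the exact zone names (guessing rarely works, since they are not always intuitive):
$ timedatectl list-timezones | grep -i taipei
Asia/Taipei
$ timedatectl list-timezones | grep -i london
Europe/London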
In this case I’m hosting the VM on a fast server and trying to access the display on another system (a laptop).
One way to do it is by simply SSHing with X forwarding and running KVM like so:
qemu-system-x86_64 -boot d -cdrom ubuntu-18.04.2-live-server-amd64.iso -m 8192 -enable-kvm
By default this opens QEMU’s graphical console in a window (forwarded over X in this case), but it’s quite slow.
Another option is to start the KVM machine in nographic mode and enable a VNC server:
qemu-system-x86_64 -nographic -vnc :5 -boot d -cdrom ubuntu-14.04.6-desktop-amd64.iso -m 8192 -enable-kvm
then on the laptop use a VNC client to connect to the magic port (5900 plus the display number):
xtightvncviewer thehost.local:5905
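One caveat: by default QEMU’s VNC server has no authentication, so if the host is reachable by other machines it’s safer to bind it to localhost and tunnel the port over SSH. A sketch (same VM command, different -vnc binding):
# On the server: only listen on localhost, display :5 (TCP port 5905)
qemu-system-x86_64 -nographic -vnc 127.0.0.1:5 -boot d -cdrom ubuntu-14.04.6-desktop-amd64.iso -m 8192 -enable-kvm
# On the laptop: forward the port, then point the viewer at the tunnel
ssh -L 5905:127.0.0.1:5905 thehost.local
xtightvncviewer localhost:5905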
The goal here is to instantiate VMs with a br0 interface grabbing an IP from the LAN DHCP, so in turn the VM can instantiate LXD containers whose IP is also exposed to the LAN. That way everything is visible on the same network segment and this makes some experimentation easier.
Host configuration
Some info taken from this URL.
The bare-metal host is running Ubuntu 18.04, which uses netplan. Here’s the netplan YAML file (under /etc/netplan/):
network:
  ethernets:
    enp7s0:
      addresses: []
      dhcp4: no
      dhcp6: no
      optional: true
  bridges:
    br0:
      dhcp4: true
      dhcp6: no
      interfaces:
        - enp7s0
      parameters:
        stp: false
        forward-delay: 0
  version: 2
With this, on boot the system grabs an address from the network’s DHCP service (from my home router) and puts it on the br0 interface (which bridges enp7s0, a Gigabit Ethernet port).
The system also has avahi-daemon installed so I can ssh the-server.local easily.
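To apply the config without rebooting and confirm the bridge got its lease (just a sanity check, not from the original setup):
sudo netplan apply
# br0 should now hold the DHCP-assigned address; enp7s0 stays address-less
ip addr show br0
# enp7s0 should show up as enslaved to br0
ip link show master br0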
VM configuration
Next, the VM which I created using uvt-kvm:
# Get a Xenial cloud image
uvt-simplestreams-libvirt --verbose sync release=xenial arch=amd64
# Create/launch a VM
PARAMS='--memory 8192 --disk 32 --cpu 4'
uvt-kvm create the-vm $PARAMS --bridge br0 --packages avahi-daemon,bridge-utils,haveged --run-script-once setup_network.sh
The setup_network.sh script takes care of setting up the network 🙂 This can more cleanly be done with cloud-init but I’m lazy and wanted something fast.
The script deletes the cloudconfig-created .cfg file, tells cloud-init to NOT reconfigure the network, and drops the config file I actually need in place.
#!/bin/bash
echo "Acquire::http::Proxy \"http://192.168.1.187:3128\"; " >/etc/apt/apt.conf.d/80proxy
# Drop the cloudinit-configured interface
ifdown ens3
# Reconfigure the network...
cat <<EOF >/etc/network/interfaces.d/1-bridge.cfg
auto lo br0
iface lo inet loopback
iface ens3 inet manual
iface br0 inet dhcp
bridge_ports ens3
bridge_stp off # disable Spanning Tree Protocol
bridge_waitport 0 # no delay before a port becomes available
bridge_fd 0 # no forwarding delay
EOF
echo "network: {config: disabled}" > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
rm /etc/network/interfaces.d/50-cloud-init.cfg
# Then bring up the new nice bridge
ifup br0
apt-get remove -y snapd && apt-get -y autoremove
The network config in /etc/network/interfaces.d/1-bridge.cfg should look like:
auto lo br0
iface lo inet loopback
iface ens3 inet manual
iface br0 inet dhcp
bridge_ports ens3
bridge_stp off # disable Spanning Tree Protocol
bridge_waitport 0 # no delay before a port becomes available
bridge_fd 0 # no forwarding delay
LXD configuration
Finally, install lxd. When asked to configure the lxd bridge, respond “no”; on the next question you’ll be asked whether to supply an existing bridge. Respond “yes” and specify “br0”.
Now, when an lxd container is instantiated, it’ll by default appear on
the same network (the home network!) as the VM and the main host, getting its
DHCP from the home router.
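A quick way to verify this (the image alias and container name are just examples):
# Launch a test container and check that the IPv4 column shows an address
# handed out by the home router, on the same subnet as the host and VM
lxc launch ubuntu:16.04 net-test
lxc list net-test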
When things break
Suddenly the bridge interface stopped working. I checked this to help diagnose
it. But that wasn’t it. Turns out, I’d installed Docker on the
main host and Docker messes with the firewall
configuration by setting
iptables -P FORWARD DROP. I just set it back to ACCEPT to get it working.
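For reference, the one-liner fix, plus a slightly more targeted alternative (the second rule is my assumption, not something from the original setup):
# Blunt fix: restore the default FORWARD policy Docker changed
sudo iptables -P FORWARD ACCEPT
# Narrower alternative: only allow traffic bridged through br0
sudo iptables -I FORWARD -i br0 -o br0 -j ACCEPT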
Many of our test runs use parallelization to run faster. Sometimes we see test
failures which we can’t reproduce locally, because locally we usually run
sequentially; and even then, the test ordering seems to be somewhat
unpredictable so it’s hard to reproduce the exact test ordering seen in
our test runner.
Most of the time these failures are due to unidentified test interdependencies:
either test A causes test B to pass (where running test B in isolation would
fail), or test A causes B to fail (where running B in isolation would pass).
And we have seen more complex scenarios where C passes, A-B-C passes, but A-C
fails (because A sets C up for failure, while B would set C up for success). We
added some diagnostic output to our test runner so it would show exactly the
list of tests each process runs. This way we can copy the list and run it
locally, which usually reproduces the failure.
But we needed a tool to then determine exactly which of the tests preceding the failing one was setting up the failure conditions. So I wrote this simple bisecter script, which expects a file containing a list of test names (which must include the faily test “A”) and, of course, the name of the faily test “A” itself. It looks for “A” in the list and uses bisection to determine which of the tests preceding “A” is causing the failure.
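The core of the script is plain bisection over the prefix of tests that precede the failing one. Here’s a rough sketch of the idea (the run_tests helper is hypothetical and environment-specific: it runs the given tests in order and reports whether the target test failed):
import math

def find_culprit(tests, failing_test, run_tests):
    """Bisect the tests that precede failing_test to find the one
    setting up the failure. run_tests(tests) is a hypothetical helper
    returning True if failing_test fails after running 'tests'."""
    candidates = tests[:tests.index(failing_test)]
    while len(candidates) > 1:
        print("{} elements in the list, about {} iterations left".format(
            len(candidates), int(math.log(len(candidates), 2))))
        half = len(candidates) // 2
        first, second = candidates[:half], candidates[half:]
        if run_tests(second + [failing_test]):
            print("Test causing failure is in second half of given list")
            candidates = second
        else:
            print("Test causing failure is in first half of given list")
            candidates = first
    return candidates[0]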
As an example, I used it to find a test failure in Ubuntu SSO:
python bisecter.py test-orders/loadbad1.txt webui.tests.test_decorators.SSOLoginRequiredTestCase.test_account_must_require_two_factor
273 elements in the list, about 8 iterations left
Test causing failure is in second half of given list
137 elements in the list, about 7 iterations left
Test causing failure is in second half of given list
69 elements in the list, about 6 iterations left
Test causing failure is in first half of given list
34 elements in the list, about 5 iterations left
Test causing failure is in second half of given list
17 elements in the list, about 4 iterations left
Test causing failure is in second half of given list
9 elements in the list, about 3 iterations left
Test causing failure is in second half of given list
5 elements in the list, about 2 iterations left
Test causing failure is in second half of given list
3 elements in the list, about 1 iterations left
Test causing failure is in second half of given list
2 elements in the list, about 1 iterations left
Test causing failure is in first half of given list
The test that causes the failure is webui.tests.test_views_account.AccountTemplateTestCase.test_backup_device_warning
I’m working on adding some periodic maintenance tasks to a service deployed using Juju. It’s a standard 3-tier web application with a number of Django application server units for load balancing and distribution.
Clearly the maintenance tasks’ most natural place to run is in one of these units, since they have all of the application’s software installed and doing the maintenance is as simple as running a “management command” with the proper environment set up.
A nice property we get by using Juju is that these application server units are just clones of each other, which allows scaling up/down very easily because the units are treated the same. However, the periodic maintenance work introduces an interesting problem, because we want only one of the units to run the maintenance tasks (no need for them to run several times). The maintenance scripts can conceivably be run on all units, even simultaneously (they do proper locking to avoid stepping on each other). That would perhaps be OK if we only had 2 service units, but what if, as is the case, we have many more? There is still a single database, and hitting it 5-10 times with what is essentially a redundant process sounded like an unacceptable tradeoff for the simplicity of the “just run them on each unit” approach.
We could also implement some sort of duplicate collapsing, perhaps by using something like rabbitmq and celery/celery beat to schedule periodic tasks. I refused to consider this since it seemed like swatting flies with a cannon, given that the first solution coming to mind is a one-line cron job. Why reinvent the wheel?
The feature that ended up solving the problem, thanks to the fine folks in Freenode’s #juju channel, is leadership, a feature which debuted in recent versions of Juju. Essentially, each service has one unit designated as the “leader” and it can be targeted with specific commands, queried by other units (‘ask this to my service’s leader’) and more importantly, unambiguously identified: a unit can determine whether it is the leader, and Juju events are fired when leadership changes, so units can act accordingly. Note that leadership is fluid and can change, so the charm needs to account for these changes. For example, if the existing leader is destroyed or has a charm hook error, it will be “deposed” and a new leader is elected from among the surviving units. Luckily all the details of this are handled by Juju itself, and charms/units need only hook on the leadership events and act accordingly.
So it’s then as easy as having the cron jobs run only on the leader unit, and not on the followers.
The simplistic way of using leadership to ensure only the leader unit performs an action was something like this in the crontab:
* * * * * root if [ $(juju-run {{ unit_name }} is-leader) = 'True' ]; then run-maintenance.sh; fi
This uses juju-run with the unit’s name (which is hardcoded in the crontab – this is a detail of how juju run is used which I don’t love, but it works) to run the is-leader command in the unit. This will print out “True” if the executing unit is the leader, and False otherwise. So this will condition execution on the current unit being the leader.
Discussing this with my knowledgeable colleagues, a problem was pointed out: juju-run is blocking and could potentially stall if other Juju tasks are being run. This is possibly not a big deal but also not ideal, because we know leadership information changes infrequently and we also have specific events that are fired when it does change.
So instead, they suggested updating the crontab file when leadership changes, and hardcoding leadership status in the file. This way units can decide whether to actually run the command based on locally-available information which removes the lock on Juju.
The solution looks like this, when implemented using Ansible integration in the charm. I just added two tasks: One registers a variable holding is-leader output when either the config or leadership changes:
- name: register leadership data
  tags:
    - config-changed
    - leader-elected
    - leader-settings-changed
  command: is-leader
  register: is_leader
The second one fires on the same events and just uses the registered variable to write the crontabs appropriately. Note that Ansible’s “cron” plugin takes care of ensuring “crupdate” behavior for these crontab entries. Just be mindful if you change the “name” because Ansible uses that as the key to decide whether to update or create anew:
- name: create maintenance crontabs
  tags:
    - config-changed
    - leader-elected
    - leader-settings-changed
  cron:
    name: "roadmr maintenance - {{item.name}}"
    special_time: "daily"
    job: "IS_LEADER='{{ is_leader.stdout }}'; if [ $IS_LEADER = 'True' ]; then {{ item.command }}; fi"
    cron_file: roadmr-maintenance
    user: "{{ user }}"
  with_items:
    - name: Delete all foos
      command: "delete_foos"
    - name: Update all bars
      command: "update_bars"
A created crontab file (in /etc/cron.d/roadmr-maintenance) looks like this:
# Ansible: roadmr maintenance - Delete all foos
@daily roadmr IS_LEADER='True'; if [ $IS_LEADER = 'True' ]; then delete_foos; fi
A few notes about this. The IS_LEADER variable looks redundant. We could have put the is-leader output directly in the comparison, or simply written the crontab file only on the leader unit and removed it on the other ones. We specifically wanted the crontab to exist in all units and just be conditional on leadership. IS_LEADER makes it super obvious, right there in the crontab, whether the command will run. While redundant, we felt it added clarity.
Save for the actual value of IS_LEADER, the crontab is present and identical in all units. This helps people who log directly into a unit to understand what may be going on in case of trouble. Traditionally people log into the first unit; but what if that happens to not be the leader? If we wrote the crontab only on the leader and removed it from the other units, it would not be obvious that there’s a task running somewhere.
Charm Ansible integration magically runs tasks tagged with the hook events they should fire on. So by just adding the three tags, these tasks will run whenever the config-changed, leader-elected or leader-settings-changed hooks fire.
The two leader hooks are needed because leader-elected is only fired on the actual leader unit; all the others get leader-settings-changed instead.
Last but not least, don’t forget to also declare the new hooks in your hooks.py file, in the hooks declaration, which now looks like this (see the last two hooks added):
hooks = charmhelpers.contrib.ansible.AnsibleHooks(
    playbook_path='playbook.yaml',
    default_hooks=[
        'config-changed',
        'upgrade-charm',
        'memcached-relation-changed',
        'wsgi-file-relation-changed',
        'website-relation-changed',
        'leader-elected',
        'leader-settings-changed',
    ])
Finally, I’d be remiss not to mention an existing bug in leadership event firing. Because of that, until leadership event functionality is fixed and 100% reliable, I wouldn’t use this technique for tasks which absolutely, positively need to be run without fail or the world will end. Here, I’m just using them for maintenance and it’s not a big deal if runs are missed for a few days. That said, if you need a 100% guarantee that your tasks will run, you’ll definitely want to implement something more robust and failproof than a simple crontab.
I had a hell of a time configuring Munin to send out e-mail alerts if values surpass specific thresholds. Many of the articles I found focused just on setting up the email command (which was the easy part), while few told me *how* to configure the per-service thresholds.
Once the thresholds are configured, you’ll see a green line for the warning threshold and a blue line for the critical one drawn on the corresponding graph.
Some of Munin’s plugins already have configured thresholds (such as disk space monitoring, which will send a warning at 92% usage and a critical alert at 96% or so). But others don’t, and I wanted to keep an eye on e.g. system load, network throughput and outgoing e-mail.
The mail command can be configured in /etc/munin/munin-conf.d/alerts.conf:
contact.myname.command mail -s "Munin ${var:group} :: ${var:host}" thisisme@somewhere.com
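Before tying this into Munin, it’s worth checking that the mail command itself can deliver from that machine (assuming a working local MTA):
# If this doesn't arrive, fix mail delivery first; Munin won't do better
echo "test alert body" | mail -s "Munin test" thisisme@somewhere.com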
Next, in /etc/munin/munin.conf, under the specific host I want to receive alerts for, I did something like:
[www.myserver.com]
    address 127.0.0.1
    use_node_name yes
    postfix_mailvolume.volume.warning 100000
    load.load.warning 1.0
    load.load.critical 5.0
    df._dev_sda1.warning 60
This will send an alert if the postfix plugin’s volume surpasses 100k, if the load plugin’s load value surpasses 1.0 or 5.0 (warning and critical, respectively), or if the df plugin’s _dev_sda1 value is over 60% (disk usage).
Now here’s the tricky part: How to figure out what the plugin name is, and what the value from this plugin is? (if you get these wrong, you’ll get the dreaded UNKNOWN is UNKNOWN alert).
Just look in /etc/munin/plugins for the one that monitors the service you want alerts for. Then run it with munin-run, for example, for the memory plugin:
$ sudo munin-run memory
slab.value 352796672
swap_cache.value 6959104
page_tables.value 8138752
vmalloc_used.value 102330368
apps.value 413986816
free.value 120274944
buffers.value 215904256
cached.value 4964200448
swap.value 28430336
committed.value 962179072
mapped.value 30339072
active.value 2746691584
inactive.value 2787188736
These are the field names you have to use (so memory.active.warning 5000000000 will alert if active memory goes above 5 GB; the values are in bytes).
A tricky one is diskstats:
# munin-run diskstats
multigraph diskstats_latency
sda_avgwait.value 0.0317059353689672
sdb_avgwait.value 0.00127923627684964
sdc_avgwait.value 0.00235443037974684
multigraph diskstats_utilization
sda_util.value 6.8293650462148
sdb_util.value 0.000219587438166445
sdc_util.value 0.000150369658744413
In this case, use diskstats_utilization.sda_util.warning (so the value in “multigraph” is used as if it were the plugin name).
diskstats_utilization.sda_util.warning 60
As part of a project I’m working on, I wanted to be able to do some “side processing” while writing to a file-like object. The processing is basically checksumming on-the-fly. I’m essentially doing something like:
source = get_a_readable_file_like_object()
destination = get_a_writable_file_like_object()
destination.write(source.read())
what I’d like is to be able to also get the data read from source and use hashlib’s update mechanism to get a checksum of the object. The easiest way to do it would be using temporary storage (an actual file or a StringIO), but I’d prefer to avoid that since the files can be quite large. The second way to do it is to read the source twice. But since that may come from a network, it makes no sense to read it twice just to get the checksum. A third way would be to have destination be a file-like derivative that updates an internal hash with each read block from source, and then provides a way to retrieve the hash.
Instead of creating my own file-like where I’d mostly be “passing through” all the calls to the underlying destination object (which incidentally also writes to a network resource), I decided to use padme which already should do most of what I need. I just needed to unproxy a couple of methods, add a new method to retrieve the checksum at the end, and presto.
A first implementation looks like this:
#!/usr/bin/python
from __future__ import print_function

import urllib2 as requestlib
import hashlib

import padme


class sha256file(padme.proxy):

    @padme.unproxied
    def __init__(self, *args, **kwargs):
        self.hash = hashlib.new('sha256')
        return super(sha256file, self).__init__()

    @padme.unproxied
    def write(self, data):
        self.hash.update(data)
        return super(sha256file, self).write(data)

    @padme.unproxied
    def getsha256(self):
        return self.hash.hexdigest()


url = "http://www.canonical.com"
request = requestlib.Request(url)
reader = requestlib.urlopen(request)

with open("output.html", "wb") as destfile:
    proxy_destfile = sha256file(destfile)
    for read_chunk in reader:
        proxy_destfile.write(read_chunk)
    print("SHA256 is {}".format(proxy_destfile.getsha256()))
This however doesn’t work for reasons I was unable to fathom on my own:
$ python ./cp2.py
Traceback (most recent call last):
  File "./cp2.py", line 33, in <module>
    proxy_destfile.write(read_chunk)
  File "./cp2.py", line 20, in write
    return super(sha256file, self).write(data)
AttributeError: 'super' object has no attribute 'write'
This is clearly because super(sha256file, self)
refers to the *class* and I need the *instance* which is the one with the write method. So Zygmunt helped me get a working version ready:
#!/usr/bin/python
from __future__ import print_function

try:
    import urllib2 as requestlib
except:
    from urllib import request as requestlib
import hashlib

import padme
from padme import _logger


class stateful_proxy(padme.proxy):

    @padme.unproxied
    def add_proxy_state(self, *names):
        """ make all of the names listed proxy state attributes """
        cls = type(self)
        cls.__unproxied__ = set(cls.__unproxied__)
        cls.__unproxied__.update(names)
        cls.__unproxied__ = frozenset(cls.__unproxied__)

    def __setattr__(self, name, value):
        cls = type(self)
        if name not in cls.__unproxied__:
            proxiee = cls.__proxiee__
            _logger.debug("__setattr__ %r on proxiee (%r)", name, proxiee)
            setattr(proxiee, name, value)
        else:
            _logger.debug("__setattr__ %r on proxy itself", name)
            object.__setattr__(self, name, value)

    def __delattr__(self, name):
        cls = type(self)
        if name not in cls.__unproxied__:
            proxiee = type(self).__proxiee__
            _logger.debug("__delattr__ %r on proxiee (%r)", name, proxiee)
            delattr(proxiee, name)
        else:
            _logger.debug("__delattr__ %r on proxy itself", name)
            object.__delattr__(self, name)


class sha256file(stateful_proxy):

    @padme.unproxied
    def __init__(self, *args, **kwargs):
        # Declare '_hash' as a state variable of the proxy itself
        self.add_proxy_state('_hash')
        self._hash = hashlib.new('sha256')
        return super(sha256file, self).__init__(*args, **kwargs)

    @padme.unproxied
    def write(self, data):
        self._hash.update(data)
        return type(self).__proxiee__.write(data)

    @padme.unproxied
    def getsha256(self):
        return self._hash.hexdigest()


url = "http://www.canonical.com"
request = requestlib.Request(url)
reader = requestlib.urlopen(request)

with open("output.html", "wb") as destfile:
    proxy_destfile = sha256file(destfile)
    for read_chunk in reader:
        proxy_destfile.write(read_chunk)
    print("SHA256 is {}".format(proxy_destfile.getsha256()))
Here’s the explanation of what was wrong:
– first of all, the exception tells you that the super-object (which is a relative of base_proxy) has no write method. This is correct. A proxy is not a subclass of the proxied object’s class (some classes cannot be subclassed). The solution is to call the real write method. This can be accomplished with type(self).__proxiee__.write()
– second of all, we need to be able to hold state, namely the hash attribute (I’ve renamed it to _hash but it’s irrelevant to the problem at hand). Proxy objects can store state, it’s just not terribly easy to do. The proxied object (here a file) may or may not be able to store state (here it cannot). The solution is to make it possible to access some of the state via standard means. The new (small) stateful_proxy class implements __setattr__ and __delattr__ in the same way __getattribute__ was always implemented. That is, those methods look at the __unproxied__ set to know whether access should be routed to the original object or to the proxy.
– the last problem is that __unproxied__ is only collected by the proxy_meta meta-class. It’s extremely hard to change that meta-class (because padme.proxy is not the real class that you ever use, it’s all a big fake to make proxy() both a function-like and class-like object.)
The really cool thing about all this is not so much that my code is now working, but that those ideas and features will make it into an upcoming version of Padme 🙂 So down the line the code should become a bit simpler.