If juju1 and juju2 are installed on the same system, juju1's bash autocompletion breaks, because it expects services, which in juju2 are called applications.
Maybe juju2 ships correct bash completion, but on the system I'm working on only the juju1 autocompletion was present, so I had to hack the autocomplete functions. I just added these at the end of .bashrc to override the ones in the juju1 package. Notice they work for both juju1 and juju2 by using dict.get() so they don't die if a particular key isn't found.
# Print (return) all units, each optionally postfixed by $2 (eg. 'myservice/0:')
_juju_units_from_file() {
    python -c '
trail="'${2}'"
import json, sys; j=json.load(sys.stdin)
all_units=[]
for k,v in j.get("applications", j.get("services",{})).items():
    if v.get("units"):
        all_units.extend(v.get("units",{}).keys())
print "\n".join([unit + trail for unit in all_units])
' < ${1?}
}

# Print (return) all services
_juju_services_from_file() {
    python -c '
import json, sys; j=json.load(sys.stdin)
print "\n".join(j.get("applications", j.get("services",{})).keys());' < ${1?}
}
So today I woke up to beautiful “Hacked by…” top posts, as well as modifications to existing posts in my blogs. It’s entirely my fault for not promptly upgrading to WordPress 4.7.2, so I was vulnerable to this:
Picadillo is a traditional Mexican recipe, usually made with minced meat. Seitan, however, makes a great substitute for minced meat, and since most of picadillo’s flavor comes from the sauce and reduction process, the flavor stays mostly similar.
Mince the seitan: Chop it into small dice, then run it in small batches through a food processor on high, until you get a size similar to cooked minced meat.
Prepare the sauce: Put the tomatoes, garlic and broth in the blender, blend for 1 minute or until smooth.
Do the thing: In a large (5 L or more) pot, fry the onion in the olive oil until transparent. Once fried, add the seitan, potato and carrot dice to the pot, pour in the sauce and stir (it should initially look like a stew – if it's drier, make some more sauce and add it to the pot). Set the heat to medium-high, bring the mixture to a boil and let it simmer until the liquid is consumed and the carrots and potatoes are soft. BEWARE: there'll come a point where you will need to start stirring to avoid burning the bottom of the stew. This will happen even if the top seems to have enough liquid, so keep an eye on it. It should take 20-25 minutes to reduce the sauce to the desired consistency.
When done, stir in the already-cooked green peas (so they remain firm, if you cook them in the stew they’ll go mushy).
Serve with white or red rice, or with corn tortillas.
I’m working on adding some periodic maintenance tasks to a service deployed using Juju. It’s a standard 3-tier web application with a number of Django application server units for load balancing and distribution.
Clearly the maintenance tasks’ most natural place to run is in one of these units, since they have all of the application’s software installed and doing the maintenance is as simple as running a “management command” with the proper environment set up.
A nice property we get from using Juju is that these application server units are just clones of each other, which makes scaling up/down very easy because the units are all treated the same. However, the periodic maintenance work introduces an interesting problem, because we want only one of the units to run the maintenance tasks (there's no need for them to run several times). The maintenance scripts can conceivably run on all units, even simultaneously (they do proper locking to avoid stepping on each other). That would perhaps be OK if we only had 2 service units, but what if, as is the case, we have many more? There is still a single database, and hitting it 5-10 times with what is essentially a redundant process sounded like an unacceptable tradeoff for the simplicity of the "just run them on each unit" approach.
We could also implement some sort of duplicate collapsing, perhaps by using something like rabbitmq and celery/celery beat to schedule periodic tasks. I refused to consider this since it seemed like swatting flies with a cannon, given that the first solution coming to mind is a one-line cron job. Why reinvent the wheel?
The feature that ended up solving the problem, thanks to the fine folks in Freenode's #juju channel, is leadership, which debuted in recent versions of Juju. Essentially, each service has one unit designated as the "leader". It can be targeted with specific commands and queried by other units ("ask this to my service's leader"), and, more importantly, it can be unambiguously identified: a unit can determine whether it is the leader, and Juju fires events when leadership changes, so units can act accordingly. Note that leadership is fluid and can change, so the charm needs to account for these changes. For example, if the existing leader is destroyed or has a charm hook error, it will be "deposed" and a new leader is elected from among the surviving units. Luckily, all the details of this are handled by Juju itself; charms/units need only hook on the leadership events and act accordingly.
So it’s then as easy as having the cron jobs run only on the leader unit, and not on the followers.
The simplistic way of using leadership to ensure only the leader unit performs an action was something like this in the crontab:
* * * * * root if [ $(juju-run {{ unit_name }} is-leader) = 'True' ]; then run-maintenance.sh; fi
This uses juju-run with the unit's name (which is hardcoded in the crontab – a detail of how juju-run is used that I don't love, but it works) to run the is-leader command in the unit. is-leader prints "True" if the executing unit is the leader and "False" otherwise, so this conditions execution on the current unit being the leader.
Discussing this with my knowledgeable colleagues, a problem was pointed out: juju-run is blocking and could potentially stall if other Juju tasks are being run. This is possibly not a big deal but also not ideal, because we know leadership information changes infrequently and we also have specific events that are fired when it does change.
So instead, they suggested updating the crontab file when leadership changes, hardcoding the leadership status in the file. This way units can decide whether to actually run the command based on locally available information, which avoids blocking on Juju.
The solution looks like this, when implemented using Ansible integration in the charm. I just added two tasks: One registers a variable holding is-leader output when either the config or leadership changes:
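A sketch of what that first task can look like (the tags and the registered variable name are chosen to match the second task below; it simply runs Juju's is-leader hook tool and registers its output):

- name: register leadership status
  tags:
    - config-changed
    - leader-elected
    - leader-settings-changed
  command: is-leader
  register: is_leader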
The second one fires on the same events and just uses the registered variable to write the crontabs appropriately. Note that Ansible's "cron" module takes care of ensuring "crupdate" behavior for these crontab entries. Just be mindful if you change the "name", because Ansible uses it as the key to decide whether to update an existing entry or create a new one:
- name: create maintenance crontabs
  tags:
    - config-changed
    - leader-elected
    - leader-settings-changed
  cron:
    name: "roadmr maintenance - {{ item.name }}"
    special_time: "daily"
    job: "IS_LEADER='{{ is_leader.stdout }}'; if [ $IS_LEADER = 'True' ]; then {{ item.command }}; fi"
    cron_file: roadmr-maintenance
    user: "{{ user }}"
  with_items:
    - name: Delete all foos
      command: "delete_foos"
    - name: Update all bars
      command: "update_bars"
A created crontab file (in /etc/cron.d/roadmr-maintenance) looks like this:
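Something along these lines (assuming the unit is currently the leader and the configured user is "ubuntu"; the commands come from the with_items list above):

#Ansible: roadmr maintenance - Delete all foos
@daily ubuntu IS_LEADER='True'; if [ $IS_LEADER = 'True' ]; then delete_foos; fi
#Ansible: roadmr maintenance - Update all bars
@daily ubuntu IS_LEADER='True'; if [ $IS_LEADER = 'True' ]; then update_bars; fi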
A few notes about this. The IS_LEADER variable looks redundant; we could have put the value directly in the comparison, or simply written the crontab file only on the leader unit and removed it from the others. But we specifically wanted the crontab to exist in all units and just be conditional on leadership. IS_LEADER makes it super obvious, right there in the crontab, whether the command will run. While redundant, we felt it added clarity.
Save for the actual value of IS_LEADER, the crontab is present and identical in all units. This helps people who log directly into a unit to understand what may be going on in case of trouble. Traditionally people log into the first unit; but what if that happens not to be the leader? If we wrote the crontab only on the leader and removed it from the other units, it would not be obvious that there's a task running somewhere.
The charm's Ansible integration magically runs tasks whose tags match the hook events being fired. So just by adding the three tags, the tasks will run on the config-changed, leader-elected and leader-settings-changed events.
The two leader hooks are needed because leader-elected is only fired on the actual leader unit; all the others get leader-settings-changed instead.
Last but not least, don't forget to also declare the new hooks in your hooks.py file, in the hooks declaration, which now looks like this (see the last two lines added):
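Roughly like this, assuming the charm uses charmhelpers' AnsibleHooks for its Ansible integration (the playbook path and the rest of the hook list are illustrative; only the last two entries are the new ones):

import charmhelpers.contrib.ansible

hooks = charmhelpers.contrib.ansible.AnsibleHooks(
    playbook_path='playbook.yaml',
    default_hooks=[
        'install',
        'upgrade-charm',
        'config-changed',
        'leader-elected',
        'leader-settings-changed',
    ])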
Finally, I’d be remiss not to mention an existing bug in leadership event firing. Because of that, until leadership event functionality is fixed and 100% reliable, I wouldn’t use this technique for tasks which absolutely, positively need to be run without fail or the world will end. Here, I’m just using them for maintenance and it’s not a big deal if runs are missed for a few days. That said, if you need a 100% guarantee that your tasks will run, you’ll definitely want to implement something more robust and failproof than a simple crontab.
I had a hell of a time configuring Munin to send out e-mail alerts if values surpass specific thresholds. Many of the articles I found focused just on setting up the email command (which was the easy part), while few told me *how* to configure the per-service thresholds.
Once the thresholds are configured, you’ll see a green line for the warning threshold and a blue line for the critical one, like in this graph:
Some of Munin's plugins already have configured thresholds (such as disk space monitoring, which will send a warning at 92% usage and a critical alert at 96% or so). But others don't, and I wanted to keep an eye on e.g. system load, network throughput and outgoing e-mail.
The mail command can be configured in /etc/munin/munin-conf.d/alerts.conf:
contact.myname.command mail -s "Munin ${var:group} :: ${var:host}" thisisme@somewhere.com
Next, in /etc/munin/munin.conf, under the specific host I want to receive alerts for, I did something like this:
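Roughly like this (hostname, address and threshold values are illustrative; the pattern is plugin.field.warning / plugin.field.critical):

[myserver.example.com]
    address 192.168.1.10
    postfix_mailvolume.volume.warning 100000
    load.load.warning 1.0
    load.load.critical 5.0
    df._dev_sda1.warning 60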
This will send an alert if the postfix plugin's volume surpasses 100k, if the load plugin's load value surpasses 1.0 or 5.0 (warning and critical, respectively), or if the df plugin's _dev_sda1 value is over 60% (disk usage).
Now here's the tricky part: how do you figure out what the plugin is called, and what values it reports? (If you get these wrong, you'll get the dreaded "UNKNOWN is UNKNOWN" alert.)
Just look in /etc/munin/plugins for the one that monitors the service you want alerts for. Then run it with munin-run, for example, for the memory plugin:
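The output is a list of field.value pairs, something like this (values are illustrative); the part before ".value" is the field name to use in the threshold lines above:

$ sudo munin-run memory
slab.value 295223296
swap_cache.value 709632
page_tables.value 32768000
apps.value 2042548224
buffers.value 244580352
cached.value 4425930752
free.value 1241092096
swap.value 295239680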
As part of a project I’m working on, I wanted to be able to do some “side processing” while writing to a file-like object. The processing is basically checksumming on-the-fly. I’m essentially doing something like:
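In essence it's just a copy loop along these lines (names are illustrative):

# source and destination are both file-like objects; source may be a network response
for chunk in source:
    destination.write(chunk)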
What I'd like is to also get at the data read from source and use hashlib's update mechanism to get a checksum of the object. The easiest way to do it would be to use temporary storage (an actual file or a StringIO), but I'd prefer to avoid that since the files can be quite large. The second way would be to read the source twice; but since it may come from the network, it makes no sense to read it twice just to get the checksum. A third way would be to have destination be a file-like derivative that updates an internal hash with each block read from source, and then provides a way to retrieve the hash.
Instead of creating my own file-like where I’d mostly be “passing through” all the calls to the underlying destination object (which incidentally also writes to a network resource), I decided to use padme which already should do most of what I need. I just needed to unproxy a couple of methods, add a new method to retrieve the checksum at the end, and presto.
A first implementation looks like this:
#!/usr/bin/python
from __future__ import print_function
import urllib2 as requestlib
import hashlib
import padme


class sha256file(padme.proxy):
    @padme.unproxied
    def __init__(self, *args, **kwargs):
        self.hash = hashlib.new('sha256')
        return super(sha256file, self).__init__()

    @padme.unproxied
    def write(self, data):
        self.hash.update(data)
        return super(sha256file, self).write(data)

    @padme.unproxied
    def getsha256(self):
        return self.hash.hexdigest()


url = "http://www.canonical.com"
request = requestlib.Request(url)
reader = requestlib.urlopen(request)
with open("output.html", "wb") as destfile:
    proxy_destfile = sha256file(destfile)
    for read_chunk in reader:
        proxy_destfile.write(read_chunk)
print("SHA256 is {}".format(proxy_destfile.getsha256()))
This however doesn’t work for reasons I was unable to fathom on my own:
This is clearly because super(sha256file, self) refers to the *class* and I need the *instance* which is the one with the write method. So Zygmunt helped me get a working version ready:
#!/usr/bin/python
from __future__ import print_function
try:
    import urllib2 as requestlib
except:
    from urllib import request as requestlib
import hashlib
import padme
from padme import _logger


class stateful_proxy(padme.proxy):
    @padme.unproxied
    def add_proxy_state(self, *names):
        """ make all of the names listed proxy state attributes """
        cls = type(self)
        cls.__unproxied__ = set(cls.__unproxied__)
        cls.__unproxied__.update(names)
        cls.__unproxied__ = frozenset(cls.__unproxied__)

    def __setattr__(self, name, value):
        cls = type(self)
        if name not in cls.__unproxied__:
            proxiee = cls.__proxiee__
            _logger.debug("__setattr__ %r on proxiee (%r)", name, proxiee)
            setattr(proxiee, name, value)
        else:
            _logger.debug("__setattr__ %r on proxy itself", name)
            object.__setattr__(self, name, value)

    def __delattr__(self, name):
        cls = type(self)
        if name not in cls.__unproxied__:
            proxiee = type(self).__proxiee__
            _logger.debug("__delattr__ %r on proxiee (%r)", name, proxiee)
            delattr(proxiee, name)
        else:
            _logger.debug("__delattr__ %r on proxy itself", name)
            object.__delattr__(self, name)


class sha256file(stateful_proxy):
    @padme.unproxied
    def __init__(self, *args, **kwargs):
        # Declare '_hash' as a state variable of the proxy itself
        self.add_proxy_state('_hash')
        self._hash = hashlib.new('sha256')
        return super(sha256file, self).__init__(*args, **kwargs)

    @padme.unproxied
    def write(self, data):
        self._hash.update(data)
        return type(self).__proxiee__.write(data)

    @padme.unproxied
    def getsha256(self):
        return self._hash.hexdigest()


url = "http://www.canonical.com"
request = requestlib.Request(url)
reader = requestlib.urlopen(request)
with open("output.html", "wb") as destfile:
    proxy_destfile = sha256file(destfile)
    for read_chunk in reader:
        proxy_destfile.write(read_chunk)
print("SHA256 is {}".format(proxy_destfile.getsha256()))
Here's the explanation of what was wrong:
– First of all, the exception tells you that the super-object (which is a relative of base_proxy) has no write method. This is correct: a proxy is not a subclass of the proxied object's class (some classes cannot be subclassed). The solution is to call the real write method, which can be accomplished with type(self).__proxiee__.write().
– Second, we need to be able to hold state, namely the hash attribute (I've renamed it to _hash, but that's irrelevant to the problem at hand). Proxy objects can store state, it's just not terribly easy to do. The proxied object (here a file) may or may not be able to store state (here it cannot). The solution is to make it possible to access some of the state via standard means. The new (small) stateful_proxy class implements __setattr__ and __delattr__ in the same way __getattribute__ was always implemented: those methods look at the __unproxied__ set to know whether access should be routed to the original object or to the proxy.
– The last problem is that __unproxied__ is only collected by the proxy_meta meta-class. It's extremely hard to change that meta-class (because padme.proxy is not the real class you ever use; it's all a big fake to make proxy() both a function-like and a class-like object).
The really cool thing about all this is not so much that my code is now working, but that those ideas and features will make it into an upcoming version of Padme 🙂 So down the line the code should become a bit simpler.
One of lxc’s nice time-saving features is that, after initial container creation, it will cache the files it downloaded to do so, and when you create a new container using the same template/version/architecture, it will leverage the existing files and create the container with minimal downloads and really quickly.
A downside of this is that the cache can become stale. This is apparent when you want to install a package in a container and apt-get gives 404 errors indicating that the version of the package the container knows about is no longer available in the archive (most likely superseded by a newer one).
This is easily fixed by always doing apt-get update in the container prior to any package installs/upgrades. However, it’s cumbersome, and if you’re creating dozens of new containers every day, the bandwidth and time spent re-downloading can quickly add up.
To update the “base image” or cache, which resides in /var/cache/lxc for each version, you can do two things.
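One way is to refresh the cached rootfs in place, something like this (a sketch only; the paths assume an Ubuntu trusty/amd64 cache, so adjust for your release and architecture):

sudo chroot /var/cache/lxc/trusty/rootfs-amd64 apt-get update
sudo chroot /var/cache/lxc/trusty/rootfs-amd64 apt-get -y dist-upgrade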
Most templates also support --flush-cache, so if you're calling lxc-create directly, just add an extra --flush-cache as a template arg (after --) and the cache will be flushed before making the container. Something like this:
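For example (container name, template and release are illustrative):

sudo lxc-create -n mycontainer -t ubuntu -- --release trusty --flush-cache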
Sometimes you may want to configure a wireless interface on a system running Ubuntu Server. The most common use case (for me, at least) is running server tests that require two network interfaces, on a laptop (it's what I have available to play with) with an ethernet interface and a wireless interface. As long as Ubuntu sees the wireless interface, it's quite easy to set things up so the wireless comes up at boot time.
You will probably need to set up the server to forward and masquerade the internal network (usually, the ethernet segment is the internal one, while the wireless counts as the “outside” interface). There are plenty of tutorials to do this over the internet, so I won’t extend this post by detailing that.
Of course, the wireless will grab a dynamic IP address, so use caution with that as the address may change (or, assign a static one from your router’s unused range). Anyway. Put this in /etc/network/interfaces:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.10.10.1
    netmask 255.255.255.0

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid your-network-ssid
    wpa-ap-scan 1
    wpa-proto RSN
    wpa-pairwise CCMP
    wpa-group CCMP
    wpa-key-mgmt WPA-PSK
    wpa-psk your-network-password
Then you can do ifup wlan0 to bring the interface up. It should also come up automagically at boot time.
This was used to resync a file whose audio was consistently 1.75 seconds behind the video track. The resulting file also contains the first 2 subtitle tracks from the original file.
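A rough ffmpeg sketch of the idea (filenames and stream mapping are illustrative, and the offset's sign may need flipping depending on which way the delay goes):

# Read the same file twice; shift the second copy's timestamps 1.75 s earlier
# and take only its audio, keeping the video and the first two subtitle tracks
# from the unshifted copy. Streams are copied, not re-encoded.
ffmpeg -i input.mkv -itsoffset -1.75 -i input.mkv \
  -map 0:v -map 1:a -map 0:s:0 -map 0:s:1 \
  -c copy output.mkv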