When you request copies of X-ray or other medical imaging studies, you typically receive a CD-ROM containing the images in an apparently inscrutable DICOM format. The CD usually includes a viewer, since it's recognized that DICOM is not a format consumer-grade applications can normally open; however, the viewer tends to be a Windows-only application.
Luckily, it turns out that DICOM is a widespread, well-documented and general standard for medical imaging: it has provisions for metadata and custom fields, supports multi-layer images, and covers all kinds of medical imaging needs, not just X-rays. So even if it's a bit niche and not widely known outside the field, it's fairly easy to find ways to view DICOM images on Linux or Mac.
On Linux, apt-cache search dicom reveals a plethora of applications that can not only open DICOM images but also connect to a DICOM network to send data back and forth. Some of them have a GUI, but in the end I settled for dcmtk, which has a command (dcm2pnm +on DICOM_FILE image.png) with which I was able to convert those files to PNGs to share with, you know, actual humans who live in the 21st century.
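For example, to convert a whole directory of studies in one go (assuming the files from the CD were copied into a dicom/ directory, and that dcmtk was built with PNG support, which the +on option needs):

# write a PNG next to each DICOM file
for f in dicom/*; do
    dcm2pnm +on "$f" "$f.png"
done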
At least they didn’t send us the imaging results by fax…
I just noticed that after the upgrade to iOS 18, 1Blocker is no longer working, so I see ads on all websites on my phone. I'd forgotten how horrible the web with ads is: I use an ad blocker everywhere (uBlock Origin on desktop, 1Blocker on mobile), and going back to the adful web is painful. I hope they fix 1Blocker soon!
As part of setting this thing up I had to learn a bit about how a WSGI app interacts with the path under which it is mounted or exposed in a server's URL hierarchy. The key resource was this page, which describes the rather obscure SCRIPT_NAME variable: it designates the URL prefix that WSGI chops off incoming request paths and adds back to generated URLs when exchanging requests with the fronting proxy (Apache in this case).
Long story short, in the systemd unit that starts the gunicorn/Flask app I had to set SCRIPT_NAME=miniblog, and in the Apache ProxyPass config, this:
RequestHeader set X-Forwarded-Proto "https"
ProxyPass "/miniblog" "http://localhost:19891/miniblog/"
ProxyPassReverse "/miniblog" "http://localhost:19891/miniblog/"
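On the systemd side, the relevant bits look roughly like this (the unit layout, app module and bind address shown here are illustrative, not copied verbatim from the real unit):

[Service]
# expose the app under the same prefix Apache proxies to
Environment="SCRIPT_NAME=miniblog"
ExecStart=/usr/bin/gunicorn --bind 127.0.0.1:19891 miniblog:app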
I finally got around to updating my server to Debian Bullseye from Buster. The thing that had been holding me back was this notice in the upgrade notes:
Please consider the version of Exim in bullseye a major Exim upgrade. It introduces the concept of tainted data read from untrusted sources.
The basic strategy for dealing with this change is to use the result of a lookup in further processing instead of the original (remote provided) value.
To ease upgrading there is a new main configuration option to temporarily downgrade taint errors to warnings, letting the old configuration work with the newer Exim. To make use of this feature add
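.ifdef _OPT_MAIN_ALLOW_INSECURE_TAINTED_DATA
allow_insecure_tainted_data = yes
.endif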
to the Exim configuration (e.g. to /etc/exim4/exim4.conf.localmacros) before upgrading and check the logfile for taint warnings. This is a temporary workaround which is already marked for removal on introduction.
This sounded scary, so I put it off for years, but then I decided to just do it, since the docs above said one could set the allow_insecure_tainted_data option and then check the logs for specific problems. Alas, after the upgrade my exim started rejecting mail with errors like this one:
temporarily rejected RCPT <someone@tomechangosubanana.com>: Tainted name '/etc/exim4/WHATEVER' for file read not permitted
Long story short: instead of injecting a tainted variable directly into an expanded string (in this case, a router's file option, which is what triggered the error above), one has to use the result of a lookup on the tainted value instead.
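Here's a sketch of the shape of the change (using $local_part as a stand-in for whatever tainted value is involved, and ABS_DIR as a macro holding the absolute directory to search):

# Before: the tainted value ends up directly in a file name, which Exim 4.94+ refuses
file = ABS_DIR/$local_part

# After: the tainted value is only used as a lookup key; the router gets the lookup's result
file = ${lookup{$local_part}dsearch,ret=full{ABS_DIR}{$value}fail}

Dissecting that expansion piece by piece: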
${lookup: the operation to perform, a single-key lookup in this case.
{KEY}: the key to search for. In the example above we build the key from user-input data, which is fine by the tainting rules as long as it's used only to look something up in a database, table or file listing. The point of taintedness is to NOT use the tainted value itself to build values that will go, for example, into filenames.
dsearch: the type of lookup; dsearch is "directory search". It looks for a file named KEY in the ABS_DIR directory and returns (this is the important part) the name of the file it found, not the value of the key (which might be evil).
ret=full: just means "return the entire value"; in this case, the full path rather than just the file name.
ABS_DIR: the directory where we will search for the file.
{$value}: what gets returned if the lookup is successful. In this case we want to return the actual value that was found.
fail: what to do if the lookup fails. If nothing is specified here, the lookup returns the empty string, which resulted in another error ("" is not an absolute path) because Exim then thinks we're assigning "" to the router's file option. Instead, what we want is for the expansion itself to fail, so the router is skipped and processing continues in the normal order. Specifying the special word fail gives exactly that behavior.
With these changes, the routers work as they did before, while following the rules about when and how to use a tainted user-input value.
Luckily for the upgrade to Bookworm and Exim 4.96, there is no such breaking change in exim configuration!
How to convert a FLAC-extracted album to AAC? This will lose quality, but AAC files are more compact and I don't need the super extra audiophile quality.
Assume there’s a .flac file and corresponding .cue file for the entire album.
shnsplit can do it in one fell swoop; just ensure you have shnsplit and fdkaac installed.
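Then something along these lines does the trick (adjust the file names; here I'm assuming the files are called album.flac and album.cue, and picking fdkaac's VBR mode 5 as an example):

# split at the cue points, piping each track through fdkaac to get "NN - Title.m4a" files
shnsplit -f album.cue -t "%n - %t" -o "cust ext=m4a fdkaac -m 5 -o %f -" album.flac

The %n and %t placeholders are the track number and title from the cue sheet, and %f is where shnsplit substitutes the output file name it wants the encoder to write. Note that shntool decodes the FLAC by calling the flac binary, so that needs to be installed as well.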
I recently saw some discussions online where people mentioned they'd been blogging for 20 or 25 years, and it got me thinking.
How long have I been blogging?
My page famously claims "since 1995" - I'm fairly certain that's about when I created my first personal web page, which lived at http://teesa.com/~roadmr (https was incredibly arcane and hard to set up back then). I don't remember what it contained, and sadly the Internet Archive has no record of this page's original content; that could be because the Internet Archive didn't even start archiving the web until May 1996!
The only snapshots of that URL are from May 1997; by then it contained only a "We Moved" link pointing to what would be my home page for at least the next decade: http://www.entropia.com.mx/~roadmr. Even though the first snapshot of that page in the Wayback Machine is from December 1998, this shows I've demonstrably had at least some form of web page since 1997.
My page looked like this for maybe 5 years, being basically a collection of pages and links with no chronology.
The first appearance of an actual blog-like format is in 2002. I used a PHP application called Personal Weblog. I didn't (and still don't) blog much, so this choice of tooling was reasonable for what I wrote: short snippets reminiscent of what would later become "twitter".
Eventually, if I remember correctly, I nuked all the files by accident and decided to go for broke and install Wordpress, which I did in November 2005. I continued to use Wordpress consistently, even after the move to my new URL https://www.tomechangosubanana.com in 2006; indeed the last Wayback Machine snapshot of http://www.entropia.com.mx/~roadmr dates from sometime in 2005-2006.
I've managed to keep and migrate all my content since then, so my first Wordpress post can still be seen here, though it's now stored as static content rendered by Hugo, to which I migrated in 2021 after 16 years on Wordpress.
So by the above and my calculations:
Had a web page since 1995 (28 years at time of writing)
If you don't want to believe that claim and prefer to go with what's provable, namely that I've had a web page since 1997, that's fine by me :) it's still 26 years at time of writing.
Been blogging since 2002 (21 years at time of writing).
Been on the same URL since 2006 (17 years at time of writing).
$ ssh bazaar.launchpad.net
X11 forwarding request failed on channel 0
No shells on this server.
Connection to bazaar.launchpad.net closed.
# The above is successful, the server did accept the connection \o/