The thing is, the backticks are a bash construct, and whatever you specify to -exec does *not* get shell-expanded. So it’s being passed verbatim, with the only substitution being the {} for the found filenames.
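For instance, you can hand the whole command to an explicit shell. A sketch of the idea (the find expression and the basename call here are just illustrative):

# bash does the expansion; the _ becomes $0 inside the script and {} arrives as $1
find . -type f -exec bash -c 'echo "$(basename "$1")"' _ {} \;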
But this works! (though quoting may pose some challenges). It works because the command *is* being processed through bash, so all the normal expansion and shell tricks will work.
OK, so you forgot to add the notification to your initial command line. You can use a loop to monitor a particular process and notify you when it’s done.
In this case I’ll be monitoring an instance of netcat. Determining the process name is up to you 🙂 The ^ and $ anchors make pgrep match the executable name exactly, and nothing else.
The while loop will run while the process exists; once the process disappears the loop continues with the next instruction in the line, which is popping up an alert on the desktop and then sending an email. So if I’m not glued to the desktop, I’ll still get an email when this is done.
while pgrep ^nc$ >/dev/null; do sleep 5; done; alert; (echo "finished" | mail -s "finished" you@somewhere.com)
If you use find, it outputs full paths, which may not always be desirable. It turns out find has a -printf action with which you can do niceties such as outputting plain filenames (as if you’d run basename on them, but with one less command in your pipeline):
find data/{audio,documents,images,video,websites}/ -type f -printf "%f\n"
The -printf action has a lot of formatting variables and possibilities! Give it a try, and look at the man page for more information.
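For instance, a quick sketch combining a few of them (%f is the bare filename, %s the size in bytes, and %T formats the modification time):

find data/ -type f -printf "%f\t%s bytes\tmodified %TY-%Tm-%Td\n"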
So the problem was to draw people at random from a list. The list is contained in a leads.txt text file, one name per line.
This nifty one-liner will output a randomly-picked person from that file every time it’s invoked. It’ll then remove the name from the file so it doesn’t get repeated.
export i=`sort leads.txt | shuf | head -1`; echo $i; sed -i "s/^$i$//;/^$/d" leads.txt
It can be shortened by changing shuf | head -1 to shuf -n 1.
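That is, something like this (the initial sort also becomes redundant, since shuf shuffles regardless of input order):

export i=`shuf -n 1 leads.txt`; echo $i; sed -i "s/^$i$//;/^$/d" leads.txt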
If you’d rather avoid deleting already-chosen entries from the file, the same trick can just comment out the names it picks. Here’s a sketch of that variant (grep -v keeps the commented names out of the draw, and sed’s & stands for the matched line):
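export i=`grep -v "^#" leads.txt | shuf -n 1`; echo $i; sed -i "s/^$i$/#&/" leads.txt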
Many of the on-line instructions and tutorials are quite complicated. Why? It was easy for me:
sudo apt-get install sbuild
To create a build chroot:
mk-sbuild --distro=ubuntu --arch=i386 precise
This will create a schroot in /var/lib/schroots/precise-i386. Note how it appends the architecture to the schroot name. Also note that the first time you run mk-sbuild, it’ll show you a configuration file and configure your environment. I didn’t change anything in the config file; I used it “as it was”. When it prompts you to log out, do it, otherwise things won’t work.
OK now you want to build a package using your chroot with sbuild:
sbuild -A -d precise package.dsc
This will build the package on precise for ALL available architectures. Note that -d is just “precise”; the -A flag will tell sbuild to build Architecture: any packages for all available architectures (so if you have amd64 and i386 chroots, it’ll do the right thing and build two packages).
If you want to build arch-specific packages:
sbuild -d precise-i386 package.dsc
This will magically build for the given architecture (i386). Note that Architecture: any packages will also be built.
You can also specify the arch as a parameter with --arch (but then you have to leave it out of the -d name), with something like:
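sbuild --arch=i386 -d precise package.dsc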
Ever wanted to diff the output of two commands? Usually it’s done by first redirecting each command’s output to a temporary file and then diffing the two files.
This syntax, known as process substitution, creates a named pipe for each command and uses the pipe’s name in place of a filename. Bash takes care of everything automagically, so all you have to do is:
sort <(cat /etc/passwd)
That’s a dumb example, but how about this?
diff <(command1) <(command2)
The commands can be as complicated as you need them to be!
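For example, comparing the installed-package lists of two machines without creating a single temporary file (the hostnames here are made up):

diff <(ssh web01 dpkg -l) <(ssh web02 dpkg -l)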
A very interesting conversation erupted today, beginning when a coworker sent a lengthy email stating his reasons for altogether leaving Ubuntu 11.04’s new Unity desktop interface and instead resorting to the good, old-fashioned Gnome 2 “Classic” session.
In it he makes some very valid points about functionality that’s different to what he was used to. This understandably affects his workflow, so instead of wrestling with a new interface, he chose to go with the old one, hopefully until Unity matures enough for him to be able to customize it to his liking.
What’s interesting was the amount of responses it got, where everyone spoke about their “pet peeves” with Unity. The vast majority were changes in how Unity handles things, that interfered with people’s workflows. It’s understandable that even a small change in how your user interface behaves, when you’ve become adept at working with it, disrupts things enough (and annoyingly enough) that you either go back to the old user interface, or just start fiddling with the new one until you find a way to get things to an acceptable state.
Which is what struck me as curious about this thread: there were basically two camps, those who flat out abandoned Unity for the time being, and those who actually went looking into how Unity behaves and integrates with the environment, and came up with ways to make Unity more comfortable to those used to the “old ways” of Gnome 2.x and its desktop interface.
Without demerit to the original poster, whose points were quite valid, a lot of responses suggested ways to solve about 80% of his complaints about Unity. However, the fact that it took a team of experts to solve the problems that a user (and another expert, at that) was experiencing is a testament to the fact that Unity could still be made more intuitive, easier and more customizable.
I finally upgraded to Ubuntu 11.04 and Unity this past weekend. Like many, I experienced some usability issues, where the desktop wasn’t behaving the way I was used to. However, my use of the system means that I basically want the UI to stay out of my way. So the main change I had to make was to get the Unity dock to auto-hide, so that it only appears when I ask it to. The rest of the time it’s hidden away. Everything else, well, it’s admittedly different than what I’m used to, but that’s change for you. Was Unity making a change for change’s sake? Maybe so, but I think it’s change in the right direction. Even if it somewhat alienates experienced users (for whom, however, workarounds exist that handle nearly all their concerns), I think the true success of Unity is in how it works for new users. And here are two examples.
Another coworker posted his experience with showing Ubuntu and Unity to a newbie, fresh-from-Windows user. The user’s comments were along the lines of “this looks nice”, “it’s easy to use” and “I’m keeping it”.
Also, even though some have complained about the app lens being hard to use (a complaint I’ve already seen twice), I’ve seen users realize “but hey, if it’s really that messy, you can use the search field to find what you need, right?”. So yes, end users are realizing this, and it’s just a matter of polishing how things work. If anything, I think it’s great to move users away from the “the computer has only two buttons” mindset and get them using the keyboard a little more.
So yes indeed, I’m staying on Unity, and I’m looking forward to seeing it mature into a better desktop interface. As Mark Shuttleworth said, it’s a foundation on which the next generations of the Ubuntu user experience will be built. I’ll be thrilled to be along for the ride.
Finally, for a great write-up on why your desktop changed, and on why the developers would appreciate you giving it a whirl and helping improve it (even just commenting on the stuff you find hard, unintuitive or just plain wrong), rather than just swearing off these newfangled changes (without which, face it, you’d still be using fvwm and MIT Athena widgets), drop by Federico Mena-Quintero’s activity log and read his wonderful, short article “Moving into your new Gnome 3 house”.
I remember an easier time when all keyboards had the same layout (C-64, anyone?) and if you wanted to type special characters you had to resort to arcane command sequences, if they were at all possible.
My, how times have changed.
My first PC compatible had a Spanish keyboard, and you could very simply tell the OS (MS-DOS) about your keyboard layout. For a while this worked pretty well. Then someone decided that Latin America was so different from Spain that we needed our very own keyboard layout; this layout just moves stuff around needlessly, destroying many years of experience for those of us who were accustomed to the Spanish keyboard. I understand removing the ç, as it’s not used in Latin America, but why move all the rest of the stuff around?
So basically I got used to the Spanish keyboard, which has worked well in all kinds of OSes, from MS-DOS to Windows, OS/2 and, yes, Linux.
Meanwhile, the Latin American layout was such a pariah that at some point its code got overwritten by the Latvian keyboard (la): when doing a system upgrade, all of a sudden your keyboard was in Latvian, and you had to select “latam” for Latin America instead.
Eventually I happened to get a laptop with a Canadian French keyboard. Luckily, this is not the dreaded French AZERTY keyboard, but basically an English layout with most symbol keys mapped very strangely. So if you want to type the basic alphabet you’re OK, just as you would be with an English keyboard, but things start getting weird when you need special characters or want to compose accents, cedillas and stuff like that. This was so different from any other layout I’d used that I was basically freaking out. I could just ignore the red characters on my keyboard and/or use it as just an English keyboard, but I routinely need to compose text in Spanish and in French, so how would I go about doing this?
And no, the ages-old trick of memorizing ASCII codes for special characters doesn’t cut it: for one, it’s unreliable on Linux (especially in graphical mode), and for another, it’s just primitive! I used to chuckle at all the people I’ve seen through the years who had a nice “cheat sheet” of ASCII codes for frequently-used accented characters glued to their desks, as opposed to taking 15 minutes to correctly configure their keyboards to do this natively.
So anyway, what I came across while checking out the available keyboard maps under Linux and trying to figure out how to type stuff on the Canadian keyboard, was this wonder of wonders, the US International with AltGr Dead Keys layout.
Basically, it takes the right Alt key (labeled AltGr on my keyboard, a monstrosity I was already used to from the Latin American and Spanish keyboards) and uses it to “compose” or “dead-key” stuff (dead keys are like accents: you press the accent key, and the next letter you type comes out accented). In combination with ~, ", ' and `, this enables me to type nearly all accented characters with relative ease.
Also, I can use AltGr+vowel to type acute-accented vowels (áéíóú), and AltGr+n for ñ.
Grave accents (è) and tilded letters (ã) can be composed by AltGr+accent (use ` for grave, ~ for tilde), and then the letter you want to type.
What I like about Linux’s keyboard selection thingy is that you can see an actual layout map. Thus, even if my keyboard doesn’t have the characters stenciled in, I can take a quick peek and see where stuff I need might be.
So I can type things like ç or €, all with a minimum of fuss. Even more complicated stuff like ï, œ and ø is still just one AltGr+key away. All this while preserving a layout that’s very familiar to everyone (English), and where most of the strange characters used while programming ({}][\|~) are also much easier to reach than on the Spanish keyboard I was used to (which needs AltGr for all sorts of braces and pipes, making it very painful on my hands).
So there you have it: if you find yourself wrestling with choosing a good physical keyboard layout *and* making it work on your OS, stop pulling your hair out, get an English-layout keyboard, and use US International with AltGr Dead Keys!
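If you’d rather switch from the command line than hunt through the keyboard settings GUI, setxkbmap knows this layout as the altgr-intl variant of us; this applies it to the current X session:

setxkbmap -layout us -variant altgr-intl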