M0UNTAIN 0F C0DE

My PC is on literally 24/7, as I run a Folding@home (FAH) client and Transmission to seed the RPi and Ubuntu image torrents, but when I want to use my machine I have to pause them both as they bog it down.

This was a manual task I had to do every time I sat down, and I had to remember to set them both going again when I was done, something I didn't always remember to do at 3AM after a session of "I'll just do a little more..."

Out of habit I lock my PC whenever I leave it, and I thought that was an ideal trigger to pause and resume the FAH client and Transmission!
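One way to wire that trigger up, assuming a GNOME desktop (which emits an org.gnome.ScreenSaver ActiveChanged signal on lock and unlock) and the stock FAHClient and transmission-remote command-line tools, is a small watcher script. This is a sketch, function names and all; other desktops emit different signals:

```shell
#!/bin/sh
# Sketch: pause FAH and Transmission on lock, resume on unlock.
# Assumes GNOME's org.gnome.ScreenSaver D-Bus signal; other desktops
# use different signals, and the client CLIs may differ by version.

parse_lock_state() {
    # Map one line of dbus-monitor output to "lock" or "unlock"
    case "$1" in
        *"boolean true"*)  echo lock ;;
        *"boolean false"*) echo unlock ;;
    esac
}

on_state() {
    if [ "$1" = "lock" ]; then
        FAHClient --send-pause
        transmission-remote --torrent all --stop
    elif [ "$1" = "unlock" ]; then
        FAHClient --send-unpause
        transmission-remote --torrent all --start
    fi
}

# Watch the session bus only when asked, so sourcing the file is harmless
if [ "${1:-}" = "--watch" ]; then
    dbus-monitor --session \
        "type='signal',interface='org.gnome.ScreenSaver',member='ActiveChanged'" |
    while read -r line; do
        on_state "$(parse_lock_state "$line")"
    done
fi
```

Start it with --watch from your session's autostart and it just sits there reacting to lock events.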

Today I was writing a script that needs to run without user interaction and needs to get the latest version of a single file from a private BitBucket Git repo over SSH.

BitBucket allows you to do this over HTTPS, and I could use something like curl or wget with digest auth, but that would require the username and password to be added to the script in plain text...

Not ideal, especially when SSH keys are already set up and far more secure than passwords. But there is a solution...
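One approach that fits: git archive can talk to a remote over the same SSH transport and stream a tarball containing only the paths you ask for, assuming the host supports git-upload-archive over SSH (not every host does). The function name, repo address and file path below are made up for illustration:

```shell
#!/bin/sh
# Sketch: grab one file from a Git repo without keeping a clone around.
# fetch_one is my own name; the repo URL in the usage line is hypothetical.

fetch_one() {
    # $1 = remote repo (URL or local path), $2 = path of the file inside it
    git archive --remote="$1" HEAD "$2" | tar -xf -
}

# Usage in an unattended script; the SSH key does the authentication:
# fetch_one ssh://git@bitbucket.org/me/private-repo.git config/deploy.conf
```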

I use SSH literally every single day, at work and at home, so for security, and because I don't want to spend time typing long secure passwords, I use SSH keys for authentication.

What's the problem?

Usually you'll generate a key pair with ssh-keygen, copy the public key to any server you want to log in to, and you're done. So what's the problem with that? Well, if you ever want to renew that single key, increase its length for better security, or work out which users and servers the key is authorised for, then you are going to have to change the public key on every one of the servers you can access.

It would be much better if we had a key pair per user per server; then we can renew, change or delete a key for a single login. We have complete control.
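In practice that just means giving each pair its own file name and telling ssh which key belongs to which host in ~/.ssh/config. The host, user and file names here are examples:

```
# ~/.ssh/config -- one entry (and one key pair) per login; names are examples.
# Generate and install the dedicated pair first, e.g.:
#   ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_webserver
#   ssh-copy-id -i ~/.ssh/id_rsa_webserver.pub me@webserver.example.com

Host webserver
    HostName webserver.example.com
    User me
    IdentityFile ~/.ssh/id_rsa_webserver
    IdentitiesOnly yes
```

IdentitiesOnly stops ssh offering every other key you own, and revoking that one login is now just a matter of removing a single line from the server's authorized_keys.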

Every time I need to write a new image to my Pi, usually because I've broken it, I have to look up how to write the image, check the mounts, and find and download the latest version of the image I want. Even then I have no idea if dd is actually progressing or how long I'm going to have to wait...

There wasn't really anything out there that could take an image name and a location and do the rest for me. Now there is!
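The manual routine being replaced boils down to something like this sketch; the function names, the image name and /dev/sdX are all placeholders, and the device name deserves triple-checking before any dd run:

```shell
#!/bin/sh
# Sketch of the manual routine: refuse to write to a mounted device, then
# dd the image across with visible progress. Function names, the image
# name and /dev/sdX are all placeholders.

ensure_unmounted() {
    # Fail if any partition of the target device is currently mounted
    if mount | grep -q "^$1"; then
        echo "error: $1 has mounted partitions, unmount them first" >&2
        return 1
    fi
}

write_image() {
    # $1 = image file, $2 = target device
    ensure_unmounted "$2" || return 1
    # status=progress needs GNU coreutils 8.24+; on older systems pipe
    # the image through pv instead to see how far along dd is
    dd if="$1" of="$2" bs=4M status=progress
    sync
}

# Usage, as root, after checking the device name very carefully:
# write_image 2015-XX-XX-raspbian-wheezy.img /dev/sdX
```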

I've always thought the Linux way of everything auto-updating for you, albeit after asking you first, was the best way forward, so it always felt kind of wrong having to run composer self-update manually.

What better tool is there to periodically run a script than cron? It's as simple as adding the following to an executable file located, at least on Ubuntu, at /etc/cron.daily/composer:

#!/bin/sh

# Update Composer to the latest version
composer self-update

It's probably a good idea to redirect the output to a log file with a timestamp, but I'm not that worried. Composer complains at you if it's older than 30 days anyway:

Warning: This development build of composer is over 30 days old.
It is recommended to update it by running "composer self-update" to get the latest version.
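For the record, if you do want that timestamped log, here is a sketch of the cron file; the log path and the helper name are my own examples:

```shell
#!/bin/sh
# Sketch: /etc/cron.daily/composer with a timestamped log.
# The log path and the log_run helper are example names of my own.

LOG="/var/log/composer-update.log"

log_run() {
    # Write a timestamp line, then the command's combined stdout/stderr
    echo "== $(date '+%Y-%m-%d %H:%M:%S') $*" >> "$LOG"
    "$@" >> "$LOG" 2>&1
}

# The daily job is then just:
# log_run composer self-update
```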

There was a new version of the Linux kernel released today, 3.13.0-49. Once again I came across the issue of my /boot partition not having enough free space to fit the new kernel, and I was greeted by this message:

The upgrade needs a total of XX M free space on disk /boot.
Please free at least an additional XX M of disk space on /boot

There's an easy, if a little dangerous, fix to be found on the AskUbuntu site.

TL;DR version:

sudo apt-get purge linux-image-3.13.0-{X,Y,Z}-generic

Where X, Y and Z are the versions you want to delete.

I can't help but think there's a way to automate purging all but, say, the last two kernel versions whenever a new one is released...
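A rough sketch of how that might look, keeping the newest two and only printing the purge command so nothing is deleted without a human reading it first. The helper name is mine, the package pattern is Ubuntu's, and it assumes the kernel you are currently running is among the two newest:

```shell
#!/bin/sh
# Sketch: find all installed kernel image packages, keep the newest two,
# and print (not run) an apt-get purge command for the rest.

old_kernels() {
    # Read package names on stdin, version-sort, drop the newest $1
    # (head -n -N needs GNU coreutils)
    sort -V | head -n -"$1"
}

doomed=$(dpkg -l 'linux-image-[0-9]*' 2>/dev/null \
    | awk '/^ii/ {print $2}' | old_kernels 2 | tr '\n' ' ')

if [ -n "$doomed" ]; then
    echo "sudo apt-get purge $doomed"
fi
```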