I recently completed a project which involved the Python Imaging Library (PIL) and its ability to load bitmap fonts, which must be presented in PIL format. I have a script, I have a PIL font, and I have an error…

$ grep ImageFont.load inkywhat-writer.py 
font = ImageFont.load("tom-thumb.pil")
$ ./inkywhat-writer.py 
Traceback (most recent call last):
File "./inkywhat-writer.py", line 14, in <module>
font = ImageFont.load("tom-thumb.pil")
File "/usr/lib/python2.7/dist-packages/PIL/ImageFont.py", line 249, in load
f._load_pilfont(filename)
File "/usr/lib/python2.7/dist-packages/PIL/ImageFont.py", line 81, in _load_pilfont
raise IOError("cannot find glyph data file")
IOError: cannot find glyph data file

Hmm. The PIL file is in the same directory as the script, so it should work. A quick DuckDuckGo of the error turns up little to help, beyond someone asking for help with the same error in 2003… with no response.

Turns out that a PIL file isn’t all you need: while it contains information about the font, it doesn’t actually contain the bitmap font itself – the “glyph data file” Python is complaining about not being able to find.

The key part of the PIL library itself is this chunk of ImageFont.py:

with open(filename, "rb") as fp:
    for ext in (".png", ".gif", ".pbm"):
        try:
            fullname = os.path.splitext(filename)[0] + ext
            image = Image.open(fullname)
        except Exception:
            pass
        else:
            if image and image.mode in ("1", "L"):
                break
    else:
        raise IOError("cannot find glyph data file")

What this chunk of code is doing is taking the filename of the font you’re trying to load – in my case “tom-thumb.pil” – and looking for a bitmap graphic of the same name with a .png, .gif, or .pbm extension: Portable Network Graphic, Graphics Interchange Format, or Portable BitMap images. If it can’t find one, you get the “cannot find glyph data file” error – which is, perhaps, not the clearest description of what’s actually gone wrong.

The fix, then, is to make sure that you have both the PIL file and the corresponding bitmap:

$ ls tom-thumb*
tom-thumb.pbm tom-thumb.pil

Providing you’ve got both files, they have the same filename apart from the extension, and the extension’s in lower-case characters only, you’re golden: the PIL library will load the font and your program continues as normal.
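
If you’d rather catch a missing bitmap up front than decode that error again, a few lines of Python will do it. This is just a sketch – the load_pil_font() helper is mine, not part of PIL – mirroring the extension list the library itself searches:

import os
from PIL import ImageFont

def load_pil_font(pil_path):
    # PIL wants a companion .png, .gif, or .pbm bitmap with the same
    # base name sitting next to the .pil metrics file.
    stem = os.path.splitext(pil_path)[0]
    if not any(os.path.exists(stem + ext) for ext in (".png", ".gif", ".pbm")):
        raise IOError("no glyph bitmap (.png/.gif/.pbm) found for " + pil_path)
    return ImageFont.load(pil_path)

font = load_pil_font("tom-thumb.pil")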

If you’ve got a BDF or PCF format bitmap font and need to convert it to PIL and PBM for use with the PIL library, the following handy-dandy Python script from Peter Samuel Anttila is what you need. Save it, execute it, and any BDF or PCF file in the current directory will be converted to PIL and PBM.

#!/usr/bin/env python
# Author: Peter Samuel Anttila
# License: The Unlicense <http://unlicense.org, October 16 2018>
from PIL import BdfFontFile
from PIL import PcfFontFile
import os
import glob

font_file_paths = []
current_dir_path = os.path.dirname(os.path.abspath(__file__))
font_file_paths.extend(glob.glob(current_dir_path+"/*.bdf"))
font_file_paths.extend(glob.glob(current_dir_path+"/*.pcf"))

for font_file_path in font_file_paths:    
    try:
        with open(font_file_path,'rb') as fp:
            # despite what the syntax suggests, .save(font_file_path) won't 
            # overwrite your .bdf files, it just creates new .pil and .pbm
            # files in the same folder
            if font_file_path.lower().endswith('.bdf'):
                p = BdfFontFile.BdfFontFile(fp)
                p.save(font_file_path)
            elif font_file_path.lower().endswith('.pcf'):
                p = PcfFontFile.PcfFontFile(fp)
                p.save(font_file_path)
            else:
                # sanity catch-all
                print("Unrecognized extension.")
    except (SyntaxError,IOError) as err:
        print("File at '"+str(font_file_path)+"' could not be processed.")
        print("Error: " +str(err))

As anyone who knows me will be only too aware, I’m something of an avid (read: obsessive) collector of vintage computing equipment. Some time ago, I was given the task of helping a widow out with a loft full of gear, which included several boxes of floppy disks and a rather nifty Dragon 64 in a third-party dual-floppy housing which had belonged to her late husband. Since then, I’ve been gradually sorting through the collection to see what should be preserved for posterity and what should be recycled, and in doing so I came across something which should not only be preserved but shared.

First, some background: Des Critchlow, born 1940, was a man whose two passions were amateur radio and computing. Having upgraded from the Dragon 64 to an IBM Compatible, Des began keeping notes and logs on floppy disks, both in service of his personal life and of the radio network – RAYNET – to which he belonged. The following story is extracted from one of those disks, and is something I believe Des would have wanted to find a wider audience. The text has been corrected for typographical errors, but is otherwise duplicated intact.

Fact is often stranger than fiction.
Quite a few years ago when yours truly used an FT101EX a friend of mine, who we will call Rimmer, said can you do me a favour. He explained his son, a devout C.B.er on SSB was hoping to acquire a new rig for “DXing” with, and could I please test it out as it covered Ham bands as well as 27 MHz.
As I agreed, a date/time was suggested a few days later, and he duly arrived one evening. The rig turned out to be the same as mine – an FT101EX modified to cover 27 MHz as well as all the Ham bands. After a few quick checks into a dummy load an antenna test was done. The set proved quite good and as at that time I was running a dipole on 20 metres I set out to get a report with it.
Tuning to a quiet spot on 20 metres I was about to give a call, but I heard a weak and watery signal. The signal was a French maritime mobile asking for help from anyone who knew about engines or a fitter. By a stroke of luck the young man with me was a heavy goods fitter, and said he would help. We questioned the French station and gathered that his starter motor was sticking in, and so he could not get more than a few revs before horrible noises came from the engine. My young friend was most excited and insisted this was going to be dead easy. He said “tell him to shove it into second gear and rock it a bit and it should clear the problem.”
My friend could not understand why I just rolled about laughing. I said “don’t you understand you don’t tell anyone to rock a boat!!!” It took several minutes before an alternative method of clearing the problem was sent to the poor Frenchman. Which was, by the way, to belt the starter casing with a hammer. Lord knows who was listening, but they would have creased themselves if that message had been sent.
G3PTV

I recently purchased a ZoomFloppy board, which allows old Commodore IEC serial devices – most importantly, floppy drives – to be connected to a USB port on a modern computer. It’s a great little device, but the instructions aren’t up to scratch: binary software is provided for Windows, manual installation instructions for OS X, but Linux users are left to their own devices.

Here, if for no other reason than I’ll need these if I have to reinstall it, is how I got the ZoomFloppy working on Ubuntu 14.04. You’ll need an up-to-date compilation environment installed, so start by making sure you’ve got the basics:

sudo apt-get install libusb-dev build-essential linux-headers-generic git

Next, you’ll want to download, compile and install the CC65 compiler. No, you really will. Trust me on this one.

cd ~
git clone https://github.com/cc65/cc65.git
cd cc65
make
sudo prefix=/usr make install
Assuming all went well, download, compile and install OpenCBM:

cd ~
git clone git://git.code.sf.net/p/opencbm/code opencbm
cd opencbm/opencbm
make -f LINUX/Makefile
sudo make -f LINUX/Makefile install install-all install-plugin-xum1541
sudo ln -s /usr/local/lib/libopencbm.so.0 /usr/lib/libopencbm.so.0

That last line fixes a problem where OpenCBM ends up looking in the wrong place for its library. Finally, you’ll need to add udev rules for the ZoomFloppy hardware itself:

sudo vim /etc/udev/rules.d/45-opencbm-parallel.rules

Add the following lines to the bottom of the file, then save and quit:

SUBSYSTEM!="usb_device", ACTION!="add", MODE="0666", GOTO="opencbm_rules_end"
# zoom floppy
ATTRS{idVendor}=="16d0", ATTRS{idProduct}=="0504", GROUP="users", MODE="0666"
LABEL="opencbm_rules_end"

Restart udev:

sudo service udev restart

Connect your ZoomFloppy and IEC device and check all is well:

cbmctrl detect

That’s it. Enjoy!

UPDATE:

I’ll probably need to refer to these instructions for enabling the VICE C64/128 emulator to talk to a physical drive via the ZoomFloppy at some point, too. You need to tick two options. The first is Settings -> Peripheral Settings -> Device #8 -> Enable IEC Device. The second is Settings -> Peripheral Settings -> Device #8 -> Device Type -> Real Device Access. With both ticked and the ZoomFloppy connected, you can talk to the physical floppy drive as though it were Device 8 (i.e. load"*",8,1). Huzzah!

I recently became the owner of an HP ProLiant MicroServer N54L, a small box with a bunch of hard drive bays and a low-power AMD Turion dual-core processor. I’ve also been tweaking it quite heavily, resulting in a dramatic improvement in network performance and a significant drop in power draw – and this is how I did it. If you’re not running Linux, mind, everything past Step 1 is likely of no use to you.

Warning: The advice here worked for me, but some of it – in particular turning off journalling and using an aggressive spin-down profile on the drives – is hardly best-practice. It can even shorten the lifespan of certain components. Where something can have a deleterious effect, I’ll highlight it – but go into this warned that not everything I’ve done here is for everyone.

Step 1: Unlock the BIOS
The MicroServer is HP’s entry-level server, and as a result is missing several features of higher-end models – in particular hot-swap drive bays. This isn’t a physical restriction, but a fake restriction put in place by HP – and it can be removed by installing a modified BIOS. Doing so also allows all the SATA ports to run at 3Gb/s – two are limited to 1.5Gb/s by default – and fixes a flaw in the original BIOS that prevents the NIC from operating under certain operating systems.

HP has recently taken the decision to lock its BIOS updates behind a warranty wall: if you don’t have a warranty, you can’t get BIOS updates. That’s not very nice, so it’s a good job that someone has uploaded a pre-modified BIOS. The naughty person. Just download the file, write the image to a USB stick with dd (or ImageWriter if you’re on Windows) and power the server on with the stick in a port. Wait for the DOS prompt to appear and power off – you’re done.

Step 2: Network Tuning
Out of the box, I found the MicroServer’s network performance to be poor indeed under Ubuntu 13.10. Some of the blame has to go to the cheap Broadcom NIC, but I figured I could do better.

First, edit /etc/rc.local and stick the following lines into it, just above ‘exit 0’:

ethtool -G em1 rx 511
ifconfig em1 txqueuelen 1000
defaultroute=`ip route | grep "^default" | head -1`
ip route change $defaultroute initcwnd 10

This forces the NIC to use its entire ring buffer for RX packets – by default only 200 of the 511 available ring entries get used – and increases the transmit queue length to a figure better suited to gigabit network traffic. The default route is also tweaked to use a larger initial congestion window (initcwnd 10), which helps new connections ramp up more quickly.

Next, edit /etc/sysctl.conf and add the following:

fs.file-max = 100000
net.core.netdev_max_backlog = 50000
net.core.optmem_max = 40960
net.core.rmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_default = 16777216
net.core.wmem_max = 16777216
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_max_syn_backlog = 30000
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_sack=0
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192
vm.swappiness = 10

A reboot will bring the box up with your new settings, or you can force a reload with sysctl -p (or sudo sysctl -p if you’re not root). Combined, these two tweaks resulted in the throughput of the system going from about 40MB/s to 120MB/s, so it’s definitely worth doing.

Step 3: Power Tuning
The idle power of the MicroServer wasn’t great when I was finished tuning the network, pulling around 40W from the wall. A little careful tuning of the system, though, and that dropped to 21W – a significant saving.

First, we’ll discuss the fixes that won’t cause you any problems. Open /etc/rc.local again and add the following lines:

echo 'min_power' > '/sys/class/scsi_host/host2/link_power_management_policy'
echo 'min_power' > '/sys/class/scsi_host/host3/link_power_management_policy'
echo 'min_power' > '/sys/class/scsi_host/host4/link_power_management_policy'
echo 'min_power' > '/sys/class/scsi_host/host5/link_power_management_policy'
echo '0' > '/proc/sys/kernel/nmi_watchdog'
echo '1500' > '/proc/sys/vm/dirty_writeback_centisecs'
echo 'auto' > '/sys/bus/pci/devices/0000:00:18.3/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:00.0/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:01.0/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:06.0/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:01:05.0/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:11.0/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:12.0/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:13.0/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:13.2/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:18.4/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:12.2/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:18.2/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:14.0/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:14.1/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:14.3/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:14.4/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:16.0/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:16.2/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:18.0/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:00:18.1/power/control'
echo 'auto' > '/sys/bus/pci/devices/0000:02:00.0/power/control'

That’ll have a small but noticeable impact on power draw at idle, enabling a bunch of typically-off power management features. If you’ve got additional USB or PCIe devices in your MicroServer, you may find that this doesn’t catch all of ’em; install PowerTOP and go to the ‘Tunables’ tab to see if there’s anything labelled ‘Bad,’ then toggle it to ‘Good.’

That’s a start, but for real savings we need to spin the drives down. This may or may not be possible in your usage scenario: mechanical drives have a limited number of load/unload cycles before they break, and the last thing you want is to have them constantly spinning down and spinning back up again. Because I’m not always accessing the MicroServer – I back up to it once a day, stream media from it during the evening, and otherwise mostly leave it alone – I can get away with a reasonably aggressive power-saving plan; you may well need to adjust the numbers below to suit your own needs.

If you want to continue, start with adding the following to /etc/rc.local:

hdparm -S 60 /dev/sda /dev/sdb /dev/sdc /dev/sdd
hdparm -B 1 /dev/sdb /dev/sdc /dev/sdd
hdparm -M 128 /dev/sdb /dev/sdc /dev/sdd

That tells the system to spin all four hard drives down after 300 seconds of inactivity. If you ain’t got four hard drives, edit the line accordingly. Some drives may not sleep on their own; if that’s the case, have a look at the Powernap tool and use it in conjunction with ‘hdparm -y’ to force a drive to spin down. I use it on my system because one drive of a mirror sleeps while the other stays awake; Powernap watches for this, and when the first drive sleeps it forces the second into the low-power state.

The second command puts the drives into the most aggressive power management level possible, while the third puts them into the quietest available acoustic management mode. Note that I haven’t included the boot drive sda in those two: the drive that came with my MicroServer doesn’t support either of those settings. Try running the commands yourself, and if your drives do support them then add them to the list above.

One drive that won’t spin down is your boot drive – in my case, the 250GB drive the MicroServer came with. If that’s a problem, consider switching to a solid-state drive (SSD) instead. If you’re truly desperate to save as much power as possible but have no cash left, however, there are options. First, move /var – the part of the disk that sees the most activity, largely thanks to the log files held in /var/log – to a spare USB flash drive you’ve got lying around. You’ll need at least 1GB, preferably 2GB or more to allow for sudden growth.

The procedure for moving /var isn’t straightforward, and is rife with danger – it’s perfectly possible to end up with a non-booting system. If you’re still reading, here’s how I did it (as root) with the USB stick inserted as /dev/sde. First, edit /etc/fstab and insert a line to mount the USB stick as /var, and while you’re at it edit the root mount point to disable access time writing.

/dev/sde1       /var    ext4    defaults,noatime,nodiratime     0       1
/dev/sda1       /       ext4    noatime,nodiratime,errors=remount-ro 0       1

I’d recommend using the drive’s UUID (visible with the blkid command) instead of the device node, ‘cos you don’t want the system to get confused and mount the wrong drive. Next, populate the drive:

mkfs.ext4 /dev/sde1
mkdir /tempvar
mount -t ext4 /dev/sde1 /tempvar
cp -rfp /var -T /tempvar
umount /tempvar
mount /dev/sde1 /var
reboot

When the server comes up – assuming nothing went wrong – /var is now running from the USB drive, saving writes to the mechanical drive. However, you may find the mechanical drive still doesn’t spin down. In my case, this was ‘cos I was running a journalling file system – ext4 – which was always writing data to the disk. Simple fix: turn journalling off.

Another warning: turning journalling off is a really bad idea. Journalling is there to ensure that the file system doesn’t get corrupted. Turn it off, and there’s a non-zero chance that a sudden reboot – a power-cut, say – will result in a corrupt root file system and a non-booting server. I’ve got a UPS and a backup of the file system, so it’s a chance I’m willing to take – but consider the implications long and hard before continuing.

If you’re still convinced you want to do this, you can turn ext4’s journalling option off by remounting the root filesystem read-only and issuing the following command:

tune2fs -O "^has_journal" /dev/sda1

With that, the root drive is now free to spin down – resulting in the aforementioned 21W idle power draw. But seriously, don’t do this. Just buy a cheap SSD instead.

During a discussion on a web forum, I put forward the claim that for heavy usage scenarios buying a power supply that delivers twice the wattage required can add up to energy savings. This was brought into question, so I devised a neat little spreadsheet with three simulated scenarios – the results of which I have reproduced below.

Using several assumptions, I built three models: a gamer who plays two hours a day, a pro-gamer who plays eight hours a day, and a folder or miner who has the system fully loaded 24 hours a day. Each of these users is building a new rig, the specifications of which are given below. In all three cases, it’s a completely new rig: no existing parts are being used. The models compare the energy used by two otherwise identical power supplies, one running at near-full load and the other at half load, and show the energy savings that come from the fact that PSUs are naturally more efficient at 50% load. Actual figures for this increase in efficiency, taken from the official 80 PLUS certification requirements, can be found in the Assumptions section.

THE RIG

  • Radeon R9 290(X) TDP: 300W
  • Intel Core i7 4770K TDP: 84W
  • Motherboard, Fans and So Forth: 40W
  • Non-Green Hard Drive: 8W active, 3W idle
  • Total maximum system power draw: 432W.

ASSUMPTIONS
When gaming, the GPU is 100% loaded and the processor 60% loaded (two cores versus all four cores, plus overhead), while the hard drive is mostly idle for a total power draw of 393.4W rounded down to 393W for simplicity’s sake.
When participating in distributed computing projects like Folding@Home or Litecoin mining, both CPU and GPU are 100% loaded, while the hard drive is mostly idle for a total power draw of 428W.
Electricity currently costs on average 15.32p per kilowatt-hour (kWh), based on figures from the Energy Saving Trust. From the same page, generating each kWh of electricity causes 0.517kg of carbon dioxide to be emitted into the atmosphere.
The cost of electricity is rising at 7 per cent annually, based on an average of the most recent price rises listed on USwitch.
The PSUs in question have a five-year warranty, and thus a five-year worst-case lifespan. All calculations, therefore, are based over a five-year period.
The two PSUs under comparison are both 80 PLUS Titanium rated, one at 450W and one at 900W. As a result, at the system’s peak load the 450W offers 91 per cent efficiency, and the 900W offers 96 per cent efficiency – both minimum efficiency figures at 100% and 50% load respectively as required by the 80 PLUS certification. Buying the 900W PSU costs £50 more than the 450W PSU.

With that in mind, let’s run the numbers.
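
(For anyone who wants to check my working, the following short Python sketch rebuilds the model from the assumptions above. The five_year_saving() helper is my own illustration rather than the original spreadsheet, but it lands on the same savings figures quoted in the results below; note that it reports the carbon dioxide saving per year.)

# Rough reconstruction of the spreadsheet using the assumptions listed above.
PRICE_PER_KWH = 0.1532   # pounds per kWh, Energy Saving Trust average
PRICE_RISE = 0.07        # electricity price rising 7 per cent a year
CO2_PER_KWH = 0.517      # kg of CO2 emitted per kWh generated
YEARS = 5                # warranty period, so worst-case lifespan
EXTRA_COST = 50.0        # price premium of the 900W unit in pounds
EFF_SMALL = 0.91         # 450W Titanium unit at near-full load
EFF_LARGE = 0.96         # 900W Titanium unit at around half load

def five_year_saving(load_watts, hours_per_day):
    """Return (pounds saved after the premium, kg CO2 avoided per year)."""
    watts_saved = load_watts / EFF_SMALL - load_watts / EFF_LARGE
    kwh_per_year = watts_saved * hours_per_day * 365 / 1000.0
    # Sum the cost over five years, with the price rising 7% each year.
    cost_saved = sum(kwh_per_year * PRICE_PER_KWH * (1 + PRICE_RISE) ** year
                     for year in range(YEARS))
    return cost_saved - EXTRA_COST, kwh_per_year * CO2_PER_KWH

for name, load, hours in (("Gamer", 393, 2),
                          ("Pro-gamer", 393, 8),
                          ("Folder", 428, 24)):
    saving, co2 = five_year_saving(load, hours)
    print("%s: %+.0f pounds over five years, %.2fkg CO2 avoided a year"
          % (name, saving, co2))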

The Gamer
The gamer works in an office all day, during which time his or her PC at home is powered off. On average, the gamer manages to get in around two hours of gaming every day – some days there’s no gaming at all, but on a weekend it might be an eight-hour marathon. At all other times, the computer is switched off or in an extremely low power mode.

Result of Simulation:
Over a five-year period, paying the extra £50 for the 900W PSU will have cost the user £36. In other words, this use-case makes no financial sense. Additionally, however, the move will have reduced the environmental impact of the PC by preventing the emission of 8.49kg of carbon dioxide into the atmosphere.

The Pro-Gamer
The pro-gamer works at gaming all day. Eight hours a day, seven days a week he or she is hammering the system, honing skills and pwning the opposition. Outside the ‘office hours,’ the PC is switched off or in an extremely low power mode.

Result of Simulation:
Over a five-year period, paying the extra £50 for the 900W PSU will have saved the user £8. Not much, but it is a saving. Additionally, the move will have reduced the environmental impact of the PC by preventing the emission of 33.96kg of carbon dioxide into the atmosphere.

The Folder
This user has their system forming part of a distributed computing cluster. Perhaps they’re running Folding@Home or BOINC for scientific research, or renting their system out as a renderfarm, cracking passwords and generating rainbow tables, or perhaps they’re trying to mint the latest cryptocurrency. Whatever the reason, the system is at full load – CPU and GPU – all day, every day. Hey, on the plus side: at least their room is nice and warm.

Result of Simulation:
Over a five-year period, paying the extra £50 for the 900W PSU will have saved the user £139. Hey, that’s enough to buy a replacement PSU! Additionally, the move will have reduced the environmental impact of the PC by preventing the emission of 110.94kg of carbon dioxide into the atmosphere.

CONCLUSIONS
If you only load your PC for a couple of hours a day, don’t bother speccing it with a PSU capable of delivering double your wattage requirement. You’ll never recoup your investment, and the environmental impact is minimal. If you’re a pro-gamer, it could be worth doing – especially as you’ll be able to claim the cash spent on the PSU as a business expense against tax, something I didn’t take into account in my calculations. If you’re a folder, though, absolutely go for maximum efficiency – it has a real-world environmental benefit and gives you the cash you’d need to replace said PSU once it’s out of warranty. Win-win!

UPDATE, 20161210

When I originally wrote this post, the only way to download collections of files from the Internet Archive in bulk was to perform a manual search, process the resulting CSV, and feed that into wget in a rather inefficient process. Thankfully, that’s no longer the case: there’s now an official Internet Archive software package which includes a command-line tool, ia, for performing a variety of actions – including downloading content in bulk.

So, here’s how you really download bulk content from the Internet Archive. You start by installing the tools:

sudo apt-get install python-pip
pip install internetarchive

You log in to your Internet Archive account via the tool:

ia configure

Then, finally, you trigger the download for your chosen collection name:

ia download --search='collection:personalcomputerworld' --no-directories --glob=\*pdf

Change the file extension at the end if you’d prefer formats other than PDF. If you’re a power-user (oooh) and have GNU Parallel installed, you can even run multiple download streams at once to speed things up:

ia search 'collection:personalcomputerworld' --itemlist | parallel 'ia download {} --no-directories --glob=\*pdf'

The original script and post are included below, for posterity.


I had cause to mount a hard drive – actually, a Compact Flash card, but that’s beside the point – from an Amiga on my PC the other day. In theory, not a problem: Linux includes in-built support for Amiga FastFileSystem (AFFS) devices, so it should just be a case of identifying which of the three partitions I’m after and giving the mount command.

So, let’s fire up fdisk:

$ sudo fdisk -l
 Disk /dev/sdc: 4009 MB, 4009549824 bytes
 124 heads, 62 sectors/track, 1018 cylinders, total 7831152 sectors
 Units = sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disk identifier: 0x00000000
Disk /dev/sdc doesn't contain a valid partition table

Huh. That’s odd – it certainly should. Other partitioning tools say the same thing – and the reason becomes clear: apparently, while Linux technically understands AFFS partitions, it doesn’t actually understand Amiga partition tables. At least, most packages don’t: thankfully, one package does. So, to the wonderful parted:

$ parted /dev/sdc
 WARNING: You are not superuser. Watch out for permissions.
 GNU Parted 2.3
 Using /dev/sdc
 Welcome to GNU Parted! Type 'help' to view a list of commands.
 (parted) u
 Unit? [compact]? b
 (parted) p
 Pralloc = 0, Reserved = 2, blocksize = 1, root block at 62832
 Pralloc = 0, Reserved = 2, blocksize = 1, root block at 2112896
 Pralloc = 0, Reserved = 2, blocksize = 1, root block at 5965912
 Model: Generic STORAGE DEVICE (scsi)
 Disk /dev/sdc: 4009549824B
 Sector size (logical/physical): 512B/512B
 Partition Table: amiga
Number Start End Size File system Name Flags
 1 278528B 64061439B 63782912B affs1 DH0 boot
 2 64061440B 2099544063B 2035482624B affs1 DH1
 3 2099544064B 4009549823B 1910005760B affs1 DH2
(parted) quit

You may notice I told parted to switch unit type from the default – Compact – to Bytes. There’s a reason for that: because the kernel can’t see the partition table, it hasn’t created the usual sdc1, sdc2 and sdc3 devices under /dev – meaning I can’t tell mount where to look for the third partition, which is the one I’m after. A problem – but one that can be resolved by giving mount an offset option, taken from the ‘Start’ column of parted’s output:

$ sudo mkdir /media/AmigaMount
$ sudo mount -t affs -o offset=2099544064 /dev/sdc /media/AmigaMount
 mount: wrong fs type, bad option, bad superblock on /dev/sdc,
 missing codepage or helper program, or other error
 In some cases useful info is found in syslog - try
 dmesg | tail or so

Huh? What does dmesg have to say about this?

$ dmesg | tail -1
 [83740.730900] AFFS: No valid root block on device /dev/sdc

Okay, so that didn’t work. Now it’s time for the brute-force option: if mount’s offset option won’t work, let’s try setting up a loop device at the partition’s offset:

$ sudo losetup -o 2099544064 /dev/loop1 /dev/sdc
$ sudo mount -t affs /dev/loop1 /media/AmigaMount
$ ls /media/AmigaMount
 A500_A600_EtoZGames Disk.info Favorite Games.info
 A500_A600_EtoZGames.info Favorite Games

Bingo! At this point, you can read or write any file you like to the partition. To mount a different partition on the disk, simply give losetup the offset of that partition as reported by parted – remembering to tell parted to switch to bytes as its unit.

Having finished sticking my files on the drive, it’s time to tidy up and unmount:

$ sync
$ sudo umount /media/AmigaMount

Eject the drive, stick it back in the Amiga and lo: my files are there.

An alternative method to the above is to use an Amiga emulator on your PC: UAE, for example, can mount a real Amiga drive. This way’s easier, though – at least, if you don’t have to go through the troubleshooting steps that brought me to my understanding of the flaw.

To recap: insert the Amiga drive, use parted in bytes mode to get the partition offset, set up a loop device at that offset, and mount the loop device. Not as simple as it should be, but hey: it works. This also works, interestingly enough, on disk images: create a backup of your Amiga drive with dd and you can then mount partitions directly from the backup rather than the real drive.

ExifTools

I upgraded my Galaxy SII to a custom version of Android 4.0.4 Ice Cream Sandwich the other night, and it’s been brilliant – except for one little bug. If I go into the Gallery app, images on my microSD are in a semi-random order. If I tell Android to group by time, the problem becomes apparent: large numbers of images, always in contiguous blocks, are thought to have been taken between 1992 and 2018. I didn’t have an Android phone in 1992, and 2018 hasn’t happened yet – clearly, it’s a bug.

A quick search turned up this Google Code issue, which matches what I’m seeing exactly: GPS data is sometimes corrupted as it writes to the image. Worryingly, the bug dates back to at least Android 2.3 Gingerbread – which I can confirm, as the images in question were taken while the Galaxy SII was still running 2.3.4 – but Google promises a fix is incoming.

Here’s an example of the problem. The GPS Date tag from a non-corrupt image reads 2012:10:30 – year, month, day with a colon delimiter. The GPS Date tag from a corrupt image reads 0112:09:26 – a clearly invalid date which is confusing the heck out of the Gallery app. (You can check this out yourself with the command exiftool -gps:GPSDateStamp filename.jpg)

But why have I only just spotted the problem? Simple: the ICS – and Jelly Bean – Gallery app has changed its behaviour from previous versions. Where the old Gingerbread Gallery used to use the file modified date to sort the images, the ICS and newer Gallery uses the GPS Date Stamp tag from the EXIF data – the very tag that is being corrupted by the bug. Why it does that, I have no idea: there are numerous other EXIF tags which have the right date stamp and would be more appropriate for sorting, such as the Date Time (Original) tag. Perhaps that’s Google’s fix: modify the Gallery to sort on a different tag.

A pending fix doesn’t solve the fact that I’ve got a few hundred images with corrupt EXIF information, though. Switching to a third-party image viewer, like QuickPic, orders the images correctly, but that’s a workaround at best. So, I’ve written a quick tool to correct the invalid EXIF data on all my images.

It’s a bash shell script, and simple enough: it calls the ExifTool Perl package to copy the date from the Date Time (Original) tag over the top of the GPS Date Stamp tag. The result: the invalid tag goes away, Android knows what year the image was taken, and it orders the gallery correctly. It’s a hack: it overwrites the GPS Date Stamp of all JPG images in the current directory, regardless of whether the year makes sense or not, but it fixes the problem – albeit slowly.
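
If you’re curious what the fix boils down to, it’s one ExifTool tag-copy per image. Here’s a minimal Python sketch of the same idea – not the actual script, which is bash – assuming exiftool is on your PATH and using its standard '-TAG<SRCTAG' copy syntax; as ever, back up first:

import glob
import subprocess

# For every JPEG in the current directory, ask ExifTool to overwrite the
# (possibly corrupt) GPSDateStamp with the date from DateTimeOriginal.
for path in glob.glob("*.jpg"):
    subprocess.call(["exiftool", "-overwrite_original",
                     "-gps:GPSDateStamp<DateTimeOriginal", path])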

You can find the script on GitHub. Hope it helps!

When you’ve corrected your images, you’ll need to go into Settings and Applications on your Android phone. Choose ‘All,’ find the Gallery app and tap it to get into the menu. Hit ‘Force Stop’ and then ‘Clear Data’ – don’t worry, you won’t lose any images, this just deletes the file database and thumbnail cache. If you don’t, the Gallery will still use the old ordering data and you’ll still have a problem. When that’s done, load the Gallery, let it find your files, and enjoy having all your images in the right order.

Oh, and one other thing: take a backup of your images before running the script. Although it worked fine for me, I can’t guarantee it won’t corrupt images further – and I don’t want to be responsible for you losing your precious photos.

Helping out a friend with a picture or two of the Astec UM1233 RF Modulator’s inner workings. If you’re curious, it’s a device which was incredibly popular in the 80s and 90s in home computers, and takes video and audio as an input and turns them into either VHF or UHF (depending on country) radiofrequency signals – the same signals as analogue TV tuners expect to receive. If you crack open a ZX80, ZX81, Spectrum of almost any model, Commodore 64, ORIC, Newbrain, Dragon – almost any 80s microcomputer aimed at the home market – you’ll probably find one of these.

The Burnduino Kit

The Burnduino Shield is, as the name suggests, an add-on ‘shield’ for the Arduino microcontroller and compatibles. It’s designed to make programming an ATmega328 DIP-package microcontroller easier, and turns the Arduino into a remarkably efficient AVR programmer. Features include:

  • 28-pin universal ZIF (zero insertion force) socket, meaning no bent pins and fast chip change
  • Supports uploading of 16MHz external crystal and 8MHz internal oscillator Arduino bootloaders onto bare ATmega microcontrollers
  • Fully supported within the Arduino IDE software
  • Allows uploading of bootloaders and sketches from a single shield
  • Mode select jumpers mean no rewiring – ever
  • Saves money: buy unprogrammed ATmega328 chips and burn the bootloader yourself

The Burnduino is provided as a kit, which contains the printed circuit board, a high-quality 28-pin locking ZIF socket made by 3M, a 16MHz crystal with two 22pF decoupling capacitors, a 10KOhm pull-up resistor for the reset line, male 2.54mm headers and three 2.54mm jumpers for the mode select pins. All components are through-hole and easy to solder, even for a novice.

The Burnduino’s mode select jumpers allow a single shield to perform two tasks: when in ‘BTLDR’ mode, with the RX and TX jumpers open, the Burnduino works with an Arduino and the Arduino-as-ISP sketch to burn new bootloaders onto blank or pre-used ATmega microcontrollers. Close the RX and TX jumpers and move the mode select jumper to ‘SKETCH’ and the Burnduino allows the Arduino IDE to upload sketches to bootloader-equipped ATmegas.

Bootloader Mode

  1. Upload the ‘ArduinoISP’ sketch (File – Examples) from the Arduino IDE to your Arduino or compatible board.
  2. Connect the Burnduino shield.
  3. Set the lower-left Mode Select jumper to BTLDR, shorting the middle and top pins.
  4. Remove the jumpers from the RX and TX pins.
  5. Insert your ATmega microcontroller into the ZIF socket.
  6. Select your chosen board type from the Tools – Board menu in the Arduino IDE.
  7. Upload the bootloader using Tools – Burn Bootloader – w/ Arduino as ISP.
  8. To program another ATmega, simply replace the chip in the socket and burn the bootloader again.

Sketch Mode

  1. Remove the ATmega chip from your Arduino board and put it somewhere safe.
  2. Connect the Burnduino shield.
  3. Set the lower-left Mode Select jumper to SKETCH, shorting the middle and bottom pins.
  4. Short the RX and TX pins with the jumpers provided.
  5. Select the bootloader type from the Tools – Board menu in the Arduino IDE.
  6. Insert your bootloader-equipped ATmega chip into the ZIF socket.
  7. Load your sketch into the Arduino IDE and hit the upload button.
  8. To upload the sketch to a new chip, simply replace the chip in the ZIF socket and hit the upload button again.

If you want to take advantage of the ATmega’s internal oscillator, which means your finished design can ditch the 16MHz crystal and decoupling capacitors to save money and space, you’ll need to follow the ‘Minimal Circuit’ instructions from the Arduino to Breadboard article to add a new board definition to the Arduino IDE. If you need to upload bootloaders to ATmega328-PU chips – rather than the more common ATmega328P-PU – follow these instructions.

NOTE: The Burnduino uses the ArduinoISP sketch to upload a bootloader to the chip in the ZIF socket. This sketch, at present, does not support the new Arduino Uno, which uses different USB circuitry. If you want to use the Burnduino as an ISP, you’ll need to have an Arduino Duemilanove or third-party compatible with FTDI circuitry instead.