
23:49 Wed 18 Jul 2012

The DS3231 chip is an accurate real-time clock with battery backup so that it maintains the time while the computer is off. Some very cheap boards are currently available that add one of these to a Raspberry Pi with minimal footprint. However, recent changes to the Linux kernel have changed how this is enabled. Fortunately the most recent distributions now include support for the Device Tree based peripheral support system.

I2C RTC support is already included in the current Raspberry Pi compatible distributions but must be enabled by editing config.txt, which is used in the early boot stages to pass configuration parameters to the kernel. On Lubuntu or Raspbian this file is in /boot, but on OpenElec it is found in /flash, which is a separate filesystem that needs remounting with read-write permissions. Enabling the RTC just needs one line added to this file: dtoverlay=i2c-rtc,ds3231. Once this is done the next boot will detect the device and automatically make use of it.
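Assuming the Raspbian path, the edit can be scripted; the CONFIG default here is an assumption (OpenElec would use /flash/config.txt after remounting):

```shell
# Append the overlay line only if it is not already present
CONFIG=${CONFIG:-/boot/config.txt}
grep -q '^dtoverlay=i2c-rtc' "$CONFIG" 2>/dev/null || \
    echo 'dtoverlay=i2c-rtc,ds3231' >> "$CONFIG"
```

Running it a second time is harmless as the grep guard keeps the line unique.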

If you want to add the device manually without a reboot, and then set the time, install i2c-tools and check that the chip is detected:

pat@monitor1:~$ sudo i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- 68 -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --

To then add this as an RTC device the following command enables it in the running system and causes a new /dev/rtc0 device to be created:

sudo su -c 'echo ds3231 0x68 > /sys/class/i2c-adapter/i2c-1/new_device'

Testing this will show the time is set to some meaningless default:

pat@monitor1:~$ sudo hwclock -r
Sat 05 Feb 2000 07:45:53 GMT  -0.696570 seconds

The system time is currently correct as this system is already running NTP and connected to the network. The hardware clock can therefore be updated from the system clock.

pat@spd-status:~$ sudo hwclock --systohc
pat@spd-status:~$ sudo hwclock
Thu 13 Aug 2015 12:42:30 BST  -0.573334 seconds

Now it should all work properly with the kernel keeping the hardware clock current and reading it on bootup to fix the time in the absence of networking and NTP.

One of my EE co-workers got hold of an Arduino Zero recently but failed to get it working. I took a look at this and realized that he had walked into the battle of the Arduinos and was trying to use the wrong software. Currently Arduino.cc have not released the Zero, but Arduino.org have released an Arduino Zero Pro, and this is what we have here. To support it they have issued a version of the Arduino IDE labelled 1.7.2, which is really a 1.5-ish version with Zero support bolted in. Quite poorly as well, from a first look.

So having got the device and an IDE capable of programming it, we can try the basic serial output test shown below. However, it turns out that the included examples will not work because the default Serial object maps to the wrong port; Serial5 needs to be used instead.

#include <Arduino.h>

// for the Arduino Zero
#ifdef __SAMD21G18A__
#define Serial Serial5
#endif

static int n = 0x10;

void setup()
{
    Serial.begin(115200);
    Serial.println("Serial test - v1.0.0");
}

void loop()
{
    Serial.print(n++, HEX);
    if (n % 0x10 == 0)
        Serial.print(" ");
    if (n > 0xff)
        n = 0x10;
}

Moving on I thought it would be interesting to try and use this with a WizNet based Ethernet shield I have. So with the board set as Arduino Zero let us build one of the Ethernet demos.

C:\opt\arduino.org-1.7.2\libraries\Ethernet\src\Ethernet.cpp: In member function 'int EthernetClass::begin(uint8_t*)':
C:\opt\arduino.org-1.7.2\libraries\Ethernet\src\Ethernet.cpp:19:7: error: 'class SPIClass' has no member named 'beginTransaction'

I sincerely hope arduino.cc do a better job than this when they release their version of this interesting board.

Debugging on the Zero

The ability to use gdb to debug the firmware is what makes this board so attractive. The 1.7.2 IDE package includes a copy of OpenOCD with configuration files for communicating with the Zero. While this is not integrated into the current IDE at all, it is still possible to start stepping through the firmware. You do need to know something about the way the Arduino IDE handles building files to make use of this, however.

OpenOCD is run as a server that mediates communications with the hardware. The following script makes it simple to launch OpenOCD in a separate window on Windows (adjust the Arduino directory path as appropriate). This gives a console showing any OpenOCD output and two TCP/IP ports will be opened. One on 4444 is for communicating with OpenOCD itself. The other on 3333 is for debugging.

@set ARDUINO_DIR=C:\opt\arduino.org-1.7.2
@set OPENOCD_DIR=%ARDUINO_DIR%\hardware\tools\OpenOCD-0.9.0-dev-arduino
@start "OpenOCD" %OPENOCD_DIR%\bin\openocd ^
  --file %ARDUINO_DIR%\hardware\arduino\samd\variants\arduino_zero\openocd_scripts\arduino_zero.cfg ^
  --search %OPENOCD_DIR%\share\openocd\scripts %*

To connect for debugging it is necessary to run a suitable version of gdb and point it at the remote target provided by OpenOCD on localhost:3333. Provided the sources (the Arduino .ino file and any .cpp or .h files included with the project) and the .elf binary are specified, gdb can show symbolic debugging information. This is where the Arduino IDE needs to provide some assistance. The .ino source file is processed to produce some C++ files and then built with g++, but this all happens in a temporary directory with a relatively random name. The most recently created directory under your temporary directory (%TEMP%) will contain the build files. From this folder, run arm-none-eabi-gdb.exe and issue the following gdb commands to enable symbols and start controlling the firmware. After that it is normal gdb debugging.
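The build*.tmp naming pattern appears in the worked example below, so from a Unix-style shell (Git Bash or similar on Windows) the newest build directory can be located with something like this sketch:

```shell
# Find the most recently created Arduino build directory under the temp dir
# ($TEMP and the build*.tmp pattern are taken from the IDE behaviour described)
newest=$(ls -dt "${TEMP:-/tmp}"/build*.tmp 2>/dev/null | head -n 1)
echo "$newest"
```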

Note that gdb is found at <ARDUINODIR>\hardware\tools\gcc-arm-none-eabi-4.8.3-2014q1\bin\arm-none-eabi-gdb.exe

# gdb commands...
file PROJECTNAME.cpp.elf
target remote localhost:3333
monitor reset halt
break setup

Worked example

Launching OpenOCD using the script described above.

Open On-Chip Debugger 0.9.0-dev-g1deebff (2015-02-19-15:29)
Licensed under GNU GPL v2
For bug reports, read
Info : only one transport option; autoselect 'cmsis-dap'
adapter speed: 500 kHz
adapter_nsrst_delay: 100
cortex_m reset_config sysresetreq
Info : CMSIS-DAP: SWD  Supported
Info : CMSIS-DAP: JTAG Supported
Info : CMSIS-DAP: Interface Initialised (SWD)
Info : CMSIS-DAP: FW Version = 01.1F.0118
Info : SWCLK/TCK = 1 SWDIO/TMS = 1 TDI = 1 TDO = 1 nTRST = 0 nRESET = 1
Info : DAP_SWJ Sequence (reset: 50+ '1' followed by 0)
Info : CMSIS-DAP: Interface ready
Info : clock speed 500 kHz
Info : IDCODE 0x0bc11477
Info : at91samd21g18.cpu: hardware has 4 breakpoints, 2 watchpoints

Launching gdb

GNU gdb (GNU Tools for ARM Embedded Processors)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "--host=i686-w64-mingw32 --target=arm-none-eabi".
For bug reporting instructions, please see:
(gdb) file SerialTest1.cpp.elf
Reading symbols from C:\Users\pt111992\AppData\Local\Temp\build1861788453707041664.tmp\SerialTest1.cpp.elf...done.
(gdb) directory c:\\src\\Arduino\\SerialTest1
Source directories searched: c:\src\Arduino\SerialTest1;$cdir;$cwd
(gdb) target remote localhost:3333
Remote debugging using localhost:3333
0x000028b8 in ?? ()
(gdb) monitor reset halt
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x21000000 pc: 0x000028b8 msp: 0x20002c08
(gdb) break setup
Breakpoint 1 at 0x415e: file SerialTest1.ino, line 12.
(gdb) continue
Note: automatically using hardware breakpoints for read-only addresses.

Breakpoint 1, setup () at SerialTest1.ino:12
12          Serial.begin(115200);
(gdb) next
setup () at SerialTest1.ino:13
13          Serial.println("Serial test - v1.0.0");

And now we are stepping through the code actually running on the Arduino Zero. Awesome!

I wanted to set up a monitor showing the status of the jobs in our Jenkins cluster to emphasise the importance of fixing broken builds within the team. This requires a screen that permanently displays a web page configured using the Build Monitor Plugin view of a selection of the Jenkins jobs. As we have an old XP machine that has been retired, it seemed like an ideal task for a lightweight Linux that boots straight into an X session running a single web-browser task.

I selected Lubuntu as a suitable distribution that does not require too many resources (we only have 256MB RAM on this PC). Lubuntu uses the LXDE desktop environment to keep resource usage low but uses the same base and package system as standard Ubuntu, which we are already familiar with.

On startup the system defaults to launching lightdm as the X display manager. To configure a crafted session we create a new desktop file (kiosk.desktop) and add a new user account for the kiosk login. We do not need to know the password for this account as lightdm can be configured to autologin as this user by adding the following lines to the /etc/lightdm/lightdm.conf file:

# lines to include in /etc/lightdm/lightdm.conf
[SeatDefaults]
autologin-user=kiosk
autologin-session=kiosk

# kiosk.desktop
[Desktop Entry]
Name=Kiosk Mode
Comment=Chromium Kiosk Mode
Exec=kiosk.sh
Type=Application

# kiosk.sh - session creation
/usr/bin/xset s off
/usr/bin/xset -dpms
/usr/bin/ratpoison &
/usr/bin/chromium-browser http://jenkins/view/Status/ --kiosk --incognito --start-maximized
/usr/bin/xset s on
/usr/bin/xset +dpms

These lines cause the system to automatically log in the kiosk user account and start the kiosk session, which runs the program defined in our kiosk.desktop file. All the session then needs to do is disable the screen-saver and monitor DPMS function and launch Chromium to display the correct page. On the first attempt we found that the browser did not fill the screen; while this can be configured, it was simpler to add the ratpoison tiling window manager to deal with it.

To avoid wasting power when no-one is around to view the display, a cron job was added to turn the monitor off at night and back on for the working day. The script below either turns the monitor on and then disables DPMS, or turns it off and re-enables DPMS.

#!/bin/sh
# Turn monitor permanently on or off.
DISPLAY=:0.0; export DISPLAY
# COOKIE should point at the kiosk user's X authority file (path is an assumption)
COOKIE=$HOME/.Xauthority

log() {
    logger -p local0.notice -t DPMS "$*"
}

dpms_force() {
    xset dpms force "$1" && log "$1" || log "FAILED to set $1"
}

if [ $# != 1 ]; then
    echo >&2 "usage: dpms on|off|standby|suspend"
    exit 1
fi

[ -r "$COOKIE" ] && xauth merge "$COOKIE"

case "$1" in
    on)
        # Turn on and disable the DPMS to keep it on.
        dpms_force on
        xset -dpms
        ;;
    off|standby|suspend)
        # Allow to turn off and leave DPMS enabled.
        dpms_force "$1"
        ;;
    *)
        echo >&2 "invalid dpms mode \"$1\": must be on, off, standby or suspend"
        exit 1
        ;;
esac
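The cron entries driving this might look like the following sketch (the schedule and the script's install path are assumptions):

```
# crontab for the kiosk user: on at 07:30, off at 19:00, weekdays only
30 7  * * 1-5  /usr/local/bin/dpms on
0  19 * * 1-5  /usr/local/bin/dpms off
```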

We have been using GitLab to provide access control and repository management for Git at work. Originally I looked for something to manage user access and found that gitolite suited our needs best, with one exception: gitolite access is managed by committing to and pushing the gitolite-admin repository, which is not especially user friendly. It is easy to automate, however, and in searching for a web front end to gitolite I found GitLab.

GitLab is a Rails web application that can handle user account management and repository access for git using gitolite underneath. It has a number of plugins allowing a selection of authentication mechanisms, and in a Windows culture we can set it up for LDAP authentication against the local Active Directory servers so that our users can use their normal Windows login details and do not have to remember yet another password. They do have to create an SSH key and upload it to GitLab, but GitExtensions provides a reasonable user interface for dealing with this. In practice we find Windows users can create an account and create repositories with minimal assistance.

However, I find that, at least on the server we are using, repository browsing from GitLab is quite slow, and all repository access requires an account. I have some automated processes that don't really need an SSH key as they are read-only; these use the git protocol to update their clones. I also like using gitweb for repository browsing.

gitolite has a solution to this that involves using two pseudo-user accounts, but GitLab erases these whenever it updates the gitolite configuration file. Further, if we simply create the git-daemon-export-ok file that git-daemon uses to test for exported repositories, either GitLab or GitExtensions will delete it on the next configuration update. So in searching for a way around this I found the git-daemon access-hook parameter. This allows git-daemon to call an external program to check for access. If we combine this with the gitweb projects file, which can be used to give gitweb a specific set of visible repositories, then we don't have to hack GitLab itself.

This is achieved by creating a text file that just lists the repository names we wish to be publicly readable via git-daemon and gitweb. Then edit /etc/gitweb.conf and set the $projects_list variable to point to this file.

$projects_list = "/home/git/gitweb.projects";
$export_auth_hook = undef; # give up on the git-daemon-export-ok files.
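The gitweb.projects file itself is just one repository name per line, relative to the repository base path; for example (names invented):

```
tools.git
website.git
```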

Next, set up our git-daemon xinetd configuration to call our access hook script.

# /etc/xinetd.d/git-daemon
service git
{
        disable         = no
        type            = UNLISTED
        port            = 9418
        socket_type     = stream
        wait            = no
        user            = git
        server          = /usr/local/bin/git
        server_args     = daemon --inetd --base-path=/home/git/repositories --export-all --access-hook=/home/git/git-daemon-access.sh /home/git/repositories
        log_on_failure  += USERID
}

After a bit of experimentation I found that you need --export-all as well as the access hook.

Finally, the access hook script.

#!/bin/sh
# Called from git-daemon to check for access to a given repository.
# We check that the repo is listed in the gitweb.projects file as these
# are the repos we consider publicly readable.
# Args are: service-name repo-path hostname canonical-hostname ipaddr port
# To deny, exit with non-zero status

logger -p daemon.debug -t git-daemon "access: $*"
[ -r /home/git/gitweb.projects ] \
&& /bin/grep -q -E "^${2#/home/git/repositories/}(\\.git)?\$" /home/git/gitweb.projects

Here we log access attempts and fail if the gitweb.projects file is missing. Then we trim off the base-path prefix and check that the repository name is listed in the file.
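The trim relies on shell parameter expansion; a minimal illustration with an example path standing in for the hook's $2 argument:

```shell
# The same prefix-strip the hook applies to the repo-path argument
repo="/home/git/repositories/project.git"   # example value for $2
echo "${repo#/home/git/repositories/}"
```

This prints project.git, which is then matched against the project list.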

Now I can have GitLab and git-daemon and gitweb available.

In the future it might be nice to use this mechanism to have GitLab provide some control of this from its user interface. That way I would avoid having to occasionally update this project file. However, that's something for another day - especially as the most recent release of GitLab has dropped the gitolite back-end in favour of an internal package (git-shell). I'm waiting to see how that turns out.

Recently I have been persuading my co-workers that we should be using Git instead of CVS. To help this along I've spent some time trying to ensure that any switch over will be as painless as I can. At the moment they are using TortoiseCVS on the workstations with the CVS server being CVSNT on a Windows machine. This is a convenient setup on a Windows domain as we can use the NT domain authentication details to handle access control without anyone needing to remember specific passwords for CVS. The repositories are just accessed using the :sspi:MACHINE:/cvsroot path and it all works fine.

One shortcoming of CVS, even back when we originally switched to it, is that it is hard to tell which set of files were committed together. CVSNT adds a changeset id to help with this and can also be configured to record an audit log to a MySQL database. So a few years ago we made an in-house web app that shows a changeset view of the CVS repository and helps us review new commits and merge changesets from one branch to another using the CVSNT extended syntax (cvs update -j "@<COMMITID" -j "@COMMITID"). So if we are to switch, we must ensure the same facilities remain available. We also need the history converted, but that's the easy part using cvs2git. I've already been running a git mirror of the CVS repository for over a year by converting twice a day and then pushing the new conversion onto the mirror. It is done this way because cvsimport got some things wrong and cvs2git doesn't do incremental imports. A full conversion takes about 40 minutes of processing, and pushing the new repository onto the old one helps show up any problems quickly (like the time one dev worked out how to edit the CVS commit comments on old commits).

So we need a nice changeset viewing tool, access control - preferably seamless with Windows domains, and simple creation of new repositories. The first is the simplest. gitweb is included with git and provides a nice view of any repositories you feed it. Initially I found it a bit slow on my Linux server with Apache, but switching from CGI to Fast-CGI has sorted this out. In case this helps I had to install libapache2-mod-fastcgi, libcgi-fast-perl and libfcgi-perl. Then added the following to /etc/apache2/conf.d/gitweb. Supposedly this can run under mod_perl but I failed to make that work. The fast-cgi setup is performing well though.

# gitweb.fcgi is just a link to gitweb.cgi
ScriptAlias /gitweb /usr/local/share/gitweb/gitweb.fcgi
<Location /usr/local/share/gitweb/gitweb.fcgi>
  SetHandler fastcgi-script
  Options +ExecCGI
</Location>

Next we need access control. The best-of-breed for this appears to be gitolite and it does a fine job. It uses ssh keys to authenticate developers for repository access, meaning only a single unix user account is required. It also permits access control down to individual branches, which may be quite useful. It is configured by pushing committed changes to an administrative git repository. I can see this not being taken so favourably by my fellow developers, although it is very powerful. So I thought I might need some kind of web UI for gitolite and discovered GitLab. This fills the gap very nicely by sitting on top of gitolite and giving a simple way to create new repositories and control the access policy. If we need finer control than GitLab provides, we can still use the gitolite features directly.

Setting up GitLab on an Ubuntu 10.04 LTS server was a minor pain. GitLab is a Ruby-on-Rails application, and these kinds of things appear to enjoy using the cutting-edge releases of everything. The Ubuntu server apt repositories are not keeping up, so for Ruby it is best to compile everything locally and give up on apt-get. Following the instructions it was relatively simple to get GitLab operating on our server. It really does need the umask changing as mentioned, though. I moved some repositories into it by creating them in GitLab and then using 'push --mirror' to load the git repositories in. The latest version supports LDAP logins, so once configured it is now possible to use the NT domain logins to access the GitLab account. From there, developers can load up an ssh key generated using either git-gui or GitExtensions, create new repositories and push.
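The 'push --mirror' step can be rehearsed entirely locally; in this sketch a bare scratch repository stands in for the one GitLab creates (all paths are temporary directories):

```shell
# Rehearse loading an existing repository into a freshly created bare repo
set -e
dst=$(mktemp -d)/mirror.git
git init -q --bare "$dst"                       # stand-in for the GitLab repo
work=$(mktemp -d)
git -C "$work" init -q
git -C "$work" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git -C "$work" push -q --mirror "$dst"          # push all refs verbatim
git -C "$dst" rev-list --all --count            # the history arrived intact
```

Against a real GitLab project the destination would be the repository's ssh URL instead of a local path.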

With gitlab operating fine as a standalone rails application it needed to be integrated with the Apache server. It seems Rails people like Nginx and other servers - however, Apache can host Rails applications fine using Passenger. This was very simple to install and getting the app hosted was no trouble. There is a problem if you host Gitlab under a sub-uri on your server. In this case LDAP logins fail to return the authenticated user to the correct location. So possibly it will be best to host the application under a sub-domain but at the moment I'm sticking with a sub-uri and trying to isolate the fault. My /etc/apache2/conf.d/gitlab file:

# Link up gitlab using Passenger
Alias /gitlab/ /home/gitlab/gitlabhq/
Alias /gitlab /home/gitlab/gitlabhq/
RackBaseURI /gitlab
<Directory /home/gitlab/gitlabhq/>
  Allow from all
  Options -MultiViews
</Directory>

Now there are no excuses left. Let's hope this keeps them from turning to TFS!

A recently merged commit to the CyanogenMod tree caught my eye - Change I5be9bd4b: bluetooth networking (PAN). Bluetooth Personal Area Networking permits tethering over bluetooth, which will allow me to tether a Wifi Xoom to my phone when I'm someplace without wifi. Unfortunately, while this is in the CM tree, we need some additional kernel support for the Nexus S: the prebuilt kernel provided with CM 7.1.0 RC1 doesn't include the BNEP support needed to complete this feature. So let's build a kernel module.

The Nexus S kernel is maintained at android.git.kernel.org, but I notice there is also a CyanogenMod fork on github. To begin we can clone these and see what differences exist between the stock kernel and the CM version. To set up the repository:

mkdir kernel && cd kernel
git clone git://android.git.kernel.org/kernel/samsung.git samsung
cd samsung
git remote add cm git://github.com/CyanogenMod/samsung-kernel-crespo.git
git remote update
git log --oneline origin/android-samsung-2.6.35..cm/android-samsung-2.6.35 
f288739 update herring_defconfig

The only change between the samsung repository and the CM repository is a single commit altering the configuration. This has evidently been done using the kernel configuration utility, so it's a bit hard to work out the changes by comparing the config files directly. However, if I take each config in turn and regenerate a new one via the kernel configuration utility, I can then extract just the changes.

git cat-file blob cm/android-samsung-2.6.35^:arch/arm/configs/herring_defconfig > x_prev.config
git cat-file blob cm/android-samsung-2.6.35:arch/arm/configs/herring_defconfig > x_cm.config

Then run make gconfig, load each config in turn and save it to a new file (z_prev.config and z_cm.config). Comparing the regenerated files:

diff -u z_prev.config z_cm.config

and we can see that the new settings are just:

In current versions of Linux the modules retain version information that includes the git commit id of the kernel source used to build them. This is also present in the kernel and the kernel will reject a module with the wrong id. So to make a module that will load into the currently running kernel I need to checkout that version - simple enough as it is the current head-but-one of the cm-samsung-kernel repository (e382d80). Adding CONFIG_BNEP=m to the kernel config file enables building BNEP support as a module and taking the HEAD herring_defconfig and building HEAD^ results in a compatible bnep.ko module.

To test this I copied the module onto the device and restarted the PAN service.

% adb push bnep.ko /data/local/tmp/bnep.ko
% adb shell
# insmod /data/local/tmp/bnep.ko
# ndc pan stop
# ndc pan start
# exit

With this done we can try it out. The Blueman app on ubuntu lets me rescan the services on a device and, following the above changes, the context menu for my device now shows Network Access Point. Selecting this results in a bluetooth tethering icon on the Nexus S and we are away. Further checking on the Xoom with Wifi disabled proves that it all works: routing is properly configured and the Xoom can access the internet via the phone.

To make this permanent, I could just remount the phone system partition read-write and edit /system/etc/init.d/04modules to add my new module on restart. That works ok. However, I may as well add the configuration changes from above to the current samsung stock kernel and change the kernel in use when I re-build a CyanogenMod image. So that is what I am running now.

Moving Python packages out of the Windows Roaming profile

On my work machine it has recently been taking quite a while to start-up when it is first turned on. This tends to mean there is too much data in the corporate roaming profile so I started to look into this. I recently installed ActivePython and added a few packages using the python package manager (pypm). This has dumped 150MB into my Roaming profile. This is the wrong place for this stuff on Windows 7. It should be in LOCALAPPDATA which would restrict it to just this machine (where I have Python installed) and not get copied around when logging in and out.

A search turns up PEP 370 as responsible for this design, and that document suggests how to move the location. In the specific case of ActivePython the packages are stored in %APPDATA%\Python, so we need to set the PYTHONUSERBASE environment variable to %LOCALAPPDATA%\Python. Editing the environment using the Windows environment dialog and using %LOCALAPPDATA% doesn't work correctly as the variable does not get expanded. However, we can set it to %USERPROFILE%\AppData\Local\Python, which is expanded properly and produces the right result.
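The PEP 370 behaviour is easy to check from any shell; here python3 and the path are just stand-ins for whatever interpreter and directory you are actually using:

```shell
# PYTHONUSERBASE overrides the per-user base directory (PEP 370)
PYTHONUSERBASE=/tmp/pyuser python3 -c "import site; print(site.USER_BASE)"
```

Whatever the variable is set to is reported back as the user base, which is where per-user packages will be installed.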

Once the environment is setup we can move the package directory from %APPDATA%\Python to %LOCALAPPDATA%\Python and check everything is still available:

C:\Users\pat> pypm info
PyPM 1.3.1 (ActivePython
Installation target: "~\AppData\Local\Python" (2.7)
(type "pypm info --full" for detailed information)

C:\Users\pat>pypm list
numpy        1.6.0    NumPy: array processing for numbers, strings, records, a
pil          1.1.7~2  Python Imaging Library
xmpppy       0.5.0rc1 XMPP-IM-compliant library for jabber instant messaging.

I have been running various versions of CyanogenMod on my HTC Magic for a long time now - ever since I decided that Vodafone were never going to bother releasing any updates after the Donut (Android 1.6) one. However, I recently noticed that in December 2010 they finally released an update upgrading these phones to Android 2.2.1 (FroYo). The files for this can be downloaded from Google's servers just like the last OTA update. This time there are two files: one to update the bootloader and one for the operating system update.

Before going any further I'll just note the settings the fastboot screen reported when I started and the final settings after completing the update:

        Before                       After
HBOOT   HBOOT-1.33.0004 (SAPP10000)  HBOOT-1.33.0013 (SAPP10000)
Date    Apr 9 2009, 23:30:40         Oct 21 2009, 22:33:27

Official OTA updates are written to expect a known configuration as a starting point. So if we want to use this, it is time to revert to the original image shipped with this phone. Fortunately I have my nandroid backup from the 1.5 days, so using the Amon RA recovery image I have installed as part of the CyanogenMod ROM I can revert and then apply first the update to Donut and then the new one to FroYo. Part of the reason for doing this is that the latest update includes a new radio image which apparently leaves a bit more RAM available to the running system. As memory contention is the most significant problem on the HTC Magic, this has to be good. We shall see.

I downloaded 3 update files. The update from Cupcake to Donut, the bootloader update and the update from Donut to FroYo. Each file in turn needs to be copied to the sdcard and called update.zip. (adb push filename /sdcard/update.zip). Then reboot the phone holding down the Home key to restart in recovery mode. Once the recovery image is shown, pressing Home and Power shows a menu and you can select apply update.zip to flash the image. The hboot image looks quite scary as it works - it reboots the phone 3 times but eventually boots the operating system once again. Just wait for it patiently.

So now I have the official Vodafone release of Android 2.2.1. Let's see about making a backup. Boot the phone to fastboot mode and try running the Amon RA recovery image: fastboot boot recovery-RA-sapphire-1.7.0G.img. Access denied. I had a suspicion this might happen given the bootloader update. This is a shame but not really a problem. We could use various rooting methods, but I simply copied psneuter to /data/local/tmp and ran it from an adb shell. With that done I have a root shell and can remount the system partition read-write and copy su, busybox and the Superuser.apk file from my CyanogenMod build tree. This fixes root access.

Completing the job requires changing the recovery image so that I can boot that and make complete nandroid backups. The Vodafone release includes a script that checks for a valid image in the recovery partition and replaces it if it no longer matches the known version. This script is in /etc/install-recovery.sh and this is called from the Android init process. It updates the recovery partition by patching a copy of the boot partition using the binary patch file in /system/recovery-from-boot.p. So to make our recovery stick around we need to rename the install-recovery.sh script and the /system/recovery-from-boot.p files and place our own copy of Amon-RA 1.7.0G at /system/recovery.img. Then we can flash this using flash_image recovery /system/recovery.img.

The reason Vodafone use a patch file is to keep the size of the recovery copy small. Possibly we could do the same thing; however, when I used the imgdiff program from the Android build to generate a patch from the current boot image to the new recovery image, it just produced a patch containing the whole image. Clearly there is insufficient commonality between the two to make this worthwhile.

Now at last I can make a nandroid backup of the updated system by booting into the new recovery screen, and I have root access for anything that might require it. After all this, was it worth it? Actually yes. The phone seems to run more smoothly with this than with CyanogenMod 6, and watching logcat there are fewer messages about processes being killed due to low memory. So far it does appear to have been worth doing. But I'm noticing all sorts of little CyanogenMod tweaks that are no longer with me, so how long it will stay this way I'm not sure.

I recently decided that it might be nice to provide some Windows shell customizations for handling a new file format that I am working on: making some fields available to Windows Search, customizing the default view when a user examines a directory of our files, and customizing the icon shown so that it reflects the file content. All of these are possible using shell plugins, and there is even a nice wizard for ATL in Visual Studio 2010 that will get things started.

So I began with the default ATL filehandler extension that is provided by the ATL wizard and started to add some code to the preview window and implemented the thumbnail handler. For this we can see if the file has some data we could draw and then generate an image. In our case, sometimes the file contains an image - if so, we can draw this in the preview and also use it as the thumbnail.

Now the preview was working fine, but the images failed to paint properly in the thumbnail. I reduced the drawing code there to just draw some lines and it started to work, in that the lines were present, but the color was always black.

So here is the code used by ATL to prepare the drawing context. It creates a memory display context and selects a bitmap into that and when we later draw on this memory DC the result ends up in this bitmap.

BOOL GetThumbnail(
		_In_ UINT cx,
		_Out_ HBITMAP* phbmp,
		_In_opt_ WTS_ALPHATYPE* /* pdwAlpha */)
{
		HDC hdc = ::GetDC(NULL);
		RECT rcBounds;

		SetRect(&rcBounds, 0, 0, cx, cx);

		HDC hDrawDC = CreateCompatibleDC(hdc);
		if (hDrawDC == NULL)
		{
			ReleaseDC(NULL, hdc);
			return FALSE;
		}

		HBITMAP hBmp = CreateCompatibleBitmap(hDrawDC, cx, cx);
		if (hBmp == NULL)
		{
			ReleaseDC(NULL, hdc);
			return FALSE;
		}

		HBITMAP hOldBitmap = (HBITMAP) SelectObject(hDrawDC, hBmp);

		// Here you need to draw the document's data
		OnDrawThumbnail(hDrawDC, &rcBounds);

		SelectObject(hDrawDC, hOldBitmap);

		ReleaseDC(NULL, hdc);

		*phbmp = hBmp;
		return TRUE;
}

There are two problems here. When a memory DC is created it has by default a monochrome bitmap. Here is what MSDN has to say:

A memory DC exists only in memory. When the memory DC is created, its display surface is exactly one monochrome pixel wide and one monochrome pixel high. Before an application can use a memory DC for drawing operations, it must select a bitmap of the correct width and height into the DC.

So when we create a compatible bitmap from this, we get a monochrome bitmap. The first fix is therefore to use the window display context instead, so that we get a color bitmap: CreateCompatibleBitmap(hdc, cx, cx);.

The second problem shows up when reading the documentation for IThumbnailProvider::GetThumbnail.

[out] When this method returns, contains a pointer to the thumbnail image handle. The image must be a device-independent bitmap (DIB) section and 32 bits per pixel.

Oops. CreateCompatibleBitmap creates device dependent bitmaps. We need to be using CreateDIBSection to get a device independent bitmap. If we create a DIB and select that into the memory display context then all should be well. So to fix this the default GetThumbnail() function must be overridden to prepare a proper surface for drawing.

BOOL CDemoDocument::
GetThumbnail(_In_ UINT cx, _Out_ HBITMAP* phbmp, _In_opt_ WTS_ALPHATYPE* /* pdwAlpha */)
{
    BOOL br = FALSE;
    HDC hdc = ::GetDC(NULL);
    HDC hDrawDC = CreateCompatibleDC(hdc);
    if (hDrawDC != NULL)
    {
        void *bits = 0;
        RECT rcBounds;
        SetRect(&rcBounds, 0, 0, cx, cx);

        BITMAPINFO bi = {0};
        bi.bmiHeader.biWidth = cx;
        bi.bmiHeader.biHeight = cx;
        bi.bmiHeader.biPlanes = 1;
        bi.bmiHeader.biBitCount = 32;
        bi.bmiHeader.biSizeImage = 0;
        bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
        bi.bmiHeader.biClrUsed = 0;
        bi.bmiHeader.biClrImportant = 0;

        HBITMAP hBmp = CreateDIBSection(hdc, &bi, DIB_RGB_COLORS, &bits, NULL, 0);
        if (hBmp != NULL)
        {
            HBITMAP hOldBitmap = (HBITMAP)SelectObject(hDrawDC, hBmp);
            OnDrawThumbnail(hDrawDC, &rcBounds);
            SelectObject(hDrawDC, hOldBitmap);
            *phbmp = hBmp;
            br = TRUE;
        }
        DeleteDC(hDrawDC);  // done drawing; the DIB section lives on in *phbmp
    }
    ReleaseDC(NULL, hdc);
    return br;
}
Now it works!