...a clever blog name here...

23:49 Wed 18 Jul 2012

I wanted to set up a monitor showing the status of the jobs in our Jenkins cluster, to emphasise within the team the importance of fixing broken builds. This requires a screen that permanently displays a web page showing a Build Monitor Plugin view of a selection of the Jenkins jobs. As we have an old XP machine that has been retired, it seemed like an ideal task for a lightweight Linux that boots straight into an X session running a single web-browser task.

I selected Lubuntu as a suitable distribution that does not require too many resources (we only have 256MB of RAM on this PC). Lubuntu uses the LXDE desktop environment to keep resource usage low but shares the same base and package system as standard Ubuntu, which we are already familiar with.

On startup the system defaults to launching lightdm as the X display manager. To configure a custom session we create a new desktop file (kiosk.desktop) and add a new user account for the kiosk login. We do not need to know the password for this account, as lightdm can be configured to log in automatically as this user by adding the following lines to the /etc/lightdm/lightdm.conf file:

# lines to include in /etc/lightdm/lightdm.conf
[SeatDefaults]
autologin-user=kiosk
autologin-user-timeout=0
user-session=kiosk

# kiosk.desktop
[Desktop Entry]
Encoding=UTF-8
Name=Kiosk Mode
Comment=Chromium Kiosk Mode
Exec=/home/kiosk/kiosk.sh
Type=Application

#!/bin/bash
# kiosk.sh - session creation
/usr/bin/xset s off
/usr/bin/xset -dpms
/usr/bin/ratpoison &
/usr/bin/chromium-browser http://jenkins/view/Status/ --kiosk --incognito --start-maximized
/usr/bin/xset s on
/usr/bin/xset +dpms

These lines cause the system to log the kiosk user in automatically and start the kiosk session, which runs the program defined in our kiosk.desktop file. All that program needs to do is disable the screen-saver and the monitor's DPMS function, then launch Chromium to display the correct page. On the first attempt we found that the browser did not fill the screen; while this can be configured, it was simpler to add the ratpoison tiling window manager to deal with it.
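For reference, creating the account and registering the session might look like this (a sketch; lightdm picks up session .desktop files from /usr/share/xsessions):

# create the kiosk account and install the session files
sudo adduser --disabled-password --gecos 'Kiosk' kiosk
sudo install -m 755 -o kiosk kiosk.sh /home/kiosk/kiosk.sh
sudo cp kiosk.desktop /usr/share/xsessions/kiosk.desktop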

To avoid wasting power when no-one is around to view this display, a cron job was added to turn the monitor off at night and back on for the working day. The script below either turns the monitor on and then disables DPMS, or turns it off and re-enables DPMS.

#!/bin/bash
# Turn monitor permanently on or off.
DISPLAY=:0.0; export DISPLAY
COOKIE=/var/run/lightdm/root/\:0

function log()
{
    logger -p local0.notice -t DPMS $*
}

function dpms_force()
{
    xset dpms force $1 && log $1 || log FAILED to set $1
}

if [ $# != 1 ]
then
    echo >&2 "usage: dpms on|off|standby|suspend"
    exit 1
fi

[ -r "$COOKIE" ] && xauth merge "$COOKIE"

case "$1" in
    on)
        # Turn on and disable the DPMS to keep it on.
        dpms_force on
        xset -dpms
        ;;
    off|standby|suspend)
        # Allow to turn off and leave DPMS enabled.
        dpms_force $1
        ;;
    *)
        echo >&2 "invalid dpms mode \"$1\": must be on, off, standby or suspend"
        exit 1
        ;;
esac
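
The cron entries themselves are then straightforward; a sketch, assuming the script above is installed as /usr/local/bin/dpms and a 07:30 to 19:00 working day:

# /etc/cron.d/kiosk-monitor (hypothetical schedule)
30 7 * * 1-5  root  /usr/local/bin/dpms on
0 19 * * 1-5  root  /usr/local/bin/dpms off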

We have been using GitLab to provide access control and repository management for Git at work. Originally I looked for something to manage user access and found that gitolite suited our needs best, with one exception: gitolite access is managed by committing to and pushing the gitolite-admin repository, which is not especially user friendly. It is easy to automate, however, and in searching for a web front end to gitolite I found GitLab.

GitLab is a Rails web application that handles user account management and repository access for git, using gitolite underneath. It has a number of plugins to allow for a selection of authentication mechanisms, and in a Windows culture we can set it up for LDAP authentication against the local Active Directory servers so that our users can use their normal Windows login details and do not have to remember yet another password. They do have to create an SSH key and upload it to GitLab, but GitExtensions provides a reasonable user interface for dealing with this. In practice we find Windows users can create an account and create repositories with minimal assistance.
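The LDAP hook-up lives in gitlab's config/gitlab.yml; a sketch using the keys from the example config of the time, with all values hypothetical:

ldap:
  enabled: true
  host: 'ad.example.local'        # an AD domain controller (hypothetical)
  base: 'DC=example,DC=local'
  port: 389
  uid: 'sAMAccountName'           # log in with the plain Windows username
  method: 'plain'
  bind_dn: 'CN=gitlab,CN=Users,DC=example,DC=local'
  password: 'secret'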

However, I find that, at least on the server we are using, git repository browsing from GitLab is quite slow, and all repository access requires an account. I have some automated processes that don't really need an SSH key as they are read-only; these use the git protocol to update their clones. I also like using gitweb for repository browsing.

gitolite has a solution to this that involves using two pseudo-user accounts, but GitLab erases these whenever it updates the gitolite configuration file. Further, if we simply create the git-daemon-export-ok file that git-daemon uses to test for exported repositories, either GitLab or GitExtensions will delete it on the next configuration update. So in searching for a way around this I found the git-daemon access-hook parameter. This allows git-daemon to call an external program to check for access. If we combine this with the gitweb projects file, which can be used to give gitweb a specific set of visible repositories, then we don't have to hack GitLab itself.

This is achieved by creating a text file that simply lists the repository names we wish to be publicly readable via git-daemon and gitweb. Then edit /etc/gitweb.conf and set the $projects_list variable to point to this file.

$projects_list = "/home/git/gitweb.projects";
$export_auth_hook = undef; # give up on the git-daemon-export-ok files.
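
The projects file itself is plain text with one repository directory per line, relative to the project root. The access hook further down greps the same file, so the entries must match the repository directory names; a hypothetical example:

public-tools.git
build-scripts.git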

Next, set up our git-daemon xinetd configuration to call our access hook script.

# /etc/xinetd.d/git-daemon
service git
{
        disable = no
        type = UNLISTED
        port = 9418
        socket_type = stream
        wait = no
        user = git
        server = /usr/local/bin/git
        server_args = daemon --inetd --base-path=/home/git/repositories --export-all --access-hook=/home/git/git-daemon-access.sh /home/git/repositories
        log_on_failure += USERID
}

After a bit of experimentation I found that you need --export-all as well as the access-hook.

Finally, the access hook script.

#!/bin/bash
#
# Called up from git-daemon to check for access to a given repository.
# We check that the repo is listed in the gitweb.projects file as these
# are the repos we consider publically readable.
#
# Args are: service-name repo-path hostname canonical-hostname ipaddr port
# To deny, exit with non-zero status

logger -p daemon.debug -t git-daemon "access: $*"
[ -r /home/git/gitweb.projects ] \
&& /bin/grep -q -E "^${2#/home/git/repositories/}(\\.git)?\$" /home/git/gitweb.projects

Here we log access attempts and fail if the gitweb.projects file is missing. Then we trim off the base-path prefix and check that the repository name is listed in the file.
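The hook can be exercised by hand before wiring it into xinetd, passing the arguments described in its comment (repository and host names hypothetical):

/home/git/git-daemon-access.sh git-upload-pack /home/git/repositories/public-tools.git \
    client.example.com client.example.com 10.0.0.5 51234 && echo allowed || echo denied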

Now I can have GitLab and git-daemon and gitweb available.

In the future it might be nice to use this mechanism to have GitLab provide some control of this from its user interface. That way I would avoid having to occasionally update this projects file. However, that's something for another day - especially as the most recent release of GitLab has dropped the gitolite back-end in favour of an internal package (git-shell). I'm waiting to see how that turns out.

Recently I have been persuading my co-workers that we should be using Git instead of CVS. To help this along I've spent some time trying to ensure that any switch-over will be as painless as I can make it. At the moment they are using TortoiseCVS on the workstations, with the CVS server being CVSNT on a Windows machine. This is a convenient setup on a Windows domain as we can use the NT domain authentication details to handle access control without anyone needing to remember specific passwords for CVS. The repositories are just accessed using the :sspi:MACHINE:/cvsroot path and it all works fine.

One thing we found lacking in CVS, even when we originally switched to it, was that it was hard to tell which set of files had been committed together. CVSNT adds a changeset id to help with this and can also be configured to record an audit log to a MySQL database. So a few years ago we made an in-house web app that shows a changeset view of the CVS repository and helps us review new commits and do changeset merges from one branch to another using the CVSNT extended syntax (cvs update -j "@<COMMITID" -j "@COMMITID"). So if we are to switch, we must ensure the same facilities continue to be available. We also need the history converted, but that's the easy part using cvs2git. I've already been running a git mirror of the CVS repository for over a year by converting twice a day and then pushing the new conversion onto the mirror. It is done this way because cvsimport got some things wrong and cvs2git doesn't do incremental imports. A full conversion takes about 40 minutes of processing, and pushing the new repository onto the old one helps to show up any problems quickly (like the time one dev worked out how to edit the CVS commit comments for old commits).
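The twice-daily mirror job is nothing clever; a sketch of the sequence, with the CVS path and mirror URL hypothetical:

#!/bin/sh
# Full conversion with cvs2git, then push the result over the existing mirror.
cvs2git --blobfile=/tmp/git-blob.dat --dumpfile=/tmp/git-dump.dat \
        --username=cvs2git /var/cvs/project
rm -rf /tmp/project.git && git init --bare /tmp/project.git
cd /tmp/project.git
cat /tmp/git-blob.dat /tmp/git-dump.dat | git fast-import
git push --mirror git@server:project.git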

So we need a nice changeset viewing tool, access control - preferably seamless with Windows domains - and simple creation of new repositories. The first is the simplest: gitweb is included with git and provides a nice view of any repositories you feed it. Initially I found it a bit slow on my Linux server with Apache, but switching from CGI to FastCGI sorted this out. In case it helps, I had to install libapache2-mod-fastcgi, libcgi-fast-perl and libfcgi-perl, then add the following to /etc/apache2/conf.d/gitweb. Supposedly this can run under mod_perl but I failed to make that work. The FastCGI setup is performing well though.

# gitweb.fcgi is just a link to gitweb.cgi
ScriptAlias /gitweb /usr/local/share/gitweb/gitweb.fcgi
<Location /usr/local/share/gitweb/gitweb.fcgi>
  SetHandler fastcgi-script
  Options +ExecCGI
</Location>

Next we need access control. The best-of-breed for this appears to be gitolite, and it does a fine job. It uses ssh keys to authenticate developers for repository access, which means only a single unix user account is required. It also permits access control down to individual branches, which may be quite useful. It is configured by pushing committed changes to an administrative git repository. I can see this not being taken so favourably by my fellow developers, although it is very powerful. So I thought I might need some kind of web UI for gitolite, and discovered GitLab. This fills the gap very nicely by sitting on top of gitolite and giving a simple method to create new repositories and control the access policy. If we need finer control than gitlab provides, we can still use the gitolite features directly.
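For instance, branch-level rules are written in gitolite's conf file like this (a sketch; repository and user names hypothetical):

repo firmware
    RW+ master   = alice
    RW  dev/     = @developers
    R            = @all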

Setting up gitlab on an Ubuntu 10.04 LTS server was a minor pain. Gitlab is a Ruby-on-Rails application, and these kinds of things appear to enjoy using the cutting-edge releases of everything. However, the Ubuntu server apt repositories are not keeping up, so for Ruby it is best to compile everything locally and give up on apt-get. Following the instructions it was relatively simple to get gitlab operating on our server. It really does need the umask changing as mentioned though. I moved some repositories into it by creating them in gitlab and then using 'push --mirror' to load the git repositories in. The latest version supports LDAP logins, so once configured it is now possible to use the NT domain logins to access the gitlab account. From there, developers can load up an ssh key generated using either git-gui or gitextensions, create new repositories and push.
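Loading an existing repository into a project created through gitlab is then a one-liner (names hypothetical):

cd myproject.git
git push --mirror git@gitserver:myproject.git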

With gitlab operating fine as a standalone rails application it needed to be integrated with the Apache server. It seems Rails people like Nginx and other servers - however, Apache can host Rails applications fine using Passenger. This was very simple to install and getting the app hosted was no trouble. There is a problem if you host Gitlab under a sub-uri on your server. In this case LDAP logins fail to return the authenticated user to the correct location. So possibly it will be best to host the application under a sub-domain but at the moment I'm sticking with a sub-uri and trying to isolate the fault. My /etc/apache2/conf.d/gitlab file:

# Link up gitlab using Passenger
Alias /gitlab/ /home/gitlab/gitlabhq/
Alias /gitlab /home/gitlab/gitlabhq/
RackBaseURI /gitlab
<Directory /home/gitlab/gitlabhq/>
  Allow from all
  Options -MultiViews
</Directory>

Now there are no excuses left. Let's hope this keeps them from turning to TFS!

A recently merged commit to the CyanogenMod tree caught my eye - Change I5be9bd4b: bluetooth networking (PAN). Bluetooth Personal Area Networking permits tethering over bluetooth. This is something that will allow me to tether a Wifi Xoom to my phone when I'm someplace without wifi. Unfortunately, while this is in the CM tree, we need some additional kernel support for the Nexus S. The prebuilt kernel provided with CM 7.1.0 RC1 doesn't include the BNEP support we need to complete this feature. So let's build a kernel module.

The Nexus S kernel is maintained at android.git.kernel.org, but I notice there is also a CyanogenMod fork on github. So to begin we can clone these and see what differences exist between the stock kernel and the CM version. To set up the repository:

mkdir kernel && cd kernel
git clone git://android.git.kernel.org/kernel/samsung.git samsung
cd samsung
git remote add cm git://github.com/CyanogenMod/samsung-kernel-crespo.git
git remote update
git log --oneline origin/android-samsung-2.6.35..cm/android-samsung-2.6.35 
f288739 update herring_defconfig

The only change between the samsung repository and the CM repository is a single commit changing the configuration. This has evidently been done using the kernel configuration utility, so it's a bit hard to work out the changes by comparing the config files directly. However, if I take each config in turn and regenerate a new one via the kernel configuration utility, I can then extract just the changes.

git cat-file blob cm/android-samsung-2.6.35^:arch/arm/configs/herring_defconfig > x_prev.config
git cat-file blob cm/android-samsung-2.6.35:arch/arm/configs/herring_defconfig > x_cm.config

Then make gconfig with each config in turn, saving the regenerated file to a new name, and compare:

diff -u x_prev.config x_cm.config

and we can see that the new settings are just:

CONFIG_SLOW_WORK=y
CONFIG_TUN=y
CONFIG_CIFS=y
CONFIG_CIFS_STATS=y
CONFIG_CIFS_STATS2=y
CONFIG_CIFS_WEAK_PW_HASH=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y

Current versions of Linux embed version information in modules, including the git commit id of the kernel source used to build them. The same information is present in the kernel, and the kernel will reject a module with the wrong id. So to make a module that will load into the currently running kernel I need to check out that version - simple enough, as it is the current head-but-one of the CM samsung kernel repository (e382d80). Adding CONFIG_BNEP=m to the kernel config enables building BNEP support as a module, and building HEAD^ with the HEAD herring_defconfig results in a compatible bnep.ko module.
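The build itself is the usual kernel cross-compile, roughly as follows (a sketch, assuming an arm-eabi- toolchain from the Android prebuilts is on the PATH):

git checkout e382d80                              # source matching the running kernel
make ARCH=arm CROSS_COMPILE=arm-eabi- herring_defconfig
echo CONFIG_BNEP=m >> .config
make ARCH=arm CROSS_COMPILE=arm-eabi- oldconfig   # accept the new BNEP sub-options
make ARCH=arm CROSS_COMPILE=arm-eabi- modules
# the module is built at net/bluetooth/bnep/bnep.ko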

To test this I copied the module onto the device and restarted the PAN service.

% adb push bnep.ko /data/local/tmp/bnep.ko
% adb shell
# insmod /data/local/tmp/bnep.ko
# ndc pan stop
# ndc pan start
# exit

With this done we can try it out. The Blueman app on Ubuntu lets me rescan the services on a device, and following the above changes the context menu for my device now shows Network Access Point. Selecting this results in a bluetooth tethering icon on the Nexus S and we are away. Further checking on the Xoom with Wifi disabled proves that it all works: routing is properly configured and the Xoom can access the internet via the phone.

To make this permanent, I could just remount the phone's system partition read-write and edit /system/etc/init.d/04modules to load my new module on restart. That works fine. However, I may as well add the configuration changes from above to the current samsung stock kernel and change the kernel in use when I next rebuild a CyanogenMod image. So that is what I am running now.
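The init script addition is a single line (the module's destination on the phone is my choice):

# appended to /system/etc/init.d/04modules
insmod /system/lib/modules/bnep.ko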

Moving Python packages out of the Windows Roaming profile

On my work machine it has recently been taking quite a while to start up when first turned on. This tends to mean there is too much data in the corporate roaming profile, so I started to look into it. I recently installed ActivePython and added a few packages using the Python package manager (pypm). This has dumped 150MB into my Roaming profile. This is the wrong place for this stuff on Windows 7: it should be in LOCALAPPDATA, which would restrict it to just this machine (where I have Python installed) and not get copied around when logging in and out.

A search points to PEP 370 as being responsible for this design, and that document suggests how to move the location. In the specific case of ActivePython the packages are stored in %APPDATA%\Python, so we need to set the PYTHONUSERBASE environment variable to %LOCALAPPDATA%\Python. Editing the environment using the Windows environment dialog and using %LOCALAPPDATA% doesn't work correctly, as the variable does not get expanded. However, we can set it to %USERPROFILE%\AppData\Local\Python, which is expanded properly and produces the right result.
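Alternatively the variable can be set from a command prompt with setx; %USERPROFILE% is expanded by the shell before setx stores the value, so the stored path is literal:

C:\> setx PYTHONUSERBASE "%USERPROFILE%\AppData\Local\Python"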

Once the environment is set up we can move the package directory from %APPDATA%\Python to %LOCALAPPDATA%\Python and check everything is still available:

C:\Users\pat> pypm info
PyPM 1.3.1 (ActivePython 2.7.1.4)
Installation target: "~\AppData\Local\Python" (2.7)
(type "pypm info --full" for detailed information)

C:\Users\pat>pypm list
numpy        1.6.0    NumPy: array processing for numbers, strings, records, a
pil          1.1.7~2  Python Imaging Library
xmpppy       0.5.0rc1 XMPP-IM-compliant library for jabber instant messaging.

I have been running various versions of CyanogenMod on my HTC Magic for a long time now - ever since I decided that Vodafone were never going to bother to release any updates after the Donut (Android 1.6) update. However, I recently noticed that in December 2010 they finally released an update to upgrade these phones to Android 2.2.1 (FroYo). The files for this can be downloaded from Google's servers just like the last OTA update. This time there are two files - one to update the bootloader and one for the operating system update.

Before going any further I'll just note the settings the fastboot screen reported when I started, with the final settings after completing the update alongside:

Before                          After
SAPPHIRE PVT 32B SHIP S-ON G    SAPPHIRE PVT 32B SHIP S_OK G
HBOOT-1.33.0004 (SAPP10000)     HBOOT-1.33.0013 (SAPP10000)
CPLD-10                         CPLD-10
RADIO-2.22.19.26I               RADIO-2.22.28.25
Apr 9 2009,23:30:40             Oct 21 2009,22:33:27

Official OTA updates are written to expect a known configuration as a starting point, so if we want to use this it is time to revert back to the original image shipped with this phone. Fortunately I have my nandroid backup from the 1.5 days, so using the Amon RA recovery image I installed as part of the CyanogenMod ROM I can revert back and begin applying first the update to Donut and then the new one to FroYo. Part of the reason for this is that the latest update includes a new radio image which apparently leaves a bit more RAM available for the running system. As memory contention is the most significant problem with the HTC Magic, this has to be good. We shall see.

I downloaded 3 update files: the update from Cupcake to Donut, the bootloader update and the update from Donut to FroYo. Each file in turn needs to be copied to the sdcard and called update.zip (adb push filename /sdcard/update.zip). Then reboot the phone while holding down the Home key to restart in recovery mode. Once the recovery image is shown, pressing Home and Power shows a menu and you can select apply update.zip to flash the image. The hboot image looks quite scary as it works - it reboots the phone 3 times but eventually boots the operating system once again. Just wait for it patiently.

So now I have the official Vodafone-released Android 2.2.1. Let's see about making a backup. Boot the phone to fastboot mode and try running the Amon RA recovery image: fastboot boot recovery-RA-sapphire-1.7.0G.img. Access denied. I had a suspicion this might happen given the bootloader update. This is a shame but not really a problem. We can use various rooting methods, but I simply copied psneuter to /data/local/tmp and ran it from an adb shell. With that done I have a root shell and can remount the system partition read-write and copy su, busybox and the Superuser.apk file from my CyanogenMod build tree. This fixes root access.
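For reference, the psneuter run is just the usual push-and-execute (a sketch):

adb push psneuter /data/local/tmp/psneuter
adb shell chmod 755 /data/local/tmp/psneuter
adb shell /data/local/tmp/psneuter
adb shell                  # adbd restarts; this new shell is root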

Completing the job requires changing the recovery image so that I can boot it and make complete nandroid backups. The Vodafone release includes a script that checks for a valid image in the recovery partition and replaces it if it no longer matches the known version. This script is /etc/install-recovery.sh and is called from the Android init process. It updates the recovery partition by patching a copy of the boot partition using the binary patch file in /system/recovery-from-boot.p. So to make our recovery stick around we need to rename the install-recovery.sh script and the /system/recovery-from-boot.p file, and place our own copy of Amon-RA 1.7.0G at /system/recovery.img. Then we can flash this using flash_image recovery /system/recovery.img.
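Put together, the commands from a root shell look something like this (paths as described above; the rename suffixes are my choice):

mount -o remount,rw /system
mv /system/etc/install-recovery.sh /system/etc/install-recovery.sh.disabled
mv /system/recovery-from-boot.p /system/recovery-from-boot.p.disabled
# recovery-RA-sapphire-1.7.0G.img previously copied to /system/recovery.img
flash_image recovery /system/recovery.img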

The reason Vodafone use a patch file is to keep the size of the recovery copy small. Possibly we could do the same thing; however, when I used the imgdiff program from the Android build to generate a patch from the current boot image to the new recovery image, it just made a patch containing the whole image. Clearly there is insufficient commonality between the two to make this worthwhile.

Now at last I can make a nandroid backup of the updated system by booting to the new recovery screen. And I've got root access for anything that might require it. And after all this, was it worth it? Actually yes. The phone seems to be running more smoothly with this than with CyanogenMod 6. Watching logcat there are fewer messages about processes being killed due to low memory. So far it does appear to have been worth doing. But I'm noticing all sorts of little CyanogenMod tweaks that are no longer with me, so how long it will stay this way I'm not sure.

I recently decided that it might be nice to provide some Windows shell customizations for handling a new file format that I am working on: making some fields available to Windows Search, customizing the default view when a user examines a directory of our files, and customizing the icon shown so that it reflects the file content. All these are possible using shell plugins, and there is even a nice wizard for ATL in Visual Studio 2010 that will get things started.

So I began with the default ATL file handler extension provided by the ATL wizard, started to add some code to the preview window, and implemented the thumbnail handler. For this we can see if the file has some data we could draw and then generate an image. In our case the file sometimes contains an image - if so, we can draw this in the preview and also use it as the thumbnail.

Now the preview was working fine, but the images failed to paint properly in the thumbnail. I reduced the code there to just drawing some lines, and it partly worked: the line was present but the color was always black.

So here is the code used by ATL to prepare the drawing context. It creates a memory display context and selects a bitmap into it; when we later draw on this memory DC, the result ends up in the bitmap.

BOOL GetThumbnail(
		_In_ UINT cx,
		_Out_ HBITMAP* phbmp,
		_In_opt_ WTS_ALPHATYPE* /* pdwAlpha */)
	{
		HDC hdc = ::GetDC(NULL);
		RECT rcBounds;

		SetRect(&rcBounds, 0, 0, cx, cx);

		HDC hDrawDC = CreateCompatibleDC(hdc);
		if (hDrawDC == NULL)
		{
			ReleaseDC(NULL, hdc);
			return FALSE;
		}

		HBITMAP hBmp = CreateCompatibleBitmap(hDrawDC, cx, cx);
		if (hBmp == NULL)
		{
			ReleaseDC(NULL, hdc);
			DeleteDC(hDrawDC);
			return FALSE;
		}

		HBITMAP hOldBitmap = (HBITMAP) SelectObject(hDrawDC, hBmp);

		// Here you need to draw the document's data
		OnDrawThumbnail(hDrawDC, &rcBounds);

		SelectObject(hDrawDC, hOldBitmap);

		DeleteDC(hDrawDC);
		ReleaseDC(NULL, hdc);

		*phbmp = hBmp;
		return TRUE;
	}

There are two problems here. First, when a memory DC is created it has a monochrome bitmap selected by default. Here is what MSDN has to say:

A memory DC exists only in memory. When the memory DC is created, its display surface is exactly one monochrome pixel wide and one monochrome pixel high. Before an application can use a memory DC for drawing operations, it must select a bitmap of the correct width and height into the DC.

So when we then create a compatible bitmap from this, we get a monochrome bitmap. So the first fix is to use the window display context so that we can support a color bitmap: CreateCompatibleBitmap(hdc, cx, cx);.

The second problem shows up when reading the documentation for IThumbnailProvider::GetThumbnail.

phbmp
[out] When this method returns, contains a pointer to the thumbnail image handle. The image must be a device-independent bitmap (DIB) section and 32 bits per pixel.

Oops. CreateCompatibleBitmap creates device dependent bitmaps. We need to be using CreateDIBSection to get a device independent bitmap. If we create a DIB and select that into the memory display context then all should be well. So to fix this the default GetThumbnail() function must be overridden to prepare a proper surface for drawing.

BOOL CDemoDocument::
GetThumbnail(_In_ UINT cx, _Out_ HBITMAP* phbmp, _In_opt_ WTS_ALPHATYPE* /* pdwAlpha */)
{
    BOOL br = FALSE;
    HDC hdc = ::GetDC(NULL);
    HDC hDrawDC = CreateCompatibleDC(hdc);
    if (hDrawDC != NULL)
    {
        void *bits = 0;
        RECT rcBounds;
        SetRect(&rcBounds, 0, 0, cx, cx);

        BITMAPINFO bi = {0};
        bi.bmiHeader.biWidth = cx;
        bi.bmiHeader.biHeight = cx;
        bi.bmiHeader.biPlanes = 1;
        bi.bmiHeader.biBitCount = 32;
        bi.bmiHeader.biSizeImage = 0;
        bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
        bi.bmiHeader.biClrUsed = 0;
        bi.bmiHeader.biClrImportant = 0;

        HBITMAP hBmp = CreateDIBSection(hdc, &bi, DIB_RGB_COLORS, &bits, NULL, 0);
        if (hBmp != NULL)
        {
            HBITMAP hOldBitmap = (HBITMAP)SelectObject(hDrawDC, hBmp);
            OnDrawThumbnail(hDrawDC, &rcBounds);
            SelectObject(hDrawDC, hOldBitmap);
            *phbmp = hBmp;
            br = TRUE;
        }
        DeleteDC(hDrawDC);
    }
    ReleaseDC(NULL, hdc);
    return br;
}

Now it works!

The Tcl developers have now switched from CVS to a new and shiny distributed version control system. There are a number of these to select from, with various advantages and disadvantages. Git is perhaps the most famous and possibly the most powerful, although it can be unintuitive for Windows users. Mercurial is getting extremely popular as it works fairly smoothly on Windows as well as unix platforms. Bazaar appears to be another contender.

Tcl has chosen fossil. This is a rather less well known DVCS written by the guy who did sqlite. It seems to do what all the others do. So we are going to be using fossil.

Fossil holds the repository in an sqlite database file, so we clone to create a local database file and then 'open' a working tree from this repository file.

fossil clone http://mirror1.tcl.tk/tcl /opt/repos/tcl.fossil
mkdir /opt/src/tcl
cd /opt/src/tcl
fossil open /opt/repos/tcl.fossil

Following this procedure we now have a local repository that we can share and a working tree that we can build. To pull in more changes from the remote we do fossil pull, and then use fossil update when we want to update the working tree or switch to another branch (eg: fossil update core-8-5-branch).

% fossil update
Autosync:  http://core.tcl.tk/tk
                Bytes      Cards  Artifacts     Deltas
Sent:             130          1          0          0

Huh? I was only trying to update from the local repository. Why is it talking to the network? This might not be a problem but at the time I was offline with no WiFi link. It turns out that fossil has per-repository settings (fossil settings) and one of these (autosync) makes it work like CVS. We need to disable this using fossil settings autosync 0. And I'll need to do that each time I make a clone. At least it now works like a typical DVCS.

Now to set up the Windows machine. To save bandwidth I'll use a copy of the repository I obtained earlier on the Linux machine and update it. This is partly because cloning the Tcl core repository is really damn slow - apparently because the Tcl core machine is bandwidth limited.

C:\opt\tcl\src\tk>fossil open /opt/repos/tk.fossil
c:\opt\bin\fossil.exe: incorrect repository schema version
c:\opt\bin\fossil.exe: you have version "2011-02-25 14:52" but you need version
"2011-01-28"
c:\opt\bin\fossil.exe: run "fossil rebuild" to fix this problem

Hmm. We seem to be exposing our guts a bit here. So a given repository would only seem to work with a given version of fossil? A sign of an immature system I think. At least it told me how to fix this.

C:\opt\tcl\src\tk>fossil rebuild -R /opt/repos/tk.fossil
  100.0% complete...

C:\opt\tcl\src\tk>fossil open /opt/repos/tk.fossil
c:\opt\bin\fossil.exe: already within an open tree rooted at C:/opt/tcl/src/tk/

Huh!

C:\opt\tcl\src\tk>dir
 Volume in drive C has no label.
 Volume Serial Number is 56FF-5C9C

 Directory of C:\opt\tcl\src\tk

10/03/2011  12:15    <DIR>          .
10/03/2011  12:15    <DIR>          ..
10/03/2011  12:15             7,168 _FOSSIL_
               1 File(s)          7,168 bytes
               2 Dir(s)  163,961,884,672 bytes free

Delete and repeat.

C:\opt\tcl\src\tk>fossil open /opt/repos/tk.fossil

C:\opt\tcl\src\tk>fossil pull
Server:    http://core.tcl.tk/tk/
via proxy: http://uknml1869:3129/
                Bytes      Cards  Artifacts     Deltas
Sent:             146          2          0          0
Received:        2854         62          0          0
Sent:             741         15          0          0
Received:        6766         74          5          8
Total network traffic: 876 bytes sent, 2983 bytes received

C:\opt\tcl\src\tk>fossil update
UPDATE ChangeLog
REMOVE doc/.cvsignore
REMOVE unix/.cvsignore
UPDATE unix/configure.in
REMOVE win/.cvsignore
--------------
updated-to:   53d9debe536c57e0f54b7ab88dc941e57cf21edb 2011-03-09 17:03:34 UTC
tags:         trunk
comment:      Fix libXft check (user: rmax)
"fossil undo" is available to undo changes to the working checkout.

C:\opt\tcl\src\tk>fossil branch
  core-8-4-branch
  core-8-5-branch
* trunk

Great. We now have a current checkout of Tk and I can go and commit a patch I've been nursing in my git repository.

I have 64-bit Ubuntu 10.10 installed on my laptop. When using the default Gnome desktop the volume wheel on the side of the laptop correctly controls the audio volume. However, I quite like using XFCE, so I have been booting into an XFCE desktop instead recently, and I noticed that the volume wheel doesn't work for this desktop. It turns out there is an additional package that is not included when you apt-get install xfce4, and this is xfce4-volumed. As soon as this daemon was running, the volume wheel operated as expected. Excellent!