Friday, 2 September 2011

Custom USB devices on an Android phone

I have a Samsung Galaxy S2 which is a lovely phone that I've had plenty of fun customising thanks to its unlocked bootloader and open-source kernel. One feature of the phone that is of particular interest is the USB On-The-Go (OTG) support which allows it to switch between slave and host modes, meaning that with the right adapter you can plug a mouse, keyboard or even memory stick straight into the phone and it will handle it (as long as the power requirements for the peripheral are very low).

My work involves dealing with a custom USB hardware device and so I was very keen to find out whether or not I could talk to the hardware via my phone.

Powering the device
The first step was to provide external power since the SGS2 would be incapable of supplying enough to run the device. This was simply a case of taking an old USB cable and attaching the +5V and ground wires directly to the supply pins of the device so that I can power it via a computer or phone charger (a final product could involve some form of battery instead).

Debugging the phone via adb
I also needed a way to access the logs and shell on the phone. This would normally be done over the USB cable with adb, but since the port was going to be used for the OTG adapter that was not an option. Instead I found that it is possible to connect to adb over TCP/IP: free widgets can be found in the market that will enable it on a custom port number (5555 is the default), after which you can connect to the device using:

adb connect <phone-ip>:5555

(where <phone-ip> is the phone's Wi-Fi IP address)

Initial testing
Having already confirmed that my OTG host adapter was working by testing a mouse and memory stick, I then started experimenting with plugging our device into the phone. I had noticed that when supported peripherals were plugged in it was logged in the standard logcat (via adb) but nothing appeared for our custom hardware. Therefore I started to dig a bit deeper and monitored the dmesg output only to find that it was noticing the device but then rejecting it and reporting that "device vXXX pYYY is not supported" where XXX was the vendor ID and YYY was the product ID.

It was time to start digging through the kernel sources (initially pulled from the repository set up by supercurio), which Samsung have very sensibly open-sourced, looking for the log message above. I tracked it down to some header files in "drivers/usb/core" that were named:
  • otg_whitelist.h
  • s3cotg_whitelist.h
  • sec_whitelist.h
So it would seem that Samsung/Android have set up whitelists of devices that are allowed to connect (primarily mass storage and HIDs), how annoying!

Building a kernel
The next big challenge was to figure out how to build my own Android kernel. It proved quite tricky finding all of the relevant information although a post by codeworkx helped a lot. At the time of writing this post the kernel sources repo was unfortunately out of date and so I switched to the one listed in the post. I also pulled the Android sources for the build toolchain and began building.

A number of soft-bricks later I managed to get a self-built kernel running successfully on my phone (including an initramfs in the kernel build is an easy thing to miss but oh so important). It was scary to test but very satisfying once working. Thank you Samsung for your wonderful download mode, without which I probably would have completely bricked my phone! Thank you also to Benjamin Dobell for his Heimdall application, which now fully supports the SGS2 and provides us Linux users with a method of flashing our phones in download mode.

Hacking a kernel
Now I could really get down to business and started hacking those header files I found earlier. I chose the obvious one (s3cotg_whitelist.h) since it seems to refer to the Samsung "Systems-on-Chip" and added the following to the whitelist table:

{ USB_DEVICE_INFO(0xff, 0x0, 0x0) },   /* vendor specific USB devices */
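For context, each whitelist is just an array of struct usb_device_id entries, so the new line sits alongside the existing class-based rules. The following is an illustrative sketch only (the real tables in "drivers/usb/core" differ in detail; the class constants here are the standard ones from the mainline kernel headers):

```c
/* Illustrative sketch -- NOT the exact Samsung source. USB_DEVICE_INFO()
 * matches on device class/subclass/protocol (see include/linux/usb.h). */
static struct usb_device_id whitelist_table[] = {
        { USB_DEVICE_INFO(USB_CLASS_HID, 0, 0) },          /* keyboards, mice */
        { USB_DEVICE_INFO(USB_CLASS_MASS_STORAGE, 0, 0) }, /* memory sticks */
        { USB_DEVICE_INFO(0xff, 0x0, 0x0) },               /* vendor specific (the hack) */
        { }                                                /* terminating entry */
};
```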

After rebuilding the kernel and flashing it I still found that the custom device was being rejected. It seems that the important file is actually "sec_whitelist.h", in which there are two tables, so I added my little hack to both, rebuilt, flashed... success! The device was accepted and the connection was even visible in logcat.

Since then I have also discovered that whitelisting is a kernel option. I have not tested it but would assume that if you edit the kernel ".config" file and set "CONFIG_USB_SEC_WHITELIST=n" then it would allow any type of USB device to connect.
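The idea, sketched here with a throwaway config file so the commands can be run anywhere (again, untested against the actual kernel tree), would be:

```shell
# Untested sketch: set the whitelist option to 'n' in the kernel .config.
# A throwaway copy is edited here; on the real tree you would edit .config
# in the kernel source root, then re-run "make oldconfig" and rebuild.
cfg=/tmp/demo.config
printf 'CONFIG_USB_SEC_WHITELIST=y\n' > "$cfg"
sed -i 's/^CONFIG_USB_SEC_WHITELIST=.*/CONFIG_USB_SEC_WHITELIST=n/' "$cfg"
cat "$cfg"
```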

Writing an application
Now that the device can connect to the phone it is time to start developing an application to communicate with it. The Android SDK provides a number of classes that give direct access to USB devices and if you have experience with libraries such as libusb then it will seem fairly familiar.

One thing to note is that the UsbDevice and related classes are part of Android 3.1 and so an application needs to be built against API level 12 but to run it on the SGS2 you can get away with setting the minSdkVersion to 10 in the manifest (this will generate a warning but it's all good). 
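As a starting point, enumerating the attached devices and finding ours by its IDs looks roughly like this. This is an untested sketch against the API level 12 classes (UsbManager, UsbDevice and friends), and VENDOR_ID/PRODUCT_ID are placeholders for the real hardware IDs:

```java
// Untested sketch against the Android 3.1 (API 12) USB host classes,
// written as it would appear inside an Activity. The ID constants are
// placeholders, not our hardware's actual values.
final int VENDOR_ID  = 0x0000;
final int PRODUCT_ID = 0x0000;

UsbManager manager = (UsbManager) getSystemService(Context.USB_SERVICE);

for (UsbDevice device : manager.getDeviceList().values()) {
    if (device.getVendorId() == VENDOR_ID && device.getProductId() == PRODUCT_ID) {
        // Permission must be requested from the user before opening
        UsbDeviceConnection connection = manager.openDevice(device);
        // ... claim an interface and transfer data, much as with libusb
    }
}
```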

Thursday, 21 April 2011

Adaptive Threshold Edge Detection in GIMP

We have a particular edge detection algorithm that was developed for the OpenIllusionist project specifically to help with detecting fiducials in live video, since it was faster and better suited to the task than algorithms such as the Canny edge detector.

The original algorithm was designed by Prof. John Robinson at the University of York and then modified and optimised by myself. The following diagram, taken from my thesis, provides an overview of how the algorithm works:

The GIMP is an extremely useful tool, not only for manipulating your photos but also for prototyping machine vision algorithms. Since the edge detection is part of our standard toolkit it would make sense to be able to test it out on images alongside all of the usual filters found in the GIMP.

The porting of this algorithm to a GIMP plugin was not simple, since GIMP plugins usually make use of tile-based processing thereby reducing the overall memory required to handle very large images. However, as can be seen in the above diagram, the adaptive threshold edge detector uses many full and scaled down buffers to produce the final result and as such cannot easily be modified to work with tiles. The current implementation of my plugin simply uses full image regions and therefore the user must be aware that it may have issues if applied to very large images.

One problem that this particular algorithm attempts to address is that of edge ringing in images, especially those captured from low-quality cameras or affected by compression artefacts. The following image is based on an example from Wikipedia but also includes the results from our edge detector, which clearly show that it is unaffected by the JPEG compression artefacts present in the lossy image:

The code for the plugin along with a binary for Linux x64 can be found in my github plugins repository and further examples of the algorithm in action are shown below.

Source image copyright Dan Parnham

Source image copyright Benh Lieu Song

Sunday, 20 March 2011

Cross-compilation Adventures

At work we were recently asked to produce a Windows 64-bit build of our library; however, our existing build system was not able to build 64-bit DLLs. The build server is a Debian x64 machine running Trac, Subversion and Hudson.

The automated build process is based on a combination of NAnt and Premake scripts (we use Premake to generate Code::Blocks project files for our development machines and makefiles for the build machine). The scripts use gcc/g++ to build the Linux version of the library (.so) for our kiosks and MinGW to cross-compile the library for Windows x86.

MinGW is just the standard version installed from the repositories and is not currently capable of producing 64-bit binaries, so we needed to switch that section of our toolchain.

The rest of this post documents how we configured a new toolchain capable of 64-bit builds and makes use of the MinGW-w64 project.

Install MinGW-w64

The MinGW-w64 project provides two toolchains: one targeting Win32 builds and one targeting Win64 builds.

Download the Win32 toolchain and extract to /opt/mw32
Download the Win64 toolchain and extract to /opt/mw64

In /usr/local/bin create the following helper scripts and make them executable (this is optional but we found them useful):

mw32-ar :

#!/bin/sh
i686-w64-mingw32-ar "$@"

mw32-gcc :

#!/bin/sh
i686-w64-mingw32-gcc -m32 "$@"

mw32-g++ :

#!/bin/sh
i686-w64-mingw32-g++ -m32 "$@"

mw64-ar :

#!/bin/sh
x86_64-w64-mingw32-ar "$@"

mw64-gcc :

#!/bin/sh
x86_64-w64-mingw32-gcc -m64 "$@"

mw64-g++ :

#!/bin/sh
x86_64-w64-mingw32-g++ -m64 "$@"

Building wxWidgets

Our library is built using wxWidgets since it is fast, easy to use and provides cross-platform access to string functions, file handling, XML parsing, image loading/saving and so on. We statically link wxWidgets and so the following describes how to build 32-bit and 64-bit static versions of wxWidgets.

Download wxWidgets >= 2.9, since 2.8.x does not seem to build successfully for 64-bit. As of writing there is also a problem with legacy references that breaks the build; the simplest way to fix this is to hack the "configure" and "" scripts, removing "-lwctl3d32".

Open a shell in the wxWidgets root folder (having extracted it somewhere) and run the following. I recommend going away and having a nice cup of tea while the make is running since it will take a while.

$ PATH=/opt/mw32/bin/:$PATH ./configure prefix=/opt/mw32/mingw --host=i686-w64-mingw32 --enable-unicode --disable-monolithic --disable-shared --build=`./config.guess`
$ PATH=/opt/mw32/bin/:$PATH make
$ sudo make install

$ make clean

$ PATH=/opt/mw64/bin/:$PATH ./configure prefix=/opt/mw64/mingw --host=x86_64-w64-mingw32 --enable-unicode --disable-monolithic --disable-shared --build=`./config.guess`
$ PATH=/opt/mw64/bin/:$PATH make
$ sudo make install

As of writing the make install does not copy the headers to a nice "wx" folder and so the simplest solution is to symlink them:
$ cd /opt/mw32/mingw/include
$ sudo ln -s wx-2.9/wx ./
$ cd /opt/mw64/mingw/include
$ sudo ln -s wx-2.9/wx ./

Since our server is also used to build Linux versions of the library, it has the wxWidgets development packages installed from the repositories and therefore also has the "wx-config" tool available. To make this tool aware of our cross-compilation configurations we simply need to symlink the scripts to the appropriate location:
$ cd /usr/lib/wx/config
$ sudo ln -s /opt/mw32/mingw/lib/wx/config/i686-w64-mingw32-msw-unicode-static-2.9 ./
$ sudo ln -s /opt/mw64/mingw/lib/wx/config/x86_64-w64-mingw32-msw-unicode-static-2.9 ./


Building something against the cross-compiled wxWidgets libraries is now a case of passing the appropriate wx-config invocation to the compiler:

32-bit build options:
`wx-config --host=i686-w64-mingw32 --version=2.9 --cxxflags`
32-bit link options:
-static-libstdc++ -static-libgcc `wx-config --host=i686-w64-mingw32 --version=2.9 --libs`
64-bit build options:
`wx-config --host=x86_64-w64-mingw32 --version=2.9 --cxxflags`
64-bit link options:
-static-libstdc++ -static-libgcc `wx-config --host=x86_64-w64-mingw32 --version=2.9 --libs`
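For example, a makefile might wire the 64-bit options in as follows (an illustrative fragment only; the variable names here are mine, not from our actual build scripts):

```makefile
# Illustrative fragment -- variable names are not from our real scripts
WX_CONFIG := wx-config --host=x86_64-w64-mingw32 --version=2.9

CXXFLAGS += $(shell $(WX_CONFIG) --cxxflags)
LDFLAGS  += -static-libstdc++ -static-libgcc $(shell $(WX_CONFIG) --libs)
```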

If this is configured in a makefile then the helper scripts can be used to tell make which toolchain to use, e.g.:
$ make config=cross CC=mw32-gcc CXX=mw32-g++ AR=mw32-ar

Congratulations, you should now have a Linux system capable of cross-compiling to both 32-bit and 64-bit Windows binaries.


I have since found that when setting up a build (using tools such as Premake) it is simpler if the helper scripts act in the same way as the native build tools, handling the -m32 and -m64 flags appropriately. The following is the mw-g++ script I now use to replace the previously defined mw32-g++ and mw64-g++:


#!/bin/sh
# Select the 64-bit toolchain if -m64 is present, otherwise default to 32-bit
target="x86"
for arg in "$@"; do
        if [ "$arg" = "-m64" ]; then
                target="x64"
        fi
done

if [ "$target" = "x64" ]; then
        x86_64-w64-mingw32-g++ "$@"
else
        i686-w64-mingw32-g++ "$@"
fi

Saturday, 19 February 2011

Nested Elements in MooTools

When developing a web application my usual JavaScript library of choice is MooTools, since it is simple to use and provides cross-browser support along with a form of object-orientation.

The library has an Element structure that allows for the easy construction of HTML elements, but the way those elements are tied together is based on functions such as inject(), adopt() and grab(). This works well, but I was recently constructing a fake dialog box consisting of many nested elements and wanted to be able to declare them in a way that clearly shows the hierarchy.

This would be one way of doing it with MooTools:
var dialog       = new Element('div.background');
var container    = new Element('div.container').inject(dialog);
var win          = new Element('div.window').inject(container);  // 'win' avoids shadowing the global 'window'
var header       = new Element('h1', { text: 'A title' }).inject(win);
var content      = new Element('div.content').inject(win);
var bar          = new Element('').inject(win);
var okButton     = new Element('button', { text: 'OK', events: { ... }}).inject(bar);
var cancelButton = new Element('button', { text: 'Cancel', events: { ... }}).inject(bar);

However, although it is fairly tidy and concise, it is difficult to just look at the code and determine the resulting structure. Instead I wanted to be able to do this:
new Element('div.background', { children: [
    new Element('div.container', { children: [
        new Element('div.window', { children: [
            new Element('h1', { text: 'A title' }),
            new Element('div.content'),
            new Element('', { children: [
                new Element('button', { text: 'OK', events: { ... }}),
                new Element('button', { text: 'Cancel', events: { ... }})
            ]})
        ]})
    ]})
]});
using a property ('children' here) that would allow me to declare child elements.

One of the most powerful features of MooTools is that it provides methods to extend the toolkit further. I have already taken advantage of this in the past by adding a search function to the Array class that takes a predicate function. The Element structure, however, is not an actual MooTools class and therefore cannot be easily extended. I did find an example that created a new utility function which allows child elements to be specified, but I wanted to simply enhance the existing Element.

The best way of enhancing Element is to add to the global element property set as follows; the corresponding get/set methods on Element then simply use that property when they find it:
Element.Properties.children = {
    get: function()      { return this.getChildren(); },
    set: function(value) { this.adopt(value);         }
};
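With that in place the nested declaration shown earlier works, and 'children' behaves like any other Element property. A quick (untested, browser-only) sketch of using it directly:

```javascript
// Untested sketch, assuming the 'children' property has been registered
var bar = new Element('div', { children: [
    new Element('button', { text: 'OK' }),
    new Element('button', { text: 'Cancel' })
]});

bar.get('children');  // equivalent to bar.getChildren()
```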

Friday, 7 January 2011

Kindle 1-Click Security

What would happen if your Kindle was lost or stolen while away?

Your Kindle is linked to your Amazon account and you have to enable 1-click ordering to be able to buy books on it. That means if somebody got hold of your device they could potentially purchase a huge number of books before you could even get online to deregister it.

There is the option to add password security to your Kindle but that would mean having to type your password in each time you wake it up. An alternative is to remove all payment details from the account but this would be inconvenient next time you actually wanted to purchase something. I stumbled across a suggestion on a forum and thought I would post it here since it seems to work:

In your "Manage Your Kindle" account page edit your "Default 1-click Payment Method" and add a new card as follows:

Card Number: 4111 1111 1111 1111 (the standard dummy Visa number)
Name on Card: Dummy (or something else to remind you that this is not a valid card)
Start Date: 01/11 (or any date in the recent past)
Expiry Date: 01/15 (or any date in the future)

Before going away you can switch to this card so that if your device does go missing you will not end up with a credit card bill full of book purchases. Using the dummy card does not appear to prevent you from ordering free books through the 1-click system either!