I'm experimenting with simple edge detection in Java. The result above comes from a very simple algorithm: for each pixel, find the neighbouring pixel with the highest contrast relative to it, and subtract that neighbour's value from the pixel's own. Pixels surrounded by similarly valued pixels end up at 0, whereas pixels with high-contrast neighbours get a higher value.
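A minimal sketch of that idea in Java, operating on a plain 2D array of grayscale values (0–255) rather than a full image pipeline. The class and method names are illustrative, and I'm assuming "contrast" here means the absolute difference between a pixel and its neighbour:

```java
public class EdgeDetect {

    // For each pixel, find the 8-connected neighbour with the highest
    // contrast (largest absolute difference) and use that difference as
    // the output value: flat regions -> 0, edges -> large values.
    static int[][] detectEdges(int[][] gray) {
        int h = gray.length, w = gray[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int max = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = y + dy, nx = x + dx;
                        // Skip the pixel itself and anything off the image.
                        if ((dy == 0 && dx == 0)
                                || ny < 0 || ny >= h || nx < 0 || nx >= w) {
                            continue;
                        }
                        max = Math.max(max, Math.abs(gray[y][x] - gray[ny][nx]));
                    }
                }
                out[y][x] = max;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // A flat dark region next to a bright column: the interior stays 0,
        // and the pixels on either side of the boundary light up.
        int[][] img = {
            {10, 10, 10, 200},
            {10, 10, 10, 200},
            {10, 10, 10, 200},
        };
        int[][] edges = detectEdges(img);
        System.out.println(edges[1][1] + " " + edges[1][2] + " " + edges[1][3]);
    }
}
```

Taking the absolute difference keeps edge values non-negative regardless of whether the neighbour is brighter or darker; a signed subtraction would only highlight edges in one direction.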
Where next? There's a lot that could improve this simple algorithm, starting with pre-processing such as increasing the contrast of the original image. I'd eventually like to turn this into a 'CamScanner'-style application, which transforms a photo of a document into an upright rectangular view. To do that I'd need to find the four edges of the largest rectangle in the image (very likely the document), then apply a perspective transformation to do the rest of the work.