Who I am
…and how I got here

I'm a research scientist and engineer: I enjoy discovering new knowledge and using it to solve real problems. I have a strong interest in natural and artificial intelligence as applied to complex data. Since my PhD in machine learning, I've carried out research and development in several areas including computational linguistics/NLP, visual neuroscience, image processing, botanical taxonomy and, most recently, online social networks and journalism. In each case, I have helped to develop robust solutions to complex, data-centred problems.
After 10+ years as an academic researcher, I recently returned to the commercial sector as a data scientist at Signal Media. Here, we combine text analytics, machine learning and business knowledge to automatically filter and rank millions of news articles every day. Our customers can track news from their industry, monitoring risks and finding opportunities, all without being overwhelmed by irrelevant data.
In April 2012, I joined the SocialSensor project at City University London as part of a team developing novel multimedia information retrieval systems. The focus was to find interesting news stories from online social networks -- principally Twitter -- and to explain to users why those stories were trending. We also wanted to help journalists find eyewitnesses to events, for example by surfacing relevant images and videos. The team has since relocated to Robert Gordon University.
Before that, I worked as a research fellow in the Department of Computing at the University of Surrey. I developed software to analyse digital photographs of leaves, using specimens kindly provided by botanists at the Kew Gardens herbarium. The software analyses leaf shapes to aid identification and to enable further rigorous analysis. I used and enhanced geometric morphometric algorithms alongside more general image processing methods. One aim was to model relationships between leaf shape and climate, based on images of herbarium specimens.
Previously, I was a research fellow in the UCL Institute of Ophthalmology as part of Beau Lotto's group. I used various machine learning and statistical methods to investigate the human visual system. Why is it that we can see the world in such detail, with such robustness, and yet we still see optical illusions? These are not simply random errors of an imperfect instrument; they are systematic and consistent, and they can tell us a lot about how the rest of the visual system works and, by extension, about other modes of perception too. I developed a virtual visual ecology into which I could place "virtual animals". I let them adapt, evolve and learn to see, and then tested their perceptions and their internal workings. This model allowed me to investigate the perception of lightness, colour, depth and so on. Some of the work was described in this New Scientist article.
Until 2006, I was working on information extraction and text mining in the UCL Computer Science department (where I also did my PhD). I developed the BioRAT software, which performs information extraction from the biological literature. I was in the Bioinformatics Group at UCL and helped to organise "BioText", a workshop held to discuss the application of text mining to the life sciences. Around that time, I also explored the use of Google Maps, using it to display publication rates of UK universities in the life sciences -- a novel, visual interface to PubMed.
Along the way, I've also taught various aspects of computer science. Besides face-to-face teaching and supervision at several universities, I have supervised online computer science undergraduate projects at the University of Hertfordshire and taught various distance-learning modules at Queen Mary, University of London.