Current and Recent Work
Please note: some of this work was done at the Active Vision Group (AVG), University of Oxford. Links may transfer there.
I am affiliated with the Real-Time Vision Group, the Robotics Group and the BRL.

My research interests centre on three related areas: robotics, wearable computing and computer vision. Robots and wearables face very similar challenges: they usually have to work in real time with limited computational, energy and sensing resources, so similar techniques for reasoning and sensing can be applied to both. Computer vision, in turn, studies how to understand and represent the world using visual sensors, and cameras are particularly interesting sensors: they are commonplace, cheap and small, yet able to recover a wealth of information, e.g. 3D structure, object identity, and self and object motion, and to serve as input devices for human-computer (or robot) interaction.

The following are some examples of my recent work, done with a number of collaborators to whom I am deeply grateful.

 


In-situ interactive model building

 

This work with Pished Bunnun develops real-time techniques that allow models to be built and used immediately on mobile devices. Much work has been done on interactive modelling away from the object of interest, usually on a desktop machine. Here we want to allow mobile users to capture models of objects quickly and accurately with reduced computational demand.

 

  • Pished Bunnun and Walterio Mayol-Cuevas. OutlinAR: an assisted interactive model building system with reduced computational effort. 7th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR), September 2008.


 





Discovering high level structure, its effects and applications in real-time visual SLAM

This work with Andrew P. Gee, Denis Chekhlov and Andrew Calway aims to assess the effects and benefits of discovering high-level structure, such as planes and lines, from the simple features commonly used in visual SLAM. So far, this allows us to reduce the state space, which benefits scalability, and to explore augmented reality applications such as autonomous virtual agents interacting more realistically with the created maps in real time.
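As a rough illustration of the plane-discovery idea (not the published system), the following sketch fits a single plane to a set of 3D map points with a simple RANSAC loop; in a real SLAM map, the inliers of such a plane could then be re-parameterised to shrink the filter state. The function names, thresholds and toy data are illustrative assumptions.

    # Illustrative sketch only: RANSAC plane fitting over SLAM-style map points.
    import numpy as np

    def fit_plane_ransac(points, n_iters=200, inlier_thresh=0.02, seed=None):
        """points: (N, 3) array of 3D map points. Returns (normal, d, inliers)
        for the plane n.x + d = 0 with the largest inlier support."""
        rng = np.random.default_rng(seed)
        best_inliers, best_plane = None, None
        n = points.shape[0]
        for _ in range(n_iters):
            p0, p1, p2 = points[rng.choice(n, size=3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:                     # degenerate (collinear) sample
                continue
            normal /= norm
            d = -normal @ p0
            inliers = np.abs(points @ normal + d) < inlier_thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_plane = inliers, (normal, d)
        return best_plane[0], best_plane[1], best_inliers

    # Toy usage: noisy points on the z = 0 plane plus some clutter.
    pts = np.vstack([
        np.column_stack([np.random.rand(100, 2), 0.01 * np.random.randn(100)]),
        np.random.rand(20, 3),
    ])
    normal, d, inliers = fit_plane_ransac(pts)
    print("plane normal:", normal, "inliers:", inliers.sum())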


 




Real-Time Model-Based SLAM Using Line Segments.

This work with Andrew P. Gee develops a monocular real-time SLAM system that uses line segments extracted on the fly and that builds a wire-frame model of the scene to help tracking. The use of line segments provides viewpoint invariance and robustness to partial occlusion, whilst the model-based tracking is fast and efficient, reducing problems associated with feature matching and extraction.

  • A. P. Gee and W.W. Mayol. Real-Time Model-Based SLAM Using Line Segments [PDF]. To appear in LNCS proceedings of the 2nd International Symposium on Visual Computing, November 2006.
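For illustration only, and not the method of the paper, the sketch below shows the basic geometry a line-based tracker relies on: projecting the endpoints of a wire-frame model edge through an assumed pinhole camera and measuring the perpendicular distance of an observed image point to the projected segment. The camera intrinsics and variable names are made up for the example.

    # Minimal sketch of model-edge projection and a point-to-segment error.
    import numpy as np

    def project(K, R, t, X):
        """Pinhole projection of 3D points X (N,3) -> 2D pixels (N,2)."""
        Xc = X @ R.T + t            # world -> camera
        x = Xc @ K.T                # camera -> homogeneous pixels
        return x[:, :2] / x[:, 2:3]

    def segment_error(p0, p1, q):
        """Perpendicular distance from observed 2D point q to the projected
        model segment p0-p1."""
        d = p1 - p0
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal
        return abs(n @ (q - p0))

    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # assumed intrinsics
    R, t = np.eye(3), np.array([0.0, 0.0, 2.0])                  # assumed pose
    seg3d = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])         # one model edge
    p0, p1 = project(K, R, t, seg3d)
    print(segment_error(p0, p1, q=np.array([400.0, 242.0])))     # -> 2.0 pixels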

 




Robust Real-Time SLAM Using Multiresolution Descriptors

This work, in collaboration with Denis Chekhlov, Mark Pupilli and Andrew Calway, uses SIFT-like multi-resolution descriptors within a coherent top-down framework. The resulting system outperforms previous methods in robustness to erratic motion and camera shake, and in its ability to recover from periods of measurement loss.

  • Denis Chekhlov, Mark Pupilli, Walterio Mayol-Cuevas and Andrew Calway. Robust Real-Time Visual SLAM Using Scale Prediction and Exemplar Based Feature Description [PDF]. [VIDEO] In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, Minnesota, USA, 2007.
  • Denis Chekhlov, Mark Pupilli, Walterio Mayol-Cuevas and Andrew Calway. Real-Time and Robust Monocular SLAM Using Predictive Multi-resolution Descriptors [PDF]. To appear in LNCS proceedings of the 2nd International Symposium on Visual Computing, November 2006.
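A hedged sketch of the scale-prediction idea, not the authors' implementation: knowing the depth at which a feature's descriptor was learned and its currently estimated depth, one can predict how its image scale has changed and pick the closest level of a multi-resolution descriptor to match against. The level convention and depths below are assumptions.

    # Illustrative scale prediction from estimated feature depth.
    import math

    def predicted_level(depth_now, depth_at_learning, n_levels=4):
        """A patch appears larger when closer: scale ratio ~ depth_learn / depth_now.
        Map that ratio to the nearest stored descriptor level (levels assumed to
        differ by a factor of two in scale, level 0 = scale at learning)."""
        ratio = depth_at_learning / depth_now
        level = round(math.log2(ratio))
        return min(max(level, 0), n_levels - 1)

    # Feature learned at 2 m; the camera has moved so it now lies at about 0.5 m:
    print(predicted_level(depth_now=0.5, depth_at_learning=2.0))   # -> level 2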

 



Hand Activity Detection with a wearable active camera.

This work aims at the automatic detection of hand activity as observed by a wearable camera. A probabilistic approach fuses different cues to infer the current activity based on the objects being manipulated.

  • W.W. Mayol and D.W. Murray. Wearable Hand Activity Recognition for Event Summarization [PDF]. Proc. IEEE Int. Symposium on Wearable Computers (ISWC). Osaka, Japan, October 2005.
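As a toy illustration of cue fusion (the cues, labels and probabilities below are invented, not taken from the paper), a naive-Bayes style combination of independent cues yields a posterior over activities:

    # Illustrative naive-Bayes fusion of hand-activity cues.
    import numpy as np

    activities = ["pick_up", "pour", "idle"]
    prior = np.array([0.3, 0.2, 0.5])

    # P(cue value | activity) for two hypothetical cues.
    p_object_given_act = {"cup":     np.array([0.6, 0.8, 0.1]),
                          "nothing": np.array([0.4, 0.2, 0.9])}
    p_motion_given_act = {"lift":    np.array([0.7, 0.3, 0.1]),
                          "tilt":    np.array([0.2, 0.6, 0.1]),
                          "still":   np.array([0.1, 0.1, 0.8])}

    def fuse(object_cue, motion_cue):
        post = prior * p_object_given_act[object_cue] * p_motion_given_act[motion_cue]
        return post / post.sum()

    # A cup being tilted: the posterior favours "pour".
    print(dict(zip(activities, fuse("cup", "tilt").round(3))))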

 



Hand gesture and "grasping vector" detection.

This work aimed at detecting hand gestures in real time as seen by an active wearable camera.

  • W.W. Mayol, A.J. Davison, B.J. Tordoff, N.D. Molton, and D.W. Murray. Interaction between hand and wearable camera in 2D and 3D environments [PDF]. Proc. British Machine Vision Conference (BMVC), London, UK, September 2004.

 


General Regression for Image Tracking.

This work developed a method to track planar and near-planar image regions by posing the problem as one of statistical general regression. The method is fast and robust to occlusions and rapid object motions.

  • W. W. Mayol and D. W. Murray. Tracking with General Regression [PDF]. Machine Vision and Applications, 19(1), pp. 65-72, January 2008 (published online June 2007). ISSN 0932-8092.
    Note: This work was actually done at the end of 2002 and a technical report from then is available upon request.
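The sketch below is only in the spirit of regression-based tracking and uses plain linear least squares rather than the general regression of the paper: known shifts of a reference patch are used to learn a mapping from intensity differences back to motion, which is then applied to a new difference image. The image and patch geometry are synthetic stand-ins.

    # Illustrative regression-based tracking of a planar image patch.
    import numpy as np

    def sample_patch(image, cx, cy, half=8):
        """Flattened (2*half x 2*half) patch centred at (cx, cy)."""
        return image[cy - half:cy + half, cx - half:cx + half].astype(float).ravel()

    rng = np.random.default_rng(0)
    image = rng.random((120, 160))          # stand-in for a real greyscale frame
    cx, cy = 80, 60
    ref = sample_patch(image, cx, cy)

    # Offline training: apply known shifts and record the intensity differences.
    shifts = np.array([(dx, dy) for dx in range(-4, 5) for dy in range(-4, 5)], float)
    diffs = np.stack([sample_patch(image, cx + int(dx), cy + int(dy)) - ref
                      for dx, dy in shifts])
    # Linear regressor A such that shift ~= diff @ A (least squares).
    A, *_ = np.linalg.lstsq(diffs, shifts, rcond=None)

    # Online step: the region has moved by (3, -2); recover that motion from the
    # observed difference image alone.
    moved_diff = sample_patch(image, cx + 3, cy - 2) - ref
    print("predicted shift:", (moved_diff @ A).round(1))   # close to [ 3. -2.]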

 

 


 


Annotating 3D scenes in real time using hand gestures.

This work used hand gesture recognition to add virtual objects on top of a scene that is itself recovered in real time. The system uses a passive wearable wide-angle camera and Andrew Davison's SLAM algorithm.

  • W.W. Mayol, A.J. Davison, B.J. Tordoff, N.D. Molton, and D.W. Murray. Interaction between hand and wearable camera in 2D and 3D environments [PDF]. Proc. British Machine Vision Conference (BMVC), London, UK, September 2004.

 

 




Simultaneous Localisation and Mapping with a Wearable Active Vision Camera.

This research was done together with Andrew Davison and presented the first real-time simultaneous localisation and mapping system for an active vision camera. The intended example application is remote collaboration where a remote expert observes the world through the wearable robot and adds augmented reality annotations to be seen by the wearer.

  • A.J. Davison, W.W. Mayol, and D.W. Murray. Real-Time Localisation and Mapping with Wearable Active Vision [PDF]. Proc IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Tokyo, Japan, October 7 - 10, 2003.
  • W.W. Mayol, A.J. Davison, B.J. Tordoff, and D.W. Murray. Applying Active Vision and SLAM to Wearables. Proc. 11th Int. Symposium of Robotics Research (ISRR), Siena, Italy, 19-22 October 2003.

 

Wearable camera placement.

This work studies the different constraints involved in deciding where to place a camera (or another optical device) on the human body.

  • W.W. Mayol, B. Tordoff and D.W. Murray. On the Positioning of Wearable Optical Devices. Technical report OUEL224101, 2001.
  • W.W. Mayol, B. Tordoff and D.W. Murray. Designing a Miniature Wearable Visual Robot [PDF]. Proc. IEEE International Conference on Robotics and Automation ICRA2002, Washington D.C. USA. 2002.

 


 

Objective Design of an Active Vision Robot using Pareto non-dominance analysis.

In this work, a method to design the morphology of a miniature active vision camera is presented. The method uses a Pareto non-dominance test over seven criteria, such as range of motion, working volume and power consumption, to arrive at a robot shape that is optimal in the multi-criteria sense.

  • W.W. Mayol, B. Tordoff and D.W. Murray. Designing a Miniature Wearable Visual Robot [PDF]. Proc. IEEE International Conference on Robotics and Automation ICRA2002, Washington D.C. USA. 2002.
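A minimal sketch of a Pareto non-dominance test of the kind used in this analysis (the criteria values below are invented): a candidate design survives only if no other candidate is at least as good on every criterion and strictly better on at least one.

    # Illustrative Pareto non-dominance test over candidate designs.
    import numpy as np

    def pareto_front(scores):
        """scores: (n_designs, n_criteria), higher is better.
        Returns a boolean mask of non-dominated designs."""
        n = scores.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            for j in range(n):
                if i != j and np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i]):
                    keep[i] = False     # design i is dominated by design j
                    break
        return keep

    # Toy example: three designs scored on range of motion, working volume, power.
    designs = np.array([[0.9, 0.7, 0.4],
                        [0.8, 0.6, 0.3],    # dominated by the first design
                        [0.5, 0.9, 0.8]])
    print(pareto_front(designs))            # -> [ True False  True]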

 



Wearable Visual Robots

This research developed a new type of robot to be used as an interface for wearable computing. The robot has three degrees of freedom: elevation, panning and cyclotorsion (rotation around the camera's optic axis). It also has a 2D accelerometer to detect its orientation relative to the gravity vector, and a more recent version adds a FireWire camera interface and a full 3D orientation sensor. This allows a degree of decoupling between the sensor and the motions and posture of the wearer.

  • W.W. Mayol, B. Tordoff, T.E. de Campos, A.J. Davison, and D.W. Murray. Active Vision for Wearables [PDF]. Proc. IEE Eurowearable 03, Birmingham, UK, September 4-5, 2003.
  • W.W. Mayol, B. Tordoff and D.W. Murray. Wearable Visual Robots [PDF]. IEEE International Symposium on Wearable Computers (ISWC), Atlanta, GA, USA, October 2000.
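As a purely illustrative sketch (not the robot's actual control code), the snippet below shows how a gravity reading from the accelerometer could be turned into a counter-rotation of the elevation axis so that the camera keeps a fixed pitch in the world while the wearer moves; the axis conventions and angles are assumptions.

    # Illustrative gravity-based decoupling of camera elevation from wearer pitch.
    import math

    def tilt_from_accelerometer(ax, az):
        """ax, az: accelerations along the assumed forward and down axes (in g).
        Returns the pitch of the platform relative to gravity, in degrees."""
        return math.degrees(math.atan2(ax, az))

    def elevation_command(desired_world_pitch_deg, ax, az):
        """Counter-rotate the elevation axis to cancel the wearer's pitch."""
        platform_pitch = tilt_from_accelerometer(ax, az)
        return desired_world_pitch_deg - platform_pitch

    # Wearer leans forward ~20 degrees; camera should keep looking level (0 deg):
    ax, az = math.sin(math.radians(20)), math.cos(math.radians(20))
    print(round(elevation_command(0.0, ax, az), 1))   # -> -20.0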