I am a Reader in Computer Vision at the University of Bristol in the Department of Computer Science and a member of the Visual Information Laboratory (VIL) and the Bristol Robotics Laboratory (BRL). My research covers computer vision and its applications - robotics, wearable computing and augmented reality - and I have done a lot of work on 3-D tracking and scene reconstruction, mainly in simultaneous localisation and mapping (SLAM). Working with industry and on interdisciplinary projects is a high priority for me - please get in touch if you are interested in working with me. More details can be found below and in my publications.
Contact details: Department of Computer Science, University of Bristol, Merchant Venturers Building, Woodland Road, Bristol BS8 1UB, UK; T: +44 117 9545149; E: andrew at cs dot bris dot ac dot uk.
News
- Dec 18: Pleased to announce that I'll be chairing the 2nd International Workshop on Lines, Planes and Manhattan Models for 3-D Mapping (LPM 2019) at ICRA 2019 in Montreal, co-organising with Michael Kaess and Sri Ramalingam.
- Oct 18: Welcome to 4 new PhD students joining my group: Obed Samano Abonce (Linking images to maps); Tom Bale (AR for Nuclear); Yuhang Ming (Semantic SLAM); and Mengjie Zhou (Automated map reading).
- Oct 18: Our papers on Automated Map Reading and Out-of-View 3-D Tracking presented at IROS 2018 in Madrid.
- Sept 18: Welcome to Sam Martin who has joined us on our Innovate UK project with Perceptual Robotics. Sam is working on real-time 3-D tracking for drones.
- May 18: New vacancy for a Research Associate in Computer Vision and Sensor Fusion to join our Innovate UK project with Perceptual Robotics. This is a unique opportunity to join an exciting project to develop an inspection system for offshore wind turbines using drones. The RA will be responsible for developing drone localisation and landing capability with respect to a USV using vision and other sensors. Further details and applications.
- Apr 18: Very pleased to be on the advisory board for the recently opened Bristol VR Lab, a collaborative space for start-ups and researchers working in VR, AR and related areas.
- Apr 18: Work on 3-D reconstruction of volcanic ash plumes from ground-view images presented at the European Geosciences Union General Assembly in Vienna. Collaboration with colleagues at Bristol in Earth Sciences and Aerospace Engineering - 3D Reconstruction of Volcanic Ash Plumes using Multi-Camera Computer Vision Techniques.
- Apr 18: New Innovate UK project starting with Perceptual Robotics on inspection of offshore wind turbines using drones, part of the UK Industrial Strategy Challenge Fund for Robotics and AI in Extreme Environments. We will be hiring a Research Associate for this project very soon - more information to follow shortly.
- Mar 18: New paper on arXiv: Automated Map Reading: Image Based Localisation in 2-D Maps Using Binary Semantic Descriptors, with Pilailuck Panphattarasap, describes a novel approach to localisation with respect to a 2-D map using image data and semantic modelling.
- Mar 18: New paper on arXiv: Predicting Out-of-View Feature Points for Model-Based Camera Pose Estimation, with Oliver Moolan-Feroze, describes a CNN-based approach to predicting out-of-view feature points for robust 'up close' 3-D tracking.
- Feb 18: NEW: PhD studentship available in Augmented Reality, starting October 2018. Full details are here.
Research Assistants and Students
I enjoy working with people who want to discover, innovate and work with others to make things happen. If you are interested in working with me then please get in touch. If you want to do a PhD then I'd be happy to hear from you but please take a look at what I do and how we might work together before you contact me. If you are looking for funding, then any vacancies I have will be advertised on this page; otherwise, you may like to consider the various scholarships offered by the University.
Projects
AUTOMATED MAP READING USING SEMANTIC FEATURES
New research on localising images in 2-D cartographic maps by linking semantic information, akin to human map reading. It is based on minimal binary route patterns indicating the presence or absence of semantic features, with trained networks detecting those features in images, and it leads to highly scalable map representations.
[IROS 2018 paper][Project page]
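To give a flavour of the matching step, here is a minimal sketch: routes are reduced to binary strings of semantic flags, and a query descriptor predicted from images is matched by Hamming distance. The feature set and values below are made up for illustration; the paper defines the actual binary semantic features.

```python
import numpy as np

# Toy map: each route is a name plus a binary descriptor formed by
# concatenating per-location semantic flags (e.g. junction present,
# gap between buildings on left/right -- this feature set is an
# illustrative assumption, not the one defined in the paper).
MAP_ROUTES = [
    ("route_A", np.array([1, 0, 0, 1, 0, 1, 0, 0], dtype=np.uint8)),
    ("route_B", np.array([0, 0, 1, 1, 1, 1, 0, 0], dtype=np.uint8)),
    ("route_C", np.array([1, 1, 0, 0, 0, 0, 1, 0], dtype=np.uint8)),
]

def hamming(a, b):
    """Number of disagreeing bits between two binary descriptors."""
    return int(np.count_nonzero(a != b))

def localise(query_descriptor, map_routes):
    """Return the map route whose descriptor best matches the one
    predicted from the image sequence."""
    return min(map_routes, key=lambda r: hamming(query_descriptor, r[1]))

# Descriptor predicted by per-feature classifiers on the images
# (hard-coded here; one bit flipped relative to route_A).
query = np.array([1, 0, 0, 1, 0, 1, 1, 0], dtype=np.uint8)
print(localise(query, MAP_ROUTES)[0])  # -> route_A
```

Because descriptors are a few bits per location, matching stays cheap even for city-scale maps, which is what makes the representation scalable.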
OUT-OF-VIEW MODEL-BASED TRACKING
3-D model-based tracking which uses a trained network to predict out-of-view feature points, allowing tracking when only partial views of an object are available. Designed to deal with tracking scenarios involving large objects and close-range camera motion.
[IROS 2018 paper]
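As a rough illustration of how predicted points can feed pose estimation, the sketch below refines a camera pose by minimising a weighted reprojection error, simply down-weighting the (hypothetical) CNN-predicted out-of-view points relative to directly detected ones. This is a generic least-squares formulation using NumPy and SciPy, not the paper's pipeline; the data and weights are made up.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector to rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    S = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * S + (1 - np.cos(theta)) * (S @ S)

def residuals(params, pts3d, pts2d, weights, K):
    """Weighted reprojection residuals; predicted out-of-view points
    get lower weights than detected in-view points."""
    R, t = rodrigues(params[:3]), params[3:]
    cam = (R @ pts3d.T).T + t
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return (weights[:, None] * (proj - pts2d)).ravel()

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts3d = np.array([[0, 0, 4.0], [1, 0, 4], [0, 1, 4], [1, 1, 4],
                  [3, 2, 5], [-2, 3, 5]])  # last two treated as out of view
true = np.array([0.05, -0.02, 0.01, 0.1, -0.1, 0.2])  # ground-truth pose

# Synthesise measurements by projecting under the true pose.
proj = residuals(true, pts3d, np.zeros((6, 2)), np.ones(6), K).reshape(-1, 2)
weights = np.array([1, 1, 1, 1, 0.3, 0.3])  # down-weight CNN predictions
sol = least_squares(residuals, np.zeros(6), args=(pts3d, proj, weights, K))
print(np.round(sol.x, 3))  # ~ the true pose parameters
```

The point of the extra, lower-weight correspondences is that they keep the pose constrained when most of the object has left the frame.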
HDRFUSION: RGB-D SLAM WITH AUTO EXPOSURE
RGB-D SLAM system which is robust to appearance changes caused by RGB auto exposure and is able to fuse multiple exposure frames to build HDR scene reconstructions. Results demonstrate high tracking reliability and reconstructions with far greater dynamic range of luminosity.
[3DV 2016 paper][Project page]
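For a flavour of the fusion step only, here is a minimal Debevec-style exposure fusion sketch assuming a linear camera response; the actual HDRFusion system handles the camera response and the SLAM integration, so this shows just the weighted-averaging idea.

```python
import numpy as np

def fuse_exposures(frames, shutter_times):
    """Fuse differently exposed frames into a radiance map using a
    hat weighting that trusts well-exposed mid-tones (linear camera
    response assumed for simplicity).
    frames: list of float images scaled to [0, 1]."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for z, t in zip(frames, shutter_times):
        w = 1.0 - np.abs(2.0 * z - 1.0)  # zero weight at clipped extremes
        num += w * (z / t)               # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)

# Toy example: one scene radiance seen at three exposures, with clipping.
radiance = np.linspace(0.0, 2.0, 5)
times = [0.25, 1.0, 4.0]
frames = [np.clip(radiance * t, 0.0, 1.0) for t in times]
print(np.round(fuse_exposures(frames, times), 3))  # recovers radiance
```

Values clipped in any one exposure get zero weight there, so the fused map recovers a luminosity range no single frame can capture.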
LDD PLACE RECOGNITION
Place recognition using landmark distribution descriptors (LDD) which encode the spatial organisation of salient landmarks detected using edge boxes and represented using CNN features. Results demonstrate high accuracy for highly disparate views in urban environments.
[ACCV 2016 paper][Project page]
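As a toy stand-in for the descriptor comparison, the sketch below scores two places by combining landmark appearance similarity with agreement of landmark positions. The feature dimensions, weighting and layout measure here are illustrative assumptions, not the LDD definition from the paper.

```python
import numpy as np

def place_similarity(feat_a, pos_a, feat_b, pos_b, w_spatial=0.5):
    """Similarity between two places, each described by CNN features
    of detected landmarks plus their normalised image positions:
    appearance match combined with spatial-layout agreement."""
    A = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    B = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    sim = A @ B.T                                 # pairwise cosine sims
    nn = sim.argmax(axis=1)                       # best landmark matches
    appearance = sim[np.arange(len(A)), nn].mean()
    spatial = 1.0 - np.abs(pos_a - pos_b[nn]).mean()  # layout agreement
    return (1 - w_spatial) * appearance + w_spatial * spatial

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 16))   # toy CNN features for 4 landmarks
pos = rng.uniform(size=(4, 2))     # normalised landmark positions
perm = np.array([2, 0, 3, 1])      # same place, landmarks reordered
print(round(place_similarity(feats, pos, feats[perm], pos[perm]), 3))  # ~1.0
```

Encoding where landmarks sit relative to one another, rather than raw whole-image appearance, is what gives robustness to the large viewpoint changes reported in the paper.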
MULTI-CORRESPONDENCE 3-D POSE ESTIMATION
Novel algorithm for estimating the 3-D pose of an RGB-D sensor which uses multiple forms of correspondence - 2-D, 3-D and surface normals - to gain improved performance in terms of accuracy and robustness. Results demonstrate significant improvement over existing algorithms.
[ICRA 2016 paper][Project page]
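The sketch below shows one way surface normals can be folded into a rigid alignment alongside 3-D point pairs: a Kabsch-style closed-form solution in which normals, being translation-invariant, add evidence for the rotation only. The paper's formulation also uses 2-D correspondences and differs in detail; this is just the underlying idea.

```python
import numpy as np

def pose_from_points_and_normals(src_pts, dst_pts, src_nrm, dst_nrm, w_n=1.0):
    """Rigid pose (R, t) from 3-D point pairs plus normal pairs."""
    mu_s, mu_d = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    P, Q = src_pts - mu_s, dst_pts - mu_d
    H = P.T @ Q + w_n * (src_nrm.T @ dst_nrm)  # cross-covariance; normals
    U, _, Vt = np.linalg.svd(H)                # contribute to rotation only
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # closest proper rotation
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known rotation about z plus a translation.
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
src = np.random.default_rng(1).normal(size=(8, 3))
nrm = src / np.linalg.norm(src, axis=1, keepdims=True)
R, t = pose_from_points_and_normals(src, src @ R_true.T + t_true,
                                    nrm, nrm @ R_true.T)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Mixing correspondence types means each compensates for the others' weaknesses, for example normals stabilising rotation where point matches are sparse or noisy.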
RGB-D RELOCALISATION USING PAIRWISE GEOMETRY
Fast and robust relocalisation in an RGB-D SLAM system, based on the pairwise 3-D geometry of key points encoded within a graph structure, combined with an efficient octree-based key point representation. Results demonstrate that the relocalisation outperforms other approaches.
[ICRA 2015 paper][Project page]
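A minimal sketch of the pairwise-geometry idea: putative keypoint matches are filtered by checking that pairwise 3-D distances agree between query and map, using a greedy pass over the resulting consistency graph. The paper's actual graph structure and octree representation are not reproduced here; the threshold and data are illustrative.

```python
import numpy as np
from itertools import combinations

def consistent_matches(q_pts, m_pts, matches, eps=0.05):
    """Keep query-to-map matches whose pairwise 3-D distances agree:
    two matches are consistent if the distance between the query
    points equals the distance between their map points (rigidity)."""
    n = len(matches)
    C = np.eye(n, dtype=bool)
    for a, b in combinations(range(n), 2):
        (qa, ma), (qb, mb) = matches[a], matches[b]
        dq = np.linalg.norm(q_pts[qa] - q_pts[qb])
        dm = np.linalg.norm(m_pts[ma] - m_pts[mb])
        C[a, b] = C[b, a] = abs(dq - dm) < eps
    keep = []
    for i in np.argsort(-C.sum(axis=1)):  # most-consistent match first
        if all(C[i, j] for j in keep):
            keep.append(i)
    return [matches[i] for i in keep]

# Toy example: map points, a rigidly rotated query cloud, one bad match.
rng = np.random.default_rng(2)
m_pts = rng.normal(size=(5, 3))
q_pts = m_pts @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]]).T
matches = [(0, 0), (1, 1), (2, 2), (3, 4), (4, 4)]  # (3, 4) is wrong
print(consistent_matches(q_pts, m_pts, matches))    # drops (3, 4)
```

Because pairwise distances are invariant to the unknown camera pose, consistency can be checked before any pose is estimated, which is what makes the relocalisation both fast and robust.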