Welcome



Welcome to the Computer Vision Group at RWTH Aachen University!

The Computer Vision Group was established at RWTH Aachen University within the Cluster of Excellence "UMIC - Ultra High-Speed Mobile Information and Communication" and is associated with the Chair of Computer Science 8 - Computer Graphics, Computer Vision, and Multimedia. The group focuses on computer vision applications for mobile devices and robotic or automotive platforms. Our main research areas are visual object recognition, tracking, self-localization, 3D reconstruction, and in particular combinations of these topics.

We offer lectures and seminars about computer vision and machine learning.

You can browse through all our publications and the projects we are working on.

We have a paper on Scene Flow Propagation for Semantic Mapping and Object Discovery in Dynamic Street Scenes at IROS 2016.

Aug. 19, 2016

We have three papers accepted at the British Machine Vision Conference (BMVC) 2016.

Aug. 19, 2016

We have a paper on Joint Object Pose Estimation and Shape Reconstruction in Urban Street Scenes Using 3D Shape Priors at GCPR 2016.

June 19, 2016

Semantic Segmentation dataset released

We have just uploaded the dataset used to train the semantic classifier in our ICRA 2016 paper on tracking generic objects. You can find the dataset here.

May 23, 2016

Recent Publications

Incremental Object Discovery in Time-Varying Image Collections

IEEE Conference on Computer Vision and Pattern Recognition (CVPR'16)

In this paper, we address the problem of object discovery in time-varying, large-scale image collections. A core part of our approach is a novel Limited Horizon Minimum Spanning Tree (LH-MST) structure that closely approximates the Minimum Spanning Tree at a small fraction of the latter’s computational cost. Our proposed tree structure can be created in a local neighborhood of the matching graph during image retrieval and can be efficiently updated whenever the image database is extended. We show how the LH-MST can be used within both single-link hierarchical agglomerative clustering and the Iconoid Shift framework for object discovery in image collections, resulting in significant efficiency gains and making both approaches capable of incremental clustering with online updates. We evaluate our approach on a dataset of 500k images from the city of Paris and compare its results to the batch version of both clustering algorithms.
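As a rough illustration of the limited-horizon idea, the sketch below clusters an image collection by running Kruskal-style union-find only over edges drawn from each image's retrieval neighborhood and cutting at a distance threshold, which corresponds to single-link clustering on such a sparse matching graph. This is a minimal, hypothetical sketch under those assumptions, not the paper's LH-MST data structure; `local_mst_clusters`, `knn_edges`, and `max_dist` are illustrative names.

```python
# Hypothetical sketch: single-link clustering on a sparse matching graph
# built only from each image's top-k retrieval matches ("limited horizon").
# Not the authors' LH-MST implementation.

from collections import defaultdict


class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        # Path-halving find.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb


def local_mst_clusters(num_images, knn_edges, max_dist):
    """knn_edges: iterable of (dist, i, j) tuples restricted to each image's
    top-k retrieval matches. Processing edges in order of increasing distance
    and stopping at max_dist (as Kruskal would when building a spanning
    forest) yields the single-link clusters of the limited-horizon graph."""
    uf = UnionFind(num_images)
    for dist, i, j in sorted(knn_edges):
        if dist > max_dist:
            break  # remaining edges would only merge clusters above threshold
        uf.union(i, j)
    clusters = defaultdict(list)
    for img in range(num_images):
        clusters[uf.find(img)].append(img)
    return list(clusters.values())


# Example: 5 images with edges only among retrieval neighbours.
print(local_mst_clusters(5, [(0.2, 0, 1), (0.3, 1, 2), (0.9, 3, 4)], max_dist=0.5))
# -> [[0, 1, 2], [3], [4]]
```

Because new images only add edges within their own retrieval neighborhood, the same union-find state can be updated incrementally as the database grows, which is the efficiency argument made in the abstract.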

 

Mobile Manipulation, Tool Use, and Intuitive Interaction for Cognitive Service Robot Cosero

Accepted for Frontiers in Robotics and AI, section Humanoid Robotics, to appear in 2016

Cognitive service robots that assist people in need with their activities of daily living have recently received much attention in robotics research. Such robots require a broad set of control and perception capabilities to provide useful assistance through mobile manipulation and human-robot interaction. In this article, we present the hardware design, perception, and control methods for our cognitive service robot Cosero. We complement its autonomous capabilities with handheld teleoperation interfaces on three levels of autonomy. The robot has demonstrated various advanced skills, including the use of tools. With Cosero, we participated in the annual international RoboCup@Home competitions, winning them three times in a row.

 

PatchIt: Self-supervised Network Weight Initialization for Fine-grained Recognition

British Machine Vision Conference (BMVC'16), to appear

ConvNet training is highly sensitive to the initialization of the weights. A widespread approach is to initialize the network with weights trained for a different, auxiliary task. The ImageNet-based ILSVRC classification task is a very popular choice for this, as it has been shown to produce powerful feature representations applicable to a wide variety of tasks. However, this creates a significant entry barrier to exploring non-standard architectures. In this paper, we propose a self-supervised pretraining task, the PatchTask, to obtain weight initializations for fine-grained recognition problems such as person attribute recognition, pose estimation, or action recognition. Our pretraining allows us to leverage additional unlabeled data from the same source, which is often readily available, such as detection bounding boxes. We show experimentally that our method outperforms a standard random initialization by a considerable margin and closely matches the ImageNet-based initialization.
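The abstract only names the PatchTask; the sketch below shows one common patch-position pretext task in PyTorch to illustrate the general idea of self-supervised pretraining on unlabeled detection crops. The network, grid size, and sampling scheme are assumptions for illustration and may differ from the paper's formulation.

```python
# Hypothetical patch-position pretext task: crop patches from unlabeled
# detection bounding boxes and train a small ConvNet to predict which cell
# of a coarse grid each patch came from. The learned conv weights can then
# initialize a fine-grained recognition network.

import random
import torch
import torch.nn as nn

GRID = 3    # 3x3 grid over the bounding box -> 9-way classification
PATCH = 32  # patch side length in pixels


class PatchNet(nn.Module):
    def __init__(self, num_classes=GRID * GRID):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * (PATCH // 4) ** 2, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def sample_patch(box_image):
    """Crop a random patch from a detection crop (C x H x W tensor) and
    return it with the index of the grid cell it was taken from."""
    _, h, w = box_image.shape
    cell = random.randrange(GRID * GRID)
    cy, cx = divmod(cell, GRID)
    y0 = int(cy * (h - PATCH) / (GRID - 1))
    x0 = int(cx * (w - PATCH) / (GRID - 1))
    return box_image[:, y0:y0 + PATCH, x0:x0 + PATCH], cell


# One pretraining step on unlabeled crops (random tensors stand in for real data).
model, loss_fn = PatchNet(), nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
boxes = [torch.rand(3, 128, 64) for _ in range(16)]
patches, labels = zip(*(sample_patch(b) for b in boxes))
loss = loss_fn(model(torch.stack(patches)), torch.tensor(labels))
opt.zero_grad(); loss.backward(); opt.step()
```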
