THESIS PROPOSALS.

Object detection, clustering and classification using 4D-imaging radar

Location: Gothenburg

Scene reconstruction from 4D radar data

Location: Stockholm
Thesis proposal

Object detection, clustering and classification using 4D-imaging radar

Location: Gothenburg

In the past, much focus has been put on detecting objects in images, video footage and 3D point clouds. Useful approaches for images include Yolov5 and DETR2 from Facebook Research, and for 3D point clouds Pointnet++ is a well-cited example. These methods are all based on deep neural network architectures. The 4D imaging radar is a more recent sensor, which provides point clouds at 20 frames per second, each containing thousands of 4-dimensional detections (range, range rate, azimuth, elevation). Moreover, it works in environments and weather conditions where other sensors fail. At Qamcom we are developing such a radar. Example applications can be found here: https://arberobotics.com
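As a small illustration, one common way to work with such detections is to convert each (range, range rate, azimuth, elevation) tuple to Cartesian coordinates plus radial velocity. The sketch below assumes an x-forward, y-left, z-up convention with angles in radians; the actual convention used by the radar may differ.

```python
import math

def detection_to_cartesian(rng, range_rate, azimuth, elevation):
    """Convert one 4D radar detection (range [m], range rate [m/s],
    azimuth [rad], elevation [rad]) into Cartesian x, y, z plus the
    radial velocity, assuming an x-forward, y-left, z-up frame."""
    x = rng * math.cos(elevation) * math.cos(azimuth)
    y = rng * math.cos(elevation) * math.sin(azimuth)
    z = rng * math.sin(elevation)
    return x, y, z, range_rate
```

A detection straight ahead at 10 m with zero azimuth and elevation then maps to the point (10, 0, 0) with its range rate carried along unchanged.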

An annotated data set of high quality is important for training neural networks, and for images it is often obtained manually. For 4D radars, manual annotation is more difficult, since it is not obvious from the point cloud which points belong to a given object or what type of object it is. Therefore, in this project we will use an automatic, but less accurate, annotation procedure based on projecting the point cloud onto an image. Objects are detected and classified in the image using an existing neural network, and the radar points are then annotated with type and object id based on their projections onto the image. An annotated data set based on this principle already exists and appears to yield accurate annotations when the objects are well separated and not occluded by other objects.
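The projection step can be sketched as follows, assuming a simple pinhole camera model and radar points already transformed into the camera frame (x right, y down, z forward). The intrinsics and the detection format here are hypothetical placeholders, not the actual pipeline.

```python
def project_point(point_xyz, fx, fy, cx, cy):
    # Pinhole projection of a point in the camera frame; returns a
    # pixel (u, v), or None for points behind the camera plane.
    x, y, z = point_xyz
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

def annotate_points(points_cam, detections, fx, fy, cx, cy):
    """detections: list of (class_label, object_id, (u_min, v_min, u_max, v_max))
    boxes from an image detector. Returns, per radar point, the
    (label, object_id) of the first box its projection falls into,
    or None if it projects outside every box (or behind the camera)."""
    labels = []
    for p in points_cam:
        uv = project_point(p, fx, fy, cx, cy)
        hit = None
        if uv is not None:
            u, v = uv
            for label, oid, (u0, v0, u1, v1) in detections:
                if u0 <= u <= u1 and v0 <= v <= v1:
                    hit = (label, oid)
                    break
        labels.append(hit)
    return labels
```

Occlusion and overlapping boxes are exactly where this naive box-membership test breaks down, which motivates the sequence-based refinement discussed below.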

Given the annotated data set, we want to train a neural network for detecting, clustering, and classifying objects in the radar point cloud. Additionally, we want to explore whether the annotations can be improved in more difficult scenarios where the objects are not resolved by the radar. Based on a sequence of point clouds and images in which the objects are resolved only in parts of the sequence, object tracking and smoothing can be used to predict the positions of the objects even when they are not resolved by the radar.
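As a minimal stand-in for a full tracker and smoother (e.g. a Kalman filter with RTS smoothing), the idea of filling in unresolved frames can be illustrated by linear interpolation over a 1D track:

```python
def fill_track_gaps(track):
    """track: list of 1D positions per frame, with None where the object
    was not resolved by the radar. Gaps between resolved frames are
    filled by linear interpolation (a crude stand-in for smoothing);
    leading/trailing gaps hold the nearest resolved value."""
    observed = [i for i, p in enumerate(track) if p is not None]
    if not observed:
        return list(track)
    filled = list(track)
    for i in range(len(track)):
        if filled[i] is not None:
            continue
        prev = max((j for j in observed if j < i), default=None)
        nxt = min((j for j in observed if j > i), default=None)
        if prev is None:
            filled[i] = track[nxt]
        elif nxt is None:
            filled[i] = track[prev]
        else:
            w = (i - prev) / (nxt - prev)
            filled[i] = track[prev] + w * (track[nxt] - track[prev])
    return filled
```

A proper model-based smoother would additionally weight the interpolation by motion and measurement uncertainty, but the gap-filling principle is the same.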

Radar hardware details: 

Our radar: https://www.qamcom.com/radar

Radar examples and more details on the chipset: https://arberobotics.com

Our radar is one of the most advanced in the world, and we believe it is possible to make significant contributions to the research community and our customers. We would be very happy to turn your achievements into a publication and make them part of our product.

Contact details:

In this thesis, the student shall:

  • Perform a literature study on deep-learning-based object detection and classification methods using point clouds.
  • Select an accurate subset of the annotated data set in which the objects are well separated.
  • Train a first neural network based on the selected subset of annotated point clouds.
  • Investigate how sequences of data can be used to improve the data annotation in more complicated scenarios. Model-based approaches seem like a promising direction for this task.
  • Train a second network on data containing more complicated scenarios, potentially using sequences of data as input.
  • Compile a thesis containing a summary of the findings, a discussion of results, and suggestions for future developments.
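The clustering step listed above can be illustrated with a bare-bones Euclidean grouping of radar points, here a brute-force neighbour search with breadth-first expansion; this is the connectivity idea that density-based methods such as DBSCAN build on, minus the density filtering:

```python
def euclidean_cluster(points, eps):
    """Group points (sequences of coordinates) into clusters where each
    point is within eps of some other point in its cluster, via
    breadth-first expansion over a brute-force neighbour search."""
    n = len(points)
    unvisited = set(range(n))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            i = queue.pop()
            # Brute-force neighbour test against all unvisited points.
            near = [j for j in unvisited
                    if sum((a - b) ** 2
                           for a, b in zip(points[i], points[j])) <= eps ** 2]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append(sorted(cluster))
    return clusters
```

In practice one would use a library implementation with a spatial index (e.g. scikit-learn's DBSCAN), since brute-force search scales poorly with thousands of detections per frame.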

References:

Yolov5: https://github.com/ultralytics/yolov5

DETR2: https://github.com/facebookresearch/detectron2


Qi, C. R., Yi, L., Su, H., & Guibas, L. J. (2017). Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30.
Bai, J., Zheng, L., Li, S., Tan, B., Chen, S., & Huang, L. (2021). Radar transformer: An object classification network based on 4D mmW imaging radar. Sensors, 21(11), 3854.
Scheiner, N., Kraus, F., Appenrodt, N., Dickmann, J., & Sick, B. (2021). Object detection for automotive radar point clouds – a comparison. AI Perspectives, 3(1), 1-23.

Thesis proposal

Scene reconstruction from 4D radar data

Location: Stockholm

4D imaging radar is increasingly becoming a critical component in the automotive industry. This rejuvenation is mainly attributed to advances in beamforming technology and hardware capabilities. However, the contextual information reconstructed from the 4D radar is more of a functional construct and does not replace the (classical) visual information from cameras. The recent development of deep (conditional) generative models in AI has shown great potential for generating realistic images. In particular, it is possible to mimic the discriminative features of the training data to generate new realistic images directly from 4D radar data.

These generative models can generate plausible images, provided that the training data is of high quality. Diffusion models and generative adversarial networks (GANs) can generate images of higher resolution than other generative models, such as variational autoencoders (VAEs) or normalizing flows (NFs); the latter models, however, also provide likelihood estimates. Therefore, a conditional version of a diffusion model, GAN, VAE, or NF could be the starting point for image reconstruction from 4D radar data. Furthermore, distributional shift of the 4D radar data is a critical concern for the validity of these models, since their deployment should not be tailored to any specific environment. Also, the proposed model should not be affected by the relative velocity of objects in close or far proximity, since this information is conveyed by the Doppler measurements.
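To make the conditioning concrete, for a conditional VAE the training objective would be the conditional evidence lower bound, here written with the camera image as the target x and the radar point cloud as the condition c (the notation is illustrative, not prescriptive for this project):

```latex
\log p_\theta(x \mid c) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x, c)}\!\left[\log p_\theta(x \mid z, c)\right]
\;-\; \mathrm{KL}\!\left(q_\phi(z \mid x, c)\,\|\,p(z \mid c)\right)
```

At deployment time the encoder q is discarded: images are generated by sampling z from the conditional prior p(z | c) and decoding, so only the radar data is needed as input.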

Last but not least, the evaluation of this method remains of primary importance, given that there are no annotations on the given data. Data acquisition will be performed using a 4D radar and a high-resolution camera mounted at a designated road intersection. Potential deployment of the radar and camera on a moving vehicle will be part of the method validation. For an example, see https://youtu.be/1mXy0oi8d60?t=2404

Contact details:

The student is expected to:

  • Design and train generative models for synthetic data generation
  • Design and execute a protocol for collecting the training data
  • Design and validate evaluation metrics for the generative models.
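Since time-aligned camera frames are available as references, one simple paired baseline metric is the peak signal-to-noise ratio between a generated and a reference image; a sketch over flat intensity lists (real pipelines would use arrays, and would complement PSNR with perceptual metrics such as SSIM or FID):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel intensities. Higher is better;
    identical images give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

PSNR only captures pixel-wise fidelity, so designing metrics that reflect perceptual and scene-level correctness is precisely the open part of this task.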

Prior knowledge should include:

  • Good understanding of probability for modeling and inference
  • Good knowledge of mathematics behind neural network components
  • Understanding of optimizations and the mathematics around gradient-based methods
  • Fluency in Python and some experience with popular frameworks such as PyTorch, Keras, TensorFlow, etc.

What we offer:

  • Computation GPU resources for the experiment
  • Training data from both camera and state-of-the-art 4D radar
  • Financial compensation upon completion of the work