Vehicular Re-Identification from Uncontrolled Multiple Views †

Research output: Contribution to journal › Article › peer-review

Abstract

Vehicle re-identification (re-ID) across disparate sensing modalities remains a fundamental challenge for transportation research. In this work, we introduce a deep multi-view vehicle re-ID framework that leverages Siamese networks to compare pairs of vehicle images and produce matching scores, enabling robust association across drastically different viewpoints such as those from UAVs, surveillance cameras, and ground sensors. The model exploits convolutional neural networks to learn features that remain discriminative under changes in angle, distance, and illumination, supporting more generalizable re-ID performance. As part of this effort, we also developed an automated pipeline to synchronize roadside and UAV video streams, producing a multi-perspective dataset that complements preexisting real collections and a synthetic dataset generated in this study. Together, these contributions advance the capability to re-identify vehicles across wide viewing baselines; establish a foundation for scalable, reproducible research in vehicle re-ID; and open pathways for future applications, such as inferring routine behaviors, movement patterns, and daily habits of the individual associated with the vehicle.
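The core idea described above, a Siamese comparison in which two views of a vehicle pass through the same learned feature extractor and a matching score is computed between the resulting embeddings, can be illustrated with a minimal sketch. This is not the paper's model: the `embed` function below stands in for the trained CNN backbone with a toy shared linear projection, and cosine similarity stands in for the learned matching head; the image sizes and names are illustrative assumptions.

```python
import numpy as np

def embed(image, weights):
    # Toy stand-in for the CNN branch: a shared linear projection of the
    # flattened image, L2-normalised. A real re-ID backbone would be a
    # deep convolutional network trained with a contrastive or triplet
    # objective; this only illustrates the data flow.
    v = weights @ image.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def matching_score(img_a, img_b, weights):
    # Siamese comparison: both branches share the same weights, and the
    # score is the cosine similarity of the two embeddings (higher means
    # the two crops are more likely the same vehicle).
    return float(embed(img_a, weights) @ embed(img_b, weights))

rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 32 * 32))        # shared projection
roadside = rng.standard_normal((32, 32))            # surveillance-camera crop
uav = roadside + 0.1 * rng.standard_normal((32, 32))  # same vehicle, UAV view
other = rng.standard_normal((32, 32))               # a different vehicle

same_score = matching_score(roadside, uav, weights)
diff_score = matching_score(roadside, other, weights)
```

In this sketch the two crops of the same vehicle yield a higher score than the cross-vehicle pair; the actual framework learns the embedding so that this separation holds under real viewpoint, distance, and illumination changes.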

Original language: English
Article number: 202
Journal: Future Transportation
Volume: 5
Issue number: 4
DOIs
State: Published - Dec 2025

Keywords

  • convolutional neural networks
  • re-identification
  • surveillance systems
  • unmanned aerial vehicles
  • vehicle
