Core Concepts
The survey examines how deep learning techniques are integrated into urban computing through various data fusion methods, with the aim of improving predictive capability and enabling more complex analyses.
Abstract
The survey presents a taxonomy of deep learning-based data fusion methods in urban computing, categorizing them into feature-based, alignment-based, contrast-based, and generation-based fusion strategies. It emphasizes the importance of integrating diverse data sources to gain comprehensive insights into urban dynamics.
The discussion covers the significance of geographical, traffic, social media, demographic, and environmental data in urban computing. Various datasets are analyzed within these categories to understand spatial relationships, traffic patterns, social behaviors, population demographics, and environmental conditions.
From a methodology perspective, the survey outlines specific fusion models within each category: Feature-Based Data Fusion combines features from different sources; Alignment-Based Data Fusion aligns representations of diverse data; Contrast-Based Data Fusion enhances feature discriminability; and Generation-Based Data Fusion generates one modality conditioned on the others.
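To make the first category concrete, here is a minimal sketch of feature-based fusion: features from two urban data sources are concatenated and passed through a single linear projection into a shared representation. This is an illustrative toy example, not code from the survey; the function names, feature meanings, and numeric values are all hypothetical.

```python
def concat_fuse(traffic_feats, poi_feats):
    """Feature-based fusion: consolidate features from two sources
    by simple concatenation (toy illustration)."""
    return traffic_feats + poi_feats  # list concatenation

def linear_project(fused, weights, bias):
    """Project the fused vector with one linear layer (no framework)."""
    return [sum(w * x for w, x in zip(row, fused)) + b
            for row, b in zip(weights, bias)]

# Hypothetical toy inputs: two traffic features and three POI features.
traffic = [0.2, 0.5]        # e.g. average speed, traffic volume
poi = [1.0, 0.0, 0.3]       # e.g. POI category densities

fused = concat_fuse(traffic, poi)
print(fused)  # [0.2, 0.5, 1.0, 0.0, 0.3]

# Arbitrary example weights for a 5-input, 2-output projection.
out = linear_project(fused, [[0.1] * 5, [0.2] * 5], [0.0, 0.1])
print(len(out))  # 2
```

In practice the linear layer would be learned jointly with downstream tasks, and each source would first pass through its own feature extractor before concatenation; the sketch only shows the consolidation step the category is named after.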
Overall, the content provides a comprehensive overview of deep learning applications in cross-domain data fusion for urban computing.
Stats
Zhang et al. [353] collected crime data from NYC Open Data website.
Yuan et al. [334] used the Dark Sky API to extract weather characteristics.
Bai et al. [10] collected POI records using AMaps Service Platform.
Lu et al. [179] gathered geo-tagged video data from MediaQ and GeoVid platforms.
Xi et al. [294] extracted population statistics from WorldPop platform.
Quotes
"The paradigm shift derived from deep learning renders previous surveys somewhat obsolete." - Author
"Feature-Based Data Fusion consolidates raw or processed features from various sources." - Author