Papers, reproducible artifacts, and references to experiments.
- [2026 Nature] Artificial intelligence tools expand scientists’ impact but contract science’s focus
Qianyue Hao, Fengli Xu, Yong Li, James A. Evans
[paper]
Keywords: AI for Science, Science of Science, Research Productivity, Topic Convergence, Scientific Diversity
- [2025 arXiv] Earth AI: Unlocking Geospatial Insights with Foundation Models and Cross-Modal Reasoning
[paper]
Keywords: GeoAI, Foundation Models, Cross-Modal Reasoning, Earth Observation, Spatial Intelligence
- [2025 arXiv] LSDTs: LLM-Augmented Semantic Digital Twins for Adaptive Knowledge-Intensive Infrastructure Planning
[paper]
Keywords: Semantic Digital Twins, Large Language Models, Knowledge Extraction, Infrastructure Planning, Regulation-Aware Optimization
- [2025 arXiv] BuildingWorld: A Structured 3D Building Dataset for Urban Foundation Models
[paper]
Keywords: 3D Urban Modeling, Structured Building Dataset, Urban Foundation Models, LiDAR Point Clouds, Building Reconstruction, Urban AI
- [2025 Annals of GIS] GIScience in the Era of Artificial Intelligence: A Research Agenda Towards Autonomous GIS
[paper]
Keywords: Autonomous GIS, GIScience, Large Language Models, Agentic AI, Intelligent Geosystems, Spatial Analysis
- [2026 arXiv] Thinking with Map: Reinforced Parallel Map-Augmented Agent for Geolocalization
[paper]
Keywords: Geolocalization, Map-Augmented Agents, Vision-Language Models, Reinforcement Learning, Spatial Reasoning
- [2026 IJGIS] Georeferencing Complex Relative Locality Descriptions with Large Language Models
[paper]
Keywords: Georeferencing, Relative Locality Descriptions, Large Language Models, Spatial Relations, Text-to-Location
- [2024 Applied Sciences] GeoLocator: A Location-Integrated Large Multimodal Model (LMM) for Inferring Geo-Privacy
[paper]
Keywords: Geolocalization, Multimodal Models, Geo-Privacy, Location Inference, Vision–Language Models
- [2025 arXiv] FRIEDA: Benchmarking Multi-Step Cartographic Reasoning in Vision–Language Models
Jiyoon Pyo, Yuankun Jiao, Dongwon Jung, Zekun Li, Leeja Jang, Sofia Kirsanova, Jina Kim, Yijun Lin, Qin Liu, Junyi Xie, Hadi Askari, Nan Xu, Muhao Chen, Yao-Yi Chiang
[paper]
Keywords: Cartographic Reasoning, Spatial Reasoning, Vision–Language Models, Map Understanding, Multi-Step Reasoning, GIS
- [2025 Weather Ready Research] Do Virtual Reality Hazard Simulations Increase People’s Willingness to Contribute to Hazard Mitigation? Results From an Experiment
[paper]
Keywords: Virtual Reality, Hazard Communication, Risk Perception, Mitigation, Human-Subject Experiment
- [2025 arXiv] BRIGHT: A Globally Distributed Multimodal Building Damage Assessment Dataset with Very-High-Resolution for All-Weather Disaster Response
[paper]
Keywords: Multimodal Disaster Dataset, Building Damage Assessment, Optical and SAR Imagery, All-Weather Disaster Response, Earth Observation
- [2025 Computers, Environment and Urban Systems] Hyperlocal Disaster Damage Assessment Using Bi-Temporal Street-View Imagery and Pre-Trained Vision Models
[paper]
Keywords: Bi-temporal Street-View Imagery, Disaster Damage Assessment, Pre-trained Vision Models, Hyperlocal Analysis, Urban Resilience
- [2025 ICA Abstracts] Perceiving Multidimensional Disaster Damages from Street-View Images Using Visual-Language Models
[paper]
Keywords: Visual-Language Models, Disaster Perception, Street-View Imagery, Multimodal AI, Resilience
- [2025 arXiv] DisasterM3: A Remote Sensing Vision–Language Dataset for Disaster Damage Assessment and Response
[paper]
Keywords: Vision–Language Models, Remote Sensing, Multimodal Disaster Dataset, Damage Assessment, Disaster Response, Cross-Sensor Generalization