Junwoo Lee
Junior Research Scientist
Affiliation: NYU Abu Dhabi
Education: BA, New York University Abu Dhabi
Research Websites: Center for Interacting Urban Networks (CITIES)
Research Areas: Vision models; Language models; World models
Junwoo Lee is a junior research scientist specializing in vision-language deep learning models. He earned his bachelor's degree in computer science from New York University Abu Dhabi. His research involves computer vision in world models and the use of deep learning to assist people with vision impairments by generating instructions from visual input of the real world. Lee's recent work focuses on using vision-language models for sign language translation and generation for deaf users. More broadly, he is interested in language models and world models within deep learning.
Summary of Research
The SafeCross AI Necklace project, led by Principal Investigator Yi Fang, develops an AI-powered wearable designed to enhance the safety and independence of blind and low-vision pedestrians when crossing streets. The necklace integrates multi-modal sensors, cameras, and microphones to capture real-time environmental data, which is processed by a cloud-based large vision-language model capable of interpreting traffic lights, vehicle movement, pedestrian flow, and potential hazards. Through multimodal feedback, including haptic vibrations, spatialized audio, and voice guidance, the device enables users to make safe, informed crossing decisions. The system will first be tested in a simulated traffic environment to ensure reliability across diverse conditions before real-world deployment. Aligned with CITIES’ focus on mobility, uncertainty, and fairness, the project leverages advanced AI and assistive technologies to address accessibility challenges and promote safer, more equitable urban mobility for the visually impaired.
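The crossing-decision step described above can be sketched in code. This is a minimal illustrative sketch, not the project's actual implementation: the `SceneReport` structure, the `Signal` enum, and the `crossing_advice` function are all hypothetical names standing in for the structured output a vision-language model might return and the guidance logic that would feed the device's audio and haptic channels.

```python
from dataclasses import dataclass
from enum import Enum


class Signal(Enum):
    """Hypothetical traffic-signal states a vision-language model might report."""
    WALK = "walk"
    DONT_WALK = "dont_walk"
    UNKNOWN = "unknown"


@dataclass
class SceneReport:
    """Structured interpretation of one camera frame (illustrative only)."""
    signal: Signal
    vehicles_approaching: bool
    hazard_note: str = ""


def crossing_advice(report: SceneReport) -> tuple[bool, str]:
    """Map a scene report to a (safe_to_cross, spoken_guidance) pair.

    Conservative by design: any ambiguity or approaching vehicle
    resolves to "wait", mirroring the safety-first goal of the project.
    """
    if report.vehicles_approaching:
        return False, "Vehicles are approaching. Please wait."
    if report.signal is Signal.WALK:
        return True, "Walk signal is on and the road is clear. You may cross."
    return False, "Crossing signal is unclear. Please wait."


# Example: the walk signal is on, but a vehicle is still turning in.
report = SceneReport(signal=Signal.WALK, vehicles_approaching=True)
safe, guidance = crossing_advice(report)
print(safe, guidance)  # → False Vehicles are approaching. Please wait.
```

In a real deployment the guidance string would be routed to voice output and paired with haptic cues, and the decision would be recomputed continuously as new frames arrive.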