AI Research Roundup: Video-Based Navigation, Secure LLMs, and Graph-Based Intelligence
Latest research summaries in ML, Robotics, CV, NLP, and AI
Contents
Ola: Pushing the Frontiers of Omni-Modal Language Model with Progressive Modality Alignment
Dolphin: A Programmable Framework for Scalable Neurosymbolic Learning
ChamaleonLLM: Batch-Aware Dynamic Low-Rank Adaptation via Inference-Time Clusters
Uni-NaVid: A Video-based Vision-Language-Action Model for Unifying Embodied Navigation Tasks
Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions
MA4DIV: Multi-Agent Reinforcement Learning for Search Result Diversification
SMART: Advancing Scalable Map Priors for Driving Topology Reasoning