State of AI

Transformers, Diffusion, and Quantum Computing

Latest research summaries in ML, Robotics, CV, NLP and AI

State of AI
Jul 05, 2025


Welcome to today's edition of State of AI 👋 And a warm welcome to our 21 new subscribers since the last edition!

Let's get into it 👇

Contents

  1. Enabling Population-Level Parallelism in Tree-Based Genetic Programming for Comprehensive GPU Acceleration

  2. FlowSpec: Continuous Pipelined Speculative Decoding for Efficient Distributed LLM Inference

  3. MAPS: Advancing Multi-Modal Reasoning in Expert-Level Physical Science

  4. RefTok: Reference-Based Tokenization for Video Generation

  5. Less is Enough: Training-Free Video Diffusion Acceleration via Runtime-Adaptive Caching

  6. Bootstrapping Grounded Chain-of-Thought in Multimodal LLMs for Data-Efficient Model Adaptation

  7. Transformers Don't Need LayerNorm at Inference Time: Scaling LayerNorm Removal to GPT-2 XL and the Implications for Mechanistic Interpretability

  8. StructTransform: A Scalable Attack Surface for Safety-Aligned Large Language Models

  9. Gradient-Based Model Fingerprinting for LLM Similarity Detection and Family Classification

  10. Batch-Max: Higher LLM Throughput using Larger Batch Sizes and KV Cache Compression

  11. GPAS: Accelerating Convergence of LLM Pretraining via Gradient-Preserving Activation Scaling

  12. DeSTA2.5-Audio: Toward General-Purpose Large Audio Language Model with Self-Generated Cross-Modal Alignment

  13. MultiGen: Using Multimodal Generation in Simulation to Learn Multimodal Policies in Real

  14. Large Language Model-Driven Closed-Loop UAV Operation with Semantic Observations

  15. DexVLG: Dexterous Vision-Language-Grasp Model at Scale

Enabling Population-Level Parallelism in Tree-Based Genetic Programming for Comprehensive GPU Acceleration

Authors: Zhihong Wu, Lishuang Wang, Kebin Sun, Zhuozhao Li, Ran Cheng

Source and references: https://arxiv.org/abs/2501.17168v4


Introduction

This paper presents EvoGP, a high-performance framework that enables comprehensive GPU acceleration of Tree-based Genetic Programming (TGP) through population-level parallel execution.
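
To make the idea of population-level parallelism concrete, here is a minimal, hypothetical sketch of evaluating an entire GP population on the GPU in one batched pass. This is not EvoGP's actual API; the fixed-depth tree encoding, tensor layout, and all names are illustrative assumptions. The point is that when every tree in the population is encoded as a complete binary tree in a heap-ordered tensor, the whole population can be evaluated level by level with a handful of batched tensor operations instead of one tree at a time.

```python
# Hypothetical sketch of population-level parallel tree evaluation on the GPU
# (not EvoGP's real interface; encoding and names are assumptions for illustration).
import torch

DEPTH = 4                      # depth of each (complete) expression tree
POP, BATCH, N_VARS = 1024, 256, 3
device = "cuda" if torch.cuda.is_available() else "cpu"

# Leaf encoding: index < N_VARS selects an input variable, otherwise a constant.
n_leaves = 2 ** DEPTH
leaf_idx = torch.randint(0, N_VARS + 1, (POP, n_leaves), device=device)
leaf_const = torch.randn(POP, n_leaves, device=device)

# Internal nodes in breadth-first (heap) order, root first: 0 = add, 1 = sub, 2 = mul.
n_internal = 2 ** DEPTH - 1
ops = torch.randint(0, 3, (POP, n_internal), device=device)

def evaluate_population(x):
    """Evaluate every tree on every sample in one pass.
    x: (BATCH, N_VARS) -> returns (POP, BATCH)."""
    # Resolve leaves: variables are gathered from x, constants are broadcast.
    is_var = leaf_idx < N_VARS
    var_vals = x[:, leaf_idx.clamp(max=N_VARS - 1)]            # (BATCH, POP, n_leaves)
    var_vals = var_vals.permute(1, 0, 2)                        # (POP, BATCH, n_leaves)
    vals = torch.where(is_var.unsqueeze(1), var_vals,
                       leaf_const.unsqueeze(1).expand(-1, x.shape[0], -1))

    # Reduce level by level; each level's operators are applied to all trees at once.
    op_offset = n_internal
    for _ in range(DEPTH):
        left, right = vals[..., 0::2], vals[..., 1::2]
        n_nodes = left.shape[-1]
        op_offset -= n_nodes
        op = ops[:, op_offset:op_offset + n_nodes].unsqueeze(1)  # (POP, 1, n_nodes)
        vals = torch.where(op == 0, left + right,
               torch.where(op == 1, left - right, left * right))
    return vals.squeeze(-1)                                      # (POP, BATCH)

inputs = torch.randn(BATCH, N_VARS, device=device)
predictions = evaluate_population(inputs)                        # fitness of all 1024 trees at once
```

The design choice this sketch illustrates is the same one the paper's title names: rather than launching a kernel per individual, the population itself becomes the batch dimension, so selection, variation, and fitness evaluation can all stay on the device.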
