State of AI

Quiet-STaR, MoAI, VLOGGER, Chronos and Stealing Part of LLM

Week 3, March 2024

Mar 19, 2024
Dear readers,

It's with great excitement that we present to you the 50th edition of the State of AI. This milestone issue showcases the remarkable progress and diversity in AI research and applications.

In this edition, we explore the fascinating world of self-reflective language models that learn to think before speaking, the incredible potential of multimodal intelligence combining language and vision, the captivating synthesis of embodied avatars, the intricate language of time series, and the thought-provoking implications of partial language model extraction.

Each article in this issue offers a unique perspective on the ever-evolving landscape of AI, promising to engage, inform, and inspire. Join us as we celebrate this significant milestone and embark on an exciting journey through the latest advancements in artificial intelligence.

Thank you for being a part of our community, and happy reading!

Warmest regards,

State of AI



Contents

  1. Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking

  2. MoAI: Mixture of All Intelligence for Large Language and Vision Models

  3. VLOGGER: Multimodal Diffusion for Embodied Avatar Synthesis

  4. Chronos: Learning the Language of Time Series

  5. Stealing Part of a Production Language Model


Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking

Authors: Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, Noah D. Goodman

Source and references: https://arxiv.org/abs/2403.09629


Introduction

Language models (LMs) can make strikingly human-like predictions, yet without modeling the reasoning behind the words they often miss what is written between the lines. Prior work has shown that training on reasoning-focused tasks improves a language model's performance, but such training commonly relies on curated datasets built specifically for reasoning. The authors of the paper "Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking" aim to address this limitation. They propose Quiet Self-Taught Reasoner (Quiet-STaR), a novel and scalable training technique that teaches language models to generate and use internal rationales while reading arbitrary text.

The Challenge of Reasoning in Language Models

Teaching language models to reason is not a new concept, but existing approaches rely on curated datasets and specific reasoning tasks. Researchers have trained language models to generate chains of thought that help them solve problems, but these approaches only work for question-answer datasets, limiting their scalability and broad applicability. Quiet-STaR aims to change this by teaching language models to think through arbitrary text.

Quiet-STaR: An Overview

Quiet-STaR works in three main steps: think, talk, and learn.

  1. Think (Parallel Rationale Generation): The algorithm generates a rationale at every token position of the input sequence, in parallel. Each generated rationale is wrapped in <startofthought> and <endofthought> tokens that guide the model.

  2. Talk (Mixing Post-Rationale and Base Predictions): The model combines the outputs of the rationale generation process with its base predictions. A learned mixing head interpolates between the next-token predictions made with and without rationales, easing the distribution shift introduced by thinking.

  3. Learn (Optimizing Rationale Generation): The algorithm optimizes the rationale-generation parameters with REINFORCE, a reinforcement learning algorithm. It uses teacher-forcing to train the language model on both the next token and later tokens, increasing the likelihood of generating helpful rationales. A simplified sketch of the think and talk steps follows this list.
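To make the loop concrete, here is a minimal sketch of the think and talk steps at a single position, assuming a toy `model(ids) -> logits` interface; the meta-token ids and the scalar stand-in for the mixing head are illustrative, not the authors' released code. The actual method runs this at every position in parallel and then applies the learn step.

```python
import torch
import torch.nn.functional as F

def think_then_talk(model, mix_logit, tokens, start_id, end_id, thought_len=8):
    """Sample one rationale after `tokens`, then mix the two predictions."""
    # Think: sample a rationale wrapped in the start/end-of-thought tokens.
    ctx = torch.cat([tokens, torch.tensor([start_id])])
    for _ in range(thought_len):
        logits = model(ctx.unsqueeze(0))[0, -1]
        nxt = torch.multinomial(F.softmax(logits, dim=-1), num_samples=1)
        ctx = torch.cat([ctx, nxt])
    ctx = torch.cat([ctx, torch.tensor([end_id])])

    # Talk: interpolate next-token logits with and without the rationale.
    base = model(tokens.unsqueeze(0))[0, -1]       # prediction without thinking
    thoughtful = model(ctx.unsqueeze(0))[0, -1]    # prediction after the rationale
    w = torch.sigmoid(mix_logit)                   # scalar stand-in for the mixing head
    return w * thoughtful + (1 - w) * base
```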

Efficient Rationale Generation with Parallel Sampling

Quiet-STaR faces the challenge of efficiently generating a rationale at every token position. To address this, the authors propose a parallel sampling algorithm that generates rationales for all positions of a text sequence at once. Each generated thought token attends only to itself, to the earlier tokens of its own rationale, and to the text preceding its position, allowing parallel generation without loss of context.
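A minimal sketch of such an attention mask, assuming a layout where each position's thought tokens are appended after the base sequence; the paper's implementation arranges the computation differently, but the attention pattern is the same idea.

```python
import torch

def parallel_thought_mask(seq_len: int, thought_len: int) -> torch.Tensor:
    """Boolean attention mask (True = may attend) for parallel rationales.

    Layout: the first `seq_len` rows/columns are the original text; the
    thought for position i occupies the block starting at
    seq_len + i * thought_len.
    """
    n = seq_len + seq_len * thought_len
    mask = torch.zeros(n, n, dtype=torch.bool)
    # The base text keeps its ordinary causal mask.
    mask[:seq_len, :seq_len] = torch.tril(
        torch.ones(seq_len, seq_len, dtype=torch.bool))
    for i in range(seq_len):
        s = seq_len + i * thought_len
        # Thought i sees the text prefix up to and including token i...
        mask[s:s + thought_len, :i + 1] = True
        # ...and, causally, the earlier tokens of its own thought, but
        # never another position's thought.
        mask[s:s + thought_len, s:s + thought_len] = torch.tril(
            torch.ones(thought_len, thought_len, dtype=torch.bool))
    return mask
```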

Mixing Head: Smoothing the Introduction of Reasoning

As the model starts to learn reasoning, it may initially face difficulty due to the out-of-distribution nature of its thoughts. To overcome this, a mixing head is introduced to blend the generated rationales with the base predictions of the model. This learned interpolation eases the transition to incorporating reasoning into the language model's predictions.
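What such a mixing head can look like, sketched under assumptions: a shallow MLP over the hidden states computed with and without the rationale, producing a per-position interpolation weight. The layer sizes and architecture here are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class MixingHead(nn.Module):
    """Shallow MLP that decides how much weight the post-rationale
    prediction receives at each position (sketch; sizes are assumptions)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, h_base: torch.Tensor, h_thought: torch.Tensor) -> torch.Tensor:
        # Weight w in [0, 1]; the mixed prediction is
        # w * logits_with_thought + (1 - w) * logits_without_thought.
        return torch.sigmoid(self.mlp(torch.cat([h_base, h_thought], dim=-1)))
```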

Learning Rationales through Reinforcement

The learning process in Quiet-STaR relies on REINFORCE, a reinforcement learning algorithm, to optimize the rationale generation parameters. By increasing the likelihood of rationales that make future text more probable and reducing the variance with a teacher-forcing trick, the model learns to generate and use more helpful rationales.
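In simplified form, the objective rewards a rationale by how much it raises the likelihood of the true future tokens. The sketch below uses a mean-reward baseline for variance reduction; the paper's exact estimator differs in its details, and all names here are illustrative.

```python
import torch

def rationale_reinforce_loss(logp_future_with: torch.Tensor,
                             logp_future_without: torch.Tensor,
                             logp_thought: torch.Tensor) -> torch.Tensor:
    """REINFORCE-style loss over a batch of sampled rationales.

    logp_future_with[i] / logp_future_without[i]: summed log-likelihoods of
    the true next tokens with and without rationale i.
    logp_thought[i]: log-probability of having sampled rationale i.
    """
    # Reward: how much more likely the true continuation became.
    reward = logp_future_with - logp_future_without
    # Mean reward as a simple baseline to reduce variance.
    advantage = reward - reward.mean()
    # Increase the probability of rationales with positive advantage.
    return -(advantage.detach() * logp_thought).mean()
```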

Results and Improvements

Quiet-STaR demonstrates significant improvements in zero-shot direct reasoning abilities on multiple datasets, such as CommonsenseQA (36.3% to 47.2%) and GSM8K (5.9% to 10.9%). The model's performance improves consistently as the number of tokens used in its internal thoughts increases. Additionally, the generated rationales aid the model in predicting difficult tokens better than a model trained on the same web text.

Conclusion

Quiet-STaR marks a step towards improving language models' general reasoning ability by enabling them to learn from diverse, unstructured text data rather than relying on curated reasoning tasks. By introducing efficient parallel generation, custom meta-tokens, and learning mechanisms like mixing heads and non-myopic losses, Quiet-STaR helps language models generate reasoning that ultimately leads to better predictions of future text.

This research not only offers a more scalable approach to training language models to reason but also makes that capability more accessible, since it requires no fine-tuning on task-specific datasets. With Quiet-STaR, language models can learn to reason in a general, scalable way, helping them better understand the world around us.


MoAI: Mixture of All Intelligence for Large Language and Vision Models

Authors: Byung-Kwan Lee, Beomchan Park, Chae Won Kim, and Yong Man Ro

Source and references: https://arxiv.org/abs/2403.07508


Introduction

In the world of large language and vision models (LLVMs), a study from Korea Advanced Institute of Science and Technology (KAIST) presents an innovative LLVM called Mixture of All Intelligence (MoAI). The motivation behind MoAI is to make use of the specialized computer vision models that have been overshadowed by LLVMs. MoAI efficiently leverages auxiliary visual information by combining external computer vision models, such as panoptic segmentation, open-world object detection, scene graph generation, and optical character recognition, enhancing its visual perception capabilities for complex question answering tasks.
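As a rough sketch of the idea, the auxiliary evidence MoAI draws on can be pictured as the collected outputs of four external models. The model handles below are placeholders, not MoAI's actual interface, which feeds these outputs through additional learned modules rather than using them raw.

```python
def auxiliary_visual_info(image, seg_model, det_model, sgg_model, ocr_model):
    """Gather the four kinds of auxiliary visual evidence named above
    (illustrative placeholders, not MoAI's API)."""
    return {
        "panoptic_segmentation": seg_model(image),  # per-pixel regions and classes
        "open_world_detection": det_model(image),   # boxes for arbitrary objects
        "scene_graph": sgg_model(image),            # (subject, relation, object) triples
        "ocr": ocr_model(image),                    # text appearing in the image
    }
```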
