State of AI

Week 1, May 2023

May 06, 2023

Dear readers,

Welcome to the fifth edition of the State of AI newsletter! We are truly grateful for your continued support as we bring you the most discussed ML/AI research papers of the week, curated entirely by GPT-4. As the field of AI advances at an incredible pace, it becomes nearly impossible to keep up with all the progress. Our goal is to distill the wealth of information into a digestible read that keeps you informed and engaged.

In this week's edition, we will learn how researchers used machine learning to reconstruct thoughts, how AI can reconstruct what a mouse sees by scanning its brain activity while it watches a film, how to merge models trained on different tasks without any additional training, how GPT can automate machine learning pipelines, and how transformers can handle inputs of unlimited length.

As we continue to navigate the ever-evolving AI landscape, we hope our newsletter helps you stay updated on the most recent research and developments. Thank you for joining us on this journey, and we look forward to bringing you even more insightful content. Happy reading!

Best regards,

State of AI


Contents

  1. Learnable Latent Embeddings for Joint Behavioural and Neural Analysis

  2. Semantic reconstruction of continuous language from non-invasive brain recordings

  3. AutoML-GPT: Automatic Machine Learning with GPT

  4. ZipIt! Merging Models from Different Tasks without Training

  5. Unlimiformer: Long-Range Transformers with Unlimited Length Input


Learnable Latent Embeddings for Joint Behavioural and Neural Analysis

Authors: Steffen Schneider, Jin Hwa Lee, Mackenzie Weygandt Mathis

Source & References: https://doi.org/10.1038/s41586-023-06031-6


Introduction: Bridging the Gap Between Neural Activity and Behavior

Neuroscience has long aimed to understand and map the relationship between neural activity and behavior. In practice, this involves working with large, complex datasets and trying to build models that can effectively decode neural dynamics. Recent research by Steffen Schneider, Jin Hwa Lee, and Mackenzie Weygandt Mathis has taken us one step closer to this goal with a new encoding method called CEBRA (contrastive encoding of behavioral and neural activity). This cutting-edge technique uses behavioral and neural data jointly to produce consistent, high-performance embeddings, paving the way for a deeper understanding of how our brains drive our actions.

CEBRA: The Foundation of a Breakthrough

CEBRA combines the powerful concepts of nonlinear independent component analysis (ICA) with contrastive learning to build a flexible method for producing high-fidelity representations of neural and behavioral data. Unlike traditional techniques, CEBRA can handle a wide range of dimensions and types of labels, making it a valuable tool for neuroscientists seeking to navigate the complex relationships between neural activity and behavior.
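To make the contrastive-learning idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of an InfoNCE-style objective of the kind that underlies this family of methods: a sample whose auxiliary (behavioral) label is close to the anchor's serves as the positive, and randomly drawn samples serve as negatives.

```python
import numpy as np

rng = np.random.default_rng(0)

def infonce_loss(anchor, positive, negatives, temperature=1.0):
    """InfoNCE-style contrastive loss for a single anchor sample.

    anchor, positive: (d,) embedding vectors; negatives: (k, d).
    The positive is a sample whose behavioral label is close to the
    anchor's; the negatives are drawn at random from the dataset.
    """
    pos_sim = anchor @ positive / temperature
    neg_sim = negatives @ anchor / temperature
    # Negative log-softmax of the positive against all candidates
    logits = np.concatenate([[pos_sim], neg_sim])
    return -pos_sim + np.log(np.exp(logits).sum())

# Toy usage: an anchor with a nearby positive and random negatives
anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negatives = rng.normal(scale=0.1, size=(10, 2))
loss_good = infonce_loss(anchor, positive, negatives)
# Swapping the true positive for a random sample should raise the loss
loss_bad = infonce_loss(anchor, negatives[0],
                        np.vstack([positive, negatives[1:]]))
```

Minimizing this loss pulls samples with similar behavioral labels together in embedding space while pushing random samples apart, which is the mechanism that lets auxiliary variables shape the learned representation.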

A key innovation of CEBRA is its ability to incorporate task-relevant and task-irrelevant variables. This means researchers can decide which data to focus on and which to ignore, depending on their specific research goals. This flexibility allows CEBRA to be used for hypothesis-driven and discovery-driven analyses in a wide array of settings.

Putting CEBRA to the Test: Outperforming the Competition

To assess CEBRA's capabilities, Schneider and his colleagues compared it against several other algorithms in reconstructing ground-truth synthetic data. In these tests, CEBRA significantly outperformed well-known methods such as t-SNE, UMAP, automatic LFADS (autoLFADS), and pi-VAE, showcasing its potential as a game-changer in the field of neuroscience.
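One way to quantify the consistency of embeddings across runs is to fit a linear map from one embedding to another and score the fit: consistent embeddings should agree up to a linear transform. The sketch below uses a plain least-squares alignment on synthetic data as a stand-in for the paper's exact consistency metric.

```python
import numpy as np

rng = np.random.default_rng(1)

def alignment_r2(emb_a, emb_b):
    """R^2 of the best least-squares linear map from emb_a to emb_b.

    Consistent embeddings differ only by a linear transform, so a
    high R^2 suggests two runs recovered the same structure.
    """
    # Append a bias column so the fit can absorb constant offsets
    A = np.column_stack([emb_a, np.ones(len(emb_a))])
    coeffs, *_ = np.linalg.lstsq(A, emb_b, rcond=None)
    residuals = emb_b - A @ coeffs
    ss_res = (residuals ** 2).sum()
    ss_tot = ((emb_b - emb_b.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Two "runs" that agree up to a rotation plus small noise
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
run1 = rng.normal(size=(200, 2))
run2 = run1 @ rot + 0.01 * rng.normal(size=(200, 2))
r2_consistent = alignment_r2(run1, run2)   # close to 1
r2_random = alignment_r2(run1, rng.normal(size=(200, 2)))  # close to 0
```

Under this kind of metric, embeddings that are stable across initializations and animals score near 1, while unrelated embeddings score near 0.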

Moving beyond synthetic data, the researchers applied CEBRA to a hippocampus dataset to benchmark its performance against other methods when applied to real-world experimental data. Once again, CEBRA proved its worth, producing visually informative and consistent embeddings that surpassed rival techniques.
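A standard way to benchmark embeddings on experimental data like this is to decode the behavioral variable (for example, position on a track) from them. The following sketch uses synthetic data and a simple k-nearest-neighbor decoder as an illustration of that evaluation strategy, not the paper's exact decoding pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_decode(train_emb, train_pos, test_emb, k=5):
    """Decode a behavioral variable from embeddings with k-NN.

    For each test embedding, average the positions of its k nearest
    training embeddings (Euclidean distance).
    """
    preds = []
    for x in test_emb:
        dists = np.linalg.norm(train_emb - x, axis=1)
        nearest = np.argsort(dists)[:k]
        preds.append(train_pos[nearest].mean())
    return np.array(preds)

# Hypothetical setup: a 1-D track position embedded on a noisy curve
pos = rng.uniform(0, 1, size=300)
emb = np.column_stack([np.cos(2 * pos), np.sin(2 * pos)])
emb += 0.02 * rng.normal(size=(300, 2))
train, test = np.arange(0, 250), np.arange(250, 300)
pred = knn_decode(emb[train], pos[train], emb[test])
median_err = np.median(np.abs(pred - pos[test]))
```

The lower the decoding error, the more behaviorally relevant information the embedding has preserved, which is what makes decoding a natural yardstick for comparing methods on real recordings.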

Exploring Neural Dynamics Through Hypothesis-Driven and Discovery-Driven Analysis

CEBRA's flexible design allows it to address a wide range of scientific questions, making it an invaluable research tool. In the hippocampus dataset, for example, its performance with position and direction variables shed light on the precise roles these variables play in shaping neural activity.

Additionally, CEBRA can act as both a hypothesis-driven tool, guided by specific scientific questions, and a discovery-driven tool that uncovers hidden relationships between neural activity and behavior. This versatility makes it an excellent choice for neuroscientists navigating the vast and complex world of neural dynamics.

Validating Robustness With Algebraic Topology

To further establish the reliability of CEBRA's embeddings, the researchers turned to the branch of mathematics known as algebraic topology. By measuring persistent cohomology, they could assess the embeddings' robustness and compare how well they aligned with the expected topology. CEBRA's embeddings displayed a strong ring topology consistent with the expected behavior of place cells in the hippocampus, further demonstrating its potential as a powerful analysis tool.

Implications: The Broad Reach of CEBRA's Capabilities

CEBRA's groundbreaking approach to jointly analyzing behavioral and neural data opens the door to profound new insights in neuroscience. Its ability to work with a broad range of dimensions, labels, and input types makes it a versatile and adaptable tool for researchers seeking to unlock the secrets of the brain.

Moreover, CEBRA's performance with real-world experimental data and its strong theoretical foundations in algebraic topology underscore its potential to facilitate advances in our understanding of complex neural dynamics. As the study of the brain continues to evolve, researchers can now leverage the power of CEBRA to develop more accurate models and more deeply probe the intricate interplay between neural activity and behavior.

Closing Thoughts: A New Era in Neuroscience Modeling

In summary, CEBRA represents a significant leap forward in our ability to explore the relationship between neural activity and behavior. This breakthrough technique combines the power of nonlinear ICA with contrastive learning to create consistent, high-performance embeddings that can shed light on the hidden connections between our brains and our actions.

Equally at home in hypothesis-driven and discovery-driven research settings, CEBRA's flexibility and versatility make it an invaluable addition to the neuroscientist's toolbox. Its strong performance in test settings, coupled with its ability to produce robust and visually informative embeddings, opens the door to a new era in neuroscience modeling.

As the field continues to progress, CEBRA stands poised to play a crucial role in shaping our understanding of the fascinating and complex world of neural dynamics. By bridging the gap between neural activity and behavior, CEBRA has the potential to revolutionize our understanding of the brain and pave the way for new discoveries that could transform how we perceive and interact with the world around us.


Semantic reconstruction of continuous language from non-invasive brain recordings

Authors: Jerry Tang, Amanda LeBel, Shailee Jain, Alexander G. Huth

Source & References: https://doi.org/10.1101/2022.09.29.509744


Introduction

Communicating with technology just by thinking about it – this mind-blowing concept could become a reality, thanks to groundbreaking work by researchers at The University of Texas at Austin on non-invasive brain-computer interfaces (BCIs) that decode continuous natural language from a person's brain. Their newly developed language decoder is crucial to overcoming the limitations of existing BCIs and making this futuristic dream a reality.

© 2025 StateOfAI