Noga Zaslavsky

Assistant Professor

NYU Psychology

About me

I’m an Assistant Professor in the Psychology Department at NYU.

My research aims to understand language, learning, and reasoning from first principles, building on ideas and methods from machine learning and information theory. I’m particularly interested in finding computational principles that explain how we use language to represent the environment; how this representation can be learned in humans and in artificial neural networks; how it interacts with other cognitive functions, such as perception, action, social reasoning, and decision making; and how it evolves over time and adapts to changing environments and social needs. I believe that such principles could advance our understanding of human and artificial cognition, as well as guide the development of artificial agents that can evolve their own human-like communication systems without requiring huge amounts of human-generated training data.

Selected Publications

All publications ≫

Efficient compression in color naming and its evolution

Zaslavsky, Kemp, Regier, Tishby. PNAS, 2018.
ELSC Prize for Outstanding Publication

Deep learning and the Information Bottleneck principle

Tishby and Zaslavsky. IEEE ITW, 2015.

Optimal compression in human concept learning

Imel and Zaslavsky. CogSci, 2024.

Recent Publications

Nathaniel Imel, Noga Zaslavsky. Optimal compression in human concept learning. CogSci, 2024.

PDF Proceedings

Eleonora Gualdoni, Mycal Tucker, Roger P. Levy, Noga Zaslavsky. Bridging semantics and pragmatics in information-theoretic emergent communication. SCiL, 2024.

PDF DOI

Alicia Chen, Matthias Hofer, Moshe Poliak, Roger P. Levy, Noga Zaslavsky. Discreteness and systematicity emerge to facilitate communication in a continuous signal-meaning space. EvoLang XV, 2024.

PDF DOI

Eghbal A. Hosseini, Martin Schrimpf, Yian Zhang, Samuel Bowman, Noga Zaslavsky, Evelina Fedorenko. Artificial neural network language models predict human brain responses to language even after a developmentally realistic amount of training. Neurobiology of Language, 2024.

PDF DOI

Nathaniel Imel, Richard Futrell, Michael Franke, Noga Zaslavsky. Noisy Population Dynamics Lead to Efficiently Compressed Semantic Systems. InfoCog @ NeurIPS, 2023.

PDF

Andi Peng, Mycal Tucker, Eoin M. Kenny, Noga Zaslavsky, Pulkit Agrawal, Julie Shah. Human-Guided Complexity-Controlled Abstractions. NeurIPS, 2023.

PDF Code Video Proceedings Supplemental

Eghbal Hosseini, Noga Zaslavsky, Colton Casto, Evelina Fedorenko. Teasing apart the representational spaces of ANN language models to discover key axes of model-to-brain alignment. CCN, 2023.

PDF

Mora Maldonado, Noga Zaslavsky, Jennifer Culbertson. Evidence for a language-independent conceptual representation of pronominal referents. CogSci, 2023.

PDF Proceedings

Eghbal Hosseini, Martin Schrimpf, Yian Zhang, Samuel Bowman, Noga Zaslavsky, Evelina Fedorenko. Alignment of ANN Language Models with Humans After a Developmentally Realistic Amount of Training. COSYNE, 2023.

PDF