

Joey Velez-Ginorio

Undergraduate Major: Computer Engineering

Future Plans: Ph.D. in Computer Science


Joey Velez-Ginorio is pursuing a bachelor's degree in Computer Engineering with a minor in Mathematics. Throughout his undergraduate career he has volunteered with several organizations: Limbitless Solutions, I.D.E.A.S., AAP Ambassadors, and the Society for Advancement of Chicanos/Native Americans and Hispanics in Science. His current research focus is Computational Cognitive Science & Artificial Intelligence, where he seeks to build computational models of different features of human cognition in an effort to build better AI. Joey plans to continue into academia, where he can pursue these ideas over the long term.

The Language of Mental States: Rationality and Compositionality in Theory of Mind

Conducted at the Massachusetts Institute of Technology

Mentor: Dr. Joshua B. Tenenbaum, Professor of Cognitive Science and Computation, Massachusetts Institute of Technology

Abstract: When we interact with others, we cannot see their mental states: what they think and what they want. Nevertheless, we infer them by watching how they behave. Computationally, Bayesian inference over planning algorithms, known as inverse planning, captures how humans infer desires from observable actions. These models, however, represent desires as simple associations between the agent and the state of the world (e.g., the person wants coffee, no matter the context), failing to capture the sophistication of mental state inferences humans perform. In this paper we show that representing desires as probabilistic programs enables inferring complex desires that accurately describe complex behaviors (e.g., the person either wants coffee and a scone, or just tea, and she will get whatever is more convenient). Inspired by formal models of concept learning, our model performs Bayesian inference over a probabilistic grammar that builds desires by combining logical primitives (e.g., then, or, and) with basic desires (e.g., coffee, tea, scones). We show that our model outperforms current models, performing more human-like inferences on social inference tasks.
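The core idea in the abstract can be sketched in miniature: enumerate desire expressions from a small probabilistic grammar, score each by a prior derived from the grammar's rule probabilities, and weigh it by how well it explains observed choices. Everything below is an illustrative assumption, not the paper's actual model: the grammar (only and/or over three basic desires, no "then"), the rule probabilities, the depth limit, and the noise parameters are all made up for the sketch.

```python
import itertools

# Toy grammar: Desire -> base | (and, D, D) | (or, D, D)
# Rule probabilities are assumptions chosen for illustration.
BASES = ["coffee", "tea", "scone"]
P_BASE, P_AND, P_OR = 0.5, 0.25, 0.25

def enum_desires(depth):
    """Enumerate (desire, prior) pairs up to a nesting depth.
    The prior is the (truncated) probability of generating the
    expression from the grammar; truncation means priors do not
    sum to 1, so the posterior is normalized later."""
    out = [(b, P_BASE / len(BASES)) for b in BASES]
    if depth > 0:
        sub = enum_desires(depth - 1)
        for (d1, p1), (d2, p2) in itertools.product(sub, sub):
            out.append((("and", d1, d2), P_AND * p1 * p2))
            out.append((("or", d1, d2), P_OR * p1 * p2))
    return out

def satisfies(desire, items):
    """Does a set of obtained items satisfy a desire expression?"""
    if isinstance(desire, str):
        return desire in items
    op, a, b = desire
    if op == "and":
        return satisfies(a, items) and satisfies(b, items)
    return satisfies(a, items) or satisfies(b, items)

def posterior(observations, depth=2, p_match=0.9, p_noise=0.01):
    """Bayesian inference over the grammar: score every desire by
    prior * likelihood of the observed item sets, then normalize.
    The noisy likelihood (p_match vs. p_noise) is an assumption
    standing in for a full planning model."""
    scored = []
    for d, prior in enum_desires(depth):
        like = 1.0
        for items in observations:
            like *= p_match if satisfies(d, items) else p_noise
        scored.append((d, prior * like))
    z = sum(s for _, s in scored)
    return sorted(((d, s / z) for d, s in scored), key=lambda t: -t[1])

# Agent once took coffee + a scone, another time just tea:
post = posterior([{"coffee", "scone"}, {"tea"}])
```

With these observations, no single basic desire explains both episodes, so disjunctive hypotheses such as ("or", "coffee", "tea") rank above any base desire despite their lower prior: this is the sense in which compositional desires capture behavior that simple agent-state associations cannot.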