|10:30-11:30am||From percepts to concepts: Understanding category selectivity for animals and objects
Olivia Cheung, NYU Abu Dhabi
|11:30am-12pm||Category representations in convolutional neural networks: A layer-by-layer analysis
Niels Verosky and Olivia Cheung, NYU Abu Dhabi
|2-3pm||Harmonic analysis of social cognition
Anne Maass, Michele Pavon, and Caterina Suitner, NYU Abu Dhabi and University of Padova
|3-3:30pm||A mathematician’s perspective: equivalence relations steering the abstraction process
Michele Pavon, NYU Abu Dhabi
|4:15-5:15pm||The diverse representational substrates of the predictive mind
Michael Gilead, Tel Aviv University
We maintain stable representations of the world via interactions between incoming sensory input and existing conceptual knowledge about the world. Distinct concepts, such as animate and inanimate entities, often differ vastly and systematically in their visual features. As the human brain transforms percepts into concepts, to what extent do the mental and neural representations contain visual versus conceptual information? In this talk, I will present behavioral and neural evidence that the well-documented distinction between animal and object representations is unlikely to arise merely from differences in low-level visual or shape properties. Instead, higher-level conceptual influences may play a critical role in interpreting visual input during categorization.
Artificial neural networks can identify an impressive variety of images, but what abstractions do they form when they “see” the world? We tracked the representation of animals and human-made objects through 50+ layers of two convolutional neural networks: a neural network trained to classify images into one of 1,000 predefined labels (ResNet) and a neural network trained to pair images with realistic text captions (CLIP). Distinct patterns of animal and object category representations emerged across different layers in both neural networks, revealing similarities in categorization between humans and artificial neural networks.
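The layer-by-layer logic described above can be illustrated with a minimal sketch. The study used activations from ResNet and CLIP; here, as a stand-in, activations are simulated with random data plus a category offset that grows with depth, and a simple d'-like separability index is computed per layer. The function name, the offset, and the index are all hypothetical choices for illustration, not the authors' actual analysis.

```python
import numpy as np

def category_separability(acts, labels):
    """d'-like index: distance between class means over pooled spread."""
    a = acts[labels == 0]
    b = acts[labels == 1]
    gap = np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))
    spread = 0.5 * (a.std() + b.std()) + 1e-9
    return gap / spread

rng = np.random.default_rng(0)
labels = np.array([0] * 20 + [1] * 20)  # 0 = animal, 1 = object (hypothetical labels)

# Simulated per-"layer" activations: the category signal is made to grow
# with depth, mimicking category structure emerging across layers.
separability = []
for depth in range(5):
    acts = rng.normal(size=(40, 64))
    acts[labels == 1] += 0.3 * depth  # hypothetical depth-dependent category signal
    separability.append(category_separability(acts, labels))

print([round(s, 2) for s in separability])
```

In a real analysis, `acts` would be the flattened activations of each network layer for a set of animal and object images, and the profile of separability across layers is what distinguishes the two networks' categorization strategies.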
We argue that some fundamental concepts and tools of signal processing may be effectively applied to represent and interpret social cognitive processes. From this viewpoint, social stimuli are thought of as weighted sums of harmonics with different frequencies: low frequencies represent general categories such as gender, ethnic group, nationality, etc., whereas high frequencies account for personal characteristics. Individuals are then seen by observers as the output of a filter that emphasizes a certain range of frequencies. The selection of the filter depends on the social distance between observer and target, as well as on motivation, cognitive resources, and cultural background. Enhancing low- and high-frequency harmonics are not on an equal footing: the latter requires supplementary energy, mirroring a well-known property of signal-processing filters. Several classical findings in social cognition admit a natural interpretation and integration in the language of signal processing.
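The filter metaphor can be made concrete with a small sketch. A "social stimulus" is composed as a low-frequency harmonic (standing in for a broad category) plus a high-frequency one (standing in for individuating detail), and a low-pass "observer filter" is applied in the frequency domain. The specific frequencies and cutoff are illustrative assumptions, not part of the authors' framework.

```python
import numpy as np

# Hypothetical "social stimulus": a weighted sum of harmonics.
t = np.linspace(0, 1, 256, endpoint=False)
low = np.sin(2 * np.pi * 2 * t)          # category-level component (2 Hz)
high = 0.5 * np.sin(2 * np.pi * 40 * t)  # individuating detail (40 Hz)
signal = low + high

# A low-pass "observer filter": keep only frequencies below a cutoff.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(256, d=1 / 256)
spectrum[freqs > 10] = 0                 # illustrative cutoff at 10 Hz
filtered = np.fft.irfft(spectrum, n=256)

# The filtered percept retains the category component and loses the detail.
err_low = np.max(np.abs(filtered - low))
print(round(err_low, 6))
```

A socially distant observer corresponds to a low cutoff (category-level perception); reducing social distance amounts to raising the cutoff and recovering person-specific detail.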
Passing from a high-frequency to a low-frequency harmonic often entails an abstraction process, much like grouping single languages into language families or individuals into social categories. Originating with the work of Dedekind, Frege, and Cantor in the second half of the nineteenth century, this process has long been formalized in mathematics and is based on the concept of an equivalence relation and the resulting passage to the quotient set. Here I explain this definition of abstraction and discuss its applicability to social cognition.
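The passage to the quotient can be sketched directly: given a set and an equivalence relation (here induced by a key function, so that x ~ y iff key(x) == key(y)), the quotient set is the set of equivalence classes. The language-to-family example below is a simplified, hypothetical mapping chosen to echo the abstract's own illustration.

```python
from collections import defaultdict

def quotient(elements, key):
    """Partition `elements` into equivalence classes under the relation
    x ~ y  iff  key(x) == key(y); the result is the quotient set."""
    classes = defaultdict(list)
    for x in elements:
        classes[key(x)].append(x)
    return list(classes.values())

# Abstraction by passage to the quotient: individual languages are
# identified with one another whenever they share a family.
family = {"Italian": "Romance", "French": "Romance",
          "German": "Germanic", "Dutch": "Germanic"}
print(quotient(family.keys(), family.get))
```

Each equivalence class discards the differences between its members and keeps only what the relation deems relevant, which is precisely the abstraction step the talk formalizes.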
In recent years, scientists have increasingly investigated the predictive nature of cognition. We argue that prediction relies on abstraction, and thus theories of predictive cognition need an explicit theory of abstract representation. We propose such a theory of the abstract representational capacities that allow humans to transcend the “here-and-now.” We suggest that the representational substrates of the mind are built as a hierarchy, ranging from the concrete to the abstract; however, we argue that there are qualitative differences between elements along this hierarchy, generating meaningful, often unacknowledged, diversity. Echoing views from philosophy, we suggest that the representational hierarchy can be parsed into: modality-specific representations, instantiated on perceptual similarity; multimodal representations, instantiated primarily on the discovery of spatiotemporal contiguity; and categorical representations, instantiated primarily on social interaction. These elements serve as the building blocks of complex structures discussed in cognitive psychology (e.g., episodes, scripts) and are the inputs for mental representations that behave like functions, typically discussed in linguistics (i.e., predicators). We support our argument for representational diversity by explaining how the elements in our ontology are all required to account for humans’ predictive cognition (e.g., in subserving logic-based prediction; in optimizing the trade-off between accurate and detailed predictions) and by examining how the neuroscientific evidence coheres with our account. In doing so, we provide a testable model of the neural bases of conceptual cognition and highlight several important implications for research on self-projection, reinforcement learning, and predictive-processing models of psychopathology.