Integrating the Study of Mind, Brain, Behavior, and Language

We aim to be a leading center for the multidisciplinary study of mind, brain, behavior, and language, including such phenomena as perception, thinking, learning, memory, attention, action, personality, speech, language processing, and linguistic structure. The department examines the functional organization of these capacities, the representational and computational processes that underlie them, their neural bases, their development across the lifespan, and how they shape individual and social behavior.


Research Highlights

  • How do we select an appropriate action, given our goals?
  • How do people decide to blame others for their behavior?
  • New software automatically identifies behaviors of laboratory mice.
  • Which variables influence control over learning and action?
  • Searching for memory.
  • How do we make decisions and learn from experience?
  • A stroke leads to resolution of foreign accent syndrome.
  • How do we integrate higher-order cognitive processes and actions?
  • Using electrophysiology and optogenetics to probe memory.
  • How does the brain develop and change in response to cues?
  • Using an immersive virtual environment to test perception and action.

Upcoming Events

  • Todd Zickler, Harvard University

    July 29, 2016, 12:00 PM - 1:30 PM
    Metcalf Research Bldg, Room 305

    Title: A Multi-scale Consensus Model for Low-level Vision

    Abstract: During the past ten years, my research group has been using various tools, from applied mathematics and signal processing to computer graphics and computational imaging, to help build a catalogue of how local patterns of image brightness can be used by vision systems to constrain their local interpretations of shape, lighting, and materials. Supposing we have a complete catalogue of such local, low-level constraints, and supposing we can apply them at multiple scales across a visual field, how should we combine their predictions for vision? I will try to answer this question by suggesting a computational framework for incorporating diverse, noisy scene constraints at dense locations and scales. The framework can be understood as a large collection of distributed computational units, with each unit corresponding to a spatial region of the visual field (a “receptive field”) having a certain position and size. Regardless of its location and scale, each computational unit iteratively performs the same set of simple calculations and trades messages with other units through sparse feed-forward and feed-back connections across scales. I will show results of applying this framework to estimate disparity from stereo images and surface normals from monocular diffuse shading. Then I will give you some time to tell me what in this framework, if anything, resembles what is known about neural implementations for similar visual tasks.

    More information: http://brown.edu/Departments/CLPS/events
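
The core idea in the abstract, distributed units at multiple scales that repeat a simple local update and exchange feed-forward and feed-back messages, can be sketched in a few lines of code. The toy below is only an illustrative assumption, not the speaker's actual model: it denoises a 1-D signal by letting units at each level of an averaging pyramid repeatedly blend their own noisy measurement with messages from coarser and finer neighbors. The names build_pyramid and consensus, the weights, and the update rule are all hypothetical.

# Minimal, hypothetical sketch of a multi-scale consensus scheme: units tile
# a 1-D "visual field" at several scales, each keeps a local estimate, and
# estimates are iteratively averaged with messages from coarser (feed-back)
# and finer (feed-forward) neighbors. Illustrative only; not the speaker's model.
import numpy as np

def build_pyramid(signal, n_scales):
    """Noisy local measurements at each scale (each level is 2x downsampled)."""
    levels = [signal.astype(float)]
    for _ in range(1, n_scales):
        prev = levels[-1]
        levels.append(0.5 * (prev[0::2] + prev[1::2]))  # simple 2x average
    return levels

def consensus(measurements, n_iters=50, w_data=0.5, w_msg=0.25):
    """Iteratively blend each unit's estimate with cross-scale messages."""
    estimates = [m.copy() for m in measurements]
    for _ in range(n_iters):
        new = []
        for s, est in enumerate(estimates):
            blend = w_data * measurements[s] + (1 - w_data) * est
            if s + 1 < len(estimates):  # feed-back message from coarser scale
                blend += w_msg * (np.repeat(estimates[s + 1], 2)[: est.size] - est)
            if s > 0:                   # feed-forward message from finer scale
                finer = estimates[s - 1]
                blend += w_msg * (0.5 * (finer[0::2] + finer[1::2]) - est)
            new.append(blend)
        estimates = new
    return estimates[0]  # finest-scale consensus estimate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.sin(np.linspace(0, 4 * np.pi, 64))
    noisy = truth + 0.3 * rng.standard_normal(64)
    denoised = consensus(build_pyramid(noisy, n_scales=4))
    print("noisy RMSE:    ", np.sqrt(np.mean((noisy - truth) ** 2)))
    print("consensus RMSE:", np.sqrt(np.mean((denoised - truth) ** 2)))

One design note on this sketch: each update is a convex combination of the unit's measurement, its current estimate, and its cross-scale messages, so the iteration stays stable, and noise shrinks because coarser levels average it away before feeding back to finer ones.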