Welcome to the Konkle Lab!

Our broad aim is to understand how we see and represent the world around us. How is the human visual system organized, and what pressures guide this organization? How does vision interface with action demands, so we can interact in the world, and with conceptual representation, so we can learn more about the world by looking?

Our approach starts from the premise that the connections of the brain are driven by powerful biological constraints—as such, where different kinds of information are found in the brain is not arbitrary, and serves as a clue to the underlying representational goals of the system. Our research approach is inspired by considering the experience and needs of an active observer in the world. This thinking continually deepens our understanding of how behavioral capacities are expectant in the local and long-range architecture of the brain, and how neural networks absorb the statistics of visual experience and the consequences of actions to realize the functions latent in the structure.

The techniques we use include both empirical and computational methods. We use functional neuroimaging and electroencephalography to measure the human brain. We develop computational models to link network architecture with cortical topography. We use behavioral methods to measure human perceptual and cognitive capacities. And, we draw on machine vision and deep learning approaches to gain empirical traction into the formats of hierarchical visual representation that can support different visual behaviors.

contact:
talia_konkle@harvard.edu | CV | google scholar | @talia_konkle
William James Hall 780
33 Kirkland St
Cambridge, MA
(617) 495-3886

Some Current Research:
New model development

  • Poster | Paper : Cognitive Steering in Deep Neural Networks via Long-Range Modulatory Feedback Connections. Konkle & Alvarez, NeurIPS 2023.

Deep neural network insights into biological visual representation

  • Talk | Preprint | Tweprint : A Contrastive Coding Account of Category Selectivity in the Ventral Visual Stream. Prince et al., bioRxiv, 2023.
  • Preprint | Tweprint : What can 1.8 billion regressions tell us about the pressures shaping high-level visual representation in brains and machines? Conwell et al., bioRxiv, 2023.
  • Preprint : Manipulating dropout reveals an optimal balance of efficiency and robustness in biological and machine visual systems. Prince et al., ICLR, 2024.
  • Talk | Preprint : Understanding the invariances of visual features with separable subnetworks. Hamblin et al., arXiv, 2022.

Modeling Cortical Topography

  • Preprint : A paper showing that "Greedy Local Wiring" growth rules on a cortical sheet build a hierarchical connectome.
    Chandra et al., bioRxiv, 2023.
  • Talk | Paper : Cortical topographic motifs emerge in a self-organized map of object space. Doshi & Konkle, Science Advances, 2023

Scenes, reachable environments, and the periphery

  • Talk | Preprint | Method Page : Full-field fMRI: a novel approach to study immersive vision. Park et al., bioRxiv, 2023.
  • Talk | Paper : Emergent dimensions underlying the reachable world. Josephs et al., Cognition, 2023.
  • Paper : Systematic transition from boundary extension to contraction along an object-to-scene continuum. Park et al., Cognition, 2024.
Some Konkle Talks:
Video: A High-Fidelity Perceptual Interface of the Visual World
Cognitive Science Society, Lila R. Gleitman Award. July 26-29, 2023.

In this talk, I re-think traditional accounts of the visual stream hierarchy, which posit that the visual system drives to a high-level representation that is abstract, categorical, and maybe even semantic. Instead, I lay out an argument that the late-stage visual code is perceptual and featurally detailed.


Video: Why is that there? Feature mapping across the visual cortex
Cognitive Computational Neuroscience Keynote. Sept 13-16, 2019.

This talk focuses on the origins of large-scale organization. All proposals balance the causal roles of two pressures: innately specified cortical patterning mechanisms (phylogenetic) establishing large-scale network architecture, and self-organizing mechanisms driven by the statistics of natural experience (ontogenetic) effecting local fine-scale organization. In this talk I discuss these local and long-range organizing pressures and the cortical scales at which these two causal pressures meet.
Earlier Key Findings:
Self-supervised learning models of visual representation

Object-responsive cortex encodes substantial information about object categories. Are category-level learning pressures critical for arriving at this representation?

We developed a fully self-supervised model which learns to represent individual images, rather than categories, such that views of the same image are embedded nearby in a low-dimensional feature space. This model not only learned emergent category information, but also learned hierarchical features which capture the structure of brain responses across the human ventral visual stream, on par with category-supervised models.

These results provide computational support for a domain-general framework guiding the formation of visual representation, where the proximate goal is not explicitly about category information, but is instead to learn unique, compressed descriptions of the visual world.
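The instance-level objective described above, embedding views of the same image nearby while separating them from all other images in a batch, can be sketched as a standard contrastive (NT-Xent-style) loss. The NumPy snippet below is an illustrative sketch of that family of objectives under assumed defaults (e.g., the temperature value), not the lab's actual model code:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent) loss: two views of the same image are pulled
    together in embedding space; other images in the batch are pushed apart.
    z1, z2: (N, d) arrays of embeddings for two views of N images."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    N = z1.shape[0]
    # index of each sample's positive pair (the other view of the same image)
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    # cross-entropy of the positive against all other batch entries
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * N), pos]))
```

The key property, mirroring the description above, is that the loss rewards unique compressed descriptions of individual images: matched views score a lower loss than mismatched ones, and category information is never supplied.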

A self-supervised domain-general learning framework for human ventral stream representation. Konkle & Alvarez. Nature Communications, 2020.

See also this recent blog post featuring this work, on the capacities of current self-supervised learning systems.
Organizing dimensions of occipitotemporal cortex

Occipitotemporal cortex has strong object-centered responses. However, there is no widely accepted model of the coding dimensions of objects, nor how this high-dimensional domain is mapped onto the cortical sheet. How do you parameterize objects?

We have found that the real-world size of objects is a fundamental dimension that has a large-scale organization across the cortical surface, and shows an interleaved organization with the dimension of animacy. This work demonstrates that object-responsive cortex is not a heterogeneous bank of features but has a systematic organization at a macro-scale.

A Real-World Size Organization of Object Responses in Occipito-Temporal Cortex. Konkle & Oliva. Neuron, 2012.

Tripartite Organization of the Ventral Stream by Animacy and Object Size. Konkle & Caramazza. Journal of Neuroscience, 2013.

The large-scale organization of object-responsive cortex is reflected in resting-state network architecture. Konkle & Caramazza. Cerebral Cortex, 2016.
Evidence for perceptual differences underlying categorical distinctions

Apples look like other apples, oranges look like other oranges, but do small objects look like other small objects? Because there are so many kinds of small objects (e.g., cups, keys), it is often assumed that there are no reliable perceptual features that distinguish them from big objects (e.g., cars, tables).

However, we have found that there are mid-level shape differences that capture broad conceptual distinctions like real-world size and animacy. Further, a substantial portion of ventral stream organization can be accounted for by these differences in coarse texture and form information, without requiring explicit recognition of intact objects.

Broadly, this line of work explores the idea that there is an extensive perceptual representational space which supports downstream processes like categorization and conceptual processing.


Mid-level visual features underlie the high-level categorical organization of the ventral stream. Long & Konkle (2018). PNAS.

Mid-level feature differences support early animacy and object size distinctions: Evidence from EEG decoding. Wang, Janini, & Konkle (2022). JOCN.

Mid-level perceptual features distinguish objects of different real-world sizes. Long, Konkle, Cohen, & Alvarez (2016) JEP:General.

A familiar-size Stroop effect in the absence of basic-level recognition. Long, & Konkle (2017) Cognition.

How big should this object be? Perceptual influences on viewing-size preferences. Chen, Deza, & Konkle (2022). JOCN.


For this line of work, we developed a new stimulus class we called "texforms". Read more about them here (Texform FAQ), with code to generate them here (GitHub Repo).
Mapping the reachable world

While there are clear distinctions between objects and scenes, what about the intermediate-scale space in between?

Neurally, we found that images of "reachspaces" activate a distinct large-scale topographic representation from both close-up object views and navigable-scale scene views. Behaviorally, we found that perceptual similarity computations dissociate reachspace images from both object and navigable-scale scene images.

To facilitate this research we have created the Reachspace Database: a new image database of over 10,000 high-quality images.

Large-scale dissociations between views of objects, scenes, and reachable-scale environments in visual cortex. Josephs & Konkle (2020) PNAS.

Emergent dimensions underlying human understanding of the reachable world. Josephs, Hebart, & Konkle (2021) PsyArXiv.

Perceptual dissociations among views of objects, scenes, and reachable spaces. Josephs & Konkle (2019). JEP:HPP.

The world within reach: an image database of reach-relevant environments. Josephs, Zhao, & Konkle (2021) Journal of Vision
En route to action understanding

Recognizing actions and inferring intentions and goals are essential capacities for navigating the social world. What are the perceptual precursors to more abstract action representation?

We found evidence for five large-scale networks underlying visual action perception: one related to social aspects of an action, and four related to the scale of the “interaction envelope”, ranging from fine-scale manipulations directed at objects to large-scale whole-body movements directed at distant locations. Behavioral assays into action representation revealed converging insights: actions are intuitively considered similar based on the agent’s goals, but visual brain responses reflect the similarity of body configurations. Broadly, this work begins to articulate the visual representation en route to understanding the actions of others around us.

Sociality and Interaction Envelope Organize Visual Action Representations. Tarhan, & Konkle (2020) Nature Communications.

Behavioral and Neural Representations en route to Intuitive Action Understanding. Tarhan, De Freitas, & Konkle (2021) Neuropsychologia.

Reliability-Based Voxel Selection. Tarhan & Konkle (2020). NeuroImage.
Links between neural organization and perceptual similarity computations

The human visual system is built to efficiently extract and encode the structure of the natural world, transforming information from early sensory formats into increasingly abstract representations that support our behavioral capacities.

In a series of studies, we probed the links between neural responses and a variety of visual behavioral measures, including visual search, visual masking, and visual working memory. This line of work points to the overarching result that there is a common representational structure across all of high-level visual cortex that underlies our ability to process object categories.

Processing multiple visual objects is limited by overlap in neural channels. Cohen, Konkle, Nakayama, Alvarez. PNAS, 2014.

Visual awareness is constrained by the representational architecture of the visual system. Cohen, Konkle, Nakayama, Alvarez. Journal of Cognitive Neuroscience, 2015

Visual search for object categories is predicted by the representational architecture of high-level visual cortex. Cohen, Nakayama, Alvarez, & Konkle. Journal of Neurophysiology, 2017.

[Video] Object-selective cortex shows distinct representational formats along the posterior-to-anterior axis: evidence from brain-behavior correlations. Magri & Konkle. Vision Sciences Society, 2020.
The real-world size of objects is a key property of internal object representations

One insight into the nature of object representation is to consider that objects are physical entities in a 3-dimensional world. This geometry places important constraints on how people experience and interact with objects of different sizes.

In a series of behavioral studies, we found that the real-world size of objects is a basic component of object representation. Just as objects have a canonical perspective, we showed they also have a canonical visual size (proportional to the log of their real-world size). Further, size-knowledge is automatically activated when an object is recognized.

Finally, we are exploring how this property of object representations emerges in development. We found that by the preschool years, kids are sensitive to the perceptual differences between big and small objects, and automatically activate real-world size information in a size-Stroop task.
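The canonical visual size relationship described above (preferred visual size proportional to the log of real-world size) can be illustrated with a toy calculation. The slope, intercept, and object sizes below are hypothetical values chosen only to show the logarithmic compression, not measurements from the papers:

```python
import math

def canonical_visual_size(real_world_size_cm, slope=1.0, intercept=0.0):
    """Toy model: canonical (preferred) visual size grows with the LOG of an
    object's real-world size. Slope and intercept here are hypothetical."""
    return slope * math.log(real_world_size_cm) + intercept

# Equal RATIOS of physical size map to equal STEPS in canonical visual size:
# key -> chair (15x larger) and chair -> building (15x larger) give the same jump.
for name, cm in [("key", 6), ("chair", 90), ("building", 1350)]:
    print(f"{name:10s} {cm:6d} cm -> canonical size {canonical_visual_size(cm):.2f}")
```

The design point this captures is that a log scaling compresses the enormous range of real-world sizes into evenly spaced preferred viewing sizes.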

Canonical visual size for real-world objects.
Konkle & Oliva. Journal of Experimental Psychology: Human Perception and Performance, 2011.

A Familiar Size Stroop Effect: Real-world size is an automatic property of object representation.
Konkle & Oliva. Journal of Experimental Psychology: Human Perception and Performance, 2012.

Real-world size is automatically encoded in preschoolers’ object representations.
Long, Moher, Carey, & Konkle. PsyArXiv.

Animacy and object size are reflected in perceptual similarity computations by the preschool years.
Long, Moher, Carey, & Konkle. SRCD.
How much can we remember about what we see?

Another way we investigate the nature of high-level visual representations is by understanding how and what we store about them in memory.

We discovered that people are capable of remembering thousands of visually-presented objects and scenes with much more detail than previously believed. This remarkable capacity for retaining highly-detailed memory traces relies on our existing conceptual knowledge: the more we know about the different kinds of objects, the less they interfere in memory.

The thesis emerging from this research is that one cannot fully understand memory capacity or memory processes without also determining the nature of representations over which they operate.

Selected Publications:
Visual long-term memory has a massive capacity for object details.
Brady, Konkle, Alvarez, & Oliva. PNAS 2008.

Conceptual distinctiveness supports detailed visual long-term memory.
Konkle, Brady, Alvarez, & Oliva. Journal of Experimental Psychology: General, 2010.

Scene memory is more detailed than you think: the role of scene categories in visual long-term memory.
Konkle, Brady, Alvarez, & Oliva. Psychological Science, 2010.

Compression in visual short-term memory: using statistical regularities to form more efficient memory representations.
Brady, Konkle, & Alvarez. Journal of Experimental Psychology: General, 2009.


Review:
A review of visual memory capacity: Beyond individual items and toward structured representations.
Brady, Konkle, & Alvarez. Journal of Vision, 2011.
current members:
Talia Konkle
Principal Investigator
Jeffery Andrade
Graduate Student
Nick Blauch
Post Doc (with Alvarez)
Fenil Doshi
Graduate Student (with Alvarez)
Jacob Prince
Graduate Student
Jeongho Park
Post Doc
Chris Hamblin
Graduate Student (with Alvarez)
Srijani Saha
Graduate Student (with Alvarez)
lab affiliates:
Andy Keller
Kempner Fellow
Binxu Wang
Kempner Fellow
Seda Akbiyik
Affiliated Graduate Student
Wilka Carvalho
Kempner Fellow
lab alumni:
Colin Conwell
Post Doc (with Alvarez)
Daniel Janini
Graduate Student
Emilie Josephs
Graduate Student
Leyla Tarhan
Graduate Student
Bria Long
Graduate Student
Caterina Magri
Graduate Student
Kasper Vinken
Post Doc (with Livingstone)
Dina Obeid
Post Doc
Ruosi Wang
Postdoctoral fellow
Aylin Kallmayer
Research Fellow
Arturo Deza
Post-Doc
Rocco Chiou
Visiting Scholar
Chen-Ping Yu
Post-doc
Xiuye Chen
Data Scientist / Post-Doc
Nastaran Arfaei
Research Fellow
Michael Cohen
Graduate Student
Katherine Gallagher
Research Scientist
PREPRINTS
Self-organized emergence of modularity, hierarchy, and mirror reversals from competitive synaptic growth in a developmental model of the visual pathway
Chandra, S., Khona, M., Konkle, T. & Fiete, I. (2024). bioRxiv.
How does the primate brain combine generative and discriminative computations in vision?
Peters, B., et al. (2024) bioRxiv. [tweet-thread]
Getting aligned on representational alignment
Sucholutsky et al. (2024) arXiv. [tweet-thread]
A Contrastive Coding Account of Category Selectivity in the Ventral Visual Stream
Prince, J., Alvarez, G., & Konkle, T. (2023). bioRxiv. [tweet-thread]
What can 1.8 billion regressions tell us about the pressures shaping high-level visual representation in brains and machines?
Conwell, C., Prince, J., Alvarez, G., & Konkle, T. (2023) bioRxiv. [tweet-thread]
Ultra-wide angle neuroimaging: insights into immersive scene representation
Park, J., Soucy, E., Segawa, J., Mair, R., & Konkle, T. (2023) bioRxiv. [tweet-thread] | [full-field fMRI method information]
Contributions of early and mid-level visual cortex to high-level object categorization
Kramer, L.E., Konkle, T., Chen, Y-C., Long, B., & Cohen, M. R. (2023) bioRxiv. [tweet-thread]
PUBLICATIONS
2024
Manipulating dropout reveals an optimal balance of efficiency and robustness in biological and machine visual systems
Prince, J., Fajardo, G., Alvarez, G., & Konkle, T. (2024) ICLR
2023
The neural code for 'face cells' is not face specific
Vinken, K., Prince, J. S., Konkle, T., Livingstone, M. (2023) Science Advances. [tweet-thread]
Pruning for Interpretable, Feature-Preserving Circuits in CNNs
Hamblin, C., Konkle, T., Alvarez, G. (2023). arXiv.
The Neuroconnectionist Research Programme
Doerig et al., (2023). Nature Reviews Neuroscience. [tweet-thread]
Dimensions underlying human understanding of the reachable world.
Josephs, E., Hebart, M., & Konkle, T. (2023) Cognition.
2022
General object-based features account for letter perception
Janini, D., Hamblin, C., Deza, A., & Konkle, T. (2022) PLOS Computational Biology. [tweet-thread]
Mid-level feature differences support early animacy and object size distinctions: Evidence from EEG decoding
Wang, R., Janini, D., & Konkle, T. (2022) Journal of Cognitive Neuroscience.
2021
Behavioral and Neural Representations en route to Intuitive Action Understanding
Tarhan, L., De Freitas, J., & Konkle, T. (2021) Neuropsychologia.
2020
Emergent Properties of Foveated Perceptual Systems
Deza, A. & Konkle, T. (2020) arXiv (pre-print only). [video]
Large-scale dissociations between views of objects, scenes, and reachable-scale environments in visual cortex.
Josephs, E., & Konkle, T. (2020) Proceedings of the National Academy of Sciences.
Reliability-Based Voxel Selection.
Tarhan, L., & Konkle, T. (2020). NeuroImage, 207, 116350. Open Science Repository | FAQ
2019
Animacy and object size are reflected in perceptual similarity computations by the preschool years.
Long, B., Moher, M., Carey, S. E., & Konkle, T. (2019) Visual Cognition, 27(5-8), 435-451. Open Science Repository
A Pokemon-sized window into the human brain.
Janini, D., & Konkle, T. (2019) Nature Human Behaviour: News and Views.
Perceptual dissociations among views of objects, scenes, and reachable spaces.
Josephs, E., & Konkle, T. (2019). Journal of Experimental Psychology: Human Perception and Performance.
Real-world size is automatically encoded in preschoolers’ object representations.
Long, B., Moher, M., Carey, S., & Konkle, T. (2019). Journal of Experimental Psychology: Human Perception and Performance.
2018
Mid-level visual features underlie the high-level categorical organization of the ventral stream.
Long, B., Yu, C.-P., & Konkle, T. (2018). Proceedings of the National Academy of Sciences.
The role of textural statistics vs. outer contours in deep CNN and neural responses to objects.
Long, B. & Konkle, T. (2018). Proceedings of the Cognitive Computational Neuroscience Conference.
2013 - 2017
A familiar-size Stroop effect in the absence of basic-level recognition.
Long, B., & Konkle, T. (2017). Cognition, 168, 234-242.
Visual search for object categories is predicted by the representational architecture of high-level visual cortex.
Cohen, M., Nakayama, K., Alvarez, G. A. & Konkle, T. (2017). Journal of Neurophysiology, 117 (1), 388-402.
Mid-level perceptual features distinguish objects of different real-world sizes.
Long, B., Konkle, T., Cohen, M., & Alvarez, G. A. (2016). Journal of Experimental Psychology: General, 145(1), 95-109. (GitHub)
Visual awareness is constrained by the representational architecture of the visual system.
Cohen, M., Konkle, T., Nakayama, K., & Alvarez, G. A. (2015). Journal of Cognitive Neuroscience. 27 (11), 2240-52.
Parametric Coding of the Size and Clutter of Natural Scenes in the Human Brain.
Park, S. J., Konkle, T. & Oliva, A. (2015). Cerebral Cortex, 25 (7), 1792-1805.
Processing multiple visual objects is limited by overlap in neural channels.
Cohen, M., Konkle, T., Rhee, J., Nakayama, K., & Alvarez, G. A. (2014). Proceedings of the National Academy of Sciences.
Tripartite Organization of the Ventral Stream by Animacy and Object Size.
Konkle, T., & Caramazza, A. (2013). Journal of Neuroscience, 33 (25), 10235-42.
Real-world objects are not represented as bound units: Independent forgetting of different object details from visual memory.
Brady, T. F., Konkle, T., Alvarez, G. A., & Oliva, A. (2013). Journal of Experimental Psychology: General, 142(3), 791-808.
Long-term memory has the same limit on fidelity as working memory.
Brady, T. F., Konkle, T., Gill, J., Oliva, A., & Alvarez, G. A. (2013). Psychological Science, 24 (6), 981-990.
2012 and earlier
A real-world size organization of object responses in occipito-temporal cortex.
Konkle. T., & Oliva, A. (2012). Neuron, 74(6), 1114-24.
A Familiar Size Stroop Effect: Real-world size is an automatic property of object representation.
Konkle, T., & Oliva, A. (2012). Journal of Experimental Psychology: Human Perception & Performance, 38, 561-9.
Canonical visual size for real-world objects.
Konkle, T. & Oliva, A. (2011). Journal of Experimental Psychology: Human Perception & Performance, 37(1):23-37.
A review of visual memory capacity: Beyond individual items and toward structured representations.
Brady, T. F., Konkle, T. & Alvarez, G. A. (2011). Journal of Vision, 11(5):4, 1-4.
Conceptual distinctiveness supports detailed visual long-term memory.
Konkle, T., Brady, T. F., Alvarez, G. A., & Oliva, A. (2010). Journal of Experimental Psychology: General, 139(3), 558-578.
Representing, Perceiving and Remembering the Shape of Visual Space.
Oliva, A., Park, S., & Konkle, T. (2010). Computational Vision in Neural and Machine Systems, Cambridge University Press, edited by Laurence R Harris and Michael Jenkin.
Scene memory is more detailed than you think: the role of scene categories in visual long-term memory.
Konkle, T., Brady, T. F., Alvarez, G. A., & Oliva, A. (2010). Psychological Science, 21(11), 1551-1556.
Sensitive period for a vision-dominated response in human MT/MST.
Bedny, M., Konkle, T., Pelphrey, K., Saxe, R., & Pascual-Leone, A. (2010). Current Biology, 20(21), 1900-6.
Compression in visual short-term memory: using statistical regularities to form more efficient memory representations.
Brady, T. F., Konkle, T., & Alvarez, G. A. (2009). Journal of Experimental Psychology: General, 138(4), 487-502.
What can crossmodal aftereffects reveal about neural representation and dynamics?
Konkle, T. & Moore, C. I. (2009). Communicative and Integrative Biology, 2(6), 479-481.
Motion Aftereffects Transfer Between Touch and Vision.
Konkle, T., Wang, Q., Hayward, V., & Moore, C. I. (2009). Current Biology, 19, 745-750.
Detecting changes in real-world objects: The relationship between visual long-term memory and change blindness.
Brady, T. F., Konkle, T., Oliva, A., & Alvarez, G. (2009). Communicative and Integrative Biology, 2:1, 1-3.
Visual long-term memory has a massive storage capacity for object details.
Brady, T. F., Konkle, T., Alvarez, G. A. & Oliva, A. (2008). Proceedings of the National Academy of Sciences USA, 105(38), 14325-9.
Tactile Rivalry Demonstrated with an Ambiguous Apparent-Motion Quartet.
Carter, O. L., Konkle, T., Wang, Q., Hayward, V., & Moore, C. I. (2008). Current Biology, 18(14), 1050-4.
Normative representation of objects: Evidence for an ecological bias in perception and memory.
Konkle, T., & Oliva, A. (2007). In D. S. McNamara & J. G. Trafton (Eds.), Proceedings of the 29th Annual Cognitive Science Society, (pp. 407-413), Austin, TX: Cognitive Science Society.
Searching in Dynamic Displays: Effects of configural predictability and spatio-temporal continuity.
Alvarez, G. A., Konkle, T., & Oliva, A. (2007). Journal of Vision, 7(14):12, 1-12.
Bilateral Pathways Do Not Predict Mirror Movements: A Case Report.
Verstynen, T. D., Spencer, R., Stinear, C. M., Konkle, T., Diedrichsen, J., Byblow, W. D., Ivry, R. B. (2007). Neuropsychologia, 45(4), 844-852.
Two types of TMS-induced Movement Variability After Stimulation of the Primary Motor Cortex.
Verstynen, T. D., Konkle, T., & Ivry, R. B. (2006). Journal of Neurophysiology, 96, 1018-1029.
download stimulus sets:
Action video clips

2 video sets of 60 everyday actions (.zip)


Tarhan & Konkle, 2020, Nature Communications.
Animacy x Size

60 small animals, 60 big animals,
60 small objects, 60 big objects (.zip)


Konkle & Caramazza, 2013, Journal of Neuroscience.
Big and Small Objects

200 big objects, 200 small objects (.zip)


Konkle & Oliva, 2012, Neuron.
"Massive Memory" Scene Categories

128 Scene categories with 1-64 exemplars (.zip)

Konkle, Brady, Alvarez, & Oliva, 2010, Psychological Science
8 "Classic" Categories

30 each of Bodies, Buildings, Cars, Cats,
Chairs, Faces, Hammers, Phones (.zip)


Cohen et al., Journal of Neurophysiology, 2017.
Object quartets: State x Exemplar and State x Color
100 sets of 2 states x 2 exemplars (.zip)
100 sets of 2 states x 2 colors (.zip)

Brady, Konkle, Alvarez, & Oliva, 2013, JEP:General.
Object Size Stroop

Sample congruent and incongruent displays from two experiments (.zip)


Konkle & Oliva, 2012, JEP:HPP.
Current Open Positions:
Post-Doctoral Fellow

Topic area: Cognitive Neuroscience & fMRI
Seeking candidates with expertise/familiarity with fMRI, and interest in natural scene representation, object representation, visual attention, retinotopy, vision-language, visual development, and/or other topics related to high-level vision. The candidate will have the opportunity to learn more about deep neural network modeling; interest/background in this area is a plus but not required. More detailed information coming soon! If interested, send me an email with "Post Doc Candidate - [your name]" in the subject line, with your CV, a paper, and notes about your scientific interests and timeline.
Lab Research Assistant and Manager

Seeking a candidate to join and support the research and activities of the broader Vision lab (PIs: Konkle & Alvarez). The position involves 1/3 research (supporting active ongoing projects in the lab); 1/3 administration (organizing lab meetings, coordinating interdisciplinary events); and 1/3 personal development (attending seminars, taking classes, applying for graduate schools, etc.). The position is for 1-2 years. More detailed information coming soon! If interested in the meantime, send me an email with "Lab RA Candidate - [your name]" in the subject line, your CV, and tell me a little about your background, your interests, and what drew you to this position.
--

To get to know us better:

Explore the research page, watch one of my recent talks, and check out some of the videos of our presentations. Reach out to the lab alumni to learn more about the lab climate, my mentorship style, and my speedy skills on our lab Slack. And read one of our recent papers that aligns with your interests!

My lab is part of the joint Harvard Vision Sciences lab, co-led with Prof. George Alvarez, and you can explore more about the broader Vision Lab here! I value working hard but also maintaining a workable balance across the various life fronts. I am a mom of two wonderful daughters (ages 7 and 9), and a cancer survivor. As a lab, we value normalizing mistakes and learning from them, developing thoughtful and effective systems for doing high-quality science, and recognizing the value of many perspectives, from Reviewer #2 to those with different racial and ethnic backgrounds, gender orientations, and other identities.