Low-cost wearable cameras are now entering the mainstream. Unlike consumer-style photography, which captures a highly biased sample of the visual world, wearable cameras allow people to record and share their everyday lives from a first-person, "egocentric" perspective. In addition to applications like providing memory aids for dementia patients, serving as virtual assistants for people with visual impairments, and enhancing police security and accountability, wearable cameras also let people create visual "life-logs" of their day-to-day lives, potentially opening new data sources for studying the world and human behavior. However, these cameras also raise challenges, including the huge volume of imagery they produce (typically thousands of photos or dozens of hours of video per day) and the significant privacy concerns they create. I'll describe two lines of our work on using computer vision to automatically understand and manage first-person imagery. The first targets consumer applications, where our goal is to develop automated classifiers that help categorize life-logging images along several dimensions. The second, a collaboration with psychology researchers, uses computer vision with wearable cameras to study how children interact with other people and with their environment, in order to better understand how they learn. Despite their different goals, these applications share the common challenge of robustly recognizing image content in noisy, highly dynamic first-person imagery.