
Friday, August 25, 2017

"I think my code quality improved a lot. At Google", - PhD Research Intern Philip Haeusser

Today’s blog interviews Philip Haeusser, a PhD Research Intern at Google. Read on to learn about his projects, publishing at Google, coding, and his internship's impact. Enjoy! 

So tell us about yourself and your PhD topic…
Hi! My name is Philip, and I’m a third-year PhD student in computer science at TU Munich, supervised by Daniel Cremers. I am working in the field of computer vision, the discipline where we teach computers to understand images and videos. To a computer, images and videos are nothing but a huge collection of meaningless numbers. If you represent them as colors, a human is immediately able to tell what’s in the picture. 
 
In order to get a computer to achieve the same, I train neural networks — a family of models that can be interpreted as instances of a “mini visual cortex.” The goal is to map the many numbers that make up an image to something more meaningful, such as a class label like “cat.” Neural networks are amazing at this. I have worked on problems like optical flow (“what changes from one video frame to the next?”) and domain adaptation [“how can we use knowledge (labels) from one domain (e.g. handwritten images) on another domain (e.g. house numbers from Google Street View)?”].
When I’m not doing research, I work on my YouTube channel “Phil’s Physics” where I present experiments and talk about science. 
 
How did you get to work in this area?
In 2014, I was completing my Master’s in physics at the University of California, Santa Cruz. I was part of an interdisciplinary team working on retina implants for blind people. In one of our experiments, we had to deal with a lot of data that was very expensive to get, but we couldn’t use all of it because our data-processing pipeline was not sophisticated enough. So I started to read about machine learning and neural networks. I was immediately hooked and reached out to professors working in this area. It was a great honor to be invited to present my work to Daniel Cremers, who then offered me a PhD position at his chair.
 
Why did you apply for an internship at Google and how supportive was your PhD advisor?
The field of deep learning is moving very fast. Almost every week, a new paper on some groundbreaking neural network or training trick appears. More often than not, the authors work at Google. That got me interested in the kind of work that Google is doing in this field. At a summer school, I met Olivier Bousquet, who gave an amazing talk about the Google Brain team. He told me about research internships at Google, so I applied. My PhD advisor liked the idea, because it’s always good to get new perspectives, to connect with people and to engage in exchange, particularly in a new field like deep learning. Plus, Google has the resources to facilitate experiments that are computationally infeasible at many universities.
 
What project was your internship focused on?
I had the honor of working with Alexander Mordvintsev, one of the creators of DeepDream. The project was a novel method for training neural networks with unlabeled data, i.e. semi-supervised learning.
We developed a new method that we called “Learning by Association.” It’s similar to the “association game,” where you’re told a word and you respond with the first thing that you associate with it. After a few “iterations,” you usually get very funny “association chains.”
We did something similar: we trained a neural network to produce representations (neural activation patterns) that allow for associations, too, in this case associations from labeled data to unlabeled data. Imagine an association from an example in the labeled batch to an example in the unlabeled batch, followed by a second association from unlabeled back to labeled data. That is an “association cycle.” You can now compare the label of the example you ended up at with the label of the example at the beginning of the cycle. The goal is to make association cycles consistent, meaning that the two labels are the same. We formulated this as a cost function and showed that this technique works extremely well for training classification networks with less labeled data.
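
To make this concrete, here is a minimal NumPy sketch of such a cycle-consistency cost. It is not Philip’s actual implementation: the function and variable names, the embedding shapes, and the toy data are illustrative assumptions, and the published method additionally combines this term with a “visit” term and a standard classification loss, which are omitted here.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def association_cycle_loss(emb_labeled, labels, emb_unlabeled):
    # Similarity between every labeled and every unlabeled embedding.
    sim = emb_labeled @ emb_unlabeled.T      # shape (L, U)
    p_lu = softmax(sim, axis=1)              # association step: labeled -> unlabeled
    p_ul = softmax(sim.T, axis=1)            # association step: unlabeled -> labeled
    p_cycle = p_lu @ p_ul                    # (L, L) round-trip (cycle) probabilities

    # A cycle is "consistent" if it ends at an example with the same label.
    same_label = (labels[:, None] == labels[None, :]).astype(float)
    target = same_label / same_label.sum(axis=1, keepdims=True)

    # Cross-entropy between the round-trip distribution and the consistency target.
    return -np.mean(np.sum(target * np.log(p_cycle + 1e-8), axis=1))

# Toy usage with random embeddings, purely for illustration.
rng = np.random.default_rng(0)
emb_l = rng.normal(size=(8, 16))       # 8 labeled examples, 16-dim embeddings
emb_u = rng.normal(size=(20, 16))      # 20 unlabeled examples
labels = rng.integers(0, 4, size=8)    # 4 classes
print(association_cycle_loss(emb_l, labels, emb_u))

During training, a loss of this kind would be minimized jointly with the other terms by backpropagating through the network that produces the embeddings, so that labeled and unlabeled examples of the same class end up associating with each other.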
 
