In our machine learning lab, we’ve accumulated tens of thousands of training hours across numerous high-powered machines. The computers weren’t the only ones to learn a lot in the process, though: we ourselves have made a lot of mistakes and fixed a lot of bugs. Here we present some practical tips for training deep neural … More Practical Advice for Building Deep Neural Networks
Our work for the International Joint Conference on Artificial Intelligence (Australia, 2017) was picked up by a number of media outlets. Read some of our favorites. … More What Can You Do with a Rock?
The physical world is filled with constraints. You can open a door, but only if it isn’t locked. You can douse a fire, but only if a fire is present. Research in our laboratory shows that this common-sense knowledge can be encapsulated in word vectors and used to improve agent performance on linguistic reasoning tasks. … More Vectors for Common-sense Reasoning
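The idea of encoding a physical precondition in word vectors can be sketched in a few lines. This is a toy illustration only, not our lab's actual method: the three-dimensional "word vectors" and the `precondition_satisfied` helper below are made up for demonstration, standing in for real learned embeddings.

```python
# Toy sketch: score whether an action's precondition holds by checking
# how well the action's vector aligns with the object-state's vector.
# All vectors here are illustrative, hand-picked values, not learned ones.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-d embeddings for demonstration only.
vectors = {
    "open":     np.array([1.0, 0.2, 0.0]),
    "unlocked": np.array([0.9, 0.3, 0.1]),
    "locked":   np.array([-0.8, 0.1, 0.2]),
}

def precondition_satisfied(action, state, threshold=0.5):
    """Judge an action feasible when its vector aligns with the state's."""
    return cosine(vectors[action], vectors[state]) > threshold

print(precondition_satisfied("open", "unlocked"))  # True: vectors nearly parallel
print(precondition_satisfied("open", "locked"))    # False: vectors point apart
```

The point is only that "can open, unless locked" reduces to geometry once states and actions live in the same vector space; a real system would learn those vectors from text rather than hard-code them.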
If you knew what I was going to do next, how would your actions change?
Theory of mind is an agent’s ability to model the intents, beliefs, and desires of others. This is a difficult task because another agent’s mental state is hidden and can only be approximated from observations, a process that carries a high degree of uncertainty. … More Robots that Know What You’re Thinking
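One standard way to approximate a hidden mental state from observations is a Bayesian belief update. The sketch below is a minimal illustration under assumed numbers, not our robots' actual model: the goals, actions, and likelihood table are invented for the example.

```python
# Minimal sketch: maintain a belief over another agent's hidden goal,
# updating with Bayes rule after each observed action.
goals = ["fetch_coffee", "fetch_tea"]
belief = {g: 0.5 for g in goals}  # uniform prior over the hidden intent

# Assumed P(observed action | goal): how strongly each goal predicts each move.
likelihood = {
    ("move_to_kitchen",      "fetch_coffee"): 0.9,
    ("move_to_kitchen",      "fetch_tea"):    0.7,
    ("grab_mug",             "fetch_coffee"): 0.8,
    ("grab_mug",             "fetch_tea"):    0.8,
    ("use_espresso_machine", "fetch_coffee"): 0.9,
    ("use_espresso_machine", "fetch_tea"):    0.05,
}

def update(belief, observation):
    """Bayes rule: posterior is proportional to likelihood times prior."""
    posterior = {g: likelihood[(observation, g)] * p for g, p in belief.items()}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

for obs in ["move_to_kitchen", "grab_mug", "use_espresso_machine"]:
    belief = update(belief, obs)
print(belief)  # belief mass shifts sharply toward "fetch_coffee"
```

The uncertainty the post mentions lives in the posterior itself: ambiguous actions like `grab_mug` leave the belief nearly unchanged, while diagnostic ones like `use_espresso_machine` collapse it.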
Faculty advisor David Wingate has received a National Science Foundation CAREER grant for his work combining Bayesian models with probabilistic programming. … More Hey Siri, can we simplify?
Probabilistic programming has become a hot buzzword lately, but if you’re like me, you’ll learn to pronounce all the syllables long before you understand what it’s good for. This post gives a layman’s introduction, plus a few helpful links. … More What is Probabilistic Programming, Anyway?
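If you want the one-paragraph version of what it's good for: you write an ordinary program that generates data, condition it on the data you actually saw, and ask for the posterior over the program's random choices. The toy below demonstrates that idea with brute-force rejection sampling in plain Python; real probabilistic programming systems (Stan, Pyro, Church, and friends) automate the inference far more efficiently, so treat this as an illustration of the concept, not of any particular system.

```python
# A probabilistic "program" in miniature: a generative model plus a
# condition, answered by rejection sampling.
import random

random.seed(0)

def model():
    """Generative program: draw a coin bias, then flip the coin 10 times."""
    bias = random.random()                            # uniform prior on bias
    flips = [random.random() < bias for _ in range(10)]
    return bias, flips

# Condition on the data we observed: 9 heads out of 10.
# Keep only the runs of the program consistent with that observation.
accepted = []
while len(accepted) < 2000:
    bias, flips = model()
    if sum(flips) == 9:
        accepted.append(bias)

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))  # close to the analytic answer 10/12 ≈ 0.83
```

The appeal is the separation of concerns: you describe the world in code, and the inference engine, not you, does the probability theory.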