Hello! This is the website of Adam Lesnikowski. I am very interested in machine learning, statistical learning theory, active learning, neural networks and AI, especially applied to computer vision and other web-scale data problems.
Since 2017, I've been working at Nvidia in Santa Clara, CA. I am a machine learning scientist working on problems related to creating optimal training and test data sets and learning with limited labels. Towards this, I'm using techniques from the active learning, uncertainty in AI, and generative modeling communities. I'm very excited to be working on these problems with a great and passionate group of people!
Here's a slightly out-of-date resume.
Deep Active Learning, talk at the GPU Technology Conference, March 2018.
Predicting Prices for House Shares using Deep Convolutional Neural Networks, with Rong Yuan, Genevieve Patterson, 2016.
How Much Did it Rain?: Predicting Real Rainfall Totals Based on Polarimetric Radar Data, paper from a project with Peter Bartlett and Alexei Efros, 2015.
NP-Completeness Papers by Cook, Levin and Karp, P =?NP, and a Lost Letter, slides with Justine Sherry, UC Berkeley CS 294, 2014.
A Geometric Interpretation of the Metaphysics of the Tractatus, paper based on project with Professor Paolo Mancosu, 2014.
Reasoning About an Ordering of Theories, my undergraduate thesis, with Warren Goldfarb, on a modal logic that captures all correct reasoning about mathematical interpretations.
From 2012 to 2016 I was in the Ph.D. program at UC Berkeley, where I worked on set theory, mathematical logic, large cardinals, and rationality. From 2009 to 2011, I was in the M.Sc. program at UvA's Institute for Logic, Language and Computation, studying mathematical logic, theoretical CS, and AI. From 2004 to 2009, I was at Harvard University, where I studied mathematics and philosophy.