Is it raining at Tanah Merah?
Is it raining at Pasir Ris?
Is it raining at Changi Airport?
We don't directly measure wind patterns, humidity, etc.
If we know it's raining in Pasir Ris and Changi, how likely is it to rain on campus in the next 5 minutes?
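A question like this can be answered by counting within a small joint distribution. The sketch below uses entirely made-up counts (not real weather data) for three binary rain variables to show how the conditional probability would be computed:

```python
# Toy inference over three binary rain variables.
# All counts below are hypothetical, for illustration only.
joint_counts = {
    # (pasir_ris, changi, campus): number of observations
    (True,  True,  True):  40,
    (True,  True,  False): 10,
    (True,  False, True):  15,
    (True,  False, False): 20,
    (False, True,  True):  12,
    (False, True,  False): 18,
    (False, False, True):   5,
    (False, False, False): 80,
}

def p_campus_given(pasir_ris, changi):
    """P(rain on campus | rain at Pasir Ris, rain at Changi), by counting."""
    total = sum(n for (p, c, _), n in joint_counts.items()
                if p == pasir_ris and c == changi)
    rainy = sum(n for (p, c, campus), n in joint_counts.items()
                if p == pasir_ris and c == changi and campus)
    return rainy / total

p_campus_given(True, True)  # with these toy counts: 40 / 50 = 0.8
```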

As we collect data, our guess for $P(X|\theta)$ gets more accurate
You collect observations to "fit" the probability distribution of each node
The less flexibility in your model, the less data you need
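"Fitting" a node can be as simple as estimating a Bernoulli parameter from observations; the estimate sharpens as data accumulates. A minimal sketch (the true probability here is a hypothetical value, not measured):

```python
import random

random.seed(0)
true_p = 0.3  # hypothetical true P(rain) at one node

def fit_bernoulli(samples):
    """Maximum-likelihood estimate of P(rain) from binary observations."""
    return sum(samples) / len(samples)

# More observations -> estimate closer to the true parameter
for n in (10, 100, 10_000):
    data = [random.random() < true_p for _ in range(n)]
    print(n, round(fit_bernoulli(data), 3))
```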
We can view these techniques as computational graphs
Each neuron has a non-linear response to the weighted sum
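A single such neuron is just a weighted sum pushed through a non-linearity; here is a minimal sketch using a sigmoid activation (one common choice):

```python
import math

def neuron(inputs, weights, bias):
    """A single neuron: non-linear (sigmoid) response to the weighted sum."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # squashes z into (0, 1)

out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)  # a value strictly between 0 and 1
```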
How could we make it work?
Just use a lot of big layers ¯\_(ツ)_/¯
During training, we tweak convolutional kernels that capture key features of our 2-D data
https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
Each neuron has a feedback loop, so it can remember previous inputs in the sequence
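That feedback loop can be sketched with a single scalar hidden state: each step mixes the new input with the previous state, so earlier inputs keep influencing later outputs. The weights below are arbitrary illustrative values:

```python
import math

def rnn_step(x, h, w_x, w_h, b):
    """One recurrent step: new state = tanh(input term + feedback term)."""
    return math.tanh(w_x * x + w_h * h + b)

h = 0.0  # initial hidden state
for x in [1.0, 0.0, 0.0]:  # only the first input is non-zero
    h = rnn_step(x, h, w_x=1.5, w_h=0.9, b=0.0)
# h is still well above zero: the first input lingers via the feedback loop
```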
By BiObserver - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=43992484
http://blog.fastforwardlabs.com/post/148842796218/introducing-variational-autoencoders-in-prose-and
Gather lots of descriptions of "good" design software, and train the system to produce text with similar product features
Treat design principles as conditional probabilities over sets of features, and hope the latent space picks up on that
What is your $X$? What are your $\theta$s?
Does your statistical model have a lot of known structure/dependence?
Fast, lots of examples, backed by Google
Links!