# Using negative log-likelihoods of TensorFlow Distributions as Keras losses

Nothing here yet. This short post will eventually summarize its supplementary notebook; for now, please go there instead.

# A Probabilistic Interpretation of CycleGAN as Approximate Bayesian Inference with Implicit Distributions

Draft

## Sketch

1. Revisit Probabilistic PCA [1], Factor Analysis.
2. Generalize to deep latent Gaussian models (DLGMs) [2] and describe how inference is done: amortized variational inference / stochastic backpropagation with inference networks.
3. Generalize amortized variational inference to implicit distributions: adversarial autoencoders, BiGAN/ALI, AVB [5] [6].
4. Formulate CycleGAN [3] as a deep latent Gaussian model with an implicit prior distribution, where inference is done using amortized variational inference with an implicit approximate posterior distribution.
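As a reference point for step 1, recall the generative model of probabilistic PCA, which the subsequent steps progressively generalize:

```latex
\begin{aligned}
\mathbf{z} &\sim \mathcal{N}(\mathbf{0}, \mathbf{I}), \\
\mathbf{x} \mid \mathbf{z} &\sim \mathcal{N}(\mathbf{W}\mathbf{z} + \boldsymbol{\mu}, \sigma^2 \mathbf{I}).
\end{aligned}
```

Factor analysis relaxes the isotropic noise covariance $\sigma^2 \mathbf{I}$ to a diagonal matrix $\boldsymbol{\Psi}$, and DLGMs replace the linear map $\mathbf{W}\mathbf{z} + \boldsymbol{\mu}$ with a deep neural network.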

# Inference in Variational Autoencoders with Different Monte Carlo Sample Sizes (Addendum)

Draft

This is a short addendum to a previous post, demonstrating how to perform inference in variational autoencoders with different Monte Carlo sample sizes using the basic modular framework we developed in an earlier post.

The negative evidence lower bound (ELBO) plotted after each training epoch for various combinations of batch and Monte Carlo sample sizes.

# Inference in Variational Autoencoders with Different Monte Carlo Sample Sizes

In a previous post, I demonstrated how to leverage Keras' modular design to implement variational autoencoders in a way that makes it easy to tweak hyperparameters, adapt it to other related models, and extend it to the more sophisticated methods proposed in current research.

Recall that we optimize the generally intractable evidence lower bound (ELBO) using reparameterization gradients, which approximate the gradient of an expectation using Monte Carlo (MC) samples. In their original paper, Kingma and Welling (2014) [1] remark that an MC sample size of 1 is adequate for a sufficiently large batch size (~100). Of course, this is highly dependent on the problem (more specifically, the likelihood). In general, it is important to experiment with different MC sample sizes and observe the effect this has on training stability. In this short post, we demonstrate how to tweak the MC sample size under our basic framework.
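To make the role of the MC sample size concrete, here is a minimal NumPy sketch of drawing multiple reparameterized samples per data point. The function name `reparameterize` and the convention of a leading sample axis are illustrative choices for this sketch, not part of the framework from the original post:

```python
import numpy as np

def reparameterize(mu, log_var, mc_samples=1, seed=None):
    """Draw z = mu + sigma * eps with eps ~ N(0, I), the reparameterization trick.

    Returns an array of shape (mc_samples,) + mu.shape, i.e. one leading
    axis indexing the Monte Carlo samples.
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((mc_samples,) + mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Example: a batch of 100 points with a 2-dimensional latent space
mu = np.zeros((100, 2))
log_var = np.zeros((100, 2))

z = reparameterize(mu, log_var, mc_samples=10, seed=42)
print(z.shape)  # (10, 100, 2)
```

Averaging the ELBO estimate over the leading axis trades extra computation per step for a lower-variance gradient estimate.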

# Implementing Variational Autoencoders in Keras: Beyond the Quickstart Tutorial

Keras is awesome. It is a very well-designed library that clearly abides by its guiding principles of modularity and extensibility, enabling us to easily assemble powerful, complex models from primitive building blocks. This has been demonstrated in numerous blog posts and tutorials, in particular, the excellent tutorial on Building Autoencoders in Keras. As the name suggests, that tutorial provides examples of how to implement various kinds of autoencoders in Keras, including the variational autoencoder (VAE) [1].

Visualization of 2D manifold of MNIST digits (left) and the representation of digits in latent space colored according to their digit labels (right).

Like all autoencoders, the variational autoencoder is primarily used for unsupervised learning of hidden representations. However, variational autoencoders are fundamentally different from your usual neural network-based autoencoder in that they approach the problem from a probabilistic perspective. They specify a joint distribution over the observed and latent variables, and approximate the intractable posterior conditional density over the latent variables with variational inference, using an inference network [2] [3] (or, more classically, a recognition model [4]) to amortize the cost of inference.
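Concretely, training amounts to minimizing the negative evidence lower bound (ELBO), which splits into a reconstruction term and a regularization term. Below is a minimal NumPy sketch of the two terms, assuming a diagonal Gaussian approximate posterior and a Bernoulli likelihood; the function names are illustrative, not from any particular library:

```python
import numpy as np

def kl_diag_gaussian(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    # summed over the latent dimensions
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)

def bernoulli_nll(x, x_decoded):
    # Bernoulli negative log-likelihood (binary cross-entropy),
    # summed over pixels; clip to avoid log(0)
    eps = 1e-7
    p = np.clip(x_decoded, eps, 1 - eps)
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p), axis=-1)

# For one sample z ~ q(z|x), the negative ELBO is
#   bernoulli_nll(x, decode(z)) + kl_diag_gaussian(mu, log_var),
# averaged over the batch.
```

The KL term has this closed form precisely because both the prior and the approximate posterior are Gaussian; only the reconstruction term needs Monte Carlo estimation.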

# Visualizing the Latent Space of Vector Drawings from the Google QuickDraw Dataset with SketchRNN, PCA and t-SNE

This is the third part in a series of notes on my exploration of the recently released Google QuickDraw dataset [1], using the concurrently released SketchRNN model.

The QuickDraw dataset is curated from the millions of drawings contributed by over 15 million people around the world who participated in the "Quick, Draw!" A.I. Experiment, in which they were given the challenge of drawing objects belonging to a particular class (such as "cat") in under 20 seconds.

SketchRNN is an impressive generative model that was trained to produce vector drawings using this dataset. It was of particular interest to me because it cleverly assembles many of the tools and techniques recently developed in machine learning, such as variational autoencoders, HyperLSTM (a HyperNetwork for LSTM), autoregressive models, layer normalization, recurrent dropout, and the Adam optimizer, among others.

# Save Matplotlib Animations as GIFs

In a previous post, I outlined how to embed a Matplotlib animation directly in the Jupyter Notebook as an HTML5 video. In this notebook, we take the same animation and save it as a GIF using ImageMagick. First, let us reproduce the FuncAnimation object from the notebook.

```python
%matplotlib inline
```

```python
import numpy as np
import matplotlib.pyplot as plt

from matplotlib import animation, rc
from IPython.display import HTML, Image
```

```python
# equivalent to rcParams['animation.html'] = 'html5'
rc('animation', html='html5')
```

```python
# First set up the figure, the axis, and the plot element we want to animate
fig, ax = plt.subplots()

ax.set_xlim(( 0, 2))
ax.set_ylim((-2, 2))

line, = ax.plot([], [], lw=2)
```

```python
# initialization function: plot the background of each frame
def init():
    line.set_data([], [])
    return (line,)
```

```python
# animation function. This is called sequentially
def animate(i):
    x = np.linspace(0, 2, 1000)
    y = np.sin(2 * np.pi * (x - 0.01 * i))
    line.set_data(x, y)
    return (line,)
```

```python
# call the animator. blit=True means only re-draw the parts that
# have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=100, interval=20, blit=True)
```

```python
anim
```

Now, we just need to save the animation instance with `writer='imagemagick'`. But before we do that, we first make sure ImageMagick has been properly installed on our system.

```python
!brew install imagemagick
```

Now we can go ahead and save it as a GIF.

```python
anim.save('../../files/animation.gif', writer='imagemagick', fps=60)
```

Finally, let's read it back in and display it to make sure it saved as expected.

```python
Image(url='../../../animation.gif')
```

# Walkthrough: Deploying a Flask app with Redis Queue (RQ) Workers and Dashboard using Kubernetes

In this article, we walk through the steps to deploy a simple Flask app together with Redis Queue (RQ), a minimalistic job queue built on Redis, using Kubernetes to provision and manage the deployments and services required for our stack.
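As a taste of what this involves, here is a minimal sketch of a Kubernetes Deployment for the RQ workers. The image name `flask-rq-app` and the Redis Service name `redis` are hypothetical placeholders, not names from the actual walkthrough:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rq-worker
spec:
  replicas: 2          # run two worker pods
  selector:
    matchLabels:
      app: rq-worker
  template:
    metadata:
      labels:
        app: rq-worker
    spec:
      containers:
      - name: rq-worker
        image: flask-rq-app   # hypothetical application image containing the job code
        # consume jobs from the queue, connecting via the "redis" Service
        command: ["rq", "worker", "--url", "redis://redis:6379"]
```

Because workers are stateless consumers, scaling job throughput is then just a matter of adjusting `replicas`.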

# Re-implementing the Kubernetes Guestbook Example with Flask and NGINX

The official Kubernetes walkthrough guides often point to the guestbook application as a quintessential example of how a simple but complete multi-tier web application can be deployed with Kubernetes. As described in the README, it consists of a web frontend, a redis master (for storage), and a replicated set of redis 'slaves'.

This seemed like an ideal starting point for deploying my Flask applications, which use a similar stack and also make use of redis master/slaves. The difficulty I found with readily using this example as a starting point is that the frontend is implemented in PHP, which differs considerably from modern paradigms (Node.js, Flask/Django, Rails, etc.). As described in the README:

> A frontend pod is a simple PHP server that is configured to talk to either the slave or master services, depending on whether the client request is a read or a write. It exposes a simple AJAX interface, and serves an Angular-based UX. Again we'll create a set of replicated frontend pods instantiated by a Deployment — this time, with three replicas.

I figured re-implementing the frontend pod with Flask would require minimal changes: the UI would remain mostly the same, and the actual interaction with the redis master/slaves is quite trivial.

# A Better Approach For Initializing New Nikola Themes (since v7.7.5)

A few months ago, I wrote a post on Creating a Nikola theme with Sass-compiled Bootstrap. Since then, Nikola 7.7.5 has added several new features which make it less tedious to get started with your custom theme.

## Initializing the Theme

First, I initialize a theme named tiao, which automatically creates the necessary directories and files for me.

```console
$ nikola theme --new=tiao --engine=jinja --parent=bootstrap3-jinja
[2016-05-18T02:29:49Z] INFO: theme: Creating theme tiao with parent bootstrap3-jinja and engine jinja in themes/tiao
[2016-05-18T02:29:49Z] INFO: theme: Created directory themes/tiao
[2016-05-18T02:29:49Z] INFO: theme: Created file themes/tiao/parent
[2016-05-18T02:29:49Z] INFO: theme: Created file themes/tiao/engine
[2016-05-18T02:29:49Z] INFO: theme: Theme themes/tiao created successfully.
[2016-05-18T02:29:49Z] NOTICE: theme: Remember to set THEME="tiao" in conf.py to use this theme.

$ tree themes/tiao
themes/tiao
├── engine
└── parent

0 directories, 2 files
```