Author Archives: dbgannon

About dbgannon

Dennis Gannon is a computer scientist involved with the application of cloud supercomputing to data analysis for science. From 2008 until he retired in 2014 he was with Microsoft Research as the Director of Cloud Research Strategy. In this role he helped provide access to Azure cloud computing resources to over 300 projects in the research and education community. Gannon is professor emeritus of Computer Science at Indiana University and the former science director for the Indiana Pervasive Technology Labs. His research interests include cloud computing, large-scale cyberinfrastructure, programming systems and tools, distributed computing, parallel programming, data analytics and machine learning, computational science, problem solving environments and performance analysis of scalable computer systems. His publications include more than 100 refereed articles and three co-edited books. Gannon received his PhD in computer science from the University of Illinois Urbana-Champaign and a PhD in mathematics from the University of California, Davis.

Deep Learning on Graphs (a Tutorial)

This tutorial gives an overview of some of the basic work that has been done over the last five years on the application of deep learning techniques to data represented as graphs. Convolutional neural networks and transformers have been instrumental in the progress on computer vision and natural language understanding. Here we look at the generalizations of these methods to solving problems where the data is represented as a graph. We illustrate this with examples including predicting research topics by using the Microsoft co-author graph or the more heterogeneous ACM author-paper-venue citation graph. This latter case is of interest because it allows us to discuss how these techniques can be applied to the massive heterogeneous knowledge networks being developed and used by search engines and smart, interactive digital assistants. Finally, we look at how knowledge is represented by families of graphs. The example we use here is from the Tox21 dataset of chemical compounds and their interaction with important biological pathways and targets.

The full tutorial is on our other site: https://cloud4scieng.org/2020/08/28/deep-learning-on-graphs-a-tutorial/

Predicting Tomorrow's Temperature with RNNs and Gaussian Processes

Abstract

This note provides a gentle introduction to streaming data regression and prediction using Recurrent Neural Networks and Gaussian processes.  We look at two examples.  In the first we look at the daily high temperature for two years at three different, but nearby, NOAA weather stations. In the second example we look at daily confirmed infections of the Coronavirus in several US states.  This study shows that if you have a predictable natural pattern that changes in predictable ways from season to season one can make reasonable predictions, but in the case of a complex phenomenon such as the Covid19 pandemic, simple models do not always work well.

In a previous post we looked at anomaly detection in streaming data using sophisticated cloud-based tools, but here we use simple “classic” ML tools that you can run on a laptop.  We expect the seasoned data scientist will not get much out of this article, but if the reader is not familiar with recurrent networks or Gaussian processes this may be a reasonable introduction.  At the very least, if you are interested in studying the Coronavirus infection data, we show you how to search it using Google’s BigQuery service.

Building a Very Simple LSTM Recurrent Neural Network

Recurrent Neural Networks were invented to capture the dynamic temporal behaviors in streams of data.  They provided some early success in areas related to natural language understanding, but in that area they have been superseded by Transformers, which we discussed in an earlier article.   We will construct an extremely simple recurrent network and ask it to predict the temperature tomorrow given the temperature for the last few days.

We will train it on the average daily temperature over 2 years measured at three nearby weather stations in the Skagit valley in Washington state.   The data comes from the Global Summary of the Day (GSOD) weather from the National Oceanographic and Atmospheric Administration (NOAA) for 9,000 weather stations.  The three stations are Bellingham Intl, Padilla Bay Reserve and Skagit Regional.  The data is available in Google’s BigQuery service named “bigquery-public-data.noaa_gsod.stations”. (We will show how to search BigQuery in the next section.)

Averaging the daily temperature for 6 years (2 years each for 3 stations) we get data like Figure 1 below.

f1

                           Figure 1.  Average daily high temperature for 2 years at 3 stations.

RNNs work by having a “memory” state tensor that encodes the sequence of inputs that have been seen so far.  The input to the RNN is a word or signal, along with the state of the system based on words or signals seen so far; the output is a predicted value and a new state of the system, as shown in figure 2 below.

f2

Figure 2.  Basic Recurrent Neural network with input stream x and output stream h

Many variations of the basic RNN exist. One challenge for RNNs is ensuring that the state tensors retain enough long-term memory of the sequence so that patterns are remembered. Several approaches have been used for this purpose.  One popular method is the Long Short-Term Memory (LSTM) version that is defined by the following equations, where the input sequence is x, the output is h and the state vector is the pair [c, h].

f3

Where sigma is the sigmoid function and W is the learned tensor.   We are not going to use these equations explicitly because PyTorch has a built-in version that we will use.  All we need to do is provide the dimension of the input (which is a sequence of scalar values, so that is 1) and the dimension of the state vector; c_h is a tuple of tensors of size (arbitrarily chosen here to be) 100. We also have a linear layer that will map the final value of h down to the one-dimensional output.  Our simple LSTM neural network is

f4
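The notebook's network definition appears above as an image. As a rough guide, a minimal PyTorch sketch of such a model, built on the built-in nn.LSTM, might look like this (the class and parameter names are our own, not necessarily the notebook's):

import torch
import torch.nn as nn

class SimpleLSTM(nn.Module):
    # One LSTM layer over scalar inputs with a 100-dimensional state,
    # followed by a linear layer mapping the final hidden value to one output.
    def __init__(self, hidden_size=100):
        super().__init__()
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size)
        self.linear = nn.Linear(hidden_size, 1)

    def forward(self, seq):
        # seq: a 1-D tensor of consecutive daily temperatures
        h0 = torch.zeros(1, 1, self.hidden_size)
        c0 = torch.zeros(1, 1, self.hidden_size)
        out, _ = self.lstm(seq.view(len(seq), 1, 1), (h0, c0))
        # predict the next day's value from the last hidden state
        return self.linear(out[-1].view(1, -1))[0]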

It is important to understand the role of the input x as a sequence.   We want to train this network so that if we set x to be a sequence of consecutive daily high temperatures [t1, t2, … tn] then the forward method will return [tn+1].  To accomplish this training task we build a training set by scanning through the weather record and creating tuples consisting of a sequence of length tw and the next-day temperature as a training label. This is accomplished with the following function.

f5
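The notebook's version is shown as an image; a sketch of this kind of windowing function (the function name is our guess, assuming the temperature record is a 1-D tensor) could look like:

def create_inout_sequences(input_data, tw):
    # Slide a window of length tw over the record: each item pairs
    # tw consecutive daily temperatures with the next day's temperature.
    inout_seq = []
    for i in range(len(input_data) - tw):
        seq = input_data[i:i + tw]
        label = input_data[i + tw:i + tw + 1]
        inout_seq.append((seq, label))
    return inout_seq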

From this we create a list of such tuples called train_inout_seq and the training loop takes the form

f6
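As a sketch of what such a loop looks like (the optimizer, learning rate and epoch count here are illustrative choices, not necessarily those used in the notebook):

model = SimpleLSTM()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(30):
    for seq, label in train_inout_seq:
        optimizer.zero_grad()
        y_pred = model(seq)          # predicted next-day temperature
        loss = loss_fn(y_pred, label)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}  loss {loss.item():.5f}")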

The complete details are in the notebook lstm-for-stream-final in the Github repository.  This was trained on the average year and on one of the six yearly records.   The results are shown below.  The first two are the years that were training cases.  The original data is printed in blue and the predicted data is printed in orange.   As you can see, virtually no blue shows.  The network has memorized the training data almost perfectly, with an average daily error that is less than 0.23 degrees Fahrenheit.

f7

f8

Figure 2.  The results for the average of all the data (Figure 1 above) and for one of the individual stations. The LSTM network has memorized the training data.

In Figure 3 below we show the results when the network is applied to the weather records of two of the other stations.   In these cases, the results are not very impressive.  In both cases the average error was over 3.5 degrees, and it was greater than 10 degrees for more than a dozen days.  However, the predictions for one day ahead did track the general trends.   It looks like it was able to predict today's temperature better than tomorrow's.

f9

f10

Figure 3.   Predicting tomorrow's temperature on two other station records.

Doing Regression and Prediction with Gaussian Processes

Before we define Gaussian Processes let us point to Christopher Bishop’s amazing book “Pattern Recognition and Machine Learning” (Springer 2006) for a complete treatment of the subject.  We will only provide a superficial introduction here.   For on-line resources there is the excellent blog on the subject by Peter Roelants.   We will use many of the code bits from that blog in what follows.  Another fun on-line source for learning about Gaussian Processes is the blog A Visual Exploration of Gaussian Processes by Görtler, Kehlbeck and Deussen.

In the simplest terms, a Gaussian Process is a statistical distribution of functions with some special properties.   In our case, the functions will represent the time evolution of stochastic processes. For example, the temperature at some location as a function of time,  or the number of daily infections of a virus in a community, or the random walk of a particle suspended in a fluid.

The distribution that defines a Gaussian Process is characterized by a mean function u(x) and a covariance function k(x, x').   A function f(x) drawn from this distribution, which is written

f11

has the property that when we pick a finite set of time points  X = { x1 … xn }, which we view as n random variables, the values y=f(X) are normally distributed in a multivariate Gaussian distribution with mean u(X) and a covariance matrix given by a kernel function k(X, X).  Written another way,

f12

Given the covariance kernel function k() and mean function u() we can use this multivariate distribution to visualize what functions drawn from the Gaussian process look like.   Let us pick 300 points on the interval [0,2], a specific kernel (which we will describe later) and a mean function with constant value 1.   The following numpy function will allow us to draw 10 sample functions.

f13
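The numpy code appears above as an image; a hedged sketch of the idea, using the exponentiated quadratic kernel described below and a constant mean of 1, is:

import numpy as np

def exp_quadratic(xa, xb):
    # exponentiated quadratic kernel: nearby points are highly correlated
    sq_dist = (xa.reshape(-1, 1) - xb.reshape(1, -1)) ** 2
    return np.exp(-0.5 * sq_dist)

X = np.linspace(0, 2, 300)          # 300 points on [0, 2]
mean = np.ones(len(X))              # constant mean function u(x) = 1
Sigma = exp_quadratic(X, X)         # covariance matrix k(X, X)
# each row of ys is one sample function drawn from the Gaussian process
ys = np.random.multivariate_normal(mean, Sigma, size=10)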

As shown in Figure 4 below, they appear to be like random walks, but they are not only continuous but also smooth curves.  That is because nearby points on the x axis correspond to highly correlated random variables due to the choice of k().  If we had set Σ to be the identity matrix, the variables at neighboring points would be independent random variables and the path would look like noise.  (We will use that fact below.)

f14

Figure 4. 10 different functions drawn from the Gaussian process.

Now for the interesting part.  What if we have some prior knowledge of the values of y for a sample of x points?   We can then ask: what is the distribution of the functions given that knowledge?

View the n points on the time axis as n random variables.   Partition them into two sets X1 and X2 where we are going to suppose we have values Y1 for the X1 variables.  We can then ask for  the posterior distribution p(Y2 | Y1 , X1 , X2 ).  Reordering the variables so that X1 and X2  are contiguous the equation takes the form

f15

where

f16

One can prove that our conditional probability distribution p(Y2 | Y1 , X1 , X2) is also a multivariate normal distribution described by the formulas

f17

The proof of this is non-trivial.  See this post for details.   The good news here is that we can calculate this if we know the prior kernel function k() and mean m().  Picking these functions is a bit of an art.  The usual approach is to pick m() so that m(x) = 0, which makes u2 and u1 in the above 0.   Picking the kernel is often done by forming it as a linear combination of well-known standard kernel functions and then formulating a hyper-parameter optimization problem to select the best combination.

To illustrate this, we can return to the weather station data.   We have two years of data from three nearby stations.    We note two properties of the data we must exploit: it is noisy and approximately periodic with a period of 365 days.   We will not bother with the optimization and rather take a straightforward linear combination of three standard kernels.

f18

The first of these is the exponential quadratic, and it is a very good default kernel.   The second is the white noise kernel, where the parameter sigma gives us the standard deviation of the noise we see in the data, and the third is the periodic kernel; if we map our 365 days onto the unit interval we can set p = 1.   Our kernel of choice (chosen without optimization, but because it seems to work o.k.) is

f19

Where for the first two terms we have set sigma to one and we pick the sigma for the noise term to best fit the data at hand.   The figure below illustrates the result of using the average of the six instrument years as the raw (prior) data.   Then we select 46 points in the first 230 days (spaced 5 days apart) as our X1 days.

In the figure the red dots are the sample points and the red line is the conditional mean function u2|1.  Three additional lines (blue, green and yellow) are sample functions drawn from the posterior.  The pink zone is two standard deviations in the prediction.   We also calculated the error in terms of the average difference between the mean prediction and the raw data.   For this example, that average error was 3.17 degrees Fahrenheit.  The mean function does a reasonable job of predicting the last 130 days of the year.

f20

Figure 5.   Graph of the raw data,  the mean conditional u2|1 (red line), and three additional functions (blue, yellow and green) drawn from the posterior.

The full details of the computation are in the Jupyter notebook “Gaussian-processes-temps-periodic”,  but the critical function is the one that computes p(Y2 | Y1 , X1 , X2); it is shown below (it is taken from Roelants’ blog).

f21

In this case we invoked it with kernel_function  as keq + kp.   Sigma_noise was 0.3.   The clever part of this code was the use of  a standard linear algebra solver to solve for Z in this equation

f22

But because Sigma11 is symmetric and the transpose of Sigma12 is Sigma21 we have

f23

Once you have that the rest of the computation is accomplished with the matrix multiply (@) operator.
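Putting the pieces together, here is a hedged numpy sketch of the kernels and the posterior computation in the spirit of Roelants' code (the names and exact parameterization are our reconstruction, not the notebook's code):

import numpy as np

def exp_quadratic(xa, xb):
    # exponentiated quadratic kernel (as in the sampling sketch above)
    return np.exp(-0.5 * (xa.reshape(-1, 1) - xb.reshape(1, -1)) ** 2)

def periodic(xa, xb, p=1.0):
    # periodic kernel; mapping 365 days onto the unit interval lets us use p = 1
    dist = np.abs(xa.reshape(-1, 1) - xb.reshape(1, -1))
    return np.exp(-2.0 * np.sin(np.pi * dist / p) ** 2)

def GP(X1, y1, X2, kernel_func, sigma_noise=0.3):
    # posterior mean and covariance of p(Y2 | Y1, X1, X2);
    # the white noise term enters as sigma_noise**2 on the diagonal of Sigma11
    Sigma11 = kernel_func(X1, X1) + (sigma_noise ** 2) * np.eye(len(X1))
    Sigma12 = kernel_func(X1, X2)
    # solve Sigma11 @ Z = Sigma12, so that solved = Sigma21 @ inv(Sigma11)
    solved = np.linalg.solve(Sigma11, Sigma12).T
    mu2 = solved @ y1                                   # posterior mean
    Sigma2 = kernel_func(X2, X2) - solved @ Sigma12     # posterior covariance
    return mu2, Sigma2

The notebook then makes a call along the lines of GP(X1, y1, X2, lambda a, b: exp_quadratic(a, b) + periodic(a, b), 0.3), where X1 holds the 46 sample days, y1 their temperatures, and X2 the days we want to predict.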

In the notebook Gaussian-process-temps-periodic in the Github repository you can see the Gaussian processes for the six year samples.

The Coronavirus Data

Another interesting source of data comes from the daily confirmed cases of coronavirus infections in various states.   We shall see that the troubling recent growth rate is so large that it is very hard for our Gaussian process models to make predictions based on recent past samples.  However, we thought it may be of value to illustrate how to obtain this data and work with it.

The Covid-19 data is in the Google cloud, uploaded from the New York Times. To access it you must have a Google Cloud account, which is free for simple first-time use.   We will run Google’s BigQuery to extract the data and we will run it through a client in a Jupyter notebook.    You will need to install the BigQuery libraries.   A good set of instructions are here.    To use Jupyter go here. You will need to add the json file containing your service account key to your environment variables as described here.  Finally, install the local libraries with this command on your machine.

pip install --upgrade google-cloud-bigquery[pandas]

First load the bigquery library and create the client in a Jupyter notebook with the following

f24

There are a number of covid19 data sets available on BigQuery.   The one we will use is the New York Times collection.   The following query will request the data for the state of Washington, load it into a Pandas dataframe and print it.

f25

f26
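The screenshots above show the client setup and the query; a hedged sketch of equivalent calls is below. The table and column names are our recollection of the public NYT dataset schema and should be verified in the BigQuery console.

from google.cloud import bigquery

# assumes GOOGLE_APPLICATION_CREDENTIALS points to your service account key file
client = bigquery.Client()

# pull the daily cumulative confirmed cases for Washington from the NYT public dataset
query = """
    SELECT date, confirmed_cases, deaths
    FROM `bigquery-public-data.covid19_nyt.us_states`
    WHERE state_name = 'Washington'
    ORDER BY date
"""
df_wash = client.query(query).to_dataframe()
print(df_wash)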

In our notebook bigquery-Covid we have the code that will extract the number of cases per day so that we can fit the Gaussian process to that.    That data is stored in the array ar_wash.   We attempted to make predictions with a sample every 9 days until the last 10 days.  Because of the large range of the data we scale it down by a factor of 1000.   The result is shown below.   The function make_gaussian is the same one we used for the weather station data except that the kernel is only the exponential quadratic.

f27

As can be seen, the mean function (red line) captures features of the last 10 days reasonably well.   Looking at New York we see similar results, but the fit for the last few days is not as good.

f28

Where we fail most spectacularly is for those states that experienced a wave of new cases in the first week of July.  Here is Florida.

f29

Changing the prediction window to the last 3 days does a bit better.  But 3 days is not much of a prediction window.

f30

However, it is clear that a much more complex process is going on in Florida than is captured by this simple Gaussian process model.   The approach presented here is not a proper infectious disease model such as those from Johns Hopkins and IHME and other universities.  Those models are far more sophisticated and take into account many factors including social and human behavior and living conditions as well as intervention strategies.

Conclusion

As was pointed out in the introduction, this is a very superficial look at the problem of predicting the behavior of streaming data.  We looked at two approaches; one that focuses on accurate prediction of the next event using neural networks and one that attempts to capture long range statistical behavior using Gaussian process models.  The neural net model was able to learn the temperature patterns in the training data very well, but for test data it was much less accurate with average error of about 3- or 4-degrees Fahrenheit per day.   (This is about as good as my local weather person).   On the other hand, the Gaussian process made very good long range (over 100 days) predictions with only a small number of sample points.  This was possible because the Gaussian process model works well for patterns that have reasonably predictable cycles such as the weather.   However, the Gaussian process failed to capture changes in the more complex scenario of a viral infection where the dynamics changes because of social and human behavior or by evolutionary means.

If the reader is interested in extracting the data from Google’s BigQuery,  we have included the detail here and in the notebooks in the repository https://github.com/dbgannon/predicting_streams.

 

Notes on Deep Learning and Differential Equations.

Over the last two years  some very interesting research has emerged that illustrates a fascinating connection between Deep Neural Nets and differential equations.    There are two aspects of these discoveries that will be described here.  They are

  1. Many differential equations (linear, elliptical, non-linear and even stochastic PDEs) can be solved with the aid of deep neural networks.
  2. Many classic deep neural networks can be seen as approximations to differential equations, and modern differential equation solvers can greatly simplify those neural networks.

The solution of PDE by neural networks described here is largely the excellent work of Karniadakis at Brown University and his collaborators on “Physics Informed Neural Networks” (PINNs).   This work has led to some impressive theory and also advances in applications such as uncertainty quantification of the models of subsurface flow at the Hanford nuclear site, one of the most contaminated sites in the western hemisphere.

While the work on PINNs shows us how to use neural networks to solve differential equations, the second discovery cited above tells us how modern differential equation solvers can simplify the architecture of many neural networks.  A team of Chen, Rubanova, Bettencourt and Duvenaud at the University of Toronto and the Vector Institute examined the design of some neural networks and noticed how their architecture resembled discretizations of certain differential equations.  This led them to define a hybrid of neural net and differential equation they call a “Neural Ordinary Differential Equation”. Neural ODEs have several striking properties including excellent accuracy and greatly reduced memory requirements.

These works suggest that there exists a duality between differential equations and many deep neural networks that is worth trying to understand.  This paper will not be a deep mathematical analysis; rather, it will try to provide some intuition and examples (with PyTorch code).  We first describe the solution of PDEs with neural networks and then look at neural ODEs.

PINNs

We now turn to the work on using neural networks to solve partial differential equations.  The examples we will study are from four papers.

  1. Raissi, Perdikaris, and Karniadakis, “Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations”, Nov 2017, is the paper that introduces PINNS and demonstrates the concept by showing how to solve several “classical” PDEs.
  2. Yang, Zhang, and Karniadakis, “Physics-Informed Generative Adversarial Networks for Stochastic Differential Equations”, Nov 2018, addresses the problem of stochastic differential equations but uses a generative adversarial neural network.
  3. Yang, et al. “Highly-scalable, physics-informed GANs for learning solutions of stochastic PDEs”, Oct 2019, is the large study that applies the GAN PINNs techniques to the large-scale problem of uncertainty quantification of the models of subsurface flow at the Hanford nuclear site.
  4. Shin, Darbon and Karniadakis, “On the Convergence and Generalization of Physics Informed Neural Networks”, 2020, is the theoretical proof that the PINNs approach really works.

We will start with a very simple partial differential equation that is discussed in the Raissi paper.  Burgers’ Equation is an excellent example to study if you want to understand how shock waves can come about from relatively benign initial conditions.  The equation is

f1

Where the domain of x is the interval [-1, 1] and the time variable t goes from 0 to 1.   The initial (t=0) condition is u(0,x) = -sin(pi*x), and the boundary conditions are u(t,-1) = u(t,1) = 0.  The parameter v is 0.01/pi.   (If we set v = 0 then this is the inviscid equation, which does describe shock waves; with v > 0 the equation is called the viscous Burgers’ equation.)  While it does not technically describe a discontinuous shock as t goes forward in time, it is awfully close.    The figure below shows the evolution of the value of u(t,x) from the initial sine wave at t=0 to a near discontinuity at t=39.

f2

Figure 1.  Samples of u(t, x) for x in [-1,1] at 40 points and t in 4 points.

The basic idea of a physics informed neural net is,  in this case, a network defining a map

f3

This network, when trained, should satisfy both the boundary conditions and the differential equation.   To satisfy the differential equation we must be able to differentiate the network as a function of x and t.  But we know how to do that! We compute the derivatives of a network symbolically when we do the back propagation for training.   Neural nets are non-linear functions with well-defined first derivatives.   In our case we also need the second derivatives, which means that we cannot use ReLU as an activation function because it is piecewise linear and has second derivatives that are all zeros.   However, Tanh is smooth with well-behaved second derivatives, so we will build our net function as follows.

f4
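The network definition above is an image; a minimal PyTorch sketch of such a network (layer count and widths are our own choices) is:

import torch
import torch.nn as nn

class Net(nn.Module):
    # u(t, x): a small fully connected network with Tanh activations,
    # so first and second derivatives with respect to t and x are well behaved
    def __init__(self, width=20):
        super().__init__()
        self.layer1 = nn.Linear(2, width)
        self.layer2 = nn.Linear(width, width)
        self.layer3 = nn.Linear(width, 1)

    def forward(self, t, x):
        # t and x each have shape (batch, 1)
        h = torch.tanh(self.layer1(torch.cat([t, x], dim=1)))
        h = torch.tanh(self.layer2(h))
        return self.layer3(h)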

Our goal is to train an instance of this network to minimize the functions

f5

The function f in Torch is

f6

Where we are using the grad function from torch.autograd  to compute derivatives.  The function flat(x) just returns the list [x[i] for i in range(x.shape[0])].
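A hedged sketch of such a residual function, using torch.autograd.grad (this mirrors the idea, not the notebook's exact code):

import math
import torch
from torch.autograd import grad

v = 0.01 / math.pi

def f(t, x, net):
    # residual of Burgers' equation: u_t + u * u_x - v * u_xx
    t.requires_grad_(True)
    x.requires_grad_(True)
    u = net(t, x)
    u_t = grad(u, t, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    u_x = grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    u_xx = grad(u_x, x, grad_outputs=torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - v * u_xx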

The unique thing here is that we are training the network to satisfy the differential equation and boundary conditions without data samples from the solution u.  When converged, the network should give us the solution u.  This works because Burgers’ equation, in this form, has been proven to have a unique solution.  Of course, this raises the question: when are a differential equation and boundary conditions sufficient to guarantee the existence and uniqueness of a solution?  Simple linear differential operators do, but not all differential equations.  In the case of PINNs, one can speculate that if our laws of nature are correct, then nature proves the solution exists. (Thanks to Wolfgang Gentzsch for making me realize this point needed to be made.)

To train the network we iterate over 200,000 epochs using two optimizers (one for the boundary conditions and one for the differential equation function f).   For each epoch we randomly draw a batch of boundary samples and a batch of samples drawn from the interior of the [0,1]x[-1,1] rectangle.   Each sample is a tuple consisting of a t-value, an x-value and a u boundary value or zero.  (Recall we are forcing f(t, x) to zero and the boundary values to zero or u(0,x).)  The main part of the code is below.  The full details are in the github repository.

f7
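For orientation, here is a sketch of what one iteration of such a loop might look like; the two sampling helpers are hypothetical stand-ins for the batch construction described above, not functions from the notebook.

net = Net()
mse = torch.nn.MSELoss()
opt_bndry = torch.optim.Adam(net.parameters(), lr=1e-3)
opt_f = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(200000):
    # hypothetical helpers: boundary/initial samples (t, x, u) and interior samples (t, x)
    t_b, x_b, u_b = sample_boundary_batch(100)
    t_f, x_f = sample_interior_batch(100)

    opt_bndry.zero_grad()
    loss_b = mse(net(t_b, x_b), u_b)          # match u on the boundary and at t = 0
    loss_b.backward()
    opt_bndry.step()

    opt_f.zero_grad()
    residual = f(t_f, x_f, net)
    loss_f = mse(residual, torch.zeros_like(residual))  # force the PDE residual to zero
    loss_f.backward()
    opt_f.step()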

A Python library giving an approximation of the exact solution is available online, and we can use it to compare to our result. Figure 2 below shows the heatmap of the solution.

f8

Figure 2.  The horizontal axis is time (t in [0,1]) sampled at 40 points and the vertical axis is x in the interval [-1,1] sampled at 40 points.  Dark colors are u values near 1 and white are u values near -1.

The mid horizontal line shows the evolution of the shock as it becomes a sharp transition as t goes from 0 to 1 on the right.

A better view can be seen from the evolution of the solution from the initial condition.  This is shown in Figures 3 and 4.  It is also interesting to note that there is a subtle difference between the neural net solution and the approximation to the exact solution.

f9

Figure 3.  The x axis is x in the interval [-1,1] sampled at 40 points and the y axis is u(t,x).  Each line represents u(t, x) for a specific value of t.   The blue line is the initial condition t=0.  Subsequent lines show the evolution to the singularity.

f10

Figure 4.   A 3D view showing the evolution of the sine wave on the left to the sharp shock on the right edge.

Nonlinear Boundary Value Problems

The classical elliptical PDE takes the form

f11

where u is defined on a region in x-y space and known on a bounding edge.   A good physical example is a steel plate that is clamped and temperature controlled on its perimeter, with f representing heat applied to the surface.  u will represent the temperature when it reaches a steady state.

For purposes of illustration we can consider the one-dimensional case, but to make it interesting we will add some non-linear features.  In addition to the function f(x),  let us add another known function k(x) and consider

f20

To show that the technique works, we will consider special cases where we know the exact solution.  We will let k(x) = x and u(x) = sin²(x).    Taking the derivative and applying a few trig identities we get f(x) as

f13

Where we will look for the solution u on the interval [0, pi] subject to the boundary condition u(0) = u(pi) = 0.  The second case will be similar, but the solution is much more dynamic: u(x) = sin²(2x)/8 with the same boundary conditions.   In this case the operator is

f14

The network is like the one above but with only one input parameter

f15

The differential operator that is the left side of the equation above is computed with the function Du(x) shown below.

f16
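A sketch of this operator with torch.autograd.grad (again a reconstruction rather than the notebook's code, with k(x) = x folded in directly and net taking a single (batch, 1) input):

import torch
from torch.autograd import grad

def Du(x, net):
    # left-hand side d/dx( k(x) * du/dx ) with k(x) = x
    x.requires_grad_(True)
    u = net(x)
    u_x = grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    ku_x = x * u_x
    return grad(ku_x, x, grad_outputs=torch.ones_like(ku_x), create_graph=True)[0]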

Training the network is straightforward.  As with the Burgers example, we use a mean squared error loss function and a gradient descent optimizer. We have an array vx of x-axis points from which batches of samples are drawn.   For each point x in each batch we also create a batch consisting of f(x).  There is only one batch for the boundary.   In this case, the boundary is only two points, so it is represented by one batch.

f17

Running this for 3000 epochs with batches of size 10 will converge the network.   The result for the 1st case (u = sin²(x)) is extremely close to the exact solution, as shown in figure 5.   Both the true solution (a blue line) and the computed solution (an orange line) are plotted together.   They are hard to distinguish.   Below the solution we plot the differential equation f(x), which is what the network was trained to learn, so it is not surprising that the fit is good there too.

The second case (u = sin²(2x)/8) is more dynamic.    The differential equation f(x) fit is excellent, but the solution is shifted up (because the boundary condition was off on one end).

f18

Figure 5.   Solution to the differential equation d/dx(x du/dx) = f(x)

Stochastic Differential Equations and Generative Adversarial Nets

If we consider the differential equation from the previous section

f20

but now make the stipulation that the functions k(x) and f(x) are Gaussian processes rather than fixed functions, we now have a stochastic differential equation.   Papers 2 and 3 mentioned above show us how to create a generative adversarial network to solve for the Gaussian process that defines u.  They do not treat this as an initial value, boundary value problem as described above; rather, the authors assume we have a series of snapshots  ( k(xi), u(xi), f(xi) ) for i in 1, … n for some reasonably large n.  With these we will set up a GAN to find a representation of u.

Recall how a GAN works.  We have two networks, a generator, and a discriminator.  The generator is trained to transform a normal distribution into a distribution that fits your data.  The discriminator is trained to recognize your true data and distinguish it from the fake data from the generator.  This is illustrated in Figure 6 below.

f21

Figure 6.  Basic GAN configuration.

It is relatively easy to build a GAN that can reproduce u from samples of x, k(x) and u(x).  Unfortunately, making a GAN that looks like u(x) does not mean it satisfies the differential equation.  Taking the simple case where k(x)=x, a GAN based on the design above generated the result in Figure 7 below.  On the left is a plot of the distribution of the converging solution for random normally distributed samples X over the true solution in blue (see the right half of figure 5).  However, when we apply the differential operator to the generated solution we see that it (multi-colored dots) is nothing like the true solution operator (blue).

f22

Figure 7.  The GAN from Figure 6: on the left, the distribution of the normally distributed sample X; on the right, the differential operator applied to gen(X).

To solve this problem the GAN discriminator must evaluate both u(x) AND the result of the differential operator f(x) = Du(x, k(x), u) as shown in Figure 8.

f23

Figure 8.  Full GAN to map normal distribution to solution that also satisfies the differential equation.

The construction of the GAN generator is straightforward.  It takes as input batches of samples from a normal distribution and generates a batch of points in R2 that are eventually trained to represent samples from (x, u(x)).   As in our previous examples we use Tanh() for the activation function.

f24

The discriminator takes triples of the form (xi, u(xi), f(xi)) from the samples.  (Note we are using k(x)=x, so there is no need for that argument.)   We can use ReLU for the activation because we do not need second derivatives of the discriminator.

f25
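As a rough guide, the generator and discriminator shown in the two images above could be sketched as follows (the layer widths are our own choices, not those in the repository):

import torch.nn as nn

class Generator(nn.Module):
    # maps a 1-D normal sample z to a point (x, u(x)) in R2; Tanh keeps it twice differentiable
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 2))

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    # scores triples (x, u(x), f(x)); ReLU is fine since we never differentiate it twice
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1))

    def forward(self, triple):
        return self.net(triple)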

The torch version of the differential operator is

f26

The discriminator wants inputs of the form

f27

Which are provided by the samples (xi, u(xi), f(xi)) and from the generator in the form

f28

The training algorithm alternates between optimizing  log(D(G(z))) for the generator and optimizing log(D(x)) + log(1 – D(G(z))) + a gradient penalty for the discriminator.   The gradient penalty is introduced by Yang, Zhang, and Karniadakis in paper 2.  The full code for this example is in the Github repository.  Figure 9 illustrates the convergence of the solution in four snapshots (A, B, C, D), with step D taken at 200,000 epochs.  It is interesting to observe that the image of the 1-D normal sample space in R2 takes the form of a path that gradually conforms to the desired distribution.  However, at step D there is still unfocused resolution for x > 2.6.

f29

Figure 9.   Four snapshots in time (A, B, C, D) of the convergence of the GAN.

We should also note that this is not a true stochastic PDE because we have taken our samples for the training from the exact solution, and they are not Gaussian sources, but the concepts and training are correct.

A much more heroic and scientifically interesting example is in paper 3.   The authors of this paper address the problem of modeling the subsurface flow at the Hanford Site in Washington State, where reactors produced plutonium for the US atomic arsenal.   The 500 square mile site had nine nuclear reactors and five large plutonium processing complexes and produced 3 million US gallons of high-level radioactive waste stored within 177 storage tanks, plus an additional 25 million cubic feet of solid radioactive waste.  Over the years the tanks have been leaking.   Hanford is now the nation’s largest environmental cleanup project.   The team involved with paper 3 is from Brown University, Lawrence Berkeley National Lab, Pacific Northwest National Lab, Nvidia, Julia Computing and MIT.  They look at a large 2-D version of the PDE discussed above.   In this case u(x,y) is the hydraulic head of the subsurface flow, k(x,y) is depth-averaged hydraulic conductivity and f is the infiltration from the earth to the flow.   These quantities are measured by sensors at a large number of sites on the Hanford reservation.

Because k and u are both stochastic and are supported only by data at the sensor points the generator in their GAN must produce both k and u (actually  log(k) and u) as shown in Figure 10.

f30

Figure 10.  GAN architecture from Yang, et. al, “Highly-scalable, physics-informed GANs for learning solutions of stochastic PDEs”, arXiv:1910.13444v1.

In order to tackle a problem of the size of the Hanford problem, they partitioned the domain into a hierarchy of subdomains  and had a separate discriminator for each subdomain.   The top levels (1 and 2) capture long range characteristics while the lower levels capture properties that correspond to short range interactions.   They parallelized the computing utilizing over 2700 GPUs on the ORNL Summit machine so that it maintained 1.2 exaflops.

Neural Ordinary Differential Equations

In the previous section we saw how neural networks can solve differential equations.   In this section we look at the other side of this coin: how can differential equation solvers simplify the design, accuracy, and memory footprint of neural nets.   Good papers and blogs include the following.

  1. Chen, Rubanova, Bettencourt, Duvenaud “Neural Ordinary Differential Equations
  2. Colyer, Neural Ordinary Differential Equations, in the Morning Paper, Jan 9, 2019, Colyer’s amazing blog about computer science research.
  3. Duvenaud, comments in Hacker News about the Chen, et. al. paper.
  4. Chen, “PyTorch Implementation of Differentiable ODE Solvers”.
  5. Rackauckas, “Mixing Differential Equations and Machine Learning” .
  6. He, et. al. “Deep Residual Learning for Image Recognition”.
  7. Gibson, “Neural networks as Ordinary Differential Equations
  8. Holländer, “Paper Summary: Neural Ordinary Differential Equations
  9. Surtsukov, “Neural Ordinary Differential Equations”

Chen et al. in [1] and others have observed that the deep residual networks that made training very deep networks possible [6] had a form that looked like Euler’s method for solving a differential equation.  Residual networks use blocks of network layers where the input is transformed by a residual before sending it to the next layer by the simple equation

f31

So that what we are training is the sequence of residual layers Ni.  If you have a differential equation dy/dx = f(x) then Euler’s method is to compute y from a sequence of small steps based on an approximation of dy/dx by

f32

By analogy, our residual network now looks like this

f33

With delta i equal to 1.   If we abstract the sequence of networks into a single network that depends upon t as well as y, we can define a neural ODE to be of the form

f34

Where theta represents the network parameters.

The interesting idea here is that if residual networks are basically Euler’s method applied then why not use a much more modern and accurate differential equation solver?  If we have an initial value y0 at  time t0, we can integrate forward to time tn to obtain

f35

To illustrate how we can train a neural ordinary differential equation we look at a slightly modified version of the ode_demo.py example provided by the authors. We will take our sample data from the solution of a simple ODE that generates spirals in 2-D given by

f36

Where y is a 2-d row vector.   Given the starting point y0 = (2.0, 0.0) and evolving this forward 1000 steps the values are plotted below.

f37

Figure 11.  Spiral Training Data

Our neural network is extremely simple.

f38
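A sketch of such a derivative network, in the spirit of the original ode_demo.py example (the exact layer width may differ from the authors' code):

import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    # the learned derivative dy/dt = N(t, y); t is accepted but unused here
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 50),
            nn.Tanh(),
            nn.Linear(50, 2))

    def forward(self, t, y):
        return self.net(y)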

You will notice that while the network takes a tuple (t, y) as input, the value t is unused.  (This is not normally the case.) The training algorithm is quite simple.  Recall that the function is a derivative, so to predict new values we must integrate forward in time.

f39
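A hedged sketch of that loop, using the odeint integrator from the authors' torchdiffeq package and the get_batch() helper from the original example (the iteration count here is illustrative):

import torch
from torchdiffeq import odeint_adjoint as odeint

func = ODEFunc()
optimizer = torch.optim.RMSprop(func.parameters(), lr=1e-3)

for itr in range(2000):
    optimizer.zero_grad()
    batch_y0, batch_t, batch_y = get_batch()   # starting points, time points, true paths
    pred_y = odeint(func, batch_y0, batch_t)   # integrate the network forward in time
    loss = torch.mean(torch.abs(pred_y - batch_y))
    loss.backward()
    optimizer.step()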

The authors use Hinton’s Root Mean Square Propagation optimizer. And the function get_batch() returns a batch of 20 points on the spiral as starting points as batch_y0.  Batch_t is always the 10 time points between 0 and 0.225 and batch_y is a batch of 20 10-step paths along the spiral starting from the corresponding batch_y0 point.   For example, shown below is a sample batch.

f40

Figure 12.  Sample training data batch

An incredibly important point concerns a property of the ODE solver we use, “odeint”.

f41

It comes from the authors’ torchdiffeq package as an adjoint integrator.  The actual integrator we want to use is scipy.integrate.odeint, which is a wrapper for an old but sophisticated method from ODEPACK written in the 1980s.  But we need to be able to compute the symbolic derivative of the loss operation so we can do the backpropagation, and we can’t differentiate back through the old ODEPACK solver.  To get around this problem, Chen and the team use the integrator to solve an adjoint problem which, very cleverly, allows us to compute all the derivatives without trying to differentiate the solver.  The details of this step are described in the original paper and in the other references above.  We won’t go into it here.

It is fun to see the result of the training as it evolves.   We captured the output every 10 iterations.

f42

Figure 13.  Snapshots of training showing the trajectories of the solution and the correct values.

It is worth observing that, once trained, our network, when given a point (x, y) in the plane, returns a vector of the trajectory of the spiral path through that point.   In other words, it has learned a vector field, which can be plotted as on the right in figure 13 and below.

f43

Figure 14.  plot of the vector field generated by the network

Neural ODEs can be applied well beyond applications that resemble simple differential equations.  The paper illustrates that they can be applied to the vision tasks that resnet was invented to solve.  The paper shows how to build an MNIST handwritten digit classifier but with a much smaller memory footprint (one layer versus many layers in resnet).  There is a very nice implementation in Surtsukov [9].

The solution (detailed in Surtsukov’s notebook in his github repo) also contains all the details of the solution to the adjoint problem for computing the gradients of the loss.   The model to classify the MNIST images is almost identical to the classical Resnet solution.   A slightly modified version that combines parts of Surtsukov’s solution with Chen’s solution is in our Github repo.

The input to the model is passed through an initial down-sampling followed by the residual blocks that compute the sequence

f44

More specifically the Resnet version consists of

  • A down sampling layer
  • 6 copies of a residual block layer
  • And a layer that does the final reduction to 10 scores.

The neural ODE replaces the six residual layers Ni(yi) with the continuous derivative N(y,t).  The forward method of the NeuralODE invokes the adjoint solution to the integration by using the odeint() function described in the previous example.   You can experiment with this in the Jupyter notebook in the repo.  With one epoch of training the accuracy on the test set was 98.6%.  Two epochs put it over 99%.

One of the other applications of Neural ODEs is to time series data.  A challenge for many traditional neural network approaches to time series data analysis is that they require uniformly spaced samples for training.  Because the Neural ODE is continuous, the sampling can be flexible and adaptive.

Final Thoughts

This report is an attempt to illustrate a striking duality that seems to exist between deep neural networks and differential equations.   On the one hand, neural networks are non-linear functions that, with the right choice of activation functions, can have smooth first and second derivatives and, consequently, they can be trained to solve complex differential equations.    Generative adversarial networks can even model the solution to stochastic differential equations that fully satisfy the governing laws of physics.  Seen from another perspective, many deep networks are just discrete approximations to continuous operators that can be solved with advanced differential equations packages.   These Neural ODEs have much smaller memory requirements and are more adaptive in their execution and may more accurately solve difficult problems.

It is going to be interesting to see where these approaches lead in the years ahead.

The github repository for this paper contains four Jupyter notebooks:  the solution to Burger’s equation, the simple non-linear and generative adversarial example, and the simple Neural ODE solution to the spiral described above and the MNIST solver.

Postscript. I realize now that I overlooked another significant contribution to this discussion.  “Deep Learning Based Integrators for Solving Newton’s Equations with Large Timesteps” arXiv:2004.06493v2 by Geoffrey Fox and colleagues shows how RNNs can be used to vastly improve the performance of the integration of Newton’s equations.

Accelerating Deep Learning Inference with Hardware and Software Parallelism

Abstract

A persistent problem when using deep neural networks in production is the speed of evaluating the network (known as inference) on a single input.   While neural network inference has ample opportunities for using parallelism to gain speedup, these techniques are not as easy to exploit as when training the network. In this short report we will look at how several new system/chip designs from companies like Groq, Cerebras, Graphcore and SambaNova are approaching the inference performance problem, and we will also explore the software challenge of compiling neural nets to run on this parallel computer hardware.

Why is Inference Harder to Speed Up than Training?

Training deep learning systems requires vast computational resources and data. Fortunately, the algorithms used for training are highly parallelizable, and hardware that supports either data parallel and/or highly multithreaded execution can make a huge difference in the training time.   In a previous post, we described how simple GPUs and clusters of CPUs can be used to train networks. However, once a network has been trained it must be deployed so one can use it to make inferences. Making an inference involves taking a single input (an image, query or sound clip) and pushing it through the many layers of the network. The trained model may be hosted in the cloud to support image identification or search, or to do natural language translation or question answering. Doing this fast for on-line applications is essential as the load on the application increases. The speed of inference is also critically important in robotics applications such as self-driving vehicles or controlling complex critical hardware which may involve life-support.

While the inference process still involves operations that can be parallelized, the challenge in using this parallelism to gain performance is different than it is when training the network.  The reason that this is the case is easy to understand. The metric of performance for inference is latency (the time it takes to push an item through the network) while the metric of performance for training is throughput (the volume of training data per second that you can manage). Training a network involves pushing batches of training data through the pipeline of network layers. GPUs are effective when you can reuse data that has already been loaded into their local memories.   Evaluating a batch of input data allows the GPU to load the layer weights once and reuse them for each item in the batch.

It is a well-known fact of life in high performance computing that the latency involved in moving data is a performance killer unless you can hide that latency by using your hardware to do other useful computation. Because inference has fewer opportunities to reuse data, the best way to reduce inference latency is to reduce the amount or cost of data movement.

The new architectures we will look at all use extraordinary amounts of parallelism, but they also depend very heavily on compilers that can translate the neural network designs into the low level code and optimized data movements that will achieve the performance goals. In fact most, if not all, of the systems here were co-design efforts involving the simultaneous planning for the hardware and software. Hence as we present hardware details, we will need also to describe the compiler and runtime.   The last section of this report will focus on the general challenges of compilers for this class of computing system.

Hardware Advances

In the following paragraphs we will outline the advanced deep learning processor designs that are now coming on the market. While they all address the issues of training and inference, several of them have put inference performance as a prime design objective.   The descriptions below vary greatly in the level of detail. The reason for this is that some are still a work in progress or highly proprietary. For example, all we know about the Huawei Ascend 910 is that it “performs much better than we expected”.

Groq

Groq.com is a Silicon Valley company co-founded by Jonathan Ross, who was on the team that designed the Google Tensor Processing Unit.   The Groq Tensor Streaming Processor (TSP) is very different from the other systems, which rely on massive scale multi-core parallelism. Instead the TSP can be classified as a Very Long Instruction Word (VLIW) single core, single instruction stream system.   The design is very unusual, but there is a good description in the Linley Group Microprocessor Report January 2020. We will only give a capsule summary here.

The TSP is a type of systolic processor in that it has horizontal data flows with instructions streaming from the main issue engine down through 20 data layers called superlanes.   Each superlane is composed of 16 parallel lanes of 8 byte wide data paths.   The superlanes have blocks for matrix accumulators, transpose and permute operations and vector ALUs, as shown in Figure 1 below.   Note that memory is embedded directly in the superlanes.   Notice also that each superlane is duplicated around a central axis so data moves between units in both directions.

fig1

Figure 1.   Groq Architecture.

Instruction issue is also systolic.   The first instruction is executed on superlane 0 in one cycle. In the next cycle that instruction is executed on superlane 1 while the 2nd instruction is executed on superlane 0. In the next cycle the first instruction is executed on superlane 2, the 2nd instruction is on superlane 1 and the 3rd instruction is on superlane 0. So, in 20 cycles an instruction has been executed on all superlanes, and each subsequent instruction completes on all superlanes one cycle later.   (Note: this description may not be totally accurate. We do not have the detailed Groq technical specs.)

fig2

Figure 2. Groq TSP

The Groq TSP is designed to deliver 1,000 trillion operations per second and live up to a major design goal: on the Resnet-50 deep learning model it delivers 20,000 inferences per second with a latency of 0.04 ms at a batch size of 1.

A big challenge in creating a system like the Groq TSP is building a compiler that can generate an efficient instruction stream to keep all that hardware busy.   Because there are no caches, locality is not an issue and advanced architectural features like prefetch and branch prediction are not needed, so the computation is completely deterministic, and performance is completely predictable. We will return to the compiler issues below.

Habana Goya

Habana Labs was one of the first to introduce a fast inference processor.   In 2018 they announced the Goya processor.   By the end of 2019 they were acquired by Intel. The Goya architecture has an array of Tensor Processor Core (TPC) compute engines which are VLIW single-instruction-multiple-data processors. It also has a general matrix multiply engine. The TPC engines have local memory but there is a fast, shared static RAM.

One very interesting thing about the Habana design team is their work with Facebook on the Glow compiler back end for Pytorch.   More on that later.

Alibaba Hanguang 800

Another newcomer to the race for faster inference is the Alibaba Hanguang 800.   Alibaba is not planning on selling this new chip and it is intended solely for internal use in its cloud servers. There is very little that is published about its internal architecture.   In the table below we see some interesting performance numbers including one that indicates that the Alibaba system has better inference performance than the Groq TSP. However, we do not know if this IPS number is for a batch size of 1.

fig3

Figure 3. From https://www.techarp.com/computer/alibaba-hanguang-800-details/

A Digression about Compilers, ResNet-18 and ONNX.

Before we continue discussing interesting new architectures, it is helpful to stop and discuss some general issues related to the compilers and benchmarks.

One of the big problems you encounter when writing a compiler for a new architecture is that there are several very good deep learning frameworks that are used to build deep neural networks. These include MxNet, Caffe, CNTK, Tensorflow, Torch, Theano, and Keras. One could write a compiler for each, but, given that they all build very similar network models, it makes sense to have a “standard” high-level, graph intermediate form that captures the properties of a large fraction of all neural nets.  Then, if third parties build translators from the high-level frameworks to this intermediate form, the chip architect’s job is half done: all they need to do is write a code generator mapping that intermediate to their architecture.

The Open Neural Network Exchange (ONNX) may become that standard intermediate form. Originally developed by Microsoft and Facebook, it has been taken over as a community project involving 20 companies. While we do not know how Groq, or some of the other hardware companies described here, are building their proprietary compilers, looking at ONNX as it relates to a real example can give a clue of how compilers like these do work.

In the last three hardware descriptions, performance numbers were often cited in terms of ResNet-50. Resnet is one of a family of very deep convolutional neural networks, originally presented by He, Zhang, Ren and Sun in their 2015 paper, which describes a clever way to improve the ability to train very deep networks.   You can think of each level of a deep neural network as learning more subtle and abstract features of the training images than were detected by the previous layers.   A residual network is one where you “subtract” the features discovered by previous layers so the following layers can work on learning the properties of the residual.  Doing this subtraction is a way to focus the learning on what remains and helps solve a problem known as the vanishing gradient that makes it hard to train very deep networks. Mathematically, if your training goal is to learn a function H(X), then the residual at some layer is F(X) = H(X)-X. Hence we want the following layers to learn F(X)+X to recover H(X). Creating F(X)+X in the network is easy, and it is shown in Figure 4.

fig4

Figure 4. From He, Zhang, Ren and Sun, 2015.

We can construct such a residual network with Torch or TensorFlow and then look at the ONNX intermediate. In Torch, the code is summarized below. (The complete code is in a Jupyter notebook that accompanies this post.) There are two networks. One is the residual block as illustrated in Figure 4 above and the other is the full model that incorporates a sequence of residual blocks.

fig4.5

In the image above, we created an instance of the model as “resnet” and then set it to the eval() state. Using the Torch built-in ONNX export operator we can save a copy of the model in the file resnet.onnx.   Doing so gives an output like Figure 5 below.   On the right we have fragments of the ONNX intermediate code and on the left, a graph that is generated from the ONNX code with a tool called netron. What is shown here is only a small part of the ONNX graph. The top is just a list of all the model variables. Following that we have actual code for the graph.
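For readers without the notebook, here is a hedged, simplified sketch of a residual block and the export call (not the notebook's code; a real ResNet-18 has more structure, strides and channel changes):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # computes F(X) and returns F(X) + X as in Figure 4
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)          # add the residual back to the input

# a toy model: initial convolution and batch norm followed by residual blocks
resnet = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),
    ResidualBlock(16),
    ResidualBlock(16))

resnet.eval()
dummy_input = torch.randn(1, 3, 32, 32)
torch.onnx.export(resnet, dummy_input, "resnet.onnx")   # write the ONNX graph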

The ONNX exporter will build the graph from the internal Torch model.   There are two ways in which it does this.   One is to directly “unroll” the graph by interpreting the execution of the forward(input) eval operator. In some cases, if the definition of the model contains conditionals, it will insert conditional code in the graph, but these are rare cases.

In this case the code consists of an initial convolutional layer followed by a batch normalization which is based on the mean and variance of previously seen batches.   This is followed by the first instance of the Residual block model.

fig5

Figure 5. Fragment of the ONNX output for the Resnet18 model.

As you can see, the ONNX graph consists of nodes that are parameterized operators with inputs that are the model tensors, and each produces one output that is a well-defined tensor. Generating code for a specific architecture can be as simple as building well-tuned native versions of the ONNX operators and then managing the required data movement to ensure the input tensors are in the right place at the right time for their associated operation nodes. On the other hand, there are a number of important optimizations that can be made as we “lower” the ONNX graph to a form that is executed. We will return to this point after we complete the descriptions of the new architectures.

Cerebras Systems

Cerebras Systems has taken the parallelism to an extreme. The power of their approach is most evident during network training rather than inference, but it is interesting enough to describe it in more detail. Their CS-1 system is based on a wafer-scale chip that consists of a 2d grid of 400,000 compute cores interconnected by a 2d-mesh network capable of 100 Petabits/sec bisection bandwidth that delivers single word active messages between individual cores.

fig6

Figure 6. Cerebras WSE

The Cerebras software contains the Cerebras Graph Compiler that maps deep learning models to the hardware. Their approach is an extreme form of model parallelism where each layer of the network is mapped to as many compute cores as is required to contain it. Their philosophy is nicely described in the post “Neural Network Parallelism at Wafer Scale” by Natalia Vassilieva and in their product overview.

fig7

Figure 7. Cerebras Software Stack

The training uses pipelined back-propagation.   The graph compiler takes the source description of the network and extracts a static graph representation of the problem and converts it into the Cerebras Linear Algebra Intermediate Representation (CLAIR). This is then converted into a “Kernel graph” and mapped to the hardware as shown in Figure 7. In their approach the entire network is mapped onto the computing fabric, so pipelining batches through has no points of congestion.

Graphcore

Graphcore is a U.K. startup that started shipping its accelerator, called the Intelligence Processing Unit (IPU), in 2018.   Like Cerebras, the architecture is based on massively parallel processing. Each IPU contains 1,216 processing elements called tiles; a tile consists of one computing core plus 256 KiB of local memory.   There is no shared memory, but the local memory is SRAM and faster than the DRAM in most CPU servers.  To hide latencies, the IPU cores are multithreaded.   Each core has 6 execution contexts that are served in round-robin style. In terms of computational performance, each IPU delivers 31.1 TFlops in single precision.

fig8

Figure 8.   Graphcore IPU

There is an inter-processor communication switch called the exchange that provides processing-element data communication, and multiple IPUs are connected via a fast off-chip interface network. Citadel has published an excellent performance analysis by Jia et al., "Dissecting the Graphcore IPU Architecture via Microbenchmarking". They measured that "On a per-tile basis, each of the 1,216 tiles can simultaneously use 6.3 GB/s of bandwidth to transfer data to an arbitrary destination on chip. The latency of an on-chip tile-to-tile exchange is 165 nanoseconds or lower and does not degrade under load above that value." They also measured the latencies between cores that reside on different IPU boards, where several boards had to be crossed to deliver the message; this increased the latency to about 700 nanoseconds. Their report provides a very complete analysis of the data traffic performance under a variety of conditions.

Ilyes Kacher, et al. from the European search engine company Qwant have also produced an analysis: "Graphcore C2 Card performance for image-based deep learning application: A Report". Their interest in Graphcore was to improve the performance of their image search product. In their study they considered the image analysis network ResNeXt101. Inference latency for batch sizes of 1 and 2 is 1.36 ms. Their benchmarks claim this is 40 times lower latency than an Nvidia V100 GPU. They also compare performance on BERT, where they measure 30% lower latency with 3 times higher throughput.

The programming model is based on Bulk Synchronous Parallelism, in which computation is divided into phases, with each phase consisting of a computation step followed by a communication step and then a barrier synchronization.

fig9

Figure 9. Graphcore BSP execution (from Graphcore IPU Programmers Guide.)

Their Poplar page, https://www.graphcore.ai/products/poplar, is a discussion of their software stack. More significantly, they have open sourced the entire stack. They have a runtime environment called the Poplar Advanced Run Time (PopART) that can be used to load an ONNX model from Python and run it on their hardware. For TensorFlow they have a separate compiler and runtime.

Graphcore hardware is now available on Azure for evaluation.

SambaNova

SambaNova is a Bay Area startup founded by two Stanford professors and a former executive from Sun and Oracle. They have not yet announced a product, but they have an interesting background that may indicate a very novel approach to the design of an AI accelerator.

Reconfigurable computing is an idea that has been around since the 1960s. Field Programmable Gate Arrays are in common use today to configure a processor to execute a new algorithm, but these usually take tens to hundreds of milliseconds to "reprogram". Suppose you could configure the logic elements on the chip to perform a needed transformation on a block of tensors just as that block emerges from a previous operation. The SambaNova team has looked at specialized programming languages that allow them to generate streams of high-level templated instructions such as map, reduce, shuffle and transpose that are natural elements of deep network kernels. This is clearly a talented, well-funded team and it will be interesting to see what is eventually released.

Tenstorrent

A Toronto startup called Tenstorrent has built a device called Grayskull. The chip has 120 small processing nodes, called Tensix cores, and two toroidal mesh networks that can be extended off-chip to build larger clusters. There is no shared memory. In various articles about Tenstorrent they emphasize that their approach to dealing with sparsity in large neural net models is key to high performance on big models. Like several of the other startups, their compiler translates ONNX graphs into Tensix primitive operators which are mapped to the nodes. They claim 22,431 images/sec on ResNet-50 and 23,345 sentences/sec on BERT.

fig10

Figure 10. From Tenstorrent.

NVDLA

Finally, we include NVDLA from NVIDIA. Called the Deep Learning Accelerator, this is an open source, modular architecture for building inference accelerators. There is a hardware instance called Xavier that NVIDIA has produced to support inference for autonomous transportation applications.

Compiling Neural Nets for Parallel Execution.

In the remainder of this report we will look at the techniques that are used in modern compilers to optimize performance on neural network training and inference. Many of the basic techniques have been used in compilers for 50 years. These techniques evolved as CPU arithmetic and logical units became so fast that many operations were dominated by the time it took to move data from main memory through layers of faster and faster caches. Data locality was critical: if an item of data was going to be reused you needed to keep it in fast cache as long as possible.

Almost all of the operations in a neural network involve matrix and vector arithmetic. If you consider the most basic type of network layer, an n by n full connection, it is just an n by n matrix and a vector of offsets. Applying such a network to a single vector of n inputs is just a matrix-vector multiply plus a vector addition. The problem with matrix-vector multiply is that the matrix elements must be fetched from memory and are used only once. On the other hand, if the computation is properly blocked so that small chunks of the array are loaded into the GPU or CPU caches, then if you have a batch of n vectors, each element of the array can be fetched once and used n times. This method of improving matrix-matrix computation is the basis of the standard library known as the Level-3 BLAS, developed by Jack Dongarra and others in the 1980s.
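As a small illustration of that reuse argument, here is a sketch (with arbitrary sizes) comparing one-vector-at-a-time evaluation of a fully connected layer with evaluating a whole batch as a single matrix-matrix multiply:

import torch

n, batch = 1024, 64
W = torch.randn(n, n)            # the n by n layer weights
b = torch.randn(n)               # the vector of offsets
X = torch.randn(batch, n)        # a batch of input vectors, one per row

# One vector at a time: W must be streamed from memory once per vector.
ys = [torch.mv(W, x) + b for x in X]

# The whole batch at once: a single matrix-matrix multiply, Level-3 BLAS style,
# so each element of W can be loaded into cache once and reused across the batch.
Y = torch.mm(X, W.t()) + b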

A more interesting example of how locality can be used to speed up performance in a neural network is the 2-D convolution used in deep learning image networks. Figure 11 below shows a 2-D convolution operating on a 6×7 image with 3 color channels and producing a new 6×7 array with 6 channels. Each output channel is produced by using, in this case, 3 filters of size 3×3. Each filter is applied to a channel of the input (which has been expanded with an extra border of ghost pixels). The filter moves across the input channel computing the inner product of the filter with the image at that point. The corresponding output point is the sum of the three filter inner products applied to each of the input channels.

fig11

Figure 11. A 2-D convolution applied to a 3 layer, 6 by 7 image producing 6 output images.

The 6 by 3 by 9 tensor of filters, W, is the learned object and the full computation is shown in the formula above (we have suppressed the bias terms to simplify the presentation). If we let Cin be the number of input channels and Cout be the number of output channels for a width by height image, the full computation takes the form below. (The input is padded with a one-pixel border of ghost pixels, so the pixel at position (0,0) of the image is in location (1,1) of the array Input.)

fig11.1

This form of the computation has extremely poor locality and it will run slowly. However, we illustrate below a sequence of program transformations that simplify this loop nest and "lower" the execution to primitives, letting it run up to 400 times faster.

fig11.2
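To make the loop nest concrete, here is a sketch in PyTorch-style Python of what such a six-loop version looks like (shapes follow Figure 11; names and details are assumptions, not the exact code in the figure):

import torch

Cin, Cout, width, height = 3, 6, 6, 7
Input = torch.randn(Cin, width + 2, height + 2)   # image padded with a border of ghost pixels
W = torch.randn(Cout, Cin, 3, 3)                  # the learned filter tensor
Output = torch.zeros(Cout, width, height)

# Six nested loops: each output pixel accumulates filter-weight times input-pixel
# products over all input channels and the 3x3 filter window.
for i in range(Cout):
    for j in range(width):
        for k in range(height):
            for t in range(Cin):
                for fj in range(3):
                    for fk in range(3):
                        Output[i, j, k] += W[i, t, fj, fk] * Input[t, j + fj, k + fk]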

A close inspection of this nest of six loops will convince you that we can execute them in any order. In fact, the addition recurrence is carried only by the inner three loops. Consequently, we can pull the three inner loops out as a separate function that accumulates into a local value rather than repeatedly writing to the Output tensor in memory. The result is shown below.

fig11.3

The next thing to notice is that if you move the t loop in the kernel function to the innermost position the operation is an inner product. The computation now takes the form

fig11.4

One final reduction can be made when we notice that the two loops in kernel2 are just a pointwise matrix product of the 3×3 filter W with the Input shifted to position (k,j), and the summation can be done with the torch.sum() function. Our function now takes the form below.
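As a rough sketch (using the same assumed shapes as the earlier sketch, not the author's exact code), this final version looks something like:

import torch

Cin, Cout, width, height = 3, 6, 6, 7
Input = torch.randn(Cin, width + 2, height + 2)
W = torch.randn(Cout, Cin, 3, 3)
Output = torch.zeros(Cout, width, height)

def kernel3(i, j, k):
    # Pointwise product of the filter block for output channel i with the matching
    # window of the padded input, reduced to a single number with torch.sum().
    return torch.sum(W[i] * Input[:, j:j + 3, k:k + 3])

for i in range(Cout):
    for j in range(width):
        for k in range(height):
            Output[i, j, k] = kernel3(i, j, k)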

We ran these four versions of the function on two machines: an Intel Core i7 and an Nvidia Jetson Nano. The results are in Tables 1 and 2 below. As you can see, the performance improves substantially with each transformation. In addition, the speedup of the matrix product version over the 6-nested-loop version varies from 68 to over 400 times, with the greatest speedup occurring when the values of Cin are largest.

| 6 nested loops | Factored kernel | Kernel with dotprod | Matrix product | Cin | Cout | W,H | Speedup |
| 2.24 seconds | 1.26 | 1.0 | 0.022 | 16 | 4 | 10,10 | 68 |
| 4.47 | 2.75 | 0.19 | 0.047 | 16 | 8 | 10,10 | 95 |
| 8.24 | 4.98 | 0.39 | 0.077 | 16 | 16 | 10,10 | 107 |
| 8.51 | 4.97 | 0.20 | 0.038 | 32 | 8 | 10,10 | 223 |
| 8.66 | 5.06 | 0.10 | 0.020 | 64 | 4 | 10,10 | 433 |

Table 1. Execution time on Intel Core i7 for the four versions of the loop with various values of Cin and Cout. The speedup is measured as the ratio of the 6 nested loop time to the time for the matrix product.

| 6 nested loops | Factored kernel | Kernel with dotprod | Matrix product | Cin | Cout | W,H | Speedup |
| 47.9 seconds | 28.1 | 7.02 | 0.7 | 16 | 4 | 10,10 | 68 |
| 87.9 | 52 | 9.7 | 0.73 | 16 | 8 | 10,10 | 120 |
| 168.9 | 107 | 18.9 | 1.17 | 16 | 16 | 10,10 | 144 |
| 171 | 107.9 | 9.8 | 0.59 | 32 | 8 | 10,10 | 289 |
| 174 | 104.0 | 4.38 | 0.43 | 64 | 4 | 10,10 | 404 |

Table 2. Execution time on Nvidia Jetson Nano for the four versions of the loop with various values of Cin and Cout.

The final point to notice about this last version is that the outer (i,j,k) loops may all be executed in parallel. In other words, if you had a processor for each pixel on each output plane, the entire operation could be run in parallel with an additional speedup factor of Cout*width*height. Of course, all of these versions are far slower than the highly optimized conv2d() library function.
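For comparison, the optimized library call that computes the same thing (under the same assumed shapes) is essentially a one-liner:

import torch
import torch.nn.functional as F

Cin, Cout, width, height = 3, 6, 6, 7
Input = torch.randn(1, Cin, width + 2, height + 2)   # batch of one, already padded
W = torch.randn(Cout, Cin, 3, 3)
Output = F.conv2d(Input, W)                          # the highly optimized library version
print(Output.shape)                                  # torch.Size([1, 6, 6, 7])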

The compilers we talk about below do not operate at the level of program transformations on Python loop nests. They start at a higher level, transforming ONNX-like flow graphs, lowering the granularity to primitive operators, scheduling memory management and communication traffic, and eventually generating code.

Deep Neural Network Compilers.

Every one of the hardware projects we described above has a companion compiler capable of mapping high level DNN frameworks like PyTorch or Tensorflow to run on their new machine.   Not all of these have detailed descriptions, but some do, and some are also open source.   Here is a list of a few notable efforts.

  • Intel’s NGraph compiler “Adaptable Deep Learning Solutions with nGraph Compiler and ONNX” with extensive open source and documentation.
  • ONNC: A Compilation Framework Connecting ONNX to Proprietary Deep Learning Accelerators. Also fully open source. It is designed to support NVDLA hardware.
  • Nvidia TensorRT is an SDK for high performance inference that is built on CUDA. A TensorRT backend for ONNX is also available and open source.
  • Apache TVM is an open source compiler stack for deep learning that originated with the Computer Science department of the University of Washington.
  • Google MLIR, a Multi-Level IR Compiler Framework that provides tools for many types of programming language compiler challenges.
  • ONNX Runtime for Transformer Inference from Microsoft has been open sourced.
  • The Facebook Glow compiler “Glow: Graph lowering compiler techniques for neural networks” paper.
  • The GraphCore PopART runtime was discussed in the GraphCore section above.
  • Sivalingam and Mujkanovic of CRAY EMEA have a nice summary of these compilers in this post.

Glow translates the input from ONNX or Caffe2 into a high-level intermediate graph that is very similar to ONNX. Because they are interested in training as well as inference, they next differentiate the graph to facilitate gradient descent training. This new graph contains the original and its derivative. TVM also generates a differentiable internal representation. All of the others have similar high-level internal representations. Where they differ is in the layers of transformation and lowering steps.

High Level Graph Transformations

The next step is to do optimization transformations on the graph. The nGraph compiler has a Reshape Elimination pass that exploits the fact that matmul(A.t, B.t).t = matmul(B,A) and other algebraic identities to make tensor-restructuring simplifications. Common subexpression elimination and constant folding are standard compiler techniques that can be applied as transformations on the high-level graph. For example, when compiling a model to be used for inference, most of the parameters in the high-level nodes, such as the various tensor dimensions, are known integers, so expressions involving address arithmetic can be simplified.

An important part of the "lowering" process is where some of the high-level nodes are broken down into more primitive linear algebra operations. This step depends on the final target architecture: for a GPU certain transformations are appropriate, and for a CPU different choices are made. For example, with ResNet, Glow has different strategies for different instances of the convolution operator depending on the size of the filter weights, and these require different memory layouts. TVM, Glow and ONNC use a type of layer fusion to combine consecutive operators, such as convolution and batch normalization or ReLU, into a single special operator.

Low Level Internal Representations

Now the graph is transformed into the low-level internal representation. This layer is more specific about the representation of memory layout, and important optimizations can be made there. For example, if there are sequences of operations that must sweep across a large tensor, one can break the tensor into blocks so each block can be loaded once and the operation sequence applied to that block. This is a classic locality optimization. Managing memory can involve other transformations. For example, ONNC uses layer splitting to handle memory constraints, as shown in Figure 12 below.

fig12

Figure 12. From Skymizer ONNC tutorial.

Quantization is an issue that several compilers address. Glow also does profile-guided quantization so that floating point networks can be converted into efficient integer-based networks.   Finally, depending upon the backend architecture, code is generated from the final low-level graph.

Runtime Systems

Because the compilation system maps the neural network onto hardware configurations that may have more than one processor, and those processors must communicate with each other, there must be a runtime system to coordinate the execution.

Glow has a runtime system that is capable of partitioning a network into an acyclic graph of subgraphs that are scheduled across multiple accelerators. As we discussed previously, the GraphCore PopART runtime manages BSP-style execution across thousands of processor threads.

The Microsoft ONNX runtime focuses on CPU and CPU + GPU execution on Windows, Linux and Mac OS. For the GPU it supports CUDA, TensorRT and DirectML. It also supports IoT/edge applications using Intel OpenVINO, ARM and the Android Neural Networks API.

Final Thoughts

The explosion of computer architecture innovation exemplified by the new systems described here is very impressive. It is reminiscent of the boom in HPC innovation in the 1990s which led to the current generation of parallel supercomputer designs. The density and scale of some of the chips are very impressive.   In the 1980s we considered the impact of wafer-scale integration on parallel computing, so 40 years later, it is interesting to see it come to pass in systems like Cerebras.

There are many details of the compiler infrastructure that we covered here very superficially.   We will return to this topic in the future when we have more access to details and hardware.

 

Anomaly Detection: From the Edge to the AWS and Azure Cloud

There are now billions of sensors that monitor the world around us. Bio sensors are used to monitor every aspect of life. Environmental sensors measure temperature, humidity, pressure, chemical concentrations, vibrations, acceleration, light wavelengths and more. These sensors produce a constant stream of data that must be analyzed and when unusual behavior is detected these anomalies need to be reported. This alarming behavior may consist of spikes in sensor readings or device failures or other activity that should be flagged and logged. Often these sensors communicate with a nearby small edge computing device which can upload summary data to the cloud as illustrated in Figure 1. Typically, the edge computer is responsible for some initial data analysis or, if it has enough computing capacity, it may be responsible for detecting the anomalies in the data stream.

01

Figure 1. Edge sensors connected in clusters to an edge computing device, which does initial data analysis prior to sending aggregated information to the cloud for further analysis or action.

In this short note we look at two cloud services that provide anomaly detection. One is the Azure Cognitive Service anomaly detector and the other is from the Amazon Sagemaker AI services. In both cases these services can be (mostly) installed as Docker containers, which can be deployed on a modestly endowed edge computer. We will illustrate each of them with three example data streams. The first data stream is from an SO2 sensor that was part of an early version of the Chicago-Argonne Array-of-Things edge device. The second is from the Global Summary of the Day (GSOD) weather data from the National Oceanographic and Atmospheric Administration (NOAA) for 9,000 weather stations between 1929 and 2016. In particular, we will look at a sensor that briefly failed, and we will see how well the anomaly detectors spot the problem. The third is an artificial signal consisting of a sine wave of gradually lengthening period with several anomalous data spikes.

The two services each use a different algorithm to detect anomalies.   The Sagemaker algorithm uses a machine learning method called Random Cut Forest and the Azure detector uses a method which combines spectral analysis with a convolutional neural network. We will describe both algorithms in more detail at the end of this section, but first we go to the set-up and experiments.

Azure Cognitive Services.

To use the cognitive service, you need to go to the Azure portal and then to cognitive services. There you can use the search bar to look for the "Anomaly Detector" (at the time of this writing it is still in "preview"). You will need to create an instance, and that will get you an API key and an endpoint for billing. (You can use it for free until you use up the free quota; after that you can switch to payments. I did this and it did not cost me much: so far $0.75 for the work on this paper.)

Download and Launch the Container

You should go to this page to see what is currently required to launch the container. Assuming you have docker installed on a machine (your laptop or in the cloud), you must first pull the container.

docker pull containerpreview.azurecr.io/microsoft/cognitive-services-anomaly-detector:latest

Next you will use your ApiKey and billing endpoint to launch the container.   This command works:

docker run --rm -it -p 5000:5000 containerpreview.azurecr.io/microsoft/cognitive-services-anomaly-detector:latest Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}

We can now use the anomaly API to directly interact with the algorithm running on the container. We have supplied a Jupyter notebook with the details of the experiment that follows.

02

Rather than use the endpoint for the cloud-resident service, we need an endpoint on the container.

endpoint = 'http://localhost:5000/anomalydetector/v1.0/timeseries/entire/detect'

(This assumes your container is running on your local machine. If it is remote, you need to make sure port 5000 is open and substitute the host name for localhost.) Notice the word "entire" in this endpoint. The detector operates in two modes: entire and last. Entire mode considers the entire history of a stream and spots past anomalies. Last mode is used to do real-time detection.
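For reference, a minimal sketch of a call to the "entire" endpoint looks roughly like the following (field names follow the public Anomaly Detector REST documentation; the timestamps, values and granularity are made-up placeholders, not the data used in this article):

import requests

endpoint = 'http://localhost:5000/anomalydetector/v1.0/timeseries/entire/detect'

# Build a series of daily points; real code would use the sensor values instead.
series = [{"timestamp": "2019-01-%02dT00:00:00Z" % (d + 1), "value": float(d % 7)}
          for d in range(28)]
body = {"granularity": "daily", "sensitivity": 95, "series": series}

result = requests.post(endpoint, json=body).json()
print(result["isAnomaly"])    # in "entire" mode this is a list, one flag per data point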

To illustrate its behavior, we will use a data stream captured from an SO2 sensor on an early version of the Argonne-Chicago "Array-of-Things" edge device. Running the detect method, decoding the output and plotting the results (see code in the notebook) gives us the graph below. While hard to see, there are three things being plotted: the value of the data, a line of expected values with a surrounding region of uncertainty, and finally the anomaly (red dot). In this case the region of uncertainty is very narrow and not visible. We will see it more clearly in the next example.

03

Now the real-time monitoring case.

To run the algorithm in a continuous mode you need to use the “last” endpoint.

endpoint = 'http://localhost:5000/anomalydetector/v1.0/timeseries/last/detect'

This allows the algorithm to look at a window of data and make a prediction about the last item in the window.   We can now send a new “sliding” window consisting of the last “window-size” data points to the service every time step.

04

The detector returns a dictionary and, if the last item in the window is anomalous, the flag result['isAnomaly'] is True. By keeping track of the anomalies (see code in the notebook), we can plot the result.

05

In this case we have spotted the true anomaly (red dot) and another that looks like a false alarm.   By lowering the sensitivity value, we may be able to eliminate some of the false alarms.

The Skagit Valley Temperature

Turning now to the NOAA GSOD data for the temperature sensor at the Skagit Valley, WA station, we have one measurement per day for a year. There are a few days in October where the sensor goes bad and signals a temperature of over 100 degrees Fahrenheit. (We looked at this case using Google's data analysis tools in our book. By looking at other nearby sensors we saw that this was not the temperature in this location.) The figure below shows the batch anomaly detection for this data. In this case the region of expected-value uncertainty is very clearly defined and the bad data is easily spotted.

06

Turning to the real-time detection mode, we see below that the red dots show the true anomaly, but the detector also flagged two other locations: one in the month of May that looks like a false alarm, and another in late November that is unclear.

07

Synthetic Oscillatory Data

We now look at the case where the data consists of a sine wave that has a slightly increasing period with a few added spikes.   The code to generate the data is

08

In this case the batch detector accurately tracks the sine wave and catches the big spike, but misses the first spike.  However, the real-time detector does a very good job.

09

10

The AWS Sagemaker Anomaly Detector.

The AWS Sagemaker service can be managed from inside a container, but one difference from the Azure service is that the Sagemaker version does all of the analysis in the cloud. We have provided in the GitHub repository the Docker file you need to create a container that will run on the edge device and make the calls to the cloud. In this case the job of the container is to gather the data, interact with the cloud service to set up the algorithm training, deploy a server that will host the trained model, and return inference results. This container-based component is a script that makes calls to AWS Sagemaker. To better illustrate the details, we have a Jupyter notebook. You can use the following script to build the container, run it and launch Jupyter from inside.

docker build -t="yourname/sagemake" .
….   a great deal of build output follows
docker run -it -p 8888:8888 yourname/sagemake
…. Once container starts we are now running as ec2-user in the container.
ec2-user@29e378df61a9:~$ jupyter notebook
….. the output will tell you how to point your browser to see the notebook.

Note: to run the container, it must have your AWS credentials and your Sagemaker execution role identifier. Consequently, do not push your container to the docker repository and delete it when you are finished.

To get started you must log in to the AWS Sagemaker portal and create a user and an execution role. You will also need to create a bucket where Sagemaker will store your model data. The notebook shows the details for how to use this information to create a “session” and “role” object. We will use these to train the algorithm.

One of the ways the Sagemaker documents suggest for presenting data to the algorithm, in cases where the size of the collection may be small, is to create a "shingled" version. In other words, we take the data stream and, for each time instance, create a set of the next "shingle-size" data values, as shown below using the data from our Array-of-Things SO2 sensor.

11
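One way to build such shingles is sketched below (the array here is a random stand-in for the sensor stream; the notebook figure shows the author's version):

import numpy as np

def shingle(stream, shingle_size=48):
    # Row i holds the values stream[i : i + shingle_size].
    n = len(stream) - shingle_size + 1
    return np.stack([stream[i:i + shingle_size] for i in range(n)])

so2 = np.random.rand(500)        # stand-in for the SO2 sensor stream
shingled = shingle(so2)
print(shingled.shape)            # (453, 48)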

To train the model we use the session and role objects as follows:

12

Next create a cloud instance that can be used for doing inference from this model.

13

We can now do a “real-time” analysis by doing a sequence of inferences on sliding windows of shingles. We will use 25 shingles each of width 48 so the window covers 73 time units (we used windows of size 100 in Azure examples). The code now looks like

14

Plotting the anomalous points with black dots we get the picture below.

15

As you can see, it detected the anomaly at 400, but it also flagged four other points. Notice that it flags any point that has an anomaly score greater than 3 standard deviations from the others in the sliding window. Raising the threshold above 3 caused it to lose the actual anomaly while retaining the false alarms.

Applying the same procedure to the Skagit Valley temperature sensor and the artificial sinusoidal signal, we get similar results.

16

17

Comparing the two anomaly detectors, we found it was easier to get accurate results from the Azure cognitive service than from AWS Sagemaker. On the other hand, the Sagemaker method has a number of hyperparameters that, if tuned with greater care than we have given them, may yield results superior to the Azure experience.

Another important difference between the two detectors is that the Azure detector can be completely deployed in the container, while the AWS detector relies on cloud-hosted analysis. (Of course the Azure system still keeps a record of your use for billing purposes, but it was not expensive: $0.14 for the experiments above. For the work using Sagemaker it was a total of $6.41.) We expect that the Sagemaker team will make a container available that will run entirely on the edge device. They may have already done so, but I missed it. If a reader can help me find it, I will happily amend this article. One possibility is their excellent Greengrass framework.

The Algorithms

The Random Cut Forest

An excellent github site with good details about using the Sagemaker service is here.   This is also what we modified to create our Jupyter notebook. A basic description of the algorithm is in “Machine Learning for Business” by Doug Hudgeon and Richard Nichol and available on-line.    However, for a full technical description one should turn to the paper by Guha, Mishra, Roy and Schrijvers, “Robust Random Cut Forest Based Anomaly Detection On Streams” published in 2016.

At the risk of oversimplifying, Figure 2 below illustrates how a random cut forest can be built for one-dimensional data values. We start by picking a point somewhere near the middle of the data set and then divide everything above that point into one group and everything below it into another group. For each group we repeat the process until each point is in a group of size 1. Now, for each point, count the number of tree divisions it took to isolate that point; low counts indicate possible outliers. Finally, repeat the tree construction many times and compute the average of these scores. Again, a low average indicates a possible anomaly.

18

Figure 2.   Forest of Random Trees for a one-dimensional data collection.
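Here is a toy sketch of that idea for one-dimensional data; it uses simple random cuts rather than the exact Robust Random Cut Forest construction, so treat it only as an illustration:

import random

def isolation_depth(points, point, depth=0):
    # Number of random cuts needed to isolate `point` from the rest of the set.
    if len(points) <= 1 or min(points) == max(points):
        return depth
    cut = random.uniform(min(points), max(points))
    side = [p for p in points if (p <= cut) == (point <= cut)]
    if len(side) == len(points):      # degenerate cut; redraw at the same depth
        return isolation_depth(points, point, depth)
    return isolation_depth(side, point, depth + 1)

def average_depth(data, point, trees=200):
    # Average over many random trees; a low average suggests an outlier.
    return sum(isolation_depth(data, point) for _ in range(trees)) / trees

data = [10, 11, 9, 10, 12, 11, 10, 35, 9, 10]
scores = {p: average_depth(data, p) for p in data}
print(min(scores, key=scores.get))    # 35, the obvious outlier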

Unfortunately, if you apply this technique to a large window of a data stream that has a wide range of values, it will only capture the extreme ends of the range. If there is an anomaly in the mid-range of values, it may not be seen as anomalous when the stream is taken as a whole. For example, in the image below we apply the algorithm across the whole SO2 sensor data set and see that it completely misses the anomaly at 400 and instead flags the overall highs and lows.

19

But, as we showed above, when we applied a sequence of small windows of data to the same forest of trees it did capture the anomalous spike at 400.

To better illustrate this point, consider a variation on the fake sine wave data example. The forest of random trees algorithm was able to detect the anomaly when the spikes were introduced, but a variation on this example shows its failure. Instead of introducing spikes, we simply flatten part of the sine wave for a segment of the range. The result is that no anomaly is detected, and for the modified range the anomaly score actually drops. The situation is not helped by the sliding window test.

20

The Azure Spectral Residue CNN Anomaly Detector

Hansheng Ren, et al. published "Time-Series Anomaly Detection Service at Microsoft" at the KDD 2019 conference. The paper describes the core algorithm in the Azure cognitive service anomaly detector. There are two parts to the algorithm.

Part 1. Spectral Residual and Salience

Salience in image analysis is the property that allows some parts of an image to stand out and be easily identified; it is often used in image segmentation. The spectral residual is computed as follows. Applying an FFT to the stream sequence yields the frequency spectrum of the data. The spectral residual is the difference between the log of the spectrum and an averaged version of the same. Applying the inverse FFT transforms the spectral residual back to physical space. The result is the saliency map of the signal, which locates the potentially anomalous parts.
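A small sketch of that computation with NumPy follows; the averaging window, the epsilon and the test signal are arbitrary choices, not the service's exact recipe:

import numpy as np

def saliency_map(x, avg_window=3):
    spec = np.fft.fft(x)
    log_amp = np.log(np.abs(spec) + 1e-8)                # log of the frequency spectrum
    avg = np.convolve(log_amp, np.ones(avg_window) / avg_window, mode="same")
    residual = log_amp - avg                             # the spectral residual
    phase = np.angle(spec)
    return np.abs(np.fft.ifft(np.exp(residual + 1j * phase)))  # back to physical space

t = np.linspace(0, 6 * np.pi, 200)
signal = np.sin(t)
signal[120] += 4.0                 # inject one spike
sal = saliency_map(signal)
print(int(np.argmax(sal)))         # the saliency peak should land at or near index 120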

Part 2. Applying a CNN to the saliency map

The novel feature of the algorithm is that it uses a convolutional neural network to do the final anomaly detection.   The network is trained on saliency maps that are generated by injecting artificial anomalies into a variety of real signal types. The paper describes this process in more detail.

To illustrate the power of this method, consider the flattened sine wave that defeated the Forest of Random Trees example above.   As shown below, the SR-CNN method captures this obvious anomaly perfectly.   As you can see, it projected the sinusoidal oscillations into its “expected value” window and the flat region certainly did not match this “expected” feature.

21

While this example is amusing, we note that in the cases (Skagit weather, SO2) where we looked at the real-time sliding-window analysis, both methods found the anomaly, even though the random tree method had more false alarms.

All of the data and Jupyter notebooks for these examples are in GitHub.

 

Modeling Natural Language with Transformers: Bert, RoBERTa and XLNet.

Chapter 10.4 of "Cloud Computing for Science and Engineering" described the theory and construction of Recurrent Neural Networks for natural language processing. In the three years since the book's publication the field of language modeling has undergone a substantial revolution. Forget RNNs. Transformers are now in charge, so this report is an update of that part of the chapter.

The original RNNs were not very good at capturing the context of very long passages. A technique called "attention" was developed as an add-on to RNNs to help them with this problem. In a landmark paper, "Attention is all you need", Vaswani et al. showed that the recurrence part was not really needed. This supplement will describe the basic transformer architecture and look at three examples. The first is called BERT, and it was the transformer that changed the field of natural language processing. We will briefly describe its architecture and demonstrate how to use it with an optimized version called RoBERTa. Finally we will move on to a more recent transformer called XLNet that has drawn a lot of interest.

There are dozens of blog posts and articles on-line that describe transformers, and the original papers do a great job describing the mechanical details. The challenging part is understanding how the details of the network design give the network its ability to model the probability distributions associated with natural language. What do I mean by that? Consider the sentence "He decided to walk across the lake." A native English speaker would be troubled by that and perhaps suggest that an error occurred and the sentence should have been "He decided to walk across the lane." There is nothing grammatically wrong with the first sentence. It just does not feel right: it falls outside the internal language model we use for comprehension.

We can also consider a "fill in the blank" test to see how we draw on reasonable expectations of how words fit into context. For example, consider these three sentences.

  1. “Because she was a good swimmer, she decided to <mask> across the <mask>”
  2. “He went to the farmer’s <mask> and <mask> a bunch of green <mask>.”
  3. “Whenever <mask> go to the whiskey <mask>, <mask> have a lot of <mask>”

In sentence 1 the masked pair could have been "walked", "road", but "swam", "lake" is more natural because, in context, she is a swimmer. For sentence 2, the masked triple could be "home", "ate", "cars", but "market", "purchased", "beans" feels better. We leave it to your imagination to decide what fits in sentence 3.

A masked language model is one that can take as input sentences like those above and replace any masked word by something that, with reasonably high probability, fits in the context of the sentence. Before launching into experiments with masked language models, let's briefly look at the architecture of transformers as described in the original Vaswani paper.

Bert and the Transformer Architecture

A transformer has two major components: an Encoder and a Decoder. The Transformer has an implicit model of language in that it has learned a probability distribution associated with "meaningful" sentences in the language. The encoder is a non-linear function that maps an input text object into an object in a high-dimensional real space that is somewhat "near" very similar sentences. To do this, Transformers have a special tokenizer that can convert text into token lists, where each token is an integer index into a vocabulary list for that transformer. More specifically, let s be a string, then let in_tokens = Tokenizer.encode(s) and n = length(in_tokens). The Encoder has an embedding function

Encoder.input_embedding: Z_M -> R^k

where M is the size of the model vocabulary and k is the model-specific dimension of the embedding space. Hence the representation of the entire string is n vectors of dimension k (that is, an element of R^(k*n)). There is a now famous diagram that illustrates the architecture of the transformer.

01

Figure 1. The transformer model (from "Attention is all you need", Vaswani et al.)

The full encoder is the vertical stack on the left. So far, we have only described the Input Embeddings. Ignoring (for now) the Positional Encoding, note that our R^(k*n) input is fed to a stack of N blocks, each of which consists of a Multi-Head Attention and a Feed Forward network, and each of which generates a new R^(k*n) output. The Multi-Head Attention block consists of a parallel collection of scaled dot-product blocks, as illustrated in Figure 2.

02

Figure 2. Multi-Head Attention and Scaled Dot-Product Attention (again from Vaswani et al.)

The basic Attention part is the critical component. It is a function of three tensors. Q is called the query tensor and is of dimension n x t for some t; K is called the key tensor and is of the same dimension. The keys are "associated" with values in the tensor V, which is of size n x k. The attention function is then

03

Note that Q*K^T is of size n x n. The softmax is applied to each row of the product (after normalizing by the square root of t), so the final result is n x n times n x k, which is of size n x k. The way to think of this function is that each t-dimensional query vector is being dot-product compared to each key vector. If those two vectors are nearly orthogonal, the dot-product is small and the corresponding entry in the result of the attention function will be small. Here is the motivational concept. Suppose the queries are associated with words in a sentence and we want to see which other words in the sentence are most related; the other words are the keys. So if the sentence is "John got in his car then he went to the store", then we would expect a strong link between "John", "his" and "he", as illustrated below.

04

Figure 3. Dark lines indicate where we expect the attention between words to be the strongest. (Apologies to those who feel the reference to gendered pronouns is bad.)
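As a concrete version of the attention formula above, here is a minimal sketch (the sizes n, t and k are arbitrary):

import math
import torch

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(t)) V, with Q and K of size n x t and V of size n x k.
    t = Q.shape[-1]
    weights = torch.softmax(Q @ K.transpose(-2, -1) / math.sqrt(t), dim=-1)   # n x n
    return weights @ V                                                        # n x k

n, t, k = 10, 16, 64
Q, K, V = torch.randn(n, t), torch.randn(n, t), torch.randn(n, k)
print(attention(Q, K, V).shape)    # torch.Size([10, 64])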

Now to explain the Multi-Head Attention. We replicate the scaled dot-product attention network h times, where h is a divisor of k and we set t = k/h. Our embedded sentence is of dimension n x k, so to create the Q, K and V tensors we have to project it into tensors of size n x t (for Q and K) and n x k for V. These projections are the linear transformations in Figure 2. We now have h outputs of size n x k; we concatenate those outputs and use a final linear map to project the result back to a single n x k tensor.

The idea of multiple heads is to allow different heads to attend to different subspaces of attention. It also slightly reduces the computational complexity of the attention computation. However, there is some question as to whether multiple heads help that much (see "Are Sixteen Heads Really Better than One?" by Michel, Levy and Neubig).

The final critical component of the network is the feed forward block.   This is a basic two-level network with one hidden layer.   Notice that this does not involve token-to-token analysis; that was the job of the attention blocks.   Hence the network can process the tokens independently, so its internal structure is independent of the token stream length.

There are other components of the Transformer including the add & norm steps and the Decoder.   The Decoder half has the same basic components as the Encoder, and it is the part that is critical for building language translation systems but we will not address that task here. (A good discussion of the Decoder is given in “Dissecting BERT Appendix: The Decoder”, a Medium post by Miguel Romero Calvo.)

There are several other ways to train a model like this and one is to turn it into a masked language model. To do this we wrap the basic Encoder with a “Masked Language Model Head”.     To explain this, we need to get into a few details related to the experiments we will show.

The original model of this type, BERT (Bidirectional Encoder Representations from Transformers), was developed by Google, but we will use the more optimized version called RoBERTa (from Facebook and the University of Washington), which was released together with the paper "RoBERTa: A Robustly Optimized BERT Pretraining Approach" by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer and Veselin Stoyanov. (RoBERTa was trained with mixed-precision floating point arithmetic on DGX-1 machines, each with 8 × 32GB Nvidia V100 GPUs interconnected by Infiniband.) The RoBERTa masked language model is shown in Figure 4 below; it is composed of a language model head on top of the base encoder.

05

Figure 4.   Vword = a vector of probabilities indexed by words in the vocabulary.

The LM head consists of a linear transformation with a GeLU (Gaussian Error Linear Unit) activation, normalized with a LayerNorm function ( (x – x.mean)/|x – x.mean| ), followed by a linear transformation mapping the result back into a list of vectors of probabilities, each of which has length equal to the size of the vocabulary. The length of the vector list is the number of tokens in the original input. If the transformer is doing its job, the vector of probabilities associated with a given word-token has its maximum at the index of that word (or a better one) in the vocabulary. (The masked language model head also returns the cross-entropy loss of the predicted values against the input string, but we will not use that here.)
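A minimal sketch of such a head as a PyTorch module (the hidden size 768 and the roberta-base vocabulary size used here are assumptions) might look like:

import torch
import torch.nn as nn

class MaskedLMHead(nn.Module):
    def __init__(self, hidden=768, vocab=50265):
        super().__init__()
        self.dense = nn.Linear(hidden, hidden)     # first linear transformation
        self.norm = nn.LayerNorm(hidden)           # the LayerNorm step
        self.decoder = nn.Linear(hidden, vocab)    # map back to scores over the vocabulary

    def forward(self, hidden_states):
        x = nn.functional.gelu(self.dense(hidden_states))   # linear + GeLU activation
        x = self.norm(x)
        return self.decoder(x)      # one vector of vocabulary scores per input token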

Demo Time!

To illustrate the behavior of the RoBERTa language model we can load an instance as follows. We use the PyTorch-Transformers package from the HuggingFace team, who have provided excellent implementations of many of the models in the Transformer family.

06

Roberta-base has 12 layers, a hidden size of 768, 12 heads and 125M parameters. To use the model, one need only convert a text string to a tensor of input tokens, feed that to the model and pull out the list of prediction scores (which is returned as a tensor with shape string-length by vocabulary size). Taking the largest prediction at each position as the likely correct word and converting that back to a token (and removing an internally added character) yields the result. In code this is:

07
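For readers who want to reproduce this without the notebook, here is a sketch of the same steps using the HuggingFace transformers package (class and method names may differ across library versions, so treat this as an approximation rather than the notebook's exact code):

import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")
model.eval()

text = "The dog <mask> the ball across the yard."
input_ids = tokenizer.encode(text, return_tensors="pt")
with torch.no_grad():
    scores = model(input_ids)[0]          # shape: 1 x sequence-length x vocabulary-size
best = scores.argmax(dim=-1)              # most likely token at every position
print(tokenizer.decode(best[0]))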

To illustrate this on a simple sentence, we will use one that is not grammatically correct and see what the model comes up with.

08

Notice two things. The model split the word speller into a root and suffix. But the model also generated a new sentence that was closer to what the probability distribution says a correct sentence should look like: in this case, a change from "were" to "was".

To illustrate its use as a masked model we can substitute words in the text with a <mask> to see how the model replaces the masks.

09

In this case it matched the pronoun "she" and inferred that "swim" and "lake" were good choices.

Changing the context slightly we have

10

Taking our first example from the list of three above,

11

And finally, our third sample sentence

12

The source notebook for this example is in the GitHub archive. Load it and try it out yourself.

XLNet and sentence generation.

Another important point we neglected in the discussion of BERT involves the position of tokens in the input. The Attention operation compares each word in the string with each other word "in parallel"; however, the order of words in the string matters if we want to understand its meaning. Hence, it is necessary to tag each token with position information. The way this is done is to create an additional value (based on a clever use of sine waves of different frequencies; the details are not important here) to encode position in the string. These values are literally added to the embedding values before being sent to the Attention function. This is a detail we do not see when we invoke the RobertaForMaskedLM model; it is handled internally. (You can see it referenced in Figure 1.)

BERT is an autoencoding (AE) language model: it is trained to recover masked tokens in its input. XLNet is a newer Transformer language model that is showing better performance than BERT or RoBERTa on many test cases. BERT derives its power to predict a token at sequence position t from the fact that it looks at the elements both before and after t. But BERT has one subtle weakness: because it is trying to predict masked tokens "in parallel", it will not necessarily predict them consistently. For example, "he went to the <mask> <mask> and purchased lots of postage stamps." gives BERT a hard time. It tries to replace the pair of masks with "store store" instead of "post office". (Replacing postage stamps with vegetables, it guesses "grocery store".)

An autoregressive model, like XLNet, operates differently. Given a string of word-tokens X = [x1, x2, … , xT], an autoregressive language model will compute a sequence of conditional probabilities whose product gives p(X)

13

The problem with this is that if we go from left to right, we miss the context provided by important words to the right. BERT avoids this by doing all words in parallel. Developed by Yang, Dai, Yang, Carbonell, Salakhutdinov and Le from CMU and Google, XLNet is referred to as a generalized autoregressive (AR) language model. Rather than do the predictions from the left (x1) to the right (xT), XLNet uses all permutations of the tokens to do the computation.

The following is very abstract and a bit confusing, so feel free to skip this next part and go to the fun demos that follow.

Let Z_T be the set of all possible permutations of the length-T index sequence [1, 2, … , T], and let z be a permutation in Z_T. The notation z_t refers to the t-th element of the permutation, and z_<t refers to the t-1 elements that precede it. We can apply the same conditional probability sequence stated above to the elements in permutation order to compute the conditional probability

14

The key idea is that if we compute the probabilities for the correct word at each position in the string using every permutation of the words in the string, then all contexts for the word at that position can be considered. For example, consider the sentence "He is tall" and its permutations:

He is tall –
tall He is +
is tall He +
tall is He +
is He tall –
He tall is –

If we compute the probabilities from left to right for each of these permutations, we see that the permutations marked with a + have the word "tall" before "He", so when calculating the conditional probability p("He" | … ) the word "tall" is part of the context. If the sentence had been "He is talls", these conditional probabilities, and hence the overall result, would have been lower.

The XLNet designers decided to train a network that would use this principle as a goal and optimize the network parameters (theta) to find

15

The result is a sequence of M query operators g that are defined by the model parameters, such that for each permutation the conditional probability can be expressed in softmax form as

16

That is, for position z_t in a permutation, g is a function of the elements that precede it and of the position itself (and not of the value at that position). To compute the sequence of query operators g we need what the authors call a two-stream recurrence. One sequence is called the content stream and it is computed as

17

Notice that the content at the next level up depends on the position z_t as well as the value x_zt. (Notice the <= in the KV pair. While this may seem like an obscure point, it is important that g depend only on the position in the permutation and not on the value there, because we are trying to compute the conditional probability of that value!) We now define the query operator g by

18

For each permutation we initialize the recurrences with

19

where w is a learned parameter and, in the case of strings longer than 512 words, the hidden states from the previous 512-word block are carried over.

If these recurrences look too abstract, here is how they look in a (simplified) version of the attention stack.

20

What this fails to show is how values are averaged over multiple different permutations.

Demo Time Again!

In this demo we will use XLNet to generate stories. We start with a sentence fragment, add an extra blank token at the end, and get XLNet to predict the next word. We then append that word to the end of the sentence and repeat the process.

21

We have cast this process as a function which takes a string and an integer to indicate the length of the string we want to make.   XLNet likes to have a long string to start, so we have added a paragraph of PADDING_TEXT at the beginning.

To force the model to generate the new last word, we do a bit of black magic. We need to generate a permutation mask and a target mapping. We are using the excellent HuggingFace library of Transformer implementations; the API for their XLNetLMHeadModel can be found here. We first add a blank token to the end of the list of ids (this is just a 0). If the length of the input_ids is m, then the perm_mask is of dimension 1 x m x m and is all zeros except the last column, which is all ones. The target_mapping is of dimension 1 x 1 x m and is all zeros except the last element, which is a one. The target mapping tells us which outputs we want and, in this case, it is only the last element.

The output is a list whose first element is a tensor of dimension 1 x 1 x 32,000. This is the vector of token logit values over the 32,000-word dictionary. To select the word, we draw a sample from a multinomial distribution based on the softmax of the logit vector. Because of this random draw the results are never the same twice when we run the function. Below are some samples.

22

The source notebook for this demo is xlnet-story-generator.ipynb in github.
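For experimentation outside the notebook, here is a rough sketch of a single generation step using the HuggingFace XLNetLMHeadModel (the starting text and model size are arbitrary choices, and the long PADDING_TEXT trick described above is omitted):

import torch
from transformers import XLNetTokenizer, XLNetLMHeadModel

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")
model.eval()

text = "Once upon a time there was a"
ids = tokenizer.encode(text, add_special_tokens=False) + [0]   # blank slot at the end
input_ids = torch.tensor([ids])
m = input_ids.shape[1]

perm_mask = torch.zeros(1, m, m)
perm_mask[:, :, -1] = 1.0              # nothing may attend to the blank last position
target_mapping = torch.zeros(1, 1, m)
target_mapping[0, 0, -1] = 1.0         # ask for a prediction only at the last position

with torch.no_grad():
    logits = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)[0]
probs = torch.softmax(logits[0, 0], dim=-1)
next_id = torch.multinomial(probs, 1).item()   # random draw, so results vary run to run
print(text + " " + tokenizer.decode([next_id]))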

Document classification

Document classification with Transformers requires you to add a document classifier head to the basic model. In the case of both BERT and XLNet, the 0th position of the last hidden state can be considered a summary of the document as a vector of size 768, and a Tanh activation function is applied to that.

The classifier basically consists of an additional linear layer of size 768 x q, where q is the number of classes, as shown in Figure 5 below.

23

Figure 5. Document classifier uses pooled output from Bert (or XLNet) as input to an additional linear layer to do get final classifier values.
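A minimal sketch of such a classifier head as a PyTorch module (hidden size 768 is the base-model value; real implementations add a learned pooling layer before the Tanh, so this is an approximation):

import torch
import torch.nn as nn

class DocClassifierHead(nn.Module):
    def __init__(self, hidden=768, q=5):
        super().__init__()
        self.classifier = nn.Linear(hidden, q)   # one output per document class

    def forward(self, last_hidden_state):
        pooled = torch.tanh(last_hidden_state[:, 0])   # position 0 acts as a document summary
        return self.classifier(pooled)                 # raw class scores for a softmax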

In the experiments that follow we will use Thilina Rajapakse's Simple Transformers library, which wraps the standard HuggingFace library in a way that makes the entire process very simple.

Demo—Classifying Scientific Abstracts

The documents we are going to classify are abstracts of papers from Cornell's ArXiv. The collection we are going to use is small (7100 paragraphs). ArXiv is a collection of papers that are submitted by scientists and are self-classified into one or more of several dozen categories. In our case there are papers from 138 subtopics, and we have grouped them into 5 broad categories: Math, Computer Science, Biology, Physics and Finance. These categories are not uniformly represented and, to make things very complicated, there are many papers that could be classified in several of these broad groups. This is not surprising: science is now very multidisciplinary. It is not unusual to find a mathematics paper about computational biology that uses techniques from computer science, or a physics paper that uses neural networks with an application to finance. We first experimented with this data collection in Nov. of 2015, using a very early version of Azure ML Studio. The confusion matrix for the result is shown below. As you can see, it is not very impressive.

24

We looked at this problem again in Nov. of 2017, but this time we used document analysis with the package gensim.

25

And once again the results were underwhelming. If we want to classify these documents with a Transformer, we must first train the top layer to fit our data. These two lines are sufficient to create the model and train it.

26

The input to the train function is a Pandas data frame with one column containing the text of the abstract and a second column containing the classification into one of our five categories. Training on an Intel Core i7 takes about 20 minutes. Creating and training the same model with XLNet takes about 30 minutes.
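In sketch form, the Simple Transformers calls look roughly like the following (the column names, model choice and use_cuda flag are assumptions; the notebook figure shows the exact call used):

import pandas as pd
from simpletransformers.classification import ClassificationModel

# train_df has one text column and one integer label column (0..4 for our five categories).
train_df = pd.DataFrame({"text": ["an abstract about graph algorithms ..."], "labels": [1]})

model = ClassificationModel("roberta", "roberta-base", num_labels=5, use_cuda=False)
model.train_model(train_df)

# Evaluation returns summary scores, the per-document model outputs and the
# wrongly predicted examples, as discussed below:
# result, model_outputs, wrong_predictions = model.eval_model(eval_df)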

We trained the model with 4500 of the abstracts and evaluated it with 2600 of the remaining abstracts. To do the evaluation you use

27

Result is a 2600×5 tensor where each row is a vector of length 5 that is the softmax of the model predictions. Wrong_predictions describes where the model failed. This information is very interesting, and it illustrates the types of "interdisciplinary" confusion that can arise. Here are three examples of failures.

A Math paper predicted to be Computer Science.

28

A Physics paper predicted to be Math.

29

Finally a Finance paper predicted to be Physics

30

It is the last sentence that is the clue that this paper is finance related. We can now use the results output to compute the confusion matrix. Here are the results for BERT and XLNet.

31

As can be seen the XLNet results were, on average, slightly more accurate and both methods were superior to the older approaches described above.

The interesting data here is the frequency with which Math papers are labeled as Computer Science. This is largely due to the fact that the majority of papers about neural networks are in the computer science category, but there are also many mathematicians looking at this topic.

Because we have the softmax ‘probabilities’ for the classification of each document, we can ask about the 2nd most likely choice and compute a “best-of-2” score for each paper.   In other words, an X paper is classified as X if one of the two top predictions is an X.   The results are shown below.   As you can see, it resolves the math-cs confusion as well as bio-physics.

32

The data and notebook for this demo are available in GitHub.

Final Thoughts

The experiments on document classification are simple illustrations. The Transformer scientific literature has a host of tests to which all of these methods have been subjected. The careful reader will notice that in the examples illustrated above we only used the smallest of the available language models of each type. This was to allow our notebooks to run on a relatively standard laptop. Of course XLNet is not the last word in Transformers; some of the new models are huge. For example, Microsoft has just announced Turing-NLG, with 17 billion parameters.

One of the goals of Turing-NLG and many other Transformer models is to improve performance on some important tasks we did not discuss above, such as question answering and document summarization; the Stanford Question Answering Dataset (SQuAD 2.0) is often cited. Another application of Transformers is uncovering the structure of language expressions. John Hewitt has done some interesting experiments along these lines in Finding Syntax with Structural Probes. In Oct. of 2019 we posted an article "A 'Chatbot' for Scientific Research: Part 2 – AI, Knowledge Graphs and BERT" where we discussed the role Transformers are playing in search and the role they will play in smart digital assistants. We concluded there that it was necessary to extend the analysis from sentences and paragraphs to incorporate additional information from knowledge graphs. Chen Zhao et al. from Microsoft consider aspects of this problem in the paper Transformer-XH: Multi-Evidence Reasoning with Extra Hop Attention. We are excited by the progress that has been made in this area and we are convinced that many problems remain to be solved.

 

Doing Deep Learning in Parallel with PyTorch.

This is a small tutorial supplement to our book "Cloud Computing for Science and Engineering".

Introduction

Machine learning has become one of the most frequently discussed applications of cloud computing. The eagerness of cloud vendors to provide AI services to customers is matched only by their own interest in pushing the state of the art for their own internal use. In Chapter 10 of our book we discussed several "classical" machine learning algorithms and introduced some of the methods of deep learning using three different toolkits: MXNet, TensorFlow and CNTK. We also described several cloud AI services available on AWS and Azure. However, one topic that we did not address at all was the training of neural nets using the parallel computing capabilities available in the cloud. In this article we will do so using another deep learning toolkit, PyTorch, which has grown to be one of the most popular frameworks.

In the simple tutorial that follows, we will first describe PyTorch in enough detail to construct a simple neural network.   We will then look at three types of parallelism that can be used while training a neural net.  The easiest to use is GPU parallelism based on Nvidia-style parallel accelerators.  We will illustrate this with an example based on the PageRank algorithm.   Next, we will consider distributed parallelism where multiple processes collaborate and synchronize around the training of a single neural network.   The obvious extension of this is to use multiple processes, each with a GPU to accelerate performance.   Finally, we will briefly describe the use of multiple GPUs in a single thread to pipeline the training of a network.

A Tiny Intro to PyTorch.

PyTorch 1.0, which was open sourced by Facebook in 2018, has become one of the standards for deep learning. The website is well documented with some excellent tutorials, so we will not duplicate them here. However, to make this readable, we will introduce some basic Torch ideas here and refer to the tutorials for in-depth learning. PyTorch is deeply integrated with Python. As with all deep-learning frameworks, the basic element is called a tensor. At a superficial level, a PyTorch tensor is almost identical to a Numpy array and one can convert one to the other very easily. The primary differences are the set of operators possible on a PyTorch tensor and the fact that a tensor can retain the history of the operators that created it. This history can be used to create derivatives and gradients that are essential for training neural networks. We will return to this property later, but first let’s look at some basic tensor properties.

In the example below we create a Numpy array of 5 rows and 3 columns with 1’s for each element and use it to create a PyTorch tensor.

1a
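In PyTorch 1.x terms, this corresponds to something like the following minimal sketch (the variable names are ours, not necessarily those in the screenshot):

import numpy as np
import torch

a = np.ones((5, 3))           # 5 rows and 3 columns of 1's
M = torch.from_numpy(a)       # a PyTorch tensor built from the NumPy array
print(M)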

PyTorch is extremely flexible.  For example, we can create a tensor from a python list of values and use this tensor to create a diagonal matrix.

1b
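A sketch of the same idea (again, our variable names):

import torch

z = torch.tensor([1.0, 2.0, 3.0])   # a tensor built from a Python list
D = torch.diag(z)                   # a 3x3 diagonal matrix with z on the diagonal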

Operations such as matrix-matrix multiply (torch.mm()) and matrix-vector multiply (torch.mv()) follow the standard algebraic rules. We will illustrate this with our vector z and matrix M from above. Matrix-matrix multiply requires 2-d tensors, but we can reshape our vector into a 2-d array. Once it is 2-d we can multiply it by our matrix. Or we can do a matrix-vector multiply using M and the original 1-d vector z. The result of the matrix multiply is a 2-d 5×1 matrix; the matrix-vector multiply yields a 1-d vector.

02
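The following self-contained sketch illustrates the same operations, assuming M is a 5x3 matrix and z has 3 elements:

import torch

M = torch.ones(5, 3)                # stand-in for the matrix M above
z = torch.tensor([1.0, 2.0, 3.0])   # stand-in for the vector z
zcol = z.reshape(3, 1)              # reshape the 1-d vector into a 2-d 3x1 tensor
r1 = torch.mm(M, zcol)              # matrix-matrix multiply: a 2-d 5x1 result
r2 = torch.mv(M, z)                 # matrix-vector multiply: a 1-d vector of length 5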

Regular multiplication and addition are different. If you multiply a tensor of the form [a, b]*[c, d] the result is the elementwise product [ac, bd]. In the case where one of the operands is a matrix, the result is shown below. Let’s start with M, add 1 to each element and then do a pointwise multiply.

03
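A small sketch of the pointwise case:

import torch

M = torch.ones(5, 3)
M1 = M + 1          # add 1 to every element of M
P = M * M1          # elementwise product: P[i, j] = M[i, j] * M1[i, j]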

These are only a few of the PyTorch operators, but they are sufficient for describing the material that follows.

Using the GPU

If you have access to a server with a GPU, PyTorch will use the Nvidia CUDA interface. Working with the GPU is not very elegant, but it is simple and explicit. If you have a server with three GPUs, they are named “cuda:0”, “cuda:1” and “cuda:2”. To do computation on the GPUs you must move all the associated data explicitly to the GPUs. To create a tensor on “cuda:1” one writes

04a

If there is only one GPU, the name “cuda” is sufficient. To move a tensor from CPU memory to the GPU you write

04b

To move a GPU-resident tensor back to CPU memory, one uses the operator .to(‘cpu’).
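Putting these together, the GPU idiom looks roughly like this (it assumes a server with at least two GPUs for the “cuda:1” example):

import torch

t1 = torch.ones(4, 4, device='cuda:1')   # create a tensor directly on the second GPU
t2 = torch.ones(4, 4)                    # created in CPU memory
t2 = t2.to('cuda')                       # move it to the (single) GPU
t3 = t2.to('cpu')                        # bring a GPU-resident tensor back to the CPU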

GPU parallelism:  The PageRank algorithm

To illustrate the programming and behavior of PyTorch on a server with GPUs, we will use a simple iterative algorithm based on PageRank. It is not necessary to understand the derivation below because we will only be concerned with the performance results.   But, for completeness we describe it anyway.

PageRank is an algorithm introduced in 1996 by Larry Page and Sergey Brin to rank web pages in their early version of the Google search engine. The rank can be considered the relative importance of a page. Consider a graph G of N nodes. Let link(i) be the set of nodes that point to node i, and let out(j) be the number of links out of node j. The formula for computing the rank Pr(i) for each node i in G is given by

Pr(i) = (1-d)/N + d * Σ_{j in link(i)} Pr(j)/out(j)

where the scalar d is called the damping factor and is a number between 0 and 1. This formula is for the special case that there is at most one link from a node to another. (You can read the Wikipedia article to learn more about this equation.) We can recast this in matrix algebra terms as follows. Let G(i,j) be the adjacency matrix of G, i.e. G(i,j) = 1 if there is a link from j to i in G and zero otherwise. Let Out be a diagonal matrix with the value 1/out(j) on the jth diagonal element when out(j) is not zero and zero otherwise. The formula above can be written in matrix form as

Pr = (1-d)/N + d * (G · Out) · Pr

where Pr is the vector of the Pr(j)’s and the “dot” is a matrix or vector multiply depending on the association, (G·Out)·Pr or G·(Out·Pr). (As we shall see, the choice can make a substantial difference in computational cost.) As you can see, Pr is closely related to an eigenvector of G·Out. In fact, if d = 1, it is an eigenvector. To compute it, we can turn this into a simple iteration

Pr_(k+1) = (1-d)/N + d * (G · Out) · Pr_k

Fortunately, this converges to the “principal eigenvector” solution rather rapidly. We will use this to compare GPU computation to regular CPU computing.

Our goal is to demonstrate how to do PyTorch computation on the GPU and compare the performance to running on the CPU, and we will use the iterative PageRank algorithm to do this. To build a sample network and adjacency graph, we will use the NetworkX python package to create a slightly modified random (binomial) Erdős-Rényi graph (see this page for details). The graph was modified so that every node has at least one outgoing link. To use this graph with PyTorch, we use the lovely DGL library from NYU and Shanghai by Minjie Wang, Jake Zhao, Prof. Zheng Zhang and Quan Gan.

We considered 4 different implementations of the PageRank algorithm and ran them on a single CPU and a single GPU. (We describe multiple CPUs and GPUs in the neural net example in the next section.) Version-0 is the one described in the DGL documents and we will not reproduce it here because it uses many features of DGL that are not important for our discussion. We ran it (and the others) on AWS with a p2.xlarge server, which has a single NVIDIA K80 GPU. The graph we created and tested with has N = 10000 nodes. We used K = 200 iterations and a value of d = 0.85. In all our tests the four versions converged to the same result within round-off error. (The complete source for these examples and experiments is available on GitHub.)

Version-1 of the algorithm uses the fact that the DGL sparse graph representation interoperates fully with the PyTorch library. Hence the PyTorch matrix-matrix multiply and matrix-vector multiply work when one of the arguments is a sparse matrix representation of our graph. The core of the algorithm is shown below. This function takes the DGL representation of the graph, the number K of iterations and a parameter device that is either the string “cuda” or “cpu”.

05

Our pv vector is created as all ones on the selected device. Lines 2 and 4 are DGL operators. In lines 2 and 3 we construct the Out array as an Nx1 matrix. (To simplify things, there is no divide-by-zero here because of the modification to the graph above.) In line 4 we extract the adjacency matrix as edges. Idmp is an Nx1 matrix that is (1-DAMP)/N for each element, where DAMP is the value for d. It is important to notice that we are using the sparse representation of the matrix edges in the matrix-matrix multiply.

Version-2 is almost identical to Version-1  except that we convert the edges matrix to a dense form  prior to executing the loop  as shown below.

06

Notice that in version-2 above there is a pointwise product pv*Out inside the loop prior to the matrix vector multiply.  Because of the associativity we described above, we can move Out outside the loop and convert it to a diagonal matrix and multiply it with gm as shown in version-3 below.

07
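The notebook on GitHub has the actual version-1, -2 and -3 functions. A rough pure-PyTorch approximation of the version-3 idea (ours, with the DGL setup omitted and a dense adjacency tensor and out-degree vector assumed as inputs) looks like:

import torch

def pagerank_v3_sketch(edges_dense, out_degree, K=200, damp=0.85, device='cpu'):
    # edges_dense: dense NxN adjacency matrix with edges_dense[i, j] = 1 if j links to i
    # out_degree:  length-N float tensor of out-link counts (assumed nonzero)
    N = edges_dense.shape[0]
    G = edges_dense.to(device)
    Out = torch.diag(1.0 / out_degree.to(device))   # diagonal matrix of 1/out(j)
    gm = torch.mm(G, Out)                           # fold Out into G once, outside the loop
    pv = torch.ones(N, device=device) / N
    Idmp = torch.full((N,), (1.0 - damp) / N, device=device)
    for _ in range(K):
        pv = Idmp + damp * torch.mv(gm, pv)         # Pr = (1-d)/N + d*(G.Out).Pr
    return pv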

The performance of all four of these PageRank algorithms is shown in the table below. Each algorithm was run with the parameter device set to “cpu” and to “cuda”. As one can see, the CPU-only time for version-0 is the second best, but it does not profit from the GPU as much as versions 2 and 3. Version-1 is not competitive at all. It is most interesting to note the difference between Version-2 and Version-3. The behavior can be explained easily enough. Replacing K (200) vector pointwise products with an NxN matrix-matrix multiply is not very smart if N = 10000 and matrix multiply is an O(N^3) algorithm. The surprising thing is how well the matrix multiply and matrix-vector multiply are optimized on the GPU. In these cases speed-ups of 2 and 3 orders of magnitude are not uncommon. Unfortunately, the DGL sparse matrix-vector multiply in Version-1 is not as well optimized.

08

This example illustrates that the GPU is capable of substantial performance improvements in a matrix-vector computation in PyTorch. The source code for this example as a Jupyter notebook is on GitHub along with the other examples from this chapter. Now we return to neural nets.

Neural Net training with the PyTorch and the GPU

One of the advantages of frameworks like PyTorch is how easy it is to build a neural network. To illustrate this point, we will build a network that takes real, scalar values for x and learns to compute an arbitrary function f(x). To do this we will train the network by selecting a sequence of values x_i in an interval [a, b] for i = 0,…,N and supplying it with f(x_i). The result will be a network that is very good at computing f(z) for any z in [a, b]. Of course, this is not as exciting as deep neural networks that take inputs x that represent an image where f(x) is a description of the content of the image, but the concept is exactly the same. The only difference is that when x is a 64×64 pixel color image it is not a single real number but a vector of real numbers in a space of dimension 64x64x3, which is pretty big. For our purposes, dimension = 1 is sufficient.

We will construct the network as four layers with an input layer of size 80, two middle layers of size 80 and an output layer of size 40. (80 and 40 are somewhat arbitrary here. There was no attempt to find optimal values.) These are linear layers, so the connections to the previous ones are complete, as illustrated in the figure below.

09

This is an extremely simple type of network that has enough layers we can say it is “deep-ish”.  (A really deep network for a serious imaging problem will have around 50 layers.)   The PyTorch code to specify this network is shown below.

10

The network is created as a subclass of torch.nn.Module and each instance contains instances of our four layers. It also contains an instance of a rectified linear operator defined by ReLU(x) = max(0, x). The method forward(x) defines how an input scalar tensor x is processed through the network. Each linear layer takes a vector of length 80 and produces an output of the same shape. However, we apply the ReLU operator to the input of all interior layers but not to the input layer. ReLU is an “activation” function that decides when a “neuron” turns on (in this case when the input is positive). In more mathematical terms, this allows the network to behave like an arbitrary continuous non-linear function. Without it, the network is a completely linear transformation. (The mathematical proof of this “universality” can be seen in the early work of George Cybenko and others.)
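A sketch of such a network in PyTorch, with the scalar input and output dimensions being our assumption, is:

import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(1, 80)     # scalar input expanded to 80 features
        self.l2 = torch.nn.Linear(80, 80)
        self.l3 = torch.nn.Linear(80, 40)
        self.l4 = torch.nn.Linear(40, 1)     # scalar output f(x)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        x = self.l1(x)                  # no ReLU on the input layer
        x = self.l2(self.relu(x))       # ReLU applied to the inputs of the interior layers
        x = self.l3(self.relu(x))
        x = self.l4(self.relu(x))
        return x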

To train the network we will provide inputs and target values and adjust the parameters that define the links. More specifically, each layer has an associated matrix W and offset vector b. The forward(x) function computes the following. It is the training step that determines the “best values” of the Ws and bs.

11a

We use the standard least squares error between the values for Out and the target data to determine the “best values”, and we use stochastic gradient descent to do the optimization. But to do this we need to compute the gradient of Out with respect to each of the Ws and bs. As mentioned previously, torch tensors are capable of recording the history of their creation and we can work backward to compute the derivatives of values computed with them. To force a tensor to begin a chain of computations we create it with the flag requires_grad=True as in the example below.

11b

Then to compute the gradient of out with respect to x we invoke the backward() operator as follows.

12a

To verify this, we can get out the old derivative chain rule and compute this by hand as

12b

and plug in the numbers to get.

12c

You don’t want to compute the gradient of our neural network output by hand. Let backward() do it automatically.
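A tiny example of the mechanism (the function here is ours, not the one in the screenshots above):

import torch

x = torch.tensor(2.0, requires_grad=True)   # start recording the history of operations
out = x**3 + 2.0*x                          # some scalar function of x
out.backward()                              # work backward to compute d(out)/dx
print(x.grad)                               # tensor(14.) since 3*x**2 + 2 = 14 at x = 2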

Training the network is now easy. We first gather together values for the input and target. In our trivial test case we will use inputs drawn from the interval [0.0, 20.0] and the target function f(x) = sqrt(x)*sin(3.14*x/5.0).

The training loop, in its simplest form, is shown below with an SGD optimizer and the mean squared error loss function. The optimizer is given a learning rate of 0.001. Like our choice of 80 for the size of the network layers, this was somewhat arbitrary, and it worked.

13
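A minimal sketch of such a loop, using the Net class sketched earlier and our own choices of sample count and epoch count, is:

import torch

xs = torch.linspace(0.0, 20.0, 1000).reshape(-1, 1)   # training inputs on [0, 20]
ys = torch.sqrt(xs) * torch.sin(3.14 * xs / 5.0)       # target values f(x)

model = Net()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
loss_fn = torch.nn.MSELoss()

for epoch in range(5000):
    optimizer.zero_grad()            # clear gradients from the previous step
    loss = loss_fn(model(xs), ys)    # mean squared error against the targets
    loss.backward()                  # compute gradients w.r.t. the Ws and bs
    optimizer.step()                 # take one SGD step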

Running this with various parameters to make it converge, one can test the network by providing values for the input and recording the output. Plotting the result (blue) against the target function (orange), one can see the convergence.

14

A more conventional approach to the training is to divide the input and target vectors into small “mini-batches” and passing each mini-batch through the loss-gradient-generation-optimization step independently.  This generates substantially faster convergence rates.  As we shall see, this independence also allows us to compute them in parallel and merge the results later.

Basic distributed computing with PyTorch

We have already described how PyTorch exploits GPU parallelism. If a server has multiple cores, or if you have a cluster of servers available, it is also possible to use an MPI-like message-passing style to coordinate multiple threads of computation. One must create a master process that forks off child processes that do the work. One way to do this that works for Jupyter notebooks, but not for GPUs, looks like the following.

   15

The Process function creates an instance of a process that is passed an initialization function, an identifier “rank”, the total “size” of the collection of processes and a function “run” that will be invoked once the initialization is complete. The init_process function is run on each created process and it initializes a communication backend that the spawned processes share. In the case above it is called “gloo” and it works for a single multicore server. However, one can also use MPI or NCCL. If you have a cluster with Infiniband, NCCL is the one to use. We will not discuss NCCL here. See this page for more details.
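The pattern follows the standard PyTorch distributed tutorial; a sketch (with run standing for whatever worker function you supply, and the address and port being arbitrary local choices) is:

import os
import torch.distributed as dist
import torch.multiprocessing as mp

def init_process(rank, size, fn, backend='gloo'):
    # each child joins the process group, then hands control to the run function
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)

def launch(run, size=4):
    processes = []
    for rank in range(size):
        p = mp.Process(target=init_process, args=(rank, size, run))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()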

If you are running on a multi-GPU system you must use a different method to launch the processes: a python program rather than a Jupyter notebook. The program looks like

16

We will call this the mp.spawn(…) version of the launch program.  It is run from the command line and not Jupyter.
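A sketch of that style of launcher (again with an arbitrary local address and port, and run standing for the worker function) is:

import os
import torch.distributed as dist
import torch.multiprocessing as mp

def setup(rank, size, backend='gloo'):
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=rank, world_size=size)

def cleanup():
    dist.destroy_process_group()

def run(rank, size):
    setup(rank, size)     # join the process group first
    # ... training work goes here ...
    cleanup()

if __name__ == '__main__':
    size = 4
    mp.spawn(run, args=(size,), nprocs=size, join=True)   # calls run(rank, size) in each process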

In this case the run(rank, size) function must call setup(rank, size) first and then clean up later. The GitHub site has three programs: distributed_learn_multicore-final.ipynb, which illustrates the training of our neural net using the first method described above, distributed_lean_gpu-final.py, which uses a server with 8 GPUs to do the training, and distributed_local.ipynb, which illustrates the basic communication primitives. We discuss these basic communication methods next and then give some performance results for the distributed learning cases.

We illustrate the communication system with three different “run” function examples.   The first one illustrates basic MPI-style message passing.  This one simply passes a tensor from process 0 to 1 and then from 1 to 2 and then to 3.  Each time it adds 1 to the tensor.

17
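A sketch of a run function that does this kind of hand-off (assuming at least two processes) is:

import torch
import torch.distributed as dist

def run(rank, size):
    # pass a tensor from rank 0 to 1, 1 to 2, and so on, adding 1 at every hop
    t = torch.zeros(1)
    if rank == 0:
        t += 1.0
        dist.send(tensor=t, dst=1)
    else:
        dist.recv(tensor=t, src=rank - 1)
        t += 1.0
        if rank + 1 < size:
            dist.send(tensor=t, dst=rank + 1)
    print('rank', rank, 'has tensor', t)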

The output from each process is routed to the master and is what you would expect (except that the order in which the lines are received is random). Running this with P=3 child processes gives the result below.

18

PyTorch also supports many standard collective communication routines. For example, to compute the sum of elements from each process we use a reduction operator. The input is a Torch data structure of identical shape on each process. The reduction overwrites the input tensor with the sum of the corresponding elements of all processes.

19a
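A sketch of such a reduction, where each process contributes the value rank + 1, is:

import torch
import torch.distributed as dist

def run(rank, size):
    t = torch.tensor([float(rank + 1)])          # process 0 holds 1, process 1 holds 2, ...
    dist.all_reduce(t, op=dist.ReduceOp.SUM)     # every process ends up with the sum
    print('rank', rank, 'sum =', t)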

In this case process 0 has a scalar tensor with value 1, process 1 has a tensor with value 2 and process 2 has a tensor with value 3.   The output of the reduction is

19b

The notebook distributed-local.ipynb has complete code for these communication examples.

Parallelizing the Training of our Neural Net

In this section we will describe the results of using two different approaches to parallelize the training phase of our demo neural network. The first method uses a server with multiple GPUs. The second method uses multiple cores on a multicore server without GPUs.

The multi-GPU method

In this case we are using an AWS p2.8xlarge server which has 8 NVIDIA K80 GPUs and 16 cores with 488 GBytes of memory.  Because we are using GPUs we have to use the mp.spawn( … ) version of the launch program described above. The network is the same one we described above.   We created a dataset of size 80000.   The variable M is the number of processes and BS is the batch size.  A function batchtodev(rank, device) delivers a list of 80000/(M*BS) unique batches of data to the process identified by the local variable rank.   The run function now takes the form

20

The initialization function setup(…) was shown with the mp.spawn(…) code described above. The way in which the GPUs are allocated to threads is a bit confusing. It is designed so that if you have D GPUs and M threads, each thread is assigned a unique list of D/M GPUs. We only use the first GPU in each thread’s list.

The training loop in this version now iterates through each of the elements in the batch list for each epoch. There is a function sync_init_weights( ), a conditional controlling a periodic call to sync_gradients( ), and every 200 epochs we call a special function average_model( ). Sync_init_weights() uses a simple broadcast to copy the initial random-state network to the other threads. While it is possible to allow each thread to create its own converged model, they will not converge to anything much because each thread has only a fraction of the full data set. We need a way to periodically tie the independently evolved models together. That is done every 200 epochs with the call to average_model( ), which uses a global reduce-sum operation. See distributed-lean-gpu-final.py for the details of these functions.
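The ideas behind these helpers can be sketched roughly as follows (a simplified outline, not the code from the file above):

import torch.distributed as dist

def sync_init_weights(model):
    # copy the initial random weights from rank 0 to every other process
    for param in model.parameters():
        dist.broadcast(param.data, src=0)

def sync_gradients(model, size):
    # average the gradients of the parallel mini-batches before the optimizer step
    for param in model.parameters():
        dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
        param.grad.data /= size

def average_model(model, size):
    # periodically average the model weights themselves across the processes
    for param in model.parameters():
        dist.all_reduce(param.data, op=dist.ReduceOp.SUM)
        param.data /= size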

Another approach to blending the results of multiple training threads is to average the gradients from each parallel set of mini-batches before the optimizer step. This is what the function sync_gradients( ) accomplishes. But doing this on every mini-batch step adds a great deal of thread synchronization overhead. Is it really necessary to do it every step? Doing it often does improve the rate of convergence, but what happens if we do it only periodically? In the table below, we run the training for our network with N=80000 and BS=1000 until the loss is below 0.05. This is done with the gradient averaging on every mini-batch step, then on every 2, 4 and 10 mini-batch steps, and finally “never”, to see what happens if we rely only on average_model().

frequency        1       2       4       10      never
time             330     183     136     99      88
loss             0.039   0.05    0.04    0.025   0.03
epochs x 1000    4       6       6       7       8
total time       1320    1098    816     693     704

This version was run on an Intel Core i7 with M = 4 threads using the no-GPU method (described in the next section). The results vary greatly from run to run because of the randomness in the processes, but it was clear that synchronizing on every mini-batch step produced excellent convergence, but at a high cost. It must also be noted that the penalty for the overhead depends on the size of the network (which equals the size of the gradient) and many other factors.

Running on 8 GPUs on one server.

Turning to the performance of our training code on the AWS server with 8 GPUs, we ran the same configuration (N=80000, BS=1000), but rather than compare convergence rates we ran it for 10000 epochs on 1 GPU, 2 GPUs, 4 GPUs and 8 GPUs. At 10000 epochs they had all converged, and using the same number of epochs means that the total amount of work was the same in each case. The results are below. If we compare the performance of 1 GPU to 8 GPUs we realize a speed-up of 6.74.

gpus             1       2       4       8
elapsed time     934     489     249     139
cuda speed-up    1       1.91    3.75    6.74

Our test case is extremely small.   Much greater performance gains are possible with greater numbers of GPUs when larger problems are used.

The no-GPU method

This version uses multicore servers without GPUs. We ran this on an AWS c5n4xl server with 32 GB of memory and 8 physical CPUs (16 virtual CPUs). The network is the same one we described above. We created a dataset of size 80000. The variable M is the number of processes and BS is the batch size. As before, a function batchtodev(rank, ‘cpu’) delivers a list of 80000/(M*BS) unique batches of data to the process identified by the local variable rank. The run function now takes the form

22

In this case we had a surprise. Using PyTorch multiprocessing and increasing the number of process threads did not increase performance. In fact, it got worse! The table below is for 8, 4, 2 and 1 processes, with the best performance for 1 process. We also give the results for the Intel Core i7 running Ubuntu in a Docker container.

processes            8       4       2       1
cpu elapsed time     4801    3003    1748    1403
cpu speed-up         1       1.7     1.8     2
cuda/cpu speedup     5.1     6.1     7       10.1
corei7-docker        1282    1274    1064    1151

There is an interesting explanation for this behavior.  The fact is that for PyTorch 1.3.0 and 1.3.1, the basic neural net functions (model evaluation, backward differentiation, optimization stepping) are all optimized to use all available cores.   So in the case of one process thread, all 16 cores are dividing the work.   In the case of 2 processes, there are 2 groups of 8 cores each working and the multithreading is pure overhead.  In all cases all cores are busy and the number of process threads only adds overhead.  This was verified by running the Linux “top” command.

Of course, if you run this on a cluster of multicore servers using the NCCL or MPI runtime, one can expect different behavior. There have been numerous studies of parallel training performance. One we like is Parallel and Distributed Deep Learning, by Vishakh Hegde and Sheema Usmani.

Model Parallelism

In the examples above we have used multiple copies of a neural net running on separate threads and then merged the results together. As we have shown, this works with multiple GPUs on a single server, but without a GPU we did not get performance that beat the native PyTorch built-in multi-core parallelism.

There is another approach to parallelizing the training and model evaluation computation that is, in some sense, orthogonal to the method we described above. This can be very valuable if the model does not fit in a single GPU memory. One can slice the network so that the first few layers are in one GPU and the rest are in a second GPU. Then you can feed inputs into the first GPU and transfer the data from its last layer to the first layer in the second GPU. Of course, doing this by itself does not buy you any parallelism: one GPU is idle while the other is working. However, one can “pipeline” batches through the model. In this way batch0 goes to GPU0 and then to GPU1. As soon as batch0 clears GPU0, push batch1 to GPU0 while GPU1 is working on batch0. You need to accumulate the set of batches at the end and then concatenate them together as described in this PyTorch best practices tutorial.
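A naive sketch of this kind of model splitting across two GPUs (with illustrative layer sizes of our choosing) looks like:

import torch

class TwoGPUNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # first half of the layers on the first GPU, second half on the second GPU
        self.part1 = torch.nn.Sequential(
            torch.nn.Linear(1, 80), torch.nn.ReLU(),
            torch.nn.Linear(80, 80), torch.nn.ReLU()).to('cuda:0')
        self.part2 = torch.nn.Sequential(
            torch.nn.Linear(80, 40), torch.nn.ReLU(),
            torch.nn.Linear(40, 1)).to('cuda:1')

    def forward(self, x):
        x = self.part1(x.to('cuda:0'))
        return self.part2(x.to('cuda:1'))   # hand the activations from GPU 0 to GPU 1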

However, one can do even more with pipelining as part of parallelization. Because the back-propagation algorithm computes the gradients of the error from the last layer to the first layer, it is possible to push the gradients back through our pipeline of GPUs. A group at Microsoft Research, CMU and Stanford has pushed this idea and produced a very nice paper at SOSP19. The figure below, from that paper, illustrates the idea. You distribute the layers among the GPUs (using data parallelism for very dense layers) and pipeline batches as described above. Then you also let the back propagation push back up the pipeline. After a startup phase, there is plenty of work to fill the available GPU slots. However, getting this to work requires a sophisticated resource scheduler, and care must be taken to make sure the model is consistent and converges correctly. Read their paper. It is very nice.

24

Figure from https://www.microsoft.com/en-us/research/blog/pipedream-a-more-effective-way-to-train-deep-neural-networks-using-pipeline-parallelism/

A “Chatbot” for Scientific Research: Part 2 – AI, Knowledge Graphs and BERT.

Abstract.

In 2018 I published a blog about building a cloud-resident “Research Assistant” (RA) chatbot that would be the companion of each scientist. The RA would be responsible for managing scientific data, notes and publication drafts. It could create intelligent summaries and search for important related scientific articles. That post demonstrated a simple prototype that provided spoken English input and simple dialog responses to search for available, relevant research. But it did not address the important issues of data management and textual analysis required to make the RA real. In a short, invited “vision talk” I gave at the e-Science 2019 conference I tried to address the technology that, in 2030, we would need to solve these problems.   This article does not describe an implementation. Rather it is a survey of the missing pieces I alluded to in the talk in terms of the current, related literature.

Introduction

2017 was the year of the smart on-line bot and the smart speaker. These are cloud-based services that use natural language interfaces for both input and output to query knowledge graphs and search the web. The smart speakers, equipped with microphones, listen for trigger phrases like “Hello Siri” or “Hello Google” or “Alexa”, record a query in English, extract the intent and reply within a second. They can deliver weather reports, do web searches, keep your shopping list and keep track of your online shopping. The impact of this bot technology will hit scientific research when the AI software improves to the point that every scientist, graduate student and corporate executive has a personal cloud-based research assistant. Raj Reddy calls these Cognition Amplifiers and Guardian Angels. We call it a research assistant.

Resembling a smart speaker or desktop/phone app, the research assistant is responsible for the following tasks:

  1. Cataloging research data, papers and articles associated with its owner’s projects.
  2. The assistant will monitor the research literature looking for papers exploring the same concepts seen in the owner’s work.
  3. Automatically sifting through open source archives like GitHub that may be of potential use in current projects.
  4. Understanding the mathematical analysis in the notes generated by the scientist and using that understanding to check proofs and automatically propose simulators and experiment designs capable of testing hypotheses implied by the research.

Understanding the implications of these 4 properties will be the central theme of this post.

In 2017 we published a short article about how we could build a chatbot for research. In that paper we presented a short overview of chatbot software circa 2017 and demonstrated a very simple toy meta-search engine that took spoken commands about research interests and responded with matching documents from Bing, Wikipedia and ArXiv. To illustrate this, consider the sentence “Research papers by Michael Eichmair about the gannon-lee singularity are of interest.” This required our Bot, called the Research Assistant, to understand that the main topic of the sentence was the gannon-lee singularity (an obscure reference to a paper from the 1970s that I happen to know about) and the fact that we want related papers by Michael Eichmair. The result obtained by our Bot is shown in Figure 1.

ra-figure

Figure 1.   The results (shortened) from our original 2017 Science Bot to the Eichmair question.

In 2019 the same results can now be obtained by directly inserting this sentence into Google or Bing. We suspect one reason for this is the use of vastly improved language models based on Transformers (which we will briefly describe below). Not only is our bot obsolete; we will argue in this article that it completely misses the boat on what is needed to make something truly useful. This report will not present any new research results. Instead it will try to outline the types of tasks required to make the research assistant capable of demonstrating the capabilities listed above. We will also try to give a survey of the best published work leading in these directions. (This report is an expansion of a paper that was originally written for an invited “vision” talk entitled “eScience 2050: a look Back” for the eScience 2019 conference held in San Diego, Sept. 2019.)

Knowledge Graphs

If we look at the first two items in the RA capabilities list above, we see that they go well beyond simple meta-search. These tasks imply that the research assistant will need to keep an organized research archive of data, notes and papers and have the ability to extract knowledge from the literature. We will assume that the individual items the RA manages will be cloud-resident objects that are described by a searchable, heterogeneous database of metadata. One such database structure that can be used for this purpose is a Knowledge Graph (KG). KGs are graphs where the nodes are entities and the links between nodes are relations. Often these node-edge-node triples are represented using the Resource Description Framework (RDF), which consists of a subject, a relationship and an object. Each element of the triple has a unique identifier. The triple also has an identifier so that it can itself appear as a subject or object.

Having a KG that is based on scientific ontological terms and facts that can be augmented with the content of the individual scientist would be the proper foundation for our RA. To help explain this we need to take a diversion into the existing triple store KGs to see if there is one we can build upon.

There are several dozen implementations of RDF triple stores and many are open source. In addition, there are a number of commercial products available including

  • Ontotext, which produces GraphDB, a commercial RDF knowledge graph used by customers in publishing (BBC and Elsevier), pharmaceuticals (AstraZeneca) and libraries (Mellon-funded projects for the British Museum and the US National Gallery of Art).
  • Grakn Labs in the UK has a knowledge graph, Grakn, with special versions such as BioGrakn for life science apps.
  • Cambridge Semantics has a product called AnzoGraph DB which has numerous customers in the pharmaceutical domain.
  • And, of course, Oracle has a version of its database called “Spatial and Graph” that supports very large triple stores.

If you search for Knowledge Graph on the web or in Wikipedia you will learn that the KG is the one introduced by Google in 2012, and it is simply known as “Knowledge Graph”. In fact, it is very large (over 70 billion nodes) and is consulted in a large fraction of searches. Having the KG available means that a search can quickly surface many related items by looking at nearby nodes linked to the target of the search. This is illustrated in Figure 2 for the result of a search for “differential equation”, which displays an information panel to the right of the search results.

googlekd1

Figure 2. Google information panel that appears on the right side of the page. In this case the search was for “differential equation”. (This image is shortened as indicated by …).

Google’s Knowledge Graph is not as good for science topics as the example in Figure 2 suggests. In fact, it is extremely good with pop culture, but for science applications like our RA, Google’s KG often just takes information from Wikipedia. In its earliest form the Google KG was based on another KG known as Freebase. In 2014 Google began the process of shutting down Freebase and moving content to a KG associated with Wikipedia called Wikidata. However, the Freebase archive is still on-line and has some excellent science content.

Launched in 2012 with grants from the Allen Institute, Google and the Gordon and Betty Moore Foundation, Wikidata information is used in 58.4% of all English Wikipedia articles. Items in Wikidata each have an identifier (the letter Q and a number) and each item has a brief description and a list of alias names. (For example, the item for Earth (Q2) has alternative names: Blue Planet, Terra Mater, Terra, Planet Earth, Tellus, Sol III, Gaia, The world, Globe, The Blue Gem, and more.) Each item has a list of affiliated “statements”, which are the “object-relation-object” triples that are the heart of the KG. Relations are predicates and are identified with a P and a number. For example, Earth is an “instance of” (P31) “inner planet” (Q3504248). Figure 3 shows an illustration of the item “Hubble Telescope”. There are currently 68 million items in Wikidata and, like Wikipedia, it can be edited by anyone.

Having a KG is not very useful unless you have a way to search it. In the case of Wikidata (and other RDF KGs) the language for making queries is called SPARQL. Combined with Wikidata, SPARQL queries are a very powerful way to search the KG. To give a trivial example of what a SPARQL query looks like, let’s search for all the scientific instruments carried on the Hubble Telescope.

hubblefinal

Figure 3.   Wikidata object Q2513, Hubble Telescope. This is an edited version of the full entry which has dozens of property statements.

To write the query we need to know that Hubble has id wd:Q2513 and that the predicate “carries scientific instrument” is wdt:P1202. The query and results are shown below. To read the query, note that there are two unbound variables, ?Inst and ?InstLabel. The only significant part of the request is a match for tuples of the form (Hubble telescope, carries scientific instrument, ?Inst).

SELECT ?Inst ?InstLabel WHERE {
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
  wd:Q2513 wdt:P1202 ?Inst.
} LIMIT 100

The table below shows the output.

sparql

This example does not do justice to the power of the search capabilities.   A look at the example in the Wikidata Query Service will illustrate that point.

One of the more impressive KGs for science is the Springer Nature SciGraph, which has over 2 billion triples related to scientific topics. While it contains the full Springer content, it goes well beyond that to include items such as patents and grant awards. Zhang et al. [zhang] have demonstrated the use of knowledge graphs for recommendations in the NASA Science Knowledge Graph (SKG).

Building specialized KGs for science domains has been going on for a while. In 2009, the Hanalyzer (short for high-throughput analyzer) system used natural language processing to automatically extract a semantic network from all PubMed papers relevant to a specific scientist.

Where’s the Data?

This brings us to the question: is Wikidata a place to store experimental data sets? The usual approach to data set description is via Schema.org. However, recent work by one of the Wikidata founders, Denny Vrandecic, presented at the workshop Advanced Knowledge Technologies for Science in a FAIR World (AKTS) and entitled Describing datasets in Wikidata, described how this can be done when schema.org may not be sufficient. At that same workshop Daniel Garijo and Pedro Szekely described a way to extend Wikidata to support external collections in a presentation entitled WDPlus: Leveraging Wikidata to Link and Extend Tabular Data. We shall argue below that this is an important possible component of the research assistant.

The Semantic Scholar Literature Graph

There is a very different approach to the problem of storing information about research papers than Wikidata. The Allen Institute for Artificial Intelligence (AI2) has built Semantic Scholar, a graph of the scientific literature with a structure that is tightly focused on research papers, their authors and the concepts in the papers that link them together. More specifically, the Semantic Scholar Literature Graph, as described by Waleed Ammar et al., has the following node types:

  • Author – a person record
  • Paper – a paper has a title, venue, year, etc.
  • Entities – unique scientific concepts like “deep learning” or “natural language processing”.
  • Mentions – references to entities from text

The nodes are linked by edges including author-to-paper, paper citations, and mentions, which are references in the text to entities. Between mentions, edges link mentions that occur in the same sentence, and entity-entity edges link entities that are somehow related. Figure 4 illustrates the graph.

literature-graph

Figure 4.  An approximate subgraph of the Literature Graph for a paper in Semantic Scholar.

Many of the entity nodes are associated with items in Wikimedia.

Another related project from AI2 is the GrapAL [grapAL] query system for the knowledge graph. The query system can be accessed on-line. The types of queries that can be expressed are powerful, for example, finding the papers that mention certain pairs of entities, or all authors of papers that mention a certain entity. We shall return to this capability below.

Building the Research Assistant

If we consider the first of the two tasks on our requirements list for the RA’s functionality

  1. Cataloging research data, papers and articles associated with its owner’s projects

we see that this challenge may be well met by having the RA possess a copy of Wikidata together with the extensions described by Denny Vrandecic discussed above. If not that, then Garijo and Szekely’s WDPlus Wikidata extension may be a perfect solution.

Turning now to the second task:

  1. The assistant will monitor the research literature looking for papers exploring the same concepts seen in the owner’s work

we see that the nature of the challenge is very different, but progress has been made on this task. Xiaoyi et al. have shown it is possible to use a combination of neural networks and rule-based reasoning to identify semantic entities and even implicitly cited datasets in earth science papers.

Given a set of research notes, a grant proposal or a draft research paper, we need a way to identify the concepts in the user’s documents and then insert them into a version of the Semantic Scholar Literature Graph. To do this we can use a language model to scan the documents looking for interesting literature terms. The state of the art for language parsing has made great strides over the last few years and we will look at one model called Bidirectional Encoder Representations from Transformers (BERT).

Using BERT to extract knowledge from documents

Most older language analysis models were built from deep LSTM networks (which we discussed in our book on cloud computing). These models were unidirectional in that they processed text from right to left or left to right in order to train the network. Devlin et al. published the BERT paper in 2018 and revised it in 2019. BERT is unique in several respects. First, it is designed so that it can be “pre-trained” on plain text to build a special encoder. Then, for various language analysis tasks, such as question answering, paraphrasing and language inference, an additional layer is added so that the encoder plus the new layer can be tuned to address the task. (This is a type of transfer learning we have described before.) What makes this possible is the way BERT uses an encoder that captures a whole sentence at a time. The training is done by masking out a small number of words (15%) in the input and then using a loss function that measures how well the network predicts the correct masked word. Figure 5 below illustrates this. The core of the encoder is based on transformers, which have been shown to be powerful ways to capture context. (See the harvardNLP Annotated Transformer for a detailed walk-through of building transformers.)

bert-diagram

Figure 5. BERT training of the encoder based on masking random words for the loss function. This figure is taken from “BERT – State of the Art Language Model for NLP” by Rani Horev in Lyrn.

Another good blog explaining BERT and the transformers is by Ranko Mosic. The AllenNLP group has an excellent demo using the masked language model and this is illustrated in Figure 6. This shows the result of using a sentence “Multicore processors allow multiple threads of execution to run in parallel on the various cores.” with processors, execution and parallel masked. You can note that it did a rather good job (code is a reasonable substitute for execution here.)

bert-example2

Figure 6. The AI2 masked language model demo using the sentence “Multicore processors allow multiple threads of execution to run in parallel on the various cores.” with processors, execution and parallel masked.

Another application of a BERT-based language model is semantic role labeling. This is good for analyzing sentences and identifying a subject, verb and object. For our purposes this is important. We would like to extract from the scientist’s documents key scientific terms and the implied relations between them. With these we can query the literature graph for matches, or we can use them to extend the scientist’s private version of the literature graph or knowledge graph.

For example, a researcher working on papers related to programming of parallel computers may have key phrases that include multicore programming, data parallel, multithreaded programs, synchronization, map reduce, BSP, etc. The types of triples we may discover by mining the documents include

(map reduce, used in, data parallel programming)

(multicore processors, speedup, multithreaded execution)

(synchronization problems, encountered in, multithreaded programs)

(locking mechanisms, solve, synchronization problems)

(bulk synchronous parallel, solve, synchronization problems)

(BSP, alias, bulk synchronous parallel)

(map reduce, type of, parallel algorithm)

The first and third elements of the triples correspond to entities that are associated with mentions in the document. The verbs are potential labels for entity-entity edges in the graph.

To demonstrate the capability of the AI2 implementation of semantic role labeling, we downloaded the language model and used it in a Jupyter notebook. We tested it with a sentence related to general relativity:

A gravitational singularity is a place where gravity has caused an event horizon which created a black hole in space, but a naked singularity is a gravitational singularity that does not have an event horizon and, therefore naked singularities do not create a black hole.

Running this through the “predictor” function of the bert-base-srl-2019.06.17 model gives the output in Figure 7.

bart-output

Figure 7. Output of “predictor” function of the bert-base-srl-2019.06.17 AI2 model. The code to download the model for python is given in the document associated with the demo. The function pullTripples is a post processor which removes annotations not essential for this illustration and formats the output.

As can be seen in the figure, the model identified the key noun phrases (naked singularity, gravitational singularity, event horizon, black hole and gravity) as well as a sequence of reasonable triples. It should be possible to use the GrapAL query system to find associated entities in the literature graph. Indeed, a simple search in Semantic Scholar for these terms will find dozens of related papers. From these results, one can build a personal literature graph for each of the owner’s documents with links to the discovered material.
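For reference, driving the AllenNLP predictor from Python looks roughly like the following (the local archive path for the bert-base-srl-2019.06.17 model is an assumption; the demo’s documentation gives the download details):

from allennlp.predictors.predictor import Predictor

# load the semantic role labeling model from a downloaded archive
predictor = Predictor.from_path("bert-base-srl-2019.06.17.tar.gz")

sentence = ("A gravitational singularity is a place where gravity has caused "
            "an event horizon which created a black hole in space.")
result = predictor.predict(sentence=sentence)
for frame in result["verbs"]:       # one frame per verb the model found
    print(frame["description"])     # arguments tagged ARG0, V, ARG1, ...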

The Really Hard Problems

The final two requirements for the research assistant pose some really hard problems.

  1. Automatically sifting through open source archives like GitHub that may be of potential use in current projects.
  2. Understanding the mathematical analysis in the notes generated by the scientist and using that understanding to check proofs and automatically propose simulators and experiment designs capable of testing hypotheses implied by the research.

GitHub already has a very good search interface that can be used to discover resources related to specific general topics. For example, searching for “multicore programming” retrieves an excellent collection of archives that address the relevant topics of parallelism and synchronization.

The GitHub machine learning group (yes, every organization these days has an ML or AI group) has done some nice work on using LSTM networks to translate English language text such as “Read csv file into Pandas dataframe” into the corresponding Python code. This is done by building a good vector space embedding of the English statement and a trained LSTM that creates English summaries of code fragments. By associating the summaries with the original English question, they can map the question to the associated code. The GitHub team is also collaborating with Microsoft Research Cambridge, where a team is working on Program Understanding. While all of this is still very early work, it appears to be very promising.

Automatically “understanding” mathematical analysis

The fourth property in our RA list reaches way beyond current capabilities. The work from the GitHub team described above can make translating English program requirements into code very similar to natural language translation, but anything involving “understanding” is, for now, out of reach. However, there have been some interesting early attempts to bridge language and models of scientific theory. Eureqa (now part of DataRobot) does automatic AI-based time series analysis, and DataRobot is also a tool for automatically building ML models given only data. Michael Schmidt and Hod Lipson consider the problem of deriving theoretical mathematical models directly from experimental data (see Figure 8).

schmidt-lipson

Figure 8. From Michael Schmidt and Hod Lipson, Distilling Free-Form Natural Laws from Experimental Data. (SCIENCE VOL 324 3 APRIL 2009)

Automatic theorem checking research has been going on for years, but these systems require formal statements of the theorem to be checked and are usually designed for human-machine collaboration. If it were possible to create a system that could take a journal paper and automatically extract a formal expression of the mathematical content in a form that a checker could input, then we would be getting close to the goal.

The most impressive work on using the advanced deep learning technology to “comprehend” scientific text comes again from the AI2 team. Their system Aristo is “an intelligent system that reads, learns, and reasons about science”. Aristo recently got an “A” on the N.Y. Regents 8th grade science exams. This exam consists of multiple-choice questions such as the following:

Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Moon, (B) Earth, (C) Mercury, (D) Sun.

Aristo works by combining a number of component solvers that it brings to bear on the problem. Information retrieval and statistics form an important layer. Pointwise mutual information is used to measure the likelihood of each question-answer pair against information retrieved from the text corpus. A quantitative reasoning solver is used to address questions that involve basic quantitative statements. As shown in Figure 9, a tuple inference solver builds graphs that connect tuples from a scientific knowledge base to the terms in the question and the answers. Dalvi, Tandon and Clark have constructed an excellent knowledge base of science-related triples called the Aristo Tuple KB.

The graph with the most promising connection to one of the answers is the winner.

aristo

Figure 9. From Clark et al., From ‘F’ to ‘A’ on the N.Y. Regents Science Exams: An Overview of the Aristo Project, https://allenai.org/content/docs/Aristo_Milestone.pdf. Aristo Tuple Inference Solver. Key terms in the question and answer candidates are linked to triples from the knowledge base.

While the Aristo work is a real milestone, it has far to go. In particular, it does not yet have the ability to bring the technical diagrams and equations in the text into its deductive (or abductive) analysis. I expect AI2 is working on this now. The bigger challenge, being able to classify documents by the content of the mathematical arguments used, is very hard when reasoning is spread over many pages. There is some interesting automatic document summarization work, but it is not up to this challenge.

Final Thoughts

This discussion is far too incomplete to warrant a “conclusions” section. The ability of the research assistant to take an idea and run with it is central to what we need. The idea may be a theory expressed in a draft technical paper or research proposal. Finding all the related publications is certainly a start, but first the RA must be able to abstract the important original ideas and not just the keywords and phrases. It may be that the key idea is more of a metaphor for a larger truth that manifests itself in research in various disciplines. But this is probably more than any RA can grasp.

There is going to be amazing progress over the next 30 years.   This is obvious when one looks at the state of computing 30 years ago.   Much of what we have today was then only a dream.

This post contains a look at many active research projects, and I am sure I am missing some very important ones.   Please contact me if I have mischaracterized any of them or if I have missed something really important.

References

Most of the citations to literature in this blog are linked in-line. Here are two for which I found it easier to provide explicit references.

[grapAL] Christine Betts, Joanna Power, Waleed Ammar, GrapAL: Connecting the Dots in Scientific Literature, arXiv:1902.05170v2 [cs.DB] 19 May 2019

[zhang] Jia Zhang, Maryam Pourreza, Rahul Ramachandran, Tsengdar J. Lee, Patrick Gatlin, Manil Maskey, and Amanda Marie Weigel, “Facilitating Data-Centric Recommendation in Knowledge Graph”, in Proceedings of The 4th IEEE International Conference on Collaboration and Internet Computing (CIC), Philadelphia, PA, USA, Oct. 18-20, 2018, pp. 207-216.

A Very Gentle Introduction to Probabilistic Programming Languages

Abstract.   Probabilistic programming languages (PPLs) allow us to model the observed behavior of probabilistic systems in terms of their underlying latent variables. Using these models, the PPL provides tools to make inferences concerning the latent variables that give rise to specific observed behaviors. In this short report, we look at two such programming languages: Gen, a language based on Julia from a team at MIT, and PyProb, which is based on Python and Torch from the Probabilistic Programming Group at the University of Oxford. These are not the only PPLs, nor are they the first, but they illustrate the concepts nicely and they are easy to describe. To fully understand the concepts behind these systems requires a deep mathematical exploration of Bayesian statistics and we won’t go there in this report. We will use a bit of math, but the beauty of these languages is that you can get results with a light overview of the concepts.

Introduction

In science we build theories that tell us how nature works.   We then construct experiments that allow us to test our theories.   Often the information we want to learn from the experiments is not directly observable from the results and we must infer it from what we measure.    For example, consider the problem of inferring the masses of subatomic particles based on the results of collider experiments,   or inferring the distribution of dark matter from the gravitational lensing effects on nearby galaxies, or finding share values that optimize financial portfolios subject to market risks, or unravelling complex models of gene expression that manifest as disease.

Often our theoretical models lead us to build simulation systems which generate values we can compare to the experimental observations. The simulation systems are often programs that draw possible values for unknowns, call them x, from random number generators, and these simulations use those values to generate outcomes y. In other words, given values for x, our simulation is a “generative” function F which produces values y = F(x). If our experiments give us values y’, we can think of the inference task as solving the inverse problem x = F⁻¹(y’), i.e. finding values for the hidden variables x that give rise to the observed outcomes y’. The more mathematical way to say this is that our simulation gives us a probability distribution of values of y given the distribution associated with the random draws for x, which we write as p(y | x). What we are interested in is the “posterior” probability p(x | y’), which is the distribution of x given the evidence y’. In other words, we want samples for values of x that generate values close to our experimental values y’. These probabilities are related by Bayes’ theorem as

p(x | y’) = p(y’ | x) p(x) / p(y’)

Without going into more of the probability theory associated with this equation, suffice it to say that the right-hand side of this equation can be very difficult to compute when F is associated with a simulation code. To get a feel for how we can approach this problem, consider the function F defined by our program as a generative process: each time we run the program it makes a series of decisions based on the random x values it draws and then generates a value for y. What we will do is methodically trace the program, logging the values of x and the resulting ys. To get a good feel for the behavior of the program, we will do this a million times.

Begin by labeling each point in the program  where a random value is drawn. Suppose we now trace the flow of the program so that each time a new random value is drawn we record the program point and the value drawn. As shown in Figure 1, we define a trace of the program to be the sequence [(a1, x1), (a2, x2), …(an, xn), y] of program address points and random values we encounter along the way.

program-trace

Figure 1. Illustration of tracing random number draws from a simulation program. A trace is composed of a list of address, value tuples in the order they are encountered. ( If there are loops in the program we add an instance count to the tuple.)

If we could trace all the paths through the program and compute the probabilities of their traversal, we could begin to approximate the joint distribution p(x,y) = p(y|x)*p(x), but given that the x’s are drawn from continuous distributions this may be computationally infeasible. If we want to find those traces that lead to values of y near to y’, we need to use search algorithms that allow us to modify the x’s to construct the right traces. We will say a bit more about these algorithms later, but this is enough to introduce some of the programming language ideas.

To illustrate our two probabilistic programming languages, we will use an example from the book “Bayesian Methods for Hackers” by Cameron Davidson-Pilon. (There are some excellent on-line resources for the book. These include Jupyter notebooks for each chapter that have been done with two other PPLs: PyMC3 and Tensorflow Probability.) The example comes from chapter 1. It concerns the logs of text messages from a user. More specifically, it is the number of text messages sent per day over a period of 74 days. Figure 2 shows a bar graph of the daily message traffic. Looking at the data, Davidson-Pilon made a conjecture that the traffic changes in some way at some point so that the second half of the time period has a qualitative difference from the first half. Data like this is usually Poisson distributed. If so, there is an average event rate λ such that the probability of k events in a single time slot is given by

p(k) = λ^k e^(−λ) / k!

If there really are two separate distributions, then let us say the event rate is lambda1 for the first half and lambda2 for the second half, with a day tau such that for all days before that date the first rate applies and the second rate applies after that. (This is all very well explained in the Davidson-Pilon book and you should look at the solution there that uses the PPL PyMC3. The solutions here are modeled on that one.)

textingdata

              Figure 2. From Chapter 1 of “Bayesian Methods for Hackers” by Cameron Davidson-Pilon.
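To make the generative story concrete, here is a minimal plain-Python (numpy) sketch of the model just described: two candidate rates drawn from an exponential prior, a switch day drawn uniformly, and a Poisson count for each day. This is only an illustration; it is not the Gen or PyProb code shown later, and the prior mean of 20 is a stand-in for the observed average.

import numpy as np

rng = np.random.default_rng(0)

def generate_counts(n_days=74, alpha=20.0):
    # Draw the two candidate rates and the switch day, then a Poisson count per day.
    lambda1 = rng.exponential(alpha)            # rate before the switch
    lambda2 = rng.exponential(alpha)            # rate after the switch
    tau = rng.uniform(0, n_days)                # day at which the rate changes
    days = np.arange(1, n_days + 1)
    rates = np.where(days <= tau, lambda1, lambda2)
    ys = rng.poisson(rates)                     # one simulated message count per day
    return lambda1, lambda2, tau, ys

lambda1, lambda2, tau, ys = generate_counts()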

Gen

Gen is a language that is built on top of Julia by Marco Cusumano-Towner, Feras A. Saad, Alexander K. Lew and Vikash K. Mansinghka at MIT and described in their recent POPL paper [1]. In addition, they have a complete on-line resource where you can download the package and read tutorials.

We gave a brief introduction to Julia in a previous article, but it should not be hard to understand the following even if you have never used Julia.   To cast this computation into Gen we need to build a model that captures the discussion above.   It is shown below; we call it myModel.

mymodel

The first thing you notice about this code is the special annotations @gen and @trace.   These tell the Gen system that this is a generative model and that it will be compiled so that we can gather the execution traces that we discussed above.   We explicitly identify the random variables we want traced with the @trace annotation.   The argument to the function is a vector xs of time intervals from 1 to 74.   We create it when we read the data from Figure 2 from a csv file (shown in detail in the full Jupyter notebook for this example). Specifically, xs = [1.0, 2.0, 3.0 …, 74.0], and we set a vector ys so that ys[i] is the number of text messages on day i.

If our model is driven by a Poisson process to generate the y values, then the math says we should assume that the time interval between events is exponentially distributed. Gen does not have an exponential distribution function, but it does have a Gamma distribution, and gamma(1, alpha) = exponential(1.0/alpha). The statement

lambda1 = @trace(gamma(1, alpha), :lambda1)

tells Gen to pull lambda1 values from the exponential with mean alpha, and we have initialized alpha to be the mean of the ys values (which we had previously computed to be 19.74…). Finally, note that we have used Julia's symbol notation :variable-name to label this draw :lambda1.   This is effectively the address in the code of the random number draw that we will use to build traces.
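The gamma/exponential identity is easy to check numerically with numpy (a quick sanity check, not part of the Gen notebook): a gamma with shape 1 and scale alpha is exactly an exponential with mean alpha.

import numpy as np

rng = np.random.default_rng(0)
alpha = 19.74   # roughly the mean of the observed daily counts

g = rng.gamma(shape=1.0, scale=alpha, size=1_000_000)   # gamma(1, alpha)
e = rng.exponential(scale=alpha, size=1_000_000)        # exponential with mean alpha

print(g.mean(), e.mean())   # both close to 19.74
print(g.std(), e.std())     # both close to 19.74 (for an exponential, std == mean)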

We draw tau from a uniform distribution (and trace and label it), and then for each x[i] <= tau we set the variable lambda to lambda1, and for each x[i] > tau we set lambda to lambda2.   We use that value of lambda to draw a variable from the Poisson distribution and label that draw with the string "y-i".

We can now generate full traces of our model using the Gen function simulate() and pull values from the traces with the get_choices() function as shown below.

generate-trace

The values for the random variable draws are from our unconstrained model, i.e.   they reflect the joint probability p(x, y) and not the posterior probability p(x | y') that we seek. To reach that goal we need to run our model so that we can constrain the y values to y' and search for those traces that lead the model in that direction.   For that we will use a variation of a Markov Chain Monte Carlo (MCMC) method called Metropolis-Hastings (MH). There is a great deal of on-line literature about MH so we won't go into it here, but the basic idea is simple. We start with a trace and then make some random modifications to the variable draws. If those modifications improve the result, we keep them; if not, we reject them and try again.   This is a great oversimplification, but Gen and the other PPLs provide library functions that allow us to easily use MH (and other methods).   The code below shows how we can invoke this to make inferences.

inference_prog

The inference program creates a map from the labels for the y values to the actual constraints from our data.   It then generates an initial trace, iteratively applies the MH algorithm to improve it, and finally returns the choices for our three variables from the final trace. Running the algorithm for a large number of iterations yields the result below.

inference_result

This result is just one sample from the posterior probabilities.   If we repeat this 100 times we can get a good look at the distribution of values.   Histograms of these values are shown below in Figure 3.

historgrams

Figure 3.   Histograms of the tau and lambda values.  While difficult to read, the values are clustered near 44, 18, 24 respectively.

If we compare these results to the Davidson-Pilon book results, which used the PyMC3 (and Tensorflow Probability) PPLs, we see they are almost identical with the exception of the values of tau near 70 and 5. We expect these extreme values represent traces where the original hypothesis of two separate rates was not well supported.
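Before moving on, it may help to see the Metropolis-Hastings idea stripped of the Gen machinery. Below is a plain-Python (numpy) sketch of a random-walk MH sampler for the two-rate model. It is only a conceptual illustration, not the Gen library code; the proposal step sizes and the handling of the priors are assumptions made for this sketch.

import numpy as np

def log_joint(lambda1, lambda2, tau, ys, alpha):
    # Log of p(x, y): Poisson likelihood (dropping the constant log(y!) terms, which
    # cancel in the MH ratio) plus exponential(mean alpha) priors on the two rates.
    if lambda1 <= 0 or lambda2 <= 0 or not (0 <= tau <= len(ys)):
        return -np.inf
    days = np.arange(1, len(ys) + 1)
    lam = np.where(days <= tau, lambda1, lambda2)
    log_like = np.sum(ys * np.log(lam) - lam)
    log_prior = -lambda1 / alpha - lambda2 / alpha
    return log_like + log_prior

def metropolis_hastings(ys, n_iter=50000, seed=0):
    rng = np.random.default_rng(seed)
    alpha = ys.mean()
    state = (alpha, alpha, len(ys) / 2.0)          # initial (lambda1, lambda2, tau)
    current = log_joint(*state, ys, alpha)
    for _ in range(n_iter):
        proposal = (state[0] + rng.normal(0, 1.0),
                    state[1] + rng.normal(0, 1.0),
                    state[2] + rng.normal(0, 2.0))
        candidate = log_joint(*proposal, ys, alpha)
        # Accept with probability min(1, p(proposal) / p(current)).
        if np.log(rng.uniform()) < candidate - current:
            state, current = proposal, candidate
    return state   # one (approximate) sample from the posterior

Calling metropolis_hastings(ys) on the 74-day data should land in the same neighborhood as Figure 3 (tau near 44 and rates near 18 and 24), though any single run is just one posterior sample.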

There is a great deal about Gen we have not covered here including combinators which allow us to compose generative function models together.   In addition, we have not used one of the important inference mechanisms called importance sampling.   We shall return to that topic later.

PyProb

Tuan Anh Le, Atılım Güneş Baydin and Frank Wood first published an article about PyProb in 2017 [3], and another very important paper, "Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale" [4], was released in 2019; we will describe it in greater detail later.   PyProb is built on top of the deep learning toolkit PyTorch, which was developed and released by Facebook research.

Many concepts of PyProb are very similar to Gen, but PyProb is Python based so it looks a bit different. Our generative model in this case is an instance of a Python class as shown below. The main requirement of the class is that it subclass Model and have a method called forward() that describes how to generate our traces.   Instead of the trace annotation used in Gen, we use the PyProb sample and observe functions.   The random variables in PyProb are all Torch tensors, so we need to apply the method numpy() to extract values. The distributions Normal, Exponential and Uniform are all imported from PyProb. Other than that, our generator looks identical to the Gen example.

pyprob-model
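For readers who cannot make out the screenshot, the following is a rough reconstruction of what such a PyProb model class looks like, based on the description above. The distribution parameterizations (for example, whether Exponential takes a rate or a mean) and the keyword names are assumptions and may differ from the notebook and from the current PyProb release.

import pyprob
from pyprob import Model
from pyprob.distributions import Exponential, Uniform, Poisson

class TextingModel(Model):
    def __init__(self, xs, alpha):
        super().__init__(name='Two-rate texting model')
        self.xs = xs          # the days 1..74
        self.alpha = alpha    # mean of the observed daily counts

    def forward(self):
        # Draw the two candidate rates and the switch day.
        mu1 = pyprob.sample(Exponential(1.0 / self.alpha), name='mu1')
        mu2 = pyprob.sample(Exponential(1.0 / self.alpha), name='mu2')
        tau = pyprob.sample(Uniform(0, len(self.xs)), name='tau')
        for i, x in enumerate(self.xs):
            rate = mu1 if x <= tau else mu2
            # Each day's count is an observed variable; at inference time the
            # actual data values are supplied under these names.
            pyprob.observe(Poisson(rate), name='y-{}'.format(i))
        return mu1, mu2, tau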

Also note that we have used the names mu1 and mu2 instead of lambda1 and lambda2 (for no good reason). Running the MH algorithm on this model is almost identical to doing it in Gen.

pyprob-infer

Again, this is just a sample from the posterior.   You will notice that the posterior result function also tells us what percent of the traces were accepted by the MH algorithm.   PyProb has its own histogram methods and the results are shown in Figure 4 below.  The legend in the figure is difficult to read; it shows that the tau values cluster near 44, with a few traces between 5 and 10.   The mu1 values are near 17 and the mu2 values are near 23.   In other words, this agrees with our Gen results and the PyMC3 results in the Davidson-Pilon book.

pyprob-histograms

Figure 4. Histogram of tau, mu1 and mu2 values.

Building a PyProb Inference LSTM network.

There are several additional features of PyProb that are worth describing. While several of these are also part of Gen, they seem to be better developed in PyProb. More specifically, PyProb has been designed so that our generative model can be derived from an existing scientific simulation code, and it has an additional inference method, called Inference Compilation, in which a deep recurrent neural network is constructed and trained so that it can give us a very good approximation of our posterior distribution.   In fact the neural network is a Long Short Term Memory (LSTM) network that is trained using traces from our model or simulation code.   The training, which can take a long time, produces a "distribution" q(x | y) that approximates our desired p(x | y). More of the details are given in the paper "Inference Compilation and Universal Probabilistic Programming" by Le, Baydin and Wood [3]. Once trained, as sketched in Figure 5, when the network is fed our target constraints y' and the trace addresses, it will generate the sequence of distribution components needed to form q(x | y = y').

rnn

Figure 5. Recurrent NNet compiled and trained from model. (see [3, 4])

Building and training the network is almost automatic. We had one problem: the compiler does not yet support the exponential distribution, so we replaced it with a normal distribution.   Creating and training the RNN took one function call, as shown below.

trainnetwork

Once trained (which took a long time), using the network was also easy. In this case we use the importance sampling algorithm, which is described in reference [3] and elsewhere.

usetrained

Figure 6 illustrates the histograms of values drawn from the posterior.

trainedhisto

Figure 6.   Using the trained network with our data. As can be seen, the variance of the results is very small compared to the MH algorithm.
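To see why a well-trained proposal network pays off, here is a tiny plain-Python illustration of importance sampling on a toy one-dimensional problem. The "learned" proposal q(x | y') is faked with a hand-picked normal distribution; this is not PyProb code, just the arithmetic behind the method: draw from the proposal, weight each draw by prior times likelihood divided by the proposal density, and average.

import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Toy model: prior x ~ N(0, 1), likelihood y | x ~ N(x, 0.5), observed y' = 1.2.
y_obs = 1.2
q_mu, q_sigma = 0.9, 0.5        # stand-in for the learned proposal q(x | y')

xs = rng.normal(q_mu, q_sigma, size=100000)                  # draw from the proposal
log_w = (np.log(normal_pdf(xs, 0.0, 1.0))                    # prior p(x)
         + np.log(normal_pdf(y_obs, xs, 0.5))                # likelihood p(y' | x)
         - np.log(normal_pdf(xs, q_mu, q_sigma)))            # proposal q(x | y')
w = np.exp(log_w - log_w.max())
w = w / w.sum()

print(np.sum(w * xs))   # close to the analytic posterior mean of 0.96

The closer the proposal is to the true posterior, the lower the variance of the weights, which is exactly what the compiled LSTM network buys at the scale of an LHC simulation.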

The fact that the training and evaluation took so much longer with our trivial example is not important; what matters is the scalability of importance sampling using the compiled LSTM network. In the excellent paper "Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale" [4], Güneş Baydin, et al. describe the use of PyProb with a very large simulation code that models LHC experiments involving the decay of the tau lepton. They used 1024 nodes of the Cori supercomputer at LBNL to train and run their IC system. To do this required using PyProb's ability to link a PyProb model to a C++ program. Using the IC LSTM network, they were able to achieve a speed-up of over 200x over a baseline MCMC algorithm. The paper describes the details of the implementation and testing.

Conclusion

The goal of this paper was to introduce the basic ideas behind Probabilistic Programming Languages by way of two relatively new PPLs, Gen and PyProb.   The example we used was trivial, but it illustrated the concepts and showed how the basic ideas were expressed (in very similar terms) in both languages.   Both languages are relatively new and their implementations are not yet fully mature.   However, we are certain that probabilistic programming will become a standard tool of data science in the future. We have put the source Jupyter Notebooks for both examples on GitHub.   Follow the installation notes for Gen and PyProb on their respective webpages and these should work fine.

https://github.com/dbgannon/probablistic-programming

The traditional way computer science is taught involves the study of algorithms based on cold, hard logic which, when turned into software, runs in a deterministic path from input to output. The idea of running a program backward from output to input does not make sense: you can't "unsort" a list of numbers. The problem is even more complicated if our program is a scientific simulation or a data science task involving machine learning. In these cases, we learn to think about the results of a computation as representatives of internally generated probability distributions.

Some of the most interesting recent applications of AI to science have been the result of work on generative neural networks.   These systems are trained to mimic the statistical distribution of scientific data sets.   They can allow us to build "fake" human faces, or perfect but artificial spiral galaxies, or mimic the results of laboratory experiments. They can be extremely useful but, in the case of science, they tell us little about the underlying laws of nature.  PPLs allow us to begin to rescue the underlying science in the generative computation.

References

Some of these are linked in the text above.   Two can be found on arXiv and the Gen paper can be found in the ACM archive.

 

eScience 2050: A Look Back

Abstract— This was originally written as an invited “vision” presentation for the eScience 2019 conference, but I decided to present something more focused for that event. Never wanting to let words go to waste I decided to put a revised version here.   It was written as a look back at the period of eScience from 2019 to 2050.   Of course, as it is being published in 2019, it is clearly a work of science fiction, but it is based on technology trends that seem relatively clear. Specifically, I consider the impact of four themes on eScience: the explosion of AI as an eScience enabler, quantum computing as a service in the cloud, DNA data storage in the cloud, and neuromorphic computing.

I.  Introduction

Predictions of the future are often so colored by the present that they miss the boat entirely.   The future, in these visions, tends to look like today but a bit nicer. Looking at images of the future from the 1950s we see pictures of personal helicopters (Figure 1) that clearly missed the mark.   But in some cases, the ideas are not so far off, as in the case of the woman ordering shirts “online” as in Figure 2.

hilicopters5

Fig 1. Future personal transportation. Mechanics Illustrated, Jan 1951

videophone2

Fig 2. Shopping “online” vision.

To look at the evolution of eScience in the period from 2019 to 2050, it can help to look at the state of the art in computing in 1960 and ask how much of today's technology we can see emerging there.   My first computer program was written when I was in high school. It was a Fortran version of "hello world" that consisted of one line of Fortran per punched card, as shown in Figure 3, and the entire program consisted of a deck of 12 cards.

Punched_card2

Fig 3.   The Fortran statement CALL RCLASS(AAA,21,NNC, PX3,PX4)

If asked to predict the future then, I might have said that software and data in the years ahead would be stored on a punched paper tape over ten miles long.

Fifty years ago, the high points of computing looked like:

  • The programming language FORTRAN IV was created in 1961.
  • IBM introduced its System/360. A monster machine.
  • Gordon Moore makes an observation about scale in 1965 (that became Moore’s law).
  • IBM created the first floppy disk in 1967.
  • The SABRE database system, built by IBM, helped American Airlines manage its reservations data.
  • In 1970 Edgar Codd came up with the relational database idea. It was not implemented until 1974.
  • The Internet: a UCLA student tried to send "login," the first message over ARPANET, at 10:30 P.M. on October 29, 1969.  (The system transmitted "l" and then "o" … and then crashed.)

     What we might have extrapolated from this is that mainframe IBM computers would continue to dominate, databases were going to emerge as a powerful tool and there would be a future for computer-to-computer networking, but PCs and web search were not remotely visible on the horizon. The profound implication of Moore's law would not be evident until the first microprocessors appeared in 1979 and the personal computer revolution of the 1980s followed. Those same microprocessors led to the idea that dozens of them could be packaged into a "parallel computer". This resulted in a period of intense research in universities and startups to build scalable parallel systems. While the first massively parallel computer was the Illiac IV, which was deployed at NASA Ames in 1974, it was the work on clusters of microprocessors in the 1980s [1] that led to the supercomputers of 2019. Looking at computing in 1970, few would have guessed this profound transition from mainframes like the System/360 to the massively parallel systems of 2019.

II. eScience from 2019 Forward

A.   The Emergence and Evolution of the Cloud

   The World Wide Web appeared in 1991 and the first search engines appeared in the mid 90s. As these systems evolved, so did the need to store large segments of the web on vast server farms. Ecommerce also drove the need for large data centers. These data centers quickly evolved to offer data storage and virtual machines as a service to external users.   These clouds, as they became known, were designed to serve thousands to millions of concurrent clients in interactive sessions.

   For eScience, clouds provided an important alternative to traditional batch-oriented supercomputers by supporting interactive data exploration and collaboration at scale. The cloud data centers evolved in several important ways. Tools like Google's Kubernetes were widely deployed to support computation in the form of swarms of microservices built from software containers. Deploying Kubernetes still requires allocating resources and configuring systems, and microservices are continuously running processes. Serverless computing is a newer cloud capability that avoids the need to configure VMs to support long-running processes when they are not needed.

   Serverless computing is defined in terms of stateless functions that respond to events such as signals from remote instruments or changes in the state of a data archive. For example, an eScience workflow can be automatically triggered when instrument data arrives in the data storage archive. The analysis in the workflow can trigger other serverless functions to complete additional tasks.

   The cloud of 2019 is defined in terms of services and not raw hardware and data storage. These services include:

  • Streaming data analytics tools that allow users to monitor hundreds of live streams from remote instruments.   Edge computing services evolved as a mechanism to off-load some of the analytics to small devices that sit between the instruments and the cloud.
  • Planet scale databases allow a user to store and search data collections that are automatically replicated across multiple continents with guaranteed consistency.
  • Services to support applications of machine learning became widely available across all the commercial public clouds. These services include speech recognition and generation, automatic language translation, computer vision tasks such as image recognition and automatic captioning.

   The architecture of the cloud data centers continued to evolve toward ideas common in modern supercomputers. The first major change was the introduction of software defined networking throughout the data centers. Amazon began introducing GPU accelerators in 2012. This was followed by Google’s introduction of TPU chips to support the computational needs of deep learning algorithms coded in the Tensorflow system. The TPU (figure 4) is a device designed to accelerate matrix multiplication. It is based on systolic computation ideas of the 1980s and has been applied primarily to Google machine learning projects.

Fig 4.   Google TPU v3 architecture. See N. Jouppi, C. Young, N. Patil,  D. Patterson [2, 3].

The most radical evolutionary step has been taken by Microsoft in the Azure cloud. Project Brainwave introduced a mesh-connected layer of FPGA devices that spans the entire data center (Figure 5).   These devices can be programmed to cooperate on the parallel execution of large computational tasks, including serving deep learning models.

brainwave-arch

Fig 5. Microsoft FPGA-based Project Brainwave [4, 5]

B.   Major Themes Going Forward from 2019

The following pages describe the four major themes that are going to dominate the decades forward from 2019.   They are:

  • The explosion of AI as an eScience enabler.
  • Quantum computing as a service in the cloud.
  • DNA data storage in the cloud.
  • Neuromorphic computing

 III. AI and eScience

     Machine learning went through a revolution in the years from 2008 to 2019 due to intensive work at universities and in companies.   The cloud providers, including Google, Microsoft and others, were driven by the need to improve their search engines. Along the way they amassed vast stores of text and images. Using this data and the new deep learning technologies, they were able to build the very powerful language translation systems and image recognition and captioning services that were mentioned above. By 2017, applications to eScience were beginning to emerge.

A.   Generative Neural Networks and Scientific Discovery

     One of the more fascinating developments to arise from deep learning research was a special class of neural networks called generative models. The important property of these networks is that if you train them with a sufficiently large and coherent collection of data samples, the network can be used to generate similar samples. These are most often seen in public as pernicious tools used to create fake images and videos of important people saying things they never said.

     What these networks are doing is creating a statistical distribution of data that, when sampled, produces data whose properties match nicely with the input data. When a scientist creates a simulation based on assumed parameters of nature, the simulation is evaluated against the statistical properties of the experimental observations.   Generative models can be used to validate the simulation output in cases where the data is sparse. One of the most commonly used generative models is the Generative Adversarial Network (GAN), in which two networks are pitted against each other. As shown in Figure 6, a discriminator is trained to recognize the data that comes from the real data set and a generator is trained to fool the discriminator; a minimal sketch of this training loop follows the figure. When converged, the generator mimics the data distribution of the real data. There are several interesting examples of how generative models have been used in science. We list several in [6], but two that stand out are an application in astronomy and one in drug design.

GAN

Fig 6. Generative Adversarial Network (GAN) from this site.
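As a concrete (and heavily simplified) illustration of the adversarial training loop in Figure 6, the following PyTorch sketch trains a generator to mimic samples from a one-dimensional Gaussian. The network sizes, learning rates and the target distribution are arbitrary choices made for this illustration, not part of any of the cited work.

import torch
import torch.nn as nn

real_data = torch.distributions.Normal(4.0, 1.25)   # the "real" data distribution

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    # Train the discriminator to label real samples 1 and generated samples 0.
    real = real_data.sample((64, 1))
    fake = G(torch.randn(64, 1)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 1))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(5000, 1)).mean().item())   # drifts toward 4.0 as training converges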

    M. Mustafa and colleagues demonstrate how a slightly modified standard Generative Adversarial Network (GAN) can be used to generate synthetic images of weak lensing convergence maps derived from N-body cosmological simulations [7]. The results illustrate how the generated images match their validation tests; what is more important, the resulting images also pass a variety of statistical tests ranging from tests of the distribution of intensities to power spectrum analysis.

    However, generative methods do not, by themselves, help with the problem of inferring the posterior distribution of the inputs to scientific simulations conditioned on the observed results, but there is value in creating "life-like" samples. F. Carminati, G. Khattak and S. Vallecorsa make the argument that designing and testing the next generation of sensors requires test data that is too expensive to compute with simulation. A well-tuned GAN can generate the test cases that fit the right statistical model at the rate needed for deployment [8].

     In the area of drug design, Shahar Harel and Kira Radinsky have used a generative model to suggest chemical compounds that may be used as candidates for study [9]. They start with a prototype compound known to have some desirable properties. This is expressed as a sequence and fed to a convolutional layer that allows local structures to emerge as shorter vectors that are concatenated. A final all-to-all layer is used to generate a sequence of mean and variance vectors for the prototype. This is fed to a "diversity layer" which adds randomness, as shown in Figure 7.

drug-network

Fig 7. Shahar Harel and Kira Radinsky's multi-layer generative network for drug design.

       The decoder is an LSTM-based recurrent network which generates the new molecule. The results they report are impressive. In one series of experiments they took as prototypes compounds from drugs that were discovered years ago, and they were able to generate more modern variations that are known to be more powerful and effective. No known drugs were used in the training.

B.   Probabilistic Programming and Bayesian Inference

     For very large experiments such as those conducted at the Large Hadron Collider, the scientists are interested in testing their models of the universe based on the record of particle collisions that were generated.   It is often the case that the scientists have a detailed simulation of the experimental system based on the current theoretical models of the physics involved. These models are driven by setting parameters that correspond to assumptions about nature. When they run the experiment, they see new behaviors in the data, such as new particles emerging from collisions, and they would like to see which parameters, when fed to the simulation, can produce output that corresponds to the experimental results. In other words, the simulation is a function taking parameters x to outputs y, and the interesting question is: given the outputs, what are the corresponding values for x? This is an inverse problem that is notoriously difficult to solve for large scientific problems. In terms of Bayesian statistics, they are interested in the posterior distribution P(x | y) where y is the experimental statistics.

     In 2018 and 2019 researchers began to study programming languages designed to express and solve problems like these. Pyro [10], Tensorflow Probability [11], and the Julia-based Gen [12] are a few of the Probabilistic Programming Languages (PPLs) introduced by 2019. One of the ideas in these languages is to build statistical models of behavior in which program variables represent probability distributions and then, by looking at traces of the random choices associated with these variables during execution, one can make inferences about the random behaviors that influence outcomes.

     While these languages have been put to use in industries such as finance, their role in eScience is now becoming apparent.    A. Güneş Baydin, et al. showed that PPLs can be used in very large Bayesian analyses of data from an LHC use case. Their approach allows a PPL to couple directly to existing scientific simulators through a cross-platform probabilistic execution protocol [13].

     Probabilistic programming with PPLs will go on to become a standard tool used by eScientists.

C.   AI-based Research Assistant

      2018 was the year of the smart on-line bot and the smart speaker. These were cloud-based services that used natural language interfaces for both input and output. The smart speakers, equipped with microphones, listen for trigger phrases like "Hello Siri" or "Hello Google" or "Alexa", record a query in English, extract the intent and reply within a second. They can deliver weather reports, do web searches, keep your shopping list and keep track of your online shopping.

     The impact of this bot technology will hit eScience when the AI software improves to the point that every scientist, graduate student and corporate executive has a personal cloud-based research assistant. Raj Reddy calls these Cognition Amplifiers and Guardian Angels. Resembling a smart speaker or desktop/phone app, the research assistant is responsible for the following tasks:

  • Cataloging research data, papers and articles associated with its owner’s projects. The assistant will monitor the research literature looking for papers exploring the same concepts seen in the owner’s work.
  • Coordinating meetings among collaborators and establishing coordination among their research assistants.
  • Automatically sifting through other data from collaborators and open source archives that may be of potential use in current projects.
  • Understanding the mathematical analysis in the notes generated by the scientist and using that understanding to check proofs and automatically propose simulators and experiment designs capable of testing hypotheses implied by the research.

     From the perspective of 2019 eScience, the claim that the Research Assistant of the future will be able to derive simulations and plan experiments may seem a bit “over the top”. However, progress on automatic code completion and generation was making great strides in 2019. Transformers [16] provide a second generation of sequence-to-sequence AI tools that outpaced the recurrent neural networks previously used.   Advances in programming tools based on similar technology show that AI can be used to generate code completions for C# method calls based on deep semantic mining of online archives of use cases.

     We built a toy demo of the Research Assistant in [14] that could take natural language (spoken or typed) input and find related research articles from Bing, Wikipedia and ArXiv (see Figure 6.)

res-asst4

    Fig 6. Demo cloud-based “research assistant” [14]

     This demo prototype research assistant was built by composing a few cloud tools as shown in Figure 7. A much more sophisticated version can be built using a modern chatbot tool like RASA, but that will not address everything that is needed to reach the final goal.

Ra-components

Fig 7. The toy research assistant demo is hosted in an "elegant" cardboard container from Google's voice kit running a Python script on a Raspberry Pi. It invokes Google's speech-to-text, a text analysis service running on Algorithmia, and uses Amazon's Lex to generate voice. Depending on context, calls are made to Bing, Wikipedia or ArXiv.

     While this demo was a toy, there has been serious work over the last few years to make progress on this topic. In 2018, Google introduced its Dataset Search service to provide easy data set discovery. The work of the Allen Institute for AI stands out [15]. Their Semantic Sanity project is a sophisticated research search engine that allows you to tell it the basic topics that interest you, and it will monitor ArXiv looking for important related contributions.   Aristo is "an intelligent system that reads, learns, and reasons about science". It can reason about diagrams and math, and it can understand and answer elementary school science questions.

     One important problem that must be solved is extracting non-trivial mathematical concepts from research papers and notes so that similar papers can be found. In addition to Semantic Sanity, there is some other early work related to this problem. For example, researchers are using unsupervised neural networks to sift through PubChem and NLP techniques to identify potential components for materials synthesis [22].

 IV. The Rise of Quantum in the Cloud

     Long viewed as a subject of purely theoretical interest, quantum computing emerged in 2018 as a service in the cloud. By 2019, Rigetti, IBM, D-Wave and Alibaba had live quantum computing services available.   Google and Microsoft followed soon after that.   The approaches taken to building a quantum computer differed in many respects.

     The basic unit of quantum computing is the qubit, which obeys some important and non-intuitive laws of physics. Multiple qubits can be put into an "entangled" state in which an observation about one can affect the state of the others even when they are physically separated.   One can apply various operators to qubits and pairs of qubits to form "logical" circuits, and these circuits are the stuff of quantum algorithms. A fact of life about quantum states is that once you measure them, they are reduced to ordinary bits. However, the probability that a bit is reduced to a zero or a one is determined by the unmeasured quantum state of the system. A good quantum algorithm is one in which the quantum circuit produces outputs with a probability distribution that makes solving a specific problem very fast.

     The first important quantum computations involved solving problems in quantum chemistry. In fact, this was the role that Richard Feynman had suggested for a quantum computer when he first came up with the idea in the 1980s. The other area of eScience that quantum computers excelled at was certain classes of optimization problems.

     Early quantum computers are programmed by literally composing the quantum gates as illustrated in Figure 8.

quantum-circuit

Fig 8. A simple quantum circuit to create an entangled pair of two qubits and measure the result
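With a 2019-era toolkit such as IBM's qiskit (discussed below), the circuit in Figure 8 can be written in a few lines. This is only a sketch: the gate-level API shown here is stable, but the simulator and execution interfaces have changed across qiskit releases, so running the circuit is left to the reader.

from qiskit import QuantumCircuit

# Bell-pair circuit: Hadamard on qubit 0, CNOT from qubit 0 to qubit 1, then measure both.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])
print(qc)   # draws an ASCII picture of the circuit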

     Programming tools to build such circuits include IBM's qiskit for the IBM-Q machines and Microsoft's Q# language and compiler. The IBM hardware was based on what they call "noisy intermediate-scale quantum" (NISQ) computers. As shown in Figure 9, the individual qubits are non-linear oscillators. Tuned superconducting resonator channels are used for readout, and the qubits are tied together by additional superconducting channels.

ibm-qubits

Fig 9 An early 5-qubit IBM-Q computational unit. Source: IBM-Q website.

     One problem with early quantum systems is that the qubits are very susceptible to noise and degrade quickly.   In other words, the error rate of quantum algorithms can be very high. The deeper the quantum circuit (the number of layers of “gates”), the more noise. A way around this is to add error-correcting redundancy into the circuit, but this means more qubits are required.   Quantum volume refers to a measure of the number of qubits needed for the error-corrected circuit times the depth of the corrected circuit. Unfortunately, early quantum computers had very low bounds on achievable quantum volume.

Microsoft took a different approach, pursuing topological qubits based on braided quasiparticle states. These are much more noise resistant and hence capable of supporting deeper circuits with less error correction.

V.  DNA-Based Storage

     There were two problems with 2019-era storage technologies.   First, it was not very stable: data degrades and storage devices fail. Second, there was not enough storage capacity to capture the data that was being generated.   2.5 quintillion bytes of data were generated each day in 2018. Even if you cast out 99.9% of this as being of no long-term value, the remainder still leaves 2.5e+15 bytes per day. The amount of data stored in the cloud in 2018 was 1.0e+18 bytes (an exabyte). In other words, the cloud only held 400 days' worth of the important data. Obviously, another order of magnitude or more of the data needed to be discarded to allow the data centers time to expand to contain this growth. A new storage technology was needed.

     DNA storage was first proposed in the 1960s, but not much happened with the idea until 1988 when researchers from Harvard stored an image in the DNA of E. coli. In 2017 researchers at Columbia University published a method that allowed the storage of 215 petabytes (2.15e+17 bytes) of data per gram of DNA. And DNA is very stable over long periods of time. While this was promising, there was still a big problem:   the encoding and decoding of the data were still an expensive manual process, and it was not practical to have lab scientists running around in the back rooms of data centers managing wet processes.

        In 2019, researchers at the University of Washington and Microsoft demonstrated an automated lab that could encode and decode data to DNA without human hands [19]. The system works by converting the ones and zeros of digital data into the As, Ts, Cs and Gs that make up the building blocks of DNA. These encoded sequences are then fed into synthesizing systems. Their next breakthrough was to reduce the entire "lab" to a chip that uses microfluidics to route droplets of water around a grid to enact the needed chemistry steps. They also produced a sophisticated software stack called Puddle that allows scientists to program this with conventional high-level languages [20].

      Other research has demonstrated ways in which DNA-encoded data can be searched and structured much like a relational database.   As the costs came down, this became the standard cloud storage technology.

VI. Neuromorphic Computing

     It was long a dream of computer designers to build systems that mimic the connectome of the biological brain. A major research initiative in Europe, the Human Brain Project, looked at the possibility of simulating a brain on traditional supercomputers. As that proved unrealistic, they turned to the study of special hardware devices that can simulate the behavior of neurons. Technology to build artificial neurons progressed in university and industry labs. The Stanford Neurogrid can simulate six billion synapses.   In 2017 Intel introduced the Loihi neuromorphic research test chip. The device is a many-core mesh of 128 neuromorphic cores, each of which contains 1,024 primitive spiking neural units. The neural units are current-based synapse leaky integrate-and-fire neurons [21].

     In addition to the Loihi chip, Intel has released Pohoiki Beach, a system composed of 64 Loihi chips and a substantial software stack to allow application development. Because overall power consumption is 100 times lower than GPU-based neural networks, Intel's application targets are autonomous vehicles and robots. While full realization of true biological brain-like functionality may not be achieved until the second half of the 21st century, it is nonetheless an exciting step forward.

 VII.   Conclusions

     eScientists in 2019 found themselves at an important inflection point in terms of the technology they could deploy in doing science.   The cloud has evolved into a massive on-line, on-demand, heterogeneous supercomputer. It not only supports traditional digital simulation; it will also support hybrid quantum-digital computation.   It will soon allow applications to interact with robotic sensor nets controlled by neuromorphic fabrics. Storage of research data will be limitless and backed up by DNA-based archives.

     One of the most remarkable features of computing in the 21st century has been the evolution of software. Programming tools have evolved into very deep stacks that use AI methods to enable a scientific programmer to accomplish more with a few lines of Julia in a Jupyter notebook than was remotely possible programming the computers of 1980. New probabilistic programming languages that combine large-scale simulation with deep neural networks show promise of making Bayesian inference an easy-to-use tool of eScience.

     The role of AI was not limited to the programming and execution of eScience experiments. Users of the AI research assistants in 2050 can look back at the "smart speakers" of 2019 and view them as laughably primitive, much as the users of the 2019 World Wide Web looked back at the 1970s era when computers were driven by programs written on decks of punched cards.

References

  1. G. Fox, R. Williams, P. Messina, "Parallel Computing Works!", Morgan Kaufmann, 1994.
  2. N. Jouppi, C. Young, N. Patil, D. Patterson, “A Domain-Specific Architecture for Deep Neural Networks”, Communications of the ACM, September 2018, Vol. 61 No. 9, Pages 50-59
  3. https://cloud.google.com/tpu/docs/images/tpu-sys-arch3.png
  4. Chung, et al. “Serving DNNs in Real Time at Datacenter Scale with Project Brainwave.” IEEE Micro 38 (2018): 8-20.
  5. Fowers et al., “A Configurable Cloud-Scale DNN Processor for Real-Time AI,” 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA), Los Angeles, CA, 2018, pp. 1-14
  6. D. Gannon, “Science Applications of Generative Neural Networks”, https://esciencegroup.com/2018/10/11/science-applications-of-generative-neural-networks/, 2018
  7. M. Mustafa, et al., "Creating Virtual Universes Using Generative Adversarial Networks", arXiv:1706.02390v2 [astro-ph.IM], 17 Aug 2018.
  8. F. Carminati, G. Khattak, S. Vallecorsa, "3D convolutional GAN for fast simulation", https://www.ixpug.org/images/docs/IXPUG_Annual_Spring_Conference_2018/11-VALLECORSA-Machine-learning.pdf
  9. S. Harel and K. Radinsky, "Prototype-Based Compound Discovery using Deep Generative Models", http://kiraradinsky.com/files/acs-accelerating-prototype.pdf
  10. E. Bingham, et al., "Pyro: Deep Universal Probabilistic Programming", Journal of Machine Learning Research, 2018.
  11. Tensorflow Probability, https://medium.com/tensorflow/an-introduction-to-probabilistic-programming-now-available-in-tensorflow-probability-6dcc003ca29e
  12. Gen, https://www.infoq.com/news/2019/07/mit-gen-probabilistic-programs
  13. Güneş Baydin, et al, “Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale”, arXiv:1907.03382v1
  14. D. Gannon, "Building a 'ChatBot' for Scientific Research", https://esciencegroup.com/2018/08/09/building-a-chatbot-for-scientific-research
  15. Allen Institute for AI, Semantic Sanity and Aristo, https://allenai.org/demos
  16. Vaswani et al., “Attention Is All You Need”, arXiv:1706.03762v5
  17. F. Chong, "Hybrid Quantum-Classical Computing", https://www.sigarch.org/hybrid-quantum-classical-computing/
  18. R. Shaydulin, et al., "A Hybrid Approach for Solving Optimization Problems on Small Quantum Computers", Computer, vol. 52, no. 6, pp. 18-26, June 2019.
  19. C. Takahashi, B. Nguyen, K. Strauss, L. Ceze, "Demonstration of End-to-End Automation of DNA Data Storage", Nature Scientific Reports, vol. 9, 2019.
  20. M. Willsey, et al., "Puddle: A Dynamic, Error-Correcting, Full-Stack Microfluidics Platform", ASPLOS'19, April 13-17, 2019.
  21. Intel Loihi Neuromorphic Chip, https://en.wikichip.org/wiki/intel/loihi
  22. E. Kim, et al., "Materials Synthesis Insights from Scientific Literature via Text Extraction and Machine Learning", Chem. Mater., 2017, 29 (21), 9436-9444.