Monday, December 31, 2018

2018 in Review

With only a few hours left before 2019, I'm writing this summary to look back on the year.

Major Life Events

The past year was nothing short of a storm.

Quitting My Job

I joined my first company in 2013 and finally left it on January 1, 2018. I spent my last year and a half there completely adrift: dissatisfied with my manager, yet unable to find a way forward on my own, lost in every direction. Thinking back on that time still stirs up endless feelings. That said, the experience taught me the essence of the workplace: once you walk away, you are quickly forgotten. I had little sense of direction in those years, and only when I left did I realize that life does need a direction you set for yourself.

Getting Married

Around that time my girlfriend was longing for life abroad, so she encouraged me to try for opportunities overseas. I scrambled for referrals everywhere, and fortunately my current company took me in. My girlfriend, however, still had six months to go before graduation.
To make the move abroad go smoothly later on, we rushed to register our marriage and got married in a hurry. Honestly, it feels like I snatched her up before she knew what was happening. I don't regret it, though; I had been thinking about it for years.

Starting the New Job

After leaving I rested at home for almost a month, though it was hardly idle time. First came two wedding ceremonies, one with my family and one with my wife's. Then I prepared for the IELTS, cleared the bar with an overall band of 6, and started the visa process. I got my work visa four days before my start date and began life at the new company.
I do have to "credit" myself for one thing: wherever I go, the stock price falls. When I joined the new company its stock was at an all-time high; within two months the bad news kept coming, while my former employer's stock soared.
One more piece of bad news: I am also a Party member.

Reuniting Abroad

A few months into the new job, the only thing on my mind was when my wife could join me. Because I dragged my feet, she ended up waiting at home for three months before she could come to my side. This deserves real reflection: I still don't think things through thoroughly enough. The same problem shows up in my work, and it needs to be fixed.

My Wife's Pregnancy

After six months apart, we were finally reunited in September! Absence makes the heart grow fonder, and my wife got pregnant right away, completely unplanned! The original plan was to enjoy a couple of years together first, but it happened at exactly this moment.
Our whole rhythm of life has been thrown off. Every morning she throws up once, plus one more time at random during the day. She has no interest in food at all and gets by each day on a little fruit and a bit of rice.
Living in the UK, there is nothing around that she wants to eat either; the food in Chinatown holds no appeal for her at all. My cooking is hopeless, so feeding her properly was too big a challenge. In the end she had to go back to China. Looking back, what was I even chasing?

Looking Ahead

The start of a new year calls for good wishes. In the new year I hope mother and child stay safe and healthy and join me soon. I hope to take another step forward in my career. I hope both sets of parents stay healthy and that life goes smoothly for them.
Beyond these life goals, I should also hold myself to a higher standard.

Thinking Globally and Completely

The past year included a job change, a turn taken after my career ran into obstacles. It made me realize a big blind spot in the way I think: I tend to look at problems from a single, one-sided angle, and I understand things only at one particular stage instead of considering the whole picture and the whole process.

Clear and Accurate Communication

I have not communicated promptly and accurately with colleagues and managers about work, which has repeatedly caused problems. This also needs to improve.

Focus on What Matters

Whether at work or in my spare time, I always have the urge to do everything at once. That approach is flawed, because a person's energy is limited. At work I need to grab the key problems and solve them with full effort; in my own studies I should break through at a single point first rather than trying to learn every detail of everything.

Written with StackEdit.
Date: December 31, 2018

Wednesday, December 26, 2018

Parameter Server Architecture

General Introduction

Parameter servers are widely used in large-scale machine learning systems. The general idea of a PS is to distribute the parameters across multiple machines in order to handle data and parameter sizes that a single machine cannot hold.
Alongside the multiple parameter servers there are also multiple worker nodes, which carry out the related computation and reduce the time required to train the model.
However, several problems arise when designing and implementing a PS system:

  1. communication across multiple machines;
  2. synchronization between multiple machines;
  3. a storage system for the parameters.

Once these three problems are handled, the general design is essentially nailed down.
In the following, I will give an introduction to the parameter server designed by Mu Li. In my opinion, it is a well-designed system with some very elegant engineering choices.

Communication

In this system, communication is handled through ZMQ, so most of the implementation complexity is absorbed by that library.
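As a flavour of what ZMQ gives you, here is a minimal pyzmq sketch of one worker pushing an update to a server; the PUSH/PULL socket choice and the JSON message layout are my own illustration, not ps-lite's actual wire protocol.

```python
# Minimal push/pull sketch with pyzmq (illustrative only, not ps-lite's protocol).
import zmq

def server(port=5555):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PULL)          # server pulls updates from workers
    sock.bind(f"tcp://*:{port}")
    while True:
        msg = sock.recv_json()           # e.g. {"key": 3, "grad": [...], "ts": 7}
        print("received key", msg["key"], "at timestamp", msg["ts"])

def worker(port=5555):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUSH)          # worker pushes updates to the server
    sock.connect(f"tcp://localhost:{port}")
    sock.send_json({"key": 3, "grad": [0.1, -0.2], "ts": 7})
```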

Synchronization

Getting multiple machines to synchronize is a hard problem. The key point is the message design: each message carries a timestamp, and for every pair of communicating machines the timestamp uniquely identifies a message.
The system builds on this design: each worker node can wait for a specific message identified by its timestamp. As long as all machines wait on the same timestamp, they act on the same timeline.
Since a timestamp only works between two nodes, how are broadcast situations handled? The solution is to build multiple point-to-point connections between the nodes.
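A toy in-process sketch of the "wait on a timestamp" idea, using a tracker object of my own invention; the real system does this per point-to-point connection over the network.

```python
# Toy model of "wait until timestamp ts has arrived" (illustrative, in-process only).
import threading

class TimestampTracker:
    def __init__(self):
        self.done = set()                     # timestamps already received
        self.cond = threading.Condition()

    def mark(self, ts):                       # called when a message with this ts arrives
        with self.cond:
            self.done.add(ts)
            self.cond.notify_all()

    def wait(self, ts):                       # block the caller until ts has arrived
        with self.cond:
            self.cond.wait_for(lambda: ts in self.done)
```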

Storage System

The storage is actually just a hash map. Amazingly easy! :)
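A dict-based sketch of such a parameter store, with hypothetical push/pull methods mirroring the usual key-value interface:

```python
# A hash-map parameter store: keys -> values, updated by pushed gradients (sketch).
class KVStore:
    def __init__(self, lr=0.1):
        self.table = {}                 # the "storage system" is literally a hash map
        self.lr = lr

    def push(self, key, grad):          # apply a gradient update to one parameter
        self.table[key] = self.table.get(key, 0.0) - self.lr * grad

    def pull(self, key):                # read the current value of one parameter
        return self.table.get(key, 0.0)
```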

Thursday, August 18, 2016

Scope Rules for Identifiers

Scope Rules

In every programming language, identifiers are defined by specific rules. Each expression in a program involves different identifiers, so a simple question arises: what do these identifiers refer to in the expression? This is called name resolution, and the exact rule is defined by each programming language itself.
For name resolution, the compiler must know the name binding, i.e., the mapping from identifier to entity. The scope of a name binding is the part of the program text in which the binding is valid. At different locations in the program text, the name binding may be different.

Scope Rule

Generally speaking, the scope of an identifier is the portion of program text in which the entity can be accessed through that identifier. Scope is therefore a property of the identifier. We can also speak of the naming context, which is the union of the scopes of all identifiers.

Scope Level

Depending on the level at which a name is defined, one gets function scope, module scope, and so on.
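A small Python example of the same identifier resolving to different bindings depending on where it appears in the program text:

```python
# The identifier `x` is bound to different entities depending on scope.
x = "module"            # module scope

def outer():
    x = "enclosing"     # function (enclosing) scope

    def inner():
        x = "local"     # local scope shadows the outer bindings
        return x

    return inner(), x   # -> ("local", "enclosing")

print(outer(), x)       # ('local', 'enclosing') module
```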

Written with StackEdit.

Friday, April 8, 2016

Using local information

Kernel Method (or Non-parametric Method)

There seem to be two different definitions of "kernel methods" in machine learning. One is related to RKHS and so on; the other refers to certain non-parametric methods. The latter is the topic of this post, and the main idea is how to use localized information to build a model.

Unlike a linear model, which constructs a single global function over the whole sample space, these kernel methods work by constructing a localized function around each new sample point. Let's see how this idea applies to different tasks.

When applied to regression, for each new sample point the method constructs a weight matrix based on some kernel function. The kernel assigns higher weights to training points that are closer under some norm. A weighted regression is then performed, yielding a new prediction function, which returns the predicted value.

For regression there is also another kernel method, which weights the samples within the neighborhood of the new point and returns a weighted average of their response values.
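A minimal numpy sketch of this weighted-average estimator (the Nadaraya-Watson form), assuming a Gaussian kernel with bandwidth h:

```python
import numpy as np

def kernel_regression(x_train, y_train, x_new, h=0.5):
    """Kernel-weighted average of training responses around x_new."""
    w = np.exp(-0.5 * ((x_train - x_new) / h) ** 2)   # Gaussian weights, decaying with distance
    return np.sum(w * y_train) / np.sum(w)

x = np.linspace(0, 5, 50)
y = np.sin(x) + 0.1 * np.random.randn(50)
print(kernel_regression(x, y, 2.0))                   # roughly sin(2.0)
```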

When applied to density estimation and classification, the method likewise constructs a weighting kernel that decays with the distance from the point, and then classifies according to Bayes' rule. We can also use a mixture of Gaussians to estimate the density of each class more accurately.

For all the methods mentioned above, there is a bias-variance trade-off to manage.

Written with StackEdit.

Friday, April 1, 2016

Notes on Linear Regression

Having read the linear regression chapter of The Elements of Statistical Learning, its approach is quite different from Pattern Recognition and Machine Learning. After introducing the least squares method, ESL discusses the variance of the estimated coefficients. That was something new to me.
The first question is why we need to do this. What are the benefits of this kind of inference? The more interesting point is that, under the assumption that the true underlying model is linear, ESL gives hypothesis tests and interval estimates for the parameters. This is new to me, but the obvious question is what happens when the real underlying model is not linear, which I think is the most common scenario.
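For reference, the standard results that chapter builds on (assuming the model y = Xβ + ε with Gaussian noise of variance σ²):

$$
\hat\beta = (X^{\mathsf T}X)^{-1}X^{\mathsf T}y, \qquad
\operatorname{Var}(\hat\beta) = (X^{\mathsf T}X)^{-1}\sigma^{2}, \qquad
z_j = \frac{\hat\beta_j}{\hat\sigma\sqrt{v_j}},
$$

where v_j is the j-th diagonal element of (XᵀX)⁻¹; the z-score z_j is used to test the hypothesis that β_j = 0.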
Beyond that, ESL gives a detailed analysis and comparison of different shrinkage methods, with a clear description of the bias-variance decomposition, as well as more advanced material such as the lasso path and the LAR algorithm.

Tuesday, January 19, 2016

Notes on Pattern Recognition and Machine Learning

Chapter 1 Introduction

Three important parts: probability distributions, decision theory, and information theory.
Decision theory provides the loss function; information theory provides entropy and the KL divergence; probability theory provides conditional and joint probabilities, and the Bayesian and frequentist formalisms.
Other topics concern the high-dimensional setting: for naive methods the required data size grows exponentially with the number of dimensions, and high-dimensional geometry is counter-intuitive. But there are also more hopeful insights about high-dimensional data: real data tend to lie on a lower-dimensional manifold of the high-dimensional space, and local smoothness can be assumed.
For model selection: frequentists have cross validation, while Bayesian methods combine model complexity and training performance.

Chapter 2 Probability distribution

This chapter focuses on the probability distributions relevant to machine learning, especially the Gaussian. One important thing I learned from this chapter is how to derive the conditional and marginal distributions from a joint Gaussian: the Gaussian has two very important components, the quadratic term involving the precision matrix and the mean term, and we can find the desired distribution by completing the square in the quadratic term.
Another important point about the Gaussian is the precision matrix, which is very helpful when deriving the conditional and marginal distributions.
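As a reference for the completing-the-square argument: partition x into (x_a, x_b), with mean (μ_a, μ_b) and precision matrix Λ = Σ⁻¹ partitioned the same way; then

$$
p(x_a \mid x_b) = \mathcal N\!\big(x_a \;\big|\; \mu_a - \Lambda_{aa}^{-1}\Lambda_{ab}(x_b - \mu_b),\; \Lambda_{aa}^{-1}\big),
\qquad
p(x_a) = \mathcal N(x_a \mid \mu_a, \Sigma_{aa}),
$$

so the conditional is naturally expressed with the precision matrix and the marginal with the covariance matrix.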
The rest of chapter 2 is about the exponential family, a generalized class of distributions with density of the form p(x | η) = h(x) g(η) exp{ηᵀ u(x)}, where u(x) is the sufficient statistic and g(η) is a normalizing coefficient depending on the parameter η (its reciprocal is also called the partition function).
For probability densities, besides the parametric form there is the nonparametric form. For nonparametric density estimation there are two approaches, the nearest-neighbour method and the kernel method, but both come from the same basic principle of estimating a probability from local counts.

Chapter 3 Linear Regression

This book takes a Bayesian approach to every model (or hypothesis). For linear regression, there are several different perspectives on the derivation:
1. MLE: assuming a Gaussian distribution for the noise.
2. Geometric view: projecting the target vector onto the column space of the data samples.

For the regularization part, assume a Gaussian prior distribution on the parameter w.
The ultimate purpose of a learned model is to predict the target value t for a new input point x, so how do we express the uncertainty of the predicted value? Frequentists and Bayesians use different methods:
1. Bayesians express the uncertainty through the posterior distribution of the parameter w.
2. Frequentists make a point estimate of w first, and then quantify the uncertainty through a series of thought experiments over repeated data sets.

One important point: the bias-variance decomposition is a frequentist notion, because the interpretation of bias and variance depends on the following idea:
Imagine a collection of different data sets, each comprising N data points drawn from the same unknown distribution. From each data set the learning algorithm produces a point estimate of w, and based on that estimate a prediction is made at a new data point x. Since there are many possible data sets, we can take the expectation and the variance of all the predicted values at the new point; this is the origin of bias and variance. Different models have different bias and variance depending on their complexity, so controlling model complexity is vital for machine learning.
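In symbols, with h(x) the optimal regression function E[t | x] and y(x; D) the prediction learned from data set D, the pointwise decomposition reads

$$
\mathbb{E}_{\mathcal D}\big[\{y(x;\mathcal D) - h(x)\}^2\big]
= \underbrace{\{\mathbb{E}_{\mathcal D}[y(x;\mathcal D)] - h(x)\}^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}_{\mathcal D}\big[\{y(x;\mathcal D) - \mathbb{E}_{\mathcal D}[y(x;\mathcal D)]\}^2\big]}_{\text{variance}},
$$

and the expected squared loss adds an irreducible noise term on top of these two.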

But what is the Bayesian approach to linear regression estimation and model selection?
Start with a prior distribution over the parameter w (usually Gaussian), then update this distribution as new data points are observed.
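In formulas, for a Gaussian prior N(w | m₀, S₀), design matrix Φ, targets t, and noise precision β, the posterior is again Gaussian:

$$
p(w \mid t) = \mathcal N(w \mid m_N, S_N), \qquad
m_N = S_N\big(S_0^{-1} m_0 + \beta \Phi^{\mathsf T} t\big), \qquad
S_N^{-1} = S_0^{-1} + \beta \Phi^{\mathsf T} \Phi.
$$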

Frequentists approach model selection with cross validation, while Bayesians do it based on the model evidence.

Another question: what variants are built on linear regression?
There are many variations of simple linear regression.
1. In the original formulation the raw data point is used directly. But a basis function can transform the data point first, from x to φ(x); with many basis functions we obtain a linear model in the transformed representation of the data point. Well-known basis functions include the Gaussian, wavelets, and the sigmoid.
2. Another extension is the norm used for regularization, from the quadratic (L2) penalty to L1 and other Lq norms. Plain linear regression with an L1 regularizer is called the LASSO.

Other topics?
1. The hypothesis complexity of the linear-regression function class, used to derive generalization bounds and sample complexity.

Chapter 4 Linear Classification

For classification, there are three different approaches:
1. Discriminant function: map a training instance to a class label directly.
2. Probabilistic generative model: model the joint probability of the instance and the class label.
3. Probabilistic discriminative model: model the conditional probability of the class label given the instance.

This chapter is about how to realize the three approaches with linear models.
Discriminant functions include linear regression, Fisher discriminant analysis, and the perceptron algorithm; linear regression and the Fisher discriminant are similar in form but use different objective functions. The perceptron is quite unusual, and it is hard to find an appropriate category for it.
A probabilistic generative model is built as follows:

$$
p(C_{k} \mid x) = \frac{p(x \mid C_{k})\, p(C_{k})}{\sum_{j} p(x \mid C_{j})\, p(C_{j})},
$$

where p(x | C_k) is the class-conditional probability distribution. For binary and multi-class classification, if the class-conditional densities are Gaussian and share the same covariance matrix, then the posterior over class labels takes the form p(C₁ | x) = σ(wᵀx + w₀), a sigmoid (or softmax) of a linear function of x. The function σ is called the activation function, and its inverse is called the link function in statistics. So one question is: which forms of class-conditional distribution lead to a linear model? The answer is exponential-family distributions with shared scale parameters.
A discriminative model specifies the conditional probability directly. The linear discriminative model has the following form:

$$
p(C_{1} \mid \phi) = y(\phi) = \sigma(w^{\mathsf T}\phi),
$$

where the activation function is the sigmoid for binary classification and the softmax for multi-class classification. Since the activation function is nonlinear, there is no closed-form solution for this problem; it can only be solved iteratively. **I**teratively **R**eweighted **L**east **S**quares (IRLS) applies Newton's method to this linear discriminative model. Another point that needs attention: when using the 1-of-K coding scheme for class labels, minimizing the negative log-likelihood of the training data is the same as minimizing the cross-entropy of the training data; this holds when the binary labels are represented as 0 and 1. (In many other treatments everyone uses 1 and -1, interesting.)
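A minimal numpy sketch of IRLS for binary logistic regression, assuming targets t in {0, 1} and a design matrix Phi; step-size safeguards are omitted:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def irls(Phi, t, n_iter=10):
    """Newton / IRLS updates for binary logistic regression (sketch)."""
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        y = sigmoid(Phi @ w)                 # current predicted probabilities
        R = np.diag(y * (1 - y))             # weighting matrix
        grad = Phi.T @ (y - t)               # gradient of the negative log-likelihood
        H = Phi.T @ R @ Phi                  # Hessian
        w = w - np.linalg.solve(H, grad)     # Newton step
    return w
```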
In the spirit of this book, there is a Bayesian version of logistic regression. But the posterior of the parameters given the training data is intractable, so the Laplace approximation is used to approximate the posterior distribution.
How does the Laplace approximation work? It approximates the target distribution with a Gaussian. This Gaussian sits on a mode of the distribution, and its precision matrix is the negative Hessian of the log density at that mode.
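In formulas, with z₀ a mode of p(z):

$$
q(z) = \mathcal N\big(z \mid z_0,\, A^{-1}\big), \qquad
A = -\nabla\nabla \ln p(z)\,\big|_{z = z_0}.
$$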
Summary: approaches to classification, cross-entropy, probit regression, Laplace approximation, BIC.

Chapter 5 Neural Network

The most important concept: a neural network is an adaptive linear model, or can be understood as a hierarchical linear model, because each layer just performs a linear-model operation plus a nonlinear activation. So the network is itself nonlinear but composed of linear models. The most important motivation is that the input of one linear model can be the output of another linear model. Seen this way, it is basis expansion with adaptive basis functions, which is much more interesting than the neuron-inspired story.
Viewed as an adaptive linear model, a neural network needs an objective function: cross-entropy for classification, least squares for regression. These concepts all come from the previous chapters.
But for an adaptive linear model the gradient and Hessian are hard to calculate directly, which is where back-propagation comes in.
Using back-propagation, the gradient (and the Hessian) can be calculated efficiently.
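A numpy sketch of one forward and backward pass for a single-hidden-layer regression network with tanh units and squared error; the shapes and names are my own:

```python
import numpy as np

def forward_backward(x, t, W1, W2):
    """One-hidden-layer regression net: forward pass plus back-propagated gradients."""
    a = W1 @ x                                 # first linear layer
    z = np.tanh(a)                             # nonlinear activation
    y = W2 @ z                                 # second linear layer (output)
    # backward pass: propagate the output error back through the layers
    delta_out = y - t                          # dE/dy for squared error
    delta_hidden = (1 - z**2) * (W2.T @ delta_out)
    grad_W2 = np.outer(delta_out, z)           # dE/dW2
    grad_W1 = np.outer(delta_hidden, x)        # dE/dW1
    return y, grad_W1, grad_W2
```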
As a new family of function mappings different from linear models, it needs ways to perform model selection. The old regularization on all the parameters still works well; moreover, as an adaptive linear model, the model itself is hierarchical.
For neural networks, some other approaches can also be used: consistent priors, tangent propagation, convolutions, and soft weight sharing. I think these techniques are quite complicated, and I don't know how much they are used in real open-source tools.
A neural network is a flexible function space and can be used for almost anything. A mixture density network uses a neural network to predict the mixing coefficients, means, and covariance matrices of a mixture model. That is a lot of parameters; I don't think this is good.
Still, this book is about Bayesian methods, so there are Bayesian neural networks for regression and classification. Using the Laplace approximation, the posterior distribution can be approximated to give a predictive distribution and so on. With so many approximations, what is the meaning of getting a distribution rather than a single parameter? I doubt the effectiveness of the Bayesian treatment of neural networks.

Chapter 6 Kernel Method

The previous chapters focus on linear methods and their extensions (I mean neural networks). Kernel methods are quite different: they involve a nonlinear mapping in the model directly.
Every linear method admits a dual representation, in which the model is expressed through a kernel function.
For kernel functions, a valid kernel must have a positive semi-definite Gram matrix, and there are several ways to construct new kernel functions:
1. Compose new kernel functions from existing ones according to a set of rules.
2. Build kernel functions from probabilistic generative models, i.e. combine kernels in a mixture-model way.

That is the key point about kernel functions; the rest is skipped.
Another important topic is the Gaussian process, and it seems I can understand it now. Under the prior, any finite set of data points x1, ..., xN induces a joint distribution over the outputs y(x1), ..., y(xN); a Gaussian process means this joint distribution is Gaussian. The most interesting point is that we do not need to worry about selecting a proper prior over the parameter w.
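A numpy sketch of GP regression prediction with a squared-exponential kernel; the noise level and length scale are free parameters chosen arbitrarily here:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_new, noise=0.1):
    """Posterior mean and variance of y(x_new) given observed (x_train, y_train)."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    k_star = rbf(x_train, x_new)                     # covariance between train and test points
    alpha = np.linalg.solve(K, y_train)
    mean = k_star.T @ alpha
    cov = rbf(x_new, x_new) - k_star.T @ np.linalg.solve(K, k_star)
    return mean, np.diag(cov)
```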

Chapter 7 Sparse Kernel Machine

It starts with the SVM algorithm, but the more interesting part is the Relevance Vector Machine, a Bayesian counterpart of the support vector machine. The only difference from the Bayesian linear model is that the prior distribution for the parameters is composed element-wise:

$$
p(w \mid \alpha) = \prod_{i} \mathcal{N}\big(w_{i} \mid 0, \alpha_{i}^{-1}\big).
$$

With this variant a sparsity effect is achieved, and the Bayesian version exists for both classification and regression models.

Chapter 8 Graphical Model

Graphical models have two kinds of representation, directed and undirected. Each graphical representation specifies a factorization of the joint probability distribution and a set of conditional independences, and these determine the efficiency of inference and learning algorithms.
For directed graphical models, d-separation decides conditional independence; for undirected models the criterion is much simpler.
Inference on chain and tree models is very simple. One important idea: message passing, which passes information from the other nodes to the current node.
For general graphical models, loopy belief propagation, variational inference, and sampling are the solutions. The junction tree algorithm requires complicated steps, and I don't think it is widely used.
Things like factor graphs and clique trees are just alternative representations of the same graphical model; changing the representation does not change the underlying distribution, it is only for computational convenience.
So for graphical models, the information is about the structure of the joint distribution and how to use that structure to accelerate computation.

Chapter 9 EM & Mixture Models

EM is used to obtain maximum likelihood estimates for models with latent variables. Latent variables are introduced to simplify the likelihood of the observed data, even when they have no physical interpretation.
The EM algorithm first computes the posterior distribution of the hidden variables given the observed data. It then computes the expected complete-data log-likelihood under this posterior, and finally maximizes that expectation with respect to the model parameters.
For a long time I could not understand EM, because I did not see how to "get the distribution, then compute an expectation under it". Finally I got the point: the key is not to separate the distribution from the expectation, but to target the expectation of the complete-data likelihood; the only thing required from the posterior are the expectations that appear in it.
If the posterior distribution is simple, EM can be very simple: maximize the complete-data likelihood with the hidden variables replaced by their expected values. When the posterior is complex, approximate inference is required, and the expectations can be obtained in different ways.
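A numpy sketch of EM for a two-component 1-D Gaussian mixture; the initialization is arbitrary and no convergence check is done:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a mixture of two 1-D Gaussians (sketch)."""
    mu, sigma, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E step: responsibilities = posterior of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)            # shape (N, 2)
        # M step: maximize the expected complete-data log-likelihood
        Nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
        pi = Nk / len(x)
    return mu, sigma, pi
```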

Chapter 10 Variational Inference

Variational inference is used for inference problems: marginals, posteriors, and MAP inference. If the distribution is very complex, we can make it simpler by adding extra conditional independence assumptions. The mean-field approximation works by assuming that some groups of variables are independent even though they are not. Variational inference minimizes the KL divergence between the factorized approximation q and the true posterior; plugging the factorization into the KL objective gives a function that can be minimized one factor at a time (see the formulas below).
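The objective and the resulting update, in the usual notation (X observed, Z hidden, q factorized over groups Z_i):

$$
\mathrm{KL}(q \,\|\, p) = -\int q(Z)\, \ln\frac{p(Z \mid X)}{q(Z)}\, \mathrm dZ,
\qquad
q(Z) = \prod_i q_i(Z_i),
$$

and the optimal j-th factor satisfies

$$
\ln q_j^{*}(Z_j) = \mathbb{E}_{i \neq j}\big[\ln p(X, Z)\big] + \text{const},
$$

which is exactly an expectation of the complete-data log joint under the remaining factors.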
For a long time I did not understand this algorithm, simply because I did not understand how to evaluate the expectation of the complete-data log-likelihood under a given distribution.

Chapter 11 Sampling Method

The fancy name "Monte Carlo method" does not reveal the real content of the subject. Numerical sampling uses computer-generated pseudo-random numbers to obtain real samples from a target distribution, and then performs all kinds of inference operations with them.
So generally there are specialized sampling methods and MCMC sampling methods. For MCMC methods, several conditions are required:
1. aperiodic: there are no cycles in the traversal of the state space.
2. irreducible: the whole space can be explored.
3. reversible: also called detailed balance.
4. ergodic: starting from any possible point, the chain converges to the same distribution.

Among all these methods, Gibbs sampling is the most important one. Another interesting method is Hamiltonian Monte Carlo.
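A numpy sketch of Gibbs sampling from a zero-mean bivariate Gaussian with correlation rho, alternating the two exact full conditionals:

```python
import numpy as np

def gibbs_bivariate_normal(n_samples=5000, rho=0.8, seed=0):
    """Gibbs sampling for a zero-mean bivariate Gaussian with correlation rho."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    samples = []
    for _ in range(n_samples):
        # each full conditional of a bivariate Gaussian is itself Gaussian
        x = rng.normal(rho * y, np.sqrt(1 - rho**2))
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))
        samples.append((x, y))
    return np.array(samples)

print(np.corrcoef(gibbs_bivariate_normal().T)[0, 1])   # close to 0.8
```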
From my understanding, the most important parts of any MCMC-like method are:
1. how to propose a new point in the whole domain of the distribution;
2. how to use detailed balance to determine whether to accept the new point.

There is another general idea related to sampling: data augmentation. It works in the following way: if you want to sample from a distribution p(x), you first construct another distribution p(x, y) whose marginal over x is p(x); the variable y is called an auxiliary variable. The new distribution p(x, y) is much simpler to sample from, so we draw samples from it and simply drop the y part. Slice sampling and the Hamiltonian Monte Carlo method belong to this general idea.

Chapter 12 Continuous Latent Variable

In this chapter the author gives a very important view of dimensionality reduction. There are continuous latent variables which, after some transformation, are mapped into a high-dimensional space; the transformed latent variables only span a small manifold of that space. But real-world data do not lie exactly on a small manifold, so how do we interpret this? Observed data points are interpreted as points on the manifold plus some noise. PCA is then interpreted as a model with a continuous latent variable plus Gaussian noise, and starting from PCA many other dimensionality-reduction algorithms can be derived. Amazing!
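The probabilistic PCA model makes this concrete (a standard formulation, stated here from memory):

$$
x = W z + \mu + \epsilon, \qquad z \sim \mathcal N(0, I), \qquad \epsilon \sim \mathcal N(0, \sigma^2 I),
$$

so the latent z lives on a low-dimensional linear manifold, the noise ε pushes the observed x off that manifold, and ordinary PCA is recovered in the limit σ² → 0.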

Chapter 13 Sequential Data

Sequential data have two basic models: the HMM for discrete latent variables, an extension of the mixture of Gaussians; and the LDS (linear dynamical system) for continuous latent variables, an extension of PCA-like models. Both are applications of graphical models.

Combining Models

Combining models is usually called ensemble methods. Widely used methods: Random Forest, AdaBoost, GBDT. For the rest, I don't know of real applications.

Summary of Reading

PRML covers linear methods for classification and regression, neural networks, kernel methods, SVMs, graphical models, variational inference, numerical sampling, and ensemble learning. I think I have acquired a basic understanding of these machine learning techniques.

Written with StackEdit.

Thursday, September 17, 2015

AM207 Monte Carlo Method and Stochastic Optimization


Lecture 1

Introduction lecture; I have almost forgotten all the material.

Lab session 1

Introduction to probability distributions, with Python examples.

Lecture 2 Basic Monte Carlo Method

  1. Importance sampling
    This method is used to compute the integral (expectation) of a function under a distribution; a sketch follows below.

  2. Rejection sampling
    This is the first method for actually drawing samples from a distribution.

One important issue with Monte Carlo methods is that the variance of the estimate depends on the sample size; we need better ways to control the variance of the estimator.
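A numpy sketch of (self-normalized) importance sampling, estimating E[x²] under a standard normal target using a wider Gaussian proposal:

```python
import numpy as np

def importance_estimate(n=100_000, seed=0):
    """Estimate E[x^2] under p = N(0,1) using samples from the proposal q = N(0,2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 2.0, size=n)                            # draw from the proposal q
    log_w = -0.5 * x**2 + 0.5 * (x / 2.0) ** 2 + np.log(2.0)    # log p(x) - log q(x)
    w = np.exp(log_w)
    return np.sum(w * x**2) / np.sum(w)                         # self-normalized estimate, about 1.0

print(importance_estimate())
```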

Lecture 3 Variance Reduction

Techniques to reduce the variance of Monte Carlo estimates:

  1. Control Variates

  2. Antithetic Variates
    requires the function to be monotonic (see the sketch after this list).

  3. Stratified sampling
    split the variable's range into multiple intervals and perform the calculation on each interval.
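A numpy sketch of antithetic variates for the monotonic integrand exp(x) on [0, 1], pairing each u with 1 - u:

```python
import numpy as np

def antithetic_estimate(n=10000, seed=0):
    """Estimate the integral of exp(x) on [0,1] (true value e - 1 ~ 1.718)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    plain = np.exp(u)                          # ordinary Monte Carlo terms
    anti = 0.5 * (np.exp(u) + np.exp(1 - u))   # antithetic pairs (u, 1-u)
    return plain.mean(), anti.mean(), plain.var(), anti.var()

print(antithetic_estimate())   # the antithetic variance is much smaller
```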

Lecture 4 Bayesian Formalism

One simple distinction between Bayesians and frequentists is in what they return: a Bayesian method returns a distribution, while a frequentist method returns a number.
Bayesian methods start with a prior distribution on the parameter; after observing some data, the prior is updated to give the posterior distribution on the parameter.

Lecture 5 Bayesian Formalism: Part 2 && MCMC

Sampling method:
1. Inverse transformation
2. Rejection Sampling
3. MCMC

MCMC is the widely used approach for sampling from high-dimensional, complex distributions. What properties do we expect an MCMC chain to have?
1. aperiodic: we do not want patterns (or loops) in the random walk; if such patterns exist, the corresponding samples will not be random.
2. irreducible: for any step n, the probability of reaching any point is greater than zero; we do not want the random walk to end up at a deterministic point, so that it can reach any region of the domain.
3. detailed balance: between two sample points x and y, the forward and backward flows are equal, p(x) T(x -> y) = p(y) T(y -> x); this guarantees that the samples come from the distribution we want.

Lecture 6 MCMC

The real introduction to MCMC starts from detailed balance.
First, the Metropolis method, which assumes a symmetric proposal distribution q(x* | x). We draw a new point x* from q(· | x), where x is the current point, and accept the new point with probability

$$
\alpha = \min\!\left(1, \frac{p(x^{*})}{p(x)}\right).
$$

Draw a value u uniformly between 0 and 1; if u is less than this acceptance probability, the new point is accepted; otherwise we stay at the current point.
The interpretation of the acceptance probability is quite simple: if the new point has a higher probability than the current point, it is accepted unconditionally; if it has a lower probability, it is accepted only with some probability. In practice, a Gaussian or uniform proposal satisfies the symmetry requirement.
What if the proposal distribution is not symmetric? We correct the acceptance probability in the following way (the Metropolis-Hastings rule):

$$
\alpha = \min\!\left(1, \frac{p(x^{*})\, q(x \mid x^{*})}{p(x)\, q(x^{*} \mid x)}\right).
$$

In the usual case, though, a symmetric proposal distribution is still used instead of an asymmetric one.
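A numpy sketch of random-walk Metropolis with a symmetric Gaussian proposal, targeting a standard normal through its log density:

```python
import numpy as np

def metropolis(log_p, x0=0.0, n=10000, step=1.0, seed=0):
    """Random-walk Metropolis with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n):
        x_new = x + step * rng.normal()                        # symmetric proposal q(x_new | x)
        if np.log(rng.uniform()) < log_p(x_new) - log_p(x):    # accept with prob min(1, p(x_new)/p(x))
            x = x_new
        samples.append(x)
    return np.array(samples)

chain = metropolis(lambda x: -0.5 * x**2)   # target: standard normal (up to a constant)
print(chain.mean(), chain.std())            # roughly 0 and 1 after burn-in
```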

Lecture 7 MCMC Convergence

For MCMC sampling methods, convergence is a serious issue. Convergence means that the samples are finally drawn according to the target distribution. Before convergence there is a phase called burn-in, and samples from this stage cannot be used to estimate statistics. There is no reliable method to determine when burn-in has finished, only some heuristics:
1. Trace of variables
Plot the value of a variable (vertical axis) against the iteration number (horizontal axis) to see how it changes over time; if the trace looks like random noise around a stable level, the chain may have converged.
2. Geweke test
The Geweke test computes a statistic based on the means and variances of two non-overlapping segments of the samples:

$$
z = \frac{\bar{\theta}_{a} - \bar{\theta}_{b}}{\sqrt{\operatorname{Var}(\theta_{a}) + \operatorname{Var}(\theta_{b})}}.
$$

In the usual case the value of z should lie between -2 and 2. When using this method, we just split the current samples into two parts, compute the two means and the corresponding variances, and evaluate this statistic. This seems useful.

Lecture 8 Gibbs Sampling

Another test statistic for the convergence of MCMC; Gibbs sampling is only briefly mentioned.

Lecture 9 More on Gibbs Sampling

The main takeaway: Gibbs sampling is a special case of the Metropolis-Hastings algorithm in which the proposal distribution is the full conditional distribution (so every proposal is accepted). The rest of the lecture is examples.

Lecture 10 Slice MC and Multivariate Slice MC

Data Augmentation
The key idea of data augmentation: if we want to sample from a probability distribution p(x), we augment it with a joint distribution p(x, y) whose marginal over x is p(x). We then sample from p(x, y) and discard y to get samples from p(x).

Slice Sampling
One example of data augmentation.

Lecture 11 Hierarchical Bayesian Method

Nothing special, just ordinary hierarchical models expressed as directed acyclic graphical models. The derivation of the conditional distributions is quite hard, though, and I don't know how well this applies to other, more complex models.

Lecture 12 Hamiltonian Monte Carlo

Monte Carlo and Markov chain Monte Carlo methods are all based on random walks. From this perspective, two aspects deserve attention:
1. How to propose a new point (based on the current point or other information)?
2. The criterion for accepting the proposed point; this only needs to ensure detailed balance, so that the final samples follow the target distribution.

For an MCMC algorithm to work, we need to ensure detailed balance and make sure the random walk has no periodic patterns and can reach every possible region of the space.

Hamiltonian Monte Carlo is one type of MCMC, motivated by one fact: for efficient sampling, the step size should be large where the probability surface is flat and small where it varies rapidly, so that the space is explored fully.

The method is motivated by physical systems and Newton's laws of motion. Even though I do not know the details of HMC, I can see that the main improvement is in the way new candidate points are proposed (making the proposals much more efficient).

Lecture 13 Parallel Tempering

Run multiple chains at different temperatures. The key idea is the use of temperature: any probability distribution can be written in Boltzmann form, p(x) ∝ exp(−E(x)/T), where T is the temperature. A higher temperature flattens the distribution and increases the acceptance rate during sampling; a lower temperature decreases it. The core idea of parallel tempering is to run multiple MCMC chains at different temperatures, some with T = 1 and others with higher temperatures, and occasionally swap states between chains, so that the sampler explores more of the space and is more efficient.

Lecture 14 Simulated Annealing

Borrowing the concept of temperature from lecture 13: start with a high temperature, then decrease it slowly, hoping to jump out of local minima. The reason is this: a high temperature smooths the objective (or the distribution).

Lecture 15 Stochastic Gradient Descent

Discusses optimization algorithms; nothing special.

Lecture 16 Time Series

Beyond the i.i.d. assumption about data points, there are other forms of data. A time series is a special form in which each data point depends on the previous ones. Several concepts are associated with time series data:
1. Autocorrelation.
2. Autoregression.

Lecture 17 Times Series & HMM

Continues with time-series models and HMMs; not of much interest to me.

Lecture 18 HMM & Kalman Filter

Models like the HMM and the Kalman filter; the lecture goes through the details.

Lecture 19 Expectation Maximization

A derivation of EM with a GMM example; nothing new for me in this lecture.

Lecture 20 Gaussian Process

Gaussian processes are rather involved. Prediction for a new point is based on the points seen so far: a kernel is used to construct the covariance matrix of a Gaussian distribution, like a Gram matrix; the new point and the original points form a joint Gaussian distribution, which can be reduced to a conditional distribution for prediction. Some of the rest is still not clear to me.

Lecture 21 Graphical Model

A single lecture on graphical models cannot do them justice; skipped.

Lecture 22 Graphical Model cont.

Skipped as well.

Summary

Having finished this course, I don't think it is especially advanced, but the most important thing is that it taught me more of the details of MCMC, so that I can read further books, understand them, and implement some algorithms myself. All in all, it is a good course for MCMC; the optimization, graphical model, and EM parts offer nothing special.