Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. They are typically used in complex statistical models consisting of observed variables (the data) as well as unknown parameters and latent variables, with various relationships among the three kinds of variables, as might be described by a graphical model. In Bayesian inference, the parameters and latent variables are grouped together as "unobserved variables". Variational Bayesian methods are primarily used for two purposes:
1. To provide an analytical approximation to the posterior probability of the unobserved variables, in order to do statistical inference over these variables.
2. To derive a lower bound for the marginal likelihood of the observed data (i.e. the marginal probability of the data given the model, with the unobserved variables marginalized out). This is typically used for model selection, the general idea being that a higher marginal likelihood for a given model indicates a better fit of the model to the data and hence a greater probability that the model in question generated the data (see also Bayes factor).
For the first purpose (approximating a posterior probability), variational Bayes is an alternative to Monte Carlo methods (in particular Markov chain Monte Carlo methods such as Gibbs sampling) for taking a fully Bayesian approach to statistical inference over complex distributions that are difficult to evaluate directly or sample from. Whereas Monte Carlo techniques provide a numerical approximation to the exact posterior using a set of samples, variational Bayes provides a locally optimal, exact analytical solution to an approximation of the posterior.
Variational Bayes can be seen as an extension of the expectation–maximization (EM) algorithm from maximum a posteriori (MAP) estimation of the single most likely value of each parameter, to fully Bayesian estimation, which computes (an approximation to) the entire posterior distribution of the parameters and latent variables. As in EM, it finds a set of optimal parameter values, and it has the same alternating structure as EM, based on a set of interdependent equations that cannot be solved analytically.
For many applications, variational Bayes produces solutions of comparable accuracy to Gibbs sampling at greater speed. However, deriving the set of iterative update equations requires more work than deriving the comparable Gibbs sampling equations. This is the case even for many conceptually quite simple models, as is demonstrated below with a basic non-hierarchical model that has only two parameters and no latent variables.
Mathematical derivation
Problem
In variational inference, the posterior distribution over a set of unobserved variables $\mathbf {Z} =\{Z_{1}\dots Z_{n}\}$ given some data $\mathbf {X}$ is approximated by a so-called variational distribution $Q(\mathbf {Z} )$:

$$P(\mathbf {Z} \mid \mathbf {X} )\approx Q(\mathbf {Z} ).$$

The distribution $Q(\mathbf {Z} )$ is restricted to belong to a family of distributions of simpler form than $P(\mathbf {Z} \mid \mathbf {X} )$ (e.g. the family of Gaussian distributions), selected with the intention of making $Q(\mathbf {Z} )$ similar to the true posterior $P(\mathbf {Z} \mid \mathbf {X} )$.
The similarity (or rather, dissimilarity) is measured by a dissimilarity function $d(Q;P)$, and inference is performed by finding the distribution $Q(\mathbf {Z} )$ that minimizes $d(Q;P)$.
KL divergence
The most common type of variational Bayes uses the Kullback–Leibler divergence (KL divergence) of Q from P as the choice of dissimilarity function, which makes the minimization tractable. The KL divergence is defined as

$$D_{\mathrm {KL} }(Q\parallel P)\triangleq \sum _{\mathbf {Z} }Q(\mathbf {Z} )\log {\frac {Q(\mathbf {Z} )}{P(\mathbf {Z} \mid \mathbf {X} )}}.$$

Note that Q and P are reversed from what one might expect. This use of reversed KL divergence is conceptually similar to the expectation–maximization algorithm. (Using the KL divergence the other way around produces the expectation propagation algorithm.)
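As a concrete illustration of this definition (a minimal sketch; the example distributions below are arbitrary values, not taken from the text), the reverse KL divergence of two discrete distributions can be computed directly:

```python
import math

def kl_divergence(q, p):
    """Reverse KL divergence D_KL(Q || P) for discrete distributions
    given as lists of probabilities over the same support."""
    return sum(qz * math.log(qz / pz) for qz, pz in zip(q, p) if qz > 0)

# Example: Q concentrates mass where P is small, which reverse KL penalizes.
p = [0.5, 0.4, 0.1]
q = [0.1, 0.2, 0.7]

print(kl_divergence(q, p))   # D_KL(Q || P)
print(kl_divergence(p, q))   # D_KL(P || Q): note the asymmetry
print(kl_divergence(p, p))   # 0.0 when the distributions coincide
```

Because the divergence is asymmetric, minimizing $D_{\mathrm {KL} }(Q\parallel P)$ (as variational Bayes does) and minimizing $D_{\mathrm {KL} }(P\parallel Q)$ generally give different answers.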
Intractability
Variational techniques are typically used to form an approximation for:

$$P(\mathbf {Z} \mid \mathbf {X} )={\frac {P(\mathbf {X} \mid \mathbf {Z} )P(\mathbf {Z} )}{P(\mathbf {X} )}}={\frac {P(\mathbf {X} \mid \mathbf {Z} )P(\mathbf {Z} )}{\int _{\mathbf {Z} }P(\mathbf {X} ,\mathbf {Z} ')\,d\mathbf {Z} '}}$$

The marginalization over $\mathbf {Z}$ to compute $P(\mathbf {X} )$ in the denominator is typically intractable, because, for example, the search space of $\mathbf {Z}$ is combinatorially large. Therefore, we seek an approximation, using $Q(\mathbf {Z} )\approx P(\mathbf {Z} \mid \mathbf {X} )$.
Evidence lower bound
Given that $P(\mathbf {Z} \mid \mathbf {X} )={\frac {P(\mathbf {X} ,\mathbf {Z} )}{P(\mathbf {X} )}}$, the KL divergence above can also be written as

$$D_{\mathrm {KL} }(Q\parallel P)=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log {\frac {Q(\mathbf {Z} )}{P(\mathbf {Z} ,\mathbf {X} )}}+\log P(\mathbf {X} )\right]=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log P(\mathbf {X} )\right]$$

Because $Q(\mathbf {Z} )$ is a probability distribution, $\sum _{\mathbf {Z} }Q(\mathbf {Z} )=1$; and because $P(\mathbf {X} )$ is a constant with respect to $\mathbf {Z}$, we have

$$D_{\mathrm {KL} }(Q\parallel P)=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\log P(\mathbf {X} )$$

By the definition of the expected value (for a discrete random variable), this can be written as

$$D_{\mathrm {KL} }(Q\parallel P)=\mathbb {E} _{\mathbf {Q} }\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\log P(\mathbf {X} )$$

which can be rearranged to become

$$\log P(\mathbf {X} )=D_{\mathrm {KL} }(Q\parallel P)-\mathbb {E} _{\mathbf {Q} }\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]=D_{\mathrm {KL} }(Q\parallel P)+{\mathcal {L}}(Q)$$

Since the log-evidence $\log P(\mathbf {X} )$ is fixed with respect to Q, maximizing the final term ${\mathcal {L}}(Q)$ minimizes the KL divergence of Q from P. By an appropriate choice of Q, ${\mathcal {L}}(Q)$ becomes tractable to compute and to maximize. Hence we have both an analytical approximation Q for the posterior $P(\mathbf {Z} \mid \mathbf {X} )$, and a lower bound ${\mathcal {L}}(Q)$ for the log-evidence $\log P(\mathbf {X} )$ (since the KL divergence is non-negative).
The lower bound ${\mathcal {L}}(Q)$ is known as the (negative) variational free energy, in analogy with thermodynamic free energy, because it can also be expressed as a negative energy $\operatorname {E} _{Q}[\log P(\mathbf {Z} ,\mathbf {X} )]$ plus the entropy of Q. The term ${\mathcal {L}}(Q)$ is also known as the evidence lower bound (ELBO), to emphasize that it is a lower bound on the log-evidence of the data.
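This decomposition of the log-evidence into the KL divergence plus the ELBO can be checked numerically on a toy discrete model (a sketch with made-up numbers: a small joint table over three latent states for one fixed observation, and an arbitrary variational distribution):

```python
import math

# Joint probabilities p(z, x0) for one fixed observation x0
# (arbitrary example values; they need not sum to 1 over z).
joint = [0.12, 0.06, 0.02]
evidence = sum(joint)                         # P(x0): marginalize out Z
posterior = [j / evidence for j in joint]     # P(z | x0)

# An arbitrary variational distribution Q over the same three states.
q = [0.5, 0.3, 0.2]

# D_KL(Q || P) against the true posterior.
kl = sum(qz * math.log(qz / pz) for qz, pz in zip(q, posterior))
# ELBO: E_Q[log P(Z, X) - log Q(Z)], which needs only the joint.
elbo = sum(qz * (math.log(jz) - math.log(qz)) for qz, jz in zip(q, joint))

# log P(x0) = D_KL(Q || P) + L(Q), whatever Q is chosen.
assert abs(math.log(evidence) - (kl + elbo)) < 1e-9
print(math.log(evidence), kl + elbo)
```

Note that the ELBO is computable from the joint alone, without the intractable normalizer, which is exactly why it is the quantity maximized in practice.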
Proofs
(Figure: generalized Pythagorean theorem of Bregman divergence.[2])
By the generalized Pythagorean theorem of Bregman divergence, of which KL divergence is a special case, it can be shown that[1][2]

$$D_{\mathrm {KL} }(Q\parallel P)\geq D_{\mathrm {KL} }(Q\parallel Q^{*})+D_{\mathrm {KL} }(Q^{*}\parallel P),\quad \forall Q^{*}\in {\mathcal {C}}$$

where ${\mathcal {C}}$ is a convex set, with equality holding if and only if

$$Q=Q^{*}\triangleq \arg \min _{Q\in {\mathcal {C}}}D_{\mathrm {KL} }(Q\parallel P).$$

In this case, the global minimizer $Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})q^{*}(\mathbf {Z} _{2})=q^{*}(\mathbf {Z} _{2}\mid \mathbf {Z} _{1})q^{*}(\mathbf {Z} _{1})$, with $\mathbf {Z} =\{\mathbf {Z} _{1},\mathbf {Z} _{2}\}$, can be found as follows:[1]

$$q^{*}(\mathbf {Z} _{2})={\frac {P(\mathbf {X} )}{\zeta (\mathbf {X} )}}{\frac {P(\mathbf {Z} _{2}\mid \mathbf {X} )}{\exp(D_{\mathrm {KL} }(q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})\parallel P(\mathbf {Z} _{1}\mid \mathbf {Z} _{2},\mathbf {X} )))}}={\frac {1}{\zeta (\mathbf {X} )}}\exp \mathbb {E} _{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}\left(\log {\frac {P(\mathbf {Z} ,\mathbf {X} )}{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}}\right),$$

in which the normalizing constant is

$$\zeta (\mathbf {X} )=P(\mathbf {X} )\int _{\mathbf {Z} _{2}}{\frac {P(\mathbf {Z} _{2}\mid \mathbf {X} )}{\exp(D_{\mathrm {KL} }(q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})\parallel P(\mathbf {Z} _{1}\mid \mathbf {Z} _{2},\mathbf {X} )))}}=\int _{\mathbf {Z} _{2}}\exp \mathbb {E} _{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}\left(\log {\frac {P(\mathbf {Z} ,\mathbf {X} )}{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}}\right).$$

The term $\zeta (\mathbf {X} )$ is often called the evidence lower bound (ELBO) in practice, since $P(\mathbf {X} )\geq \zeta (\mathbf {X} )=\exp({\mathcal {L}}(Q^{*}))$,[1] as shown above.
By interchanging the roles of $\mathbf {Z} _{1}$ and $\mathbf {Z} _{2}$, we can iteratively compute the approximations $q^{*}(\mathbf {Z} _{1})$ and $q^{*}(\mathbf {Z} _{2})$ to the true model's marginals $P(\mathbf {Z} _{1}\mid \mathbf {X} )$ and $P(\mathbf {Z} _{2}\mid \mathbf {X} )$. Although this iterative scheme is guaranteed to converge monotonically,[1] the converged $Q^{*}$ is only a local minimizer of $D_{\mathrm {KL} }(Q\parallel P)$.
If the constrained space ${\mathcal {C}}$ is confined to the space of independent distributions, i.e. $q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})=q^{*}(\mathbf {Z} _{1})$, the iterative scheme above becomes the so-called mean field approximation

$$Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1})q^{*}(\mathbf {Z} _{2}),$$

as shown below.
Mean field approximation
The variational distribution $Q(\mathbf {Z} )$ is usually assumed to factorize over some partition of the latent variables, i.e. for some partition $\mathbf {Z} _{1}\dots \mathbf {Z} _{M}$ of the latent variables $\mathbf {Z}$,

$$Q(\mathbf {Z} )=\prod _{i=1}^{M}q_{i}(\mathbf {Z} _{i}\mid \mathbf {X} )$$

It can be shown using the calculus of variations that the "best" distribution $q_{j}^{*}$ for each of the factors $q_{j}$ (in terms of the distribution minimizing the KL divergence, as described above) satisfies[3]

$$q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )={\frac {e^{\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}}{\int e^{\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}\,d\mathbf {Z} _{j}}}$$

where $\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]$ is the expectation of the logarithm of the joint probability of the data and latent variables, taken with respect to $q^{*}$ over all variables not in the partition. For a derivation of $q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )$, see Lemma 4.1 of [4].
In practice, we usually work in terms of logarithms:

$$\ln q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )=\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]+{\text{constant}}$$

The constant in the above expression is related to the normalizing constant (the denominator in the expression above for $q_{j}^{*}$) and is usually reinstated by inspection, since the rest of the expression can usually be recognized as a known type of distribution (e.g. Gaussian, gamma, etc.).
Using the properties of expectations, the expression $\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]$ can usually be simplified into a function of the fixed hyperparameters of the prior distributions over the latent variables and of expectations (and sometimes higher moments, such as the variance) of latent variables not in the current partition (i.e. latent variables not included in $\mathbf {Z} _{j}$). This creates circular dependencies between the parameters of the distributions over variables in one partition and the expectations of variables in the other partitions. This naturally suggests an iterative algorithm, much like EM (the expectation–maximization algorithm): the expectations (and possibly higher moments) of the latent variables are initialized in some fashion (perhaps randomly), and then the parameters of each distribution are computed in turn using the current values of the expectations, after which the expectation of each newly computed distribution is set appropriately according to its computed parameters. An algorithm of this sort is guaranteed to converge.[5]
In other words, for each of the partitions of variables, simplifying the expression for the distribution over the partition's variables and examining the distribution's functional dependency on the variables in question will usually reveal the family of the distribution (which in turn determines the value of the constant). The formula for the distribution's parameters will be expressed in terms of the prior distributions' hyperparameters (which are known constants), but also in terms of expectations of functions of variables in other partitions. Usually these expectations can be simplified into functions of expectations of the variables themselves (i.e. the means); sometimes expectations of squared variables (related to the variance of the variables) or expectations of higher powers (i.e. higher moments) also appear. In most cases, the other variables' distributions will be from known families, and the formulas for the relevant expectations can be looked up. However, those formulas depend on the parameters of those distributions, which depend in turn on the expectations about other variables. The result is that the formulas for the parameters of each variable's distribution can be expressed as a series of equations with mutual, nonlinear dependencies among the variables. Usually it is not possible to solve this system of equations directly; however, as described above, the dependencies suggest a simple iterative algorithm, which in most cases is guaranteed to converge.
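The coordinate-update scheme just described can be sketched for the smallest possible case: two binary latent variables with a factorized approximation $q_1(Z_1)\,q_2(Z_2)$ (the joint table below is an arbitrary illustrative choice, not from the text). Each factor is set proportional to the exponential of the expected log joint, and each sweep can only improve the bound:

```python
import math

# Joint p(z1, z2, x0) for two binary latents at a fixed observation
# (made-up example values).
p = [[0.30, 0.10],
     [0.05, 0.55]]

q1, q2 = [0.5, 0.5], [0.5, 0.5]   # initial factorized approximation

def normalize(w):
    s = sum(w)
    return [v / s for v in w]

def elbo(q1, q2):
    # E_Q[log p(Z, X) - log Q(Z)] under the factorized Q.
    return sum(q1[i] * q2[j] * (math.log(p[i][j])
               - math.log(q1[i]) - math.log(q2[j]))
               for i in range(2) for j in range(2))

history = [elbo(q1, q2)]
for _ in range(50):
    # ln q1*(z1) = E_{q2}[ln p(z1, z2, x0)] + const
    q1 = normalize([math.exp(sum(q2[j] * math.log(p[i][j]) for j in range(2)))
                    for i in range(2)])
    # ln q2*(z2) = E_{q1}[ln p(z1, z2, x0)] + const
    q2 = normalize([math.exp(sum(q1[i] * math.log(p[i][j]) for i in range(2)))
                    for j in range(2)])
    history.append(elbo(q1, q2))

# Each sweep of coordinate updates can only increase the bound.
assert all(b >= a - 1e-12 for a, b in zip(history, history[1:]))
print(q1, q2, history[-1])
```

Normalizing by inspection here plays the role of "reinstating the constant": the exponentiated expected log joint is only proportional to the optimal factor.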
A duality formula for variational inference
(Figure: a pictorial illustration of the coordinate ascent variational inference algorithm by the duality formula.[4])
The following theorem, referred to as the duality formula for variational inference,[4] explains some important properties of the variational distributions used in variational Bayes methods.
Theorem. Consider two probability spaces $(\Theta ,{\mathcal {F}},P)$ and $(\Theta ,{\mathcal {F}},Q)$ with $Q\ll P$. Assume that there is a common dominating probability measure $\lambda$ such that $P\ll \lambda$ and $Q\ll \lambda$. Let h denote any real-valued random variable on $(\Theta ,{\mathcal {F}},P)$ that satisfies $h\in L_{1}(P)$. Then the following equality holds:

$$\log E_{P}[\exp h]={\text{sup}}_{Q\ll P}\{E_{Q}[h]-D_{\text{KL}}(Q\parallel P)\}.$$

Further, the supremum on the right-hand side is attained if and only if

$$\frac {q(\theta )}{p(\theta )}={\frac {\exp h(\theta )}{E_{P}[\exp h]}}$$

holds almost surely with respect to the probability measure Q, where $p(\theta )=dP/d\lambda$ and $q(\theta )=dQ/d\lambda$ denote the Radon–Nikodym derivatives of the probability measures P and Q with respect to $\lambda$, respectively.
A basic example
Consider a simple non-hierarchical Bayesian model consisting of a set of i.i.d. observations from a Gaussian distribution with unknown mean and variance.[6] In the following, we work through this model in great detail to illustrate the workings of the variational Bayes method.
For mathematical convenience, in the following example we work in terms of the precision, i.e. the reciprocal of the variance (or, in the case of a multivariate Gaussian, the inverse of the covariance matrix), rather than the variance itself. (From a theoretical standpoint, precision and variance are equivalent, since there is a one-to-one correspondence between the two.)
The mathematical model
We place conjugate prior distributions on the unknown mean $\mu$ and precision $\tau$, i.e. the mean follows a Gaussian distribution while the precision follows a gamma distribution:

$$\begin{aligned}\tau &\sim \operatorname {Gamma} (a_{0},b_{0})\\\mu |\tau &\sim {\mathcal {N}}(\mu _{0},(\lambda _{0}\tau )^{-1})\\\{x_{1},\dots ,x_{N}\}&\sim {\mathcal {N}}(\mu ,\tau ^{-1})\\N&={\text{number of data points}}\end{aligned}$$

The hyperparameters $\mu _{0},\lambda _{0},a_{0}$ and $b_{0}$ of the prior distributions are fixed, given values. They can be set to small positive numbers to give broad prior distributions, indicating ignorance about the prior distributions of $\mu$ and $\tau$.
We are given N data points $\mathbf {X} =\{x_{1},\ldots ,x_{N}\}$, and our goal is to infer the posterior distribution $q(\mu ,\tau )=p(\mu ,\tau \mid x_{1},\ldots ,x_{N})$ of the parameters $\mu$ and $\tau$.
The joint probability
The joint probability of all variables can be rewritten as

$$p(\mathbf {X} ,\mu ,\tau )=p(\mathbf {X} \mid \mu ,\tau )p(\mu \mid \tau )p(\tau )$$

where the individual factors are

$$\begin{aligned}p(\mathbf {X} \mid \mu ,\tau )&=\prod _{n=1}^{N}{\mathcal {N}}(x_{n}\mid \mu ,\tau ^{-1})\\p(\mu \mid \tau )&={\mathcal {N}}\left(\mu \mid \mu _{0},(\lambda _{0}\tau )^{-1}\right)\\p(\tau )&=\operatorname {Gamma} (\tau \mid a_{0},b_{0})\end{aligned}$$

where

$$\begin{aligned}{\mathcal {N}}(x\mid \mu ,\sigma ^{2})&={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{\frac {-(x-\mu )^{2}}{2\sigma ^{2}}}\\\operatorname {Gamma} (\tau \mid a,b)&={\frac {1}{\Gamma (a)}}b^{a}\tau ^{a-1}e^{-b\tau }\end{aligned}$$
Factorized approximation
Assume that $q(\mu ,\tau )=q(\mu )q(\tau )$, i.e. that the posterior distribution factorizes into independent factors for $\mu$ and $\tau$. This type of assumption underlies the variational Bayesian method. The true posterior distribution does not in fact factor this way (in this simple case, it is known to be a normal-gamma distribution), and hence the result we obtain will be an approximation.
Derivation of q(μ)
Then

$$\begin{aligned}\ln q_{\mu }^{*}(\mu )&=\operatorname {E} _{\tau }\left[\ln p(\mathbf {X} \mid \mu ,\tau )+\ln p(\mu \mid \tau )+\ln p(\tau )\right]+C\\&=\operatorname {E} _{\tau }\left[\ln p(\mathbf {X} \mid \mu ,\tau )\right]+\operatorname {E} _{\tau }\left[\ln p(\mu \mid \tau )\right]+\operatorname {E} _{\tau }\left[\ln p(\tau )\right]+C\\&=\operatorname {E} _{\tau }\left[\ln \prod _{n=1}^{N}{\mathcal {N}}\left(x_{n}\mid \mu ,\tau ^{-1}\right)\right]+\operatorname {E} _{\tau }\left[\ln {\mathcal {N}}\left(\mu \mid \mu _{0},(\lambda _{0}\tau )^{-1}\right)\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\ln \prod _{n=1}^{N}{\sqrt {\frac {\tau }{2\pi }}}e^{-{\frac {(x_{n}-\mu )^{2}\tau }{2}}}\right]+\operatorname {E} _{\tau }\left[\ln {\sqrt {\frac {\lambda _{0}\tau }{2\pi }}}e^{-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}}\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}\left({\frac {1}{2}}(\ln \tau -\ln 2\pi )-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right)\right]+\operatorname {E} _{\tau }\left[{\frac {1}{2}}(\ln \lambda _{0}+\ln \tau -\ln 2\pi )-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}{\frac {1}{2}}(\ln \tau -\ln 2\pi )\right]+\operatorname {E} _{\tau }\left[{\frac {1}{2}}(\ln \lambda _{0}+\ln \tau -\ln 2\pi )\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right\}+C_{3}\end{aligned}$$

In the above derivation, $C$, $C_{2}$ and $C_{3}$ refer to values that are constant with respect to $\mu$. Note that the term $\operatorname {E} _{\tau }[\ln p(\tau )]$ is not a function of $\mu$ and will have the same value regardless of the value of $\mu$. Hence in line 3 we can absorb it into the constant term at the end, and we do the same thing again in line 7.
The last line is simply a quadratic polynomial in $\mu$. Since this is the logarithm of $q_{\mu }^{*}(\mu )$, we can see that $q_{\mu }^{*}(\mu )$ itself is a Gaussian distribution.
With a certain amount of tedious math (expanding the squares inside of the braces, separating out and grouping the terms involving $\mu$ and $\mu ^{2}$, and completing the square over $\mu$), we can derive the parameters of the Gaussian distribution:

$$\begin{aligned}\ln q_{\mu }^{*}(\mu )&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}^{2}-2x_{n}\mu +\mu ^{2})+\lambda _{0}(\mu ^{2}-2\mu _{0}\mu +\mu _{0}^{2})\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\left(\sum _{n=1}^{N}x_{n}^{2}\right)-2\left(\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum _{n=1}^{N}\mu ^{2}\right)+\lambda _{0}\mu ^{2}-2\lambda _{0}\mu _{0}\mu +\lambda _{0}\mu _{0}^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu \right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)(\lambda _{0}+N)\mu \right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu \right)\right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu +\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}-\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right)\right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu +\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right)\right\}+C_{5}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu -{\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right\}+C_{5}\\&=-{\frac {1}{2}}(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\left(\mu -{\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}+C_{5}\end{aligned}$$

Note that all of the above steps can be shortened by using the formula for the sum of two quadratics.
In other words:

$$\begin{aligned}q_{\mu }^{*}(\mu )&\sim {\mathcal {N}}(\mu \mid \mu _{N},\lambda _{N}^{-1})\\\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\end{aligned}$$
Derivation of q(τ)
The derivation of $q_{\tau }^{*}(\tau )$ is similar to the above, although we omit some of the details for brevity.

$$\begin{aligned}\ln q_{\tau }^{*}(\tau )&=\operatorname {E} _{\mu }[\ln p(\mathbf {X} \mid \mu ,\tau )+\ln p(\mu \mid \tau )]+\ln p(\tau )+{\text{constant}}\\&=(a_{0}-1)\ln \tau -b_{0}\tau +{\frac {1}{2}}\ln \tau +{\frac {N}{2}}\ln \tau -{\frac {\tau }{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]+{\text{constant}}\end{aligned}$$

Exponentiating both sides, we can see that $q_{\tau }^{*}(\tau )$ is a gamma distribution:

$$\begin{aligned}q_{\tau }^{*}(\tau )&\sim \operatorname {Gamma} (\tau \mid a_{N},b_{N})\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\end{aligned}$$
Computing the parameters
Let us recapitulate the conclusions from the previous sections:

$$\begin{aligned}q_{\mu }^{*}(\mu )&\sim {\mathcal {N}}(\mu \mid \mu _{N},\lambda _{N}^{-1})\\\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\end{aligned}$$

and

$$\begin{aligned}q_{\tau }^{*}(\tau )&\sim \operatorname {Gamma} (\tau \mid a_{N},b_{N})\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\end{aligned}$$

In each case, the parameters for the distribution over one of the variables depend on expectations taken with respect to the other variable. We can expand the expectations, using the standard formulas for the expectations of moments of the Gaussian and gamma distributions:

$$\begin{aligned}\operatorname {E} [\tau \mid a_{N},b_{N}]&={\frac {a_{N}}{b_{N}}}\\\operatorname {E} \left[\mu \mid \mu _{N},\lambda _{N}^{-1}\right]&=\mu _{N}\\\operatorname {E} \left[X^{2}\right]&=\operatorname {Var} (X)+(\operatorname {E} [X])^{2}\\\operatorname {E} \left[\mu ^{2}\mid \mu _{N},\lambda _{N}^{-1}\right]&=\lambda _{N}^{-1}+\mu _{N}^{2}\end{aligned}$$

Applying these formulas to the above equations is trivial in most cases, but the equation for $b_{N}$ takes more work:

$$\begin{aligned}b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\\&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\\&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\operatorname {E} _{\mu }[\mu ^{2}]-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\operatorname {E} _{\mu }[\mu ]+\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\\&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\left(\lambda _{N}^{-1}+\mu _{N}^{2}\right)-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu _{N}+\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\end{aligned}$$

We can then write the parameter equations as follows, without any expectations:

$$\begin{aligned}\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N){\frac {a_{N}}{b_{N}}}\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\left(\lambda _{N}^{-1}+\mu _{N}^{2}\right)-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu _{N}+\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\end{aligned}$$

Note that there are circular dependencies among the formulas for $\lambda _{N}$ and $b_{N}$. This naturally suggests an EM-like algorithm:
1. Compute $\sum _{n=1}^{N}x_{n}$ and $\sum _{n=1}^{N}x_{n}^{2}$. Use these values to compute $\mu _{N}$ and $a_{N}$.
2. Initialize $\lambda _{N}$ to some arbitrary value.
3. Use the current value of $\lambda _{N}$, along with the known values of the other parameters, to compute $b_{N}$.
4. Use the current value of $b_{N}$, along with the known values of the other parameters, to compute $\lambda _{N}$.
5. Repeat the last two steps until convergence (i.e. until neither value has changed by more than some small threshold).
We then have values for the hyperparameters of the approximating distributions of the posterior parameters, which we can use to compute any properties we want of the posterior, e.g. its mean and variance, a 95% highest-density region (the smallest interval that includes 95% of the total probability), etc.
It can be shown that this algorithm is guaranteed to converge to a local maximum.
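The iterative algorithm above can be sketched in a few lines (an illustrative sketch: the synthetic data, random seed, prior values, and convergence tolerance are all arbitrary choices, not part of the derivation):

```python
import math
import random

random.seed(0)
# Synthetic data from N(mean=2, precision=4), i.e. standard deviation 0.5
# (illustrative values; any data set works).
x = [random.gauss(2.0, 0.5) for _ in range(100)]
N = len(x)
S1, S2 = sum(x), sum(v * v for v in x)      # step 1: sufficient statistics
xbar = S1 / N

# Broad priors (small positive hyperparameters).
mu0, lambda0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3

# mu_N and a_N do not change across iterations.
mu_N = (lambda0 * mu0 + N * xbar) / (lambda0 + N)
a_N = a0 + (N + 1) / 2

lambda_N = 1.0                               # step 2: arbitrary initialization
for _ in range(100):                         # steps 3-5: iterate to convergence
    b_N = b0 + 0.5 * ((lambda0 + N) * (1 / lambda_N + mu_N ** 2)
                      - 2 * (lambda0 * mu0 + S1) * mu_N
                      + S2 + lambda0 * mu0 ** 2)
    new_lambda_N = (lambda0 + N) * a_N / b_N
    if abs(new_lambda_N - lambda_N) < 1e-10:
        lambda_N = new_lambda_N
        break
    lambda_N = new_lambda_N

# Posterior summaries: E[mu] = mu_N, E[tau] = a_N / b_N.
print(mu_N, a_N / b_N)
```

With 100 observations the posterior mean of $\mu$ lands near the sample mean, and the expected precision near the data-generating precision, as one would hope.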
Note also that the posterior distributions have the same form as the corresponding prior distributions. We did not assume this; the only assumption we made was that the distributions factorize, and the form of the distributions followed naturally. It turns out that the posteriors having the same form as the priors is not a coincidence, but a general result whenever the prior distributions are members of the exponential family, which is the case for most standard distributions.
Further discussion
See also
References
External links
The on-line textbook: Information Theory, Inference, and Learning Algorithms (archived), by David J.C. MacKay, provides an introduction to variational methods (p. 422).
A Tutorial on Variational Bayes (archived). Fox, C. and Roberts, S. 2012. Artificial Intelligence Review, doi:10.1007/s10462-011-9236-8.
Variational-Bayes Repository: a repository of research papers, software, and links related to the use of variational methods for approximate Bayesian learning, up to 2003.
Variational Algorithms for Approximate Bayesian Inference (archived), by M. J. Beal, includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs.
High-Level Explanation of Variational Inference (archived), by Jason Eisner, may be worth reading before a more mathematically detailed treatment.
Copula Variational Bayes inference via information geometry (pdf) (archived), by Tran, V.H. 2018. This paper is primarily written for students. Via Bregman divergence, the paper shows that variational Bayes is simply a generalized Pythagorean projection of the true model onto an arbitrarily correlated (copula) distributional space, of which the independent space is merely a special case.
An in-depth introduction to Variational Bayes note (archived). Nguyen, D. 2023.