I know the mathematical differences between ADVI and MCMC, but I am trying to understand the practical implications of using one over the other. I am running a very simple logistic regression example on data created like this:
import pandas as pd
import pymc3 as pm
import matplotlib.pyplot as plt
import numpy as np
def logistic(x, b, noise=None):
    L = x.T.dot(b)
    if noise is not None:
        L = L + noise
    return 1 / (1 + np.exp(-L))
x1 = np.linspace(-10., 10, 10000)
x2 = np.linspace(0., 20, 10000)
bias = np.ones(len(x1))
X = np.vstack([x1,x2,bias]) # Add intercept
B = [-10., 2., 1.] # Sigmoid params for X + intercept
# Noisy mean
pnoisy = logistic(X, B, noise=np.random.normal(loc=0., scale=0., size=len(x1)))
# dichotomize pnoisy -- sample 0/1 with probability pnoisy
y = np.random.binomial(1., pnoisy)
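One property of the simulated data above is easy to miss and matters for everything that follows: `x1` and `x2` are perfectly collinear (`x2 == x1 + 10` elementwise), so the intercept and the two coefficients are not jointly identifiable. A standalone NumPy check (my sketch, not part of the original post):

```python
import numpy as np

# Same design as in the question: x2 == x1 + 10 elementwise,
# so the two predictors are perfectly collinear.
x1 = np.linspace(-10., 10, 10000)
x2 = np.linspace(0., 20, 10000)
print(np.corrcoef(x1, x2)[0, 1])  # essentially 1.0: perfect collinearity

def p(intercept, b1, b2):
    # P(y = 1) under a logistic model with these coefficients
    return 1.0 / (1.0 + np.exp(-(intercept + b1 * x1 + b2 * x2)))

# Because x2 = x1 + 10, only (intercept + 10*b2) and (b1 + b2) are
# identified: shifting weight from b2 to b1 (and compensating in the
# intercept) leaves every predicted probability unchanged.
p_a = p(1.0, -10.0, 2.0)   # the generating parameters B from the question
p_b = p(11.0, -9.0, 1.0)   # a different triple, identical linear predictor
print(np.allclose(p_a, p_b))  # True
```

This ridge of equivalent parameter settings is what produces the strong posterior correlations discussed in the answer below.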
with pm.Model() as model:
    # Define priors
    intercept = pm.Normal('Intercept', 0, sd=10)
    x1_coef = pm.Normal('x1', 0, sd=10)
    x2_coef = pm.Normal('x2', 0, sd=10)

    # Define likelihood
    likelihood = pm.Bernoulli('y',
                              pm.math.sigmoid(intercept + x1_coef*X[0] + x2_coef*X[1]),
                              observed=y)

    approx = pm.fit(90000, method='advi')
Best answer
This is an interesting question! The default method='advi' in PyMC3 is mean-field variational inference, which does not do a great job of capturing correlations. It turns out that the model you set up has an interesting correlated structure, which can be seen with the following (here `trace` is assumed to be an MCMC trace obtained with pm.sample() on the same model):

import arviz as az

az.plot_pair(trace, figsize=(5, 5))
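To make the mean-field limitation concrete, here is a standalone NumPy illustration (my sketch, not from the original answer): for a Gaussian target with correlation, the KL-optimal fully factorized Gaussian has per-coordinate variances equal to the reciprocal of the precision-matrix diagonal, which badly understates the true marginal variances when correlation is strong.

```python
import numpy as np

# A correlated 2-D Gaussian "posterior" with correlation rho
rho = 0.9
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])

# For a Gaussian target, the factorized Gaussian q(z1)q(z2) minimizing
# KL(q || p) has variance 1 / precision_ii in each coordinate -- not the
# true marginal variance Sigma_ii.
precision = np.linalg.inv(Sigma)
mean_field_var = 1.0 / np.diag(precision)  # what mean-field VI would report
true_marginal_var = np.diag(Sigma)

print(true_marginal_var)  # [1. 1.]
print(mean_field_var)     # about [0.19 0.19]: shrunk by a factor of 1 - rho**2
```

With rho = 0.9 the mean-field marginals are roughly five times too narrow, which is why the mean-field ADVI fit above looks so different from the MCMC trace.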
To check that ADVI has actually converged before comparing it with MCMC, add a convergence callback, then draw from the fitted approximation and compare the two sets of samples:

from pymc3.variational.callbacks import CheckParametersConvergence

with model:
    fit = pm.fit(100_000, method='advi', callbacks=[CheckParametersConvergence()])

draws = fit.sample(2_000)

az.plot_pair(draws, figsize=(5, 5))
az.plot_forest([draws, trace])
You can also use method='fullrank_advi' to better capture the correlations you are seeing. (arviz is soon to become the plotting library for PyMC3.)
Regarding logistic-regression — Why is PyMC3 ADVI worse than MCMC in this logistic regression example? — we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52558826/