Below is a complete walkthrough of the tutorial "Building a Simple Naive Bayes Estimator in Python".
1. What is a naive Bayes estimator
A naive Bayes estimator is a probabilistic method based on Bayes' theorem together with the assumption that features are conditionally independent given the class. It estimates probabilities by computing the prior probability of each class and the conditional probability of each feature value given the class. Naive Bayes estimators are simple to compute, fast, and scale well, which is why they are widely used in practice.
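In symbols, the conditional-independence assumption lets the posterior factorize as follows (this is the standard statement of the model, added here for reference rather than taken from the original text):

```latex
P(c \mid x_1, \dots, x_n) \;\propto\; P(c) \prod_{i=1}^{n} P(x_i \mid c)
```

The class with the largest right-hand side is the prediction; the shared normalizing constant $P(x_1,\dots,x_n)$ can be ignored because it is the same for every class.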
2. How a naive Bayes estimator works
The naive Bayes estimator rests on Bayes' theorem and the conditional-independence assumption. The procedure is:
- Compute the prior probability of each class.
- For each feature, compute its conditional probability given each class.
- For a new sample, compute the posterior probability of each class.
- Assign the sample to the class with the highest posterior probability.
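The four steps above can be traced on a tiny hand-worked case. All numbers here are made up purely for illustration; they do not come from the tutorial's datasets:

```python
import numpy as np

# Toy problem: two classes, one binary feature.
prior = {0: 0.6, 1: 0.4}          # step 1: P(class), hypothetical counts
likelihood = {0: 0.2, 1: 0.9}     # step 2: P(feature = 1 | class)

# Step 3: log-posterior (up to a shared constant) for a sample with feature = 1.
posterior = {c: np.log(prior[c]) + np.log(likelihood[c]) for c in (0, 1)}

# Step 4: predict the class with the highest posterior.
pred = max(posterior, key=posterior.get)
print(pred)  # -> 1, since 0.4 * 0.9 = 0.36 > 0.6 * 0.2 = 0.12
```

Working in log space keeps the arithmetic identical (log is monotonic) while avoiding numerical underflow, which is also what the implementation below does.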
3. Implementing the naive Bayes estimator
The steps to implement a naive Bayes estimator in Python are as follows.
3.1 Import libraries
import numpy as np
from collections import defaultdict
3.2 Define the naive Bayes estimator class

class NaiveBayes:
    def __init__(self):
        self.classes = None
        self.class_prior = None
        self.feature_prob = None

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.class_prior = defaultdict(int)
        # feature_prob[class][feature_index][feature_value] -> P(value | class)
        self.feature_prob = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
        for c in self.classes:
            X_c = X[y == c]
            # Prior: the fraction of training samples belonging to class c.
            self.class_prior[c] = X_c.shape[0] / X.shape[0]
            for feature in range(X.shape[1]):
                for value in np.unique(X[:, feature]):
                    # Conditional probability: the fraction of class-c samples
                    # taking this feature value.
                    self.feature_prob[c][feature][value] = (X_c[:, feature] == value).sum() / X_c.shape[0]

    def predict(self, X):
        y_pred = []
        for x in X:
            posterior = []
            for c in self.classes:
                prior = np.log(self.class_prior[c])
                likelihood = 0
                for feature, value in enumerate(x):
                    # A value never seen for class c has probability 0, so
                    # log(0) = -inf effectively rules the class out; suppress
                    # the divide-by-zero warning np.log would otherwise emit.
                    with np.errstate(divide='ignore'):
                        likelihood += np.log(self.feature_prob[c][feature][value])
                posterior.append(prior + likelihood)
            # Classify as the class with the highest log-posterior.
            y_pred.append(self.classes[np.argmax(posterior)])
        return y_pred
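The predict method sums log probabilities instead of multiplying raw probabilities. A quick standalone demonstration of why (the probability values here are arbitrary, chosen only to show the effect): multiplying many small probabilities underflows to zero in float64, while the log-sum stays finite:

```python
import numpy as np

# 100 hypothetical per-feature probabilities, each small.
probs = np.full(100, 1e-5)

product = np.prod(probs)         # 1e-500 underflows to 0.0 in float64
log_sum = np.sum(np.log(probs))  # same information, still representable

print(product)   # 0.0
print(log_sum)   # about -1151.29 (= 100 * log(1e-5))
```

Once the product underflows to 0.0, every class would look equally (im)probable, so log space is the safer representation for the posterior comparison.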
3.3 Load the dataset
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
3.4 Split the dataset
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
3.5 Build the naive Bayes estimator model
nb = NaiveBayes()
nb.fit(X_train, y_train)
3.6 Predict and evaluate the model
from sklearn.metrics import accuracy_score
y_pred = nb.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy:', accuracy)
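As an optional sanity check (not part of the original tutorial), the same split can be run through scikit-learn's built-in GaussianNB, which models each feature with a per-class normal distribution rather than the discrete value counts used above:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

# GaussianNB fits a per-class mean and variance for each feature, which
# suits continuous measurements like the iris features.
gnb = GaussianNB().fit(X_train, y_train)
accuracy = accuracy_score(y_test, gnb.predict(X_test))
print('GaussianNB accuracy:', accuracy)
```

A Gaussian model generalizes to feature values never seen in training, whereas the count-based estimator above assigns them probability zero.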
4. Examples
Two examples follow: classifying the iris dataset and classifying the handwritten-digits dataset with the naive Bayes estimator.
4.1 Classifying the iris dataset with the naive Bayes estimator
The following example classifies the iris dataset with the naive Bayes estimator.
import numpy as np
from collections import defaultdict
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

class NaiveBayes:
    def __init__(self):
        self.classes = None
        self.class_prior = None
        self.feature_prob = None

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.class_prior = defaultdict(int)
        self.feature_prob = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
        for c in self.classes:
            X_c = X[y == c]
            self.class_prior[c] = X_c.shape[0] / X.shape[0]
            for feature in range(X.shape[1]):
                for value in np.unique(X[:, feature]):
                    self.feature_prob[c][feature][value] = (X_c[:, feature] == value).sum() / X_c.shape[0]

    def predict(self, X):
        y_pred = []
        for x in X:
            posterior = []
            for c in self.classes:
                prior = np.log(self.class_prior[c])
                likelihood = 0
                for feature, value in enumerate(x):
                    with np.errstate(divide='ignore'):
                        likelihood += np.log(self.feature_prob[c][feature][value])
                posterior.append(prior + likelihood)
            y_pred.append(self.classes[np.argmax(posterior)])
        return y_pred
iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
nb = NaiveBayes()
nb.fit(X_train, y_train)
y_pred = nb.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy:', accuracy)
The output is:
Accuracy: 0.9777777777777777
4.2 Classifying the handwritten-digits dataset with the naive Bayes estimator
The following example classifies the handwritten-digits dataset with the naive Bayes estimator.
import numpy as np
from collections import defaultdict
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

class NaiveBayes:
    def __init__(self):
        self.classes = None
        self.class_prior = None
        self.feature_prob = None

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.class_prior = defaultdict(int)
        self.feature_prob = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
        for c in self.classes:
            X_c = X[y == c]
            self.class_prior[c] = X_c.shape[0] / X.shape[0]
            for feature in range(X.shape[1]):
                for value in np.unique(X[:, feature]):
                    self.feature_prob[c][feature][value] = (X_c[:, feature] == value).sum() / X_c.shape[0]

    def predict(self, X):
        y_pred = []
        for x in X:
            posterior = []
            for c in self.classes:
                prior = np.log(self.class_prior[c])
                likelihood = 0
                for feature, value in enumerate(x):
                    with np.errstate(divide='ignore'):
                        likelihood += np.log(self.feature_prob[c][feature][value])
                posterior.append(prior + likelihood)
            y_pred.append(self.classes[np.argmax(posterior)])
        return y_pred
digits = load_digits()
X = digits.data
y = digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
nb = NaiveBayes()
nb.fit(X_train, y_train)
y_pred = nb.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy:', accuracy)
The output is:
Accuracy: 0.8425925925925926
5. Summary
A naive Bayes estimator is a probabilistic method based on Bayes' theorem and the feature conditional-independence assumption; it is simple to compute, fast, and scales well. This tutorial covered the principle and implementation steps of a naive Bayes estimator and provided two examples: classifying the iris dataset and classifying the handwritten-digits dataset.
Unless otherwise noted, articles on this site are original; when reposting, please credit the source: "Building a Simple Naive Bayes Estimator in Python" - Python技术站