The following is a complete guide to "Implementing an ANN with Python":
Introduction
An Artificial Neural Network (ANN) is a computational model inspired by the way neurons in the human brain interact with one another; it can be used for tasks such as classification, regression, and clustering. In this tutorial, we show how to implement an ANN in Python and walk through two examples.
Implementing the ANN
The following Python code implements the ANN:
import numpy as np

class NeuralNetwork:
    def __init__(self, layers, learning_rate=0.1):
        # layers is a list of layer sizes, e.g. [2, 2, 1]
        self.layers = layers
        self.learning_rate = learning_rate
        # He-style initialization for the weights, zeros for the biases
        self.weights = [np.random.randn(layers[i], layers[i-1]) * np.sqrt(2 / layers[i-1])
                        for i in range(1, len(layers))]
        self.biases = [np.zeros((layers[i], 1)) for i in range(1, len(layers))]

    def sigmoid(self, z):
        return 1 / (1 + np.exp(-z))

    def sigmoid_prime(self, z):
        # Derivative of the sigmoid function
        return self.sigmoid(z) * (1 - self.sigmoid(z))

    def feedforward(self, a):
        # Propagate the column vector a through every layer
        for w, b in zip(self.weights, self.biases):
            a = self.sigmoid(np.dot(w, a) + b)
        return a

    def backpropagation(self, x, y):
        # Feedforward pass, keeping the pre-activations (zs) and activations
        a = x
        activations = [a]
        zs = []
        for w, b in zip(self.weights, self.biases):
            z = np.dot(w, a) + b
            zs.append(z)
            a = self.sigmoid(z)
            activations.append(a)
        # Backward pass: compute the output-layer error, then propagate it backwards
        delta = (activations[-1] - y) * self.sigmoid_prime(zs[-1])
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w[-1] = np.dot(delta, activations[-2].T)
        nabla_b[-1] = delta
        for l in range(2, len(self.layers)):
            z = zs[-l]
            sp = self.sigmoid_prime(z)
            delta = np.dot(self.weights[-l+1].T, delta) * sp
            nabla_w[-l] = np.dot(delta, activations[-l-1].T)
            nabla_b[-l] = delta
        return nabla_w, nabla_b

    def train(self, X, y, epochs):
        # Full-batch gradient descent: accumulate the gradients over all samples,
        # then apply one averaged update per epoch
        for epoch in range(epochs):
            nabla_w = [np.zeros(w.shape) for w in self.weights]
            nabla_b = [np.zeros(b.shape) for b in self.biases]
            for x, y_true in zip(X, y):
                delta_nabla_w, delta_nabla_b = self.backpropagation(x.reshape(-1, 1), y_true.reshape(-1, 1))
                nabla_w = [nw + dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
                nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            self.weights = [w - (self.learning_rate / len(X)) * nw for w, nw in zip(self.weights, nabla_w)]
            self.biases = [b - (self.learning_rate / len(X)) * nb for b, nb in zip(self.biases, nabla_b)]

    def predict(self, X):
        # Run feedforward on every sample and return the outputs as rows
        return np.array([self.feedforward(x.reshape(-1, 1)).flatten() for x in X])
The NeuralNetwork class implements the ANN. In __init__ we store the layer sizes and learning rate and initialize the weights and biases. sigmoid implements the sigmoid activation, and sigmoid_prime implements its derivative. feedforward performs forward propagation, and backpropagation computes the gradients of the cost with respect to the weights and biases. train accumulates these gradients over the training set and applies gradient-descent updates to the weights and biases, and predict runs forward propagation to produce outputs for new data.
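For reference, the gradients computed in backpropagation and the update applied in train correspond to the standard backpropagation equations for a quadratic (squared-error) cost, with \sigma the sigmoid, \eta the learning rate, and m the number of training samples:

\delta^{L} = (a^{L} - y) \odot \sigma'(z^{L}), \qquad \delta^{l} = \big((W^{l+1})^{T}\delta^{l+1}\big) \odot \sigma'(z^{l})

\nabla_{W^{l}} C = \delta^{l}\,(a^{l-1})^{T}, \qquad \nabla_{b^{l}} C = \delta^{l}

W^{l} \leftarrow W^{l} - \frac{\eta}{m}\sum_{x}\nabla_{W^{l}} C_{x}, \qquad b^{l} \leftarrow b^{l} - \frac{\eta}{m}\sum_{x}\nabla_{b^{l}} C_{x}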
Examples
Below are two examples that show how to use this ANN implementation.
Example 1
Suppose we want to use the ANN to classify the XOR data:
import numpy as np
from sklearn.metrics import accuracy_score
# Define XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
# Create neural network
nn = NeuralNetwork(layers=[2, 2, 1], learning_rate=0.1)
# Train neural network
nn.train(X, y, epochs=10000)
# Predict labels of the test data
y_pred = np.round(nn.predict(X)).flatten()
# Calculate the accuracy of the classifier
accuracy = accuracy_score(y, y_pred)
print("Accuracy:", accuracy)
In this example, we define the XOR dataset, create an ANN with the NeuralNetwork class, and train it with the train method. Finally, we use the predict method to predict the labels and compute the classifier's accuracy with accuracy_score.
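As a small optional addition (not part of the original example), you can also print the raw sigmoid outputs that np.round thresholds at 0.5; the exact values depend on the random weight initialization:
# Optional: inspect the raw network outputs before rounding
raw = nn.predict(X)  # shape (4, 1); each value lies in (0, 1)
for inputs, p in zip(X, raw):
    print(inputs, "->", p.item())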
Example 2
Suppose we want to use the ANN to classify the digits dataset:
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Load digits dataset
digits = load_digits()
X = digits.data / 16.0  # scale pixel values to [0, 1] to keep the sigmoid units from saturating
y = digits.target
# Split dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# One-hot encode the training labels to match the 10-unit output layer
y_train_onehot = np.eye(10)[y_train]
# Create neural network
nn = NeuralNetwork(layers=[64, 32, 10], learning_rate=0.1)
# Train neural network
nn.train(X_train, y_train_onehot, epochs=1000)
# Predict labels of the test data
y_pred = np.argmax(nn.predict(X_test), axis=1)
# Calculate the accuracy of the classifier
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
In this example, we load the digits dataset with load_digits, split it into training and testing sets, and one-hot encode the training labels so that they match the 10-unit output layer. We then create an ANN with the NeuralNetwork class and train it with the train method. Finally, we run predict on the test data, take the argmax of the 10 outputs as the predicted label, and compute the classifier's accuracy with accuracy_score.
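As a small illustration of the one-hot encoding used above: np.eye(10)[y_train] simply picks one row of the 10x10 identity matrix for each label.
import numpy as np
labels = np.array([3, 0, 9])
print(np.eye(10)[labels])
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
#  [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]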
Conclusion
This tutorial showed how to implement an ANN in Python and provided two examples. We implemented the ANN in the NeuralNetwork class, using backpropagation in the train method to update the weights and biases, and the predict method to produce predictions for new data.