Here is a revised implementation of the mini-batch gradient descent function:

```python
import numpy as np
import matplotlib.pyplot as plt

def batch_gradient_descent(X, y, learning_rate, num_epochs, batch_size=16):
    # Initialize parameters
    w = np.zeros(X.shape[1])
    b = 0.0
    num_samples = X.shape[0]
    losses = []

    for epoch in range(num_epochs):
        # Shuffle the sample order each epoch
        indices = np.random.permutation(num_samples)
        X_shuffled = X[indices]
        y_shuffled = y[indices]

        for i in range(0, num_samples, batch_size):
            # Take one mini-batch
            X_batch = X_shuffled[i:i + batch_size]
            y_batch = y_shuffled[i:i + batch_size]
            m = len(X_batch)  # the last batch may be smaller than batch_size

            # Compute gradients of the mean squared error
            error = np.dot(X_batch, w) + b - y_batch
            gradient_w = np.dot(X_batch.T, error) / m
            gradient_b = np.sum(error) / m

            # Update parameters
            w -= learning_rate * gradient_w
            b -= learning_rate * gradient_b

        # Record the loss over the full dataset after each epoch
        loss = np.mean((np.dot(X, w) + b - y) ** 2)
        losses.append(loss)

    # Plot the loss curve
    plt.plot(losses)
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.title('Loss Function over Epochs')
    plt.show()

    # Plot the regression result (against the first feature)
    y_pred = np.dot(X, w) + b
    plt.scatter(X[:, 0], y)
    plt.plot(X[:, 0], y_pred, color='red')
    plt.xlabel('X')
    plt.ylabel('y')
    plt.title('Regression Result')
    plt.show()

# Example usage
X = np.array([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]], dtype=float)
y = np.array([3, 5, 7, 9, 11], dtype=float)
learning_rate = 0.01
num_epochs = 100
batch_gradient_descent(X, y, learning_rate, num_epochs)
```

Additional points worth noting:
1. The batch size should be chosen by weighing the amount of data against the available compute resources.
2. Shuffling the sample order each epoch affects how well gradient descent converges.
3. The learning rate usually needs several trials before you get good convergence.

[2024-12-03 22:26:29 | AI写代码神器 | 590点数解答]
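The third point, trying several learning rates, can be explored with a small sweep. The sketch below is illustrative and not part of the original answer: `final_loss` is a hypothetical helper that reuses the same mini-batch update rule but skips the plotting and returns only the final mean squared error, so different rates can be compared side by side.

```python
import numpy as np

def final_loss(X, y, learning_rate, num_epochs=100, batch_size=16):
    # Minimal re-implementation of the mini-batch update loop,
    # returning only the final MSE over the full dataset.
    w = np.zeros(X.shape[1])
    b = 0.0
    n = X.shape[0]
    for _ in range(num_epochs):
        idx = np.random.permutation(n)
        Xs, ys = X[idx], y[idx]
        for i in range(0, n, batch_size):
            Xb, yb = Xs[i:i + batch_size], ys[i:i + batch_size]
            err = Xb @ w + b - yb
            w -= learning_rate * (Xb.T @ err) / len(Xb)
            b -= learning_rate * err.mean()
    return np.mean((X @ w + b - y) ** 2)

# Same toy dataset as above; y depends linearly on the features
X = np.array([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]], dtype=float)
y = np.array([3, 5, 7, 9, 11], dtype=float)

# Compare candidate learning rates and keep the best one
for lr in (0.001, 0.01, 0.05):
    print(f"lr={lr}: final loss = {final_loss(X, y, lr):.6f}")
```

Because the loop touches each candidate independently, it is also easy to extend with early stopping or a logarithmic grid of rates.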