Today we'll use PyTorch to classify the AND truth table. The definition of AND is shown in the figure on the left.
First, define the inputs and outputs:
import torch

X = torch.Tensor([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = torch.Tensor([0, 0, 0, 1]).view(-1, 1)
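As a quick sanity check (a minimal sketch that re-declares X and Y so it runs on its own), we can print each input pair next to its label and confirm it matches the AND truth table:

```python
import torch

# AND truth table, as defined above
X = torch.Tensor([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = torch.Tensor([0, 0, 0, 1]).view(-1, 1)

# print each input pair with its label
for xi, yi in zip(X, Y):
    print(xi.tolist(), "->", int(yi.item()))
```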
Next, define the module. Since we are modeling an AND logic gate, we declare a class that inherits from nn.Module. The __init__ method defines the initialization performed when an instance of the class is created; its first parameter, self, refers to the instance being created.
forward defines the computation the module performs: the forward method takes an input, runs it through other modules or functions, and returns the output.
import torch.nn as nn
import torch.nn.functional as F

class AND(nn.Module):
    def __init__(self, input_dim=2, output_dim=1):
        super(AND, self).__init__()
        self.lin = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        x = self.lin(x)
        x = torch.sigmoid(x)
        return x
Initialize the weights. The dimension of weight depends on the dimension of X, i.e. the second parameter (input_dim) of the AND module's __init__ method.
def weights_init(model):
    for m in model.modules():
        if isinstance(m, nn.Linear):
            print("weight init", m.weight.data)
            m.weight.data.normal_(0, 1)
Set up the loss function and the optimization method, then start training, iterating 2001 times:
import numpy as np
import torch.optim as optim
from torch.autograd import Variable

model = AND()
weights_init(model)
loss_func = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.02, momentum=0.9)

epochs = 2001
steps = X.size(0)
for i in range(epochs):
    for j in range(steps):
        # train on one randomly chosen sample per step
        data_point = np.random.randint(X.size(0))
        x_var = Variable(X[data_point], requires_grad=False)
        y_var = Variable(Y[data_point], requires_grad=False)
        optimizer.zero_grad()
        y_hat = model(x_var)
        loss = loss_func(y_hat, y_var)
        loss.backward()
        optimizer.step()
    if i % 500 == 0:
        print("Epoch: {0}, Loss: {1}".format(i, loss.data.numpy()))
After many rounds of computation, extract the learned weight and bias:
model_params = list(model.parameters())
model_weights = model_params[0].data.numpy()
model_bias = model_params[1].data.numpy()
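Besides reading out the parameters, it is worth confirming that the gate was actually learned: threshold the sigmoid outputs at 0.5 and compare against the truth table. Below is a self-contained sketch that retrains an equivalent single-neuron model (nn.Sequential here is just a stand-in for the AND class above; the seed, learning rate, full-batch loop, and step count are my own choices, not from the original post):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # assumed seed, for reproducibility

X = torch.Tensor([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = torch.Tensor([0, 0, 0, 1]).view(-1, 1)

# one linear unit followed by a sigmoid, same shape as the AND module
model = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())
optimizer = torch.optim.SGD(model.parameters(), lr=1.0)
loss_func = nn.MSELoss()

# full-batch training instead of the per-sample loop above
for _ in range(5000):
    optimizer.zero_grad()
    loss = loss_func(model(X), Y)
    loss.backward()
    optimizer.step()

# threshold at 0.5: outputs should match the AND column 0, 0, 0, 1
with torch.no_grad():
    preds = (model(X) > 0.5).float().view(-1)
print(preds.tolist())
```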
and plot the result:
import matplotlib.pyplot as plt

plt.scatter([X.numpy()[i][0] for i in range(4) if Y[i] == 0],
            [X.numpy()[i][1] for i in range(4) if Y[i] == 0],
            s=50)
plt.scatter([X.numpy()[i][0] for i in range(4) if Y[i] == 1],
            [X.numpy()[i][1] for i in range(4) if Y[i] == 1],
            s=50, c='red')

x_1 = np.arange(-0.1, 1.1, 0.1)
y_1 = ((x_1 * model_weights[0, 0]) + model_bias[0]) / (-model_weights[0, 1])
plt.plot(x_1, y_1)
plt.show()
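The plotted line is the decision boundary: since sigmoid(0) = 0.5, it is where the linear pre-activation w0*x + w1*y + b equals zero, which solved for y gives exactly the y_1 formula above. The sketch below verifies this algebra with NumPy; the weight and bias values are hypothetical stand-ins for the trained model_weights and model_bias, not results from the post:

```python
import numpy as np

# hypothetical weights/bias standing in for the trained values
w = np.array([[2.0, 2.0]])
b = np.array([-3.0])

x_1 = np.arange(-0.1, 1.1, 0.1)
# same boundary formula as in the plot: solve w0*x + w1*y + b = 0 for y
y_1 = ((x_1 * w[0, 0]) + b[0]) / (-w[0, 1])

# every (x, y) on the line should give a pre-activation of ~0,
# i.e. a sigmoid output of exactly 0.5
pre_act = w[0, 0] * x_1 + w[0, 1] * y_1 + b[0]
print(np.allclose(pre_act, 0.0))  # → True
```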