[ Deep Learning ] Deep Learning


1. Perceptron

1-1. Biological Neurons

  • The human brain contains billions of neurons
  • A neuron is a nerve cell, connected to other neurons, that processes and transmits chemical and electrical signals

1-2. Artificial Neurons (Perceptron)

  • In 1943, Warren McCulloch and Walter Pitts published a simplified model of the brain cell
  • They described the nerve cell as a simple logic gate with binary output
  • A perceptron is a mathematical function based on the biological neuron model: each input is multiplied by its own weight, the weighted values are summed, and the sum is passed through a nonlinear function to produce the output (see the sketch below)
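
As a concrete sketch of that computation (the input, weights, and bias below are made-up values, purely for illustration):

import torch

def perceptron(x, w, b):
    z = torch.dot(w, x) + b   # weighted sum of the inputs
    return torch.sigmoid(z)   # nonlinear function -> output

x = torch.tensor([1.0, 0.0])  # example input
w = torch.tensor([0.6, 0.4])  # illustrative weights
b = torch.tensor(-0.5)        # illustrative bias
print(perceptron(x, w, b))    # tensor(0.5250) = sigmoid(0.6*1 + 0.4*0 - 0.5)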

1-3. Solving the OR Problem with Logistic Regression (a Single-Layer Perceptron)

# OR gate (outputs True if at least one input is True)
import torch
import torch.nn as nn
import torch.optim as optim

X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [1], [1], [1]])

model = nn.Sequential(
    nn.Linear(2, 1),
    nn.Sigmoid()  # binary output, so Sigmoid() (not Softmax) turns the score into a probability
)
print(model)
------------------------------------------------------------------------------------
# Result
Sequential(
  (0): Linear(in_features=2, out_features=1, bias=True)
  (1): Sigmoid()
)

-----------------------------------------------------------------------------------
# Training
optimizer = optim.SGD(model.parameters(), lr=1)

epochs = 1000

for epoch in range(epochs + 1):
  y_pred = model(X)
  loss = nn.BCELoss()(y_pred, y)  # BCE: the loss for simple logistic regression
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()

  if epoch % 100 == 0:
    y_bool = (y_pred >= 0.5).float()
    accuracy = (y == y_bool).float().sum() / len(y) * 100

    print(f'Epoch: {epoch:4d}/{epochs} Loss: {loss:.6f} Accuracy: {accuracy:.2f}%')
-------------------------------------------------------------------------------------
# Result
Epoch:    0/1000 Loss: 0.809753 Accuracy: 25.00%
Epoch:  100/1000 Loss: 0.086609 Accuracy: 100.00%
Epoch:  200/1000 Loss: 0.046093 Accuracy: 100.00%
Epoch:  300/1000 Loss: 0.031063 Accuracy: 100.00%
Epoch:  400/1000 Loss: 0.023333 Accuracy: 100.00%
Epoch:  500/1000 Loss: 0.018650 Accuracy: 100.00%
Epoch:  600/1000 Loss: 0.015518 Accuracy: 100.00%
Epoch:  700/1000 Loss: 0.013279 Accuracy: 100.00%
Epoch:  800/1000 Loss: 0.011600 Accuracy: 100.00%
Epoch:  900/1000 Loss: 0.010295 Accuracy: 100.00%
Epoch: 1000/1000 Loss: 0.009253 Accuracy: 100.00%
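
To sanity-check the trained model, we can push all four inputs through it once more; a quick sketch, continuing from the cell above:

with torch.no_grad():
    y_pred = model(X)
    print((y_pred >= 0.5).float().squeeze())  # expected: tensor([0., 1., 1., 1.])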

-------------------------------------------------------------------------------------

1-4. Solving the AND Problem with Logistic Regression (a Single-Layer Perceptron)

X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [0], [0], [1]])  # AND gate: True only when both inputs are True
model = nn.Sequential(
    nn.Linear(2, 1),
    nn.Sigmoid()  # binary output, so Sigmoid() (not Softmax) turns the score into a probability
)
optimizer = optim.SGD(model.parameters(), lr=1)

epochs = 1000

for epoch in range(epochs + 1):
  y_pred = model(X)
  loss = nn.BCELoss()(y_pred, y)  # BCE: the loss for simple logistic regression
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()

  if epoch % 100 == 0:
    y_bool = (y_pred >= 0.5).float()
    accuracy = (y == y_bool).float().sum() / len(y) * 100

    print(f'Epoch: {epoch:4d}/{epochs} Loss: {loss:.6f} Accuracy: {accuracy:.2f}%')
------------------------------------------------------------------------------------
# Result
Epoch:    0/1000 Loss: 0.669457 Accuracy: 75.00%
Epoch:  100/1000 Loss: 0.088276 Accuracy: 100.00%
Epoch:  200/1000 Loss: 0.046586 Accuracy: 100.00%
Epoch:  300/1000 Loss: 0.031291 Accuracy: 100.00%
Epoch:  400/1000 Loss: 0.023463 Accuracy: 100.00%
Epoch:  500/1000 Loss: 0.018734 Accuracy: 100.00%
Epoch:  600/1000 Loss: 0.015576 Accuracy: 100.00%
Epoch:  700/1000 Loss: 0.013322 Accuracy: 100.00%
Epoch:  800/1000 Loss: 0.011633 Accuracy: 100.00%
Epoch:  900/1000 Loss: 0.010321 Accuracy: 100.00%
Epoch: 1000/1000 Loss: 0.009274 Accuracy: 100.00%
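
OR and AND both reach 100% accuracy because each is linearly separable: the single Linear(2, 1) layer learns one line w1*x1 + w2*x2 + b = 0 that splits the four input points. A quick sketch for inspecting that learned line, continuing from the cell above:

w, b = model[0].weight.detach(), model[0].bias.detach()
print(w, b)  # two weights and one bias: a single straight decision boundary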

-------------------------------------------------------------------------------------

1-5. Solving the XOR Problem with Logistic Regression (a Single-Layer Perceptron)

X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [1], [1], [0]])  # XOR gate: True only when exactly one input is True
model = nn.Sequential(
    nn.Linear(2, 1),
    nn.Sigmoid()  # binary output, so Sigmoid() (not Softmax) turns the score into a probability
)
optimizer = optim.SGD(model.parameters(), lr=1)

epochs = 1000

for epoch in range(epochs + 1):
  y_pred = model(X)
  loss = nn.BCELoss()(y_pred, y)  # BCE: the loss for simple logistic regression
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()

  if epoch % 100 == 0:
    y_bool = (y_pred >= 0.5).float()
    accuracy = (y == y_bool).float().sum() / len(y) * 100

    print(f'Epoch: {epoch:4d}/{epochs} Loss: {loss:.6f} Accuracy: {accuracy:.2f}%')
-----------------------------------------------------------------------------------------
# Result
No matter how long we train, this model cannot reach 100% accuracy on XOR: the loss plateaus around 0.693 (ln 2) and the accuracy gets stuck at 50~75%, depending on initialization. A single Linear(2, 1) layer draws one straight line through the input plane, and no single line can separate (0, 1) and (1, 0) from (0, 0) and (1, 1). XOR is not linearly separable, and this limitation is exactly what motivated multilayer networks trained with backpropagation.

2. Backpropagation

  • First derived by Paul Werbos in 1974
  • Popularized by the 1986 paper of Rumelhart, Hinton, and Williams
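
In other words, loss.backward() in the training loops above is backpropagation: starting from the loss, PyTorch applies the chain rule backward through each layer and stores the gradient of the loss with respect to every parameter. A minimal sketch of inspecting those gradients, reusing the imports from above:

layer = nn.Linear(2, 1)
out = torch.sigmoid(layer(torch.FloatTensor([[1.0, 0.0]])))
loss = nn.BCELoss()(out, torch.FloatTensor([[1.0]]))
loss.backward()             # chain rule, applied backward through the graph
print(layer.weight.grad)    # dLoss/dWeight, which optimizer.step() uses

With backpropagation we can train a multilayer perceptron: several Linear layers with nonlinear activations in between, which is exactly what XOR needs (X and y are reused from section 1-5):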

model = nn.Sequential(
    nn.Linear(2, 64),   # 2 inputs -> 64 hidden units
    nn.Sigmoid(),
    nn.Linear(64, 32),
    nn.Sigmoid(),
    nn.Linear(32, 16),
    nn.Sigmoid(),
    nn.Linear(16, 1),   # final layer: one probability out
    nn.Sigmoid()
)
print(model)
------------------------------------------------
# Result
Sequential(
  (0): Linear(in_features=2, out_features=64, bias=True)
  (1): Sigmoid()
  (2): Linear(in_features=64, out_features=32, bias=True)
  (3): Sigmoid()
  (4): Linear(in_features=32, out_features=16, bias=True)
  (5): Sigmoid()
  (6): Linear(in_features=16, out_features=1, bias=True)
  (7): Sigmoid()
)

------------------------------------------------------------
# Training
optimizer = optim.SGD(model.parameters(), lr=1)

epochs = 1000

for epoch in range(epochs + 1):
  y_pred = model(X)
  loss = nn.BCELoss()(y_pred, y)  # same BCE loss as before; only the model is deeper
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()

  if epoch % 100 == 0:
    y_bool = (y_pred >= 0.5).float()
    accuracy = (y == y_bool).float().sum() / len(y) * 100

    print(f'Epoch: {epoch:4d}/{epochs} Loss: {loss:.6f} Accuracy: {accuracy:.2f}%')
----------------------------------------------------------------------------------------
# Result
Epoch:    0/1000 Loss: 0.903703 Accuracy: 25.00%
Epoch:  100/1000 Loss: 0.560680 Accuracy: 75.00%
Epoch:  200/1000 Loss: 0.552539 Accuracy: 75.00%
Epoch:  300/1000 Loss: 0.148512 Accuracy: 100.00%
Epoch:  400/1000 Loss: 0.008860 Accuracy: 100.00%
Epoch:  500/1000 Loss: 0.003539 Accuracy: 100.00%
Epoch:  600/1000 Loss: 0.002108 Accuracy: 100.00%
Epoch:  700/1000 Loss: 0.001472 Accuracy: 100.00%
Epoch:  800/1000 Loss: 0.001119 Accuracy: 100.00%
Epoch:  900/1000 Loss: 0.000897 Accuracy: 100.00%
Epoch: 1000/1000 Loss: 0.000746 Accuracy: 100.00%
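
Unlike the single-layer model, the multilayer network reaches 100% accuracy on XOR. A quick verification sketch, continuing from the trained model above:

with torch.no_grad():
    print((model(X) >= 0.5).float().squeeze())  # expected: tensor([0., 1., 1., 0.])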
