
Now let's train the model.

Code 13-20 Training the model

from tqdm.auto import tqdm

for epoch in tqdm(range(epochs)):  # progress bar over the training epochs
    train(epoch, model, train_loader, optimizer)
    test(epoch, model, test_loader)
    print("\n")
writer.close() ------ ①

① After training finishes, close the writer object so that its values are written to disk. Be careful: if you do not close it, the logged loss values will not be saved.
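One way to make sure `close()` is always called, even if training raises an exception, is to wrap the writer in a `with` block. The sketch below illustrates the idea with a hypothetical `DummyWriter` stand-in (not the real `SummaryWriter`; it simply buffers scalars and persists them only on `close()`), using `contextlib.closing`, which works with any object that exposes a `close()` method:

```python
from contextlib import closing

class DummyWriter:
    """Hypothetical stand-in for SummaryWriter: buffers logged scalars
    in memory and 'persists' them only when close() is called."""
    def __init__(self):
        self.buffer = []   # values logged but not yet saved
        self.saved = []    # values persisted by close()

    def add_scalar(self, tag, value, step):
        self.buffer.append((tag, value, step))

    def close(self):
        # flush everything buffered so far
        self.saved.extend(self.buffer)
        self.buffer.clear()

with closing(DummyWriter()) as writer:
    for epoch in range(3):
        writer.add_scalar("loss", 100.0 - epoch, epoch)
# at this point close() has run, so the three scalars are persisted
```

The real `torch.utils.tensorboard.SummaryWriter` behaves analogously: values are buffered and flushed to the event file, so forgetting `close()` (or `flush()`) can leave the last logged values unwritten.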

The following is the output produced by training the model.

100%                                                 30/30 [15:20<00:00, 30.20s/it]
Train Epoch: 0 [0/60000 (0%)]    Loss: 544.363125
Train Epoch: 0 [10000/60000 (17%)]   Loss: 191.183652
Train Epoch: 0 [20000/60000 (33%)]   Loss: 188.099336
Train Epoch: 0 [30000/60000 (50%)]   Loss: 158.454238
Train Epoch: 0 [40000/60000 (67%)]   Loss: 155.638984
Train Epoch: 0 [50000/60000 (83%)]   Loss: 153.043203
======> Epoch: 0 Average loss: 173.2792

Train Epoch: 1 [0/60000 (0%)]    Loss: 145.601250
Train Epoch: 1 [10000/60000 (17%)]   Loss: 134.434004
Train Epoch: 1 [20000/60000 (33%)]   Loss: 131.372871
Train Epoch: 1 [30000/60000 (50%)]   Loss: 132.994453
Train Epoch: 1 [40000/60000 (67%)]   Loss: 121.873936
Train Epoch: 1 [50000/60000 (83%)]   Loss: 121.991348
======> Epoch: 1 Average loss: 128.7373

... (middle epochs omitted) ...
Train Epoch: 28 [0/60000 (0%)]    Loss: 100.488691
Train Epoch: 28 [10000/60000 (17%)]   Loss: 97.654678
Train Epoch: 28 [20000/60000 (33%)]   Loss: 99.481191
Train Epoch: 28 [30000/60000 (50%)]   Loss: 101.324482
Train Epoch: 28 [40000/60000 (67%)]   Loss: 99.653633
Train Epoch: 28 [50000/60000 (83%)]   Loss: 99.980400
======> Epoch: 28 Average loss: 100.1766

Train Epoch: 29 [0/60000 (0%)]    Loss: 100.662803
Train Epoch: 29 [10000/60000 (17%)]   Loss: 100.196104
Train Epoch: 29 [20000/60000 (33%)]   Loss: 102.175928
Train Epoch: 29 [30000/60000 (50%)]   Loss: 104.528301
Train Epoch: 29 [40000/60000 (67%)]   Loss: 99.064326
Train Epoch: 29 [50000/60000 (83%)]   Loss: 102.218926
======> Epoch: 29 Average loss: 100.0594