Joonas' Note
[PyTorch] Tensor, NumPy, Pandas Type Table
Note that Pandas defaults to 64-bit types.
╔══════════════════════════╦════════════════╦════════════════════╦═════════════════════════╦═════════╦═════════╗
║ Data type                ║ dtype          ║ CPU tensor         ║ GPU tensor              ║ NumPy   ║ Pandas  ║
╠══════════════════════════╬════════════════╬════════════════════╬═════════════════════════╬═════════╬═════════╣
║ Boolean                  ║ torch.bool     ║ torch.BoolTensor   ║ torch.cuda.BoolTensor   ║ bool_   ║ bool    ║
║ 8-bit integer (unsigned) ║ torch.uint8    ║ torch.ByteTensor   ║ torch.cuda.ByteTensor   ║ uint8   ║ uint8   ║
║ 8-bit integer (signed)   ║ torch.int8     ║ torch.CharTensor   ║ torch.cuda.CharTensor   ║ int8    ║ int8    ║
║ 16-bit floating point    ║ torch.float16  ║ torch.HalfTensor   ║ torch.cuda.HalfTensor   ║ float16 ║ float16 ║
║                          ║ torch.half     ║                    ║                         ║         ║         ║
║ 32-bit floating point    ║ torch.float32  ║ torch.FloatTensor  ║ torch.cuda.FloatTensor  ║ float32 ║ float32 ║
║                          ║ torch.float    ║                    ║                         ║         ║         ║
║ 64-bit floating point    ║ torch.float64  ║ torch.DoubleTensor ║ torch.cuda.DoubleTensor ║ float64 ║ float   ║
║                          ║ torch.double   ║                    ║                         ║ float_  ║ float64 ║
║ 16-bit integer (signed)  ║ torch.int16    ║ torch.ShortTensor  ║ torch.cuda.ShortTensor  ║ int16   ║ int16   ║
║                          ║ torch.short    ║                    ║                         ║         ║         ║
║ 32-bit integer (signed)  ║ torch.int32    ║ torch.IntTensor    ║ torch.cuda.IntTensor    ║ int32   ║ int32   ║
║                          ║ torch.int      ║                    ║                         ║         ║         ║
║ 64-bit integer (signed)  ║ torch.int64    ║ torch.LongTensor   ║ torch.cuda.LongTensor   ║ int64   ║ int     ║
║                          ║ torch.long     ║                    ║                         ║ int_    ║ int64   ║
╚══════════════════════════╩════════════════╩════════════════════╩═════════════════════════╩═════════╩═════════╝
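The 64-bit defaults in the table are easy to trip over when moving data between the three libraries: `torch.from_numpy` keeps the source dtype, so a Pandas-derived array arrives as float64/int64 rather than the float32 most models expect. A minimal sketch (assuming torch, numpy, and pandas are installed):

```python
import numpy as np
import pandas as pd
import torch

# Pandas defaults to 64-bit types.
s = pd.Series([1, 2, 3])
print(s.dtype)             # int64

# torch.from_numpy preserves the NumPy dtype,
# so the Pandas-derived data becomes a LongTensor (int64).
t = torch.from_numpy(s.to_numpy())
print(t.dtype)             # torch.int64

# Python floats become float64 in NumPy, hence a DoubleTensor in torch.
f = torch.from_numpy(np.array([1.0, 2.0]))
print(f.dtype)             # torch.float64

# Most models expect float32, so an explicit cast is usually needed.
print(f.float().dtype)     # torch.float32
```

The same cast in the other direction is rarely needed: `tensor.numpy()` also preserves the dtype, and Pandas accepts any NumPy dtype it is given.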