01: Test your PyTorch setup
Welcome to the laboratory, my eager apprentice; our first incantation is a simple one, to ensure your terminal is ready for the raw power we're about to unleash!
Start with the most basic Python example.
print("Hello world")
print("Hello world")
Hello world
Summoning the Beast!
Now, we invoke the great PyTorch itself! Let's check its pulse and see if it has found the precious CUDA cores we so desperately need for our electrifying experiments.
import torch
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
PyTorch version: 2.7.0+cu126
CUDA available: True
Behold the Vital Signs!
If the stars have aligned and your incantations were correct, you should see a message confirming PyTorch's awakening. But this is merely a surface reading!
PyTorch version: 2.7.0+cu126
CUDA available: True
Now, let us peer deeper into the machine's soul and examine the very essence of its GPU, cuDNN, and other vital components!
# Display PyTorch version, GPU availability, and CUDA details
if torch.cuda.is_available():
    device = torch.device("cuda")  # a CUDA device object
    print(f"Using device: {device}")

    # Print CUDA device properties
    print("\nCUDA Device Properties:")
    print(f"Device name: {torch.cuda.get_device_name(0)}")
    print(f"Device properties: {torch.cuda.get_device_properties(0)}")
    print(f"Current device index: {torch.cuda.current_device()}")
    print(f"Device count: {torch.cuda.device_count()}")

    # Print CUDA version and capabilities
    print("\nCUDA Information:")
    print(f"CUDA version: {torch.version.cuda}")
    print(f"cuDNN version: {torch.backends.cudnn.version()}")
    print(f"cuDNN enabled: {torch.backends.cudnn.enabled}")

    # Print PyTorch memory info
    print("\nPyTorch Memory Information:")
    print(f"Allocated memory: {torch.cuda.memory_allocated(0) / 1024**2:.2f} MB")
    print(f"Cached memory: {torch.cuda.memory_reserved(0) / 1024**2:.2f} MB")
else:
    # Fall back to CPU-only information when no GPU is visible
    print("GPU not enabled")
    print("\nPyTorch Information:")
    print(f"PyTorch version: {torch.__version__}")
    print(f"Default dtype: {torch.get_default_dtype()}")
Using device: cuda

CUDA Device Properties:
Device name: NVIDIA GeForce RTX 3080 Laptop GPU
Device properties: _CudaDeviceProperties(name='NVIDIA GeForce RTX 3080 Laptop GPU', major=8, minor=6, total_memory=16383MB, multi_processor_count=48, uuid=4a2f15bc-7268-fcb8-f7b3-9f60002afe35, L2_cache_size=4MB)
Current device index: 0
Device count: 1

CUDA Information:
CUDA version: 12.6
cuDNN version: 90701
cuDNN enabled: True

PyTorch Memory Information:
Allocated memory: 0.00 MB
Cached memory: 0.00 MB
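One habit worth picking up right here (my own convention, not something the checks above require): choose the device once, with a CPU fallback, so every later spell runs unchanged whether or not CUDA was found.

import torch  # already imported above; repeated so this cell stands alone

# Select the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")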
The Apparatus is Ready!
Mwahahaha! Excellent! You have successfully interrogated the machine and confirmed that the foundational conduits are in place. The GPU's heart beats strong, and the PyTorch beast is straining at its leash, ready for our command.
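Should you wish to feel the surge for yourself, here is a minimal smoke test of my own devising, not part of the checks above: push two tensors onto the device, multiply them, and watch the memory counter climb from the 0.00 MB we saw a moment ago.

import torch  # already imported above; repeated so this cell stands alone

# Reuse the CPU-fallback device pattern from the previous snippet
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny matrix multiplication to confirm the device actually computes
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
c = a @ b  # matrix multiply on the chosen device
print(f"Result: {tuple(c.shape)} on {c.device}")

if torch.cuda.is_available():
    torch.cuda.synchronize()  # wait for the GPU kernel to finish
    print(f"Allocated memory: {torch.cuda.memory_allocated(0) / 1024**2:.2f} MB")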
With this knowledge, you are one step closer to bending the very fabric of computation to your will! Our instruments are tuned, and the lab is humming with potential. Now, the real work begins...