WHY I STUDIED THIS
CUDA underpins most modern deep learning systems, and understanding it means understanding the real bottlenecks in AI at scale: compute throughput and memory bandwidth. I studied CUDA to learn GPU parallelism, the memory hierarchy, and how deep learning accelerators work at the hardware level. This directly informed my understanding of LLM quantization and inference optimization during my NAVER Cloud HyperCLOVA X internship, where compute efficiency was a core product constraint. To reason about where AI is going, you need to understand what it runs on.
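To make the quantization connection concrete, here is a minimal sketch (my own illustration, not anything from the internship) of symmetric per-tensor int8 weight quantization, the basic idea behind many LLM inference optimizations: map float weights onto a 255-level integer grid, accepting a bounded rounding error in exchange for a roughly 4x smaller memory footprint and cheaper integer arithmetic.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: scale so the largest
    # absolute weight maps to 127, then round to the int8 grid.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Rounding error is bounded by half the quantization step.
err = float(np.max(np.abs(w - w_hat)))
```

The per-tensor scheme shown here is the simplest variant; production systems typically refine it with per-channel or per-group scales to keep the error bound tight on outlier-heavy weight distributions.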