What papers or textbooks do I need to read to have all the basics / background knowledge to use PyTorch and understand what I am doing based solely on the documentation PyTorch provides?

  • Newtra@pawb.social · 11 months ago

    The easiest way to get the basics is to search for articles, online courses, and YouTube videos about the specific modules you’re interested in. Papers are written for people who are already deep in the field. You’ll get there, but they’re not the most efficient way to get up to speed. I have no experience with textbooks.

    It helps to think of PyTorch as just a fancy math library. It has some well-documented frameworky structure (nn.Module) and an automatic differentiation engine (autograd), but all the deep-learning-specific classes/functions (Conv2d, BatchNorm1d, ReLU, etc.) are just optimized math under the hood.
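
    To make that concrete, here is a small sanity check of my own (not from the PyTorch docs) showing that nn.Linear and nn.ReLU can be reproduced with plain tensor operations:

    ```python
    # Sketch: common layers really are just tensor math.
    import torch
    import torch.nn as nn

    x = torch.randn(4, 8)                     # batch of 4 vectors, 8 features each

    linear = nn.Linear(8, 3)                  # stores a weight matrix and a bias vector
    manual = x @ linear.weight.T + linear.bias
    print(torch.allclose(linear(x), manual))  # should print True: same math

    relu = nn.ReLU()                          # ReLU is just an elementwise max with 0
    print(torch.allclose(relu(x), torch.clamp(x, min=0)))  # should print True
    ```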

    You can see the math by looking for projects that reimplement everything in NumPy, e.g. picoGPT or ConvNet in NumPy.
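
    For a taste of what those projects look like, here is a tiny NumPy sketch of my own (not copied from picoGPT) of two building blocks you would normally call as torch.softmax and nn.LayerNorm:

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        # Subtract the max for numerical stability, then normalize the exponentials.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def layer_norm(x, gamma, beta, eps=1e-5):
        # Normalize each row to zero mean / unit variance, then scale and shift.
        mean = x.mean(axis=-1, keepdims=True)
        var = x.var(axis=-1, keepdims=True)
        return gamma * (x - mean) / np.sqrt(var + eps) + beta

    x = np.random.randn(2, 5)
    print(softmax(x).sum(axis=-1))                 # each row sums to 1
    print(layer_norm(x, np.ones(5), np.zeros(5)))  # normalized activations
    ```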

    If you can’t get your head around the tensor operations, I suggest searching for “explainers”. Basically, for every impactful module there will be a bunch of “(module) Explained” articles or videos out there, e.g. Grouped Convolution or What are Residual Connections. There are also explainers for entire models, e.g. The Illustrated Transformer. Once you start googling specific modules’ explainers, you’ll find people who have made mountains of them; I suggest going through their guides and learning everything that seems relevant to what you’re working on.
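
    As one example of how small these ideas usually turn out to be, here is a minimal residual block in PyTorch (my own sketch, not taken from any of the explainers mentioned above):

    ```python
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(dim, dim),
                nn.ReLU(),
                nn.Linear(dim, dim),
            )

        def forward(self, x):
            # The skip connection: the output is the input plus a learned
            # transformation of it, so gradients can flow straight through "+ x".
            return self.body(x) + x

    x = torch.randn(4, 16)
    print(ResidualBlock(16)(x).shape)  # torch.Size([4, 16])
    ```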

    If one explanation isn’t clicking, just google and find another one. People have done an incredible job of making this information freely accessible in many different formats. I basically learned my way from webdev to an AI career with a couple of years of casually watching YouTube videos.