Show HN: Eyot, A programming language where the GPU is just another thread

(cowleyforniastudios.com)

79 points | by steeleduncan 39 days ago

7 comments

  • teleforce
    37 days ago
    Perhaps any new language targeting GPU acceleration should consider the tile-based concepts and primitives recently supported by major GPU vendors, including Nvidia [1],[2],[3],[4].

    For more generic GPU targets there's Triton [5],[6]. (A toy sketch of the tile idea follows the links.)

    [1] NVIDIA CUDA 13.1 Powers Next-Gen GPU Programming with NVIDIA CUDA Tile and Performance Gains: https://developer.nvidia.com/blog/nvidia-cuda-13-1-powers-ne...

    [2] Nvidia Tilus: A Tile-Level GPU Kernel Programming Language: https://github.com/NVIDIA/tilus

    [3] Simplify GPU Programming with NVIDIA CUDA Tile in Python: https://developer.nvidia.com/blog/simplify-gpu-programming-w...

    [4] Tile Language: https://github.com/tile-ai/tilelang

    [5] Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations: https://dl.acm.org/doi/10.1145/3315508.3329973

    [6] Triton: https://github.com/triton-lang/triton
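
    For readers unfamiliar with the term: "tile" here means writing the kernel over small sub-blocks of the data rather than one scalar per thread. A minimal CPU-only Rust sketch of the idea (sizes invented for illustration; the frameworks above apply the same structure to hardware tiles on the GPU):

      // CPU-only sketch of the "tile" idea: the kernel works on
      // BLK x BLK sub-blocks instead of one scalar at a time, which is
      // the granularity CUDA Tile / Tilus / Triton expose on the GPU.
      const N: usize = 8;
      const BLK: usize = 4;

      fn tiled_matmul(a: &[[f32; N]; N], b: &[[f32; N]; N]) -> [[f32; N]; N] {
          let mut c = [[0.0f32; N]; N];
          for bi in (0..N).step_by(BLK) {
              for bj in (0..N).step_by(BLK) {
                  for bk in (0..N).step_by(BLK) {
                      // One "tile op": accumulate a BLK x BLK block of C.
                      for i in bi..bi + BLK {
                          for j in bj..bj + BLK {
                              for k in bk..bk + BLK {
                                  c[i][j] += a[i][k] * b[k][j];
                              }
                          }
                      }
                  }
              }
          }
          c
      }

      fn main() {
          let a = [[1.0f32; N]; N];
          let b = [[1.0f32; N]; N];
          println!("{}", tiled_matmul(&a, &b)[0][0]); // prints 8 (N ones summed)
      }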

  • MeteorMarc
    39 days ago
    That is fun: it borrows C-style block markers (curly braces) and Python-style line separation (newlines). No objection.
  • shubhamintech
    38 days ago
    The latency point matters more than it looks, imo. GPU work isn't just async CPU work at a different speed; the cost model is completely different. In LLM inference, the hard scheduling problem is batching non-uniform requests, where prompt lengths and generation lengths vary, and treating that like normal thread scheduling leads to terrible utilization (see the sketch below). Would be curious whether Eyot has anything to say about non-uniform work units.
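
    To make that concrete, a toy Rust sketch of static batching, with made-up lengths: a batch can only retire when its longest request finishes, so the short requests' slots sit idle.

      // Toy model: a static batch runs for max(gen_lens) decode steps,
      // so utilization = useful steps / (batch_size * max steps).
      fn static_batch_utilization(gen_lens: &[u64]) -> f64 {
          let max = *gen_lens.iter().max().expect("non-empty batch");
          let useful: u64 = gen_lens.iter().sum();
          useful as f64 / (gen_lens.len() as u64 * max) as f64
      }

      fn main() {
          // Hypothetical generation lengths for one batch of 4 requests.
          let lens = [10u64, 50, 200, 1000];
          // => 31.5%: three slots idle while the 1000-token request runs.
          println!("utilization: {:.1}%", 100.0 * static_batch_utilization(&lens));
      }
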
    • steeleduncan
      38 days ago
      Not right now, it's far too early days. I'm currently working through bugs and missing stdlib to get a simple backpropagation network running efficiently. Once I'm happy with that I'd like to move on to more complex models.
      • CyberDildonics
        38 days ago
        What does the new language do that can't be done with an already established language, and that is worth sacrificing an entire standard library for?
  • sourcegrift
    39 days ago
    I don't mean to be a Rust fanatic or whatever, but does anyone know of anything similar for Rust?
    • embedding-shape
      39 days ago
      Not similar in the way of "Decorate any function and now it's a thread on the GPU", but Candle has been pretty neat for experimenting with ML in Rust, and it makes it easy to move things between CPU and GPU. It's more of a library than a DSL, though: https://github.com/huggingface/candle
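
      For a flavour, a minimal sketch with candle_core, following the API shown in Candle's README (assumes a CUDA-enabled build; swap in Device::Cpu otherwise):

        use candle_core::{Device, Tensor};

        fn main() -> candle_core::Result<()> {
            // Pick the first CUDA device; everything below also runs on Device::Cpu.
            let gpu = Device::new_cuda(0)?;

            // Allocate directly on the GPU and run a matmul there.
            let a = Tensor::randn(0f32, 1.0, (64, 64), &gpu)?;
            let b = Tensor::randn(0f32, 1.0, (64, 64), &gpu)?;
            let c = a.matmul(&b)?;

            // Moving data between devices is a single call.
            let c_cpu = c.to_device(&Device::Cpu)?;
            println!("{:?}", c_cpu.shape());
            Ok(())
        }
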
    • notnullorvoid
      39 days ago
      It seems somewhat similar to rust-gpu https://github.com/Rust-GPU/rust-gpu
    • steeleduncan
      39 days ago
      I'm not totally sure what it is, but I believe there is something for running Rust code on the GPU easily.
    • ModernMech
      39 days ago
      You could use wgpu to replicate this demo.

      https://wgpu.rs

    • wingertge
      39 days ago
      I hate doing self-promotion, but this is basically exactly what CubeCL does. CubeCL is a bit more limited because as a proc macro we can't see any real type info, but it's the closest thing I'm aware of. Other solutions need a bunch of boilerplate and custom (nightly-only) compiler backends.
  • LorenDB
    39 days ago
    This reminds me that I'd love to see SYCL get more love. Right now, out of the computer hardware manufacturers, it seems that only Intel is putting any effort into it.
    • jamiejquinn
      38 days ago
      CUDA having had such a wide moat for so long has completely warped the GPU software ecosystem. There just isn't any incentive for Nvidia to meaningfully contribute to any external, standards-driven effort like SYCL or OpenCL. It's a real shame, because it leads to a tonne of duplicated effort as AMD and Intel try to reimplement the exact same libraries as Nvidia (and usually worse, because neither seems to prioritise good software for whatever reason).
  • CyberDildonics
    39 days ago
    Every time someone does something with threading and makes it a language feature, it seems like it could just be done with stock C++.

    Whatever this is doing could be wrapped up in another language.

    Either way, it's arguable whether that is even a good idea, since dealing with a regular thread in the same memory space, getting data to and from the GPU, and doing computations on the GPU are all completely separate and have different latency characteristics.
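
    For instance, the thread-shaped wrapper could look like this sketch (Rust here for brevity; a CPU worker stands in for the device, and the channel hops are where a real wrapper would pay the host-to-device and device-to-host transfer costs the abstraction hides):

      use std::sync::mpsc;
      use std::thread;

      // "GPU as just another thread", as a plain library: a worker thread
      // behind a pair of channels. A real wrapper would upload the data,
      // launch a kernel, and download the result where the doubling is.
      fn spawn_device_worker() -> (mpsc::Sender<Vec<f32>>, mpsc::Receiver<Vec<f32>>) {
          let (task_tx, task_rx) = mpsc::channel::<Vec<f32>>();
          let (result_tx, result_rx) = mpsc::channel::<Vec<f32>>();
          thread::spawn(move || {
              for data in task_rx {
                  let out: Vec<f32> = data.iter().map(|x| x * 2.0).collect();
                  result_tx.send(out).ok();
              }
          });
          (task_tx, result_rx)
      }

      fn main() {
          let (tx, rx) = spawn_device_worker();
          tx.send(vec![1.0, 2.0, 3.0]).unwrap();
          println!("{:?}", rx.recv().unwrap()); // [2.0, 4.0, 6.0]
          drop(tx); // closing the channel lets the worker exit
      }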