Graphcore Support for AI

acceleration
AI
gpu alternative
graphcore
hardware
Graphcore’s Intelligence Processing Unit (IPU) could be a cheaper and faster alternative to NVIDIA’s A100. It offers performant integration with popular frameworks such as TensorFlow, PyTorch, and PaddlePaddle, and Graphcore provides open-source resources for each of them. The main trade-off is that models must share the IPU’s limited on-board RAM.
Published

April 21, 2022


Graphcore’s IPU could be cheaper and faster than NVIDIA’s A100, though models need to share its limited on-board RAM.

Supports TensorFlow, PyTorch, and PaddlePaddle.

https://docs.graphcore.ai/en/latest/

PyTorch: PopTorch, plus PyTorch Lightning (which supports both TPUs and IPUs).
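A minimal PopTorch sketch of what this looks like, assuming an attached IPU and a toy model (the layer sizes here are purely illustrative):

import torch
import poptorch

class Classifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(32, 10)

    def forward(self, x):
        return self.fc(x)

# Wrap the model for IPU execution; PopTorch compiles it on the first call
opts = poptorch.Options()
ipu_model = poptorch.inferenceModel(Classifier(), options=opts)
out = ipu_model(torch.randn(16, 32))  # executed on the IPU
print(out.shape)

For training, poptorch.trainingModel wraps a model whose forward pass also returns the loss, and PyTorch Lightning exposes the same hardware through its Trainer, so existing LightningModules can target IPUs with little change.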

TensorFlow:

from tensorflow.python import ipu

# Create an IPU distribution strategy
strategy = ipu.ipu_strategy.IPUStrategy()

with strategy.scope():
    ...  # build the Keras model here (see the fuller sketch below)
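Before the strategy can place work on an IPU, the IPU system has to be configured. A fuller end-to-end Keras sketch, assuming one auto-selected IPU and synthetic data (the model, layer sizes, and batch size are illustrative):

import numpy as np
import tensorflow as tf
from tensorflow.python import ipu

# Configure a single IPU, then create the distribution strategy
config = ipu.config.IPUConfig()
config.auto_select_ipus = 1
config.configure_ipu_system()
strategy = ipu.ipu_strategy.IPUStrategy()

# Synthetic data; drop_remainder keeps shapes static, which the IPU requires
x = np.random.rand(1024, 32).astype(np.float32)
y = np.random.randint(0, 10, size=(1024,))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(32, drop_remainder=True)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        steps_per_execution=8,  # run several steps per device call for throughput
    )
    model.fit(dataset, epochs=2)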

PaddlePaddle:

https://github.com/graphcore/portfolio-examples/tree/master/paddlepaddle/bert-base

https://github.com/graphcore/Paddle.git

TensorFlow 1 & 2 support, with fully performant integration via the TensorFlow XLA backend

PyTorch support for targeting the IPU using the PyTorch ATEN backend

PopART™ (Poplar Advanced Runtime) for training & inference; supports Python/C++ model building plus ONNX model input
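As a rough illustration of the Python builder path, a PopART sketch along these lines builds a tiny ONNX graph and runs it on the IPU Model emulator (tensor shapes are illustrative, and exact signatures can vary across Poplar SDK releases):

import numpy as np
import popart

# Build a tiny ONNX graph with the PopART builder API
builder = popart.Builder()
x = builder.addInputTensor(popart.TensorInfo("FLOAT", [1, 4]))
w = builder.addInitializedInputTensor(np.ones([4, 4], dtype=np.float32))
y = builder.aiOnnx.matmul([x, w])
builder.addOutputTensor(y)

# Run one batch per step on the emulated IPU (swap in real hardware via DeviceManager)
session = popart.InferenceSession(
    fnModel=builder.getModelProto(),
    dataFlow=popart.DataFlow(1, {y: popart.AnchorReturnType("ALL")}),
    deviceInfo=popart.DeviceManager().createIpuModelDevice({}),
)
session.prepareDevice()
anchors = session.initAnchorArrays()
session.run(popart.PyStepIO({x: np.ones([1, 4], dtype=np.float32)}, anchors))
print(anchors[y])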

Full support for PaddlePaddle

Support for other frameworks coming soon