1 parent dc9b8e9 commit 46994b3
docs/source/_toctree.yml
@@ -13,6 +13,8 @@
   title: Using TGI with Intel Gaudi
 - local: installation_inferentia
   title: Using TGI with AWS Inferentia
+- local: installation_tpu
+  title: Using TGI with Google TPU
 - local: installation_intel
   title: Using TGI with Intel GPUs
 - local: installation
docs/source/installation_tpu.md
@@ -0,0 +1,3 @@
+# Using TGI with Google TPU
+
+Check out this [guide](https://huggingface.co/docs/optimum-tpu) on how to serve models with TGI on TPUs.
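
For context on what the new page points to: once a TGI server is running on a TPU host (the linked optimum-tpu guide covers launching it), it exposes the standard TGI HTTP API. Below is a minimal sketch of querying such a server with `huggingface_hub`; the endpoint URL and prompt are placeholder assumptions, not part of this commit.

```python
# Minimal sketch: query a TGI server assumed to be already running on a TPU host.
# The URL is a placeholder; see the optimum-tpu guide for launching the server.
# Requires: pip install huggingface_hub
from huggingface_hub import InferenceClient

# Point the client at the TGI endpoint (assumed address/port).
client = InferenceClient("http://localhost:8080")

# TGI's text-generation route is the same regardless of the hardware backend.
output = client.text_generation(
    "What is Text Generation Inference?",
    max_new_tokens=64,
)
print(output)
```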