1 parent 203cade · commit 1470aec
docs/source/_toctree.yml

```diff
@@ -14,7 +14,7 @@
 - local: installation_inferentia
   title: Using TGI with AWS Inferentia
 - local: installation_tpu
-  title: Using TGI with Google TPU
+  title: Using TGI with Google TPUs
 - local: installation_intel
   title: Using TGI with Intel GPUs
 - local: installation
```
docs/source/installation_tpu.md

```diff
@@ -1,3 +1,3 @@
-# Using TGI with Google TPU
+# Using TGI with Google TPUs
 
 Check out this [guide](https://huggingface.co/docs/optimum-tpu) on how to serve models with TGI on TPUs.
```