
Commit 0aa5764

committed
AI glossary first submission
1 parent ae59c8d commit 0aa5764

File tree

1 file changed

+45
-0
lines changed
  • supplementary_style_guide/glossary_terms_conventions/general_conventions

supplementary_style_guide/glossary_terms_conventions/general_conventions/i.adoc

Lines changed: 45 additions & 0 deletions
@@ -348,6 +348,28 @@ There is no functional difference between the first server that was installed an

*See also*: xref:bucket-index[bucket index]

[[inference]]
==== image:images/yes.png[yes] inference (noun)
*Description*: The act of a model generating outputs from input data. For example, "Inference speeds increased on the new models."

*Use it*: yes

[.vale-ignore]
*Incorrect forms*:

*See also*:

[[inferencing]]
==== image:images/yes.png[yes] inferencing (noun)
*Description*: A process by which a model processes input data, deduces information, and generates an output. For example, "The inferencing workload is distributed across multiple accelerators."

*Use it*: yes

[.vale-ignore]
*Incorrect forms*:

*See also*:

[[inference-engine]]
==== image:images/yes.png[yes] inference engine (noun)
*Description*: In Red{nbsp}Hat Process Automation Manager and Red{nbsp}Hat Decision Manager, the _inference engine_ is a part of the Red{nbsp}Hat Decision Manager engine, which matches production facts and data to rules. It is often called the brain of a production rules system because it is able to scale to a large number of rules and facts. It makes inferences based on its existing knowledge and performs the actions based on what it infers from the information.
@@ -359,6 +381,29 @@ There is no functional difference between the first server that was installed an

*See also*:

[[inferenceservice]]
==== image:images/yes.png[yes] InferenceService (noun)
*Description*: In Red{nbsp}Hat OpenShift AI, this is the custom resource definition (CRD) used to create the `InferenceService` object. When referring to the CRD name, use `InferenceService` in monospace.

*Use it*: yes

[.vale-ignore]
*Incorrect forms*: inference service, Inference Service

*See also*:

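To make the CRD entry above concrete, a minimal `InferenceService` custom resource might look like the following sketch. It assumes the KServe-style `serving.kserve.io/v1beta1` API used by OpenShift AI model serving; the resource name, model format, and storage URI are hypothetical placeholders, not values from this glossary.

[source,yaml]
----
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-model            # hypothetical resource name
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn            # assumed model format
      storageUri: s3://example-bucket/model  # hypothetical model location
----

Applying a manifest like this creates the `InferenceService` object that the glossary entry refers to; in running text, the CRD name would be written in monospace as `InferenceService`.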
[[inference-serving]]
==== image:images/yes.png[yes] inference serving (verb)
*Description*: _Inference serving_ is the process of deploying a model onto a server so that the model can perform inference. Use as two separate words, for example, "The following charts display the minimum hardware requirements for inference serving a model."

*Use it*: yes

[.vale-ignore]
*Incorrect forms*:

*See also*:

[[infiniband]]
==== image:images/yes.png[yes] InfiniBand (noun)
*Description*: _InfiniBand_ is a switched fabric network topology used in high-performance computing. The term is both a service mark and a trademark of the InfiniBand Trade Association. Their rules for using the mark are standard ones: append the (TM) symbol the first time it is used, and respect the capitalization (including the inter-capped "B") from then on. In ASCII-only circumstances, the "\(TM)" string is the acceptable alternative.
