supplementary_style_guide/glossary_terms_conventions/general_conventions/i.adoc
@@ -348,6 +348,28 @@ There is no functional difference between the first server that was installed an
*See also*: xref:bucket-index[bucket index]

+[[inference]]
+==== image:images/yes.png[yes] inference (noun)
+*Description*: The act of a model generating outputs from input data. For example, "Inference speeds increased on the new models."
+
+*Use it*: yes
+
+[.vale-ignore]
+*Incorrect forms*:
+
+*See also*:
+
+[[inferencing]]
+==== image:images/yes.png[yes] inferencing (noun)
+*Description*: A process by which a model processes input data, deduces information, and generates an output. For example, "The inferencing workload is distributed across multiple accelerators."
*Description*: In Red{nbsp}Hat Process Automation Manager and Red{nbsp}Hat Decision Manager, the _inference engine_ is a part of the Red{nbsp}Hat Decision Manager engine, which matches production facts and data to rules. It is often called the brain of a production rules system because it is able to scale to a large number of rules and facts. It makes inferences based on its existing knowledge and performs the actions based on what it infers from the information.
@@ -359,6 +381,29 @@ There is no functional difference between the first server that was installed an
*Description*: In Red Hat OpenShift AI, this is the custom resource definition (CRD) used to create the `InferenceService` object. When referring to the CRD name, use `InferenceService` in monospace.
*Description*: _Inference serving_ is the process of deploying a model onto a server so that the model can perform inference. Use as separate words, for example, "The following charts display the minimum hardware requirements for inference serving a model."
+
+*Use it*: yes
+
+[.vale-ignore]
+*Incorrect forms*:
+
+*See also*:
+
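The monospace and two-word conventions above can be hard to picture in the abstract. A minimal AsciiDoc sketch, with invented sentences that are not taken from the product documentation:

[source,asciidoc]
----
// The CRD name is wrapped in backticks so it renders in monospace.
Create an `InferenceService` object to deploy the model.

// "inference serving" stays as two separate words.
The following charts display the minimum hardware
requirements for inference serving a model.
----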
[[infiniband]]
==== image:images/yes.png[yes] InfiniBand (noun)
*Description*: _InfiniBand_ is a switched fabric network topology used in high-performance computing. The term is both a service mark and a trademark of the InfiniBand Trade Association. Their rules for using the mark are standard ones: append the (TM) symbol the first time it is used, and respect the capitalization (including the inter-capped "B") from then on. In ASCII-only circumstances, the "\(TM)" string is the acceptable alternative.
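A minimal AsciiDoc sketch of the trademark rule, using invented sentences; note that AsciiDoc replaces (TM) with the ™ symbol in rendered output:

[source,asciidoc]
----
// First mention: append the (TM) symbol.
InfiniBand(TM) is widely used in high-performance computing clusters.

// Later mentions: no symbol, but keep the inter-capped "B".
Many HPC workloads rely on InfiniBand for low-latency networking.
----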