
Conversation

Contributor

@HendrikHuebner HendrikHuebner commented Nov 19, 2025

This PR adds a number of cases to the switch statement in CIRGenBuiltin.cpp. Some existing cases were relocated so that their order matches the order of the switch statement in Clang's codegen. Additionally, some existing cases were moved into helper functions to keep the code a little cleaner. This will make it easier to keep track of which builtins have not been implemented, since there will always be an explicit NYI case for each unimplemented builtin.

@llvmbot llvmbot added the clang (Clang issues not falling into any other category) and ClangIR (Anything related to the ClangIR project) labels on Nov 19, 2025
Member

llvmbot commented Nov 19, 2025

@llvm/pr-subscribers-clangir

@llvm/pr-subscribers-clang

Author: Hendrik Hübner (HendrikHuebner)

Changes

This PR adds a number of cases to the switch statement in CIRGenBuiltin.cpp. Some existing cases were relocated so that their order matches the order of the switch statement in Clang's codegen. Additionally, some existing cases were moved into helper functions to keep the code a little cleaner.


Patch is 27.82 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/168699.diff

1 file affected:

  • (modified) clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp (+542-86)
diff --git a/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp b/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp
index 77f19343653db..90b9509fe5f8f 100644
--- a/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp
@@ -93,6 +93,75 @@ static RValue emitUnaryFPBuiltin(CIRGenFunction &cgf, const CallExpr &e) {
   return RValue::get(call->getResult(0));
 }
 
+static RValue errorBuiltinNYI(CIRGenFunction &cgf, const CallExpr *e,
+                              unsigned builtinID) {
+  cgf.cgm.errorNYI(e->getSourceRange(),
+                   std::string("unimplemented X86 builtin call: ") +
+                       cgf.getContext().BuiltinInfo.getName(builtinID));
+
+  return cgf.getUndefRValue(e->getType());
+}
+
+static RValue emitBuiltinAlloca(CIRGenFunction &cgf, const CallExpr *e,
+                                unsigned builtinID) {
+  assert(builtinID == Builtin::BI__builtin_alloca ||
+         builtinID == Builtin::BI__builtin_alloca_uninitialized ||
+         builtinID == Builtin::BIalloca || builtinID == Builtin::BI_alloca);
+
+  // Get alloca size input
+  mlir::Value size = cgf.emitScalarExpr(e->getArg(0));
+
+  // The alignment of the alloca should correspond to __BIGGEST_ALIGNMENT__.
+  const TargetInfo &ti = cgf.getContext().getTargetInfo();
+  const CharUnits suitableAlignmentInBytes =
+      cgf.getContext().toCharUnitsFromBits(ti.getSuitableAlign());
+
+  // Emit the alloca op with type `u8 *` to match the semantics of
+  // `llvm.alloca`. We later bitcast the type to `void *` to match the
+  // semantics of C/C++
+  // FIXME(cir): It may make sense to allow AllocaOp of type `u8` to return a
+  // pointer of type `void *`. This will require a change to the allocaOp
+  // verifier.
+  CIRGenBuilderTy &builder = cgf.getBuilder();
+  mlir::Value allocaAddr = builder.createAlloca(
+      cgf.getLoc(e->getSourceRange()), builder.getUInt8PtrTy(),
+      builder.getUInt8Ty(), "bi_alloca", suitableAlignmentInBytes, size);
+
+  // Initialize the allocated buffer if required.
+  if (builtinID != Builtin::BI__builtin_alloca_uninitialized) {
+    // Initialize the alloca with the given size and alignment according to
+    // the lang opts. Only the trivial non-initialization is supported for
+    // now.
+
+    switch (cgf.getLangOpts().getTrivialAutoVarInit()) {
+    case LangOptions::TrivialAutoVarInitKind::Uninitialized:
+      // Nothing to initialize.
+      break;
+    case LangOptions::TrivialAutoVarInitKind::Zero:
+    case LangOptions::TrivialAutoVarInitKind::Pattern:
+      cgf.cgm.errorNYI("trivial auto var init");
+      break;
+    }
+  }
+
+  // An alloca will always return a pointer to the alloca (stack) address
+  // space. This address space need not be the same as the AST / Language
+  // default (e.g. in C / C++ auto vars are in the generic address space). At
+  // the AST level this is handled within CreateTempAlloca et al., but for the
+  // builtin / dynamic alloca we have to handle it here.
+
+  if (!cir::isMatchingAddressSpace(
+          cgf.getCIRAllocaAddressSpace(),
+          e->getType()->getPointeeType().getAddressSpace())) {
+    cgf.cgm.errorNYI(e->getSourceRange(),
+                     "Non-default address space for alloca");
+  }
+
+  // Bitcast the alloca to the expected type.
+  return RValue::get(builder.createBitcast(
+      allocaAddr, builder.getVoidPtrTy(cgf.getCIRAllocaAddressSpace())));
+}
+
 RValue CIRGenFunction::emitBuiltinExpr(const GlobalDecl &gd, unsigned builtinID,
                                        const CallExpr *e,
                                        ReturnValueSlot returnValue) {
@@ -149,62 +218,6 @@ RValue CIRGenFunction::emitBuiltinExpr(const GlobalDecl &gd, unsigned builtinID,
     emitVAEnd(emitVAListRef(e->getArg(0)).getPointer());
     return {};
 
-  case Builtin::BIalloca:
-  case Builtin::BI_alloca:
-  case Builtin::BI__builtin_alloca_uninitialized:
-  case Builtin::BI__builtin_alloca: {
-    // Get alloca size input
-    mlir::Value size = emitScalarExpr(e->getArg(0));
-
-    // The alignment of the alloca should correspond to __BIGGEST_ALIGNMENT__.
-    const TargetInfo &ti = getContext().getTargetInfo();
-    const CharUnits suitableAlignmentInBytes =
-        getContext().toCharUnitsFromBits(ti.getSuitableAlign());
-
-    // Emit the alloca op with type `u8 *` to match the semantics of
-    // `llvm.alloca`. We later bitcast the type to `void *` to match the
-    // semantics of C/C++
-    // FIXME(cir): It may make sense to allow AllocaOp of type `u8` to return a
-    // pointer of type `void *`. This will require a change to the allocaOp
-    // verifier.
-    mlir::Value allocaAddr = builder.createAlloca(
-        getLoc(e->getSourceRange()), builder.getUInt8PtrTy(),
-        builder.getUInt8Ty(), "bi_alloca", suitableAlignmentInBytes, size);
-
-    // Initialize the allocated buffer if required.
-    if (builtinID != Builtin::BI__builtin_alloca_uninitialized) {
-      // Initialize the alloca with the given size and alignment according to
-      // the lang opts. Only the trivial non-initialization is supported for
-      // now.
-
-      switch (getLangOpts().getTrivialAutoVarInit()) {
-      case LangOptions::TrivialAutoVarInitKind::Uninitialized:
-        // Nothing to initialize.
-        break;
-      case LangOptions::TrivialAutoVarInitKind::Zero:
-      case LangOptions::TrivialAutoVarInitKind::Pattern:
-        cgm.errorNYI("trivial auto var init");
-        break;
-      }
-    }
-
-    // An alloca will always return a pointer to the alloca (stack) address
-    // space. This address space need not be the same as the AST / Language
-    // default (e.g. in C / C++ auto vars are in the generic address space). At
-    // the AST level this is handled within CreateTempAlloca et al., but for the
-    // builtin / dynamic alloca we have to handle it here.
-
-    if (!cir::isMatchingAddressSpace(
-            getCIRAllocaAddressSpace(),
-            e->getType()->getPointeeType().getAddressSpace())) {
-      cgm.errorNYI(e->getSourceRange(), "Non-default address space for alloca");
-    }
-
-    // Bitcast the alloca to the expected type.
-    return RValue::get(builder.createBitcast(
-        allocaAddr, builder.getVoidPtrTy(getCIRAllocaAddressSpace())));
-  }
-
   case Builtin::BIcos:
   case Builtin::BIcosf:
   case Builtin::BIcosl:
@@ -425,36 +438,6 @@ RValue CIRGenFunction::emitBuiltinExpr(const GlobalDecl &gd, unsigned builtinID,
   case Builtin::BI__builtin_rotateright64:
     return emitRotate(e, /*isRotateLeft=*/false);
 
-  case Builtin::BI__builtin_return_address:
-  case Builtin::BI__builtin_frame_address: {
-    mlir::Location loc = getLoc(e->getExprLoc());
-    llvm::APSInt level = e->getArg(0)->EvaluateKnownConstInt(getContext());
-    if (builtinID == Builtin::BI__builtin_return_address) {
-      return RValue::get(cir::ReturnAddrOp::create(
-          builder, loc,
-          builder.getConstAPInt(loc, builder.getUInt32Ty(), level)));
-    }
-    return RValue::get(cir::FrameAddrOp::create(
-        builder, loc,
-        builder.getConstAPInt(loc, builder.getUInt32Ty(), level)));
-  }
-
-  case Builtin::BI__builtin_trap:
-    emitTrap(loc, /*createNewBlock=*/true);
-    return RValue::get(nullptr);
-
-  case Builtin::BI__builtin_unreachable:
-    emitUnreachable(e->getExprLoc(), /*createNewBlock=*/true);
-    return RValue::get(nullptr);
-
-  case Builtin::BI__builtin_elementwise_acos:
-    return emitUnaryFPBuiltin<cir::ACosOp>(*this, *e);
-  case Builtin::BI__builtin_elementwise_asin:
-    return emitUnaryFPBuiltin<cir::ASinOp>(*this, *e);
-  case Builtin::BI__builtin_elementwise_atan:
-    return emitUnaryFPBuiltin<cir::ATanOp>(*this, *e);
-  case Builtin::BI__builtin_elementwise_cos:
-    return emitUnaryFPBuiltin<cir::CosOp>(*this, *e);
   case Builtin::BI__builtin_coro_id:
   case Builtin::BI__builtin_coro_promise:
   case Builtin::BI__builtin_coro_resume:
@@ -520,6 +503,479 @@ RValue CIRGenFunction::emitBuiltinExpr(const GlobalDecl &gd, unsigned builtinID,
     cir::PrefetchOp::create(builder, loc, address, locality, isWrite);
     return RValue::get(nullptr);
   }
+  case Builtin::BI__builtin_readcyclecounter:
+  case Builtin::BI__builtin_readsteadycounter:
+  case Builtin::BI__builtin___clear_cache:
+    return errorBuiltinNYI(*this, e, builtinID);
+  case Builtin::BI__builtin_trap:
+    emitTrap(loc, /*createNewBlock=*/true);
+    return RValue::getIgnored();
+  case Builtin::BI__builtin_verbose_trap:
+  case Builtin::BI__debugbreak:
+    return errorBuiltinNYI(*this, e, builtinID);
+  case Builtin::BI__builtin_unreachable:
+    emitUnreachable(e->getExprLoc(), /*createNewBlock=*/true);
+    return RValue::getIgnored();
+  case Builtin::BI__builtin_powi:
+  case Builtin::BI__builtin_powif:
+  case Builtin::BI__builtin_powil:
+  case Builtin::BI__builtin_frexpl:
+  case Builtin::BI__builtin_frexp:
+  case Builtin::BI__builtin_frexpf:
+  case Builtin::BI__builtin_frexpf128:
+  case Builtin::BI__builtin_frexpf16:
+  case Builtin::BImodf:
+  case Builtin::BImodff:
+  case Builtin::BImodfl:
+  case Builtin::BI__builtin_modf:
+  case Builtin::BI__builtin_modff:
+  case Builtin::BI__builtin_modfl:
+  case Builtin::BI__builtin_isgreater:
+  case Builtin::BI__builtin_isgreaterequal:
+  case Builtin::BI__builtin_isless:
+  case Builtin::BI__builtin_islessequal:
+  case Builtin::BI__builtin_islessgreater:
+  case Builtin::BI__builtin_isunordered:
+  case Builtin::BI__builtin_isnan:
+  case Builtin::BI__builtin_issignaling:
+  case Builtin::BI__builtin_isinf:
+  case Builtin::BIfinite:
+  case Builtin::BI__finite:
+  case Builtin::BIfinitef:
+  case Builtin::BI__finitef:
+  case Builtin::BIfinitel:
+  case Builtin::BI__finitel:
+  case Builtin::BI__builtin_isfinite:
+  case Builtin::BI__builtin_isnormal:
+  case Builtin::BI__builtin_issubnormal:
+  case Builtin::BI__builtin_iszero:
+  case Builtin::BI__builtin_isfpclass:
+  case Builtin::BI__builtin_nondeterministic_value:
+  case Builtin::BI__builtin_elementwise_abs:
+    return errorBuiltinNYI(*this, e, builtinID);
+  case Builtin::BI__builtin_elementwise_acos:
+    return emitUnaryFPBuiltin<cir::ACosOp>(*this, *e);
+  case Builtin::BI__builtin_elementwise_asin:
+    return emitUnaryFPBuiltin<cir::ASinOp>(*this, *e);
+  case Builtin::BI__builtin_elementwise_atan:
+    return emitUnaryFPBuiltin<cir::ATanOp>(*this, *e);
+  case Builtin::BI__builtin_elementwise_atan2:
+  case Builtin::BI__builtin_elementwise_ceil:
+  case Builtin::BI__builtin_elementwise_exp:
+  case Builtin::BI__builtin_elementwise_exp2:
+  case Builtin::BI__builtin_elementwise_exp10:
+  case Builtin::BI__builtin_elementwise_ldexp:
+  case Builtin::BI__builtin_elementwise_log:
+  case Builtin::BI__builtin_elementwise_log2:
+  case Builtin::BI__builtin_elementwise_log10:
+  case Builtin::BI__builtin_elementwise_pow:
+  case Builtin::BI__builtin_elementwise_bitreverse:
+    return errorBuiltinNYI(*this, e, builtinID);
+  case Builtin::BI__builtin_elementwise_cos:
+    return emitUnaryFPBuiltin<cir::CosOp>(*this, *e);
+  case Builtin::BI__builtin_elementwise_cosh:
+  case Builtin::BI__builtin_elementwise_floor:
+  case Builtin::BI__builtin_elementwise_popcount:
+  case Builtin::BI__builtin_elementwise_roundeven:
+  case Builtin::BI__builtin_elementwise_round:
+  case Builtin::BI__builtin_elementwise_rint:
+  case Builtin::BI__builtin_elementwise_nearbyint:
+  case Builtin::BI__builtin_elementwise_sin:
+  case Builtin::BI__builtin_elementwise_sinh:
+  case Builtin::BI__builtin_elementwise_tan:
+  case Builtin::BI__builtin_elementwise_tanh:
+  case Builtin::BI__builtin_elementwise_trunc:
+  case Builtin::BI__builtin_elementwise_canonicalize:
+  case Builtin::BI__builtin_elementwise_copysign:
+  case Builtin::BI__builtin_elementwise_fma:
+  case Builtin::BI__builtin_elementwise_fshl:
+  case Builtin::BI__builtin_elementwise_fshr:
+  case Builtin::BI__builtin_elementwise_add_sat:
+  case Builtin::BI__builtin_elementwise_sub_sat:
+  case Builtin::BI__builtin_elementwise_max:
+  case Builtin::BI__builtin_elementwise_min:
+  case Builtin::BI__builtin_elementwise_maxnum:
+  case Builtin::BI__builtin_elementwise_minnum:
+  case Builtin::BI__builtin_elementwise_maximum:
+  case Builtin::BI__builtin_elementwise_minimum:
+  case Builtin::BI__builtin_elementwise_maximumnum:
+  case Builtin::BI__builtin_elementwise_minimumnum:
+  case Builtin::BI__builtin_reduce_max:
+  case Builtin::BI__builtin_reduce_min:
+  case Builtin::BI__builtin_reduce_add:
+  case Builtin::BI__builtin_reduce_mul:
+  case Builtin::BI__builtin_reduce_xor:
+  case Builtin::BI__builtin_reduce_or:
+  case Builtin::BI__builtin_reduce_and:
+  case Builtin::BI__builtin_reduce_maximum:
+  case Builtin::BI__builtin_reduce_minimum:
+  case Builtin::BI__builtin_matrix_transpose:
+  case Builtin::BI__builtin_matrix_column_major_load:
+  case Builtin::BI__builtin_matrix_column_major_store:
+  case Builtin::BI__builtin_masked_load:
+  case Builtin::BI__builtin_masked_expand_load:
+  case Builtin::BI__builtin_masked_gather:
+  case Builtin::BI__builtin_masked_store:
+  case Builtin::BI__builtin_masked_compress_store:
+  case Builtin::BI__builtin_masked_scatter:
+  case Builtin::BI__builtin_isinf_sign:
+  case Builtin::BI__builtin_flt_rounds:
+  case Builtin::BI__builtin_set_flt_rounds:
+  case Builtin::BI__builtin_fpclassify:
+    return errorBuiltinNYI(*this, e, builtinID);
+  case Builtin::BIalloca:
+  case Builtin::BI_alloca:
+  case Builtin::BI__builtin_alloca_uninitialized:
+  case Builtin::BI__builtin_alloca:
+    return emitBuiltinAlloca(*this, e, builtinID);
+  case Builtin::BI__builtin_alloca_with_align_uninitialized:
+  case Builtin::BI__builtin_alloca_with_align:
+  case Builtin::BI__builtin_infer_alloc_token:
+  case Builtin::BIbzero:
+  case Builtin::BI__builtin_bzero:
+  case Builtin::BIbcopy:
+  case Builtin::BI__builtin_bcopy:
+    return errorBuiltinNYI(*this, e, builtinID);
+  case Builtin::BImemcpy:
+  case Builtin::BI__builtin_memcpy:
+    break;
+  case Builtin::BImempcpy:
+  case Builtin::BI__builtin_mempcpy:
+  case Builtin::BI__builtin_memcpy_inline:
+  case Builtin::BI__builtin_char_memchr:
+  case Builtin::BI__builtin___memcpy_chk:
+  case Builtin::BI__builtin_objc_memmove_collectable:
+  case Builtin::BI__builtin___memmove_chk:
+  case Builtin::BI__builtin_trivially_relocate:
+  case Builtin::BImemmove:
+  case Builtin::BI__builtin_memmove:
+  case Builtin::BImemset:
+  case Builtin::BI__builtin_memset:
+  case Builtin::BI__builtin_memset_inline:
+  case Builtin::BI__builtin___memset_chk:
+  case Builtin::BI__builtin_wmemchr:
+  case Builtin::BI__builtin_wmemcmp:
+  case Builtin::BI__builtin_dwarf_cfa:
+    return errorBuiltinNYI(*this, e, builtinID);
+  case Builtin::BI__builtin_return_address:
+  case Builtin::BI_ReturnAddress:
+  case Builtin::BI__builtin_frame_address: {
+    mlir::Location loc = getLoc(e->getExprLoc());
+    llvm::APSInt level = e->getArg(0)->EvaluateKnownConstInt(getContext());
+    if (builtinID == Builtin::BI__builtin_return_address) {
+      return RValue::get(cir::ReturnAddrOp::create(
+          builder, loc,
+          builder.getConstAPInt(loc, builder.getUInt32Ty(), level)));
+    }
+    return RValue::get(cir::FrameAddrOp::create(
+        builder, loc,
+        builder.getConstAPInt(loc, builder.getUInt32Ty(), level)));
+  }
+  case Builtin::BI__builtin_extract_return_addr:
+  case Builtin::BI__builtin_frob_return_addr:
+  case Builtin::BI__builtin_dwarf_sp_column:
+  case Builtin::BI__builtin_init_dwarf_reg_size_table:
+  case Builtin::BI__builtin_eh_return:
+  case Builtin::BI__builtin_unwind_init:
+  case Builtin::BI__builtin_extend_pointer:
+  case Builtin::BI__builtin_setjmp:
+  case Builtin::BI__builtin_longjmp:
+  case Builtin::BI__builtin_launder:
+  case Builtin::BI__sync_fetch_and_add:
+  case Builtin::BI__sync_fetch_and_sub:
+  case Builtin::BI__sync_fetch_and_or:
+  case Builtin::BI__sync_fetch_and_and:
+  case Builtin::BI__sync_fetch_and_xor:
+  case Builtin::BI__sync_fetch_and_nand:
+  case Builtin::BI__sync_add_and_fetch:
+  case Builtin::BI__sync_sub_and_fetch:
+  case Builtin::BI__sync_and_and_fetch:
+  case Builtin::BI__sync_or_and_fetch:
+  case Builtin::BI__sync_xor_and_fetch:
+  case Builtin::BI__sync_nand_and_fetch:
+  case Builtin::BI__sync_val_compare_and_swap:
+  case Builtin::BI__sync_bool_compare_and_swap:
+  case Builtin::BI__sync_lock_test_and_set:
+  case Builtin::BI__sync_lock_release:
+  case Builtin::BI__sync_swap:
+  case Builtin::BI__sync_fetch_and_add_1:
+  case Builtin::BI__sync_fetch_and_add_2:
+  case Builtin::BI__sync_fetch_and_add_4:
+  case Builtin::BI__sync_fetch_and_add_8:
+  case Builtin::BI__sync_fetch_and_add_16:
+  case Builtin::BI__sync_fetch_and_sub_1:
+  case Builtin::BI__sync_fetch_and_sub_2:
+  case Builtin::BI__sync_fetch_and_sub_4:
+  case Builtin::BI__sync_fetch_and_sub_8:
+  case Builtin::BI__sync_fetch_and_sub_16:
+  case Builtin::BI__sync_fetch_and_or_1:
+  case Builtin::BI__sync_fetch_and_or_2:
+  case Builtin::BI__sync_fetch_and_or_4:
+  case Builtin::BI__sync_fetch_and_or_8:
+  case Builtin::BI__sync_fetch_and_or_16:
+  case Builtin::BI__sync_fetch_and_and_1:
+  case Builtin::BI__sync_fetch_and_and_2:
+  case Builtin::BI__sync_fetch_and_and_4:
+  case Builtin::BI__sync_fetch_and_and_8:
+  case Builtin::BI__sync_fetch_and_and_16:
+  case Builtin::BI__sync_fetch_and_xor_1:
+  case Builtin::BI__sync_fetch_and_xor_2:
+  case Builtin::BI__sync_fetch_and_xor_4:
+  case Builtin::BI__sync_fetch_and_xor_8:
+  case Builtin::BI__sync_fetch_and_xor_16:
+  case Builtin::BI__sync_fetch_and_nand_1:
+  case Builtin::BI__sync_fetch_and_nand_2:
+  case Builtin::BI__sync_fetch_and_nand_4:
+  case Builtin::BI__sync_fetch_and_nand_8:
+  case Builtin::BI__sync_fetch_and_nand_16:
+  case Builtin::BI__sync_fetch_and_min:
+  case Builtin::BI__sync_fetch_and_max:
+  case Builtin::BI__sync_fetch_and_umin:
+  case Builtin::BI__sync_fetch_and_umax:
+  case Builtin::BI__sync_add_and_fetch_1:
+  case Builtin::BI__sync_add_and_fetch_2:
+  case Builtin::BI__sync_add_and_fetch_4:
+  case Builtin::BI__sync_add_and_fetch_8:
+  case Builtin::BI__sync_add_and_fetch_16:
+  case Builtin::BI__sync_sub_and_fetch_1:
+  case Builtin::BI__sync_sub_and_fetch_2:
+  case Builtin::BI__sync_sub_and_fetch_4:
+  case Builtin::BI__sync_sub_and_fetch_8:
+  case Builtin::BI__sync_sub_and_fetch_16:
+  case Builtin::BI__sync_and_and_fetch_1:
+  case Builtin::BI__sync_and_and_fetch_2:
+  case Builtin::BI__sync_and_and_fetch_4:
+  case Builtin::BI__sync_and_and_fetch_8:
+  case Builtin::BI__sync_and_and_fetch_16:
+  case Builtin::BI__sync_or_and_fetch_1:
+  case Builtin::BI__sync_or_and_fetch_2:
+  case Builtin::BI__sync_or_and_fetch_4:
+  case Builtin::BI__sync_or_and_fetch_8:
+  case Builtin::BI__sync_or_and_fetch_16:
+  case Builtin::BI__sync_xor_and_fetch_1:
+  case Builtin::BI__sync_xor_and_fetch_2:
+  case Builtin::BI__sync_xor_and_fetch_4:
+  case Builtin::BI__sync_xor_and_fetch_8:
+  case Builtin::BI__sync_xor_and_fetch_16:
+  case Builtin::BI__sync_nand_and_fetch_1:
+  case Builtin::BI__sync_nand_and_fetch_2:
+  case Builtin::BI__sync_nand_and_fetch_4:
+  case Builtin::BI__sync_nand_and_fetch_8:
+  case Builtin::BI__sync_nand_and_fetch_16:
+  case Builtin::BI__sync_val_compare_and_swap_1:
+  case Builtin::BI__sync_val_compare_and_swap_2:
+  case Builtin::BI__sync_val_compare_and_swap_4:
+  case Builtin::BI__sync_val_compare_and_swap_8:
+  case Builtin::BI__sync_val_compare_and_swap_16:
+  case Builtin::BI__sync_bool_compare_and_swap_1:
+  case Builtin::BI__sync_bool_compare_and_swap_2:
+  case Builtin::BI__sync_bool_compare_and_swap_4:
+  case Builtin::BI__sync_bool_compare_and_swap_8:
+  case Builtin::BI__sync_bool_compare_and_swap_16:
+  case Builtin::BI__sync_swap_1:
+  case Builtin::BI__sync_swap_2:
+  case Builtin::BI__sync_swap_4:
+  case Builtin::BI__sync_swap_8:
+  case Builtin::BI__sync_swap_16:
+  case Builtin::BI__sync_lock_test_and_set_1:
+  case Builtin::BI__sync_lock_test_and_set_2:
+  case Builtin::BI__sync_lock_test_and_set_4:
+  case Builtin::BI__sync_lock_test_and_set_8:
+  case Builtin::BI__sync_lock_test_and_set_16:
+  case Builtin::BI__sync_lock_release_1:
+  case Builtin::BI__sync_lock_release_2:
+  case Builtin::BI__sync_lock_release_4:
+  case Builtin::BI__sync_lock_release_8:
+  case Builtin::BI__sync_lock_release_16:
+  case Builtin::BI__sync_synchronize:
+  case Builtin::BI__builtin_nontemporal_load:
+  case Built...
[truncated]


github-actions bot commented Nov 19, 2025

🐧 Linux x64 Test Results

  • 112070 tests passed
  • 4077 tests skipped

Contributor

@andykaylor andykaylor left a comment


Thanks for doing this!

I appreciate the restructuring, but this is going to cause some problems for builtins that have a 1:1 mapping to library calls. Currently, we have a block of code following the switch statement to call the library function for cases that weren't previously handled.

  // If this is an alias for a lib function (e.g. __builtin_sin), emit
  // the call using the normal call path, but using the unmangled
  // version of the function name.
  if (getContext().BuiltinInfo.isLibFunction(builtinID))
    return emitLibraryCall(*this, fd, e,
                           cgm.getBuiltinLibFunction(fd, builtinID));

If you put that inside of errorBuiltinNYI with a comment explaining that it's a temporary workaround, that will keep us from regressing on a bunch of tests that are currently passing in the llvm-test-suite.
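For illustration, a minimal sketch of that shape, assuming fd is threaded into the helper (the plumbing and the message text here are just assumptions, not the final code):

  // Sketch: errorBuiltinNYI with the library-call fallback folded in as a
  // temporary workaround, so builtins that are plain aliases for library
  // functions (e.g. __builtin_sin) keep lowering as ordinary calls.
  static RValue errorBuiltinNYI(CIRGenFunction &cgf, const FunctionDecl *fd,
                                const CallExpr *e, unsigned builtinID) {
    if (cgf.getContext().BuiltinInfo.isLibFunction(builtinID))
      return emitLibraryCall(cgf, fd, e,
                             cgf.cgm.getBuiltinLibFunction(fd, builtinID));

    cgf.cgm.errorNYI(e->getSourceRange(),
                     std::string("unimplemented builtin call: ") +
                         cgf.getContext().BuiltinInfo.getName(builtinID));
    return cgf.getUndefRValue(e->getType());
  }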

Contributor Author

HendrikHuebner commented Nov 19, 2025

Thanks for doing this!

I appreciate the restructuring, but this is going to cause some problems for builtins that have a 1:1 mapping to library calls. Currently, we have a block of code following the switch statement to call the library function for cases that weren't previously handled.

  // If this is an alias for a lib function (e.g. __builtin_sin), emit
  // the call using the normal call path, but using the unmangled
  // version of the function name.
  if (getContext().BuiltinInfo.isLibFunction(builtinID))
    return emitLibraryCall(*this, fd, e,
                           cgm.getBuiltinLibFunction(fd, builtinID));

If you put that inside of errorBuiltinNYI with a comment explaining that it's a temporary workaround, that will keep us from regressing on a bunch of tests that are currently passing in the llvm-test-suite.

I added break;s for the builtins that are handled by the piece of code you mentioned, e.g. __builtin_printf and the CIR tests pass. If there are regressions in other test suites, wouldn't it be better to handle this explicitly by also inserting break for the specific builtin?
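Concretely, the pattern looks like this (using __builtin_printf as the example; the full set of break cases is in the diff above):

  // A bare break exits the switch, so control reaches the isLibFunction
  // fallback below it and the builtin is emitted as a plain library call.
  case Builtin::BIprintf:
  case Builtin::BI__builtin_printf:
    break;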

@andykaylor
Contributor

I added break;s for the builtins that are handled by the piece of code you mentioned, e.g. __builtin_printf and the CIR tests pass. If there are regressions in other test suites, wouldn't it be better to handle this explicitly by also inserting break for the specific builtin?

Ah. I didn't notice the break handling for specific builtins. I just knew that we were getting fallback handling for a lot of builtins this way. I didn't actually run any tests with your patch applied. The tests I would have used are the single-source and multi-source tests from the LLVM test suite.

Inserting break for specific builtins is a reasonable way to handle this. The block of code that currently maps these to library functions will stay in place long term, but with an additional condition (shouldEmitBuiltinAsIR). When we add that condition, that should flag builtins that we're missing.
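Roughly, the post-switch fallback would then become something like this (shouldEmitBuiltinAsIR is the planned hook, so this is only a sketch):

  // Sketch of the long-term fallback: only map a builtin to its library
  // function when we don't intend to emit it as CIR; unhandled builtins
  // that should be emitted as IR then surface as NYI errors.
  if (getContext().BuiltinInfo.isLibFunction(builtinID) &&
      !shouldEmitBuiltinAsIR(builtinID))
    return emitLibraryCall(*this, fd, e,
                           cgm.getBuiltinLibFunction(fd, builtinID));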

@HendrikHuebner
Contributor Author

I added break;s for the builtins that are handled by the piece of code you mentioned, e.g. __builtin_printf and the CIR tests pass. If there are regressions in other test suites, wouldn't it be better to handle this explicitly by also inserting break for the specific builtin?

Ah. I didn't notice the break handling for specific builtins. I just knew that we were getting fallback handling for a lot of builtins this way. I didn't actually run any tests with your patch applied. The tests I would have used are the single-source and multi-source tests from the LLVM test suite.

Inserting break for specific builtins is a reasonable way to handle this. The block of code that currently maps these to library functions will stay in place long term, but with an additional condition (shouldEmitBuiltinAsIR). When we add that condition, that should flag builtins that we're missing.

Sounds good. Can I make the check in the NYI helper an assertion, so it produces a different error message when a builtin is reached that should generate a libcall?

@andykaylor
Contributor

Sounds good. Can I make the check in the NYI helper an assertion, so it produces a different error message when a builtin is reached that should generate a libcall?

Maybe just make it a different errorNYI message? It would also be nice to have a MissingFeatures assert on the lines where we break out of the switch because we know it's a libcall but we really want to handle the builtin some other way.

I just ran the llvm-test-suite single-source tests with this patch, and it looks like the only two missing breaks are __builtin_memset and __builtin_isnan. There are likely others that just aren't covered in these tests, but we can deal with those as we find them.
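Something along these lines, following the usual MissingFeatures convention (the feature name here is made up for illustration):

  case Builtin::BImemset:
  case Builtin::BI__builtin_memset:
    // Exits the switch so the library-call fallback handles this for now;
    // the assert is a tracking marker for the dedicated handling we
    // eventually want.
    assert(!cir::MissingFeatures::emitBuiltinMemset());
    break;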

@andykaylor
Contributor

I just merged #166037, which is unfortunately going to require a bit of rebase work to fit with this change. On the plus side, it implements the correct builtin handling for __builtin_isnan.

@HendrikHuebner
Contributor Author

I added the extra error message and break after memset. Can you take another look?

Contributor

@andykaylor andykaylor left a comment


lgtm. Thanks for the updates!
