Module core::core_arch::arm::simd32

🔬This is a nightly-only experimental API. (stdsimd #48556)
Available on ARM only.

References

  • Section 8.5 “32-bit SIMD intrinsics” of ACLE

Intrinsics that could live here

  • [x] __sel
  • [ ] __ssat16
  • [ ] __usat16
  • [ ] __sxtab16
  • [ ] __sxtb16
  • [ ] __uxtab16
  • [ ] __uxtb16
  • [x] __qadd8
  • [x] __qsub8
  • [x] __sadd8
  • [x] __shadd8
  • [x] __shsub8
  • [x] __ssub8
  • [ ] __uadd8
  • [ ] __uhadd8
  • [ ] __uhsub8
  • [ ] __uqadd8
  • [ ] __uqsub8
  • [x] __usub8
  • [x] __usad8
  • [x] __usada8
  • [x] __qadd16
  • [x] __qasx
  • [x] __qsax
  • [x] __qsub16
  • [x] __sadd16
  • [x] __sasx
  • [x] __shadd16
  • [ ] __shasx
  • [ ] __shsax
  • [x] __shsub16
  • [ ] __ssax
  • [ ] __ssub16
  • [ ] __uadd16
  • [ ] __uasx
  • [ ] __uhadd16
  • [ ] __uhasx
  • [ ] __uhsax
  • [ ] __uhsub16
  • [ ] __uqadd16
  • [ ] __uqasx
  • [ ] __uqsax
  • [ ] __uqsub16
  • [ ] __usax
  • [ ] __usub16
  • [x] __smlad
  • [ ] __smladx
  • [ ] __smlald
  • [ ] __smlaldx
  • [x] __smlsd
  • [ ] __smlsdx
  • [ ] __smlsld
  • [ ] __smlsldx
  • [x] __smuad
  • [x] __smuadx
  • [x] __smusd
  • [x] __smusdx
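
As a rough orientation (an editorial sketch, not part of the intrinsics' documentation), the portable Rust below models what three of the checked intrinsics compute, assuming lane 0 is the least-significant byte or halfword of the 32-bit vector and ignoring the Q (saturation) flag side effect:

    /// Model of QADD8 (__qadd8): four independent saturating signed 8-bit
    /// additions on the packed lanes.
    fn qadd8_model(a: [i8; 4], b: [i8; 4]) -> [i8; 4] {
        [
            a[0].saturating_add(b[0]),
            a[1].saturating_add(b[1]),
            a[2].saturating_add(b[2]),
            a[3].saturating_add(b[3]),
        ]
    }

    /// Model of SMUAD (__smuad): multiply the signed 16-bit halves pair-wise
    /// and add the two products into a 32-bit result.
    fn smuad_model(a: [i16; 2], b: [i16; 2]) -> i32 {
        a[0] as i32 * b[0] as i32 + a[1] as i32 * b[1] as i32
    }

    /// Model of a USUB8 + SEL pair: USUB8 sets one APSR GE flag per byte lane
    /// (set when a[i] >= b[i]), and SEL then picks a[i] where the flag is set
    /// and b[i] where it is clear -- a lane-wise unsigned maximum.
    fn usub8_sel_model(a: [u8; 4], b: [u8; 4]) -> [u8; 4] {
        let mut res = [0u8; 4];
        for i in 0..4 {
            res[i] = if a[i] >= b[i] { a[i] } else { b[i] };
        }
        res
    }

    fn main() {
        assert_eq!(qadd8_model([120, 1, -128, 3], [20, 1, -5, 3]), [127, 2, -128, 6]);
        assert_eq!(smuad_model([3, -2], [4, 5]), 2); // 3*4 + (-2)*5
        assert_eq!(usub8_sel_model([10, 200, 7, 0], [20, 100, 7, 1]), [20, 200, 7, 1]);
    }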

Macros

Structs

  • int8x4_t Experimental
    ARM-specific 32-bit wide vector of four packed i8.
  • uint8x4_t Experimental
    ARM-specific 32-bit wide vector of four packed u8.
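
The exact layout of these types is an implementation detail, but the "four packed 8-bit lanes in one 32-bit word" shape they describe can be illustrated with plain integers. A minimal editorial sketch, assuming the usual little-endian ARM configuration where lane 0 sits in the least-significant byte:

    /// Pack four u8 lanes into one 32-bit word, lane 0 in the
    /// least-significant byte.
    fn pack_u8x4(lanes: [u8; 4]) -> u32 {
        u32::from_le_bytes(lanes)
    }

    /// Unpack a 32-bit word back into its four u8 lanes.
    fn unpack_u8x4(word: u32) -> [u8; 4] {
        word.to_le_bytes()
    }

    fn main() {
        let word = pack_u8x4([0x11, 0x22, 0x33, 0x44]);
        assert_eq!(word, 0x4433_2211);
        assert_eq!(unpack_u8x4(word), [0x11, 0x22, 0x33, 0x44]);
    }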

Functions

  • __qadd8 Experimental
    Saturating four 8-bit integer additions
  • __qadd16 Experimental
    Saturating two 16-bit integer additions
  • __qasx Experimental
    Returns the 16-bit signed saturated equivalent of res[0] = a[0] - b[1], res[1] = a[1] + b[0] (add/subtract with exchange)
  • __qsax Experimental
    Returns the 16-bit signed saturated equivalent of res[0] = a[0] + b[1], res[1] = a[1] - b[0] (subtract/add with exchange)
  • __qsub8 Experimental
    Saturating four 8-bit integer subtractions
  • __qsub16 Experimental
    Saturating two 16-bit integer subtractions
  • __sadd8 Experimental
    Returns the 8-bit signed equivalent of res[i] = a[i] + b[i] for each of the four lanes; the APSR GE bits are set from the results
  • __sadd16 Experimental
    Returns the 16-bit signed equivalent of res[i] = a[i] + b[i] for both halfword lanes; the APSR GE bits are set from the results
  • __sasx Experimental
    Returns the 16-bit signed equivalent of res[0] = a[0] - b[1], res[1] = a[1] + b[0] (add/subtract with exchange); the APSR GE bits are set
  • __sel Experimental
    Select bytes from each operand according to the APSR GE flags
  • __shadd8 Experimental
    Signed halving parallel byte-wise addition.
  • __shadd16 Experimental
    Signed halving parallel halfword-wise addition.
  • __shsub8 Experimental
    Signed halving parallel byte-wise subtraction.
  • __shsub16 Experimental
    Signed halving parallel halfword-wise subtraction.
  • __smlad Experimental
    Dual 16-bit Signed Multiply with Addition of products and 32-bit accumulation.
  • __smlsd Experimental
    Dual 16-bit Signed Multiply with Subtraction of products, 32-bit accumulation, and overflow detection.
  • __smuad Experimental
    Signed Dual Multiply Add.
  • __smuadx Experimental
    Signed Dual Multiply Add Reversed.
  • __smusd Experimental
    Signed Dual Multiply Subtract.
  • __smusdx Experimental
    Signed Dual Multiply Subtract Reversed.
  • __ssub8 Experimental
    Inserts an SSUB8 instruction (four parallel 8-bit signed subtractions that set the APSR GE bits).
  • __usad8 Experimental
    Sum of 8-bit absolute differences.
  • __usada8 Experimental
    Sum of 8-bit absolute differences, accumulated with a 32-bit operand.
  • __usub8 Experimental
    Inserts a USUB8 instruction (four parallel 8-bit unsigned subtractions that set the APSR GE bits).
  • arm_qadd8 🔒 Experimental
  • arm_qadd16 🔒 Experimental
  • arm_qasx 🔒 Experimental
  • arm_qsax 🔒 Experimental
  • arm_qsub8 🔒 Experimental
  • arm_qsub16 🔒 Experimental
  • arm_sadd8 🔒 Experimental
  • arm_sadd16 🔒 Experimental
  • arm_sasx 🔒 Experimental
  • arm_sel 🔒 Experimental
  • arm_shadd8 🔒 Experimental
  • arm_shadd16 🔒 Experimental
  • arm_shsub8 🔒 Experimental
  • arm_shsub16 🔒 Experimental
  • arm_smlad 🔒 Experimental
  • arm_smlsd 🔒 Experimental
  • arm_smuad 🔒 Experimental
  • arm_smuadx 🔒 Experimental
  • arm_smusd 🔒 Experimental
  • arm_smusdx 🔒 Experimental
  • arm_ssub8 🔒 Experimental
  • arm_usad8 🔒 Experimental
  • arm_usub8 🔒 Experimental
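
A minimal usage sketch of one of the public intrinsics above (editorial; it assumes a nightly compiler with the stdsimd feature named in the note at the top of this page, an ARM target whose DSP/SIMD32 extension is available, and that __qadd8 and int8x4_t are re-exported from core::arch::arm):

    #![feature(stdsimd)]

    #[cfg(target_arch = "arm")]
    fn saturating_byte_add(a: [i8; 4], b: [i8; 4]) -> [i8; 4] {
        use core::arch::arm::{int8x4_t, __qadd8};
        use core::mem::transmute;

        // int8x4_t is a 32-bit vector of four packed i8 lanes, so a
        // size-preserving transmute from [i8; 4] is used here purely for
        // illustration.
        unsafe {
            let va: int8x4_t = transmute(a);
            let vb: int8x4_t = transmute(b);
            // QADD8: four lane-wise saturating signed 8-bit additions.
            transmute(__qadd8(va, vb))
        }
    }

    #[cfg(target_arch = "arm")]
    fn main() {
        // 120 + 20 saturates to 127 instead of wrapping.
        assert_eq!(saturating_byte_add([120, 1, -2, 3], [20, 1, 2, 3]), [127, 2, 0, 6]);
    }

    #[cfg(not(target_arch = "arm"))]
    fn main() {}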