Module core::arch::aarch64

🔬 This is a nightly-only experimental API. (stdsimd #27731)
This is supported on AArch64 only.
Expand description

Platform-specific intrinsics for the aarch64 platform.

See the module documentation for more details.
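
The intrinsics below share a common calling pattern: they are `unsafe`, only exist on `aarch64`, and generally require a target feature such as `neon` on the enclosing function. A minimal sketch of that pattern on a nightly toolchain, using a few NEON intrinsics from the list below purely as examples:

```rust
#![feature(stdsimd)]

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn two_plus_three() -> u32 {
    use core::arch::aarch64::*;
    let a = vdup_n_u32(2);    // uint32x2_t holding [2, 2]
    let b = vdup_n_u32(3);    // uint32x2_t holding [3, 3]
    let sum = vadd_u32(a, b); // lane-wise add: [5, 5]
    vaddv_u32(sum)            // add across the vector: 10
}
```

Runtime feature detection lives in `std`, not in `core`, so callers of a function like this must guarantee the required features themselves.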

Structs

APSRExperimental

Application Program Status Register

SYExperimental

Full system is the required shareability domain; reads and writes are the required access types

float32x2_tExperimental

ARM-specific 64-bit wide vector of two packed f32.

float32x4_tExperimental

ARM-specific 128-bit wide vector of four packed f32.

float64x1_tExperimental

ARM-specific 64-bit wide vector of one packed f64.

float64x2_tExperimental

ARM-specific 128-bit wide vector of two packed f64.

int8x8_tExperimental

ARM-specific 64-bit wide vector of eight packed i8.

int8x8x2_tExperimental

ARM-specific type containing two int8x8_t vectors.

int8x8x3_tExperimental

ARM-specific type containing three int8x8_t vectors.

int8x8x4_tExperimental

ARM-specific type containing four int8x8_t vectors.

int8x16_tExperimental

ARM-specific 128-bit wide vector of sixteen packed i8.

int8x16x2_tExperimental

ARM-specific type containing two int8x16_t vectors.

int8x16x3_tExperimental

ARM-specific type containing three int8x16_t vectors.

int8x16x4_tExperimental

ARM-specific type containing four int8x16_t vectors.

int16x4_tExperimental

ARM-specific 64-bit wide vector of four packed i16.

int16x8_tExperimental

ARM-specific 128-bit wide vector of eight packed i16.

int32x2_tExperimental

ARM-specific 64-bit wide vector of two packed i32.

int32x4_tExperimental

ARM-specific 128-bit wide vector of four packed i32.

int64x1_tExperimental

ARM-specific 64-bit wide vector of one packed i64.

int64x2_tExperimental

ARM-specific 128-bit wide vector of two packed i64.

poly8x8_tExperimental

ARM-specific 64-bit wide polynomial vector of eight packed p8.

poly8x8x2_tExperimental

ARM-specific type containing two poly8x8_t vectors.

poly8x8x3_tExperimental

ARM-specific type containing three poly8x8_t vectors.

poly8x8x4_tExperimental

ARM-specific type containing four poly8x8_t vectors.

poly8x16_tExperimental

ARM-specific 128-bit wide vector of sixteen packed p8.

poly8x16x2_tExperimental

ARM-specific type containing two poly8x16_t vectors.

poly8x16x3_tExperimental

ARM-specific type containing three poly8x16_t vectors.

poly8x16x4_tExperimental

ARM-specific type containing four poly8x16_t vectors.

poly16x4_tExperimental

ARM-specific 64-bit wide vector of four packed p16.

poly16x8_tExperimental

ARM-specific 128-bit wide vector of eight packed p16.

poly64x1_tExperimental

ARM-specific 64-bit wide vector of one packed p64.

poly64x2_tExperimental

ARM-specific 128-bit wide vector of two packed p64.

uint8x8_tExperimental

ARM-specific 64-bit wide vector of eight packed u8.

uint8x8x2_tExperimental

ARM-specific type containing two uint8x8_t vectors.

uint8x8x3_tExperimental

ARM-specific type containing three uint8x8_t vectors.

uint8x8x4_tExperimental

ARM-specific type containing four uint8x8_t vectors.

uint8x16_tExperimental

ARM-specific 128-bit wide vector of sixteen packed u8.

uint8x16x2_tExperimental

ARM-specific type containing two uint8x16_t vectors.

uint8x16x3_tExperimental

ARM-specific type containing three uint8x16_t vectors.

uint8x16x4_tExperimental

ARM-specific type containing four uint8x16_t vectors.

uint16x4_tExperimental

ARM-specific 64-bit wide vector of four packed u16.

uint16x8_tExperimental

ARM-specific 128-bit wide vector of eight packed u16.

uint32x2_tExperimental

ARM-specific 64-bit wide vector of two packed u32.

uint32x4_tExperimental

ARM-specific 128-bit wide vector of four packed u32.

uint64x1_tExperimental

ARM-specific 64-bit wide vector of one packed u64.

uint64x2_tExperimental

ARM-specific 128-bit wide vector of two packed u64.
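
The `*x8_t`/`*x4_t`/`*x2_t`/`*x1_t` names above denote 64-bit (D-register) vectors, while the `*x16_t`/`*x8_t`/`*x4_t`/`*x2_t` 128-bit variants map to Q registers; the `q` in intrinsic names (e.g. `vaddq_u32`) selects the 128-bit forms. A minimal sketch combining two of these types, assuming NEON is enabled for the enclosing function:

```rust
use core::arch::aarch64::*;

/// Count the set bits in a 128-bit vector of bytes.
unsafe fn popcount_128(x: uint8x16_t) -> u32 {
    let per_byte = vcntq_u8(x); // population count per byte (uint8x16_t)
    vaddvq_u8(per_byte) as u32  // add across vector: at most 128, fits in a u8
}
```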

Constants

_PREFETCH_LOCALITY0Experimental

See prefetch.

_PREFETCH_LOCALITY1Experimental

See prefetch.

_PREFETCH_LOCALITY2Experimental

See prefetch.

_PREFETCH_LOCALITY3Experimental

See prefetch.

_PREFETCH_READExperimental

See prefetch.

_PREFETCH_WRITEExperimental

See prefetch.

_TMFAILURE_CNCLExperimental

Transaction executed a TCANCEL instruction

_TMFAILURE_DBGExperimental

Transaction aborted due to a debug trap.

_TMFAILURE_ERRExperimental

Transaction aborted because a non-permissible operation was attempted

_TMFAILURE_IMPExperimental

Fallback error type for any other reason

_TMFAILURE_INTExperimental

Transaction failed from interrupt

_TMFAILURE_MEMExperimental

Transaction aborted because a conflict occurred

_TMFAILURE_NESTExperimental

Transaction aborted because the transactional nesting level was exceeded

_TMFAILURE_REASONExperimental

Extraction mask for failure reason

_TMFAILURE_RTRYExperimental

Transaction retry is possible.

_TMFAILURE_SIZEExperimental

Transaction aborted because the read or write set limit was exceeded

_TMFAILURE_TRIVIALExperimental

Indicates a TRIVIAL version of TM is available

_TMSTART_SUCCESSExperimental

Transaction successfully started.

Functions

__breakpointExperimental

Inserts a breakpoint instruction.

__crc32bExperimentalcrc

CRC32 single round checksum for bytes (8 bits).

__crc32cbExperimentalcrc

CRC32-C single round checksum for bytes (8 bits).

__crc32cdExperimentalcrc

CRC32-C single round checksum for quad words (64 bits).

__crc32chExperimentalcrc

CRC32-C single round checksum for half words (16 bits).

__crc32cwExperimentalcrc

CRC32-C single round checksum for words (32 bits).

__crc32dExperimentalcrc

CRC32 single round checksum for quad words (64 bits).

__crc32hExperimentalcrc

CRC32 single round checksum for half words (16 bits).

__crc32wExperimentalcrc

CRC32 single round checksum for words (32 bits).
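
A hedged sketch of chaining the single-round intrinsics over a byte slice; the helper name is invented for illustration and the `crc` target feature is assumed:

```rust
use core::arch::aarch64::*;

#[target_feature(enable = "crc")]
unsafe fn crc32_update(mut crc: u32, data: &[u8]) -> u32 {
    for &byte in data {
        crc = __crc32b(crc, byte); // one 8-bit round per input byte
    }
    crc
}
```

Wider inputs can use `__crc32h`, `__crc32w`, and `__crc32d` to consume 16, 32, or 64 bits per round; the `__crc32c*` variants use the CRC32-C (Castagnoli) polynomial instead.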

__dmbExperimental

Generates a DMB (data memory barrier) instruction or equivalent CP15 instruction.

__dsbExperimental

Generates a DSB (data synchronization barrier) instruction or equivalent CP15 instruction.

__isbExperimental

Generates an ISB (instruction synchronization barrier) instruction or equivalent CP15 instruction.

__nopExperimental

Generates an unspecified no-op instruction.

__rsrExperimental

Reads a 32-bit system register

__rsrpExperimental

Reads a system register containing an address

__sevExperimental

Generates a SEV (send a global event) hint instruction.

__sevlExperimental

Generates a SEVL (send a local event) hint instruction.

__tcancelExperimentaltme

Cancels the current transaction and discards all state modifications that were performed transactionally.

__tcommitExperimentaltme

Commits the current transaction. For a nested transaction, the only effect is that the transactional nesting depth is decreased. For an outer transaction, the state modifications performed transactionally are committed to the architectural state.

__tstartExperimentaltme

Starts a new transaction. When the transaction starts successfully the return value is 0. If the transaction fails, all state modifications are discarded and a cause of the failure is encoded in the return value.

__ttestExperimentaltme

Tests if executing inside a transaction. If no transaction is currently executing, the return value is 0. Otherwise, this intrinsic returns the depth of the transaction.
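
A rough sketch of the transaction flow built from `__tstart`, `__tcommit`, and the `_TMSTART_SUCCESS`/`_TMFAILURE_*` constants above; the helper signature and retry handling are assumptions, and the `tme` target feature is required:

```rust
use core::arch::aarch64::*;

#[target_feature(enable = "tme")]
unsafe fn run_transactional(body: impl Fn()) -> bool {
    let status = __tstart();
    if status == _TMSTART_SUCCESS {
        body();      // state changes stay transactional until commit
        __tcommit(); // commit the (outer) transaction
        true
    } else {
        // On failure the cause is encoded in `status`; a set _TMFAILURE_RTRY
        // bit means a retry may succeed, otherwise the caller should fall
        // back to a non-transactional path (e.g. taking a lock).
        let _retryable = (status & _TMFAILURE_RTRY) != 0;
        false
    }
}
```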

__wfeExperimental

Generates a WFE (wait for event) hint instruction, or nothing.

__wfiExperimental

Generates a WFI (wait for interrupt) hint instruction, or nothing.

__wsrExperimental

Writes a 32-bit system register

__wsrpExperimental

Writes a system register containing an address

__yieldExperimental

Generates a YIELD hint instruction.

_cls_u32Experimental

Counts the leading most significant bits set.

_cls_u64Experimental

Counts the leading most significant bits set.

_clz_u64Experimental

Count Leading Zeros.

_prefetchExperimental

Fetch the cache line that contains address p using the given RW and LOCALITY.

_rbit_u64Experimental

Reverse the bit order.

_rev_u64Experimental

Reverse the order of the bytes.

brkExperimental

Generates the trap instruction BRK 1

vaba_s8Experimentalneon
vaba_s16Experimentalneon
vaba_s32Experimentalneon
vaba_u8Experimentalneon
vaba_u16Experimentalneon
vaba_u32Experimentalneon
vabal_high_s8Experimentalneon

Signed Absolute difference and Accumulate Long

vabal_high_s16Experimentalneon

Signed Absolute difference and Accumulate Long

vabal_high_s32Experimentalneon

Signed Absolute difference and Accumulate Long

vabal_high_u8Experimentalneon

Unsigned Absolute difference and Accumulate Long

vabal_high_u16Experimentalneon

Unsigned Absolute difference and Accumulate Long

vabal_high_u32Experimentalneon

Unsigned Absolute difference and Accumulate Long

vabal_s8Experimentalneon

Signed Absolute difference and Accumulate Long

vabal_s16Experimentalneon

Signed Absolute difference and Accumulate Long

vabal_s32Experimentalneon

Signed Absolute difference and Accumulate Long

vabal_u8Experimentalneon

Unsigned Absolute difference and Accumulate Long

vabal_u16Experimentalneon

Unsigned Absolute difference and Accumulate Long

vabal_u32Experimentalneon

Unsigned Absolute difference and Accumulate Long

vabaq_s8Experimentalneon
vabaq_s16Experimentalneon
vabaq_s32Experimentalneon
vabaq_u8Experimentalneon
vabaq_u16Experimentalneon
vabaq_u32Experimentalneon
vabd_f32Experimentalneon

Floating-point absolute difference between the arguments

vabd_f64Experimentalneon

Floating-point absolute difference between the arguments

vabd_s8Experimentalneon

Absolute difference between the arguments

vabd_s16Experimentalneon

Absolute difference between the arguments

vabd_s32Experimentalneon

Absolute difference between the arguments

vabd_u8Experimentalneon

Absolute difference between the arguments

vabd_u16Experimentalneon

Absolute difference between the arguments

vabd_u32Experimentalneon

Absolute difference between the arguments

vabdl_high_s8Experimentalneon

Signed Absolute difference Long

vabdl_high_s16Experimentalneon

Signed Absolute difference Long

vabdl_high_s32Experimentalneon

Signed Absolute difference Long

vabdl_high_u8Experimentalneon

Unsigned Absolute difference Long

vabdl_high_u16Experimentalneon

Unsigned Absolute difference Long

vabdl_high_u32Experimentalneon

Unsigned Absolute difference Long

vabdl_s8Experimentalneon

Signed Absolute difference Long

vabdl_s16Experimentalneon

Signed Absolute difference Long

vabdl_s32Experimentalneon

Signed Absolute difference Long

vabdl_u8Experimentalneon

Unsigned Absolute difference Long

vabdl_u16Experimentalneon

Unsigned Absolute difference Long

vabdl_u32Experimentalneon

Unsigned Absolute difference Long

vabdq_f32Experimentalneon

Floating-point absolute difference between the arguments

vabdq_f64Experimentalneon

Floating-point absolute difference between the arguments

vabdq_s8Experimentalneon

Absolute difference between the arguments

vabdq_s16Experimentalneon

Absolute difference between the arguments

vabdq_s32Experimentalneon

Absolute difference between the arguments

vabdq_u8Experimentalneon

Absolute difference between the arguments

vabdq_u16Experimentalneon

Absolute difference between the arguments

vabdq_u32Experimentalneon

Absolute difference between the arguments

vabs_f32Experimentalneon

Floating-point absolute value

vabs_f64Experimentalneon

Floating-point absolute value

vabs_s8Experimentalneon

Absolute value (wrapping).

vabs_s16Experimentalneon

Absolute value (wrapping).

vabs_s32Experimentalneon

Absolute value (wrapping).

vabs_s64Experimentalneon

Absolute Value (wrapping).

vabsd_s64Experimentalneon

Absolute Value (wrapping).

vabsq_f32Experimentalneon

Floating-point absolute value

vabsq_f64Experimentalneon

Floating-point absolute value

vabsq_s8Experimentalneon

Absolute value (wrapping).

vabsq_s16Experimentalneon

Absolute value (wrapping).

vabsq_s32Experimentalneon

Absolute value (wrapping).

vabsq_s64Experimentalneon

Absolute Value (wrapping).

vadd_f32Experimentalneon

Vector add.

vadd_f64Experimentalneon

Vector add.

vadd_s8Experimentalneon

Vector add.

vadd_s16Experimentalneon

Vector add.

vadd_s32Experimentalneon

Vector add.

vadd_s64Experimentalneon

Vector add.

vadd_u8Experimentalneon

Vector add.

vadd_u16Experimentalneon

Vector add.

vadd_u32Experimentalneon

Vector add.

vadd_u64Experimentalneon

Vector add.

vaddd_s64Experimentalneon

Vector add.

vaddd_u64Experimentalneon

Vector add.

vaddhn_high_s16Experimentalneon

Add returning High Narrow (high half).

vaddhn_high_s32Experimentalneon

Add returning High Narrow (high half).

vaddhn_high_s64Experimentalneon

Add returning High Narrow (high half).

vaddhn_high_u16Experimentalneon

Add returning High Narrow (high half).

vaddhn_high_u32Experimentalneon

Add returning High Narrow (high half).

vaddhn_high_u64Experimentalneon

Add returning High Narrow (high half).

vaddhn_s16Experimentalneon

Add returning High Narrow.

vaddhn_s32Experimentalneon

Add returning High Narrow.

vaddhn_s64Experimentalneon

Add returning High Narrow.

vaddhn_u16Experimentalneon

Add returning High Narrow.

vaddhn_u32Experimentalneon

Add returning High Narrow.

vaddhn_u64Experimentalneon

Add returning High Narrow.

vaddl_high_s8Experimentalneon

Signed Add Long (vector, high half).

vaddl_high_s16Experimentalneon

Signed Add Long (vector, high half).

vaddl_high_s32Experimentalneon

Signed Add Long (vector, high half).

vaddl_high_u8Experimentalneon

Unsigned Add Long (vector, high half).

vaddl_high_u16Experimentalneon

Unsigned Add Long (vector, high half).

vaddl_high_u32Experimentalneon

Unsigned Add Long (vector, high half).

vaddl_s8Experimentalneon

Signed Add Long (vector).

vaddl_s16Experimentalneon

Signed Add Long (vector).

vaddl_s32Experimentalneon

Signed Add Long (vector).

vaddl_u8Experimentalneon

Unsigned Add Long (vector).

vaddl_u16Experimentalneon

Unsigned Add Long (vector).

vaddl_u32Experimentalneon

Unsigned Add Long (vector).

vaddlv_s8Experimentalneon

Signed Add Long across Vector

vaddlv_s16Experimentalneon

Signed Add Long across Vector

vaddlv_s32Experimentalneon

Signed Add Long across Vector

vaddlv_u8Experimentalneon

Unsigned Add Long across Vector

vaddlv_u16Experimentalneon

Unsigned Add Long across Vector

vaddlv_u32Experimentalneon

Unsigned Add Long across Vector

vaddlvq_s8Experimentalneon

Signed Add Long across Vector

vaddlvq_s16Experimentalneon

Signed Add Long across Vector

vaddlvq_s32Experimentalneon

Signed Add Long across Vector

vaddlvq_u8Experimentalneon

Unsigned Add Long across Vector

vaddlvq_u16Experimentalneon

Unsigned Add Long across Vector

vaddlvq_u32Experimentalneon

Unsigned Add Long across Vector

vaddq_f32Experimentalneon

Vector add.

vaddq_f64Experimentalneon

Vector add.

vaddq_s8Experimentalneon

Vector add.

vaddq_s16Experimentalneon

Vector add.

vaddq_s32Experimentalneon

Vector add.

vaddq_s64Experimentalneon

Vector add.

vaddq_u8Experimentalneon

Vector add.

vaddq_u16Experimentalneon

Vector add.

vaddq_u32Experimentalneon

Vector add.

vaddq_u64Experimentalneon

Vector add.

vaddv_s8Experimentalneon

Add across vector

vaddv_s16Experimentalneon

Add across vector

vaddv_s32Experimentalneon

Add across vector

vaddv_u8Experimentalneon

Add across vector

vaddv_u16Experimentalneon

Add across vector

vaddv_u32Experimentalneon

Add across vector

vaddvq_s8Experimentalneon

Add across vector

vaddvq_s16Experimentalneon

Add across vector

vaddvq_s32Experimentalneon

Add across vector

vaddvq_s64Experimentalneon

Add across vector

vaddvq_u8Experimentalneon

Add across vector

vaddvq_u16Experimentalneon

Add across vector

vaddvq_u32Experimentalneon

Add across vector

vaddvq_u64Experimentalneon

Add across vector
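
The `vaddv*` reductions above fold a whole vector into one scalar of the lane type, while the `vaddlv*` forms widen each lane first so the sum cannot overflow. A small sketch of the difference, assuming NEON on the enclosing function:

```rust
use core::arch::aarch64::*;

unsafe fn sums(v: uint8x16_t) -> (u8, u16) {
    let narrow = vaddvq_u8(v); // adds sixteen u8 lanes in u8 (can wrap)
    let wide = vaddlvq_u8(v);  // widens to u16 before adding (cannot wrap)
    (narrow, wide)
}
```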

vaddw_high_s8Experimentalneon

Signed Add Wide (high half).

vaddw_high_s16Experimentalneon

Signed Add Wide (high half).

vaddw_high_s32Experimentalneon

Signed Add Wide (high half).

vaddw_high_u8Experimentalneon

Unsigned Add Wide (high half).

vaddw_high_u16Experimentalneon

Unsigned Add Wide (high half).

vaddw_high_u32Experimentalneon

Unsigned Add Wide (high half).

vaddw_s8Experimentalneon

Signed Add Wide.

vaddw_s16Experimentalneon

Signed Add Wide.

vaddw_s32Experimentalneon

Signed Add Wide.

vaddw_u8Experimentalneon

Unsigned Add Wide.

vaddw_u16Experimentalneon

Unsigned Add Wide.

vaddw_u32Experimentalneon

Unsigned Add Wide.

vaesdq_u8Experimentalaes

AES single round decryption.

vaeseq_u8Experimentalaes

AES single round encryption.

vaesimcq_u8Experimentalaes

AES inverse mix columns.

vaesmcq_u8Experimentalaes

AES mix columns.
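
A sketch of one full AES encryption round built from these intrinsics (AESE performs AddRoundKey, SubBytes, and ShiftRows; AESMC performs MixColumns, which the final AES round omits). The `aes` target feature is assumed:

```rust
use core::arch::aarch64::*;

#[target_feature(enable = "aes")]
unsafe fn aes_enc_round(state: uint8x16_t, round_key: uint8x16_t) -> uint8x16_t {
    vaesmcq_u8(vaeseq_u8(state, round_key)) // AESE, then AESMC
}
```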

vand_s8Experimentalneon

Vector bitwise and

vand_s16Experimentalneon

Vector bitwise and

vand_s32Experimentalneon

Vector bitwise and

vand_s64Experimentalneon

Vector bitwise and

vand_u8Experimentalneon

Vector bitwise and

vand_u16Experimentalneon

Vector bitwise and

vand_u32Experimentalneon

Vector bitwise and

vand_u64Experimentalneon

Vector bitwise and

vandq_s8Experimentalneon

Vector bitwise and

vandq_s16Experimentalneon

Vector bitwise and

vandq_s32Experimentalneon

Vector bitwise and

vandq_s64Experimentalneon

Vector bitwise and

vandq_u8Experimentalneon

Vector bitwise and

vandq_u16Experimentalneon

Vector bitwise and

vandq_u32Experimentalneon

Vector bitwise and

vandq_u64Experimentalneon

Vector bitwise and

vbic_s8Experimentalneon

Vector bitwise bit clear

vbic_s16Experimentalneon

Vector bitwise bit clear

vbic_s32Experimentalneon

Vector bitwise bit clear

vbic_s64Experimentalneon

Vector bitwise bit clear

vbic_u8Experimentalneon

Vector bitwise bit clear

vbic_u16Experimentalneon

Vector bitwise bit clear

vbic_u32Experimentalneon

Vector bitwise bit clear

vbic_u64Experimentalneon

Vector bitwise bit clear

vbicq_s8Experimentalneon

Vector bitwise bit clear

vbicq_s16Experimentalneon

Vector bitwise bit clear

vbicq_s32Experimentalneon

Vector bitwise bit clear

vbicq_s64Experimentalneon

Vector bitwise bit clear

vbicq_u8Experimentalneon

Vector bitwise bit clear

vbicq_u16Experimentalneon

Vector bitwise bit clear

vbicq_u32Experimentalneon

Vector bitwise bit clear

vbicq_u64Experimentalneon

Vector bitwise bit clear

vbsl_f32Experimentalneon

Bitwise Select.

vbsl_f64Experimentalneon

Bitwise Select instructions. This instruction sets each bit in the destination SIMD&FP register to the corresponding bit from the first source SIMD&FP register when the original destination bit was 1, otherwise from the second source SIMD&FP register.

vbsl_p8Experimentalneon

Bitwise Select.

vbsl_p16Experimentalneon

Bitwise Select.

vbsl_p64Experimentalneon

Bitwise Select.

vbsl_s8Experimentalneon

Bitwise Select instructions. This instruction sets each bit in the destination SIMD&FP register to the corresponding bit from the first source SIMD&FP register when the original destination bit was 1, otherwise from the second source SIMD&FP register.

vbsl_s16Experimentalneon

Bitwise Select.

vbsl_s32Experimentalneon

Bitwise Select.

vbsl_s64Experimentalneon

Bitwise Select.

vbsl_u8Experimentalneon

Bitwise Select.

vbsl_u16Experimentalneon

Bitwise Select.

vbsl_u32Experimentalneon

Bitwise Select.

vbsl_u64Experimentalneon

Bitwise Select.

vbslq_f32Experimentalneon

Bitwise Select. (128-bit)

vbslq_f64Experimentalneon

Bitwise Select. (128-bit)

vbslq_p8Experimentalneon

Bitwise Select. (128-bit)

vbslq_p16Experimentalneon

Bitwise Select. (128-bit)

vbslq_p64Experimentalneon

Bitwise Select. (128-bit)

vbslq_s8Experimentalneon

Bitwise Select. (128-bit)

vbslq_s16Experimentalneon

Bitwise Select. (128-bit)

vbslq_s32Experimentalneon

Bitwise Select. (128-bit)

vbslq_s64Experimentalneon

Bitwise Select. (128-bit)

vbslq_u8Experimentalneon

Bitwise Select. (128-bit)

vbslq_u16Experimentalneon

Bitwise Select. (128-bit)

vbslq_u32Experimentalneon

Bitwise Select. (128-bit)

vbslq_u64Experimentalneon

Bitwise Select. (128-bit)
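
`vbsl*` behaves as a per-bit conditional move: where the first (mask) argument has a bit set, the result takes that bit from the second argument, otherwise from the third. The comparison intrinsics below produce exactly the all-ones/all-zeros lane masks this expects, as in this sketch (NEON assumed):

```rust
use core::arch::aarch64::*;

unsafe fn lanewise_max_u32(a: uint32x4_t, b: uint32x4_t) -> uint32x4_t {
    let mask = vcgtq_u32(a, b); // all-ones in lanes where a > b, else all-zeros
    vbslq_u32(mask, a, b)       // take a's bits where the mask is set
}
```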

vcage_f32Experimentalneon

Floating-point absolute compare greater than or equal

vcage_f64Experimentalneon

Floating-point absolute compare greater than or equal

vcageq_f32Experimentalneon

Floating-point absolute compare greater than or equal

vcageq_f64Experimentalneon

Floating-point absolute compare greater than or equal

vcagt_f32Experimentalneon

Floating-point absolute compare greater than

vcagt_f64Experimentalneon

Floating-point absolute compare greater than

vcagtq_f32Experimentalneon

Floating-point absolute compare greater than

vcagtq_f64Experimentalneon

Floating-point absolute compare greater than

vcale_f32Experimentalneon

Floating-point absolute compare less than or equal

vcale_f64Experimentalneon

Floating-point absolute compare less than or equal

vcaleq_f32Experimentalneon

Floating-point absolute compare less than or equal

vcaleq_f64Experimentalneon

Floating-point absolute compare less than or equal

vcalt_f32Experimentalneon

Floating-point absolute compare less than

vcalt_f64Experimentalneon

Floating-point absolute compare less than

vcaltq_f32Experimentalneon

Floating-point absolute compare less than

vcaltq_f64Experimentalneon

Floating-point absolute compare less than

vceq_f32Experimentalneon

Floating-point compare equal

vceq_f64Experimentalneon

Floating-point compare equal

vceq_p8Experimentalneon

Compare bitwise Equal (vector)

vceq_p64Experimentalneon

Compare bitwise Equal (vector)

vceq_s8Experimentalneon

Compare bitwise Equal (vector)

vceq_s16Experimentalneon

Compare bitwise Equal (vector)

vceq_s32Experimentalneon

Compare bitwise Equal (vector)

vceq_s64Experimentalneon

Compare bitwise Equal (vector)

vceq_u8Experimentalneon

Compare bitwise Equal (vector)

vceq_u16Experimentalneon

Compare bitwise Equal (vector)

vceq_u32Experimentalneon

Compare bitwise Equal (vector)

vceq_u64Experimentalneon

Compare bitwise Equal (vector)

vceqq_f32Experimentalneon

Floating-point compare equal

vceqq_f64Experimentalneon

Floating-point compare equal

vceqq_p8Experimentalneon

Compare bitwise Equal (vector)

vceqq_p64Experimentalneon

Compare bitwise Equal (vector)

vceqq_s8Experimentalneon

Compare bitwise Equal (vector)

vceqq_s16Experimentalneon

Compare bitwise Equal (vector)

vceqq_s32Experimentalneon

Compare bitwise Equal (vector)

vceqq_s64Experimentalneon

Compare bitwise Equal (vector)

vceqq_u8Experimentalneon

Compare bitwise Equal (vector)

vceqq_u16Experimentalneon

Compare bitwise Equal (vector)

vceqq_u32Experimentalneon

Compare bitwise Equal (vector)

vceqq_u64Experimentalneon

Compare bitwise Equal (vector)

vceqz_f32Experimentalneon

Floating-point compare bitwise equal to zero

vceqz_f64Experimentalneon

Floating-point compare bitwise equal to zero

vceqz_p8Experimentalneon

Signed compare bitwise equal to zero

vceqz_p64Experimentalneon

Signed compare bitwise equal to zero

vceqz_s8Experimentalneon

Signed compare bitwise equal to zero

vceqz_s16Experimentalneon

Signed compare bitwise equal to zero

vceqz_s32Experimentalneon

Signed compare bitwise equal to zero

vceqz_s64Experimentalneon

Signed compare bitwise equal to zero

vceqz_u8Experimentalneon

Unsigned compare bitwise equal to zero

vceqz_u16Experimentalneon

Unsigned compare bitwise equal to zero

vceqz_u32Experimentalneon

Unsigned compare bitwise equal to zero

vceqz_u64Experimentalneon

Unsigned compare bitwise equal to zero

vceqzq_f32Experimentalneon

Floating-point compare bitwise equal to zero

vceqzq_f64Experimentalneon

Floating-point compare bitwise equal to zero

vceqzq_p8Experimentalneon

Signed compare bitwise equal to zero

vceqzq_p64Experimentalneon

Signed compare bitwise equal to zero

vceqzq_s8Experimentalneon

Signed compare bitwise equal to zero

vceqzq_s16Experimentalneon

Signed compare bitwise equal to zero

vceqzq_s32Experimentalneon

Signed compare bitwise equal to zero

vceqzq_s64Experimentalneon

Signed compare bitwise equal to zero

vceqzq_u8Experimentalneon

Unsigned compare bitwise equal to zero

vceqzq_u16Experimentalneon

Unsigned compare bitwise equal to zero

vceqzq_u32Experimentalneon

Unsigned compare bitwise equal to zero

vceqzq_u64Experimentalneon

Unsigned compare bitwise equal to zero

vcge_f32Experimentalneon

Floating-point compare greater than or equal

vcge_f64Experimentalneon

Floating-point compare greater than or equal

vcge_s8Experimentalneon

Compare signed greater than or equal

vcge_s16Experimentalneon

Compare signed greater than or equal

vcge_s32Experimentalneon

Compare signed greater than or equal

vcge_s64Experimentalneon

Compare signed greater than or equal

vcge_u8Experimentalneon

Compare unsigned greater than or equal

vcge_u16Experimentalneon

Compare unsigned greater than or equal

vcge_u32Experimentalneon

Compare unsigned greater than or equal

vcge_u64Experimentalneon

Compare unsigned greater than or equal

vcgeq_f32Experimentalneon

Floating-point compare greater than or equal

vcgeq_f64Experimentalneon

Floating-point compare greater than or equal

vcgeq_s8Experimentalneon

Compare signed greater than or equal

vcgeq_s16Experimentalneon

Compare signed greater than or equal

vcgeq_s32Experimentalneon

Compare signed greater than or equal

vcgeq_s64Experimentalneon

Compare signed greater than or equal

vcgeq_u8Experimentalneon

Compare unsigned greater than or equal

vcgeq_u16Experimentalneon

Compare unsigned greater than or equal

vcgeq_u32Experimentalneon

Compare unsigned greater than or equal

vcgeq_u64Experimentalneon

Compare unsigned greater than or equal

vcgez_f32Experimentalneon

Floating-point compare greater than or equal to zero

vcgez_f64Experimentalneon

Floating-point compare greater than or equal to zero

vcgez_s8Experimentalneon

Compare signed greater than or equal to zero

vcgez_s16Experimentalneon

Compare signed greater than or equal to zero

vcgez_s32Experimentalneon

Compare signed greater than or equal to zero

vcgez_s64Experimentalneon

Compare signed greater than or equal to zero

vcgezq_f32Experimentalneon

Floating-point compare greater than or equal to zero

vcgezq_f64Experimentalneon

Floating-point compare greater than or equal to zero

vcgezq_s8Experimentalneon

Compare signed greater than or equal to zero

vcgezq_s16Experimentalneon

Compare signed greater than or equal to zero

vcgezq_s32Experimentalneon

Compare signed greater than or equal to zero

vcgezq_s64Experimentalneon

Compare signed greater than or equal to zero

vcgt_f32Experimentalneon

Floating-point compare greater than

vcgt_f64Experimentalneon

Floating-point compare greater than

vcgt_s8Experimentalneon

Compare signed greater than

vcgt_s16Experimentalneon

Compare signed greater than

vcgt_s32Experimentalneon

Compare signed greater than

vcgt_s64Experimentalneon

Compare signed greater than

vcgt_u8Experimentalneon

Compare unsigned higher

vcgt_u16Experimentalneon

Compare unsigned higher

vcgt_u32Experimentalneon

Compare unsigned higher

vcgt_u64Experimentalneon

Compare unsigned higher

vcgtq_f32Experimentalneon

Floating-point compare greater than

vcgtq_f64Experimentalneon

Floating-point compare greater than

vcgtq_s8Experimentalneon

Compare signed greater than

vcgtq_s16Experimentalneon

Compare signed greater than

vcgtq_s32Experimentalneon

Compare signed greater than

vcgtq_s64Experimentalneon

Compare signed greater than

vcgtq_u8Experimentalneon

Compare unsigned higher

vcgtq_u16Experimentalneon

Compare unsigned higher

vcgtq_u32Experimentalneon

Compare unsigned higher

vcgtq_u64Experimentalneon

Compare unsigned higher

vcgtz_f32Experimentalneon

Floating-point compare greater than zero

vcgtz_f64Experimentalneon

Floating-point compare greater than zero

vcgtz_s8Experimentalneon

Compare signed greater than zero

vcgtz_s16Experimentalneon

Compare signed greater than zero

vcgtz_s32Experimentalneon

Compare signed greater than zero

vcgtz_s64Experimentalneon

Compare signed greater than zero

vcgtzq_f32Experimentalneon

Floating-point compare greater than zero

vcgtzq_f64Experimentalneon

Floating-point compare greater than zero

vcgtzq_s8Experimentalneon

Compare signed greater than zero

vcgtzq_s16Experimentalneon

Compare signed greater than zero

vcgtzq_s32Experimentalneon

Compare signed greater than zero

vcgtzq_s64Experimentalneon

Compare signed greater than zero

vcle_f32Experimentalneon

Floating-point compare less than or equal

vcle_f64Experimentalneon

Floating-point compare less than or equal

vcle_s8Experimentalneon

Compare signed less than or equal

vcle_s16Experimentalneon

Compare signed less than or equal

vcle_s32Experimentalneon

Compare signed less than or equal

vcle_s64Experimentalneon

Compare signed less than or equal

vcle_u8Experimentalneon

Compare unsigned less than or equal

vcle_u16Experimentalneon

Compare unsigned less than or equal

vcle_u32Experimentalneon

Compare unsigned less than or equal

vcle_u64Experimentalneon

Compare unsigned less than or equal

vcleq_f32Experimentalneon

Floating-point compare less than or equal

vcleq_f64Experimentalneon

Floating-point compare less than or equal

vcleq_s8Experimentalneon

Compare signed less than or equal

vcleq_s16Experimentalneon

Compare signed less than or equal

vcleq_s32Experimentalneon

Compare signed less than or equal

vcleq_s64Experimentalneon

Compare signed less than or equal

vcleq_u8Experimentalneon

Compare unsigned less than or equal

vcleq_u16Experimentalneon

Compare unsigned less than or equal

vcleq_u32Experimentalneon

Compare unsigned less than or equal

vcleq_u64Experimentalneon

Compare unsigned less than or equal

vclez_f32Experimentalneon

Floating-point compare less than or equal to zero

vclez_f64Experimentalneon

Floating-point compare less than or equal to zero

vclez_s8Experimentalneon

Compare signed less than or equal to zero

vclez_s16Experimentalneon

Compare signed less than or equal to zero

vclez_s32Experimentalneon

Compare signed less than or equal to zero

vclez_s64Experimentalneon

Compare signed less than or equal to zero

vclezq_f32Experimentalneon

Floating-point compare less than or equal to zero

vclezq_f64Experimentalneon

Floating-point compare less than or equal to zero

vclezq_s8Experimentalneon

Compare signed less than or equal to zero

vclezq_s16Experimentalneon

Compare signed less than or equal to zero

vclezq_s32Experimentalneon

Compare signed less than or equal to zero

vclezq_s64Experimentalneon

Compare signed less than or equal to zero

vcls_s8Experimentalneon

Count leading sign bits

vcls_s16Experimentalneon

Count leading sign bits

vcls_s32Experimentalneon

Count leading sign bits

vclsq_s8Experimentalneon

Count leading sign bits

vclsq_s16Experimentalneon

Count leading sign bits

vclsq_s32Experimentalneon

Count leading sign bits

vclt_f32Experimentalneon

Floating-point compare less than

vclt_f64Experimentalneon

Floating-point compare less than

vclt_s8Experimentalneon

Compare signed less than

vclt_s16Experimentalneon

Compare signed less than

vclt_s32Experimentalneon

Compare signed less than

vclt_s64Experimentalneon

Compare signed less than

vclt_u8Experimentalneon

Compare unsigned less than

vclt_u16Experimentalneon

Compare unsigned less than

vclt_u32Experimentalneon

Compare unsigned less than

vclt_u64Experimentalneon

Compare unsigned less than

vcltq_f32Experimentalneon

Floating-point compare less than

vcltq_f64Experimentalneon

Floating-point compare less than

vcltq_s8Experimentalneon

Compare signed less than

vcltq_s16Experimentalneon

Compare signed less than

vcltq_s32Experimentalneon

Compare signed less than

vcltq_s64Experimentalneon

Compare signed less than

vcltq_u8Experimentalneon

Compare unsigned less than

vcltq_u16Experimentalneon

Compare unsigned less than

vcltq_u32Experimentalneon

Compare unsigned less than

vcltq_u64Experimentalneon

Compare unsigned less than

vcltz_f32Experimentalneon

Floating-point compare less than zero

vcltz_f64Experimentalneon

Floating-point compare less than zero

vcltz_s8Experimentalneon

Compare signed less than zero

vcltz_s16Experimentalneon

Compare signed less than zero

vcltz_s32Experimentalneon

Compare signed less than zero

vcltz_s64Experimentalneon

Compare signed less than zero

vcltzq_f32Experimentalneon

Floating-point compare less than zero

vcltzq_f64Experimentalneon

Floating-point compare less than zero

vcltzq_s8Experimentalneon

Compare signed less than zero

vcltzq_s16Experimentalneon

Compare signed less than zero

vcltzq_s32Experimentalneon

Compare signed less than zero

vcltzq_s64Experimentalneon

Compare signed less than zero

vclz_s8Experimentalneon

Signed count leading zero bits

vclz_s16Experimentalneon

Signed count leading zero bits

vclz_s32Experimentalneon

Signed count leading zero bits

vclz_u8Experimentalneon

Unsigned count leading zero bits

vclz_u16Experimentalneon

Unsigned count leading zero bits

vclz_u32Experimentalneon

Unsigned count leading zero bits

vclzq_s8Experimentalneon

Signed count leading zero bits

vclzq_s16Experimentalneon

Signed count leading zero bits

vclzq_s32Experimentalneon

Signed count leading zero bits

vclzq_u8Experimentalneon

Unsigned count leading zero bits

vclzq_u16Experimentalneon

Unsigned count leading zero bits

vclzq_u32Experimentalneon

Unsigned count leading zero bits

vcnt_p8Experimentalneon

Population count per byte.

vcnt_s8Experimentalneon

Population count per byte.

vcnt_u8Experimentalneon

Population count per byte.

vcntq_p8Experimentalneon

Population count per byte.

vcntq_s8Experimentalneon

Population count per byte.

vcntq_u8Experimentalneon

Population count per byte.

vcombine_f32Experimentalneon

Vector combine

vcombine_f64Experimentalneon

Vector combine

vcombine_p8Experimentalneon

Vector combine

vcombine_p16Experimentalneon

Vector combine

vcombine_p64Experimentalneon

Vector combine

vcombine_s8Experimentalneon

Vector combine

vcombine_s16Experimentalneon

Vector combine

vcombine_s32Experimentalneon

Vector combine

vcombine_s64Experimentalneon

Vector combine

vcombine_u8Experimentalneon

Vector combine

vcombine_u16Experimentalneon

Vector combine

vcombine_u32Experimentalneon

Vector combine

vcombine_u64Experimentalneon

Vector combine
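
`vcombine_*` concatenates two 64-bit vectors into one 128-bit vector, with the first argument becoming the low half; it is the usual way to move from the narrow types to their `q` counterparts. A minimal sketch (NEON assumed):

```rust
use core::arch::aarch64::*;

unsafe fn widen_pair(lo: uint8x8_t, hi: uint8x8_t) -> uint8x16_t {
    vcombine_u8(lo, hi) // lanes 0..=7 from `lo`, lanes 8..=15 from `hi`
}
```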

vcopy_lane_f32Experimentalneon

Insert vector element from another vector element

vcopy_lane_f64Experimentalneon

Duplicate vector element to vector or scalar

vcopy_lane_p8Experimentalneon

Insert vector element from another vector element

vcopy_lane_p16Experimentalneon

Insert vector element from another vector element

vcopy_lane_p64Experimentalneon

Duplicate vector element to vector or scalar

vcopy_lane_s8Experimentalneon

Insert vector element from another vector element

vcopy_lane_s16Experimentalneon

Insert vector element from another vector element

vcopy_lane_s32Experimentalneon

Insert vector element from another vector element

vcopy_lane_s64Experimentalneon

Duplicate vector element to vector or scalar

vcopy_lane_u8Experimentalneon

Insert vector element from another vector element

vcopy_lane_u16Experimentalneon

Insert vector element from another vector element

vcopy_lane_u32Experimentalneon

Insert vector element from another vector element

vcopy_lane_u64Experimentalneon

Duplicate vector element to vector or scalar

vcopy_laneq_f32Experimentalneon

Insert vector element from another vector element

vcopy_laneq_f64Experimentalneon

Duplicate vector element to vector or scalar

vcopy_laneq_p8Experimentalneon

Insert vector element from another vector element

vcopy_laneq_p16Experimentalneon

Insert vector element from another vector element

vcopy_laneq_p64Experimentalneon

Duplicate vector element to vector or scalar

vcopy_laneq_s8Experimentalneon

Insert vector element from another vector element

vcopy_laneq_s16Experimentalneon

Insert vector element from another vector element

vcopy_laneq_s32Experimentalneon

Insert vector element from another vector element

vcopy_laneq_s64Experimentalneon

Duplicate vector element to vector or scalar

vcopy_laneq_u8Experimentalneon

Insert vector element from another vector element

vcopy_laneq_u16Experimentalneon

Insert vector element from another vector element

vcopy_laneq_u32Experimentalneon

Insert vector element from another vector element

vcopy_laneq_u64Experimentalneon

Duplicate vector element to vector or scalar

vcopyq_lane_f32Experimentalneon

Insert vector element from another vector element

vcopyq_lane_f64Experimentalneon

Insert vector element from another vector element

vcopyq_lane_p8Experimentalneon

Insert vector element from another vector element

vcopyq_lane_p16Experimentalneon

Insert vector element from another vector element

vcopyq_lane_p64Experimentalneon

Insert vector element from another vector element

vcopyq_lane_s8Experimentalneon

Insert vector element from another vector element

vcopyq_lane_s16Experimentalneon

Insert vector element from another vector element

vcopyq_lane_s32Experimentalneon

Insert vector element from another vector element

vcopyq_lane_s64Experimentalneon

Insert vector element from another vector element

vcopyq_lane_u8Experimentalneon

Insert vector element from another vector element

vcopyq_lane_u16Experimentalneon

Insert vector element from another vector element

vcopyq_lane_u32Experimentalneon

Insert vector element from another vector element

vcopyq_lane_u64Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_f32Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_f64Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_p8Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_p16Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_p64Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_s8Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_s16Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_s32Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_s64Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_u8Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_u16Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_u32Experimentalneon

Insert vector element from another vector element

vcopyq_laneq_u64Experimentalneon

Insert vector element from another vector element

vcreate_f32Experimentalneon

Insert vector element from another vector element

vcreate_f64Experimentalneon

Insert vector element from another vector element

vcreate_p8Experimentalneon

Insert vector element from another vector element

vcreate_p16Experimentalneon

Insert vector element from another vector element

vcreate_p64Experimentalneon,aes

Insert vector element from another vector element

vcreate_s8Experimentalneon

Insert vector element from another vector element

vcreate_s32Experimentalneon

Insert vector element from another vector element

vcreate_s64Experimentalneon

Insert vector element from another vector element

vcreate_u8Experimentalneon

Insert vector element from another vector element

vcreate_u32Experimentalneon

Insert vector element from another vector element

vcreate_u64Experimentalneon

Insert vector element from another vector element

vcvt_f32_f64Experimentalneon

Floating-point convert to lower precision narrow

vcvt_f32_s32Experimentalneon

Fixed-point convert to floating-point

vcvt_f32_u32Experimentalneon

Fixed-point convert to floating-point

vcvt_f64_f32Experimentalneon

Floating-point convert to higher precision long

vcvt_f64_s64Experimentalneon

Fixed-point convert to floating-point

vcvt_f64_u64Experimentalneon

Fixed-point convert to floating-point

vcvt_high_f32_f64Experimentalneon

Floating-point convert to lower precision narrow

vcvt_high_f64_f32Experimentalneon

Floating-point convert to higher precision long

vcvt_n_f64_s64Experimentalneon

Fixed-point convert to floating-point

vcvt_n_f64_u64Experimentalneon

Fixed-point convert to floating-point

vcvt_n_s64_f64Experimentalneon

Floating-point convert to fixed-point, rounding toward zero

vcvt_n_u64_f64Experimentalneon

Floating-point convert to fixed-point, rounding toward zero

vcvt_s32_f32Experimentalneon

Floating-point convert to signed fixed-point, rounding toward zero

vcvt_s64_f64Experimentalneon

Floating-point convert to signed fixed-point, rounding toward zero

vcvt_u32_f32Experimentalneon

Floating-point convert to unsigned fixed-point, rounding toward zero

vcvt_u64_f64Experimentalneon

Floating-point convert to unsigned fixed-point, rounding toward zero

vcvta_s32_f32Experimentalneon

Floating-point convert to signed integer, rounding to nearest with ties to away

vcvta_s64_f64Experimentalneon

Floating-point convert to signed integer, rounding to nearest with ties to away

vcvta_u32_f32Experimentalneon

Floating-point convert to unsigned integer, rounding to nearest with ties to away

vcvta_u64_f64Experimentalneon

Floating-point convert to unsigned integer, rounding to nearest with ties to away

vcvtad_s64_f64Experimentalneon

Floating-point convert to integer, rounding to nearest with ties to away

vcvtad_u64_f64Experimentalneon

Floating-point convert to integer, rounding to nearest with ties to away

vcvtaq_s32_f32Experimentalneon

Floating-point convert to signed integer, rounding to nearest with ties to away

vcvtaq_s64_f64Experimentalneon

Floating-point convert to signed integer, rounding to nearest with ties to away

vcvtaq_u32_f32Experimentalneon

Floating-point convert to unsigned integer, rounding to nearest with ties to away

vcvtaq_u64_f64Experimentalneon

Floating-point convert to unsigned integer, rounding to nearest with ties to away

vcvtas_s32_f32Experimentalneon

Floating-point convert to integer, rounding to nearest with ties to away

vcvtas_u32_f32Experimentalneon

Floating-point convert to integer, rounding to nearest with ties to away

vcvtd_f64_s64Experimentalneon

Fixed-point convert to floating-point

vcvtd_f64_u64Experimentalneon

Fixed-point convert to floating-point

vcvtd_n_f64_s64Experimentalneon

Fixed-point convert to floating-point

vcvtd_n_f64_u64Experimentalneon

Fixed-point convert to floating-point

vcvtd_n_s64_f64Experimentalneon

Floating-point convert to fixed-point, rounding toward zero

vcvtd_n_u64_f64Experimentalneon

Floating-point convert to fixed-point, rounding toward zero

vcvtd_s64_f64Experimentalneon

Fixed-point convert to floating-point

vcvtd_u64_f64Experimentalneon

Fixed-point convert to floating-point

vcvtm_s32_f32Experimentalneon

Floating-point convert to signed integer, rounding toward minus infinity

vcvtm_s64_f64Experimentalneon

Floating-point convert to signed integer, rounding toward minus infinity

vcvtm_u32_f32Experimentalneon

Floating-point convert to unsigned integer, rounding toward minus infinity

vcvtm_u64_f64Experimentalneon

Floating-point convert to unsigned integer, rounding toward minus infinity

vcvtmd_s64_f64Experimentalneon

Floating-point convert to signed integer, rounding toward minus infinity

vcvtmd_u64_f64Experimentalneon

Floating-point convert to unsigned integer, rounding toward minus infinity

vcvtmq_s32_f32Experimentalneon

Floating-point convert to signed integer, rounding toward minus infinity

vcvtmq_s64_f64Experimentalneon

Floating-point convert to signed integer, rounding toward minus infinity

vcvtmq_u32_f32Experimentalneon

Floating-point convert to unsigned integer, rounding toward minus infinity

vcvtmq_u64_f64Experimentalneon

Floating-point convert to unsigned integer, rounding toward minus infinity

vcvtms_s32_f32Experimentalneon

Floating-point convert to signed integer, rounding toward minus infinity

vcvtms_u32_f32Experimentalneon

Floating-point convert to unsigned integer, rounding toward minus infinity

vcvtn_s32_f32Experimentalneon

Floating-point convert to signed integer, rounding to nearest with ties to even

vcvtn_s64_f64Experimentalneon

Floating-point convert to signed integer, rounding to nearest with ties to even

vcvtn_u32_f32Experimentalneon

Floating-point convert to unsigned integer, rounding to nearest with ties to even

vcvtn_u64_f64Experimentalneon

Floating-point convert to unsigned integer, rounding to nearest with ties to even

vcvtnd_s64_f64Experimentalneon

Floating-point convert to signed integer, rounding to nearest with ties to even

vcvtnd_u64_f64Experimentalneon

Floating-point convert to unsigned integer, rounding to nearest with ties to even

vcvtnq_s32_f32Experimentalneon

Floating-point convert to signed integer, rounding to nearest with ties to even

vcvtnq_s64_f64Experimentalneon

Floating-point convert to signed integer, rounding to nearest with ties to even

vcvtnq_u32_f32Experimentalneon

Floating-point convert to unsigned integer, rounding to nearest with ties to even

vcvtnq_u64_f64Experimentalneon

Floating-point convert to unsigned integer, rounding to nearest with ties to even

vcvtns_s32_f32Experimentalneon

Floating-point convert to signed integer, rounding to nearest with ties to even

vcvtns_u32_f32Experimentalneon

Floating-point convert to unsigned integer, rounding to nearest with ties to even

vcvtp_s32_f32Experimentalneon

Floating-point convert to signed integer, rounding toward plus infinity

vcvtp_s64_f64Experimentalneon

Floating-point convert to signed integer, rounding toward plus infinity

vcvtp_u32_f32Experimentalneon

Floating-point convert to unsigned integer, rounding toward plus infinity

vcvtp_u64_f64Experimentalneon

Floating-point convert to unsigned integer, rounding toward plus infinity

vcvtpd_s64_f64Experimentalneon

Floating-point convert to signed integer, rounding toward plus infinity

vcvtpd_u64_f64Experimentalneon

Floating-point convert to unsigned integer, rounding toward plus infinity

vcvtpq_s32_f32Experimentalneon

Floating-point convert to signed integer, rounding toward plus infinity

vcvtpq_s64_f64Experimentalneon

Floating-point convert to signed integer, rounding toward plus infinity

vcvtpq_u32_f32Experimentalneon

Floating-point convert to unsigned integer, rounding toward plus infinity

vcvtpq_u64_f64Experimentalneon

Floating-point convert to unsigned integer, rounding toward plus infinity

vcvtps_s32_f32Experimentalneon

Floating-point convert to signed integer, rounding toward plus infinity

vcvtps_u32_f32Experimentalneon

Floating-point convert to unsigned integer, rounding toward plus infinity

vcvtq_f32_s32Experimentalneon

Fixed-point convert to floating-point

vcvtq_f32_u32Experimentalneon

Fixed-point convert to floating-point

vcvtq_f64_s64Experimentalneon

Fixed-point convert to floating-point

vcvtq_f64_u64Experimentalneon

Fixed-point convert to floating-point

vcvtq_n_f64_s64Experimentalneon

Fixed-point convert to floating-point

vcvtq_n_f64_u64Experimentalneon

Fixed-point convert to floating-point

vcvtq_n_s64_f64Experimentalneon

Floating-point convert to fixed-point, rounding toward zero

vcvtq_n_u64_f64Experimentalneon

Floating-point convert to fixed-point, rounding toward zero

vcvtq_s32_f32Experimentalneon

Floating-point convert to signed fixed-point, rounding toward zero

vcvtq_s64_f64Experimentalneon

Floating-point convert to signed fixed-point, rounding toward zero

vcvtq_u32_f32Experimentalneon

Floating-point convert to unsigned fixed-point, rounding toward zero

vcvtq_u64_f64Experimentalneon

Floating-point convert to unsigned fixed-point, rounding toward zero

vcvts_f32_s32Experimentalneon

Fixed-point convert to floating-point

vcvts_f32_u32Experimentalneon

Fixed-point convert to floating-point

vcvts_n_f32_s32Experimentalneon

Fixed-point convert to floating-point

vcvts_n_f32_u32Experimentalneon

Fixed-point convert to floating-point

vcvts_n_s32_f32Experimentalneon

Floating-point convert to fixed-point, rounding toward zero

vcvts_n_u32_f32Experimentalneon

Floating-point convert to fixed-point, rounding toward zero

vcvts_s32_f32Experimentalneon

Fixed-point convert to floating-point

vcvts_u32_f32Experimentalneon

Fixed-point convert to floating-point

vcvtx_f32_f64Experimentalneon

Floating-point convert to lower precision narrow, rounding to odd

vcvtx_high_f32_f64Experimentalneon

Floating-point convert to lower precision narrow, rounding to odd
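
The `vcvta*`, `vcvtn*`, `vcvtm*`, and `vcvtp*` families above differ only in rounding mode. A sketch applying the four signed 32-bit variants to the same input, assuming NEON:

```rust
use core::arch::aarch64::*;

unsafe fn convert_all_modes(v: float32x4_t) -> [int32x4_t; 4] {
    [
        vcvtaq_s32_f32(v), // round to nearest, ties away from zero
        vcvtnq_s32_f32(v), // round to nearest, ties to even
        vcvtmq_s32_f32(v), // round toward minus infinity (floor)
        vcvtpq_s32_f32(v), // round toward plus infinity (ceiling)
    ]
}
```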

vdiv_f32Experimentalneon

Divide

vdiv_f64Experimentalneon

Divide

vdivq_f32Experimentalneon

Divide

vdivq_f64Experimentalneon

Divide

vdup_lane_f32Experimentalneon

Set all vector lanes to the same value

vdup_lane_f64Experimentalneon

Set all vector lanes to the same value

vdup_lane_p8Experimentalneon

Set all vector lanes to the same value

vdup_lane_p16Experimentalneon

Set all vector lanes to the same value

vdup_lane_p64Experimentalneon

Set all vector lanes to the same value

vdup_lane_s8Experimentalneon

Set all vector lanes to the same value

vdup_lane_s16Experimentalneon

Set all vector lanes to the same value

vdup_lane_s32Experimentalneon

Set all vector lanes to the same value

vdup_lane_s64Experimentalneon

Set all vector lanes to the same value

vdup_lane_u8Experimentalneon

Set all vector lanes to the same value

vdup_lane_u16Experimentalneon

Set all vector lanes to the same value

vdup_lane_u32Experimentalneon

Set all vector lanes to the same value

vdup_lane_u64Experimentalneon

Set all vector lanes to the same value

vdup_laneq_f32Experimentalneon

Set all vector lanes to the same value

vdup_laneq_f64Experimentalneon

Set all vector lanes to the same value

vdup_laneq_p8Experimentalneon

Set all vector lanes to the same value

vdup_laneq_p16Experimentalneon

Set all vector lanes to the same value

vdup_laneq_p64Experimentalneon

Set all vector lanes to the same value

vdup_laneq_s8Experimentalneon

Set all vector lanes to the same value

vdup_laneq_s16Experimentalneon

Set all vector lanes to the same value

vdup_laneq_s32Experimentalneon

Set all vector lanes to the same value

vdup_laneq_s64Experimentalneon

Set all vector lanes to the same value

vdup_laneq_u8Experimentalneon

Set all vector lanes to the same value

vdup_laneq_u16Experimentalneon

Set all vector lanes to the same value

vdup_laneq_u32Experimentalneon

Set all vector lanes to the same value

vdup_laneq_u64Experimentalneon

Set all vector lanes to the same value

vdup_n_f32Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_f64Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_p8Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_p16Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_p64Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_s8Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_s16Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_s32Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_s64Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_u8Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_u16Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_u32Experimentalneon

Duplicate vector element to vector or scalar

vdup_n_u64Experimentalneon

Duplicate vector element to vector or scalar

vdupb_lane_p8Experimentalneon

Set all vector lanes to the same value

vdupb_lane_s8Experimentalneon

Set all vector lanes to the same value

vdupb_lane_u8Experimentalneon

Set all vector lanes to the same value

vdupb_laneq_p8Experimentalneon

Set all vector lanes to the same value

vdupb_laneq_s8Experimentalneon

Set all vector lanes to the same value

vdupb_laneq_u8Experimentalneon

Set all vector lanes to the same value

vdupd_lane_f64Experimentalneon

Set all vector lanes to the same value

vdupd_lane_s64Experimentalneon

Set all vector lanes to the same value

vdupd_lane_u64Experimentalneon

Set all vector lanes to the same value

vdupd_laneq_f64Experimentalneon

Set all vector lanes to the same value

vdupd_laneq_s64Experimentalneon

Set all vector lanes to the same value

vdupd_laneq_u64Experimentalneon

Set all vector lanes to the same value

vduph_lane_p16Experimentalneon

Set all vector lanes to the same value

vduph_lane_s16Experimentalneon

Set all vector lanes to the same value

vduph_lane_u16Experimentalneon

Set all vector lanes to the same value

vduph_laneq_p16Experimentalneon

Set all vector lanes to the same value

vduph_laneq_s16Experimentalneon

Set all vector lanes to the same value

vduph_laneq_u16Experimentalneon

Set all vector lanes to the same value

vdupq_lane_f32Experimentalneon

Set all vector lanes to the same value

vdupq_lane_f64Experimentalneon

Set all vector lanes to the same value

vdupq_lane_p8Experimentalneon

Set all vector lanes to the same value

vdupq_lane_p16Experimentalneon

Set all vector lanes to the same value

vdupq_lane_p64Experimentalneon

Set all vector lanes to the same value

vdupq_lane_s8Experimentalneon

Set all vector lanes to the same value

vdupq_lane_s16Experimentalneon

Set all vector lanes to the same value

vdupq_lane_s32Experimentalneon

Set all vector lanes to the same value

vdupq_lane_s64Experimentalneon

Set all vector lanes to the same value

vdupq_lane_u8Experimentalneon

Set all vector lanes to the same value

vdupq_lane_u16Experimentalneon

Set all vector lanes to the same value

vdupq_lane_u32Experimentalneon

Set all vector lanes to the same value

vdupq_lane_u64Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_f32Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_f64Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_p8Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_p16Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_p64Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_s8Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_s16Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_s32Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_s64Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_u8Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_u16Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_u32Experimentalneon

Set all vector lanes to the same value

vdupq_laneq_u64Experimentalneon

Set all vector lanes to the same value

vdupq_n_f32Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_f64Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_p8Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_p16Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_p64Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_s8Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_s16Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_s32Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_s64Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_u8Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_u16Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_u32Experimentalneon

Duplicate vector element to vector or scalar

vdupq_n_u64Experimentalneon

Duplicate vector element to vector or scalar

vdups_lane_f32Experimentalneon

Set all vector lanes to the same value

vdups_lane_s32Experimentalneon

Set all vector lanes to the same value

vdups_lane_u32Experimentalneon

Set all vector lanes to the same value

vdups_laneq_f32Experimentalneon

Set all vector lanes to the same value

vdups_laneq_s32Experimentalneon

Set all vector lanes to the same value

vdups_laneq_u32Experimentalneon

Set all vector lanes to the same value

veor_s8Experimentalneon

Vector bitwise exclusive or (vector)

veor_s16Experimentalneon

Vector bitwise exclusive or (vector)

veor_s32Experimentalneon

Vector bitwise exclusive or (vector)

veor_s64Experimentalneon

Vector bitwise exclusive or (vector)

veor_u8Experimentalneon

Vector bitwise exclusive or (vector)

veor_u16Experimentalneon

Vector bitwise exclusive or (vector)

veor_u32Experimentalneon

Vector bitwise exclusive or (vector)

veor_u64Experimentalneon

Vector bitwise exclusive or (vector)

veorq_s8Experimentalneon

Vector bitwise exclusive or (vector)

veorq_s16Experimentalneon

Vector bitwise exclusive or (vector)

veorq_s32Experimentalneon

Vector bitwise exclusive or (vector)

veorq_s64Experimentalneon

Vector bitwise exclusive or (vector)

veorq_u8Experimentalneon

Vector bitwise exclusive or (vector)

veorq_u16Experimentalneon

Vector bitwise exclusive or (vector)

veorq_u32Experimentalneon

Vector bitwise exclusive or (vector)

veorq_u64Experimentalneon

Vector bitwise exclusive or (vector)

vext_f32Experimentalneon

Extract vector from pair of vectors

vext_f64Experimentalneon

Extract vector from pair of vectors

vext_p8Experimentalneon

Extract vector from pair of vectors

vext_p16Experimentalneon

Extract vector from pair of vectors

vext_p64Experimentalneon

Extract vector from pair of vectors

vext_s8Experimentalneon

Extract vector from pair of vectors

vext_s16Experimentalneon

Extract vector from pair of vectors

vext_s32Experimentalneon

Extract vector from pair of vectors

vext_s64Experimentalneon

Extract vector from pair of vectors

vext_u8Experimentalneon

Extract vector from pair of vectors

vext_u16Experimentalneon

Extract vector from pair of vectors

vext_u32Experimentalneon

Extract vector from pair of vectors

vext_u64Experimentalneon

Extract vector from pair of vectors

vextq_f32Experimentalneon

Extract vector from pair of vectors

vextq_f64Experimentalneon

Extract vector from pair of vectors

vextq_p8Experimentalneon

Extract vector from pair of vectors

vextq_p16Experimentalneon

Extract vector from pair of vectors

vextq_p64Experimentalneon

Extract vector from pair of vectors

vextq_s8Experimentalneon

Extract vector from pair of vectors

vextq_s16Experimentalneon

Extract vector from pair of vectors

vextq_s32Experimentalneon

Extract vector from pair of vectors

vextq_s64Experimentalneon

Extract vector from pair of vectors

vextq_u8Experimentalneon

Extract vector from pair of vectors

vextq_u16Experimentalneon

Extract vector from pair of vectors

vextq_u32Experimentalneon

Extract vector from pair of vectors

vextq_u64Experimentalneon

Extract vector from pair of vectors
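
The vext*/vextq* intrinsics concatenate two vectors and take a window starting at a given element offset. A hedged sketch under the same assumptions as above; note that on recent nightlies the offset is a const generic parameter (older signatures differed), and the helper name is illustrative.

    #[cfg(target_arch = "aarch64")]
    unsafe fn ext_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let lo: [u8; 16] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15];
        let hi: [u8; 16] = [16; 16];
        let a = vld1q_u8(lo.as_ptr());
        let b = vld1q_u8(hi.as_ptr());
        // Take 16 bytes starting at element 3 of the pair (a ++ b):
        // yields [3, 4, ..., 15, 16, 16, 16].
        let w: uint8x16_t = vextq_u8::<3>(a, b);
        let lanes: [u8; 16] = core::mem::transmute(w);
        assert_eq!(lanes[0], 3);
        assert_eq!(lanes[13], 16);
    }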

vfma_f32Experimentalneon

Floating-point fused Multiply-Add to accumulator (vector)

vfma_f64Experimentalneon

Floating-point fused Multiply-Add to accumulator (vector)

vfma_lane_f32Experimentalneon

Floating-point fused multiply-add to accumulator

vfma_lane_f64Experimentalneon

Floating-point fused multiply-add to accumulator

vfma_laneq_f32Experimentalneon

Floating-point fused multiply-add to accumulator

vfma_laneq_f64Experimentalneon

Floating-point fused multiply-add to accumulator

vfma_n_f32Experimentalneon

Floating-point fused Multiply-Add to accumulator (vector)

vfma_n_f64Experimentalneon

Floating-point fused Multiply-Add to accumulator (vector)

vfmad_lane_f64Experimentalneon

Floating-point fused multiply-add to accumulator

vfmad_laneq_f64Experimentalneon

Floating-point fused multiply-add to accumulator

vfmaq_f32Experimentalneon

Floating-point fused Multiply-Add to accumulator (vector)

vfmaq_f64Experimentalneon

Floating-point fused Multiply-Add to accumulator (vector)

vfmaq_lane_f32Experimentalneon

Floating-point fused multiply-add to accumulator

vfmaq_lane_f64Experimentalneon

Floating-point fused multiply-add to accumulator

vfmaq_laneq_f32Experimentalneon

Floating-point fused multiply-add to accumulator

vfmaq_laneq_f64Experimentalneon

Floating-point fused multiply-add to accumulator

vfmaq_n_f32Experimentalneon

Floating-point fused Multiply-Add to accumulator (vector)

vfmaq_n_f64Experimentalneon

Floating-point fused Multiply-Add to accumulator (vector)

vfmas_lane_f32Experimentalneon

Floating-point fused multiply-add to accumulator

vfmas_laneq_f32Experimentalneon

Floating-point fused multiply-add to accumulator

vfms_f32Experimentalneon

Floating-point fused multiply-subtract from accumulator

vfms_f64Experimentalneon

Floating-point fused multiply-subtract from accumulator

vfms_lane_f32Experimentalneon

Floating-point fused multiply-subtract to accumulator

vfms_lane_f64Experimentalneon

Floating-point fused multiply-subtract to accumulator

vfms_laneq_f32Experimentalneon

Floating-point fused multiply-subtract to accumulator

vfms_laneq_f64Experimentalneon

Floating-point fused multiply-subtract to accumulator

vfms_n_f32Experimentalneon

Floating-point fused Multiply-subtract to accumulator (vector)

vfms_n_f64Experimentalneon

Floating-point fused Multiply-subtract to accumulator (vector)

vfmsd_lane_f64Experimentalneon

Floating-point fused multiply-subtract to accumulator

vfmsd_laneq_f64Experimentalneon

Floating-point fused multiply-subtract to accumulator

vfmsq_f32Experimentalneon

Floating-point fused multiply-subtract from accumulator

vfmsq_f64Experimentalneon

Floating-point fused multiply-subtract from accumulator

vfmsq_lane_f32Experimentalneon

Floating-point fused multiply-subtract to accumulator

vfmsq_lane_f64Experimentalneon

Floating-point fused multiply-subtract to accumulator

vfmsq_laneq_f32Experimentalneon

Floating-point fused multiply-subtract to accumulator

vfmsq_laneq_f64Experimentalneon

Floating-point fused multiply-subtract to accumulator

vfmsq_n_f32Experimentalneon

Floating-point fused Multiply-subtract to accumulator (vector)

vfmsq_n_f64Experimentalneon

Floating-point fused Multiply-subtract to accumulator (vector)

vfmss_lane_f32Experimentalneon

Floating-point fused multiply-subtract to accumulator

vfmss_laneq_f32Experimentalneon

Floating-point fused multiply-subtract to accumulator
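
The vfma*/vfms* intrinsics compute a single-rounding fused a + b * c (or a - b * c for vfms) in each lane. A hedged usage sketch under the same nightly/AArch64 assumptions; the helper name is illustrative.

    #[cfg(target_arch = "aarch64")]
    unsafe fn fma_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let acc = vdupq_n_f32(1.0);
        let b = vdupq_n_f32(2.0);
        let c = vdupq_n_f32(3.0);
        // Each lane becomes acc + b * c = 1.0 + 2.0 * 3.0 = 7.0,
        // rounded once (fused) rather than as a separate multiply then add.
        let r: [f32; 4] = core::mem::transmute(vfmaq_f32(acc, b, c));
        assert_eq!(r, [7.0; 4]);
        // vfmsq_f32 computes acc - b * c in the same fused fashion.
        let s: [f32; 4] = core::mem::transmute(vfmsq_f32(acc, b, c));
        assert_eq!(s, [-5.0; 4]);
    }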

vget_high_f32Experimentalneon

Duplicate vector element to vector or scalar

vget_high_f64Experimentalneon

Duplicate vector element to vector or scalar

vget_high_p8Experimentalneon

Duplicate vector element to vector or scalar

vget_high_p16Experimentalneon

Duplicate vector element to vector or scalar

vget_high_p64Experimentalneon

Duplicate vector element to vector or scalar

vget_high_s8Experimentalneon

Duplicate vector element to vector or scalar

vget_high_s16Experimentalneon

Duplicate vector element to vector or scalar

vget_high_s32Experimentalneon

Duplicate vector element to vector or scalar

vget_high_s64Experimentalneon

Duplicate vector element to vector or scalar

vget_high_u8Experimentalneon

Duplicate vector element to vector or scalar

vget_high_u16Experimentalneon

Duplicate vector element to vector or scalar

vget_high_u32Experimentalneon

Duplicate vector element to vector or scalar

vget_high_u64Experimentalneon

Duplicate vector element to vector or scalar

vget_lane_f32Experimentalneon

Duplicate vector element to vector or scalar

vget_lane_f64Experimentalneon

Duplicate vector element to vector or scalar

vget_lane_p8Experimentalneon

Move vector element to general-purpose register

vget_lane_p16Experimentalneon

Move vector element to general-purpose register

vget_lane_p64Experimentalneon

Move vector element to general-purpose register

vget_lane_s8Experimentalneon

Move vector element to general-purpose register

vget_lane_s16Experimentalneon

Move vector element to general-purpose register

vget_lane_s32Experimentalneon

Move vector element to general-purpose register

vget_lane_s64Experimentalneon

Move vector element to general-purpose register

vget_lane_u8Experimentalneon

Move vector element to general-purpose register

vget_lane_u16Experimentalneon

Move vector element to general-purpose register

vget_lane_u32Experimentalneon

Move vector element to general-purpose register

vget_lane_u64Experimentalneon

Move vector element to general-purpose register

vget_low_f32Experimentalneon

Duplicate vector element to vector or scalar

vget_low_f64Experimentalneon

Duplicate vector element to vector or scalar

vget_low_p8Experimentalneon

Duplicate vector element to vector or scalar

vget_low_p16Experimentalneon

Duplicate vector element to vector or scalar

vget_low_p64Experimentalneon

Duplicate vector element to vector or scalar

vget_low_s8Experimentalneon

Duplicate vector element to vector or scalar

vget_low_s16Experimentalneon

Duplicate vector element to vector or scalar

vget_low_s32Experimentalneon

Duplicate vector element to vector or scalar

vget_low_s64Experimentalneon

Duplicate vector element to vector or scalar

vget_low_u8Experimentalneon

Duplicate vector element to vector or scalar

vget_low_u16Experimentalneon

Duplicate vector element to vector or scalar

vget_low_u32Experimentalneon

Duplicate vector element to vector or scalar

vget_low_u64Experimentalneon

Duplicate vector element to vector or scalar

vgetq_lane_f32Experimentalneon

Duplicate vector element to vector or scalar

vgetq_lane_f64Experimentalneon

Duplicate vector element to vector or scalar

vgetq_lane_p8Experimentalneon

Move vector element to general-purpose register

vgetq_lane_p16Experimentalneon

Move vector element to general-purpose register

vgetq_lane_p64Experimentalneon

Move vector element to general-purpose register

vgetq_lane_s8Experimentalneon

Move vector element to general-purpose register

vgetq_lane_s16Experimentalneon

Move vector element to general-purpose register

vgetq_lane_s32Experimentalneon

Move vector element to general-purpose register

vgetq_lane_s64Experimentalneon

Move vector element to general-purpose register

vgetq_lane_u8Experimentalneon

Move vector element to general-purpose register

vgetq_lane_u16Experimentalneon

Move vector element to general-purpose register

vgetq_lane_u32Experimentalneon

Move vector element to general-purpose register

vgetq_lane_u64Experimentalneon

Move vector element to general-purpose register
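
The vget_low_*/vget_high_* intrinsics split a 128-bit vector into its two 64-bit halves, while vget_lane_*/vgetq_lane_* move one element out into a general-purpose register (the lane index is a const generic on recent nightlies). A hedged sketch with an illustrative helper name:

    #[cfg(target_arch = "aarch64")]
    unsafe fn get_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let data: [u16; 8] = [10, 11, 12, 13, 14, 15, 16, 17];
        let v = vld1q_u16(data.as_ptr());
        // Lower and upper 64-bit halves of the 128-bit register.
        let lo: uint16x4_t = vget_low_u16(v);
        let hi: uint16x4_t = vget_high_u16(v);
        // Move single elements into scalars.
        assert_eq!(vget_lane_u16::<0>(lo), 10);
        assert_eq!(vget_lane_u16::<3>(hi), 17);
        assert_eq!(vgetq_lane_u16::<5>(v), 15);
    }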

vhadd_s8Experimentalneon

Halving add

vhadd_s16Experimentalneon

Halving add

vhadd_s32Experimentalneon

Halving add

vhadd_u8Experimentalneon

Halving add

vhadd_u16Experimentalneon

Halving add

vhadd_u32Experimentalneon

Halving add

vhaddq_s8Experimentalneon

Halving add

vhaddq_s16Experimentalneon

Halving add

vhaddq_s32Experimentalneon

Halving add

vhaddq_u8Experimentalneon

Halving add

vhaddq_u16Experimentalneon

Halving add

vhaddq_u32Experimentalneon

Halving add

vhsub_s8Experimentalneon

Signed halving subtract

vhsub_s16Experimentalneon

Signed halving subtract

vhsub_s32Experimentalneon

Signed halving subtract

vhsub_u8Experimentalneon

Unsigned halving subtract

vhsub_u16Experimentalneon

Unsigned halving subtract

vhsub_u32Experimentalneon

Unsigned halving subtract

vhsubq_s8Experimentalneon

Signed halving subtract

vhsubq_s16Experimentalneon

Signed halving subtract

vhsubq_s32Experimentalneon

Signed halving subtract

vhsubq_u8Experimentalneon

Unsigned halving subtract

vhsubq_u16Experimentalneon

Unsigned halving subtract

vhsubq_u32Experimentalneon

Unsigned halving subtract
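
The halving add/subtract intrinsics compute (a + b) >> 1 and (a - b) >> 1 per lane using a wider intermediate, so they cannot overflow; this is handy for averaging. A hedged sketch under the same nightly/AArch64 assumptions; the helper name is illustrative.

    #[cfg(target_arch = "aarch64")]
    unsafe fn halving_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let a = vdupq_n_u8(250);
        let b = vdupq_n_u8(40);
        // (250 + 40) / 2 = 145, computed without wrapping at 255.
        let avg: [u8; 16] = core::mem::transmute(vhaddq_u8(a, b));
        assert!(avg.iter().all(|&x| x == 145));
        // (250 - 40) / 2 = 105.
        let diff: [u8; 16] = core::mem::transmute(vhsubq_u8(a, b));
        assert!(diff.iter().all(|&x| x == 105));
    }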

vld1_dup_f32Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1_dup_p8Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1_dup_p16Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1_dup_s8Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1_dup_s16Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1_dup_s32Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1_dup_s64Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1_dup_u8Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1_dup_u16Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1_dup_u32Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1_dup_u64Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1_f32Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1_f64Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1_lane_f32Experimentalneon

Load one single-element structure to one lane of one register.

vld1_lane_p8Experimentalneon

Load one single-element structure to one lane of one register.

vld1_lane_p16Experimentalneon

Load one single-element structure to one lane of one register.

vld1_lane_s8Experimentalneon

Load one single-element structure to one lane of one register.

vld1_lane_s16Experimentalneon

Load one single-element structure to one lane of one register.

vld1_lane_s32Experimentalneon

Load one single-element structure to one lane of one register.

vld1_lane_s64Experimentalneon

Load one single-element structure to one lane of one register.

vld1_lane_u8Experimentalneon

Load one single-element structure to one lane of one register.

vld1_lane_u16Experimentalneon

Load one single-element structure to one lane of one register.

vld1_lane_u32Experimentalneon

Load one single-element structure to one lane of one register.

vld1_lane_u64Experimentalneon

Load one single-element structure to one lane of one register.

vld1_p8Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1_p16Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1_s8Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1_s16Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1_s32Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1_s64Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1_u8Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1_u16Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1_u32Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1_u64Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_dup_f32Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1q_dup_p8Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1q_dup_p16Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1q_dup_s8Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1q_dup_s16Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1q_dup_s32Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1q_dup_s64Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1q_dup_u8Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1q_dup_u16Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1q_dup_u32Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1q_dup_u64Experimentalneon

Load one single-element structure and Replicate to all lanes (of one register).

vld1q_f32Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_f64Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_lane_f32Experimentalneon

Load one single-element structure to one lane of one register.

vld1q_lane_p8Experimentalneon

Load one single-element structure to one lane of one register.

vld1q_lane_p16Experimentalneon

Load one single-element structure to one lane of one register.

vld1q_lane_s8Experimentalneon

Load one single-element structure to one lane of one register.

vld1q_lane_s16Experimentalneon

Load one single-element structure to one lane of one register.

vld1q_lane_s32Experimentalneon

Load one single-element structure to one lane of one register.

vld1q_lane_s64Experimentalneon

Load one single-element structure to one lane of one register.

vld1q_lane_u8Experimentalneon

Load one single-element structure to one lane of one register.

vld1q_lane_u16Experimentalneon

Load one single-element structure to one lane of one register.

vld1q_lane_u32Experimentalneon

Load one single-element structure to one lane of one register.

vld1q_lane_u64Experimentalneon

Load one single-element structure to one lane of one register.

vld1q_p8Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_p16Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_s8Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_s16Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_s32Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_s64Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_u8Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_u16Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_u32Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.

vld1q_u64Experimentalneon

Load multiple single-element structures to one, two, three, or four registers.
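
The vld1* loads above come in three flavours: plain contiguous loads (vld1/vld1q), broadcast loads (vld1_dup/vld1q_dup), and per-lane loads (vld1_lane/vld1q_lane, whose lane index is a const generic on recent nightlies). A hedged sketch of the first two, with an illustrative helper name:

    #[cfg(target_arch = "aarch64")]
    unsafe fn load_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let data: [f32; 4] = [1.0, 2.0, 3.0, 4.0];
        // Contiguous load of four f32 elements into one 128-bit register.
        let v: float32x4_t = vld1q_f32(data.as_ptr());
        // Load a single f32 and replicate it to all four lanes.
        let d: float32x4_t = vld1q_dup_f32(data.as_ptr());
        let v_lanes: [f32; 4] = core::mem::transmute(v);
        let d_lanes: [f32; 4] = core::mem::transmute(d);
        assert_eq!(v_lanes, [1.0, 2.0, 3.0, 4.0]);
        assert_eq!(d_lanes, [1.0; 4]);
    }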

vmax_f32Experimentalneon

Maximum (vector)

vmax_f64Experimentalneon

Maximum (vector)

vmax_s8Experimentalneon

Maximum (vector)

vmax_s16Experimentalneon

Maximum (vector)

vmax_s32Experimentalneon

Maximum (vector)

vmax_u8Experimentalneon

Maximum (vector)

vmax_u16Experimentalneon

Maximum (vector)

vmax_u32Experimentalneon

Maximum (vector)

vmaxnm_f32Experimentalneon

Floating-point Maximum Number (vector)

vmaxnm_f64Experimentalneon

Floating-point Maximum Number (vector)

vmaxnmq_f32Experimentalneon

Floating-point Maximum Number (vector)

vmaxnmq_f64Experimentalneon

Floating-point Maximum Number (vector)

vmaxq_f32Experimentalneon

Maximum (vector)

vmaxq_f64Experimentalneon

Maximum (vector)

vmaxq_s8Experimentalneon

Maximum (vector)

vmaxq_s16Experimentalneon

Maximum (vector)

vmaxq_s32Experimentalneon

Maximum (vector)

vmaxq_u8Experimentalneon

Maximum (vector)

vmaxq_u16Experimentalneon

Maximum (vector)

vmaxq_u32Experimentalneon

Maximum (vector)

vmaxv_f32Experimentalneon

Horizontal vector max.

vmaxv_s8Experimentalneon

Horizontal vector max.

vmaxv_s16Experimentalneon

Horizontal vector max.

vmaxv_s32Experimentalneon

Horizontal vector max.

vmaxv_u8Experimentalneon

Horizontal vector max.

vmaxv_u16Experimentalneon

Horizontal vector max.

vmaxv_u32Experimentalneon

Horizontal vector max.

vmaxvq_f32Experimentalneon

Horizontal vector max.

vmaxvq_f64Experimentalneon

Horizontal vector max.

vmaxvq_s8Experimentalneon

Horizontal vector max.

vmaxvq_s16Experimentalneon

Horizontal vector max.

vmaxvq_s32Experimentalneon

Horizontal vector max.

vmaxvq_u8Experimentalneon

Horizontal vector max.

vmaxvq_u16Experimentalneon

Horizontal vector max.

vmaxvq_u32Experimentalneon

Horizontal vector max.

vmin_f32Experimentalneon

Minimum (vector)

vmin_f64Experimentalneon

Minimum (vector)

vmin_s8Experimentalneon

Minimum (vector)

vmin_s16Experimentalneon

Minimum (vector)

vmin_s32Experimentalneon

Minimum (vector)

vmin_u8Experimentalneon

Minimum (vector)

vmin_u16Experimentalneon

Minimum (vector)

vmin_u32Experimentalneon

Minimum (vector)

vminnm_f32Experimentalneon

Floating-point Minimum Number (vector)

vminnm_f64Experimentalneon

Floating-point Minimum Number (vector)

vminnmq_f32Experimentalneon

Floating-point Minimum Number (vector)

vminnmq_f64Experimentalneon

Floating-point Minimum Number (vector)

vminq_f32Experimentalneon

Minimum (vector)

vminq_f64Experimentalneon

Minimum (vector)

vminq_s8Experimentalneon

Minimum (vector)

vminq_s16Experimentalneon

Minimum (vector)

vminq_s32Experimentalneon

Minimum (vector)

vminq_u8Experimentalneon

Minimum (vector)

vminq_u16Experimentalneon

Minimum (vector)

vminq_u32Experimentalneon

Minimum (vector)

vminv_f32Experimentalneon

Horizontal vector min.

vminv_s8Experimentalneon

Horizontal vector min.

vminv_s16Experimentalneon

Horizontal vector min.

vminv_s32Experimentalneon

Horizontal vector min.

vminv_u8Experimentalneon

Horizontal vector min.

vminv_u16Experimentalneon

Horizontal vector min.

vminv_u32Experimentalneon

Horizontal vector min.

vminvq_f32Experimentalneon

Horizontal vector min.

vminvq_f64Experimentalneon

Horizontal vector min.

vminvq_s8Experimentalneon

Horizontal vector min.

vminvq_s16Experimentalneon

Horizontal vector min.

vminvq_s32Experimentalneon

Horizontal vector min.

vminvq_u8Experimentalneon

Horizontal vector min.

vminvq_u16Experimentalneon

Horizontal vector min.

vminvq_u32Experimentalneon

Horizontal vector min.
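
In the min/max group above, vmax*/vmin* operate lane-wise, the *nm variants follow the IEEE maxNum/minNum treatment of NaN, and vmaxv*/vminv* reduce a whole vector to one scalar. A hedged sketch under the same nightly/AArch64 assumptions; the helper name is illustrative.

    #[cfg(target_arch = "aarch64")]
    unsafe fn minmax_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let a_data: [f32; 4] = [1.0, 8.0, -3.0, 4.0];
        let b_data: [f32; 4] = [2.0, 7.0, -5.0, 9.0];
        let a = vld1q_f32(a_data.as_ptr());
        let b = vld1q_f32(b_data.as_ptr());
        // Lane-wise maximum: [2.0, 8.0, -3.0, 9.0].
        let m: [f32; 4] = core::mem::transmute(vmaxq_f32(a, b));
        assert_eq!(m, [2.0, 8.0, -3.0, 9.0]);
        // Horizontal reductions of a single vector to one scalar.
        assert_eq!(vmaxvq_f32(a), 8.0);
        assert_eq!(vminvq_f32(a), -3.0);
    }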

vmla_f32Experimentalneon

Floating-point multiply-add to accumulator

vmla_f64Experimentalneon

Floating-point multiply-add to accumulator

vmla_lane_f32Experimentalneon

Vector multiply accumulate with scalar

vmla_lane_s16Experimentalneon

Vector multiply accumulate with scalar

vmla_lane_s32Experimentalneon

Vector multiply accumulate with scalar

vmla_lane_u16Experimentalneon

Vector multiply accumulate with scalar

vmla_lane_u32Experimentalneon

Vector multiply accumulate with scalar

vmla_laneq_f32Experimentalneon

Vector multiply accumulate with scalar

vmla_laneq_s16Experimentalneon

Vector multiply accumulate with scalar

vmla_laneq_s32Experimentalneon

Vector multiply accumulate with scalar

vmla_laneq_u16Experimentalneon

Vector multiply accumulate with scalar

vmla_laneq_u32Experimentalneon

Vector multiply accumulate with scalar

vmla_n_f32Experimentalneon

Vector multiply accumulate with scalar

vmla_n_s16Experimentalneon

Vector multiply accumulate with scalar

vmla_n_s32Experimentalneon

Vector multiply accumulate with scalar

vmla_n_u16Experimentalneon

Vector multiply accumulate with scalar

vmla_n_u32Experimentalneon

Vector multiply accumulate with scalar

vmla_s8Experimentalneon

Multiply-add to accumulator

vmla_s16Experimentalneon

Multiply-add to accumulator

vmla_s32Experimentalneon

Multiply-add to accumulator

vmla_u8Experimentalneon

Multiply-add to accumulator

vmla_u16Experimentalneon

Multiply-add to accumulator

vmla_u32Experimentalneon

Multiply-add to accumulator

vmlal_high_lane_s16Experimentalneon

Multiply-add long

vmlal_high_lane_s32Experimentalneon

Multiply-add long

vmlal_high_lane_u16Experimentalneon

Multiply-add long

vmlal_high_lane_u32Experimentalneon

Multiply-add long

vmlal_high_laneq_s16Experimentalneon

Multiply-add long

vmlal_high_laneq_s32Experimentalneon

Multiply-add long

vmlal_high_laneq_u16Experimentalneon

Multiply-add long

vmlal_high_laneq_u32Experimentalneon

Multiply-add long

vmlal_high_n_s16Experimentalneon

Multiply-add long

vmlal_high_n_s32Experimentalneon

Multiply-add long

vmlal_high_n_u16Experimentalneon

Multiply-add long

vmlal_high_n_u32Experimentalneon

Multiply-add long

vmlal_high_s8Experimentalneon

Signed multiply-add long

vmlal_high_s16Experimentalneon

Signed multiply-add long

vmlal_high_s32Experimentalneon

Signed multiply-add long

vmlal_high_u8Experimentalneon

Unsigned multiply-add long

vmlal_high_u16Experimentalneon

Unsigned multiply-add long

vmlal_high_u32Experimentalneon

Unsigned multiply-add long

vmlal_lane_s16Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_lane_s32Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_lane_u16Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_lane_u32Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_laneq_s16Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_laneq_s32Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_laneq_u16Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_laneq_u32Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_n_s16Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_n_s32Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_n_u16Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_n_u32Experimentalneon

Vector widening multiply accumulate with scalar

vmlal_s8Experimentalneon

Signed multiply-add long

vmlal_s16Experimentalneon

Signed multiply-add long

vmlal_s32Experimentalneon

Signed multiply-add long

vmlal_u8Experimentalneon

Unsigned multiply-add long

vmlal_u16Experimentalneon

Unsigned multiply-add long

vmlal_u32Experimentalneon

Unsigned multiply-add long

vmlaq_f32Experimentalneon

Floating-point multiply-add to accumulator

vmlaq_f64Experimentalneon

Floating-point multiply-add to accumulator

vmlaq_lane_f32Experimentalneon

Vector multiply accumulate with scalar

vmlaq_lane_s16Experimentalneon

Vector multiply accumulate with scalar

vmlaq_lane_s32Experimentalneon

Vector multiply accumulate with scalar

vmlaq_lane_u16Experimentalneon

Vector multiply accumulate with scalar

vmlaq_lane_u32Experimentalneon

Vector multiply accumulate with scalar

vmlaq_laneq_f32Experimentalneon

Vector multiply accumulate with scalar

vmlaq_laneq_s16Experimentalneon

Vector multiply accumulate with scalar

vmlaq_laneq_s32Experimentalneon

Vector multiply accumulate with scalar

vmlaq_laneq_u16Experimentalneon

Vector multiply accumulate with scalar

vmlaq_laneq_u32Experimentalneon

Vector multiply accumulate with scalar

vmlaq_n_f32Experimentalneon

Vector multiply accumulate with scalar

vmlaq_n_s16Experimentalneon

Vector multiply accumulate with scalar

vmlaq_n_s32Experimentalneon

Vector multiply accumulate with scalar

vmlaq_n_u16Experimentalneon

Vector multiply accumulate with scalar

vmlaq_n_u32Experimentalneon

Vector multiply accumulate with scalar

vmlaq_s8Experimentalneon

Multiply-add to accumulator

vmlaq_s16Experimentalneon

Multiply-add to accumulator

vmlaq_s32Experimentalneon

Multiply-add to accumulator

vmlaq_u8Experimentalneon

Multiply-add to accumulator

vmlaq_u16Experimentalneon

Multiply-add to accumulator

vmlaq_u32Experimentalneon

Multiply-add to accumulator
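
vmla* is a plain (non-fused) multiply-add, while the vmlal*/vmlal_high* variants widen: narrow elements are multiplied and accumulated into lanes of twice the width. A hedged sketch under the same nightly/AArch64 assumptions; the helper name is illustrative.

    #[cfg(target_arch = "aarch64")]
    unsafe fn mla_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        // Widening multiply-add: u8 * u8 products accumulated into u16 lanes.
        let acc = vdupq_n_u16(100);
        let b = vdup_n_u8(200);
        let c = vdup_n_u8(3);
        // Each u16 lane becomes 100 + 200 * 3 = 700, which would not fit in a u8.
        let r: [u16; 8] = core::mem::transmute(vmlal_u8(acc, b, c));
        assert!(r.iter().all(|&x| x == 700));
    }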

vmls_f32Experimentalneon

Floating-point multiply-subtract from accumulator

vmls_f64Experimentalneon

Floating-point multiply-subtract from accumulator

vmls_lane_f32Experimentalneon

Vector multiply subtract with scalar

vmls_lane_s16Experimentalneon

Vector multiply subtract with scalar

vmls_lane_s32Experimentalneon

Vector multiply subtract with scalar

vmls_lane_u16Experimentalneon

Vector multiply subtract with scalar

vmls_lane_u32Experimentalneon

Vector multiply subtract with scalar

vmls_laneq_f32Experimentalneon

Vector multiply subtract with scalar

vmls_laneq_s16Experimentalneon

Vector multiply subtract with scalar

vmls_laneq_s32Experimentalneon

Vector multiply subtract with scalar

vmls_laneq_u16Experimentalneon

Vector multiply subtract with scalar

vmls_laneq_u32Experimentalneon

Vector multiply subtract with scalar

vmls_n_f32Experimentalneon

Vector multiply subtract with scalar

vmls_n_s16Experimentalneon

Vector multiply subtract with scalar

vmls_n_s32Experimentalneon

Vector multiply subtract with scalar

vmls_n_u16Experimentalneon

Vector multiply subtract with scalar

vmls_n_u32Experimentalneon

Vector multiply subtract with scalar

vmls_s8Experimentalneon

Multiply-subtract from accumulator

vmls_s16Experimentalneon

Multiply-subtract from accumulator

vmls_s32Experimentalneon

Multiply-subtract from accumulator

vmls_u8Experimentalneon

Multiply-subtract from accumulator

vmls_u16Experimentalneon

Multiply-subtract from accumulator

vmls_u32Experimentalneon

Multiply-subtract from accumulator

vmlsl_high_lane_s16Experimentalneon

Multiply-subtract long

vmlsl_high_lane_s32Experimentalneon

Multiply-subtract long

vmlsl_high_lane_u16Experimentalneon

Multiply-subtract long

vmlsl_high_lane_u32Experimentalneon

Multiply-subtract long

vmlsl_high_laneq_s16Experimentalneon

Multiply-subtract long

vmlsl_high_laneq_s32Experimentalneon

Multiply-subtract long

vmlsl_high_laneq_u16Experimentalneon

Multiply-subtract long

vmlsl_high_laneq_u32Experimentalneon

Multiply-subtract long

vmlsl_high_n_s16Experimentalneon

Multiply-subtract long

vmlsl_high_n_s32Experimentalneon

Multiply-subtract long

vmlsl_high_n_u16Experimentalneon

Multiply-subtract long

vmlsl_high_n_u32Experimentalneon

Multiply-subtract long

vmlsl_high_s8Experimentalneon

Signed multiply-subtract long

vmlsl_high_s16Experimentalneon

Signed multiply-subtract long

vmlsl_high_s32Experimentalneon

Signed multiply-subtract long

vmlsl_high_u8Experimentalneon

Unsigned multiply-subtract long

vmlsl_high_u16Experimentalneon

Unsigned multiply-subtract long

vmlsl_high_u32Experimentalneon

Unsigned multiply-subtract long

vmlsl_lane_s16Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_lane_s32Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_lane_u16Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_lane_u32Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_laneq_s16Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_laneq_s32Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_laneq_u16Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_laneq_u32Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_n_s16Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_n_s32Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_n_u16Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_n_u32Experimentalneon

Vector widening multiply subtract with scalar

vmlsl_s8Experimentalneon

Signed multiply-subtract long

vmlsl_s16Experimentalneon

Signed multiply-subtract long

vmlsl_s32Experimentalneon

Signed multiply-subtract long

vmlsl_u8Experimentalneon

Unsigned multiply-subtract long

vmlsl_u16Experimentalneon

Unsigned multiply-subtract long

vmlsl_u32Experimentalneon

Unsigned multiply-subtract long

vmlsq_f32Experimentalneon

Floating-point multiply-subtract from accumulator

vmlsq_f64Experimentalneon

Floating-point multiply-subtract from accumulator

vmlsq_lane_f32Experimentalneon

Vector multiply subtract with scalar

vmlsq_lane_s16Experimentalneon

Vector multiply subtract with scalar

vmlsq_lane_s32Experimentalneon

Vector multiply subtract with scalar

vmlsq_lane_u16Experimentalneon

Vector multiply subtract with scalar

vmlsq_lane_u32Experimentalneon

Vector multiply subtract with scalar

vmlsq_laneq_f32Experimentalneon

Vector multiply subtract with scalar

vmlsq_laneq_s16Experimentalneon

Vector multiply subtract with scalar

vmlsq_laneq_s32Experimentalneon

Vector multiply subtract with scalar

vmlsq_laneq_u16Experimentalneon

Vector multiply subtract with scalar

vmlsq_laneq_u32Experimentalneon

Vector multiply subtract with scalar

vmlsq_n_f32Experimentalneon

Vector multiply subtract with scalar

vmlsq_n_s16Experimentalneon

Vector multiply subtract with scalar

vmlsq_n_s32Experimentalneon

Vector multiply subtract with scalar

vmlsq_n_u16Experimentalneon

Vector multiply subtract with scalar

vmlsq_n_u32Experimentalneon

Vector multiply subtract with scalar

vmlsq_s8Experimentalneon

Multiply-subtract from accumulator

vmlsq_s16Experimentalneon

Multiply-subtract from accumulator

vmlsq_s32Experimentalneon

Multiply-subtract from accumulator

vmlsq_u8Experimentalneon

Multiply-subtract from accumulator

vmlsq_u16Experimentalneon

Multiply-subtract from accumulator

vmlsq_u32Experimentalneon

Multiply-subtract from accumulator

vmov_n_f32Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_f64Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_p8Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_p16Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_p64Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_s8Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_s16Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_s32Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_s64Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_u8Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_u16Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_u32Experimentalneon

Duplicate vector element to vector or scalar

vmov_n_u64Experimentalneon

Duplicate vector element to vector or scalar

vmovl_s8Experimentalneon

Vector long move.

vmovl_s16Experimentalneon

Vector long move.

vmovl_s32Experimentalneon

Vector long move.

vmovl_u8Experimentalneon

Vector long move.

vmovl_u16Experimentalneon

Vector long move.

vmovl_u32Experimentalneon

Vector long move.

vmovn_high_s16Experimentalneon

Extract narrow

vmovn_high_s32Experimentalneon

Extract narrow

vmovn_high_s64Experimentalneon

Extract narrow

vmovn_high_u16Experimentalneon

Extract narrow

vmovn_high_u32Experimentalneon

Extract narrow

vmovn_high_u64Experimentalneon

Extract narrow

vmovn_s16Experimentalneon

Vector narrow integer.

vmovn_s32Experimentalneon

Vector narrow integer.

vmovn_s64Experimentalneon

Vector narrow integer.

vmovn_u16Experimentalneon

Vector narrow integer.

vmovn_u32Experimentalneon

Vector narrow integer.

vmovn_u64Experimentalneon

Vector narrow integer.

vmovq_n_f32Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_f64Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_p8Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_p16Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_p64Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_s8Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_s16Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_s32Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_s64Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_u8Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_u16Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_u32Experimentalneon

Duplicate vector element to vector or scalar

vmovq_n_u64Experimentalneon

Duplicate vector element to vector or scalar
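
In the vmov group above, vmovl* widens every element to twice its width, vmovn* narrows by truncation (the _high forms fill the upper half of a 128-bit result), and vmov*_n_* is the same broadcast as vdup*_n_*. A hedged sketch with an illustrative helper name, same assumptions as before:

    #[cfg(target_arch = "aarch64")]
    unsafe fn mov_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let bytes: [u8; 8] = [0, 1, 2, 3, 250, 251, 252, 253];
        let narrow = vld1_u8(bytes.as_ptr());
        // Widen each u8 lane to u16.
        let wide: uint16x8_t = vmovl_u8(narrow);
        let w: [u16; 8] = core::mem::transmute(wide);
        assert_eq!(w[4], 250);
        // Narrow back to u8 by truncating each u16 lane.
        let back: [u8; 8] = core::mem::transmute(vmovn_u16(wide));
        assert_eq!(back, bytes);
    }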

vmul_f32Experimentalneon

Multiply

vmul_f64Experimentalneon

Multiply

vmul_lane_f32Experimentalneon

Floating-point multiply

vmul_lane_f64Experimentalneon

Floating-point multiply

vmul_lane_s16Experimentalneon

Multiply

vmul_lane_s32Experimentalneon

Multiply

vmul_lane_u16Experimentalneon

Multiply

vmul_lane_u32Experimentalneon

Multiply

vmul_laneq_f32Experimentalneon

Floating-point multiply

vmul_laneq_f64Experimentalneon

Floating-point multiply

vmul_laneq_s16Experimentalneon

Multiply

vmul_laneq_s32Experimentalneon

Multiply

vmul_laneq_u16Experimentalneon

Multiply

vmul_laneq_u32Experimentalneon

Multiply

vmul_n_f32Experimentalneon

Vector multiply by scalar

vmul_n_f64Experimentalneon

Vector multiply by scalar

vmul_n_s16Experimentalneon

Vector multiply by scalar

vmul_n_s32Experimentalneon

Vector multiply by scalar

vmul_n_u16Experimentalneon

Vector multiply by scalar

vmul_n_u32Experimentalneon

Vector multiply by scalar

vmul_p8Experimentalneon

Polynomial multiply

vmul_s8Experimentalneon

Multiply

vmul_s16Experimentalneon

Multiply

vmul_s32Experimentalneon

Multiply

vmul_u8Experimentalneon

Multiply

vmul_u16Experimentalneon

Multiply

vmul_u32Experimentalneon

Multiply

vmuld_lane_f64Experimentalneon

Floating-point multiply

vmuld_laneq_f64Experimentalneon

Floating-point multiply

vmull_high_lane_s16Experimentalneon

Multiply long

vmull_high_lane_s32Experimentalneon

Multiply long

vmull_high_lane_u16Experimentalneon

Multiply long

vmull_high_lane_u32Experimentalneon

Multiply long

vmull_high_laneq_s16Experimentalneon

Multiply long

vmull_high_laneq_s32Experimentalneon

Multiply long

vmull_high_laneq_u16Experimentalneon

Multiply long

vmull_high_laneq_u32Experimentalneon

Multiply long

vmull_high_n_s16Experimentalneon

Multiply long

vmull_high_n_s32Experimentalneon

Multiply long

vmull_high_n_u16Experimentalneon

Multiply long

vmull_high_n_u32Experimentalneon

Multiply long

vmull_high_p8Experimentalneon

Polynomial multiply long

vmull_high_p64Experimentalneon,aes

Polynomial multiply long

vmull_high_s8Experimentalneon

Signed multiply long

vmull_high_s16Experimentalneon

Signed multiply long

vmull_high_s32Experimentalneon

Signed multiply long

vmull_high_u8Experimentalneon

Unsigned multiply long

vmull_high_u16Experimentalneon

Unsigned multiply long

vmull_high_u32Experimentalneon

Unsigned multiply long

vmull_lane_s16Experimentalneon

Vector long multiply by scalar

vmull_lane_s32Experimentalneon

Vector long multiply by scalar

vmull_lane_u16Experimentalneon

Vector long multiply by scalar

vmull_lane_u32Experimentalneon

Vector long multiply by scalar

vmull_laneq_s16Experimentalneon

Vector long multiply by scalar

vmull_laneq_s32Experimentalneon

Vector long multiply by scalar

vmull_laneq_u16Experimentalneon

Vector long multiply by scalar

vmull_laneq_u32Experimentalneon

Vector long multiply by scalar

vmull_p8Experimentalneon

Polynomial multiply long

vmull_p64Experimentalneon,aes

Polynomial multiply long

vmull_s8Experimentalneon

Signed multiply long

vmull_s16Experimentalneon

Signed multiply long

vmull_s32Experimentalneon

Signed multiply long

vmull_u8Experimentalneon

Unsigned multiply long

vmull_u16Experimentalneon

Unsigned multiply long

vmull_u32Experimentalneon

Unsigned multiply long

vmullh_n_s16Experimentalneon

Vector long multiply with scalar

vmullh_n_u16Experimentalneon

Vector long multiply with scalar

vmulls_n_s32Experimentalneon

Vector long multiply with scalar

vmulls_n_u32Experimentalneon

Vector long multiply with scalar

vmulq_f32Experimentalneon

Multiply

vmulq_f64Experimentalneon

Multiply

vmulq_lane_f32Experimentalneon

Floating-point multiply

vmulq_lane_f64Experimentalneon

Floating-point multiply

vmulq_lane_s16Experimentalneon

Multiply

vmulq_lane_s32Experimentalneon

Multiply

vmulq_lane_u16Experimentalneon

Multiply

vmulq_lane_u32Experimentalneon

Multiply

vmulq_laneq_f32Experimentalneon

Floating-point multiply

vmulq_laneq_f64Experimentalneon

Floating-point multiply

vmulq_laneq_s16Experimentalneon

Multiply

vmulq_laneq_s32Experimentalneon

Multiply

vmulq_laneq_u16Experimentalneon

Multiply

vmulq_laneq_u32Experimentalneon

Multiply

vmulq_n_f32Experimentalneon

Vector multiply by scalar

vmulq_n_f64Experimentalneon

Vector multiply by scalar

vmulq_n_s16Experimentalneon

Vector multiply by scalar

vmulq_n_s32Experimentalneon

Vector multiply by scalar

vmulq_n_u16Experimentalneon

Vector multiply by scalar

vmulq_n_u32Experimentalneon

Vector multiply by scalar

vmulq_p8Experimentalneon

Polynomial multiply

vmulq_s8Experimentalneon

Multiply

vmulq_s16Experimentalneon

Multiply

vmulq_s32Experimentalneon

Multiply

vmulq_u8Experimentalneon

Multiply

vmulq_u16Experimentalneon

Multiply

vmulq_u32Experimentalneon

Multiply

vmuls_lane_f32Experimentalneon

Floating-point multiply

vmuls_laneq_f32Experimentalneon

Floating-point multiply

vmulx_f32Experimentalneon

Floating-point multiply extended

vmulx_f64Experimentalneon

Floating-point multiply extended

vmulx_lane_f32Experimentalneon

Floating-point multiply extended

vmulx_lane_f64Experimentalneon

Floating-point multiply extended

vmulx_laneq_f32Experimentalneon

Floating-point multiply extended

vmulx_laneq_f64Experimentalneon

Floating-point multiply extended

vmulxd_f64Experimentalneon

Floating-point multiply extended

vmulxd_lane_f64Experimentalneon

Floating-point multiply extended

vmulxd_laneq_f64Experimentalneon

Floating-point multiply extended

vmulxq_f32Experimentalneon

Floating-point multiply extended

vmulxq_f64Experimentalneon

Floating-point multiply extended

vmulxq_lane_f32Experimentalneon

Floating-point multiply extended

vmulxq_lane_f64Experimentalneon

Floating-point multiply extended

vmulxq_laneq_f32Experimentalneon

Floating-point multiply extended

vmulxq_laneq_f64Experimentalneon

Floating-point multiply extended

vmulxs_f32Experimentalneon

Floating-point multiply extended

vmulxs_lane_f32Experimentalneon

Floating-point multiply extended

vmulxs_laneq_f32Experimentalneon

Floating-point multiply extended
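
In the multiply group above, vmul* multiplies lane-wise at the same width, vmull* produces double-width products, the _n forms multiply a vector by one scalar, and vmulx* is the floating-point "multiply extended" form with special 0 × infinity handling. A hedged sketch of the first two, same assumptions, illustrative helper name:

    #[cfg(target_arch = "aarch64")]
    unsafe fn mul_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let a = vdup_n_u8(200);
        let b = vdup_n_u8(3);
        // Same-width multiply wraps: (200 * 3) mod 256 = 88.
        let narrow: [u8; 8] = core::mem::transmute(vmul_u8(a, b));
        assert!(narrow.iter().all(|&x| x == 88));
        // Widening multiply keeps the full product in u16 lanes.
        let wide: [u16; 8] = core::mem::transmute(vmull_u8(a, b));
        assert!(wide.iter().all(|&x| x == 600));
    }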

vmvn_p8Experimentalneon

Vector bitwise not.

vmvn_s8Experimentalneon

Vector bitwise not.

vmvn_s16Experimentalneon

Vector bitwise not.

vmvn_s32Experimentalneon

Vector bitwise not.

vmvn_u8Experimentalneon

Vector bitwise not.

vmvn_u16Experimentalneon

Vector bitwise not.

vmvn_u32Experimentalneon

Vector bitwise not.

vmvnq_p8Experimentalneon

Vector bitwise not.

vmvnq_s8Experimentalneon

Vector bitwise not.

vmvnq_s16Experimentalneon

Vector bitwise not.

vmvnq_s32Experimentalneon

Vector bitwise not.

vmvnq_u8Experimentalneon

Vector bitwise not.

vmvnq_u16Experimentalneon

Vector bitwise not.

vmvnq_u32Experimentalneon

Vector bitwise not.

vneg_f32Experimentalneon

Negate

vneg_f64Experimentalneon

Negate

vneg_s8Experimentalneon

Negate

vneg_s16Experimentalneon

Negate

vneg_s32Experimentalneon

Negate

vneg_s64Experimentalneon

Negate

vnegq_f32Experimentalneon

Negate

vnegq_f64Experimentalneon

Negate

vnegq_s8Experimentalneon

Negate

vnegq_s16Experimentalneon

Negate

vnegq_s32Experimentalneon

Negate

vnegq_s64Experimentalneon

Negate

vorn_s8Experimentalneon

Vector bitwise inclusive OR NOT

vorn_s16Experimentalneon

Vector bitwise inclusive OR NOT

vorn_s32Experimentalneon

Vector bitwise inclusive OR NOT

vorn_s64Experimentalneon

Vector bitwise inclusive OR NOT

vorn_u8Experimentalneon

Vector bitwise inclusive OR NOT

vorn_u16Experimentalneon

Vector bitwise inclusive OR NOT

vorn_u32Experimentalneon

Vector bitwise inclusive OR NOT

vorn_u64Experimentalneon

Vector bitwise inclusive OR NOT

vornq_s8Experimentalneon

Vector bitwise inclusive OR NOT

vornq_s16Experimentalneon

Vector bitwise inclusive OR NOT

vornq_s32Experimentalneon

Vector bitwise inclusive OR NOT

vornq_s64Experimentalneon

Vector bitwise inclusive OR NOT

vornq_u8Experimentalneon

Vector bitwise inclusive OR NOT

vornq_u16Experimentalneon

Vector bitwise inclusive OR NOT

vornq_u32Experimentalneon

Vector bitwise inclusive OR NOT

vornq_u64Experimentalneon

Vector bitwise inclusive OR NOT

vorr_s8Experimentalneon

Vector bitwise or (immediate, inclusive)

vorr_s16Experimentalneon

Vector bitwise or (immediate, inclusive)

vorr_s32Experimentalneon

Vector bitwise or (immediate, inclusive)

vorr_s64Experimentalneon

Vector bitwise or (immediate, inclusive)

vorr_u8Experimentalneon

Vector bitwise or (immediate, inclusive)

vorr_u16Experimentalneon

Vector bitwise or (immediate, inclusive)

vorr_u32Experimentalneon

Vector bitwise or (immediate, inclusive)

vorr_u64Experimentalneon

Vector bitwise or (immediate, inclusive)

vorrq_s8Experimentalneon

Vector bitwise or (immediate, inclusive)

vorrq_s16Experimentalneon

Vector bitwise or (immediate, inclusive)

vorrq_s32Experimentalneon

Vector bitwise or (immediate, inclusive)

vorrq_s64Experimentalneon

Vector bitwise or (immediate, inclusive)

vorrq_u8Experimentalneon

Vector bitwise or (immediate, inclusive)

vorrq_u16Experimentalneon

Vector bitwise or (immediate, inclusive)

vorrq_u32Experimentalneon

Vector bitwise or (immediate, inclusive)

vorrq_u64Experimentalneon

Vector bitwise or (immediate, inclusive)
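
For the bitwise/negate group above: vmvn* is NOT, vorr* is OR, vorn* is OR-NOT (a | !b), and vneg* is arithmetic negation. A hedged sketch under the same nightly/AArch64 assumptions; the helper name is illustrative.

    #[cfg(target_arch = "aarch64")]
    unsafe fn bitwise_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let a = vdupq_n_u8(0xF0);
        let b = vdupq_n_u8(0x0C);
        let not_a: [u8; 16] = core::mem::transmute(vmvnq_u8(a));   // !0xF0 = 0x0F
        let or: [u8; 16] = core::mem::transmute(vorrq_u8(a, b));   // 0xF0 | 0x0C = 0xFC
        let orn: [u8; 16] = core::mem::transmute(vornq_u8(a, b));  // 0xF0 | !0x0C = 0xF3
        assert!(not_a.iter().all(|&x| x == 0x0F));
        assert!(or.iter().all(|&x| x == 0xFC));
        assert!(orn.iter().all(|&x| x == 0xF3));
    }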

vpadal_s8Experimentalneon

Signed Add and Accumulate Long Pairwise.

vpadal_s16Experimentalneon

Signed Add and Accumulate Long Pairwise.

vpadal_s32Experimentalneon

Signed Add and Accumulate Long Pairwise.

vpadal_u8Experimentalneon

Unsigned Add and Accumulate Long Pairwise.

vpadal_u16Experimentalneon

Unsigned Add and Accumulate Long Pairwise.

vpadal_u32Experimentalneon

Unsigned Add and Accumulate Long Pairwise.

vpadalq_s8Experimentalneon

Signed Add and Accumulate Long Pairwise.

vpadalq_s16Experimentalneon

Signed Add and Accumulate Long Pairwise.

vpadalq_s32Experimentalneon

Signed Add and Accumulate Long Pairwise.

vpadalq_u8Experimentalneon

Unsigned Add and Accumulate Long Pairwise.

vpadalq_u16Experimentalneon

Unsigned Add and Accumulate Long Pairwise.

vpadalq_u32Experimentalneon

Unsigned Add and Accumulate Long Pairwise.

vpadd_s8Experimentalneon

Add pairwise.

vpadd_s16Experimentalneon

Add pairwise.

vpadd_s32Experimentalneon

Add pairwise.

vpadd_u8Experimentalneon

Add pairwise.

vpadd_u16Experimentalneon

Add pairwise.

vpadd_u32Experimentalneon

Add pairwise.

vpaddd_s64Experimentalneon

Add pairwise

vpaddd_u64Experimentalneon

Add pairwise

vpaddl_s8Experimentalneon

Signed Add Long Pairwise.

vpaddl_s16Experimentalneon

Signed Add Long Pairwise.

vpaddl_s32Experimentalneon

Signed Add Long Pairwise.

vpaddl_u8Experimentalneon

Unsigned Add Long Pairwise.

vpaddl_u16Experimentalneon

Unsigned Add Long Pairwise.

vpaddl_u32Experimentalneon

Unsigned Add Long Pairwise.

vpaddlq_s8Experimentalneon

Signed Add Long Pairwise.

vpaddlq_s16Experimentalneon

Signed Add Long Pairwise.

vpaddlq_s32Experimentalneon

Signed Add Long Pairwise.

vpaddlq_u8Experimentalneon

Unsigned Add Long Pairwise.

vpaddlq_u16Experimentalneon

Unsigned Add Long Pairwise.

vpaddlq_u32Experimentalneon

Unsigned Add Long Pairwise.

vpaddq_s8Experimentalneon

Add pairwise

vpaddq_s16Experimentalneon

Add pairwise

vpaddq_s32Experimentalneon

Add pairwise

vpaddq_u8Experimentalneon

Add pairwise

vpaddq_u16Experimentalneon

Add pairwise

vpaddq_u32Experimentalneon

Add pairwise
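
The pairwise-add family above: vpadd*/vpaddq* add adjacent element pairs, vpaddl* adds adjacent pairs into double-width lanes, and vpadal* does the same while accumulating into an existing double-width vector. A hedged sketch, same assumptions, illustrative helper name:

    #[cfg(target_arch = "aarch64")]
    unsafe fn pairwise_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let data: [u8; 16] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16];
        let v = vld1q_u8(data.as_ptr());
        // Adjacent pairs summed into u16 lanes: [3, 7, 11, 15, 19, 23, 27, 31].
        let long: [u16; 8] = core::mem::transmute(vpaddlq_u8(v));
        assert_eq!(long, [3, 7, 11, 15, 19, 23, 27, 31]);
        // Accumulate the same pair sums on top of an existing u16 vector.
        let acc = vdupq_n_u16(100);
        let acc2: [u16; 8] = core::mem::transmute(vpadalq_u8(acc, v));
        assert_eq!(acc2[0], 103);
    }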

vpmax_f32Experimentalneon

Folding maximum of adjacent pairs

vpmax_s8Experimentalneon

Folding maximum of adjacent pairs

vpmax_s16Experimentalneon

Folding maximum of adjacent pairs

vpmax_s32Experimentalneon

Folding maximum of adjacent pairs

vpmax_u8Experimentalneon

Folding maximum of adjacent pairs

vpmax_u16Experimentalneon

Folding maximum of adjacent pairs

vpmax_u32Experimentalneon

Folding maximum of adjacent pairs

vpmaxnm_f32Experimentalneon

Floating-point Maximum Number Pairwise (vector).

vpmaxnmq_f32Experimentalneon

Floating-point Maximum Number Pairwise (vector).

vpmaxnmq_f64Experimentalneon

Floating-point Maximum Number Pairwise (vector).

vpmaxq_f32Experimentalneon

Folding maximum of adjacent pairs

vpmaxq_f64Experimentalneon

Folding maximum of adjacent pairs

vpmaxq_s8Experimentalneon

Folding maximum of adjacent pairs

vpmaxq_s16Experimentalneon

Folding maximum of adjacent pairs

vpmaxq_s32Experimentalneon

Folding maximum of adjacent pairs

vpmaxq_u8Experimentalneon

Folding maximum of adjacent pairs

vpmaxq_u16Experimentalneon

Folding maximum of adjacent pairs

vpmaxq_u32Experimentalneon

Folding maximum of adjacent pairs

vpmin_f32Experimentalneon

Folding minimum of adjacent pairs

vpmin_s8Experimentalneon

Folding minimum of adjacent pairs

vpmin_s16Experimentalneon

Folding minimum of adjacent pairs

vpmin_s32Experimentalneon

Folding minimum of adjacent pairs

vpmin_u8Experimentalneon

Folding minimum of adjacent pairs

vpmin_u16Experimentalneon

Folding minimum of adjacent pairs

vpmin_u32Experimentalneon

Folding minimum of adjacent pairs

vpminnm_f32Experimentalneon

Floating-point Minimum Number Pairwise (vector).

vpminnmq_f32Experimentalneon

Floating-point Minimum Number Pairwise (vector).

vpminnmq_f64Experimentalneon

Floating-point Minimum Number Pairwise (vector).

vpminq_f32Experimentalneon

Folding minimum of adjacent pairs

vpminq_f64Experimentalneon

Folding minimum of adjacent pairs

vpminq_s8Experimentalneon

Folding minimum of adjacent pairs

vpminq_s16Experimentalneon

Folding minimum of adjacent pairs

vpminq_s32Experimentalneon

Folding minimum of adjacent pairs

vpminq_u8Experimentalneon

Folding minimum of adjacent pairs

vpminq_u16Experimentalneon

Folding minimum of adjacent pairs

vpminq_u32Experimentalneon

Folding minimum of adjacent pairs

vqabs_s8Experimentalneon

Signed saturating Absolute value

vqabs_s16Experimentalneon

Signed saturating Absolute value

vqabs_s32Experimentalneon

Signed saturating Absolute value

vqabs_s64Experimentalneon

Signed saturating Absolute value

vqabsq_s8Experimentalneon

Signed saturating Absolute value

vqabsq_s16Experimentalneon

Signed saturating Absolute value

vqabsq_s32Experimentalneon

Signed saturating Absolute value

vqabsq_s64Experimentalneon

Signed saturating Absolute value

vqadd_s8Experimentalneon

Saturating add

vqadd_s16Experimentalneon

Saturating add

vqadd_s32Experimentalneon

Saturating add

vqadd_s64Experimentalneon

Saturating add

vqadd_u8Experimentalneon

Saturating add

vqadd_u16Experimentalneon

Saturating add

vqadd_u32Experimentalneon

Saturating add

vqadd_u64Experimentalneon

Saturating add

vqaddb_s8Experimentalneon

Saturating add

vqaddb_u8Experimentalneon

Saturating add

vqaddd_s64Experimentalneon

Saturating add

vqaddd_u64Experimentalneon

Saturating add

vqaddh_s16Experimentalneon

Saturating add

vqaddh_u16Experimentalneon

Saturating add

vqaddq_s8Experimentalneon

Saturating add

vqaddq_s16Experimentalneon

Saturating add

vqaddq_s32Experimentalneon

Saturating add

vqaddq_s64Experimentalneon

Saturating add

vqaddq_u8Experimentalneon

Saturating add

vqaddq_u16Experimentalneon

Saturating add

vqaddq_u32Experimentalneon

Saturating add

vqaddq_u64Experimentalneon

Saturating add

vqadds_s32Experimentalneon

Saturating add

vqadds_u32Experimentalneon

Saturating add
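
The vq* prefix in the group above means saturating: results clamp at the type's bounds instead of wrapping. vqabs*/vqadd* work on whole vectors, and the vqaddb/h/s/d forms operate on single scalars. A hedged sketch under the same nightly/AArch64 assumptions; the helper name is illustrative.

    #[cfg(target_arch = "aarch64")]
    unsafe fn sat_add_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let a = vdupq_n_u8(250);
        let b = vdupq_n_u8(10);
        // 250 + 10 saturates to 255 instead of wrapping to 4.
        let r: [u8; 16] = core::mem::transmute(vqaddq_u8(a, b));
        assert!(r.iter().all(|&x| x == 255));
        // Scalar form of the same operation.
        assert_eq!(vqaddb_u8(250, 10), 255);
    }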

vqdmlal_high_lane_s16Experimentalneon

Signed saturating doubling multiply-add long

vqdmlal_high_lane_s32Experimentalneon

Signed saturating doubling multiply-add long

vqdmlal_high_laneq_s16Experimentalneon

Signed saturating doubling multiply-add long

vqdmlal_high_laneq_s32Experimentalneon

Signed saturating doubling multiply-add long

vqdmlal_high_n_s16Experimentalneon

Signed saturating doubling multiply-add long

vqdmlal_high_n_s32Experimentalneon

Signed saturating doubling multiply-add long

vqdmlal_high_s16Experimentalneon

Signed saturating doubling multiply-add long

vqdmlal_high_s32Experimentalneon

Signed saturating doubling multiply-add long

vqdmlal_lane_s16Experimentalneon

Vector widening saturating doubling multiply accumulate with scalar

vqdmlal_lane_s32Experimentalneon

Vector widening saturating doubling multiply accumulate with scalar

vqdmlal_laneq_s16Experimentalneon

Vector widening saturating doubling multiply accumulate with scalar

vqdmlal_laneq_s32Experimentalneon

Vector widening saturating doubling multiply accumulate with scalar

vqdmlal_n_s16Experimentalneon

Vector widening saturating doubling multiply accumulate with scalar

vqdmlal_n_s32Experimentalneon

Vector widening saturating doubling multiply accumulate with scalar

vqdmlal_s16Experimentalneon

Signed saturating doubling multiply-add long

vqdmlal_s32Experimentalneon

Signed saturating doubling multiply-add long

vqdmlsl_high_lane_s16Experimentalneon

Signed saturating doubling multiply-subtract long

vqdmlsl_high_lane_s32Experimentalneon

Signed saturating doubling multiply-subtract long

vqdmlsl_high_laneq_s16Experimentalneon

Signed saturating doubling multiply-subtract long

vqdmlsl_high_laneq_s32Experimentalneon

Signed saturating doubling multiply-subtract long

vqdmlsl_high_n_s16Experimentalneon

Signed saturating doubling multiply-subtract long

vqdmlsl_high_n_s32Experimentalneon

Signed saturating doubling multiply-subtract long

vqdmlsl_high_s16Experimentalneon

Signed saturating doubling multiply-subtract long

vqdmlsl_high_s32Experimentalneon

Signed saturating doubling multiply-subtract long

vqdmlsl_lane_s16Experimentalneon

Vector widening saturating doubling multiply subtract with scalar

vqdmlsl_lane_s32Experimentalneon

Vector widening saturating doubling multiply subtract with scalar

vqdmlsl_laneq_s16Experimentalneon

Vector widening saturating doubling multiply subtract with scalar

vqdmlsl_laneq_s32Experimentalneon

Vector widening saturating doubling multiply subtract with scalar

vqdmlsl_n_s16Experimentalneon

Vector widening saturating doubling multiply subtract with scalar

vqdmlsl_n_s32Experimentalneon

Vector widening saturating doubling multiply subtract with scalar

vqdmlsl_s16Experimentalneon

Signed saturating doubling multiply-subtract long

vqdmlsl_s32Experimentalneon

Signed saturating doubling multiply-subtract long

vqdmulh_n_s16Experimentalneon

Vector saturating doubling multiply high with scalar

vqdmulh_n_s32Experimentalneon

Vector saturating doubling multiply high with scalar

vqdmulh_s16Experimentalneon

Signed saturating doubling multiply returning high half

vqdmulh_s32Experimentalneon

Signed saturating doubling multiply returning high half

vqdmulhh_lane_s16Experimentalneon

Signed saturating doubling multiply returning high half

vqdmulhh_laneq_s16Experimentalneon

Signed saturating doubling multiply returning high half

vqdmulhh_s16Experimentalneon

Signed saturating doubling multiply returning high half

vqdmulhq_nq_s16Experimentalneon

Vector saturating doubling multiply high with scalar

vqdmulhq_nq_s32Experimentalneon

Vector saturating doubling multiply high with scalar

vqdmulhq_s16Experimentalneon

Signed saturating doubling multiply returning high half

vqdmulhq_s32Experimentalneon

Signed saturating doubling multiply returning high half

vqdmulhs_lane_s32Experimentalneon

Signed saturating doubling multiply returning high half

vqdmulhs_laneq_s32Experimentalneon

Signed saturating doubling multiply returning high half

vqdmulhs_s32Experimentalneon

Signed saturating doubling multiply returning high half

vqdmull_high_lane_s16Experimentalneon

Signed saturating doubling multiply long

vqdmull_high_lane_s32Experimentalneon

Signed saturating doubling multiply long

vqdmull_high_laneq_s16Experimentalneon

Signed saturating doubling multiply long

vqdmull_high_laneq_s32Experimentalneon

Signed saturating doubling multiply long

vqdmull_high_n_s16Experimentalneon

Signed saturating doubling multiply long

vqdmull_high_n_s32Experimentalneon

Signed saturating doubling multiply long

vqdmull_high_s16Experimentalneon

Signed saturating doubling multiply long

vqdmull_high_s32Experimentalneon

Signed saturating doubling multiply long

vqdmull_lane_s16Experimentalneon

Vector saturating doubling long multiply by scalar

vqdmull_lane_s32Experimentalneon

Vector saturating doubling long multiply by scalar

vqdmull_laneq_s16Experimentalneon

Vector saturating doubling long multiply by scalar

vqdmull_laneq_s32Experimentalneon

Vector saturating doubling long multiply by scalar

vqdmull_n_s16Experimentalneon

Vector saturating doubling long multiply with scalar

vqdmull_n_s32Experimentalneon

Vector saturating doubling long multiply with scalar

vqdmull_s16Experimentalneon

Signed saturating doubling multiply long

vqdmull_s32Experimentalneon

Signed saturating doubling multiply long

vqdmullh_lane_s16Experimentalneon

Signed saturating doubling multiply long

vqdmullh_laneq_s16Experimentalneon

Signed saturating doubling multiply long

vqdmullh_s16Experimentalneon

Signed saturating doubling multiply long

vqdmulls_lane_s32Experimentalneon

Signed saturating doubling multiply long

vqdmulls_laneq_s32Experimentalneon

Signed saturating doubling multiply long

vqdmulls_s32Experimentalneon

Signed saturating doubling multiply long
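
The vqdmull*/vqdmulh* family comes from fixed-point arithmetic: it computes a saturating doubled product 2 * a * b, either widened to twice the element size (vqdmull) or keeping only the high half at the same size (vqdmulh). A hedged sketch, same assumptions, illustrative helper name:

    #[cfg(target_arch = "aarch64")]
    unsafe fn qdmull_demo() {
        // Illustrative sketch, not from the listing.
        use core::arch::aarch64::*;
        let a = vdup_n_s16(1000);
        let b = vdup_n_s16(2000);
        // Each i32 lane becomes 2 * 1000 * 2000 = 4_000_000.
        let wide: [i32; 4] = core::mem::transmute(vqdmull_s16(a, b));
        assert!(wide.iter().all(|&x| x == 4_000_000));
        // 2 * i16::MIN * i16::MIN would overflow i32, so it saturates.
        let m = vdup_n_s16(i16::MIN);
        let sat: [i32; 4] = core::mem::transmute(vqdmull_s16(m, m));
        assert!(sat.iter().all(|&x| x == i32::MAX));
    }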

vqmovn_high_s16Experimentalneon

Signed saturating extract narrow

vqmovn_high_s32Experimentalneon

Signed saturating extract narrow

vqmovn_high_s64Experimentalneon

Signed saturating extract narrow

vqmovn_high_u16Experimentalneon

Unsigned saturating extract narrow

vqmovn_high_u32Experimentalneon

Unsigned saturating extract narrow

vqmovn_high_u64Experimentalneon

Unsigned saturating extract narrow

vqmovn_s16Experimentalneon

Signed saturating extract narrow

vqmovn_s32Experimentalneon

Signed saturating extract narrow

vqmovn_s64Experimentalneon

Signed saturating extract narrow

vqmovn_u16Experimentalneon

Unsigned saturating extract narrow

vqmovn_u32Experimentalneon

Unsigned saturating extract narrow

vqmovn_u64Experimentalneon

Unsigned saturating extract narrow

vqmovnd_s64Experimentalneon

Saturating extract narrow

vqmovnd_u64Experimentalneon

Saturating extract narrow

vqmovnh_s16Experimentalneon

Saturating extract narrow

vqmovnh_u16Experimentalneon

Saturating extract narrow

vqmovns_s32Experimentalneon

Saturating extract narrow

vqmovns_u32Experimentalneon

Saturating extract narrow

vqmovun_high_s16Experimentalneon

Signed saturating extract unsigned narrow

vqmovun_high_s32Experimentalneon

Signed saturating extract unsigned narrow

vqmovun_high_s64Experimentalneon

Signed saturating extract unsigned narrow

vqmovun_s16Experimentalneon

Signed saturating extract unsigned narrow

vqmovun_s32Experimentalneon

Signed saturating extract unsigned narrow

vqmovun_s64Experimentalneon

Signed saturating extract unsigned narrow

vqmovund_s64Experimentalneon

Signed saturating extract unsigned narrow

vqmovunh_s16Experimentalneon

Signed saturating extract unsigned narrow

vqmovuns_s32Experimentalneon

Signed saturating extract unsigned narrow
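
As a rough sketch of the difference between the signed-result and unsigned-result narrowing intrinsics above (wrapper name hypothetical; an AArch64 target on a nightly toolchain is assumed): vqmovn_s16 clamps each i16 lane into the i8 range, while vqmovun_s16 clamps into the u8 range before narrowing.

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn narrow_both_ways(wide: core::arch::aarch64::int16x8_t)
    -> (core::arch::aarch64::int8x8_t, core::arch::aarch64::uint8x8_t) {
    use core::arch::aarch64::{vqmovn_s16, vqmovun_s16};
    // SQXTN: clamp each i16 lane to [-128, 127], then narrow to i8.
    let signed = vqmovn_s16(wide);
    // SQXTUN: clamp each i16 lane to [0, 255], then narrow to u8.
    let unsigned = vqmovun_s16(wide);
    (signed, unsigned)
}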

vqneg_s8Experimentalneon

Signed saturating negate

vqneg_s16Experimentalneon

Signed saturating negate

vqneg_s32Experimentalneon

Signed saturating negate

vqneg_s64Experimentalneon

Signed saturating negate

vqnegq_s8Experimentalneon

Signed saturating negate

vqnegq_s16Experimentalneon

Signed saturating negate

vqnegq_s32Experimentalneon

Signed saturating negate

vqnegq_s64Experimentalneon

Signed saturating negate

vqrdmlah_lane_s16Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlah_lane_s32Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlah_laneq_s16Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlah_laneq_s32Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlah_s16Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlah_s32Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahh_lane_s16Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahh_laneq_s16Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahh_s16Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_lane_s16Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_lane_s32Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_laneq_s16Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_laneq_s32Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_s16Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahq_s32Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahs_lane_s32Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahs_laneq_s32Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlahs_s32Experimentalneon

Signed saturating rounding doubling multiply accumulate returning high half

vqrdmlsh_lane_s16Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlsh_lane_s32Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlsh_laneq_s16Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlsh_laneq_s32Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlsh_s16Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlsh_s32Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshh_lane_s16Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshh_laneq_s16Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshh_s16Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_lane_s16Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_lane_s32Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_laneq_s16Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_laneq_s32Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_s16Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshq_s32Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshs_lane_s32Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshs_laneq_s32Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmlshs_s32Experimentalneon

Signed saturating rounding doubling multiply subtract returning high half

vqrdmulh_lane_s16Experimentalneon

Vector rounding saturating doubling multiply high by scalar

vqrdmulh_lane_s32Experimentalneon

Vector rounding saturating doubling multiply high by scalar

vqrdmulh_laneq_s16Experimentalneon

Vector rounding saturating doubling multiply high by scalar

vqrdmulh_laneq_s32Experimentalneon

Vector rounding saturating doubling multiply high by scalar

vqrdmulh_n_s16Experimentalneon

Vector saturating rounding doubling multiply high with scalar

vqrdmulh_n_s32Experimentalneon

Vector saturating rounding doubling multiply high with scalar

vqrdmulh_s16Experimentalneon

Signed saturating rounding doubling multiply returning high half

vqrdmulh_s32Experimentalneon

Signed saturating rounding doubling multiply returning high half

vqrdmulhh_lane_s16Experimentalneon

Signed saturating rounding doubling multiply returning high half

vqrdmulhh_laneq_s16Experimentalneon

Signed saturating rounding doubling multiply returning high half

vqrdmulhh_s16Experimentalneon

Signed saturating rounding doubling multiply returning high half

vqrdmulhq_lane_s16Experimentalneon

Vector rounding saturating doubling multiply high by scalar

vqrdmulhq_lane_s32Experimentalneon

Vector rounding saturating doubling multiply high by scalar

vqrdmulhq_laneq_s16Experimentalneon

Vector rounding saturating doubling multiply high by scalar

vqrdmulhq_laneq_s32Experimentalneon

Vector rounding saturating doubling multiply high by scalar

vqrdmulhq_n_s16Experimentalneon

Vector saturating rounding doubling multiply high with scalar

vqrdmulhq_n_s32Experimentalneon

Vector saturating rounding doubling multiply high with scalar

vqrdmulhq_s16Experimentalneon

Signed saturating rounding doubling multiply returning high half

vqrdmulhq_s32Experimentalneon

Signed saturating rounding doubling multiply returning high half

vqrdmulhs_lane_s32Experimentalneon

Signed saturating rounding doubling multiply returning high half

vqrdmulhs_laneq_s32Experimentalneon

Signed saturating rounding doubling multiply returning high half

vqrdmulhs_s32Experimentalneon

Signed saturating rounding doubling multiply returning high half
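
A common use of vqrdmulh is Q15 fixed-point multiplication. The minimal sketch below (hypothetical wrapper; nightly AArch64 assumed) multiplies two vectors of Q15 values with rounding and saturation.

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn q15_mul(a: core::arch::aarch64::int16x8_t,
                  b: core::arch::aarch64::int16x8_t)
    -> core::arch::aarch64::int16x8_t {
    use core::arch::aarch64::vqrdmulhq_s16;
    // SQRDMULH: per lane, (2 * a * b + 0x8000) >> 16, saturated to i16.
    vqrdmulhq_s16(a, b)
}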

vqrshl_s8Experimentalneon

Signed saturating rounding shift left

vqrshl_s16Experimentalneon

Signed saturating rounding shift left

vqrshl_s32Experimentalneon

Signed saturating rounding shift left

vqrshl_s64Experimentalneon

Signed saturating rounding shift left

vqrshl_u8Experimentalneon

Unsigned saturating rounding shift left

vqrshl_u16Experimentalneon

Unsigned saturating rounding shift left

vqrshl_u32Experimentalneon

Unsigned saturating rounding shift left

vqrshl_u64Experimentalneon

Unsigned saturating rounding shift left

vqrshlb_s8Experimentalneon

Signed saturating rounding shift left

vqrshlb_u8Experimentalneon

Unsigned saturating rounding shift left

vqrshld_s64Experimentalneon

Signed saturating rounding shift left

vqrshld_u64Experimentalneon

Unsigned saturating rounding shift left

vqrshlh_s16Experimentalneon

Signed saturating rounding shift left

vqrshlh_u16Experimentalneon

Unsigned saturating rounding shift left

vqrshlq_s8Experimentalneon

Signed saturating rounding shift left

vqrshlq_s16Experimentalneon

Signed saturating rounding shift left

vqrshlq_s32Experimentalneon

Signed saturating rounding shift left

vqrshlq_s64Experimentalneon

Signed saturating rounding shift left

vqrshlq_u8Experimentalneon

Unsigned saturating rounding shift left

vqrshlq_u16Experimentalneon

Unsigned saturating rounding shift left

vqrshlq_u32Experimentalneon

Unsigned saturating rounding shift left

vqrshlq_u64Experimentalneon

Unsigned saturating rounding shift left

vqrshls_s32Experimentalneon

Signed saturating rounding shift left

vqrshls_u32Experimentalneon

Unsigned saturating rounding shift left

vqrshrn_high_n_s16Experimentalneon

Signed saturating rounded shift right narrow

vqrshrn_high_n_s32Experimentalneon

Signed saturating rounded shift right narrow

vqrshrn_high_n_s64Experimentalneon

Signed saturating rounded shift right narrow

vqrshrn_high_n_u16Experimentalneon

Unsigned saturating rounded shift right narrow

vqrshrn_high_n_u32Experimentalneon

Unsigned saturating rounded shift right narrow

vqrshrn_high_n_u64Experimentalneon

Unsigned saturating rounded shift right narrow

vqrshrnd_n_s64Experimentalneon

Signed saturating rounded shift right narrow

vqrshrnd_n_u64Experimentalneon

Unsigned saturating rounded shift right narrow

vqrshrnh_n_s16Experimentalneon

Signed saturating rounded shift right narrow

vqrshrnh_n_u16Experimentalneon

Unsigned saturating rounded shift right narrow

vqrshrns_n_s32Experimentalneon

Signed saturating rounded shift right narrow

vqrshrns_n_u32Experimentalneon

Unsigned saturating rounded shift right narrow

vqrshrun_high_n_s16Experimentalneon

Signed saturating rounded shift right unsigned narrow

vqrshrun_high_n_s32Experimentalneon

Signed saturating rounded shift right unsigned narrow

vqrshrun_high_n_s64Experimentalneon

Signed saturating rounded shift right unsigned narrow

vqrshrund_n_s64Experimentalneon

Signed saturating rounded shift right unsigned narrow

vqrshrunh_n_s16Experimentalneon

Signed saturating rounded shift right unsigned narrow

vqrshruns_n_s32Experimentalneon

Signed saturating rounded shift right unsigned narrow

vqshl_n_s8Experimentalneon

Signed saturating shift left

vqshl_n_s16Experimentalneon

Signed saturating shift left

vqshl_n_s32Experimentalneon

Signed saturating shift left

vqshl_n_s64Experimentalneon

Signed saturating shift left

vqshl_n_u8Experimentalneon

Unsigned saturating shift left

vqshl_n_u16Experimentalneon

Unsigned saturating shift left

vqshl_n_u32Experimentalneon

Unsigned saturating shift left

vqshl_n_u64Experimentalneon

Unsigned saturating shift left

vqshl_s8Experimentalneon

Signed saturating shift left

vqshl_s16Experimentalneon

Signed saturating shift left

vqshl_s32Experimentalneon

Signed saturating shift left

vqshl_s64Experimentalneon

Signed saturating shift left

vqshl_u8Experimentalneon

Unsigned saturating shift left

vqshl_u16Experimentalneon

Unsigned saturating shift left

vqshl_u32Experimentalneon

Unsigned saturating shift left

vqshl_u64Experimentalneon

Unsigned saturating shift left

vqshlb_n_s8Experimentalneon

Signed saturating shift left

vqshlb_n_u8Experimentalneon

Unsigned saturating shift left

vqshlb_s8Experimentalneon

Signed saturating shift left

vqshlb_u8Experimentalneon

Unsigned saturating shift left

vqshld_n_s64Experimentalneon

Signed saturating shift left

vqshld_n_u64Experimentalneon

Unsigned saturating shift left

vqshld_s64Experimentalneon

Signed saturating shift left

vqshld_u64Experimentalneon

Unsigned saturating shift left

vqshlh_n_s16Experimentalneon

Signed saturating shift left

vqshlh_n_u16Experimentalneon

Unsigned saturating shift left

vqshlh_s16Experimentalneon

Signed saturating shift left

vqshlh_u16Experimentalneon

Unsigned saturating shift left

vqshlq_n_s8Experimentalneon

Signed saturating shift left

vqshlq_n_s16Experimentalneon

Signed saturating shift left

vqshlq_n_s32Experimentalneon

Signed saturating shift left

vqshlq_n_s64Experimentalneon

Signed saturating shift left

vqshlq_n_u8Experimentalneon

Unsigned saturating shift left

vqshlq_n_u16Experimentalneon

Unsigned saturating shift left

vqshlq_n_u32Experimentalneon

Unsigned saturating shift left

vqshlq_n_u64Experimentalneon

Unsigned saturating shift left

vqshlq_s8Experimentalneon

Signed saturating shift left

vqshlq_s16Experimentalneon

Signed saturating shift left

vqshlq_s32Experimentalneon

Signed saturating shift left

vqshlq_s64Experimentalneon

Signed saturating shift left

vqshlq_u8Experimentalneon

Unsigned saturating shift left

vqshlq_u16Experimentalneon

Unsigned saturating shift left

vqshlq_u32Experimentalneon

Unsigned saturating shift left

vqshlq_u64Experimentalneon

Unsigned saturating shift left

vqshls_n_s32Experimentalneon

Signed saturating shift left

vqshls_n_u32Experimentalneon

Unsigned saturating shift left

vqshls_s32Experimentalneon

Signed saturating shift left

vqshls_u32Experimentalneon

Unsigned saturating shift left
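
The _n_ forms above take the shift amount as a const generic immediate, while the register forms take per-lane shift counts. A minimal saturating-shift sketch (hypothetical wrapper; nightly AArch64 assumed):

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn saturating_shift() -> core::arch::aarch64::int16x8_t {
    use core::arch::aarch64::{vdupq_n_s16, vqshlq_n_s16};
    // SQSHL #4: 20_000 << 4 would overflow i16, so every lane saturates to 32_767.
    let a = vdupq_n_s16(20_000);
    vqshlq_n_s16::<4>(a)
}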

vqshrn_high_n_s16Experimentalneon

Signed saturating shift right narrow

vqshrn_high_n_s32Experimentalneon

Signed saturating shift right narrow

vqshrn_high_n_s64Experimentalneon

Signed saturating shift right narrow

vqshrn_high_n_u16Experimentalneon

Unsigned saturating shift right narrow

vqshrn_high_n_u32Experimentalneon

Unsigned saturating shift right narrow

vqshrn_high_n_u64Experimentalneon

Unsigned saturating shift right narrow

vqshrnd_n_s64Experimentalneon

Signed saturating shift right narrow

vqshrnd_n_u64Experimentalneon

Unsigned saturating shift right narrow

vqshrnh_n_s16Experimentalneon

Signed saturating shift right narrow

vqshrnh_n_u16Experimentalneon

Unsigned saturating shift right narrow

vqshrns_n_s32Experimentalneon

Signed saturating shift right narrow

vqshrns_n_u32Experimentalneon

Unsigned saturating shift right narrow

vqshrun_high_n_s16Experimentalneon

Signed saturating shift right unsigned narrow

vqshrun_high_n_s32Experimentalneon

Signed saturating shift right unsigned narrow

vqshrun_high_n_s64Experimentalneon

Signed saturating shift right unsigned narrow

vqshrund_n_s64Experimentalneon

Signed saturating shift right unsigned narrow

vqshrunh_n_s16Experimentalneon

Signed saturating shift right unsigned narrow

vqshruns_n_s32Experimentalneon

Signed saturating shift right unsigned narrow

vqsub_s8Experimentalneon

Saturating subtract

vqsub_s16Experimentalneon

Saturating subtract

vqsub_s32Experimentalneon

Saturating subtract

vqsub_s64Experimentalneon

Saturating subtract

vqsub_u8Experimentalneon

Saturating subtract

vqsub_u16Experimentalneon

Saturating subtract

vqsub_u32Experimentalneon

Saturating subtract

vqsub_u64Experimentalneon

Saturating subtract

vqsubb_s8Experimentalneon

Saturating subtract

vqsubb_u8Experimentalneon

Saturating subtract

vqsubd_s64Experimentalneon

Saturating subtract

vqsubd_u64Experimentalneon

Saturating subtract

vqsubh_s16Experimentalneon

Saturating subtract

vqsubh_u16Experimentalneon

Saturating subtract

vqsubq_s8Experimentalneon

Saturating subtract

vqsubq_s16Experimentalneon

Saturating subtract

vqsubq_s32Experimentalneon

Saturating subtract

vqsubq_s64Experimentalneon

Saturating subtract

vqsubq_u8Experimentalneon

Saturating subtract

vqsubq_u16Experimentalneon

Saturating subtract

vqsubq_u32Experimentalneon

Saturating subtract

vqsubq_u64Experimentalneon

Saturating subtract

vqsubs_s32Experimentalneon

Saturating subtract

vqsubs_u32Experimentalneon

Saturating subtract
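
Saturating subtraction clamps instead of wrapping, which is convenient for unsigned pixel or counter arithmetic. A minimal sketch (hypothetical wrapper; nightly AArch64 assumed):

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn clamped_difference() -> core::arch::aarch64::uint8x16_t {
    use core::arch::aarch64::{vdupq_n_u8, vqsubq_u8};
    // UQSUB: 10 - 30 clamps to 0 in every u8 lane rather than wrapping to 236.
    let a = vdupq_n_u8(10);
    let b = vdupq_n_u8(30);
    vqsubq_u8(a, b)
}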

vqtbl1_p8Experimentalneon

Table look-up

vqtbl1_s8Experimentalneon

Table look-up

vqtbl1_u8Experimentalneon

Table look-up

vqtbl1q_p8Experimentalneon

Table look-up

vqtbl1q_s8Experimentalneon

Table look-up

vqtbl1q_u8Experimentalneon

Table look-up

vqtbl2_p8Experimentalneon

Table look-up

vqtbl2_s8Experimentalneon

Table look-up

vqtbl2_u8Experimentalneon

Table look-up

vqtbl2q_p8Experimentalneon

Table look-up

vqtbl2q_s8Experimentalneon

Table look-up

vqtbl2q_u8Experimentalneon

Table look-up

vqtbl3_p8Experimentalneon

Table look-up

vqtbl3_s8Experimentalneon

Table look-up

vqtbl3_u8Experimentalneon

Table look-up

vqtbl3q_p8Experimentalneon

Table look-up

vqtbl3q_s8Experimentalneon

Table look-up

vqtbl3q_u8Experimentalneon

Table look-up

vqtbl4_p8Experimentalneon

Table look-up

vqtbl4_s8Experimentalneon

Table look-up

vqtbl4_u8Experimentalneon

Table look-up

vqtbl4q_p8Experimentalneon

Table look-up

vqtbl4q_s8Experimentalneon

Table look-up

vqtbl4q_u8Experimentalneon

Table look-up

vqtbx1_p8Experimentalneon

Extended table look-up

vqtbx1_s8Experimentalneon

Extended table look-up

vqtbx1_u8Experimentalneon

Extended table look-up

vqtbx1q_p8Experimentalneon

Extended table look-up

vqtbx1q_s8Experimentalneon

Extended table look-up

vqtbx1q_u8Experimentalneon

Extended table look-up

vqtbx2_p8Experimentalneon

Extended table look-up

vqtbx2_s8Experimentalneon

Extended table look-up

vqtbx2_u8Experimentalneon

Extended table look-up

vqtbx2q_p8Experimentalneon

Extended table look-up

vqtbx2q_s8Experimentalneon

Extended table look-up

vqtbx2q_u8Experimentalneon

Extended table look-up

vqtbx3_p8Experimentalneon

Extended table look-up

vqtbx3_s8Experimentalneon

Extended table look-up

vqtbx3_u8Experimentalneon

Extended table look-up

vqtbx3q_p8Experimentalneon

Extended table look-up

vqtbx3q_s8Experimentalneon

Extended table look-up

vqtbx3q_u8Experimentalneon

Extended table look-up

vqtbx4_p8Experimentalneon

Extended table look-up

vqtbx4_s8Experimentalneon

Extended table look-up

vqtbx4_u8Experimentalneon

Extended table look-up

vqtbx4q_p8Experimentalneon

Extended table look-up

vqtbx4q_s8Experimentalneon

Extended table look-up

vqtbx4q_u8Experimentalneon

Extended table look-up
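
vqtbl performs a byte-wise table look-up and returns 0 for out-of-range indices, while vqtbx leaves the corresponding lane of its first argument untouched instead. The sketch below (hypothetical wrapper; nightly AArch64 assumed) shuffles a 16-byte table with both forms.

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn byte_shuffle(table: core::arch::aarch64::uint8x16_t,
                       indices: core::arch::aarch64::uint8x16_t)
    -> (core::arch::aarch64::uint8x16_t, core::arch::aarch64::uint8x16_t) {
    use core::arch::aarch64::{vdupq_n_u8, vqtbl1q_u8, vqtbx1q_u8};
    // TBL: lanes whose index is >= 16 become 0.
    let zero_filled = vqtbl1q_u8(table, indices);
    // TBX: lanes whose index is >= 16 keep the value from `fallback`.
    let fallback = vdupq_n_u8(0xFF);
    let fallback_filled = vqtbx1q_u8(fallback, table, indices);
    (zero_filled, fallback_filled)
}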

vraddhn_high_s16Experimentalneon

Rounding Add returning High Narrow (high half).

vraddhn_high_s32Experimentalneon

Rounding Add returning High Narrow (high half).

vraddhn_high_s64Experimentalneon

Rounding Add returning High Narrow (high half).

vraddhn_high_u16Experimentalneon

Rounding Add returning High Narrow (high half).

vraddhn_high_u32Experimentalneon

Rounding Add returning High Narrow (high half).

vraddhn_high_u64Experimentalneon

Rounding Add returning High Narrow (high half).

vraddhn_s16Experimentalneon

Rounding Add returning High Narrow.

vraddhn_s32Experimentalneon

Rounding Add returning High Narrow.

vraddhn_s64Experimentalneon

Rounding Add returning High Narrow.

vraddhn_u16Experimentalneon

Rounding Add returning High Narrow.

vraddhn_u32Experimentalneon

Rounding Add returning High Narrow.

vraddhn_u64Experimentalneon

Rounding Add returning High Narrow.

vrbit_p8Experimentalneon

Reverse bit order

vrbit_s8Experimentalneon

Reverse bit order

vrbit_u8Experimentalneon

Reverse bit order

vrbitq_p8Experimentalneon

Reverse bit order

vrbitq_s8Experimentalneon

Reverse bit order

vrbitq_u8Experimentalneon

Reverse bit order

vrecpe_f32Experimentalneon

Reciprocal estimate.

vrecpe_f64Experimentalneon

Reciprocal estimate.

vrecpeq_f32Experimentalneon

Reciprocal estimate.

vrecpeq_f64Experimentalneon

Reciprocal estimate.

vreinterpret_f32_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_f32_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_f32_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_f32_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_f32_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_f32_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_f32_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_f32_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_f32_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_f32_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_f32_u32Experimentalneon

Vector reinterpret cast operation

vreinterpret_f32_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_u32Experimentalneon

Vector reinterpret cast operation

vreinterpret_f64_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_u32Experimentalneon

Vector reinterpret cast operation

vreinterpret_p8_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_u32Experimentalneon

Vector reinterpret cast operation

vreinterpret_p16_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_u32Experimentalneon

Vector reinterpret cast operation

vreinterpret_p64_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_u32Experimentalneon

Vector reinterpret cast operation

vreinterpret_s8_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_u32Experimentalneon

Vector reinterpret cast operation

vreinterpret_s16_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_u32Experimentalneon

Vector reinterpret cast operation

vreinterpret_s32_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_u32Experimentalneon

Vector reinterpret cast operation

vreinterpret_s64_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_u32Experimentalneon

Vector reinterpret cast operation

vreinterpret_u8_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_u32Experimentalneon

Vector reinterpret cast operation

vreinterpret_u16_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_u32_u64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_f32Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_f64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_p8Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_p16Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_p64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_s8Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_s16Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_s32Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_s64Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_u8Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_u16Experimentalneon

Vector reinterpret cast operation

vreinterpret_u64_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f32_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_f64_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p8_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p16_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_p64_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s8_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s16_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s32_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_s64_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u8_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_u32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u16_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u32_u64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_f32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_f64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_p8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_p16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_p64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_s8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_s16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_s32Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_s64Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_u8Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_u16Experimentalneon

Vector reinterpret cast operation

vreinterpretq_u64_u32Experimentalneon

Vector reinterpret cast operation
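
vreinterpret is a pure bit-cast: the register contents are untouched and only the element type changes. A minimal sketch (hypothetical wrapper; nightly AArch64 assumed):

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn view_u32_as_bytes(words: core::arch::aarch64::uint32x4_t)
    -> core::arch::aarch64::uint8x16_t {
    use core::arch::aarch64::vreinterpretq_u8_u32;
    // No data movement: the 128 bits are simply re-labelled as sixteen u8 lanes.
    vreinterpretq_u8_u32(words)
}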

vrev16_p8Experimentalneon

Reversing vector elements (swap endianness)

vrev16_s8Experimentalneon

Reversing vector elements (swap endianness)

vrev16_u8Experimentalneon

Reversing vector elements (swap endianness)

vrev16q_p8Experimentalneon

Reversing vector elements (swap endianness)

vrev16q_s8Experimentalneon

Reversing vector elements (swap endianness)

vrev16q_u8Experimentalneon

Reversing vector elements (swap endianness)

vrev32_p8Experimentalneon

Reversing vector elements (swap endianness)

vrev32_p16Experimentalneon

Reversing vector elements (swap endianness)

vrev32_s8Experimentalneon

Reversing vector elements (swap endianness)

vrev32_s16Experimentalneon

Reversing vector elements (swap endianness)

vrev32_u8Experimentalneon

Reversing vector elements (swap endianness)

vrev32_u16Experimentalneon

Reversing vector elements (swap endianness)

vrev32q_p8Experimentalneon

Reversing vector elements (swap endianness)

vrev32q_p16Experimentalneon

Reversing vector elements (swap endianness)

vrev32q_s8Experimentalneon

Reversing vector elements (swap endianness)

vrev32q_s16Experimentalneon

Reversing vector elements (swap endianness)

vrev32q_u8Experimentalneon

Reversing vector elements (swap endianness)

vrev32q_u16Experimentalneon

Reversing vector elements (swap endianness)

vrev64_f32Experimentalneon

Reversing vector elements (swap endianness)

vrev64_p8Experimentalneon

Reversing vector elements (swap endianness)

vrev64_p16Experimentalneon

Reversing vector elements (swap endianness)

vrev64_s8Experimentalneon

Reversing vector elements (swap endianness)

vrev64_s16Experimentalneon

Reversing vector elements (swap endianness)

vrev64_s32Experimentalneon

Reversing vector elements (swap endianness)

vrev64_u8Experimentalneon

Reversing vector elements (swap endianness)

vrev64_u16Experimentalneon

Reversing vector elements (swap endianness)

vrev64_u32Experimentalneon

Reversing vector elements (swap endianness)

vrev64q_f32Experimentalneon

Reversing vector elements (swap endianness)

vrev64q_p8Experimentalneon

Reversing vector elements (swap endianness)

vrev64q_p16Experimentalneon

Reversing vector elements (swap endianness)

vrev64q_s8Experimentalneon

Reversing vector elements (swap endianness)

vrev64q_s16Experimentalneon

Reversing vector elements (swap endianness)

vrev64q_s32Experimentalneon

Reversing vector elements (swap endianness)

vrev64q_u8Experimentalneon

Reversing vector elements (swap endianness)

vrev64q_u16Experimentalneon

Reversing vector elements (swap endianness)

vrev64q_u32Experimentalneon

Reversing vector elements (swap endianness)
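
vrev reverses elements within fixed-size chunks of the vector, which is the usual way to byte-swap packed integers. The sketch below (hypothetical wrapper; nightly AArch64 assumed) flips the byte order of four u32 lanes.

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn bswap_u32_lanes(words: core::arch::aarch64::uint32x4_t)
    -> core::arch::aarch64::uint32x4_t {
    use core::arch::aarch64::{vreinterpretq_u32_u8, vreinterpretq_u8_u32, vrev32q_u8};
    // REV32: reverse the four bytes inside each 32-bit chunk,
    // i.e. an endianness swap of every u32 lane.
    let bytes = vreinterpretq_u8_u32(words);
    vreinterpretq_u32_u8(vrev32q_u8(bytes))
}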

vrhadd_s8Experimentalneon

Rounding halving add

vrhadd_s16Experimentalneon

Rounding halving add

vrhadd_s32Experimentalneon

Rounding halving add

vrhadd_u8Experimentalneon

Rounding halving add

vrhadd_u16Experimentalneon

Rounding halving add

vrhadd_u32Experimentalneon

Rounding halving add

vrhaddq_s8Experimentalneon

Rounding halving add

vrhaddq_s16Experimentalneon

Rounding halving add

vrhaddq_s32Experimentalneon

Rounding halving add

vrhaddq_u8Experimentalneon

Rounding halving add

vrhaddq_u16Experimentalneon

Rounding halving add

vrhaddq_u32Experimentalneon

Rounding halving add

vrnd_f32Experimentalneon

Floating-point round to integral, toward zero

vrnd_f64Experimentalneon

Floating-point round to integral, toward zero

vrnda_f32Experimentalneon

Floating-point round to integral, to nearest with ties to away

vrnda_f64Experimentalneon

Floating-point round to integral, to nearest with ties to away

vrndaq_f32Experimentalneon

Floating-point round to integral, to nearest with ties to away

vrndaq_f64Experimentalneon

Floating-point round to integral, to nearest with ties to away

vrndi_f32Experimentalneon

Floating-point round to integral, using current rounding mode

vrndi_f64Experimentalneon

Floating-point round to integral, using current rounding mode

vrndiq_f32Experimentalneon

Floating-point round to integral, using current rounding mode

vrndiq_f64Experimentalneon

Floating-point round to integral, using current rounding mode

vrndm_f32Experimentalneon

Floating-point round to integral, toward minus infinity

vrndm_f64Experimentalneon

Floating-point round to integral, toward minus infinity

vrndmq_f32Experimentalneon

Floating-point round to integral, toward minus infinity

vrndmq_f64Experimentalneon

Floating-point round to integral, toward minus infinity

vrndn_f32Experimentalneon

Floating-point round to integral, to nearest with ties to even

vrndn_f64Experimentalneon

Floating-point round to integral, to nearest with ties to even

vrndnq_f32Experimentalneon

Floating-point round to integral, to nearest with ties to even

vrndnq_f64Experimentalneon

Floating-point round to integral, to nearest with ties to even

vrndp_f32Experimentalneon

Floating-point round to integral, toward plus infinity

vrndp_f64Experimentalneon

Floating-point round to integral, toward plus infinity

vrndpq_f32Experimentalneon

Floating-point round to integral, toward plus infinity

vrndpq_f64Experimentalneon

Floating-point round to integral, toward plus infinity

vrndq_f32Experimentalneon

Floating-point round to integral, toward zero

vrndq_f64Experimentalneon

Floating-point round to integral, toward zero

vrndx_f32Experimentalneon

Floating-point round to integral exact, using current rounding mode

vrndx_f64Experimentalneon

Floating-point round to integral exact, using current rounding mode

vrndxq_f32Experimentalneon

Floating-point round to integral exact, using current rounding mode

vrndxq_f64Experimentalneon

Floating-point round to integral exact, using current rounding mode
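
The vrnd family covers the IEEE rounding modes explicitly. The sketch below (hypothetical wrapper; nightly AArch64 assumed) applies four of them to the same input so the differences are visible.

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn rounding_modes() -> [core::arch::aarch64::float32x4_t; 4] {
    use core::arch::aarch64::{vdupq_n_f32, vrndmq_f32, vrndnq_f32, vrndpq_f32, vrndq_f32};
    let x = vdupq_n_f32(2.5);
    [
        vrndnq_f32(x), // FRINTN: to nearest, ties to even -> 2.0
        vrndmq_f32(x), // FRINTM: toward minus infinity    -> 2.0
        vrndpq_f32(x), // FRINTP: toward plus infinity     -> 3.0
        vrndq_f32(x),  // FRINTZ: toward zero              -> 2.0
    ]
}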

vrshl_s8Experimentalneon

Signed rounding shift left

vrshl_s16Experimentalneon

Signed rounding shift left

vrshl_s32Experimentalneon

Signed rounding shift left

vrshl_s64Experimentalneon

Signed rounding shift left

vrshl_u8Experimentalneon

Unsigned rounding shift left

vrshl_u16Experimentalneon

Unsigned rounding shift left

vrshl_u32Experimentalneon

Unsigned rounding shift left

vrshl_u64Experimentalneon

Unsigned rounding shift left

vrshld_s64Experimentalneon

Signed rounding shift left

vrshld_u64Experimentalneon

Unsigned rounding shift left

vrshlq_s8Experimentalneon

Signed rounding shift left

vrshlq_s16Experimentalneon

Signed rounding shift left

vrshlq_s32Experimentalneon

Signed rounding shift left

vrshlq_s64Experimentalneon

Signed rounding shift left

vrshlq_u8Experimentalneon

Unsigned rounding shift left

vrshlq_u16Experimentalneon

Unsigned rounding shift left

vrshlq_u32Experimentalneon

Unsigned rounding shift left

vrshlq_u64Experimentalneon

Unsigned rounding shift left

vrshr_n_s8Experimentalneon

Signed rounding shift right

vrshr_n_s16Experimentalneon

Signed rounding shift right

vrshr_n_s32Experimentalneon

Signed rounding shift right

vrshr_n_s64Experimentalneon

Signed rounding shift right

vrshr_n_u8Experimentalneon

Unsigned rounding shift right

vrshr_n_u16Experimentalneon

Unsigned rounding shift right

vrshr_n_u32Experimentalneon

Unsigned rounding shift right

vrshr_n_u64Experimentalneon

Unsigned rounding shift right

vrshrd_n_s64Experimentalneon

Signed rounding shift right

vrshrd_n_u64Experimentalneon

Unsigned rounding shift right

vrshrn_high_n_s16Experimentalneon

Rounding shift right narrow

vrshrn_high_n_s32Experimentalneon

Rounding shift right narrow

vrshrn_high_n_s64Experimentalneon

Rounding shift right narrow

vrshrn_high_n_u16Experimentalneon

Rounding shift right narrow

vrshrn_high_n_u32Experimentalneon

Rounding shift right narrow

vrshrn_high_n_u64Experimentalneon

Rounding shift right narrow

vrshrn_n_u16Experimentalneon

Rounding shift right narrow

vrshrn_n_u32Experimentalneon

Rounding shift right narrow

vrshrn_n_u64Experimentalneon

Rounding shift right narrow

vrshrq_n_s8Experimentalneon

Signed rounding shift right

vrshrq_n_s16Experimentalneon

Signed rounding shift right

vrshrq_n_s32Experimentalneon

Signed rounding shift right

vrshrq_n_s64Experimentalneon

Signed rounding shift right

vrshrq_n_u8Experimentalneon

Unsigned rounding shift right

vrshrq_n_u16Experimentalneon

Unsigned rounding shift right

vrshrq_n_u32Experimentalneon

Unsigned rounding shift right

vrshrq_n_u64Experimentalneon

Unsigned rounding shift right

vrsqrte_f32Experimentalneon

Reciprocal square-root estimate.

vrsqrte_f64Experimentalneon

Reciprocal square-root estimate.

vrsqrteq_f32Experimentalneon

Reciprocal square-root estimate.

vrsqrteq_f64Experimentalneon

Reciprocal square-root estimate.
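
vrsqrte gives only a low-precision seed for 1/sqrt(x); production code normally refines it with Newton-Raphson steps or uses the exact vsqrt intrinsics further down. A minimal sketch of the estimate itself (hypothetical wrapper; nightly AArch64 assumed):

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn rough_inv_sqrt(x: core::arch::aarch64::float32x4_t)
    -> core::arch::aarch64::float32x4_t {
    use core::arch::aarch64::vrsqrteq_f32;
    // FRSQRTE: a coarse per-lane approximation of 1/sqrt(x).
    vrsqrteq_f32(x)
}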

vrsra_n_s8Experimentalneon

Signed rounding shift right and accumulate

vrsra_n_s16Experimentalneon

Signed rounding shift right and accumulate

vrsra_n_s32Experimentalneon

Signed rounding shift right and accumulate

vrsra_n_s64Experimentalneon

Signed rounding shift right and accumulate

vrsra_n_u8Experimentalneon

Unsigned rounding shift right and accumulate

vrsra_n_u16Experimentalneon

Unsigned rounding shift right and accumulate

vrsra_n_u32Experimentalneon

Unsigned rounding shift right and accumulate

vrsra_n_u64Experimentalneon

Unsigned rounding shift right and accumulate

vrsrad_n_s64Experimentalneon

Signed rounding shift right and accumulate.

vrsrad_n_u64Experimentalneon

Unsigned rounding shift right and accumulate.

vrsraq_n_s8Experimentalneon

Signed rounding shift right and accumulate

vrsraq_n_s16Experimentalneon

Signed rounding shift right and accumulate

vrsraq_n_s32Experimentalneon

Signed rounding shift right and accumulate

vrsraq_n_s64Experimentalneon

Signed rounding shift right and accumulate

vrsraq_n_u8Experimentalneon

Unsigned rounding shift right and accumulate

vrsraq_n_u16Experimentalneon

Unsigned rounding shift right and accumulate

vrsraq_n_u32Experimentalneon

Unsigned rounding shift right and accumulate

vrsraq_n_u64Experimentalneon

Unsigned rounding shift right and accumulate
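
Rounding shift-right-and-accumulate folds a scaled term into an accumulator in one instruction, a pattern that shows up in filters and bit-exact codecs. A minimal sketch (hypothetical wrapper; nightly AArch64 assumed):

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn accumulate_scaled(acc: core::arch::aarch64::uint16x8_t,
                            x: core::arch::aarch64::uint16x8_t)
    -> core::arch::aarch64::uint16x8_t {
    use core::arch::aarch64::vrsraq_n_u16;
    // URSRA #4: acc + ((x + 8) >> 4) per lane, i.e. add x/16 rounded to nearest.
    vrsraq_n_u16::<4>(acc, x)
}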

vset_lane_f32Experimentalneon

Insert vector element from another vector element

vset_lane_f64Experimentalneon

Insert vector element from another vector element

vset_lane_p8Experimentalneon

Insert vector element from another vector element

vset_lane_p16Experimentalneon

Insert vector element from another vector element

vset_lane_p64Experimentalneon,aes

Insert vector element from another vector element

vset_lane_s8Experimentalneon

Insert vector element from another vector element

vset_lane_s16Experimentalneon

Insert vector element from another vector element

vset_lane_s32Experimentalneon

Insert vector element from another vector element

vset_lane_s64Experimentalneon

Insert vector element from another vector element

vset_lane_u8Experimentalneon

Insert vector element from another vector element

vset_lane_u16Experimentalneon

Insert vector element from another vector element

vset_lane_u32Experimentalneon

Insert vector element from another vector element

vset_lane_u64Experimentalneon

Insert vector element from another vector element

vsetq_lane_f32Experimentalneon

Insert vector element from another vector element

vsetq_lane_f64Experimentalneon

Insert vector element from another vector element

vsetq_lane_p8Experimentalneon

Insert vector element from another vector element

vsetq_lane_p16Experimentalneon

Insert vector element from another vector element

vsetq_lane_p64Experimentalneon,aes

Insert vector element from another vector element

vsetq_lane_s8Experimentalneon

Insert vector element from another vector element

vsetq_lane_s16Experimentalneon

Insert vector element from another vector element

vsetq_lane_s32Experimentalneon

Insert vector element from another vector element

vsetq_lane_s64Experimentalneon

Insert vector element from another vector element

vsetq_lane_u8Experimentalneon

Insert vector element from another vector element

vsetq_lane_u16Experimentalneon

Insert vector element from another vector element

vsetq_lane_u32Experimentalneon

Insert vector element from another vector element

vsetq_lane_u64Experimentalneon

Insert vector element from another vector element
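
vset_lane writes a single scalar into one lane and leaves the others alone; the lane index is a const generic immediate. A minimal sketch (hypothetical wrapper; nightly AArch64 assumed):

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn patch_lane_two() -> u32 {
    use core::arch::aarch64::{vdupq_n_u32, vgetq_lane_u32, vsetq_lane_u32};
    // INS: overwrite lane 2 of [7, 7, 7, 7] with 42.
    let v = vsetq_lane_u32::<2>(42, vdupq_n_u32(7));
    vgetq_lane_u32::<2>(v) // 42
}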

vsha1cq_u32Experimentalsha2

SHA1 hash update accelerator, choose.

vsha1h_u32Experimentalsha2

SHA1 fixed rotate.

vsha1mq_u32Experimentalsha2

SHA1 hash update accelerator, majority.

vsha1pq_u32Experimentalsha2

SHA1 hash update accelerator, parity.

vsha1su0q_u32Experimentalsha2

SHA1 schedule update accelerator, first part.

vsha1su1q_u32Experimentalsha2

SHA1 schedule update accelerator, second part.

vsha256h2q_u32Experimentalsha2

SHA256 hash update accelerator, upper part.

vsha256hq_u32Experimentalsha2

SHA256 hash update accelerator.

vsha256su0q_u32Experimentalsha2

SHA256 schedule update accelerator, first part.

vsha256su1q_u32Experimentalsha2

SHA256 schedule update accelerator, second part.
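
These crypto helpers are gated on the sha2 target feature rather than plain neon. As a minimal sketch (hypothetical wrapper; a nightly AArch64 toolchain with the sha2 feature is assumed), vsha1h_u32 performs the fixed rotate used between SHA-1 rounds:

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "sha2")]
unsafe fn sha1_fixed_rotate(e: u32) -> u32 {
    use core::arch::aarch64::vsha1h_u32;
    // SHA1H: rotate the 32-bit working value left by 30 bits.
    vsha1h_u32(e)
}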

vshl_n_s8Experimentalneon

Shift left

vshl_n_s16Experimentalneon

Shift left

vshl_n_s32Experimentalneon

Shift left

vshl_n_s64Experimentalneon

Shift left

vshl_n_u8Experimentalneon

Shift left

vshl_n_u16Experimentalneon

Shift left

vshl_n_u32Experimentalneon

Shift left

vshl_n_u64Experimentalneon

Shift left

vshl_s8Experimentalneon

Signed Shift left

vshl_s16Experimentalneon

Signed Shift left

vshl_s32Experimentalneon

Signed Shift left

vshl_s64Experimentalneon

Signed Shift left

vshl_u8Experimentalneon

Unsigned Shift left

vshl_u16Experimentalneon

Unsigned Shift left

vshl_u32Experimentalneon

Unsigned Shift left

vshl_u64Experimentalneon

Unsigned Shift left

vshld_n_s64Experimentalneon

Shift left

vshld_n_u64Experimentalneon

Shift left

vshld_s64Experimentalneon

Signed Shift left

vshld_u64Experimentalneon

Unsigned Shift left

vshll_high_n_s8Experimentalneon

Signed shift left long

vshll_high_n_s16Experimentalneon

Signed shift left long

vshll_high_n_s32Experimentalneon

Signed shift left long

vshll_high_n_u8Experimentalneon

Unsigned shift left long

vshll_high_n_u16Experimentalneon

Unsigned shift left long

vshll_high_n_u32Experimentalneon

Unsigned shift left long

vshll_n_s8Experimentalneon

Signed shift left long

vshll_n_s16Experimentalneon

Signed shift left long

vshll_n_s32Experimentalneon

Signed shift left long

vshll_n_u8Experimentalneon

Unsigned shift left long

vshll_n_u16Experimentalneon

Unsigned shift left long

vshll_n_u32Experimentalneon

Unsigned shift left long

vshlq_n_s8Experimentalneon

Shift left

vshlq_n_s16Experimentalneon

Shift left

vshlq_n_s32Experimentalneon

Shift left

vshlq_n_s64Experimentalneon

Shift left

vshlq_n_u8Experimentalneon

Shift left

vshlq_n_u16Experimentalneon

Shift left

vshlq_n_u32Experimentalneon

Shift left

vshlq_n_u64Experimentalneon

Shift left

vshlq_s8Experimentalneon

Signed Shift left

vshlq_s16Experimentalneon

Signed Shift left

vshlq_s32Experimentalneon

Signed Shift left

vshlq_s64Experimentalneon

Signed Shift left

vshlq_u8Experimentalneon

Unsigned Shift left

vshlq_u16Experimentalneon

Unsigned Shift left

vshlq_u32Experimentalneon

Unsigned Shift left

vshlq_u64Experimentalneon

Unsigned Shift left
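
The plain shifts take either an immediate (_n_ forms) or per-lane counts, and the _long forms widen while shifting. A minimal widening-shift sketch (hypothetical wrapper; nightly AArch64 assumed):

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn widen_and_scale(bytes: core::arch::aarch64::uint8x8_t)
    -> core::arch::aarch64::uint16x8_t {
    use core::arch::aarch64::vshll_n_u8;
    // USHLL #4: widen each u8 lane to u16 and multiply it by 16.
    vshll_n_u8::<4>(bytes)
}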

vshr_n_s8Experimentalneon

Shift right

vshr_n_s16Experimentalneon

Shift right

vshr_n_s32Experimentalneon

Shift right

vshr_n_s64Experimentalneon

Shift right

vshr_n_u8Experimentalneon

Shift right

vshr_n_u16Experimentalneon

Shift right

vshr_n_u32Experimentalneon

Shift right

vshr_n_u64Experimentalneon

Shift right

vshrd_n_s64Experimentalneon

Signed shift right

vshrd_n_u64Experimentalneon

Unsigned shift right

vshrn_high_n_s16Experimentalneon

Shift right narrow

vshrn_high_n_s32Experimentalneon

Shift right narrow

vshrn_high_n_s64Experimentalneon

Shift right narrow

vshrn_high_n_u16Experimentalneon

Shift right narrow

vshrn_high_n_u32Experimentalneon

Shift right narrow

vshrn_high_n_u64Experimentalneon

Shift right narrow

vshrn_n_s16Experimentalneon

Shift right narrow

vshrn_n_s32Experimentalneon

Shift right narrow

vshrn_n_s64Experimentalneon

Shift right narrow

vshrn_n_u16Experimentalneon

Shift right narrow

vshrn_n_u32Experimentalneon

Shift right narrow

vshrn_n_u64Experimentalneon

Shift right narrow

vshrq_n_s8Experimentalneon

Shift right

vshrq_n_s16Experimentalneon

Shift right

vshrq_n_s32Experimentalneon

Shift right

vshrq_n_s64Experimentalneon

Shift right

vshrq_n_u8Experimentalneon

Shift right

vshrq_n_u16Experimentalneon

Shift right

vshrq_n_u32Experimentalneon

Shift right

vshrq_n_u64Experimentalneon

Shift right
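
The vshr_n variants shift each lane right by an immediate, and the vshrn_n variants additionally narrow each lane to half its width. A sketch under the same nightly, const-generic assumptions as the earlier examples:

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn scale_down_to_bytes(sums: uint16x8_t) -> uint8x8_t {
    // Shift each 16-bit lane right by 8 and narrow it to 8 bits in one step,
    // e.g. to turn 16-bit accumulators back into bytes.
    vshrn_n_u16::<8>(sums)
}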

vsli_n_p8Experimentalneon

Shift Left and Insert (immediate)

vsli_n_p16Experimentalneon

Shift Left and Insert (immediate)

vsli_n_s8Experimentalneon

Shift Left and Insert (immediate)

vsli_n_s16Experimentalneon

Shift Left and Insert (immediate)

vsli_n_s32Experimentalneon

Shift Left and Insert (immediate)

vsli_n_s64Experimentalneon

Shift Left and Insert (immediate)

vsli_n_u8Experimentalneon

Shift Left and Insert (immediate)

vsli_n_u16Experimentalneon

Shift Left and Insert (immediate)

vsli_n_u32Experimentalneon

Shift Left and Insert (immediate)

vsli_n_u64Experimentalneon

Shift Left and Insert (immediate)

vsliq_n_p8Experimentalneon

Shift Left and Insert (immediate)

vsliq_n_p16Experimentalneon

Shift Left and Insert (immediate)

vsliq_n_s8Experimentalneon

Shift Left and Insert (immediate)

vsliq_n_s16Experimentalneon

Shift Left and Insert (immediate)

vsliq_n_s32Experimentalneon

Shift Left and Insert (immediate)

vsliq_n_s64Experimentalneon

Shift Left and Insert (immediate)

vsliq_n_u8Experimentalneon

Shift Left and Insert (immediate)

vsliq_n_u16Experimentalneon

Shift Left and Insert (immediate)

vsliq_n_u32Experimentalneon

Shift Left and Insert (immediate)

vsliq_n_u64Experimentalneon

Shift Left and Insert (immediate)
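
vsli shifts one operand left by an immediate and inserts the result into the other operand, preserving the bits the shift leaves vacant, which makes it handy for packing bit fields. A hedged sketch, const-generic immediate assumed:

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn pack_nibbles(lo: uint8x8_t, hi: uint8x8_t) -> uint8x8_t {
    // Shift each lane of `hi` left by 4 and insert it into `lo`, keeping the
    // low 4 bits of `lo`: per lane the result is (hi << 4) | (lo & 0x0F).
    vsli_n_u8::<4>(lo, hi)
}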

vsqadd_u8Experimentalneon

Unsigned saturating Accumulate of Signed value.

vsqadd_u16Experimentalneon

Unsigned saturating Accumulate of Signed value.

vsqadd_u32Experimentalneon

Unsigned saturating Accumulate of Signed value.

vsqadd_u64Experimentalneon

Unsigned saturating Accumulate of Signed value.

vsqaddq_u8Experimentalneon

Unsigned saturating Accumulate of Signed value.

vsqaddq_u16Experimentalneon

Unsigned saturating Accumulate of Signed value.

vsqaddq_u32Experimentalneon

Unsigned saturating Accumulate of Signed value.

vsqaddq_u64Experimentalneon

Unsigned saturating Accumulate of Signed value.
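
vsqadd adds a signed vector into an unsigned one with unsigned saturation, which is convenient for applying signed deltas to unsigned data. A small sketch; the wrapper name is illustrative:

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn apply_signed_delta(pixels: uint8x8_t, delta: int8x8_t) -> uint8x8_t {
    // Add a signed per-lane delta to unsigned data, saturating at 0 and 255
    // instead of wrapping.
    vsqadd_u8(pixels, delta)
}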

vsqrt_f32Experimentalneon

Calculates the square root of each lane.

vsqrt_f64Experimentalneon

Calculates the square root of each lane.

vsqrtq_f32Experimentalneon

Calculates the square root of each lane.

vsqrtq_f64Experimentalneon

Calculates the square root of each lane.
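
The vsqrt intrinsics take an IEEE square root in every lane; combined with the ordinary multiply and add intrinsics they make short work of per-lane norms. A sketch:

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn lane_hypot(x: float32x4_t, y: float32x4_t) -> float32x4_t {
    // sqrt(x*x + y*y), computed independently in each of the four f32 lanes.
    let sum = vaddq_f32(vmulq_f32(x, x), vmulq_f32(y, y));
    vsqrtq_f32(sum)
}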

vsra_n_s8Experimentalneon

Signed shift right and accumulate

vsra_n_s16Experimentalneon

Signed shift right and accumulate

vsra_n_s32Experimentalneon

Signed shift right and accumulate

vsra_n_s64Experimentalneon

Signed shift right and accumulate

vsra_n_u8Experimentalneon

Unsigned shift right and accumulate

vsra_n_u16Experimentalneon

Unsigned shift right and accumulate

vsra_n_u32Experimentalneon

Unsigned shift right and accumulate

vsra_n_u64Experimentalneon

Unsigned shift right and accumulate

vsrad_n_s64Experimentalneon

Signed shift right and accumulate

vsrad_n_u64Experimentalneon

Unsigned shift right and accumulate

vsraq_n_s8Experimentalneon

Signed shift right and accumulate

vsraq_n_s16Experimentalneon

Signed shift right and accumulate

vsraq_n_s32Experimentalneon

Signed shift right and accumulate

vsraq_n_s64Experimentalneon

Signed shift right and accumulate

vsraq_n_u8Experimentalneon

Unsigned shift right and accumulate

vsraq_n_u16Experimentalneon

Unsigned shift right and accumulate

vsraq_n_u32Experimentalneon

Unsigned shift right and accumulate

vsraq_n_u64Experimentalneon

Unsigned shift right and accumulate
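
vsra_n/vsraq_n fuse an immediate right shift with an accumulate: the second operand is shifted and added into the first. A sketch, again assuming the const-generic immediate form:

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn accumulate_scaled(acc: uint16x8_t, sample: uint16x8_t) -> uint16x8_t {
    // acc + (sample >> 4), lane by lane; the shift and the accumulate are a
    // single instruction.
    vsraq_n_u16::<4>(acc, sample)
}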

vsri_n_p8Experimentalneon

Shift Right and Insert (immediate)

vsri_n_p16Experimentalneon

Shift Right and Insert (immediate)

vsri_n_s8Experimentalneon

Shift Right and Insert (immediate)

vsri_n_s16Experimentalneon

Shift Right and Insert (immediate)

vsri_n_s32Experimentalneon

Shift Right and Insert (immediate)

vsri_n_s64Experimentalneon

Shift Right and Insert (immediate)

vsri_n_u8Experimentalneon

Shift Right and Insert (immediate)

vsri_n_u16Experimentalneon

Shift Right and Insert (immediate)

vsri_n_u32Experimentalneon

Shift Right and Insert (immediate)

vsri_n_u64Experimentalneon

Shift Right and Insert (immediate)

vsriq_n_p8Experimentalneon

Shift Right and Insert (immediate)

vsriq_n_p16Experimentalneon

Shift Right and Insert (immediate)

vsriq_n_s8Experimentalneon

Shift Right and Insert (immediate)

vsriq_n_s16Experimentalneon

Shift Right and Insert (immediate)

vsriq_n_s32Experimentalneon

Shift Right and Insert (immediate)

vsriq_n_s64Experimentalneon

Shift Right and Insert (immediate)

vsriq_n_u8Experimentalneon

Shift Right and Insert (immediate)

vsriq_n_u16Experimentalneon

Shift Right and Insert (immediate)

vsriq_n_u32Experimentalneon

Shift Right and Insert (immediate)

vsriq_n_u64Experimentalneon

Shift Right and Insert (immediate)
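
vsri is the mirror image of vsli above: the second operand is shifted right by an immediate and inserted, preserving the high bits of the destination. A short sketch under the same assumptions:

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn merge_high_nibble(dst: uint8x8_t, src: uint8x8_t) -> uint8x8_t {
    // Shift each lane of `src` right by 4 and insert it into `dst`, keeping
    // the high 4 bits of `dst`: per lane, (dst & 0xF0) | (src >> 4).
    vsri_n_u8::<4>(dst, src)
}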

vst1_f32Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_f64Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_p8Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_p16Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_p64Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_s8Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_s16Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_s32Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_s64Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_u8Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_u16Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_u32Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1_u64Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_f32Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_f64Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_p8Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_p16Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_p64Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_s8Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_s16Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_s32Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_s64Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_u8Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_u16Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_u32Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.

vst1q_u64Experimentalneon

Store multiple single-element structures from one, two, three, or four registers.
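
The vst1 family stores the lanes of one register to memory element-wise; together with the matching vld1 loads it is the usual way in and out of NEON registers. A sketch with an illustrative helper (the pointers must each be valid for eight bytes; no alignment beyond the element type is required):

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn increment_8_bytes(src: *const u8, dst: *mut u8) {
    // Element-wise load into a 64-bit vector, add 1 to every byte (wrapping),
    // then store the lanes back out.
    let v = vld1_u8(src);
    let bumped = vadd_u8(v, vdup_n_u8(1));
    vst1_u8(dst, bumped);
}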

vsub_f32Experimentalneon

Subtract

vsub_f64Experimentalneon

Subtract

vsub_s8Experimentalneon

Subtract

vsub_s16Experimentalneon

Subtract

vsub_s32Experimentalneon

Subtract

vsub_s64Experimentalneon

Subtract

vsub_u8Experimentalneon

Subtract

vsub_u16Experimentalneon

Subtract

vsub_u32Experimentalneon

Subtract

vsub_u64Experimentalneon

Subtract

vsubhn_high_s16Experimentalneon

Subtract returning high narrow

vsubhn_high_s32Experimentalneon

Subtract returning high narrow

vsubhn_high_s64Experimentalneon

Subtract returning high narrow

vsubhn_high_u16Experimentalneon

Subtract returning high narrow

vsubhn_high_u32Experimentalneon

Subtract returning high narrow

vsubhn_high_u64Experimentalneon

Subtract returning high narrow

vsubhn_s16Experimentalneon

Subtract returning high narrow

vsubhn_s32Experimentalneon

Subtract returning high narrow

vsubhn_s64Experimentalneon

Subtract returning high narrow

vsubhn_u16Experimentalneon

Subtract returning high narrow

vsubhn_u32Experimentalneon

Subtract returning high narrow

vsubhn_u64Experimentalneon

Subtract returning high narrow

vsubl_high_s8Experimentalneon

Signed Subtract Long

vsubl_high_s16Experimentalneon

Signed Subtract Long

vsubl_high_s32Experimentalneon

Signed Subtract Long

vsubl_high_u8Experimentalneon

Unsigned Subtract Long

vsubl_high_u16Experimentalneon

Unsigned Subtract Long

vsubl_high_u32Experimentalneon

Unsigned Subtract Long

vsubl_s8Experimentalneon

Signed Subtract Long

vsubl_s16Experimentalneon

Signed Subtract Long

vsubl_s32Experimentalneon

Signed Subtract Long

vsubl_u8Experimentalneon

Unsigned Subtract Long

vsubl_u16Experimentalneon

Unsigned Subtract Long

vsubl_u32Experimentalneon

Unsigned Subtract Long

vsubq_f32Experimentalneon

Subtract

vsubq_f64Experimentalneon

Subtract

vsubq_s8Experimentalneon

Subtract

vsubq_s16Experimentalneon

Subtract

vsubq_s32Experimentalneon

Subtract

vsubq_s64Experimentalneon

Subtract

vsubq_u8Experimentalneon

Subtract

vsubq_u16Experimentalneon

Subtract

vsubq_u32Experimentalneon

Subtract

vsubq_u64Experimentalneon

Subtract

vsubw_high_s8Experimentalneon

Signed Subtract Wide

vsubw_high_s16Experimentalneon

Signed Subtract Wide

vsubw_high_s32Experimentalneon

Signed Subtract Wide

vsubw_high_u8Experimentalneon

Unsigned Subtract Wide

vsubw_high_u16Experimentalneon

Unsigned Subtract Wide

vsubw_high_u32Experimentalneon

Unsigned Subtract Wide

vsubw_s8Experimentalneon

Signed Subtract Wide

vsubw_s16Experimentalneon

Signed Subtract Wide

vsubw_s32Experimentalneon

Signed Subtract Wide

vsubw_u8Experimentalneon

Unsigned Subtract Wide

vsubw_u16Experimentalneon

Unsigned Subtract Wide

vsubw_u32Experimentalneon

Unsigned Subtract Wide
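
Beyond plain lane-wise subtraction, the vsubl variants widen both operands so the difference cannot wrap, vsubw subtracts a narrow vector from an already-widened one, and vsubhn keeps only the high half of each difference. Two small sketches with illustrative wrapper names:

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn exact_difference(a: int8x8_t, b: int8x8_t) -> int16x8_t {
    // Widen each i8 lane to i16 before subtracting, so a - b is exact even
    // when it does not fit in 8 bits.
    vsubl_s8(a, b)
}

#[target_feature(enable = "neon")]
unsafe fn high_half_difference(a: uint16x8_t, b: uint16x8_t) -> uint8x8_t {
    // Subtract and keep only the top 8 bits of each 16-bit difference, e.g.
    // to drop fraction bits after fixed-point arithmetic.
    vsubhn_u16(a, b)
}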

vtbl1_p8Experimentalneon

Table look-up

vtbl1_s8Experimentalneon

Table look-up

vtbl1_u8Experimentalneon

Table look-up

vtbl2_p8Experimentalneon

Table look-up

vtbl2_s8Experimentalneon

Table look-up

vtbl2_u8Experimentalneon

Table look-up

vtbl3_p8Experimentalneon

Table look-up

vtbl3_s8Experimentalneon

Table look-up

vtbl3_u8Experimentalneon

Table look-up

vtbl4_p8Experimentalneon

Table look-up

vtbl4_s8Experimentalneon

Table look-up

vtbl4_u8Experimentalneon

Table look-up

vtbx1_p8Experimentalneon

Extended table look-up

vtbx1_s8Experimentalneon

Extended table look-up

vtbx1_u8Experimentalneon

Extended table look-up

vtbx2_p8Experimentalneon

Extended table look-up

vtbx2_s8Experimentalneon

Extended table look-up

vtbx2_u8Experimentalneon

Extended table look-up

vtbx3_p8Experimentalneon

Extended table look-up

vtbx3_s8Experimentalneon

Extended table look-up

vtbx3_u8Experimentalneon

Extended table look-up

vtbx4_p8Experimentalneon

Extended table look-up

vtbx4_s8Experimentalneon

Extended table look-up

vtbx4_u8Experimentalneon

Extended table look-up
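
vtbl performs a byte-granular table lookup: each index lane selects a byte from the table operand, with out-of-range indices producing zero, while vtbx keeps the corresponding lane of a fallback operand instead. A sketch that reverses the lanes of a vector:

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn reverse_lanes(v: uint8x8_t) -> uint8x8_t {
    // Each lane of `indices` picks a byte out of the table `v`.
    let indices: [u8; 8] = [7, 6, 5, 4, 3, 2, 1, 0];
    vtbl1_u8(v, vld1_u8(indices.as_ptr()))
}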

vtrn1_f32Experimentalneon

Transpose vectors

vtrn1_p8Experimentalneon

Transpose vectors

vtrn1_p16Experimentalneon

Transpose vectors

vtrn1_s8Experimentalneon

Transpose vectors

vtrn1_s16Experimentalneon

Transpose vectors

vtrn1_s32Experimentalneon

Transpose vectors

vtrn1_u8Experimentalneon

Transpose vectors

vtrn1_u16Experimentalneon

Transpose vectors

vtrn1_u32Experimentalneon

Transpose vectors

vtrn1q_f32Experimentalneon

Transpose vectors

vtrn1q_f64Experimentalneon

Transpose vectors

vtrn1q_p8Experimentalneon

Transpose vectors

vtrn1q_p16Experimentalneon

Transpose vectors

vtrn1q_p64Experimentalneon

Transpose vectors

vtrn1q_s8Experimentalneon

Transpose vectors

vtrn1q_s16Experimentalneon

Transpose vectors

vtrn1q_s32Experimentalneon

Transpose vectors

vtrn1q_s64Experimentalneon

Transpose vectors

vtrn1q_u8Experimentalneon

Transpose vectors

vtrn1q_u16Experimentalneon

Transpose vectors

vtrn1q_u32Experimentalneon

Transpose vectors

vtrn1q_u64Experimentalneon

Transpose vectors

vtrn2_f32Experimentalneon

Transpose vectors

vtrn2_p8Experimentalneon

Transpose vectors

vtrn2_p16Experimentalneon

Transpose vectors

vtrn2_s8Experimentalneon

Transpose vectors

vtrn2_s16Experimentalneon

Transpose vectors

vtrn2_s32Experimentalneon

Transpose vectors

vtrn2_u8Experimentalneon

Transpose vectors

vtrn2_u16Experimentalneon

Transpose vectors

vtrn2_u32Experimentalneon

Transpose vectors

vtrn2q_f32Experimentalneon

Transpose vectors

vtrn2q_f64Experimentalneon

Transpose vectors

vtrn2q_p8Experimentalneon

Transpose vectors

vtrn2q_p16Experimentalneon

Transpose vectors

vtrn2q_p64Experimentalneon

Transpose vectors

vtrn2q_s8Experimentalneon

Transpose vectors

vtrn2q_s16Experimentalneon

Transpose vectors

vtrn2q_s32Experimentalneon

Transpose vectors

vtrn2q_s64Experimentalneon

Transpose vectors

vtrn2q_u8Experimentalneon

Transpose vectors

vtrn2q_u16Experimentalneon

Transpose vectors

vtrn2q_u32Experimentalneon

Transpose vectors

vtrn2q_u64Experimentalneon

Transpose vectors

vtst_p8Experimentalneon

Compare bitwise Test bits nonzero

vtst_p64Experimentalneon

Compare bitwise Test bits nonzero

vtst_s8Experimentalneon

Signed compare bitwise Test bits nonzero

vtst_s16Experimentalneon

Signed compare bitwise Test bits nonzero

vtst_s32Experimentalneon

Signed compare bitwise Test bits nonzero

vtst_s64Experimentalneon

Signed compare bitwise Test bits nonzero

vtst_u8Experimentalneon

Unsigned compare bitwise Test bits nonzero

vtst_u16Experimentalneon

Unsigned compare bitwise Test bits nonzero

vtst_u32Experimentalneon

Unsigned compare bitwise Test bits nonzero

vtst_u64Experimentalneon

Unsigned compare bitwise Test bits nonzero

vtstq_p8Experimentalneon

Compare bitwise Test bits nonzero

vtstq_p64Experimentalneon

Compare bitwise Test bits nonzero

vtstq_s8Experimentalneon

Signed compare bitwise Test bits nonzero

vtstq_s16Experimentalneon

Signed compare bitwise Test bits nonzero

vtstq_s32Experimentalneon

Signed compare bitwise Test bits nonzero

vtstq_s64Experimentalneon

Signed compare bitwise Test bits nonzero

vtstq_u8Experimentalneon

Unsigned compare bitwise Test bits nonzero

vtstq_u16Experimentalneon

Unsigned compare bitwise Test bits nonzero

vtstq_u32Experimentalneon

Unsigned compare bitwise Test bits nonzero

vtstq_u64Experimentalneon

Unsigned compare bitwise Test bits nonzero
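
vtst tests (a & b) against zero lane by lane and returns an all-ones or all-zeros mask, ready to feed a bitwise select. Sketch:

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn bit2_mask(flags: uint8x8_t) -> uint8x8_t {
    // All-ones in every lane where bit 0x04 is set in `flags`, all-zeros
    // elsewhere.
    vtst_u8(flags, vdup_n_u8(0x04))
}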

vuqadd_s8Experimentalneon

Signed saturating Accumulate of Unsigned value.

vuqadd_s16Experimentalneon

Signed saturating Accumulate of Unsigned value.

vuqadd_s32Experimentalneon

Signed saturating Accumulate of Unsigned value.

vuqadd_s64Experimentalneon

Signed saturating Accumulate of Unsigned value.

vuqaddq_s8Experimentalneon

Signed saturating Accumulate of Unsigned value.

vuqaddq_s16Experimentalneon

Signed saturating Accumulate of Unsigned value.

vuqaddq_s32Experimentalneon

Signed saturating Accumulate of Unsigned value.

vuqaddq_s64Experimentalneon

Signed saturating Accumulate of Unsigned value.
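
vuqadd is the mirror of vsqadd earlier in this list: an unsigned vector is accumulated into a signed one with signed saturation. Sketch:

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn add_unsigned_clamped(acc: int8x8_t, amount: uint8x8_t) -> int8x8_t {
    // Accumulate an unsigned amount into a signed value, clamping at i8::MAX
    // instead of wrapping.
    vuqadd_s8(acc, amount)
}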

vuzp1_f32Experimentalneon

Unzip vectors

vuzp1_p8Experimentalneon

Unzip vectors

vuzp1_p16Experimentalneon

Unzip vectors

vuzp1_s8Experimentalneon

Unzip vectors

vuzp1_s16Experimentalneon

Unzip vectors

vuzp1_s32Experimentalneon

Unzip vectors

vuzp1_u8Experimentalneon

Unzip vectors

vuzp1_u16Experimentalneon

Unzip vectors

vuzp1_u32Experimentalneon

Unzip vectors

vuzp1q_f32Experimentalneon

Unzip vectors

vuzp1q_f64Experimentalneon

Unzip vectors

vuzp1q_p8Experimentalneon

Unzip vectors

vuzp1q_p16Experimentalneon

Unzip vectors

vuzp1q_p64Experimentalneon

Unzip vectors

vuzp1q_s8Experimentalneon

Unzip vectors

vuzp1q_s16Experimentalneon

Unzip vectors

vuzp1q_s32Experimentalneon

Unzip vectors

vuzp1q_s64Experimentalneon

Unzip vectors

vuzp1q_u8Experimentalneon

Unzip vectors

vuzp1q_u16Experimentalneon

Unzip vectors

vuzp1q_u32Experimentalneon

Unzip vectors

vuzp1q_u64Experimentalneon

Unzip vectors

vuzp2_f32Experimentalneon

Unzip vectors

vuzp2_p8Experimentalneon

Unzip vectors

vuzp2_p16Experimentalneon

Unzip vectors

vuzp2_s8Experimentalneon

Unzip vectors

vuzp2_s16Experimentalneon

Unzip vectors

vuzp2_s32Experimentalneon

Unzip vectors

vuzp2_u8Experimentalneon

Unzip vectors

vuzp2_u16Experimentalneon

Unzip vectors

vuzp2_u32Experimentalneon

Unzip vectors

vuzp2q_f32Experimentalneon

Unzip vectors

vuzp2q_f64Experimentalneon

Unzip vectors

vuzp2q_p8Experimentalneon

Unzip vectors

vuzp2q_p16Experimentalneon

Unzip vectors

vuzp2q_p64Experimentalneon

Unzip vectors

vuzp2q_s8Experimentalneon

Unzip vectors

vuzp2q_s16Experimentalneon

Unzip vectors

vuzp2q_s32Experimentalneon

Unzip vectors

vuzp2q_s64Experimentalneon

Unzip vectors

vuzp2q_u8Experimentalneon

Unzip vectors

vuzp2q_u16Experimentalneon

Unzip vectors

vuzp2q_u32Experimentalneon

Unzip vectors

vuzp2q_u64Experimentalneon

Unzip vectors

vzip1_f32Experimentalneon

Zip vectors

vzip1_p8Experimentalneon

Zip vectors

vzip1_p16Experimentalneon

Zip vectors

vzip1_s8Experimentalneon

Zip vectors

vzip1_s16Experimentalneon

Zip vectors

vzip1_s32Experimentalneon

Zip vectors

vzip1_u8Experimentalneon

Zip vectors

vzip1_u16Experimentalneon

Zip vectors

vzip1_u32Experimentalneon

Zip vectors

vzip1q_f32Experimentalneon

Zip vectors

vzip1q_f64Experimentalneon

Zip vectors

vzip1q_p8Experimentalneon

Zip vectors

vzip1q_p16Experimentalneon

Zip vectors

vzip1q_p64Experimentalneon

Zip vectors

vzip1q_s8Experimentalneon

Zip vectors

vzip1q_s16Experimentalneon

Zip vectors

vzip1q_s32Experimentalneon

Zip vectors

vzip1q_s64Experimentalneon

Zip vectors

vzip1q_u8Experimentalneon

Zip vectors

vzip1q_u16Experimentalneon

Zip vectors

vzip1q_u32Experimentalneon

Zip vectors

vzip1q_u64Experimentalneon

Zip vectors

vzip2_f32Experimentalneon

Zip vectors

vzip2_p8Experimentalneon

Zip vectors

vzip2_p16Experimentalneon

Zip vectors

vzip2_s8Experimentalneon

Zip vectors

vzip2_s16Experimentalneon

Zip vectors

vzip2_s32Experimentalneon

Zip vectors

vzip2_u8Experimentalneon

Zip vectors

vzip2_u16Experimentalneon

Zip vectors

vzip2_u32Experimentalneon

Zip vectors

vzip2q_f32Experimentalneon

Zip vectors

vzip2q_f64Experimentalneon

Zip vectors

vzip2q_p8Experimentalneon

Zip vectors

vzip2q_p16Experimentalneon

Zip vectors

vzip2q_p64Experimentalneon

Zip vectors

vzip2q_s8Experimentalneon

Zip vectors

vzip2q_s16Experimentalneon

Zip vectors

vzip2q_s32Experimentalneon

Zip vectors

vzip2q_s64Experimentalneon

Zip vectors

vzip2q_u8Experimentalneon

Zip vectors

vzip2q_u16Experimentalneon

Zip vectors

vzip2q_u32Experimentalneon

Zip vectors

vzip2q_u64Experimentalneon

Zip vectors
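
vzip1/vzip2 interleave two vectors, vuzp1/vuzp2 de-interleave them, and vtrn1/vtrn2 transpose adjacent lane pairs; together they cover the common two-way shuffles. A sketch showing that unzip undoes zip:

use core::arch::aarch64::*;

#[target_feature(enable = "neon")]
unsafe fn zip_then_unzip(a: uint8x8_t, b: uint8x8_t) -> (uint8x8_t, uint8x8_t) {
    let low = vzip1_u8(a, b);        // a0 b0 a1 b1 a2 b2 a3 b3
    let high = vzip2_u8(a, b);       // a4 b4 a5 b5 a6 b6 a7 b7
    let evens = vuzp1_u8(low, high); // recovers a
    let odds = vuzp2_u8(low, high);  // recovers b
    (evens, odds)
}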