Orin NX 16GB with embedded Ampere GPU: 1024 CUDA cores, 32 Tensor cores. Embedded 8-core Arm Cortex ARM64 CPU, 2GHz. 1TB NVMe SED (Self Encrypting Drive). Module power: configurable from 20W to 35W.
More Products & Services
XMC-AD2000E-FGX2-IO (WOLF-3570)
WOLF Advanced Technology
This versatile I/O module includes both an advanced NVIDIA RTX™ 2000 Ada embedded GPU and WOLF’s Frame Grabber eXtreme (FGX2) with up to 4K support. The board accepts multiple simultaneous SDI inputs. The captured inputs can be routed to the high-performance NVIDIA GPU for processing and output in several formats, including SDI, DisplayPort, HDMI, and others as MCOTS options.
The NVIDIA Ada architecture includes CUDA cores for HPEC, 4th generation Tensor cores for AI and data science computations, and 3rd generation Ray Tracing (RT) cores for visually accurate rendering. The Ada GPU uses a new TSMC 4N NVIDIA custom manufacturing process, which contributes to increased efficiency. The denser Ada GPUs have more CUDA and Tensor cores operating at higher clock frequencies at the same power, delivering significantly more performance per watt compared to WOLF’s previous generation product.
The WOLF Frame Grabber eXtreme (FGX2) provides the board with data conversion from one standard to another, with a wide array of video input and output options covering both cutting-edge digital I/O and legacy analog I/O. The FGX2 supports NVIDIA GPUDirect, which gives the frame grabber direct access to GPU memory for processing and analysis.
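To illustrate how a GPUDirect capture path is typically structured, the sketch below allocates a frame buffer in GPU memory, has the frame grabber deposit a frame into it, and processes it in place on the GPU. The fgx2_capture_frame() helper is a hypothetical stand-in for the board's actual capture API (not documented here); only the CUDA runtime calls are standard.

```cpp
// Hypothetical sketch of a GPUDirect capture-and-process loop.
// fgx2_capture_frame() is a placeholder for the real capture API;
// only the CUDA runtime calls below are standard.
#include <cuda_runtime.h>
#include <cstdint>
#include <cstdio>

// Simple kernel: scale 10-bit SDI luma samples down to 8-bit.
__global__ void scale_luma(const uint16_t* in, uint8_t* out, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = static_cast<uint8_t>(in[i] >> 2);
}

// Stand-in for the FGX2 capture API (hypothetical): in a real system the
// frame grabber's DMA engine writes directly into GPU memory via GPUDirect,
// with no intermediate CPU copy. Here we just zero-fill a placeholder frame.
static bool fgx2_capture_frame(void* dst, size_t bytes) {
    return cudaMemset(dst, 0, bytes) == cudaSuccess;
}

int main() {
    const size_t width = 3840, height = 2160, n = width * height;  // 4K frame
    uint16_t* d_frame = nullptr;   // raw capture buffer in GPU memory
    uint8_t*  d_out   = nullptr;   // processed output in GPU memory
    cudaMalloc(&d_frame, n * sizeof(uint16_t));
    cudaMalloc(&d_out,   n * sizeof(uint8_t));

    if (fgx2_capture_frame(d_frame, n * sizeof(uint16_t))) {  // hypothetical call
        scale_luma<<<(n + 255) / 256, 256>>>(d_frame, d_out, n);
        cudaDeviceSynchronize();
        printf("frame processed on GPU without touching host memory\n");
    }

    cudaFree(d_frame);
    cudaFree(d_out);
    return 0;
}
```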
WOLF’s advanced cooling technology uses a low-weight, high-efficiency thermal path to conduct heat away from the GPU.
VPX3U-BW5000E-CX7 (WOLF-163L)
WOLF Advanced Technology
The VPX3U-BW5000E-CX7 HPEC module includes an NVIDIA RTX 5000 Blackwell embedded GPU and a ConnectX-7 SmartNIC. The NVIDIA RTX 5000 Blackwell embedded GPU provides advanced processing capabilities for high performance embedded computing (HPEC) and artificial intelligence (AI) processing. The ConnectX-7 provides the Ethernet and PCIe connectivity needed to move large datasets efficiently.
The NVIDIA Blackwell architecture includes CUDA cores for HPC and 5th generation Tensor cores for AI and data science computations. The Blackwell GPU has an improved architecture which provides increased efficiency. The module also supports 24GB of GDDR7 memory which provides over 50% higher bandwidth compared to the previous generation. The GPU supports PCIe up to x16, providing a fast data transfer path to/from the module.
The NVIDIA ConnectX-7 SmartNIC provides PCIe and Ethernet connectivity. ConnectX-7 is ideal for the high-speed, secure data transfer required for data-heavy tasks such as sensor data processing and other C5ISR tasks. The ConnectX-7 also supports RDMA over Converged Ethernet (RoCE), enabling the fastest method for transferring data across the network to the GPU.
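A minimal sketch of the RoCE-to-GPU path, assuming libibverbs and NVIDIA's GPUDirect RDMA support (nvidia-peermem) are present: a buffer allocated in GPU memory is registered with the RDMA verbs library so the NIC can write incoming network data directly into it. Queue-pair setup and the actual transfer are omitted; this is illustrative only, not WOLF's software stack.

```cpp
// Minimal sketch: register a GPU buffer for RDMA so a ConnectX NIC can DMA
// network data directly into device memory (GPUDirect RDMA over RoCE).
// Assumes libibverbs and the nvidia-peermem module are available.
#include <infiniband/verbs.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int num_devices = 0;
    ibv_device** devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    ibv_context* ctx = ibv_open_device(devs[0]);   // first RDMA-capable device
    ibv_pd* pd = ibv_alloc_pd(ctx);                // protection domain

    const size_t bytes = 64 << 20;                 // 64 MiB receive buffer
    void* d_buf = nullptr;
    cudaMalloc(&d_buf, bytes);                     // buffer lives in GPU memory

    // Registering a cudaMalloc'd pointer succeeds only when GPUDirect RDMA
    // support is loaded; the NIC can then target GPU memory directly.
    ibv_mr* mr = ibv_reg_mr(pd, d_buf, bytes,
                            IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
    if (mr)
        printf("GPU buffer registered for RDMA: lkey=0x%x rkey=0x%x\n",
               mr->lkey, mr->rkey);
    else
        perror("ibv_reg_mr");

    // ... create queue pairs, exchange rkeys, post RDMA writes from the peer ...

    if (mr) ibv_dereg_mr(mr);
    cudaFree(d_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```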
VPX6U-BW5000E-DUAL-VO (WOLF-2638)
WOLF Advanced Technology
The VPX6U-BW5000E-DUAL-VO module includes two NVIDIA RTX™ 5000 Blackwell embedded GPUs and a PCIe Gen5 switch in a rugged 6U VPX module. The NVIDIA RTX 5000 embedded GPU provides advanced processing capabilities for high performance embedded computing (HPEC) and artificial intelligence (AI) data processing.
The NVIDIA Blackwell architecture includes CUDA cores for HPC and 5th generation Tensor cores for AI and data science computations. The Blackwell GPU has an improved architecture which provides increased efficiency. The module also supports 24GB of GDDR7 memory which provides over 50% higher bandwidth compared to the previous generation. The GPU supports PCIe Gen5, providing a fast data transfer path to/from the module.
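For a rough sense of what the PCIe Gen5 path offers, a Gen5 x16 link has a theoretical rate of about 63 GB/s per direction (32 GT/s × 16 lanes with 128b/130b encoding). The sketch below, with arbitrary buffer size and iteration count, times host-to-device copies with CUDA events so a measured figure can be compared against that ceiling.

```cpp
// Rough host-to-device bandwidth check over the PCIe link using CUDA events.
// Buffer size and iteration count are arbitrary; a Gen5 x16 link tops out
// around 63 GB/s per direction in theory, before protocol overhead.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t bytes = 256ull << 20;      // 256 MiB test buffer
    const int iters = 20;

    void *h_buf = nullptr, *d_buf = nullptr;
    cudaMallocHost(&h_buf, bytes);          // pinned host memory for full-rate DMA
    cudaMalloc(&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);  // warm-up

    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gbps = (double)bytes * iters / (ms / 1000.0) / 1e9;
    printf("host-to-device: %.1f GB/s (Gen5 x16 theoretical ~63 GB/s)\n", gbps);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}
```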
Unlocking the best performance requires the best cooling capability. WOLF’s advanced cooling technology is designed to move heat using a low weight, high efficiency path from the GPU die to the wedgelocks.
Description
The VNXP-ORIN-NX is an autonomous, secure compute node which provides advanced AI and HPC processing capabilities, PCIe Gen4, network data transfer, and cybersecurity features to ensure data is protected. The small VNX+ form factor allows the technology to be deployed into extremely small spaces.
The NVIDIA® Jetson Orin™ NX includes an embedded Ampere GPU which provides the CUDA cores and Tensor cores for data processing, deep learning inference, machine vision, audio processing and video encoding/decoding. The 1024 CUDA cores run at up to 918MHz providing GPGPU processing, while the 32 Gen3 Tensor cores provide the underlying architecture required for an efficient inference engine which can achieve up to 100 TOPS (INT8, Sparse) of deep learning inference computing.
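The SM count and clock behind those figures can be read back at runtime from the CUDA device properties, as in the sketch below; the cores-per-SM value (128 for Ampere-class integrated GPUs) is supplied as an assumption since the runtime does not report it directly.

```cpp
// Query the embedded GPU's properties to confirm SM count and clock.
// CORES_PER_SM is an assumption (128 CUDA cores per SM on Ampere-class
// devices); it is not reported by the CUDA runtime directly.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaDeviceProp prop{};
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }

    const int CORES_PER_SM = 128;   // assumption for Ampere (SM 8.x) GPUs
    printf("device            : %s\n", prop.name);
    printf("SM count          : %d\n", prop.multiProcessorCount);
    printf("est. CUDA cores   : %d\n", prop.multiProcessorCount * CORES_PER_SM);
    printf("GPU clock         : %.0f MHz\n", prop.clockRate / 1000.0);
    printf("compute capability: %d.%d\n", prop.major, prop.minor);
    return 0;
}
```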
The integrated NVMe SED (Self Encrypting Drive) supports data encryption, providing protection for sensitive information without significantly affecting read/write speeds to the drive.
