The NVIDIA Jetson Thor Module (T5000) is a system-on-module (SOM) that pushes the boundaries of physical AI and robotics computing. With 2070 FP4 TFLOPS of AI compute, a 14-core Arm Neoverse-V3AE CPU, and the revolutionary Blackwell GPU architecture, it represents the most advanced embedded AI module available for developers and enterprises.
Designed from the ground up for next-generation humanoid robots, AI agents, and industrial edge AI, the Jetson Thor module brings together high performance, efficiency, and scalability in a compact form factor.
With 2070 FP4 TFLOPS, Jetson Thor can handle large language models (LLMs), vision-language models (VLMs), and vision-language-action (VLA) models with ease.
The 2560-core Blackwell GPU and 96 fifth-gen Tensor Cores provide transformer acceleration and Multi-Instance GPU (MIG) support for simultaneous AI workloads.
Developers gain 128 GB memory with 273 GB/s bandwidth, ensuring smooth operation of multi-sensor robotics, AI video analytics, and generative AI pipelines.
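For LLM inference, that memory bandwidth is often the binding constraint: each generated token requires streaming roughly the full set of weights from memory. The back-of-envelope sketch below turns the 273 GB/s figure into an upper bound on decode throughput; the model sizes are illustrative assumptions, and the estimate ignores KV-cache traffic and compute overheads.

```python
# Rough, bandwidth-bound upper bound on LLM decode throughput.
# Assumption: each generated token reads all model weights once from DRAM,
# so tokens/s <= memory bandwidth / model size in bytes. Real throughput is
# lower (KV cache, activations, scheduling), so treat these as ceilings.

BANDWIDTH_GBPS = 273  # Jetson T5000 LPDDR5X bandwidth, GB/s

def max_tokens_per_s(params_billion: float, bytes_per_param: float) -> float:
    model_gb = params_billion * bytes_per_param  # weight footprint in GB
    return BANDWIDTH_GBPS / model_gb

# Illustrative model sizes (not tied to any specific released model).
for params, bpp, label in [(8, 0.5, "8B @ FP4"), (8, 2.0, "8B @ FP16"),
                           (70, 0.5, "70B @ FP4")]:
    print(f"{label:>10}: <= {max_tokens_per_s(params, bpp):.0f} tokens/s")
```

By the same arithmetic, a 70B-parameter model quantized to FP4 occupies roughly 35 GB of weights, leaving most of the 128 GB for KV cache, vision encoders, and other concurrent workloads.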
| NVIDIA Jetson T5000 - Technical Specifications | |
|---|---|
| AI Performance | 2070 TFLOPS (FP4, sparse) |
| GPU | 2560-core NVIDIA Blackwell architecture GPU with 96 fifth-gen Tensor Cores; Multi-Instance GPU (MIG) with 10 TPCs |
| GPU Max Frequency | 1.57 GHz |
| CPU | 14-core Arm® Neoverse®-V3AE 64-bit CPU; 64 KB I-cache and 64 KB D-cache per core; 1 MB L2 cache per core; 16 MB shared system L3 cache |
| CPU Max Frequency | 2.6 GHz |
| Vision Accelerator | 1x PVA v3 |
| Memory | 128 GB 256-bit LPDDR5X, 273 GB/s |
| Storage | NVMe over PCIe; SSD over USB 3.2 |
| Video Encode | 6x 4Kp60, 12x 4Kp30, 24x 1080p60, 50x 1080p30 (H.265); 6x 4Kp60, 48x 1080p30 (H.264) |
| Video Decode | 4x 8Kp30, 10x 4Kp60, 22x 4Kp30, 46x 1080p60, 92x 1080p30 (H.265); 4x 4Kp60, 82x 1080p30 (H.264) |
| Camera | Up to 20 cameras via HSB; up to 6 cameras over 16 lanes of MIPI CSI-2; up to 32 cameras using virtual channels; C-PHY 2.1 (10.25 Gbps), D-PHY 2.1 (40 Gbps) |
| PCIe | Up to 8 lanes, Gen5; root port only: C1 (x1) and C3 (x2); root port or endpoint: C2 (x1), C4 (x8), and C5 (x4) |
| USB | xHCI host controller with integrated PHY; 3x USB 3.2; 4x USB 2.0 |
| Networking | 4x 25 GbE |
| Display | 4x shared HDMI 2.1 and VESA DisplayPort 1.4a (HBR2, MST) |
| Other I/O | 5x I2S / 2x Audio Hub (AHUB), 2x DMIC, 4x UART, 4x CAN, 3x SPI, 13x I2C, 6x PWM outputs |
| Power | 40 W–130 W |
| Mechanical | 100 mm x 87 mm; 699-pin B2B connector; integrated Thermal Transfer Plate (TTP) with heat pipe |
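As a concrete illustration of the camera and video-encode rows above, the sketch below captures 4Kp30 from a CSI camera and encodes it to H.265. The GStreamer element names (nvarguscamerasrc, nvvidconv, nvv4l2h265enc) are those shipped with JetPack on earlier Jetson generations; their availability and properties on Jetson Thor's software stack are an assumption here, not a confirmed detail.

```python
# Hypothetical 4Kp30 capture-and-encode pipeline for a CSI camera on Jetson.
# Assumes JetPack's GStreamer plugins (nvarguscamerasrc, nvvidconv,
# nvv4l2h265enc) are installed; adjust sensor-id, resolution, and bitrate
# to match your camera and use case.
import subprocess

pipeline = (
    "nvarguscamerasrc sensor-id=0 num-buffers=900 ! "
    "'video/x-raw(memory:NVMM),width=3840,height=2160,framerate=30/1' ! "
    "nvvidconv ! nvv4l2h265enc bitrate=20000000 ! h265parse ! "
    "qtmux ! filesink location=cam0_4k30.mp4"
)

# -e sends EOS on interrupt so the MP4 container is finalized cleanly.
subprocess.run(f"gst-launch-1.0 -e {pipeline}", shell=True, check=True)
```

Scaling this pattern out to many simultaneous streams is exactly where the multi-stream encode budget in the table comes into play.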
Jetson Thor delivers up to 3.5× better energy efficiency than the previous-generation Jetson Orin while operating within a configurable 40 W to 130 W power envelope, making it ideal for high-performance edge deployments.
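The headline ratios can be sanity-checked with simple arithmetic. The baseline used below, Jetson AGX Orin's commonly cited 275 sparse INT8 TOPS at up to 60 W, is an assumption not stated on this page, and the comparison mixes FP4 and INT8 peak ratings:

```python
# Back-of-envelope check of the 7.5x compute and 3.5x efficiency claims.
# Assumed baseline: Jetson AGX Orin 64GB, 275 sparse INT8 TOPS, 60 W max.
thor_tops, thor_watts = 2070, 130   # FP4 sparse peak, max power
orin_tops, orin_watts = 275, 60     # INT8 sparse peak, max power (assumption)

print(f"compute ratio:    {thor_tops / orin_tops:.1f}x")   # ~7.5x
efficiency_ratio = (thor_tops / thor_watts) / (orin_tops / orin_watts)
print(f"efficiency ratio: {efficiency_ratio:.1f}x")         # ~3.5x
```

In practice the operating point is selected with NVIDIA's power-mode tooling (nvpmodel on earlier Jetson modules), so realized efficiency depends on the power budget you configure.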
Jetson Thor is engineered for humanoid robotics, enabling lifelike movement, AI-driven decision-making, and sensor fusion for autonomous navigation.
From LLMs and VLMs to agentic AI video summarization and search, Thor supports the latest generative AI applications.
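For a sense of what running such a workload looks like in practice, here is a minimal text-generation sketch. It assumes a CUDA-enabled PyTorch build and the Hugging Face transformers library are installed on the module, and the model identifier is a hypothetical placeholder rather than a model validated on Jetson Thor.

```python
# Minimal local LLM inference sketch (placeholder model ID, not a recommendation).
import torch
from transformers import pipeline

MODEL_ID = "your-org/your-8b-instruct-model"  # hypothetical placeholder

generator = pipeline(
    "text-generation",
    model=MODEL_ID,
    torch_dtype=torch.float16,  # half precision; 128 GB unified memory leaves ample headroom
    device=0,                   # run on the integrated Blackwell GPU
)

prompt = "Summarize the key events from the last hour of warehouse camera footage:"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```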
Industries can deploy Jetson Thor for:
- Robotics development, including simulation, reinforcement learning, and robot autonomy
- Video analytics, surveillance, and smart city infrastructure
- Real-time medical and industrial sensor-fusion workloads
Q1: What is the price of the NVIDIA Jetson Thor Module?
The price is ₹284,710.00 (excluding taxes).
Q2: When will the Jetson Thor Module ship?
It is expected to ship by October 14, 2025.
Q3: How does Jetson Thor compare to Jetson Orin?
Thor provides up to 7.5× more AI compute and 3.5× greater energy efficiency than Jetson Orin.
Q4: Can Jetson Thor handle generative AI models?
Yes, it is designed for LLMs, VLMs, and VLA models.
Q5: Is it suitable for industrial edge AI applications?
Absolutely. It supports retail AI, smart spaces, industrial robotics, and healthcare AI.
Q6: What kind of ecosystem support is available?
NVIDIA provides a rich partner ecosystem with carrier boards, sensors, and design services.
The NVIDIA Jetson Thor Module sets a new benchmark in AI robotics and physical AI computing. With Blackwell GPU acceleration, 128 GB memory, and powerful CPU performance, it’s an all-in-one solution for developers building the next generation of humanoid robots and AI-driven edge devices.