DGX Spark vs Jetson Thor vs Framework Desktop

A Complete Comparison of Desktop AI Compute Platforms for Developers, Roboticists, and AI Researchers in 2025

Published: December 2025 | Updated with latest benchmarks and pricing

The landscape of desktop AI compute has fundamentally shifted in 2025. Three platforms now compete for the attention of AI developers, robotics engineers, and researchers: NVIDIA's DGX Spark for general AI development (chiefly in-house LLM training and fine-tuning at your desk rather than in a massive datacenter), Jetson Thor for robotics and edge AI (inference and end-use deployment), and Framework Desktop for budget-conscious AI developers. This guide breaks down specifications, performance, pricing, and ideal use cases to help you make the right choice.

Quick Comparison: Specifications at a Glance

| Specification | DGX Spark | Jetson Thor | Framework Desktop |
|---|---|---|---|
| AI Performance | 1,000 TOPS (FP4) | 2,070 TOPS (FP4) | ~50 TOPS (NPU) |
| Memory | 128GB LPDDR5x | 128GB LPDDR5x | Up to 128GB LPDDR5x |
| Memory Bandwidth | 273 GB/s | 273 GB/s | 256 GB/s |
| CPU | 20-core ARM (Grace) | 14-core ARM Neoverse V3AE | 16-core AMD Ryzen™ AI Max+ 395 |
| GPU Architecture | NVIDIA Blackwell | NVIDIA Blackwell | AMD Radeon™ 8060S + NPU |
| Power (TDP) | ~170W | 40-130W | ~140W |
| Form Factor | 150×150×50mm | Dev Kit (larger) | 4.5L Mini-ITX |
| Operating System | DGX OS (Ubuntu) | JetPack OS | Windows/Linux |
| Relative Pricing | Premium (Founders Edition) | Mid-range | Most Affordable |
| Primary Use Case | LLM Development | Robotics & Edge AI | General AI / Gaming |

NVIDIA DGX Spark: The Desktop AI Supercomputer

The DGX Spark, powered by NVIDIA's GB10 Grace Blackwell Superchip, represents NVIDIA's vision of bringing datacenter-class AI capabilities to the desktop. Originally announced as "Project DIGITS" at CES 2025, it shipped in October 2025 as the premium Founders Edition—priced higher than OEM variants from partners like ASUS, Dell, and HP.

Key Strengths

🧠 Large Model Support

  • Run up to 200B parameter models locally
  • Fine-tune models up to 70B parameters
  • NVFP4 format for efficient inference
  • Link two units for 405B parameters
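
As a sanity check on these capacity claims, a back-of-envelope footprint estimate (weights at the stated quantization, padded ~20% for KV cache and runtime buffers; the 20% headroom is an assumption, not a vendor figure) shows why 200B parameters is roughly the ceiling for a single 128GB unit:

```python
def model_memory_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough footprint in GB for a model of `params_b` billion parameters
    at the given quantization width, padded by ~20% for KV cache,
    activations, and runtime buffers (the 20% is an assumption)."""
    weights_gb = params_b * bits_per_weight / 8  # billions of params x bytes/param = GB
    return weights_gb * overhead

# 200B parameters at 4-bit NVFP4: roughly 120 GB, close to the 128GB ceiling.
print(round(model_memory_gb(200, 4), 1))
# 405B at 4-bit: roughly 243 GB, hence the need to link two units.
print(round(model_memory_gb(405, 4), 1))
```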

🔧 CUDA Ecosystem

  • Full NVIDIA AI software stack
  • PyTorch, TensorFlow, TRT-LLM
  • NGC catalog access
  • Seamless cloud deployment path

🌐 Connectivity

  • ConnectX-7 SmartNIC (200Gbps)
  • 10GbE Ethernet
  • Wi-Fi 7 support
  • Dual-unit clustering capability

Performance Benchmarks

Real-world testing reveals the DGX Spark excels at prompt processing (prefill) but shows limitations in token generation (decode) due to memory bandwidth constraints:

| Model | Prompt Processing | Token Generation |
|---|---|---|
| Llama 3.1 8B (FP4) | ~3,500 tokens/sec | ~45 tokens/sec |
| GPT-OSS 120B (MXFP4) | 1,723 tokens/sec | 38.55 tokens/sec |
| Qwen3 235B (dual Spark) | 23,477 tokens/sec throughput | — |
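
The bandwidth bottleneck behind the decode numbers can be seen with a standard roofline back-of-envelope (a rough estimate, not a vendor formula): during decode, a dense model streams its full weight set from memory for every generated token, so tokens/sec is bounded by bandwidth divided by weight footprint.

```python
def decode_ceiling_tok_s(bandwidth_gb_s: float, params_b: float, bits_per_weight: float) -> float:
    """Bandwidth-bound upper limit on decode speed for a dense model:
    every generated token streams the full weight set from memory, so
    tokens/sec <= bandwidth / weight footprint. Real systems land below
    this (attention, KV-cache reads, kernel launch overhead)."""
    weight_gb = params_b * bits_per_weight / 8
    return bandwidth_gb_s / weight_gb

# Llama 3.1 8B at FP4 (about 4 GB of weights) on 273 GB/s:
print(decode_ceiling_tok_s(273, 8, 4))  # 68.25 tok/s ceiling vs ~45 measured
```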

✅ Pros

  • Runs massive models locally (200B params)
  • Full NVIDIA CUDA ecosystem support
  • Excellent software/documentation
  • Compact, quiet form factor
  • Enterprise-grade security (local data)
  • 200Gbps clustering for dual-unit setup

❌ Cons

  • Lower memory bandwidth (273 GB/s)
  • Slower token generation vs Mac Studio
  • Linux-only (DGX OS)
  • Non-upgradeable RAM
  • Premium pricing vs alternatives

OEM Variants

Multiple OEMs offer GB10-based systems, often at lower price points than NVIDIA's Founders Edition:

| Vendor | Model | Pricing |
|---|---|---|
| NVIDIA | DGX Spark Founders | Premium (4TB) |
| ASUS | Ascent GX10 | Lower Cost (1TB) |
| Dell | Pro Max with GB10 | Competitive |
| HP | ZGX Nano AI Station G1n | TBA |
| Acer | Veriton GN100 | Competitive |
| Lenovo | GB10 Mini PC | Coming Soon |

💡 Best Value Pick: ASUS Ascent GX10

The Ascent GX10 offers identical GB10 performance to the DGX Spark Founders Edition at a lower price point. The only trade-off is less storage (1TB vs 4TB), and storage is easily upgradeable.

NVIDIA Jetson Thor: The Robotics Powerhouse

Jetson Thor represents a different philosophy: purpose-built for physical AI and robotics. With 2,070 TOPS of FP4 compute (more than double the DGX Spark's 1,000), it's designed for humanoid robots, autonomous systems, and edge AI applications requiring real-time sensor processing.

Key Differentiators from DGX Spark

🤖 Robotics-First Design

  • Multi-camera support (20+ physical cameras)
  • Holoscan Sensor Bridge for real-time data
  • NVIDIA Isaac robotics platform
  • GR00T humanoid foundation models

⚡ Edge Efficiency

  • 40-130W configurable power
  • 7.5x performance vs Jetson Orin
  • 3.5x better energy efficiency
  • Functional safety processor

🔌 Industrial I/O

  • 4x 25GbE networking
  • QSFP slot for high-speed sensors
  • Camera offload engine
  • Multi-Instance GPU (MIG) support
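
One way to frame the edge-efficiency claim is peak TOPS per watt at each platform's stated maximum power draw. This is a spec-sheet ratio built from the figures quoted in this article, not a measured efficiency benchmark:

```python
# Peak FP4 TOPS per watt at each platform's stated maximum power draw,
# using the spec-table figures above (peak ratings, not measured efficiency).
platforms = {
    "DGX Spark":         (1000, 170),  # (TOPS, max W)
    "Jetson Thor":       (2070, 130),
    "Framework Desktop": (50, 140),    # NPU TOPS only; the iGPU adds more compute
}
for name, (tops, watts) in platforms.items():
    print(f"{name}: {tops / watts:.1f} TOPS/W")
```

Even at its 130W ceiling, Thor's ratio dwarfs the others; at its 40W floor the gap widens further.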

✅ Pros

  • 2x AI compute vs DGX Spark
  • Purpose-built for robotics
  • Lower power envelope (40-130W)
  • Real-time sensor processing
  • Production-ready module pathway
  • More affordable than DGX Spark Founders Edition

❌ Cons

  • JetPack OS (not general-purpose)
  • Larger form factor (dev kit)
  • Focused ecosystem (robotics)
  • Less suited for pure LLM work
  • Steeper learning curve for non-roboticists

Framework Desktop: The Value Champion

The Framework Desktop disrupts the market with AMD's Ryzen AI Max+ 395 "Strix Halo" APU—offering 128GB unified memory at a significantly lower price point than NVIDIA's offerings. It's the modular PC maker's first desktop, shipping in Q3 2025.

Why Consider Framework Desktop?

💰 Exceptional Value

  • Most affordable 128GB option
  • Significantly lower cost than NVIDIA platforms
  • Run Llama 3.3 70B locally
  • Modular, upgradeable design

🎮 Versatility

  • Windows 11 or Linux support
  • 1440p+ gaming capability
  • General productivity workloads
  • Creative applications

📊 Competitive AI Performance

  • 256 GB/s memory bandwidth
  • Up to 96GB GPU allocation
  • ROCm ecosystem support
  • USB4/5GbE for clustering
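
The clustering gap between USB4/5GbE and the DGX Spark's 200Gbps ConnectX-7 is easy to quantify: moving a model or activation shard between nodes is limited by link speed. A minimal sketch, assuming ~90% usable throughput (an assumption; real numbers depend on the protocol stack):

```python
def transfer_seconds(gigabytes: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Time to move `gigabytes` of data over a link rated at `link_gbps`
    gigabits/s, assuming ~90% usable throughput (an assumption; real
    throughput depends on the protocol stack and framing)."""
    return gigabytes * 8 / (link_gbps * efficiency)

# Shipping a 10 GB shard between two nodes:
for label, gbps in [("5GbE", 5), ("USB4 (40 Gbps)", 40), ("ConnectX-7 (200 Gbps)", 200)]:
    print(f"{label}: {transfer_seconds(10, gbps):.1f} s")
```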

Framework vs DGX Spark: Head-to-Head

| Metric | Framework Desktop | DGX Spark | Winner |
|---|---|---|---|
| Price (128GB) | Most Affordable | Premium (Founders) | Framework |
| Memory Bandwidth | 256 GB/s | 273 GB/s | DGX Spark (slight) |
| AI Compute (FP4) | Limited FP4 support | 1,000 TOPS | DGX Spark |
| OS Flexibility | Windows/Linux | DGX OS only | Framework |
| CUDA Ecosystem | ❌ (ROCm only) | ✅ Full support | DGX Spark |
| Gaming/Productivity | Excellent | Limited | Framework |
| Clustering | USB4/5GbE | 200Gbps ConnectX-7 | DGX Spark |

✅ Pros

  • Best price-to-memory ratio
  • Windows/Linux flexibility
  • Gaming and productivity capable
  • Modular, repairable design
  • 16 Zen 5 CPU cores
  • Strong community support

❌ Cons

  • No CUDA (ROCm learning curve)
  • Lower AI TOPS vs NVIDIA
  • Soldered memory (pick at purchase)
  • Limited FP4/INT4 optimization
  • DIY assembly required

Decision Framework: Which Platform is Right for You?

🔬 Choose DGX Spark if:

You're an AI researcher or enterprise developer who needs to run large language models (70B-200B parameters) locally, requires full CUDA ecosystem compatibility, values seamless cloud deployment paths, and prioritizes software maturity over raw price/performance. Ideal for: AI startups, research labs, enterprise AI teams.

🤖 Choose Jetson Thor if:

You're building robots, autonomous systems, or edge AI applications requiring real-time sensor fusion, multi-camera processing, and embedded deployment. The 2,070 TOPS and Isaac/GR00T ecosystem make it unmatched for physical AI. Ideal for: Robotics companies, autonomous vehicle developers, industrial automation.

💻 Choose Framework Desktop if:

You want maximum memory capacity per dollar, need Windows compatibility, plan to use the system for both AI development and general computing/gaming, or are comfortable with AMD's ROCm ecosystem. Ideal for: Budget-conscious developers, indie AI hackers, students, multi-purpose workstations.

Final Verdict

🏆 The Bottom Line

For pure AI development: the DGX Spark or the ASUS Ascent GX10 (a more affordable OEM variant) delivers the best CUDA-native experience with a proven software stack.

For robotics: Jetson Thor is the clear winner—2x the compute, purpose-built I/O, and the Isaac ecosystem.

For value seekers: Framework Desktop offers 128GB and multi-purpose capability at the most affordable price point—if you can live without CUDA.

The 2025 desktop AI landscape finally offers real choices. Whether you're building the next ChatGPT competitor, programming humanoid robots, or just want to run local LLMs without breaking the bank, there's now a platform designed for your specific needs.