Finally, I’ve decided to ‘connect’ all six devices in this DIY AI station via the GPD Pocket 4:
https://www.gpd-minipc.com/products/gpd-pocket4?variant=6be177c7-59b6-4b46-a9e4-941e171108fc?ref=VLADISLAVSOLODKIY
https://www.dram.fyi/
✅ Is the GPD Pocket 4 good for AI/ML experiments?
Short answer: Yes — for light to medium AI/ML workloads, local inference, model prototyping, and distributed experiments.
Not ideal for heavy training of large models.
Why it is good
- The Pocket 4 uses the AMD Ryzen AI 9 HX 370, which includes the XDNA 2 NPU delivering up to 50 TOPS — excellent for on‑device inference, quantized models, and edge‑AI workflows.
- It has LPDDR5x up to 64GB and NVMe 2280 SSD support — enough memory capacity and bandwidth for medium‑sized models and vector DBs.
- The integrated Radeon 890M GPU (RDNA 3.5) is strong for small‑scale GPU compute and ONNX/DirectML acceleration.
- It’s extremely portable (770g) with a 144Hz 2.5K display — perfect for field experiments, travel, and mobile AI dev.
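The quantized-models point above is easy to make concrete. Here is a minimal pure-Python sketch of symmetric int8 weight quantization — the weight values are made up for illustration, and real toolchains (ONNX, llama.cpp, etc.) do this per-tensor or per-channel with far more care:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

# Made-up example weights
weights = [0.12, -0.5, 0.33, 1.27, -1.27]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 weights take 1 byte each vs 4 bytes for float32: a 4x memory saving,
# which is what lets mid-sized models fit an NPU-friendly footprint.
max_error = max(abs(w - r) for w, r in zip(weights, restored))
```

The round-trip error stays within half a quantization step, which is why int8 inference usually costs little accuracy while cutting memory traffic fourfold.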
Where it’s not enough
- No discrete GPU → not suitable for training large transformer models.
- NPU is powerful for inference but not for training.
- Sustained thermal loads are limited by the 28W TDP envelope.
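A rough back-of-envelope calculation makes the training limitation concrete. The 7B parameter count and the 16-bytes-per-parameter rule of thumb (fp32 weights + gradients + Adam optimizer states) are illustrative assumptions, not measurements:

```python
def training_memory_gb(params_billion, bytes_per_param=4, state_multiplier=4):
    """Rough rule of thumb: weights + gradients + Adam states ~ 16 bytes/param in fp32."""
    return params_billion * 1e9 * bytes_per_param * state_multiplier / 1e9

# Training a 7B-parameter model needs on the order of 112 GB for weights,
# gradients, and optimizer state alone -- well beyond 64 GB of unified memory.
train_gb = training_memory_gb(7)

# Int8 inference on the same model needs only about 7 GB for the weights.
infer_gb = 7 * 1e9 * 1 / 1e9
```

The same arithmetic explains the conclusion below: inference fits comfortably, full training does not.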
Conclusion:
It’s a fantastic controller node, inference device, or portable dev machine — but not a replacement for your Minisforum/Geekom/GMKtec AI stations.

✅ Can you connect headless mini‑stations to the GPD Pocket 4?
Yes — absolutely.
And the Pocket 4 is actually perfect for this role.
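As a minimal sketch of that controller role, assuming the headless stations expose SSH on the LAN — the node names and IP addresses below are hypothetical placeholders, not from the original setup:

```python
import socket

# Hypothetical inventory of headless mini-stations on the LAN
NODES = {
    "minisforum": "192.168.1.10",
    "geekom": "192.168.1.11",
    "gmktec": "192.168.1.12",
}

def is_reachable(host, port=22, timeout=1.0):
    """Return True if the node answers on the given TCP port (22 = SSH)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(nodes):
    """Map each node name to whether its SSH port is currently reachable."""
    return {name: is_reachable(ip) for name, ip in nodes.items()}
```

From there, the Pocket 4 only needs an SSH client and a terminal to drive every headless box — no monitors or keyboards on the worker nodes.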
Why it works
The Pocket 4 includes: