龙傲天! (@dragonfsky) posted in Day 6 of #OpenSourceWeek: One More Thing – DeepSeek-V3/R1 Inference System Overview

DeepSeek just released:
Optimized throughput and latency via:
🔧 Cross-node EP-powered batch scaling
🔄 Computation-communication overlap
⚖️ Load balancing
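The computation-communication overlap mentioned above can be sketched roughly like this: while one micro-batch's results are being transferred across nodes, the next micro-batch is already computing. This is a minimal, hypothetical illustration using threads, not DeepSeek's actual implementation; all function names and data are made up.

```python
# Sketch of computation-communication overlap across micro-batches.
# Hypothetical stand-ins, not DeepSeek's code.
from concurrent.futures import ThreadPoolExecutor

def compute(batch):
    # stand-in for GPU compute on one micro-batch
    return [x * 2 for x in batch]

def communicate(result):
    # stand-in for a cross-node transfer of the results
    return sum(result)

def pipeline(batches):
    """While batch i's results are in flight, batch i+1 computes."""
    totals = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = None
        for batch in batches:
            result = compute(batch)              # compute current batch
            if pending is not None:
                totals.append(pending.result())  # drain previous transfer
            pending = pool.submit(communicate, result)  # overlaps next compute
        totals.append(pending.result())
    return totals

print(pipeline([[1, 2], [3, 4]]))  # [6, 14]
```

In a real serving system the "communicate" step would be the cross-node expert-parallel all-to-all, and overlap is achieved with separate hardware streams rather than Python threads.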
Statistics of DeepSeek’s Online Service:
73.7k/14.8k input/output tokens per second per H800 node
🚀 Cost profit margin 545%
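A margin figure like 545% follows from the standard definition margin = (revenue − cost) / cost. The numbers below are made up purely to show the arithmetic; they are not DeepSeek's actual revenue or cost figures.

```python
# Illustration of how a cost profit margin percentage is derived.
# Example figures are hypothetical, not DeepSeek's.
def profit_margin(revenue, cost):
    """Profit relative to cost: (revenue - cost) / cost."""
    return (revenue - cost) / cost

# e.g. revenue 6.45x the serving cost gives a 545% margin
print(f"{profit_margin(6.45, 1.0):.0%}")  # 545%
```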
💡 We hope this week’s insights offer value to the community and contribute to our shared AGI goals.