
DeepSeek #OpenSourceWeek Day 6: Inference System Overview


On Day 6 of #OpenSourceWeek, DeepSeek presented an in-depth overview of the DeepSeek-V3/R1 inference system. This article digs into the system's design principles, optimization strategies, and performance statistics, highlighting the significant advances made in throughput and latency optimization.

System Design Principles

The primary objectives of the DeepSeek-V3/R1 inference system are higher throughput and lower latency. To meet these goals, DeepSeek implemented an architecture that leverages cross-node Expert Parallelism (EP). This approach not only improves the efficiency of GPU matrix computations but also optimizes overall system performance.

Expert Parallelism (EP)

  • Batch Size Scaling: EP allows for significant scaling of the batch size, which is crucial for maximizing GPU utilization and throughput.
  • Memory Access Reduction: By distributing experts across multiple GPUs, each GPU processes only a small subset of experts, which reduces memory access demands and consequently lowers latency.

However, the implementation of EP introduces complexities, particularly in terms of cross-node communication and the need for effective load balancing across different Data Parallelism (DP) instances.
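
To make the memory-access point concrete, here is a toy sketch (with made-up expert counts and sizes, not DeepSeek-V3's actual dimensions) of how Expert Parallelism shrinks the expert weights each GPU must hold as the EP degree grows:

```python
# Toy illustration of cross-node Expert Parallelism (EP) for one MoE layer.
# All sizes are hypothetical placeholders; the point is how per-GPU expert
# memory shrinks as the EP degree grows.
import math

N_EXPERTS = 256              # routed experts in the layer (illustrative)
EXPERT_PARAMS = 40_000_000   # parameters per expert (illustrative)
BYTES_PER_PARAM = 1          # e.g. FP8 weight storage (illustrative)

def per_gpu_expert_memory_gb(ep_degree: int) -> float:
    """Each GPU in an EP group of size ep_degree holds ~N_EXPERTS / ep_degree experts."""
    experts_per_gpu = math.ceil(N_EXPERTS / ep_degree)
    return experts_per_gpu * EXPERT_PARAMS * BYTES_PER_PARAM / 1e9

for ep in (1, 32, 144):
    print(f"EP{ep:<3}: ~{math.ceil(N_EXPERTS / ep):3d} experts per GPU, "
          f"~{per_gpu_expert_memory_gb(ep):6.2f} GB of expert weights per GPU")
```

The trade-off is the all-to-all traffic needed to route each token to whichever GPU hosts its selected experts, which is exactly the cross-node communication challenge discussed above.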

Addressing Challenges of EP

To tackle these challenges, they focused on three key strategies:

  • Scaling Batch Size: By ensuring a sufficiently large overall batch size, the system can maintain high throughput and low latency, even with the model’s inherent sparsity.
  • Hiding Communication Latency: They employ a dual-batch overlap strategy during the prefill and decode phases, allowing them to execute microbatches alternately and hide communication costs behind computation.
  • Load Balancing: They strive to balance computational and communication loads across all GPUs so that no single GPU becomes a bottleneck (a minimal balancing sketch follows this list).
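
As a toy picture of that load-balancing goal, the sketch below uses a simple greedy heuristic: estimate how many tokens each expert receives and place each expert on the currently least-loaded GPU. This is purely illustrative and is not DeepSeek's actual load balancer.

```python
import heapq
from collections import defaultdict

def greedy_expert_placement(expert_loads: dict[str, float], n_gpus: int) -> dict[int, list[str]]:
    """Assign each expert to the currently least-loaded GPU (illustrative heuristic).

    expert_loads maps expert name -> estimated tokens routed to it per batch.
    """
    heap = [(0.0, gpu) for gpu in range(n_gpus)]  # min-heap of (accumulated_load, gpu_id)
    heapq.heapify(heap)
    placement = defaultdict(list)

    # Place heavy experts first so they spread across devices.
    for expert, load in sorted(expert_loads.items(), key=lambda kv: -kv[1]):
        gpu_load, gpu = heapq.heappop(heap)
        placement[gpu].append(expert)
        heapq.heappush(heap, (gpu_load + load, gpu))
    return dict(placement)

# Hypothetical skewed routing statistics for 8 experts on 4 GPUs.
loads = {f"expert_{i}": w for i, w in enumerate([90, 70, 40, 30, 20, 15, 10, 5])}
for gpu, experts in sorted(greedy_expert_placement(loads, n_gpus=4).items()):
    print(gpu, experts, sum(loads[e] for e in experts))
```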

Prefilling and Decoding Phases

The architecture of DeepSeek-V3/R1 employs different degrees of parallelism during the prefill and decode phases (a quick consistency check on these figures follows the list):

  • Prefilling Phase: Utilizes Routed Expert EP32 and MLA/Shared Expert DP32, with each deployment unit spanning 4 nodes and 32 redundant routed experts.
  • Decoding Phase: Employs Routed Expert EP144 and MLA/Shared Expert DP144, with each deployment unit spanning 18 nodes.
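
Assuming the standard 8 GPUs per H800 node (an assumption, since the post does not spell this out), the EP/DP degree of each phase equals the GPU count of its deployment unit. A small sanity-check sketch:

```python
from dataclasses import dataclass

GPUS_PER_NODE = 8  # standard H800 node configuration (assumption)

@dataclass
class PhaseParallelism:
    name: str
    routed_expert_ep: int   # Expert Parallelism degree for routed experts
    mla_shared_dp: int      # Data Parallelism degree for MLA / shared expert
    nodes_per_unit: int

    def gpus_per_unit(self) -> int:
        return self.nodes_per_unit * GPUS_PER_NODE

phases = [
    PhaseParallelism("prefill", routed_expert_ep=32, mla_shared_dp=32, nodes_per_unit=4),
    PhaseParallelism("decode", routed_expert_ep=144, mla_shared_dp=144, nodes_per_unit=18),
]

for p in phases:
    # EP and DP degrees should match the number of GPUs in the deployment unit.
    assert p.gpus_per_unit() == p.routed_expert_ep == p.mla_shared_dp
    print(f"{p.name}: {p.nodes_per_unit} nodes -> {p.gpus_per_unit()} GPUs "
          f"(EP{p.routed_expert_ep}, DP{p.mla_shared_dp})")
```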

Communication-Computation Overlapping

To optimize throughput, they developed a communication-computation overlapping mechanism. During the prefilling phase, the system alternates between two microbatches, allowing the communication cost of one microbatch to be hidden behind the computation of the other. In the decoding phase, the attention layer is subdivided into two steps and a 5-stage pipeline achieves seamless overlapping.
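
The sketch below models this idea with made-up stage durations (a timing cartoon, not DeepSeek's actual scheduler): with a single batch, all-to-all communication sits on the critical path of every step; with two alternating microbatches, one microbatch's communication runs while the other computes, so the steady-state cost per step approaches max(compute, comm) rather than their sum.

```python
# Illustrative timing model of dual micro-batch overlap (hypothetical numbers).
COMPUTE_MS = 6.0   # attention + expert FFN compute per microbatch (made-up)
COMM_MS = 4.0      # all-to-all dispatch/combine per microbatch (made-up)
N_STEPS = 100      # number of layers/steps processed

def no_overlap_total() -> float:
    # One batch: communication adds to compute on every step.
    return N_STEPS * (COMPUTE_MS + COMM_MS)

def dual_batch_overlap_total() -> float:
    # Two microbatches alternate: while microbatch A computes step i,
    # microbatch B's communication for step i-1 is in flight.  In steady
    # state each step costs max(compute, comm) instead of their sum.
    steady = (N_STEPS - 1) * max(COMPUTE_MS, COMM_MS)
    return COMPUTE_MS + COMM_MS + steady   # the first step cannot be hidden

print(f"no overlap : {no_overlap_total():.0f} ms")
print(f"dual batch : {dual_batch_overlap_total():.0f} ms")
print(f"speedup    : {no_overlap_total() / dual_batch_overlap_total():.2f}x")
```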

Diagram of DeepSeek’s Online Inference System

This diagram depicts a system with two main components: Prefill and Decode services, each managed by load balancers for parallel processing. The API Server directs requests to these services. Both services utilize an optional external key-value cache (KVCache) for storage. The system is designed for efficient and scalable handling of API requests through parallel processing and caching.
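
A minimal, purely illustrative sketch of that flow, using hypothetical stand-in classes (Worker, LoadBalancer, KVCache are placeholders, not DeepSeek's serving components): a handler checks the external KVCache, sends the uncached prompt to the least-loaded prefill instance, then hands the resulting KV state to a decode instance.

```python
# Purely illustrative request flow for the architecture described above.
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    load: int = 0

    def prefill(self, prompt: list[int], start: int) -> dict:
        self.load += len(prompt) - start
        return {"prompt": prompt, "kv_len": len(prompt)}  # stand-in KV handle

    def generate(self, kv: dict, max_tokens: int = 4) -> list[int]:
        self.load += max_tokens
        return [0] * max_tokens  # stand-in output tokens

@dataclass
class LoadBalancer:
    workers: list[Worker]
    def pick(self) -> Worker:
        return min(self.workers, key=lambda w: w.load)

@dataclass
class KVCache:
    entries: dict = field(default_factory=dict)
    def longest_prefix(self, prompt: list[int]) -> int:
        return self.entries.get(tuple(prompt), 0)
    def save(self, prompt: list[int], kv: dict) -> None:
        self.entries[tuple(prompt)] = kv["kv_len"]

def handle_request(prompt, prefill_lb, decode_lb, cache):
    cached = cache.longest_prefix(prompt)           # optional external KVCache lookup
    kv = prefill_lb.pick().prefill(prompt, cached)  # prefill service (parallel instances)
    cache.save(prompt, kv)
    return decode_lb.pick().generate(kv)            # decode service (parallel instances)

prefill_lb = LoadBalancer([Worker("prefill-0"), Worker("prefill-1")])
decode_lb = LoadBalancer([Worker("decode-0"), Worker("decode-1")])
print(handle_request([1, 2, 3], prefill_lb, decode_lb, KVCache()))
```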

Performance Statistics

The performance of the DeepSeek-V3/R1 inference system has been impressive. Over 24 hours, the system achieved the following statistics:

  • Total Input Tokens: 608 billion, with 342 billion (56.3%) hitting the on-disk KV cache.
  • Total Output Tokens: 168 billion, with an average output speed of 20–22 tokens per second.
  • Average Throughput: Each H800 node delivered approximately 73.7k tokens/s for input and 14.8k tokens/s for output.
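
These published figures hang together. The back-of-the-envelope check below divides the daily totals by the per-node throughputs (under the simplifying assumption that input throughput corresponds to prefill nodes and output throughput to decode nodes) to recover the cache-hit ratio and an implied average node count:

```python
SECONDS_PER_DAY = 24 * 3600

input_tokens = 608e9          # total input tokens over 24 h
cache_hit_tokens = 342e9      # input tokens served from the on-disk KV cache
output_tokens = 168e9         # total output tokens over 24 h
input_tps_per_node = 73.7e3   # reported per-node input throughput (tokens/s)
output_tps_per_node = 14.8e3  # reported per-node output throughput (tokens/s)

print(f"cache-hit ratio : {cache_hit_tokens / input_tokens:.2%}")  # 56.25%, i.e. the reported ~56.3%
print(f"aggregate input : {input_tokens / SECONDS_PER_DAY / 1e6:.2f} M tokens/s")
print(f"aggregate output: {output_tokens / SECONDS_PER_DAY / 1e6:.2f} M tokens/s")

# Implied average node counts, assuming input throughput maps to prefill
# nodes and output throughput to decode nodes (a simplifying assumption).
prefill_nodes = input_tokens / SECONDS_PER_DAY / input_tps_per_node
decode_nodes = output_tokens / SECONDS_PER_DAY / output_tps_per_node
print(f"implied fleet   : ~{prefill_nodes:.0f} prefill + ~{decode_nodes:.0f} decode "
      f"nodes, i.e. roughly {prefill_nodes + decode_nodes:.0f} H800 nodes on average")
```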

Cost and Revenue Analysis

The operational costs and revenue generated by the DeepSeek-V3/R1 system are noteworthy. The total daily cost for running the inference services, assuming a leasing cost of $2 per hour per H800 GPU, amounted to $87,072.

If all tokens were billed at DeepSeek-R1’s pricing, the theoretical total daily revenue would be $562,027, a remarkable cost profit margin of 545%. The pricing structure is as follows (a back-of-the-envelope reconstruction of these figures follows the list):

  • R1 Pricing:
    • $0.14/M for input tokens (cache hit)
    • $0.55/M for input tokens (cache miss)
    • $2.19/M for output tokens
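
Plugging the token counts into these rates reproduces the headline numbers. This is a rough reconstruction that assumes every token is billed at the R1 list prices above; as the next paragraph explains, actual revenue is lower.

```python
# Back-of-the-envelope reconstruction of the reported revenue and margin.
DAILY_COST = 87_072            # reported total daily cost, $ (at $2 per H800 GPU-hour)
REPORTED_REVENUE = 562_027     # reported theoretical daily revenue, $

input_hit_B = 342              # billions of input tokens served from the KV cache
input_miss_B = 608 - 342       # billions of input tokens missing the cache
output_B = 168                 # billions of output tokens

# Prices are $/million tokens, so billions of tokens * price = thousands of $.
revenue = (input_hit_B * 1_000 * 0.14
           + input_miss_B * 1_000 * 0.55
           + output_B * 1_000 * 2.19)
print(f"reconstructed daily revenue : ${revenue:,.0f}")   # ~$562k vs. reported $562,027
print(f"cost profit margin (reported): {REPORTED_REVENUE / DAILY_COST - 1:.0%}")  # 545%
```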

However, actual revenue is lower due to several factors:

  • DeepSeek-V3’s pricing is significantly lower than R1.
  • Only a subset of services are monetized, with web and app access remaining free.
  • Nighttime discounts are applied during off-peak hours.

Graph Overview

  • Datasets: The graph displays cost (in yellow) and theoretical income (in blue) over 24 hours, from 12:00 to 12:00.
  • Data Trends: Theoretical income shows significant peaks during certain hours, indicating higher potential earnings, while costs remain relatively stable and low in comparison.
  • Time Analysis: Cost remains consistently low, suggesting efficient operations, while theoretical income fluctuates, hinting at varying levels of engagement or activity.

Note: The theoretical income is based on API pricing calculations and does not reflect actual earnings.

For a detailed analysis, please refer to DeepSeek’s Day 6 post on GitHub.


Conclusion

The DeepSeek-V3/R1 inference system represents a significant advancement in the field of artificial intelligence, particularly in optimizing throughput and latency. Through the innovative use of cross-node Expert Parallelism, effective load balancing, and communication-computation overlapping, DeepSeek has achieved impressive performance metrics.

As DeepSeek continues to refine its systems and share insights with the community, it is contributing to the broader goal of artificial general intelligence (AGI). The insights gained from this week will not only deepen the community’s understanding but also pave the way for future innovations in AI technology.

DeepSeek encourages the community to engage with these resources, which provide valuable insight into the ongoing developments in the DeepSeek project and their implications for the future of AI.

Harsh Mishra is an AI/ML Engineer who spends more time talking to Large Language Models than actual humans. Passionate about GenAI, NLP, and making machines smarter (so they don’t replace him just yet). When not optimizing models, he’s probably optimizing his coffee intake. 🚀☕


