TECHNICAL RESEARCH DOCUMENT

CORE ARCHITECTURES OF TEXT DIFFUSION MODELS

A Developer's Guide
DOCUMENT ID: TDM_DEV_GUIDE_v0.8
AUTHOR: Jorge A. Arroyo
STATUS: RELEASED

EXECUTIVE SUMMARY

An analysis of recent research reveals two dominant architectural paradigms for text generation diffusion models: Discrete Diffusion and Continuous Diffusion. While both are built on the foundational principle of reversing a corruption process, their internal mechanics differ significantly. This guide outlines a core two-component architecture with five essential subcomponents, providing a clear and consistent understanding of both approaches for developers.

The central idea is a two-stage process: a fixed, non-learned Forward Process systematically corrupts clean text into a simple, known distribution, and a learned Reverse Process starts from samples of that simple distribution and iteratively refines them back into coherent, clean text.

BREAKTHROUGH VALIDATION: LLaDA (2025)

The theoretical foundations outlined in this guide received major validation with the release of LLaDA (Large Language Diffusion with mAsking) by Nie et al. (2025). LLaDA represents the first discrete diffusion language model to achieve competitive performance with strong autoregressive LLMs at scale.

Scale Achievement: LLaDA 8B, trained from scratch on 2.3T tokens, achieves performance competitive with LLaMA3 8B across diverse benchmarks including language understanding, mathematics, code generation, and Chinese language tasks.

Key Validations:

  • Scalability: Proves discrete diffusion scales effectively to 8B parameters and beyond
  • Competitive Performance: Matches or exceeds strong AR baselines on standard benchmarks
  • Unique Capabilities: Addresses the "reversal curse," outperforming GPT-4o on reversal reasoning tasks
  • Instruction Following: Demonstrates strong chat and instruction-following abilities after supervised fine-tuning

This breakthrough establishes discrete diffusion as a viable alternative to autoregressive modeling for large-scale language generation.


Current Architectural Limitations & Core Trade-offs

Before diving into the components, it's crucial to understand the high-level trade-offs between autoregressive (AR) and non-autoregressive (NAR) models like diffusion. The choice between them often comes down to a fundamental decision between optimizing for compute efficiency or data efficiency.

Autoregressive (AR) Models: Optimized for Compute

AR models are highly optimized for computational efficiency: each token requires a single forward pass, and techniques such as KV caching make generation fast and predictable. The trade-off is a strictly left-to-right factorization, which limits bidirectional context, parallel generation, and the ability to revise earlier tokens.

Diffusion Models (NAR): Optimized for Data

Diffusion models are "super data learners" (Ni et al., 2025): they trade higher training and inference compute for greater data efficiency, revisiting each training sequence at many corruption levels and extracting more signal per token, which is especially valuable in data-constrained settings (Prabhudesai et al., 2025).


The Two Core Components

Text diffusion models fundamentally consist of two complementary processes that work together to enable generation:

Core Component 1: The Forward Process (Data → Noise)

The forward process systematically corrupts clean text into a simple, tractable distribution. This is a fixed, non-learned process that creates the learning task for the reverse process. It consists of three essential subcomponents that work together to define how clean data becomes noise.

Core Component 2: The Reverse Process (Noise → Data)

The reverse process learns to invert the forward corruption, step-by-step, to generate new data. This is the learned, generative core of the model. It consists of two essential subcomponents that define how the model learns to denoise and what objective guides this learning.


Core Component 1: The Forward Process

Subcomponent 1A: Input Representation

This subcomponent determines how raw text is converted into a format suitable for the diffusion process.

Discrete Diffusion operates directly on tokenized text, using token IDs from a standard vocabulary (e.g., BPE, WordPiece). Continuous Diffusion converts discrete tokens into continuous vector representations, typically through embeddings or contextualized encodings from pre-trained models.
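
To make the distinction concrete, the sketch below shows both input representations side by side. It assumes the Hugging Face transformers library and a BERT checkpoint purely for illustration; no paper discussed here mandates this particular tooling.

    from transformers import AutoModel, AutoTokenizer
    import torch

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    encoder = AutoModel.from_pretrained("bert-base-uncased")

    batch = tokenizer("Diffusion models generate text by iterative denoising.",
                      return_tensors="pt")

    # Discrete diffusion operates directly on integer token IDs.
    token_ids = batch["input_ids"]                      # shape: (1, seq_len)

    # Continuous diffusion operates on continuous vectors, e.g. contextual
    # encodings produced by a frozen pre-trained encoder.
    with torch.no_grad():
        encodings = encoder(**batch).last_hidden_state  # shape: (1, seq_len, 768)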

Subcomponent 1B: The Corruption Process

Discrete Diffusion

The corruption process operates as a fixed Markov chain, starting with a clean text sequence (x₀) and progressively replacing discrete tokens with a special [MASK] token until the sequence is fully degraded. This approach directly connects the diffusion framework to the highly successful Masked Language Modeling (MLM) paradigm.

Critical Implementation Detail: Variable Masking Ratios

LLaDA (Nie et al., 2025) demonstrates that variable masking ratios per sequence are essential for optimal performance:

Variable Masking (LLaDA Approach):

  • Each sequence in a training batch gets a different corruption ratio
  • Sampled as t ~ U[0,1] per sequence
  • Trains the model to handle diverse corruption levels simultaneously

Why Not Fixed Ratios:

  • Fixed ratios train the model for only one specific infilling task
  • Variable ratios create a more robust, generalizable model
  • Essential for achieving competitive performance at scale

Implementation: Set single_ratio_per_sequence = false in your diffusion configuration.
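
Below is a minimal PyTorch sketch of this variable-masking forward process; the function name and the mask_token_id argument are illustrative placeholders rather than part of any published implementation.

    import torch

    def forward_mask(x0, mask_token_id, eps=1e-3):
        """Corrupt clean token IDs x0 (batch, seq_len) with per-sequence ratios."""
        batch, seq_len = x0.shape
        # One corruption level per sequence: t ~ U[eps, 1]
        t = (1.0 - eps) * torch.rand(batch, device=x0.device) + eps
        # Each token is masked independently with its sequence's probability t
        mask = torch.rand(batch, seq_len, device=x0.device) < t.unsqueeze(1)
        xt = torch.where(mask, torch.full_like(x0, mask_token_id), x0)
        return xt, t, mask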

Continuous Diffusion

This approach first maps tokens into a continuous vector space using contextual encodings from pre-trained models (e.g., BERT). The forward process then gradually adds Gaussian noise to these encodings according to a predefined schedule until they become pure noise (zT).

Shabalin et al. (2025) demonstrate that using contextual encodings is superior to using context-free embeddings, providing the diffusion model with a more suitable latent space for training.
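
As a point of reference, a generic variance-preserving forward step on latent encodings looks like the sketch below. The alpha_bar_t signal level is assumed to come from whatever noise schedule is in use; this is not the exact TEncDM formulation, only the standard Gaussian corruption it builds on.

    import torch

    def add_gaussian_noise(z0, alpha_bar_t):
        """Sample z_t ~ N(sqrt(alpha_bar_t) * z0, (1 - alpha_bar_t) * I)."""
        # z0: clean latent encodings (batch, seq_len, dim); alpha_bar_t: (batch,)
        a = alpha_bar_t.view(-1, 1, 1)
        noise = torch.randn_like(z0)
        zt = a.sqrt() * z0 + (1.0 - a).sqrt() * noise
        return zt, noise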

Subcomponent 1C: The Corruption Schedule

Discrete Diffusion

CRITICAL DISTINCTION: Training vs Inference Schedules

Recent research, particularly LLaDA (Nie et al., 2025), clarifies that training and inference use different scheduling approaches:

Training Schedule:

  • Uses uniform sampling of corruption ratios: t ~ U[0,1] for each sequence
  • Each training example gets a different randomly sampled corruption level
  • Implementation: mask_ratio = torch.rand(batch_size)

Inference Schedule:

  • Uses cosine discretization for the reverse-process steps
  • Formula: t(i) = cos²(π/2 × (1 - i/T)) for step i of T total steps
  • Creates equally difficult denoising steps for optimal sampling quality

Why This Matters: Training with uniform sampling ensures robust learning across all corruption levels, while inference with cosine scheduling ensures optimal generation quality. Zhang (2025) proves that the cosine schedule is Fisher-Rao optimal for masked discrete diffusion.
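
The contrast is easy to state in code. The sketch below (function names are illustrative) draws the uniform per-sequence corruption levels used during training and the cosine-discretized timestep grid used during inference, traversed from t = 1 (fully masked) down to t ≈ 0 (clean).

    import math
    import torch

    def training_corruption_levels(batch_size):
        # One corruption level per sequence: t ~ U[0, 1]
        return torch.rand(batch_size)

    def inference_timesteps(T):
        # Cosine discretization t(i) = cos²(π/2 × (1 - i/T)),
        # listed from i = T (t = 1, fully masked) down to i = 0 (t ≈ 0, clean)
        return [math.cos(math.pi / 2 * (1 - i / T)) ** 2 for i in range(T, -1, -1)]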

Continuous Diffusion

The schedule defines how much Gaussian noise is added at each step t of the forward process. Standard schedules from image diffusion have been found to be suboptimal for text. Shabalin et al. (2025) propose a tan-d noise scheduler designed to introduce a significantly higher and more consistent level of noise across all timesteps.


Core Component 2: The Reverse Process

Subcomponent 2A: The Denoising Network

Discrete Diffusion

A neural network (typically a Transformer) learns to predict the original tokens at masked positions. The reverse process can operate in multiple steps, iteratively replacing masked tokens with predictions, often with remasking strategies that refine uncertain predictions over several iterations.

Sahoo et al. (2024) introduced confidence-based remasking, where the model iteratively unmasks the most confident predictions while remasking uncertain ones, significantly improving generation quality.

Continuous Diffusion

A neural network learns to denoise the corrupted latent vectors, typically involving predicting either the noise to be removed or the clean latent vectors directly. Shabalin et al. (2025) demonstrate that using a sophisticated encoder-denoiser-decoder architecture significantly improves performance.

Subcomponent 2B: The Objective Function

Discrete Diffusion

Correct Loss Function Implementation

The correct, theoretically grounded loss function for a sequence of N tokens is:

L∞(N) = ∫₀¹ [α'ₜ/(1-αₜ)] E_q(xₜ|x₀) [∑_{n:xₜ⁽ⁿ⁾=m} (x₀⁽ⁿ⁾)ᵀ log μθ⁽ⁿ⁾(xₜ,t)] dt

Key Components:

  • μθ⁽ⁿ⁾(xₜ,t): Your neural network predicting probability distribution of original n-th token
  • αₜ: The masking schedule (e.g., αₜ = 1-t for linear schedule)
  • α'ₜ/(1-αₜ): Crucial time-dependent reweighting factor

CRITICAL: Omitting the time-dependent reweighting factor leads to an incorrect loss formulation that does not faithfully optimize the data's log-likelihood and makes comparisons with AR models fundamentally unfair.
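
For concreteness, here is a Monte Carlo sketch of this objective under the linear schedule αₜ = 1 - t, for which the reweighting factor α'ₜ/(1-αₜ) reduces to -1/t and the loss becomes a 1/t-weighted cross-entropy over masked positions. The function is illustrative, not taken from a specific codebase.

    import torch
    import torch.nn.functional as F

    def masked_diffusion_loss(logits, x0, mask, t):
        """Estimate L∞ for the linear schedule alpha_t = 1 - t.

        logits: (batch, seq_len, vocab) denoiser predictions at x_t
        x0:     (batch, seq_len) clean token IDs
        mask:   (batch, seq_len) True where x_t is [MASK]
        t:      (batch,) corruption levels sampled from U(0, 1]
        """
        # Cross-entropy of the clean token at every position
        token_nll = F.cross_entropy(logits.transpose(1, 2), x0, reduction="none")
        # Keep only masked positions, then apply the 1/t reweighting
        masked_nll = (token_nll * mask).sum(dim=1)
        return (masked_nll / t).mean()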

Continuous Diffusion

The objective is typically a regression-style loss. The denoising network is trained by minimizing the mean-squared error (MSE) between the network's prediction of the clean latent vectors and the true clean latent vectors.
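
A minimal sketch of that objective, assuming the denoising network predicts the clean latents directly (the denoiser interface and the alpha_bar_t schedule are placeholders):

    import torch
    import torch.nn.functional as F

    def continuous_diffusion_loss(denoiser, z0, alpha_bar_t, timesteps):
        """MSE between predicted and true clean latents (x0-prediction)."""
        a = alpha_bar_t.view(-1, 1, 1)
        # Corrupt the clean latents, then ask the network to recover them
        zt = a.sqrt() * z0 + (1.0 - a).sqrt() * torch.randn_like(z0)
        z0_hat = denoiser(zt, timesteps)
        return F.mse_loss(z0_hat, z0)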


Advanced Sampling: Multiple Remasking Strategies

LLaDA introduces multiple remasking strategies for different use cases, moving beyond simple random remasking:

Random Remasking (Algorithm 4)

At each reverse step, a random subset of the freshly predicted tokens is remasked, giving every position an equal chance of being revisited.

Low-Confidence Remasking (Algorithm 5)

The model keeps its most confident predictions and remasks the positions with the lowest predicted probability, refining uncertain tokens over subsequent steps.

Semi-Autoregressive Remasking

The sequence is divided into blocks that are generated left to right; within each block, tokens are filled in using the diffusion-style remasking procedure.

Implementation Insight: LLaDA shows that remasking strategy should be task-dependent, with base models preferring confidence-based approaches and instruct models benefiting from semi-autoregressive strategies.
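
The sketch below illustrates the low-confidence strategy in simplified form: at each step, all masked positions are predicted, then the least confident predictions are remasked so that the fraction of masked tokens follows the cosine schedule. The model interface (logits of shape (1, seq_len, vocab)) is an assumption for illustration, not LLaDA's exact algorithm.

    import math
    import torch

    @torch.no_grad()
    def sample_low_confidence(model, mask_token_id, seq_len, steps, device="cpu"):
        """Simplified reverse process with low-confidence remasking."""
        x = torch.full((1, seq_len), mask_token_id, dtype=torch.long, device=device)
        # Cosine grid from t = 1 (fully masked) down to t ≈ 0 (clean)
        ts = [math.cos(math.pi / 2 * (1 - i / steps)) ** 2
              for i in range(steps, -1, -1)]
        for t_now, t_next in zip(ts[:-1], ts[1:]):
            still_masked = x == mask_token_id
            if not still_masked.any():
                break
            probs = model(x).softmax(dim=-1)       # (1, seq_len, vocab)
            conf, pred = probs.max(dim=-1)         # confidence and argmax token
            # Tentatively fill every masked position with its prediction
            x = torch.where(still_masked, pred, x)
            # Keep roughly t_next * seq_len tokens masked for the next step
            n_remask = int(round(t_next * seq_len))
            if n_remask > 0:
                # Only newly predicted positions are candidates for remasking
                conf = conf.masked_fill(~still_masked, float("inf"))
                remask_idx = conf.squeeze(0).argsort()[:n_remask]
                x[0, remask_idx] = mask_token_id
        return x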


Implementation Considerations for Developers

Choosing Between Discrete and Continuous

Discrete Diffusion

Recommended for developers who:

  • Want to stay close to the familiar token-level, MLM-style Transformer stack
  • Prefer a simple cross-entropy-based objective with a principled likelihood interpretation
  • Need scaling behavior that has been validated at the 8B-parameter level (LLaDA)

Continuous Diffusion

Recommended for developers who:

  • Want to build on pre-trained encoders and work in a continuous latent space
  • Are comfortable with Gaussian noise schedules and regression-style (MSE) objectives
  • Are prepared to manage an encoder-denoiser-decoder pipeline (Shabalin et al., 2025)

Scaling Lessons from LLaDA

Architecture Scaling

Training Scaling

Advanced Technique: Accelerating Inference with Speculative Sampling

A major drawback of diffusion models is their slow inference speed due to the iterative nature of the reverse process. Speculative sampling, adapted for diffusion models by De Bortoli et al. (2025), offers a promising solution.

The Core Idea: Instead of running the expensive, high-quality "target" model for every single generation step, a faster, lower-quality "draft" model proposes a sequence of future steps. The target model then verifies these proposed steps in a single parallel pass, accepting or rejecting them. This can reduce the number of required evaluations by 50% or more without any loss in sample quality.
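
The control flow, stripped of the mathematical details, is sketched below. The verify step stands in for the acceptance rule of De Bortoli et al. (2025), which is not reproduced here; treat the whole function as a schematic of the draft-then-verify loop, not as the paper's algorithm.

    import torch

    @torch.no_grad()
    def speculative_reverse(target_step, draft_step, verify, x_t, t_grid, k=4):
        """Schematic draft-then-verify loop for iterative reverse sampling."""
        i = 0
        while i < len(t_grid) - 1:
            # 1. Cheap draft model proposes up to k future states sequentially.
            proposals, x = [], x_t
            for j in range(min(k, len(t_grid) - 1 - i)):
                x = draft_step(x, t_grid[i + j], t_grid[i + j + 1])
                proposals.append(x)
            # 2. Expensive target model checks all proposals in one parallel pass.
            #    verify returns how many leading proposals are accepted and a
            #    corrected state from the target for the first rejected step.
            n_accepted, corrected = verify(target_step, x_t, proposals,
                                           t_grid[i:i + len(proposals) + 1])
            if n_accepted == len(proposals):
                x_t = proposals[-1]
                i += n_accepted
            else:
                x_t = corrected                # progress is guaranteed: at least
                i += n_accepted + 1            # one reverse step per verification
        return x_t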

Drafting Strategies for Developers


References

Austin, J., Johnson, D. D., Ho, J., Tarlow, D., & van den Berg, R. (2021). Structured denoising diffusion models in discrete state-spaces. In Advances in Neural Information Processing Systems, 34, 17981–17993.
Chen, J., Zhang, A., Li, M., Smola, A., & Yang, D. (2023). A cheaper and better diffusion language model with soft-masked noise. arXiv preprint arXiv:2304.04746.
De Bortoli, V., Galashov, A., Gretton, A., & Doucet, A. (2025). Accelerated diffusion models via speculative sampling. In Proceedings of the 42nd International Conference on Machine Learning.
Gong, S., Li, M., Feng, J., Wu, Z., & Kong, L. (2023). DiffuSeq-v2: Bridging discrete and continuous text spaces for accelerated Seq2Seq diffusion models. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 9868-9875).
Li, Y., Zhou, K., Zhao, W. X., & Wen, J.-R. (2023). Diffusion models for non-autoregressive text generation: A survey. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23) (pp. 6692-6701).
Ni, J., et al. (2025). Diffusion language models are super data learners. Blog Post. Retrieved from https://jinjieni.notion.site/Diffusion-Language-Models-are-Super-Data-Learners
Nie, S., Zhu, F., You, Z., Zhang, X., Ou, J., Hu, J., Zhou, J., Lin, Y., Wen, J.-R., & Li, C. (2025). Large Language Diffusion Models. arXiv preprint arXiv:2502.09992.
Prabhudesai, M., Wu, M., Zadeh, A., Fragkiadaki, K., & Pathak, D. (2025). Diffusion beats autoregressive in data-constrained settings. arXiv preprint arXiv:2507.15857.
Sahoo, S. S., Arriola, M., Schiff, Y., Gokaslan, A., Marroquin, E., Chiu, J. T., Rush, A., & Kuleshov, V. (2024). Simple and effective masked diffusion language models. In Advances in Neural Information Processing Systems, 37.
Shabalin, A., Meshchaninov, V., Chimbulatov, E., Lapikov, V., Kim, R., Bartosh, G., Molchanov, D., Markov, S., & Vetrov, D. (2025). TEncDM: Understanding the Properties of the Diffusion Model in the Space of Language Model Encodings.
Shi, J., Han, K., Wang, Z., Doucet, A., & Titsias, M. K. (2024). Simplified and generalized masked diffusion for discrete data. In Advances in Neural Information Processing Systems, 37.
Tang, C., Zhu, F., Huang, Z., & Liu, X. (2023). Denoising text generation by learning to reconcile predictions at different timesteps. arXiv preprint arXiv:2310.13308.
Yi, X., Zhang, W., Wang, T., Li, L., & Yang, J. (2024). A comprehensive survey of diffusion models for text generation. arXiv preprint arXiv:2401.12345.
Zhang, L. (2025). The cosine schedule is Fisher-Rao-optimal for masked discrete diffusion models. arXiv preprint arXiv:2508.04884.