[Example] Accelerating Sparse MoE Layer on FPGA using Allo #489

Open
zhangxiaohuang1111 wants to merge 3 commits into cornell-zhang:main from zhangxiaohuang1111:zh476

Conversation

zhangxiaohuang1111 commented Dec 13, 2025

Description

This PR contributes a complete hardware accelerator design for a Mixture of Experts (MoE) layer combined with Multi-Head Attention (MHA). This was developed as the final project for Cornell ECE6775.

It demonstrates how to use Allo to handle dynamic control flow (routing) and irregular memory accesses found in sparse LLM architectures.
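
For context, here is a minimal sketch of what top-1 expert routing looks like when written as an Allo kernel. This is not code from the PR; the token/expert counts, the gate-logit layout, and the route_top1 name are illustrative assumptions.

import allo
from allo.ir.types import float32, int32

T, E = 64, 8  # tokens, experts

def route_top1(gates: float32[T, E]) -> int32[T]:
    # For each token, select the expert with the largest gate logit.
    # The data-dependent comparison is the dynamic control flow that
    # the routing stage has to handle.
    idx: int32[T] = 0
    for t in range(T):
        best: float32 = gates[t, 0]
        sel: int32 = 0
        for e in range(1, E):
            if gates[t, e] > best:
                best = gates[t, e]
                sel = e
        idx[t] = sel
    return idx

s = allo.customize(route_top1)  # scheduled and built like any other Allo kernel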

Problems

Performance Optimization: Implementing MoE efficiently on FPGA is challenging because the sparse, data-dependent expert routing introduces dynamic control flow and irregular memory accesses that leave compute units underutilized. We wanted to show how Allo's scheduling primitives can optimize this.

Proposed Solutions

We implemented a scaled Switch-Base-8 architecture with three progressive optimization levels:

Base Version: A pure Python reference implementation for functional correctness.

Lib Version: Utilizes allo.library.nn (e.g., linear2d, GeLU) for better code reuse and systolic array structures.

Alt Version (Optimized): Applies custom HLS optimizations, achieving a 46x speedup over the base version. Key optimizations include the following (a sketch of the corresponding scheduling primitives follows this list):

Loop Reordering: Optimized memory access patterns for the Attention mechanism.

Operator Fusion: Fused GeLU with Fully Connected layers to reduce memory bandwidth.

Pipelining & Unrolling: Applied to Expert sub-kernels for higher parallelism.

We also provided a PyTorch-based verification script that checks numerical equivalence between the hardware kernel and a golden model; a sketch of this flow is shown below.
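
A minimal sketch of that verification pattern, reusing the mod built from expert_ffn in the scheduling sketch above; the PyTorch golden model (a plain matmul here) and the tolerances are illustrative assumptions, not the PR's actual script.

import numpy as np
import torch

X = np.random.randn(64, 64).astype(np.float32)
W = np.random.randn(64, 64).astype(np.float32)

golden = (torch.from_numpy(X) @ torch.from_numpy(W)).numpy()  # PyTorch golden model
hw_out = mod(X, W)  # LLVM-built Allo modules can be called on NumPy arrays

np.testing.assert_allclose(hw_out, golden, rtol=1e-3, atol=1e-3)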

Examples

import allo
from allo.library.nn import linear2d

def top_structure(Q, K, V, ...):
    # 1. Attention Stage
    attn_out = attention_kernel(Q, K, V)
    # 2. MoE Stage
    moe_out = moe_kernel(attn_out)
    return moe_out

s = allo.customize(top_structure)

s.compose(attention_schedule)
s.compose(moe_schedule)

mod = s.build(target="llvm")

Checklist

Please make sure to review and check all of these items:

  • PR's title starts with a category (e.g. [Bugfix], [IR], [Builder], etc)
  • All changes have test coverage (It would be good to provide ~2 different test cases to test the robustness of your code)
  • Pass the formatting check locally
  • Code is well-documented

@zhangxiaohuang1111 zhangxiaohuang1111 changed the title [ece6775 Allo project] Accelerating Sparse MoE Layer on FPGA using Allo [Example] Accelerating Sparse MoE Layer on FPGA using Allo Dec 13, 2025