🎨 Transformer Architecture Diagram Editor (Original-Figure Reproduction)
Output Probabilities:
Softmax:
Linear:
Add & Norm (upper right):
Feed Forward (upper right):
Add & Norm (middle right):
Multi-Head Attention (middle right):
Add & Norm (lower right):
Masked Multi-Head Attention:
Add & Norm (upper left):
Feed Forward (left):
Add & Norm (lower left):
Multi-Head Attention (left):
Input Embedding:
Output Embedding:
Inputs:
Outputs:
Outputs subtitle:
Positional Encoding:
N× (left):
N× (right):
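The labels above enumerate the components of the original Transformer figure: embeddings plus positional encoding feed an N×-stacked encoder (left) and decoder (right), followed by a linear projection and softmax. A minimal single-head NumPy sketch of that data flow, for orientation only: the weights are random placeholders, the multi-head split is omitted for brevity, and the dimensions and vocabulary size are assumptions, not values from the editor.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    # the "Norm" in each Add & Norm box
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def attention(q, k, v, mask=None):
    # scaled dot-product attention; single head for brevity
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block future positions
    return softmax(scores) @ v

def feed_forward(x, w1, w2):
    # position-wise two-layer ReLU MLP (the Feed Forward boxes)
    return np.maximum(0.0, x @ w1) @ w2

def positional_encoding(seq_len, d_model):
    # sinusoidal encoding: sin on even indices, cos on odd
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / 10000 ** ((i // 2 * 2) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

d, n_layers, src_len, tgt_len, vocab = 16, 2, 5, 4, 10  # assumed sizes
Wff1 = rng.normal(size=(d, 4 * d)) * 0.1   # placeholder weights
Wff2 = rng.normal(size=(4 * d, d)) * 0.1
W_out = rng.normal(size=(d, vocab)) * 0.1

# Inputs / Outputs -> Embedding + Positional Encoding
src = rng.normal(size=(src_len, d)) + positional_encoding(src_len, d)
tgt = rng.normal(size=(tgt_len, d)) + positional_encoding(tgt_len, d)

# N x encoder blocks (left stack): self-attention then feed forward,
# each followed by Add & Norm (post-norm, as drawn in the figure)
enc = src
for _ in range(n_layers):
    enc = layer_norm(enc + attention(enc, enc, enc))
    enc = layer_norm(enc + feed_forward(enc, Wff1, Wff2))

# N x decoder blocks (right stack): masked self-attention,
# cross-attention over the encoder output, then feed forward
causal = np.tril(np.ones((tgt_len, tgt_len), dtype=bool))
dec = tgt
for _ in range(n_layers):
    dec = layer_norm(dec + attention(dec, dec, dec, mask=causal))
    dec = layer_norm(dec + attention(dec, enc, enc))
    dec = layer_norm(dec + feed_forward(dec, Wff1, Wff2))

# Linear -> Softmax -> Output Probabilities
probs = softmax(dec @ W_out)
print(probs.shape)  # (4, 10): one distribution per target position
```

Each row of `probs` sums to 1, matching the "Output Probabilities" box at the top of the figure.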
🔄 Update Diagram
📥 Export PNG
📥 Export JPEG
📥 Export PDF
📥 Export SVG