
Notes

A running collection of ideas, experiments, and references.

AI Notes

Computes attention in tiles/blocks rather than materializing the full $N \times N$ attention matrix
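Rough illustration of the idea (my own sketch in NumPy, not any library's actual kernel): stream over key/value tiles with an online softmax, keeping a running row max and denominator so the full $N \times N$ score matrix never exists in memory. Function name and block sizes are arbitrary.

```python
import numpy as np

def tiled_attention(Q, K, V, Bq=64, Bk=64):
    """Blockwise attention with online softmax; never builds the N x N matrix."""
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q)

    for qs in range(0, N, Bq):
        q = Q[qs:qs + Bq] * scale              # query tile (bq, d)
        m = np.full(q.shape[0], -np.inf)       # running row max
        l = np.zeros(q.shape[0])               # running softmax denominator
        acc = np.zeros((q.shape[0], d))        # unnormalized output accumulator

        for ks in range(0, N, Bk):
            k, v = K[ks:ks + Bk], V[ks:ks + Bk]    # key/value tile (bk, d)
            s = q @ k.T                            # scores for this tile only (bq, bk)

            m_new = np.maximum(m, s.max(axis=1))   # update running max
            p = np.exp(s - m_new[:, None])         # tile probabilities (unnormalized)
            corr = np.exp(m - m_new)               # rescale previously accumulated results
            l = l * corr + p.sum(axis=1)
            acc = acc * corr[:, None] + p @ v
            m = m_new

        O[qs:qs + Bq] = acc / l[:, None]           # final per-row normalization
    return O

# Sanity check against the naive full-matrix version.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
S = (Q @ K.T) / np.sqrt(32)
P = np.exp(S - S.max(axis=1, keepdims=True))
ref = (P / P.sum(axis=1, keepdims=True)) @ V
assert np.allclose(tiled_attention(Q, K, V), ref, atol=1e-6)
```

The rescaling step (`corr`) is what lets each tile be processed independently while still producing an exact softmax at the end; memory for scores drops from $O(N^2)$ to $O(B_q \cdot B_k)$ per tile.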

Modality Extension Deep Dive

2026-02-11: Read through the LLaVA to MoE-LLaVA lineage; those two papers seem like good starting points. Still need to look into compatibility with LoRA, as well as what Q-Former is.
