Overview
Share focuses on parameter-efficient continual finetuning without data replay or a growing list of adapters.
Single Shared Subspace
Learn a shared low-rank subspace that is updated as new tasks arrive, keeping a compact representation of past knowledge.
Almost Strict Continual Learning
Integrate new task directions while minimizing interference with prior tasks to reduce catastrophic forgetting.
Scalable Efficiency
Replace many task-specific adapters with one evolving subspace, enabling large parameter and memory savings; a minimal sketch of this idea follows.
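To make the idea above concrete, here is a minimal sketch of one shared low-rank correction reused across tasks, assuming a frozen pretrained weight W0 and a single B @ A update; the names, shapes, and rank are illustrative assumptions, not the paper's exact parameterization.

```python
# Hypothetical sketch: one shared low-rank update stands in for a growing
# list of per-task LoRA adapters. W0, B, A, and the rank are assumptions.
import numpy as np

d, r = 768, 16
rng = np.random.default_rng(0)

W0 = rng.standard_normal((d, d))   # frozen pretrained weight
B = rng.standard_normal((d, r))    # shared basis, evolves as tasks arrive
A = np.zeros((r, d))               # shared coefficients in that basis

def forward(x):
    # Every task is served by the same evolving low-rank correction,
    # so storage stays constant instead of growing with the task count.
    return (W0 + B @ A) @ x

x = rng.standard_normal(d)
print(forward(x).shape)  # (768,)
```

Under this framing, continual learning reduces to deciding how the shared basis should change when a new task arrives, which is what the flow below addresses.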
Method at a Glance
A compact flow of how Share updates knowledge over time; a sketch in code follows the three steps.
Initialize Subspace
Start with a low-rank basis that captures core task knowledge.
Detect New Directions
Find essential directions needed for each new task.
Update and Share
Merge new directions into the shared subspace for reuse.
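Below is a hedged sketch of these three steps, assuming each task t produces a candidate weight update delta_t (for example, from a brief finetuning run); the SVD and projection choices are illustrative, not the paper's exact recipe.

```python
# Illustrative only: initialize a basis, detect directions a new task needs,
# and merge them back while keeping the subspace compact. delta_t, eps, and
# r_max are assumed quantities, not taken from the paper.
import numpy as np

def initialize_subspace(delta_0, r):
    """Step 1: low-rank basis spanning the first task's update."""
    U, _, _ = np.linalg.svd(delta_0, full_matrices=False)
    return U[:, :r]

def detect_new_directions(basis, delta_t, eps=1e-3):
    """Step 2: directions of delta_t not already covered by the basis."""
    residual = delta_t - basis @ (basis.T @ delta_t)   # remove the shared span
    U, S, _ = np.linalg.svd(residual, full_matrices=False)
    return U[:, S > eps * S[0]]                        # keep essential ones only

def update_and_share(basis, new_dirs, r_max):
    """Step 3: merge new directions into the shared subspace, bounding its rank."""
    Q, _ = np.linalg.qr(np.hstack([basis, new_dirs]))  # re-orthogonalize
    return Q[:, :r_max]                                # compact, reusable basis

# Toy continual stream: each task contributes a low-rank update to one layer.
rng = np.random.default_rng(0)
d, r = 256, 8
basis = initialize_subspace(0.01 * rng.standard_normal((d, d)), r)
for t in range(3):
    delta_t = 0.01 * rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
    basis = update_and_share(basis, detect_new_directions(basis, delta_t), r_max=2 * r)

print(basis.shape)                                           # stays bounded, e.g. (256, 16)
print(np.allclose(basis.T @ basis, np.eye(basis.shape[1])))  # remains orthonormal
```

Capping the rank with a simple truncation is only one naive way to keep the subspace compact; the harder question, which the paper targets, is how to add new directions without disturbing the ones earlier tasks rely on.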
Figures
Key visuals from the paper.
Method Overview
Text-to-Image Results
Results Snapshot
Reported gains span vision, language, 3D pose, and text-to-image tasks.
Efficiency
Up to 100x parameter reduction and 281x memory savings compared to traditional LoRA approaches, while maintaining performance close to jointly trained models.
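The 100x and 281x figures are the paper's reported numbers; the back-of-envelope count below only illustrates, under assumed shapes and task counts, why sharing one subspace scales better than keeping a LoRA pair per task.

```python
# Hypothetical parameter accounting: all shapes and the task count are
# assumptions chosen for illustration, not the paper's configuration.
d, r, n_layers, n_tasks = 4096, 16, 32, 100

lora_per_task = n_layers * 2 * d * r      # an (A, B) pair per adapted layer
lora_total = n_tasks * lora_per_task      # storage grows linearly with tasks

shared_total = n_layers * 2 * d * r       # one evolving subspace, reused by all tasks

print(f"per-task LoRA across {n_tasks} tasks: {lora_total:,} parameters")
print(f"single shared subspace:              {shared_total:,} parameters")
print(f"reduction: {lora_total / shared_total:.0f}x")   # equals n_tasks here
```

Under these toy assumptions the reduction simply equals the number of tasks; the paper's reported savings come from its own models and task configurations.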
Scalability
A single Share model can replace many task-specific adapters, supporting continual learning without maintaining separate LoRA weights per task.
BibTeX
Use this entry to cite the paper.
Paper
Preview the PDF or open it in a new tab.