arXiv: 2602.06043

Shared LoRA Subspaces for Almost Strict Continual Learning

A single shared low-rank subspace that grows with tasks while keeping the model compact and stable.

Highlights
100x parameter reduction vs. LoRA adapters
281x memory savings
1 shared subspace for many tasks
Designed for scalable, asynchronous continual learning across modalities.

Overview

Share focuses on parameter-efficient continual finetuning without data replay or a growing list of adapters.

Single Shared Subspace

Learn a shared low-rank subspace that is updated as new tasks arrive, keeping a compact representation of past knowledge.

Almost Strict Continual Learning

Integrate new task directions while minimizing interference with prior tasks to reduce catastrophic forgetting.

Scalable Efficiency

Replace many task-specific adapters with one evolving subspace, enabling large parameter and memory savings.
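
To make the "one evolving subspace instead of many adapters" idea concrete, here is a hypothetical PyTorch sketch of a linear layer that keeps a single shared low-rank basis plus a small per-task coefficient matrix. The class, its methods, and this particular split of parameters are illustrative assumptions, not the paper's actual parameterization.

```python
import torch

class SharedSubspaceLinear(torch.nn.Module):
    """Illustrative layer: frozen base weight + one shared low-rank basis.
    Each new task only adds a small coefficient matrix (hypothetical design)."""

    def __init__(self, d_in, d_out, rank):
        super().__init__()
        self.base = torch.nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)           # frozen backbone weight
        self.shared_basis = torch.nn.Parameter(torch.randn(d_out, rank) * 0.01)
        self.task_coeffs = torch.nn.ParameterDict()       # tiny per-task pieces

    def add_task(self, task_id: str):
        rank = self.shared_basis.shape[1]
        d_in = self.base.in_features
        self.task_coeffs[task_id] = torch.nn.Parameter(torch.zeros(rank, d_in))

    def forward(self, x, task_id: str):
        delta = self.shared_basis @ self.task_coeffs[task_id]   # (d_out, d_in)
        return self.base(x) + x @ delta.T

# Usage: one layer serves several tasks without a separate LoRA pair per task.
layer = SharedSubspaceLinear(d_in=768, d_out=768, rank=8)
layer.add_task("task_0")
y = layer(torch.randn(4, 768), "task_0")                  # shape (4, 768)
```

In this sketch, training a new task only touches that task's coefficient matrix (and possibly the shared basis), while earlier tasks' coefficients remain untouched.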

Method at a Glance

A compact flow of how Share updates knowledge over time.

01. Initialize Subspace
Start with a low-rank basis that captures core task knowledge.

02. Detect New Directions
Find essential directions needed for each new task.

03. Update and Share
Merge new directions into the shared subspace for reuse.

Flow: Task t contributes new directions, and the updated shared subspace carries forward to Task t+1.
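
The page does not spell out how these steps are implemented, so the following is only a minimal NumPy sketch of the three steps above; the SVD-based selection, the energy threshold, and the function names are assumptions rather than the paper's actual procedure.

```python
import numpy as np

def init_subspace(weight_update, rank):
    # Step 01 (assumed): keep the top-rank left singular vectors of the
    # first task's weight update as the initial shared basis.
    U, _, _ = np.linalg.svd(weight_update, full_matrices=False)
    return U[:, :rank]                                   # (d, rank), orthonormal

def detect_new_directions(weight_update, basis, energy=0.90):
    # Step 02 (assumed): remove what the shared basis already explains,
    # then keep enough residual directions to cover `energy` of the rest.
    residual = weight_update - basis @ (basis.T @ weight_update)
    U, s, _ = np.linalg.svd(residual, full_matrices=False)
    keep = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    return U[:, :keep]

def update_and_share(basis, new_directions):
    # Step 03 (assumed): append the new directions and re-orthonormalize
    # so the shared subspace stays compact and reusable.
    merged, _ = np.linalg.qr(np.concatenate([basis, new_directions], axis=1))
    return merged

# Toy run on a 64-dimensional layer with synthetic "task updates".
rng = np.random.default_rng(0)
basis = init_subspace(rng.normal(size=(64, 64)), rank=4)
for _ in range(3):                                       # tasks t, t+1, ...
    task_update = rng.normal(size=(64, 64))
    basis = update_and_share(basis, detect_new_directions(task_update, basis))
print(basis.shape)                                       # basis grows with tasks
```

With real, structured task updates, only a few directions per task would be expected to survive step 02, which is what keeps the shared basis compact.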

Figures

Key visuals from the paper.

Method Overview

Text-to-Image Results

Results Snapshot

Reported gains span vision, language, 3D pose, and text-to-image tasks.

Efficiency

Up to 100x parameter reduction and 281x memory savings compared to traditional LoRA approaches, while maintaining performance close to jointly trained models. A rough parameter-count sketch follows the Scalability note below.

Scalability

A single Share model can replace many task-specific adapters, supporting continual learning without maintaining separate LoRA weights per task.
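
As a back-of-the-envelope illustration of where the quoted parameter savings can come from (the layer size, rank, and task count below are assumed for the example and are not the paper's configuration):

```python
# Rough parameter accounting with assumed layer size, rank, and task count.
# Per-task LoRA stores a fresh rank-r pair (A, B) for every task; a shared
# subspace stores one pair that all tasks reuse.
d, r, tasks = 4096, 16, 100

per_task_lora = tasks * 2 * d * r       # A (r x d) + B (d x r) for each task
shared_subspace = 2 * d * r             # one shared pair for all tasks

print(per_task_lora / shared_subspace)  # -> 100.0, i.e. ~100x fewer parameters
```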

BibTeX

Use this entry to cite the paper.

Paper

The full paper is available as a PDF on arXiv (2602.06043).