📄 Summary: Paper Circle is a multi-agent LLM system designed to help researchers efficiently discover, evaluate, and synthesize academic literature through two complementary pipelines: one for discovery (combining multiple retrieval sources, scoring, and ranking) and one for analysis (transforming individual papers into structured insights). This addresses the growing challenge of navigating the exponentially expanding scientific literature.
💡 Key Insight: Multi-agent LLMs can dramatically reduce the cognitive load of literature review by automating discovery and synthesis simultaneously.
🔗 Read Paper
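The discovery pipeline described above (pool candidates from several retrieval sources, score them, rank them) can be sketched as follows. This is a minimal illustration, not Paper Circle's actual architecture: the source functions, field names, and keyword-overlap scorer are all hypothetical stand-ins.

```python
def discover(sources, query, score_fn, top_k=5):
    """Pool candidates from all sources, deduplicate by id, score, and rank."""
    pooled = {}
    for source in sources:
        for paper in source(query):
            pooled.setdefault(paper["id"], paper)  # dedupe across sources
    ranked = sorted(pooled.values(), key=lambda p: score_fn(p, query), reverse=True)
    return ranked[:top_k]

# Toy retrieval sources and a keyword-overlap scorer for demonstration only.
def arxiv(query):
    return [{"id": "a1", "title": "test-time training for llms"},
            {"id": "a2", "title": "graph databases"}]

def semantic_scholar(query):
    return [{"id": "a1", "title": "test-time training for llms"},  # duplicate of a1
            {"id": "s1", "title": "training llms efficiently"}]

def overlap_score(paper, query):
    return len(set(paper["title"].split()) & set(query.split()))

results = discover([arxiv, semantic_scholar], "training llms", overlap_score, top_k=2)
print([p["id"] for p in results])  # papers with the most query overlap, duplicates merged
```

In a real system the scorer would be an LLM- or embedding-based relevance judge, but the merge-dedupe-score-rank skeleton stays the same.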
📄 Summary: In-Place TTT enables LLMs to dynamically adapt their weights at inference time by updating fast weights in MLP projection layers, allowing models to learn from new information streams without retraining. This overcomes the architectural-incompatibility and computational-efficiency barriers that previously limited test-time training in large language models.
💡 Key Insight: Models can stay current by learning from each new input at inference time, not just once during training.
🔗 Read Paper
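A minimal sketch of the fast-weight idea: a frozen "slow" weight from pretraining plus a small "fast" weight that takes gradient steps at inference time. Here the projection is a single scalar and the test-time objective is squared error, purely for illustration; the actual method updates MLP projection matrices inside a transformer with a self-supervised objective.

```python
class FastWeightUnit:
    """Toy projection w = w_slow + w_fast, where only w_fast adapts at inference."""

    def __init__(self, w_slow=0.0, lr=0.1):
        self.w_slow = w_slow   # frozen pretrained weight
        self.w_fast = 0.0      # fast weight, updated at test time
        self.lr = lr

    def forward(self, x):
        return (self.w_slow + self.w_fast) * x

    def ttt_step(self, x, target):
        # One gradient step on squared error; only the fast weight moves.
        pred = self.forward(x)
        grad = 2 * (pred - target) * x
        self.w_fast -= self.lr * grad

unit = FastWeightUnit(w_slow=0.5, lr=0.05)
stream = [(1.0, 2.0), (1.0, 2.0), (1.0, 2.0)]  # (input, target) pairs arriving at inference
for x, t in stream:
    unit.ttt_step(x, t)
# After a few steps the prediction has moved toward the streamed target
# while the pretrained slow weight stays untouched.
```

The "in-place" aspect is that the adaptation lives inside an existing projection rather than in a separate adapter module, so the forward pass shape is unchanged.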
📄 Summary: MMEmb-R1 improves multimodal embeddings by incorporating chain-of-thought reasoning only when it is beneficial, addressing the problem that forcing reasoning on all inputs wastes computation and can obscure simple semantic signals. The framework uses pair-aware selection to align instance-level reasoning with pairwise contrastive learning, avoiding shortcut learning.
💡 Key Insight: Not all tasks need reasoning: knowing when to apply it makes embeddings both smarter and more efficient.
🔗 Read Paper
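The selective-reasoning idea reduces to a gate in front of the expensive path: run chain-of-thought only when a cheap signal predicts it will help. The sketch below uses a toy length-based gate and string stubs for the two embedding paths; none of these names or thresholds come from MMEmb-R1.

```python
def embed(text):
    # Stand-in for a direct embedding pass (cheap path).
    return f"emb({text})"

def reason_then_embed(text):
    # Stand-in for a chain-of-thought-augmented embedding pass (expensive path).
    return f"emb(cot({text}))"

def needs_reasoning(text, length_threshold=6):
    # Toy gate: longer, multi-clause inputs get reasoning; simple ones skip it.
    # A real gate would be learned, e.g. jointly with the contrastive objective.
    return len(text.split()) > length_threshold

def selective_embed(text):
    return reason_then_embed(text) if needs_reasoning(text) else embed(text)

print(selective_embed("a red apple"))
print(selective_embed("the report that the committee rejected was later revised"))
```

The pair-aware part of the framework goes further than this per-instance gate: it makes the gating decision consistent across the anchor/positive/negative pairs used in contrastive training, which a purely instance-level gate cannot guarantee.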
📄 Summary: HaloProbe identifies and mitigates false object descriptions in vision-language models by using Bayesian inference to separate the impact of token position and object repetition (confounders) from genuine hallucination signals. The framework shows that simple attention-based methods fail due to Simpson's paradox, where attention patterns reverse when properly aggregated.
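Simpson's paradox, the effect the summary says breaks naive attention-based detectors, is easy to demonstrate numerically: a trend that holds inside every group can reverse once the groups are pooled. The counts below are invented for illustration; think of the groups as a confounder such as early versus late token positions.

```python
from fractions import Fraction

def rate(hits, total):
    # Exact rational rates avoid floating-point comparison issues.
    return Fraction(hits, total)

# Within each group, hallucinated objects show a HIGHER flag rate than real ones...
early = {"hallucinated": (8, 10), "real": (70, 100)}    # 0.8 > 0.7
late  = {"hallucinated": (20, 100), "real": (1, 10)}    # 0.2 > 0.1

for group in (early, late):
    assert rate(*group["hallucinated"]) > rate(*group["real"])

# ...but pooling across groups reverses the comparison, because the two
# categories are unevenly distributed over the confounding groups.
pooled_h = rate(8 + 20, 10 + 100)    # 28/110
pooled_r = rate(70 + 1, 100 + 10)    # 71/110
print(pooled_h < pooled_r)           # the aggregate trend flips
```

This is why the framework conditions on the confounders (position, repetition) before reading off a hallucination signal, rather than trusting attention statistics aggregated over the whole sequence.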