PaperPulse

AI-powered research discovery platform

© 2024 PaperPulse. All rights reserved.
Live Feed

AI Research Paper Feed

Discover cutting-edge research papers in AI and machine learning. Stay ahead with the latest breakthroughs, insights, and discoveries from top researchers worldwide.

22,578 Research Papers · Multi-Domain Coverage · Real-time Updates
ArXiv · Feb 19, 2026

Optimal Unconstrained Self-Distillation in Ridge Regression: Strict Improvements, Precise Asymptotics, and One-Shot Tuning

Hien Dang, Pratik Patil et al.

TLDR: This paper demonstrates that self-distillation can significantly improve ridge regression performance by optimally mixing teacher predictions, providing precise asymptotic analyses and a practical one-shot tuning method.
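To make the idea concrete, here is a minimal, generic sketch of self-distillation in ridge regression (not the paper's method or its optimal tuning): fit a teacher ridge model, then refit a student on a convex mixture of the true labels and the teacher's predictions. The mixing weight `alpha`, the regularization `lam`, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic regression data (illustrative only)
rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.normal(size=(n, d))
beta_true = rng.normal(size=d)
y = X @ beta_true + rng.normal(scale=2.0, size=n)

lam = 1.0
w_teacher = ridge_fit(X, y, lam)          # step 1: fit the teacher
teacher_preds = X @ w_teacher

# Step 2: self-distillation refits on labels mixed with teacher predictions.
alpha = 0.3  # hypothetical mixing weight; the paper derives an optimal choice
y_mixed = (1 - alpha) * y + alpha * teacher_preds
w_student = ridge_fit(X, y_mixed, lam)
```

The paper's contribution is characterizing when this student strictly improves on the teacher and how to pick the mixture in one shot; the sketch only shows the mechanics.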

ArXiv · Feb 19, 2026

IRIS: Learning-Driven Task-Specific Cinema Robot Arm for Visuomotor Motion Control

Qilong Cheng, Matthew Mackay et al.

TLDR: IRIS is a cost-effective, learning-driven robotic camera system for cinematic motion control, using imitation learning to achieve smooth and repeatable camera movements.

ArXiv · Feb 19, 2026

From Labor to Collaboration: A Methodological Experiment Using AI Agents to Augment Research Perspectives in Taiwan's Humanities and Social Sciences

Yi-Chih Huang

TLDR: This study proposes a collaborative AI workflow for humanities and social sciences research, using Taiwan's Claude.ai data to validate its feasibility and effectiveness.

ArXiv · Feb 19, 2026

Asymptotic Smoothing of the Lipschitz Loss Landscape in Overparameterized One-Hidden-Layer ReLU Networks

Saveliy Baturin

TLDR: This paper shows that in overparameterized one-hidden-layer ReLU networks, the loss landscape becomes smoother and flatter as the network width increases, resulting in smaller energy gaps between local and global minima.

ArXiv · Feb 19, 2026

Adaptive Decentralized Composite Optimization via Three-Operator Splitting

Xiaokai Chen, Ilya Kuruzov et al.

TLDR: The paper introduces an adaptive decentralized optimization method using three-operator splitting and local stepsize adjustments, achieving robust convergence for convex and strongly convex problems.

ArXiv · Feb 19, 2026

Transforming Behavioral Neuroscience Discovery with In-Context Learning and AI-Enhanced Tensor Methods

Paimon Goulart, Jordan Steinhauser et al.

TLDR: This paper presents an AI-enhanced pipeline using In-Context Learning and tensor methods to improve data analysis in behavioral neuroscience, particularly for studying fear generalization in mice, which can help understand PTSD.

ArXiv · Feb 19, 2026

LORA-CRAFT: Cross-layer Rank Adaptation via Frozen Tucker Decomposition of Pre-trained Attention Weights

Kasun Dewage, Marianna Pensky et al.

TLDR: CRAFT is a parameter-efficient fine-tuning method using Tucker decomposition on pre-trained attention weights, achieving competitive performance with minimal adaptation parameters.

ArXiv · Feb 19, 2026

Instructor-Aligned Knowledge Graphs for Personalized Learning

Abdulrahman AlRabah, Priyanka Kargupta et al.

TLDR: InstructKG is a framework that automatically constructs knowledge graphs from course materials to capture learning dependencies and aid personalized learning.

ArXiv · Feb 19, 2026

Universal Fine-Grained Symmetry Inference and Enforcement for Rigorous Crystal Structure Prediction

Shi Yin, Jinming Mu et al.

TLDR: This paper presents a novel approach to crystal structure prediction using large language models and constrained optimization to improve symmetry inference and enforce physical validity, achieving state-of-the-art results without relying on existing databases.

ArXiv · Feb 19, 2026

Catastrophic Forgetting Resilient One-Shot Incremental Federated Learning

Obaidullah Zaland, Zulfiqar Ahmad Khan et al.

TLDR: This paper introduces One-Shot Incremental Federated Learning (OSI-FL), a framework that addresses communication overhead and catastrophic forgetting in federated learning by using category-specific embeddings and selective sample retention.

ArXiv · Feb 19, 2026

Sign Lock-In: Randomly Initialized Weight Signs Persist and Bottleneck Sub-Bit Model Compression

Akira Sakai, Yuma Ichikawa

TLDR: The paper identifies that weight sign persistence is a bottleneck in sub-bit model compression and proposes methods to reduce sign flips while maintaining performance.

ArXiv · Feb 19, 2026

Deep Reinforcement Learning for Optimal Portfolio Allocation: A Comparative Study with Mean-Variance Optimization

Srijan Sood, Kassiani Papasotiriou et al.

TLDR: This study compares Deep Reinforcement Learning (DRL) and Mean-Variance Optimization (MVO) for portfolio allocation, showing DRL's strong performance across various financial metrics.

ArXiv · Feb 19, 2026

Retaining Suboptimal Actions to Follow Shifting Optima in Multi-Agent Reinforcement Learning

Yonghyeon Jo, Sunwoo Lee et al.

TLDR: The paper introduces Successive Sub-value Q-learning (S2Q), a method that improves adaptability in multi-agent reinforcement learning by retaining multiple high-value actions, outperforming existing algorithms.

ArXiv · Feb 19, 2026

Phase-Aware Mixture of Experts for Agentic Reinforcement Learning

Shengtian Yang, Yu Li et al.

TLDR: The paper introduces Phase-Aware Mixture of Experts (PA-MoE) to enhance reinforcement learning by allowing expert specialization for complex tasks without being dominated by simpler tasks.

ArXiv · Feb 19, 2026

Evaluating Chain-of-Thought Reasoning through Reusability and Verifiability

Shashank Aggarwal, Ram Vikas Mishra et al.

TLDR: This paper introduces reusability and verifiability as new metrics to evaluate the quality of Chain-of-Thought reasoning in multi-agent IR pipelines, revealing that these metrics are not correlated with traditional accuracy measures.

ArXiv · Feb 19, 2026

The Anxiety of Influence: Bloom Filters in Transformer Attention Heads

Peter Balogh

TLDR: Certain transformer attention heads in language models act as membership testers, identifying repeated tokens with high precision, similar to Bloom filters.
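For readers unfamiliar with the analogy, a Bloom filter is a compact probabilistic membership tester with no false negatives but possible false positives. Below is a minimal, generic sketch (not code from the paper); the sizes `m` and `k` and the token list are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _hashes(self, item):
        # Derive k deterministic indices by salting a cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._hashes(item):
            self.bits[idx] = True

    def __contains__(self, item):
        # All k bits set => "probably seen"; any unset bit => definitely not.
        return all(self.bits[idx] for idx in self._hashes(item))

bf = BloomFilter()
for tok in ["the", "cat", "sat"]:
    bf.add(tok)
```

The paper's claim is that some attention heads behave like this structure, flagging previously seen tokens with high precision.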

ArXiv · Feb 19, 2026

Fine-Grained Uncertainty Quantification for Long-Form Language Model Outputs: A Comparative Study

Dylan Bouchard, Mohit Singh Chauhan et al.

TLDR: This study introduces a taxonomy for fine-grained uncertainty quantification in long-form language model outputs, revealing that claim-level scoring and uncertainty-aware decoding improve factuality in generated content.

ArXiv · Feb 19, 2026

KLong: Training LLM Agent for Extremely Long-horizon Tasks

Yue Liu, Zhiyuan Hu et al.

TLDR: KLong is a new LLM agent designed to tackle long-horizon tasks using a novel training method combining trajectory-splitting SFT and progressive RL, outperforming existing models on various benchmarks.

ArXiv · Feb 19, 2026

Deeper detection limits in astronomical imaging using self-supervised spatiotemporal denoising

Yuduo Guo, Hao Zhang et al.

TLDR: ASTERIS, a self-supervised denoising algorithm, enhances astronomical imaging detection limits by leveraging spatiotemporal data, improving detection by 1 magnitude and identifying previously undetectable features in deep space images.

ArXiv · Feb 19, 2026

genriesz: A Python Package for Automatic Debiased Machine Learning with Generalized Riesz Regression

Masahiro Kato

TLDR: genriesz is a Python package that automates debiased machine learning for estimating causal and structural parameters using generalized Riesz regression.

Showing 1-20 of 22,578 papers