
AI Research Paper Feed

Discover cutting-edge research papers in AI and machine learning. Stay ahead with the latest breakthroughs, insights, and discoveries from top researchers worldwide.

22,158 Research Papers · Multi-Domain Coverage · Real-time Updates
ArXiv · Feb 5, 2026

SpectraKAN: Conditioning Spectral Operators

Chun-Wun Cheng, Carola-Bibiane Schönlieb et al.

TLDR: SpectraKAN enhances spectral neural operators by conditioning them on input data, significantly improving performance on PDE benchmarks.

ArXiv · Feb 5, 2026

Path Sampling for Rare Events Boosted by Machine Learning

Porhouy Minh, Sapna Sarupria

TLDR: AIMMD is a new algorithm that uses machine learning to improve the efficiency of transition path sampling for studying molecular processes.

ArXiv · Feb 5, 2026

Private Prediction via Shrinkage

Chao Yan

TLDR: The paper presents a method to achieve differentially private prediction with reduced dependence on the number of queries, improving efficiency in streaming settings.

ArXiv · Feb 5, 2026

Are Open-Weight LLMs Ready for Social Media Moderation? A Comparative Study on Bluesky

Hsuan-Yu Chou, Wajiha Naveed et al.

TLDR: Open-weight large language models (LLMs) show promise for social media moderation, with performance comparable to proprietary models in detecting harmful content on platforms like Bluesky.

ArXiv · Feb 5, 2026

Disentangled Representation Learning via Flow Matching

Jinjin Chi, Taoping Liu et al.

TLDR: The paper introduces a flow matching-based framework for disentangled representation learning that improves semantic alignment and disentanglement scores by using a non-overlap regularizer to reduce factor interference.

ArXiv · Feb 5, 2026

Finite-Particle Rates for Regularized Stein Variational Gradient Descent

Ye He, Krishnakumar Balasubramanian et al.

TLDR: The paper provides finite-particle convergence rates for the regularized Stein variational gradient descent (R-SVGD) algorithm, offering non-asymptotic bounds and guidance on parameter tuning.

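For readers new to the underlying method: standard (unregularized) Stein variational gradient descent updates a set of particles along a kernelized gradient of the log target density plus a repulsive term. The sketch below is a minimal NumPy illustration of that baseline update with an RBF kernel and a Gaussian toy target; it is not the paper's regularized R-SVGD variant, and the step size and bandwidth are illustrative assumptions.

    import numpy as np

    def svgd_step(X, grad_logp, step=0.1, h=1.0):
        # One standard SVGD update with RBF kernel k(x, y) = exp(-||x - y||^2 / h):
        #   phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
        n = X.shape[0]
        diff = X[:, None, :] - X[None, :, :]          # pairwise differences x_i - x_j
        K = np.exp(-np.sum(diff ** 2, axis=-1) / h)   # kernel matrix
        attract = K @ grad_logp(X)                    # drives particles toward high density
        repel = (2.0 / h) * np.einsum("ij,ijd->id", K, diff)  # keeps particles spread out
        return X + step * (attract + repel) / n

    # Toy usage: push 100 particles toward a standard 2-D Gaussian (grad log p(x) = -x).
    rng = np.random.default_rng(0)
    X = 3.0 * rng.normal(size=(100, 2))
    for _ in range(200):
        X = svgd_step(X, lambda x: -x)
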
ArXiv · Feb 5, 2026

Smoothness Errors in Dynamics Models and How to Avoid Them

Edward Berman, Luisa Li et al.

TLDR: This paper introduces relaxed unitary convolutions for graph neural networks to improve performance in dynamics modeling by balancing smoothness preservation with the natural requirements of physical systems.

ArXiv · Feb 5, 2026

Radon–Wasserstein Gradient Flows for Interacting-Particle Sampling in High Dimensions

Elias Hess-Childs, Dejan Slepčev et al.

TLDR: The paper introduces new Radon–Wasserstein gradient flows for efficient high-dimensional sampling using interacting particles with linear scaling costs.

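The "Radon" part refers to one-dimensional projections: comparing or moving distributions through their projections onto random directions keeps per-step cost close to linear, since each 1-D problem reduces to sorting. The snippet below is a minimal sketch of the standard sliced (Radon-projection) Wasserstein distance to convey that idea; it is not the paper's gradient-flow construction, and the number of projections is an arbitrary assumption.

    import numpy as np

    def sliced_wasserstein(X, Y, n_proj=64, rng=None):
        # Monte Carlo estimate of the sliced 2-Wasserstein distance between two
        # equal-size point clouds: average the 1-D Wasserstein distances over
        # random projection directions. Each 1-D problem is solved by sorting.
        rng = rng or np.random.default_rng(0)
        total = 0.0
        for _ in range(n_proj):
            theta = rng.normal(size=X.shape[1])
            theta /= np.linalg.norm(theta)            # random unit direction
            px, py = np.sort(X @ theta), np.sort(Y @ theta)
            total += np.mean((px - py) ** 2)          # 1-D W_2^2 via order statistics
        return np.sqrt(total / n_proj)

    # Toy usage: two Gaussian point clouds in 50 dimensions.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 50))
    Y = rng.normal(size=(500, 50)) + 1.0
    print(sliced_wasserstein(X, Y, rng=rng))
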
ArXiv · Feb 5, 2026

Decision-Focused Sequential Experimental Design: A Directional Uncertainty-Guided Approach

Beichen Wan, Mo Liu et al.

TLDR: The paper proposes a new sequential experimental design approach that improves decision-making by focusing on directional uncertainty, leading to better optimization outcomes than traditional methods.

ArXiv · Feb 5, 2026

Extreme Weather Nowcasting via Local Precipitation Pattern Prediction

Changhoon Song, Teng Yuan Chang et al.

TLDR: The paper introduces exPreCast, a deterministic model for accurate and efficient nowcasting of both normal and extreme rainfall using a balanced dataset from the Korea Meteorological Administration.

ArXiv · Feb 5, 2026

Logarithmic-time Schedules for Scaling Language Models with Momentum

Damien Ferbach, Courtney Paquette et al.

TLDR: ADANA, an optimizer with time-varying schedules for hyperparameters, improves large-scale language model training efficiency by up to 40% compared to AdamW.

ArXiv · Feb 5, 2026

Faithful Bi-Directional Model Steering via Distribution Matching and Distributed Interchange Interventions

Yuntai Bao, Xuhong Zhang et al.

TLDR: The paper introduces Concept DAS (CDAS), a novel intervention-based model steering method that uses distribution matching to achieve more faithful and stable control compared to traditional preference-optimization methods.

ArXiv · Feb 5, 2026

Accelerated Sequential Flow Matching: A Bayesian Filtering Perspective

Yinan Huang, Hans Hao-Hsun Hsu et al.

TLDR: This paper introduces Sequential Flow Matching, a Bayesian filtering approach that accelerates real-time sequential prediction by efficiently updating predictive distributions, reducing inference latency compared to traditional methods.

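As background, the plain flow-matching objective that sequential variants build on regresses a velocity field onto the straight-line displacement between a source sample and a data sample. The sketch below shows that standard conditional flow-matching loss in PyTorch; the tiny MLP, the dimensions, and the linear interpolation path are illustrative assumptions, not the paper's architecture or its Bayesian-filtering update.

    import torch

    def flow_matching_loss(v_model, x0, x1):
        # Standard conditional flow matching: along the path x_t = (1 - t) x0 + t x1
        # the target velocity is (x1 - x0); the model is regressed onto it in L2.
        t = torch.rand(x0.shape[0], 1)
        x_t = (1.0 - t) * x0 + t * x1
        return ((v_model(x_t, t) - (x1 - x0)) ** 2).mean()

    # Toy usage: a small MLP velocity field mapping noise (x0) toward data (x1).
    net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
    v_model = lambda x, t: net(torch.cat([x, t], dim=1))
    x0, x1 = torch.randn(128, 2), torch.randn(128, 2) + 2.0
    flow_matching_loss(v_model, x0, x1).backward()
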
ArXiv · Feb 5, 2026

PatchFlow: Leveraging a Flow-Based Model with Patch Features

Boxiang Zhang, Baijian Yang et al.

TLDR: PatchFlow improves defect detection in die casting using local patch features and a flow-based model, reducing error rates significantly on multiple datasets.

ArXiv · Feb 5, 2026

OpenMAG: A Comprehensive Benchmark for Multimodal-Attributed Graph

Chenxi Wan, Xunkai Li et al.

TLDR: OpenMAG is a comprehensive benchmark designed to evaluate Multimodal-Attributed Graph (MAG) models across various domains, encoders, and tasks, addressing existing limitations in fair evaluation standards.

ArXiv · Feb 5, 2026

Convergence Rate of the Last Iterate of Stochastic Proximal Algorithms

Kevin Kurian Thomas Vaidyan, Michael P. Friedlander et al.

TLDR: This paper establishes optimal convergence rates for the last iterate of stochastic proximal algorithms without assuming bounded variance, applicable to problems in multi-task and federated learning.

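For orientation, a stochastic proximal gradient method alternates a stochastic gradient step on the smooth part of the objective with a proximal step on the nonsmooth part, and the question studied here is how fast the last iterate (rather than an average of iterates) converges. Below is a minimal sketch of such a method on a lasso-type problem; the step-size schedule and the example objective are illustrative assumptions, and the paper's analysis covers a broader class without bounded-variance assumptions.

    import numpy as np

    def soft_threshold(x, tau):
        # Proximal operator of tau * ||x||_1 (soft-thresholding).
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def stochastic_prox_grad(A, b, lam=0.1, steps=5000, rng=None):
        # Stochastic proximal gradient for (1/2n) ||A x - b||^2 + lam ||x||_1,
        # sampling one row per step and returning the *last* iterate.
        rng = rng or np.random.default_rng(0)
        n, d = A.shape
        x = np.zeros(d)
        for k in range(1, steps + 1):
            i = rng.integers(n)
            step = 1.0 / (k + 10)                    # decaying step size (illustrative)
            grad = (A[i] @ x - b[i]) * A[i]          # stochastic gradient of the smooth part
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # Toy usage: recover a sparse vector from noisy linear measurements.
    rng = np.random.default_rng(2)
    A = rng.normal(size=(200, 20))
    x_true = np.zeros(20); x_true[:3] = 1.0
    b = A @ x_true + 0.01 * rng.normal(size=200)
    print(stochastic_prox_grad(A, b, rng=rng).round(2))
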
ArXiv · Feb 5, 2026

Hinge Regression Tree: A Newton Method for Oblique Regression Tree Splitting

Hongyi Li, Han Lin et al.

TLDR: The Hinge Regression Tree (HRT) is a new method for creating oblique decision trees using a Newton method that improves split quality and convergence speed, outperforming traditional tree models.

ArXiv · Feb 5, 2026

Consistency-Preserving Concept Erasure via Unsafe-Safe Pairing and Directional Fisher-weighted Adaptation

Yongwoo Kim, Sungmin Cha et al.

TLDR: The paper introduces PAIR, a framework for concept erasure in text-to-image models that maintains semantic consistency by aligning unsafe concepts with safe alternatives.

ArXiv · Feb 5, 2026

Structural Disentanglement in Bilinear MLPs via Architectural Inductive Bias

Ojasva Nema, Kaustubh Sharma et al.

TLDR: Bilinear MLPs with multiplicative interactions improve structural disentanglement and model editability by leveraging architectural inductive bias for better representation and unlearning capabilities.

ArXiv · Feb 5, 2026

SHaSaM: Submodular Hard Sample Mining for Fair Facial Attribute Recognition

Anay Majee, Rishabh Iyer

TLDR: SHaSaM is a novel approach that improves fairness in facial attribute recognition by using submodular hard sample mining to address data imbalance and reduce bias from sensitive attributes.

Showing 1–20 of 22,158 papers