
PaperPulse — AI-powered research discovery platform

© 2024 PaperPulse. All rights reserved.
Live Feed

AI Research Paper Feed

Discover cutting-edge research papers in AI and machine learning. Stay ahead with the latest breakthroughs, insights, and discoveries from top researchers worldwide.

22,578 Research Papers
Multi-Domain Coverage
Real-time Updates
ArXiv · Feb 19, 2026

VP-VAE: Rethinking Vector Quantization via Adaptive Vector Perturbation

Linwei Zhai, Han Ding et al.

TLDR: VP-VAE introduces a novel approach to Vector Quantized Variational Autoencoders by using adaptive vector perturbation to improve training stability and avoid codebook collapse.

ArXiv · Feb 19, 2026

Predictive Batch Scheduling: Accelerating Language Model Training Through Loss-Aware Sample Prioritization

Sumedh Rasal

TLDR: Predictive Batch Scheduling (PBS) accelerates language model training by prioritizing high-loss samples using a lightweight predictor based on token-level features.
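The general idea of loss-aware sample prioritization can be sketched in a few lines. This is a generic, hypothetical illustration of scheduling the highest-loss candidates first, not the paper's PBS implementation or its predictor:

```python
# Hypothetical sketch of loss-aware sample prioritization (not the paper's PBS code):
# given a predicted loss per candidate sample, schedule the top-k highest-loss samples.

def prioritize_batch(predicted_losses, batch_size):
    """Return indices of the `batch_size` samples with the highest predicted loss."""
    ranked = sorted(range(len(predicted_losses)),
                    key=lambda i: predicted_losses[i],
                    reverse=True)
    return ranked[:batch_size]

losses = [0.2, 1.7, 0.9, 3.1, 0.4]
print(prioritize_batch(losses, 2))  # highest-loss samples first: [3, 1]
```

In practice the predicted losses would come from a lightweight model rather than being known in advance; the scheduling step itself stays this simple.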

ArXiv · Feb 19, 2026

Instructor-Aligned Knowledge Graphs for Personalized Learning

Abdulrahman AlRabah, Priyanka Kargupta et al.

TLDR: InstructKG is a framework that automatically constructs knowledge graphs from course materials to capture learning dependencies and aid personalized learning.

ArXiv · Feb 19, 2026

JEPA-DNA: Grounding Genomic Foundation Models through Joint-Embedding Predictive Architectures

Ariel Larey, Elay Dahan et al.

TLDR: JEPA-DNA is a new framework for genomic foundation models that improves understanding of genomic sequences by integrating high-level functional embeddings with traditional generative objectives.

ArXiv · Feb 19, 2026

Deep Reinforcement Learning for Optimal Portfolio Allocation: A Comparative Study with Mean-Variance Optimization

Srijan Sood, Kassiani Papasotiriou et al.

TLDR: This study compares Deep Reinforcement Learning (DRL) and Mean-Variance Optimization (MVO) for portfolio allocation, showing DRL's strong performance across various financial metrics.
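For context, the classical MVO baseline has a closed form. Below is the textbook global minimum-variance portfolio, w = Σ⁻¹1 / (1ᵀΣ⁻¹1) — a standard illustration of the MVO family, not the specific formulation evaluated in the study:

```python
# Textbook global minimum-variance portfolio (a standard MVO baseline,
# not the study's specific setup): w = inv(Sigma) @ 1, normalized to sum to 1.
import numpy as np

def min_variance_weights(cov):
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # Sigma^{-1} 1 without forming the inverse
    return w / w.sum()

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
w = min_variance_weights(cov)
print(w)  # weights sum to 1; the lower-variance asset gets the larger weight
```

DRL approaches, by contrast, learn the allocation policy directly from reward signals instead of solving this analytic optimization.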

ArXiv · Feb 19, 2026

FLoRG: Federated Fine-tuning with Low-rank Gram Matrices and Procrustes Alignment

Chuiyang Meng, Ming Tang et al.

TLDR: FLoRG improves federated fine-tuning of large language models by using a single low-rank matrix and Procrustes alignment, enhancing accuracy and reducing communication overhead.
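Procrustes alignment itself is a standard linear-algebra tool. The sketch below is the generic orthogonal Procrustes solution (find the orthogonal R minimizing ||AR − B||_F via an SVD), shown only to illustrate the building block, not FLoRG's federated procedure:

```python
# Standard orthogonal Procrustes alignment (generic illustration, not FLoRG's code):
# find orthogonal R minimizing ||A @ R - B||_F via the SVD of A.T @ B.
import numpy as np

def procrustes_align(A, B):
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt  # orthogonal rotation aligning A's columns toward B's

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B = A @ R_true                  # B is an exact rotation of A
R = procrustes_align(A, B)
print(np.allclose(A @ R, B))    # True: exact recovery in the noiseless case
```

In a federated setting, an alignment step like this lets clients reconcile low-rank factors that are only defined up to a rotation.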

ArXiv · Feb 19, 2026

A Privacy by Design Framework for Large Language Model-Based Applications for Children

Diana Addae, Diana Rogachova et al.

TLDR: This paper proposes a Privacy-by-Design framework for developing AI applications for children that integrates privacy regulations to ensure data protection and legal compliance.

ArXiv · Feb 19, 2026

KLong: Training LLM Agent for Extremely Long-horizon Tasks

Yue Liu, Zhiyuan Hu et al.

TLDR: KLong is a new LLM agent designed to tackle long-horizon tasks using a novel training method combining trajectory-splitting SFT and progressive RL, outperforming existing models on various benchmarks.

ArXiv · Feb 19, 2026

Deeper detection limits in astronomical imaging using self-supervised spatiotemporal denoising

Yuduo Guo, Hao Zhang et al.

TLDR: ASTERIS, a self-supervised denoising algorithm, enhances astronomical imaging detection limits by leveraging spatiotemporal data, improving detection by 1 magnitude and identifying previously undetectable features in deep space images.

ArXiv · Feb 19, 2026

Universal Fine-Grained Symmetry Inference and Enforcement for Rigorous Crystal Structure Prediction

Shi Yin, Jinming Mu et al.

TLDR: This paper presents a novel approach to crystal structure prediction using large language models and constrained optimization to improve symmetry inference and enforce physical validity, achieving state-of-the-art results without relying on existing databases.

ArXiv · Feb 19, 2026

TIFO: Time-Invariant Frequency Operator for Stationarity-Aware Representation Learning in Time Series

Xihao Piao, Zheng Chen et al.

TLDR: TIFO is a new method that improves time series forecasting by addressing distribution shifts using a frequency-based approach, achieving significant accuracy and efficiency gains.

ArXiv · Feb 19, 2026

Agentic Wireless Communication for 6G: Intent-Aware and Continuously Evolving Physical-Layer Intelligence

Zhaoyang Li, Xingzhi Jin et al.

TLDR: This paper explores the use of large language models for creating intent-aware, autonomous agents to manage the complexity of 6G wireless communication systems.

ArXiv · Feb 19, 2026

Toward Trustworthy Evaluation of Sustainability Rating Methodologies: A Human-AI Collaborative Framework for Benchmark Dataset Construction

Xiaoran Cai, Wang Yang et al.

TLDR: The paper proposes a human-AI collaborative framework to create benchmark datasets for more trustworthy and comparable sustainability ratings of companies.

ArXiv · Feb 19, 2026

AdvSynGNN: Structure-Adaptive Graph Neural Nets via Adversarial Synthesis and Self-Corrective Propagation

Rong Fu, Muge Qi et al.

TLDR: AdvSynGNN is a robust graph neural network architecture that adapts to structural noise and heterophily using adversarial synthesis and self-corrective propagation, improving node-level representation learning and predictive accuracy.

ArXiv · Feb 19, 2026

Evaluating Chain-of-Thought Reasoning through Reusability and Verifiability

Shashank Aggarwal, Ram Vikas Mishra et al.

TLDR: This paper introduces reusability and verifiability as new metrics to evaluate the quality of Chain-of-Thought reasoning in multi-agent information retrieval (IR) pipelines, revealing that these metrics are not correlated with traditional accuracy measures.

ArXiv · Feb 19, 2026

The Anxiety of Influence: Bloom Filters in Transformer Attention Heads

Peter Balogh

TLDR: Certain transformer attention heads in language models act as membership testers, identifying repeated tokens with high precision, similar to Bloom filters.
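The Bloom filter side of the analogy is a classic data structure: a bit array plus several hash functions, giving probabilistic membership tests with no false negatives. A minimal textbook version (shown only to illustrate the analogy, not the paper's probing methodology):

```python
# Minimal textbook Bloom filter (illustrates the analogy; not the paper's code).
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits, self.num_hashes, self.bits = num_bits, num_hashes, 0

    def _positions(self, item):
        # Derive k bit positions from seeded SHA-256 digests of the item.
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # No false negatives; false positives occur with small probability.
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("the")
bf.add("cat")
print(bf.might_contain("the"))  # True: added items are always reported
```

The paper's claim is that some attention heads behave analogously: high-precision detectors of whether a token has already appeared in the context.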

ArXiv · Feb 19, 2026

LORA-CRAFT: Cross-layer Rank Adaptation via Frozen Tucker Decomposition of Pre-trained Attention Weights

Kasun Dewage, Marianna Pensky et al.

TLDR: CRAFT is a parameter-efficient fine-tuning method using Tucker decomposition on pre-trained attention weights, achieving competitive performance with minimal adaptation parameters.
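The low-rank intuition behind such methods is easiest to see in the matrix case. Below is a rank-r approximation of a weight matrix via truncated SVD — the matrix analogue of the idea; CRAFT itself uses a Tucker (tensor) decomposition across layers, which is not reproduced here:

```python
# Rank-r approximation of a weight matrix via truncated SVD (matrix analogue of
# the low-rank idea; CRAFT's actual Tucker decomposition is not shown here).
import numpy as np

def truncated_svd(W, r):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]  # best rank-r approximation (Frobenius norm)

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 4)) @ rng.standard_normal((4, 16))  # rank-4 matrix
W4 = truncated_svd(W, 4)
print(np.allclose(W, W4))  # True: the rank matches, so reconstruction is exact
```

Parameter-efficient fine-tuning methods exploit exactly this: if the useful update lives in a low-rank (or low-multilinear-rank) subspace, only the small factors need to be trained.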

ArXiv · Feb 19, 2026

Fine-Grained Uncertainty Quantification for Long-Form Language Model Outputs: A Comparative Study

Dylan Bouchard, Mohit Singh Chauhan et al.

TLDR: This study introduces a taxonomy for fine-grained uncertainty quantification in long-form language model outputs, revealing that claim-level scoring and uncertainty-aware decoding improve factuality in generated content.

ArXiv · Feb 19, 2026

From Labor to Collaboration: A Methodological Experiment Using AI Agents to Augment Research Perspectives in Taiwan's Humanities and Social Sciences

Yi-Chih Huang

TLDR: This study proposes a collaborative AI workflow for humanities and social sciences research, using Taiwan's Claude.ai data to validate its feasibility and effectiveness.

ArXiv · Feb 19, 2026

Sign Lock-In: Randomly Initialized Weight Signs Persist and Bottleneck Sub-Bit Model Compression

Akira Sakai, Yuma Ichikawa

TLDR: The paper identifies that weight sign persistence is a bottleneck in sub-bit model compression and proposes methods to reduce sign flips while maintaining performance.

Showing 1-20 of 22,578 papers