PaperPulse: AI-powered research discovery platform

© 2024 PaperPulse. All rights reserved.

AI Research Paper Feed

Discover cutting-edge research papers in AI and machine learning. Stay ahead with the latest breakthroughs, insights, and discoveries from top researchers worldwide.

18,563 research papers · multi-domain coverage · real-time updates
ArXiv · Dec 26, 2025

Tilt Matching for Scalable Sampling and Fine-Tuning

Peter Potaptchik, Cheuk-Kit Lee et al.

TLDR: Tilt Matching is a scalable algorithm for sampling from unnormalized densities and fine-tuning generative models using stochastic interpolants without needing reward gradients.

ArXiv · Dec 26, 2025

Self-attention vector output similarities reveal how machines pay attention

Tal Halevi, Yarden Tzach et al.

TLDR: This study analyzes self-attention in BERT, revealing that attention heads focus on different linguistic features and develop context similarity, with a shift from long-range to short-range similarities across layers.

ArXiv · Dec 26, 2025

AutoPP: Towards Automated Product Poster Generation and Optimization

Jiahao Fan, Yuxin Qin et al.

TLDR: AutoPP automates the creation and optimization of product posters using a novel pipeline that integrates design and performance feedback to improve Click-Through Rates (CTR).

ArXiv · Dec 26, 2025

GQ-VAE: A gated quantized VAE for learning variable length tokens

Theo Datta, Kayla Huang et al.

TLDR: GQ-VAE is a new neural tokenizer that improves language model performance by encoding variable-length discrete tokens and can be used as a drop-in replacement for traditional tokenizers like BPE.

ArXiv · Dec 26, 2025

Few Tokens Matter: Entropy Guided Attacks on Vision-Language Models

Mengqi He, Xinyu Tian et al.

TLDR: The study shows that targeting high-entropy tokens in vision-language models can efficiently degrade performance and expose safety vulnerabilities with fewer resources.
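The "high-entropy tokens" the summary refers to are positions where the model's output distribution is flat rather than peaked. A minimal sketch of that entropy measure (illustrative only, not the paper's attack procedure):

```python
import math

def token_entropy(probs):
    # Shannon entropy (in nats) of a next-token probability distribution.
    # Flat distributions -> high entropy; peaked distributions -> low entropy.
    return -sum(p * math.log(p) for p in probs if p > 0)

# A uniform distribution over 4 tokens is maximally uncertain,
# while a peaked one is nearly deterministic.
flat = token_entropy([0.25, 0.25, 0.25, 0.25])
peaked = token_entropy([0.97, 0.01, 0.01, 0.01])
```

Ranking tokens by this quantity is what lets such attacks concentrate their perturbation budget on the few positions that matter.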

ArXiv · Dec 26, 2025

Toward Secure and Compliant AI: Organizational Standards and Protocols for NLP Model Lifecycle Management

Sunil Arora, John Hastings

TLDR: The paper introduces SC-NLP-LMF, a framework for secure and compliant management of NLP systems throughout their lifecycle, especially in sensitive domains.

ArXiv · Dec 26, 2025

MoonBot: Modular and On-Demand Reconfigurable Robot Toward Moon Base Construction

Kentaro Uno, Elian Neppel et al.

TLDR: MoonBot is a modular, reconfigurable robot designed to support lunar base construction by adapting to various tasks and conditions while meeting mass constraints.

ArXiv · Dec 26, 2025

Direction Finding with Sparse Arrays Based on Variable Window Size Spatial Smoothing

Wesley S. Leite, Rodrigo C. de Lamare et al.

TLDR: This paper introduces a variable window size spatial smoothing approach to improve direction-of-arrival estimation with sparse arrays, enhancing performance and reducing complexity compared to traditional methods.
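For background, classical (fixed-window) forward spatial smoothing averages the covariance matrices of overlapping subarrays; the paper's contribution is making the window size variable. A sketch of the fixed-window baseline, assuming a sample covariance matrix `R` is already available:

```python
import numpy as np

def spatial_smoothing(R, m):
    """Forward spatial smoothing (fixed window): average the m-by-m
    covariance blocks of all overlapping size-m subarrays of an
    N-element uniform linear array."""
    N = R.shape[0]
    L = N - m + 1  # number of window positions
    Rs = np.zeros((m, m), dtype=complex)
    for k in range(L):
        Rs += R[k:k + m, k:k + m]  # covariance of the k-th subarray
    return Rs / L
```

The smoothed matrix restores rank in the presence of coherent sources, at the cost of reduced effective aperture; the variable-window variant trades these off adaptively.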

ArXiv · Dec 26, 2025

Hybrid Combinatorial Multi-armed Bandits with Probabilistically Triggered Arms

Kongchang Zhou, Tingyu Zhang et al.

TLDR: The paper introduces a hybrid CMAB-T framework that combines offline data with online interaction to improve learning in multi-armed bandit problems, outperforming purely online or offline methods.
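As background for the bandit setting above, the standard UCB1 index rule picks the arm with the highest optimistic estimate; the paper's hybrid framework extends this kind of online learning with offline data. A minimal textbook sketch (not the paper's CMAB-T algorithm):

```python
import math

def ucb1_select(pulls, rewards, t):
    """Return the arm index with the highest UCB1 score.
    pulls[i]   -- number of times arm i was played
    rewards[i] -- cumulative reward observed from arm i
    t          -- total number of rounds so far"""
    best_score, best_arm = -float("inf"), 0
    for i, (n, r) in enumerate(zip(pulls, rewards)):
        if n == 0:
            return i  # play every arm at least once
        # empirical mean + exploration bonus
        score = r / n + math.sqrt(2 * math.log(t) / n)
        if score > best_score:
            best_score, best_arm = score, i
    return best_arm
```

Warm-starting the counts `pulls`/`rewards` from offline logs is one simple way to picture how offline data can shortcut early exploration.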

ArXiv · Dec 26, 2025

Prefill vs. Decode Bottlenecks: SRAM-Frequency Tradeoffs and the Memory-Bandwidth Ceiling

Hannah Atmer, Yuan Yao et al.

TLDR: The study explores the trade-offs between SRAM size and operating frequency on the energy efficiency and performance of Large Language Model inference, identifying optimal configurations to minimize energy use and latency.

ArXiv · Dec 26, 2025

Exploring the Heterogeneity of Tabular Data: A Diversity-aware Data Generator via LLMs

Yafeng Tang, Xiaoou Ding et al.

TLDR: The Diversity-Aware Tabular data gEnerator (DATE) framework improves tabular data generation by partitioning data into diverse subsets and using LLMs with decision tree reasoning to generate high-quality data, outperforming existing methods significantly.

ArXiv · Dec 26, 2025

DuaDeep-SeqAffinity: Dual-Stream Deep Learning Framework for Sequence-Only Antigen-Antibody Affinity Prediction

Aicha Boutorh, Soumia Bouyahiaoui et al.

TLDR: DuaDeep-SeqAffinity is a deep learning framework that accurately predicts antigen-antibody binding affinity using only amino acid sequences, outperforming existing methods without needing 3D structural data.

ArXiv · Dec 26, 2025

Context as a Tool: Context Management for Long-Horizon SWE-Agents

Shukai Liu, Jian Yang et al.

TLDR: The paper introduces CAT, a context management tool for software engineering agents that improves long-horizon reasoning by structuring context maintenance and enabling proactive compression of historical data.

ArXiv · Dec 26, 2025

Look Closer! An Adversarial Parametric Editing Framework for Hallucination Mitigation in VLMs

Jiayu Hu, Beibei Li et al.

TLDR: The paper introduces an adversarial parametric editing framework to reduce hallucinations in Vision-Language Models by prioritizing visual evidence over linguistic biases.

ArXiv · Dec 26, 2025

Scalable Class-Incremental Learning Based on Parametric Neural Collapse

Chuangxin Zhang, Guangfeng Lin et al.

TLDR: The paper introduces SCL-PNC, a scalable class-incremental learning method that uses parametric neural collapse to efficiently expand models while maintaining feature consistency and addressing class misalignment.

ArXiv · Dec 26, 2025

Bridging the Copyright Gap: Do Large Vision-Language Models Recognize and Respect Copyrighted Content?

Naen Xu, Jinghuai Zhang et al.

TLDR: Large vision-language models struggle to recognize and respect copyrighted content, prompting the need for enhanced copyright compliance tools.

ArXiv · Dec 26, 2025

HeartBench: Probing Core Dimensions of Anthropomorphic Intelligence in LLMs

Jiaxin Liu, Peiyi Tu et al.

TLDR: HeartBench is a new framework for evaluating the anthropomorphic intelligence of Chinese LLMs, revealing significant limitations in their ability to handle complex social, emotional, and ethical nuances.

ArXiv · Dec 26, 2025

Scaling Adversarial Training via Data Selection

Youran Ye, Dejin Wang et al.

TLDR: Selective Adversarial Training reduces computational costs by perturbing only critical samples, achieving comparable or better robustness than full PGD adversarial training.

ArXiv · Dec 26, 2025

Explainable Multimodal Regression via Information Decomposition

Zhaozhao Ma, Shujian Yu

TLDR: This paper introduces a new framework for multimodal regression that enhances interpretability by decomposing modality contributions using Partial Information Decomposition (PID), achieving better accuracy and insight than existing methods.

ArXiv · Dec 26, 2025

Broken Words, Broken Performance: Effect of Tokenization on Performance of LLMs

Sachin Pawar, Manoj Apte et al.

TLDR: The paper investigates how breaking natural words into multiple tokens in large language models (LLMs) negatively affects their performance on NLP tasks.
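The word-breaking effect above can be pictured with a toy greedy longest-match segmenter over a hypothetical subword vocabulary (an illustrative sketch, not the paper's tokenizer):

```python
def greedy_tokenize(word, vocab):
    # Greedy longest-match segmentation: at each position, take the
    # longest vocabulary piece; fall back to a single character.
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

# Hypothetical vocabulary for illustration.
vocab = {"token", "ization", "ing", "un", "break"}
```

Here "tokenization" splits into two pieces and "unbreaking" into three; the paper's finding is that tasks degrade when a natural word is fragmented this way instead of kept whole.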

Showing 1-20 of 18,563 papers