
AI Research Paper Feed

Discover cutting-edge research papers in AI and machine learning. Stay ahead with the latest breakthroughs, insights, and discoveries from top researchers worldwide.

3,653 Research Papers · Multi-Domain Coverage · Real-time Updates
arXiv · Jun 3, 2025

GraphRAG-Bench: Challenging Domain-Specific Reasoning for Evaluating Graph Retrieval-Augmented Generation

Yilin Xiao, Junnan Dong et al.

TLDR: GraphRAG-Bench is a new benchmark designed to rigorously evaluate the reasoning capabilities of Graph Retrieval-Augmented Generation models using challenging, domain-specific questions across diverse tasks.

arXiv · Jun 3, 2025

Evaluating LLM Agent Adherence to Hierarchical Safety Principles: A Lightweight Benchmark for Probing Foundational Controllability Components

Ram Potham

TLDR: This paper introduces a benchmark to test if AI agents prioritize safety principles over conflicting operational goals using a grid world scenario.

arXiv · Jun 3, 2025

Comparative Analysis of AI Agent Architectures for Entity Relationship Classification

Maryam Berijanian, Kuldeep Singh et al.

TLDR: This study compares three AI architectures for entity relationship classification, finding that a novel multi-agent approach outperforms standard methods and approaches fine-tuned models' performance.

arXiv · Jun 3, 2025

Random at First, Fast at Last: NTK-Guided Fourier Pre-Processing for Tabular DL

Renat Sergazinov, Jing Wu et al.

TLDR: Random Fourier features can enhance tabular deep learning by stabilizing and accelerating training, without requiring additional tuning or embeddings.
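For context on the preprocessing the TLDR describes, here is a minimal random-Fourier-features sketch in NumPy. It shows only the standard Rahimi-Recht feature map for approximating an RBF kernel; the paper's NTK-guided frequency selection is not reproduced, and the feature dimension and bandwidth below are illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, num_features=256, bandwidth=1.0, seed=0):
    """Map tabular inputs X of shape (n, d) to a randomized cosine feature space.

    Generic Rahimi-Recht random Fourier features approximating an RBF kernel;
    the NTK-guided frequency selection from the paper is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / bandwidth, size=(d, num_features))  # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)           # random phases
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# Toy usage: preprocess a small tabular matrix before feeding it to a deep model.
X = np.random.default_rng(1).normal(size=(8, 5))
Z = random_fourier_features(X)
print(Z.shape)  # (8, 256)
```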

arXiv · Jun 3, 2025

VS-Bench: Evaluating VLMs for Strategic Reasoning and Decision-Making in Multi-Agent Environments

Zelai Xu, Zhexuan Xu et al.

TLDR: VS-Bench is a new benchmark for evaluating Vision Language Models (VLMs) in multi-agent environments, revealing significant gaps in current models' strategic reasoning and decision-making abilities.

arXiv · Jun 3, 2025

MISLEADER: Defending against Model Extraction with Ensembles of Distilled Models

Xueqi Cheng, Minxing Zheng et al.

TLDR: MISLEADER is a novel defense strategy against model extraction attacks that uses ensembles of distilled models to maintain utility while reducing extractability without relying on out-of-distribution assumptions.

arXiv · Jun 3, 2025

FroM: Frobenius Norm-Based Data-Free Adaptive Model Merging

Zijian Li, Xiaocheng Feng et al.

TLDR: FroM is a new adaptive model merging method that uses the Frobenius norm to effectively combine fine-tuned models without data, reducing task interference.
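Data-free model merging can be made concrete with a short sketch. Below, two fine-tuned checkpoints are merged layer by layer, with each layer's interpolation weight derived from the Frobenius norms of its task vectors (the updates relative to a shared base model). This norm-based weighting is only an illustrative heuristic in the spirit of the TLDR, not the FroM procedure from the paper.

```python
import numpy as np

def merge_two_finetunes(base, ft_a, ft_b, eps=1e-12):
    """Merge two fine-tuned models layer by layer without any data.

    For each parameter tensor, the task vectors (fine-tuned minus base) are
    combined with weights proportional to their Frobenius norms, so the
    fine-tune that changed a layer more strongly dominates that layer.
    Illustrative heuristic only; not the paper's FroM algorithm.
    """
    merged = {}
    for name, w0 in base.items():
        tau_a = ft_a[name] - w0
        tau_b = ft_b[name] - w0
        na, nb = np.linalg.norm(tau_a), np.linalg.norm(tau_b)  # Frobenius norms
        alpha = na / (na + nb + eps)                            # layer-wise mixing weight
        merged[name] = w0 + alpha * tau_a + (1.0 - alpha) * tau_b
    return merged

# Toy usage with two-layer "models" stored as dicts of weight matrices.
rng = np.random.default_rng(0)
base = {"layer1": rng.normal(size=(4, 4)), "layer2": rng.normal(size=(4, 2))}
ft_a = {k: v + 0.1 * rng.normal(size=v.shape) for k, v in base.items()}
ft_b = {k: v + 0.5 * rng.normal(size=v.shape) for k, v in base.items()}
print({k: v.shape for k, v in merge_two_finetunes(base, ft_a, ft_b).items()})
```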

arXiv · Jun 3, 2025

A Review of Various Datasets for Machine Learning Algorithm-Based Intrusion Detection System: Advances and Challenges

Sudhanshu Sekhar Tripathy, Bichitrananda Behera

TLDR: This paper reviews various datasets and machine learning methods for intrusion detection systems, highlighting advances and challenges in the field.

arXiv · Jun 3, 2025

SingaKids: A Multilingual Multimodal Dialogic Tutor for Language Learning

Zhengyuan Liu, Geyu Lin et al.

TLDR: SingaKids is a multilingual, multimodal dialogic tutor that enhances language learning for children through interactive picture description tasks in four languages.

arXiv · Jun 3, 2025

Comba: Improving Nonlinear RNNs with Closed-loop Control

Jiaxi Hu, Yongqi Pan et al.

TLDR: Comba is a new Nonlinear RNN variant that enhances performance and efficiency in sequence modeling by using closed-loop control theory and a scalar-plus-low-rank state transition.

arXiv · Jun 3, 2025

OThink-R1: Intrinsic Fast/Slow Thinking Mode Switching for Over-Reasoning Mitigation

Shengjia Zhang, Junjie Wu et al.

TLDR: OThink-R1 is a method that reduces unnecessary reasoning in large reasoning models by switching between fast-thinking and slow-thinking modes, improving efficiency without losing accuracy.

arXiv · Jun 3, 2025

Consultant Decoding: Yet Another Synergistic Mechanism

Chuanghao Ding, Jiaping Wang et al.

TLDR: Consultant Decoding (CD) improves inference speed and quality for large language models by using token-level likelihoods for draft verification, achieving significant efficiency gains over traditional speculative decoding.
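For readers unfamiliar with draft-and-verify generation, the sketch below shows a generic loop in which a cheap draft model proposes tokens and a larger target model accepts them based on its own token-level likelihoods. The toy stand-in model, the vocabulary size, and the probability-threshold acceptance rule are all illustrative assumptions; this is not the paper's Consultant Decoding algorithm.

```python
import numpy as np

VOCAB = 16

def toy_model(context, temperature):
    """Stand-in for an LLM: returns a next-token distribution over VOCAB.

    A real system would query a small draft LLM and a large target LLM here.
    """
    seed = hash((tuple(context), temperature)) % (2**32)
    logits = np.random.default_rng(seed).normal(size=VOCAB) / temperature
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def draft_then_verify(context, num_new_tokens=8, draft_len=4, accept_threshold=0.05):
    """Generic draft-and-verify decoding loop.

    The draft model proposes up to `draft_len` tokens; the target model accepts
    a drafted token only if it assigns it at least `accept_threshold` probability,
    otherwise it substitutes its own top token and drafting restarts. The
    threshold rule is an illustrative assumption, not the paper's criterion.
    """
    out = list(context)
    while len(out) - len(context) < num_new_tokens:
        # 1. Draft a short continuation with the cheap model.
        draft = []
        for _ in range(draft_len):
            p_draft = toy_model(out + draft, temperature=2.0)
            draft.append(int(p_draft.argmax()))
        # 2. Verify each drafted token with the expensive target model.
        for tok in draft:
            p_target = toy_model(out, temperature=1.0)
            if p_target[tok] >= accept_threshold:
                out.append(tok)                      # accept the draft token
            else:
                out.append(int(p_target.argmax()))   # reject: use the target's own token
                break                                # re-draft from the corrected prefix
            if len(out) - len(context) >= num_new_tokens:
                break
    return out[len(context):]

print(draft_then_verify([1, 2, 3]))
```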

arXiv · Jun 3, 2025

Asymptotically Optimal Linear Best Feasible Arm Identification with Fixed Budget

Jie Bian, Vincent Y. F. Tan

TLDR: This paper presents a novel algorithm for best feasible arm identification in linear bandits with a fixed budget, achieving optimal error decay rates using a posterior sampling framework.

arXiv · Jun 3, 2025

Exploring Explanations Improves the Robustness of In-Context Learning

Ukyo Honda, Tatsushi Oka

TLDR: The study introduces X²-ICL, an improved in-context learning method that uses explanations for all possible labels to enhance robustness in language models.
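The idea of explaining every candidate label in each demonstration can be illustrated with a small prompt-construction sketch. The field names and wording below are illustrative assumptions, not the exact prompt format used in the paper.

```python
def build_all_label_prompt(demos, labels, query):
    """Assemble an in-context-learning prompt in which every demonstration
    explains all candidate labels, not only the gold one.

    Hypothetical format for illustration; not the paper's exact template.
    """
    parts = []
    for d in demos:
        parts.append(f"Input: {d['text']}")
        for label in labels:
            verdict = "fits" if label == d["gold"] else "does not fit"
            parts.append(f"Why '{label}' {verdict}: {d['explanations'][label]}")
        parts.append(f"Answer: {d['gold']}\n")
    parts.append(f"Input: {query}")
    parts.append("Answer:")
    return "\n".join(parts)

demos = [{
    "text": "The movie was a waste of two hours.",
    "gold": "negative",
    "explanations": {
        "positive": "nothing in the sentence expresses enjoyment.",
        "negative": "'a waste of two hours' signals strong dissatisfaction.",
    },
}]
print(build_all_label_prompt(demos, ["positive", "negative"], "I loved every minute of it."))
```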

arXiv · Jun 3, 2025

Computational Thresholds in Multi-Modal Learning via the Spiked Matrix-Tensor Model

Hugo Tabanelli, Pierre Mergny et al.

TLDR: The paper explores multi-modal learning using spiked matrix-tensor models, revealing that sequential learning strategies can overcome computational challenges and achieve optimal signal recovery.

arXiv · Jun 3, 2025

MASTER: Enhancing Large Language Model via Multi-Agent Simulated Teaching

Liang Yue, Yihong Tang et al.

TLDR: MASTER is a novel data augmentation method that uses multi-agent simulated teaching to enhance large language models' instruction-following capabilities and reasoning abilities.

arXiv · Jun 3, 2025

Symmetry-Aware GFlowNets

Hohyun Kim, Seunggeun Lee et al.

TLDR: Symmetry-Aware GFlowNets (SA-GFN) corrects biases in graph sampling by integrating symmetry corrections into the reward structure, improving the diversity and accuracy of generated graphs.

arXiv · Jun 3, 2025

VidEvent: A Large Dataset for Understanding Dynamic Evolution of Events in Videos

Baoyu Liang, Qile Su et al.

TLDR: VidEvent is a large-scale dataset designed to improve AI's understanding of dynamic events in videos, providing over 23,000 annotated events from movie recaps for research and development.

arXiv · Jun 3, 2025

AERO: A Redirection-Based Optimization Framework Inspired by Judo for Robust Probabilistic Forecasting

Karthikeyan Vaiapury

TLDR: AERO is a novel optimization framework inspired by Judo, designed to improve stability and adaptability in probabilistic forecasting under uncertainty.

arXiv · Jun 3, 2025

DIAMOND: An LLM-Driven Agent for Context-Aware Baseball Highlight Summarization

Jeonghun Kang, Soonmok Kwon et al.

TLDR: DIAMOND is an LLM-driven agent that enhances baseball highlight summarization by integrating sabermetrics with natural language reasoning, significantly improving performance over traditional methods.

Showing 1-20 of 3,653 papers (page 1 of 183)