
This book constitutes the proceedings of the Second International Workshop, AI4Research 2025, and the First International Workshop, SEAS 2025, held in conjunction with AAAI 2025 in Philadelphia, PA, USA, during February 25–March 4, 2025.
AI4Research 2025 presents 8 full papers selected from 35 submissions. The papers cover diverse areas such as agent debate evaluation, taxonomy expansion, hypothesis generation, AI4Research benchmarks, caption generation, drug discovery, and financial auditing.
SEAS 2025 accepted 7 full papers from 17 submissions. These papers explore the efficiency and scalability of AI models.
.- AI4Research 2025.
.- ResearchCodeAgent: An LLM Multi-Agent System for Automated Codification of Research Methodologies.
.- LLMs Tackle Meta-Analysis: Automating Scientific Hypothesis Generation with Statistical Rigor.
.- AuditBench: A Benchmark for Large Language Models in Financial Statement Auditing.
.- Clustering Time Series Data with Gaussian Mixture Embeddings in a Graph Autoencoder Framework.
.- Empowering AI as Autonomous Researchers: Evaluating LLMs in Generating Novel Research Ideas through Automated Metrics.
.- Multi-LLM Collaborative Caption Generation in Scientific Documents.
.- CypEGAT: A Deep Learning Framework Integrating Protein Language Model and Graph Attention Networks for Enhanced CYP450s Substrate Prediction.
.- Understanding How Paper Writers Use AI-Generated Captions in Figure Caption Writing.
.- SEAS 2025.
.- ssProp: Energy-Efficient Training for Convolutional Neural Networks with Scheduled Sparse Back Propagation.
.- Knowledge Distillation with Training Wheels.
.- PickLLM: Context-Aware RL-Assisted Large Language Model Routing.
.- ZNorm: Z-Score Gradient Normalization Accelerating Skip-Connected Network Training without Architectural Modification.
.- The Impact of Multilingual Model Scaling on Seen and Unseen Language Performance.
.- Information Consistent Pruning: How to Efficiently Search for Sparse Networks?.
.- Efficient Image Similarity Search with Quadtrees.