MUSTACHE: Multi-Step-Ahead Predictions for Cache Eviction

Jan 1, 2022 · Gabriele Tolomei, Lorenzo Takanen, Fabio Pinelli · 1 min read
Type: Preprint
Publication: CoRR


Last updated on Jun 13, 2025


© 2025 HERCOLELab. This work is licensed under CC BY NC ND 4.0
