Tau-Eval is a user-friendly, modular, and customizable Python library designed to benchmark and evaluate text anonymization algorithms. It enables granular analysis of anonymization impacts from both privacy and utility perspectives. Tau-Eval seamlessly integrates with LiteLLM and 🤗 Hugging Face to support a wide range of datasets, models, and evaluation metrics.
Install Tau-Eval via pip:
pip install tau-eval
To install from source:
- Clone this repository to a location of your choice:
git clone https://github.com/gabrielloiseau/tau-eval.git
cd tau-eval
- Create an environment with your preferred package manager. We used Python 3.10 and the dependencies listed in pyproject.toml. If you use conda, you can run the following commands from the root of the project:
conda create --name taueval python=3.10 # create the environment
conda activate taueval # activate the environment
pip install -e . # install the required packages
Tau-Eval is designed for flexibility. With just a few lines of code, you can set up and run evaluations.
Create a custom anonymization model by extending the Anonymizer interface:
from tau_eval.models import Anonymizer
class TestModel(Anonymizer):
    def __init__(self):
        self.name = "Test Model"

    def anonymize(self, text: str) -> str:
        # Implement anonymization logic
        return text

    def anonymize_batch(self, texts: list[str]) -> list[str]:
        # Batch processing
        return texts

Or use prebuilt models from tau_eval.models.
Use built-in metrics from tau_eval.metrics or define your own following this signature:

Callable[[str | list[str], str | list[str]], dict]

This gives you complete control over what you evaluate and how.
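For example, a custom metric matching this signature could look like the sketch below (the function name and output key are illustrative, not part of Tau-Eval):

def length_ratio(original: str | list[str], anonymized: str | list[str]) -> dict:
    # Normalize single strings to lists so both call styles are handled
    if isinstance(original, str):
        original, anonymized = [original], [anonymized]
    # Average ratio of anonymized length to original length over the batch
    ratios = [len(a) / max(len(o), 1) for o, a in zip(original, anonymized)]
    return {"length_ratio": sum(ratios) / max(len(ratios), 1)}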
Tasks can be created using prebuilt options in tau_eval.tasks, or customized using CustomTask. Tau-Eval also supports tasksource for dataset integration.
from tau_eval.tasks import DeIdentification
from tasknet import AutoTask
anli = AutoTask("anli/a1")
deid = DeIdentification(dataset="ai4privacy/pii-masking-400k")

Define an experiment configuration:
from tau_eval.config import ExperimentConfig
config = ExperimentConfig(
    exp_name="test-experiment",
    classifier_name="answerdotai/ModernBERT-base",
    train_task_models=True,
    train_with_generations=False,
)

Run the experiment:
from tau_eval.experiment import Experiment
Experiment(
    models=[TestModel(), ...],
    metrics=["bertscore", "rouge"],
    tasks=[anli, deid],
    config=config,
).run()

Tau-Eval includes built-in visualization tools to compare anonymization strategies and evaluation results across models. You can find them in tau_eval.visualization.
To get the most out of Tau-Eval, explore the tutorials in the examples/ folder.
- Gabriel Loiseau, Hornetsecurity, Inria Lille
If you use 𝜏 Tau-Eval in your work, please cite our paper as follows:
@misc{loiseau2025taueval,
      title={Tau-Eval: A Unified Evaluation Framework for Useful and Private Text Anonymization},
      author={Gabriel Loiseau and Damien Sileo and Damien Riquet and Maxime Meyer and Marc Tommasi},
      year={2025},
      eprint={2506.05979},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.05979},
}