
𝜏 Tau-Eval: A Unified Evaluation Framework for Useful and Private Text Anonymization

Tau-Eval is a user-friendly, modular, and customizable Python library designed to benchmark and evaluate text anonymization algorithms. It enables granular analysis of anonymization impacts from both privacy and utility perspectives. Tau-Eval seamlessly integrates with LiteLLM and 🤗 Hugging Face to support a wide range of datasets, models, and evaluation metrics.

License: GNU GPLv3 | Version: v0.1.0 | Python 3.10+ | Tutorials | Docs (GitHub.io)

Installation

From PyPI

Install Tau-Eval via pip:

pip install tau-eval

From source

To install from source:

  1. Clone this repository:
git clone https://github.com/gabrielloiseau/tau-eval.git
cd tau-eval
  2. Create an environment with your preferred package manager. We use Python 3.10 and the dependencies listed in pyproject.toml. If you use conda, you can run the following commands from the root of the project:
conda create --name taueval python=3.10        # create the environment
conda activate taueval                         # activate the environment
pip install -e .                               # install the required packages

Quickstart

Tau-Eval is designed for flexibility. With just a few lines of code, you can set up and run evaluations.

1. Define Your Anonymization Model

Create a custom anonymization model by extending the Anonymizer interface:

from tau_eval.models import Anonymizer

class TestModel(Anonymizer):
    def __init__(self):
        self.name = "Test Model"

    def anonymize(self, text: str) -> str:
        # Implement your anonymization logic here; this stub returns the text unchanged
        return text

    def anonymize_batch(self, texts: list[str]) -> list[str]:
        # Batch variant; this stub returns the texts unchanged
        return texts
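
For a quick sanity check, the stub above can be called directly; the sample text below is purely illustrative:

model = TestModel()
print(model.name)                                # "Test Model"
print(model.anonymize("Alice lives in Paris"))   # this stub returns the input unchanged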

Or use prebuilt models from tau_eval.models.

2. Configure Evaluation Metrics

Use built-in metrics from tau_eval.metrics or define your own following this signature:

Callable[[str | list[str], str | list[str]], dict]

This gives you full control over what you evaluate and how.
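
For example, a minimal custom metric matching this signature could look like the following sketch; the function name and the returned key are placeholders, not built-in Tau-Eval metrics:

# Sketch of a custom metric: compares original and anonymized texts
# and returns a dict of scores, as expected by the signature above.
def length_ratio(original: str | list[str], anonymized: str | list[str]) -> dict:
    # Normalize single strings to lists so both call styles are supported
    if isinstance(original, str):
        original, anonymized = [original], [anonymized]
    # Average ratio of anonymized length to original length
    ratios = [len(a) / max(len(o), 1) for o, a in zip(original, anonymized)]
    return {"length_ratio": sum(ratios) / len(ratios)}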

3. Instantiate Tasks

Tasks can be created using prebuilt options in tau_eval.tasks, or customized using CustomTask. Tau-Eval also supports tasksource for dataset integration.

from tau_eval.tasks import DeIdentification
from tasknet import AutoTask

anli = AutoTask("anli/a1")
deid = DeIdentification(dataset="ai4privacy/pii-masking-400k")

4. Configure and Run Your Experiment

Define an experiment configuration:

from tau_eval.config import ExperimentConfig

config = ExperimentConfig(
    exp_name="test-experiment",
    classifier_name="answerdotai/ModernBERT-base",
    train_task_models=True,
    train_with_generations=False,
)

Run the experiment:

from tau_eval.experiment import Experiment

Experiment(
    models=[TestModel(), ...],
    metrics=["bertscore", "rouge"],
    tasks=[anli, deid],
    config=config
).run()

5. Visualize Results

Tau-Eval includes built-in visualization tools for comparing anonymization strategies and evaluation results across models. They are available in tau_eval.visualization.

Tutorials

Tutorials that walk through Tau-Eval in more depth are available in the examples/ folder.

Contributors

Citation

If you use 𝜏 Tau-Eval in your work, please cite our paper as follows:

@misc{loiseau2025taueval,
      title={Tau-Eval: A Unified Evaluation Framework for Useful and Private Text Anonymization}, 
      author={Gabriel Loiseau and Damien Sileo and Damien Riquet and Maxime Meyer and Marc Tommasi},
      year={2025},
      eprint={2506.05979},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.05979}, 
}
