
DeepInverse Benchmarks

This repository contains a set of benchmarks for evaluating the performance of different image reconstruction methods implemented in the DeepInverse library.

Leaderboards are automatically generated and can be found in the DeepInverse benchmarks documentation.

Benchmark results are stored in a HuggingFace repository: https://huggingface.co/datasets/deepinv/benchmarks/tree/main
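
If you want to inspect the raw result files locally, one option (a sketch, assuming the dataset repository above is public and huggingface_hub is installed) is to download a snapshot:

# Hypothetical usage sketch: download the benchmark results dataset locally.
# Assumes the "deepinv/benchmarks" dataset repository is publicly accessible.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="deepinv/benchmarks", repo_type="dataset")
print(local_dir)  # path to the downloaded result files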

Evaluating Your Reconstruction Methods

To evaluate your own reconstruction methods on these benchmarks, install DeepInverse with the benchmarks extra:

pip install deepinv[benchmarks]

and then, in Python, run:

from deepinv.benchmarks import run_benchmark
my_solver = lambda y, physics: ...  # your solver here
results = run_benchmark(my_solver, "benchmark_name")

where benchmark_name is the name of the benchmark and my_solver is your reconstruction method, which receives the measurements y and the forward operator physics and returns the reconstructed image.
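
As a minimal illustration (a sketch, not an official baseline), a naive solver could simply back-project the measurements, assuming the benchmark's physics object is a deepinv LinearPhysics exposing A_adjoint:

from deepinv.benchmarks import run_benchmark

# Hypothetical baseline sketch: back-project the measurements with the adjoint
# operator. Assumes the benchmark's physics is a LinearPhysics with A_adjoint.
def adjoint_solver(y, physics):
    return physics.A_adjoint(y)

results = run_benchmark(adjoint_solver, "benchmark_name")  # benchmark_name as above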

Adding New Solvers

To add a new solver to an existing benchmark, open a new pull request on this repository, adding a new your_solver_name.py file in the corresponding benchmark folder. Follow the structure of the existing solver files. The new solver will be automatically run once the pull request is merged, and the results will be added to the leaderboard.
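
The authoritative layout is whatever the existing solver files use; as a purely hypothetical sketch (the file name, the expected entry point, and the use of A_dagger are all assumptions), such a file might look like:

# your_solver_name.py -- hypothetical sketch; mirror the existing solver files instead.
# Assumptions: the benchmark runner imports a callable taking (y, physics), and the
# physics object is a deepinv LinearPhysics (so a pseudo-inverse A_dagger is available).

def solver(y, physics):
    # Trivial example: pseudo-inverse reconstruction of the measurements.
    return physics.A_dagger(y)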

Adding New Benchmarks

To create a new benchmark, open a new pull request adding a new folder following the structure given in the existing benchmark_template folder.

A new benchmark requires:

If you would like to propose a new dataset, metric or forward operator, please open an issue.
