Fixed minimizer behaviour on exceeded iterations #231
File 1 (DFO-LS minimizer):

```diff
@@ -1,6 +1,7 @@
 # SPDX-FileCopyrightText: 2026 EasyScience contributors <https://github.com/easyscience>
 # SPDX-License-Identifier: BSD-3-Clause
 
+import warnings
 from typing import Callable
 from typing import Dict
 from typing import List
@@ -122,6 +123,10 @@ def fit(
             model_results = self._dfo_fit(self._cached_pars, model, **kwargs)
             self._set_parameter_fit_result(model_results, stack_status)
             results = self._gen_fit_results(model_results, weights)
+        except FitError:
+            for key in self._cached_pars.keys():
+                self._cached_pars[key].value = self._cached_pars_vals[key][0]
+            raise
         except Exception as e:
             for key in self._cached_pars.keys():
                 self._cached_pars[key].value = self._cached_pars_vals[key][0]
```

Contributor: Codecov says this misses a test :)

Member (Author): Test added. You know we shouldn't really aim for 100% coverage, though? :)
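The rollback behaviour in this hunk can be sketched in isolation. Everything below (the `FitError` stand-in, `fit_with_rollback`, `broken_engine`, the cache layout) is hypothetical, illustrating the pattern rather than the actual easyscience internals:

```python
class FitError(Exception):
    """Stand-in for easyscience's FitError."""

def fit_with_rollback(pars, cached_vals, engine_fit):
    """Run the engine; on failure, restore every parameter to its cached value."""
    try:
        return engine_fit(pars)
    except FitError:
        for key in pars:
            pars[key] = cached_vals[key][0]
        raise  # re-raise so the caller still sees the failure

# Usage: a failing engine leaves parameters restored, not half-updated.
pars = {'a': 5.0}
cached = {'a': (1.0,)}

def broken_engine(_):
    raise FitError('engine blew up')

try:
    fit_with_rollback(pars, cached, broken_engine)
except FitError:
    pass

print(pars['a'])  # → 1.0
```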
```diff
@@ -208,7 +213,11 @@ def _gen_fit_results(self, fit_results, weights, **kwargs) -> FitResults:
         for name, value in kwargs.items():
             if getattr(results, name, False):
                 setattr(results, name, value)
-        results.success = not bool(fit_results.flag)
+        # DFO-LS stores fixed exit-code constants on each result object;
+        # EXIT_SUCCESS is 0 and EXIT_MAXFUN_WARNING keeps a different flag value.
+        results.success = fit_results.flag == fit_results.EXIT_SUCCESS
+        if fit_results.flag == fit_results.EXIT_MAXFUN_WARNING:
+            warnings.warn(str(fit_results.msg), UserWarning)
 
         pars = {}
         for p_name, par in self._cached_pars.items():
```

Contributor: Can you explain this? What is

Contributor: And should we raise a warning if the fit did not succeed?

Member (Author): In DFO-LS, whether to raise a warning: not for every non-success. Hard failures already become an exception in

Member (Author): Added a warning and a unit test.

Contributor: Okay, but what about the

Member (Author): No, there is no risk from the value being nonzero by itself. The comparison only returns True when the flag exactly matches the success constant; it does not depend on success being 0 specifically. Even if a library used 137 for success, this comparison would still be correct as long as fit_results.flag and fit_results.EXIT_SUCCESS were both 137 for a real success. For DFO, hard errors have negative values. The line does what we want; I added a comment above the assignment to spell it out.

Contributor: But what about

Member (Author): At line 271 the DFO results object is returned: if results.flag is either 0 (success) or 1 (MAXFUN_WARNING), the results object is returned; otherwise the fit is assumed to have failed and an error is raised. So at line 219, where this is actually checked, fit_results.flag is only ever 0 or 1.

Contributor: Ahh, now I think I get it. DFO sets
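As a sanity check on the flag handling above, here is a runnable sketch using `types.SimpleNamespace` stand-ins for DFO-LS result objects. The objects and messages are made up; only the attribute names `flag`, `EXIT_SUCCESS`, `EXIT_MAXFUN_WARNING`, and `msg` mirror those used in the diff:

```python
import warnings
from types import SimpleNamespace

# Dummy result objects mimicking DFO-LS's solution attributes (hypothetical values).
ok = SimpleNamespace(flag=0, EXIT_SUCCESS=0, EXIT_MAXFUN_WARNING=1, msg='Success')
budget = SimpleNamespace(flag=1, EXIT_SUCCESS=0, EXIT_MAXFUN_WARNING=1,
                         msg='Warning (max evals): ran out of budget')

def summarize(fit_results):
    """Mirror the new logic: success only on an exact EXIT_SUCCESS match,
    with a UserWarning when the evaluation budget was exhausted."""
    success = fit_results.flag == fit_results.EXIT_SUCCESS
    if fit_results.flag == fit_results.EXIT_MAXFUN_WARNING:
        warnings.warn(str(fit_results.msg), UserWarning)
    return success

print(summarize(ok))  # → True, no warning emitted
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    print(summarize(budget))  # → False
print(len(caught))  # → 1 (the max-evals UserWarning)
```

Note that the comparison is value-agnostic: it stays correct even if the library's success constant were some other number, which is the point made in the discussion above.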
```diff
@@ -220,11 +229,14 @@ def _gen_fit_results(self, fit_results, weights, **kwargs) -> FitResults:
         results.y_obs = self._cached_model.y
         results.y_calc = self.evaluate(results.x, minimizer_parameters=results.p)
         results.y_err = weights
+        results.n_evaluations = int(fit_results.nf)
+        results.message = str(fit_results.msg)
         # results.residual = results.y_obs - results.y_calc
         # results.goodness_of_fit = fit_results.f
 
         results.minimizer_engine = self.__class__
         results.fit_args = None
         results.engine_result = fit_results
         # results.check_sanity()
 
         return results
```

rozyczko marked this conversation as resolved.
```diff
@@ -258,10 +270,12 @@ def _dfo_fit(
         results = dfols.solve(model, pars_values, bounds=bounds, **kwargs)
 
-        if 'Success' not in results.msg:
-            raise FitError(f'Fit failed with message: {results.msg}')
+        # DFO-LS uses EXIT_MAXFUN_WARNING when it stops on the evaluation budget;
+        # we still return the partial fit result and let the unified result mark it as non-success.
+        if results.flag in {results.EXIT_SUCCESS, results.EXIT_MAXFUN_WARNING}:
+            return results
 
-        return results
+        raise FitError(f'Fit failed with message: {results.msg}')
 
     @staticmethod
     def _prepare_kwargs(
```

Contributor: Could you also explain this? Maybe add a comment?

Member (Author): Added a comment.
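The accept-or-raise guard above can be exercised without DFO-LS installed. The result objects below are hypothetical stand-ins, and the negative flag follows the reviewer's note that hard errors carry negative values:

```python
from types import SimpleNamespace

class FitError(Exception):
    """Stand-in for easyscience's FitError."""

def check_dfo_result(results):
    # Accept a clean exit and the exceeded-budget warning;
    # treat anything else (e.g. negative hard-error flags) as fatal.
    if results.flag in {results.EXIT_SUCCESS, results.EXIT_MAXFUN_WARNING}:
        return results
    raise FitError(f'Fit failed with message: {results.msg}')

good = SimpleNamespace(flag=1, EXIT_SUCCESS=0, EXIT_MAXFUN_WARNING=1,
                       msg='Warning (max evals)')
bad = SimpleNamespace(flag=-1, EXIT_SUCCESS=0, EXIT_MAXFUN_WARNING=1,
                      msg='Error (bad input)')

print(check_dfo_result(good) is good)  # → True (partial result is returned)
try:
    check_dfo_result(bad)
except FitError as exc:
    print(exc)  # → Fit failed with message: Error (bad input)
```

This matches the explanation later in the thread: downstream code only ever sees flags 0 or 1, because everything else raises here.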
File 2 (lmfit minimizer):

```diff
@@ -1,6 +1,7 @@
 # SPDX-FileCopyrightText: 2026 EasyScience contributors <https://github.com/easyscience>
 # SPDX-License-Identifier: BSD-3-Clause
 
+import warnings
 from typing import Callable
 from typing import List
 from typing import Optional
@@ -298,6 +299,10 @@ def _gen_fit_results(self, fit_results: ModelResult, **kwargs) -> FitResults:
         # results.goodness_of_fit = fit_results.chisqr
         results.y_calc = fit_results.best_fit
         results.y_err = 1 / fit_results.weights
+        results.n_evaluations = fit_results.nfev
+        results.message = fit_results.message
+        if fit_results.success is False and fit_results.message:
+            warnings.warn(str(fit_results.message), UserWarning)
         results.minimizer_engine = self.__class__
         results.fit_args = None
```

Contributor: We should probably either raise an error or a warning when

Member (Author): As mentioned earlier, I would prefer to let the caller decide what to do with this information. Let's emit a warning and be gracious with respect to max_evaluations.
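The "let the caller decide" approach is easy to demonstrate: a caller who wants strict behaviour can promote the `UserWarning` to an exception with the standard `warnings` filters. The `report` helper below is a stand-in for the new result handling, not the actual code:

```python
import warnings

def report(success, message):
    """Mimic the new lenient behaviour: warn instead of raising (stand-in)."""
    if success is False and message:
        warnings.warn(str(message), UserWarning)

# Lenient by default: the warning is emitted, execution continues.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    report(False, 'Maximum number of function evaluations reached')
print(len(caught))  # → 1

# A strict caller can promote the warning to an exception:
try:
    with warnings.catch_warnings():
        warnings.simplefilter('error', UserWarning)
        report(False, 'Maximum number of function evaluations reached')
except UserWarning as exc:
    print('raised:', exc)
```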
Contributor: Thinking about this now: you're tracking fit iterations by counting how many times the model function is evaluated. This is not a correct representation of the number of fit iterations, as some fitting routines evaluate the model function multiple times per fitting step in order to choose the next step. Optimizers like BUMPS, for example, run multiple model evaluations at the start of each fit. This might be the best we can do if we can't get the actual fit iterations from the back-end engine, but it's worth mentioning.
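The distinction between evaluations and iterations is easy to show with a counting wrapper; the example below is purely illustrative. A single finite-difference gradient step over two parameters already costs three model evaluations, so `nfev` overstates the iteration count:

```python
class CountingModel:
    """Count raw model-function evaluations; optimizers may need several per iteration."""
    def __init__(self, fn):
        self.fn = fn
        self.nfev = 0

    def __call__(self, *args):
        self.nfev += 1
        return self.fn(*args)

model = CountingModel(lambda a, b: (a - 1.0) ** 2 + (b + 2.0) ** 2)

# One "iteration" of a finite-difference gradient method: 1 + n_params evaluations.
h = 1e-6
a, b = 0.0, 0.0
f0 = model(a, b)
grad = ((model(a + h, b) - f0) / h, (model(a, b + h) - f0) / h)
print(model.nfev)  # → 3, after a single optimizer step
```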
Contributor: You might want to add, to the integration tests where we actually run the fits, a check of the iteration counters.

Member (Author): This is the best we can currently do. The actual iteration counters might differ from machine to machine and from OS to OS, so I don't think it's going to be easy to catch the right situation, but I'll give it a try.
Contributor: I just checked: the number of iterations used by BUMPS is saved in engine_result.nit.

Member (Author): I guess we may use nit, assuming BUMPS is consistent with it.
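If `nit` is adopted, a defensive accessor could prefer it and fall back to the evaluation count when an engine does not provide it. The attribute names below are illustrative stand-ins for what the engines expose:

```python
from types import SimpleNamespace

def n_iterations(engine_result):
    """Prefer a genuine iteration counter (e.g. BUMPS' `nit`); otherwise
    fall back to the evaluation count. Attribute names are illustrative."""
    nit = getattr(engine_result, 'nit', None)
    if nit is not None:
        return int(nit)
    return int(getattr(engine_result, 'nfev', 0))

bumps_like = SimpleNamespace(nit=42, nfev=903)   # hypothetical BUMPS-style result
lmfit_like = SimpleNamespace(nfev=57)            # hypothetical result without nit

print(n_iterations(bumps_like))  # → 42
print(n_iterations(lmfit_like))  # → 57
```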