Roi axis stack #1219
Conversation
Do you think
The concat files are more complex than that. It is neither snake nor zig-zag. It is the concatenation of several 2D maps, potentially overlapping or with gaps. The problem is not simple to solve in general.
In the cases I saw, it is saved as zig-zag in the HDF5 file. The same goes for silx, if I understood its approach correctly: it tries to handle a regular grid as Z-like or as a line.
My feeling is that it is OK to add support for Z-like flattened data on a regular grid as the most common case, and to start work on irregular grid support on user request.
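For illustration, here is a minimal runnable sketch (not PyMca code; all names and shapes are hypothetical) of what "Z-like flattened data on a regular grid" means: the flattened stack maps back to the grid with a plain reshape, while a snake scan would additionally need every other row reversed.

```python
import numpy as np

# Hypothetical sketch: a Z-like (raster) scan visits every row in the same
# direction, so a flattened stack of shape (ny * nx, nchan) maps back to a
# regular grid with a plain reshape.
ny, nx, nchan = 3, 4, 5
flat = np.arange(ny * nx * nchan, dtype=float).reshape(ny * nx, nchan)

grid = flat.reshape(ny, nx, nchan)  # Z-like: no row reversal needed

# For a snake (boustrophedon) scan, every other row would additionally be
# reversed along the fast axis:
snake = grid.copy()
snake[1::2] = snake[1::2, ::-1]
```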
```python
fast_is_A = False
```
```python
# Find repetition length n in the slow array
diff_idx = numpy.where(slow != slow[0])[0]
```
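As a hedged, self-contained sketch of what this line computes (synthetic arrays, hypothetical names): in a Z-like scan the slow motor holds its value for n consecutive points, so the first index where it changes is the row length. As discussed in the comments, this assumes the slow readout is exactly constant within a row.

```python
import numpy as np

# Synthetic Z-like scan with a noiseless slow readout (the assumption
# being debated in this thread).
nx, ny = 4, 3
slow = np.repeat(np.array([0.0, 1.0, 2.0]), nx)   # 0,0,0,0,1,1,1,1,2,2,2,2
fast = np.tile(np.linspace(0.0, 3.0, nx), ny)

# First index where the slow value changes == repetition length n.
diff_idx = np.where(slow != slow[0])[0]
n = diff_idx[0] if diff_idx.size else len(slow)
```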
To be clear this assumes the slow motor readout is not noisy.
Indeed, I thought that was the most common case, but it appears to be 50-50.
I need to fix it.
Yes, but the route of analyzing noisy motor readings of regular scans to find the regular grid has been tried many times and is inherently flaky.
Your next implementation might involve some kind of noise threshold, which of course is impossible to know.
Then you might realize there is only one property you can rely on: the values of the fast axis are "piecewise monotonic". Est is using that to split back-and-forth energy scans, but that's only a 1D problem. You need to first find the fast axis. Silx is using this for silx view.
And after all this work you realize you need to handle CTRL-C etc.
In my opinion all this is a dead end and we should not rely on regular vs. irregular. ewoksfluo uses an optimal grid finding approach for any list of nD coordinates.
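To make the "piecewise monotonic" remark concrete, here is a hedged sketch (not the silx or Est implementation; names and heuristic are assumptions) of one cheap way to find the fast axis: the fast motor changes direction or resets at every row boundary, while the slow motor changes direction rarely, so counting sign flips in the first differences separates them.

```python
import numpy as np

def count_direction_changes(values):
    """Count sign flips in the first differences, ignoring flat segments."""
    signs = np.sign(np.diff(values))
    signs = signs[signs != 0]
    return int(np.count_nonzero(signs[1:] != signs[:-1]))

# Synthetic Z-like scan: the fast motor resets at every row boundary,
# the slow motor is monotonic overall.
nx, ny = 5, 4
fast = np.tile(np.linspace(0.0, 1.0, nx), ny)
slow = np.repeat(np.linspace(0.0, 1.0, ny), nx)

# The axis with more direction changes is the fast one.
fast_axis = 0 if count_direction_changes(fast) > count_direction_changes(slow) else 1
```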
```python
short_fast = fast[:n]
short_slow = slow[::n]
```
We sample the coordinates and take this as the final regular grid coordinates. To make it more robust we could take the median of all possible samplings. Not sure we care. All this is very approximate anyway.
Not exactly. We find the grid from the Z-like type. Then, to reuse existing logic, we take the first element and find the mean step of the grid - this gives xScale and yScale. So the full grid is not used further.
What I mean is you are using fast[:n] and slow[::n], which is a single sample. You are not using all the values in fast and slow.
You could do something like

```python
ny = len(fast) // n
fast_scale = np.array([np.median(fast[j::n]) for j in range(n)])
slow_scale = np.array([np.median(slow[i * n : (i + 1) * n]) for i in range(ny)])
```

But as I said, trying to recognize zig-zag and snake coordinates is very flaky anyway. So this is not the biggest problem of the approach.
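As a runnable illustration of the median idea discussed here (synthetic data, hypothetical names; not PyMca code): with a noisy motor readout, the median over all rows/columns recovers the grid coordinates much more robustly than the single sample fast[:n] / slow[::n].

```python
import numpy as np

# Synthetic Z-like scan with noisy motor readout.
rng = np.random.default_rng(0)
n, ny = 5, 4                                      # row length and row count
true_fast = np.linspace(0.0, 1.0, n)
true_slow = np.linspace(0.0, 3.0, ny)

fast = np.tile(true_fast, ny) + rng.normal(0.0, 1e-3, n * ny)
slow = np.repeat(true_slow, n) + rng.normal(0.0, 1e-3, n * ny)

# Median over all samplings, as suggested in the review.
fast_scale = np.array([np.median(fast[j::n]) for j in range(n)])
slow_scale = np.array([np.median(slow[i * n:(i + 1) * n]) for i in range(ny)])
```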
```diff
 scaleList = []
 for i in range(len(self.data.shape)):
-    if i == mcaIndex:
+    if i == self.info['McaIndex']:
```
mcaIndex, self.info['McaIndex'] and stackMcaIndex.
I'm going to assume you know what you're doing because I have no idea. Same for all the other places where you changed it.
mcaIndex is the initial index of the channels. In most cases it goes into self.info["McaIndex"] to be stored, but in some specific cases self.info["McaIndex"] stores not mcaIndex but an exact value.
Line 459 in HDF5Stack1D.py, for example:

```python
if (not DONE) and (not considerAsImages):
    _logger.info("Data in memory as spectra")
    self.info["McaIndex"] = 2
    n = 0
```

So I believe that is what should be used. But in most cases they are the same.
How is 2 the exact value? Looks like an index to me, although it seems very random to suddenly choose 2.
The snake case is handled as a ROI Stack Plugin (ReverseStackPlugin). However, I repeat: the simplest and clearest way to handle all cases is to use the MaskScatterViewPlugin. You have a flattened stack and the associated motor positions. That's all.
My opinion is that generic scans are what should be supported: no regular mesh, no patch, but arbitrary data collection points according to any custom strategy. That case is already supported. You have a set of spectra measured at certain positions. Just make sure the positions are properly written in the input data file (they will be read automatically), or make sure they can be read from an external file.
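A hedged sketch of this generic case (synthetic data; names and ROI bounds are hypothetical, and this is not PyMca's implementation): N spectra measured at arbitrary positions need no grid at all. A ROI value per point is just a sum over channels, and the triple (x, y, roi_values) is everything a scatter view needs to display.

```python
import numpy as np

# N spectra at arbitrary, non-gridded positions.
npoints, nchan = 50, 128
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, npoints)                # arbitrary motor positions
y = rng.uniform(0.0, 1.0, npoints)
stack = rng.poisson(5.0, (npoints, nchan)).astype(float)

# ROI value per point: sum over a channel range (bounds are made up here).
roi_lo, roi_hi = 30, 60
roi_values = stack[:, roi_lo:roi_hi].sum(axis=1)
# (x, y, roi_values) is all a scatter view needs - no regular grid assumed.
```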
With the code as it is, the regular mesh case and the generic case are handled. In my opinion you are looking at the wrong place to intervene.
Probably you are right.
It can work, but it sounds complicated for a user to figure out. Or am I missing a possibility to "cut the corner"?
Would it make sense to you to have both possibilities? Because right now "standard" ESRF scans cannot be opened directly - I mean when the data is collected with shape N*M (N is the number of points, M is the number of channels) and the 2 motor arrays have shape of just N. I would like a direct possibility to open a "standard" scan without an extra workaround. But if I am missing the way, please point it out. For the general case, the Mask Scatter View approach described above can be used.
Yes, you do. If the motor positions are properly stored, they will be loaded automatically. We are generating the data, so let's generate it properly, and every code, not just PyMca, will be able to deal with it. PyMca is not there to work around data generation issues. If you want to improve the handling of non-regular stacks, it would be more convenient to have the alternative of replacing the regular ROI Imaging Window view by the Scatter one in the main ROI Imaging Window. Even the person issuing the request was wondering whether PyMca was the proper place to attack the problem. The way to go is to properly deal with the generic case, and not to try to combine patched acquisitions into a regular stack. Sooner or later you are going to face arbitrary acquisitions, and all this work will be useless for them, besides being prone to introducing bugs.
Support flattened data to be opened in the ROI tool.
Closing request ticket.