Adding nanoprisms model (pure python)#699

Merged
sara-mokhtari merged 25 commits into master from adding_nanoprisms_model
Apr 17, 2026

Conversation

@sara-mokhtari
Contributor

Description

Adding a pure python model for nanoprisms of different cross sections with orientation averaging using Fibonacci quadrature.

Fixes #685

How Has This Been Tested?

Unit tests pass.
The model was compared to an independent numerical calculation: for a gold nanoprism with a pentagonal cross section, agreement was found with the Debye equation (Debye calculator: https://github.com/FrederikLizakJohansen/DebyeCalculator). The Debye equation is based on atomic positions, while the SasView model is based on analytical expressions.
The model was also used to fit experimental data (an AuAg nanoprism with a pentagonal cross section, D = 30 nm, L = 117 nm).

Since it's a Python model, the open issue about supporting orientational dispersion in pure Python models (#695) still applies.

More tests should be made at small q. Indeed, like the previous model (octahedron_truncated), we encounter issues at very small q (q < 10⁻⁶ Å⁻¹). More precise mathematical expressions should be used there (see the PR #694 comments).

Note: the Fibonacci quadrature code (sasmodels/quadratures/fibonacci.py) was added in a new directory called "quadratures" because it could also be useful for other models.
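For readers unfamiliar with the scheme, here is a minimal sketch of Fibonacci-sphere orientation sampling (not the PR's exact code; the function and variable names are illustrative):

```python
import numpy as np

# Hypothetical minimal version of a Fibonacci sphere generator; the PR's
# actual implementation lives in sasmodels/quadratures/fibonacci.py.
def fibonacci_sphere(npoints):
    golden_ratio = (1.0 + 5.0**0.5) / 2.0
    indices = np.arange(npoints, dtype=float) + 0.5
    theta = np.arccos(1.0 - 2.0 * indices / npoints)  # polar angle
    phi = 2.0 * np.pi * indices / golden_ratio        # azimuthal angle
    # Unit vectors, nearly uniformly distributed over the sphere.
    return np.column_stack([np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)])

points = fibonacci_sphere(1000)
```

Because the points are equal-weighted, an orientation average is simply the unweighted mean of the intensity evaluated along each direction.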

To do:

  • take care of orientational dispersion for pure Python models
  • add numba acceleration (tests have to be made first)
  • take care of the singularities

Review Checklist:

[if using the editor, use [x] in place of [ ] to check a box]

Documentation (check at least one)

  • There is nothing that needs documenting
  • Documentation changes are in this PR
  • There is an issue open for the documentation (link?)

Installers

  • There is a chance this will affect the installers, if so
    • Windows installer (GH artifact) has been tested (installed and worked)
    • MacOSX installer (GH artifact) has been tested (installed and worked)
    • Wheels installer (GH artifact) has been tested (installed and worked)

Licensing (untick if necessary)

  • The introduced changes comply with SasView license (BSD 3-Clause)

@butlerpd
Member

butlerpd commented Mar 9, 2026

I have not had the chance to look at this yet, but note that you created a new directory at sasmodels.quadratures. I thought all our current quadratures reside in sasmodels.models.lib. It may be that we want to move quadrature code into its own directory, but that should be a discussion, I guess? Also, I am not sure that the higher level (under sasmodels rather than sasmodels.models) is the right answer.

@pkienzle
Contributor

pkienzle commented Mar 9, 2026

I don't remember the details, but I found that I needed to put the 2-Yukawa support libraries into sasmodels.TwoYukawa rather than sasmodels.models.TwoYukawa. Similarly, the additional quadrature support code cannot be in the sasmodels.models namespace.

We may want to create a sasmodels.lib package to contain the support libraries for pure python models. Currently these are in sasmodels.special, the sasmodels.TwoYukawa package, and the new sasmodels.quadrature module.

Note: PEP8 says package names should be snake_case. Somehow both the author (me) and the reviewers messed up with TwoYukawa. We can fix that if we move support libraries to lib.

I'll create a ticket for this.

I wasn't using the weights because of this line; not sure why I put it there.
Contributor

@pkienzle pkienzle left a comment


You could convert a lot of the steps to use numpy vector operators. I don't think numba will give a big speedup so I didn't try it. Torch would allow you to compute on the GPU, and that would probably be faster.

I think you are only running this with a small number of vertices. If the number of vertices were significantly larger than the number of q points then you may want to use vector operations across the vertices and loop over the q values. (This is unlikely, given that q points for all |q| shells in the Iq calculation are sent in as one big vector.)

Comment thread sasmodels/models/nanoprisms.py (outdated)
for j in range(2):
coordinates.append((extended_vertices[i+1][j]+extended_vertices[i][j])/2)
edgecenter.append(coordinates)
return edgecenter
Contributor

with numpy [2 x n] => [2 x n]:

return (vertices + np.roll(vertices, -1, axis=1))/2
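The suggested one-liner can be checked against the loop version on a toy polygon (a sketch; `edge_centers` is an illustrative name):

```python
import numpy as np

def edge_centers(vertices):
    # np.roll(..., -1, axis=1) pairs each vertex column with its successor
    # (wrapping around), so averaging yields every edge midpoint at once.
    return (vertices + np.roll(vertices, -1, axis=1)) / 2

# Unit square with corners as columns, in order.
square = np.array([[0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])
midpoints = edge_centers(square)  # [[0.5, 1.0, 0.5, 0.0], [0.0, 0.5, 1.0, 0.5]]
```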

Contributor Author

All the vectorization was done in the last commit 38207eb. I resolved all the comments where I applied your suggestions.

Comment thread sasmodels/models/nanoprisms.py (outdated)
# Compute intensity
intensity = Iqabc(qa, qb, qc, nsides, Rave, L) # shape (nq, npoints)
# Uniform average over the sphere
integral = np.sum(w[np.newaxis, :] * intensity, axis=1)
Contributor

Fibonacci is equal-weighted, so no need for w. If it weren't equal-weighted, then you could use integral = intensity @ w to produce the weighted sum in place rather than first forming the matrix Iqij*wi then summing the rows.
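A quick numpy illustration of the equivalence (toy data, not model code):

```python
import numpy as np

rng = np.random.default_rng(0)
intensity = rng.random((5, 7))  # shape (nq, npoints)
w = rng.random(7)               # quadrature weights

# Broadcasting forms the full (nq, npoints) product before summing...
broadcast = np.sum(w[np.newaxis, :] * intensity, axis=1)
# ...while the matrix-vector product accumulates the weighted sum in place.
matmul = intensity @ w
assert np.allclose(broadcast, matmul)
```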

Contributor Author

Indeed, the weights are all equal for the Fibonacci quadrature, so I simply replaced the sum with np.mean directly, done in 38207eb.

indices = np.arange(0, npoints_fibonacci, dtype=float) + 0.5
phi = np.arccos(1.0 - 2.0 * indices / npoints_fibonacci)
theta = 2.0 * np.pi * indices / ((1.0 + 5.0 ** 0.5) / 2.0)

Contributor

@pkienzle pkienzle Apr 1, 2026

theta and phi here are opposite to what we are using in the models. Make sure the x, y, z equations correspond to the correct a, b, c for the canonical shape orientation (c-axis along the beam, b-axis up, a-axis across).

Depending on the symmetries in your object you may not want to sample the entire sphere. If, for example, you make a scatter plot of (phi mod π/2, theta mod π/2), you will see that the points are no longer nicely distributed.

After swapping theta and phi, use theta = np.arccos(1 - indices/npoints) if you only need equator to pole, and change phi = (2π/φ·k) mod 2π to phi = (2π/φ·k) mod π/2 if you only need a quarter circle (φ is the golden ratio).


The calculation is fast on the entire sphere. To compute the orientation average, I therefore suggest we keep working on the entire sphere.
Things get more complicated if we are interested in specific directions. Since the golden ratio is irrational, the problem becomes case dependent. I believe it is out of scope for the moment.

I suggest the following changes to comply with SasView conventions (swap the orientation labels):

golden_ratio = (1.0 + 5.0**0.5) / 2.0
theta = np.arccos(1.0 - 2.0 * indices / npoints)  # polar
phi = 2.0 * np.pi * indices / golden_ratio        # azimuthal

x = np.sin(theta) * np.cos(phi)  # a axis
y = np.sin(theta) * np.sin(phi)  # b axis
z = np.cos(theta)                # c axis

Contributor

The problem is that the Fibonacci points on the sphere are not well distributed after you take symmetries into account. If the shape is mirror symmetric across all three dimensions then all points fold into a single quadrant. Within that quadrant they are not well distributed.

import numpy as np
import matplotlib.pyplot as plt
def fib(npoints, quadrant=False, fold=False):
     indices = np.arange(0, npoints, dtype=float) + 0.5
     theta = 1.0 - (1 if quadrant else 2.0) * indices/npoints
     theta = np.arccos(theta)
     golden_ratio = (1.0 + np.sqrt(5.0))/2.0
     phi = 2*np.pi * indices / golden_ratio
     phi = np.fmod(phi, np.pi/2 if quadrant else 2*np.pi)
     theta, phi = np.degrees(theta), np.degrees(phi)
     if fold:
         # theta is mirror symmetric around 90
         theta = 90 - abs(90 - theta)
         # phi is mirror symmetric around 180, and then around 90
         phi = 180 - abs(180 - phi)
         phi = 90 - abs(90 - phi)

     plt.subplot(211)
     plt.scatter(phi, theta, alpha=0.7)
     plt.xlabel("phi")
     plt.ylabel("theta")
     plt.title(f"Fibonacci points {quadrant=} {fold=}")
     plt.subplot(223)
     plt.hist(theta, bins=50)
     plt.xlabel("theta distribution")
     plt.subplot(224)
     plt.hist(phi, bins=50)
     plt.xlabel("phi distribution")

plt.figure()
fib(999, quadrant=False, fold=False)
plt.figure()
fib(999, quadrant=False, fold=True)
plt.figure()
fib(999, quadrant=True)
plt.show()
[Scatter plots and histograms of the Fibonacci point distributions]

Contributor

Correction:

...
cycle_length = np.pi/2 if quadrant else 2*np.pi
phi = cycle_length * indices / golden_ratio
phi = np.fmod(phi, cycle_length)
...
[Corrected quadrant distribution plot]

And the full sphere for comparison:
[Full-sphere distribution plot]

Contributor

Hypothesis: A nanoprism model computed with cycle length 2π/nsides for phi and theta in [0, π/2) will be more accurate than the full sphere for the same number of sample points. You will need to weight the sum by 2*nsides/npoints to get the full sphere equivalent from a single quadrant.

Contributor

BTW, here's the error curve for the same model (nsides=3, Rave=25 nm, L=4 nm) with 500 points throughout the q range instead of splitting it into intervals:

$ python -m sasmodels.compare nanoprisms,nanoprisms -double -highq background=0 -nq=40 nsides=3 L=40 Rave=250 npoints=1000000,500                                                  
PY[64] t=3170.07 ms, intensity=125290498
PY[64] t=2.11 ms, intensity=125285563
|npoints=1000000.0-npoints=500.0|                  max:6.310e+02  median:1.857e+01  98%:6.291e+02  rms:2.252e+02  zero-offset:+1.234e+02
|(npoints=1000000.0-npoints=500.0)/npoints=500.0|  max:1.695e+01  median:5.404e-04  98%:1.368e+01  rms:3.881e+00  zero-offset:+1.370e+00
[Error curve plot]

We can avoid memory problems by chunking the q values when the number of points is large, keeping the size of the (q_chunk x q_unit) outer product smaller than 10 million or so elements. This is tedious but necessary: I crashed my Mac when using too many Fibonacci points.

These details need to be hidden in a library rather than cluttering up every model that requires spherical integration.


Implementation of orientational average based on adaptive Fibonacci quadrature

Number of points used in orientation averaging

As q increases, the number of quadrature points should be increased, particularly once the form factor starts to oscillate. The number of quadrature points $N$ is thus related to the size of the nanoprism.

To account for this, the $q$-dependence of $N$ is set according to the nanoprism size parameters:

$$N(q) = \mathrm{clip}((q \cdot R_\mathrm{char})^2,\ N_\mathrm{min},\ N_\mathrm{max})$$

where:

  • $N_\mathrm{min} = 300$
  • $N_\mathrm{max} = (q_\mathrm{max} \cdot R_\mathrm{char})^2$
  • $R_\mathrm{char}$ is the characteristic size of the nanoprism:

$$R_\mathrm{char} = \max(R_\mathrm{circ},\ L/2)$$

$$R_\mathrm{circ} = \frac{R_\mathrm{ave}}{\sqrt{\mathrm{sinc}(2\pi/n)}}$$


Fix discontinuities in adaptive orientation averaging

The adaptive quadrature currently has a fundamental issue: quadratures for different $q$ are not nested.

When $N(q)$ changes, the integration grid changes as well. For two nearby $q$ values straddling a jump in $N$, different orientations are sampled, which introduces artificial discontinuities in $I(q)$. This affects Fibonacci, Gauss–Legendre, and Lebedev quadratures alike.

Proposed fix

Use linear interpolation between successive grids:

  • compute $I(q)$ at $N_{\mathrm{lo}}$ and $N_{\mathrm{hi}}$
  • interpolate using the position of $N_{\mathrm{exact}}$ between them

This removes discontinuities in $I(q)$.
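The interpolation step can be sketched as follows (a toy illustration of the weighting used in the proposal; `blend` is a hypothetical helper name):

```python
def blend(I_lo, I_hi, n_exact, n_lo, n_hi):
    # Position of the exact point count between the two bracketing grids.
    w_hi = 0.0 if n_hi == n_lo else (n_exact - n_lo) / (n_hi - n_lo)
    return I_lo + w_hi * (I_hi - I_lo)

# As n_exact sweeps from 500 to 1000, the blended intensity moves
# continuously from the N=500 answer to the N=1000 answer.
print(blend(2.0, 3.0, 500, 500, 1000))   # 2.0
print(blend(2.0, 3.0, 750, 500, 1000))   # 2.5
print(blend(2.0, 3.0, 1000, 500, 1000))  # 3.0
```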


Changes in the code

sasmodels/special/fibonacci.py

New function:

def orientation_average(q, R_char, Iqabc_func, *args,
                        N_MIN=300, N_MAX=None, N_GRID=500,
                        MAX_ELEMENTS=50_000_000):
    """
    Orientation average of a 3-D scattering intensity using adaptive Fibonacci
    quadrature.

    This function can be used by any SAS model whose form factor amplitude is
    available as a function of the three Cartesian components of the scattering
    vector. The number of quadrature points scales as N ~ (q * R_char)^2.

    Parameters
    ----------
    q : array_like
    R_char : float
    Iqabc_func : callable
    *args : forwarded to Iqabc_func
    """

    q = np.atleast_1d(q)

    if N_MAX is None:
        n_at_qmax = (float(q.max()) * R_char) ** 2
        N_MAX = max(int(np.ceil(n_at_qmax / N_GRID) * N_GRID), N_MIN)

    n_exact = np.clip((q * R_char) ** 2, N_MIN, N_MAX)

    n_lo = np.clip(
        (np.floor(n_exact / N_GRID) * N_GRID).astype(int), N_MIN, N_MAX)
    n_hi = np.clip(
        (np.ceil(n_exact / N_GRID) * N_GRID).astype(int), N_MIN, N_MAX)

    gap  = (n_hi - n_lo).astype(float)
    w_hi = np.where(gap > 0, (n_exact - n_lo) / gap, 0.0)

    all_n = np.union1d(np.unique(n_lo), np.unique(n_hi))
    sphere_cache = {int(n): fibonacci_sphere(int(n))[0] for n in all_n}

    def _eval(n_arr):
        out = np.empty_like(q, dtype=float)
        for n in np.unique(n_arr):
            n = int(n)
            q_unit = sphere_cache[n]
            mask = n_arr == n
            q_group = q[mask]
            chunk_size = max(1, MAX_ELEMENTS // n)
            grp = np.empty(len(q_group), dtype=float)
            for start in range(0, len(q_group), chunk_size):
                q_c = q_group[start:start + chunk_size]
                qa  = q_c[:, np.newaxis] * q_unit[:, 0]
                qb  = q_c[:, np.newaxis] * q_unit[:, 1]
                qc  = q_c[:, np.newaxis] * q_unit[:, 2]
                intensity = Iqabc_func(
                    qa.ravel(), qb.ravel(), qc.ravel(), *args
                ).reshape(qa.shape)
                grp[start:start + len(q_c)] = np.mean(intensity, axis=1)
            out[mask] = grp
        return out

    I_lo = _eval(n_lo)
    if (w_hi == 0.0).all():
        return I_lo
    I_hi = _eval(n_hi)
    return I_lo + w_hi * (I_hi - I_lo)

Key points:

  • Adaptive rule:
    $N(q) = \mathrm{clip}\left((q \cdot R_{\mathrm{char}})^2,\ N_{\min},\ N_{\max}\right)$
    where $N_{\min}=300$,
    and $N_{\max}=(q_{\max} \cdot R_{\mathrm{char}})^2$

  • Automatic $N_{\max}$ from $(q_{\max}, R_{\mathrm{char}})$

  • Linear interpolation between $N_{\mathrm{lo}}$ and $N_{\mathrm{hi}}$

  • Cached Fibonacci spheres (no duplication)

  • Chunking for large arrays


sasmodels/models/nanoprisms.py

Updated Iq:

def Iq(q, sld, sld_solvent, nsides: int, Rave, L):
    nsides = int(nsides)
    q = np.atleast_1d(q)

    R_circ = Rave / np.sqrt(np.sinc(2 / nsides))
    R_char = max(R_circ, L / 2)

    I_avg = orientation_average(q, R_char, Iqabc, nsides, Rave, L)
    return I_avg * (sld - sld_solvent) ** 2 * 1e-4

With:

$$ R_{\mathrm{char}} = \max\left(R_{\mathrm{circ}}, \frac{L}{2}\right), \quad R_{\mathrm{circ}} = \frac{R_{\mathrm{ave}}}{\sqrt{\mathrm{sinc}(2\pi/n)}} $$

For $n = 3$:

$$ R_{\mathrm{circ}} \approx 1.56\,R_{\mathrm{ave}} $$
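The factor can be verified numerically; note that numpy's sinc is normalized (np.sinc(x) = sin(πx)/(πx)), so the unnormalized sinc(2π/n) is written np.sinc(2/n), matching the model code above:

```python
import numpy as np

n = 3  # triangular cross section
# Circumradius factor 1/sqrt(sinc(2*pi/n)); np.sinc(2/n) supplies
# sin(2*pi/n)/(2*pi/n) because numpy's sinc is normalized.
factor = 1.0 / np.sqrt(np.sinc(2.0 / n))
print(round(factor, 2))  # 1.56
```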


Results

[Comparison plot]
  • Discontinuities in $I(q)$ removed
  • Relative error $\lesssim 1\%$
  • Speed-up $\sim 5.9\times$ (depends on particle size)

Summary

This makes adaptive orientation averaging continuous and more robust, with minimal overhead. The parameter n_points is no longer exposed directly; it is now constrained by the size parameters of the model.
To avoid memory overflow, the computation is split into chunks of at most MAX_ELEMENTS elements.

Feedback welcome.


Note that I haven't updated the code in the repository. I am not sure about the procedure and don't want to make a mess...

Contributor

I see that you are using (qr)² scaling to determine the number of integration points. This makes sense given that fringe spacing scales linearly with qr along both longitude and latitude, but I observed error ranging from flat to (qr)⁴ depending on the model parameters. A possible explanation is that we are severely under-sampling. Checking the model with Rave=250, L=500, q=0.1 I get about 150000 peaks around the equator of which about 20000 prominent. The meridian has about 40000 peaks, with 3000 prominent. Sampling with a half dozen points per peak would require 36 billion evaluations. It is only because the peaks are so similar to each other that we can get away with so few samples, but this does make it harder to predict the number of samples we need.

I'm surprised you are okay with 1% error. I thought the ΔI/I for x-rays was much smaller than that.

Because we are so heavily under-sampled, I am concerned that your error performance (< 1%) will not hold across the parameter domain or between models. Until we have explored more of the space I would stick with empirically discovered cutoffs.

For smaller qr I've been able to use fewer than 100 points. A floor of N_MIN=300 seems high.

I don't understand why you want to contaminate a high precision calculation with a low precision calculation. If you are concerned with jumps as you go from (q,r) to (q+ε,r+δ) I would expect the solution would be to calculate with enough precision that the jump is insignificant. Showing a worse value that looks better seems like the wrong approach. Smoothing the curve will make $\partial I / \partial r$ calculations a little better behaved, but I doubt it is worth showing worse values at 2x cost.

Contributor

Here is triaxial ellipsoid using Fibonacci:

#define MIN_N 100
#define MAX_N 1000000
#define BAD_N 10000000
static void
Fq(double q,
    double *F1,
    double *F2,
    double sld,
    double sld_solvent,
    double radius_equat_minor,
    double radius_equat_major,
    double radius_polar
)
{
    const double inv_golden_ratio = 2.0 / (1.0 + sqrt(5.0));
    const double qr_max = q*fmax(fmax(radius_equat_minor, radius_equat_major), radius_polar);
    //const int target = 1000000; // high accuracy non-adaptive
    //const int target = 5180;  // equivalent 76x76 gauss-legendre
    //const int target = (int)ceil(100. * qr_max*qr_max);  // qr squared
    /* Empirically determined cutoff values
    const int target =
        qr_max < 1.0 ? 100
        // qr_max < 15.0 ? 5000  // use at least 5000 points
        : (qr_max < 2.0 ? 500
        : (qr_max < 3.0 ? 1000
        : (qr_max < 15.0 ? 5000
        : (qr_max < 30.0 ? 30000
        : (qr_max < 80.0 ? 100000
        : (qr_max < 200.0 ? 300000
        : (qr_max < 3000.0 ? 1000000 : 0)))))));
    */
    //if (npoints == 1000000)
    //printf("q=%f qr_max=%f npoints=%d\n", q, qr_max, npoints);
    // The empirical cutoffs fit pretty well to (qr)^(3/2), so use that.
    const int target = (int)(ceil(100. * sqrt(qr_max*qr_max*qr_max)));
    const int npoints = min(max(target, MIN_N), MAX_N);
    if (target >= BAD_N) {
        *F1 = NAN;
        *F2 = NAN;
        return;
    }
    const double step = 2.0/npoints;
    const double cycle = 2.0*M_PI;
    const double ra_sq = square(radius_equat_minor);
    const double rb_sq = square(radius_equat_major);
    const double rc_sq = square(radius_polar);
    double sum_F1 = 0.0;
    double sum_F2 = 0.0;
    for (int k=0; k < npoints; k++) {
        const double index = k + 0.5;
        const double cos_theta = 1.0 - index*step;
        const double cos_theta_sq = cos_theta*cos_theta;
        // const double sin_theta = sin(acos(cos_theta)); // more accurate than sqrt(1-cos_theta^2) near 1
        const double sin_theta_sq = 1.0 - cos_theta_sq;
        const double phi = cycle*fmod(index*inv_golden_ratio, 1.0);
        double cos_phi_sq = square(cos(phi));
        double sin_phi_sq = 1.0 - cos_phi_sq;
        const double rsq = (ra_sq*sin_phi_sq + rb_sq*cos_phi_sq)*sin_theta_sq + rc_sq*cos_theta_sq;
        const double fq = sas_3j1x_x(q*sqrt(rsq));
        sum_F1 += fq;
        sum_F2 += fq*fq;
    }
    const double volume = form_volume(radius_equat_minor, radius_equat_major, radius_polar);
    const double contrast = (sld - sld_solvent);
    *F1 = 1.0e-2 * contrast * volume * sum_F1 / npoints;
    *F2 = 1.0e-4 * square(contrast * volume) * sum_F2 / npoints;
}

Initially I chose cutoffs to keep the error below 1e-5. I found that they fit pretty well to x^(3/2), so I replaced it with that. I don't think a blanket (qr)^2 scaling is the best approach.

Because this is in C code I can generate arbitrary Fibonacci sizes on the fly, so the code is much simpler, and there are no jumps in the error value.

Note that in the region where gauss-legendre integration is good it is much more accurate than the Fibonacci. I'm using a 1000 x 1000 gauss-legendre for comparison, but a million point Fibonacci gives similar results. I think for the triaxial ellipsoid model an adaptive gauss-legendre quadrature is the best approach.

@pkienzle
Contributor

pkienzle commented Apr 3, 2026

Code looks okay. Math seems to match the paper.

I would prefer to update the Fibonacci interface to allow for symmetries before merging. Once we define an interface in sasmodels.special we need to maintain backward compatibility indefinitely. If you imagine that we might add other integration rules to the mix, keeping the interfaces similar would be nice, so I suggest n or npoints as the argument rather than npoints_fibonacci.

It would be nice to decide how to control the number of integration points in the model. Ideally this would use the max qr for the enclosing cylinder, ≈ √(Rave² + L²), rather than making the user decide. The npoints_fibonacci parameter in Iq won't be used by SasView or the rest of sasmodels. I would remove it, but I'm okay with ignoring it until we have a better way to control integration.

For consistency with other models, I suggest using length, radius_average and n_sides. If you decide that the circumradius is a better parameter to report in a paper then the name radius would be good enough. For numbers of things I see n_shells, n_steps, n_aggreg, n_stacking, n, num_pearls, arms (for star polymer) and level (for unified power Rg). Discussion here.

Model names are singular (barbell, sphere, ellipsoid) not plural (nanoprisms).

@pkienzle
Contributor

pkienzle commented Apr 7, 2026

Regarding the model name, we do not have nanospheres or nanocylinders. You could use plain prism instead.

@marimperorclerc
Contributor

Regarding the model name, we do not have nanospheres or nanocylinders. You could use plain prism instead.

rectangular_prism is already a model in sasmodels.

Maybe regular_prism or simply prism could do it, indeed?

Almost there!

@marimperorclerc
Contributor

For consistency with other models, I suggest using length, radius_average and n_sides. If you decide that the circumradius is a better parameter to report in a paper then the name radius would be good enough. For numbers of things I see n_shells, n_steps, n_aggreg, n_stacking, n, num_pearls, arms (for star polymer) and level (for unified power Rg). Discussion here.

Yes, this set of choice is fine: length, radius_average and n_sides + npoints for number of points on Fibonacci sphere.

@pkienzle
Contributor

pkienzle commented Apr 8, 2026

I would prefer to keep n_points out of the model and use adaptive integration instead. I only exposed it so I could easily make the comparison plots.

@pkienzle
Contributor

Note comment in ticket #685: I found that Gauss-Legendre is much more accurate than Fibonacci for the same number of evaluation points.
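A 1-D toy comparison illustrates the point (this is not the sasmodels benchmark itself): Gauss-Legendre versus an equal-weighted midpoint rule, the 1-D analogue of Fibonacci's uniform weighting, on an oscillatory integrand.

```python
import numpy as np

def f(x):
    return np.cos(10 * x)  # oscillatory test integrand

exact = 2 * np.sin(10) / 10  # analytic integral of cos(10x) over [-1, 1]

n = 32
# Gauss-Legendre nodes and weights on [-1, 1].
x_gl, w_gl = np.polynomial.legendre.leggauss(n)
gl = np.sum(w_gl * f(x_gl))

# Equal-weighted midpoint rule with the same number of evaluations.
x_mid = -1 + 2 * (np.arange(n) + 0.5) / n
mid = np.sum(f(x_mid)) * (2 / n)

print(abs(gl - exact), abs(mid - exact))  # Gauss-Legendre error is far smaller
```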

@butlerpd
Member

Sorry I’ve been very slow to jump in here. But I have to say there seems to be a tremendous amount of outstanding and sophisticated work that has gone into this — way more than for any other model I think by far!!

It feels that in the process this may have become a case of allowing perfection to be the enemy of the good? As far as I can tell all the discussion is over the integration but that seems like something that can be iterated after the model is merged. Unless there is a problem of the model just not working but that does not seem to be the case?

I also note that there will be a code freeze soon for version 6.2 and an accompanying sasmodels version. The next opportunity to make it into a release will most likely be very late this year or, more likely, early next year (definitely after the Prague contributor camp). I would really like to have this model in the release. It covers a lot of ground in one shot!

SUGGESTION:

  • Assuming I am not missing something fundamental, may I suggest we focus a discussion on what is required to merge this PR and then move the rest to a new issue/branch for ongoing work?
  • For merging I only see the following but will admit I’ve not had time to delve into this too deeply:
    • I note that the .gitignore file is included - it should NOT be part of the merge. That should be a local thing only?
    • Otherwise, the main issue I would worry about before merging is in fact the nomenclature issue discussed and only because that is currently very painful to change later (both model name and parameter names).
      • In that respect I note that currently the name attribute and file name differ. It should not be the case but we seem to constantly end up in situations where that causes problems so I think we should make those the same whatever the final decision is.
      • NOTE: the entire parallelepiped category is prisms and thus probably belongs in the polyhedron category. We also have a rectangular_prism and hollow_prism. Oddly, we have a parallelepiped that should be the same as the rectangular prism, which should be the same as n=4 in this new "prism" model? We should check that they produce similar results, I suppose, and eventually discuss whether we really need all 3 in the core. This new model covers a lot of ground in one shot, as already mentioned! But that again is a future issue and one for a sasmodels working group to discuss and recommend IMO.
      • QUESTION: Do we think that “prism” makes sense given hollow_prism (which is rectangular etc) exists and that there may also be other prism models .. for example core shell ones? Perhaps “solid_prisms” (plural) is more likely to stand the test of time? What about the parameters? Do they all follow the SasView conventions?

@pkienzle
Contributor

TLDR; The names have been fixed. We could merge now and move adaptive integration to a separate ticket.

It feels that in the process this may have become a case of allowing perfection to be the enemy of the good? As far as I can tell all the discussion is over the integration but that seems like something that can be iterated after the model is merged.

The Fibonacci integration scheme has been proposed for several models. We've been using this particular model to explore it. If it were just for this model I would not have gone into such depth. Furthermore, this work has been enough to move forward on the simple adaptive integration PR #658, which was stuck on the 2-D integration models.

The current implementation uses a fixed integration scheme that is inaccurate for large shapes. This should be fixed.

I note that the .gitignore file is included - it should NOT be part of the merge. That should be a local thing only?

Agreed. There is a .gitignore in the root. It should not be in the subdirectory.

Oddly we have a parallelepiped that should be the same as the rectangular prism

Rectangular prism is parameterized by axis ratio and parallelepiped by lengths. Whether we should have many variants in the same model or a different model for each variant is a separate ticket (#215). Reparameterization (#211) addresses this, but it might be nice if the standard variants were supplied with sasmodels. I suggested a way to do this in #215 that should not require too many changes to sasmodels or sasview.

we should check that they produce similar results

Already done. The radius_average parameter makes this slightly non-trivial. For example,

$ python -m sasmodels.compare parallelepiped,prism n_sides=4 length_c=100 length_a=20 length_b=length_a length=length_c "radius_average=sqrt(length_a*length_a/pi)" -pars -double
==== parallelepiped =====
scale: 1
background: 0.001
sld: 4
sld_solvent: 1
length_a: 20
length_b: 20
length_c: 100
==== nanoprisms =====
scale: 1
background: 0.001
sld: 4
sld_solvent: 1
n_sides: 4
radius_average: 11.2838
length: 100
DLL[64] t=29.23 ms, intensity=4402
PY[64] t=5.55 ms, intensity=4402
|parallelepiped-nanoprisms|               max:1.934e-05  median:9.078e-08  98%:1.904e-05  rms:6.564e-06  zero-offset:+3.185e-06
|(parallelepiped-nanoprisms)/nanoprisms|  max:7.955e-07  median:2.523e-09  98%:7.726e-07  rms:2.507e-07  zero-offset:+1.148e-07

Ideally the difference would be on the order of 1e-11. Maybe that will be the case after we switch to Gauss-Legendre integration.

Do we think that “prism” makes sense given hollow_prism (which is rectangular etc) exists and that there may also be other prism models .. for example core shell ones?

Prism seems fine. Rectangular prism and hollow prism are specific forms. If we implement a more general prism with arbitrary polygonal cross section then we can call it "polygonal_prism". This is like "cylinder", "hollow_cylinder", "core_shell_cylinder" and "elliptical_cylinder".

What about the parameters? Do they all follow the SasView conventions?

They have been updated to follow our conventions (thanks @sara-mokhtari !)

@butlerpd
Member

Thanks @pkienzle .. and I did read it all :-). And I 1000% agree about the importance of growing our integration support for sasmodels. This has been a recurring theme over the years, and most recently in the information gathering for a sasmodels working group. So I agree that this work has clearly been a tremendous catalyst, with big shout outs to the group.

I do think the new integration method should be separated from the model, but I guess the problem is that this model relies on an integration method that does not yet exist in sasmodels? So maybe it cannot be pulled out into a separate PR? That would be a bit annoying since, as you say, it is important to make any new integration methods as robust as possible, which then has the knock-on effect of preventing this model from being merged. Any thoughts on that, and/or a timeline for having a version of the integration ready in time for the upcoming code freeze?

@marimperorclerc
Copy link
Copy Markdown
Contributor

From my point of view, the prism model is ready to merge! :)

  • It is running well with its current implementation on the branch adding_nanoprisms_model, with a fixed number of integration points on the whole Fibonacci sphere; this number can be increased at will if needed (at the price of slowing down the computation). So far, with the default of only a few hundred points, we get fast and reliable results when fitting SAXS curves of different types of nanoparticles (pentagonal rods, triangular platelets, etc.).

  • Last point of attention: where fibonacci.py is placed in the branch adding_nanoprisms_model, in order to anticipate where other adaptive integration methods will be implemented.
    Might be a good idea to double-check it before merging the PR?
    It is currently placed at sasmodels/special/fibonacci.py, with the current special.py renamed to __init__.py. By the way, the folder called quadrature was a test but is not used anymore and can be removed.

  • Any other thoughts about it?

PS: we have two more models, tetrahedron and truncated tetrahedron, coming soon, based on the same integration scheme. They should be ready next week or so. :)

@pkienzle
Contributor

pkienzle commented Apr 16, 2026

I'm okay with it as long as you add a ticket for adaptive integration on the new models.

[Edit] ... after removing quadratures directory and sasmodels/models/.gitignore

Contributor

@pkienzle pkienzle left a comment

Need to remove sasmodels/models/.gitignore and sasmodels/quadrature/*

@sara-mokhtari
Contributor Author

I removed both sasmodels/models/.gitignore and sasmodels/quadrature/*. I updated the name of the model (name = prism) to match the filename. I will also add a ticket for the integration discussion.
I will merge soon; if anyone disagrees, let me know!
@marimperorclerc don't hesitate to read the model again if you can.
Thank you all for your amazing help.

updated description of model (first line)
DOI of Wuttke's reference corrected
updated last revision day
@sara-mokhtari sara-mokhtari merged commit b30ebaf into master Apr 17, 2026
26 checks passed
@sara-mokhtari sara-mokhtari deleted the adding_nanoprisms_model branch April 17, 2026 14:59
