Merged

46 commits
2433ebc
fixes
Alexandr-Solovev Oct 27, 2025
a70133b
Merge branch 'main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Oct 28, 2025
587e56a
fixes
Alexandr-Solovev Oct 28, 2025
18de8a2
fixes
Alexandr-Solovev Oct 28, 2025
d73e127
Merge branch 'main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Oct 30, 2025
686c39e
minor fix
Alexandr-Solovev Oct 30, 2025
cd72243
fixes
Alexandr-Solovev Nov 4, 2025
cb98883
fixes
Alexandr-Solovev Nov 4, 2025
faad3eb
Merge branch 'main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Nov 4, 2025
c936caf
fixes
Alexandr-Solovev Nov 4, 2025
74dadc1
fixes
Alexandr-Solovev Nov 4, 2025
3b9c2d4
fixes
Alexandr-Solovev Nov 5, 2025
b7720f2
Merge branch 'main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Nov 5, 2025
12dec1e
fixes
Alexandr-Solovev Nov 6, 2025
53371e3
fixes
Alexandr-Solovev Nov 10, 2025
36d559f
fixes
Alexandr-Solovev Nov 10, 2025
04bccfa
Merge branch 'main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Nov 10, 2025
0691ded
fixes
Alexandr-Solovev Nov 12, 2025
d3e1dc0
fixes
Alexandr-Solovev Nov 12, 2025
3b1023d
minor fixes
Alexandr-Solovev Nov 13, 2025
c5d27f3
minor fix
Alexandr-Solovev Nov 13, 2025
d540617
Merge branch 'main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Nov 17, 2025
61121e6
Merge branch 'main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Nov 18, 2025
ada55bd
test: add run_samples in conda-recipe
Alexandr-Solovev Nov 18, 2025
aa14a37
fixes
Alexandr-Solovev Nov 18, 2025
192fea5
fixes
Alexandr-Solovev Nov 19, 2025
b248f7d
more test
Alexandr-Solovev Nov 19, 2025
eca8e3a
Merge branch 'uxlfoundation:main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Nov 20, 2025
3e68566
fixes
Alexandr-Solovev Nov 20, 2025
fa514f6
fixes
Alexandr-Solovev Nov 21, 2025
de3e143
fixes
Alexandr-Solovev Nov 21, 2025
baafc9a
fixes
Alexandr-Solovev Nov 21, 2025
fc252e7
fixes
Alexandr-Solovev Nov 26, 2025
0dabad2
Merge branch 'main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Nov 26, 2025
6e4f3f8
fixes
Alexandr-Solovev Nov 26, 2025
2197800
fixes
Alexandr-Solovev Nov 27, 2025
4ad6339
docs update
Alexandr-Solovev Nov 27, 2025
3ea4eb7
minor update
Alexandr-Solovev Nov 27, 2025
22517dc
fixes
Alexandr-Solovev Nov 28, 2025
e8aae0c
fixes
Alexandr-Solovev Nov 28, 2025
1ed1f7f
Merge branch 'main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Dec 1, 2025
00bfc1b
Merge branch 'main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Dec 2, 2025
729af95
fixes
Alexandr-Solovev Dec 2, 2025
3f82f6a
fixes
Alexandr-Solovev Dec 2, 2025
d191a04
Merge branch 'main' into dev/asolovev_spmd_cpu
Alexandr-Solovev Dec 3, 2025
d0dcfbc
docs fixes
Alexandr-Solovev Dec 4, 2025
20 changes: 19 additions & 1 deletion INSTALL.md
@@ -247,7 +247,7 @@ It is possible to integrate various sanitizers by specifying the REQSAN flag, av

---

After having built the library, if one wishes to use it for building [scikit-learn-intelex](https://github.com/uxlfoundation/scikit-learn-intelex/tree/main) or for executing the usage examples, one can set the required environment variables to point to the generated build by sourcing the script that it creates under the `env` folder. The script will be located under `__release_{os_name}[_{compiler_name}]/daal/latest/env/vars.sh` and can be sourced with a POSIX-compliant shell such as `bash`, by executing something like the following from inside the `__release*` folder:
After having built the library, if one wishes to use it for building [scikit-learn-intelex](https://github.com/uxlfoundation/scikit-learn-intelex/tree/main) or for executing the usage examples or samples, one can set the required environment variables to point to the generated build by sourcing the script that it creates under the `env` folder. The script will be located under `__release_{os_name}[_{compiler_name}]/daal/latest/env/vars.sh` and can be sourced with a POSIX-compliant shell such as `bash`, by executing something like the following from inside the `__release*` folder:

```shell
cd daal/latest
@@ -293,6 +293,24 @@ For example, in a Linux platform, assuming one wishes to execute the `adaboost_d

DPC++ examples (running on devices supported by SYCL, such as GPU) from oneAPI are also auto-generated within these folders when oneDAL is built with DPC++ support (target `oneapi` in the Makefile), but be aware that it requires a DPC++ compiler such as ICX, and executing the examples requires the DPC++ runtime as well as the GPGPU drivers. The DPC++ examples can be found under `examples/oneapi/dpc`.

oneDAL samples are also auto-generated in `daal/latest/samples/oneapi/cpp/` (Multi-CPU) and `daal/latest/samples/oneapi/dpc/` (Multi-GPU) when oneDAL is built with DPC++ support (target `oneapi` in the Makefile). Note that building and running the samples requires a DPC++ compiler such as ICX, and multi-process execution requires MPI/CCL.

* oneAPI samples:

```shell
cd daal/latest/samples/oneapi/cpp/mpi
mkdir -p build
cd build
cmake ..
make -j$(nproc)
```

Once built, the samples can be run with `mpirun`, specifying the number of processes:

```shell
mpirun -n {num_processes} ./_cmake_results/{platform_name}/{example}
```
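
For instance, with four processes on Linux the invocation could look like the following; the platform directory and sample binary name here are illustrative placeholders, not names guaranteed by the build:

```shell
# Hypothetical names: substitute the platform directory and sample binary
# produced by your own cmake build.
mpirun -n 4 ./_cmake_results/intel_intel64_so/linear_regression_distributed_mpi
```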

### Executing examples with ASAN

When building oneDAL with ASAN (flags `REQSAN=address`, typically combined with `REQDBG=yes`), building and executing the generated examples requires additional steps - **assuming a Linux system** (ASAN on Windows has not been tested):
1 change: 1 addition & 0 deletions conda-recipe/test-devel.sh
@@ -62,3 +62,4 @@ run_examples oneapi dynamic
run_examples oneapi static
run_examples daal dynamic
run_examples daal static
# TODO: add testing for samples
@@ -23,6 +23,7 @@
#include "oneapi/dal/backend/interop/table_conversion.hpp"

#include "oneapi/dal/backend/primitives/ndarray.hpp"
#include "oneapi/dal/backend/primitives/utils.hpp"

#include "oneapi/dal/table/row_accessor.hpp"

@@ -31,6 +32,8 @@
#include "oneapi/dal/algo/linear_regression/backend/model_impl.hpp"
#include "oneapi/dal/algo/linear_regression/backend/cpu/train_kernel.hpp"
#include "oneapi/dal/algo/linear_regression/backend/cpu/train_kernel_common.hpp"
#include "oneapi/dal/algo/linear_regression/backend/cpu/partial_train_kernel.hpp"
#include "oneapi/dal/algo/linear_regression/backend/cpu/finalize_train_kernel.hpp"

namespace oneapi::dal::linear_regression::backend {

@@ -54,6 +57,56 @@ using batch_lr_kernel_t = daal_lr::training::internal::BatchKernel<Float, daal_l
template <typename Float, daal::CpuType Cpu>
using batch_rr_kernel_t = daal_rr::training::internal::BatchKernel<Float, daal_rr_method, Cpu>;

template <typename Float, typename Task>
static train_result<Task> call_daal_spmd_kernel(const context_cpu& ctx,
const detail::descriptor_base<Task>& desc,
const detail::train_parameters<Task>& params,
const table& data,
const table& resp) {
auto& comm = ctx.get_communicator();

/// Compute partial X^T * X and X^T * y on each rank
partial_train_input<Task> partial_input(data, resp);
auto partial_result =
dal::linear_regression::backend::partial_train_kernel_cpu<Float, method::norm_eq, Task>{}(
ctx,
desc,
params,
partial_input);
/// Get the local partial X^T * X and X^T * y as array<Float> to pass to the collective allreduce
const auto& xtx_local = partial_result.get_partial_xtx();
const auto& xty_local = partial_result.get_partial_xty();
const auto xtx_local_nd = pr::table2ndarray<Float>(xtx_local);
const auto xty_local_nd = pr::table2ndarray<Float>(xty_local);
const auto xtx_local_ary =
dal::array<Float>::wrap(xtx_local_nd.get_mutable_data(), xtx_local_nd.get_count());
const auto xty_local_ary =
dal::array<Float>::wrap(xty_local_nd.get_mutable_data(), xty_local_nd.get_count());
/// Dimensions of X^T * X and X^T * y, used to wrap the reduced buffers below
const std::int64_t ext_feature_count = xtx_local.get_row_count();
const std::int64_t response_count = xty_local.get_row_count();

/// Sum the partial X^T * X and X^T * y across all ranks in place
comm.allreduce(xtx_local_ary).wait();
comm.allreduce(xty_local_ary).wait();
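/// Summing with allreduce is valid because the normal-equations statistics are
/// additive over row-partitioned data: X^T X = sum_r X_r^T X_r and
/// X^T y = sum_r X_r^T y_r, so after the reduction every rank holds the same
/// cross-products a single node would compute on the concatenated data.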

auto xtx_table = homogen_table::wrap(xtx_local_ary, ext_feature_count, ext_feature_count);
auto xty_table = homogen_table::wrap(xty_local_ary, response_count, ext_feature_count);
/// Compute regression coefficients
partial_train_result<Task> partial_result_final;
partial_result_final.set_partial_xtx(xtx_table);
partial_result_final.set_partial_xty(xty_table);
auto result =
dal::linear_regression::backend::finalize_train_kernel_cpu<Float, method::norm_eq, Task>{}(
ctx,
desc,
params,
partial_result_final);

return result;
}

template <typename Float, typename Task>
static train_result<Task> call_daal_kernel(const context_cpu& ctx,
const detail::descriptor_base<Task>& desc,
@@ -171,6 +224,13 @@ static train_result<Task> train(const context_cpu& ctx,
const detail::descriptor_base<Task>& desc,
const detail::train_parameters<Task>& params,
const train_input<Task>& input) {
if (ctx.get_communicator().get_rank_count() > 1) {
return call_daal_spmd_kernel<Float, Task>(ctx,
desc,
params,
input.get_data(),
input.get_responses());
}
return call_daal_kernel<Float, Task>(ctx,
desc,
params,
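For context, a minimal sketch of how this CPU SPMD path could be driven end to end, following the pattern of oneDAL's existing SPMD samples. The queue-free host communicator factory and the data wiring below are assumptions for illustration, not code taken from this PR:

```cpp
#include <mpi.h>
#include <vector>

#include "oneapi/dal/algo/linear_regression.hpp"
#include "oneapi/dal/spmd/mpi/communicator.hpp"
#include "oneapi/dal/table/homogen.hpp"

namespace dal = oneapi::dal;

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Assumption: a host (CPU) communicator can be created without a SYCL queue;
    // the exact factory signature may differ in this PR.
    auto comm = dal::preview::spmd::make_communicator<dal::preview::spmd::backend::mpi>();

    // Tiny rank-local block of X (4x2) and y (4x1); real code would load a partition.
    const std::vector<float> x = { 1.f, 2.f, 3.f, 4.f, 5.f, 6.f, 7.f, 8.f };
    const std::vector<float> y = { 3.f, 7.f, 11.f, 15.f };
    const auto x_local = dal::homogen_table::wrap(x.data(), 4, 2);
    const auto y_local = dal::homogen_table::wrap(y.data(), 4, 1);

    const auto desc = dal::linear_regression::descriptor<float>{};
    // With rank_count > 1, dispatch takes the call_daal_spmd_kernel path above.
    const auto result = dal::preview::train(comm, desc, x_local, y_local);
    (void)result; // e.g. inspect result.get_model() here

    MPI_Finalize();
    return 0;
}
```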
13 changes: 7 additions & 6 deletions cpp/oneapi/dal/algo/linear_regression/detail/infer_ops.cpp
@@ -23,19 +23,20 @@ namespace v1 {

using dal::detail::host_policy;

template <typename Float, typename Method, typename Task>
struct infer_ops_dispatcher<host_policy, Float, Method, Task> {
infer_result<Task> operator()(const host_policy& ctx,
template <typename Policy, typename Float, typename Method, typename Task>
struct infer_ops_dispatcher<Policy, Float, Method, Task> {
infer_result<Task> operator()(const Policy& ctx,
const descriptor_base<Task>& desc,
const infer_input<Task>& input) const {
using kernel_dispatcher_t = dal::backend::kernel_dispatcher<KERNEL_SINGLE_NODE_CPU(
using kernel_dispatcher_t = dal::backend::kernel_dispatcher<KERNEL_UNIVERSAL_SPMD_CPU(
backend::infer_kernel_cpu<Float, Method, Task>)>;
return kernel_dispatcher_t()(ctx, desc, input);
}
};

#define INSTANTIATE(F, M, T) \
template struct ONEDAL_EXPORT infer_ops_dispatcher<host_policy, F, M, T>;
#define INSTANTIATE(F, M, T) \
template struct ONEDAL_EXPORT infer_ops_dispatcher<dal::detail::host_policy, F, M, T>; \
template struct ONEDAL_EXPORT infer_ops_dispatcher<dal::detail::spmd_host_policy, F, M, T>;

INSTANTIATE(float, method::norm_eq, task::regression)
INSTANTIATE(double, method::norm_eq, task::regression)
4 changes: 2 additions & 2 deletions cpp/oneapi/dal/algo/linear_regression/detail/train_ops.cpp
@@ -37,7 +37,7 @@ struct train_ops_dispatcher<Policy, Float, Method, Task> {
const descriptor_base<Task>& desc,
const train_input<Task>& input) const {
using kernel_dispatcher_t = dal::backend::kernel_dispatcher< //
KERNEL_SINGLE_NODE_CPU(parameters::train_parameters_cpu<Float, Method, Task>)>;
KERNEL_UNIVERSAL_SPMD_CPU(parameters::train_parameters_cpu<Float, Method, Task>)>;
return kernel_dispatcher_t{}(ctx, desc, input);
}

@@ -54,7 +54,7 @@ struct train_ops_dispatcher<Policy, Float, Method, Task> {
const train_parameters<Task>& params,
const train_input<Task>& input) const {
using kernel_dispatcher_t = dal::backend::kernel_dispatcher< //
KERNEL_SINGLE_NODE_CPU(backend::train_kernel_cpu<Float, Method, Task>)>;
KERNEL_UNIVERSAL_SPMD_CPU(backend::train_kernel_cpu<Float, Method, Task>)>;
return kernel_dispatcher_t{}(ctx, desc, params, input);
}
};
1 change: 0 additions & 1 deletion cpp/oneapi/dal/algo/linear_regression/test/spmd.cpp
@@ -19,7 +19,6 @@
namespace oneapi::dal::linear_regression::test {

TEMPLATE_LIST_TEST_M(lr_spmd_test, "LR common flow", "[lr][spmd]", lr_types) {
SKIP_IF(this->get_policy().is_cpu());
SKIP_IF(this->not_float64_friendly());

this->generate(777);
4 changes: 4 additions & 0 deletions cpp/oneapi/dal/algo/linear_regression/train_types.hpp
@@ -119,6 +119,8 @@ class train_input : public base {

train_input(const table& data);

virtual ~train_input() = default;

/// The training set X
/// @remark default = table{}
const table& get_data() const;
@@ -266,6 +268,8 @@ class partial_train_input : protected train_input<Task> {
partial_train_input(const partial_train_result<Task>& prev,
const partial_train_input<Task>& input);

virtual ~partial_train_input() = default;

const table& get_data() const {
return train_input<Task>::get_data();
}
42 changes: 42 additions & 0 deletions cpp/oneapi/dal/backend/dispatcher.hpp
@@ -31,6 +31,9 @@
#define KERNEL_SINGLE_NODE_CPU(...) \
KERNEL_SPEC(::oneapi::dal::backend::single_node_cpu_kernel, __VA_ARGS__)

#define KERNEL_UNIVERSAL_SPMD_CPU(...) \
KERNEL_SPEC(::oneapi::dal::backend::universal_spmd_cpu_kernel, __VA_ARGS__)

#define KERNEL_SINGLE_NODE_GPU(...) \
KERNEL_SPEC(::oneapi::dal::backend::single_node_gpu_kernel, __VA_ARGS__)

@@ -152,6 +155,9 @@ inline auto dispatch_by_device(const detail::data_parallel_policy& policy,
/// Tag that indicates CPU kernel for single-node
struct single_node_cpu_kernel {};

/// Tag that indicates universal CPU kernel for single-node and SPMD modes
struct universal_spmd_cpu_kernel {};

/// Tag that indicates GPU kernel for single-node
struct single_node_gpu_kernel {};

@@ -209,9 +215,45 @@ struct kernel_dispatcher<kernel_spec<single_node_cpu_kernel, CpuKernel>> {
throw unimplemented{ msg::algorithm_is_not_implemented_for_this_device() };
});
}
template <typename... Args>
auto operator()(const detail::spmd_data_parallel_policy& policy, Args&&... args) const
-> cpu_kernel_return_t<CpuKernel, Args...> {
// We have to specify the return type for this function as the compiler cannot
// infer it from a body that consists of a single `throw` expression
using msg = detail::error_messages;
throw unimplemented{ msg::spmd_version_of_algorithm_is_not_implemented_for_this_device() };
}
#endif
};

/// Dispatcher for the case of a multi-node CPU algorithm based on the universal SPMD kernel
template <typename CpuKernel>
struct kernel_dispatcher<kernel_spec<universal_spmd_cpu_kernel, CpuKernel>> {
template <typename... Args>
auto operator()(const detail::host_policy& policy, Args&&... args) const {
return CpuKernel{}(context_cpu{}, std::forward<Args>(args)...);
}

template <typename... Args>
auto operator()(const detail::spmd_host_policy& policy, Args&&... args) const {
return CpuKernel{}(context_cpu{ policy }, std::forward<Args>(args)...);
}

#ifdef ONEDAL_DATA_PARALLEL
template <typename... Args>
auto operator()(const detail::data_parallel_policy& policy, Args&&... args) const {
return dispatch_by_device(
policy,
[&]() {
return CpuKernel{}(context_cpu{}, std::forward<Args>(args)...);
},
[&]() -> cpu_kernel_return_t<CpuKernel, Args...> {
// We have to specify the return type for this lambda as the compiler cannot
// infer it from a body that consists of a single `throw` expression
using msg = detail::error_messages;
throw unimplemented{ msg::algorithm_is_not_implemented_for_this_device() };
});
}
template <typename... Args>
auto operator()(const detail::spmd_data_parallel_policy& policy, Args&&... args) const
-> cpu_kernel_return_t<CpuKernel, Args...> {
4 changes: 1 addition & 3 deletions cpp/oneapi/dal/detail/spmd_policy.cpp
@@ -37,10 +37,8 @@ const spmd::communicator<MemoryAccessKind>& spmd_policy_base<MemoryAccessKind>::
return impl_->comm;
}

template class ONEDAL_EXPORT spmd_policy_base<spmd::device_memory_access::usm>;
// Implicit instantiation occurs in the header when generating the spmd_policy class;
// this must be corrected.
template class spmd_policy_base<spmd::device_memory_access::none>;
template class spmd_policy_base<spmd::device_memory_access::usm>;

} // namespace v1
} // namespace oneapi::dal::detail
2 changes: 1 addition & 1 deletion cpp/oneapi/dal/detail/spmd_policy.hpp
@@ -28,7 +28,7 @@ template <typename MemoryAccessKind>
class spmd_policy_impl;

template <typename MemoryAccessKind>
class spmd_policy_base : public base {
class ONEDAL_EXPORT spmd_policy_base : public base {
public:
explicit spmd_policy_base(const spmd::communicator<MemoryAccessKind>& comm);

8 changes: 8 additions & 0 deletions makefile
@@ -428,6 +428,13 @@ release.SAMPLES.CPP := $(if $(wildcard $(SAMPLES.srcdir)/daal/cpp/*),
$(filter $(spat),$(shell find $(SAMPLES.srcdir)/daal/cpp -type f)) \
) \
)
release.SAMPLES.ONEDAL.CPP := $(if $(wildcard $(SAMPLES.srcdir)/oneapi/cpp/*), \
$(if $(OS_is_mac), \
$(filter $(spat),$(shell find $(SAMPLES.srcdir)/oneapi/cpp -not -wholename '*mpi*' -type f)) \
, \
$(filter $(spat),$(shell find $(SAMPLES.srcdir)/oneapi/cpp -type f)) \
) \
)
release.SAMPLES.ONEDAL.DPC := $(if $(wildcard $(SAMPLES.srcdir)/oneapi/dpc/*), \
$(if $(OS_is_mac), \
$(filter $(spat),$(shell find $(SAMPLES.srcdir)/oneapi/dpc -not -wholename '*mpi*' -type f)) \
@@ -1079,6 +1086,7 @@ $2: $1 | $(dir $2)/. ; $(value cpy)
$(if $(filter %.sh %.bat,$2),chmod +x $$@)
endef
$(foreach d,$(release.SAMPLES.CPP), $(eval $(call .release.d,$d,$(subst $(SAMPLES.srcdir),$(RELEASEDIR.samples),$(subst _$(_OS),,$d)),_release_c)))
$(foreach d,$(release.SAMPLES.ONEDAL.CPP), $(eval $(call .release.d,$d,$(subst $(SAMPLES.srcdir),$(RELEASEDIR.samples),$(subst _$(_OS),,$d)),_release_oneapi_dpc)))
$(foreach d,$(release.SAMPLES.ONEDAL.DPC), $(eval $(call .release.d,$d,$(subst $(SAMPLES.srcdir),$(RELEASEDIR.samples),$(subst _$(_OS),,$d)),_release_oneapi_dpc)))
$(foreach d,$(release.SAMPLES.CMAKE), $(eval $(call .release.d,$d,$(subst $(SAMPLES.srcdir),$(RELEASEDIR.samples),$d),_release_common)))

47 changes: 47 additions & 0 deletions samples/oneapi/cpp/ccl/CMakeLists.txt
@@ -0,0 +1,47 @@
#===============================================================================
# Copyright contributors to the oneDAL project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#===============================================================================

cmake_minimum_required(VERSION 3.16)

set(ONEDAL_INTERFACE yes)
set(ONEDAL_DISTRIBUTED yes)
set(ONEDAL_USE_CCL no)
set(ONEDAL_DISTRIBUTED_CPU yes)
set(MPIEXEC_MAX_NUMPROCS "4" CACHE STRING "Number of processes")
set(MPIEXEC_NUMPROCS_PER_NODE "4" CACHE STRING "Number of processes per node")


# Add cmake scripts and modules to CMake search path
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/../../../cmake/")
include(setup_samples)

project(oneapi_dpc_samples)

find_package(oneDAL REQUIRED)

find_dependencies()
set_link_type()
set_common_compiler_options()

include_directories(sources)

# Initialize the EXCLUDE_LIST variable
set(EXCLUDE_LIST "sources/*.hpp")

# Define variable to specify the samples or directories to include or exclude
option(SAMPLES_LIST "")

generate_samples("${EXCLUDE_LIST}" "${SAMPLES_LIST}")
25 changes: 25 additions & 0 deletions samples/oneapi/cpp/ccl/data/linear_regression_test_data.csv
@@ -0,0 +1,25 @@
4.7408e+00,-4.3203e+00,2.0963e-01,-1.5789e+00,
3.2199e+00,-4.1992e+00,1.5187e+00,2.4452e+00,
-2.1517e+00,3.2832e+00,4.0083e+00,4.3406e+00,
4.6496e+00,-1.4519e+00,-3.5445e+00,1.5077e+00,
3.7411e+00,1.9831e+00,-4.6572e+00,-2.4883e-01,
-3.7311e+00,-4.7193e+00,4.6284e+00,-3.7915e+00,
-7.1414e-01,1.0602e+00,-3.6055e+00,5.8303e-01,
-1.5016e+00,1.9514e+00,-3.5752e+00,2.9763e+00,
-8.7124e-01,2.8677e+00,-4.8682e-01,7.6945e-01,
-3.1116e-02,9.7549e-02,3.6944e+00,-2.1663e+00,
-1.3821e+00,-4.0194e+00,-3.5427e+00,-2.0139e+00,
-4.1779e+00,8.3570e-01,-3.2471e+00,2.3163e+00,
-4.1854e+00,-8.7692e-02,4.4636e+00,-4.1856e+00,
1.2469e+00,1.3116e+00,-2.3832e+00,3.4908e+00,
4.8300e+00,-2.8320e+00,3.6545e+00,5.5179e-01,
3.5933e+00,-3.2084e-01,-1.8436e+00,-2.3313e+00,
-1.9571e+00,-2.5447e+00,3.8619e+00,4.1992e+00,
-4.0046e+00,3.5057e+00,-1.3584e-01,2.6378e+00,
-4.5083e+00,-1.9631e+00,-3.8710e+00,-1.0727e+00,
2.6675e+00,-1.8064e+00,2.0341e+00,9.4878e-01,
-4.7467e+00,-2.8794e+00,-2.7395e-01,-4.5898e+00,
-2.6709e+00,3.9516e+00,3.7859e+00,-1.5429e+00,
1.6294e+00,-1.5427e+00,-1.8827e+00,1.1832e+00,
3.3005e+00,-3.3550e+00,2.8902e+00,-4.4256e+00,
-2.8893e+00,4.2992e+00,-2.6801e+00,1.0146e+00,