Merge pull request #274 from jcmgray/fitting_updates
add fit(method="tree") and fix ALS for complex TNs
jcmgray authored Dec 18, 2024
2 parents e7019c6 + 5bc29ca commit 8e119dc
Showing 10 changed files with 914 additions and 255 deletions.
9 changes: 7 additions & 2 deletions docs/changelog.md
@@ -2,11 +2,16 @@

Release notes for `quimb`.

(whats-new-1-9-1)=
## v1.9.1 (unreleased)
(whats-new-1-10-0)=
## v1.10.0 (unreleased)

**Enhancements:**

- tensor network fitting: add `method="tree"` for when the ansatz is a tree - [`tensor_network_fit_tree`](quimb.tensor.fitting.tensor_network_fit_tree) (see the usage sketch after this list)
- tensor network fitting: fix `method="als"` for complex-valued tensor networks
- tensor network fitting: allow `method="als"` to use an iterative solver suited to much larger tensors, by default a custom conjugate gradient implementation.
- [`tensor_network_distance`](quimb.tensor.fitting.tensor_network_distance) and fitting: support hyper indices explicitly via `output_inds` kwarg
- add [`tn.make_overlap`](quimb.tensor.tensor_core.TensorNetwork.make_overlap) and [`tn.overlap`](quimb.tensor.tensor_core.TensorNetwork.overlap) for computing the overlap between two tensor networks, $\langle O | T \rangle$, with explicit handling of outer indices to address hyper networks. Also add the `output_inds` and `squared` kwargs to [`tn.norm`](quimb.tensor.tensor_core.TensorNetwork.norm) and [`tn.make_norm`](quimb.tensor.tensor_core.TensorNetwork.make_norm).
- replace all `numba` based parallelism (`prange` and parallel vectorize) with explicit thread pool based parallelism. This should be more reliable, and there is no longer any need to set `NUMBA_NUM_THREADS`. Remove env var `QUIMB_NUMBA_PAR`.
- [`Circuit`](quimb.tensor.circuit.Circuit): add `dtype` and `convert_eager` options. `dtype` specifies the data type the computation should be performed in. `convert_eager` specifies whether to apply this (and any `to_backend` calls) as soon as gates are applied (the default for MPS circuit simulation) or just prior to contraction (the default for exact contraction simulation).
- [`tn.full_simplify`](quimb.tensor.tensor_core.TensorNetwork.full_simplify): add a `check_zero` option (by default set to `"auto"`) which explicitly checks for zero tensor norms when equalizing norms, to avoid `log10(norm)` resulting in `-inf` or `nan`. Since it creates a data dependency that breaks e.g. `jax` tracing, it is optional.
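
Based only on the changelog entries above, use of the new fitting and overlap features might look roughly like the sketch below. The networks, bond dimensions, and exact call signatures are illustrative assumptions, not taken from this diff:

```python
import quimb.tensor as qtn

# a target network and a lower bond dimension ansatz to fit to it;
# here both are MPS, so the ansatz is tree structured
target = qtn.MPS_rand_state(10, bond_dim=8)
ansatz = qtn.MPS_rand_state(10, bond_dim=4)

# fit the ansatz to the target, with method="tree" being the new
# option for tree structured ansatzes (method="als" remains available)
fitted = ansatz.fit(target, method="tree")

# check the quality of the fit via the distance and (new) overlap
d = fitted.distance(target)
ov = fitted.overlap(target)
print(d, ov)
```
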
2 changes: 2 additions & 0 deletions quimb/tensor/__init__.py
@@ -110,6 +110,7 @@
HTN_dual_from_edges_and_fill_fn,
HTN_from_clauses,
HTN_from_cnf,
HTN_rand,
HTN_random_ksat,
MPO_ham_heis,
MPO_ham_ising,
@@ -281,6 +282,7 @@
"HTN_dual_from_edges_and_fill_fn",
"HTN_from_clauses",
"HTN_from_cnf",
"HTN_rand",
"HTN_random_ksat",
"HTN2D_classical_ising_partition_function",
"HTN3D_classical_ising_partition_function",
8 changes: 8 additions & 0 deletions quimb/tensor/array_ops.py
@@ -234,6 +234,14 @@ def norm_fro(x):
norm_fro.register("numpy", norm_fro_dense)


@norm_fro.register("autograd")
def norm_fro_autoray(x):
# seems to be bug with autograd's linalg.norm and complex numbers
# https://github.com/HIPS/autograd/issues/666
# so implement manually
return do("sum", do("abs", x) ** 2) ** 0.5


def sensibly_scale(x):
    """Take an array and scale it *very* roughly such that random tensor
    networks consisting of such arrays do not have gigantic norms.
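
The autograd workaround above just computes the Frobenius norm as $\sqrt{\sum_i |x_i|^2}$. A small standalone check (using plain `numpy` and `autoray`, not part of this diff) shows that this manual form agrees with `numpy.linalg.norm` for complex arrays:

```python
import numpy as np
from autoray import do

# complex test array, the case that trips up autograd's linalg.norm
x = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)

# manual frobenius norm: sqrt of the sum of absolute values squared
manual = do("sum", do("abs", x) ** 2) ** 0.5

# matches numpy's own frobenius norm
assert np.isclose(manual, np.linalg.norm(x))
```
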
2 changes: 1 addition & 1 deletion quimb/tensor/contraction.py
@@ -6,7 +6,7 @@
import contextlib
import collections

import cotengra as ctg
import cotengra as ctg


_CONTRACT_STRATEGY = 'greedy'