Make better use of context when calling eig vs eigh #677
base: develop
Conversation
…s used when we need the eigenvectors, eigenvalues, and inverse matrix of eigenvectors; it includes a check for whether the input is Hermitian. Move `check_rank_one_density` from a local function defined inside `fidelity` to a function defined in the optools module. Replace most calls to `eig` with calls to `eigendecomposition`.
…tian and `assume_normal` keyword arguments, and selects intelligently among `eigvalsh`, `schur`, and `eigvals`.
…`.linalg.eigvals` with this new function
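As an illustration of the intended call-site migration, here is an assumed usage sketch (the import path and the example matrix are made up for this sketch, not taken from the diff):

```python
import numpy as np
from pygsti.tools.optools import eigenvalues  # assumed import path

rho = np.array([[0.7, 0.2j], [-0.2j, 0.3]])   # Hermitian input
evals_old = np.linalg.eigvals(rho)            # old call: generic nonsymmetric solver
evals_new = eigenvalues(rho)                  # new call: detects Hermitian input, uses eigvalsh
```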
```python
# TODO: discuss correctness of this function. I think it might assume that both are
# tp and that b is unitary. We should probably just transform the result of
# eigenvalue_entanglement_infidelity, rather than compute from scratch.
```
Potential correctness issue in reportables.py::eigenvalue_avg_gate_infidelity.
```python
# TODO: discuss correctness of this function. I think it might assume that both are
# tp and that b is unitary.
```
Potential correctness issue in reportables.py::eigenvalue_diamondnorm.
```python
    -------
    float
    """
    # TODO: discuss meaning of this function.
```
It's not obvious to me what eigenvalue_nonunitary_diamondnorm is supposed to do. It has a similar structure to other functions whose correctness I'm unsure of.
```python
def eigenvalues(m: _np.ndarray, *, assume_hermitian: Optional[bool] = None, assume_normal: bool = False):
    if assume_hermitian is None:
        assume_hermitian = is_hermitian(m)
    if assume_hermitian:
        # Make sure it's Hermitian in exact arithmetic. This helps with
        # reproducibility across different implementations of LAPACK.
        # (Work on a symmetrized copy rather than mutating the caller's array.)
        m = (m + m.T.conj()) / 2
        return _np.linalg.eigvalsh(m)
    elif assume_normal:
        # For a normal matrix the complex Schur form is diagonal, so its
        # diagonal carries the eigenvalues.
        t, _ = _spl.schur(m, output='complex')
        evals = _np.diag(t)
        return evals
    else:
        return _np.linalg.eigvals(m)
```
```python
def eigendecomposition(m: _np.ndarray, *, assume_hermitian: Optional[bool] = None):
    if assume_hermitian is None:
        assume_hermitian = is_hermitian(m)
    if assume_hermitian:
        # Make sure it's Hermitian in exact arithmetic. This helps with
        # reproducibility across different implementations of LAPACK.
        # (Work on a symmetrized copy rather than mutating the caller's array.)
        m = (m + m.T.conj()) / 2
        evals, evecs = _np.linalg.eigh(m)
        # Eigenvectors of a Hermitian matrix are orthonormal, so the inverse
        # of the eigenvector matrix is just its conjugate transpose.
        inv_evecs = evecs.T.conj()
    else:
        evals, evecs = _np.linalg.eig(m)
        inv_evecs = _np.linalg.inv(evecs)
    return evecs, evals, inv_evecs
```
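A small self-check of the return order (eigenvectors, eigenvalues, inverse of the eigenvector matrix), written as an assumed usage sketch rather than code from this PR, with `eigendecomposition` taken to be in scope:

```python
import numpy as np

a = np.random.default_rng(1).standard_normal((4, 4))
m = a + a.T                                       # real symmetric, hence Hermitian
evecs, evals, inv_evecs = eigendecomposition(m)   # assume_hermitian inferred
# The three outputs reassemble the input: m = V diag(w) V^{-1}.
assert np.allclose(evecs @ np.diag(evals) @ inv_evecs, m)
```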
The star in the function signature means that only one positional argument is allowed; all remaining arguments must be passed as keyword arguments. This gives us more flexibility to change these signatures in the future.
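For readers less familiar with the bare `*`, a minimal illustration of the restriction, using the `eigenvalues` signature above (and assuming it is in scope):

```python
import numpy as np

m = np.eye(2)
eigenvalues(m, assume_hermitian=True)   # OK: flag passed by keyword
eigenvalues(m, True)                    # TypeError: extra arguments must be passed by keyword
```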
```python
def check_rank_one_density(mat, scalar_tol: Optional[Union[float, _np.floating]] = None) -> tuple[int, _np.ndarray]:
    """
    mat is Hermitian of order n. This function uses an O(n^2) time randomized
    algorithm to test if mat is a PSD matrix of rank 0 or 1. It returns a tuple
    (r, vec), where

      * If r == 0, then vec is the zero vector. Either mat's numerical rank is
        zero OR the projection of mat onto the set of PSD matrices is zero.

      * If r == 1, then mat is a PSD matrix of numerical rank one, and vec is
        mat's unique nontrivial eigenvector.

      * If r == 2, then our best guess is that mat's (numerical) rank is at
        least two. In exact arithmetic, this "guess" is correct with probability
        one. The value of `vec` is ignored but provided for type-checking
        purposes. Additional computations will be needed to determine if mat is PSD.

    Conceptually, this function just takes a single step of the power iteration
    method for estimating mat's largest eigenvalue (with size measured in
    absolute value). See https://en.wikipedia.org/wiki/Power_iteration for more
    information.
    """
    if scalar_tol is None:
        scalar_tol = _np.finfo(mat.dtype).eps ** 0.5
        # ^ use for checks that have no dimensional dependence; about 1e-8 for double precision.
    vec_tol = (mat.shape[0] ** 0.5) * scalar_tol
    # ^ use for checks that do have dimensional dependence (will naturally increase for larger matrices)

    n = mat.shape[0]

    if _np.linalg.norm(mat) < vec_tol:
        # We prefer to return the zero vector instead of None to simplify how we handle
        # this function's output.
        return 0, _np.zeros(n, dtype=complex)

    rng = _np.random.default_rng(0)
    test_vec = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    test_vec /= _np.linalg.norm(test_vec)

    candidate_v = mat @ test_vec
    candidate_v /= _np.linalg.norm(candidate_v)
    alpha = _np.real(candidate_v.conj() @ mat @ candidate_v)
    reconstruction = alpha * _np.outer(candidate_v, candidate_v.conj())

    if _np.linalg.norm(mat - reconstruction) > vec_tol:
        # We can't certify that mat is rank-1.
        return 2, _np.zeros(n)

    if alpha <= 0.0:
        # Ordinarily we'd project out the negative eigenvalues and proceed with the
        # PSD part of the matrix, but at this point we know that the PSD part is zero.
        return 0, _np.zeros(n)

    if abs(alpha - 1) > scalar_tol:
        message = f"The input matrix is not trace-1 up to tolerance {scalar_tol}. Beware result!"
        _warnings.warn(message)
    candidate_v *= _np.sqrt(alpha)

    return 1, candidate_v
```
This function used to be defined locally inside the fidelity function.
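As an assumed usage sketch (not code from this PR), here is how the rank-one certification behaves on a pure-state density matrix, with `check_rank_one_density` taken to be importable from the optools module:

```python
import numpy as np

psi = np.array([1.0, 1.0j]) / np.sqrt(2)      # normalized state vector
rho = np.outer(psi, psi.conj())               # rank-1, trace-1, PSD density matrix
r, vec = check_rank_one_density(rho)
assert r == 1                                 # certified numerical rank one
assert np.allclose(np.outer(vec, vec.conj()), rho)
```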
```python
# TODO: discuss the significance of this i==0 codepath.
```
Please review optools.py::std_process_mx_to_unitary. I'd like to discuss the significance of this i==0 codepath.
```python
dslc = slice(dstart, dstart + kk)
# enforce block conjugacy needed to retain Uproj conjugacy structure
D[dslc, dslc] = D[slc, slc].conj()

def _get_eigenspace_pairs(mx, tol=1e-6):
```
All changes past this point come from simplifying an `if True: ...` block to remove the unreachable code.
@coreyostrove and @sserita, this is ready for review. @enielse, I have correctness concerns for three functions in reportables.py. These functions were written over 6 years ago, so they were likely written by you. Can you take a look?
This PR aims to resolve #582. While making these changes I found a couple of potential correctness issues in reportables.py. I've left comments on those specific functions.