Replies: 8 comments
-
I don't think there should be a result data structure; the kernel should just return what you write. Given there's already a runtime, can we avoid starting from scratch and instead propose something based on the current runtime?
-
IMO, I have to say I don't like this RFC if we are talking about result values. To explain more about why this is bad: for example, if one writes

    @kernel
    def main():
        qreg = qasm2.new(3)
        return qreg

this already indicates returning a register; we should not add new syntax like bloqade.debug.log(reg, "register").
After taking a closer look, I think what you ACTUALLY want here is a way to pass logs (and whatever is printed into the log) back. I think we should settle #225 before heading into this.

In this case, the logs and whatever side effects get captured on the server side should be passed back as a piece of metadata instead of showing up directly in the result value; otherwise it will mess up what the program means. Instead of modifying the return value, which breaks the semantics convention, a possible route is that we pass the logs back in metadata and have them go into
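For concreteness, here is a minimal sketch of what "logs as metadata" could mean, assuming a hypothetical `KernelResult` container; none of these names are existing bloqade API.

```python
# Hypothetical sketch: logs travel alongside the result, not inside it.
from dataclasses import dataclass, field
from typing import Any


@dataclass
class KernelResult:
    value: Any                                     # exactly what the kernel returned, untouched
    metadata: dict = field(default_factory=dict)   # server-side logs, timing, etc.


# The return value keeps its meaning; logs are looked up separately:
# result = run(main)              # hypothetical runner
# register = result.value         # the same object the kernel returned
# for line in result.metadata.get("logs", []):
#     print(line)
```

The point of the sketch is only that the kernel's return value stays untouched while captured side effects ride along in a separate slot.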
-
OK, I missed this paragraph. I think the current runtime has covered a large range of the use cases above; we just need to make sure we can return the same thing from the server. Basically, anything can be discussed further except modifying return values; that's a hard no for me.
-
When you start to move stuff outside the kernel, you run into the issue of having an interface that, by definition, can't be supported by the machine. I would focus future RFCs on things like the list you have here:
-
Respectfully, all this means is that you wrote something without first considering requirements and use cases. We are writing a product for users, to be maximally useful for projects and development, not a tool that is maximally easy for the package developer. These requirements are very close to your proposed solution #226. The analysis functions are a use case that depends on the requirement of having unified, consistent, and interpretable result structures. Just dumping some numpy array completely decontextualizes the data in a really bad way. At least having a qubit and register runtime value gives some notion of a basis.

The fundamental problem with extracting (exact) expectation values and wavefunctions is that they are functionally unphysical. We agreed in #218 to have the wavefunction and observables float around outside the kernel, instead of requiring a new set of dialects that includes representing shots. We could do another RFC on exactly how we can do simulation and wavefunctions in more complicated scenarios, such as mid-circuit feed-forward, or when you want to collect the wavefunction at multiple places along the execution.

Whether the richer analysis is sugaring kernels with logs, or some other method, having the ability to extract the action of the program with a particular simulator backend is a good use case to enable users to understand the action of their programs. An MVP could be a simulator that simply extracts the wavefunction at the end (how?? must a user do measurements??), where the user must have different kernels for machine submission vs. simulation.
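As a hedged illustration of that MVP split (the `emulate` and `submit` entry points below are assumptions, not existing bloqade API): the simulator path hands back a final statevector that ordinary Python functions analyze outside the kernel, while the hardware path only ever returns measurement counts.

```python
import numpy as np


def analyze(psi: np.ndarray) -> dict:
    """Post-hoc analysis on the simulator's final state, outside any kernel."""
    probs = np.abs(psi) ** 2
    return {"norm": float(probs.sum()), "p_all_zero": float(probs[0])}


# psi = emulate(main)                 # hypothetical simulator path: final statevector
# counts = submit(main, shots=1000)   # hypothetical hardware path: only counts
# print(analyze(psi))
```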
-
You need to give concrete examples of what requirements are missing. Can you provide a use case that isn't covered by the previous RFC?
-
Requirement: a consistent, reproducible, and extensible representation and generation of the data and results produced from the execution of (quantum) kernels. Beyond the top-level representations, it becomes implementation details.

Use cases: integration into larger workflows beyond the scope of bloqade APIs, as well as an ensemble of analysis and helper functions such as cross-entropy, fidelity, averaging, expectation values, and so forth.

To narrow the focus, perhaps we should consider the output of the state-vector simulation. I proposed something here and in #230, but just dumping a numpy vector is inadequate.
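For example, a structured result (bitstring counts rather than a bare numpy array) is already enough to support helpers like cross-entropy. This is only a sketch under that assumption, not a proposed bloqade API.

```python
import math
from collections import Counter


def cross_entropy(counts: Counter, ideal_probs: dict) -> float:
    """Cross-entropy between sampled bitstrings and ideal probabilities."""
    shots = sum(counts.values())
    return -sum(n / shots * math.log(ideal_probs[b]) for b, n in counts.items())


counts = Counter({"000": 480, "111": 520})   # hypothetical hardware samples
ideal = {"000": 0.5, "111": 0.5}             # hypothetical ideal distribution
print(cross_entropy(counts, ideal))          # ~= log(2)
```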
-
Concrete examples please, e.g.:

- So we covered this in a previous RFC and everyone seems to agree it works.
- Cross-entropy between what and what? Give an example. Presumably you would need to run the kernel on hardware to really get the most out of this, but I could imagine you could do simulations with the noise model.
- Fidelity with respect to what? What are the states, and where do they come from? What if you run a stabilizer simulation using PyQrack; how does this help you? How does this work with STIM or other backends? How do you deal with atoms that have been lost when sampling using the atom loss model?
-
Once a kernel is executed using an interpreter, it must generate some data for analysis. There are only so many relevant data types, so we can start to lock in the basics. The big question is figuring out how to do observables: returned quantum data lives in a basis hosted by qubits, which are possibly decontextualized from their definition within a kernel.
These results would come from execution of some job, aka
I propose thinking about extracting wavefunctions and other internal state as a "debug" mode. One can have a register, list of qubits, or wires as a return value, in which case an emulator can simply return the quantum state. But the same piece of code should also be runnable on hardware, where you cannot return a state. So I propose a debug dialect (like the print statements one uses in Python) that a simulator can use to log variables, whether classical or quantum.
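A rough sketch of that intent, with a hypothetical `debug.log` that an emulator intercepts while hardware treats it as a no-op; none of this is existing bloqade API.

```python
# Hypothetical sketch of the "debug dialect" idea.
class debug:
    _sink = None  # an emulator can install a callback here; hardware leaves it None

    @staticmethod
    def log(value, label: str):
        if debug._sink is not None:   # simulator: capture the labeled variable
            debug._sink(label, value)
        # no sink installed (hardware): the statement is a no-op and can be stripped


# @kernel
# def main():
#     qreg = qasm2.new(3)
#     debug.log(qreg, "register")   # emulator records the state here
#     return qreg
```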
There are two kinds of runtime values: ClassicalRuntimeValues, which are runtime values within the kernel, and QuantumRuntimeValues, which are wavefunctions, probability distributions, or stochastic samples.
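A minimal sketch of what those two kinds could look like as plain dataclasses; the field names are assumptions for illustration, not a concrete proposal.

```python
from dataclasses import dataclass
from typing import List, Union

import numpy as np


@dataclass
class ClassicalRuntimeValue:
    name: str
    value: Union[int, float, bool, list]   # classical value produced inside the kernel


@dataclass
class QuantumRuntimeValue:
    kind: str          # "statevector" | "distribution" | "samples"
    data: np.ndarray   # amplitudes, probabilities, or shot samples
    basis: List[str]   # qubit labels, implicitly in the Z basis
```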
We can separate observables and expectation values, because they are now functions which take a QuantumRuntimeValue as input! This, of course, runs the risk of decoupling qubit labels from the statevector. There's no good way around this: the statevector is not a true quantity of reality, so it's on the user to manage matching qubit labels with the statevector. Nonetheless, the statevector should carry around its basis as an attribute; for a 2^N statevector this can be as simple as a list of qubits, implicitly in the Z basis.
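Continuing the sketch above, an expectation value is then just an ordinary function of a QuantumRuntimeValue that consults the basis attribute. The bit-ordering convention (basis[0] labels the most significant bit of the state index) is an assumption for illustration.

```python
import numpy as np


def expectation_z(state, qubit: str) -> float:
    """<Z> on one qubit of a 2^N statevector held in a QuantumRuntimeValue."""
    idx = state.basis.index(qubit)          # locate the qubit via its label
    n = len(state.basis)
    probs = np.abs(state.data) ** 2
    signs = np.array([1 if (i >> (n - 1 - idx)) & 1 == 0 else -1
                      for i in range(2 ** n)])
    return float(np.dot(signs, probs))


# psi = QuantumRuntimeValue(kind="statevector",
#                           data=np.array([1, 0, 0, 0, 0, 0, 0, 0], dtype=complex),
#                           basis=["q0", "q1", "q2"])
# expectation_z(psi, "q0")   # -> 1.0 for |000>
```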
This RFC can be seen as a MINIMAL working set for results. There can be much more here, such as helper functions that compute correlation functions, statistical distance, fidelity, and so forth, but they ultimately act on these basic classes. I am not sure if this fits the requirements of error correction; maybe we can hash that out soon.
A key point to make here is that these data, when linked appropriately with an interpreter, can be part of the building blocks of much larger packages and code. These structures are the use case! I'm being intentionally vague as to how the interpreters generate these data; that's more of an implementation question. But having a robust way to do emulations and simulation is critical... to be discussed...