Extracting coherence information from random circuits (QuantumCasts)

  • [MUSIC PLAYING]

  • JULIAN KELLY: My name is Julian Kelly.

  • I am a quantum hardware team lead in Google AI Quantum.

  • And I'm going to be talking about extracting coherence

  • information from random circuits via something

  • we're calling speckle purity benchmarking.

  • So I want to start by quickly reviewing

  • cross-entropy benchmarking, which we sometimes call XEB.

  • So the way that this works is we are

  • going to decide on some random circuits.

  • These are interleaved layers of randomly chosen

  • single-qubit gates and fixed two-qubit gates.

  • We then take this circuit, and we send a copy of it

  • to the quantum computer.

  • And then we send a copy of it to a simulator.

  • We take this simulator as the ideal distribution

  • of what the quantum evolution should have done.

  • And we compare the probabilities to what

  • the quantum computer did, the measured probabilities.

  • And we expect the ideal probabilities

  • to be sampled from the Porter-Thomas distribution.

  • We can then assign a fidelity for the sequence

  • by comparing the measured and ideal probabilities to figure

  • out how well the quantum computer has done.

  • And, again, I want to emphasize that we are comparing

  • the measured fidelity against the expected

  • unitary evolution of what this circuit should have done.
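
As a concrete illustration of that comparison, here is a minimal sketch of the linear cross-entropy fidelity estimator commonly used for XEB: it averages the simulator's ideal probabilities of the bitstrings the quantum computer actually returned. The function name and inputs are illustrative, not taken from the talk.

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs, measured_bitstrings, num_qubits):
    """Estimate XEB fidelity from measured bitstrings and simulated ideal probabilities.

    ideal_probs: length-2**num_qubits array of the simulator's probabilities
                 for one random circuit.
    measured_bitstrings: integer indices of the bitstrings the hardware returned.
    """
    d = 2 ** num_qubits  # Hilbert space dimension
    # For a perfect device sampling Porter-Thomas this average is ~2/d; for a
    # fully depolarized device it is ~1/d, so the estimator runs from 1 to 0.
    avg_p = np.mean(ideal_probs[measured_bitstrings])
    return d * avg_p - 1
```

In practice this per-circuit estimate is averaged over many random circuit instances at each depth.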

  • So here's what experimental cross-entropy benchmarking

  • data looks like.

  • So what we're going to do is we're

  • going to take data both over a number of cycles and also

  • random circuit instances.

  • And, when we look at the raw probabilities,

  • we see that it looks pretty random.

  • So what we're going to do is we then

  • take the unitary that we expect to get,

  • and we can then compute the fidelity.

  • And then, out of this random looking data,

  • we see this nice fidelity decay.

  • So the way that this works is that the randomly chosen

  • single-qubit gates effectively depolarize the errors, so they, more or less,

  • add up.

  • And we get this exponential decay.

  • We can fit this and extract an error per cycle

  • for our XEB sequence.

  • And this is the error per cycle, given some expected

  • unitary evolution.
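
A sketch of that fit, under the simple depolarizing picture just described: fidelity versus cycle number is fit to A·pᵐ, and the error per cycle is read off from the decay. The exact conversion from depolarization to error per cycle depends on conventions (average vs. Pauli error, dimension factors), so the last line is only one common choice.

```python
import numpy as np
from scipy.optimize import curve_fit

def error_per_cycle(cycles, fidelities, num_qubits):
    """Fit an exponential decay F(m) = A * p**m to per-cycle XEB fidelities."""
    decay = lambda m, a, p: a * p ** m
    (a, p), _ = curve_fit(decay, cycles, fidelities, p0=[1.0, 0.99])
    d = 2 ** num_qubits
    # One common convention: depolarizing (process) error per cycle.
    return (1 - p) * (1 - 1 / d ** 2)
```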

  • So this is fantastic, but we might

  • be interested in understanding where exactly our errors are

  • coming from.

  • So this is where a technique known as purity benchmarking

  • comes in.

  • And, essentially, what we're going to do

  • is we're going to take this XEB sequence,

  • and then we're going to append it with state tomography.

  • So, in state tomography, we run a collection of experiments

  • to extract the density matrix rho.

  • We can then figure out the purity, which is, more or less,

  • the trace of rho squared.

  • You can think of this as, essentially,

  • the length of a Bloch vector squared, an n-dimensional Bloch

  • vector.

  • So what happens if we then look at the length of this Bloch

  • vector as a function of the number of cycles

  • in the sequence?

  • We can look at the decay of it, and that

  • tells us something about incoherent errors

  • that we're adding to the system.

  • So, basically, the Bloch vector will

  • shrink if we have noise or decoherence.

  • So purity benchmarking allows us to figure out

  • the incoherent error per cycle.

  • And the incoherent error is the decoherence error lower bound.

  • And it's the best error that we could possibly get.
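
For reference, a minimal sketch of the tomography-based quantity being described: given a reconstructed density matrix rho, compute Tr(rho²) and rescale it so that 0 means fully decohered and 1 means pure (the same rescaling is written out explicitly later in the talk). Function names are illustrative.

```python
import numpy as np

def purity(rho):
    """Tr(rho^2): squared length of the (generalized) Bloch vector, up to offsets."""
    return np.real(np.trace(rho @ rho))

def rescaled_purity(rho):
    """Rescale so a fully mixed state gives 0 and a pure state gives 1."""
    d = rho.shape[0]
    return (purity(rho) - 1 / d) * d / (d - 1)

# Example: a single qubit halfway between a pure state and the fully mixed state.
rho = 0.5 * np.array([[1, 0], [0, 0]]) + 0.5 * np.eye(2) / 2
print(purity(rho), rescaled_purity(rho))  # 0.625, 0.25
```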

  • So this is great.

  • We love this.

  • We use this all the time.

  • It turns out that predicting incoherent error

  • is harder than you think, and it's very nice

  • to have a measurement in situ in the experiment that you're

  • carrying out to measure it directly.

  • So, with that, I'm going to tell a little bit of a story.

  • So, last March Meeting, I was sitting

  • in some of the sessions, and I was inspired

  • to try some new type of gate.

  • So I decided to run back to my hotel room

  • and code it up and remote in and get things running.

  • And I took this data, and my experimentalist intuition

  • started tingling.

  • And I was thinking that, wow, the performance of this gate

  • seems really bad.

  • So, when I was looking at the raw XEB data,

  • I noticed that, for a low number of cycles,

  • the data looked quite speckly, and I

  • was pretty happy with the amount of speckliness

  • that was happening down here.

  • But, when I went to a high number of cycles,

  • I noticed these speckles started to wash out.

  • And I found the degree of speckliness

  • to be a little bit disappointing, not so speckly.

  • So I started thinking about this a bit more.

  • And this is pretty interesting because I'm

  • looking at the data, and I'm assessing the quality somehow,

  • but I don't even know what unitary

  • I've been actually performing.

  • And shouldn't the XEB data just be completely random?

  • So I formed an extremely simple hypothesis.

  • Maybe this speckle contrast is actually

  • telling us something about the purity, right?

  • So, in some sense, what I'm saying

  • is that, if there's low contrast,

  • it's telling us that our system is decohered in some way.

  • So I did something very, very simple,

  • which is I just took a cut like this,

  • and I took the variance of that, and I plotted it

  • versus cycle number.

  • And I see this nice exponential decay.
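
A minimal sketch of that very simple procedure, assuming the raw XEB data is arranged as a 2D array of measured probabilities indexed by circuit instance and cycle number: the variance across instances is computed at each cycle and then plotted (or fit) versus cycle number.

```python
import numpy as np

# probs[i, m]: measured probability (e.g. of a fixed bitstring) for random
# circuit instance i at cycle number m -- the "speckly" raw XEB data.
def speckle_contrast_vs_cycle(probs):
    """Variance across circuit instances at each cycle number."""
    return np.var(probs, axis=0)

# This curve decays exponentially with cycle number as the state decoheres,
# which is the observation described in the talk.
```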

  • So, after this, I got excited, and I went to talk to some

  • of our awesome theory friends like Sergio Boixo

  • and [? Sasha ?] [? Karakov. ?] And they helped set me straight

  • by going through some of the math.

  • So what we're going to do is we're first

  • going to define a rescaled version of purity that

  • looks like this.

  • And you see that, essentially, this is trace of rho squared.

  • And there's some dimension factors

  • that have to do with the size of the Hilbert space.

  • We define it this way so that purity equals 0 corresponds

  • to a completely decohered state.

  • And a purity of 1 corresponds to a completely pure state.

  • We then assume a depolarizing channel model,

  • which is to say that the density matrix is some polarization

  • parameter p times a completely pure state psi plus 1

  • minus p times the completely decohered uniform distribution

  • 1/D. Then, if you plug these into each other,

  • we can see that these parameters directly relate.

  • The polarization parameter is directly related

  • to the square root of purity, which kind of makes

  • sense because the polarization is telling us how much we're

  • in a pure state versus how much we're in a decohered state.
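
In equations, the definitions just described and the resulting relation look like this (a sketch of the algebra, consistent with the rescaling and the depolarizing model stated above):

```latex
% Rescaled purity: 0 for the fully mixed state, 1 for a pure state.
\mathrm{Purity} \;=\; \frac{D}{D-1}\left(\operatorname{Tr}\rho^2 - \frac{1}{D}\right)

% Depolarizing-channel model with polarization parameter p:
\rho \;=\; p\,|\psi\rangle\langle\psi| \;+\; (1-p)\,\frac{I}{D}
\quad\Longrightarrow\quad
\operatorname{Tr}\rho^2 \;=\; p^2 + \frac{2p(1-p)}{D} + \frac{(1-p)^2}{D}

% Plugging one into the other gives
\mathrm{Purity} \;=\; p^2, \qquad p \;=\; \sqrt{\mathrm{Purity}}.
```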

  • OK, so now let's talk about what it looks

  • like if we're actually going to try and measure

  • this density matrix rho.

  • If we're measuring probabilities from the pure state

  • distribution, we'd expect to get a Porter-Thomas distribution

  • like we talked about before.

  • And what we notice is that there is

  • some variance to this distribution that

  • is greater than 0.

  • However, if we're measuring probabilities

  • from the uniform distribution, which

  • corresponds to a decohered state,

  • we see that there's no variance at all.

  • And, again, this is an ECDF or an integrated histogram.

  • So this corresponds to a delta function with no variance.

  • And it turns out, if you then go back and actually do the math,

  • you can directly relate the purity

  • to the variance of the probability distributions

  • times some dimension factors.

  • So the point is that we can directly relate purity

  • to variance experimentally from measured probabilities.
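
Putting the Porter-Thomas variance together with the depolarizing model gives the speckle purity estimator sketched below. The dimension factors follow from Var_PT = (D−1)/(D²(D+1)) for a D-dimensional Hilbert space; for large D this is simply Purity ≈ D²·Var(P). Names are illustrative, and this is a sketch rather than the exact code used in the experiment.

```python
import numpy as np

def speckle_purity(measured_probs, num_qubits):
    """Estimate the (rescaled) purity from the spread of measured probabilities.

    measured_probs: measured bitstring probabilities, pooled over random
                    circuit instances at a fixed cycle number.
    """
    d = 2 ** num_qubits
    # Porter-Thomas variance for a pure state in dimension d.
    var_pt = (d - 1) / (d ** 2 * (d + 1))
    # Depolarizing model: P(x) = p * P_pure(x) + (1 - p)/d, so
    # Var(P) = p**2 * var_pt, and Purity = p**2.
    return np.var(measured_probs) / var_pt
```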

  • So is this purity?

  • Yes, it turns out.

  • So let's talk about what this looks like in an experiment.

  • So here we took some very dense XEB data,

  • both the number of cycles this way

  • and number of circuit instances that way.

  • And so what I'm going to do is I'm

  • going to draw a cut like this.

  • I'm going to plot the ECDF, the integrated histogram

  • distribution.

  • And we see that it obeys this very nice Porter-Thomas line.

  • And then, if I look over here, where

  • we expect the state to have very much decohered,

  • we see that it does look like this decohered uniform

  • distribution like that.

  • So then, if we plug all this data

  • into the handy-dandy formula that we

  • showed on the previous slide, we can then

  • compare it to the purity as measured by tomography.

  • And we see that these curves line up almost exactly.

  • And, indeed, they provide an error lower bound.

  • They're outperforming the XEB fidelity

  • because there can be some, for example, calibration error.

  • So speckle purity agrees with tomography.

  • We get it for free from this raw XEB data.

  • And we are noticing this by basically

  • observing a general signature of quantum behavior.

  • That's what these contrasts mean.

  • So here's a simple analogy.

  • So, if you take some kind of neato coherent laser,

  • and you send it through a crazy crystal medium,

  • you can see this speckle pattern showing up on,

  • for example, a wall.

  • But, if you take a kind of boring flashlight,

  • and you point it at something, you just

  • get a very blurred out pattern.

  • So this makes sense, which is to say

  • that, if you have a very decohered state,

  • and you try and perform a complex quantum

  • operation to it, it's really not going to tell you anything.

  • Nothing is going to happen.

  • OK, so let's talk about how to actually understand

  • the cost of these different ways of extracting purity

  • in terms of experimental time.

  • So, typically, when we're doing an XEB experiment,

  • we take some number of random circuits times some number of repetitions per circuit.

  • And then we may optionally add some number

  • of additional tomography configurations

  • to figure out how much data we need to take.

  • So, for example, in 2 qubits, we may take 20 random circuits,

  • 1,000 repetitions per circuit.

  • And then, for 2 qubits, we have to add an additional 9

  • tomography configurations.

  • And that means we're taking 200,000 data points

  • to extract a single data point here,

  • which is pretty expensive.

  • If you look at the order of scaling,

  • the number of circuits in XEB we'd typically

  • pick to be about constant.

  • The number of repetitions, it turns out,

  • scales exponentially due to the size of the Hilbert space.

  • But then we also have this additional exponential factor

  • from doing full state tomography.

  • So what we're taking away from this

  • is that full state tomography really

  • is overkill for extracting the purity.

  • We can get it just from the speckles with the same

  • information, only we're doing it exponentially

  • cheaper.

  • We're actually getting rid of this 3 to the N factor.

  • And I do want to point out that there's still

  • this exponential factor that remains

  • due to having to figure out the full distribution

  • of probabilities.
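
An illustrative cost model for this comparison, assuming a roughly constant circuit count and a repetition count that grows with the Hilbert space dimension (the precise bookkeeping of measurement settings may differ slightly from the numbers quoted above):

```python
# Rough sample-count scaling per data point for n qubits (illustrative constants).
def xeb_samples(n, n_circuits=20, n_reps=1000):
    return n_circuits * n_reps                  # n_reps itself grows ~2**n in practice

def tomography_purity_samples(n, n_circuits=20, n_reps=1000):
    return xeb_samples(n, n_circuits, n_reps) * 3 ** n   # extra 3**n measurement settings

def speckle_purity_samples(n, n_circuits=20, n_reps=1000):
    return xeb_samples(n, n_circuits, n_reps)   # purity comes free with the XEB data
```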

  • So I want to now actually show data

  • for scaling the number of qubits up to larger system sizes.

  • So we can, for example, extend the XEB protocol to many qubits

  • in a pattern that looks something like this.

  • And we can measure for 3 qubits, 5, 7,

  • or all the way up to 10 qubits to directly extract a purity.

  • And we see that this still works.

  • We see that we get nice numbers.

  • We get nice decays out of this.

  • And we can also then compare to the XEB directly.

  • I want to point out that, going all the way up to here,

  • we use kind of standard numbers for the number of sequences,

  • the number of stats, but then, at this point,

  • we started to feel the exponential number of samples

  • that we needed.

  • We had to crank the repetitions up to 20,000,

  • but even that is really not that much for extracting purity

  • for a 10 qubit system.

  • OK, so now the last thing that we can do

  • is we can actually do some pretty cool error budgeting.

  • So we can benchmark the different error processes

  • versus system size using the data that I just showed.

  • So, if we say that XEB error is equal to the incoherent error

  • plus the coherent error, we directly measure the XEB error,

  • and we directly measure the incoherent error via the purity.

  • That allows us to infer the coherent error.
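
A minimal sketch of that error budget, assuming the per-cycle error numbers have already been extracted from the XEB and speckle-purity decays (variable names are illustrative):

```python
def error_budget(xeb_error_per_cycle, purity_error_per_cycle):
    """Split the total XEB error per cycle into incoherent and coherent parts."""
    incoherent = purity_error_per_cycle           # decoherence-limited lower bound
    coherent = xeb_error_per_cycle - incoherent   # e.g. calibration or cross-talk errors
    return {"incoherent": incoherent, "coherent": coherent}
```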

  • So, if we look at, for example, the N-qubit

  • XEB versus these error mechanisms,

  • we can see that, for a low number of qubits,

  • we don't have much coherent error.

  • We are doing a very good job with calibration.

  • But, as we scale the system size, and we add more qubits,

  • we start to see just a little bit

  • of coherent error being added.

  • And we'd expect, for example,

  • cross-talk effects to introduce something like this.

  • So we can now answer these questions

  • versus system size: are we

  • getting unexpected incoherent error as we scale?

  • And we can also answer: are we getting

  • coherent errors, for example cross-talk, as we scale?

  • And, typically, these are quite challenging to measure.

  • So, in conclusion, we introduce this notion of speckle purity,

  • and it quantifies the decoherence

  • without even having to know the unitary that you did.

  • It relies on random circuits and the Porter-Thomas distribution,

  • which is important.

  • It comes free with any XEB data that you have.

  • So, even if you have a historical data set,

  • you can go and extract this from it, which is pretty neat.

  • We can use it for many qubit systems.

  • We showed up to 10 here.

  • And there's exponentially better scaling

  • than doing full state tomography.

  • It probes this fundamental behavior of quantum mechanics,

  • and it's pretty neat if you spend

  • some time thinking about it.

  • I want to point out that we have a discussion of this

  • in the supplemental of our quantum supremacy publication.

  • And I also found this nice paper that

  • talks about a lot of similar concepts from 2012.

  • [MUSIC PLAYING]
