Controlling Qubits in Milan, with Lorenzo Leandro, Quantum Machines.

Dan: Hey there. Welcome back, and thank you for tuning in once again. The Quantum Divide is back. There's been a bit of a hiatus, I will admit, but we're back again, and I've got a stack of different, fascinating people lined up for interviews, so stay tuned.

This particular episode is with Lorenzo Leandro. He's a product solution specialist at Quantum Machines. He was previously an experimental photonics physicist; he has a PhD in quantum physics from DTU, the Technical University of Denmark, and also went over to California for a short while, and worked at FOSS. Now he's at Quantum Machines, where he started first in marketing and then moved into product management. He answered lots of my questions and really helped me understand how the control of qubits works at the hardware level. So this was a fascinating conversation for me, and I hope you have a similar kind of experience.

So Lorenzo, thank you for joining me. Thank you for joining The Quantum Divide. It's been a while since I've had any guests, so it's fantastic for you to break the silence with me.

Lorenzo: Thanks a lot for the invitation.

I'm happy to be here.

Dan: Excellent.

So let's start with a bit of a background.

Where'd you come from?

What's your path into quantum
and what are you doing now?

Lorenzo: Yeah. I started studying physics in Italy. I did a bachelor in physics engineering, and then I moved to Denmark for my master in photonics, so I'm a bit of an optics guy. I stayed in Denmark, at the Technical University of Denmark, for my PhD in Nika Akopian's lab, and we built the quantum photonics lab there basically from scratch. The PhD was formally on quantum information science. We mostly worked with single-photon sources based on heterostructure quantum dots, so basically these structures that are engineered to shoot one single photon at a time. And the idea is that you can encode quantum information on these single photons and then use them either to do computation or to build networks that are based on quantum information.

The lab continued to push the research; Nika Akopian is still there. After my PhD I moved to work in a quantum hardware and software company called Quantum Machines, pushing quantum research to get faster to useful quantum computation.

Dan: Great.

I'm going to ask a question about
the lab and that technology.

Was there some quantum information
held on the quantum dot as well?

Or is it just encoded in the photon
that was, as you said, shot out of it?

I like that.

It's it's a visual representation.

Lorenzo: Basically, the vast majority of the systems that deal with single-photon sources are semiconductor spin systems that are optically active. So what you get is a structure that you can encode quantum information into via spin encoding. Then this structure will usually very quickly emit a photon, and that photon's properties depend on the spin that was encoded in the structure before.

The key understanding is in the timings. These optically active quantum dots are extremely fast at emitting, so you need to be very quick in changing the spin of the system before it shoots a photon. Because after that, you just have a photon with a certain polarization or time-bin encoding, and you don't necessarily want to change that.

So it's an interface. Quantum dots are an interface between a spin system, which is static, and the photon system, which is flying. It's a flying qubit.

Dan: And once it shoots the photon, then it loses its spin? Does it decohere, and do both systems remain in...

Lorenzo: It doesn't decohere in the sense that it loses the quantum information. The information just isn't there anymore, because the system has converted the energy from a spin-level system: it's gone down one level and emitted a photon, and that photon's properties depend on which levels it originated from.

Dan: Yeah, fantastic.

So it's dropped down a level.

Lorenzo: And the fun part is that the photon then lives for a very long time, so you can actually store quantum information in a photon. But then you need to be able to handle it properly, because it doesn't decohere in the usual sense that is true for other qubits, where the system loses, dissipates, quantum information through entanglement with the environment. Instead, it gets absorbed. This is light, right? So it gets absorbed by all of the materials around. You may lose a photon, but if you don't, then you keep the quantum information alive.

Dan: I hear they're quite hard to get a hold of, or at least to hold on to. They go quite fast, I've heard. Like, pretty quick.

Lorenzo: Yes, indeed.

Dan: So I imagine that to implement these types of experiments, control of all of these hardware elements was really important. And you mentioned timing. What kind of systems did you set up to control that? And what were the kind of timing levels? You mentioned it's very accurate; are we talking picosecond or faster?

Lorenzo: Yeah. So generally speaking, optically active quantum dots emit a photon within, let's say, hundreds of picoseconds. Everything depends on your system specifics, but hundreds of picoseconds, say below one nanosecond, is a typical timeframe. That means that if you want to encode some information, you have to manipulate the spin before it emits a photon, and this needs to happen extremely quickly.

So this has been a huge gap between, let's say, photonics, or I want to say optically active single-photon sources, and all the other qubit types, because the time scales are very different. Electronics, generally speaking, works above the nanosecond level; any meaningful operation you can do is at, let's say, the hundreds-of-nanoseconds level. So hundreds of picoseconds was extremely short.

So when I started my PhD, all of the controls were optical. We needed to set up pulsed lasers that do certain operations within that 100 picoseconds, and all of our electronics to handle the readout of the system needed to have extremely high precision. There are these single-photon detectors with very precise timing; we're talking maybe 10 or 20 picosecond resolution, just to get an understanding of when the photon was measured. So all the electronics were quite unique, a bit distant from other qubit types. But now the gap is, I think, closing. If I were to rebuild the same lab today, I wouldn't necessarily use the same control systems that we had before, although many labs still do, because the timings are very difficult to handle.

Dan: Thanks. I'm just going to ask one more before we move on. The way we've just discussed this, it sounds like it's just one shot: timing everything for a single photon. But is there a frequency to how often the photons are emitted? Or is it literally one at a time in the experiment?

Lorenzo: That strongly depends on the experiment. The type of experiments I was doing during my PhD, and many labs still do today, are single-shot experiments. So you do some manipulation, let's say, to the spin, then your quantum dot emits a photon, then maybe you do some manipulation of that photon, and then you detect the photon. And once you set it up, everything is passive, in the sense that you don't have to do computation during the sequence. It's a playback sequence: you calibrate your pulses and your control system and your readout, then you press play, and you do this a million times to gather statistics. And that's it. So it's passive, in the sense of the control system. It's a playback, but then everything needs to be tuned to perfection first.

Nowadays there is more room for different computation to be done during the sequence. And there are many labs doing multi-photon computation. For example, the people that do photonic quantum computing generally either have one single-photon source shooting many photons, and then a network of those to do the photonic computation, or many single-photon sources shooting at the same time to do the same thing. So it's changing rapidly. In the years since I did my PhD, it has become a very different environment, I feel.

Dan: Based on this extremely precise and accurate control that's needed, I guess that's why you were quite an appealing hire for Quantum Machines, right? Let's come onto that. How did you end up at the company? And what's it like there? Is the company focused on a particular single set of products, or is there a portfolio? And what's the vision as well?

Lorenzo: I got into Quantum Machines in the strangest of places for a physicist: I entered the company in the marketing team, weirdly enough, because I've always had a passion for communication, and specifically the communication of science. But I quickly started to have a physicist role in the company, and now I'm in the product team. I don't develop products; I mostly still work on our outreach, on how we present the products. But I also offer feedback to the people that are actually developing products on what the market needs, what the rumors in the market are, and the requirements of the different labs. And we develop strategies for what kind of experiments we want to enable with the newest products, and things like this.

But it is true that it was a good fit, in the sense that Quantum Machines specifically develops solutions for quantum control. We don't make qubits. We don't make dilution refrigerators. We make control systems, and we develop these control systems as products. So we sell hardware, mostly, and we serve a lot of different communities. In truth, the quantum field is split into many different qubit modalities, and the company tries to serve them all. And for me, it's not a matter of trying to sell the products to more people; it's simply that today no expert can tell you which qubit type is first going to show useful quantum computation that will revolutionize the field. So, following our goal of speeding up quantum research to make useful quantum computers come sooner, we have to serve many different communities. This is what we do at Quantum Machines. And our products vary from room-temperature controllers, both high-frequency and low-frequency, to cryogenic components like filters and sample holders and things like that.

Dan: Thanks. I guess with all the different modalities, does it make it difficult for you? I guess you want to work as closely as you can with all of your customers, so that they get the maximum outcome from the hardware and software you sell them, and from their qubits. And I expect in some cases you may even be collaborating in terms of the design and so on, to optimize the way your control system works with their qubits. How do you tackle that? Is it just about ensuring the company's got enough people to work with your customers?

Lorenzo: I think it's a very interesting and unique place for Quantum Machines to operate. We want to serve many different communities, but these communities sometimes have different requirements. Hardware requirements too, not just tackling different physics: you really want different outputs from your box when you buy it. So there is a contrast there that is difficult to tackle in terms of research and development. If you want to create a new product, which requirements do you follow? Those of spin qubits, or of superconducting qubits? So there is a lot of tension there in terms of requirements.

At the same time, there are only very few people in the world that collaborate with all the different qubit modalities, and from there we draw understanding and requirements that are really unique, I feel. So sometimes it happens that you have this tension where one qubit type wants some output and another qubit type wants another output. But having this bird's-eye view sometimes also gives you the opportunity to see a requirement. Maybe you see that one qubit type could use a sequence that they don't use today but that some other qubit type has developed, for example; they don't even know that they have that requirement, but you see a way for them to improve something, and in order to do it they need something that the other qubit type developed. So this is a really unique place for us to be.

So there is tension between the different requirements, especially when it comes to hardware, sure, which you cannot update twice a month. But there is also an opportunity there in finding out what the common grounds are, what the things are that we can cross-use among different qubit modalities, and there is a lot.

Dan: Yeah, that's fascinating. I guess you're almost forced to aim for agnosticism where possible, to make your products appealing to more people. And as a result, that ends up leveling the playing field a little bit for the whole industry.

Lorenzo: That's true.

Dan: Yeah.

So anything more on the hardware? In terms of the different products and families you have?

Lorenzo: Yeah, okay. We have different products for different customer needs. For example, we have a DC and low-frequency room-temperature controller called the QDAC, which has the lowest noise floor on the market. This is needed because for some qubit types, DC noise means loss of fidelity in your day-to-day operations, and that will be a problem when you try to do useful computation.

In some other cases, this responding to customer needs allows us to leverage our position of having this bird's-eye view of the overall quantum research field, to do our homework and do research and development for things that people didn't think would be possible. For example, when Quantum Machines started, everybody was doing experiments by playing back AWG sequences: you have a memory, you put everything you need for a certain experiment inside the memory, then you press play and just wait for the results. We developed our own processor architecture, called the PPU, the pulse processing unit, and this allows you to do classical computation within the quantum sequences. So now you can, for example, send a measurement pulse to a qubit, get the response from the qubit, analyze that response live to decide whether the qubit is in state zero or one, and then, based on that, decide whether you want to reset the qubit or just do nothing. That's called an active reset. And it can be done because now we have the capability of doing classical computation within the time frames of the quantum sequences, and of orchestrating both the generation and manipulation of the pulses, the readouts, and the computation that is within the sequence. The PPU was the backbone of our room-temperature high-frequency controller called the OPX, which now has an entire family of products.

Dan: Wow.

I didn't know that.

I guess when you observe that particular
qubit, you're collapsing the wave function

and therefore you can't continue to use
it in a quantum computation at that stage.

Lorenzo: Yeah, but simply to initialize the system, you need to be in a known state. You measure it, you collapse the wave function. If it's one, you send it back to zero. If it's zero, you can just continue with your experiment, right?

Dan: And what about setting the system ready for computation? Does the PPU help with that, in terms of, uh...

Lorenzo: Setting the system, as in...

Dan: Preparing the qubits, yeah, for a computation?

Lorenzo: Yes. Yes. So you can imagine, if you have one qubit and you want to initialize it, that's what you do. You do active reset. You measure; if it's one, you send a pi pulse to bring it to zero. If it's zero, you can just continue on.
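In QUA-style pseudocode, the logic looks roughly like this. This is a minimal sketch, not a verbatim example: element names like "qubit" and "resonator", pulse names like "readout" and "pi", and the threshold all come from a lab's own configuration, and exact measure signatures vary between qm-qua versions.

```python
from qm.qua import *

THRESHOLD = 0.0  # state-discrimination threshold, calibrated per qubit (assumed value)

with program() as active_reset:
    I = declare(fixed)                     # real-time variable for the demodulated signal
    measure("readout", "resonator", None,
            demod.full("cos", I))          # read out the qubit state
    align("resonator", "qubit")            # wait for the readout before acting on the qubit
    with if_(I > THRESHOLD):               # measured |1>
        play("pi", "qubit")                # flip it back to |0>
    # measured |0>: do nothing, the sequence just continues
```

The point is that the branch is taken inside the controller, within the qubit's lifetime, with no round trip to a host computer.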

But what if you have, let's say, a hundred qubits? Or maybe 1,000 qubits, and someday we'll have a million qubits. Now you need a system that not only allows you to do active reset on each qubit simultaneously, but lets you have them all prepared at the same time. Let's say that I do one active reset and I have a fidelity of 98%. If I have a million qubits and a fidelity of 98 percent on all of them, my overall fidelity on the QPU is pretty low. There is no way I'm going to do useful computation on that. I want an active reset scheme that allows you to have arbitrarily high fidelity, so I want to spend my time doing more and more active resets to increase my chances of having something good come out the other side.
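To put rough numbers on that argument (illustrative values only, assuming failures are independent per qubit and per attempt):

```python
# A 98% per-qubit reset fidelity looks fine alone, but collapses
# once every qubit must succeed at the same time.
p = 0.98
print(p ** 100)        # ~0.13: with 100 qubits, all start clean only ~13% of the time
print(p ** 1_000_000)  # ~0.0:  with a million qubits, essentially never

# Repeating the reset k times drives the per-qubit failure down geometrically:
for k in range(1, 6):
    per_qubit = 1 - (1 - p) ** k          # fidelity after k reset rounds
    print(k, per_qubit, per_qubit ** 1_000_000)
# By k = 5, the million-qubit success probability is back above 99%.
```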

But this brings you to another issue. Now you have, let's say, a million qubits that are being actively reset at the same time. What if one requires 10 active resets and another requires one to reach the level of fidelity you need? Now your second qubit is waiting for the first, for the duration of nine active resets. And then thermalization becomes an issue, because your qubit will tend to a state where it has maybe a two or three percent excited-state component. So that's not good either.

So all of a sudden, with the simple act of doing active reset, I need to find a way to have arbitrarily high fidelity in the process, but also a process that is time-deterministic, so I know how long it will take for each qubit to do what it needs to do. All of this is possible, but then you need to change your parameters and change your sequences, even dynamically during the sequence. And that's what the PPU enables.
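One simple way to make the repetition time-deterministic, sketched here under the same QUA-style assumptions as above (this is a generic illustration, not Quantum Machines' actual method), is to give every qubit the same fixed attempt budget and pad the branch that needs no correction pulse:

```python
from qm.qua import *

MAX_TRIES = 10   # fixed attempt budget, so every qubit takes the same wall-clock time
PI_LEN = 10      # pi-pulse duration in 4 ns clock cycles (assumed value)

with program() as deterministic_reset:
    k = declare(int)
    I = declare(fixed)
    with for_(k, 0, k < MAX_TRIES, k + 1):
        measure("readout", "resonator", None, demod.full("cos", I))
        align("resonator", "qubit")
        with if_(I > 0.0):             # still in |1>: apply the correction
            play("pi", "qubit")
        with else_():                  # already in |0>: idle for the same duration,
            wait(PI_LEN, "qubit")      # so both branches take identical time
```

Every qubit then finishes after exactly MAX_TRIES rounds, so a large array stays synchronized instead of the fastest qubits thermalizing while they wait.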

So today we are actually working on this sort of large-scale active reset together with some of our customers. One of our Quantum Machines experts, Tom Veer, is working on a new method that allows you to do this active reset with arbitrarily high fidelity and known timing. And this is only possible if you can do classical computation in real time during the sequence, and with very low latency, so very high throughput. This is just one of the examples of why you need classical computation in real time.

Dan: Nice. Yeah, that really helps, thanks. So now I understand that you've got the active reset at particular points during a computation. And also, maybe it's just that I'm a visual thinker, but I'm imagining a grid of qubits, regardless of modality. And almost as you're performing the execution across those qubits, you can be actively resetting in advance to ensure that the qubits are ready for execution.

Lorenzo: Yeah, it can be both. It can be for the overall QPU: now I want to reset my entire chip. Or maybe during a big circuit I have, let's say, a measurement of one of the qubits. This happens in almost every circuit: you have a measurement of one of the qubits, but then that qubit needs to become operational again. Okay, you did the measurement, so you collapsed the wave function, and now you're supposed to know what the state of that qubit is, both to reset it, but also to do feed-forward, to use that information within the same sequence. And this is all classical computation, which is what truly amazes me. The most advanced quantum computation out there is truly mostly classical. All the classical controls are what allows us to do the operations on the quantum systems.

Dan: Yeah, I've seen a few posts on LinkedIn recently, people talking about hybrid computation, and then other people replying saying: don't be stupid, they're all hybrid. Every single algorithm needs some classical computation.

Lorenzo: Yeah, there is a lot of discussion about hybrid algorithms, and I believe it is true that all quantum computation will also be classical. Now, it's a matter of semantics whether you call it hybrid quantum-classical computation or not, but in the end, what you want to do is integrate classical computation at every level of the quantum computing stack. There is no way around it. Every level needs to have access to specific classical compute resources, and those scale up as you go up in the stack.

Dan: Thanks. So just before we go on to the software, I wanted to ask at the hardware level: obviously you have some very fast control electronics and so on, which is controlling the order of operations and the rate of different pulses sent to the qubits. What creates the pulses? Do they come out of your system as well?

Lorenzo: Yes. Our room-temperature controller, the OPX, does readout, manipulation, and generation of waveforms, all in real time. It's basically an instruction-based system where you load a logic into it, and it performs the operations following that logic in real time, during the lifetime of your qubits or during your circuit. So if I say: measure my qubit, and then, if the result is one, only then send the pi pulse to bring the qubit to zero, that's a dynamic circuit, because the system has to measure something and then do a computation. Even if it's just simple thresholding, that's still computation, but it can be much more difficult than that. It could be a neural network, or a Bayesian estimate, or anything else, like a Fourier transform, whatever you need in your sequence. And then it needs to follow your instructions on how the process should continue based on the results of that computation.

This was virtually impossible with AWGs in the past; even a few years ago, it was virtually impossible. And now it's the rule of the game. If you don't have a system that allows for real-time computation, there are very limited things you can do with qubits, really.

Dan: So are all of those dynamic decisions made in the algorithm, which has been developed and ingested by your system, or are they also made within your hardware? There are some reactive types of dynamic changes; I guess one of them you mentioned was resetting a qubit mid-circuit for reuse. You wouldn't encode that into an algorithm at a higher order, would you? Yeah, I guess I'm answering my own question there, but it's a mixture.

Lorenzo: Yeah. So this is a mixture of hardware and software, but basically, the OPX we created is a platform where our customers can code their own instructions, and those instructions are then transferred to and followed by the hardware. So we have a software layer, which is called Quantum Universal Assembly, QUA, which is a pulse-level language. I can code: play a pi pulse, or measure a certain qubit, or do some mathematical operation on some variable. Then this information gets transferred into the OPX, and the OPX follows the instructions. Following the instructions doesn't need communication back to the computer, and this is done to reduce latency. If you have to go to the computer to do computation, then you're losing milliseconds, and you can't allow that.

But overall, you can basically code whatever your system requires. The OPX is a platform where you can code whatever type of experiment you need to do, from the simplest active reset to the most advanced quantum error correction algorithm. And then it's just a matter of what system you have and what your goals are in terms of the experiment.

It could be, for example, that you write new code: you have a specific system, and you code the macros that refer to that specific system and implement a certain gate. So we have a pulse-level language where you send and receive pulses to and from your system; you can code a sequence of pulses and measurements that applies a CNOT, for example, or a Hadamard gate, and then you can put that into a function, a macro. Then you simply have a CNOT macro that you can use at the gate level. So it's actually fairly simple to move from pulse level to gate level.
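Since QUA programs are written from Python, such macros are just ordinary Python functions that emit pulse statements. A schematic sketch follows; the gate decompositions, the element names ("q1_xy", "q2_xy", "coupler"), and the pulse names ("y90", "x180", "cz") are all assumptions standing in for a lab's calibrated configuration, and a real macro would add phase corrections and timing details:

```python
from qm.qua import *

def hadamard(xy_element):
    # One common decomposition, up to global phase: H = X * Ry(pi/2)
    play("y90", xy_element)
    play("x180", xy_element)

def cnot(control_xy, target_xy, coupler):
    # Schematic CNOT built from a native CZ, conjugating the target
    # by +/- pi/2 rotations about the y axis
    play("-y90", target_xy)
    align(control_xy, target_xy, coupler)
    play("cz", coupler)                  # native two-qubit interaction on the coupler
    align(control_xy, target_xy, coupler)
    play("y90", target_xy)

with program() as bell_state:
    hadamard("q1_xy")                    # gate-level thinking, pulse-level execution
    align("q1_xy", "q2_xy", "coupler")
    cnot("q1_xy", "q2_xy", "coupler")
```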

And we also have software to convert gate-level languages into the pulse-level language, to allow customers to use, for example, OpenQASM to write their algorithms. But you always need the person who codes the specifics of the system, and that's typically the person working in the lab who knows everything about the QPU they're working on and implements all the sequences. But then they can move up the stack, and we have different software to do that.

Dan: You used the term assembly language. I think that makes sense, because it's sitting in between the more abstract circuit and something more like the object-oriented approach, with the different actions acting not on memory in this case, but on the qubits.

Dan: Okay. So the way your system is developed by the user defines the capabilities of the computer, ultimately.

Lorenzo: The capabilities of what we call the QPU, the quantum processing unit, which is a mixture of qubits and controls. So ultimately it's the customer that decides what the QPU will be able to do. Our job is to enable processes, enable computation, and enable sequences.

For example, let's go back to the example of the active reset, which I think we understood. If you can only do an active reset in one millisecond, then there is very little you can do with your qubits, because if it takes one millisecond to reset, it means that the latency in your control system is very long. You want to reduce that latency as much as possible, and we offer systems that go all the way down to 100 nanoseconds of latency to do an active reset. You get your information back from the qubit, and within a hundred nanoseconds you can decide whether the qubit is in state zero or one, and what kind of pulse you need to generate, right there and then, in order to reset the qubit. So this latency is one of the metrics that we use, because it's an enabler: if the latency is low enough, then we enable things for our customers. And that's the sort of thinking that goes into the development of the products.

Dan: Great.

And I guess, in your software, do you have a library of patterns of executions that people can pull on? Perhaps they're changed for each individual customer's use case or modality, but you mentioned these kinds of functions for different gates. So do you have a standard set that they work from and then develop from there?

Lorenzo: Yeah. So our software starts with QUA, this Quantum Universal Assembly language, which is a pulse-level language where you can code sequences, pulse sequences and measurements. And even with just that, you can write basically your entire circuit, but you need to be able to write at the pulse level. So you need to know the specifics of the system: what kind of pulses make a CNOT or a Hadamard gate for your system specifics.

Now, sometimes this is enough, so a lot of labs just use QUA. And we have libraries of QUA code where you can take inspiration or just copy-paste some sequences. For example, virtually every qubit lab out there uses Rabi, Ramsey, and echo sequences. We have code for that; there is no reason you should rewrite it. Also, these may be 10 lines of code, and that's, let's say, simple enough to transfer from one lab to the other.
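As a flavor of what such a ten-line sequence looks like, here is a minimal Ramsey-style sketch in QUA, again with assumed element, pulse, and integration-weight names; a real script would also define the hardware configuration and the result streaming:

```python
from qm.qua import *

N_AVG = 1000  # shots per idle time, for statistics

with program() as ramsey:
    n = declare(int)
    tau = declare(int)                    # idle time, in 4 ns clock cycles
    I = declare(fixed)
    with for_(n, 0, n < N_AVG, n + 1):
        with for_(tau, 4, tau < 1000, tau + 25):
            play("x90", "qubit")          # first pi/2 pulse
            wait(tau, "qubit")            # free evolution
            play("x90", "qubit")          # second pi/2 pulse
            align("qubit", "resonator")   # read out only after the pulses end
            measure("readout", "resonator", None, demod.full("cos", I))
            save(I, "I")                  # stream the result back for fitting
```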

It becomes more interesting when you start to use sequences that are not commonly used in your community, but that you know of because you work with us, and we work with other communities. One example that comes to mind is what is called a PDH lock, Pound-Drever-Hall, which is a technique commonly used in atom systems for metrology purposes and was not generally used in photonics, for example, although they might have had uses for it. So now our system offers the capability to do it, even though you didn't ask for it; we have code for it because we have other customers that we worked with to develop that code. There is a lot of cross-usage of code and sequences. And that's just QUA.

Now, on top of QUA we have a product that just came out. It's called Qualibrate. Qualibrate is our tool for calibrations, mostly of large systems. If you need to calibrate the parameters of one qubit, you can do it manually, and it doesn't take a long time. But if you want to scale up and calibrate 100 qubits or 1,000 qubits, all of a sudden it becomes a lifetime of calibrations.

Qualibrate solves the issue by allowing customers to create calibration nodes. Say I want to write a piece of code that calibrates, for example, the amplitude of some pulse in a sequence. I can then create graphs of those calibration nodes that can be automated. So, for example, I can set up a system where I code for a week and then say: okay, this is my calibration; when I press play, the system needs to be calibrated. Or, maybe even better, every now and then during my experiments the system should take the information it has gathered and understand whether it needs a calibration or not. All these automation tools are now available for customers to build upon.
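This is not the Qualibrate API, just a hypothetical sketch of the shape of the idea: calibration steps as nodes with a measurement routine, a spec check, and dependencies, wired into a graph that a scheduler can walk and re-run as needed.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class CalibrationNode:
    name: str
    run: Callable[[], dict]           # performs the measurement, returns fitted parameters
    check: Callable[[dict], bool]     # decides whether the result is still in spec
    depends_on: List["CalibrationNode"] = field(default_factory=list)

def calibrate(node: CalibrationNode, state: Dict[str, dict]) -> None:
    for dep in node.depends_on:       # calibrate prerequisites first
        calibrate(dep, state)
    params = node.run()
    if not node.check(params):        # out of spec: retry once, as a toy policy
        params = node.run()
    state[node.name] = params         # persist, e.g. pi-pulse amplitude, T1, ...

# Example wiring: resonator spectroscopy -> qubit spectroscopy -> Rabi amplitude;
# calling calibrate(rabi_node, state) then walks the whole chain automatically.
```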

And part of this new tool is our QAM, which is a library of tools for abstracting the information of the qubits. If you have one qubit, it's simple enough to store in memory the T1, or what the pi pulse means, and things like this. But if you have a thousand qubits, all of a sudden you need a system just to store the information that you have available for your qubits. And we did the infrastructure work for customers to do that simply. That's all embedded in Qualibrate, which is freely available for our customers already now.

Dan: So I've got to say them together: QUA, Qualibrate, and QAM. I like it. Okay, great. So I'm thinking along the lines of Qualibrate: was one of the main drivers for its development the fact that logical qubits are made up of many smaller physical qubits? Not smaller, I mean, in terms of individual qubits; the pulses at the core level go to lots of individual qubits, which then make up the way you address a logical qubit. Is that why Qualibrate was formed? Or I guess there are probably other automations like you described, which were necessary anyway.

Lorenzo: Yeah. So to me, scale is the big driver. A lot of what we do is anticipating the needs of the community for the years ahead. For example, one of our research teams did work on simulating what it would take to run Shor's algorithm today on a superconducting QPU to factorize a number. This is one of the holy grails of quantum computing: we want to factorize numbers because we can break encryption and things like this. Now, what do you think it would take to factorize the number 21, which you can do in your head? How many qubits do you think are necessary to do it today? Shoot a number.

Dan: A thousand.

Lorenzo: 1,015, exactly. It's thought that you need 1,015 superconducting qubits. That's a QPU that basically doesn't exist today, with maybe one exception, to factorize something that you can do in your head. So this is the difficulty: you have a scaling issue even before your QPU becomes useful.

So we need to scale, we need to scale a lot, and scaling is a real challenge for the control system, because now, all of a sudden, all the requirements that you have for single channels, you have for thousands and thousands of channels. That's why we developed the OPX1000, which is our scaled-up room-temperature control system for QPUs; it can, for example, synchronize thousands and thousands of channels to orchestrate operations on thousands and thousands of qubits. This is extremely tough. There is a lot of thinking, design, and development that goes into that, and this is, I think, one of the main drivers at QM in the last few years. We need to scale up in order for QPUs to become useful.

And part of that scaling up is that the moment you go from 10 qubits to a hundred, you can't do the calibration manually anymore. It's just too much. You will spend all of your time just doing calibration, with no hope of getting anything useful done on the QPU. A calibration package or framework that allows automated and smart calibrations was a necessary step. There will be, I believe, no useful quantum computation ever without an automated way to make proper high-fidelity calibrations, and Qualibrate aims at doing that. It's a framework, so you need to build upon it: you need to write your own calibrations. Sometimes we have code for that, or we have a customer success team that helps with that, but it's a necessary step to get to useful quantum computation.

Dan: Great. And that's got me thinking along the lines of scaling now: what's next in terms of scaling? Because I believe that all modalities are going to have scaling issues of some kind. Some are down to control, some are down to perhaps maintaining the state, some are down to just the topologies of the qubits and so on. How do you describe that landscape of different limitations of scale? Does it tend to be per manufacturer, per vendor, or is it per modality?

Lorenzo: So I think there is a little bit of both. Let me talk about modality. It just so happens that the way you would build a control system to control a thousand superconducting qubits is not too far off from what you would build to control a thousand spin qubits, at least spins of some types. You will build a system that has analog outputs, roughly an eight-to-two ratio or something like this. So you need a lot of analog outputs. In the case of spin systems, you also need a lot of DC outputs, and we have that covered with another product. But generally speaking, it's not too crazy of a difference: you need analog outputs, you need microwave and baseband pulses, and you need a way to do readout in the same range of frequencies.

When you move to atoms, it becomes different. To build a large-scale atomic system, eventually you will need to control thousands of lasers, and that's not simply doing electromagnetic pulses: you want to use some sort of modulator technology that maybe is driven by our OPX, or your control system, but that modulates real lasers, optical beams. For this, we had to develop something altogether different, and we teamed up with QuEra to do it, to build the so-called PCU, which is a modulation control system for many lasers. So we will be able, from a single tiny box, to have maybe 200-plus laser beams coming out that can address individual atoms in an atom-based QPU.

This is the sort of change you see from one qubit type to the other; sometimes it requires a completely new product that is specific to a qubit type. So this is the type of tension that we see: sometimes we need to pour resources into something that will only enable new things in a certain qubit type or a certain qubit technology. Other things are more generally useful, so then it's easier to spend resources on those. And this is the kind of thinking that goes into the different qubit modalities.

And then I think another axis is what kind of operations, or what kind of resources, you need during your sequences. When you start to think about quantum error correction, you start to think about decoders, and decoders are, generally speaking, classical algorithms, so you want to run them on classical hardware, for example CPUs and GPUs. Now, if you have quantum error correction for maybe 10 qubits, you might write things in an FPGA, or within your controller, and that's probably fine. When you move to a thousand atoms, or 10,000 atoms (and 10,000 atoms is already on the roadmap of some companies), now you need GPUs. But GPUs are very distant from the electronics that handles your sequence. So you need an additional piece of the puzzle, which is an interface between CPUs, GPUs, and QPUs. And this is what we are building with NVIDIA. We teamed up to develop DGX Quantum, which is an integrator, an interface between CPU, GPU, and QPU. It's going to be the first of its kind; it's already live, actually, and it offers a latency between your quantum controller and your GPU of less than four microseconds. And that's what we believe is necessary to run even the most advanced quantum error correction algorithms.

So I think the challenges are varied. Different qubit modalities will, in some cases, need different products. In other cases they'll use the same product, DGX Quantum for example, but use it differently, to build different decoders and different circuits, while using similar types of classical resources.

Dan: Great. I definitely want to come back to DGX in a moment, but I've got some trailing thoughts from the beginning of your answer there that I feel I have to address, specifically around neutral atoms. You mentioned a small device with 200 lasers, all being controlled at a very high rate. I know that in neutral atom systems you have mobile qubits, and you can move them in a 2D plane, for example. A question that I just haven't got an answer to yet through my own research is: are the lasers moving to do that? Or is it a series of pulses, controlled by something like your software, that helps move a qubit across a cloud of atoms?

Lorenzo: So there are different types of optical controls. Now, I'm not an expert in atomic systems, so I will probably say some heresy here, but generally speaking, there are individual controls, where you want to be able to take a certain laser and shoot a single atom. And then there are global, or area, controls, where you want to take all of the atoms in a certain region and shine a certain laser, or a certain optical pulse, on them. For example, the recent Lukin/QuEra paper showing, I believe it was 64 logical qubits, a very prominent, very high-impact paper: they use individual optical beams to move atoms to certain zones, physical spaces, and then on those spaces they use a global laser control to do operations. So you need to do both shuttling of atoms around and some sort of global or area control to do certain operations. There is a mixture of both.

Dan: Yeah, okay, I thought so. I've seen the grids of atoms being moved by a global laser; I haven't seen individual ones being moved as well. So I guess maybe that's another level of complexity that will come. But I think QuEra moves a whole grid, and then that becomes a logical qubit somehow. So then you can address that whole qubit with that global laser to move it around.

Lorenzo: Yeah. And eventually you will want to create closed-loop systems that allow you to call, let's say, a hundred atoms a logical qubit, with automated calibrations that keep them in shape, and then abstract away all of that. So now you know what the CNOT is, you know what the Hadamard is; I don't need to go into the details of how to run a CNOT or a Hadamard, I just want to work on logical qubits. And that's because there is a whole other set of scientists that only deal with logical qubits, and you want them to do their thing as well. So eventually we want to abstract away, move up one layer onto the logical realm, and start working more there.

Dan: Yeah, it makes sense. Layers and layers of abstraction, and that's why people can talk about the full stack. Great. Coming back to DGX then: this sounds like quite an exciting thing. I wanted to ask, first of all, why GPUs? Is it for the parallelization, or is it for something else specifically? Like, why not continue to use CPUs or FPGAs?

Lorenzo: Okay, so I think there are two answers to this. The first is local, let's say. Now, there are a lot of groups working on using FPGAs to calibrate qubits, or using GPU systems to calibrate qubits, so there is no one-shot answer to this. But if you want to run large-scale, machine-learning-driven algorithms to do something on a qubit, you're going to need a QPU... sorry, a GPU, eventually. Maybe it won't happen at 10 qubits, but it will happen at maybe a thousand qubits. Eventually there are going to be algorithms that need much, much bigger parallelization, and GPUs are already excellent for that. There is no reason to reinvent the wheel at the FPGA level when you can simply, or quote-unquote simply, connect to GPUs that already do an excellent job.

And that's the local version. Say you have 10,000 qubits in your lab and you want to run some decoder, or even simply calibrate a circuit. We talked about Qualibrate for doing the calibration of single-qubit stuff or gate stuff: I want to calibrate my pi pulse, I want to calibrate my Hadamard. But once you have a circuit written down that has maybe 10,000 gates, now you can do a holistic calibration of that circuit, and that will most probably require a GPU. That's not going to happen in an FPGA environment or something like that. That's the local answer, and there is still a lot of work to do on this; it's not a definitive answer.

What I want to add to that is a more integration-oriented, large-scale answer. We believe that quantum computers will have the highest potential not as standalone computers, but within workflows that are truly mostly classical: let's say HPC centers, or big data centers that have all manner of GPUs and CPUs running in parallel and that have a very advanced workflow. We want to introduce quantum computers into that workflow as components that solve specific problems. So to me, the work that is being done now on developing an interface between a QPU and a GPU will also be instrumental in helping quantum computers work effectively inside HPC centers and larger classical workflows. And that's one of the things that excites me the most, because we don't really know what's going to come out of that, but I think the highest potential is there. And that's why HPC centers are heavily investing in quantum.

Dan: Sure, yeah. So on your first point: it's not just about parallelization of tasks. It's about running inference at the edge, to then make decisions on how you further configure and calibrate the qubits in real time.

Lorenzo: And you want to do it with the shortest possible latency. If you want to do a very complicated algorithm that takes 10 milliseconds on an FPGA, you might as well spend four microseconds to go to a GPU that can do it 100 times faster. And then you'll save time.
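The back-of-the-envelope version of that trade-off, using the illustrative figures just mentioned:

```python
t_fpga = 10e-3            # the computation takes 10 ms on the FPGA
t_link = 4e-6             # DGX Quantum round-trip budget: under 4 microseconds
t_gpu = t_fpga / 100      # the same computation, 100x faster on the GPU: 100 us

total_gpu = t_link + t_gpu
print(total_gpu)          # ~1.04e-4 s
print(t_fpga / total_gpu) # ~96x faster end to end, link latency included
```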

Dan: Do the GPUs get mounted very close to your hardware, or is it a matter of your software running on an NVIDIA cloud or some other kind of...
Lorenzo: No, there needs to be a physical connection between our controller and the GPUs. In the end, DGX Quantum will be a box containing parts of an OPX and parts of a DGX, so a Grace Hopper. And that's necessary because of this latency issue: you need to be able not only to keep the latency low, but also to have very tight control over what the different latencies in the system are, and that you can only do on physical hardware.

Dan: So if a user has an algorithm and they're not sure which part should be executed classically and which part should be implemented on a QPU, that's a higher-order kind of decision-making and scheduling that happens in terms of the HPC services, whereas the GPUs in your domain are all about optimizing the control, making it as fast as possible.

Lorenzo: Exactly right. So you have a local version, which is: okay, I want my gates and my circuits to run the best way possible, and maybe I can use a GPU for that. And then I have GPUs that are solving other parts of the same problem, that maybe are completely unrelated and don't even communicate with the quantum computer, but solve part of the same issue.

Dan: Yeah. So where is Quantum Machines' place in that global piece you were talking about there, the second part?

Lorenzo: So we collaborate with HPC centers, and also with companies that do integration of technologies into HPC. One example is ParTec, with whom we created a product called QBridge. QBridge is a SLURM extension that allows you to see a quantum computer as a node of an HPC workflow. That's already available today, and already in use by some HPC centers to develop algorithms and test things on quantum computers. There is a lot of work to do there, but that's also what's exciting about it: there is a lot of development and a lot of push because of the high potential.

Dan: Yeah, certainly a rapidly developing market. So can we talk a bit about the competitive market? What does it look like for you, both for your traditional products and for DGX?

Lorenzo: So again, I'm not really the expert here, but what I know about the market is that, following the development of the OPX, the market quickly realized that dynamic circuits are the way to go. There is no real way around it, no real way to do useful quantum computation without being able to do feedback and feed-forward within the quantum sequences. All the companies that were traditionally doing AWGs or other playback systems tried to compete at the processor level, with instruction-based systems.

But then, in terms of scale, we are doing a lot of work, and a lot of our resources, and also the way we take decisions, is driven by scale and by what QPU providers or QPU labs will want to do in the next few years. This I don't see as much in the competition, especially as regards integrating classical resources at all layers. For example, DGX Quantum is something that is completely unique to us, also thanks to the collaboration with NVIDIA, which is the biggest partner one can have. So that's certainly something that is unique, and it's going to drive a lot of the research, I think, in the next few years. And we'll see about that.

Dan: Hey, you were talking about the fast-moving pace of the industry, and obviously, being a product manager, you must have a roadmap. Is there anything you'd like to share with the industry at this point, something you can signpost? Or is it all under wraps still, until you release things?

Lorenzo: Yeah, I don't know what else I can say of value beyond what we've already covered. I want to stress that DGX Quantum is going to change the cards in the deck for a lot of the qubit types, and that we work with all the different qubit modalities. We want to push quantum research wherever that happens, and it can be atoms, it can be ions, photons and whatnot. We will push the research in order to get to something useful at the end.

Dan: Finally, I wanted to ask about marketplaces. It feels like you're moving from hardware all the way up the stack now, to the software control elements, using GPUs, almost at the level now where, like you said, you're facing up to HPC; you've developed an interface for SLURM interaction at that point. When it comes to the software control, there are many different software products, now and in the future, which could be using your DGX as an interface; it could be a higher-order software application of some kind. Will you be developing interfaces bit by bit, or will you need to scale your software somehow? I guess maybe it just comes down to standards development, and that could be another part of your answer: are you looking to influence those standards going forward?

Lorenzo: Yeah, definitely. Definitely. There are, as you said, two answers to this. One is standards and what people do at the lower level, and the other is how software scales in order to also encompass HPC and all the other players.

Yes, we are involved with standardization. We are part of standardization committees in various places in the world, to try and define what the metrics of quantum systems are. We do some internal research on this: we have a couple of papers out on the requirements for control systems for the future of quantum, and those basically propose benchmarks in order to evaluate different technologies. We are also involved with the committees, for example the OpenQASM committee; OpenQASM is virtually a de facto standard for gate-level code. We want to influence that, and we want to be influenced by it, so we want to be involved with everything that regards standardization, because that's one of the things that still needs to be developed properly for everyone to be able to talk the same language. The different qubit types come from different subfields, and the further we go on, the more the gap between these things closes and we see overlap. So that's where you want to create standards, to be able to speak the same language and progress faster. It's as simple as that.

Now, in regards to software: it will definitely scale. Calibration is one part of that, and then there are extensions to include HPC. So there is definitely a ton of work to be done there, but at least in my view, we know what the basic components are, and there is work being done on each of them. We have extensions to bring quantum computers into HPC, and we have different languages that can now communicate with one another. You have QUA for the low level, you have Qualibrate for large-scale calibrations, you have OpenQASM and Cirq for the gate-level operations, and then you have CUDA Quantum for all the DGX or GPU-related optimizations. So I think the basic components are there. Standardization still needs work, but I'm excited to see what comes out. I think the puzzle is almost already made.

Dan: I think we just keep carving more pieces of the puzzle. It's...

Lorenzo: Yes.

Dan: ...going to keep growing in complexity, for sure. So yeah, thanks for picking that all apart for me. That was really interesting, to understand some of that at a lower level. Let's move on to something a bit more fun now. There are a few questions I'd like to ask at the end. The first of those is: what's your most influential piece of work, or maybe individual, in the quantum domain?

Lorenzo: People of today, or...

Dan: Totally up to you. People, or a paper, or some particular piece of work or body of work.

Lorenzo: So lately I've been really impressed and inspired by the Australian ecosystem. Diraq is a full-stack company under the guidance of Andrew Dzurak. They really manage to inspire me, even in my daily work; whenever a paper comes out, I am in awe, basically. They are huge collaborators of ours; we work with them closely, almost day to day. They also work with the entire Australian ecosystem, from Andrea Morello's group at UNSW to all the other research teams out there. I think there are a lot of influential and inspiring figures in the quantum space, from Will Oliver in superconducting qubits to Mikhail Lukin in atoms. But personally, I've been really inspired by the ecosystem that Diraq and UNSW have managed to create. It's truly beautiful. I've never been there physically, but I'm really excited to see what they do next.

Dan: Yeah, I agree. I think they seem to be pushing the boundaries, for sure. And maybe that's one for your bucket list, right?

Lorenzo: Yes. It's not even the qubit type I know the most about, but it's still very fun to read their papers.

Dan: I'd like to ask about your vision for the future of quantum. I think here the question to you is not so much about computing. I get the feeling that, going back to your days in the quantum photonics lab, you were working on the encoding of quantum information in flying qubits, but now, in your work, the only optical pulses that are created are for control. Is there going to be a conjoining of those two parts of your life? Do you think quantum networking will come into some of the products from Quantum Machines? Because they're two different fields, but they're both photonics-oriented. And with the scaling issues of different quantum computers going forward, I'm a strong believer that networking is going to be necessary. There are some great activities going on globally, in different universities and startups. What's your feeling about networking quantum computers?

Lorenzo: I still feel that there is going to be a quantum internet of sorts, as Kimble described back in his hallmark paper. But I still think it will not be at the scale that the phrase quantum internet makes you think of. I feel that networks will mostly be closed systems, such as photonic interconnects to link different QPUs, or photonic systems to work as transducers between different technologies, or QKD systems for secure communication, and so on. So, at least as of now, I'm imagining things mostly as closed systems. To be fair, even for an interested party like me, there is no certainty about what the future of this is going to be. The recent works in quantum networks at large cover anything from satellite quantum communication to single photons through the 1550-nanometer optical fibers that we use today for the internet. So it's really difficult to say that you can't create a quantum internet as Kimble described.

You can: today you can use satellites, you can send single photons from Earth to space and back to create secure communications using quantum mechanics, which is incredible, and something that nobody could have predicted a couple of decades ago. So it's really difficult to make a prediction. In my head they are mostly closed systems for now, but it's also true that, as I said before, the gap between photonics and the other QPU types is closing. The people doing photonic quantum computing now are really good, and they really show promise. So it could be that future quantum computers will be all-optical, and that could very well be the best quantum computer out there. We just don't know. But in terms of networking, I think networking will have a place for interconnects and secure communication. There is so much going on that it's difficult to imagine a world where all quantum computers are just standalone machines and we don't have any sort of quantum communication going on.

Dan: Great. Yeah, I'm with you. I've said it before and I'll say it again: I hate the term quantum internet, but...

Lorenzo: It gives, I think, the wrong impression of the thing. Yeah.

Dan: Definitely. So in terms of the closed networks of quantum computers, I guess that's something where Quantum Machines is keeping a close eye on the developments, right? Because ultimately there may be an additional market for you there as well, going forward.

Lorenzo: Yeah. So we are quickly closing the gap towards photonics. That starts, I think, with systems that are optically active but still work like any other qubit type; that could be the case for NV centers, for example, which could pass through quantum sensing before going to quantum networks. But I believe that in the next few years we will see that most photonics labs, like the one I helped build at DTU, will start to use electronics like ours to perform the operations, be that generation of single photons, or manipulation, or readout, just as for any other qubit type out there. And I believe that even quantum photonics companies, like PsiQuantum or the other players in that space, will start to need the electronics we developed originally for other qubit types, to make their processors better. So I think we are already putting a foot in that space, and I think it's going to get much bigger in the near future.

Dan: Cool, thanks. And then, just to close off, tell us a bit more about Lorenzo. What do you like to do to wind down from science? Do you get a break at all? Or is your mind constantly on the control and the optics? There must be something, right?

Lorenzo: Yeah, there is definitely something. My wife and I recently got a dog, so at the moment I feel that my free time is basically spent walking. But we enjoy hiking, we enjoy meeting friends, and we spend most of our weekends in the Alps somewhere, walking and enjoying nature. Lately I feel that after an entire day in front of the computer, I really need to see some nature to be active again the next day. Otherwise it becomes tough for me.

Dan: Yeah, good. Cool. I'm a dog parent as well, so I get that totally.

Lorenzo: Dogs are a lot
more work than I expected.

Dan: They are, they're very needy. But what you give, you get back in love, that's for sure. But you get to walk your dog in the Alps? Can you just clarify that? Where do you walk? And...

Lorenzo: I currently live in Milan, and for the day-to-day there are parks around; it's mostly in the city. But Milan is maybe one or two hours away from the mountains, the pre-Alps or the Alps. There is a lot to explore, so we try to make the best of it. We are not city people at all; we are just in Milan for opportunities and work, mostly, but we are close to very nice mountains, so we try to enjoy that.

Dan: Very nice.

I'm glad that's the case.

Great.

Okay.

Let's wrap up.

Thank you very much, Lorenzo.

I really enjoyed that discussion with you.

Lorenzo: Thanks a lot, Dan.

Dan: Just a final note from me. I want to say thank you very much for listening to the podcast. Quantum networking, quantum computing: there's so much to investigate. It's such a broad domain, and that's before you even get to all the quantum physics that lies underneath it. It's really interesting and exciting doing this podcast, but there's one thing you can do to help me: please subscribe to the podcast. Follow me on your favorite podcast platform, spread the word, tell your friends and family and all of your quantum-curious buddies. Thank you, goodbye, and until next time.
