About how far does this leave us from a usable quantum processor? How far from all current cryptographic algorithms being junk?
The latest versions of TLS already support post-quantum crypto, so no, it’s not all of them. For the ones that are vulnerable, we’re still a long, long way off from that. It may not even be possible to have enough qubits to break those at all.
Things like simulating medicines, folding proteins, and logistics are much closer, very useful, and more likely to be practical in the medium term.
Is there gov money in folding proteins though? I assume there’s a lot of 3-letter agencies that want decryption with a lot more funding.
There’s plenty of publicly funded research for that, yes.
Three letter agencies also want to protect their own nation’s secrets. They have as much interest in breaking it as they do protecting against it.
yes of course, and nuclear arsenal build up doesn’t exist because govts have that kinda foresight
Except there’s evidence they do, in fact, go both directions.
For example, DES had its s-boxes messed with by the NSA. At the time, the thought was that they were intentionally weakening it. Some years later, public cryptographers developed differential cryptanalysis for breaking ciphers. They found that the new s-boxes in DES made it resistant to differential cryptanalysis. It appears the NSA had already developed the technique and had made DES stronger, not weaker. Because again, they need to protect their own stuff, too, and they used and promoted DES to get there.
They also gave it a really short key that was expected to be broken by the '90s, which is also exactly what happened.
They appear to be going a similar direction with elliptic curves. They seem to be resistant against certain attacks, and the NSA was promoting them earlier than most public cryptographers.
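For scale on that key size (back-of-the-envelope numbers, not from any article): 56 bits is a small enough keyspace that purpose-built hardware can sweep it, which is roughly what the EFF’s Deep Crack machine showed in 1998.

```python
# Back-of-the-envelope brute force of a 56-bit DES keyspace.
# keys_per_second is an assumed figure, in the ballpark of late-'90s dedicated cracking hardware.
keyspace = 2 ** 56                    # ~7.2e16 possible keys
keys_per_second = 90_000_000_000      # assumed: ~90 billion key trials per second

full_search_days = keyspace / keys_per_second / 86_400
print(f"{keyspace:.2e} keys, ~{full_search_days:.0f} days to sweep the entire keyspace")
```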
At least a week, probably more
Algorithms will be easier and faster to fix than the process of getting this breakthrough to viability
Just in time for the fall of American democracy. What could possibly go wrong.
108 qubits, but some of them on error correction duty?
What size RSA key can it factor “instantly”?
Currently none. I think it’s allegedly ~2000 qubits to break RSA.
afaik, without a need for error correction a quantum computer with 256 qubits could break an old 256-bit RSA key. RSA keys are made by taking two primes of roughly half the key’s bit length and multiplying them together. Numbers that size are relatively simple to factor on both classical and quantum computers; however, the larger the number/bit length, the more billions of billions of years it takes a classical computer to factor it. The limit for a quantum computer is how many “practical qubits” it has. OP’s article did not answer this, and so far no quantum computer has managed to factor a number any faster than your phone can (in under half a second).
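To make the “multiply two primes together” part concrete, here’s a toy-sized sketch (tiny, well-known primes; a real key uses primes around 1024 bits each):

```python
# Toy illustration of how an RSA modulus is built and why small ones are trivial to factor.
p, q = 65_537, 104_729        # two small known primes; real RSA primes are ~1024 bits each
n = p * q                     # the public modulus

def factor(n):
    """Trial division: fine for a number this size, hopeless for a 2048-bit modulus."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

a, b = factor(n)
print(f"{n} = {a} * {b}")     # instantly recovers p and q at this size
```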
Seeing quantum computers work will be like seeing mathemagics at work, doing it all behind the scenes. Physically (for the small ones) it looks the same, but abstractly it can perform all kinds of deep mathematics.
google can walk up the passageway of elton john for all i care!
I know a good therapist, if need be!
There’s a dime stuck in the road behind our local store, tails side up, for over 15 years. And that doesn’t even need error correction.
Why does it sound like technology is going backwards more and more each day?
Someone please explain to me how anything implementing error correction is even useful if it only lasts about an hour?
I mean, that’s literally how research works. You make small discoveries and use them to move forward.
What’s to research? A fucking abacus can hold data longer than a goddamn hour.
Are you really comparing a fucking abacus to quantum mechanics and computing?
they are baiting honest people with absurd statements
Russian election interference money dried up and now they’re bored
Absurdly ignorant questions that, when answered, will likely result in people knowing more about quantum computing than they did before.
And if it stopped there, we’d all be the better for it :)
They are shaking some informative answers out of literate people so I’m getting something out of it :D
True
Don’t feed the trolls.
Yes, as far as memory integrity goes anyways. Hell, even without an abacus, the wet noodle in my head still has better memory integrity.
Are you aware that RAM in your computing devices loses information if you read the bit?
Why don’t you switch from smartphone to abacus and dwell in the anti-science reality of medieval times?
And that it loses data after merely a few milliseconds if left alone, and that to account for this, DDR5 reads and rewrites unused data every 32 ms.
You’re describing how ancient magnetic core memory works; that’s not how modern DRAM (Dynamic RAM) works. DRAM uses a constantly pulsing refresh cycle to recharge the tiny capacitor in each cell.
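If it helps, here’s a toy model of a refreshed DRAM cell (the numbers are made up; it only illustrates the leak-and-recharge idea):

```python
# Toy model of a DRAM cell: a leaky capacitor whose charge decays until a refresh tops it up.
# All numbers are fictional; real cell physics and refresh timings differ.
LEAK_PER_MS = 0.02          # fraction of charge lost per millisecond (made up)
REFRESH_INTERVAL_MS = 32    # refresh every 32 ms, per the DDR5 figure mentioned above
THRESHOLD = 0.5             # below this, the sense amp can no longer tell a 1 from a 0

charge = 1.0                # the cell currently stores a "1"
for ms in range(1, 101):
    charge *= (1 - LEAK_PER_MS)          # charge leaks away
    if ms % REFRESH_INTERVAL_MS == 0:
        charge = 1.0                     # refresh: read the cell and rewrite it at full charge
    if charge < THRESHOLD:
        print(f"bit lost at {ms} ms")    # only happens if the refresh loop stops running
        break
else:
    print("bit survived 100 ms thanks to periodic refresh")
```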
And on top of that, SRAM (Static RAM) doesn’t even need the refresh circuitry; it just works and holds its data as long as it remains powered. It only takes 2 discrete transistors, 2 resistors, 2 buttons and 2 LEDs to demonstrate this on a simple breadboard.
I’m taking a wild guess that you’ve never built any circuits yourself.
I’m taking a wild guess that you completely ignored the subject of the thread to start an electronics engineering pissing contest?
Do you really trust the results of any computing system, no matter how it’s designed, when it has pathetic memory integrity compared to ancient technology?
That is not a product. This is research.
And you would have been there shitting on magnetic core memory when it came out. But without that we wouldn’t have the more advanced successors we have now.
Nah, core memory is alright in my book, considering the era of technology anyways. I would have been shitting on the Williams tube CRT memory system…
https://youtube.com/watch?v=SpqayTc_Gcw
Though in all fairness, at the time even that was something of progress.
Doubt.
Core memory loses information on read and DRAM is only good while power is applied. Your street dime will be readable practically forever and your abacus is stable until someone kicks it over.
You’re not the arbiter of what technology is “good enough” to warrant spending money on.
Obvious troll is obvious
Must be the dumbest take on QC I’ve seen yet. You expect a lot of people to focus on how it’ll break crypto. There’s a great deal of nuance around that and people should probably shut up about it. But “dime stuck in the road is a stable datapoint” sounds like a late 19th century op-ed about how airplanes are impossible.
The internet is pointless, because you can transmit information by shouting. /s
AND I can shout while the power is out. So there!
Not to mention that post-quantum cryptography has found ways to prevent that already.
Welp, quantum computers have certain advantages (finding elements in O(sqrt(n)) time complexity, factoring the products of large primes, etc). The difficulty is actually making everything stable, because these machines are pretty complex.
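To put that O(sqrt(n)) in perspective, a rough query-count comparison (constant factors and real-hardware overhead ignored):

```python
import math

# Rough query counts for unstructured search over N items.
# Classical: ~N/2 lookups on average. Grover: ~(pi/4) * sqrt(N) iterations.
for N in (10**6, 10**9, 10**12):
    classical = N // 2
    grover = math.ceil((math.pi / 4) * math.sqrt(N))
    print(f"N={N:.0e}: classical ~{classical:.1e} queries, Grover ~{grover:.1e}")
```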
Based take.
You disrespect the meaning of based
He meant base
As stable as that dime is, it’s utterly useless for all practical purposes.
What Google is talking about is making a stable qubit - the basic unit of a quantum computer. It’s extremely difficult to make a qubit stable, and since qubits underpin how a quantum computer works, instability introduces noise and errors into the calculations a quantum computer would make.
Stabilising a qubit in the way Google’s researchers have done shows that, in principle, if you scale up a quantum computer it will get more stable and accurate. It’s been a major aim in the development of quantum computing for some time.
Current quantum computers are small and error prone. The researchers have added another stepping stone on the way to useful quantum computers in the real world.
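Roughly, the claim behind “scaling up makes it more stable” is that, below the error-correction threshold, each increase in code distance suppresses the logical error rate by a constant factor. The numbers below are placeholders, not figures from the article:

```python
# Illustrative model of error suppression below threshold: every time the code distance
# grows by 2, the logical error rate drops by a factor LAMBDA. All numbers are assumed.
LAMBDA = 2.0          # suppression factor per distance step (assumed)
rate = 3e-3           # logical error rate at distance 3 (assumed)

for distance in (3, 5, 7, 9, 11):
    print(f"distance {distance:2d}: logical error rate ~{rate:.1e}")
    rate /= LAMBDA
```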
It sounds like you’re saying a large quantum computer is easier to make than a small quantum computer?
That is one of the things the article says. That making certain parts of the processor bigger reduces error rates.
I think that means the current quantum computers made using photonics, right? Those are really big though.
Do you have any idea how much error correction is needed to get a regular desktop computer to do its thing? Between the peripheral bus and the CPU, inside your RAM if you have ECC, between the USB host controller and your printer, between your network card and your network switch/router, and so on and so forth. It’s amazing that something as complex as a modern PC, using such fast signalling, can function at all. At the frequencies being used to transfer data around the system, the copper traces behave more like radio-frequency waveguides than wires. They are just “suggestions” for the signals to follow. So there’s tons of crosstalk/bleed-over and external interference that must be taken into account.
Basically, if you want to send high speed signals more than a couple centimeters and have them arrive in a way that makes sense to the receiving entity, you’re going to need error correction. Having “error correction” doesn’t mean something is bad. We use it all the time. CRC, checksums, parity bits, and many other techniques exist to detect and correct for errors in data.
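The simplest of those, a parity bit, fits in a few lines (a toy sketch, not what any real bus uses):

```python
# Minimal even-parity example: one extra bit lets the receiver detect a single flipped bit.
def add_parity(bits):
    return bits + [sum(bits) % 2]      # append a bit so the total count of 1s is even

def check_parity(word):
    return sum(word) % 2 == 0          # True if no (odd number of) bits were flipped

word = add_parity([1, 0, 1, 1, 0, 0, 1, 0])
print(check_parity(word))              # True: arrived intact

word[3] ^= 1                           # simulate one bit flipped in transit
print(check_parity(word))              # False: error detected (though not located or fixed)
```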
I’m well aware. I’m also aware that the various levels of error correction in a typical computer manage to retain the data integrity potentially for years or even decades.
Google bragging about an hour, regardless of it being a different type of computer, just sounds pathetic, especially given all the money being invested in the technology.
Traditional bits only have to be 0 or 1. Not a coherent superposition.
Managing to maintain a stable qubit for a meaningful amount of time is an important step. The final output from quantum computation is likely going to end up being traditional bits, stored traditionally, but superpositions allow qubits to be much more powerful during computation.
Being able to maintain a cached superposition seems like it would be an important step.
(Note: I am not even a quantum computer novice.)
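As I understand it, the textbook state-vector picture looks something like this (simulated classically with numpy, so purely a sketch of the math, not of real hardware):

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a normalised pair of complex amplitudes (alpha, beta).
ket0 = np.array([1, 0], dtype=complex)                         # the state |0>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # Hadamard gate
psi = H @ ket0                                                 # equal superposition of 0 and 1

probs = np.abs(psi) ** 2
print(probs)                                  # [0.5 0.5]
print(np.random.choice([0, 1], p=probs))      # measurement: back to a plain classical bit
```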
How many calculations can your computer do in an hour? The answer is a lot.
Indeed, you’re very correct. It can also remember those results for over an hour. Hell, a jumping spider has better memory than that.
The output of a quantum computer is read by a classical computer and can then be transferred or stored as long as you like using traditional means.
The lifetime of the error-corrected qubit mentioned here limits how complex a quantum calculation the quantum computer can run. And an hour is a really, really long time by that standard.
Breaking RSA or other exciting things still requires a bunch of these error corrected qubits connected together. But this is still a pretty significant step.
Well riddle me this, if a computer of any sort has to constantly keep correcting itself, whether in processing or memory, well doesn’t that seem unreliable to you?
Hell, with quantum computers, if the temperature ain’t right and you fart in the wrong direction, the computations get corrupted. Even when you introduce error correction, if it only lasts an hour, that still doesn’t sound very reliable to me.
On the other hand, I have ECC ChipKill RAM in my computer: I can literally destroy a memory chip while the computer is still running, and the system is designed to keep running with no memory corruption, as if nothing happened.
That sort of RAM ain’t exactly cheap either, but it’s way cheaper than a super expensive quantum computer with still unreliable memory.
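ECC like that is built on the same idea as the detection schemes mentioned elsewhere in the thread, just with enough redundancy to correct errors instead of merely detecting them. A toy Hamming(7,4)-style sketch (nothing like ChipKill’s real symbol-level correction):

```python
# Hamming(7,4): 4 data bits + 3 parity bits; any single flipped bit can be located and fixed.
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1             # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]      # the corrected data bits

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[5] ^= 1                             # flip one bit, as if a chip went bad
print(hamming74_correct(word) == data)   # True: the error was corrected transparently
```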
Well riddle me this, if a computer of any sort has to constantly keep correcting itself, whether in processing or memory, well doesn’t that seem unreliable to you?
Error correction is the study of the mathematical techniques that let you make something reliable out of something unreliable. Much of classical computing heavily relies on error correction. You even pointed out error correction applied in your classical computer.
That sort of RAM ain’t exactly cheap either, but it’s way cheaper than a super expensive quantum computer with still unreliable memory.
The reason so much money is being invested in the development of quantum computers is mathematical work suggesting that a sufficiently large quantum computer will be able to solve useful problems in an hour that would take the world’s biggest classical computer thousands of years.
Why do we humans even think we need to solve these extravagantly over-complicated formulas in the first place? Shit, we’re in a world today where kids are forgetting how to spell and do basic math on their own, no thanks to modern technology.
Don’t get me wrong, human curiosity is an amazing thing. But that’s a double-edged sword, especially when we’re augmenting genuine human intelligence with the processing power of modern technology and algorithms.
Just because we can, doesn’t necessarily mean we should. We’re gonna end up with a new generation of kids growing up half dumb as a stump, expecting the computers to give us all the right answers.
Smart technology for dumb people…
Why do we humans even think we need to solve these extravagantly over-complicated formulas in the first place? Shit, we’re in a world today where kids are forgetting how to spell and do basic math on their own, no thanks to modern technology.
lol.
All of modern technology boils down to math. Curing diseases, building our buildings, roads, cars, even how we do farming these days is all heavily driven by science and math.
Sure, some of modern technology has made people lazy or had other negative impacts, but it’s not a serious argument to say continuing math and science research in general is worthless.
Specifically relating to quantum computing, the first real problems to be solved by quantum computers are likely to be chemistry simulations which can have impact in discovering new medicines or new industrial processes.