Google (GOOGL) has been especially vocal about artificial intelligence lately. Now, a new partnership is putting a fresh, albeit connected, spin on tech advancement: quantum computing.
Joining forces with the University of Chicago and the University of Tokyo, Google is committing to accelerate the development of a fault-tolerant quantum computer. The three partners have pledged up to $100 million for the initiative over the coming decade, with Google contributing up to half of that amount.
This substantial investment will fund research, grants, workforce and business development, and access to quantum computers.
According to a statement by Google, “This partnership is in sync with Google Quantum AI’s mission to build a large-scale quantum computer capable of executing complex, error-corrected computations. We are convinced that accomplishing this will reveal the capacity to yield discernible benefits for many – from the discovery of molecules for new medications, to crafting more eco-friendly batteries, ensuring solid information security, and even instigating scientific research advancements that are yet to be envisioned.”
Yesterday, we signed a quantum computing partnership with the University of Chicago and the University of Tokyo, together committing up to $100 million to quantum research — which may lead to advances that we haven't even imagined. Learn more ↓ https://t.co/uyWRbB3tsc
— Google (@Google) May 22, 2023
Per the Harvard Business Review, quantum computers store and process data in a fundamentally different way from conventional machines. Rather than the traditional binary approach of bits that are strictly zeros or ones, they use qubits, which can hold combinations of both states at once, enabling computations far faster and more powerful than those of traditional supercomputers.
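To make that difference concrete, here is a minimal Python sketch (a toy state-vector simulation in NumPy, not real quantum hardware; the Hadamard gate used below is a standard textbook operation) of a qubit being placed into a 50/50 superposition of 0 and 1:

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit is described by two
# "amplitudes", one per outcome; the squared magnitude of an
# amplitude is the probability of measuring that outcome.

ket0 = np.array([1.0, 0.0])            # qubit definitely in state 0

# The Hadamard gate (a standard single-qubit operation) puts the
# qubit into an equal superposition of 0 and 1.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

superposed = H @ ket0                  # amplitudes: (0.707..., 0.707...)
probabilities = np.abs(superposed) ** 2

print(probabilities)                   # [0.5 0.5] -> both outcomes equally likely
```

Until it is measured, the qubit genuinely carries both possibilities at once, which is what lets quantum machines explore many computational paths in parallel.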
This superior computing approach could trigger breakthroughs in AI, among numerous other potential applications.
However, there is a catch: today's quantum computers make roughly one error in every 1,000 operations. Before the technology can be put to practical use, error rates must fall dramatically, ideally to around one in a million.
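To see why that target matters, here is a back-of-the-envelope Python sketch. It assumes independent errors and a simple three-way majority vote, a classical textbook illustration of redundancy rather than Google's actual error-correction scheme:

```python
# Physical error rate quoted above: about 1 error in 1,000 operations.
p = 1e-3

# With three redundant copies and a majority vote, the overall result
# is wrong only when at least two of the three copies fail.
# (Textbook repetition-code arithmetic, assuming independent errors;
# real quantum error correction is considerably more involved.)
p_logical = 3 * p**2 * (1 - p) + p**3

print(f"physical error rate: {p:.0e}")          # 1e-03
print(f"after majority vote: {p_logical:.1e}")  # ~3.0e-06, close to 1 in a million
```

Stacking redundancy like this is the basic intuition behind the fault-tolerant, error-corrected machines the partnership is chasing.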