It goes without saying that quantum computing is complex. But people buy extraordinarily complex things every day through simple processes. After all, few smartphone buyers know how their devices work. Even a humble bar of soap only hits the shelf after the raw materials have been mined, refined, manufactured, packaged, shipped and stored.
The question is, will the complexity of quantum computing be contained to the point where end users can buy it “off the shelf”? The answer is: it depends on what you mean by plug and play.
If you imagine something like a Quantum MacBook that you might find on the shelves at Best Buy, that is unlikely ever to exist. For most users, it may never be worth having your own on-site quantum computer. Unless you are a large, well-funded organization uniquely positioned to gain exclusive access to a quantum device, such as a government agency or a major financial institution, it will always be more convenient to access quantum computing resources through the cloud.
On the other hand, if you imagine a digital market for quantum applications, it could very well materialize within two to five years. But just because you can download quantum software doesn’t mean it will instantly provide an advantage over classical computing — or even that it’ll be useful at all.
Some readers may have seen this play out with “off-the-shelf” AI solutions. Although there are many commercial AI solutions available in the market, these solutions offer little benefit without some level of customization. For example, consider how similar all the automated chatbots you encounter when browsing the web are. These chatbots have become table stakes, not perks.
As a rule of thumb, the less customization an out-of-the-box solution requires to work, the less likely it is to offer an advantage that a competitor could not install as easily. Every organization will have a unique combination of data, IT infrastructure, people, and problems to solve. Any useful algorithm will need to be adapted to this unique environment to have an impact. It’s true for AI, and it’s even more true for quantum computing.
Quantum applications require dedicated expertise
Now, some quantum use cases will lend themselves more readily than others to out-of-the-box applications. There are already several classical optimization solvers commercially available, such as Gurobi and CPLEX, and it is not a stretch to imagine quantum versions in the future. Although optimization use cases vary widely, they can all be mapped to well-known mathematical formulations, such as mixed-integer programming problems. However, it still takes a domain expert to understand which variables or constraints should be prioritized. It also takes a technical expert to map business problems into mathematical problems that a software solution can solve, and then to tune the software for the best performance.
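To make that mapping concrete, here is a toy sketch of a budgeting decision expressed as a 0-1 integer program. The project values, costs, and budget are invented purely for illustration, and the brute-force search over binary choices stands in for what a solver like Gurobi or CPLEX would do at scale:

```python
from itertools import product

# Hypothetical example: choosing which of four projects to fund.
# All numbers below are made up for illustration.
values = [12, 10, 7, 4]   # expected return of each project
costs = [6, 5, 4, 3]      # cost of funding each project
budget = 10

# 0-1 integer program: maximize sum(v_i * x_i)
# subject to sum(c_i * x_i) <= budget, with each x_i in {0, 1}.
# A commercial solver handles millions of variables; here we simply
# enumerate all 2^4 binary assignments to show the formulation itself.
best_value, best_choice = 0, (0, 0, 0, 0)
for x in product((0, 1), repeat=len(values)):
    cost = sum(c * xi for c, xi in zip(costs, x))
    value = sum(v * xi for v, xi in zip(values, x))
    if cost <= budget and value > best_value:
        best_value, best_choice = value, x

print(best_choice, best_value)  # funds projects 0 and 2 for a return of 19
```

The formulation is generic; the domain expertise lies in deciding what the values, costs, and constraints should actually be for a given business.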
Any benefit of off-the-shelf quantum software will depend on having a dedicated team that can tailor the software to a company’s unique problems. This includes both experts in quantum computing and experts who deeply understand business issues. It may seem like you can wait until the software is fully realized to start hiring quantum talent, but unfortunately the talent pool is rapidly shrinking. In our recent enterprise quantum adoption survey, we found that 69% of enterprises have started the quantum adoption path, and 51% of those organizations have already started building their quantum teams. If you wait too long, the brightest minds will be long gone.
You will also want to foster relationships with external consultants. The executives we surveyed agreed: 96% said they couldn’t successfully adopt quantum computing without outside help. External consultants can save you time and effort by helping you identify use cases, anticipate roadblocks, and build the software infrastructure you’ll need to effectively leverage quantum computing.
Building the infrastructure for quantum computing
Quantum computing will never exist in a vacuum, and to add value, quantum computing components must be seamlessly integrated with the rest of the enterprise technology stack. This includes HPC clusters, ETL processes, data warehouses, S3 buckets, security policies, etc. The data will need to be processed by classical computers before and after going through quantum algorithms.
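That hybrid flow can be sketched as a simple pipeline. Every function name here is hypothetical, and the "quantum" step is a classical stand-in for a call to a cloud quantum backend; the point is only to show where classical pre- and post-processing wrap the quantum call:

```python
# Minimal sketch of a hybrid quantum-classical pipeline.
# All names are invented; quantum_step is a classical placeholder
# for invoking a quantum algorithm through a cloud service.

def preprocess(raw_records):
    """Classical ETL: normalize the raw input data."""
    peak = max(raw_records)
    return [r / peak for r in raw_records]

def quantum_step(features):
    """Placeholder for a cloud quantum call; returns a mock
    expectation value computed classically."""
    return sum(features) / len(features)

def postprocess(expectation):
    """Classical interpretation of the quantum result."""
    return "invest" if expectation > 0.5 else "hold"

raw = [3, 9, 6, 12]
decision = postprocess(quantum_step(preprocess(raw)))
print(decision)
```

If the `preprocess` stage is slow or the data feeding it is disorganized, the pipeline is bottlenecked before the quantum step ever runs, which is exactly the infrastructure risk described above.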
This infrastructure matters: any acceleration from quantum computing can easily be offset by mundane issues such as disorganized data warehousing and suboptimal ETL processes. Expecting a quantum algorithm to provide an advantage with shoddy classical infrastructure around it is like expecting a flight to buy you time when you don’t have a car to take you to the airport.
These same infrastructure issues often arise in many of today’s machine learning (ML) use cases. Many out-of-the-box tools may be available, but any useful ML application will ultimately be unique to the purpose of the model and the data used to train it. You need a streamlined process to prep and clean data, ensure data is compliant with privacy and governance policies, track and correct model drift, and of course ensure the model does what you want it to do.
As enterprise ML users know, maintaining these applications is an ongoing process. Ideally, you would have a development environment for prototyping, a staging environment for testing, and then a production environment to scale the model for enterprise use, leveraging HPC and cloud resources. The complexity associated with building and deploying ML applications in production necessitated the creation of the field of MLOps (also known as AIOps) to manage this complexity.
The complexity only multiplies when you add quantum computing, which will require a similar “QuantumOps” process to keep applications useful in production. Quantum hardware is changing rapidly, and to keep pace, you’ll need a way to benchmark the performance of new quantum hardware backends as they’re released to ensure you have the best setup for your problem. The last thing you want is to invest millions in the development of a quantum application, only for a new device or software component to render your work obsolete. It will be essential to have an environment that gives you the flexibility to refine your models, try different configurations, track and compare changes, and iterate quickly.
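A benchmarking harness for that kind of backend comparison might look like the following sketch. Both "backends" are stubs returning made-up quality and runtime numbers; in practice they would be real calls to quantum devices or simulators:

```python
# Hypothetical harness for comparing hardware backends on a fixed problem.
# Each backend stub returns (solution_quality, runtime_seconds);
# the numbers are invented for illustration.

def backend_a(problem):
    return 0.92, 4.0

def backend_b(problem):
    return 0.95, 6.5  # higher quality, but slower

def benchmark(backends, problem, max_runtime=5.0):
    """Pick the highest-quality backend that fits the runtime budget."""
    best_name, best_quality = None, -1.0
    for name, run in backends.items():
        quality, runtime = run(problem)
        if runtime <= max_runtime and quality > best_quality:
            best_name, best_quality = name, quality
    return best_name

choice = benchmark({"A": backend_a, "B": backend_b}, problem=None)
print(choice)  # backend B exceeds the runtime budget, so A is chosen
```

Rerunning a harness like this whenever a new device ships is what keeps an application from being silently outrun by the hardware.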
A ready-made future?
In the future, quantum computing could be as invisible as the processor running the device you’re reading this on right now. Quantum applications could be as easily accessible as your web browser or email client.
But accessible is not synonymous with useful.
To gain meaningful benefit from quantum computing, you need to lay the groundwork by building the required team and infrastructure. Although fault-tolerant quantum devices are still years away, companies can build their workflows well in advance and swap in these more powerful backend devices once they’re live.
Ultimately, every business will have unique challenges requiring unique quantum applications. Business-to-business applications may be similar, but any quantum advantage will depend on how well the quantum application matches business needs and capabilities. This is in direct contrast to the idea of an out-of-the-box quantum app, as appealing as that sounds.
Jhonathan Romero Fontalvo is Founder and Director of Professional Services at Zapata Computing.