Posted on 01 February, 2023
What's the issue?
True randomness doesn’t exist in computers today.
You might object: "but I sample from a probability distribution whenever I run the command for a normal distribution!" It’s not the end of the world, but unfortunately, the machines have been lying to us: you simply can’t code a truly random ‘coin flip’ on a classical computer.
So, what really happens when you sample from a distribution, say by asking your machine for a random number between 1 and 1000? Today’s approaches fall into two categories:
- Pseudo-random number generators: by far the most common form of random number generation. Essentially, an algorithm steps through a sequence of numbers, calculating each number from the previous one. This method can never produce true randomness because it is deterministic: given the algorithm and the starting seed, every subsequent value is predictable.
- Hardware ‘true’ random number generators: these use a physical effect, such as noise in a fluctuating transistor circuit, to generate randomness. Unfortunately, they can easily be biased by their environment (try heating a resistor yourself – it’s a fun way to demonstrate this in real life).
The implication is that with either method you get either no true randomness at all (pseudo-random sequences) or an uncontrollable, drifting distribution (hardware random number generators).
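To make the determinism concrete, here is a minimal sketch of a pseudo-random number generator: a linear congruential generator using the classic ‘minstd’ parameters. The function name and signature are illustrative, not any particular library’s implementation.

```python
def lcg(seed, n, a=48271, m=2**31 - 1):
    """Return n pseudo-random integers from a linear congruential generator.

    Each value is computed purely from the previous one, so the whole
    sequence is fixed the moment the seed is chosen.
    """
    x = seed
    out = []
    for _ in range(n):
        x = (a * x) % m  # next number depends only on the previous number
        out.append(x)
    return out

# The same seed always yields the same "random" sequence: the generator
# is fully deterministic, so anyone who knows the seed and parameters
# can predict every value.
print(lcg(42, 5) == lcg(42, 5))  # True
```

Real libraries use more sophisticated algorithms (e.g. Mersenne Twister), but the structural point is the same: a seed plus a rule gives a predictable sequence, not genuine randomness.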
Why does it matter?
A lot of the computation we run today relies on good randomness. Take a few examples:
- The world of probabilistic algorithms: using a random walk or random forest (whether that’s in modelling market movements, or under the hood in your CFD modelling)
- The simulations we run every day: all the way from Monte Carlo through to the emerging world of deep neural networks and machine learning (e.g. Generative Adversarial Networks used in complex computer vision tasks)
The problem is that with a pseudo- or hardware-based random number generator, we don’t get the right amount of randomness (or ‘entropy’) injected into our systems.
So, why is this important? Our systems end up less efficient than they should be: the convergence times of our models are sub-optimal, and training or simulation simply takes longer and uses more compute than it should!
An interesting way to see this is to swap the random number generator in your simulation for a constant sequence of 1s: it’s obvious that the simulation would never converge. The higher the quality of randomness in your system, the faster you’ll converge.
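That thought experiment runs directly in a few lines. The sketch below estimates pi by Monte Carlo, once with Python’s (itself pseudo-random) random module standing in for a decent generator, and once with the constant sequence of 1s; estimate_pi and the setup are illustrative, not a production benchmark.

```python
import random

def estimate_pi(sample, n=100_000):
    """Monte Carlo estimate of pi: count random points (x, y) in the unit
    square that fall inside the quarter circle x^2 + y^2 <= 1."""
    inside = 0
    for _ in range(n):
        x, y = sample(), sample()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / n

random.seed(0)  # seeded pseudo-randomness, as a stand-in for a good source
good = estimate_pi(random.random)
bad = estimate_pi(lambda: 1.0)  # the "sequence of 1s": every point is (1, 1)

print(good)  # lands near 3.14159
print(bad)   # stuck at 0.0 - with zero entropy, the estimate never converges
```

The degenerate generator samples the same point forever, so no amount of extra compute improves the answer; the quality of the estimate is bounded by the quality of the randomness feeding it.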
The next question I’m sure you’ll ask is: how can we actually generate this sort of outcome? How do we build something that can accurately reproduce a probability distribution - a real coin toss?
We turn to the world of quantum, but not the ‘quantum computing’ that dominates the headlines. Quantum random number generators use a quantum effect that is known to be inherently probabilistic: they prepare quantum states, and only on measuring a state do we observe a value. What this means is we get a true coin-toss effect, a genuine 50:50 chance of a 1 or a 0 at every point in the binary stream, which is exactly the entropy the algorithms above need.
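As a rough sketch of how such coin tosses become usable numbers, the code below assumes a hypothetical quantum_bit() function (here backed by the OS entropy pool as a stand-in, since no real QRNG hardware is attached) and assembles unbiased integers from it using standard rejection sampling:

```python
import secrets  # stand-in entropy source; a real QRNG would supply the bits

def quantum_bit():
    """Hypothetical QRNG interface: each call is an independent 50:50
    coin toss. The OS entropy pool stands in for quantum hardware here."""
    return secrets.randbits(1)

def uniform_int(lo, hi):
    """Build an unbiased integer in [lo, hi] out of fair coin tosses,
    using rejection sampling to avoid modulo bias."""
    span = hi - lo + 1
    nbits = (span - 1).bit_length()  # enough bits to cover 0 .. span - 1
    while True:
        value = 0
        for _ in range(nbits):
            value = (value << 1) | quantum_bit()  # one coin toss per bit
        if value < span:  # reject out-of-range draws rather than wrapping
            return lo + value

print(uniform_int(1, 1000))  # an unbiased draw between 1 and 1000
```

The point of the rejection step is that as long as each toss is a true 50:50, every integer in the range is exactly equally likely, with no algorithmic structure leaking into the output.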
It’s really important to be careful when selecting a hardware provider in this space: not all quantum is made equal. Some organisations claim to use quantum hardware that is in fact deterministic rather than probabilistic (the same issue we face with pseudo and hardware random number generators).
At Boston, we’ve tested high quality solutions and demonstrated just how significant the acceleration in convergence times can be. We’ve seen some startling results: up to 30% greater efficiency. This is truly significant when we consider the impact it can have for our customers, particularly those with heavy HPC workloads. We will continue to work closely with our partners who are experts in this area, such as SECQAI, to ensure that all of our customers receive the highest quality randomness for their simulations.