If you write code for a living, there’s a pretty good chance you’ve found yourself explaining to yet another interviewer how to reverse a linked list or how to tell if a string contains only digits. Usually, the need for this B.Sc. material ends once a contract is signed, as most of these low-level questions are handled for us under the hood of modern programming languages and external libraries.
Still, not long ago we found ourselves facing one such question in real life: find an efficient algorithm for real-time weighted sampling. As naive as it might seem at first sight, we’d like to show you why it’s actually not – and then walk you through how we solved it, just in case you run into something similar. So buckle up, we’ve got some statistics and integrals coming up next!
Why Do We Need Weighted Sampling in Production?
At Taboola, our core business is to personalize the online advertising experience of millions of users worldwide. Thousands of websites across the globe trust us to display to each visiting user the ads he or she is most likely to relate to, click on and engage with. We do that by training several deep-learning-based models which predict the CTR (click-through rate) of each ad for each user. The process of predicting CTR and displaying the highest-rated items is known as Exploitation, as we exploit the model’s predictions. But exploitation alone is not sufficient for a successful model in the long term – we also need to allow it to do some Exploration of new possibilities, in order to find better ads.
One of our ideas for such exploration was as follows: ask the model to predict the CTR of a list of ads we would like to display, and then, instead of displaying the highest-rated items, randomly sample items from that list using weighted sampling. This is sometimes known as Soft-Exploration: the highest-rated items are still the most probable ones, but every item has some non-zero probability of being shown.
So, we need to do weighted sampling. The most naive approach would be something like this:
from random import uniform

def naive_weighted_shuffle(weights):
    # naive weighted sampling without replacement; assumes the weights sum to 1
    weights = list(weights)
    results = []
    max_r = 1.0
    for _ in range(len(weights)):
        r = uniform(0.0, max_r)  # random number in (0, max_r]
        s = 0.0
        for i, weight in enumerate(weights):
            s += weight
            if s >= r:
                results.append(weight)
                weights.pop(i)
                max_r -= weight
                break
    return results
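For example, assuming the function above and some made-up weights that sum to 1:

>>> naive_weighted_shuffle([0.1, 0.3, 0.6])  # one possible result
[0.6, 0.1, 0.3]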
This naive algorithm has a complexity of O(n²). Considering that the lists are long and all this is happening in real time, this algorithm is a no-go. But there has to be a better way to do this, right? Well, yes, but we had to design it ourselves.
The Theoretical Solution
Looking hard enough for an algorithm yielded a paper named Weighted Random Sampling by Efraimidis & Spirakis. A single line in this paper gave us a simple recipe for what we should do (page 2, the A-Res algorithm, line 2):
1) map each number w in the list to r^(1/w), a value in the range [0,1]
..(r is a random number, chosen uniformly and independently for each number)
2) reorder the numbers according to the mapped values
..(the first number will be the one with the largest mapped value, and so on)
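For reference, here’s a minimal Python sketch of these two steps (the function name is ours, not the paper’s):

from random import random

def weighted_random_order(weights):
    # step 1: map each weight w to the key r ** (1/w), drawing an independent
    #         uniform r per weight (sorted() calls the key function once per item)
    # step 2: reorder by the keys, largest first
    return sorted(weights, key=lambda w: random() ** (1.0 / w), reverse=True)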
This algorithm involves mapping and sorting, making it O(n log n), way better than O(n²), but there’s still one issue – the authors never proved it. And since we had no proof this is actually working, we had to prove it ourselves. Brace yourselves, integrals are coming.
- I will first describe how a weighted-sampling probability distribution should behave. As this is what we’re eventually looking for, formalizing it mathematically is probably a good idea.
- Once we’ve formalized the distribution we want, we will find a specific distribution we can use for weighted sampling.
- Lastly, after finding a specific distribution, I’ll link it to the Uniform Distribution (just like the algorithm above does). We’ll be amazed to see that the suggested mapping really does imitate weighted sampling.
Ready? Let’s go.
Part I: Formalizing
How does weighted sampling behave? Let’s say we have two numbers, w₁ = 1 and w₂ = 2, which we perform weighted sampling over. We’d expect to get the sequence (2,1) two-thirds of the time, and the sequence (1,2) a third of the time. So we expect w₂ to be the first number 66.6% of the time and the second number 33.3% of the time. Another way to look at this is that since we’re sorting the numbers into a list, we’d expect the priority (how close a number is to the head of the list) of w₂ to be the highest two-thirds of the time, and the lowest one-third of the time. You can easily see that priority, which we’ll denote as m, behaves somewhat like an inverse index: the number with the highest m is the first one on the list.

We’ll prefer it over the index for two reasons. First, the priority increases as w increases, which is more intuitive than the index, which decreases as w increases. Second, the absolute values of the priorities are not relevant; it doesn’t matter if (m₁, m₂) equal (4.5, 3) or (-1, -5) or (1024, 5). All that matters is the order between them – the highest will be first, then the second-highest and so on. These two characteristics will allow us to generalize better later on. So to wrap this example up, in the case of w₁ = 1 and w₂ = 2, we would like to find a probability distribution which will yield priorities m₁ and m₂ which obey: P(m₂ > m₁) = 2/3.
Let’s generalize this and formalize it mathematically: for every two numbers w₁ and w₂, we would like to have two random variables m₁ and m₂ which originate from a probability distribution parameterized by the corresponding number (meaning: m₁ ~ D_{w₁} and m₂ ~ D_{w₂}), where D_w is a probability distribution defined by the value w provided (in this simple example there are only two values, w₁ and w₂, but generally there could be more). We expect m₂ > m₁ with probability w₂ / (w₁ + w₂).
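To make this requirement concrete, here is a quick simulation of ours (not part of the original derivation) that counts how often the heavier weight comes out first when drawing directly according to the weights, using Python’s built-in random.choices:

from random import choices

w1, w2 = 1, 2
trials = 100_000
# draw the first element according to the weights, many times over
first_is_w2 = sum(choices([w1, w2], weights=[w1, w2])[0] == w2 for _ in range(trials))
print(first_is_w2 / trials)  # should come out close to w2 / (w1 + w2) = 2/3

Whatever distribution D_w we come up with next has to reproduce exactly this behavior through the ordering of its samples.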
Part II: Finding a Specific Distribution
I claim that the probability distribution D_w defined by the Cumulative Distribution Function (CDF) F_w(x) = x^w obeys the requirement above – and I’ll prove it. For this, remember that the Probability Density Function (PDF) is the derivative of the CDF, f(x) = dF(x)/dx, and therefore in our case f_w(x) = w·x^(w−1). I’ll also denote the Indicator Function as 1_{y>x} (which is 1 when y > x and 0 otherwise). Finally, we’ll work only on the range [0,1]:
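Spelling the computation out (my reconstruction of the omitted integral, using the CDF, PDF and indicator defined above, in LaTeX notation):

P(m_2 > m_1)
  = \int_0^1 \int_0^1 f_{w_1}(x) \, f_{w_2}(y) \, \mathbf{1}_{y > x} \, dy \, dx
  = \int_0^1 w_1 x^{w_1 - 1} \left( \int_x^1 w_2 y^{w_2 - 1} \, dy \right) dx
  = \int_0^1 w_1 x^{w_1 - 1} \left( 1 - x^{w_2} \right) dx
  = 1 - \frac{w_1}{w_1 + w_2}
  = \frac{w_2}{w_1 + w_2}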
So we’ve proved that the distribution with CDF F_w(x) = x^w indeed imitates weighted sampling.
Part III: Linking to Uniform Distribution
For us programmers, the Uniform Distribution is usually the most accessible one, regardless of language or libraries. It only makes sense, then, to link the custom-made distribution we just found to the Uniform Distribution, which will allow us to use the latter for weighted sampling.
Say some X is drawn from D_w (that is, X ~ D_w). What is the probability that X is smaller than some number x₀? This is given by the CDF:

P(X ≤ x₀) = F_w(x₀) = x₀^w
Let’s examine another variable, Y, which we’ll define as Y = R^(1/w), where R originates from the Uniform Distribution U(0,1). What is the probability that Y is smaller than x₀? Let’s calculate, remembering that the CDF of U(0,1) is P(R ≤ x) = x for any x in [0,1]:

P(Y ≤ x₀) = P(R^(1/w) ≤ x₀) = P(R ≤ x₀^w) = x₀^w
This is the same result we got for X, which was sampled from D_w. This means we can sample a number r from U(0,1), take its w-th root, and it behaves just as if we had used D_w all along. In other words, the priority m of a number w is given by m = r^(1/w). Neat.
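As a quick sanity check of ours (not from the original derivation), we can verify empirically that these priorities reproduce the two-thirds from Part I:

from random import random

w1, w2 = 1.0, 2.0
trials = 100_000
# priority of each weight: the w-th root of an independent uniform sample
wins = sum(random() ** (1.0 / w2) > random() ** (1.0 / w1) for _ in range(trials))
print(wins / trials)  # should come out close to w2 / (w1 + w2) = 2/3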
Really Making It Work
There’s a saying I like which states that the difference between theory and practice is that theory only works in theory. So we found a fast-enough algorithm, proved it mathematically, and of course it doesn’t work. Why? Because computers. Let’s take a look at our m values again: m = r^(1/w). As mentioned before, we use our models to predict CTR, so w = CTR, which is always a number in the range [0,1] and usually very small. r is also sampled from that same range, so r^(1/w) means raising a number smaller than 1 to a very large power (1/w can easily be in the hundreds or thousands), which yields values extremely close to zero. It actually becomes so small, so often, that the computer doesn’t handle the precision very well, and we get zeros for all values. Let’s see an example using Python:
>>> from random import random
>>> m = lambda w: random() ** (1.0/w)
>>> m(0.002), m(0.001)
(0.0, 0.0)
Not good. Luckily, the logarithm comes to the rescue: log is a monotonically increasing function, so applying it to the mapped values preserves their order, and log(r^(1/w)) = log(r)/w is a number the computer handles just fine:
>>> from math import log
>>> m = lambda w: log(random())/w
>>> m(0.002), m(0.001)
(-402.5850708710584, -758.9101125141252)
Much better. Still, this doesn’t come without a price tag – the logarithm we apply decreases the accuracy of the algorithm. This means that in our example of w₁ = 1 and w₂ = 2, we won’t get m₂ > m₁ with a probability of exactly 2/3, but something close to it. For us though, this deviation is something we’re fine with.
So, to wrap this up, our weighted random sampling algorithm for real-time production services is:
1) map each number w in the list to log(r)/w
..(r is a random number, chosen uniformly and independently for each number)
2) reorder the numbers according to the mapped values
..(the first number will be the one with the largest mapped value, and so on)
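Putting it all together, here’s a sketch of what this could look like in Python (the function name and the items/CTRs interface are ours, just for illustration):

from math import log
from random import random

def soft_explore_order(items, ctrs):
    # priority of each item: log(r) / w, with an independent uniform r per item;
    # working in log-space avoids the underflow we hit with r ** (1/w)
    priorities = [log(random()) / w for w in ctrs]
    order = sorted(range(len(items)), key=lambda i: priorities[i], reverse=True)
    return [items[i] for i in order]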
Success.
Final Words
Summing this process up, we started with a naive algorithm which wasn’t efficient enough, moved on to the exact opposite – an efficient algorithm which doesn’t work, and then modified it into an almost-exact version which works great and is also efficient.
If I need to conclude, I can only say this – there’s something super exciting about stepping away from our daily routine of developing state-of-the-art AI models and returning to our roots as algorithm developers; going back to the basics, writing mathematical proofs, sleeping by the river under starry skies and cooking dinner over the fire – we don’t get to do this every day, and I think we’re all glad we did it this time. So wherever you may surf online, know that we just made your experience a little better using plain ol’ math.