Most geniuses are not properly understood in their lifetime. Some don’t even understand their own genius. I believe one such great theologian and mathematician, Thomas Bayes, laid the groundwork for understanding the unequal world around us in quantifiable terms. More importantly, Bayes also provided a framework for us to understand ourselves. I like to playfully rename Bayes’ Theorem the ‘Bias’ Theorem, because it is really about understanding how a bias (a.k.a. a ‘prior’) quantifiably affects an outcome. Even though the theorem itself may seem intuitive, paradoxically its application certainly is not. This is partly because our human brains are not wired to properly quantify a bias, and so we quite often misjudge an outcome.
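For reference, this is the theorem in its standard form: the posterior belief in A given evidence B is simply the prior belief in A, reweighted by how well the evidence fits.

```latex
% Bayes' theorem: the posterior P(A|B) is the prior P(A),
% updated by the likelihood of the observed evidence B.
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```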
As an example, consider the following statement:
“What is the likelihood of hiring a post-graduate for an entry-level role in the organisation?”
If we look at this statement alone, we intuitively sense that there may be a bias working against the likelihood of this outcome. I could go into the numbers behind it, and we might even realise that the likelihood is different from what we instinctively thought. But that alone would trivialise Bayes’ theorem. There are far greater implications.
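To make this concrete, here is a toy calculation in Python. Every number below is invented purely for illustration and is not taken from any real hiring data.

```python
# Illustrative only: every probability below is invented, not real hiring data.
# Bayes' theorem: P(hire | post-grad) = P(post-grad | hire) * P(hire) / P(post-grad)

p_hire = 0.05                 # prior: 5% of all applicants to the role get hired
p_postgrad_given_hire = 0.10  # 10% of entry-level hires hold a post-graduate degree
p_postgrad = 0.20             # 20% of all applicants hold a post-graduate degree

p_hire_given_postgrad = p_postgrad_given_hire * p_hire / p_postgrad
print(f"P(hire | post-grad) = {p_hire_given_postgrad:.3f}")  # 0.025, half the base rate
```

In this toy setup the post-graduate degree halves the chance of an offer, which is exactly the kind of quiet bias the theorem makes visible.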
So let us look at another statement.
“What is the likelihood of hiring a post-graduate who was referred for an entry-level role in the organisation?”
This is similar to the earlier statement, with the additional prior that the candidate came through a referral. This one prior alone can have a huge impact on the outcome. Bayes’ Theorem also implies that we can change the likelihood of an outcome by introducing an additional prior. We are trying to make an unequal world more equal!
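Continuing the same invented numbers, here is a sketch of how the extra ‘referral’ prior shifts the result. For simplicity it assumes, purely for illustration, that having a referral is independent of holding the degree.

```python
# Still invented numbers: the posterior from the previous step becomes the new prior,
# and the referral is treated as further, independent evidence.

p_hire_given_postgrad = 0.025  # carried over from the previous sketch
p_referral_given_hire = 0.40   # assume 40% of hired candidates came through a referral
p_referral = 0.10              # assume 10% of all candidates came through a referral

p_hire_given_postgrad_and_referral = p_referral_given_hire * p_hire_given_postgrad / p_referral
print(f"P(hire | post-grad, referral) = {p_hire_given_postgrad_and_referral:.2f}")  # 0.10
```

Under these toy numbers, the referral lifts the candidate’s chances four-fold, from 2.5% to 10%.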
Let’s carry on.
“What is the likelihood of hiring a new post-graduate for an entry-level role in the organisation if we already have another post-graduate in an entry-level role?”
We can now see the pattern I am trying to lay out. The likelihood is now hugely different. Bias is not constant: we, as humans, continually learn, and our biases evolve. The same concept carries forward to data science, and to Machine Learning specifically. Our machines need to understand the priors, or biases, that affect the likelihood of an outcome, and continually evolve to incorporate a changing set of priors.
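As a minimal sketch of a prior that evolves with evidence, consider a conjugate Beta prior over the hiring rate for post-graduates, updated after each observed outcome. Again, all numbers are hypothetical.

```python
# A prior that "learns": a Beta prior over the hiring rate,
# updated after every observed outcome (all numbers hypothetical).

alpha, beta = 2.0, 8.0          # prior belief: hiring rate of roughly 20%
observations = [1, 0, 0, 1, 0]  # hypothetical outcomes: 1 = hired, 0 = not hired

for hired in observations:
    alpha += hired              # count another hire
    beta += 1 - hired           # count another rejection
    print(f"updated mean hiring rate: {alpha / (alpha + beta):.2f}")
```

Each new observation nudges the belief, which is precisely how a ‘bias’ keeps evolving as evidence accumulates.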
Apart from the three statements above, there are so many other biases that determine the likelihood of hiring, aren’t there? However, just like machines, we are seldom able to consider all the priors. Hence, our minds often intuitively select the biases we feel are most relevant.
A statutory warning: while data scientists and Artificial Intelligence / Machine Learning evangelists may like to dig deeper into the above set of problems, Business Managers may have a totally different perspective. Business Managers are concerned purely with the business relevance of a problem. So, while we debate the likelihood of hiring (possibly even the likelihood of retention), the business problem may actually translate to how quickly we can hire someone and get productive output. I may be naive enough to try to fit a Bayesian model on the likelihood of productivity, but there is at least one business problem we can controllably address: time-to-hire. Looking beyond Bayesian models, time-to-hire can be addressed much more simply by automating a set of time-consuming, repetitive processes using Robotic Process Automation. A simple but powerful methodology followed by Visume® to strike first at the heart of business relevance. More details here.