Honestly, I think a good tech journalist should be able to critically assess what their interviewees claim rather than take it at face value, which is what the author of this article seems to do most of the time. Today, every start-up that does any kind of data processing advertises itself as a "big data" company using "advanced machine learning", but in my experience most of them rely on pretty trivial algorithms behind the scenes.
Also, some of the numbers in the article really make you scratch your head: achieving more than 95% accuracy when ranking a large number of student teams in an eight-month-long business plan competition, based solely on the results of a simple online questionnaire taken at the beginning of the competition? That just seems too good to be true given the data sources they have at hand, even assuming they use the most advanced machine learning in the world.
Of course, if you test your algorithm at many different competitions, you will achieve perfect or near-perfect prediction accuracy at some of them by pure chance; that doesn't mean you can achieve this kind of accuracy consistently, which is where the actual business value lies.
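To make that concrete, here's a toy simulation (the numbers are made up for illustration: 5 teams, 500 competitions) where the "predictor" is literally a random shuffle. Its average pairwise ranking accuracy hovers around 50%, yet its best single-competition result is near-perfect, purely because it got many tries:

```python
import itertools
import random

random.seed(0)

def pairwise_accuracy(pred, true):
    # Fraction of team pairs whose relative order the prediction gets right.
    pairs = list(itertools.combinations(range(len(true)), 2))
    correct = sum(
        (pred.index(a) < pred.index(b)) == (true.index(a) < true.index(b))
        for a, b in pairs
    )
    return correct / len(pairs)

n_teams, n_competitions = 5, 500  # hypothetical sizes, chosen for illustration
true_ranking = list(range(n_teams))

accuracies = []
for _ in range(n_competitions):
    # A completely uninformed "prediction": a random ordering of the teams.
    guess = random.sample(range(n_teams), n_teams)
    accuracies.append(pairwise_accuracy(guess, true_ranking))

mean_acc = sum(accuracies) / len(accuracies)
best_acc = max(accuracies)
print(f"mean accuracy: {mean_acc:.2f}, best single competition: {best_acc:.2f}")
```

Cherry-picking the best competition makes a coin flip look like a crystal ball; the honest number is the average over all of them.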
"Achieving more than 95% accuracy when ranking a large number of student teams in an eight-month-long business plan competition, based solely on the results of a simple online questionnaire taken at the beginning of the competition? That just seems too good to be true."
Technically, the two sentences in that quote may not be connected. The first is about the longest evaluation they did (8 months); the second is about their performance across competitions (95%). I read it as the 95% being an aggregate over all the competitions they did, meaning they could have been (and presumably were) worse on the 8-month one.