For Spigit customers, our latest monthly SaaS release has some noteworthy new science under the hood. Spigit has a long history of employing some of the most innovative and rigorous algorithms to help apply science to the difficult problem of selecting ideas — or separating an idea signal from the collective idea noise.
In previous blogs, we've talked about the science behind our advanced Pairwise Voting algorithm and the application of reputation to weight ranking algorithms for up/down voting. In this release, our crowd science team turned its attention to making the Spigit Star Rating system state of the art.
Star Ratings are essentially a mechanism that enables users to express their depth of sentiment about a particular idea. The mechanism itself is a simple expression for end users, but the algorithmic difficulty lies in using these Star Ratings across the crowd. The goal is to effectively create a ranked order of ideas on the Leaderboard that increases confidence in top-rated ideas. This is precisely what our new algorithm aims to achieve.
Historically, most innovation solutions that use star ratings employ a simple average of the collective rating across the crowd, which then determines each idea's position on the Leaderboard. However, such a simplistic approach does not take into account all of the available evidence in the idea set when calculating the score for each individual idea.
If you’ve ever wanted to know how popular a movie is, you may have consulted the online Top 250 list of movies by IMDb. It has a combined web and mobile audience of more than 190 million unique monthly visitors, not to mention a searchable database of more than 150 million data items, which includes more than 2.7 million movies. So how does IMDb produce a rank order of all these movies? The answer is their adaptation of a Bayesian estimate algorithm.
Rather than simply averaging the score for an idea, we normalize the rating based on a number of factors. These factors pull the score higher or lower than a straight average would. For example, we should have more confidence in ratings for ideas that have received more votes. We should apply the evidence across all ideas in the idea set to determine ranking. And we should update the ranking dynamically as new evidence becomes available.
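Spigit's exact formula isn't published, but IMDb's well-known Bayesian estimate, WR = (v / (v + m)) · R + (m / (v + m)) · C, illustrates the idea: each item's average rating R (with v votes) is pulled toward the crowd-wide mean C in proportion to how far v falls short of a minimum-votes threshold m. A minimal Python sketch along those lines (all idea data and the choice of m are illustrative assumptions, not Spigit's actual parameters):

```python
# Bayesian-estimate ranking in the style of IMDb's published formula:
#   WR = (v / (v + m)) * R + (m / (v + m)) * C
# R = the idea's mean star rating, v = its vote count,
# C = the mean rating across all ideas, m = a minimum-votes threshold
# controlling how strongly low-vote ideas are pulled toward the mean.
# The idea data below is made up for illustration.

def bayesian_score(r, v, c, m):
    """Weighted rating for a single idea."""
    return (v / (v + m)) * r + (m / (v + m)) * c

ideas = {                      # idea -> (mean stars, vote count)
    "Idea A": (5.0, 2),        # perfect average, but only 2 votes
    "Idea B": (4.3, 40),       # lower average, far more evidence
    "Idea C": (3.1, 25),
}

c = sum(r for r, _ in ideas.values()) / len(ideas)   # crowd-wide mean
m = 20                                               # confidence threshold

ranked = sorted(ideas,
                key=lambda name: bayesian_score(*ideas[name], c, m),
                reverse=True)
for name in ranked:
    r, v = ideas[name]
    score = bayesian_score(r, v, c, m)
    print(f"{name}: straight avg {r:.2f} -> weighted {score:.2f}")
```

With these illustrative numbers, Idea B edges ahead of Idea A on the Leaderboard even though its straight average is lower, because 40 votes constitute much stronger evidence than 2.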
First, we identify an acceptable set of ideas. These represent ideas that fall into the top 50th percentile of ideas with at least one vote. Other factors we consider in the algorithm are:
The Leaderboard remains empty until two conditions are met:
It’s important to note that without an acceptable set, we cannot have confidence in the ratings of ideas.
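The post doesn't spell out the mechanics of the acceptable set, but the rule as described — ideas with at least one vote whose rating falls in the top 50th percentile of that voted group — can be sketched as follows. The sample data and the exact percentile computation are illustrative assumptions:

```python
# Sketch of the "acceptable set" filter described above: keep ideas
# that have at least one vote and whose mean rating is at or above
# the 50th percentile of the voted group. Idea data is illustrative.

def acceptable_set(ideas):
    """ideas: dict of name -> (mean stars, vote count)."""
    voted = {n: rv for n, rv in ideas.items() if rv[1] >= 1}
    if not voted:
        return set()                        # no evidence yet
    ratings = sorted(r for r, _ in voted.values())
    cutoff = ratings[len(ratings) // 2]     # 50th-percentile cutoff
    return {n for n, (r, _) in voted.items() if r >= cutoff}

ideas = {
    "Idea A": (4.8, 12),
    "Idea B": (4.1, 30),
    "Idea C": (2.9, 7),
    "Idea D": (3.5, 0),     # unvoted: excluded regardless of rating
}
print(acceptable_set(ideas))
```

Here the unvoted Idea D is excluded outright, and Idea C falls below the 50th-percentile cutoff, leaving Ideas A and B as the acceptable set the ranking can draw confidence from.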
The heavy lifting is done behind the scenes by the algorithm, so if you use star ratings in challenges or communities, you can now have much greater confidence in the ranking. It's all available to everyone in the latest SaaS update, surfaced in our ad-hoc reporting engine, and reflected in the Mindjet Graph APIs, which are currently in beta.
Of course, this release also involves the ongoing task of squashing pesky bugs, improving the ease of adding files to an idea when it's first created, and generally continuing to up the game, all made seamlessly available to customers with our monthly updates. As always, details are available to current customers on our support site.
Until my next blog — and in the spirit of these Star Ratings — may the crowd be with you!