> Until now, we've had two suggestions: the Bayesian average and the solution mentioned by Selene, where each app would get a 3-star rating by default, to be adjusted as people start reviewing the application.
As I read it, Selene's approach is the Bayesian average with m=3 and C not specified. (m being the prior and C being our confidence in it, the same notation as http://fulmicoton.com/posts/bayesian_rating/.)
It would seem to me that setting m to the average rating across all apps would be more accurate. Or perhaps the average minus the standard deviation, so that an average app with reviews ranks above an average app with none.
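For concreteness, here is a minimal sketch in Python of what I mean. The function names and the choice of prior are mine, not anything that exists in the store code:

    from statistics import mean, stdev

    def bayesian_average(ratings, m, C):
        # m is the prior rating and C is our confidence in it, expressed
        # as a count of "virtual" reviews (fulmicoton's notation).
        # With no reviews this returns exactly m; each real review then
        # pulls the score away from the prior.
        return (C * m + sum(ratings)) / (C + len(ratings))

    def prior_from_catalogue(all_ratings):
        # The variant suggested above: the catalogue-wide average shifted
        # down by one standard deviation, so an average app with reviews
        # outranks an average app with none.
        return mean(all_ratings) - stdev(all_ratings)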
> The Bayesian average tries to predict what would happen if plenty of people were asked to rate the app, but apps with only bad reviews would still rank better than those without any reviews.
No, the Bayesian average ranks apps with no reviews at exactly m. Any app with a rating below m will rank below the non-reviewed app.
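To make that concrete with the sketch above (m=3 and C=5 are made-up numbers):

    print(bayesian_average([], m=3, C=5))            # 3.0: an unreviewed app sits exactly at m
    print(bayesian_average([2, 2, 2, 2], m=3, C=5))  # ~2.56: four 2-star reviews drag it below m

So an app whose reviews average below m always lands below the unreviewed app, not above it.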
On Thu, Apr 9, 2015 at 8:38 AM, Martin Albisetti <argentina@xxxxxxxxx> wrote:
> The obscurity of how search results are (or will be) sorted will help make it harder for developers to game the system.
Really? Any sane rating system will increase the computed rating when it receives a review above the current rating, and decrease it when it receives one below. Thus every sane rating system is vulnerable to a flood of five-star, or one-star, reviews. Methinks you are flattering yourself if you think attackers will take the time to search for a unique vulnerability in your rating system rather than brute-forcing it.
Since reviews require an Ubuntu One account, I think the best defense is preventing and removing fraudulent accounts. Keeping this part secret makes more sense to me.
Robert