Description
Python implementation of Weng-Lin Bayesian ranking, a better, license-free alternative to TrueSkill
This is a port of the amazing openskill.js package.
openskill.py alternatives and similar software solutions
Based on the "Analytics" category.
Alternatively, view openskill.py alternatives based on common mentions on social networks and blogs.
- Superset: DISCONTINUED. Apache Superset is a Data Visualization and Data Exploration Platform [Moved to: https://github.com/apache/superset]
- Metabase: The simplest, fastest way to get business intelligence and analytics to everyone in your company :yum:
- Redash: Make Your Company Data Driven. Connect to any data source, easily visualize, dashboard and share your data.
- PostHog: PostHog provides open-source product analytics, session recording, feature flagging and A/B testing that you can self-host.
- Piwik: Empowering People Ethically with the leading open source alternative to Google Analytics that gives you full control over your data. Matomo lets you easily collect data from websites & apps and visualise this data and extract insights. Privacy is built-in. Liberating Web Analytics. Star us on Github? +1. And we love Pull Requests!
- Plausible: Simple, open source, lightweight (< 1 KB) and privacy-friendly web analytics alternative to Google Analytics.
- GoAccess: GoAccess is a real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems or through your browser.
- Fathom Analytics: Fathom Lite. Simple, privacy-focused website analytics. Built with Golang & Preact.
- Countly: Countly is a product analytics platform that helps teams track, analyze and act on their user actions and behaviour on mobile, web and desktop applications.
- Velox: A C++ vectorized database acceleration library aimed at optimizing query engines and data processing systems.
- ThinkUp: ThinkUp gives you insights into your social networking activity on Twitter, Facebook, Instagram, and beyond.
- Open Web Analytics: Official repository for Open Web Analytics, an open source alternative to commercial tools such as Google Analytics. Stay in control of the data you collect about the use of your website or app. Please consider sponsoring this project.
- Chartbrew: Open-source web platform used to create live reporting dashboards from APIs, MongoDB, Firestore, MySQL, PostgreSQL, and more.
- Socioboard: Socioboard is the world's first open source Social Technology Enabler. Socioboard Core is our flagship product.
- Mixpost: Self-hosted social media management. Schedule and organize your social content. Built with the Laravel Framework.
- Rakam: Collect customer event data from your apps. (Note that this project only includes the API collector, not the visualization platform.)
- IG Monitoring: DISCONTINUED. Free, self-hosted Instagram Analytics and Stats.
- Tellery: Tellery lets you build metrics using SQL and bring them to your team. As easy as using a document. As powerful as a data modeling tool.
- Matomo: Leading open-source analytics platform that gives you more than just powerful analytics, formerly known as Piwik.
README
Python implementation of Weng-Lin Bayesian ranking, a better, license-free alternative to TrueSkill
This is a port of the amazing openskill.js package.
Installation
pip install openskill
Usage
>>> from openskill import Rating, rate
>>> a1 = Rating()
>>> a1
Rating(mu=25, sigma=8.333333333333334)
>>> a2 = Rating(mu=32.444, sigma=5.123)
>>> a2
Rating(mu=32.444, sigma=5.123)
>>> b1 = Rating(43.381, 2.421)
>>> b1
Rating(mu=43.381, sigma=2.421)
>>> b2 = Rating(mu=25.188, sigma=6.211)
>>> b2
Rating(mu=25.188, sigma=6.211)
If a1 and a2 are on a team and win against a team of b1 and b2, send this into rate:
>>> [[x1, x2], [y1, y2]] = rate([[a1, a2], [b1, b2]])
>>> x1, x2, y1, y2
(Rating(mu=28.669648436582808, sigma=8.071520788025197), Rating(mu=33.83086971107981, sigma=5.062772998705765), Rating(mu=43.071274808241974, sigma=2.4166900452721256), Rating(mu=23.149503312339064, sigma=6.1378606973362135))
You can also create Rating objects by importing create_rating:
>>> from openskill import create_rating
>>> x1 = [28.669648436582808, 8.071520788025197]
>>> x1 = create_rating(x1)
>>> x1
Rating(mu=28.669648436582808, sigma=8.071520788025197)
Ranks
When displaying a rating, or sorting a list of ratings, you can use ordinal:
>>> from openskill import ordinal
>>> ordinal([43.07, 2.42])
35.81
By default, this returns mu - 3 * sigma, showing a rating for which there's a 99.7% likelihood the player's true rating is higher, so with early games, a player's ordinal rating will usually go up and could go up even if that player loses.
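As a quick sanity check, that default is plain arithmetic; the snippet below simply recomputes the value shown above by hand and is not part of the library's API:
>>> mu, sigma = 43.07, 2.42
>>> round(mu - 3 * sigma, 2)
35.81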
Artificial Ranks
If your teams are listed in one order but your ranking is in a different order, for convenience you can specify a rank option, such as:
>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2])
>>> result
[[Rating(mu=20.96265504062538, sigma=8.083731307186588)], [Rating(mu=27.795084971874736, sigma=8.263160757613477)], [Rating(mu=24.68943500312503, sigma=8.083731307186588)], [Rating(mu=26.552824984374855, sigma=8.179213704945203)]]
It's assumed that the lower ranks are better (wins), while higher ranks are worse (losses). You can provide a score instead, where lower is worse and higher is better. These can just be raw scores from the game, if you want.
Ties should have either equivalent rank or score.
>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], score=[37, 19, 37, 42])
>>> result
[[Rating(mu=24.68943500312503, sigma=8.179213704945203)], [Rating(mu=22.826045021875203, sigma=8.179213704945203)], [Rating(mu=24.68943500312503, sigma=8.179213704945203)], [Rating(mu=27.795084971874736, sigma=8.263160757613477)]]
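To record a tie, give the tied teams the same rank; a minimal sketch (the updated values are omitted here, since they depend on the model used):
>>> a1 = b1 = c1 = Rating()
>>> [[a2], [b2], [c2]] = rate([[a1], [b1], [c1]], rank=[1, 1, 2])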
Predicting Winners
You can compare two or more teams to get the probabilities of each team winning.
>>> from openskill import predict_win
>>> a1 = Rating()
>>> a2 = Rating(mu=33.564, sigma=1.123)
>>> predictions = predict_win(teams=[[a1], [a2]])
>>> predictions
[0.45110901512761536, 0.5488909848723846]
>>> sum(predictions)
1.0
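The same call accepts more than two teams; a hedged sketch, where the extra rating b1 is made up for illustration and the individual probabilities are omitted:
>>> b1 = Rating(mu=28.5, sigma=4.0)
>>> predictions = predict_win(teams=[[a1], [a2], [b1]])
>>> len(predictions)  # one probability per team
3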
Predicting Draws
You can compare two or more teams to get the probabilities of the match drawing.
>>> from openskill import predict_draw
>>> a1 = Rating()
>>> a2 = Rating(mu=33.564, sigma=1.123)
>>> prediction = predict_draw(teams=[[a1], [a2]])
>>> prediction
0.09025541153402594
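As with predict_win, more than two teams can be passed; a minimal sketch (the third rating here is made up for illustration, and the returned value is omitted):
>>> prediction = predict_draw(teams=[[a1], [a2], [Rating(mu=20.0, sigma=7.5)]])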
Choosing Models
The default model is PlackettLuce. You can import alternate models from openskill.models like so:
>>> from openskill.models import BradleyTerryFull
>>> a1 = b1 = c1 = d1 = Rating()
>>> rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2], model=BradleyTerryFull)
[[Rating(mu=17.09430584957905, sigma=7.5012190693964005)], [Rating(mu=32.90569415042095, sigma=7.5012190693964005)], [Rating(mu=22.36476861652635, sigma=7.5012190693964005)], [Rating(mu=27.63523138347365, sigma=7.5012190693964005)]]
Available Models
- BradleyTerryFull: Full Pairing for Bradley-Terry
- BradleyTerryPart: Partial Pairing for Bradley-Terry
- PlackettLuce: Generalized Bradley-Terry
- ThurstoneMostellerFull: Full Pairing for Thurstone-Mosteller
- ThurstoneMostellerPart: Partial Pairing for Thurstone-Mosteller
Which Model Do I Want?
- Bradley-Terry rating models follow a logistic distribution over a player's skill, similar to Glicko.
- Thurstone-Mosteller rating models follow a Gaussian distribution, similar to TrueSkill. Gaussian CDF/PDF functions differ in implementation from system to system (they're all just Chebyshev approximations anyway). The accuracy of this model isn't usually as great either, but tuning it with an alternative gamma function can improve the accuracy if you really want to get into it.
- Full pairing should give more accurate ratings than partial pairing; however, in high-k games (like a 100+ person marathon race), the Bradley-Terry and Thurstone-Mosteller models need to compute a joint probability that involves a (k-1)-dimensional integration, which is computationally expensive. Use partial pairing in this case, where players only change based on their neighbors (see the sketch after this list).
- Plackett-Luce (default) is a generalized Bradley-Terry model for k ≥ 3 teams. It scales best.
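For example, a large free-for-all could be rated with a partial-pairing model; a minimal sketch along the lines of the rate call shown earlier (the 100-player field and finishing order are made up for illustration):
>>> from openskill import Rating, rate
>>> from openskill.models import BradleyTerryPart
>>> players = [[Rating()] for _ in range(100)]       # 100 single-player teams
>>> finishing_order = list(range(1, 101))            # rank 1 finished first
>>> results = rate(players, rank=finishing_order, model=BradleyTerryPart)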
Advanced Usage
You can learn more about how to configure this library to suit your custom needs in the project documentation.
Implementations in other Languages
*Note that all licence references and agreements mentioned in the openskill.py README section above
are relevant to that project's source code only.