COMPAS fairness analysis end-to-end example#52
ashryaagr wants to merge 1 commit into JuliaAI:master from
Conversation
I have used VegaLite for plotting count plots and have added it to Project.toml. But due to some system issues, I am unable to update Manifest.toml to the latest one available here, so I haven't committed it. This is why the build is failing.
Cool, thanks for this! I'll look into the VegaLite story, and will also ask for someone else's opinion on this, as fairness is not my area.
Thanks a lot for the pull request. In the tutorial you focus on assessing the fairness of an AdaBoostClassifier. However, the first step should be to analyze the predictions from COMPAS itself, following e.g. compas. Looking at figure 8, we can see that COMPAS is fair regarding the FDR for race, but, as ProPublica found, the FPR for African-Americans is almost twice the FPR for Caucasians. For age we observe the same pattern on the same two metrics: the FDR results are fair, but the FPR for <25 is 1.6x higher than for 25-45. On the other hand, if we consider the distribution of false positive errors by sex, we observe the opposite: the model is fair on FPR, but the FDR for Female is 1.34 times higher than for Male. This should not be too hard, since all of this is available in this notebook, in particular the output of the risk score from COMPAS to assess its fairness. Only in a second step would I look at other algorithms. More extensive analysis can be found at aequitas. Let me know if you want to discuss.
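To make the metric comparison above concrete, here is a minimal, hedged sketch of how per-group FPR and FDR disparities can be computed. This is not code from the PR (the tutorial itself uses Julia with Fairness.jl/MLJ); the data below is a toy stand-in, not the real COMPAS records, and the function name `group_rates` is illustrative.

```python
def group_rates(records):
    """Per-group FPR and FDR from (group, y_true, y_pred) triples.

    FPR = FP / (FP + TN)  -- share of true negatives flagged positive
    FDR = FP / (FP + TP)  -- share of positive predictions that are wrong
    """
    counts = {}  # group -> [false positives, true negatives, true positives]
    for group, y_true, y_pred in records:
        c = counts.setdefault(group, [0, 0, 0])
        if y_pred == 1 and y_true == 0:
            c[0] += 1  # false positive
        elif y_pred == 0 and y_true == 0:
            c[1] += 1  # true negative
        elif y_pred == 1 and y_true == 1:
            c[2] += 1  # true positive
    return {
        group: {
            "FPR": fp / (fp + tn) if fp + tn else 0.0,
            "FDR": fp / (fp + tp) if fp + tp else 0.0,
        }
        for group, (fp, tn, tp) in counts.items()
    }

# Toy records: (group, true label, predicted label)
records = [
    ("A", 0, 1), ("A", 0, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 0), ("B", 0, 0), ("B", 1, 1),
]
rates = group_rates(records)
# Disparity ratio between groups, analogous to the "almost twice" FPR
# gap the reviewer describes for race in the COMPAS predictions.
fpr_ratio = rates["A"]["FPR"] / rates["B"]["FPR"]
```

A model can be "fair" on one of these ratios and not the other, which is exactly the pattern described above for race, age, and sex.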
I have added a fairness analysis of the COMPAS dataset. On my local system, this example renders perfectly when I use Franklin.serve().