Ethics manifesto

My Ethical Principles

Data scientists must be actively involved in preventing algorithms from harming marginalized groups. As discussed in class, ProPublica found that, for car insurance rates in three states, "prices in whiter neighborhoods stayed about the same as risk increased, while premiums in minority neighborhoods went up" (Angwin 2017). Much of data science exists to make companies more money, even when that means exploiting minority groups that hold less power. It was highly unethical, but unsurprising, that these insurers raised prices in non-white areas, knowing they could easily take advantage of those communities. The same kind of disparity appears in the millions of dollars that people of color are overcharged when seeking loans to purchase or refinance homes (Akselrod 2021).

Another example of data science being used to harm groups of people is the criminal risk assessment algorithms that assign a recidivism score to individuals in prison. The score is produced by an algorithm that takes into account a person's background, such as race, income, and gender. Judges then use it as a factor when deciding what resources or outcomes a person receives, such as a longer prison sentence or whether they are held in jail before trial (Hao 2019). A fundamental problem with algorithms built on criminal data is that the data itself misrepresents marginalized groups. I would argue, along with others, that there is no such thing as crime data; there is only policing data, which we know misrepresents marginalized groups. We do not have a record of every "crime" ever committed, only of the "crimes" recorded by law enforcement.

Oppressive systems are easier to hide behind algorithms because people assume that anything mathematical must be correct and ethical to use. However, the data fed into these algorithms, and the people and companies they benefit, remain biased and oppressive. Ultimately, using machine learning and artificial intelligence to make decisions about people is problematic because these algorithms are built on data and systems that have historically been discriminatory.
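To make the "policing data, not crime data" point concrete, here is a minimal, entirely hypothetical Python simulation (not from the readings; the group labels, offense rate, and detection probabilities are invented for illustration). Two groups behave identically, but one is policed twice as heavily, so its offenses are recorded twice as often:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two hypothetical groups with the SAME true offense rate.
true_rate = 0.10
group = rng.choice(["A", "B"], size=n)
offended = rng.random(n) < true_rate

# Unequal enforcement (assumed values): group B's offenses are
# twice as likely to be recorded as group A's.
detection_prob = np.where(group == "A", 0.2, 0.4)
recorded = offended & (rng.random(n) < detection_prob)

# An "offense rate" estimated from recorded (policing) data makes
# group B look twice as risky despite identical true behavior.
for g in ("A", "B"):
    mask = group == g
    print(f"group {g}: true rate {offended[mask].mean():.3f}, "
          f"recorded rate {recorded[mask].mean():.3f}")
```

Any risk model trained to predict the recorded outcomes rather than the true ones would learn this enforcement gap as if it were a behavioral difference between the groups.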

Reading Discussion

Data science outcomes involve making decisions about people's lives using personal data. Because data science is so often used to make money, the field is prone to harmful practices, whether intentional or not: biases in machine learning algorithms, p-hacking, and the misuse of people's information. Data science ethics must therefore be studied and actively discussed within the data science community and among everyone else the field affects.
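As a concrete illustration of the p-hacking problem, here is a small hypothetical Python sketch (not from the readings): when many comparisons are run on pure noise, about 5% come out "significant" at p < 0.05, so selectively reporting only those results manufactures findings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Run 100 comparisons where the null hypothesis is true by
# construction: both samples come from the same distribution.
significant = 0
for _ in range(100):
    a = rng.normal(0, 1, size=30)
    b = rng.normal(0, 1, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        significant += 1

# Roughly 5 "discoveries" are expected by chance alone; reporting
# only these, and hiding the failed comparisons, is p-hacking.
print(f"{significant} of 100 null comparisons had p < 0.05")
```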

References

Akselrod, Olga. “How Artificial Intelligence Can Deepen Racial and Economic Inequities.” American Civil Liberties Union, 13 July 2021, www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities.

Angwin, Julia, et al. “Minority Neighborhoods Pay Higher Car Insurance Premiums than White Areas with the Same Risk.” ProPublica, 4 Apr. 2017, www.propublica.org/article/minority-neighborhoods-higher-car-insurance-premiums-white-areas-same-risk.

Hao, Karen. "AI Is Sending People to Jail - and Getting It Wrong." MIT Technology Review, 21 Jan. 2019, www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/.
