From the course: Ethics and Law in Data Analytics

Bias and legal challenges

- The Executive Office of President Barack Obama issued a report in 2016 that identifies the opportunities and challenges of big data, the law that protects civil rights and guards against discrimination, and some best practices for organizations in the data industry. This report really highlights two of the core themes we identified in module one: the tension between law and technology, and the tension between individual rights and organizational rights. In this video, I will share some challenges and highlight some principles of law that are relevant. In later videos, we will look at how these challenges and laws show up in four distinct areas: consumer rights, employee rights, student rights, and criminal justice. And finally, in a separate video, we will review the best practices suggested by the U.S. government to alleviate and eliminate bias. The White House Big Data Report identified two challenges to promoting fairness and overcoming bias: one, challenges related to the data inputs themselves that are used in algorithms, and two, challenges in the design of algorithmic systems. For the first challenge, the concern relates to our earlier discussion of inclusion and exclusion. The decision to use certain inputs and not others in an algorithm can, in fact, result in discrimination. For example, in an algorithm designed to determine the fastest route between points A and B, the architect of the system might include information about roads, but not bike routes or public transportation. This negatively impacts those who do not own a vehicle. Similarly, where the data inputs do not accurately reflect a population, the conclusions drawn can favor certain groups over others. For example, in the fastest-route problem, if speed data is collected only from those who own smartphones, then the system's results may be more accurate for wealthier populations with higher concentrations of smartphones, and less accurate for poorer areas, where smartphone concentrations are lower. This otherwise neutral collection of data can have a disparate, or disproportionate, impact on socially and economically disadvantaged people. This is discrimination. We're going to hear from Nathan about the challenge to the actual design of the system, but basically, the issue here is that the technical processes involved in describing, diagnosing, and predicting human preferences and behaviors are completely hidden from view. There's no transparency. And because we can't see the process, we can't determine if there's been bias or discrimination. The owner of the algorithmic system has a confidential trade secret right in that system. It is not required to disclose it, so we can never really tell what's happening. The owner, organization, or business has what we call intellectual property rights, which are here in tension with individual rights to be free from discrimination.
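To make the smartphone-sampling point concrete, here is a minimal simulation sketch in Python. All the numbers in it are illustrative assumptions (the report gives none): both areas are assumed to have the same true average road speed, but only smartphone owners report trip data, so the area with lower smartphone penetration yields a smaller sample and a less accurate speed estimate.

```python
import random
import statistics

# Illustrative simulation of the transcript's example: speed estimates are
# built only from trips reported by smartphone owners, so areas with low
# smartphone penetration produce smaller samples and noisier estimates.
# All constants below are assumptions made for the sketch, not real data.

random.seed(0)

TRUE_SPEED = 30.0      # assumed true average road speed (mph) in both areas
TRIPS_PER_AREA = 1000  # assumed number of trips taken in each area per day

def estimate_speed(penetration: float) -> float:
    """Estimate average speed using only trips reported via smartphones."""
    reported = [
        random.gauss(TRUE_SPEED, 8.0)     # noisy individual trip speed
        for _ in range(TRIPS_PER_AREA)
        if random.random() < penetration  # only smartphone owners report
    ]
    return statistics.mean(reported)

for area, penetration in [("wealthier area", 0.8), ("poorer area", 0.1)]:
    # Repeat the estimate many times to measure its typical error.
    errors = [abs(estimate_speed(penetration) - TRUE_SPEED) for _ in range(200)]
    print(f"{area}: mean absolute estimation error "
          f"~ {statistics.mean(errors):.2f} mph")
```

On typical runs the estimation error for the low-penetration area comes out several times larger than for the high-penetration area, even though the data collection itself is facially neutral. That accuracy gap is the disparate impact the report describes.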