From the course: Ethics and Law in Data Analytics

Handling employee data

- In this video, we're talking about regulations in the United States that apply to employees to protect them from discrimination and other misuses of their information. There's a list here for you to see of the main laws that apply in the U.S. to regulate and protect against employee discrimination. And so, in this area, we're talking, basically, about people analytics. How do we protect people in the context of analytics and artificial intelligence? Looking first at existing law to see what the law requires, and then considering how the law will be applied to new technologies. So, I'm not going to talk about all of these laws, I'm just going to highlight a few things for you. Title VII of the Civil Rights Act of 1964 is our primary law protecting against employment discrimination on the basis of protected characteristics. So, generally, in the U.S., if you fall into a suspect classification, and all of us fall into at least one, then you are protected from employment decisions that harm you on the basis of your membership in that protected class. The protected classes are race, gender, religion, and so forth. Title VII covers a host of the protected classifications. Then there are these other laws. The Americans with Disabilities Act protects those who are disabled, and disabled is defined very broadly, from employment decisions that harm them, whether it's not hiring, firing, demoting, or whatnot. Anything that is done to a person, an employee, with respect to a disability would be covered under the ADA. The Age Discrimination in Employment Act is based on the classification of age, and in the United States, under this regulation, if you are over 40, which doesn't really seem like it's that old, you are protected from employment decisions that cause you harm on the basis of that age. The last one, the Genetic Information Nondiscrimination Act, is super interesting. 
This was the most recent of all of these laws, and it was passed by the U.S. Congress because employers, and actually insurance companies on the consumer analytics side as well, were using genetic information about their employees or their potential hires to make decisions about hiring, firing, promoting, and so forth. So, you can imagine that if an employer gets access to your genetic information as part of a pre-employment physical and then decides, in fact, not to hire you because you have the genetic marker for prostate cancer, then that would be discrimination. Now, there was no law that protected us from that before GINA. Now that we have GINA, employers and insurance companies are not allowed to use that information to make employment or insurance decisions about us. So, looking at these laws, we realize that, again, they were written before technology advanced, and so we're going to be concerned to see whether or not we can stretch them in their application to these new areas of analytics and artificial intelligence. The classifications under Title VII, and Title VII is the one I'm going to focus on the most here today, are our protected classes. Race, ethnicity, national origin, gender, and religion are all protected under Title VII. So, if you are treated differently or you experience some sort of employment harm and you are a member of one of these classes, and in fact, the decision was made because of your membership in the class, then that is discrimination that is prohibited by Title VII of the Civil Rights Act of 1964. There are two theories of liability here, and we see these theories consistently across laws that protect us from discrimination in every context, so it's important to understand them here with respect to Title VII and the employment context. But you've seen and we've heard and talked about these different theories in other contexts. 
Any time there's an issue of bias or discrimination, these theories actually are relevant. So, disparate treatment and disparate impact, how are these two things different? Disparate treatment with respect to Title VII is actually considered intentional discrimination. If you are not hired because of your membership in a protected class, that is considered intentional discrimination, or disparate treatment. Treating you differently on the basis of your membership in that class. In data analytics, this might come up if an employer disfavors a particular group, has made some conclusions based on analytics that a certain group of people will be less desirable as employees. For example, we've talked about markers that might reveal that a person is more likely to quit in the first five years or is more likely to be less productive. Whatever the conclusion is, if an employer is taking that conclusion and treating a group differently on that basis, that's intentional, intentional harm. Now, unfortunately, if the group isn't one of these protected classes, that's not considered a problem under Title VII. So, at least with respect to disparate treatment, we don't really see a whole lot of Title VII issues come up in analytics and artificial intelligence. What we do see more of is the disparate impact case in analytics and artificial intelligence, and disparate impact is unintentional discrimination. Basically, this happens when neutral information is used. Although it's neutral, it is impacting members of a protected class differently than others. So, historically in Title VII cases, disparate impact would come up when an employer is using a certain kind of test to determine eligibility for a job. So, if a job requires that you are able to lift 100 pounds of weight, you might have to pass a test for that to see that you are physically able to do what the job requires. Such a test is neutral on its face, but it might screen out members of a protected class, women, for example, at a much higher rate than others, and that differential result is what disparate impact theory addresses. 
A lot of scholars are using disparate impact theory to talk about the potential for bias and discrimination in employment in this area, but it's not entirely in alignment, because when we're looking at analytics information, it's all information, it's all variables, it's all data, so we're not actually looking to correlate, necessarily, the data with what the job requires. But to the degree and extent that this is all we have for employment protections under Title VII, disparate impact is really going to be the theory that gets a lot of attention to see if groups are being adversely affected as a result of using algorithms, for example. And as we talked about in module two, it's one thing to understand that this can happen and to know that it can happen; it's another thing to actually be able to identify it, because there's really no transparency here. So, an employee who experiences such an impact might not even realize that he or she has been discriminated against in the first place. But these are things to look at, and these two theories are really important to learn. If we understand them, then we can see how they come up, really, in all law that protects us from bias and discrimination.
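One common quantitative screen for the kind of disparate impact discussed above is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if a group's selection rate is less than 80% of the highest group's rate, that is treated as evidence of adverse impact. The sketch below illustrates the arithmetic only; the group names and applicant counts are made up for the example, and this rule of thumb is a starting point for scrutiny, not a legal determination.

```python
def selection_rate(hired, applicants):
    """Fraction of applicants from a group who were selected."""
    return hired / applicants

def four_fifths_check(groups, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` (80%)
    of the highest group's rate -- evidence of adverse impact under
    the EEOC four-fifths rule of thumb."""
    rates = {g: selection_rate(h, a) for g, (h, a) in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical applicant pools as (hired, applied) counts:
groups = {"group_a": (48, 80), "group_b": (12, 40)}
print(four_fifths_check(groups))
# group_b's rate (0.30) is only 50% of group_a's (0.60), so it is flagged
```

With these illustrative numbers, group_a is hired at 60% and group_b at 30%, a ratio of 0.5, well under the 0.8 cutoff. The same comparison could be run on the output of any hiring algorithm to check whether a facially neutral model is selecting protected groups at materially different rates.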