Can algorithm bias result in employment discrimination?
by Claire Jamie-Lee Nolan
A modern, technologically driven society generates large amounts of information about members of that society. Think, for example, of the information regarding statuses and activities that banks, credit card providers, medical aid schemes, cell phone networks and employers have in their possession. Think further of the information that Google, Facebook, Uber and Amazon have in their possession.
Most, if not all, of this information is digitally stored and can potentially be easily “mined” to derive information that other parties may find useful – for example, a credit provider, insurer or employer. While this new digital age of technology promotes efficiency and heightened productivity, it is not without risk.
Among the risks we face in the age of technology, one long-debated phenomenon is the algorithm. Algorithms are, in essence, computer programmes created to enable their users to process “big data” quickly and efficiently and to extract relevant information from that data. This information can then be used to take decisions that affect individuals – for example, whether to admit a person to study for a particular degree, to grant credit, or to employ or promote.
Big data and the algorithms utilised to analyse data are playing an increasingly important role in the generation of information for employers when they take employment-related decisions. These tools can be useful, but some argue that they can have negative outcomes. Although an algorithm is, on the face of it, an objective computer process, it may reflect the bias of the humans who create it and the historical injustice embedded in the data on which it relies. Consequently, discrimination and bias may be reflected in the results that an algorithm produces, and this may reinforce human prejudice.
Cathy O’Neil, the author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, is a firm believer that algorithms are “opinions embedded in mathematics”. She believes that algorithms can reinforce discrimination and widen inequality. O’Neil emphasises that these algorithms are not subjected to enough scepticism – many believe that they are both objective and trustworthy. Automated decisions based on opaque algorithms may be subjected to very little human oversight: humans make decisions based on information provided by an algorithm without questioning the validity of that information.
To demonstrate, consider how a human resources department in a large corporation may employ an algorithm to “weed” through thousands of job applications in order to narrow the pool of applicants to those most suitable for the position available. While these hiring algorithms may be effective in reducing the number of applications, one must consider that the results produced may have eliminated candidates with high potential due to programmed criteria that may be inherently biased. Hiring algorithms are, in fact, designed to use historical data to predict which qualities and criteria are associated with strong job performance. Thus, these algorithms may, in reality, discount the true potential of various candidates and produce results that are quantitative but not necessarily fair.
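How a screen trained on historical data can inherit past bias can be sketched with a small, hypothetical Python example. Here the “model” simply scores a new applicant by the hire rate of similar past applicants, so any skew in past decisions (in this invented data, a preference for one school background) carries straight through. All features, records and values below are assumptions for illustration, not a real hiring system.

```python
# Hypothetical sketch: a naive screening score learned from historical
# hiring outcomes. All data below is invented for illustration.

# Historical records: (years_experience, attended_elite_school, was_hired)
history = [
    (5, True, True), (6, True, True), (4, True, True),
    (7, False, False), (8, False, False), (5, False, True),
]

def hire_rate(records, predicate):
    """Share of past candidates matching `predicate` who were hired."""
    matched = [hired for (yrs, elite, hired) in records if predicate(yrs, elite)]
    return sum(matched) / len(matched) if matched else 0.0

def score(applicant, records=history):
    """Score a new applicant by the hire rate of similar past applicants.

    Because the score is derived purely from past decisions, any bias in
    those decisions carries over into the ranking of new candidates.
    """
    yrs, elite = applicant
    return hire_rate(records, lambda y, e: e == elite)

# A strong applicant from the under-represented background is scored low
# purely because similar candidates were rarely hired before, not on merit.
print(score((8, False)))  # long experience, non-elite school
print(score((4, True)))   # short experience, elite school
```

The point of the sketch is that the ranking never consults merit directly: it only replays the pattern of past outcomes.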
O’Neil cites the example of an algorithm utilised by an American company that was initially designed to take into account the area potential job applicants lived in, on the assumption that employees who lived closer to their place of work would be more likely to remain in its employment. The result was that candidates who lived “too far” from work could be excluded from consideration by the algorithm. This had the effect of excluding from consideration applicants from lower-income communities who lived further from the workplace, and by excluding this class of people the algorithm indirectly implemented bias based on class. Candidates with potential would thus have been excluded on a criterion that did not accurately reflect their qualifications or ability to do the work required. In this case, the algorithm was altered to prevent this from occurring, but another user may not have considered this possibility. The implications of such a criterion become even starker in a society where employees live in racially segregated suburbs.
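The distance criterion described above amounts to a simple filter, which can be sketched as follows. The suburb names, distances and cut-off value are invented for illustration; the sketch shows only the mechanism by which a facially neutral commute limit excludes a whole class of applicants.

```python
# Hypothetical sketch of a distance-based screening criterion.
# All applicants, suburbs, distances and the cut-off are invented.

applicants = [
    {"name": "A", "suburb": "inner_city", "km_to_work": 5},
    {"name": "B", "suburb": "outer_township", "km_to_work": 35},
    {"name": "C", "suburb": "outer_township", "km_to_work": 40},
]

MAX_COMMUTE_KM = 20  # an assumed cut-off chosen by the algorithm's designer

def passes_distance_filter(applicant):
    """A facially neutral rule: drop anyone who lives 'too far' away."""
    return applicant["km_to_work"] <= MAX_COMMUTE_KM

shortlist = [a["name"] for a in applicants if passes_distance_filter(a)]
print(shortlist)  # only the inner-city applicant survives the filter
```

In a society where residential patterns track class or race, the filter's neutral wording is irrelevant: the shortlist systematically excludes applicants from the outlying communities.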
Another example cited is an algorithm used in the recruitment process by an English company that had been programmed to utilise the recruitment criteria that the company had utilised in the past. The application of these criteria had, in the past, resulted in the rejection of candidates whose proficiency in English was poor, often foreigners. This resulted in the algorithm “learning” that “English” names were generally associated with acceptable qualifications while “foreign” names were not. The programme therefore reinforced racial and perhaps other forms of discrimination.
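The proxy effect in this example can be illustrated with a minimal, hypothetical sketch: the screen “learns” only from names attached to past outcomes, so a name becomes a stand-in for the real criterion (English proficiency) that drove the historical rejections. All names and outcomes below are invented.

```python
# Hypothetical sketch: a screen that "learns" only from past outcomes.
# Past candidates were rejected for poor English proficiency, but the
# model sees only names, so names become a proxy for that criterion.
# All names and outcomes below are invented for illustration.

past_outcomes = [
    ("Smith", True), ("Jones", True), ("Brown", True),
    ("Kowalski", False), ("Nakamura", False), ("Okafor", False),
]

def learned_screen(name, history):
    """Accept only if candidates with this name were mostly accepted before."""
    outcomes = [accepted for n, accepted in history if n == name]
    return bool(outcomes) and sum(outcomes) / len(outcomes) >= 0.5

# A fully proficient new applicant is still rejected, because the model
# has only ever seen this name attached to past rejections.
print(learned_screen("Nakamura", past_outcomes))  # False
print(learned_screen("Smith", past_outcomes))     # True
```

The candidate's actual proficiency never enters the decision; the historical correlation between name and outcome does all the work, which is precisely how the programme reinforces the earlier discrimination.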
The above examples deal with decisions taken in the context of job appointments. There are also examples where algorithms were utilised to assess the performance of employees. They could also find application in decisions to promote.
It is important to note that the creators of the algorithms may not have any intention to create an algorithm that has the effect of discriminating against certain groups. An employer who purchases a product containing such an algorithm may also not be aware of such potential discrimination. But this will not prevent an allegation that the application of the algorithm constitutes unfair direct or indirect discrimination as prohibited by section 6 of the Employment Equity Act, 1998.
The debate about algorithm bias is predominantly focused on the United States because of the greater use of this type of technology in that country and the existence of a large number of employers with large workforces. But this is not to say that similar issues and problems may not also arise in South Africa in the future.
Equality is one of the cornerstones of our society, embedded in our Bill of Rights and other legislation. Inequality and discrimination are rooted in our history and still affect millions of people in South Africa. The potential risk of discrimination is one that employers ought to be aware of.
While the efficiency and productivity gains that may be associated with the use of algorithms in the employment context are commendable, care should be taken to ensure that they are not achieved at the expense of the right to equality. Employers considering the use of this type of programme should take steps to ensure that it meets the requirements of equality. It would also be advisable to monitor the impact of these programmes on an ongoing basis. One way to implement this oversight is to regularly run audits to test the accuracy and fairness of the results produced by the algorithm.

Claire Jamie-Lee Nolan is a candidate attorney in ENSafrica’s employment department.

Reviewed by Peter le Roux, an executive consultant in ENSafrica’s employment department.
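One possible form for the ongoing audits suggested above is a selection-rate comparison such as the “four-fifths” adverse-impact check used in US practice: compare the rate at which each group is selected, and flag for review any ratio below 0.8. The group labels and counts below are invented for illustration; the appropriate grounds and thresholds in South Africa would be a matter for legal advice.

```python
# Hypothetical sketch of a periodic adverse-impact audit.
# All counts below are invented for illustration.

def selection_rate(selected, applied):
    """Fraction of applicants from a group who were selected."""
    return selected / applied

def adverse_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the reference group's rate.

    Under the conventional "four-fifths" rule, a ratio below 0.8 is a
    red flag that the selection procedure warrants closer review.
    """
    return rate_group / rate_reference

rate_a = selection_rate(50, 100)  # reference group: 50% selected
rate_b = selection_rate(15, 100)  # comparison group: 15% selected
ratio = adverse_impact_ratio(rate_b, rate_a)
print(round(ratio, 2), "flag for review:", ratio < 0.8)
```

An audit like this does not prove discrimination; it only surfaces disparities that the employer should then investigate and, where necessary, correct.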
No information provided herein may in any way be construed as legal advice from ENSafrica and/or any of its personnel. Professional advice must be sought from ENSafrica before any action is taken based on the information provided herein, and consent must be obtained from ENSafrica before the information provided herein is reproduced in any way. ENSafrica disclaims any responsibility for positions taken without due consultation and/or information reproduced without due consent, and no person shall have any claim of any nature whatsoever arising out of, or in connection with, the information provided herein against ENSafrica and/or any of its personnel. Any values, such as currency (and their indicators), and/or dates provided herein are indicative and for information purposes only, and ENSafrica does not warrant the correctness, completeness or accuracy of the information provided herein in any way.