Artificial Intelligence for Recruitment Purposes - When Is It the Right Choice?

20/04/2021



Artificial intelligence (AI) enables systems and machines to learn, solve problems and make decisions. It can be used for many purposes; here, we will focus on how it can be used in the recruitment context.


AI can replace the manual search through thousands of resumes: it can identify candidates who match the job requirements and recommend to recruiters only those with the right experience, education and qualifications, or it can even select the right candidate completely autonomously. This can make organizations more effective and efficient.


However, AI can also harm candidates if not used properly. If AI is left to decide whether an individual will be selected for a role, it may decide wrongly and put them at a disadvantage. Even where AI does not select candidates by itself, its use to shortlist candidates raises concerns. Organizations therefore have to decide whether AI is the right choice for them. It may well be for organizations that process high volumes of applications, but for others the risk of employing AI might be greater than the benefit they would get out of it.


Some organizations might keep a human in the loop, meaning humans make the final decisions based on AI inputs. This type of use of AI is less intrusive.


This article will provide best practices for the use of AI in the recruitment context.


Responsibility for compliance with privacy laws


An organization wishing to deploy AI to find the right candidates is responsible for compliance with privacy laws. In most jurisdictions this obligation cannot be passed on to third parties, e.g., the vendor providing the AI solution to the organization.


Within the organization, different stakeholders should work on complying with privacy laws – responsible management, privacy teams, data scientists and engineers, etc. Sometimes organizations might not have the expertise required to manage the risks associated with the use of AI. In such cases they might need to upskill their existing resources or bring in subject matter experts. It is important to assess the risk to individuals and invest early enough to address potential risks that may arise when using AI.


When deploying AI, it is important to assess risks associated with its use and mitigate them to safeguard individuals’ rights and comply with privacy laws.


Identifying and mitigating risks


The best way to assess risks to individuals is to perform a privacy impact assessment/data protection impact assessment, collectively referred to as a PIA, even if not legally required. That way, all risks can be properly highlighted and mitigated. In any case, this activity is likely to meet the threshold set in Article 35 GDPR and therefore require a PIA under the law.


The PIA should describe the activity, assess risks and propose mitigation plans.


Describing the activity


The activity should be described in detail and data flows provided. The description should not be overly onerous, but it needs to be informative enough, with a focus on the processing of personal data by AI, especially where AI makes decisions or predictions. It should be clear what data is collected from individuals, how it will be processed and what the outcome of such processing will be.


AI can make decisions with or without human involvement. For the decision-making process not to be considered fully automated, any human involvement must be meaningful, and the human must be able to overturn decisions made by the AI. Both the AI decision making and the human involvement should be well described and documented.


Even where the use of AI is not legally regulated, it is good practice to address why AI is a reasonable solution for the given activity. It should be described how AI will achieve a purpose that would not be possible using less intrusive technologies.


The envisaged process for ensuring individual rights in the system should also be described. This is a very important point, as the law exists to protect individuals’ rights and freedoms.


Lastly, the description of the activity should mention whether individuals would reasonably expect AI to process their personal data and how the activity will address AI accuracy.


Risk assessment


Where AI makes decisions without any human intervention, individuals’ rights might not be properly safeguarded. In addition, many organizations might fall short on the transparency front, as it can sometimes be hard to provide a meaningful explanation in plain language. The system in use must also be able to accommodate other rights requests, such as access, erasure, rectification and objection, which are addressed further in the article.


Besides the usual privacy considerations, such as the impact on individuals’ rights under privacy laws, AI might cause both material and non-material harm to individuals; it can treat them unequally and deprive them of employment opportunities.


This can lead to non-compliance with labour, equality and other laws.


Mitigating actions should be appropriate to the identified risks and able to reduce them. These could include:


Finding an appropriate legal basis


Where laws prescribe a legal basis for the processing of personal data, in most cases organizations will have to find two grounds for the processing of personal data by AI – one for training the AI and one for the actual selection of candidates.


If we consider GDPR legal bases, organizations could possibly rely on legitimate interest or consent when training the AI, as it could be harder to justify relying on another legal basis.


It could be problematic to rely on consent, as in most cases consent would not have been obtained at the time the data was collected, and obtaining it afterwards might prove impractical. In addition, there are considerations about the conditions for consent, such as whether consent can be freely given in the prospective employee/employer context and how to ensure that enough information is provided to data subjects to make the consent informed. More on transparency further in the text.


Legitimate interest might be an appropriate legal basis to train AI, provided that the legitimate interest assessment has been carried out and it does not indicate that the rights and freedoms of individuals override the legitimate interest pursued by the controller.


The legal basis for the actual processing of personal data by AI to find the right candidates might be the performance of a contract. If an organization receives a high number of applications, it might be justifiable to use AI to find candidates fit for the role, as a disproportionate effort would otherwise be required. In that case, the processing could be considered necessary in order to take steps at the request of the data subject prior to entering into a contract. Whether AI that makes decisions without any human involvement is a proportionate means in the recruitment context is particularly important, as such use of AI might significantly affect individuals (an employment opportunity could be denied, or individuals could be put at a significant disadvantage). The conditions for such use of AI are stricter under the GDPR, and only three legal bases are potentially applicable – performance of a contract, authorization under Member State law, or explicit consent.


However, if an organization receives a number of applications that can objectively be reviewed manually, it should assess whether performance of a contract is the right choice and would probably have to seek another legal basis, such as consent or legitimate interest. In this case, using an AI system that makes fully automated decisions might not be justified, in line with the above considerations.


Informing individuals of the use of AI


Many consider AI to be a “black box” whose workings cannot be explained to individuals. However, this is not the case; how AI works can be explained. Individuals have to understand how AI makes decisions and what the outcomes are. This does not mean that an organization should share the whole AI architecture with individuals, but rather translate it into plain language so that individuals can understand what data is collected and used by the AI, what impact it may have on them, how the logic works, what the outcomes of the AI processing are, how it is ensured that the results are accurate, etc.


Putting that into plain language requires some effort, but the best advice when drafting the explanation is to put yourself in the position of a data subject and, based on the explanation you intend to provide, try to understand how the AI works. Explaining the logic is probably the most contentious part. To overcome this, you could tell candidates what data the AI uses to make decisions/predictions and how: for example, that it compares qualifications and work experience against the job requirements and takes into consideration psychometric and other test results, interview notes and behaviour during video interviews, resulting in a decision or recommendation.


Furthermore, depending on how independent the AI is in the decision-making process and whether a human will review the decision/prediction, it should be explained how the overall process works. To be prudent, it should also be explained how fairness is ensured and what the impact of the decisions made in the process is – more about fairness further in the article, but it should be explained, for instance, how it is ensured that bias is removed. When it comes to the impact of the process, it is clear that such processing could result in a candidate being hired for the role or not, but it should nonetheless be mentioned. Depending on the AI involvement, candidates must understand why they were or were not hired for the role. It can often be helpful to explain how the AI is trained, as candidates may then better understand how it arrives at decisions. It might be a good idea to provide practical examples as well, as people tend to identify with examples.


All this does not have to be provided in one place; a layered approach could be taken. But it is important to highlight to candidates, before any processing of personal data occurs, that AI is used. Companies should be transparent and could have a banner or a prominent text box mentioning that AI is used in the hiring process, with more information provided through links to different pages, privacy policies and the like, or through expandable menus.


Under the GDPR, where AI makes decisions without human involvement that could have legal or similarly significant effects on individuals, providing information on how the decision was reached and on the significance and consequences of such processing is a legal obligation. In addition, the GDPR requires providing information on how individuals, in this case candidates, can express their views on the use of AI and the process, as well as challenge a decision made solely by the AI. However, even where humans make decisions based on AI recommendations, explaining how the AI makes the recommendations that help humans form their opinion is still important to comply with the transparency requirements.


Providing training to staff involved in deploying, auditing and using AI systems


All stakeholders involved in the use of AI should be trained on privacy and the risks associated with the use of AI. The training should enable them to understand how AI systems work, whether a system works as designed, how their expertise can address the risks associated with the use of AI, and so on.


In addition, humans who use AI predictions to make their own decisions or who override AI outputs should be trained on how to deal with the AI so that they can perform their duties as expected. It goes without saying that training should be refreshed often enough and follow developments in AI technologies.


Measures to achieve accuracy


These measures should cover what is called “statistical accuracy”, which is essential for an AI system to comply with the accuracy principle. It is important to consider whether AI outputs represent predictions or factual information, and to document that.


In the recruitment context, this means ensuring that enough relevant data has been used to train the AI, minimizing the room for error. Training data might include data about past applicants who applied for different positions; synthetic data, i.e., data artificially created to train the system; hiring patterns used by other companies, i.e., patterns created by other organizations from their own data and shared with others; and so on.


Statistical accuracy also needs to be monitored to ensure a high level of precision and sensitivity. It should be monitored how many of the candidates selected by the AI are actually fit for the role, and how many of the fit candidates the AI finds. Where the AI makes wrong decisions/recommendations, adjustments should be made, either by ingesting additional data or by removing the data that caused the errors.
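

To illustrate what monitoring precision and sensitivity can look like in practice, here is a minimal Python sketch; the sample data and field names are hypothetical and would in reality come from recruiters manually labelling a sample of AI recommendations.

    # Minimal sketch: precision and sensitivity (recall) of AI shortlisting,
    # computed from a hypothetical, manually labelled sample of applications.
    sample = [
        # (ai_recommended, actually_fit_for_role)
        (True, True), (True, False), (False, True),
        (True, True), (False, False), (False, True),
    ]

    true_pos = sum(1 for rec, fit in sample if rec and fit)        # recommended and fit
    false_pos = sum(1 for rec, fit in sample if rec and not fit)   # recommended but not fit
    false_neg = sum(1 for rec, fit in sample if not rec and fit)   # fit but missed by the AI

    precision = true_pos / (true_pos + false_pos)    # share of selected candidates who are fit
    sensitivity = true_pos / (true_pos + false_neg)  # share of fit candidates the AI found

    print(f"precision: {precision:.2f}, sensitivity: {sensitivity:.2f}")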


It should also be closely monitored how often humans accept, reject or override decisions/predictions made by the AI. This is an important metric that can indicate whether the AI is properly trained.
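

A sketch of how such an override rate could be tracked, again with made-up review outcomes purely for illustration:

    # Hypothetical log of human reviews of AI recommendations.
    reviews = ["accepted", "accepted", "overridden", "rejected", "accepted", "overridden"]

    override_rate = reviews.count("overridden") / len(reviews)
    rejection_rate = reviews.count("rejected") / len(reviews)

    # A rising override or rejection rate can signal that the model needs adjusting.
    print(f"override rate: {override_rate:.0%}, rejection rate: {rejection_rate:.0%}")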


When it comes to the accuracy principle, it is important to allow individuals to request correction of their data and to submit additional data that keeps their records up to date.


Measures to address bias and discrimination


AI can make biased or unbalanced predictions/decisions if the datasets used for learning are not well adjusted. This could lead to discrimination based on gender, race, age, health and similar characteristics.


That is why the testing phase is important: AI outcomes should be checked to ensure that they are not discriminatory or biased. These checks are important not only before putting the AI into production; they should also be performed frequently enough throughout the use of the AI, depending on the volume of data it processes. If it is found that the AI discriminates, one way to resolve the issue is to add or remove data about under- or overrepresented groups, or to introduce synthetic data.


For example, if the AI selects male candidates over female ones because more data about male candidates was used to train it, then data about male candidates can be removed so that the volumes match the data about female candidates, or additional synthetic data about female candidates could be introduced.
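

As a simple illustration of the down-sampling option described above, here is a minimal sketch assuming the training records carry a (hypothetical) gender field; a real pipeline would rebalance the full training set and document the change.

    import random

    # Hypothetical training records; in practice these would be full application records.
    records = [{"id": i, "gender": "male"} for i in range(700)] + \
              [{"id": i, "gender": "female"} for i in range(700, 1000)]

    males = [r for r in records if r["gender"] == "male"]
    females = [r for r in records if r["gender"] == "female"]

    # Down-sample the overrepresented group so both groups are equally represented.
    target = min(len(males), len(females))
    random.seed(42)  # fixed seed so the rebalancing is reproducible and auditable
    balanced = random.sample(males, target) + random.sample(females, target)

    print(len(balanced), "records after rebalancing")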


However, if the AI relies on the past practices of recruiters who themselves had a certain educational background that is not required for the role, then the learning process could be changed to ensure that the right data sets are taken into consideration or, as said above, additional data could be introduced or some data removed.


Assessing whether the data used is required to fulfill the purpose of processing


AI systems process personal data in two phases – when the AI is trained and when it is used to make predictions or decisions. The AI processes both the data used to make predictions and the data that represents the predictions or decisions about individuals.


When using data to train the AI, the best way to address this risk is to assess what data is objectively required for training, document the decision and ensure that only that data is extracted from past applications and used to train the AI. Processing data just in case it might be required in the future might not be justifiable, except perhaps where the future need is foreseeable. Organizations should assess those situations carefully and be able to justify such decisions.


When the AI makes decisions, it has to process all the personal data about an individual that is required to produce the desired outcome. It is important to ensure that the AI uses only data that is relevant to the processing in question. There are different techniques that can help with this principle, but much depends on the system setup. Also, not everyone involved in the process should have access to personal data. For example, data used by the AI could be pseudonymized and made available only to the individuals who need to access it (e.g., the vendor providing the technology would not have access to it), or converted into a format which humans cannot read (e.g., encoded strings).
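

As an illustration of one common pseudonymization technique (keyed hashing of direct identifiers), here is a minimal sketch; the field names and key are hypothetical, and a real deployment would need proper key management.

    import hashlib
    import hmac

    # Hypothetical secret key held only by the controller, not shared with the AI vendor.
    SECRET_KEY = b"replace-with-a-securely-stored-key"

    def pseudonymize(value: str) -> str:
        """Replace a direct identifier with a keyed hash that humans cannot read back."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    application = {"name": "Jane Doe", "email": "jane@example.com", "years_experience": 7}

    # Only the pseudonymized, minimized record is passed to the AI system or its vendor.
    pseudonymized = {
        "candidate_ref": pseudonymize(application["email"]),
        "years_experience": application["years_experience"],
    }
    print(pseudonymized)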


It is also important to think about data retention. If only the last 24 months of data are relevant for the hiring process, then the AI should not use data older than 24 months, and such data should be deleted. If the data forms part of a candidate or employee record, then the appropriate retention periods, usually set out in laws regulating employment, should be followed.
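

A minimal sketch of how such a retention rule could be enforced on the records used by the AI, assuming each record carries an application date; the 24-month cut-off and field names are illustrative only.

    from datetime import date, timedelta

    RETENTION = timedelta(days=24 * 30)  # roughly 24 months; illustrative cut-off only

    # Hypothetical records, each with the date the candidate applied.
    records = [
        {"candidate_ref": "a1", "applied_on": date(2021, 2, 1)},
        {"candidate_ref": "b2", "applied_on": date(2018, 6, 15)},
    ]

    cutoff = date.today() - RETENTION
    in_retention = [r for r in records if r["applied_on"] >= cutoff]
    to_delete = [r for r in records if r["applied_on"] < cutoff]

    # Records outside the retention period should be deleted, not merely excluded from training.
    print(f"{len(in_retention)} kept, {len(to_delete)} flagged for deletion")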


Addressing security risks


Since training an AI involves copying large volumes of personal data, such data movements should be carefully mapped to ensure appropriate security.


Vendor due diligence is quite important, as many organizations will use third parties to develop and administer their AI systems. An unreliable vendor can pose a high risk to individuals and to the controller.


Security policies should be updated to cover the security risks associated with the use of AI, and appropriate security testing should be performed on the system. In addition, it is a good idea to consider whether personal data must be used to train the AI at all, or whether the data can be de-identified enough to be considered non-personal data, thereby reducing the risks to individuals.


Ensuring individual rights in AI systems


The system should be built in a way that ensures the controller can respond when individuals exercise their rights. If personal data of past candidates is used to train the AI, the system should be able to trace and export data about candidates, allow data to be updated once they make a request, allow data to be erased when it is no longer necessary or where required by law, and so on. The right to be informed has already been covered above. Where outputs of the AI are stored in candidate or employee profiles, such data also constitutes data about individuals and should be considered when they exercise their rights.


However, one specific right related to the use of AI is the right under the GDPR not to be subject to a decision based solely on automated processing. Where AI makes decisions without any human involvement, individuals should be able to:


    • obtain human intervention by someone with enough authority to overturn the decision;
    • express their point of view;
    • contest the decision made about them; and
    • obtain an explanation about the logic of the decision.


In practice this means that if a candidate is not selected for a role and the decision was made by AI, they should be informed of the decision, be able to contact the controller to challenge the decision or express their view on it, and be able to obtain an explanation of how the decision was made. This applies in addition to the transparency requirements previously covered, because denied employment can be considered to produce a significant effect on an individual. Even though this is a requirement under the GDPR and similar laws, it is good practice to adopt it even where not required by law.


Conclusion


As you can see, there are many considerations to be made if AI is used. The complexity will depend on the regulatory framework that applies to the processing, the nature of the processing (whether the AI makes decisions itself or provides predictions to humans), the organization’s resources and capabilities, and the actual need to use AI. In certain cases, even though AI might seem like the right approach to hiring candidates, it may no longer look that way once all the steps necessary to use AI in a compliant manner are considered, as compliance responsibilities cannot be transferred to third parties, e.g., AI providers. That is why organizations have to assess whether AI is the right choice for them when it comes to hiring candidates.


Written by
Stevan Stanojevic


Photo by Markus Winkler from Pexels.

Ideas expressed in this article are personal views of the author and do not constitute legal advice.