Monday, 13.09.2021

in category: bots


A recruitment interview with a bot or a human?

In the HR world, especially for senior positions, you still mainly talk to a human. However, artificial intelligence (AI), e.g. in the form of chatbots, is increasingly used by large organizations during the initial stage of recruitment – to verify basic data and competencies, especially hard skills.

Recruitment using AI is unlikely to ever fully replace the traditional job candidate selection process. The two methods can complement each other, though. The best talents are still acquired through traditional means, so don't worry – for now, recruiters and headhunters will not disappear from the market.

Huge, multinational organizations need to rely on technology, especially when running high-volume recruitment, filling positions that require well-defined skills, or hiring for lower-level roles to which almost any candidate can potentially apply.

A conversation with a chatbot can be held at any time, from anywhere, BUT...

On the one hand, it is a cost-effective tool, because it speeds up the initial research on potential candidates and the screening of applications; on the other, it carries a number of risks, e.g. incorrect processing of data by the system. There have already been cases on the market, and accusations, of discrimination – for example on the basis of gender or individual illnesses – and of inappropriate questions from the bot. There was also Microsoft's (non-HR) attempt to run a chatbot on Twitter (Tay): the bot learned offensive lines from other users and was shut down within 24 hours. But that was over four years ago; today organizations and AI are in a different place, and the pandemic has clearly accelerated the change.

So artificial intelligence still needs to keep learning. An American study (albeit an unscientific one) showed something interesting about ethics and AI. Pew Research Center and Elon University's Imagining the Internet Center asked experts (already before the pandemic), such as programmers and representatives of business and politics, what they think work on creating ethical artificial intelligence will look like by 2030.

68% of respondents said that ethical principles aimed at the public good will not be applied in most AI systems by 2030; 32% said that these principles will be applied to most AI systems by that time.

So it stands to reason that the U.S. National Security Commission on AI, led by technology industry leaders, has released a comprehensive report on accelerating innovation while defending against malicious uses of AI.

What is more advanced today are systems and technologies that make it easier to answer candidates' questions or to send them feedback after the recruitment process.

If you feel like talking to the human intelligence at RICG, we are always available. We also make sure, on behalf of our clients (usually large organizations), that every candidate receives feedback after the interview.