How hiring managers can spot job applicants using deepfake AI to deceive them

Vijay Balasubramaniyan, the CEO of Pindrop, found himself facing an unusual problem within his own company. During remote interviews with job candidates, his hiring team noticed strange noises and tonal abnormalities. This led Balasubramaniyan to suspect that some candidates were using deepfake AI technology to hide their true identities. Given Pindrop's expertise in fraud detection, the company was able to investigate the issue itself.

To tackle the problem, Pindrop posted a job listing for a senior back-end developer and utilized their own technology to scan candidates for potential red flags. This move revealed that out of 827 applications received, approximately 12.5% were submitted by individuals using fake identities. This discovery was eye-opening for Balasubramaniyan, highlighting the growing issue of deepfake AI in a remote-first world.

The prevalence of deepfake technology in job applications is not limited to Pindrop. A survey conducted by Resume Genius found that about 17% of hiring managers have encountered candidates using deepfake technology to alter video interviews. Additionally, one startup founder shared that around 95% of the resumes he receives are from North Korean engineers posing as Americans. As AI technology advances rapidly, businesses and HR leaders must be prepared to navigate this new challenge in the recruitment process.

Balasubramaniyan believes that if Pindrop is facing this issue, it is likely widespread across other companies as well. The rise of deepfake AI job applicants poses various challenges for hiring managers. While some candidates may simply be seeking to secure multiple jobs simultaneously, there are more sinister motives at play. Instances have been reported where hired IT workers under false identities have been involved in criminal activities, leading to significant consequences for employers.

In a notable case from 2024, cybersecurity company CrowdStrike dealt with over 300 incidents of criminal activity linked to a North Korean organized crime group, with a significant portion involving IT workers hired under false identities. These individuals were found to be funneling funds into North Korea's weapons program and engaging in data theft and extortion.

To combat the threat of deepfake AI candidates, hiring managers must be vigilant for red flags during the recruitment process. Dawid Moczadło, co-founder of Vidoc Security Lab, shared insights on identifying potential deepfake candidates. Signs include discrepancies between audio and video, unnatural movements or glitches in video quality, and reluctance to reveal their true identity during the interview.

Employers should also scrutinize LinkedIn profiles for inconsistencies, lack of activity, or sparse connections. During interviews, deepfake candidates may struggle to provide detailed information about their work experience or evade questions about their past. Additionally, requests to ship laptops to alternate locations and inconsistent attendance or behavior in meetings may raise suspicions.
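The red flags above lend themselves to a simple screening checklist. As a minimal sketch only, the following Python snippet scores a candidate against the signals mentioned in the article; the field names, weights, and threshold are illustrative assumptions, not Pindrop's or Vidoc Security Lab's actual method.

```python
# Hypothetical red-flag scoring sketch. Weights and threshold are
# illustrative assumptions, not any vendor's real detection logic.
from dataclasses import dataclass


@dataclass
class CandidateSignals:
    """Red flags observed during screening, per the article's checklist."""
    audio_video_mismatch: bool = False        # audio out of sync with video
    video_glitches: bool = False              # unnatural movement, artifacts
    sparse_linkedin: bool = False             # inactive or thin profile
    vague_work_history: bool = False          # evades questions about past roles
    alternate_shipping_address: bool = False  # asks laptop be sent elsewhere


# Assumed weights: stronger signals count more toward escalation.
WEIGHTS = {
    "audio_video_mismatch": 3,
    "video_glitches": 2,
    "sparse_linkedin": 1,
    "vague_work_history": 2,
    "alternate_shipping_address": 3,
}


def red_flag_score(signals: CandidateSignals) -> int:
    """Sum the weights of every red flag that was observed."""
    return sum(w for name, w in WEIGHTS.items() if getattr(signals, name))


def needs_extra_verification(signals: CandidateSignals, threshold: int = 4) -> bool:
    """Escalate to in-person or live identity checks above the threshold."""
    return red_flag_score(signals) >= threshold
```

A candidate exhibiting both an audio/video mismatch and a request to ship hardware to an alternate address would score 6 under these assumed weights and be flagged for an in-person check, mirroring the escalation Moczadło's company adopted.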

Moczadło emphasized the importance of implementing thorough hiring procedures to prevent the infiltration of deepfake candidates. While his company now requires candidates to visit the office for an in-person evaluation before hiring, he acknowledged the challenges faced by recruiters inundated with numerous applications.

As the landscape of recruitment evolves in the face of advancing AI technology, businesses must adapt their hiring practices to detect and deter deepfake AI candidates effectively. By remaining vigilant and implementing robust screening processes, organizations can safeguard themselves against potential threats posed by fraudulent job applicants.
