ID Forgery with AI Deepfakes… and a Balanced Solution
'Using ChatGPT to make a deepfake ID card'
Using ChatGPT to create a fake Korean military ID card
A hacking organization believed to be supported by the North Korean government produced a fake Korean military ID card with a deepfake image using generative AI and attempted a cyberattack. This is the first officially reported case of North Korean hacking that used AI-based deepfakes.
According to a report released on the 15th by the security firm Genius Security Center (GSC), the North Korea-linked hacking group "Kimsuky" sent a phishing email in July requesting a review of military service ID cards. The photographs attached to the email were deepfake images created with ChatGPT, and the attached file contained malware.

Military service ID cards are legally protected public identification documents, and cloning them or producing similar copies is itself illegal, so a direct request to an AI such as ChatGPT to create one would be refused. Investigators therefore believe the hackers obtained the deepfake image by deceiving the model with pretexts such as "creating photos for legitimate purposes" or "sample use," rather than openly requesting a clone of an actual military service card. The sender address was also disguised to resemble a real military agency email, using "mli.kr" instead of the legitimate "mil.kr."
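The spoofed sender domain in this case differed from the legitimate one by a single transposed character. As a minimal sketch of how a mail filter might flag such look-alike domains, the snippet below compares a sender's domain against a list of trusted domains using string similarity (the trusted-domain list and the 0.8 threshold are illustrative assumptions, not details from the reported incident):

```python
from difflib import SequenceMatcher

# Illustrative trusted sender domains (an assumption for this sketch)
TRUSTED_DOMAINS = {"mil.kr", "mnd.go.kr"}

def looks_spoofed(sender_domain: str, threshold: float = 0.8) -> bool:
    """Flag a domain that is very similar to, but not identical to, a trusted one."""
    d = sender_domain.lower()
    if d in TRUSTED_DOMAINS:
        return False  # exact match: legitimate sender
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, d, trusted).ratio() >= threshold:
            return True  # near-miss: likely look-alike spoofing
    return False

print(looks_spoofed("mli.kr"))   # the disguised domain from the phishing email → True
print(looks_spoofed("mil.kr"))   # the real domain → False
```

Real mail-security products layer many more signals (DMARC, sender reputation, header analysis); this only illustrates why a one-character difference like "mli.kr" is easy to catch automatically yet easy for a human reader to miss.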
Using AI "Claude" to get jobs at overseas IT companies
North Korean hackers' AI exploitation has also been observed abroad.
Anthropic (U.S.) stated in a security report at the end of last month that North Korean hackers used its AI system "Claude" to apply for jobs at overseas IT firms. The hackers created sophisticated virtual identities with AI to pass technical evaluations, and some even worked in actual roles after being hired. The report concluded that "these activities were designed to evade international sanctions and were carefully orchestrated to obtain foreign currency for the North Korean regime."
Experts warn that while AI services are powerful tools for improving work efficiency, they can also be used for cyberattacks that threaten national security. GSC emphasized, “We need to prepare security measures that consider the possibility of AI being exploited across recruitment, work, and operations.”
This security threat reveals the harmful side effects AI technology can bring. As AI moves beyond being a simple tool and becomes a means of forgery and disguise, reliable identity verification systems become critical.
What services are being exploited through AI abuse? (e.g., recruitment systems and financial services that require identity verification)
AI abuse by North Korean hackers is not limited to cyberattacks. By gaining employment under false identities, they attempt to access companies' internal information, which poses a major threat to corporate recruitment, finance, and security systems.
Disguised employment in overseas jobs:
AI has been used to create fake resumes and portfolios to pass interviews. This can lead to secondary damage, such as leakage of sensitive corporate information or ransomware attacks.
Exploitation of financial services:
Forged ID cards can increasingly be used to open financial accounts or to carry out illegal money laundering.
As a result, AI abuse poses cyber threats to both financial/payment services and corporate recruitment systems. To manage these threats safely, introducing digital identity authentication solutions such as ARGOS ID check is essential.
1. Recruitment and Security Systems: Ensuring Safe Workforce Management
AI deepfakes and counterfeit IDs are undermining corporate hiring systems. Hackers who are hired under fake identities can penetrate internal networks, steal core technologies, or sabotage systems.
ARGOS ID check Service: ARGOS’s solution accurately verifies applicant identities during the hiring and interview process in real time, detecting deepfakes and forged documents. This helps companies block security threats in advance and hire trustworthy talent.
2. Financial & Payment Services: Preventing Financial Crimes
Forged ID cards can be exploited in non-face-to-face financial services. With deepfake photos, criminals can bypass identity verification or open “mule accounts” using counterfeit IDs.
ARGOS ID check Service: ARGOS’s identity verification technology detects sophisticated forged documents and deepfake images that are difficult to spot with the naked eye. This helps eradicate illegal money laundering and fraudulent transactions, creating an environment where customers can safely use financial services.
Differentiation: Delivering Services Through Remote Verification
ARGOS identity authentication plays a decisive role in minimizing social risks caused by AI abuse.
How can we minimize these risks?
AI-based solutions can block such crimes by accurately detecting deepfakes and counterfeit IDs. This prevents hackers from infiltrating corporate systems and protects sensitive information.
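As an illustration of the layered checks such a solution performs, the sketch below combines separate document-forensics, deepfake-detection, and face-match scores into a single pass/fail decision. All field names, scores, and thresholds here are invented for illustration; they do not describe ARGOS's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    # All scores are illustrative assumptions, 0.0..1.0
    document_authenticity: float  # 0.0 (forged) .. 1.0 (authentic)
    deepfake_likelihood: float    # 0.0 (real face) .. 1.0 (deepfake)
    face_match: float             # similarity of ID photo to live selfie

def verify_identity(result: VerificationResult,
                    doc_min: float = 0.9,
                    deepfake_max: float = 0.2,
                    match_min: float = 0.85) -> bool:
    """Accept only when every independent check passes its threshold."""
    return (result.document_authenticity >= doc_min
            and result.deepfake_likelihood <= deepfake_max
            and result.face_match >= match_min)

# A forged ID with a deepfake photo fails even though the face "matches".
print(verify_identity(VerificationResult(0.4, 0.9, 0.95)))   # False
print(verify_identity(VerificationResult(0.97, 0.05, 0.92)))  # True
```

The design point is that the checks are conjunctive: a high face-match score cannot compensate for a forged document or a detected deepfake, which is exactly the gap the attacks described above try to exploit.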
Ultimately, lowering crime rates through strong identity authentication reduces massive social costs in security, administrative management, and post-incident tracking.
Today, we have examined how AI is being used to commit crimes involving forged ID cards. Managing both the convenience and the security risks of AI technology is critical to building a sustainable digital ecosystem.
ARGOS’s KYC solution has already proven its effectiveness in highly regulated and security-critical industries such as overseas remittances, ticketing, finance, gaming, and blockchain.
If you would like to learn more about ARGOS’s services, please click the link!