The increasing integration of Artificial Intelligence (AI) into everyday business operations has proved a double-edged sword, offering unprecedented opportunities for growth and innovation while opening alarming vulnerabilities that cybercriminals are quick to exploit. Experts now warn that AI-driven cybercrime is rising fast, fuelled not just by technological advancement but by a factor as old as time: human error.
According to Allan Juma, a cybersecurity engineer at ESET East Africa, AI is being weaponised by bad actors who now use generative tools to launch highly sophisticated cyberattacks. While AI-powered defences are making strides, the gap created by unsuspecting and untrained users continues to leave many organisations vulnerable.
“AI itself is neither inherently good nor bad—it all depends on who is behind the keyboard,” Juma noted in a recent statement. “In the hands of defenders, AI offers powerful protection. But in the wrong hands, it becomes a formidable threat, especially when combined with human lapses.”
One of the most concerning developments in cybercrime is the evolution of social engineering tactics, now enhanced by AI. Phishing scams—already responsible for a significant portion of data breaches—are being made more convincing through AI-generated content. Using platforms like ChatGPT and other large language models, attackers can now mimic the tone, language, and communication styles of corporate executives or colleagues with alarming accuracy.
“These AI models are good at writing and translating emails into multiple dialects. This allows attackers to reach remote areas or less protected markets,” said Juma. “And with deepfake technology, they can generate video or audio impersonations of CEOs or finance officers. This makes it nearly impossible for employees to differentiate real from fake.”
Despite advances in security software, the biggest vulnerability remains human error. Carelessly clicked links, weak passwords, and a general lack of cybersecurity training have all contributed to the growing success of AI-driven attacks.
“A large percentage of breaches occur because employees aren’t properly trained. Cybersecurity awareness is no longer optional—it’s essential,” Juma stressed. “Cybercriminals are not just exploiting systems; they’re exploiting people.”
On the flip side, AI is also revolutionising the way cybersecurity experts defend against threats. Modern security systems use AI to identify patterns, detect anomalies, and respond to threats in real time. This predictive capability is proving essential in staying ahead of increasingly adaptive attackers.
“AI has been part of cybersecurity software for years—long before it became a buzzword,” Juma explained. “Its integration allows security teams to automate responses, shut down breaches before they spread, and continuously learn from attack patterns.”
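To make that idea concrete, the sketch below shows the kind of unsupervised anomaly detection Juma alludes to, applied to simulated login telemetry. It uses Python with scikit-learn's IsolationForest; the features, thresholds, and data are illustrative assumptions for this article, not details of any vendor's product.

```python
# A minimal sketch of AI-assisted anomaly detection, assuming login telemetry
# with three illustrative features: hour of day, failed attempts, bytes moved.
# This is a toy example and does not reflect any specific security product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Simulated baseline of normal logins: business hours, few failures, light traffic.
normal = np.column_stack([
    rng.normal(13, 2, size=500),         # hour of day
    rng.poisson(1, size=500),            # failed attempts before success
    rng.normal(5_000, 1_500, size=500),  # bytes transferred in session
])

# A few suspicious events: off-hours logins, many failures, heavy data transfer.
suspicious = np.array([
    [3.0, 12, 90_000],
    [2.0,  9, 120_000],
])
events = np.vstack([normal, suspicious])

# Fit an unsupervised model on the event stream; fit_predict labels outliers -1.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(events)

# The "response" here is just a placeholder print; a real system would
# quarantine the session or raise an alert for an analyst.
for idx in np.where(labels == -1)[0]:
    hour, failures, volume = events[idx]
    print(f"Flagged event {idx}: hour={hour:.0f}, failures={failures:.0f}, bytes={volume:.0f}")
```

In a real deployment, such a model would be retrained continuously on fresh telemetry and wired into an alerting or quarantine pipeline, which is the continuous learning and automated response Juma describes.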
Recent findings from the Google Threat Intelligence Group (GTIG) confirm that hackers are using Google’s Gemini AI model for content creation, translation, and even psychological profiling of targets. The GTIG’s Adversarial Misuse of Generative AI report, released earlier this year, details how AI is being used to craft persuasive phishing content and localised attacks, making cybercrime scalable across regions.
Cybersecurity professionals are urging businesses, especially in Africa where digital transformation is accelerating, to strike a balance between AI adoption and risk mitigation. That includes conducting regular staff training, investing in AI-powered defence systems, and instituting strong data governance policies.
“With AI now ubiquitous, the danger lies in complacency,” Juma warned. “Organisations must stay alert. Security isn’t just an IT function—it’s a business priority.”
As Africa continues its digital leap, the need for proactive, AI-aware cybersecurity strategies has never been more critical. Companies are encouraged to invest in both technology and people to build robust defences against a growing wave of AI-driven threats.