New Delhi: As hackers devise new methods to infiltrate your devices, Generative AI (GenAI) has emerged as the top threat this year, with cybercriminals using AI models such as ChatGPT and Gemini to up their game.
Large language models (LLMs) are only the start of a new disruption in the hacking space.
“It’s important to recognise that this is only the beginning of GenAI’s evolution, with many of the demos we’ve seen in security operations and application security showing real promise,” said Richard Addiscott, Senior Director Analyst at Gartner.
GenAI occupies significant headspace for security leaders as another challenge to manage, but it also offers an opportunity to harness its capabilities and augment security at an operational level.
“Despite GenAI’s inescapable force, leaders also continue to contend with other external factors outside their control they shouldn’t ignore this year,” he added.
The inevitability of third parties experiencing cybersecurity incidents is pressuring security leaders to focus more on resilience-oriented investments and move away from front-loaded due diligence activities.
“Start by strengthening contingency plans for third-party engagements that pose the highest cybersecurity risk,” said Addiscott.
More than one in four organisations have banned the use of GenAI over privacy and data security risks, a report showed last month.
Most firms are limiting the use of GenAI over data privacy and security concerns, and 27 per cent have banned its use, at least temporarily, according to the ‘Cisco 2024 Data Privacy Benchmark Study’.
Among the top concerns, businesses cited threats to an organisation’s legal and intellectual property rights (69 per cent) and the risk of disclosure of information to the public or competitors (68 per cent).
–IANS