The current craze over artificial intelligence (AI), particularly generative AI, further underscores the need for enterprises to focus on security, yet the fundamentals of protecting critical data are still lacking.
The growing interest in generative AI, driven largely by OpenAI’s ChatGPT, has led organizations to consider how the technology should be used.
Also: How to use ChatGPT: Everything you need to know
According to an IBM study released this week, 43% of CEOs say their organizations are already leveraging generative AI to inform strategic decisions, and 36% are using the technology to facilitate operational decisions. Half are integrating it into their products and services. The findings are based on interviews with 3,000 CEOs across 30 markets worldwide, including Singapore and the US.
However, CEOs are mindful of potential risks posed by AI, including bias, ethics, and safety. Some 57% are concerned about data security, and 48% worry about data accuracy or bias. The study also revealed that 76% believe effective cybersecurity across a business ecosystem requires consistent standards and governance.
About 56% said they were holding back on at least one major investment due to the lack of consistent standards, and only 55% are confident their organization can accurately and comprehensively report the information stakeholders want on data security and privacy.
Also: ChatGPT and the new AI are wreaking havoc on cybersecurity in exciting and terrifying ways
This lack of confidence is forcing companies to rethink how they deal with potential threats. Generative AI tools not only enable more sophisticated social engineering and phishing attacks, but also make it easier for hackers to generate malicious code, said Avivah Litan, vice president analyst at Gartner, who discussed the various risks associated with AI.
And while vendors of generative AI foundation models say they train their models to reject malicious cybersecurity requests, they do not provide customers with the tools to effectively audit the security controls that are in place, Litan pointed out.
Employees may also expose confidential or proprietary data when interacting with generative AI chatbot tools. "These applications may indefinitely store information captured through user inputs, and even use that information to train other models, further compromising confidentiality," the analyst said. "Such information could also fall into the wrong hands in the event of a security breach."
Also: Stability.ai founder explains why open source is essential to alleviating AI fears
Litan urged organizations to establish a strategy to manage these emerging risks and security requirements, noting that new tools will be needed to manage data and process flows between users and the enterprises that host generative AI foundation models.
Companies should leverage existing security controls and dashboards to identify policy violations and monitor unauthorized use of tools such as ChatGPT, she said. Firewalls, for instance, can block user access, while security information and event management (SIEM) systems can monitor event logs for policy violations. Secure web gateways can also be deployed to monitor unauthorized application programming interface (API) calls.
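To make that concrete, here is a minimal Python sketch of the kind of log monitoring described above. The proxy log format, the field layout, and the list of generative AI domains are all hypothetical assumptions for illustration; a real deployment would rely on a SIEM or secure web gateway's own policy engine rather than an ad hoc script.

```python
import re
from collections import Counter

# Hypothetical list of generative AI endpoints an organization might
# monitor; a real policy would live in the gateway/SIEM configuration.
GENAI_DOMAINS = {"api.openai.com", "chat.openai.com", "api.anthropic.com"}

# Assumed proxy log line format: "timestamp user destination_host bytes"
LOG_LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<user>\S+)\s+(?P<host>\S+)\s+(?P<bytes>\d+)$")

def flag_genai_calls(log_path: str) -> Counter:
    """Count requests per user to the monitored generative AI endpoints."""
    hits = Counter()
    with open(log_path) as fh:
        for line in fh:
            match = LOG_LINE.match(line.strip())
            if not match:
                continue  # skip malformed lines rather than failing the scan
            if match.group("host") in GENAI_DOMAINS:
                hits[match.group("user")] += 1
    return hits

if __name__ == "__main__":
    for user, count in flag_genai_calls("proxy.log").most_common():
        print(f"{user}: {count} generative AI request(s) -- review against policy")
```

Output like this could feed an alerting rule or dashboard, flagging users whose activity warrants a policy review.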
Most organizations still lack the basics
But it all comes down to the basics, according to Terry Ray, senior vice president of data security and field CTO at Imperva.
The security vendor now has a dedicated team that oversees developments in generative AI and identifies how the technology applies to its own products. This internal group did not exist a year ago, Ray said, citing the rapid rise of generative AI, though Imperva has long used machine learning.
Also: How does ChatGPT work?
The monitoring team also reviews employee usage of applications such as ChatGPT to ensure that these tools are being used appropriately within company policy.
Ray said it is still too early to tell how the new AI models will fit in, adding that employees will likely pitch ideas on how to apply generative AI during the vendor's annual year-end hackathon.
It is also important to note that, so far, the availability of generative AI has not significantly changed how threat actors attack organizations. They remain largely fixated on low-hanging fruit, seeking out systems that are unpatched against known exploits.
When asked how threat actors might use generative AI, Ray suggested it could potentially be deployed alongside other tools to inspect code and identify errors and vulnerabilities.
APIs, in particular, are now hot targets because of their widespread use and because they often harbor vulnerabilities. Broken Object Level Authorization (BOLA), for instance, is among the top API security threats identified by the Open Worldwide Application Security Project (OWASP). In a BOLA attack, threat actors exploit weaknesses in how users are authorized, crafting API requests to access data objects they should not be able to reach.
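To illustrate the flaw, here is a minimal sketch of a BOLA-vulnerable endpoint and its fix, using Python and Flask purely as an illustrative stack; the route names, data model, and authentication helper are hypothetical.

```python
# Minimal BOLA sketch (hypothetical endpoints and data), using Flask.
from flask import Flask, abort, g

app = Flask(__name__)

# Toy data store mapping invoice IDs to their owners.
INVOICES = {1: {"owner": "alice", "total": 120}, 2: {"owner": "bob", "total": 75}}

def current_user() -> str:
    # Stand-in for real session/token-based authentication.
    return getattr(g, "user", "alice")

# Vulnerable pattern: the caller is authenticated, but the handler never
# checks that the requested object belongs to them -- the BOLA flaw.
@app.route("/v1/invoices/<int:invoice_id>")
def get_invoice_vulnerable(invoice_id: int):
    invoice = INVOICES.get(invoice_id) or abort(404)
    return invoice  # any logged-in user can read any invoice by guessing IDs

# Fixed pattern: enforce authorization at the object level on every request.
@app.route("/v2/invoices/<int:invoice_id>")
def get_invoice_fixed(invoice_id: int):
    invoice = INVOICES.get(invoice_id) or abort(404)
    if invoice["owner"] != current_user():
        abort(403)  # authenticated, but not authorized for this object
    return invoice
```

The fix is a single ownership check, which is part of why BOLA is so common: the omission is easy to make and invisible until an attacker starts enumerating object IDs.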
Such oversights underscore the need for organizations to understand the data flowing through each API, an area that remains a common challenge for enterprises, Ray added. Most do not even know where, or how many, APIs are running across their organization, he pointed out.
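As a rough sketch of how such an inventory could be bootstrapped, the Python example below tallies the API routes observed in an access log and flags any that are absent from a team's documented list, i.e., potential shadow APIs. The log format and the known-endpoint list are assumptions for illustration; dedicated API discovery tooling would do this far more robustly.

```python
import re
from collections import Counter

# Hypothetical documented API inventory; anything observed in traffic
# but missing from this set is a potential "shadow" API worth reviewing.
KNOWN_ENDPOINTS = {"/v1/invoices", "/v1/users", "/v1/payments"}

# Assumed access-log request format: '"METHOD /path HTTP/1.1"'
REQUEST = re.compile(r'"(?:GET|POST|PUT|PATCH|DELETE) (/[^\s?"]*)')

def normalize(path: str) -> str:
    """Collapse numeric IDs so /v1/invoices/42 groups under /v1/invoices."""
    return re.sub(r"/\d+(?=/|$)", "", path)

def discover_endpoints(log_path: str) -> Counter:
    """Build a frequency table of normalized API routes seen in traffic."""
    seen = Counter()
    with open(log_path) as fh:
        for line in fh:
            match = REQUEST.search(line)
            if match:
                seen[normalize(match.group(1))] += 1
    return seen

if __name__ == "__main__":
    for endpoint, count in discover_endpoints("access.log").most_common():
        label = "documented" if endpoint in KNOWN_ENDPOINTS else "possible shadow API"
        print(f"{endpoint}: {count} request(s) [{label}]")
```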
Also: People are now turning to ChatGPT to troubleshoot technical issues
Every application deployed in a business will likely have an API, and the number will continue to grow as organizations are mandated to share data such as medical and financial information. Some governments are aware of such risks and have introduced regulations to ensure APIs are deployed with the necessary security safeguards, he said.
Organizations need to get the basics right when it comes to data security. The impact of data loss is significant for most businesses. As data custodians, companies should know what they need to do to protect their data.
In another global IBM study, which surveyed 3,000 chief data officers, 61% said they believe their company's corporate data is safe and protected. Asked about data management challenges, 47% cited reliability, 36% pointed to unclear data ownership, and 33% flagged data silos or a lack of data integration.
The growing popularity of generative AI may be putting the spotlight on data, but it also highlights the need for companies to get the basics right first.
Also: With GPT-4, OpenAI chooses secrecy over disclosure
Many companies have not even established the first steps, Ray said, pointing out that most typically monitor only a third of their data stores and lakes.
"Security is [having] visibility; hackers will take the path of least resistance," he said.
Also: Generative AI could significantly improve productivity for some workers, according to this study
A Gigamon study released last month found that 31% of breaches were identified only after the fact, such as when compromised data surfaced on the dark web, files became inaccessible, or users experienced slow application performance. The June report, which polled more than 1,000 IT and security professionals across Singapore, Australia, EMEA, and the US, found this figure was even higher among respondents in Australia, at 52%, and in the US, at 48%.
These numbers come despite 94% of respondents saying their security tools and processes give them visibility and insight into their IT infrastructure. Nearly 90% said they had experienced a breach within the last 18 months.
When asked about their biggest concerns, 56% pointed to unexpected blind spots. About 70% admitted they lacked visibility into encrypted data, and 35% said they had limited insight into containers. Half were not confident they knew where their most sensitive data was stored or how it was secured.
"These findings highlight a trend of significant gaps in visibility from on-premises to the cloud, the dangers of which appear to be misunderstood by IT and security leaders around the world," said Ian Farquhar, security CTO at Gigamon.
"Many do not realize that these blind spots represent a threat ... Considering that more than 50% of CISOs worldwide lose sleep at night over the thought of unexpected blind spots being exploited, it appears not enough is being done to close critical visibility gaps."