Gen AI innovation race is leading to security gaps, according to IBM and AWS


What will it take to secure generative AI?

According to a new study released today by IBM and Amazon Web Services (AWS), there is no simple 'silver bullet' solution to secure gen AI, especially now. The report is based on a survey of leading executives at U.S. organizations, conducted by the IBM Institute for Business Value. While gen AI is a top initiative for many, the survey found a high level of enthusiasm for security: 82% of C-suite leaders stated that secure and trustworthy AI is essential for business success.

That said, the results reveal a dichotomy between stated priorities and what's actually happening in the real world. The report found that organizations are securing only 24% of their current generative AI projects. IBM isn't the only firm with a report that raises concerns about security. PwC recently reported that 77% of CEOs are concerned about AI cybersecurity risks.

Not coincidentally, IBM is working with AWS on different approaches to help improve that situation. Today, IBM is also announcing the IBM X-Force Red Testing Service for AI to further advance generative AI security.


"In all the client conversations I've been having, I see that leaders are being pulled in different directions," Dimple Ahluwalia, Global Senior Partner for Cybersecurity Services at IBM Consulting, told VentureBeat. "They feel the pressure certainly from both their internal and external stakeholders to innovate with use of gen AI, but that means for some of them that security becomes an afterthought."

Innovation or security? Gen AI implementations tend to only pick one

While it might seem that having security is common sense for any type of technology deployment, the reality is that’s not always the case.

The report found that for 69% of organizations, innovation takes precedence over security. Ahluwalia noted that organizations have not fully ingrained security across all lines of business. The report also makes clear that business leaders need to understand the importance of security and address it in order to make production deployments of gen AI more successful.

"People are so excited that they're rushing to see if they can get productivity gains, if they can look at how to be more competitive," she said.

Ahluwalia said the same thing happened in the early years of cloud computing, when every conversation involved moving workloads to the cloud, often without proper security oversight.

"That's what is happening now with gen AI, everybody feels compelled and is rushing to get to it," Ahluwalia said. "The plans haven't been thought through and as a result, I think security kind of suffers as well."

Guardrails and policy are the keys to gen AI security

So how can and should organizations improve?

The report recommends that organizations start with governance in order to build trust in gen AI. That includes establishing policies, processes and controls aligned with business objectives. In the survey, 81% of respondents said generative AI requires new security governance models.

Once governance is set, strategies can address securing the full AI pipeline using available tools and controls. Collaboration across security, technology and business teams is needed. There is also potential benefit in leveraging technology partners' expertise for strategy, training, cost justification and navigating compliance.

How IBM X-Force Red Testing Service for AI fits in

Beyond guardrails and governance, there is also a need to validate and test.

IBM X-Force Red's new Testing Service for AI is IBM's first testing service tailored specifically for AI. The new service brings together a cross-discipline team of experts spanning penetration testing, AI systems and data science. The service will also draw on expertise from IBM Research, which developed the Adversarial Robustness Toolbox (ART).

In security, a 'red team' is generally a group that takes an adversarial approach, proactively attacking resources in order to reveal where gaps exist.

Chris Thompson, Global Head of X-Force Red at IBM, explained to VentureBeat that the industry has recently adopted the term "AI red teaming," for better or worse, primarily with a focus on the safety and security testing of models themselves. In his view, to date there hasn't been a traditional red team focus on stealth and evasion. Rather, the focus has been on getting models to do something they shouldn't, such as produce harmful content or grant access to sensitive RAG datasets.

“Attacks against gen AI apps themselves are very similar to traditional application security attacks but with a new twist and expanded attack surface,” Thompson said.

At this point in 2024, he noted, IBM is seeing more of a convergence with what is considered to be true red teaming. The approach IBM is taking is to look at the wider attack paths into gen AI. The four areas of AI red teaming IBM has developed services around are: AI platforms, the pipeline used to tune and train the models (MLSecOps), the production environment running the gen AI applications, and the gen AI applications themselves.

“Aligned with traditional red teaming, we’re also focused on missed detection opportunities and reducing the time it takes to detect any potential advanced threat actors successfully targeting these new AI solutions,” Thompson said.
