DeepKeep Launches GenAI Risk Assessment Module


DeepKeep, the leading provider of AI-Native Trust, Risk, and Security Management, announces the product launch of its GenAI Risk Assessment module, designed to secure GenAI's LLM and computer vision models, specifically focusing on penetration testing and identifying potential vulnerabilities and threats to model security, trustworthiness, and privacy.

Assessing and mitigating AI model and application vulnerabilities ensures implementations are compliant, fair, and ethical. DeepKeep's Risk Assessment module offers a comprehensive ecosystem approach by considering risks associated with model deployment and identifying application weak spots.

DeepKeep's assessment provides a thorough examination of AI models, ensuring high standards of accuracy, integrity, fairness, and efficiency. The module helps security teams streamline GenAI deployment processes, granting a wide range of scoring metrics for evaluation.

Core features include:

  • Penetration testing
  • Identifying the model's tendency to hallucinate
  • Identifying the model's propensity to leak private data
  • Assessing toxic, offensive, harmful, unfair, unethical, or discriminatory language
  • Assessing biases and fairness
  • Weakness analysis

For example, when applying DeepKeep's Risk Assessment module to Meta's LLM LlamaV2 7B to examine prompt manipulation sensitivity, findings pointed to a weakness in English-to-French translation, as depicted in the chart below*:

“The market must be able to trust its GenAI models, as more and more enterprises incorporate GenAI into daily business processes,” says Rony Ohayon, DeepKeep's CEO and Founder. “Evaluating model resilience is paramount, particularly during its inference phase, in order to provide insights into the model's ability to handle various scenarios effectively. DeepKeep's goal is to empower businesses with the confidence to leverage GenAI technologies while maintaining high standards of transparency and integrity.”

DeepKeep's GenAI Risk Assessment module secures AI alongside its AI Firewall, enabling live protection against attacks on AI applications. Detection capabilities cover a wide range of security and safety categories, leveraging DeepKeep's proprietary technology and cutting-edge research.

*ROUGE and METEOR are natural language processing (NLP) methods for evaluating machine learning outputs. Scores range between 0 and 1, with 1 indicating a perfect match.
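To give a sense of how such translation-quality scores behave, below is a minimal sketch of ROUGE-1 F1 (unigram overlap between a candidate and a reference) in plain Python. The French sentences are invented for illustration, and a real evaluation pipeline would use an established metrics package rather than this simplified version:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Count unigrams shared between candidate and reference.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "le chat est assis sur le tapis"
print(rouge1_f1("le chat est assis sur le tapis", reference))  # 1.0 (exact match)
print(rouge1_f1("un chien court vite dehors", reference))      # 0.0 (no overlap)
```

A translation that drifts from the reference scores between these extremes, which is how a dip in the score can flag a prompt-manipulation weakness.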
