Topics In Demand
Notification
New

No notification found.

95

0

With the advent of GenAi, the security posture of enterprises has changed a lot and there are few new areas that needs to be looked at holistically. The enormous amount of data fed to AI systems have exponentially increased one of the reports from a security vendor quotes about 450+TB of data in the first few months of 2024 primary for text interoperation.

Given the enormity of the area and a structured OWASP framework for determining challenges with LLMs. There is more to look at beyond a standard validation. There are over 10+ areas that needs to be focused on in a real-world scenario ranging from the source of the LLM, Prompt control, Data validation and tagging, Biases, Ethical and sensitive information, Integrations with the systems consuming the data, ecosystem challenges, Processing time and resource challenges, adversarial attacks, misinformation, LLM thefts etc. Each area in itself is a challenge for avoiding data leaks and protecting the sanctity of the intelligence of LLMs.

Pertinent to the data and with a strong data governance in place some of the concerns can be mitigated. The following are the key areas to be dealt with

  1. Data Breaches and Leaks resulting from LLMs trained on massive datasets which may unintentionally reveal sensitive information in their responses. This can be a privacy nightmare and have to be plugged at the source or via a guard rail for the dame
  2. Misinformation and Bias as part of LLMs based on biases from their training data. If not addressed properly this can lead to the generation of misleading or discriminatory content.
  3. Model Manipulation and Poisoning a factor of discipline with the internal team and a right ser of validation could help attackers to try and manipulate an LLM by feeding it bad data during training (termed poisoning) or crafting special prompts (injection attacks) to get the LLM to generate harmful outputs undesirable for an enterprise setup. This could be used to create false public opinion.
  4. Insecure Integrations are everywhere LLMs often rely on plugins and APIs to interact with data and systems. Poorly designed plugins might be vulnerable to attacks, allowing unauthorized access or manipulation of the LLM's outputs.
  5. Denial-of-Service Attacks while the name states a challenge with availability to respond LLMs are resource-intensive. Attackers could overload an LLM with complex requests, causing it to crash or become unavailable to legitimate users as well as in an elastic environment leads to a big bills due to cost of GPUs and TPUs.

There are several methods in which such risks can be mitigated

Data Sanitization and Cleaning training data to remove sensitive information before feeding it to the LLM can minimize the risk of data leaks.

Bias Detection and Mitigation forms a key act of carefully evaluating training data for bias and employing methods like fairness metrics and algorithmic audits to help reduce bias in LLM outputs.

Prompt Engineering we get what we ask so controlling and building framing for specific prompts can help guide the LLM towards generating the desired output and minimize the risk of manipulation.

Secure Coding Practices as always, the engineers help with the trick to following secure coding principles when developing LLM systems and plugins to help prevent vulnerabilities that attackers can exploit.

Security Monitoring to include continuously monitoring LLM activity for suspicious behavior that can help identify and address potential attacks.

While most of us know the what and how to determine the challenges the remediation is a key and when we explored how such challenges can be mitigated there are seven specific techniques that stand out for developing safer GenAi ecosystems mostly driven via AI.

Adversarial training where a model is trained on adversarial examples and to help the model learn to recognize and resist these types of manipulations, improving its generalization ability and resilience to real-world attacks.

Data with a new lens while the standard Data Cleaning with an intent of removing irrelevant, incomplete, or erroneous data points. Data Filtering where Identifying and removing potential outliers or suspicious data points and anomaly detection using statistical methods to detect unusual patterns in the data could indicate potential manipulation attempts

Regularization techniques the models are penalizes with large weights, encouraging simpler models that generalize better as well as Randomly dropping out neurons during training which forces the model to learn and become more robust.

Differential Privacy the noise is injected into the training data during training. This added noise helps protect the privacy of the original data while still allowing the model to learn effectively. This method makes it more difficult for attackers to infer sensitive information about the training data from the model itself.

Model Distillation technique where a smaller child model to mimics the behavior of a larger complex parent model. The child model, being smaller and simpler, can be inherently more robust to adversarial attacks compared to the complex parent model

Ensemble Training combines multiple GenAI models (ensemble) trained with different techniques which can improve overall robustness. Even if an attacker succeeds in manipulating one model, the others might still produce correct outputs.

Ongoing Monitoring which is a keep it up to data process where model hardening is an ongoing process. It is crucial to continuously monitor the performance of GenAI models in production for signs of degradation or unexpected behavior. Regularly testing the model with new adversarial examples to identify potential vulnerabilities. Tracking key performance metrics of the model can tell us of sudden changes that could indicate manipulation. This can be a key item as part of the observability setups for AI models

While there are several regulations, methods and techniques available the implementation of these as part of the GenAI cycle is crucial and leads to a responsible AI Implementation.


That the contents of third-party articles/blogs published here on the website, and the interpretation of all information in the article/blogs such as data, maps, numbers, opinions etc. displayed in the article/blogs and views or the opinions expressed within the content are solely of the author's; and do not reflect the opinions and beliefs of NASSCOM or its affiliates in any manner. NASSCOM does not take any liability w.r.t. content in any manner and will not be liable in any manner whatsoever for any kind of liability arising out of any act, error or omission. The contents of third-party article/blogs published, are provided solely as convenience; and the presence of these articles/blogs should not, under any circumstances, be considered as an endorsement of the contents by NASSCOM in any manner; and if you chose to access these articles/blogs , you do so at your own risk.


images
Sripathy Balaji Venkataramani
Sr. Vice President - Products and Solutions

A accomplished product innovator, Concept realizations and incubation, Large Deals architect

© Copyright nasscom. All Rights Reserved.