
๐‘ป๐’‰๐’† ๐‘ฐ๐’Ž๐’‘๐’‚๐’„๐’• ๐’๐’‡ ๐‘ฎ๐’†๐’๐’†๐’“๐’‚๐’•๐’Š๐’—๐’† ๐‘จ๐‘ฐ ๐’๐’ ๐‘ฉ๐’–๐’”๐’Š๐’๐’†๐’”๐’” โ€“ nasscom-Industry Roundtable
๐‘ป๐’‰๐’† ๐‘ฐ๐’Ž๐’‘๐’‚๐’„๐’• ๐’๐’‡ ๐‘ฎ๐’†๐’๐’†๐’“๐’‚๐’•๐’Š๐’—๐’† ๐‘จ๐‘ฐ ๐’๐’ ๐‘ฉ๐’–๐’”๐’Š๐’๐’†๐’”๐’” โ€“ nasscom-Industry Roundtable


The nasscom Responsible AI Hub, in collaboration with nasscom member connect, recently hosted an industry roundtable on "The Impact of Generative AI on Business" in Mumbai.

Participants included leaders from generative AI startups, software developers turned investors, CIOs and CDOs from end-user companies, and CIOs/CTOs/CDOs and heads of innovation from tech services firms. The room represented over 500 years of collective experience.

Here are the major themes and insights from the discussion:

  1. Accept, don't negate Gen AI. Understand its speed of evolution and figure out a catch-up strategy
  2. Knowing the art of the possible with Gen AI is critical before diving in
  3. Areas of application of Gen AI differ for B2B and B2C, with some commonalities. B2C is moving faster
  4. Model and tool selection are key to deriving benefits
  5. Most models are Western in origin; India-specific use cases may need new datasets, ontologies, and models
  6. Legal contours of Gen AI are fuzzy, but responsible AI is going to be the way forward
  7. Talent movements are undeniable, but they will come with many more learning opportunities


  1. Accept, Don't Negate Gen AI and Its Speed – Generative AI has been the fastest-adopted technology, hitting 1 Mn+ users within 5 days.
    1. The speed of disruption is way ahead of organizations' ability to respond
    2. It is crucial to understand the core technology before diving deep
    3. Pockets of power users within the company need to be identified to beta-test the tech within safe guardrails


  2. Understand Gen AI Before Diving In – Some basic components of the Gen AI ecosystem enable utilization of the large language models (LLMs) that Gen AI is based on.
    1. Foundational models: text, image, video/3D, speech/voice synthesis, and code models, plus tools and services
    2. Databases: generative AI models use vector databases, which are extremely compute-intensive to build and maintain
    3. Fine-tuning: a layer of applications, or API integrations, that enables building an entire set of open or closed enterprise models on top
    4. SDKs (software development kits): to build utility tools or services on top of the fine-tuned models
    5. Product engineering: a layer that looks into operationalizing efficiencies across the other layers
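To make the "vector database" component concrete, the core operation such a store performs is a nearest-neighbour lookup over embeddings. Here is a minimal brute-force sketch with toy, hand-written 4-dimensional vectors (all names and data are illustrative; production systems use learned embeddings and approximate indexes):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, docs, k=2):
    """Indices of the k document vectors most similar to the query."""
    ranked = sorted(range(len(docs)), key=lambda i: cosine(query, docs[i]), reverse=True)
    return ranked[:k]

# Toy "embeddings" for three enterprise documents
docs = [
    [0.9, 0.1, 0.0, 0.0],    # doc 0: refund policy
    [0.0, 0.8, 0.2, 0.0],    # doc 1: support hours
    [0.85, 0.15, 0.0, 0.1],  # doc 2: returns FAQ
]
query = [1.0, 0.0, 0.0, 0.0]
print(top_k(query, docs))  # -> [0, 2]
```

A real vector database does exactly this ranking, but over millions of high-dimensional vectors, which is why the article notes these systems are compute-intensive to build and maintain.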


  3. Areas of Application Differ in B2C from B2B
    1. For B2C – consumerization of content – B2C experiences will grow faster in the initial days. Traction is building up in original content creation; digital marketing, advertising, and sales; and image-video-audio models in filmmaking. Here, newer models will offer a better experience than previous versions, and the burden of model upgrades will lie with model OEMs.
    2. For B2B – model-as-a-service – In B2B, fine-tuned models built on top of base models will be the need, offered as a service. Since model training depends on organizational data, these offerings will likely be based on proprietary pre-trained models.


  4. Model and Technology Selection – As with any new and disruptive technology, Gen AI usage depends on being able to paint a practical usability picture for end users. What does that mean? Here are some ideas that came up:
    1. Pre-empt user demand with practical use cases – This is the sweet spot of cost-benefit with Gen AI. Build a battery of power users and give them subscriptions to the most impactful tools, based on a set of initial safe use cases. Amidst the many queries on "How can we quickly use Gen AI?", it is important to serve as a thought partner, uncover the right use cases, and direct clients into that discovery mode early.
    2. Find the "right usable size of data" – LLMs use 10k GPUs on average! Limiting the dataset can lead to what the experts termed SLMs, or small language models: enterprise data layered on top of a pre-trained model to enable content creation and similar tasks. SLMs could take off with as few as 5 GPUs. Coach users – clients, employees, executives – on the use cases best suited to drawing maximum benefit from Gen AI versus narrow AI. Bots can be thought of as SLMs, but their adoption stands at 10% globally.
    3. The fine-tuning layer will have the most use cases, as it enables easy model augmentation over the base LLM. Three major reasons why applications in this layer will fly off the shelves faster:
      1. Helps get an MVP out quickly;
      2. Enables some level of stress-testing to set data-sharing guardrails; and most importantly
      3. Uncovers pockets of high usability with unreliable output, versus high usability with safe output
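The "enterprise data on top of a frozen base model" pattern described above is often realized as retrieval-augmented prompt assembly: the base LLM never changes, only the context fed to it. A minimal sketch, with hypothetical function and parameter names rather than any specific vendor API:

```python
def build_grounded_prompt(question, snippets, max_chars=1000):
    """Assemble a prompt that grounds a frozen base LLM in enterprise
    data snippets; the model stays untouched, only the context varies."""
    context, used = [], 0
    for s in snippets:
        if used + len(s) > max_chars:
            break  # respect the model's context budget
        context.append(s)
        used += len(s)
    header = "Answer using only the context below. Say 'unknown' if it is not covered.\n"
    body = "\n".join(f"- {s}" for s in context)
    return f"{header}\nContext:\n{body}\n\nQuestion: {question}\nAnswer:"

prompt = build_grounded_prompt(
    "What is our refund window?",
    ["Refunds are accepted within 30 days of purchase.",
     "Support hours are 9am-6pm IST."],
)
print(prompt)
```

This also illustrates why the layer lends itself to stress-testing: the guardrail (which snippets are allowed in, and the "say 'unknown'" instruction) lives entirely in this thin application layer.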


  5. Ontology and Datasets – These are the core inputs to LLMs.
    1. Most LLMs have been fed on publicly available content libraries maintained by the Western, more developed nations. A certain degree of bias is therefore baked into existing LLMs, between what is acceptable meaning in the West versus the East.
    2. The second concern is about ontologies for industry-specific implementations, where enterprises can use pre-existing models – generative adversarial networks (GANs), variational autoencoders (VAEs), diffusion models, or flow-based models – and then put a subset of enterprise data on top to create contained models. How can reliable, legally compliant, industry-specific ontologies be created, and who owns them? This would be crucial to address in order to scale usage across the less glamorous but really productive use cases.
    3. The third aspect of getting ontologies right has to do with organizations building on top of LLMs that are stateless. Enterprise datasets enable models that record state and relearn, but the base model does not learn, and when it is updated by the owner organization, ontologies have to be recreated!
    4. The fourth aspect is crucial, especially in the India context. Spoken languages and dialects in a specific region can make most pre-existing LLMs practically challenging to use. Phonetics, lingual characteristics, pronunciations, and the resulting jaw movements can make universal English-based libraries less usable as-is. Existing models then need a layer of NLU/NLP on top, again a resource-intensive task. So, for which use cases should India create its own datasets and generative AI models, and is there then an opportunity to make in India for global use? Should these models be closed or open?
    5. Lastly, ontologies also depend on principles and behaviors – the culture. Ethical behaviors, safe operational frameworks, acceptable data sharing, and the nature of reinforcement training all differ across cultures. This is a foundational element in aligning operational AI frameworks with ethical and responsible AI principles, and a big challenge for organizations and countries.


  6. Legal Contours of Using Gen AI – Of utmost importance, ethical and responsible AI is a dimension of using AI in itself. Certain companies building tools, apps, and services on top of existing models are front-loading customer consent based on an acceptable set of test-case scenarios and outputs. Front-loading consent is a quick way to show the customer where guardrails are needed; in such engagements, stress-testing or black-box testing is the way to establish the type of data to be provided and the use cases to be built. Tech services providers are more cautious as requests for code generation increase. While the code might be proprietary and usable by consent, the pre-trained Gen AI models are yet to prove themselves absolved of data privacy and copyright infringements. With global operations, this can become a real Achilles' heel. Therefore, while there are experiments across the board with a multitude of customer requests, actual deployment will require clarity on broader legal aspects. Across the board, service providers are worried about:
  • Copyright action, since the sources of data for many large models could have overridden IP
  • Plagiarism being a grey area, since most systems today do similarity checks, not exact matches
  • Establishing data ownership, especially in large proprietary models
  • Building checks within the organization – who are the right people to drive thinking, strategy, and execution on ethical and responsible AI?

The nasscom Responsible AI Hub is working with the industry to put together the first responsible AI framework, and the outcome is eagerly awaited.


  7. Impact on Existing Talent and Jobs – Leaders fully accepted the importance of this issue. Is there a straight solution? No. But there are ways to manage this tech shift, taking cues from past disruptions.
  • Organizational awareness – At the high end of awareness are CEOs/CFOs, because they are exposed to the market and decision-making; then come the tech-savvy employees and AI experts who have already dived deep. End users who are nifty with search and SEO techniques, or great at challenger selling, might find prompt engineering easy to learn. Surprisingly, CIOs/CTOs/CDOs who struggled to prove RoI with narrow AI models find themselves in a déjà vu situation.
  • Communication and change management – Employees are adopting gen AI independently and could end up exposing business-critical data (case in point: Samsung), or posting a gen-AI-generated output that turns out to be incorrect. There are several such examples, and this is a serious reputational-damage risk. It is therefore imperative for CXOs to drive education, awareness, and quick skilling on gen AI tools to arrest shadow usage and loss of business data.
  • Training for Gen AI – Prompt engineering is one skill. It is more important, however, to understand the kinds of outputs that different types of tools can successfully generate, and to use those tools according to context. Right-sizing Gen AI training is key.
  • Gen AI models could fuel resource reallocation – Consider the 5-6 stage software development lifecycle, from requirements gathering, design, development, and testing to QA, delivery, and release management: the distribution of resources may thin out from the middle stages of development and testing towards design, QA, and delivery management. More bespoke code in less time, with quality assured – this might become the modus operandi for standardized code development. Reliability checks and output validation will become important roles in most gen AI code-copilot cases.
  • Pressure on talent density – This was an interesting point of view. Talent density is the collective competence of talent in an organization at a point in time. The pressure on talent density could go up: one, because fewer people will have to know more; and two, because fewer people will have to do more, as quality checks and validation tasks stretch expert resources.
  • Long-term skilling plan to address the reducing half-life of jobs – The half-life of a job is a measure of how long a skill is needed, a timely concern now that prompt engineering is in such demand. According to an IBM estimate, the half-life of skills has come down from 5 years to 4, and for more technical skills it is only 2.5 years; one forecast suggests that for disruptive tech it will be even shorter. This has major ramifications for curriculum design aimed at building a ready-to-employ pipeline of skilled graduates, which means upgrading skilling initiatives midway to address shifts in technology and industry usage.
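The half-life framing can be made concrete with a toy exponential-decay calculation (an illustrative model using the IBM figures quoted above, not a computation from the roundtable):

```python
def skill_relevance(years, half_life):
    """Fraction of a skill's value remaining after `years`,
    assuming exponential decay with the given half-life."""
    return 0.5 ** (years / half_life)

# After 5 years, a skill with a 5-year half-life retains 50% of its value,
# while a technical skill with a 2.5-year half-life retains only 25%.
print(round(skill_relevance(5, 5), 2))    # -> 0.5
print(round(skill_relevance(5, 2.5), 2))  # -> 0.25
```

Under this model, shortening the half-life from 5 to 2.5 years halves what survives of a skill over a typical degree-plus-placement cycle, which is exactly why curricula would need mid-course upgrades.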

Nasscom has always advocated that in this techade, the only way to stay competitive is to transform into a continuously learning organization. Gen AI is driving this message home!





Namita Jain
Deputy Director, Research

Researching technology for actionable impact, with 12 years in tech strategy and advisory.

ยฉ Copyright nasscom. All Rights Reserved.