
Harnessing the Power of Large Language Models: A Guide for Developing Enterprise Applications

September 16, 2024


Article authored by Kamesh Sampath, Lead Developer Advocate, Snowflake

Large Language Models (LLMs) have emerged as a transformative force in the rapidly evolving landscape of artificial intelligence, reshaping how organizations interact with data and build applications. These sophisticated AI models, trained on vast amounts of text, have unlocked new possibilities in Natural Language Processing (NLP) and generation. From enhancing customer service to revolutionizing data analysis, LLMs are paving the way for more intuitive and powerful AI-driven solutions.

 

Whether you're a business leader, data scientist, or technology enthusiast, this exploration of LLMs will equip you with the knowledge to navigate the exciting future of AI-powered enterprise solutions.

 

This article aims to demystify LLMs and explore their practical applications for enterprise data. We'll delve into:

1. LLM Fundamentals: Understanding the core concepts and commonly used LLM jargon.

2. Generative AI Use Cases: Touching on possible applications and the techniques to use with them.

3. Data-Centric Applications: Examining how LLMs can revolutionize data analysis and interaction using text-to-SQL.

4. Low-Code and No-Code Solutions: Investigating how AI service providers like Snowflake, OpenAI, and Anthropic are making LLM adoption more accessible.

Techniques

Prompting

 

Prompting is a technique that guides the LLM toward producing the information we seek.

The prompt has multiple parts:

 

  • Priming

Priming is a fundamental part of an LLM prompt: we make the LLM assume a role so that its responses align with that role, e.g., “You are a customer support agent of a telecommunications company. You respond to customer messages.”

  • Style and Tone

The style and tone set the context and guide the LLM to adhere to a specific tone when responding to users, e.g., “Acknowledge and reference the customer's message content. Thank the customer for their message. If the customer complains, apologize to the customer.”

  • Error Handling

LLMs tend to hallucinate when they do not have answers to our questions. Error handling instructs the LLM what to do when it does not know the answer, e.g., “If you don't know the answer, then say ‘I don't know’. Do not hallucinate.”

  • Dynamic Content

We often need to enrich the prompt with extra content that gives the LLM more relevant context and guidance, enabling it to produce the desired output.

  • Output Formatting

Output formatting specifies how the output from the LLM should look, e.g., JSON, Python code, etc.

 

This whole process of building the prompt is called Prompt Engineering. It is an essential part of Generative AI because it determines how the output gets steered. Output steering refers to the techniques and methods we use to guide the LLM to generate the response/output we require.
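To make this concrete, here is a minimal sketch in Python that assembles a prompt from the parts described above. The function name, the order-status field, and the JSON keys are illustrative assumptions, not a prescribed API.

```python
# A minimal sketch: assembling a support-agent prompt from its parts.
# All names and fields here are illustrative placeholders.
def build_prompt(customer_message: str, order_status: str) -> str:
    priming = "You are a customer support agent of a telecommunications company."
    style_and_tone = (
        "Acknowledge and reference the customer's message content. "
        "Thank the customer for their message. If the customer complains, apologize."
    )
    error_handling = (
        "If you don't know the answer, say 'I don't know'. Do not hallucinate."
    )
    # Dynamic content: extra, request-specific context injected at runtime.
    dynamic_content = f"Relevant account data: order status = {order_status}."
    output_formatting = "Respond as JSON with the keys 'reply' and 'sentiment'."
    return "\n".join([priming, style_and_tone, error_handling,
                      dynamic_content, output_formatting,
                      f"Customer message: {customer_message}"])

print(build_prompt("My internet has been down since Monday!", "technician scheduled"))
```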

 

Retrieval Augmented Generation (RAG)

 

LLMs are trained on data up to a specific cutoff date and in a general context. When organizations start to use an LLM, they prefer it to understand their respective domain and generate output that aligns with it; LLMs don't have the capacity to do that out of the box. We can achieve this using a technique called Retrieval Augmented Generation (RAG).

 

In RAG, we inject extra, domain-specific context into the prompt, enabling the LLM to answer the question in a way that is more relevant to the domain.

 

To use RAG, we first need to extract the context information, usually documents, knowledge bases, websites/portals, and so on, and convert it into vectors. Vectors are numerical representations, usually arrays of floating-point numbers, of a word, character, sentence, or whole document; in the RAG world, the pieces of text they represent are called chunks.
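Here is a minimal sketch of that chunking-and-embedding step, assuming a simple character-based splitter. The embed() function below is only a toy stand-in; in practice you would call a real embedding model.

```python
# A minimal sketch: splitting documents into chunks and embedding them.
from typing import List

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> List[str]:
    """Split a document into overlapping character-based chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(text: str) -> List[float]:
    """Toy stand-in for a real embedding model: a bag-of-letters vector.
    Replace this with a call to an actual embedding endpoint."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

chunks = chunk_text(open("product_docs.txt").read())  # illustrative file name
chunk_vectors = [embed(c) for c in chunks]
```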

 

As part of RAG, the LLM application semantically searches the multidimensional vector space for the user's question and retrieves the relevant documents that match. The retrieved documents are then added to the prompt's Dynamic Content, providing the LLM with contextual guidance so it can answer within the domain the user expects. A classic example of this technique is a knowledge chatbot or customer support chatbot, enabling organizations to answer customer queries quickly.
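Continuing the sketch above, retrieval ranks chunks by similarity to the question and folds the winners into the prompt as Dynamic Content. The cosine-similarity metric and prompt wording are illustrative choices, not a fixed recipe.

```python
# A minimal sketch: semantic retrieval plus prompt enrichment.
import math
from typing import List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: List[str],
             chunk_vectors: List[List[float]], top_k: int = 3) -> List[str]:
    """Return the top_k chunks most similar to the question."""
    q = embed(question)  # the same embed() from the previous sketch
    ranked = sorted(range(len(chunks)),
                    key=lambda i: cosine_similarity(q, chunk_vectors[i]),
                    reverse=True)
    return [chunks[i] for i in ranked[:top_k]]

def build_rag_prompt(question: str, context_chunks: List[str]) -> str:
    """Enrich the prompt's Dynamic Content with the retrieved chunks."""
    context = "\n\n".join(context_chunks)
    return ("Answer using only the context below. If the answer is not "
            "in the context, say 'I don't know'.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")
```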

 

Fine-Tuning

 

Before we get into the details of Fine-Tuning, and why we should apply it to LLMs in enterprise scenarios, we need to understand a few other essential pieces of the LLM picture.

 

What is a token? 

 

A token is the fundamental unit of text in an LLM; it could be a word, part of a word, a space, a special character, etc. A prompt like “What is an AI Data Cloud Platform ?” is split into roughly ten to fifteen tokens, depending on the model's tokenizer, including the stray space and the “?”. Every piece of text you send in a prompt counts toward the tokens.
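You can count tokens yourself. As one concrete example, the sketch below uses OpenAI's open-source tiktoken library; other providers' models use different tokenizers, so counts will vary.

```python
# Counting tokens with one example tokenizer (tiktoken's cl100k_base).
# Other models tokenize differently, so treat the count as model-specific.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
tokens = encoding.encode("What is an AI Data Cloud Platform ?")
print(len(tokens))   # the token count for this tokenizer
print(tokens)        # the underlying token IDs
```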

 

 

Why are tokens important?

 

LLM service providers usually charge users per million tokens. The Context Window is the number of tokens (text/characters) a model can handle as part of a single user request. Models can be classified by the size of the Context Window they handle.
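Since billing is per token, a back-of-the-envelope estimate is simple arithmetic. The rate below is a made-up placeholder, not a real price; check your provider's current price list.

```python
# Back-of-the-envelope cost estimate for a single request.
PRICE_PER_MILLION_TOKENS = 3.00  # hypothetical USD rate, not a real price

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# A near-context-window-sized request against a large model adds up quickly.
print(f"${estimate_cost(120_000, 2_000):.4f}")
```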

 

Some examples of models by size:

 

  • Large
    • mistral-large2  - 128,000 tokens
    • llama3.1-405b - 128,000 tokens
  • Medium
    • snowflake-arctic - 4,096 tokens
    • reka-flash - 100,000 tokens
  • Small
    • mistral-7b - 32,000 tokens
    • gemma-7b - 8,000 tokens

 

The size of the Context Window a model supports usually influences the accuracy of its output: the larger the model (and its Context Window), the better the output tends to be. From a cost perspective, though, a bigger Context Window tends to increase costs. Using a large LLM for an enterprise data task where millions or billions of rows are passed as part of the context will drive costs up. So what to do then? That's where the technique of Fine-Tuning the LLM helps.

 

Fine-Tuning of an LLM is a technique for getting comparable results from a smaller model (one with a smaller Context Window). In this technique, we build sample outputs, the more accurate ones, using a large LLM and a smaller data set, and then use those responses to train the smaller model to produce a similar response given the same prompt and request context. This way, we achieve much the same result at a fraction of the cost.

 

Fine-Tuning is effective for automating business processes or tasks that would otherwise require complex business rules. With Fine-Tuning, we can do the same via simple natural language, e.g., say you want to automatically categorize support tickets based on a service type, as sketched below.
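Here is a minimal sketch of the data-preparation half of that workflow: a large model labels a small sample of tickets, and the prompt/completion pairs are written out as training data for a smaller model. The large_llm callable, the categories, and the JSONL layout are illustrative assumptions; each provider defines its own fine-tuning data format and API.

```python
# A minimal sketch: distillation-style training data for fine-tuning.
# `large_llm` stands in for any large-model completion call.
import json
from typing import Callable, Iterable

CATEGORIES = ["billing", "network", "device", "account"]  # illustrative

def label_ticket(ticket_text: str, large_llm: Callable[[str], str]) -> str:
    """Ask the large model for a category; the reply should be the category only."""
    prompt = (f"Classify this support ticket into one of {CATEGORIES}. "
              f"Reply with only the category.\n\nTicket: {ticket_text}")
    return large_llm(prompt)

def write_training_file(tickets: Iterable[str],
                        large_llm: Callable[[str], str],
                        path: str = "finetune.jsonl") -> None:
    """Write prompt/completion pairs in a common JSONL convention."""
    with open(path, "w") as f:
        for ticket in tickets:
            record = {"prompt": ticket,
                      "completion": label_ticket(ticket, large_llm)}
            f.write(json.dumps(record) + "\n")
```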

 

 

Low-Code and No-Code Solutions

 

Providing developers with no-code and low-code solutions can elevate LLM features to game-changing status.

 

Low-code solutions help developers with limited knowledge of LLMs get started quickly. In this case, the LLM/AI service provider hides the complexity of building models, tuning parameters, choosing suitable algorithms, and so on, and provides the user with wrapper functions around key ML applications, e.g., Forecasting, Anomaly Detection, Classification, enabling developers to use AI/ML features instantly. This is profoundly beneficial when applying AI/ML in an enterprise environment.
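As one example of such a wrapper, Snowflake's Cortex exposes model inference behind a single Python call via the snowflake-ml-python package. The sketch below assumes an active Snowpark session and access to the named model; details may differ in your environment.

```python
# A low-code wrapper in action: Snowflake Cortex's Complete function
# hides model hosting and inference behind one call. Assumes an active
# Snowpark session; model availability varies by region and account.
from snowflake.cortex import Complete

response = Complete(
    "mistral-large2",
    "Summarize the benefits of a low-code AI platform in two sentences.",
)
print(response)
```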

No-code solutions go a step further by providing a state-of-the-art user interface (UI) that allows even non-technical users to leverage the power of LLMs without writing a single line of code.

Conclusion

 

Large Language Models (LLMs) are rapidly transforming the landscape of AI and its applications in the enterprise world. As we've explored, understanding the fundamentals of LLMs—from prompting techniques to context windows—is crucial for effectively leveraging these powerful tools.

 

Key takeaways include:

 

1. The importance of prompt engineering in guiding LLM outputs

 

2. The potential of Retrieval Augmented Generation (RAG) for enhancing LLMs with domain-specific knowledge

 

3. The balance between model size, performance, and cost considerations

 

4. The value of Fine-Tuning in optimizing LLMs for specific tasks

 

As organizations continue to adopt LLM technology, we're likely to see an explosion of innovative applications across various sectors. LLMs are poised to revolutionize how businesses interact with and extract value from their data. 

 

However, it's crucial to approach LLM implementation with careful consideration of security, ethics, and data privacy. As the technology evolves, so will the frameworks and best practices for responsible AI use.

 

The future of LLMs in enterprise settings is bright, with ongoing research and development promising even more powerful and efficient models. Organizations can unlock new levels of productivity, insight, and innovation by staying informed about these advancements and thoughtfully integrating LLMs into their operations.

 

As we move forward, the key to success will lie not only in the technology itself but in how creatively and responsibly we apply it to solve real-world problems and drive business value.


 

 

 




