LangChain
LangChain is a framework for developing applications powered by large language models (LLMs). An LLM is a machine learning model that can comprehend and generate human-language text; it works by learning from massive datasets of language.
Prompts & Prompt Chaining
Prompt: an input that a user provides to an AI model to get a specific response.
PromptTemplate
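A minimal PromptTemplate sketch; the import path reflects current langchain releases and may differ in older versions, and the template text and variable name are illustrative:

    from langchain.prompts import PromptTemplate

    template = PromptTemplate(
        input_variables=["topic"],
        template="Explain {topic} in two sentences.",
    )
    # Fills the placeholder and returns a ready-to-send prompt string.
    print(template.format(topic="prompt chaining"))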
Prompt Chaining: feeding the output of one prompt into the next, so a sequence of prompts produces a more coherent, structured result.
Sequential Chain: chains several LLM calls together so that each step's output becomes the next step's input.
Example application: a code snippet and test generation tool built from a sequential chain, as sketched below.
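A minimal sketch of such a tool, assuming an OpenAI API key is set in the environment and using the legacy LLMChain / SimpleSequentialChain helpers (newer LangChain releases favour LCEL); the prompts and model name are illustrative:

    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain, SimpleSequentialChain
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

    code_prompt = PromptTemplate.from_template(
        "Write a short Python function that {task}. Return only the code."
    )
    test_prompt = PromptTemplate.from_template(
        "Write pytest unit tests for the following code:\n{code}"
    )

    code_chain = LLMChain(llm=llm, prompt=code_prompt)
    test_chain = LLMChain(llm=llm, prompt=test_prompt)

    # The first chain's output (the snippet) is fed as input to the second (the tests).
    chain = SimpleSequentialChain(chains=[code_chain, test_chain], verbose=True)
    print(chain.run("reverses a string"))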
Chatbot – Fundamentals
Chat Memory:
- Feature in chatbot systems.
- Remembers past interactions and context.
- Enables personalized responses.
Together, these enable AI-powered chat functionality (see the sketch below).
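A minimal chat-memory sketch, assuming the legacy ConversationChain and ConversationBufferMemory classes (newer releases favour LangGraph persistence):

    from langchain.chains import ConversationChain
    from langchain.memory import ConversationBufferMemory
    from langchain_openai import ChatOpenAI

    conversation = ConversationChain(
        llm=ChatOpenAI(temperature=0),
        memory=ConversationBufferMemory(),  # stores the full message history in memory
    )

    conversation.predict(input="Hi, my name is Sam.")
    # The earlier turn is injected into the prompt, so the model can recall the name.
    print(conversation.predict(input="What is my name?"))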
Retrieval-Augmented Generation
RAG is a technique for augmenting LLM knowledge with additional data.
RAG Architecture:
- Indexing: a pipeline that ingests data from a source and indexes it, typically run offline (sketched below).
- Retrieval and generation: at runtime, the RAG chain retrieves the data relevant to the user's query from the index and passes it to the model along with the question.
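A minimal sketch of the indexing step, assuming the langchain-community, langchain-openai, and chromadb packages; the file path, chunk sizes, and choice of vector store are illustrative:

    from langchain_community.document_loaders import TextLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from langchain_openai import OpenAIEmbeddings
    from langchain_community.vectorstores import Chroma

    docs = TextLoader("notes.txt").load()               # 1. ingest the source
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=500, chunk_overlap=50
    ).split_documents(docs)                             # 2. split into chunks
    vectorstore = Chroma.from_documents(chunks, OpenAIEmbeddings())  # 3. embed and index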
Embedding Generation
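Embedding generation turns text into a dense vector used for similarity search; a minimal sketch with OpenAIEmbeddings (any embedding model supported by LangChain would work):

    from langchain_openai import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings()
    vector = embeddings.embed_query("What is retrieval-augmented generation?")
    print(len(vector))  # dimensionality of the embedding vector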
Contextual Question Handling and Retrieval-Based QA System
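A minimal retrieval-based QA sketch over the vector store built in the indexing step, using the legacy RetrievalQA helper (newer releases favour create_retrieval_chain); the question is illustrative:

    from langchain.chains import RetrievalQA
    from langchain_openai import ChatOpenAI

    qa = RetrievalQA.from_chain_type(
        llm=ChatOpenAI(temperature=0),
        retriever=vectorstore.as_retriever(),  # vectorstore from the indexing sketch above
    )
    print(qa.run("What does the document say about prompt chaining?"))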
LangChain Agents
- Agents use LLMs as reasoning engines for decision-making.
- They execute actions based on the LLM outputs.
- Results from actions can influence further decision-making by the LLM.
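A minimal agent sketch using the legacy initialize_agent helper (newer releases favour LangGraph agents); the word_count tool is a hypothetical toy tool:

    from langchain.agents import initialize_agent, AgentType, Tool
    from langchain_openai import ChatOpenAI

    def word_count(text: str) -> str:
        """Toy tool: count the words in a piece of text."""
        return str(len(text.split()))

    tools = [Tool(name="word_count", func=word_count,
                  description="Counts the words in the given text.")]

    agent = initialize_agent(
        tools,
        ChatOpenAI(temperature=0),
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,  # print the reason-act loop as the agent decides which tool to call
    )
    agent.run("How many words are in the sentence 'LangChain agents use tools'?")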
Setup:
- Create an OpenAI account and generate an API key
- Install Python 3.11.0
- pip3 install pipenv
- pipenv install
- pipenv shell
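A minimal sketch to verify the setup, assuming the API key is stored in a local .env file and that python-dotenv and langchain-openai are installed in the pipenv environment:

    from dotenv import load_dotenv
    from langchain_openai import ChatOpenAI

    load_dotenv()  # reads OPENAI_API_KEY from the local .env file
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    print(llm.invoke("Say hello from LangChain.").content)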