
LangChain

Welcome to LangChain - 🦜🔗 LangChain 0.0.180

LangChain Modules

python -m pip install --upgrade 'langchain[llms]'
pip install chromadb
pip install pypdf

pip install chainlit
chainlit hello

chainlit run document_qa.py
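For context, `chainlit run` expects a Python file that registers message handlers. A minimal sketch of the shape document_qa.py might take (the file name comes from the command above; the echo logic is purely illustrative, and recent Chainlit versions pass a `cl.Message` object to the handler):

```python
# document_qa.py - minimal Chainlit app skeleton (illustrative only)
import chainlit as cl

@cl.on_message
async def main(message: cl.Message):
    # A real document-QA app would route message.content through a
    # retrieval chain here; this skeleton just echoes the input back.
    await cl.Message(content=f"You said: {message.content}").send()
```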

LangChain vs LlamaIndex

LangChain and LlamaIndex take distinct approaches to implementing retrieval-augmented generation (RAG) workflows.

LangChain follows a modular pipeline starting with Document Loaders that handle various file formats, followed by Text Splitters for chunk management, and Embeddings for vector creation.

It then uses Vector Stores such as SingleStore, FAISS, or Chroma for storage, a Retriever for similarity search, and finally an LLM Chain for response generation. The framework emphasizes composability and flexibility in pipeline construction.
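As a concrete sketch of that pipeline, with each stage labeled in a comment (the PDF path and OpenAI models are assumptions, and the imports use the pre-0.1 langchain paths matching the version linked above):

```python
from langchain.document_loaders import PyPDFLoader                  # Document Loader
from langchain.text_splitter import RecursiveCharacterTextSplitter  # Text Splitter
from langchain.embeddings import OpenAIEmbeddings                   # Embeddings
from langchain.vectorstores import Chroma                           # Vector Store
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA                            # Retriever + LLM Chain

docs = PyPDFLoader("sample.pdf").load()  # assumed local file
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)
store = Chroma.from_documents(chunks, OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=store.as_retriever())
print(qa.run("What is this document about?"))
```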

On the other hand, LlamaIndex begins with Data Connectors for multi-source loading, employs a Node Parser for sophisticated document processing, and features diverse Index Construction options including vector, list, and tree structures.

It implements a Storage Context for persistent storage, an advanced Query Engine for retrieval, and Response Synthesis for context integration. LlamaIndex specializes in data indexing and retrieval, offering more sophisticated indexing structures out of the box, while maintaining a focus on ease of use with structured data.
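The equivalent flow in LlamaIndex is more compact because the index object bundles several of these stages. A sketch assuming a local data/ directory and the pre-0.10 llama_index import paths:

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

docs = SimpleDirectoryReader("data").load_data()           # Data Connector
index = VectorStoreIndex.from_documents(docs)              # Node Parser + Index Construction
index.storage_context.persist("storage")                   # Storage Context
query_engine = index.as_query_engine()                     # Query Engine
print(query_engine.query("What is this document about?"))  # Response Synthesis
```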

The key distinction lies in their approaches: LangChain prioritizes customization and pipeline flexibility, while LlamaIndex emphasizes structured data handling and advanced indexing capabilities, making each framework suitable for different use cases in RAG implementations.

No matter which AI framework you pick, I always recommend a robust data platform like SingleStore that supports not just vector storage but also hybrid search, low-latency queries, fast data ingestion, all data types, integration with AI frameworks, and much more.


A Beginner’s Guide to Building LLM-Powered Applications with LangChain! - DEV Community

Understanding LlamaIndex in 9 Minutes! - YouTube

LangGraph

Courses

LangSmith

smolagents - Agents

Building your agent

To initialize a minimal agent, you need at least these two arguments (a runnable sketch follows the list):

  • model, a text-generation model to power your agent. The agent differs from a bare LLM in that it is a system that uses an LLM as its engine. You can use any of these options:

    • TransformersModel takes a pre-initialized transformers pipeline to run inference on your local machine using transformers.
    • HfApiModel leverages a huggingface_hub.InferenceClient under the hood.
    • LiteLLMModel lets you call 100+ different models through LiteLLM!
    • AzureOpenAIServerModel allows you to use OpenAI models deployed in Azure.
  • tools, a list of Tools that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your tools list by defining the optional argument add_base_tools=True.
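Putting the two arguments together, a minimal sketch (the model id and the prompt are illustrative; HfApiModel reads your Hugging Face token from the environment):

```python
from smolagents import CodeAgent, HfApiModel

# Engine: a hosted model via the HF Inference API (model id is an example)
model = HfApiModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")

# Agent: no custom tools of our own, but pull in the default toolbox
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run("How many seconds are in a leap year?")
```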