AWS Gen AI Hackathon
Virtual Recruiter | GenAI - RAG (Google Slides)
Links
- Visualizing Amazon SageMaker machine learning predictions with Amazon QuickSight (AWS Machine Learning Blog)
- Generative AI (AWS Training and Certification Blog)
- Generative AI for every business
Hackathons
- Some UI (e.g., Streamlit; see the Code section below)
- Amazon Transcribe / Amazon Rekognition (a minimal Transcribe call is sketched below)
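A minimal sketch of kicking off an interview transcription with Amazon Transcribe via boto3; the bucket, key, and job names are placeholders (not from these notes), and error handling is omitted:

import time
import boto3

# Sketch: transcribe a recorded interview stored in S3.
# Bucket, key, and job name are hypothetical placeholders.
transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="virtual-recruiter-interview-001",
    Media={"MediaFileUri": "s3://my-hackathon-bucket/interviews/candidate1.mp4"},
    MediaFormat="mp4",
    LanguageCode="en-US",
    OutputBucketName="my-hackathon-bucket",  # transcript JSON lands here
)

# Poll until the job finishes (simplified; production code should back off)
while True:
    job = transcribe.get_transcription_job(
        TranscriptionJobName="virtual-recruiter-interview-001")
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)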
Building blocks
- Choosing the right LLM with prompting techniques
- In-context learning and RAG (a prompt-assembly sketch follows this list)
- LLM Agents / Multi-Agents
- Fine-tuning / RLHF
- Pre-training or Build your own Model
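A minimal sketch of the in-context learning + RAG building block: retrieve passages relevant to the question and place them in the prompt as context. The retriever here is a stub and all names are illustrative, not from any specific library:

# In-context RAG: stuff retrieved passages into the prompt as context.
def retrieve(question: str, k: int = 3) -> list[str]:
    # Hypothetical placeholder; a real retriever would embed the question
    # and return the k nearest document chunks from a vector store.
    return ["<retrieved passage 1>", "<retrieved passage 2>"]

def build_rag_prompt(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_rag_prompt("What AWS services does the Virtual Recruiter use?"))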
Resources
Digital Trainings
- Planning a Generative AI Project - https://explore.skillbuilder.aws/learn/course/external/view/elearning/17256/planning-a-generative-ai-project
- Amazon Bedrock Getting Started - https://explore.skillbuilder.aws/learn/course/external/view/elearning/17508/amazon-bedrock-getting-started
- Foundations of Prompt Engineering - https://explore.skillbuilder.aws/learn/course/external/view/elearning/17763/foundations-of-prompt-engineering
- Building Generative AI Applications Using Amazon Bedrock - https://explore.skillbuilder.aws/learn/course/external/view/elearning/17904/building-generative-ai-applications-using-amazon-bedrock
- Building Language Models on AWS - https://explore.skillbuilder.aws/learn/course/external/view/elearning/17556/building-language-models-on-aws
Blogs
- Build financial search applications using the Amazon Bedrock Cohere multilingual embedding model
- Inference Llama 2 models with real-time response streaming using Amazon SageMaker
- Build generative AI agents with Amazon Bedrock, Amazon DynamoDB, Amazon Kendra, Amazon Lex, and LangChain
- Boosting RAG-based intelligent document assistants using entity extraction, SQL querying, and agents with Amazon Bedrock
- Operationalize LLM Evaluation at Scale using Amazon SageMaker Clarify and MLOps services
Others
- Develop advanced generative AI chatbots by using RAG and ReAct prompting
- Back to Basics: Understanding Retrieval Augmented Generation (RAG)
- Powering Multiple Contact Centers with GenAI Using Amazon Bedrock
- Prompt Engineering Guide - https://www.promptingguide.ai
Hands-on
- Build a question-answering bot using generative AI – Self-Paced Lab (part of the AWS Skill Builder subscription; a 7-day free trial is available)
Bedrock
- Agents for Amazon Bedrock (Amazon Bedrock documentation)
- How Agents for Amazon Bedrock works (Amazon Bedrock documentation)
- aws-samples/amazon-bedrock-workshop: 02_KnowledgeBases_and_RAG/1_managed-rag-kb-retrieve-generate-api.ipynb at a7e62b80669378de1bae414e0b646399c7934f8e (GitHub)
- aws-samples/amazon-bedrock-samples: rag-solutions/contextual-chatbot-using-knowledgebase at main (GitHub); a minimal RetrieveAndGenerate call is sketched below
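A minimal sketch of the managed-RAG (RetrieveAndGenerate) call covered in the workshop notebook above, assuming a knowledge base already exists; the knowledge base ID, region, and model ARN are placeholders to substitute:

import boto3

# Knowledge Bases for Amazon Bedrock managed RAG: a single
# RetrieveAndGenerate call retrieves chunks and generates an answer.
client = boto3.client("bedrock-agent-runtime")

response = client.retrieve_and_generate(
    input={"text": "What are the candidate screening criteria?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "XXXXXXXXXX",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        },
    },
)

print(response["output"]["text"])        # generated answer
for citation in response["citations"]:   # source chunks used for grounding
    for ref in citation["retrievedReferences"]:
        print(ref["location"])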
Links
- jossai87/bedrock-agents-streamlit: Creating Bedrock Agents with Streamlit Framework (GitHub)
- Implementing RAG App Using Knowledge Base from Amazon Bedrock and Streamlit (Saikat Mukherjee, Medium, Mar 2024)
- build-on-aws/bedrock-agents-streamlit: Creating Amazon Bedrock agents with Streamlit Framework (GitHub)
- build-on-aws/amazon-bedrock-agents-quickstart: Learn how to quickly build Agents with Amazon Bedrock (GitHub)
- Build a contextual chatbot application using Knowledge Bases for Amazon Bedrock (AWS Machine Learning Blog)
- Preview: Connect Foundation Models to Your Company Data Sources with Agents for Amazon Bedrock (AWS News Blog)
- Invoking LLM models using Bedrock from AWS (Sanjeeb Panda, Medium)
- How to build your own RAG chatbot using LangChain and Streamlit
- Build a real-time RAG chatbot using Google Drive and Sharepoint
- PatrickPT/RAG_LLM_example: Streamlit powered Python Chatbot with Retrieval Augmented Generation on LLMs (GitHub)
- Hands on with Retrieval Augmented Generation (Notes on AI)
Code
import streamlit as st
from llama_index.core import VectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
import openai
import yaml

# Import configuration from yaml
with open("config.yaml", "r") as yamlfile:
    config = yaml.load(yamlfile, Loader=yaml.FullLoader)

name = config[0]['config']['name']
info = config[0]['config']['info']
input_dir = config[0]['config']['input_dir']
system_prompt = config[0]['config']['system_prompt']
api = config[0]['config']['api']  # OpenAI model name to use

# Set Streamlit page configuration
st.set_page_config(page_title=name, page_icon="🦙", layout="centered",
                   initial_sidebar_state="auto", menu_items=None)

# Set OpenAI API key from Streamlit secrets
openai.api_key = st.secrets.openai_key

# Create main interface
st.title(name)
st.info(info, icon="📃")

# Initialize the chat messages history
if "messages" not in st.session_state.keys():
    st.session_state.messages = [
        {"role": "assistant", "content": "Ask me a question"}
    ]

# Function to load data; the index is cached in memory,
# so limit the knowledge base according to your machine
@st.cache_resource(show_spinner=False)
def load_data():
    with st.spinner(text="Loading and indexing the provided data"):
        reader = SimpleDirectoryReader(input_dir=input_dir, recursive=True)  # read all directories recursively
        docs = reader.load_data()  # load data and create docs
        # Attach the LLM and a permanent system prompt to every request
        service_context = ServiceContext.from_defaults(
            llm=OpenAI(model=api, temperature=0.5, system_prompt=system_prompt))
        index = VectorStoreIndex.from_documents(docs, service_context=service_context)  # create your vector database
        return index

# Load data and create the chat engine
index = load_data()
chat_engine = index.as_chat_engine(chat_mode="condense_question", verbose=True)

# Prompt for user input and save it to the chat history
if prompt := st.chat_input("Your question"):
    st.session_state.messages.append({"role": "user", "content": prompt})

# Display the prior chat messages
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.write(message["content"])

# Generate a response if the last message is not from the assistant
if st.session_state.messages[-1]["role"] != "assistant":
    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):
            response = chat_engine.chat(prompt)
            st.write(response.response)
            message = {"role": "assistant", "content": response.response}
            st.session_state.messages.append(message)  # Add response to message history
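The script expects a config.yaml shaped as a one-element list under a config key (matching the config[0]['config'][...] lookups above). A sample with illustrative values:

- config:
    name: "Virtual Recruiter"
    info: "Ask questions about the indexed recruiting documents."
    input_dir: "./data"
    system_prompt: "You are a helpful assistant. Answer only from the indexed documents."
    api: "gpt-3.5-turbo"

With openai_key = "sk-..." set in .streamlit/secrets.toml (the key st.secrets.openai_key reads), start the app with streamlit run app.py, assuming the script is saved as app.py.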