Generative AI in Production: Best Practices and Lessons Learned, 2023 State of AI Report, and Harrison Chase Gives Me a Shoutout!
Plus a philosophical take on prompt engineering, on-demand webinars, and understanding mental health via LLMs
What’s up, everyone!
Thank you to everyone who joined the LangChain ZoomCamp on Friday.
Don't worry if you missed it; I’ve got you covered. I’ll stick to sending links directly to the Zoom recordings, so here’s the link to this session’s recording, where you can watch or download the videos.
Here’s what we covered in this session:
Harpreet's Projects: Overview of Langchain course and a book I’m working on
Prompting Techniques: Deep dive into prompt templates, focusing on prompt engineering for the next session.
LLMs Varieties: Distinction between base and instruction-tuned LLMs, including their functionalities and applications.
Instruction-Tuned LLMs: Emphasis on their advantages, the art of crafting prompts, and the shift towards multi-tasking with large language models.
Crafting Prompts: Importance of clear, structured prompts for guiding AI models and ensuring accurate outputs.
Prompt Engineering: Exploration of autoregressive models, few-shot prompt templates, and chatbot prompt creation.
Serializing and Selecting Prompts: Techniques for storing, sharing, and selecting relevant prompts, focusing on balancing relevance and diversity.
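To make the few-shot prompt template idea from the session concrete, here’s a minimal plain-Python sketch (no LangChain dependency; the antonym examples and template wording are made up for illustration):

```python
# A minimal few-shot prompt template: format a handful of worked examples,
# then append the new input. Examples and wording are illustrative only.

EXAMPLE_TEMPLATE = "Input: {inp}\nOutput: {out}"

def build_few_shot_prompt(examples, new_input, instruction):
    """Assemble an instruction, the formatted examples, and the new query."""
    shots = "\n\n".join(
        EXAMPLE_TEMPLATE.format(inp=e["inp"], out=e["out"]) for e in examples
    )
    return f"{instruction}\n\n{shots}\n\nInput: {new_input}\nOutput:"

examples = [
    {"inp": "happy", "out": "sad"},
    {"inp": "tall", "out": "short"},
]
prompt = build_few_shot_prompt(examples, "fast", "Give the antonym of each word.")
print(prompt)
```

Libraries like LangChain wrap exactly this pattern (plus example selectors that pick which shots to include), but the underlying mechanic is just string assembly.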
The session materials are available on GitHub.
Don’t forget to ⭐️ the repo!
My first LinkedIn Learning Course is LIVE!
This course is the guide I wish I had during my early days in deep learning.
It's designed to simplify complex concepts, minimize math, and maximize intuitive explanations. I've incorporated plenty of visuals to make ideas resonate.
I remember my initial days in deep learning, feeling overwhelmed by terms like ImageNet, ResNets, and EfficientNets. The intricacies of neural networks, from pooling to convolution layers and even the mysterious 'Adam', seemed like a maze.
Transitioning from the scikit-learn universe, I was also puzzled about coding these networks in PyTorch.
I also discuss the history and evolution of computer vision leading up to the transformative era of deep learning.
I've poured my experiences and insights into this course, and I genuinely hope it provides value to all learners.
Enjoy!
I Just Launched a T-Shirt Line!
I keep all my content freely available by partnering with brands for sponsorships.
Lately, the pipeline for sponsorships has been a bit dry, so I launched a t-shirt line to gain community support.
You can check out the designs I made here.
Free On-demand Generative AI Webinars
I’ve partnered with SingleStore to bring you free, on-demand webinars you can watch at your own pace.
These are hands-on sessions where you’ll see GenAI code in action.
How to Build LLM Apps That Can See, Hear, Speak - Explore techniques for fetching company news, enhancing Q&A embeddings, leveraging voice recognition in databases, understanding OpenAI's voice and image tools, and generating human-like audio with text-to-speech models.
LLMs in Banking: Building Predictive Analytics for Loan Approvals - Discover the fundamentals of LLMs in predictive analytics, delve into Vector DBs and OpenAI integration, gain hands-on experience through demos, and understand the future of banking technology with a focus on intelligent loan decisions.
How to Use Kafka & Vectors for Real-Time Anomaly Detection - Discover SingleStore's prowess in handling IoT data challenges, experience real-time anomaly detection with Kafka integration, delve into the acceleration of data analytics through vector processing, and explore live insights via SingleStore's interactive notebook feature.
How to Build a NoCode AWS Bedrock LLM App on Flowise - Delve into AWS Bedrock's capabilities enhanced by vector databases, comprehend the importance of tools like SingleStoreDB, explore the open-source Flowise AI for visual LLM crafting, and experience a live demo of an LLM app integration by Michael Connor.
How to Build Generative Voice Clone Applications with OpenAI - Discover the intricacies of OpenAI's generative models for voice cloning, understand its applications and ethics, and witness a live demo showcasing the creation and integration of a voice clone using OpenAI and SingleStoreDB.
I know it sucks they ask for your phone number; you can just put (123) 456-7890 and it’ll work 😆.
Generative AI in Production: Best Practices and Lessons Learned
Last week, in a session sponsored by the Generative AI World Summit, I invited Chris Alexiuk, Alessya Visnjic, Hannes Hapke, Ville Tuulos, Greg Loughnane, and Meryem Arik to discuss the practical aspects of deploying generative AI solutions to production.
Thanks to the Generative AI World Summit for sponsoring this session. You can register for the conference with the discount code 'harpreet' for $75 off your ticket price.
Here are my key takeaways from the panel discussion:
Deploying Generative AI? It's not just about quick prototyping. Consider:
Shifting your approach to emphasize user feedback and real-world deployment challenges.
Aligning your generative AI solutions with what users truly need.
Balancing the theoretical with the practical for optimal deployment.
Design and Architecture Matter:
Team up with experts to whiteboard and determine the best system architecture.
Engage in building products around the AI model rather than just relying on it.
Facing challenges in transitioning from prototype to production? Keep iterating and addressing issues like model updates, monitoring, and scalability concerns.
When it comes to observability and evaluation:
Always prioritize system transparency, traceability, and understanding user behaviour.
Avoid relying solely on metrics. Collaborate on a holistic evaluation approach that considers user feedback and desired experiences.
And let's not forget fine-tuning and computation. As we deploy generative AI solutions, we must focus on optimizing retrieval systems and managing computational intensity.
User Experience is Paramount:
Always put the user first, ensuring interfaces are intuitive and safe.
Design experiences that protect users from potential pitfalls and allow for easy recovery.
Lastly, when setting up an evaluation pipeline:
Define specific data sets and metrics that resonate with the intended use cases.
Ensure metrics are designed to represent real-world applications accurately.
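As a rough sketch of what an evaluation pipeline like this can look like in practice, here’s a bare-bones loop: a task-specific dataset, a metric, and an average score. The "model" here is a stand-in dictionary so the example runs; in a real pipeline you’d swap in your actual LLM call:

```python
# A bare-bones evaluation pipeline: a task-specific dataset, a metric,
# and a loop that scores a model function. All names and data are illustrative.

def exact_match(prediction, reference):
    """Simplest possible metric: 1.0 if the strings match (case-insensitive)."""
    return 1.0 if prediction.strip().lower() == reference.strip().lower() else 0.0

def evaluate(model_fn, dataset, metric):
    """Average a metric over a dataset of (input, expected) pairs."""
    scores = [metric(model_fn(inp), ref) for inp, ref in dataset]
    return sum(scores) / len(scores)

# Stand-in "model" so the sketch is runnable; replace with a real LLM call.
def toy_model(inp):
    return {"2+2": "4", "capital of France": "Paris"}.get(inp, "")

dataset = [("2+2", "4"), ("capital of France", "paris"), ("3*3", "9")]
print(evaluate(toy_model, dataset, exact_match))  # two of three correct
```

The point of the structure is that the dataset and metric are defined per use case, so the score actually reflects the real-world task rather than a generic benchmark.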
✨ Blog of the Week
I came across the below tweet by François Chollet a while ago, and it took me quite a bit to digest. Honestly, I’m still wrapping my head around it. Thankfully, he fleshed his idea out in a full blog post.
Below is my interpretation of what he’s saying, and what it means for people looking to build the prompt engineering skill.
LLMs are vast repositories of 'vector programs' learned from data. These programs operate in a high-dimensional embedding space where similar concepts are clustered. Prompts are keys to navigating this space, retrieving and executing specific vector programs. The art of crafting these prompts is akin to fine-tuning a search in this vast database to extract desired behaviours or outputs from the model.
Chollet presents the idea that LLMs can be considered vast databases of "vector programs."
These programs are complex functions learned from data, transforming and processing information in intricate ways. When interacting with an LLM using a prompt, we're querying this database to retrieve and execute a specific vector program.
Crafting effective prompts (prompt engineering) is basically querying this database for the most suitable program for a given task.
What do embeddings have to do with prompts?
Embeddings are representations of words or tokens in a high-dimensional vector space. In models like word2vec and LLMs, words or phrases with similar meanings or contexts are positioned close to each other in this space. When we provide a prompt to an LLM, we specify a location or direction in this embedding space. The prompt helps the model understand which part of its vast knowledge (or which vector program) it should access to generate a response.
How do prompts help us traverse the embedding space?
Prompts act as "keys" or "queries" to the LLM's database of vector programs. By adjusting the phrasing or content of a prompt, we can navigate to different regions of the embedding space, thereby accessing different vector programs. This is why slight changes in a prompt can yield different outputs: we're effectively exploring different parts of the model's knowledge or capabilities.
Refining prompts (prompt engineering) is like fine-tuning our search to find the most appropriate response or behaviour from the model.
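One way to make the "similar concepts sit close together" idea concrete is cosine similarity, the standard closeness measure in embedding spaces. The tiny hand-made 3-d vectors below are stand-ins for real embeddings, which have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings"; the values are made up to illustrate the geometry.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
banana = [0.1, 0.2, 0.95]

print(cosine_similarity(king, queen))   # high: related concepts point the same way
print(cosine_similarity(king, banana))  # much lower: unrelated concepts
```

Rephrasing a prompt shifts where you land in this space, which is why small wording changes can pull out noticeably different "vector programs."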
Chollet says prompt engineering will remain crucial, especially as LLMs become increasingly complex.
Crafting the right prompt to extract desired behaviours from LLMs is invaluable. However, he does hint at the possibility of automating the prompt engineering process, making it more user-friendly. While automation might simplify some aspects of the task, the expertise and intuition of prompt engineers will likely still be in demand, especially for complex or novel applications.
As LLMs evolve, prompt engineers can pivot to related roles, such as refining automated prompt systems or working on next-generation models.
🛠️ GitHub Gems
MentalLLaMA - Understanding mental health via LLMs
I came across this interesting project called MentalLLaMA that’s making mental health analysis more understandable using LLMs.
They've tested models like ChatGPT and GPT-4 to see how well they can explain mental health assessments without much training. They even created this dataset called IMHI with over 105K samples, the first for this kind of work on social media. From this, they developed MentaLLaMA, an open-source AI that follows instructions to analyze mental health on social platforms and then explains its findings.
They've also set up a comprehensive testing system with 19K samples to evaluate how well this AI works across various tasks. They've detailed all of this in two papers they've published.
📰 Industry Pulse
ℹ️ State of AI Report 2023 Highlights
The latest State of AI Report, curated by AI experts Nathan Benaich and the Air Street Capital team, is out.
Key takeaways include:
GPT-4's Superiority: This model outshines others, emphasizing the power of proprietary architectures.
AI's Real-world Impact: Significant advancements noted in life sciences, especially in drug discovery.
Compute's Rising Importance: NVIDIA's record earnings spotlight the growing significance of GPUs in the AI landscape.
GenAI's Financial Boost: AI startups focusing on generative applications secured over $18 billion from investors.
Safety in AI: The global debate intensifies, with regulatory actions emerging, but a unified approach remains elusive.
🔦 Google's AI-Powered Search Evolution
Generative AI Integration: Google Search uses generative AI to produce images based on specific queries. Imagine searching for a "capybara cooking breakfast" and getting AI-generated visuals!
Refinement Features: Users can tweak the AI's descriptions to get the perfect image. It's like having an AI artist at your fingertips.
Google Images Gets an Upgrade: This isn't just for Google Search. Google Images users can also harness this feature for inspiration-driven searches.
Responsible AI Use:
Every AI-generated image will carry metadata labels and watermarks. It's Google's way of ensuring transparency and responsible AI use.
More than Just Images:
Google isn't stopping at images. They're also introducing written drafts in SGE. Need help drafting a note or message? Google's got your back.
This text feature is especially handy for research or project-related searches.
🖼️ Addressing AI Image Misuse
Introducing "Stable Signature": FAIR and Inria have teamed up to launch an innovative watermarking technique for images generated by open-source AI models.
Invisible but Powerful: This watermark is a silent guardian. It's invisible to us, but algorithms can spot it, ensuring the authenticity of AI-generated images.
How It Works:
Embedded During Creation: Unlike traditional methods, this watermark is integrated during the image generation. This makes it sturdier than watermarks added after the fact.
Resilient to Edits: Share, edit, or modify the image; the watermark stands its ground, ensuring traceability to its AI origin.
The Bigger Picture:
Promoting Responsible AI: This isn't just about watermarking. It's a step towards ensuring the responsible use of generative AI and setting a standard for AI-generated content identification.
Beyond Images: While the focus is currently on images, the horizon looks promising. We might see this technique applied to other AI-generated content in the future.
🤖 Stability AI's New Release
Meet Stable LM 3B: Stability AI presents a compact language model with 3 billion parameters for on-the-go devices.
Size vs. Performance: Don't be fooled by its size. Stable LM 3B punches above its weight, outdoing many of its larger counterparts. It's a blend of efficiency and power.
Benefits and Features:
Eco-Friendly & Wallet-Friendly: Not only is it cost-effective, but its compact nature makes it a green choice in AI.
Versatile Applications: Whether penning a novel or coding the next big app, Stable LM 3B has your back.
Availability:
Open-Source Goodness: Stable LM 3B is up for grabs on the Hugging Face platform for those eager to dive in.
🔍 Research Refined
I found this paper while reading the comments on François Chollet’s tweet about prompt engineering.
The paper's abstract looked interesting, so I read it and tried to make connections between it and what Chollet was saying.
Both François Chollet's insights and the findings from the paper converge on the idea that the behaviours and capabilities of Transformer-based models can be understood in terms of their dynamics in an embedding space.
They both suggest that understanding these dynamics can provide deeper insights into how these models work and how they can be effectively utilized.
Embedding Space Dynamics: Chollet's description and the paper emphasize the importance of understanding models in terms of their movement or behaviour in an embedding (or latent) space. Chollet describes LLMs as repositories of vector programs that operate in this space, while the paper frames the dynamics of Transformers as movements through an embedding space.
Intelligence as Vector Dynamics: Chollet suggests that the intelligence of LLMs can be understood as the selection and execution of specific vector programs from the embedding space. The paper similarly posits that the intelligence and knowledge of Transformers are properties of the organization of vectors in this space.
No In-Context Learning During Decoding: Chollet's description implies that when we prompt an LLM, we're essentially fetching a specific vector program and running it. The paper supports this by stating that no learning occurs during the decoding phase, and the behaviours result from different contexts composing into different vectors.
Prompts as Keys: Chollet describes prompts as "keys" that fetch specific vector programs from the LLM. This aligns with the paper's perspective that the behaviour of Transformers can be understood by analyzing their trajectories in the embedding space in response to different inputs.
Knowledge Representation: Both Chollet and the paper emphasize that the knowledge and capabilities of these models are not localized to specific neurons or layers. Instead, they are distributed properties of the embedding space.
There was another paper mentioned in the comments, Large Language Models are Human-Level Prompt Engineers, that I skimmed as well.
Both François Chollet and the paper emphasize the role of prompts in directing LLMs. Chollet views LLMs as vast repositories of vector programs accessed via prompts, while the paper introduces an automated method (APE) to optimize these prompts. Both sources highlight the potential of LLMs as computational tools and suggest a future where prompt engineering becomes more automated.
Here's how they connect:
Prompt Engineering: Chollet discusses the concept of prompt engineering as a search over many keys to find a program that is empirically more accurate for a specific task. The paper further delves into this idea by introducing an Automatic Prompt Engineer (APE) that automates the instruction generation and selection process.
LLMs as General-Purpose Computing Tools: Chollet's perspective on LLMs is that they can be seen as general-purpose computing tools that store many programs. The paper aligns with this view by treating instructions as "programs" and optimizing them to guide the LLM's behaviour.
Automation of Prompt Engineering: Chollet predicts prompt engineering will likely get automated, so end-users don't have to deal with it directly. The paper's introduction of APE is a step in this direction, offering an automated method to generate and select high-quality instructions.
Anthropomorphism: Chollet warns against unnecessary anthropomorphism when interacting with LLMs, emphasizing that they don't understand language like humans do. The paper, by treating instructions as programs and leveraging LLMs for program synthesis, also moves away from anthropomorphic views and focuses on the computational capabilities of the models.
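To give a feel for the APE idea without the LLM machinery, here’s a toy version: score a few candidate instructions on a small labelled set and keep the best one. The candidates, dataset, and stand-in "model" are all made up; in the actual paper, an LLM both proposes and executes the instructions:

```python
# Toy sketch of the APE loop: evaluate candidate instructions on labelled
# examples and select the highest-scoring one. Everything here is a stand-in.

def score_instruction(instruction, dataset, model_fn):
    """Fraction of examples the model gets right under this instruction."""
    hits = sum(1 for inp, ref in dataset if model_fn(instruction, inp) == ref)
    return hits / len(dataset)

def toy_model(instruction, word):
    # Stand-in model: it only "follows" instructions that mention antonyms.
    antonyms = {"hot": "cold", "up": "down"}
    return antonyms.get(word, "?") if "antonym" in instruction else word

candidates = ["Repeat the word.", "Give the antonym of the word."]
dataset = [("hot", "cold"), ("up", "down")]

best = max(candidates, key=lambda c: score_instruction(c, dataset, toy_model))
print(best)  # the antonym instruction wins on this dataset
```

The real method searches over LLM-generated candidates rather than a hand-written list, but the select-by-score loop is the same shape.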
💡 My Two Cents
Neil generously gave me a shoutout for my LangChain content, and it caught the attention of Harrison Chase, the CEO of LangChain.
If Harrison mentions me (awesome!) and Greg Kamradt in the same sentence… then maybe I’m on my way to making it?
Creating content can sometimes feel like screaming into the digital abyss, so this bit of validation, along with your support, helps add fuel to the fire!
That’s it for this one.
See you next week, and if there’s anything you want me to cover or have any feedback, shoot me an email.
Cheers,
Harpreet