Tuning Local LLMs With RAG Using Ollama and Langchain

import os
from datetime import datetime
from werkzeug.utils import secure_filename
from langchain_community.document_loaders import UnstructuredPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from get_vector_db import get_vector_db

TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp')

def allowed_file(filename):
    return filename.lower().endswith('.pdf')
…
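The snippet above splits uploaded PDFs into chunks before embedding them. As a rough illustration of what a splitter like RecursiveCharacterTextSplitter does, here is a simplified, dependency-free sliding-window sketch (the real LangChain splitter additionally recurses over separators such as paragraphs and sentences; the chunk sizes here are arbitrary examples):

```python
def split_text(text, chunk_size=100, chunk_overlap=20):
    # Naive fixed-size splitter with overlap, mimicking the core idea
    # behind chunking for RAG: overlapping windows so context at chunk
    # boundaries is not lost. Simplified sketch only.
    chunks = []
    step = chunk_size - chunk_overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk would then be embedded and stored in the vector database returned by get_vector_db.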

Setting Up Ollama With Docker


docker ps

Method 2: Running Ollama with Docker Compose

Ollama exposes an API on http://localhost:11434, allowing other tools to connect and interact with it. That was when I got hooked…
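To give a sense of how a client talks to that API, here is a minimal sketch that builds a request for Ollama's /api/generate endpoint. Actually sending it requires a running Ollama container, and the model name "llama3" is an assumption for illustration:

```python
import json

OLLAMA_URL = "http://localhost:11434"  # Ollama's default API address

def build_generate_request(model, prompt, stream=False):
    # Construct the URL and JSON body for Ollama's /api/generate
    # endpoint. Sketch only: posting it (with requests or urllib)
    # needs a running Ollama instance that has pulled the model.
    return {
        "url": f"{OLLAMA_URL}/api/generate",
        "body": json.dumps({"model": model, "prompt": prompt, "stream": stream}),
    }
```

Any HTTP client, including other tools on the same machine, can post such a body to the endpoint and read back the generated text.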

I Ran the Famed SmolLM on Raspberry Pi


Their compact nature makes them well-suited for various applications, particularly in scenarios where local processing is crucial. As the industry shifts towards local deployment of AI technologies, the advantages of…

What is Hugging Face?


There are 900,000+ models on the platform, and you can use any of them on your system by following its usage instructions and license requirements. A type of deep-learning…