I like big bots and I cannot lie.
You other devs can't deny.
Sir Chats-A-Lot is an open-source AI chatbot that actually reads your website. When a visitor walks in with an itty bitty query, you won't get hallucinations. Powered by Gnosys persistent memory. No database. No vector store. Just markdown and an LLM that knows what you've published.
My AI don't want none unless you got docs, hon.
Static prompt stuffing
You paste your entire site into a system prompt. 8,000 tokens every message. The LLM gets confused. Your bill gets big. You add a blog post and forget to update the prompt. The chatbot confidently tells visitors about services you retired six months ago.
RAG with a vector database
Now you need Pinecone or Weaviate or Chroma. Embeddings pipeline. Ingestion jobs. A database that costs money when nobody's chatting and cold-starts when someone finally does. You wanted a chatbot, not a data engineering project.
"AI-powered" widget services
$49/month for a chat bubble that hallucinates your pricing and can't tell visitors you're hiring. You don't control the model, the context, or the answers. When it's wrong, it's confidently wrong with your brand on it.
What if your chatbot just read your site like a person would?
Sir Chats-A-Lot will use Gnosys to crawl your website at build time and convert every page into a tagged, searchable markdown file. A lightweight search index deploys with your app as a static JSON file. When someone asks a question, the chatbot searches the index, pulls only the relevant pages, and hands focused context to the LLM. No database. No embeddings pipeline. No external service. The knowledge lives in your repo and deploys with your code.
New blog post? Push and it's in the knowledge base. Changed your pricing page? Next deploy picks it up. Removed a job listing? It's gone from the chatbot too. The site and the chatbot are always in sync because they ship together.
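The query-time flow described above is simple enough to sketch in a few lines of Node. This is an illustrative toy, not the actual Gnosys implementation; the index entry shape (`path`, `title`, `keywords`) is an assumption:

```javascript
// Hypothetical static search index, deployed as a JSON file with the app.
const searchIndex = [
  { path: "knowledge/pricing.md", title: "Pricing", keywords: ["pricing", "plans", "cost"] },
  { path: "knowledge/careers.md", title: "Careers", keywords: ["jobs", "hiring", "roles"] },
];

// Score each page by keyword overlap with the visitor's question,
// then return only the top matches to read and hand to the LLM.
function findRelevantPages(question, index, limit = 3) {
  const words = question.toLowerCase().split(/\W+/);
  return index
    .map((page) => ({
      page,
      score: page.keywords.filter((k) => words.includes(k)).length,
    }))
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((r) => r.page);
}

const hits = findRelevantPages("What does your pricing cost?", searchIndex);
// Only the matching markdown files get loaded into the prompt.
```

The point of the sketch: the "database" is a plain array in a JSON file, and lookup is a scan-and-score. No server, no cold start.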
Install. Build. Deploy. That's the plan.
$ npm install -g gnosys
$ gnosys web init
Scaffolds the knowledge directory and config. Point it at your sitemap or content folder.
$ gnosys web build
Crawls your site, converts every page to a structured markdown memory with tags, categories, and relevance keywords. Generates the search index. Works with any LLM provider for enrichment, or runs without one using TF-IDF keyword extraction.
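The provider-free fallback mentioned above is classic TF-IDF: words that are frequent in one page but rare across the site make good relevance keywords. A toy version, for intuition only (not the Gnosys implementation):

```javascript
// Toy TF-IDF keyword extraction: score = term frequency in this doc
// times inverse document frequency across the corpus.
function tfidfKeywords(doc, corpus, topN = 3) {
  const tokenize = (t) => t.toLowerCase().match(/[a-z]+/g) || [];
  const words = tokenize(doc);
  const tf = {};
  for (const w of words) tf[w] = (tf[w] || 0) + 1;
  const score = (w) => {
    // How many documents contain this word?
    const df = corpus.filter((d) => tokenize(d).includes(w)).length;
    const idf = Math.log((1 + corpus.length) / (1 + df));
    return (tf[w] / words.length) * idf;
  };
  return [...new Set(words)].sort((a, b) => score(b) - score(a)).slice(0, topN);
}

const pages = [
  "our pricing plans start at zero dollars",
  "we are hiring senior engineers",
  "our engineers love open source",
];
const kw = tfidfKeywords(pages[0], pages);
// Words like "our" rank low (they appear everywhere); "pricing" ranks high.
```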
"postbuild": "gnosys web build"
Add to package.json. Deploy. Every push rebuilds the knowledge base automatically. The chatbot is always current.
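In context, that hook might look like this (the `next build` command is just a placeholder for whatever your framework's build step is):

```json
{
  "scripts": {
    "build": "next build",
    "postbuild": "gnosys web build"
  }
}
```

npm runs `postbuild` automatically after `build`, so the knowledge base regenerates on every deploy with no extra CI config.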
Four modes. One widget. No forms.
Ask Mode
Visitors ask anything about your business. Sir Chats-A-Lot searches the knowledge base, pulls the relevant context, and gives answers grounded in what's actually on your site. Not what an LLM thinks your site says. What it actually says.
Contact Mode
Click "Contact Us" and the chatbot opens a guided conversation. Collects name, email, and inquiry details through natural dialogue. Asks smart follow-ups based on your services. Submits to your CRM when the visitor is ready. No dead-end forms.
Careers Mode
Lists your current open roles pulled from the knowledge base, updated every deploy. Asks which role interests the visitor. Collects their information. If nothing's open, offers to keep them on file. Always accurate because the jobs list ships with the code.
Assessment Mode
A guided multi-turn conversation. Questions about strategy, data readiness, infrastructure, governance, skills. Scores each category. Generates a personalized summary with recommendations pulled from your services content. Ends with a call-to-action. Way more engaging than a Google Form.
Dial 1-900-MIX-A-BOT. No black boxes. No bill.
Zero database
The knowledge base is markdown files committed to your repo. The search index is a JSON file. Both deploy with your app. No Postgres, no Redis, no cold starts, no free-tier compute limits, no suspend-after-5-minutes surprises.
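A knowledge entry might look something like this. The exact field names are an assumption for illustration, not the precise Gnosys schema; the source description mentions tags, categories, and relevance keywords:

```markdown
---
title: Pricing
category: services
tags: [pricing, plans, free]
source: https://example.com/pricing
---

All plans are free and open source under the MIT license.
```

Because it's plain markdown in your repo, you can diff it, review it in PRs, and grep it like any other file.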
Any LLM
Chat responses will work with Claude, GPT, Grok, Gemini, or local models through Ollama. Gnosys knowledge enrichment supports the same providers. Not locked to any vendor.
Token efficient
Static prompt stuffing burns 4-8k tokens every message. Sir Chats-A-Lot injects 1-2k tokens of relevant context. Better answers, lower cost, faster responses.
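The difference is just what gets concatenated into the prompt. A minimal sketch of focused context injection (hypothetical page shape and prompt wording):

```javascript
// Build a prompt from only the top-ranked pages, instead of the whole site.
function buildPrompt(question, relevantPages) {
  const context = relevantPages
    .map((p) => `## ${p.title}\n${p.content}`)
    .join("\n\n");
  return [
    "Answer using only the context below.",
    "--- CONTEXT ---",
    context,
    "--- QUESTION ---",
    question,
  ].join("\n");
}

const prompt = buildPrompt("Do you offer a free tier?", [
  { title: "Pricing", content: "All plans are free. MIT licensed." },
]);
// A ~1-2k token prompt instead of an 8k site dump.
```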
GEO-ready
The /knowledge/ directory doubles as structured content for AI crawlers. Reference it from llms.txt and search engines powered by LLMs get clean, tagged information about your business. Your chatbot's brain is also your discoverability strategy.
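One way to reference it, following the proposed llms.txt convention of a title, a short summary blockquote, and link lists (domain and paths here are illustrative):

```
# Proticom

> Open-source AI tooling. Structured knowledge lives in /knowledge/.

## Knowledge
- [Services](https://example.com/knowledge/services.md)
- [Pricing](https://example.com/knowledge/pricing.md)
```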
Build-time freshness
Every deploy regenerates the knowledge base via postbuild hook. Blog posts, service pages, job listings, product updates. If it's on your site and you pushed it, the chatbot knows about it.
Open source. MIT.
No premium tier. No feature gating. No "contact sales for enterprise." Fork it, modify it, ship it. Built by Proticom. Powered by Gnosys.
The memory layer that makes it work.
Gnosys is an open-source persistent memory system for AI agents. 47+ MCP tools, federated search, Dream Mode consolidation, and a dual-write architecture that keeps everything as human-readable markdown. Sir Chats-A-Lot uses Gnosys's web knowledge base feature to crawl, structure, index, and search site content at build time. The same tool developers use to give their AI coding agents persistent memory, now powering your website's chatbot.
[Learn more about Gnosys → gnosys.ai]
Your site has the answers.
Let your chatbot kick 'em.
Coming soon. Open source. Free forever.