The ability to interact with your own documents is one of the most powerful features of Tess AI, letting the AI understand and use information specific to your context. AI Copilot for Chat, the platform's chat tool, lets you add files to your knowledge base, turning them into rich sources for more accurate and personalized answers.
This article covers how to add, set up, and get the most out of documents in AI Copilot, walking you through the supported file types and the processing modes that help you get the best results in each situation.
AI Copilot is your chat space in Tess AI, bringing together different AI models in one easy place. If you want to start working with documents, the first thing to do is open up this tool.
Step-by-Step to Add Documents:
In the Tess AI dashboard, go to the AI Copilot for Chat.
Find the "Add to Knowledge Base" button, shown as a paperclip icon, next to the chat typing area.
When you click this button, a window will pop up that lets you choose the type of document you want to add.
Tess AI supports a wide range of file formats, including:
Audio (e.g. MP3, WAV)
CSV (Comma Separated Values)
Code (various languages)
DOCX (Microsoft Word)
Excel (XLSX)
Google Sheets (via URL)
Image (e.g. JPG, PNG)
PDF (Portable Document Format)
TXT (Plain Text)
Web Scraper (Extracting content from URLs)
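To give a rough idea of what that last option does, here's a minimal sketch of URL text extraction in Python, assuming the requests and BeautifulSoup libraries and a made-up URL; it only illustrates the general concept, not how Tess AI's scraper is actually built:

    # Illustrative only: fetch a page and keep just its readable text,
    # which is roughly what a "Web Scraper" source does with a URL.
    import requests
    from bs4 import BeautifulSoup

    def scrape_page_text(url: str) -> str:
        html = requests.get(url, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        for tag in soup(["script", "style"]):  # drop non-content elements
            tag.decompose()
        return soup.get_text(separator="\n", strip=True)

    page_text = scrape_page_text("https://example.com/article")  # hypothetical URL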
When you add a document, Tess AI uses specialized tools to extract the text inside it. For example, audio files go through a transcription tool, while PDFs and DOCX files go through text extraction tools. This initial extraction may use a small amount of credits, even with unlimited AI models, since extraction itself isn't an AI function but a preparatory step. Once extracted, the text is stored and available for the AI to consult, with no extra extraction cost on each query.
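As a rough illustration of that preparatory step, the sketch below shows what text extraction from a PDF can look like in Python, assuming the open-source pypdf library and a local file name; Tess AI's internal extraction tools aren't documented here, so treat this as a conceptual stand-in:

    # Illustrative only: the "extract once, query many times" idea.
    from pypdf import PdfReader

    def extract_pdf_text(path: str) -> str:
        """Pull the raw text out of every page of a PDF."""
        reader = PdfReader(path)
        return "\n".join(page.extract_text() or "" for page in reader.pages)

    # Extraction happens a single time; afterwards the stored text can be
    # consulted by the AI on every question without re-processing the file.
    document_text = extract_pdf_text("Dom Casmurro.pdf")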
Each file type has particular settings that show up after you pick it. For example:
Excel (XLSX): You'll need to pick the file to upload and set the "Range", that is, the sheet and the range of cells the AI should read (e.g. "Sheet1!A1:Z100"); see the sketch after this list. It's not possible to read all sheets at once.
Audio: Besides the file, you can pick the transcription AI you want (e.g. Deepgram, OpenAI Audio, AssemblyAI) and the audio language.
Google Sheets: You add it through the sheet's URL, which has to have public access or be shared with the Tess AI user.
Most settings have an information icon ("i") next to them, which gives you a short explanation of what it does.
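For the Excel case mentioned above, here's a minimal sketch of how a "Range" value narrows what gets read, assuming the openpyxl library and a made-up file name; it just mirrors the sheet-plus-cell-range idea, not Tess AI's actual reader:

    # Illustrative only: a "Range" like "Sheet1!A1:Z100" means one sheet and
    # one rectangle of cells, never the whole workbook at once.
    from openpyxl import load_workbook

    def read_range(path: str, range_spec: str) -> list:
        sheet_name, cell_range = range_spec.split("!")
        worksheet = load_workbook(path)[sheet_name]
        return [[cell.value for cell in row] for row in worksheet[cell_range]]

    rows = read_range("sales_data.xlsx", "Sheet1!A1:Z100")  # hypothetical file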
One of the most important settings in Tess AI when working with documents is the "Context Mode". This option, usually not available on other AI platforms, lets you decide how the AI will process and interact with your document’s content. Before breaking down the modes, it’s key to understand what a "Context Window" is.
Understanding the Context Window: Every AI has a "context window", which is the maximum amount of info (measured in "tokens"—little pieces of text, not necessarily characters or whole words) it can handle and "remember" at once during a conversation or analysis. If the info goes beyond this window, the AI might start to "forget" the older parts. Newer AI models usually have bigger context windows. Tess shows this info for each available model.
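To make the token idea concrete, here's a small sketch that counts tokens and compares the total against an example window size, assuming the tiktoken library (used by OpenAI-style models; other model families tokenize differently) and a hypothetical plain-text file:

    # Illustrative only: counting tokens to see whether a document fits a
    # model's context window. The encoding and window size are examples.
    import tiktoken

    encoding = tiktoken.get_encoding("cl100k_base")
    text = open("document.txt", encoding="utf-8").read()  # hypothetical file
    token_count = len(encoding.encode(text))

    context_window = 128_000  # check the window of the model you actually use
    print(f"{token_count} tokens -> fits in one shot: {token_count < context_window}")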
There are two main context modes in Tess AI:
Deep Learning Mode (Short Content):
How it works: This mode grabs all the text from the document and puts it straight into the AI's context window for each query. It's as if you copy-pasted the document's entire contents into the chat every time you ask something (see the sketch after this mode's description).
Pros: The AI has access to all the document’s content for each answer, which helps a lot for holistic understanding of shorter texts.
Cons:
If the document is too long, it might go over the AI’s context window, so only a part of the document gets analyzed.
For long documents, even if they fit in the window, the AI might “lose track” or struggle to keep the focus exactly on your specific request.
The AI re-reads the whole document every time you interact, which can be less efficient.
When to use: Perfect for short docs (roughly up to 3-5 pages, depending on text density and the context window of the AI you're using). Great for memos, short articles, or specific spreadsheet sections.
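As a rough sketch of what Deep Learning mode amounts to (the function and file names here are invented for illustration), the full document text travels inside the prompt on every single question:

    # Illustrative only: "Deep Learning" mode behaves roughly like pasting the
    # entire document into the prompt for each query.
    def build_full_context_prompt(document_text: str, question: str) -> str:
        return (
            "Use the document below to answer the question.\n\n"
            f"--- DOCUMENT ---\n{document_text}\n--- END DOCUMENT ---\n\n"
            f"Question: {question}"
        )

    document_text = open("short_memo.txt", encoding="utf-8").read()  # hypothetical
    # If the document is longer than the model's context window, the part that
    # doesn't fit is simply invisible to the AI for that answer.
    prompt = build_full_context_prompt(document_text, "Summarize the key points.")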
RAG Mode (Retrieval Augmented Generation) (Long Content):
How it works: RAG is an advanced technique. First, the document gets processed once: its text is split into smaller chunks and "vectorized" (turned into numeric representations that capture semantic meaning). These vectors are stored in a special database, serving as a smart index. When you ask something, the AI searches this index for the most relevant bits (vectors) for your question and uses only those parts to build the answer (there's a minimal sketch of this after the practical tip below).
Pros:
Super efficient for long docs (manuals, books, huge knowledge bases).
The AI only focuses on relevant parts, giving you quicker and more accurate answers to your specific questions.
Lower chance of bumping into the AI’s context window limit, since only relevant parts are loaded.
Cons: Answer quality depends on how well the retrieval step finds the right chunks, and the way the document is laid out can influence that. If key info sits too far, semantically or just physically, from the passages the AI retrieves, it might get missed.
When to use: The ideal pick for big documents, like technical manuals, company knowledge bases, books, long reports.
Practical Tip about RAG and Document Structure: The effectiveness of RAG mode depends a lot on how your document is structured. For example, in a vaccination manual, if the heading "Pregnant Women" is too far from the list of applicable vaccines, the AI might have trouble linking a specific vaccine (like "Flu Vaccine") to the group "Pregnant Women". In these cases, it might help to adjust the document so that key info (like the target audience) sits closer to the detailed info (like the vaccine name and its details).
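To make the retrieval idea concrete, here's a minimal RAG-style sketch, assuming the sentence-transformers library, an arbitrary embedding model, a fixed chunk size, and a hypothetical plain-text copy of the book; Tess AI's real pipeline is more sophisticated, but the shape is the same: chunk, vectorize once, then fetch only the most relevant chunks per question.

    # Illustrative only: chunk the document, embed the chunks once, then
    # retrieve just the top matches for a given question.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

    document_text = open("Dom Casmurro.txt", encoding="utf-8").read()  # hypothetical
    chunks = [document_text[i:i + 1000] for i in range(0, len(document_text), 1000)]

    chunk_vectors = model.encode(chunks, convert_to_tensor=True)  # the "smart index"

    question = "What are the book's publication details?"
    question_vector = model.encode(question, convert_to_tensor=True)

    # Only the top 3 most similar chunks go to the AI, not the whole book.
    scores = util.cos_sim(question_vector, chunk_vectors)[0]
    top_chunks = [chunks[int(i)] for i in scores.argsort(descending=True)[:3]]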
Let's walk through analyzing a PDF book, like "Dom Casmurro", using RAG mode, which is great for long documents.
Add the PDF:
Click on "Add to Knowledge Base".
Select "PDF".
Upload the file "Dom Casmurro.pdf".
Pick the "Extraction Mode": "Standard Text Processing" (if the PDF is text only) or "Image & Text Processing" (if it has images with relevant text).
Select the "Context Mode": RAG Mode (Long Content).
Select the AI Model:
You can choose a specific model (e.g., Claude, Gemini, ChatGPT), go with "Tess 5," which automatically picks a suitable unlimited model, or go with "Tess 5 Pro," which, for more complex tasks, can choose from all models, including the ones that use credits. For a first analysis of a well-known book, "Tess 5" (unlimited) is usually enough.
Asking Specific Questions:
To make sure the AI uses the document, be explicit in your prompt: "Based on the PDF 'Dom Casmurro.pdf' in your knowledge base, what are the book's publication details, like publisher, volume, and release year?"
The AI, using RAG mode, will look in the document index for bits about "publication," "publisher," "volume," and "year" and give you the answer.
Requesting Summaries of Specific Parts:
"Based on the PDF 'Dom Casmurro.pdf', create a summary of chapter 5."
The AI will find the section that matches chapter 5 and summarize it.
Be Specific in Prompts: When interacting with documents, especially if there are several in the knowledge base, mention the file name in your request (e.g., "In the document 'RelatorioAnual.pdf', what was...").
Think About the Document Structure (for RAG): Clear titles, related information close together, and good text organization help RAG mode find the most relevant parts.
Try Out Different AI Models: If an answer isn’t what you want, try switching the AI model. Some are better at text analysis, others at data. "Tess 5 Pro" can help you pick a more robust model if you need it.
Start with Unlimited Models: For most daily tasks, "Tess 5" (which uses unlimited models) is a great pick. If you need more power or accuracy for something specific, you can switch to a more advanced model for that interaction.
Limit for Deep Learning: As a general rule, think about using Deep Learning mode for documents up to about 10-20 pages. For bigger volumes, RAG is usually a better fit.
Check the Knowledge Base: You can view and manage the added documents by clicking the option to see the knowledge base. [IMAGE HERE - Screen showing the document list in the Knowledge Base]
The same document processing principles (Deep Learning and RAG) apply when you use documents to train or give context to AI Agents in AI Studio. For example, when you add documents to an agent's knowledge base, it'll typically use RAG-like logic. If you use an "Extract Document Content" step within an agent's flow, it'll act more like Deep Learning, putting the content into that step's specific context window. This is a big topic that can be explored in more detail.
The feature that lets you add documents to the AI Copilot knowledge base in Tess AI totally changes how you interact with artificial intelligence. By understanding which file types are supported, the ins and outs of the web scraper, and—most importantly—the difference and use cases for Deep Learning and RAG context modes, you can pull out valuable info, get better and more personalized answers, and really make the most of the knowledge in your own files. Try it out, tweak the settings, and discover the endless potential of having an AI that really gets your world.