From ec71ccc6b2547f57c0f322fbc9e67adc298a8e49 Mon Sep 17 00:00:00 2001 From: Sarthak Karandikar Date: Mon, 12 May 2025 15:19:25 +0530 Subject: [PATCH 01/10] chore: update readme --- README.md | 339 +++++++++++++++++++++--------------------------------- 1 file changed, 131 insertions(+), 208 deletions(-) diff --git a/README.md b/README.md index 76d5a22c..ffc96185 100644 --- a/README.md +++ b/README.md @@ -49,10 +49,8 @@ - [Features](#dart-features) - [Roadmap](#compass-roadmap) - [Getting Started](#toolbox-getting-started) - - [Prerequisites](#bangbang-prerequisites-contributors) - - [Installation](#gear-installation-users) - - [Environment Variables](#-environment-variables-contributors) - - [Run Locally](#running-run-locally-contributors) + - [For Contributors (default branch)](#for-contributors-default-branch) + - [For Self-Hosting (self-host branch)](#for-self-hosting-self-host-branch) - [Usage](#eyes-usage) - [Contributing](#wave-contributing) - [Code of Conduct](#scroll-code-of-conduct) @@ -161,193 +159,145 @@ We at [Existence](https://existence.technology) believe that AI won't simply die ## :toolbox: Getting Started - +Choose your path below. +- **Contributors**: Follow along in the **default** branch. +- **Self-Hosters**: Switch to the **self-host** branch to get the self-hostable version with identical instructions. -### :gear: Installation (Users) +--- -If you're not interested in contributing to the project or self-hosting and simply want to use Sentient, join the [Early Adopters Group](https://chat.whatsapp.com/IOHxuf2W8cKEuyZrMo8DOJ). You can also join our paid waitlist for $3 - to do this, contact [@itsskofficial](https://github.com/itsskofficial). Users on the paid waitlist will be the first to get access to the full cloud version of Sentient via a closed beta. 
+### For Contributors (default branch) -If you are interested in contributing to the app or simply running the current latest version from source, you can proceed with the following steps 👇 +If you're here to **contribute** to Sentient—adding features, fixing bugs, improving docs—follow these steps on the **default** branch. - +1. **Clone the repo** + ```bash + git clone https://github.com/existence-master/Sentient.git + cd Sentient + ``` -### :bangbang: Prerequisites (Contributors) +2. **Prerequisites** -#### The following instructions are for Linux-based machines, but they remain fundamentally the same for Windows & Mac. Only things like venv configs and activations change on Windows, the rest of the process is pretty much the same. + * **Node.js & npm** + Install from [nodejs.org](https://nodejs.org/en/download). + * **Python 3.11+** + Install from [python.org](https://www.python.org/downloads/). + * **Ollama** (for text models) + Install from [ollama.com](https://ollama.com/). + * **Neo4j Community Edition** (for the knowledge graph) + Download from [neo4j.com](https://neo4j.com/deployment-center/). -Clone the project +3. **Frontend setup** -```bash - git clone https://github.com/existence-master/Sentient.git -``` + ```bash + cd src/client + npm install + ``` -Go to the project directory +4. **Backend setup** -```bash - cd Sentient -``` + ```bash + cd src/server + python3 -m venv venv + source venv/bin/activate + pip install -r requirements.txt + ``` -Install the following to start contributing to Sentient: + > ⚠️ If you encounter numpy errors, first install with the latest numpy (2.x), then downgrade to 1.26.4. -- npm: The ElectronJS frontend of the Sentient desktop app uses npm as its package manager. +5. **Ollama models** - Install the latest version of NodeJS and npm from [here.](https://nodejs.org/en/download) - - After that, install all the required packages. 
- - ```bash - cd ./src/client && npm install - ``` - -- python: Python will be needed to run the backend. - Install Python [from here.](https://www.python.org/downloads/) We recommend Python 3.11. - - After that, you will need to create a virtual environment and install all required packages. This venv will need to be activated whenever you want to run the Python server (backend). - - ```bash - cd src/server && python3 -m venv venv - cd venv/bin && source activate - cd ../../ && pip install -r requirements.txt - ``` - - `⚠️ If you get a numpy dependency error while installing the requirements, first install the requirements with the latest numpy version (2.x). After the installation of requirements completes, install a numpy 1.x version (backend has been tested and works successfully on numpy 1.26.4) and you will be ready to go. This is probably not the best practise, but this works for now.` - - `⚠️ If you intend to use Advanced Voice Mode, you MUST download and install llama-cpp-python with CUDA support (if you have an NVIDIA GPU) using the commented out pip command in the requirements.txt file. Otherwise, simply download and install the llama-cpp-python package with pip for simple CPU-only support. This line is commented out in the requirements file to allow users to download and install the appropriate version based on their preference (CPU only/GPU accelerated).` - -- Ollama: Download and install the latest version of Ollama [from here.](https://ollama.com/) - - After that, pull the model you wish to use from Ollama. For example, - - ```bash + ```bash ollama pull llama3.2:3b - ``` - - `⚠️ By default, the backend is configured with Llama 3.2 3B. We found this SLM to be really versatile and works really well for our usage, as compared to other SLMs. However a lot of new SLMs like Cogito are being dropped everyday so we will probably be changing the model soon. 
If you wish to use a different model, simply find all the places where llama3.2:3b has been set in the Python backend scripts and change it to the tag of the model you have pulled from Ollama.` - -- Neo4j Community: Download Neo4j Community Edition [from here.](https://neo4j.com/deployment-center/) - - Next, you will need to enable the APOC plugin. - After extracting Neo4j Community Edition, navigate to the labs folder. Copy the `apoc-x.x.x-core.jar` script to the plugins folder in the Neo4j folder. - Edit the neo4j.conf file to allow the use of APOC procedures: - - ```bash - sudo nano /etc/neo4j/neo4j.conf - ``` - - Uncomment or add the following lines: - - ```ini - dbms.security.procedures.unrestricted=apoc.* - dbms.security.procedures.allowlist=apoc.* - dbms.unmanaged_extension_classes=apoc.export=/apoc - ``` - - You can run Neo4j community using the following commands + ``` - ```bash - cd neo4j/bin && ./neo4j console - ``` + *Tip*: To use another model, update `BASE_MODEL_REPO_ID` in your `.env` accordingly. - While Neo4j is running, you can visit `http://localhost:7474/` to run Cypher Queries and interact with your knowledge graph. +6. **Neo4j APOC plugin** - `⚠️ On your first run of Neo4j Community, you will need to set a username and password. **Remember this password** as you will need to add it to the .env file on the Python backend.` + * Copy `apoc-x.x.x-core.jar` into `neo4j/plugins`. + * In `neo4j/conf/neo4j.conf`: -- Download the Voice Model (Orpheus TTS 3B) + ```ini + dbms.security.procedures.unrestricted=apoc.* + dbms.security.procedures.allowlist=apoc.* + dbms.unmanaged_extension_classes=apoc.export=/apoc + ``` - For using Advanced Voice Mode, you need to manually download [this model](https://huggingface.co/isaiahbjork/orpheus-3b-0.1-ft-Q4_K_M-GGUF) from Huggingface. Whisper is automatically downloaded by Sentient via fasterwhisper. +7. **Environment variables** - The model linked above is a Q4 quantization of the Orpheus 3B model. 
If you have even more VRAM at your disposal, you can go for the [Q8 quant](https://huggingface.co/Mungert/orpheus-3b-0.1-ft-GGUF). + * **Frontend**: create `src/interface/.env`: - Download the GGUF files - these models are run using llama-cpp-python. + ```env + ELECTRON_APP_URL="http://localhost:3000" + APP_SERVER_URL="http://127.0.0.1:5000" + NEO4J_SERVER_URL="http://localhost:7474" + BASE_MODEL_REPO_ID="llama3.2:3b" + AUTH0_DOMAIN="your-auth0-domain" + AUTH0_CLIENT_ID="your-auth0-client-id" + ``` + * **Backend**: create `src/server/.env`: - Place the model files here: - `src/server/voice/models` + ```env + NEO4J_URI=bolt://localhost:7687 + NEO4J_USERNAME=neo4j + NEO4J_PASSWORD=your-password + EMBEDDING_MODEL_REPO_ID=sentence-transformers/all-MiniLM-L6-v2 + BASE_MODEL_URL=http://localhost:11434/api/chat + BASE_MODEL_REPO_ID=llama3.2:3b + GOOGLE_CLIENT_ID=… + GOOGLE_CLIENT_SECRET=… + BRAVE_SUBSCRIPTION_TOKEN=… + AES_SECRET_KEY=… + AES_IV=… + ``` - and ensure that the correct model name is set in the Python scripts on the backend. By default, the app is configured to use the 8-bit quant using the same name that it has when you download it from HuggingFace. + > ⚠️ If you need example keys, comment in the [“Request Environment Variables (.env) Here”](https://github.com/existence-master/Sentient/discussions/13) discussion. - `⚠️ If you do not have enough VRAM and voice mode is not that important to you, you can comment out/remove the voice mode loading functionality in the main app.py located at src/server/app/app.py` +8. **Run everything** - + * Start Neo4j: -### 🔒: Environment Variables (Contributors) + ```bash + cd neo4j/bin && ./neo4j console + ``` + * Start backend: -You will need the following environment variables to run the project locally. 
For sensitive keys like Auth0, GCP, Brave Search you can create your own accounts and populate your own keys or comment in the discussion titled ['Request Environment Variables (.env) Here'](https://github.com/existence-master/Sentient/discussions/13) if you want pre-setup keys + ```bash + cd src/server + source venv/bin/activate + python -m server.app.app + ``` + * Start Electron client: -For the Electron Frontend, you will need to create a `.env` file in the `src/interface` folder. Populate that `.env` file with the following variables (examples given). + ```bash + cd src/client + npm run dev + ``` -```.emv.template - ELECTRON_APP_URL= "http://localhost:3000" - APP_SERVER_URL= "http://127.0.0.1:5000" - APP_SERVER_LOADED= "false" - APP_SERVER_INITIATED= "false" - NEO4J_SERVER_URL= "http://localhost:7474" - NEO4J_SERVER_STARTED= "false" - BASE_MODEL_REPO_ID= "llama3.2:3b" - AUTH0_DOMAIN = "abcdxyz.us.auth0.com" - AUTH0_CLIENT_ID = "abcd1234" -``` +--- -For the Python Backend, you will need to create a `.env` file and place it in the `src/model` folder. Populate that `.env` file with the following variables (examples given). 
+### For Self-Hosting (self-host branch) -```.emv.template - NEO4J_URI=bolt://localhost:7687 - NEO4J_USERNAME=neo4j - NEO4J_PASSWORD=abcd1234 - EMBEDDING_MODEL_REPO_ID=sentence-transformers/all-MiniLM-L6-v2 - BASE_MODEL_URL=http://localhost:11434/api/chat - BASE_MODEL_REPO_ID=llama3.2:3b - LINKEDIN_USERNAME=email@address.com - LINKEDIN_PASSWORD=password123 - BRAVE_SUBSCRIPTION_TOKEN=YOUR_TOKEN_HERE - BRAVE_BASE_URL=https://api.search.brave.com/res/v1/web/search - GOOGLE_CLIENT_ID=YOUR_GOOGLE_CLIENT_ID_HERE - GOOGLE_PROJECT_ID=YOUR_PROJECT_ID - GOOGLE_AUTH_URI=https://accounts.google.com/o/oauth2/auth - GOOGLE_TOKEN_URI=https://oauth2.googleapis.com/token - GOOGLE_AUTH_PROVIDER_CERT_URL=https://www.googleapis.com/oauth2/v1/certs - GOOGLE_CLIENT_SECRET=YOUR_SECRET_HERE - GOOGLE_REDIRECT_URIS=http://localhost - AES_SECRET_KEY=YOUR_SECRET_KEY_HERE (256 bits or 32 chars) - AES_IV=YOUR_IV_HERE (256 bits or 32 chars) - AUTH0_DOMAIN=abcdxyz.us.auth0.com - AUTH0_MANAGEMENT_CLIENT_ID=YOUR_MANAGEMENT_CLIENT_ID - AUTH0_MANAGEMENT_CLIENT_SECRET=YOUR_MANAGEMENT_CLIENT_SECRET -``` +If you just want to **self-host** Sentient—no contributions needed—switch to the `self-host` branch. The instructions are **identical** to the contributors guide above, but this branch is tailored for self-hosting deployments. -`⚠️ If you face some issues with Auth0 setup, please contact us via our Whatsapp Group or reach out to one of the lead contributors [@Kabeer2004](https://github.com/Kabeer2004), [@itsskofficial](https://github.com/itsskofficial) or [@abhijeetsuryawanshi12](https://github.com/abhijeetsuryawanshi12)` +1. **Switch branches** - + ```bash + git clone https://github.com/existence-master/Sentient.git + cd Sentient + git checkout self-host + ``` -### :running: Run Locally (Contributors) +2. 
**Follow all steps** in the **Contributors** section above, starting at “Prerequisites.” -**Install dependencies** + * Install dependencies + * Pull Ollama models + * Configure Neo4j & environment files + * Run Neo4j, backend, and client -Ensure that you have installed all the dependencies as outlined in the [Prerequisites Section](#bangbang-prerequisites). +> **Tip**: The `self-host` branch will always mirror the default branch’s setup instructions. Any updates to installation or configuration in the default branch will be back-ported here for self-hosting users. -**Start Neo4j** - -Start Neo4j Community Edition first. - -```bash -cd neo4j/bin && ./neo4j console -``` - -**Start the Python backend server.** - -```bash - cd src/server/venv/bin/ && source activate - cd ../../ && python -m server.app.app -``` - -Once the Python server has fully started up, start the Electron client. - -```bash - cd src/interface && npm run dev -``` - -`❗ You are free to package and bundle your own versions of the app that may or may not contain any modifications. However, if you do make any modifications, you must comply with the AGPL license and open-source your version as well.` - - +--- ## :eyes: Usage @@ -357,11 +307,11 @@ Sentient is a proactive companion that pulls context from the different apps you Sentient can also do a lot based on simple user commands. 
-- `"Hey Sentient, help me find a restaurant in Pune based on my food preferences.` -- `What are the upcoming events in my Google Calendar?` -- `Setup a lunch meeting with tom@email.com and add it to my Calendar` -- `Create a pitch deck for my startup in Google Slides and email it to tom@email.com` -- `Help me find new hobbies in my city` +* `"Hey Sentient, help me find a restaurant in Pune based on my food preferences."` +* `What are the upcoming events in my Google Calendar?` +* `Set up a lunch meeting with tom@email.com and add it to my Calendar` +* `Create a pitch deck for my startup in Google Slides and email it to tom@email.com` +* `Help me find new hobbies in my city` 📹 [Check out our ad!](https://www.youtube.com/watch?v=Oeqmg25yqDY) @@ -387,98 +337,71 @@ Please read the [code of conduct](https://github.com/existence-master/Sentient/b ## :grey_question: FAQ -- When will the cloud version launch? +* **When will the cloud version launch?** + We are working as fast as we can to bring it to life! Join our [WhatsApp Community](https://chat.whatsapp.com/IOHxuf2W8cKEuyZrMo8DOJ) to get daily updates and more! - - We are working as fast as we can to bring it to life! Join our [WhatsApp Community](https://chat.whatsapp.com/IOHxuf2W8cKEuyZrMo8DOJ) to get daily updates and more! +* **What data do you collect about me?** + For auth, we have a standard email–password flow provided by Auth0 (also supports Google OAuth). We only collect your email and login history. Read more in our [privacy policy](https://existence-sentient.vercel.app/privacy). -- What data do you collect about me? 
Read more about data collection in our [privacy policy.](https://existence-sentient.vercel.app/privacy). + * **Text mode**: CPU (Intel i5 or equivalent), 8 GB RAM, GPU with 4–6 GB VRAM. + * **Voice mode**: Additional VRAM depending on your Orpheus 3B quant. -- What kind of hardware do I need to run the app locally/self-host it? - - - To run Sentient - any decent CPU (Intel Core i5 or equivalent and above), 8GB of RAM and a GPU with 4-6GB of VRAM should be enough for text only. For voice, additional VRAM will be required based on the quant you choose for Orpheus 3B. A GPU is necessary for fast local model inference. You can self-host/run locally on Windows, Linux or Mac. - -- Why open source? - - - Since the app is going to be processing a lot of your personal information, maintaining transparency of the underlying code and processes is very important. The code needs to be available for everyone to freely view how their data is being managed in the app. We also want developers to be able to contribute to Sentient - they should be able to add missing integrations or features that they feel should be a part of Sentient. They should also be able to freely make their own forks of Sentient for different use-cases, provided they abide by the GNU AGPL license and open-source their work. - -- Why AGPL? - - - We intentionally decided to go with a more restrictive license, specifically AGPL, rather than a permissive license (like MIT or Apache) since we do not want any other closed-source, cloud-based competitors cropping up with our code at its core. Going with AGPL is our way of staying committed to our core principles of transparency and privacy while ensuring that others who use our code also follow the same principles. +* **Why open source & AGPL?** + Transparency around personal data is core to our philosophy. AGPL ensures derivatives also remain open, preventing closed-source forks. ## :warning: License -Distributed under the GNU AGPL License. 
Check [our lisence](https://github.com/existence-master/Sentient/blob/master/LICENSE.txt) for more information. +Distributed under the GNU AGPL License. See [LICENSE.txt](https://github.com/existence-master/Sentient/blob/master/LICENSE.txt) for details. ## :handshake: Contact -[existence.sentient@gmail.com](existence.sentient@gmail.com) +[existence.sentient@gmail.com](mailto:existence.sentient@gmail.com) ## :gem: Acknowledgements -Sentient wouldn't have been possible without +Sentient wouldn't have been possible without: -- [Ollama](https://ollama.com/) -- [Neo4j](https://neo4j.com/) -- [FastAPI](https://fastapi.tiangolo.com/) -- [Meta's Llama Models](https://www.llama.com/) -- [ElectronJS](https://www.electronjs.org/) -- [Next.js](https://nextjs.org/) +* [Ollama](https://ollama.com/) +* [Neo4j](https://neo4j.com/) +* [FastAPI](https://fastapi.tiangolo.com/) +* [Meta's Llama Models](https://www.llama.com/) +* [ElectronJS](https://www.electronjs.org/) +* [Next.js](https://nextjs.org/) ## :heavy_check_mark: Official Team -The official team behind Sentient - - - - - -

- - - itsskofficial - - + itsskofficial
-

- - - kabeer2004 - - + kabeer2004
-
+
- - - abhijeetsuryawanshi12 - - + abhijeetsuryawanshi12
-
From b0fb822b4bf8842f781752580f19276ae75720ce Mon Sep 17 00:00:00 2001 From: Sarthak Karandikar Date: Mon, 12 May 2025 15:19:25 +0530 Subject: [PATCH 02/10] chore: update readme --- README.md | 339 +++++++++++++++++++++--------------------------------- 1 file changed, 131 insertions(+), 208 deletions(-) diff --git a/README.md b/README.md index 76d5a22c..ffc96185 100644 --- a/README.md +++ b/README.md @@ -49,10 +49,8 @@ - [Features](#dart-features) - [Roadmap](#compass-roadmap) - [Getting Started](#toolbox-getting-started) - - [Prerequisites](#bangbang-prerequisites-contributors) - - [Installation](#gear-installation-users) - - [Environment Variables](#-environment-variables-contributors) - - [Run Locally](#running-run-locally-contributors) + - [For Contributors (default branch)](#for-contributors-default-branch) + - [For Self-Hosting (self-host branch)](#for-self-hosting-self-host-branch) - [Usage](#eyes-usage) - [Contributing](#wave-contributing) - [Code of Conduct](#scroll-code-of-conduct) @@ -161,193 +159,145 @@ We at [Existence](https://existence.technology) believe that AI won't simply die ## :toolbox: Getting Started - +Choose your path below. +- **Contributors**: Follow along in the **default** branch. +- **Self-Hosters**: Switch to the **self-host** branch to get the self-hostable version with identical instructions. -### :gear: Installation (Users) +--- -If you're not interested in contributing to the project or self-hosting and simply want to use Sentient, join the [Early Adopters Group](https://chat.whatsapp.com/IOHxuf2W8cKEuyZrMo8DOJ). You can also join our paid waitlist for $3 - to do this, contact [@itsskofficial](https://github.com/itsskofficial). Users on the paid waitlist will be the first to get access to the full cloud version of Sentient via a closed beta. 
+### For Contributors (default branch) -If you are interested in contributing to the app or simply running the current latest version from source, you can proceed with the following steps 👇 +If you're here to **contribute** to Sentient—adding features, fixing bugs, improving docs—follow these steps on the **default** branch. - +1. **Clone the repo** + ```bash + git clone https://github.com/existence-master/Sentient.git + cd Sentient + ``` -### :bangbang: Prerequisites (Contributors) +2. **Prerequisites** -#### The following instructions are for Linux-based machines, but they remain fundamentally the same for Windows & Mac. Only things like venv configs and activations change on Windows, the rest of the process is pretty much the same. + * **Node.js & npm** + Install from [nodejs.org](https://nodejs.org/en/download). + * **Python 3.11+** + Install from [python.org](https://www.python.org/downloads/). + * **Ollama** (for text models) + Install from [ollama.com](https://ollama.com/). + * **Neo4j Community Edition** (for the knowledge graph) + Download from [neo4j.com](https://neo4j.com/deployment-center/). -Clone the project +3. **Frontend setup** -```bash - git clone https://github.com/existence-master/Sentient.git -``` + ```bash + cd src/client + npm install + ``` -Go to the project directory +4. **Backend setup** -```bash - cd Sentient -``` + ```bash + cd src/server + python3 -m venv venv + source venv/bin/activate + pip install -r requirements.txt + ``` -Install the following to start contributing to Sentient: + > ⚠️ If you encounter numpy errors, first install with the latest numpy (2.x), then downgrade to 1.26.4. -- npm: The ElectronJS frontend of the Sentient desktop app uses npm as its package manager. +5. **Ollama models** - Install the latest version of NodeJS and npm from [here.](https://nodejs.org/en/download) - - After that, install all the required packages. 
- - ```bash - cd ./src/client && npm install - ``` - -- python: Python will be needed to run the backend. - Install Python [from here.](https://www.python.org/downloads/) We recommend Python 3.11. - - After that, you will need to create a virtual environment and install all required packages. This venv will need to be activated whenever you want to run the Python server (backend). - - ```bash - cd src/server && python3 -m venv venv - cd venv/bin && source activate - cd ../../ && pip install -r requirements.txt - ``` - - `⚠️ If you get a numpy dependency error while installing the requirements, first install the requirements with the latest numpy version (2.x). After the installation of requirements completes, install a numpy 1.x version (backend has been tested and works successfully on numpy 1.26.4) and you will be ready to go. This is probably not the best practise, but this works for now.` - - `⚠️ If you intend to use Advanced Voice Mode, you MUST download and install llama-cpp-python with CUDA support (if you have an NVIDIA GPU) using the commented out pip command in the requirements.txt file. Otherwise, simply download and install the llama-cpp-python package with pip for simple CPU-only support. This line is commented out in the requirements file to allow users to download and install the appropriate version based on their preference (CPU only/GPU accelerated).` - -- Ollama: Download and install the latest version of Ollama [from here.](https://ollama.com/) - - After that, pull the model you wish to use from Ollama. For example, - - ```bash + ```bash ollama pull llama3.2:3b - ``` - - `⚠️ By default, the backend is configured with Llama 3.2 3B. We found this SLM to be really versatile and works really well for our usage, as compared to other SLMs. However a lot of new SLMs like Cogito are being dropped everyday so we will probably be changing the model soon. 
If you wish to use a different model, simply find all the places where llama3.2:3b has been set in the Python backend scripts and change it to the tag of the model you have pulled from Ollama.` - -- Neo4j Community: Download Neo4j Community Edition [from here.](https://neo4j.com/deployment-center/) - - Next, you will need to enable the APOC plugin. - After extracting Neo4j Community Edition, navigate to the labs folder. Copy the `apoc-x.x.x-core.jar` script to the plugins folder in the Neo4j folder. - Edit the neo4j.conf file to allow the use of APOC procedures: - - ```bash - sudo nano /etc/neo4j/neo4j.conf - ``` - - Uncomment or add the following lines: - - ```ini - dbms.security.procedures.unrestricted=apoc.* - dbms.security.procedures.allowlist=apoc.* - dbms.unmanaged_extension_classes=apoc.export=/apoc - ``` - - You can run Neo4j community using the following commands + ``` - ```bash - cd neo4j/bin && ./neo4j console - ``` + *Tip*: To use another model, update `BASE_MODEL_REPO_ID` in your `.env` accordingly. - While Neo4j is running, you can visit `http://localhost:7474/` to run Cypher Queries and interact with your knowledge graph. +6. **Neo4j APOC plugin** - `⚠️ On your first run of Neo4j Community, you will need to set a username and password. **Remember this password** as you will need to add it to the .env file on the Python backend.` + * Copy `apoc-x.x.x-core.jar` into `neo4j/plugins`. + * In `neo4j/conf/neo4j.conf`: -- Download the Voice Model (Orpheus TTS 3B) + ```ini + dbms.security.procedures.unrestricted=apoc.* + dbms.security.procedures.allowlist=apoc.* + dbms.unmanaged_extension_classes=apoc.export=/apoc + ``` - For using Advanced Voice Mode, you need to manually download [this model](https://huggingface.co/isaiahbjork/orpheus-3b-0.1-ft-Q4_K_M-GGUF) from Huggingface. Whisper is automatically downloaded by Sentient via fasterwhisper. +7. **Environment variables** - The model linked above is a Q4 quantization of the Orpheus 3B model. 
If you have even more VRAM at your disposal, you can go for the [Q8 quant](https://huggingface.co/Mungert/orpheus-3b-0.1-ft-GGUF). + * **Frontend**: create `src/interface/.env`: - Download the GGUF files - these models are run using llama-cpp-python. + ```env + ELECTRON_APP_URL="http://localhost:3000" + APP_SERVER_URL="http://127.0.0.1:5000" + NEO4J_SERVER_URL="http://localhost:7474" + BASE_MODEL_REPO_ID="llama3.2:3b" + AUTH0_DOMAIN="your-auth0-domain" + AUTH0_CLIENT_ID="your-auth0-client-id" + ``` + * **Backend**: create `src/server/.env`: - Place the model files here: - `src/server/voice/models` + ```env + NEO4J_URI=bolt://localhost:7687 + NEO4J_USERNAME=neo4j + NEO4J_PASSWORD=your-password + EMBEDDING_MODEL_REPO_ID=sentence-transformers/all-MiniLM-L6-v2 + BASE_MODEL_URL=http://localhost:11434/api/chat + BASE_MODEL_REPO_ID=llama3.2:3b + GOOGLE_CLIENT_ID=… + GOOGLE_CLIENT_SECRET=… + BRAVE_SUBSCRIPTION_TOKEN=… + AES_SECRET_KEY=… + AES_IV=… + ``` - and ensure that the correct model name is set in the Python scripts on the backend. By default, the app is configured to use the 8-bit quant using the same name that it has when you download it from HuggingFace. + > ⚠️ If you need example keys, see the discussion “Request Environment Variables” in Issues. - `⚠️ If you do not have enough VRAM and voice mode is not that important to you, you can comment out/remove the voice mode loading functionality in the main app.py located at src/server/app/app.py` +8. **Run everything** - + * Start Neo4j: -### 🔒: Environment Variables (Contributors) + ```bash + cd neo4j/bin && ./neo4j console + ``` + * Start backend: -You will need the following environment variables to run the project locally. 
For sensitive keys like Auth0, GCP, Brave Search you can create your own accounts and populate your own keys or comment in the discussion titled ['Request Environment Variables (.env) Here'](https://github.com/existence-master/Sentient/discussions/13) if you want pre-setup keys + ```bash + cd src/server + source venv/bin/activate + python -m server.app.app + ``` + * Start Electron client: -For the Electron Frontend, you will need to create a `.env` file in the `src/interface` folder. Populate that `.env` file with the following variables (examples given). + ```bash + cd src/client + npm run dev + ``` -```.emv.template - ELECTRON_APP_URL= "http://localhost:3000" - APP_SERVER_URL= "http://127.0.0.1:5000" - APP_SERVER_LOADED= "false" - APP_SERVER_INITIATED= "false" - NEO4J_SERVER_URL= "http://localhost:7474" - NEO4J_SERVER_STARTED= "false" - BASE_MODEL_REPO_ID= "llama3.2:3b" - AUTH0_DOMAIN = "abcdxyz.us.auth0.com" - AUTH0_CLIENT_ID = "abcd1234" -``` +--- -For the Python Backend, you will need to create a `.env` file and place it in the `src/model` folder. Populate that `.env` file with the following variables (examples given). 
+### For Self-Hosting (self-host branch) -```.emv.template - NEO4J_URI=bolt://localhost:7687 - NEO4J_USERNAME=neo4j - NEO4J_PASSWORD=abcd1234 - EMBEDDING_MODEL_REPO_ID=sentence-transformers/all-MiniLM-L6-v2 - BASE_MODEL_URL=http://localhost:11434/api/chat - BASE_MODEL_REPO_ID=llama3.2:3b - LINKEDIN_USERNAME=email@address.com - LINKEDIN_PASSWORD=password123 - BRAVE_SUBSCRIPTION_TOKEN=YOUR_TOKEN_HERE - BRAVE_BASE_URL=https://api.search.brave.com/res/v1/web/search - GOOGLE_CLIENT_ID=YOUR_GOOGLE_CLIENT_ID_HERE - GOOGLE_PROJECT_ID=YOUR_PROJECT_ID - GOOGLE_AUTH_URI=https://accounts.google.com/o/oauth2/auth - GOOGLE_TOKEN_URI=https://oauth2.googleapis.com/token - GOOGLE_AUTH_PROVIDER_CERT_URL=https://www.googleapis.com/oauth2/v1/certs - GOOGLE_CLIENT_SECRET=YOUR_SECRET_HERE - GOOGLE_REDIRECT_URIS=http://localhost - AES_SECRET_KEY=YOUR_SECRET_KEY_HERE (256 bits or 32 chars) - AES_IV=YOUR_IV_HERE (256 bits or 32 chars) - AUTH0_DOMAIN=abcdxyz.us.auth0.com - AUTH0_MANAGEMENT_CLIENT_ID=YOUR_MANAGEMENT_CLIENT_ID - AUTH0_MANAGEMENT_CLIENT_SECRET=YOUR_MANAGEMENT_CLIENT_SECRET -``` +If you just want to **self-host** Sentient—no contributions needed—switch to the `self-host` branch. The instructions are **identical** to the contributors guide above, but this branch is tailored for self-hosting deployments. -`⚠️ If you face some issues with Auth0 setup, please contact us via our Whatsapp Group or reach out to one of the lead contributors [@Kabeer2004](https://github.com/Kabeer2004), [@itsskofficial](https://github.com/itsskofficial) or [@abhijeetsuryawanshi12](https://github.com/abhijeetsuryawanshi12)` +1. **Switch branches** - + ```bash + git clone https://github.com/existence-master/Sentient.git + cd Sentient + git checkout self-host + ``` -### :running: Run Locally (Contributors) +2. 
**Follow all steps** in the **Contributors** section above, starting at “Prerequisites.” -**Install dependencies** + * Install dependencies + * Pull Ollama models + * Configure Neo4j & environment files + * Run Neo4j, backend, and client -Ensure that you have installed all the dependencies as outlined in the [Prerequisites Section](#bangbang-prerequisites). +> **Tip**: The `self-host` branch will always mirror the default branch’s setup instructions. Any updates to installation or configuration in the default branch will be back-ported here for self-hosting users. -**Start Neo4j** - -Start Neo4j Community Edition first. - -```bash -cd neo4j/bin && ./neo4j console -``` - -**Start the Python backend server.** - -```bash - cd src/server/venv/bin/ && source activate - cd ../../ && python -m server.app.app -``` - -Once the Python server has fully started up, start the Electron client. - -```bash - cd src/interface && npm run dev -``` - -`❗ You are free to package and bundle your own versions of the app that may or may not contain any modifications. However, if you do make any modifications, you must comply with the AGPL license and open-source your version as well.` - - +--- ## :eyes: Usage @@ -357,11 +307,11 @@ Sentient is a proactive companion that pulls context from the different apps you Sentient can also do a lot based on simple user commands. 
-- `"Hey Sentient, help me find a restaurant in Pune based on my food preferences.`
-- `What are the upcoming events in my Google Calendar?`
-- `Setup a lunch meeting with tom@email.com and add it to my Calendar`
-- `Create a pitch deck for my startup in Google Slides and email it to tom@email.com`
-- `Help me find new hobbies in my city`
+* `"Hey Sentient, help me find a restaurant in Pune based on my food preferences."`
+* `What are the upcoming events in my Google Calendar?`
+* `Set up a lunch meeting with tom@email.com and add it to my Calendar`
+* `Create a pitch deck for my startup in Google Slides and email it to tom@email.com`
+* `Help me find new hobbies in my city`
📹 [Check out our ad!](https://www.youtube.com/watch?v=Oeqmg25yqDY)
@@ -387,98 +337,71 @@ Please read the [code of conduct](https://github.com/existence-master/Sentient/b
## :grey_question: FAQ
-- When will the cloud version launch?
+* **When will the cloud version launch?**
+  We are working as fast as we can to bring it to life! Join our [WhatsApp Community](https://chat.whatsapp.com/IOHxuf2W8cKEuyZrMo8DOJ) to get daily updates and more!
-  - We are working as fast as we can to bring it to life! Join our [WhatsApp Community](https://chat.whatsapp.com/IOHxuf2W8cKEuyZrMo8DOJ) to get daily updates and more!
+* **What data do you collect about me?**
+  For auth, we have a standard email–password flow provided by Auth0 (also supports Google OAuth). We only collect your email and login history. Read more in our [privacy policy](https://existence-sentient.vercel.app/privacy).
-- What data do you collect about me?
+* **What hardware do I need?**
-  - For auth, we have a standard email-password flow provided by Auth0 (also supports Google OAuth). So, the only data we collect is the email provided by users and their login history. This helps us understand how users are using the app, retention rates, daily signups and more.
Read more about data collection in our [privacy policy.](https://existence-sentient.vercel.app/privacy). + * **Text mode**: CPU (Intel i5 or equivalent), 8 GB RAM, GPU with 4–6 GB VRAM. + * **Voice mode**: Additional VRAM depending on your Orpheus 3B quant. -- What kind of hardware do I need to run the app locally/self-host it? - - - To run Sentient - any decent CPU (Intel Core i5 or equivalent and above), 8GB of RAM and a GPU with 4-6GB of VRAM should be enough for text only. For voice, additional VRAM will be required based on the quant you choose for Orpheus 3B. A GPU is necessary for fast local model inference. You can self-host/run locally on Windows, Linux or Mac. - -- Why open source? - - - Since the app is going to be processing a lot of your personal information, maintaining transparency of the underlying code and processes is very important. The code needs to be available for everyone to freely view how their data is being managed in the app. We also want developers to be able to contribute to Sentient - they should be able to add missing integrations or features that they feel should be a part of Sentient. They should also be able to freely make their own forks of Sentient for different use-cases, provided they abide by the GNU AGPL license and open-source their work. - -- Why AGPL? - - - We intentionally decided to go with a more restrictive license, specifically AGPL, rather than a permissive license (like MIT or Apache) since we do not want any other closed-source, cloud-based competitors cropping up with our code at its core. Going with AGPL is our way of staying committed to our core principles of transparency and privacy while ensuring that others who use our code also follow the same principles. +* **Why open source & AGPL?** + Transparency around personal data is core to our philosophy. AGPL ensures derivatives also remain open, preventing closed-source forks. ## :warning: License -Distributed under the GNU AGPL License. 
Check [our lisence](https://github.com/existence-master/Sentient/blob/master/LICENSE.txt) for more information. +Distributed under the GNU AGPL License. See [LICENSE.txt](https://github.com/existence-master/Sentient/blob/master/LICENSE.txt) for details. ## :handshake: Contact -[existence.sentient@gmail.com](existence.sentient@gmail.com) +[existence.sentient@gmail.com](mailto:existence.sentient@gmail.com) ## :gem: Acknowledgements -Sentient wouldn't have been possible without +Sentient wouldn't have been possible without: -- [Ollama](https://ollama.com/) -- [Neo4j](https://neo4j.com/) -- [FastAPI](https://fastapi.tiangolo.com/) -- [Meta's Llama Models](https://www.llama.com/) -- [ElectronJS](https://www.electronjs.org/) -- [Next.js](https://nextjs.org/) +* [Ollama](https://ollama.com/) +* [Neo4j](https://neo4j.com/) +* [FastAPI](https://fastapi.tiangolo.com/) +* [Meta's Llama Models](https://www.llama.com/) +* [ElectronJS](https://www.electronjs.org/) +* [Next.js](https://nextjs.org/) ## :heavy_check_mark: Official Team -The official team behind Sentient - - - - - -

- - - itsskofficial - - + itsskofficial
-

- - - kabeer2004 - - + kabeer2004
-
+
- - - abhijeetsuryawanshi12 - - + abhijeetsuryawanshi12
-
From 5a237783242c5322b7c346cac47366a214ccfd80 Mon Sep 17 00:00:00 2001 From: Sarthak Karandikar Date: Fri, 16 May 2025 18:55:03 +0530 Subject: [PATCH 03/10] chore: update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index ffc96185..55cc160f 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ ![README Banner](./.github/assets/banner.png) -

Your personal, private & interactive AI companion

+

Your proactive AI companion

From 79c6304f7cc9e7ee1b4dc27afa746bc8a6923d48 Mon Sep 17 00:00:00 2001 From: Sarthak Karandikar Date: Fri, 16 May 2025 18:55:03 +0530 Subject: [PATCH 04/10] chore: update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index ffc96185..55cc160f 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ ![README Banner](./.github/assets/banner.png) -

Your personal, private & interactive AI companion

+

Your proactive AI companion

From c78226678cc3f1214fc4e8bad9716a45b0052b31 Mon Sep 17 00:00:00 2001 From: Sarthak Karandikar Date: Mon, 28 Jul 2025 22:58:35 +0530 Subject: [PATCH 05/10] Update README.md --- README.md | 106 ++++++++++++++++++++++++++++-------------------------- 1 file changed, 56 insertions(+), 50 deletions(-) diff --git a/README.md b/README.md index d03998d7..e6e910d8 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ ![README Banner](./.github/assets/banner.png) -

Proactive Intelligence Across Your Apps

+

Sentient: Your Personal AI Assistant

@@ -38,82 +38,89 @@
-> Sentient is an open-source AI project aimed at bridging the gap between input context and output actions performed by agents. AI Agents heavily rely on input prompts to perform actions. We wish to _eliminate prompting entirely_ making the first big step towards truly autonomous AI that is aligned with a user's goals and can get stuff done without needing to context-switch between multiple apps and typing long prompts. +> Hey there! I'm Sarthak, and I'm building **Sentient**: a personal AI assistant for anyone and everyone who wants to live their life more productively. +> +> Sentient acts as your central command center, bridging the gap between your goals and the actions required to achieve them. It is designed to be a truly proactive partner that understands you, manages your digital life, and gets things done—without you having to type long, complex prompts. +> +> It can: +> - **💬 Chat with you** about any topic via text or voice. +> - **🧠 Learn your preferences, habits, and goals** to better serve you over time. +> - **⚙️ Execute complex, multi-step tasks** and recurring workflows. +> - **🗓️ Proactively manage your day**, reading your emails and calendar to suggest schedules and remind you of important events. +> - **🔗 Integrate seamlessly** with the apps you use every day. +> +> And the best part? **The project is fully open-source.** > > [Read our manifesto.](https://docs.google.com/document/d/1vbCGAbh9f8vXfPup_Z7cW__gnOLdRhEtHKyoIxJD8is/edit?tab=t.0#heading=h.2kit9yqvlc77) --- -## ✨ Current Features - -![image](https://github.com/user-attachments/assets/756c8aeb-1748-445c-a09a-df6d99aeee58) -

-

The Home Page

+

đź’¬ Join our WhatsApp Community! đź’¬

+

Interested in trying Sentient out? Join our community to get the latest updates, ask questions, and connect with the team.

-![Journl](https://github.com/user-attachments/assets/467fd26d-18a4-4107-98a9-05fa83b26a77) +--- -
-

The Journal page is the central page of the app - use it to track your day and Sentient gets stuff done.

-
+## ✨ Features -![image](https://github.com/user-attachments/assets/fcf05b39-7f8d-46f8-9702-e0791f27f918) +Sentient is a powerful, web-based platform designed for seamless interaction, automation, and intelligence. + +image
-

Sentient co-authors the journal with you.

+

The Home page is your central chat interface to talk with your AI assistant.

-![image](https://github.com/user-attachments/assets/319d7c35-9046-4ea1-a369-88ab1a9ded8e) +image
-

Use the Tasks page to create and manage workflows.

+

The Tasks page gives you a unified view of all your tasks, where the AI assists in execution.

-Sentient has evolved into a powerful web-based platform with a robust set of features designed for deep integration and automation: - -### 🧠 Proactive Context & Learning +image -Sentient automatically collects information from connected applications like **Gmail** and **Google Calendar**. It extracts relevant context, identifying important facts to remember and also creates plans to tackle action items - without needing to be prompted. - -### 📝 Memory System +
+

The Integrations page is where you connect all your apps.

+
-- **SuperMemory:** Permanent facts about you—your preferences, relationships, and key details—are stored and managed through an integration with **Supermemory**, creating a rich, personalized knowledge base that the agent can update and retrieve from anytime. -- **Notes & Journal:** A full-featured journal allows you to simply write down what's on your mind and have Sentient manage it for you. Sentient can also write to this journal, giving you updates on what it's doing and more. The journal also helps you keep track of scheduled and recurring tasks created by Sentient. Any information obtained from your context sources is also populated in the journal. +image -### 🤖 Autonomous Task & Agent System +
+

The Settings page is where you can customize the application.

+
-- **Generate Plans from Goals:** Sentient can generate detailed plans to execute tasks using connected tools, all from a simple high-level goal. -- **Asynchronous Execution:** Once approved, tasks are handled **asynchronously** in the background - you can approve as many tasks as you want simultaneously. The executor agent intelligently uses the available tools to complete the plan, providing real-time progress updates. -- **View & Manage Tasks:** A dedicated **Tasks page** lets you view active, pending, and completed tasks, check their progress, and see the final results. -### 🔌 Extensive Integrations (MCP Hub) +### 💬 Unified Chat Interface +The home page is a universal chat screen where you can talk with Sentient about anything. Use **text or voice** to ask questions, give commands, or simply have a conversation. The chat is also supercharged with tools like Internet Search, Weather, News, and Shopping for any specific queries. -Our **Model Context Protocol (MCP)** hub allows for a powerful, distributed system of tools. Current integrations include: +### 🤖 Autonomous Task Management +The **Tasks page** is your mission control center. Here you can add, view, and manage all your to-dos. +- **AI-Assisted Execution:** Describe a high-level goal, and Sentient will generate a detailed, step-by-step plan to achieve it using its integrated tools. +- **Asynchronous Workflows:** Approve a plan, and Sentient gets to work in the background, handling complex, multi-step workflows without interrupting you. You can monitor progress in real-time. +- **Unified View:** Track active, pending, and completed tasks all in one place. +### 🔌 Seamless Integrations +The **Integrations page** is where you connect Sentient to your digital life. Our **Model Context Protocol (MCP)** hub allows for a powerful, distributed system of tools. Current integrations include: - **Google Suite:** Gmail, Google Calendar, Google Drive, Google Docs, Google Sheets, and Google Slides. 
- **Productivity:** Slack and Notion. - **Developer:** GitHub. - **Information:** Internet Search (Google Search), News (NewsAPI), Weather (AccuWeather), Google Shopping and Google Maps. - **Miscellaneous:** QuickChart for generating charts on the fly. -More tools will be added soon. +### 🧠 Proactive Intelligence & Learning +Sentient doesn't just wait for commands. It proactively scans connected apps like **Gmail** and **Google Calendar** to understand your schedule and priorities. +- **Contextual Awareness:** It identifies action items, suggests tasks, and learns important facts about you. +- **Personalized Memory:** Key details about your preferences, relationships, and goals are stored via an integration with **Supermemory**, creating a rich, personalized knowledge base that helps the agent serve you better over time. -### 💬 Interactive Chat Overlay - -A chat interface is available on any page. It allows you to have conversations with Sentient and also use tools like Internet Search, Weather, News and Shopping for any specific queries. - -### ⚙️ Full Customization & Settings - -A central settings page gives you complete control: - -- Connect or disconnect applications with OAuth (for applications supporting OAuth) or manually. -- Set custom privacy filters to prevent Sentient from processing context containing sensitive information. -- Configure WhatsApp notifications to stay updated on the go. +### ⚙️ Full Customization +The **Settings page** gives you complete control over your agent. +- **Manage Connections:** Easily connect or disconnect your apps. +- **Privacy Filters:** Set custom filters to prevent Sentient from processing context containing sensitive information. +- **Notifications:** Configure WhatsApp notifications to stay updated on the go. ### 🔒 Self-Hostable - -The entire platform can be self-hosted and configured to run fully locally. 
[Check the relevant docs for more info.](https://sentient-2.gitbook.io/docs/getting-started/running-sentient-from-source-self-host)
+The entire platform is open-source and can be self-hosted and configured to run fully locally, ensuring your data stays private. [Check the relevant docs for more info.](https://sentient-2.gitbook.io/docs/getting-started/running-sentient-from-source-self-host)
---
@@ -121,12 +128,11 @@ We are constantly working to expand Sentient's capabilities. Here is a glimpse of what's planned for the future:
-- **Make the Web App as feature-rich as possible:** There is a lot that can be improved in the existing webapp.
-- **OS-Level Integration:** Launch native apps for `Windows`, `MacOS`, `Android` and `iOS` that allow for deeper integrations.
-- **Expanded Integrations:** Add support for more popular services, such as the `Microsoft 365 Suite`, `Spotify`, and so on.
-- **Advanced Reasoning & Planning:** Reasoning improvements for the planning and execution pipeline.
-- **Tool-Specific UI:** Enhance the interface with custom UI components for specific tool outputs, such as maps for location-based results.
-- **Custom Tool Integrations:** Let users add any app of their choice.
+- **OS-Level Integration:** Launch native apps for `Windows`, `macOS`, `Android`, and `iOS` for deeper, more proactive assistance.
+- **Expanded Integrations:** Add support for more popular services, such as the `Microsoft 365 Suite`, `Spotify`, and more.
+- **Advanced Conversational AI:** Enhance the chat experience with more natural voice interactions, better memory, and more sophisticated reasoning.
+- **Richer Task Execution:** Improve the planning and execution pipeline and provide richer visual feedback for tasks.
+- **Custom Tool Integrations:** Create a framework that allows users to easily add any app of their choice.
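The MCP hub mentioned in the features above is, in practice, a registry that maps each integration to an out-of-process tool server. A minimal sketch of that pattern, modeled on the `INTEGRATIONS_CONFIG` entries added to `src/server/main/config.py` later in this series — the `resolve_mcp_url` helper and the single-entry registry here are illustrative, not part of the codebase:

```python
import os

# Illustrative sketch: each integration carries metadata plus the URL of its
# MCP (Model Context Protocol) server, read from an environment variable with
# a localhost default. Only the "outlook" entry from the later patch is shown.
INTEGRATIONS_CONFIG = {
    "outlook": {
        "display_name": "Outlook",
        "auth_type": "oauth",
        "mcp_server_config": {
            "name": "outlook_server",
            "url": os.getenv("OUTLOOK_MCP_SERVER_URL", "http://localhost:9027/sse"),
        },
    },
}

def resolve_mcp_url(service: str) -> str:
    """Look up the MCP server URL for a registered integration (hypothetical helper)."""
    try:
        return INTEGRATIONS_CONFIG[service]["mcp_server_config"]["url"]
    except KeyError:
        raise ValueError(f"No MCP server registered for '{service}'")

print(resolve_mcp_url("outlook"))  # the localhost default when the env var is unset
```

Keeping each tool behind its own server URL is what lets integrations be added (or self-hosted on different ports) without touching the agent's core loop.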
## :wave: Contributing @@ -166,7 +172,7 @@ Distributed under the GNU AGPL License. See [LICENSE.txt](https://github.com/exi
- itsskofficial + itsskofficial (Sarthak)
From 441c4875ace8b5f1133dfc4e555ba9ac74e1bdfc Mon Sep 17 00:00:00 2001 From: Sarthak Karandikar Date: Mon, 28 Jul 2025 23:00:53 +0530 Subject: [PATCH 06/10] Update README.md --- README.md | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index e6e910d8..e76666a2 100644 --- a/README.md +++ b/README.md @@ -38,9 +38,9 @@
-> Hey there! I'm Sarthak, and I'm building **Sentient**: a personal AI assistant for anyone and everyone who wants to live their life more productively. +> **Sentient** is a personal AI assistant for anyone and everyone who wants to live their life more productively. > -> Sentient acts as your central command center, bridging the gap between your goals and the actions required to achieve them. It is designed to be a truly proactive partner that understands you, manages your digital life, and gets things done—without you having to type long, complex prompts. +> It acts as your central command center, bridging the gap between your goals and the actions required to achieve them. It is designed to be a truly proactive partner that understands you, manages your digital life, and gets things done—without you having to type long, complex prompts. > > It can: > - **💬 Chat with you** about any topic via text or voice. @@ -49,9 +49,7 @@ > - **🗓️ Proactively manage your day**, reading your emails and calendar to suggest schedules and remind you of important events. > - **🔗 Integrate seamlessly** with the apps you use every day. > -> And the best part? 
**The project is fully open-source.** -> -> [Read our manifesto.](https://docs.google.com/document/d/1vbCGAbh9f8vXfPup_Z7cW__gnOLdRhEtHKyoIxJD8is/edit?tab=t.0#heading=h.2kit9yqvlc77) +> For more information [read our manifesto.](https://docs.google.com/document/d/1vbCGAbh9f8vXfPup_Z7cW__gnOLdRhEtHKyoIxJD8is/edit?tab=t.0#heading=h.2kit9yqvlc77) --- From 34c4a2d49f8534530025484faed3a5b311b09e28 Mon Sep 17 00:00:00 2001 From: itsskofficial Date: Thu, 7 Aug 2025 17:07:01 +0530 Subject: [PATCH 07/10] fix (memories): unauthorized issue --- src/client/app/api/memories/[memoryId]/route.js | 8 ++++---- src/client/app/api/memories/graph/route.js | 4 ++-- src/client/app/api/memories/route.js | 10 +++++----- src/server/main/memories/routes.py | 6 +++--- 4 files changed, 14 insertions(+), 14 deletions(-) diff --git a/src/client/app/api/memories/[memoryId]/route.js b/src/client/app/api/memories/[memoryId]/route.js index 89dd692f..52d4531b 100644 --- a/src/client/app/api/memories/[memoryId]/route.js +++ b/src/client/app/api/memories/[memoryId]/route.js @@ -11,7 +11,7 @@ export const PUT = withAuth(async function PUT( { params, authHeader } ) { const { memoryId } = params - const backendUrl = new URL(`${appServerUrl}/api/memories/${memoryId}`) + const backendUrl = new URL(`${appServerUrl}/memories/${memoryId}`) try { const body = await request.json() @@ -27,7 +27,7 @@ export const PUT = withAuth(async function PUT( } return NextResponse.json(data) } catch (error) { - console.error(`API Error in /api/memories/${memoryId} (PUT):`, error) + console.error(`API Error in /memories/${memoryId} (PUT):`, error) return NextResponse.json({ error: error.message }, { status: 500 }) } }) @@ -37,7 +37,7 @@ export const DELETE = withAuth(async function DELETE( { params, authHeader } ) { const { memoryId } = params - const backendUrl = new URL(`${appServerUrl}/api/memories/${memoryId}`) + const backendUrl = new URL(`${appServerUrl}/memories/${memoryId}`) try { const response = await 
fetch(backendUrl.toString(), { @@ -51,7 +51,7 @@ export const DELETE = withAuth(async function DELETE( } return NextResponse.json(data) } catch (error) { - console.error(`API Error in /api/memories/${memoryId} (DELETE):`, error) + console.error(`API Error in /memories/${memoryId} (DELETE):`, error) return NextResponse.json({ error: error.message }, { status: 500 }) } }) diff --git a/src/client/app/api/memories/graph/route.js b/src/client/app/api/memories/graph/route.js index c297173d..0b6ad43b 100644 --- a/src/client/app/api/memories/graph/route.js +++ b/src/client/app/api/memories/graph/route.js @@ -8,7 +8,7 @@ const appServerUrl = export const GET = withAuth(async function GET(request, { authHeader }) { // This new backend endpoint is assumed to exist and return { nodes: [], edges: [] }. - const backendUrl = new URL(`${appServerUrl}/api/memories/graph`) + const backendUrl = new URL(`${appServerUrl}/memories/graph`) try { const response = await fetch(backendUrl.toString(), { @@ -22,7 +22,7 @@ export const GET = withAuth(async function GET(request, { authHeader }) { } return NextResponse.json(data) } catch (error) { - console.error("API Error in /api/memories/graph:", error) + console.error("API Error in /memories/graph:", error) return NextResponse.json({ error: error.message }, { status: 500 }) } }) diff --git a/src/client/app/api/memories/route.js b/src/client/app/api/memories/route.js index a8d9f00e..3ac3ffd8 100644 --- a/src/client/app/api/memories/route.js +++ b/src/client/app/api/memories/route.js @@ -7,7 +7,7 @@ const appServerUrl = : process.env.NEXT_PUBLIC_APP_SERVER_URL export const GET = withAuth(async function GET(request, { authHeader }) { - const backendUrl = new URL(`${appServerUrl}/api/memories`) + const backendUrl = new URL(`${appServerUrl}/memories`) try { const response = await fetch(backendUrl.toString(), { @@ -21,13 +21,13 @@ export const GET = withAuth(async function GET(request, { authHeader }) { } return NextResponse.json(data) } catch 
(error) { - console.error("API Error in /api/memories:", error) + console.error("API Error in /memories:", error) return NextResponse.json({ error: error.message }, { status: 500 }) } }) export const POST = withAuth(async function POST(request, { authHeader }) { - const backendUrl = new URL(`${appServerUrl}/api/memories`) + const backendUrl = new URL(`${appServerUrl}/memories`) try { const body = await request.json() const response = await fetch(backendUrl.toString(), { @@ -42,7 +42,7 @@ export const POST = withAuth(async function POST(request, { authHeader }) { } return NextResponse.json(data, { status: response.status }) } catch (error) { - console.error("API Error in /api/memories (POST):", error) + console.error("API Error in /memories (POST):", error) return NextResponse.json({ error: error.message }, { status: 500 }) } -}) \ No newline at end of file +}) diff --git a/src/server/main/memories/routes.py b/src/server/main/memories/routes.py index 5bbbb98d..0a9a5cc1 100644 --- a/src/server/main/memories/routes.py +++ b/src/server/main/memories/routes.py @@ -12,7 +12,7 @@ logger = logging.getLogger(__name__) router = APIRouter( - prefix="/api/memories", + prefix="/memories", tags=["Memories"] ) @@ -22,7 +22,7 @@ async def startup_event(): utils._initialize_agents() utils._initialize_embedding_model() -@router.get("/", summary="Get all memories for a user") +@router.get("", summary="Get all memories for a user") async def get_all_memories( user_id: str = Depends(PermissionChecker(required_permissions=["read:memory"])) ): @@ -64,7 +64,7 @@ async def get_memory_graph( logger.error(f"Error generating memory graph for user {user_id}: {e}", exc_info=True) raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail="Error generating memory graph.") -@router.post("/", summary="Create a new memory for a user") +@router.post("", summary="Create a new memory for a user") async def create_memory( request: CreateMemoryRequest, user_id: str = 
Depends(PermissionChecker(required_permissions=["write:memory"])) From 308c99a8c49ab62680def857aed996e5f0953648 Mon Sep 17 00:00:00 2001 From: itsskofficial Date: Mon, 11 Aug 2025 14:23:18 +0530 Subject: [PATCH 08/10] fix (pwa): updated docker files --- src/client/Dockerfile | 8 ++++++++ src/client/docker-compose.yaml | 4 ++++ 2 files changed, 12 insertions(+) diff --git a/src/client/Dockerfile b/src/client/Dockerfile index 2269256a..3d8d8649 100644 --- a/src/client/Dockerfile +++ b/src/client/Dockerfile @@ -23,6 +23,10 @@ ARG AUTH0_CLIENT_ID ARG AUTH0_CLIENT_SECRET ARG AUTH0_AUDIENCE ARG AUTH0_SCOPE +ARG MONGO_URI +ARG MONGO_DB_NAME +ARG VAPID_PRIVATE_KEY +ARG VAPID_ADMIN_EMAIL ARG NEXT_PUBLIC_POSTHOG_KEY ARG NEXT_PUBLIC_POSTHOG_HOST ARG NEXT_PUBLIC_VAPID_PUBLIC_KEY @@ -40,6 +44,10 @@ ENV AUTH0_CLIENT_ID=$AUTH0_CLIENT_ID ENV AUTH0_CLIENT_SECRET=$AUTH0_CLIENT_SECRET ENV AUTH0_AUDIENCE=$AUTH0_AUDIENCE ENV AUTH0_SCOPE=$AUTH0_SCOPE +ENV VAPID_PRIVATE_KEY=$VAPID_PRIVATE_KEY +ENV VAPID_ADMIN_EMAIL=$VAPID_ADMIN_EMAIL +ENV MONGO_URI=$MONGO_URI +ENV MONGO_DB_NAME=$MONGO_DB_NAME ENV NEXT_PUBLIC_POSTHOG_KEY=$NEXT_PUBLIC_POSTHOG_KEY ENV NEXT_PUBLIC_POSTHOG_HOST=$NEXT_PUBLIC_POSTHOG_HOST ENV NEXT_PUBLIC_VAPID_PUBLIC_KEY=$NEXT_PUBLIC_VAPID_PUBLIC_KEY diff --git a/src/client/docker-compose.yaml b/src/client/docker-compose.yaml index 737cbfc1..3eb3b4c5 100644 --- a/src/client/docker-compose.yaml +++ b/src/client/docker-compose.yaml @@ -18,9 +18,13 @@ services: - AUTH0_CLIENT_SECRET=${AUTH0_CLIENT_SECRET} - AUTH0_AUDIENCE=${AUTH0_AUDIENCE} - AUTH0_SCOPE=${AUTH0_SCOPE} + - MONGO_URI=${MONGO_URI} + - MONGO_DB_NAME=${MONGO_DB_NAME} - NEXT_PUBLIC_POSTHOG_KEY=${NEXT_PUBLIC_POSTHOG_KEY} - NEXT_PUBLIC_POSTHOG_HOST=${NEXT_PUBLIC_POSTHOG_HOST} - NEXT_PUBLIC_VAPID_PUBLIC_KEY=${NEXT_PUBLIC_VAPID_PUBLIC_KEY} + - VAPID_PRIVATE_KEY=${VAPID_PRIVATE_KEY} + - VAPID_ADMIN_EMAIL=${VAPID_ADMIN_EMAIL} container_name: sentient-client restart: unless-stopped ports: From 
0e1c8350d59750c4210f9ae218813487f6f901e8 Mon Sep 17 00:00:00 2001 From: Anshuman Date: Mon, 18 Aug 2025 21:50:46 +0530 Subject: [PATCH 09/10] feat: Add Outlook integration with Microsoft Graph API - Add Outlook configuration to INTEGRATIONS_CONFIG - Add Microsoft OAuth environment variables (OUTLOOK_CLIENT_ID, OUTLOOK_CLIENT_SECRET) - Create Outlook MCP server with full email management capabilities - Implement OAuth flow for Microsoft Graph API authentication - Add privacy filters and email formatting utilities - Update worker configurations to include Outlook integration - Add comprehensive documentation and test client Features: - Read emails from different folders (Inbox, Sent Items, etc.) - Send new emails with CC/BCC support - Reply to existing email threads - Search emails using Microsoft Graph search - List and navigate email folders - Apply user privacy filters to email content Closes #72 --- src/server/main/config.py | 13 + src/server/main/integrations/routes.py | 20 +- src/server/mcp_hub/outlook/README.md | 134 ++++++++++ src/server/mcp_hub/outlook/auth.py | 98 +++++++ src/server/mcp_hub/outlook/main.py | 269 ++++++++++++++++++++ src/server/mcp_hub/outlook/prompts.py | 16 ++ src/server/mcp_hub/outlook/requirements.txt | 6 + src/server/mcp_hub/outlook/test_client.py | 82 ++++++ src/server/mcp_hub/outlook/utils.py | 188 ++++++++++++++ src/server/workers/executor/config.py | 10 + src/server/workers/planner/config.py | 10 + 11 files changed, 844 insertions(+), 2 deletions(-) create mode 100644 src/server/mcp_hub/outlook/README.md create mode 100644 src/server/mcp_hub/outlook/auth.py create mode 100644 src/server/mcp_hub/outlook/main.py create mode 100644 src/server/mcp_hub/outlook/prompts.py create mode 100644 src/server/mcp_hub/outlook/requirements.txt create mode 100644 src/server/mcp_hub/outlook/test_client.py create mode 100644 src/server/mcp_hub/outlook/utils.py diff --git a/src/server/main/config.py b/src/server/main/config.py index 0eda903b..2637d13d 
100644 --- a/src/server/main/config.py +++ b/src/server/main/config.py @@ -84,6 +84,8 @@ DISCORD_CLIENT_SECRET = os.getenv("DISCORD_CLIENT_SECRET") TODOIST_CLIENT_ID = os.getenv("TODOIST_CLIENT_ID") TODOIST_CLIENT_SECRET = os.getenv("TODOIST_CLIENT_SECRET") +OUTLOOK_CLIENT_ID = os.getenv("OUTLOOK_CLIENT_ID") +OUTLOOK_CLIENT_SECRET = os.getenv("OUTLOOK_CLIENT_SECRET") # --- WhatsApp --- WAHA_URL = os.getenv("WAHA_URL") @@ -347,5 +349,16 @@ "name": "trello_server", "url": os.getenv("TRELLO_MCP_SERVER_URL", "http://localhost:9025/sse") } + }, + "outlook": { + "display_name": "Outlook", + "description": "Connect to read, send, and manage emails in Outlook. The agent can list emails, read message content, send new emails, reply to messages, and manage folders.", + "auth_type": "oauth", + "icon": "IconMail", + "category": "Communication", + "mcp_server_config": { + "name": "outlook_server", + "url": os.getenv("OUTLOOK_MCP_SERVER_URL", "http://localhost:9027/sse") + } } } \ No newline at end of file diff --git a/src/server/main/integrations/routes.py b/src/server/main/integrations/routes.py index bb7fa78b..91615a80 100644 --- a/src/server/main/integrations/routes.py +++ b/src/server/main/integrations/routes.py @@ -23,6 +23,7 @@ TRELLO_CLIENT_ID, COMPOSIO_API_KEY, GITHUB_CLIENT_ID, GITHUB_CLIENT_SECRET, SLACK_CLIENT_ID, SLACK_CLIENT_SECRET, NOTION_CLIENT_ID, NOTION_CLIENT_SECRET, + OUTLOOK_CLIENT_ID, OUTLOOK_CLIENT_SECRET, ) from workers.tasks import execute_triggered_task from workers.proactive.utils import event_pre_filter @@ -73,8 +74,10 @@ async def get_integration_sources(user_id: str = Depends(auth_helper.get_current source_info["client_id"] = NOTION_CLIENT_ID elif name == 'trello': source_info["client_id"] = TRELLO_CLIENT_ID - elif name == 'discord': - source_info["client_id"] = DISCORD_CLIENT_ID + elif name == 'discord': + source_info["client_id"] = DISCORD_CLIENT_ID + elif name == 'outlook': + source_info["client_id"] = OUTLOOK_CLIENT_ID 
all_sources.append(source_info) @@ -200,6 +203,15 @@ async def connect_oauth_integration( "code": request.code, "redirect_uri": request.redirect_uri } + elif service_name == 'outlook': + token_url = "https://login.microsoftonline.com/common/oauth2/v2.0/token" + token_payload = { + "client_id": OUTLOOK_CLIENT_ID, + "client_secret": OUTLOOK_CLIENT_SECRET, + "grant_type": "authorization_code", + "code": request.code, + "redirect_uri": request.redirect_uri + } else: raise HTTPException(status_code=400, detail=f"OAuth flow not implemented for {service_name}") @@ -241,6 +253,10 @@ async def connect_oauth_integration( if "access_token" not in token_data: raise HTTPException(status_code=400, detail=f"Discord OAuth error: {token_data.get('error_description', 'No access token.')}") creds_to_save = token_data # This includes access_token, refresh_token, and the 'bot' object with bot token + elif service_name == 'outlook': + if "access_token" not in token_data: + raise HTTPException(status_code=400, detail=f"Outlook OAuth error: {token_data.get('error_description', 'No access token.')}") + creds_to_save = token_data # This includes access_token, refresh_token, and expires_in encrypted_creds = aes_encrypt(json.dumps(creds_to_save)) diff --git a/src/server/mcp_hub/outlook/README.md b/src/server/mcp_hub/outlook/README.md new file mode 100644 index 00000000..890716af --- /dev/null +++ b/src/server/mcp_hub/outlook/README.md @@ -0,0 +1,134 @@ +# Outlook Integration for Sentient + +This module provides Outlook email integration for the Sentient AI assistant using Microsoft Graph API. + +## Features + +- **Read Emails**: List and read emails from different folders (Inbox, Sent Items, etc.) 
+- **Send Emails**: Compose and send new emails +- **Reply to Emails**: Reply to existing email threads +- **Search Emails**: Search for specific emails using Microsoft Graph search +- **Manage Folders**: List and navigate email folders +- **Privacy Filters**: Apply user-defined privacy filters to email content + +## Setup + +### 1. Microsoft Azure App Registration + +1. Go to [Azure Portal](https://portal.azure.com) +2. Navigate to "Azure Active Directory" > "App registrations" +3. Click "New registration" +4. Fill in the details: + - **Name**: Sentient Outlook Integration + - **Supported account types**: Accounts in any organizational directory and personal Microsoft accounts + - **Redirect URI**: Web - `https://your-domain.com/integrations/oauth/callback` + +### 2. Configure API Permissions + +1. In your app registration, go to "API permissions" +2. Click "Add a permission" +3. Select "Microsoft Graph" +4. Choose "Delegated permissions" +5. Add the following permissions: + - `Mail.Read` - Read user mail + - `Mail.Send` - Send mail as a user + - `User.Read` - Sign in and read user profile + +### 3. Environment Variables + +Add the following environment variables to your `.env` file: + +```bash +# Outlook OAuth Configuration +OUTLOOK_CLIENT_ID=your_azure_app_client_id +OUTLOOK_CLIENT_SECRET=your_azure_app_client_secret + +# Outlook MCP Server URL (optional, defaults to localhost:9027) +OUTLOOK_MCP_SERVER_URL=http://localhost:9027/sse +``` + +### 4. Start the Outlook MCP Server + +```bash +cd src/server/mcp_hub/outlook +python main.py +``` + +The server will start on port 9027 by default. + +## Usage + +### Available Tools + +1. **get_emails**: Retrieve emails from a specific folder + - Parameters: `folder`, `top`, `skip`, `search` + +2. **get_email**: Get a specific email by ID + - Parameters: `message_id` + +3. **send_email**: Send a new email + - Parameters: `subject`, `body`, `to_recipients`, `cc_recipients`, `bcc_recipients` + +4. 
**reply_to_email**: Reply to an existing email
+   - Parameters: `message_id`, `body`, `cc_recipients`, `bcc_recipients`
+
+5. **get_folders**: List email folders
+   - Parameters: None
+
+6. **search_emails**: Search for emails
+   - Parameters: `query`, `top`
+
+### Example Usage
+
+```python
+# Get recent emails from inbox
+emails = await get_emails(folder="inbox", top=10)
+
+# Send an email
+result = await send_email(
+    subject="Test Email",
+    body="This is a test email.",
+    to_recipients=["recipient@example.com"]
+)
+
+# Search for emails
+search_results = await search_emails(query="meeting", top=5)
+```
+
+## Privacy and Security
+
+- All credentials are encrypted using AES encryption
+- User privacy filters are applied to email content
+- Access tokens are stored securely in MongoDB
+- The integration respects Microsoft's data handling policies
+
+## Troubleshooting
+
+### Common Issues
+
+1. **OAuth Error**: Ensure your redirect URI matches exactly in Azure app registration
+2. **Permission Denied**: Verify all required API permissions are granted
+3. **Token Expired**: The integration handles token refresh automatically
+4. **Connection Issues**: Check that the MCP server is running on the correct port
+
+### Debug Mode
+
+Enable debug logging by setting the environment variable:
+```bash
+ENVIRONMENT=dev-local
+```
+
+## API Reference
+
+The integration uses Microsoft Graph API v1.0. For detailed API documentation, visit:
+https://learn.microsoft.com/en-us/graph/api/overview
+
+## Contributing
+
+When contributing to this integration:
+
+1. Follow the existing code patterns
+2. Add appropriate error handling
+3. Include privacy filter considerations
+4. Update this README with any new features
+5.
Test thoroughly with different email scenarios diff --git a/src/server/mcp_hub/outlook/auth.py b/src/server/mcp_hub/outlook/auth.py new file mode 100644 index 00000000..bca1bfd3 --- /dev/null +++ b/src/server/mcp_hub/outlook/auth.py @@ -0,0 +1,98 @@ +import os +import json +import logging +from typing import Optional, Dict, Any +from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes +from cryptography.hazmat.primitives import padding +from cryptography.hazmat.backends import default_backend +from motor.motor_asyncio import AsyncIOMotorClient +from dotenv import load_dotenv + +# Load environment variables +load_dotenv() + +logger = logging.getLogger(__name__) + +# MongoDB connection +MONGO_URI = os.getenv("MONGO_URI", "mongodb://localhost:27017") +MONGO_DB_NAME = os.getenv("MONGO_DB_NAME", "sentient") +mongo_client = AsyncIOMotorClient(MONGO_URI) +db = mongo_client[MONGO_DB_NAME] + +# Encryption key for credentials +ENCRYPTION_KEY = os.getenv("ENCRYPTION_KEY", "your-32-byte-encryption-key-here").encode() + +def aes_decrypt(encrypted_data: str) -> str: + """Decrypt AES encrypted data.""" + try: + # Decode from base64 + encrypted_bytes = bytes.fromhex(encrypted_data) + + # Extract IV and ciphertext + iv = encrypted_bytes[:16] + ciphertext = encrypted_bytes[16:] + + # Create cipher + cipher = Cipher(algorithms.AES(ENCRYPTION_KEY), modes.CBC(iv), backend=default_backend()) + decryptor = cipher.decryptor() + + # Decrypt + padded_data = decryptor.update(ciphertext) + decryptor.finalize() + + # Remove padding + unpadder = padding.PKCS7(128).unpadder() + data = unpadder.update(padded_data) + unpadder.finalize() + + return data.decode('utf-8') + except Exception as e: + logger.error(f"Error decrypting data: {e}") + raise + +def get_user_id_from_context(ctx) -> str: + """Extract user_id from MCP context.""" + try: + # Extract user_id from context metadata + user_id = ctx.metadata.get("user_id") + if not user_id: + raise ValueError("user_id not found in 
context metadata") + return user_id + except Exception as e: + logger.error(f"Error extracting user_id from context: {e}") + raise + +async def get_outlook_credentials(user_id: str) -> Dict[str, Any]: + """Get Outlook credentials for a user from MongoDB.""" + try: + user_profile = await db.user_profiles.find_one({"user_id": user_id}) + if not user_profile: + raise ValueError(f"User profile not found for user_id: {user_id}") + + integrations = user_profile.get("userData", {}).get("integrations", {}) + outlook_integration = integrations.get("outlook", {}) + + if not outlook_integration.get("connected", False): + raise ValueError("Outlook not connected for this user") + + encrypted_creds = outlook_integration.get("credentials") + if not encrypted_creds: + raise ValueError("No credentials found for Outlook integration") + + # Decrypt credentials + decrypted_creds = aes_decrypt(encrypted_creds) + return json.loads(decrypted_creds) + + except Exception as e: + logger.error(f"Error getting Outlook credentials for user {user_id}: {e}") + raise + +async def get_user_info(user_id: str) -> Dict[str, Any]: + """Get user information including privacy filters.""" + try: + user_profile = await db.user_profiles.find_one({"user_id": user_id}) + if not user_profile: + return {} + + return user_profile.get("userData", {}) + except Exception as e: + logger.error(f"Error getting user info for {user_id}: {e}") + return {} diff --git a/src/server/mcp_hub/outlook/main.py b/src/server/mcp_hub/outlook/main.py new file mode 100644 index 00000000..c046c712 --- /dev/null +++ b/src/server/mcp_hub/outlook/main.py @@ -0,0 +1,269 @@ +import os +import asyncio +import logging +from typing import Dict, Any, List, Optional + +from dotenv import load_dotenv +from fastmcp import FastMCP, Context +from fastmcp.prompts.prompt import Message +from fastmcp.utilities.logging import configure_logging, get_logger +from fastmcp.exceptions import ToolError + +# Local imports +from . import auth +from . 
import prompts +from . import utils as helpers + +# --- Standardized Logging Setup --- +configure_logging(level="INFO") +logger = get_logger(__name__) + +# Conditionally load .env for local development +ENVIRONMENT = os.getenv('ENVIRONMENT', 'dev-local') +if ENVIRONMENT == 'dev-local': + dotenv_path = os.path.join(os.path.dirname(__file__), '..', '..', '.env') + if os.path.exists(dotenv_path): + load_dotenv(dotenv_path=dotenv_path) + +# --- Server Initialization --- +mcp = FastMCP( + name="OutlookServer", + instructions="Provides a comprehensive suite of tools to read, search, send, and manage emails in Outlook using Microsoft Graph API.", +) + +# --- Prompt Registration --- +@mcp.resource("prompt://outlook-agent-system") +def get_outlook_system_prompt() -> str: + """Provides the system prompt for the Outlook agent.""" + return prompts.outlook_agent_system_prompt + +@mcp.prompt(name="outlook_user_prompt_builder") +def build_outlook_user_prompt(query: str, username: str, previous_tool_response: str = "{}") -> Message: + """Builds a formatted user prompt for the Outlook agent.""" + content = prompts.outlook_agent_user_prompt.format( + query=query, + username=username, + previous_tool_response=previous_tool_response + ) + return Message(role="user", content=content) + +# --- Tool Helper --- +async def _execute_outlook_action(ctx: Context, action_name: str, **kwargs) -> Dict[str, Any]: + """Helper to handle auth and execution for all Outlook tools.""" + try: + user_id = auth.get_user_id_from_context(ctx) + credentials = await auth.get_outlook_credentials(user_id) + + if not credentials or "access_token" not in credentials: + raise ToolError("Outlook not connected or access token missing") + + # Create Outlook API client + outlook_api = helpers.OutlookAPI(credentials["access_token"]) + + # Execute the requested action + if action_name == "get_emails": + return await outlook_api.get_emails(**kwargs) + elif action_name == "get_email": + return await 
outlook_api.get_email(**kwargs) + elif action_name == "send_email": + return await outlook_api.send_email(**kwargs) + elif action_name == "get_folders": + return await outlook_api.get_folders(**kwargs) + elif action_name == "search_emails": + return await outlook_api.search_emails(**kwargs) + else: + raise ToolError(f"Unknown action: {action_name}") + + except Exception as e: + logger.error(f"Error executing Outlook action {action_name}: {e}") + raise ToolError(f"Outlook action failed: {str(e)}") + +# --- Tool Definitions --- +@mcp.tool() +async def get_emails(ctx: Context, folder: str = "inbox", top: int = 10, skip: int = 0, + search: Optional[str] = None) -> Dict[str, Any]: + """Get emails from a specific folder in Outlook.""" + try: + result = await _execute_outlook_action(ctx, "get_emails", folder=folder, top=top, skip=skip, search=search) + + # Get user info for privacy filters + user_id = auth.get_user_id_from_context(ctx) + user_info = await auth.get_user_info(user_id) + privacy_filters = user_info.get("privacy_filters", {}) + + # Apply privacy filters and format emails + emails = result.get("value", []) + filtered_emails = helpers.apply_privacy_filters(emails, privacy_filters) + formatted_emails = [helpers.format_email_summary(email) for email in filtered_emails] + + return { + "emails": formatted_emails, + "total_count": len(formatted_emails), + "folder": folder + } + + except Exception as e: + logger.error(f"Error getting emails: {e}") + raise ToolError(f"Failed to get emails: {str(e)}") + +@mcp.tool() +async def get_email(ctx: Context, message_id: str) -> Dict[str, Any]: + """Get a specific email by ID.""" + try: + result = await _execute_outlook_action(ctx, "get_email", message_id=message_id) + + # Format the email for better readability + email = result + from_info = email.get("from", {}) + to_info = email.get("toRecipients", []) + cc_info = email.get("ccRecipients", []) + bcc_info = email.get("bccRecipients", []) + + formatted_email = { + "id": 
email.get("id"), + "subject": email.get("subject", "No Subject"), + "from": from_info.get("emailAddress", {}).get("address", "Unknown"), + "from_name": from_info.get("emailAddress", {}).get("name", "Unknown"), + "to": [recipient.get("emailAddress", {}).get("address", "") for recipient in to_info], + "cc": [recipient.get("emailAddress", {}).get("address", "") for recipient in cc_info], + "bcc": [recipient.get("emailAddress", {}).get("address", "") for recipient in bcc_info], + "received_date": email.get("receivedDateTime"), + "sent_date": email.get("sentDateTime"), + "is_read": email.get("isRead", False), + "has_attachments": email.get("hasAttachments", False), + "body": email.get("body", {}).get("content", ""), + "body_preview": email.get("bodyPreview", "") + } + + return formatted_email + + except Exception as e: + logger.error(f"Error getting email {message_id}: {e}") + raise ToolError(f"Failed to get email: {str(e)}") + +@mcp.tool() +async def send_email(ctx: Context, subject: str, body: str, to_recipients: List[str], + cc_recipients: Optional[List[str]] = None, + bcc_recipients: Optional[List[str]] = None, + reply_to_message_id: Optional[str] = None) -> Dict[str, Any]: + """Send an email through Outlook.""" + try: + result = await _execute_outlook_action( + ctx, "send_email", + subject=subject, + body=body, + to_recipients=to_recipients, + cc_recipients=cc_recipients, + bcc_recipients=bcc_recipients, + reply_to_message_id=reply_to_message_id + ) + + return { + "success": True, + "message": "Email sent successfully", + "to": to_recipients, + "subject": subject + } + + except Exception as e: + logger.error(f"Error sending email: {e}") + raise ToolError(f"Failed to send email: {str(e)}") + +@mcp.tool() +async def get_folders(ctx: Context) -> Dict[str, Any]: + """Get email folders in Outlook.""" + try: + result = await _execute_outlook_action(ctx, "get_folders") + + folders = result.get("value", []) + formatted_folders = [] + + for folder in folders: + 
formatted_folders.append({ + "id": folder.get("id"), + "name": folder.get("displayName"), + "message_count": folder.get("messageCount", 0), + "unread_count": folder.get("unreadItemCount", 0) + }) + + return { + "folders": formatted_folders, + "total_folders": len(formatted_folders) + } + + except Exception as e: + logger.error(f"Error getting folders: {e}") + raise ToolError(f"Failed to get folders: {str(e)}") + +@mcp.tool() +async def search_emails(ctx: Context, query: str, top: int = 10) -> Dict[str, Any]: + """Search emails in Outlook.""" + try: + result = await _execute_outlook_action(ctx, "search_emails", query=query, top=top) + + # Get user info for privacy filters + user_id = auth.get_user_id_from_context(ctx) + user_info = await auth.get_user_info(user_id) + privacy_filters = user_info.get("privacy_filters", {}) + + # Apply privacy filters and format emails + emails = result.get("value", []) + filtered_emails = helpers.apply_privacy_filters(emails, privacy_filters) + formatted_emails = [helpers.format_email_summary(email) for email in filtered_emails] + + return { + "emails": formatted_emails, + "total_count": len(formatted_emails), + "search_query": query + } + + except Exception as e: + logger.error(f"Error searching emails: {e}") + raise ToolError(f"Failed to search emails: {str(e)}") + +@mcp.tool() +async def reply_to_email(ctx: Context, message_id: str, body: str, + cc_recipients: Optional[List[str]] = None, + bcc_recipients: Optional[List[str]] = None) -> Dict[str, Any]: + """Reply to an existing email.""" + try: + # First get the original email to extract recipients + original_email = await _execute_outlook_action(ctx, "get_email", message_id=message_id) + + # Extract the original sender as recipient for reply + from_info = original_email.get("from", {}) + reply_to_email = from_info.get("emailAddress", {}).get("address") + + if not reply_to_email: + raise ToolError("Could not determine recipient for reply") + + # Create reply subject + 
original_subject = original_email.get("subject", "") + reply_subject = f"Re: {original_subject}" if not original_subject.startswith("Re:") else original_subject + + # Send the reply + result = await _execute_outlook_action( + ctx, "send_email", + subject=reply_subject, + body=body, + to_recipients=[reply_to_email], + cc_recipients=cc_recipients, + bcc_recipients=bcc_recipients, + reply_to_message_id=message_id + ) + + return { + "success": True, + "message": "Reply sent successfully", + "to": [reply_to_email], + "subject": reply_subject, + "original_message_id": message_id + } + + except Exception as e: + logger.error(f"Error replying to email: {e}") + raise ToolError(f"Failed to reply to email: {str(e)}") + +if __name__ == "__main__": + import uvicorn + uvicorn.run(mcp.app, host="0.0.0.0", port=9027) diff --git a/src/server/mcp_hub/outlook/prompts.py b/src/server/mcp_hub/outlook/prompts.py new file mode 100644 index 00000000..d893111b --- /dev/null +++ b/src/server/mcp_hub/outlook/prompts.py @@ -0,0 +1,16 @@ +outlook_agent_system_prompt = """You are an Outlook email assistant that helps users manage their emails through Microsoft Graph API. You can: + +1. List emails from different folders (Inbox, Sent Items, etc.) +2. Read email content and details +3. Send new emails +4. Reply to existing emails +5. Search for specific emails +6. Manage email folders + +Always be helpful, concise, and respect user privacy. When reading emails, focus on the most relevant information and summarize when appropriate.""" + +outlook_agent_user_prompt = """User Query: {query} +Username: {username} +Previous Tool Response: {previous_tool_response} + +Please help the user with their Outlook email management request. 
Use the available tools to perform the requested action and provide a clear, helpful response.""" diff --git a/src/server/mcp_hub/outlook/requirements.txt b/src/server/mcp_hub/outlook/requirements.txt new file mode 100644 index 00000000..f591ea1d --- /dev/null +++ b/src/server/mcp_hub/outlook/requirements.txt @@ -0,0 +1,6 @@ +fastmcp +httpx +python-dotenv +cryptography +motor +requests diff --git a/src/server/mcp_hub/outlook/test_client.py b/src/server/mcp_hub/outlook/test_client.py new file mode 100644 index 00000000..6f1a3900 --- /dev/null +++ b/src/server/mcp_hub/outlook/test_client.py @@ -0,0 +1,82 @@ +#!/usr/bin/env python3 +""" +Test client for Outlook MCP Server +""" + +import asyncio +import json +import logging +from typing import Dict, Any + +from fastmcp import FastMCPClient +from fastmcp.utilities.logging import configure_logging, get_logger + +# Configure logging +configure_logging(level="INFO") +logger = get_logger(__name__) + +async def test_outlook_server(): + """Test the Outlook MCP server functionality.""" + + # Create client + client = FastMCPClient( + name="OutlookTestClient", + server_url="http://localhost:9027/sse" + ) + + try: + # Test context with mock user_id + test_context = { + "metadata": { + "user_id": "test_user_123" + } + } + + logger.info("Testing Outlook MCP Server...") + + # Test 1: Get folders + logger.info("Test 1: Getting email folders...") + try: + folders_result = await client.call_tool( + "get_folders", + context=test_context + ) + logger.info(f"Folders result: {json.dumps(folders_result, indent=2)}") + except Exception as e: + logger.error(f"Error getting folders: {e}") + + # Test 2: Get emails from inbox + logger.info("Test 2: Getting emails from inbox...") + try: + emails_result = await client.call_tool( + "get_emails", + context=test_context, + folder="inbox", + top=5 + ) + logger.info(f"Emails result: {json.dumps(emails_result, indent=2)}") + except Exception as e: + logger.error(f"Error getting emails: {e}") + + # Test 
3: Search emails + logger.info("Test 3: Searching emails...") + try: + search_result = await client.call_tool( + "search_emails", + context=test_context, + query="test", + top=3 + ) + logger.info(f"Search result: {json.dumps(search_result, indent=2)}") + except Exception as e: + logger.error(f"Error searching emails: {e}") + + logger.info("Outlook MCP Server tests completed!") + + except Exception as e: + logger.error(f"Error testing Outlook MCP server: {e}") + finally: + await client.close() + +if __name__ == "__main__": + asyncio.run(test_outlook_server()) diff --git a/src/server/mcp_hub/outlook/utils.py b/src/server/mcp_hub/outlook/utils.py new file mode 100644 index 00000000..957d499a --- /dev/null +++ b/src/server/mcp_hub/outlook/utils.py @@ -0,0 +1,188 @@ +import httpx +import logging +from typing import Dict, Any, List, Optional +from datetime import datetime, timezone + +logger = logging.getLogger(__name__) + +class OutlookAPI: + """Helper class for Microsoft Graph API calls.""" + + def __init__(self, access_token: str): + self.access_token = access_token + self.base_url = "https://graph.microsoft.com/v1.0" + self.headers = { + "Authorization": f"Bearer {access_token}", + "Content-Type": "application/json" + } + + async def get_emails(self, folder: str = "inbox", top: int = 10, skip: int = 0, + search: Optional[str] = None) -> Dict[str, Any]: + """Get emails from a specific folder.""" + try: + url = f"{self.base_url}/me/mailFolders/{folder}/messages" + params = { + "$top": top, + "$skip": skip, + "$orderby": "receivedDateTime desc", + "$select": "id,subject,from,toRecipients,receivedDateTime,isRead,hasAttachments,bodyPreview" + } + + if search: + params["$filter"] = f"contains(subject,'{search}') or contains(body/content,'{search}')" + + async with httpx.AsyncClient() as client: + response = await client.get(url, headers=self.headers, params=params) + response.raise_for_status() + return response.json() + + except Exception as e: + logger.error(f"Error 
getting emails: {e}") + raise + + async def get_email(self, message_id: str) -> Dict[str, Any]: + """Get a specific email by ID.""" + try: + url = f"{self.base_url}/me/messages/{message_id}" + params = { + "$select": "id,subject,from,toRecipients,ccRecipients,bccRecipients,receivedDateTime,sentDateTime,isRead,hasAttachments,body,bodyPreview" + } + + async with httpx.AsyncClient() as client: + response = await client.get(url, headers=self.headers, params=params) + response.raise_for_status() + return response.json() + + except Exception as e: + logger.error(f"Error getting email {message_id}: {e}") + raise + + async def send_email(self, subject: str, body: str, to_recipients: List[str], + cc_recipients: Optional[List[str]] = None, + bcc_recipients: Optional[List[str]] = None, + reply_to_message_id: Optional[str] = None) -> Dict[str, Any]: + """Send an email.""" + try: + url = f"{self.base_url}/me/sendMail" + + # Prepare recipients + to_emails = [{"emailAddress": {"address": email}} for email in to_recipients] + cc_emails = [{"emailAddress": {"address": email}} for email in (cc_recipients or [])] + bcc_emails = [{"emailAddress": {"address": email}} for email in (bcc_recipients or [])] + + # Prepare message + message = { + "subject": subject, + "body": { + "contentType": "HTML", + "content": body + }, + "toRecipients": to_emails + } + + if cc_emails: + message["ccRecipients"] = cc_emails + if bcc_emails: + message["bccRecipients"] = bcc_emails + if reply_to_message_id: + message["replyTo"] = [{"id": reply_to_message_id}] + + payload = {"message": message, "saveToSentItems": True} + + async with httpx.AsyncClient() as client: + response = await client.post(url, headers=self.headers, json=payload) + response.raise_for_status() + return {"success": True, "message": "Email sent successfully"} + + except Exception as e: + logger.error(f"Error sending email: {e}") + raise + + async def get_folders(self) -> Dict[str, Any]: + """Get email folders.""" + try: + url = 
f"{self.base_url}/me/mailFolders" + params = { + "$select": "id,displayName,messageCount,unreadItemCount" + } + + async with httpx.AsyncClient() as client: + response = await client.get(url, headers=self.headers, params=params) + response.raise_for_status() + return response.json() + + except Exception as e: + logger.error(f"Error getting folders: {e}") + raise + + async def search_emails(self, query: str, top: int = 10) -> Dict[str, Any]: + """Search emails using Microsoft Graph search.""" + try: + url = f"{self.base_url}/me/messages" + params = { + "$search": f'"{query}"', + "$top": top, + "$orderby": "receivedDateTime desc", + "$select": "id,subject,from,toRecipients,receivedDateTime,isRead,hasAttachments,bodyPreview" + } + + async with httpx.AsyncClient() as client: + response = await client.get(url, headers=self.headers, params=params) + response.raise_for_status() + return response.json() + + except Exception as e: + logger.error(f"Error searching emails: {e}") + raise + +def format_email_summary(email: Dict[str, Any]) -> Dict[str, Any]: + """Format email data for better readability.""" + try: + from_info = email.get("from", {}) + to_info = email.get("toRecipients", []) + + return { + "id": email.get("id"), + "subject": email.get("subject", "No Subject"), + "from": from_info.get("emailAddress", {}).get("address", "Unknown"), + "from_name": from_info.get("emailAddress", {}).get("name", "Unknown"), + "to": [recipient.get("emailAddress", {}).get("address", "") for recipient in to_info], + "received_date": email.get("receivedDateTime"), + "is_read": email.get("isRead", False), + "has_attachments": email.get("hasAttachments", False), + "body_preview": email.get("bodyPreview", "") + } + except Exception as e: + logger.error(f"Error formatting email: {e}") + return email + +def apply_privacy_filters(emails: List[Dict[str, Any]], privacy_filters: Dict[str, Any]) -> List[Dict[str, Any]]: + """Apply privacy filters to email list.""" + try: + if not privacy_filters: + 
return emails + + keyword_filters = privacy_filters.get("keywords", []) + email_filters = [email.lower() for email in privacy_filters.get("emails", [])] + + filtered_emails = [] + for email in emails: + # Skip if email contains filtered keywords + subject = email.get("subject", "").lower() + body = email.get("bodyPreview", "").lower() + + if any(keyword.lower() in subject or keyword.lower() in body for keyword in keyword_filters): + continue + + # Skip if from filtered email addresses + from_email = email.get("from", "").lower() + if from_email in email_filters: + continue + + filtered_emails.append(email) + + return filtered_emails + + except Exception as e: + logger.error(f"Error applying privacy filters: {e}") + return emails diff --git a/src/server/workers/executor/config.py b/src/server/workers/executor/config.py index 874193c1..c2fb2774 100644 --- a/src/server/workers/executor/config.py +++ b/src/server/workers/executor/config.py @@ -212,5 +212,15 @@ "name": "tasks_server", "url": os.getenv("TASKS_MCP_SERVER_URL", "http://localhost:9018/sse/") } + }, + "outlook": { + "display_name": "Outlook", + "description": "Connect to read, send, and manage emails in Outlook. The agent can list emails, read message content, send new emails, reply to messages, and manage folders.", + "auth_type": "oauth", + "icon": "IconMail", + "mcp_server_config": { + "name": "outlook_server", + "url": os.getenv("OUTLOOK_MCP_SERVER_URL", "http://localhost:9027/sse") + } } } \ No newline at end of file diff --git a/src/server/workers/planner/config.py b/src/server/workers/planner/config.py index 26835b39..a3a79053 100644 --- a/src/server/workers/planner/config.py +++ b/src/server/workers/planner/config.py @@ -220,6 +220,16 @@ "name": "whatsapp_server", "url": os.getenv("WHATSAPP_MCP_SERVER_URL", "http://localhost:9024/sse") } + }, + "outlook": { + "display_name": "Outlook", + "description": "Connect to read, send, and manage emails in Outlook. 
The agent can list emails, read message content, send new emails, reply to messages, and manage folders.", + "auth_type": "oauth", + "icon": "IconMail", + "mcp_server_config": { + "name": "outlook_server", + "url": os.getenv("OUTLOOK_MCP_SERVER_URL", "http://localhost:9027/sse") + } } } From c85215a6da8d5a62b3f39f3a22a4469467d1fc74 Mon Sep 17 00:00:00 2001 From: Anshuman Date: Mon, 18 Aug 2025 23:13:02 +0530 Subject: [PATCH 10/10] fix: Correct indentation error in routes.py for discord and outlook OAuth --- src/server/main/integrations/routes.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/server/main/integrations/routes.py b/src/server/main/integrations/routes.py index 91615a80..44d13031 100644 --- a/src/server/main/integrations/routes.py +++ b/src/server/main/integrations/routes.py @@ -74,10 +74,10 @@ async def get_integration_sources(user_id: str = Depends(auth_helper.get_current source_info["client_id"] = NOTION_CLIENT_ID elif name == 'trello': source_info["client_id"] = TRELLO_CLIENT_ID - elif name == 'discord': - source_info["client_id"] = DISCORD_CLIENT_ID + elif name == 'discord': + source_info["client_id"] = DISCORD_CLIENT_ID elif name == 'outlook': - source_info["client_id"] = OUTLOOK_CLIENT_ID + source_info["client_id"] = OUTLOOK_CLIENT_ID all_sources.append(source_info)
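Of the code added in this series, the `apply_privacy_filters` helper in `src/server/mcp_hub/outlook/utils.py` is the piece most easily sanity-checked in isolation. Below is a minimal standalone sketch of the same keyword/sender filtering, re-implemented inline; the sample messages and filter values are invented for illustration. One deliberate difference: the sketch normalizes the sender field so it works both on raw Graph payloads (where `from` is a nested dict) and on already-formatted summaries (where `from` is a plain string), whereas the patched helper compares `from` as a string only.

```python
# Hedged, standalone sketch of the Outlook privacy-filter logic.
# Field names (subject, bodyPreview, from, emailAddress.address) mirror
# the Microsoft Graph message shape used in the patch; sample data is invented.

def sender_address(email):
    """Return the sender address for either a raw Graph message
    (nested dict under 'from') or a formatted summary (plain string)."""
    sender = email.get("from", "")
    if isinstance(sender, dict):
        return sender.get("emailAddress", {}).get("address", "")
    return sender

def apply_privacy_filters(emails, privacy_filters):
    """Drop emails whose subject/preview mentions a filtered keyword,
    or whose sender is on the filtered-address list."""
    if not privacy_filters:
        return emails
    keywords = [k.lower() for k in privacy_filters.get("keywords", [])]
    blocked = [a.lower() for a in privacy_filters.get("emails", [])]
    kept = []
    for email in emails:
        text = (email.get("subject", "") + " " + email.get("bodyPreview", "")).lower()
        if any(k in text for k in keywords):
            continue  # filtered keyword appears in subject or preview
        if sender_address(email).lower() in blocked:
            continue  # sender is on the blocked list
        kept.append(email)
    return kept

inbox = [
    {"subject": "Quarterly report", "bodyPreview": "numbers attached",
     "from": "boss@example.com"},
    {"subject": "Salary review", "bodyPreview": "confidential",
     "from": "hr@example.com"},
    {"subject": "Lunch?", "bodyPreview": "12:30 downstairs",
     "from": {"emailAddress": {"address": "spammer@example.com"}}},
]
filters = {"keywords": ["salary"], "emails": ["spammer@example.com"]}
print([e["subject"] for e in apply_privacy_filters(inbox, filters)])
# → ['Quarterly report']
```

In the patched `get_emails` tool the filter runs *before* `format_email_summary`, i.e. on raw Graph messages where `from` is a dict; normalizing the sender field as above keeps the comparison well-defined in both call orders.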