# ISEL APP

The ISEL APP is a mobile application designed to streamline access to critical information for the Lisbon School of Engineering (ISEL) university community. This repository contains all the resources needed to understand and set up the project.
## Table of Contents
- Introduction
- Features
- Technologies Used
- Getting Started
- Running the Application
- GitHub Actions
- Contributing
- License
## Introduction

The ISEL APP simplifies the process of accessing essential academic information. It offers a user-friendly and efficient interface to improve the student experience at ISEL.
## Features

- Access to academic records
- Grade simulations
- Virtual student card
- Course schedules
- Campus news and events
- Department and faculty information
## Technologies Used

- React Native: For building the cross-platform mobile application
- Python: For web scraping, data extraction, and processing
- Supabase: For the backend database and authentication
- PostgreSQL: Database management system
- Pandas, openpyxl: For handling Excel data
- BeautifulSoup: For web scraping
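The project's scrapers use BeautifulSoup for HTML extraction. As a dependency-free illustration of the underlying idea, here is the same pattern with the standard library's `html.parser`, run against a made-up HTML snippet (the course names and times are invented for the example):

```python
from html.parser import HTMLParser

# Hypothetical page fragment resembling a schedule listing.
HTML = "<ul><li>Algebra: Mon 10:00</li><li>Physics: Tue 14:00</li></ul>"

class ItemExtractor(HTMLParser):
    """Collects the text content of every <li> element."""

    def __init__(self):
        super().__init__()
        self.items = []
        self._in_li = False

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self._in_li = True

    def handle_endtag(self, tag):
        if tag == "li":
            self._in_li = False

    def handle_data(self, data):
        if self._in_li:
            self.items.append(data.strip())

parser = ItemExtractor()
parser.feed(HTML)
print(parser.items)  # ['Algebra: Mon 10:00', 'Physics: Tue 14:00']
```

With BeautifulSoup the same extraction collapses to a one-liner (`[li.get_text() for li in soup.find_all("li")]`), which is why it is the tool of choice here.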
## Getting Started

### Prerequisites

- Node.js (v14 or later)
- Python 3.12 (the only version tested)
- A Supabase account
- Android Studio (for an Android Virtual Device) or a physical Android device with a USB cable
- An IDE for development (e.g., VS Code)
### Setup

- Set up environment variables: Fill in your Supabase credentials and other configuration details in your `.env` file.
- Configure Supabase: Follow the instructions in the `database/supabase/readme.md` file to set up your Supabase account, project, and tables.
- Configure web scraping: Modify the JSON files in `config/web/` to match the structure of the web pages you want to scrape. See the `config/web/readme.md` file for a detailed guide on configuring the web scraper.
- Configure Excel import: Update the `config/siges/config.json` file to map your Excel sheets and columns to the corresponding Supabase tables and columns. Refer to the `config/siges/readme.md` file for instructions on configuring the Excel-to-Supabase importer.
- Run the scrapers: Finally, run `run_scrapers.py` locally to test the cron job system manually.
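Reading the `.env` file from the first step is usually handled by a library such as python-dotenv; as a dependency-free sketch of what that step amounts to, here is a minimal parser for the two keys the GitHub Actions workflow also uses (`SUPABASE_URL` and `SUPABASE_KEY`; the example URL and key values are placeholders):

```python
import os
import tempfile

def load_env(path):
    """Parse KEY=VALUE lines from a .env file, skipping blanks and '#' comments."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    return values

# Demo with a throwaway file; in the project the file is simply ".env".
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, ".env")
    with open(path, "w") as fh:
        fh.write("# Supabase credentials\n"
                 "SUPABASE_URL=https://example.supabase.co\n"
                 "SUPABASE_KEY=anon-key\n")
    env = load_env(path)
    print(env["SUPABASE_URL"])  # https://example.supabase.co
```

In the real app these two values are what the Supabase client is initialized with, so a missing or misnamed key here is the first thing to check when the scrapers cannot connect.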
## GitHub Actions

The project includes a GitHub Actions workflow (`scraper.yml`) that automates running the web scrapers and updating the scraped data in the repository. The workflow is triggered on a schedule (every hour) or can be run manually.
Make sure to set the `SUPABASE_URL` and `SUPABASE_KEY` secrets in your repository settings for the workflow to access your Supabase instance.
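A workflow matching that description would look roughly like the sketch below. The hourly schedule, manual trigger, and secret names come from this README; the individual job steps (checkout, Python setup, the `requirements.txt` install, and the run command) are assumptions, not the repository's actual file:

```yaml
name: Run scrapers

on:
  schedule:
    - cron: "0 * * * *"   # every hour
  workflow_dispatch:       # allow manual runs from the Actions tab

jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: python run_scrapers.py
        env:
          SUPABASE_URL: ${{ secrets.SUPABASE_URL }}
          SUPABASE_KEY: ${{ secrets.SUPABASE_KEY }}
```

Passing the secrets through the `env:` block keeps them out of the repository while making them available to `run_scrapers.py` as ordinary environment variables.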
## License

This project is licensed under the MIT License - see the `LICENSE` file for details.