HappyBudget API

© Nick Florin, 2022

This project is protected by a federal copyright certificate with registration no. TXu 2-319-983.

This project serves as a REST API supporting the HappyBudget platform.

This documentation outlines how to properly set up and configure the application in both production and local environments. For other documentation regarding the operation and development of this project, refer to the following list:

  1. Testing
  2. Standards & Conventions
  3. Configuration

System Requirements

  • pyenv
  • homebrew
  • poetry
  • postgres

Note that this documentation describes how to set up and configure the application for local development on macOS. The steps outlined here can also be performed on a Windows or Ubuntu machine, but they will not match exactly what is shown here.

Getting Started

Step 1: Repository

Clone this repository locally and cd into the directory.

$ git clone https://github.com/nickmflorin/happybudget-api.git

Step 2: Postgres

A postgres database is used both when developing locally and in production. First, check whether you already have it installed on your machine:

$ which postgres

If the command outputs a path to the executable (usually /usr/local/bin/postgres), then postgres is already installed and you can proceed to the next step. If not, simply install postgres:

$ brew install postgres

Step 3: Python

After postgres is installed, you need to install pyenv, a Python version manager for development. This will allow you to manage your Python version on a per-project basis.

$ brew install pyenv

We then need to initialize pyenv, so add this to your ~/.bash_profile (or ~/.zshrc, or whatever default shell profile your machine uses):

eval "$(pyenv init -)"

Then source your shell profile as:

$ . ~/.bash_profile

Get pyenv set up with the Python version you will need for this application:

$ pyenv install 3.9.0
$ pyenv local 3.9.0

Note: If using macOS 11, certain Python versions do not build (at the time of writing), and the only versions with patches for the fix are 3.7.8+, 3.8.4+ and 3.9.0+. Any earlier versions will not build.

Confirm that pyenv is pointing at the correct Python version:

$ pyenv local
3.9.0

If your pyenv local Python version is still pointing at your system Python version, update your ~/.bash_profile (or ~/.zshrc, or whatever default shell profile your machine uses) to initialize pyenv as follows:

eval "$(pyenv init --path)"

Then, re-source the shell profile and make sure that pyenv is pointing at the local Python version:

$ . ~/.bash_profile
$ pyenv local
3.9.0

If this still doesn't work, contact a team member before proceeding further.

Dependencies

Now we need to set up the dependencies. We use poetry as a dependency management system, so you will have to install it locally:

$ curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -

Note that you may need to upgrade pip before installing poetry; you can do this as follows:

$ python -m pip install --upgrade pip

Once poetry is installed, you need to add it to your PATH:

$ export PATH=$HOME/.poetry/bin:$PATH
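
Confirm that poetry is now available on the PATH:

$ poetry --version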

Now that poetry is installed and on the system path, you need to create a virtual environment and install the dependencies to that virtual environment.

$ python -m venv ./<env_name>
$ . ./<env_name>/bin/activate
$ poetry install

Note that poetry can also manage virtual environments for you, so that option exists if you do not feel like managing one yourself.
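
As a minimal sketch of that alternative (these are standard poetry commands, not the project's prescribed workflow), you can tell poetry to keep the virtual environment inside the project and let it handle activation:

$ poetry config virtualenvs.in-project true
$ poetry install
$ poetry shell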

Troubleshooting Dependencies

If poetry install fails during the installation of psycopg2, this means that postgres was not properly installed. In this case, you will see an error with output similar to what is shown below:

python setup.py egg_info did not run successfully.

Error: pg_config executable not found.

pg_config is required to build psycopg2 from source.  Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:

  python setup.py build_ext --pg-config /path/to/pg_config build ...

or with the pg_config option in 'setup.cfg'.

If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.

In this case, return to Step 2 and make sure that postgres was properly installed.

Once the dependencies are successfully installed, you should now be able to run the test suite (see Testing section below). However, there is still additional setup required to be able to run the application locally. These additional steps are outlined in Step 4 and Step 5.

Step 4: Environment

There are 4 distinct environments that the application runs in, with the current environment dictated by the settings configuration module read from the OS environment variable DJANGO_SETTINGS_MODULE.

The 4 environments are as follows:

| Environment | Settings Module | URL | File Storage | Database |
| ----------- | --------------- | --- | ------------ | -------- |
| local | happybudget.conf.settings.local | local.happybudget.io:8000 | Local File Storage | Local PostgreSQL Server |
| test | happybudget.conf.settings.test | N/A | Temporary File Storage | Transactional SQLite3 File |
| develop | happybudget.conf.settings.dev | devapi.happybudget.io | AWS S3 | PostgreSQL on AWS RDS |
| prod | happybudget.conf.settings.prod | api.happybudget.io | AWS S3 | PostgreSQL on AWS RDS |

ENV File

For all environments, we use a .env file in the project root from which configuration values and sensitive information are read. For remote environments, this .env file is required; for the test and local environments, the settings configuration associated with the OS environment variable DJANGO_SETTINGS_MODULE defaults the relevant values such that a .env file is not necessary when developing locally or running tests.

However, if you wish to override certain configurations (e.g. the database configuration), you can place a .env file in the project root and define those configuration values there.

Local Domain

Our authentication protocols rely on the ability to set cookies in the response that dictate user sessions and user information. Recent Google Chrome security improvements mean that the browser no longer considers localhost a valid domain, so setting cookies in the backend for the frontend application no longer works when running the application on localhost. For this reason, the application is configured locally to work only on 127.0.0.1:8000, not localhost:8000.

However, the frontend expects the API to be running on local.happybudget.io:8000, so we need to set up our /etc/hosts file such that local.happybudget.io is a valid domain for the local development server. Note that this step is also included in the frontend setup, so it may have already been done as a part of that setup, but it is repeated here for completeness.

Edit your /etc/hosts file as follows:

$ sudo nano /etc/hosts

Add the following configuration to the file:

127.0.0.1       local.happybudget.io

Now, when we start the development server, we will be able to access the backend application at local.happybudget.io:8000, and the frontend application at local.happybudget.io:3000.
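
To sanity-check the mapping before starting the server (a quick optional check), confirm that the new host resolves to the loopback address:

$ ping -c 1 local.happybudget.io
PING local.happybudget.io (127.0.0.1): 56 data bytes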

Local Database

We use postgresql for our local (and production) databases; the exception is our test suite, which runs against a lightweight sqlite database so that transactional tests do not take forever to run. The first step is to install postgres:

$ brew install postgres

Then, we need to start postgres as a service:

$ brew services start postgresql

The database configuration parameters can be overridden in a .env file, but locally they default to the following if not present in the .env file:

DATABASE_NAME=postgres_happybudget
DATABASE_USER=happybudget
DATABASE_PASSWORD=''
DATABASE_HOST=localhost
DATABASE_PORT=5432

postgresql will, by default, set up a database named postgres with postgres as the user. Additionally, django will try (by default) to use "django" as the DATABASE_USER. To avoid complications around this, as well as complications that might arise from a developer already having the default postgres database name reserved for other purposes, we isolate these parameters to our use case.

This is not to say that you cannot use the application on the default postgresql parameters - you can, you just have to set them in the .env file.
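
For example, to run against the stock postgresql parameters, a .env along these lines would work (a sketch; adjust the values to match your local installation):

DATABASE_NAME=postgres
DATABASE_USER=postgres
DATABASE_PASSWORD=''
DATABASE_HOST=localhost
DATABASE_PORT=5432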

When starting Django, we may see some of the following database errors:

django.db.utils.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL:
database "postgres_happybudget" does not exist

This happens if the database has not yet been created. We may also see the following:

django.db.utils.OperationalError: FATAL:  role "happybudget" does not exist

This happens if the database user has not yet been created or does not have the correct privileges to access the configured database.

Both of these errors mean that we have to set up the database manually, since it is not already set up by postgresql by default (as it would be if we were using the postgres database name and user).

To set up the database, follow the steps outlined below. Note that not all of these steps are required each time, as some of the entities may already have been created or appropriately assigned when using a previous database or previously completing some of these steps.

Connect to Default Postgres Database

Since we do not know whether the database we are concerned with has been created yet, we connect to the default database postgres, from which we can still run commands for other databases.

$ psql -d postgres

Create the Database

If the database was not already created, we need to create it.

postgres=# CREATE DATABASE <DATABASE_NAME>;

Create the Database User

If the user does not already exist, we need to create one in Postgres. Note that if different databases are using the same user, the user may already have been created.

postgres=# CREATE USER <DATABASE_USER> WITH PASSWORD '';

Grant Privileges to Database User

If the database was just created, or the user was just created, we need to grant access to the created or existing database to the created or existing user.

postgres=# GRANT ALL PRIVILEGES ON DATABASE <DATABASE_NAME> TO <DATABASE_USER>;

Assign User as Owner of Database

If the database was just created, or the user was just created, we need to assign the created or existing user as the owner of the created or existing database.

postgres=# ALTER DATABASE <DATABASE_NAME> OWNER TO <DATABASE_USER>;

Quit the Postgres Shell

postgres=# \q

Populating the Database

Now that the application environment is set up, there is some data we need to populate in the database before we can run the application. This can be done either via the automated setup command or via the explicit steps outlined here.

Again, make sure that your virtual environment is activated and that postgres is running.

$ . ./env/bin/activate
$ brew services start postgresql

Automated Setup Command

Simply run the following:

$ python src/manage.py setup

This will automatically wipe the local database defined in the .env file, run migrations, install fixtures and prompt you for information to create a superuser. You should always have a superuser account locally, and should use that as your primary account for logging into the application.

Explicit Setup Commands

If opting not to use the automated setup command (whether due to errors with its usage or personal preference), the same result can be achieved with the explicit steps below.

The first step is only applicable if we are setting up the application after it was previously set up and a database was created. If we want to start from scratch, we can wipe the database defined in the .env file as follows:

$ python src/manage.py reset_db

If setting up for the first time, or setting up with an existing database, this step can be ignored.

Next, we need to run migrations such that the database tables and schemas are populated in the database. To do this, simply run the following command:

$ python src/manage.py migrate

Next, we need to load the fixtures into the database:

$ python src/manage.py loadfixtures

Then, we need to collect the server side static files for the admin:

$ python src/manage.py collectstatic

Finally, we need to create a superuser that we will use to login to the application with:

$ python src/manage.py createsuperuser

You should always have a superuser account locally, and should use that as your primary account for logging into the application.

Celery

We use celery to manage periodic background tasks in the application. These tasks are mostly concerned with performing cleanup or making sure that corrupted data or relationships aren't sneaking into the database.

Running celery locally isn't required to run the application, but there are times when it will be needed when debugging or developing tasks that run in the application background.

To run celery in the background, simply run the following command inside the virtual environment:

$ celery -A happybudget worker -B -l info
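
If you need to check which tasks the worker has picked up (a quick diagnostic; this assumes the celery app is named happybudget, as in the command above, and that the worker is running), celery's inspect command can list them:

$ celery -A happybudget inspect registered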

Congratulations! You are now ready to run the application.

Development

Workflow

Developers should be free to use whatever workflow works best for them, and the IDE they use is an important aspect of that workflow.

IDE

While it is not required that a developer use VSCode, it is strongly, strongly recommended that they do. Usage of VSCode will mean that the setup for the team's code environment will more seamlessly integrate into your workflow.

If VSCode is not the ideal IDE for you, that is fine - but it is your responsibility to make sure that the IDE of your choosing is properly set up for the team's code environment, which primarily relates to (but is not limited to) linting.

Extensions

If using VSCode, please make sure to install the "ms-python.vscode-pylance" and "ms-python.python" extensions from the VSCode marketplace.

settings.json

For VSCode to function properly with this repository's code environment configuration, you should add the following configurations to your settings.json file:

{
  "editor.formatOnSave": true,
  "[python]": {
    "editor.tabSize": 4,
    "editor.insertSpaces": true,
    "editor.formatOnSave": true
  },
  "python.linting.flake8Enabled": true,
  "python.linting.pylintEnabled": true,
  "python.linting.flake8CategorySeverity.W": "Error",
  "python.linting.pycodestyleEnabled": true,
  "editor.rulers": [80, 100],
  "python.languageServer": "Pylance"
}

Running Locally

To run the application locally, simply activate the virtual environment and start the Django web server:

$ . ./env/bin/activate
$ python src/manage.py runserver
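
With the server running and the /etc/hosts entry from the Local Domain section in place, you can quickly confirm the API responds on the expected domain (the exact response will depend on the endpoint and authentication):

$ curl -I http://local.happybudget.io:8000/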

Django Settings

By default, the Django settings module used when running locally (toggled via the DJANGO_SETTINGS_MODULE environment variable) is happybudget.conf.settings.local. If you need to override certain settings for your own personal local development, happybudget.conf.settings.local should not be edited but instead a happybudget.conf.settings.local_override Python file should be created.
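
As a sketch (the settings path src/happybudget/conf/settings/ is inferred from the module path above, and the DEBUG override is purely illustrative), the override module imports everything from the local settings and redefines whatever you need:

$ cat > src/happybudget/conf/settings/local_override.py << 'EOF'
# Personal local settings: start from the standard local settings
# and override selectively. Do not commit this file.
from .local import *  # noqa: F401,F403

# Illustrative override only.
DEBUG = True
EOF

Depending on how the settings modules are wired, you may also need to point DJANGO_SETTINGS_MODULE at happybudget.conf.settings.local_override for the override to take effect.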

Testing

See the ReadMe.md file in the testing directory.

tox

We use tox to automate our testing, linting and code coverage processes. tox began as a way to run tests across Python versions, Django versions and other package versions, but has since extended to much more.

Our tox setup can be very useful for local development. One benefit of tox is that it completely isolates its own cached virtual environments for purposes of running tests and producing coverage reports. The following commands can all be run outside of a virtual environment:

Run the complete pytest suite and generate coverage reports in /reports:

$ tox

or

$ tox -e test

Clean all build and test directories as well as extraneous artifacts:

$ tox -e clean

Run flake8 linting checks and generate linting reports in /reports:

$ tox -e lint

Running these commands will usually be slow the first time, just because tox has to set up the cached environment - but after that, they all run very quickly and can be very useful.
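
If the test environment forwards positional arguments to pytest ({posargs} in tox.ini is a common pattern, assumed here rather than verified against this project's configuration), you can also run a subset of the suite through tox:

$ tox -e test -- -k "budget"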

Managing Dependencies

We use poetry as a package management system. poetry's analogue to a requirements.txt file is the pyproject.toml file. Whenever you need to add or remove dependencies from the project, the pyproject.toml file will be updated to reflect the new dependency state.

See the discussion in Getting Started for instructions on how to install and setup poetry.

Installing All Packages

The poetry analogue to pip install -r requirements.txt is as follows:

$ poetry install

This will install/update all of the packages in the pyproject.toml file.

Installing a Specific Package

To install a specific package with poetry, all you have to do is:

$ poetry add <package-name>

This will install the package in your virtual environment and update the pyproject.toml file.

Remove a Specific Package

To remove a specific package with poetry, all you have to do is:

$ poetry remove <package-name>

This will remove the package from your virtual environment and update the pyproject.toml file.

Development vs. Production Dependencies

poetry uses a concept of development dependencies to allow additional dependencies to be included for local development that we do not want to add in a production environment. By default, when running poetry install it will include the development dependencies. In production, we use poetry install --no-dev so that development dependencies are not included.
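
For completeness, adding a package as a development-only dependency looks like the following (the --dev flag matches the poetry 1.0/1.1 series this project appears to use; newer poetry releases spell it --group dev):

$ poetry add --dev <package-name>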

Setting Up on EC2 Instance

The setup instructions in this section describe the steps required to setup and run the API on a Linux EC2 AWS instance. If a machine other than Linux is chosen, these steps will differ.

These instructions do not detail how to setup the EC2 instance in AWS, but rather detail how to setup the API on an EC2 instance assuming it has already been created.

Currently, we run EC2 instances for two separate environments:

| Environment | Settings Module | URL |
| ----------- | --------------- | --- |
| develop | happybudget.conf.settings.dev | devapi.happybudget.io |
| prod | happybudget.conf.settings.prod | api.happybudget.io |

Step 1: Installing Git

After SSH'ing into the EC2 instance, we first need to install git on the machine. To do so, simply run:

$ sudo yum update
$ sudo yum install git -y

We then need to create the SSH key that will authenticate the machine for SSH requests to GitHub.

$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

A passphrase is not required for the SSH key, and the key should be saved in its default location, which is usually ~/.ssh/id_rsa.

We then need to modify the SSH configuration to automatically include this SSH key. To do this, edit the ~/.ssh/config file as follows:

$ sudo nano ~/.ssh/config

Host *
  AddKeysToAgent yes
  IdentityFile ~/.ssh/id_rsa

This assumes that your SSH key was created in ~/.ssh/id_rsa.

Finally, we just need to start the ssh-agent in the background and add the newly created identity:

$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_rsa

In order for GitHub to allow SSH access from the machine, we now need to add that SSH key to GitHub. To do this, log in to your GitHub account (or an organization GitHub account that has access to the repository) and go to the Settings page. Under "SSH and GPG Keys", click "Add SSH Key". Read the public key content on the machine and copy and paste it into the GitHub field:

$ tail ~/.ssh/id_rsa.pub
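
Once the key has been added in GitHub, you can verify that the machine authenticates correctly; GitHub will respond with a greeting rather than opening a shell:

$ ssh -T git@github.com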

Step 2: Installing Docker

We now need to install docker and docker-compose on the machine. To install docker, run the following command:

$ sudo amazon-linux-extras install docker

Once docker is installed, we need to start the docker service and add the ec2-user to the docker group so we can execute docker commands without using sudo:

$ sudo service docker start
$ sudo usermod -a -G docker ec2-user

Logout from the EC2 instance and then SSH back in. Verify that you can run docker commands without sudo:

$ docker info

Finally, we simply need to install docker-compose:

$ sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
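
To confirm that the binary was downloaded and is executable, check its version:

$ docker-compose --version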

Step 3: Clone Repository

After SSH'ing into the instance, we need to clone the git repository. Make a top level directory www with the correct permissions and clone the repository into that directory:

$ sudo mkdir www
$ sudo chmod 777 www
$ chmod go-w ~/.ssh/config
$ cd ./www
$ git clone git@github.com:nickmflorin/happybudget-api.git
$ cd ./happybudget-api

ENV File

We need to create a .env file to hold the sensitive keys required to run the API. You should talk to a team member to get these key values before proceeding. The .env file should look as follows:

DJANGO_SECRET_KEY=<DJANGO_SECRET_KEY>
DJANGO_SETTINGS_MODULE=<DJANGO_SETTINGS_MODULE>

DATABASE_NAME=<RDS_DATABASE_NAME>
DATABASE_USER=<RDS_DATABASE_USER>
DATABASE_PASSWORD=<RDS_DATABASE_PASSWORD>
DATABASE_HOST=<RDS_DATABASE_HOST>
DATABASE_PORT=5432

AWS_STORAGE_BUCKET_NAME=<AWS_STORAGE_BUCKET_NAME>
AWS_STORAGE_BUCKET_URL=<AWS_STORAGE_BUCKET_URL>
AWS_S3_REGION_NAME=<AWS_S3_REGION_NAME>
AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>

JWT_RSA_FINGERPRINT=<JWT_RSA_FINGERPRINT>

Note that DJANGO_SETTINGS_MODULE will be set based on the environment of the EC2 instance.

JWT RSA Signing Fingerprint

This step is very important. To dramatically improve the security of the application, we do not use the DJANGO_SECRET_KEY to sign JWT tokens used throughout the application. Instead, we use an RSA fingerprint, because it is significantly more secure and is easier to swap out if it ever becomes compromised.

In local and test environments, the RSA fingerprint defaults to a value that does not need to be stored sensitively. However, in production and development environments, we need to generate the RSA public/private key pairs on the server so they can be securely stored and referenced by the application.

To start, generate a private/public RSA key pair as follows:

$ ssh-keygen -m PEM -t rsa -b 4096 -C "your_email@example.com"

You will then be prompted to enter the file in which the key should be saved. Since we will be reading the file contents in the application itself, the private/public key pair needs to be stored inside of the docker environment. Our docker setup expects the file to be named jwt_signing_key, and in order for it to be copied into the docker environment and found by Django, it needs to be stored in the BASE_DIR:

Enter file in which to save the key (/home/ec2-user/.ssh/id_rsa): /www/happybudget-api/src/happybudget/jwt_signing_key

Do not enter a passphrase.

Important: Do not ever, under any circumstances, remove this file from the EC2 instance, commit it to source control or share it with another person, inside or outside of the company.

Now that we have the private/public RSA key pair generated, we simply need to reference its filename in the .env file, since Django will by default try to find it in src/happybudget/<filename>. The RSA private/public key pair is in the root of the docker environment, so this is simply:

$ nano .env

JWT_RSA_FINGERPRINT_FILENAME=jwt_signing_key

The application will now automatically read both the file and its .pub counterpart and use the file contents to sign the JWT tokens in the application.
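
Before bringing the application up, it can be worth confirming that both halves of the key pair are where Django expects them (the paths assume the save location entered above):

$ ls /www/happybudget-api/src/happybudget/jwt_signing_key*
/www/happybudget-api/src/happybudget/jwt_signing_key
/www/happybudget-api/src/happybudget/jwt_signing_key.pub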

Step 4: Configuring Apache

On the EC2 instance, we need to use Apache as a proxy to route requests on port 80 to requests on port 8000 where the application is running. Requests are mapped to port 80 via the load balancer in AWS, which will route requests on port 443 (for HTTPS) to port 80.

First, we need to install httpd:

$ sudo yum install httpd

Next, we just need to edit the Apache configuration to route requests as described above:

$ cd /etc/httpd
$ sudo touch vhosts.conf
$ sudo nano vhosts.conf

Add the following content to vhosts.conf:

<VirtualHost *:80>
  ProxyPreserveHost On
  ProxyRequests Off
  ServerName viska.localhost
  ProxyPass / http://localhost:8000/
  ProxyPassReverse / http://localhost:8000/

  Timeout 3600
  ProxyTimeout 3600
  ProxyBadHeader Ignore

  ErrorLog "/var/log/httpd/gb-error_log"
  CustomLog "/var/log/httpd/gb-access_log" common

</VirtualHost>

Finally, run the httpd service in the background:

$ sudo service httpd start
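
Once httpd is running and the application container is up (see Step 5), you can confirm the proxy is routing correctly by requesting the application through port 80 from the instance itself:

$ curl -I http://localhost/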

Step 5: Running the Application

When running the application, the docker-compose configuration file we use depends on the environment. In the prod environment, the configuration file is simply docker-compose.yml - which is the default. However, in the dev environment, we need to specify the configuration file as docker-compose.dev.yml. For this reason, the directions to start the application in each environment differ slightly.

Prod Environment

Check your .env file and make sure that DJANGO_SETTINGS_MODULE=happybudget.conf.settings.prod. Then, check out the master branch:

$ git fetch origin master
$ git checkout master
$ git pull

Then, we simply need to build the container, bring it up, and run Django management commands when applicable.

$ docker-compose up -d --build
$ docker-compose exec web python manage.py migrate
$ docker-compose exec web python manage.py collectstatic
$ docker-compose exec web python manage.py loadfixtures
$ docker-compose exec web python manage.py createsuperuser

Dev Environment

Check your .env file and make sure that DJANGO_SETTINGS_MODULE=happybudget.conf.settings.dev. Then, check out the develop branch:

$ git fetch origin develop
$ git checkout develop
$ git pull

Then, we simply need to build the container, bring it up, and run Django management commands when applicable.

$ docker-compose -f docker-compose.dev.yml up -d --build
$ docker-compose -f docker-compose.dev.yml exec web python manage.py migrate
$ docker-compose -f docker-compose.dev.yml exec web python manage.py collectstatic
$ docker-compose -f docker-compose.dev.yml exec web python manage.py loadfixtures
$ docker-compose -f docker-compose.dev.yml exec web python manage.py createsuperuser
