Complete walkthrough
This page guides you through the project: as I've put quite some stuff in it, I believe it will be more effective if you discover things in the right order.
Even though GitHub allows you to get this project as a zip, the "go" tool needs a Git installation to fetch dependencies.
Also, as the project references a Git subproject via SSH, you should ensure you have an SSH connection to GitHub properly set up.
You should install Go: I've tested this app with 1.9 on a Mac and 1.8 on an Ubuntu machine; for your information, versions of Go earlier than 1.8 won't work for sure, because they are missing the definition of some HTTP status codes used in the app.
You should also set the following environment variables:
- GOROOT: should point to your Go installation (e.g., `/usr/local/go` on Mac, and `/usr/lib/go-1.X` on Ubuntu)
- GOPATH: I recommend setting it to the root directory of this project's checkout.
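For example, on a Mac with the project checked out under your home directory (the paths here are illustrative, adjust them to your setup):
$ export GOROOT=/usr/local/go
$ export GOPATH=$HOME/ToDoList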
If you want to run the app in a container, you should install any reasonably recent release of Docker.
If you want to try out some of the fun stuff we'll cover later in this page, you will need:
- Docker version 1.10.0 or later
- Compose 1.6.0 or later
- make sure `vm.max_map_count` is adequate, as described here; see the command below (FYI: I did not need to do this on my Mac box, but I did on my Ubuntu 16.04 box)
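On Linux, the last point typically boils down to something like the following (262144 is the value commonly suggested in the Elasticsearch documentation):
$ sudo sysctl -w vm.max_map_count=262144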
In a directory of your choice, run `git clone --recurse-submodules git@github.com:peppelan/ToDoList.git`: the `--recurse-submodules` option will also check out a linked project for the fun stuff.
- From the directory where you checked out the project (which from now on I will call ToDoList), enter the `src` directory and run `go get -t ./...`: this will take a couple of minutes to fetch all the dependencies for the project. You will need this step only the first time you build.
- From ToDoList, run `go build todolist`: this will create an executable called `todolist`.
- You can run the executable that has just been created with `./todolist`.
- You can now open your browser at http://localhost:8080 and see a welcome page, or at http://localhost:8081 and find the app's monitoring information, also known as expvar.
The first HTTP server, listening at 8080, serves the app's APIs, while the second one is a technical HTTP server for monitoring only.
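If you are curious how two listeners like these can coexist, here is a minimal sketch in Go (illustrative only, not the project's actual code): the blank expvar import registers its handler on the default mux, which the second server then uses.

```go
package main

import (
	_ "expvar" // registers /debug/vars on http.DefaultServeMux
	"log"
	"net/http"
)

func main() {
	// The API server gets its own mux on port 8080.
	api := http.NewServeMux()
	api.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("Welcome!\n"))
	})

	// The technical server on port 8081 passes nil to use the
	// default mux, where expvar has registered its handler.
	go func() {
		log.Fatal(http.ListenAndServe(":8081", nil))
	}()

	log.Fatal(http.ListenAndServe(":8080", api))
}
```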
When you open the welcome page, you should see something like this in the app's console:
2017/09/30 07:20:04 127.0.0.1:56075 GET / Index 200 (OK) 29 us
which logs, in this order:
- when a request happens
- who made it
- what the request was
- how it ended, in the form of the HTTP response code (also in human-readable form)
- how long the app took to process it.
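A request-logging middleware producing lines in that format could look roughly like this (a sketch under my own naming, not the project's actual code):

```go
package middleware

import (
	"log"
	"net/http"
	"time"
)

// statusRecorder captures the status code written by the wrapped handler.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// Logged wraps a named handler and logs the five fields listed above;
// log.Printf prepends the date and time, as in the example line.
func Logged(name string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		start := time.Now()
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(rec, req)
		log.Printf("%s %s %s %s %d (%s) %d us",
			req.RemoteAddr, req.Method, req.URL.Path, name,
			rec.status, http.StatusText(rec.status),
			int64(time.Since(start)/time.Microsecond))
	})
}
```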
When you first start the app, its To-Do list is empty: you can verify this by querying it via HTTP, for example with the curl command:
$ curl -XGET "http://localhost:8080/todos"
{}
then, you can insert a new To-Do:
$ curl -XPOST "http://localhost:8080/todos" -d '{"name":"Create a To-Do list ReST application", "due":"2017-09-30T23:59:59-01:00"}'
... and verify it has been created with ID 1:
$ curl -XGET "http://localhost:8080/todos"
{"1":{"name":"Create a To-Do list ReST application","description":"","completed":false,"due":"2017-09-30T23:59:59-01:00"}}
you can then update it, for example to mark it as done:
$ curl -XPUT "http://localhost:8080/todos/1" -d '{"name":"Create a To-Do list ReST application", "due":"2017-09-30T23:59:59-01:00","completed":true}'
... and verify it has been updated:
$ curl -XGET "http://localhost:8080/todos"
{"1":{"name":"Create a To-Do list ReST application","description":"","completed":true,"due":"2017-09-30T23:59:59-01:00"}}
or delete it:
$ curl -XDELETE "http://localhost:8080/todos/1"
... and verify it is no longer there:
$ curl -XGET "http://localhost:8080/todos"
{}
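For reference, the JSON bodies above map naturally onto a Go struct like the following (the field set is inferred from the responses, not necessarily the project's exact type):

```go
package todolist

import "time"

// ToDo mirrors the JSON shape seen in the responses above; the "due"
// timestamps are RFC 3339, which time.Time handles out of the box.
type ToDo struct {
	Name        string    `json:"name"`
	Description string    `json:"description"`
	Completed   bool      `json:"completed"`
	Due         time.Time `json:"due"`
}
```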
The very same sequence that you just ran is automated in ToDoList/src/acceptance_tests/crud.go; you can quickly run it with the provided utility script:
$ ./run_acceptance_tests.sh
ok command-line-arguments 0.007s
In addition to this, more in-depth testing, including checks of the error codes, is implemented at the unit-test level, which you can run from the ToDoList folder with:
$ go test todolist
ok todolist 0.009s
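To give an idea of what an error-code check at this level looks like, here is a sketch in the style of Go's httptest package (the handler is a stand-in, not the app's real router):

```go
package todolist

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestGetUnknownToDo(t *testing.T) {
	// Stand-in handler: pretend the requested To-Do does not exist.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.NotFound(w, r)
	})

	req := httptest.NewRequest("GET", "/todos/999", nil)
	rec := httptest.NewRecorder()
	handler.ServeHTTP(rec, req)

	if rec.Code != http.StatusNotFound {
		t.Errorf("got status %d, want %d", rec.Code, http.StatusNotFound)
	}
}
```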
The project comes with a Dockerfile to let you build a Docker image that runs the app.
For this purpose, you can run the utility script:
$ ./build_docker.sh
this script will compile the app and create a Docker image to-do-list:latest that embeds it.
The resulting Docker image has a small footprint (less than 16 MB) and embeds all the metadata needed to run the application smoothly (exposed ports, default startup command).
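You can check the size on your machine with (the exact number may vary):
$ docker images to-do-list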
Once the image is built, you can also run it (make sure you have stopped the natively running app first, since the container binds the same ports):
$ ./run_docker.sh
to-do-list
a9e2d200f379535de1d8bfd2610bf8638a9252a3c5252153f39c6f9a8f721564
and at this point you should see the container running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9e2d200f379 to-do-list "/usr/bin/todolist" 27 seconds ago Up 26 seconds 0.0.0.0:8080-8081->8080-8081/tcp to-do-list
and you can go through the manual testing again if you like, or more simply run the automated acceptance tests again:
$ ./run_acceptance_tests.sh
ok command-line-arguments 0.006s
The project comes with a script that aggregates the above-mentioned scripts into a full build/test cycle:
$ ./full_build_and_test.sh
--- Running Unit tests...
ok todolist 0.006s
--- Preparing docker image...
--- Running docker container...
to-do-list
2baf3cb389ac4078462848189df7ac6d473d6c2c94094a03184a1b2dc18ef261
--- Running Acceptance tests...
ok command-line-arguments 0.006s
We can think of this as what would be run in a collaborative environment's continuous integration system (e.g., Jenkins) to finally mark a commit (or a branch) as acceptable or not.
For example, this very script is run on Travis CI, and the current status of the 'master' branch is reported by its build badge.
Here we get to the fun stuff mentioned before. As this is a web app providing To-Do lists, and a lot of people on Earth do have stuff to be done, we can safely assume that this app will become very popular, very soon. Or maybe it will not...
For those who believe it will, the app can be used in conjunction with an Elasticsearch NoSQL database as follows.
If you installed docker-compose as suggested in the prerequisites, you can run the following from the ToDoList/docker-elk folder:
$ docker-compose up -d
[omitting downloads]
Creating network "dockerelk_elk" with driver "bridge"
Creating network "dockerelk_dmz" with driver "bridge"
Creating dockerelk_elasticsearch_1
Creating dockerelk_kibana_1
Creating dockerelk_logstash_1
Creating dockerelk_todolist_1
Creating dockerelk_load_balancer_1
this command will instantiate our application in a richer deployment context:
- a reverse proxy accepting requests on your host's port 80 (dockerelk_load_balancer_1)
- the app's container, as seen before (dockerelk_todolist_1)
- an Elasticsearch node (dockerelk_elasticsearch_1)
- a few other containers that we'll describe later
After a couple of minutes (the time Elasticsearch needs to become healthy), you should be able to run your manual tests on the new deployment just as before, except that you will have to remove ":8080" from all the curl commands.
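For example, listing the To-Dos through the reverse proxy becomes:
$ curl -XGET "http://localhost/todos"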
Or, you can run the same acceptance tests as before, tweaked to target the composition:
$ ./run_prod_acceptance_tests.sh
ok command-line-arguments 1.444s
Once you are happy, you can create more containers serving the application:
$ docker-compose scale todolist=3
Creating and starting dockerelk_todolist_2 ... done
Creating and starting dockerelk_todolist_3 ... done
When doing so, two new containers running our application will be spawned, and the load balancer will start forwarding requests to all three of them (more details here):
$ docker logs dockerelk_load_balancer_1
[a lot of stuff]
backend default_service
server dockerelk_todolist_1 dockerelk_todolist_1:8080 check inter 2000 rise 2 fall 3
server dockerelk_todolist_2 dockerelk_todolist_2:8080 check inter 2000 rise 2 fall 3
server dockerelk_todolist_3 dockerelk_todolist_3:8080 check inter 2000 rise 2 fall 3
[more stuff]
You can also scale the database nodes:
$ docker-compose scale elasticsearch=3
Creating and starting dockerelk_elasticsearch_2 ... done
Creating and starting dockerelk_elasticsearch_3 ... done
... and, after a couple of minutes, see that Elasticsearch is now a three-node cluster:
$ docker exec -it dockerelk_elasticsearch_1 curl -u elastic:changeme -XGET "http://localhost:9200/_nodes?pretty=true" | head -7
{
"_nodes" : {
"total" : 3,
"successful" : 3,
"failed" : 0
},
"cluster_name" : "docker-cluster",
After this command, the architecture of your application deployment will look something like this:

The application also supports distributed tracing through Zipkin, when the ZIPKIN_URL environment variable is defined:

This part is just a list of ideas for how I believe this project should evolve from this point on, so be warned that you will not find any code for the ideas below.
This is both my first project in Go and my first ReST API implementation, so it's no surprise that the first thing to do next would be to learn a bit more about these two topics and see what can be improved. There's a good chance something silly is in here...
It would be nice to split the application into different parts: for example, there could be a first-level application that does the routing, and a downstream application per handler.
This would allow, just to have a few examples, for:
- better maintainability (e.g., rewriting one of these sub-apps from scratch would be a matter of hours);
- more fine-grained resource allocation (e.g., you may want to scale only the handlers for the "Get" requests because you believe these might be called more often);
- better security (e.g., let the "Get" apps connect to the DB with a user that is only allowed to perform read operations).
What we just did created a cluster of containers on our development machine, meaning resources are limited to that one machine, which is also a single point of failure. Maybe nice, but surely not brilliant.
The idea behind the deployment we just did with docker-compose is that, for a production environment, it should be executed in a swarm, which should require little or no change. I am also pretty confident that the same architecture can be achieved with other Docker orchestrators such as Kubernetes.
Doing so would allow us to share the load between different physical machines, finally enabling us to scale for better performance and fault tolerance.
If we were to deploy to a production environment, I would certainly use secrets to share Elasticsearch credentials between containers rather than the hard-coded default ones.
We would definitely need more coverage than the 70% achieved by the unit tests, especially in the Elasticsearch-based repository.
Another nice thing to do would be to extend the acceptance tests to run in batches: I would like to see how the application behaves under stress, i.e., with concurrent users hitting the API.
It would also be nice to run these tests against an application deployed with artificial bottlenecks (e.g., extremely limited resources for the database, the application, and the load balancer in turn).
Once we have the application running in several places at the same time, it may become difficult to keep up with the logs.
For this reason, I believe that logs should be collected centrally, and a nice way to do it is the Elastic stack, which is what the other containers in our docker-compose are for. Also, an elegant solution for feeding the application logs into Elasticsearch might be provided by Docker itself.
The Elastic stack comes with extra features such as search capabilities, easy graph creation with aggregations and trends, and - with the paid option - machine learning to detect uncommon patterns and automated alerting.
The application exposes a web server for Go's expvar service.
This, too, would be better fed into Elasticsearch for automated analysis and at-a-glance monitoring of memory used vs. memory available, garbage collections, and more, across the cluster. And Metricbeat is meant for just this kind of feeding.