Netguru
=======

This repository contains a solution to Netguru's recruitment task.

Please read the [notes](NOTES.md).

# Local development

To install the application, check out the repository and run:
```bash
docker-compose up --build -d run_local
docker exec -it netguru_run_local_1 bash
python manage.py migrate
python manage.py createsuperuser --username admin --email admin@example.com
```

This will load the PostgreSQL schema and create a superuser.
Don't worry if error messages appear about not being able
to bind to the socket - this is perfectly normal.
Provide your own password and you're good to go!

Just bear in mind that generated links will use
the https scheme - you should substitute it with http locally.
I did this deliberately: you shouldn't expose unsecured HTTP to the world,
and you should always terminate SSL in front of the application
(get a free certificate from Let's Encrypt if you're short on cash).
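Rewriting a generated link for local testing is a one-liner (the URL below is just an example):

```python
# Rewrite a generated https:// link to plain http:// for local testing.
url = "https://localhost/some/generated/link"
local_url = url.replace("https://", "http://", 1)  # replace only the scheme
```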

This will expose both port 80 (for application traffic)
and port 81 (for metrics) on localhost.

To unit test locally, just run:
```bash
docker-compose up -d unittest
```

# Production

I expect you to terminate SSL at a reverse proxy.
Please watch out for generated links, as they will
invariably use SSL, because there is no easy way
to detect whether a request arrived over SSL directly
or was handed over by a reverse proxy.

## Deployment

Deployment is handled automatically via a
[CI script](.gitlab-ci.yml).

Deployment is done to a Docker Swarm platform,
since I'm a big fan, and I'm still learning Kubernetes.

I realize that Docker Swarm is a dead end, as it is
no longer supported, but I've got 1.5 FTE worth of work on my hands.

### Configuration

You need to provide your settings in the following environment variables:

* DB_HOST (default is *postgres*)
* DB_USER (default is *postgres*)
* DB_PASS (default is *postgres*)
* DB_NAME (default is *postgres*)
* DB_PORT (default is *5432*)
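In `settings.py` this presumably boils down to `os.environ.get` lookups with the defaults above - a sketch, not necessarily the exact wiring used by the project:

```python
import os

# Database configuration read from the environment, falling back
# to the defaults listed above.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": os.environ.get("DB_HOST", "postgres"),
        "USER": os.environ.get("DB_USER", "postgres"),
        "PASSWORD": os.environ.get("DB_PASS", "postgres"),
        "NAME": os.environ.get("DB_NAME", "postgres"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
```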

You should also set an environment variable called *SECRET_KEY*
to a random sequence of characters. If you don't
provide it, a reasonable default value is supplied,
but since that default has already been published, it is no longer safe.
**CHANGE IT!**

You also need to provide a list of hosts from which
this Django instance can be accessed. You do that
by defining an environment variable called
ALLOWED_HOSTS containing a comma-separated list of hosts.
If you don't define it, a default value of `*` will be used.
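Parsing a comma-separated host list is the usual idiom - a sketch under the assumption that the project does the standard thing:

```python
import os

# Comma-separated host list, e.g. "example.com,www.example.com";
# falls back to the wildcard when unset.
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "*").split(",")
```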

You can optionally provide the environment variable
`REDIS_CACHE_HOST` that should contain a hostname:port of
a Redis cache instance, if you mean to configure Redis caching.
It will be disabled by default.
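Conditionally enabling the cache might look like the sketch below; the backend name `django_redis.cache.RedisCache` is an assumption on my part - check `settings.py` for what is actually used:

```python
import os

REDIS_CACHE_HOST = os.environ.get("REDIS_CACHE_HOST")  # e.g. "redis:6379"

if REDIS_CACHE_HOST:
    CACHES = {
        "default": {
            # Assumed backend; the project may use a different one.
            "BACKEND": "django_redis.cache.RedisCache",
            "LOCATION": f"redis://{REDIS_CACHE_HOST}/0",
        }
    }
else:
    # Redis caching disabled by default: fall back to local memory.
    CACHES = {
        "default": {
            "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
        }
    }
```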

**WARNING!!** Running with the defaults above (default database password, default *SECRET_KEY*, wildcard ALLOWED_HOSTS) is not secure - don't do it in production!

### Volumes

The application needs a volume mounted at
`/data` to store files, because storing
files in a database is an antipattern.
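In Django terms this presumably means file storage is rooted at the volume - an assumption, so check `settings.py` for the actual setting:

```python
# Assumed: uploaded files land under the mounted volume.
MEDIA_ROOT = "/data"
```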

### Monitoring

#### Logs

You will have to forgive me for not hooking up
any loggers. Any Python-based
logger will do; I sincerely recommend
[seqlog](https://github.com/tintoy/seqlog)
- full disclosure: I'm a [contributor](https://github.com/tintoy/seqlog/blob/master/AUTHORS.rst) there.
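Hooking up seqlog would look roughly like this, following its README; the Seq server URL and API key are placeholders, and this is a logging-configuration sketch rather than something wired into this project:

```python
import logging

import seqlog

# Route all standard-library logging to a Seq server.
# The server URL and API key below are placeholders - use your own.
seqlog.log_to_seq(
    server_url="http://my-seq-server:5341/",
    api_key="your-api-key",
    level=logging.INFO,
    batch_size=10,
    auto_flush_timeout=10,  # seconds
    override_root_logger=True,
)

logging.info("Hello, Seq!")
```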

#### Metrics

The solution exports metrics on port 81.
You can hook up Prometheus to it.
It exports the metrics thanks to my excellent
[satella](https://github.com/piotrmaslanka/satella)
library and [django-satella-metrics](https://github.com/piotrmaslanka/django-satella-metrics)
adapter.

#### Traces

The solution fully supports tracing. You can tweak
the settings in the [settings.py](netguru/settings.py) file
to point it at, for example, a Jaeger instance.

I didn't deploy one to your test environment, since I'm
pretty much strapped for time.

BTW: If you're not already doing 
[tracing](https://opentracing.io/), 
you should totally consider it.