## Result at the history endpoint

I've taken the liberty to produce the results in the history endpoint
as such:

```json
[
  {
    "day": "YYYY-MM-DD",
    "links": 0,
    "files": 1
  },
  ...
]
```

I chose this shape because keying the results by day would make them
harder for the frontend programmer to parse, and I suppose he needs
the backend to be as accommodating as possible.
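
For illustration, a view along these lines could produce that shape.
This is only a sketch: the model and field names below are assumptions,
not the actual ones in this repo.

```python
# Hypothetical sketch of the history view; the Share model and its
# `kind`/`created_at` fields are assumptions, not the real schema.
from django.db.models import Count, Q
from django.db.models.functions import TruncDate
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView

from shares.models import Share  # model name is an assumption


class HistoryView(APIView):
    permission_classes = [IsAuthenticated]

    def get(self, request):
        rows = (
            Share.objects
            .annotate(day=TruncDate('created_at'))
            .values('day')
            .annotate(links=Count('pk', filter=Q(kind='link')),
                      files=Count('pk', filter=Q(kind='file')))
            .order_by('day')
        )
        return Response([
            {'day': r['day'].isoformat(),
             'links': r['links'],
             'files': r['files']}
            for r in rows
        ])
```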

I know that I'm ignoring the specification here, and you are free to call me out
on that - but as I have some experience with frontend,
I'd rather do **the right thing**.

## Documentation

I couldn't get the DRF documentation to cooperate with me (most
likely there's no way to do it without an external dependency),
and frankly, while on a bigger project I'd probably stick with drf-yasg,
the inline pydocs I've written here will completely suffice.
In-depth knowledge of your tools is all well and good, but knowing when
to apply them trumps even that.

`/swagger-ui/` is your go-to URL when it comes to listing the endpoints.
For submissions, refer to `/api/add` itself, because DRF happened to generate
nice documentation there, while Swagger did not.

Note that the Swagger endpoints are available only in DEBUG mode,
so they are unavailable in production.
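
The gate is the usual `settings.DEBUG` check in `urls.py`; a minimal
sketch follows, where the imported view is a placeholder for whatever
actually renders `/swagger-ui/` in this repo.

```python
# urls.py sketch: the Swagger route gets appended only when DEBUG=True,
# so it is simply never registered in production.
from django.conf import settings
from django.urls import path

from counting.views import swagger_ui_view  # hypothetical import

urlpatterns = [
    # ... the regular API routes ...
]

if settings.DEBUG:
    urlpatterns += [
        path('swagger-ui/', swagger_ui_view),
    ]
```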

## Authorization

You can authorize your API requests either via Django session cookies
or through HTTP Basic authentication.
Obtaining the session token for your API requests is also on you,
but since the API is unit-tested you'll just have to believe me that it works.

Since it was not specifically requested that the history endpoint be
admin-only, it is available to any logged-in user.
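
In DRF terms that corresponds to settings along these lines (a sketch,
not copied from the repo's settings module):

```python
# settings.py sketch: session-cookie auth plus HTTP Basic auth,
# with authentication required by default.
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.SessionAuthentication',
        'rest_framework.authentication.BasicAuthentication',
    ],
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAuthenticated',
    ],
}
```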

## Test code duplication

I realize it would be best practice to deduplicate some of the code
within the tests, but since I'm running at about 1.5 full-time equivalents,
you'll just have to forgive me for sparing the effort to do so.
Thanks "from the mountain!"

## The [reaper job](counting/cron.py#L27)

I came up with the reaper job while trying to think of a reasonable solution
that wouldn't load the processing server too much. It folds way-past-due
links into a compact representation and stores that in the database.
The last two days (since the job fires only every 24 hours) are computed on the fly,
and cached as needed, for up to 5 minutes.
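
In outline, the idea is roughly the following. The model names, fields
and the two-day cutoff below are assumptions, not the actual code in
counting/cron.py.

```python
# Sketch of the reaper idea: fold way-past-due shares into one compact
# row per day, then drop the originals.
from datetime import timedelta

from django.db.models import Count, Q
from django.db.models.functions import TruncDate
from django.utils import timezone

from shares.models import Share          # hypothetical model name
from counting.models import DailyStats   # hypothetical model name


def reap_expired():
    cutoff = timezone.now() - timedelta(days=2)
    expired = Share.objects.filter(created_at__lt=cutoff)
    per_day = (
        expired.annotate(day=TruncDate('created_at'))
        .values('day')
        .annotate(links=Count('pk', filter=Q(kind='link')),
                  files=Count('pk', filter=Q(kind='file')))
    )
    for row in per_day:
        # One compact row per day instead of many individual shares.
        DailyStats.objects.update_or_create(
            day=row['day'],
            defaults={'links': row['links'], 'files': row['files']},
        )
    expired.delete()
```

The on-the-fly part for the last two days is then just the same
aggregation over the rows the reaper hasn't touched yet, stored with
something like `cache.set('history:recent', payload, timeout=300)`
from `django.core.cache`.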

## Nginx not serving static content

Since it's not ideal for Django to be serving large static files, 
I tried to optimize it as much as possible by using a streaming iterator.
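
A minimal sketch of what that looks like with a plain generator feeding
`StreamingHttpResponse`; the chunk size and content type are
illustrative choices, not the repo's actual values.

```python
# The response body is a generator, so Django sends the file in chunks
# instead of reading it all into memory.
from django.http import StreamingHttpResponse


def iter_file(path, chunk_size=64 * 1024):
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk


def download(request, path):
    return StreamingHttpResponse(iter_file(path),
                                 content_type='application/octet-stream')
```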

However, in real life I'd think of some authentication mechanism
handled purely by nginx (perhaps a short-lived JWT?),
and simply let it serve both the static files and the uploaded files.

Right now I'm serving them from the same Django process, but keep
in mind that I fully realize that's not optimal. Perhaps it looks
like a use case for a Kubernetes pod (Django + nginx running in the same pod?).

## Checking for [existing UUIDs](shares/models.py#L140)

I know that UUID collisions practically never happen, but since it's so cheap for us
to go that extra mile,
and since we're at risk of overwriting somebody's files, I figured I should check.
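
The check itself is cheap; roughly the following, with the model and
field names being assumptions rather than the actual code at the link
above.

```python
# Re-roll the UUID in the unlikely case it is already taken. As noted
# below, there is still a window between the check and the insert.
import uuid

from shares.models import Share  # hypothetical model name


def free_uuid():
    while True:
        candidate = uuid.uuid4()
        if not Share.objects.filter(uuid=candidate).exists():
            return candidate
```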

I realize that the check is susceptible to race conditions, but then again, what are the chances?
The chance that a UUID will repeat during the 24 hours a link stays active is already much higher
than the chance that it will repeat within the same two-second window in which
both users are uploading their files. If we had to plan for that
amazing contingency, it would require strong locking guarantees.

Since the solution is written as a single process, however, that would actually be
quite simple to implement. Still, the chance that I get struck by lightning
seven times over is much higher than the chance of this ever happening.

## Processes vs threads

I've written the solution as a single process, since it binds port 81
to export metrics. I realize I could export them as part of the normal Django
flow, and my library certainly allows for that. It's mainly a question of preference.
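
For illustration only, here is how that split looks with the standard
prometheus_client, used here as a stand-in for the library actually
in the project:

```python
# Metrics are served from their own HTTP listener on port 81 instead of
# going through the Django URL conf.
from prometheus_client import Counter, start_http_server

uploads_total = Counter('uploads_total', 'Number of files uploaded')

start_http_server(81)  # separate /metrics endpoint, outside Django
```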

I would scale this solution by spawning additional replicas and gathering metrics
from each of them, adding a service discovery mechanism so that Prometheus
can pick them up.

## Checking for file [existence before deletion](shares/models.py#L127)

Since we use ATOMIC_REQUESTS, some files might already be deleted while their
respective records are still around. Linux doesn't have distributed transactional
filesystem functionality (at least not a production-ready one that I'm aware of),
and requiring the sysadmin to provide us with a transactional FS would be overkill.
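
Hence the guard before unlinking; a sketch, where the field name is an
assumption and not the code at shares/models.py#L127:

```python
# The record can outlive the file, so an already-missing file is
# treated as "nothing to do" rather than an error.
import os


def remove_payload(share):
    if share.file_path and os.path.exists(share.file_path):
        os.unlink(share.file_path)
```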

## Waiting for Postgres to start up with a [delay](test.sh)

Normally I'd write something along the lines of
[wait-for-cassandra](https://github.com/smok-serwis/wait-for-cassandra)
or
[wait-for-amqp](https://github.com/smok-serwis/wait-for-amqp) for
services that take a really long while to start up, but since Postgres
starts up rather quickly and this is a throw-away solution, I didn't see the need to solve
it any other way.

I realize that on particularly slow CI servers the build will fail.
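
If that ever became a problem, the fix would be the same bounded-retry
idea those tools use, sketched here in Python; psycopg2 and the DSN
handling are assumptions, not what test.sh actually does.

```python
# Keep trying to connect until Postgres accepts connections or the
# attempt budget runs out, instead of relying on a fixed delay.
import time

import psycopg2


def wait_for_postgres(dsn, attempts=30, delay=1.0):
    for _ in range(attempts):
        try:
            psycopg2.connect(dsn).close()
            return
        except psycopg2.OperationalError:
            time.sleep(delay)
    raise RuntimeError('Postgres did not come up in time')
```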