Documentation is progressing well. I've now moved on to documenting the AWS bundle, a prepared collection of configuration that lets people use Docker to set up all of their server-side modules and run them together on a single AWS instance. This mirrors the way we deployed our own prototype. We're providing this bundle because we think it will appeal to those who are comfortable with AWS and Docker, and who want a quicker way to get up and running. It also serves as an example to our users of how one can arrange a Rutilus deployment. We found our method effective, and we want to share that with the world.
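The bundle's actual contents aren't shown in this post, but a single-instance Docker setup of this shape is often described with a Compose file. The sketch below is purely illustrative (the service names, images, and ports are my assumptions, not the real bundle):

```yaml
# Illustrative only -- not the actual AWS bundle. It shows the general
# shape: each server-side module runs as a container on the same
# instance, alongside the database they all share.
version: "2"
services:
  mongo:
    image: mongo:3
    volumes:
      - ./data:/data/db
  logger:            # hypothetical Rutilus module
    build: ./logger
    ports:
      - "8080:8080"
    depends_on:
      - mongo
```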

Documenting this means revisiting the prototype we built in order to better understand the code and the deployment process. A huge theme in this documentation phase of the project has been revisiting past work to review it, and ultimately learning it much more thoroughly because of that. I've reminded myself how Docker works, how environment variables work (don't laugh...), and how Bash scripting is done. I've even worked on my English, noticing minor inefficiencies in how I phrase my documentation. It's been a great opportunity to solidify everything I've started learning as a software developer.

It also made me test what we've created. When I write a polished paragraph of documentation, with clear instructions and code examples, I don't trust it until I know it works. So I pretend I'm the user and do exactly what I told them to do. Sometimes everything works as intended, and sometimes I realize I'd forgotten about something important.

A good example is our new unified "config object", where one JavaScript object literal configures every module the user intends to run. There were little problems with pieces of data (e.g. a MongoDB connection string) being formatted perfectly for one module but not for another.
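To make the idea concrete, here is a minimal sketch of what such a config object could look like. The module names, fields, and helper below are hypothetical illustrations, not the actual Rutilus API; the point is that a shared value like the MongoDB connection string is defined once and handed to every module in the same format:

```javascript
// Hypothetical sketch (not the real Rutilus config shape): one object
// literal configures every module, so shared values like the MongoDB
// connection string live in exactly one place.
const config = {
  // Shared by every module that talks to the database.
  mongoUri: "mongodb://localhost:27017/rutilus",

  // Per-module settings.
  logger: { port: 8080 },
  analytics: { port: 8081 },
};

// Each module gets the same connection string plus its own settings,
// instead of keeping its own (possibly differently formatted) copy.
function moduleOptions(name) {
  return { mongoUri: config.mongoUri, ...config[name] };
}

console.log(moduleOptions("logger"));
```

Centralizing the value this way is what removes the class of bug described above: there is no second copy of the connection string to drift out of sync.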

Another example is our Bash script for deploying to AWS. It reads well as I review it... looks good to me... uses Docker to build images and deploy them. But wait... we were running Docker without sudo. Right, that's because we made the more liberal choice of allowing Docker more access to our machines than it needs. According to the Docker documentation, users who do that should take heed of the security risks. And to be honest, it may not even be a big deal... but things like this stand out to me. I want to err on the side of caution and create a deployment script that all users will feel comfortable running, even those who take their security very seriously.
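One way a deploy script can accommodate both kinds of users is to detect, at runtime, whether Docker is reachable without sudo and fall back otherwise. This is a sketch of the idea, not the actual Rutilus deploy script:

```shell
# Sketch (assumed, not the real deploy script): choose how to invoke
# Docker. Plain "docker" only works for users who granted themselves
# direct daemon access (the more liberal setup); "sudo docker" works
# for everyone else.
docker_cmd() {
  if docker info >/dev/null 2>&1; then
    echo "docker"         # daemon reachable without sudo
  else
    echo "sudo docker"    # fall back to sudo
  fi
}

DOCKER=$(docker_cmd)
# The rest of the script would then use $DOCKER throughout, e.g.:
#   $DOCKER build -t rutilus-modules .
#   $DOCKER run -d rutilus-modules
```

With this pattern, security-conscious users never have to loosen their Docker permissions just to run the script.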

I have two tutorials planned (one for AWS and one for Heroku) designed to guide users through every detail of getting up and running, no matter how inexperienced they are. I plan to continue this "test as I go" approach as I build these tutorials, to make sure users are well-equipped when they use the software.

Note: This was originally posted on the blog I used for my co-op term while at Seneca College before being imported here.