Today we got back up to speed on our ideas with the UAT (User Affinity Tool). We will be using it to generate information that Engineering.com's ElasticSearch can use in its queries to drive better recommendations for their users.
There are a few problems we need to address as we begin work on it. The first is caching. We need to prevent the UAT from being overloaded by lots of users, but more importantly, if a user has to wait for a recommendation query to finish (and it might take a while, because of the massive amount of data involved), they should only have to wait for it once in a while. We plan to cache the results of the UAT's user behavior queries for a while, probably about an hour.
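The plan above amounts to a simple TTL (time-to-live) cache in front of the expensive query. A minimal sketch, assuming a single-process service; the name run_behavior_query is a hypothetical placeholder for whatever function actually hits the data:

```python
import time

CACHE_TTL_SECONDS = 3600  # cache UAT query results for about an hour
_cache = {}  # query_key -> (expires_at, result)

def cached_query(query_key, run_behavior_query):
    """Return a cached result if it's still fresh; otherwise run the
    expensive query and remember the result for CACHE_TTL_SECONDS."""
    now = time.monotonic()
    entry = _cache.get(query_key)
    if entry and entry[0] > now:
        return entry[1]  # still fresh: skip the expensive query entirely
    result = run_behavior_query(query_key)
    _cache[query_key] = (now + CACHE_TTL_SECONDS, result)
    return result
```

With this in place, only the first caller per key (per hour) pays the full query cost; everyone else gets the stored result.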
It turns out caching is a lot more powerful than I previously realized: it offers several degrees of aggressiveness, and the more aggressive the caching, the less data needs to be transmitted over the network, if any at all. Google's developer documentation, which I had only glanced at before, helped immensely with understanding HTTP caching. Put simply, the degrees of caching aggressiveness are:
- Most aggressive - From the server, set the "max-age" directive of the "Cache-Control" header to a high number of seconds. For example, use 31,536,000 for one year. The browser will use its locally cached response for an entire year before ever sending a request to your server again. No request is sent to the server on subsequent calls for a long time.
- From the server, set max-age to something modest, for example, one minute, for a resource that isn't expected to change very often. This can still help with performance. No request is sent to the server on subsequent calls for a little while.
- From the server, do nothing at all to the response your HTTP library is about to send. Most server-side web frameworks will still generate and send an "ETag" header, which acts as a hash of your response. Since max-age wasn't set, the response is still cached by the client, but it's immediately considered "stale" and would normally not be reused on subsequent requests. Instead, the client sends the ETag back (in the "If-None-Match" header) on subsequent requests, and if it matches your server-side ETag, the resource hasn't changed, so instead of sending the full response, your HTTP library simply sends a 304 Not Modified. The client then uses its cached copy, even though it's "stale", because the resource wasn't modified and there's no need to download it again. A request is sent to the server on subsequent calls, but only a tiny response comes back, until the resource changes. This is the default caching policy of most server-side web frameworks.
- Least aggressive - From the server, set the "no-store" directive of the Cache-Control header. The browser will never store the response at all, so it always completes a full request and response, and the ETag mechanism never gets a chance to report that the resource didn't change. A full request and response happen on every call, forever. No caching.
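The ETag handshake in the middle option can be sketched framework-agnostically. Here respond() is a hypothetical handler that returns a (status, headers, body) tuple, standing in for whatever the web framework does for real:

```python
import hashlib

def respond(body, if_none_match):
    """Serve a response with an ETag; answer 304 Not Modified when the
    client's If-None-Match value matches the current resource hash."""
    etag = '"%s"' % hashlib.sha256(body).hexdigest()[:16]  # hash of the response body
    if if_none_match == etag:
        # Resource unchanged: tiny 304 response, client reuses its stale cached copy
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag}, body
```

The first request gets a 200 with the full body and an ETag; a repeat request that echoes that ETag back gets a 304 with an empty body.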
But it turns out I had been conflating caching with the browser. Browsers implement a universal caching standard, so developers who know to set the response headers accordingly don't need to worry about how the caching is actually done. This is great if you're talking to browsers, but we may be serving these responses from the UAT to non-browser clients, like AWS Lambdas, which means we would need to implement our own caching. We found that AWS's API Gateway service, which controls access to the Lambdas, has a caching feature, but it does cost money and has some limitations (cache size, etc.). We will need to research API Gateway or consider other ways to address the caching side of our UAT project.
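To make concrete what "implement our own caching" means for a non-browser client: the client has to honor Cache-Control itself. A minimal sketch, where fetch() is a hypothetical transport function returning a (status, headers, body) tuple:

```python
import re
import time

_client_cache = {}  # url -> (expires_at, body)

def cached_get(url, fetch):
    """A browser-less client honoring the server's max-age directive:
    reuse the stored body until it expires, then refetch."""
    entry = _client_cache.get(url)
    if entry and entry[0] > time.monotonic():
        return entry[1]  # within max-age: no network round trip at all
    status, headers, body = fetch(url)
    match = re.search(r"max-age=(\d+)", headers.get("Cache-Control", ""))
    if match:
        _client_cache[url] = (time.monotonic() + int(match.group(1)), body)
    return body
```

A fuller version would also store the ETag and send If-None-Match once the entry goes stale, which is roughly what API Gateway's caching feature would otherwise do for us.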
The second major problem with the UAT is deciding on a format for the queries it needs to perform. We need something that encompasses all of Engineering.com's needs. However, we want to employ the same tactics we did when we created the VAT (Visual Analysis Tool): the tool should be adaptable, and the staff should be able to build their own queries. We think we can add a feature to the VAT that exports the results of its queries not as CSV and pretty graphics, but as the JSON-formatted HTTP request body we would need for a UAT query. To pull this off, we'll need to work out what parameters the UAT queries need, and how the staff can supply them dynamically at runtime, given that the VAT currently gets these parameter values from the staff sitting in front of the screen and typing them in (e.g. userId, authorId, country, etc.).
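Since the request body format is still undecided, here is only a hypothetical sketch of what the VAT's export feature might emit, using the parameter names mentioned above (the overall shape is an assumption, not a decided design):

```python
import json

def export_uat_query(user_id, author_id, country):
    """Build a hypothetical UAT query request body from the parameter
    values the staff would have entered into the VAT."""
    body = {
        "parameters": {
            "userId": user_id,
            "authorId": author_id,
            "country": country,
        },
    }
    return json.dumps(body)
```

The interesting design question is exactly the one above: which of these values are filled in at export time, and which become placeholders the UAT resolves dynamically per user at query time.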