Devlog 9

Slow progress, but progress nonetheless. Lots of frustrations, but at least I've ended up with something tangible.

Today's work

I spent the day trying to get some frontend stuff set up. This meant adding CORS logic to the backend and attempting to set up a Docker Compose environment so I can easily spin the dev environment up and down. This was a hugely frustrating process. Node remains a disaster to work with, particularly if you're working on an ARM machine with AMD64 containers. I'm not knowledgeable enough about Docker to get this working quickly, so I've quietly abandoned it for now in favor of just using Makefiles.

Anyhow, all this has culminated in me actually getting started with frontend work (finally). I now have a somewhat functioning login page which, at the very least, sends data to and receives a payload from the API. I decided to wrap all API interactions in a single client class so I can easily access the necessary parts from my Svelte components, which has already made my life much easier.
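
For the curious, the shape I'm going for is roughly this. It's just a sketch with placeholder names, endpoints, and payloads rather than the actual class, but the idea is that one object owns the base URL and auth token, and the Svelte components only ever call its methods.

```typescript
// Rough sketch of the API client idea; the class name, endpoints, and payload
// shapes are placeholders, not the real implementation.
export class ApiClient {
  private token: string | null = null;

  constructor(private baseUrl: string) {}

  // Log in and remember the token so later requests can authenticate.
  async login(username: string, password: string): Promise<void> {
    const res = await fetch(`${this.baseUrl}/auth/login`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ username, password }),
    });
    if (!res.ok) throw new Error(`Login failed: ${res.status}`);
    this.token = (await res.json()).token;
  }

  // Generic authenticated GET used by the components.
  async get<T>(path: string): Promise<T> {
    const res = await fetch(`${this.baseUrl}${path}`, {
      headers: this.token ? { Authorization: `Bearer ${this.token}` } : {},
    });
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    return (await res.json()) as T;
  }
}
```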

In addition to this, I added support on the backend for searching the TMDB database. I'd prefer to use Wikidata personally, but the REST API just isn't fully-featured enough for what I want. I need to be able to limit search results by entity type (e.g. "Movie") but it doesn't seem like this is supported yet. Using TMDB will introduce some burden for hosts as they'll need to create a key, and it does add a small amount of complexity on the API side too. But it gives me something to start with.
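
The backend side of this is essentially a thin wrapper around TMDB's movie search endpoint. Roughly something like the sketch below; the wrapper function and result type are made up for illustration, but the /3/search/movie endpoint with its query and api_key parameters is TMDB's.

```typescript
// Sketch of a TMDB movie search call; the wrapper function and result type
// are hypothetical, not the actual backend code.
interface TmdbMovieResult {
  id: number;
  title: string;
  release_date?: string;
}

export async function searchTmdbMovies(query: string, apiKey: string): Promise<TmdbMovieResult[]> {
  const url = new URL("https://api.themoviedb.org/3/search/movie");
  url.searchParams.set("query", query);
  url.searchParams.set("api_key", apiKey);

  const res = await fetch(url);
  if (!res.ok) throw new Error(`TMDB search failed: ${res.status}`);
  const data = await res.json();
  return data.results as TmdbMovieResult[];
}
```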

What's next?

I'm going to finish the login page and actually handle errors and auth properly, then look at building the search page. This will enable me to get my bearings with Svelte enough to actually work out how to create protected routes, get auth working, and handle reactivity. The last thing I'm going to be touching is the kanban logic. I want to really understand Svelte by the time I start trying to tackle that.

I also need to plan how I'm going to interact with TMDB. Per their API usage terms, I have to make sure I'm not serving cached data that's more than 6 months old. This is a strange requirement, but I guess they don't want people advertising that they use TMDB data and then showing stale or inaccurate information. Not a good look for them. This should be an easy enough fix, though: I just need a task on the API side that finds any items that have a TMDB ID and were last updated more than 6 months ago so I can refetch them.
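
Something along these lines should do it. This is only a sketch; every interface and method name here is a stand-in for whatever the API actually ends up using.

```typescript
// Sketch of the planned refresh task. Database, TmdbClient, and their methods
// are hypothetical stand-ins, not real API code.
interface Item { id: number; tmdbId: number; }
interface Database {
  findTmdbItemsUpdatedBefore(cutoff: Date): Promise<Item[]>;
  updateItem(id: number, data: unknown): Promise<void>;
}
interface TmdbClient { getMovie(tmdbId: number): Promise<unknown>; }

const SIX_MONTHS_MS = 1000 * 60 * 60 * 24 * 182; // roughly six months

export async function refreshStaleTmdbItems(db: Database, tmdb: TmdbClient): Promise<void> {
  const cutoff = new Date(Date.now() - SIX_MONTHS_MS);
  // Only items that came from TMDB and haven't been updated in ~6 months.
  for (const item of await db.findTmdbItemsUpdatedBefore(cutoff)) {
    const fresh = await tmdb.getMovie(item.tmdbId);
    await db.updateItem(item.id, fresh);
  }
}
```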

I also need to think about how to cache information. I think the best approach is to always query TMDB for search results, then do a cache check on the API side when a user selects an item. If there is no item with a matching TMDB ID in the database, the server should fetch the data from TMDB and relay it so the user can see the information immediately. In the background, it should then add the fetched data to the database so that the next time the user navigates to that item they get the cached copy instead. The goal here is simply to make sure that larger instances don't blow past their TMDB API limits, although that's probably not a real concern since I don't anticipate much usage of this app.
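
In other words, something like this on the item endpoint, with the same caveat as above: the storage layer and TMDB client are hypothetical stand-ins.

```typescript
// Sketch of the cache-on-first-view flow; all names are illustrative.
interface Database {
  findByTmdbId(tmdbId: number): Promise<unknown | null>;
  insertFromTmdb(tmdbId: number, data: unknown): Promise<void>;
}
interface TmdbClient { getMovie(tmdbId: number): Promise<unknown>; }

export async function getItem(db: Database, tmdb: TmdbClient, tmdbId: number): Promise<unknown> {
  const cached = await db.findByTmdbId(tmdbId);
  if (cached) return cached; // cache hit: serve the stored copy

  // Cache miss: fetch from TMDB and relay it immediately...
  const fresh = await tmdb.getMovie(tmdbId);
  // ...while persisting in the background so the response isn't held up.
  void db.insertFromTmdb(tmdbId, fresh).catch((err) => console.error(err));
  return fresh;
}
```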

Lastly, I need to think about implementing rate limiting of my own. This will be the first layer of protection for the server <-> TMDB interaction. Hopefully this will be a simple piece of middleware, but I need to read up on it.
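
If I end up rolling my own, I imagine it'll be something like a small token bucket sitting in front of the TMDB client. A toy sketch of the idea, not the middleware that will actually ship:

```typescript
// Toy token-bucket limiter for outgoing TMDB requests. Only a sketch of the
// concept; the real middleware may look nothing like this.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity: number, private refillPerSecond: number) {
    this.tokens = capacity;
  }

  // Returns true and consumes a token if a request may proceed right now.
  tryAcquire(): boolean {
    const elapsedSeconds = (Date.now() - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = Date.now();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Example: allow at most ~20 TMDB calls per second across the whole instance.
const tmdbLimiter = new TokenBucket(20, 20);
if (!tmdbLimiter.tryAcquire()) {
  // Skip or queue the TMDB call and serve cached data instead.
}
```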

Tell me what you think.