My Journey To Replace Subscription Services

The longer I spend on the internet, and on my devices in general, the more disgruntled I become with the value these services provide relative to the lack of true control they offer me. Since 2024, I’ve been slowly changing the software and services I use day to day, weighing up what I genuinely find essential and what could do with a replacement.

This blog post details some of the changes I’ve made to my software and services, why I changed them, and the benefits I’ve discovered. I’ll update it from time to time as I make more changes and refine my existing setup.

There are also links to software I’ve created as replacements; each project page has a more thorough explanation, with links to GitHub repos where I’ve open sourced them.

News

The news is crammed with things I simply don’t care about, and algorithm-driven services like YouTube and Twitter/X have become so unbearable for me that I now only use their search functions when I’m looking for something specific. My requirements for news came down to a few things: a controlled selection of sources rather than an algorithmic feed, summaries free of fluff, awareness of each source’s political leaning, and reliable coverage of world events.

Since I don’t engage with social media much anymore, I decided to collect RSS feeds for a more controlled selection. On my phone, I retrieve these feeds through an app called ReadYou, which is fine, although it doesn’t solve the problem of fluff, and determining political leanings isn’t an automatic process. To remove fluff, I came across a self-hostable service called n8n, which lets you automate pretty much anything, including prompting a locally hosted LLM with arbitrary context. With this, I set up a workflow that runs every 30 minutes: it pulls the content from my RSS feeds and sends each item to my local LLM, which summarises it, focusing only on the important details. Once processing is finished, the snippet is sent to a Discord server via a webhook integration. This worked out decently for a while, although I have plans to improve the service and distribute it on a more lightweight platform rather than relying on Discord.
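The actual pipeline lives in n8n, but the flow is simple enough to sketch in plain Python. The Ollama-style endpoint, model name, and webhook URL below are placeholders for illustration, not my real configuration:

```python
# Sketch of the news pipeline: pull RSS items, summarise each with a
# locally hosted LLM, and post the snippet to a Discord webhook.
import json
import urllib.request
import xml.etree.ElementTree as ET

def parse_feed(xml_text: str) -> list[dict]:
    """Extract title/description pairs from an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "description": item.findtext("description", default=""),
        })
    return items

def summarise(item: dict, model: str = "llama3") -> str:
    """Ask a local Ollama instance for a fluff-free summary."""
    prompt = ("Summarise this news item in two sentences, keeping only "
              f"the important details:\n{item['title']}\n{item['description']}")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt,
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

def discord_payload(summary: str) -> dict:
    """Discord webhooks accept a JSON body with a 'content' field."""
    return {"content": summary[:2000]}  # Discord caps messages at 2000 chars
```

A scheduler (n8n’s cron trigger in my case) would call `parse_feed` on each fetched feed, pass items through `summarise`, and POST `discord_payload` to the webhook URL.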

The following section covers AI, where I’ll explain my solution to the last issue: having a reliable collection of sources that cover world events.

AI

My AI usage isn’t really anything to write home about, with probably <50 chats in total with ChatGPT since 2023. I always felt restricted by the token limit and uncomfortable with the idea of my interactions being used as further training data. However, around June 2025, I discovered that you could host your own LLM locally with either Ollama or LM Studio. This was a much more attractive option for me, and since I had good hardware, I gave it a shot and quickly realised the potential. I won’t detail how I make use of AI, as that’s not the purpose of this post, but I had one core issue with local AI that I just couldn’t avoid: connectivity to the internet. One of the nice benefits of many of the larger online AI models is access to up-to-date information when queried. Since LLMs take a ton of time and compute power to train, it doesn’t make sense to continually retrain them with current information; instead, they make use of ‘retrieval-augmented generation’ (RAG), using tools that scrape the internet for data to feed into the model.

After seeing that LM Studio supports RAG, I decided to have a go at deploying a web scraper tool like Scrapy, which I could feed keywords and receive a text file containing site content from various sources on the topic. Using media bias charts as a reference, I could provide sources of my own choosing as the primary search spots for my scraper, allowing my LLM to compose news stories with structured output comparing the details across political leanings. Now more than ever, it’s important for me to consider many different perspectives on stories around the world, so that I can avoid radicalising myself towards particular viewpoints and get a good idea of the truth. Being able to control exactly where I get information from is a powerful tool, and lets me be sure there’s no interference from third parties. It’s admittedly a little paranoid to believe that sources would be tampered with, but I feel it’s still worth the effort to provide your own, whether you believe that could happen or not.
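The core of the source-control idea is a simple gate: before any scraped page reaches the RAG context, its URL is checked against an allowlist of sources I chose from the bias charts. The domains below are illustrative stand-ins, not my actual list:

```python
# Filter scraped pages to a hand-picked allowlist of news sources before
# joining them into one text blob for the LLM's RAG context.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"apnews.com", "reuters.com", "bbc.co.uk"}  # example picks

def is_allowed(url: str, domains: set[str] = ALLOWED_DOMAINS) -> bool:
    """True if the URL's host is an allowed source (or a subdomain of one)."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in domains)

def build_context(pages: list[tuple[str, str]]) -> str:
    """Join (url, text) pairs from allowed sources into one RAG document."""
    kept = [f"SOURCE: {url}\n{text}" for url, text in pages if is_allowed(url)]
    return "\n\n".join(kept)
```

Matching on the parsed hostname rather than the raw URL string matters: a page like `https://evil.example/reuters.com` would slip past a naive substring check.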

Since writing this, I also began using the DuckDuckGo MCP server through Docker as a way to quickly search the internet for current news or general information. For even more privacy, there’s a SearXNG MCP alternative that works off a locally hosted instance, although I don’t plan to migrate to it just yet.

File Backup

For most of my files, I store them locally simply because it’s easy to access, costs me nothing, and gives me comfort knowing they’re not stored in the cloud. I make a few exceptions, with some content hosted online for use on my website and elsewhere so it doesn’t take up any space. One area I had always neglected was pictures on my phone, primarily because I don’t take too many, so I just let them back up to Google Drive. I recently bought a Google Pixel 8 Pro, which has a very decent camera setup capable of shooting RAW at 50MP as well as 4K 60fps video. Inevitably I would hit the storage limit, at which point Gmail would stop incoming and outgoing email and nothing would be backed up anymore. With Google’s pricing for extended storage being a joke, I found a solution with more freedom and more features: just use my other devices as the backup.

I already use Syncthing to sync my Obsidian vault between my phone, laptop, and computer, so it was an attractive option. It became an even better deal when I realised I could configure it so that my phone backs everything up to my computer, and I can delete whatever I want off my phone whilst the deletion is ignored on the computer. It felt like a cheat code, being able to harness the unused storage on my computer, but I suppose it comes down to effort. If you don’t want to put effort into anything, you’re guided towards things you’re assumed to want, typically with a price tag attached. On the flip side, even a little effort opens up the opportunity to have exactly what you want, and to take control of the devices and services you use every single day.
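For reference, Syncthing exposes this behaviour as the advanced folder option ignoreDelete, set on the receiving side (the computer). As best I can tell, the relevant fragment of the receiving machine’s config.xml looks roughly like this, with the folder id, label, and path being placeholders:

```xml
<folder id="phone-photos" label="Phone Photos" path="/data/phone-photos">
    <!-- keep files on this machine even after the phone deletes them -->
    <ignoreDelete>true</ignoreDelete>
</folder>
```

Syncthing’s own documentation flags ignoreDelete as an advanced, use-at-your-own-risk option, so it’s worth reading up on before copying this.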

There isn’t much else to mention in terms of file backups, although I’d be interested in moving everything to a NAS, which would also let me back up and view my files from anywhere in the world rather than only when I’m home. Furthermore, whilst my files are backed up, unless I’m on my home computer there’s no way to view and selectively restore items through a GUI the way you can with Google Photos. A future evolution could make this possible, so I could also pull these files to my laptop and anywhere else.

Video Streaming

After I purged all of my subscription services, the first area I decided to work on was media streaming. Having video files spread around my computer made it a hassle to actually find what I wanted to watch, so I sought to create a solution for myself. I first looked into options like Plex and Jellyfin, which both allow you to stream a local library. I had seen frustration online with how Plex handled logins, paid features, and user privacy, and that ended my interest very quickly. Jellyfin was more intriguing, and I actually set it up on my computer; however, I ran into two damning issues that stopped me from continuing with it. First, the general app performance was surprisingly abysmal considering I have a very powerful PC and hadn’t added much content yet. Second, metadata sourcing wouldn’t actually occur until I had watched a small portion of the content I was hosting, and other videos wouldn’t even get indexed until they were classed as “similar” to what I had just watched. At that point I had little patience left to troubleshoot, and decided to do it myself.

Apricot

That’s where Apricot comes in: an effortless local media library that not only tracks all of my files, but makes API calls to metadata databases to retrieve and download information I can use however I want. Working on Apricot was interesting because I had never done full stack web development before, let alone used the main languages I ended up with. I knew it would be a massive undertaking with no experience, but I adopted an approach that worked flawlessly for me. I’m already familiar with programming concepts across both the front end and back end, alongside my gameplay-related experience, so if I could find a way to teach myself extremely fast whilst focusing my learning on the actual project I was building, that would be perfect.

I’ll talk about AI later, but I had set up my own locally hosted model, which meant I could endlessly prompt it with whatever questions I had without fear of running out of tokens. I refused to engage in “vibe coding”, the practice of leaving the AI to program everything without interference beyond asking for features or bug fixes. I had heard stories about vulnerabilities, unnecessary complexity, and generally shockingly poor results from vibe-coded projects larger than simple, cookie-cutter software. My alternative was to maintain full control whilst using AI as a powerful teacher and assistant, and within a day I had programmed a functioning backend database and file watcher service in Node.js and SQLite, with a good understanding of exactly how it worked, since I was guided purely by the concepts of what I needed, with help on syntax.
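Apricot’s backend is Node.js + SQLite, but the core idea, an index table plus a scan that upserts every media file it finds, is language-agnostic. Here’s the concept sketched in Python (table layout and extension list are my own illustration, not Apricot’s actual schema):

```python
# Minimal media index: walk a library folder and upsert every media file
# into a SQLite table keyed by path, recording size and modification time.
import os
import sqlite3

MEDIA_EXTS = {".mkv", ".mp4", ".avi"}

def open_db(path: str = ":memory:") -> sqlite3.Connection:
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS media (
        path TEXT PRIMARY KEY, size INTEGER, mtime REAL)""")
    return db

def scan(db: sqlite3.Connection, root: str) -> int:
    """Walk the library and upsert each media file; returns files seen."""
    seen = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1].lower() not in MEDIA_EXTS:
                continue
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            db.execute(
                "INSERT INTO media(path, size, mtime) VALUES (?, ?, ?) "
                "ON CONFLICT(path) DO UPDATE SET size=excluded.size, "
                "mtime=excluded.mtime",
                (full, st.st_size, st.st_mtime))
            seen += 1
    db.commit()
    return seen
```

A file watcher then just reruns the scan (or a targeted upsert/delete) whenever the library directory changes, which is what chokidar-style watchers do in the Node world.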

My personal methodology for getting the most effective help from AI whilst staying in full control is to provide a detailed insight into what I want out of the project, along with full specifications for what I want to create. This came back to bite me later, and taught me to be thorough in my prompting and research when it comes to choosing a stack. Even so, the power of having a full document outlining how the project works, what I’m using, and what each file does and how they communicate, then passing that into an LLM for direct one-to-one support on any query, is completely unparalleled. I strongly believe this is the future of programming: AI as a powerful extension of human capabilities. To ensure the LLM could fully understand my codebase and successfully guide me when adding features and troubleshooting, I passed that large document in as a vector database with RAG. I had the model update the document often, so that not only did I have a point of reference showing what each file was doing, but whenever I prompted something, the AI had the same kind of understanding I did, without additional context needing to be provided.

I found it’s important to be thorough when setting up that initial document provided through RAG, since I later had to swap my entire frontend framework due to video issues. I wanted a Netflix-like experience where I could launch the frontend as a website running inside my browser, formatting the page nicely with HTML and CSS. Once the backend could pass the video URL to the frontend and start playback when play was pressed, I discovered that not all container types or codecs are properly supported in web browsers. Having never worked on a similar project, I didn’t realise this was a limitation, so I had to reconsider how I was going to deliver the content. Initially I used FFmpeg to transcode MKV files to MP4, but that didn’t help much, and HDR content wasn’t being tone mapped properly in Firefox. I tried converting to 8-bit with other options, but ultimately the quality suffered and it wasn’t worth the time it took to properly transcode all my content. After prompting my AI, I decided the best path was to swap the frontend entirely and adapt to a new framework.
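For the curious, this is the kind of FFmpeg invocation involved: transcoding to a browser-friendly MP4 while tone mapping HDR down to 8-bit SDR. The zscale/tonemap filter chain below is the commonly cited one (it needs an FFmpeg build with zimg), and the exact parameters are a starting point rather than my final settings:

```python
# Build an FFmpeg command that tone-maps HDR video to 8-bit BT.709 SDR
# and re-encodes it into an MP4 that browsers can actually play.
def tonemap_cmd(src: str, dst: str) -> list[str]:
    vf = ("zscale=t=linear:npl=100,tonemap=hable,"
          "zscale=t=bt709:m=bt709:r=tv,format=yuv420p")
    return ["ffmpeg", "-i", src,
            "-vf", vf,               # linearise -> tonemap -> back to BT.709
            "-c:v", "libx264", "-crf", "18",
            "-c:a", "aac",           # browsers rarely decode TrueHD/DTS audio
            dst]

# Usage: subprocess.run(tonemap_cmd("film.mkv", "film.mp4"), check=True)
```

Even with a chain like this, per-file transcode times across a whole library were what made me abandon the browser approach and hand playback to MPV instead.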

I ended up pursuing Tauri for my frontend, as it was far more lightweight than something like Electron and gave me the opportunity to try Rust for the first time. Thankfully, the only Rust I needed to write was the ability to launch MPV with the specified video file, and I could drag and drop the HTML and CSS I had worked on previously. With a few added window flags, I had a native application that let me watch any video in my media folders with the correct metadata displayed.

Whilst I’m nowhere near finished with Apricot, I can already create Windows, Linux, and Android builds for my family; video bitrate is no longer artificially limited just because it isn’t being displayed on a TV; and I have direct access to any file on the server with zero buffering. One of the more interesting ideas I have for the future is AI integration for things like recommendations. I’ve used the self-hostable workflow automation tool n8n before, and could send suggestions from the frontend to a local LLM and get results based on factors like story similarity, cast, and visual style. Not only are barriers lifted on things that should be standard for the initial cost, but there’s so much room for expansion on whatever you’d like.

Audiobooks

I picked up reading again as a habit at the end of 2023, engaging not just with physical books, but with audiobooks from Audible. Since I travelled to university by foot and bus every day, I found massive value in being able to put headphones on and either learn something or immerse myself in fiction during moments that would otherwise be unproductive. Personally, I like Audible a lot, and unlike other services I was happy to pay, since the service was great and unobtrusive. However, there was one recurring issue I found with audiobooks in general: in the situations where I listen, my thoughts can’t be articulated and logged the way they are when I read a physical book. My brain does all of this thinking while I listen, but when the day has passed and so much has gone on, nothing sticks. It’s a slightly esoteric solution, but I wanted a more refined bookmarking system with easy integration into my note-taking app, Obsidian. The plan was a simple app featuring a locally stored database of audiobooks with their metadata and some user information like playback position, built around a core bookmarking feature that would make my experience far more beneficial whilst tying into my own ecosystem.

ReadOn

ReadOn is the name of the audiobook app, although I currently have nothing to share on progress. I’m hoping development won’t take too long, so be sure to check back here if you’re interested.

A feature I’ve done some experimenting with is linking my bookmarks to what is actually said in the book. My current experiment uses OpenAI’s Whisper to transcribe the entire book beforehand, which produces a subtitle file with timestamps; I can then link my bookmark timestamps, with user-defined ranges, to this transcription, so that both what the author wrote and what I wrote are processed into one section and uploaded to my notes. Once again, these are incredibly powerful features I can now add with relative ease, elevating my experience far beyond what these services provide, with direct control over how everything works.
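Since Whisper can emit standard .srt subtitles, the linking step reduces to parsing the cue timestamps and taking every cue that overlaps a bookmark’s range. A minimal sketch of that matching (the function names are mine, not ReadOn’s):

```python
# Link audiobook bookmarks to a Whisper-generated SRT transcript by
# collecting the transcript cues that overlap each bookmarked time range.
import re

CUE_TIME = re.compile(
    r"(\d+):(\d+):(\d+)[,.](\d+) --> (\d+):(\d+):(\d+)[,.](\d+)")

def parse_srt(text: str) -> list[tuple[float, float, str]]:
    """Return (start_sec, end_sec, line) cues from an SRT transcript."""
    cues = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        for i, line in enumerate(lines):
            m = CUE_TIME.match(line)
            if m:
                h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
                start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000
                end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000
                cues.append((start, end, " ".join(lines[i + 1:])))
                break
    return cues

def quote_for_bookmark(cues, start: float, end: float) -> str:
    """Join the transcript lines that overlap the bookmarked range."""
    return " ".join(t for s, e, t in cues if s < end and e > start)
```

The joined quote plus my own note then becomes one section ready to drop into an Obsidian file.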

Future Plans

Most of the things I’ve mentioned here were implemented individually within a couple of days at most, and have not only saved me a lot of money, but also given me a deeper relationship with my software, tailoring it how I want without restrictions on how it works or when features arrive. Ultimately, the hope is that any software that doesn’t fully meet my wants and needs can become a project that benefits me both short term and for the foreseeable future. Currently, I’m pretty happy with how everything is set up, but down the road I plan on moving my main PC to Linux so I can sync configuration with my laptop and reap the benefits of not having Microsoft plaguing my hardware. For that to happen, there’s software I simply can’t give up using at the moment, and limitations that, whilst I believe they’ll be lifted in the future, still can’t be fixed by running software inside a container or virtual machine.