March 2026
March 1, 2026
The UK driving system feels surprisingly disorganized, especially when you come from a country where things are done differently. Back home, we have large, dedicated driving schools with their own internal road networks. Learners stay within those controlled environments until they've genuinely mastered the basics -- steering, observation, manoeuvring, clutch control -- before ever touching a public road. The whole process feels structured, efficient, and relatively straightforward.
The UK, by contrast, is a different story. Expensive lessons, near-impossible exam slots, and a testing process that often feels more subjective than it should be. The core issue is this: you're not really being examined on your driving ability -- you're being examined on your ability to make the examiner feel safe. Those are two very different things.
I failed my test once already, and looking back, both reasons highlight exactly this problem. The first was my speed on approach -- not dangerously fast by any objective measure, but enough to make an examiner uncomfortable when they don't know you or your capabilities yet. The second was lack of observation, which is honestly debatable. You might genuinely check your mirrors with a subtle eye movement and take in all the information you need, but if the examiner didn't see you do it, as far as they're concerned, it didn't happen.
So, this post is partly a reminder to myself for next time. Drive slower than you think you need to, especially when approaching junctions or hazards -- give the examiner plenty of time to feel settled. And make your observations obvious: turn your head, use your mirrors frequently and visibly, and don't rely on subtle glances that only you know happened. It sounds performative, because it is -- but that's the nature of the test.
I suspect that if everyone going into their test applied just these two principles, the national pass rate would jump by at least 30%.
March 7, 2026
WeChat -- The Worst Messaging App Ever Made
If I had to cast a vote for the worst messaging app in human history, it would go to WeChat -- and it wouldn't even be close.
WeChat is a Chinese messaging app developed by Tencent, and the uncomfortable truth behind its dominance is simple: it doesn't succeed because it's good. It succeeds because the Chinese government has banned virtually every mainstream alternative -- WhatsApp, Telegram, Signal, you name it. When the competition is legislated out of existence, there's no pressure to actually build something decent. What you get instead is a textbook example of what state-backed technology monopoly produces: an app so poorly designed it would never survive in a free market.
Let's start with the data storage model. WeChat only stores messages locally -- once a message is delivered to the recipient, it's wiped from the server after a short window. Fine in principle; local-first storage is a legitimate design choice. The problem is what comes next: there's no straightforward way to back up your own data. The only supported backup method requires the desktop WeChat app running on a computer. No computer? You simply can't back up your chat history. Your only option is a direct phone-to-phone transfer, which works until one of those phones dies or gets lost. And it gets worse. Even if you do manage to back up your data to a computer, you cannot actually read it. The backup is encrypted and bound to your WeChat account using a key that WeChat controls. You can restore it back to a phone -- that's it. You cannot open it, search it, export it, or do anything useful with it on a computer. It's your data, stored on your own machine, and you're locked out of it.
Naturally, a handful of developers reverse-engineered the encryption, extracted the decryption keys at runtime, and published open-source tools so people could access their own chat histories. Tencent's response? Lawsuits. The projects were taken down from GitHub. And then, to make matters more absurd, Tencent began forcing users to upgrade away from older versions that were more vulnerable to this kind of extraction -- yet version 3.9 still sits on their official website available for download. You install it, log in, and immediately get kicked out with a prompt telling you the version is outdated. If the version is truly unsupported, why is it still being served from your own servers? The cynicism is breathtaking.
I genuinely don't have words for the level of mediocrity on display here -- from the product decisions all the way down to the legal intimidation of developers who simply wanted access to their own messages.
So here's what I'm doing next: I'm going to explore whether the extraction methods from those now-deleted projects can be replicated for newer versions of WeChat. I'll document everything I find and, if it works, I'll post it on GitHub. I'm based in the UK, and I'm not particularly worried about a lawsuit from a company with a track record of silencing people for wanting to read their own data. This is my data. I own it. Wish me luck -- updates to follow.
March 8, 2026
Good news regarding the last post: WCDB (WeChat's SQLCipher wrapper) caches derived raw keys in process memory in the form x'<64hex_enc_key><32hex_salt>'. That means we can scan process memory for candidate key blobs, match each key to its database by salt, and decrypt the databases.
I have a working prototype; right now I'm still improving the tool's usability.
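To make the matching step concrete, here's a minimal sketch of the idea, assuming the x'<64hex_enc_key><32hex_salt>' blob layout described above and the standard SQLCipher convention that the 16-byte KDF salt is stored in the first 16 bytes of the encrypted database file. The function names and the exact byte layout are my assumptions, not WCDB's actual API:

```python
import re
from pathlib import Path

# WCDB-style cached key material: x'<64 hex chars key><32 hex chars salt>'
CANDIDATE_RE = re.compile(rb"x'([0-9a-fA-F]{64})([0-9a-fA-F]{32})'")

def parse_candidates(memory: bytes):
    """Scan a process-memory dump for candidate (raw_key, salt) pairs."""
    pairs = []
    for m in CANDIDATE_RE.finditer(memory):
        key = bytes.fromhex(m.group(1).decode("ascii"))   # 32-byte raw key
        salt = bytes.fromhex(m.group(2).decode("ascii"))  # 16-byte salt
        pairs.append((key, salt))
    return pairs

def match_key_to_db(candidates, db_path: Path):
    """SQLCipher keeps the KDF salt in the first 16 bytes of the
    database file, so a candidate's salt identifies its database."""
    header_salt = db_path.read_bytes()[:16]
    for key, salt in candidates:
        if salt == header_salt:
            return key
    return None
```

Once a key is matched, the full 96-hex blob can in principle be fed to SQLCipher as a raw key with an explicit salt (`PRAGMA key = "x'<key><salt>'";`), though the exact cipher settings WCDB uses would still need to be replicated.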
March 16, 2026
Over the last three weeks, I've been studying how to get the most out of agentic coding tools -- not by throwing everything at them, but by being deliberate about how I use them.
The common assumption among users seems to be that maximising value from something like Claude Max is simple: crank up the thinking effort, throw in a vague prompt, and let it burn through your weekly usage. More tokens consumed must mean more work done, right? I'd argue the opposite.
My approach has been focused on minimising waste at every step. Before an agent touches a task, I prepare comprehensive instruction sets and structured markdown files it can read immediately -- this dramatically reduces the time and context an agent needs to orient itself and get going. Rather than babysitting sessions interactively, I run everything through remote servers with tmux, which lets me monitor tasks continuously without being physically present. During the day, I define and queue up tasks with clear todos, so the agent keeps working through the night while I sleep. The work doesn't stop when I do.
The results have been tangible. In my first week, I used roughly 20% of my weekly allocation. Second week, around 30%. This week is trending toward 70%+ -- but that's not because I've become less efficient. It's because the pipeline is now mature enough to take on significantly more ambitious work. In these three weeks, this setup has produced over 2,000 unit and integration tests -- a volume that would have taken far longer and cost far more with a less structured approach.
The lesson I'd take from this: don't stress about hitting your usage ceiling every week. A half-used week with a well-structured pipeline and meaningful output beats a maxed-out week of chaotic, expensive prompting. Build the scaffolding first. The productivity will follow -- and it will compound.
Microsoft Copilot and the MCP Integration Experience -- A Mess
When people talk about the best AI models right now, the conversation usually centres on Claude, ChatGPT, and Gemini -- with Grok increasingly earning a mention. But enterprise AI is a different landscape entirely. Inside large organisations with strict security and compliance requirements, the shortlist shrinks fast. Many firms effectively have one sanctioned option: Microsoft Copilot. It's deeply embedded in the Microsoft 365 ecosystem that most enterprises already run on, which makes it the path of least resistance for IT departments -- regardless of whether it's actually the best tool for the job.
Today I was working through the process of connecting our MCP server to Copilot. It did not go well.
The documentation is ambiguous to the point of being genuinely misleading. The UI is cluttered and poorly thought through. And the settings -- where do I even start. Here's a question that should have a simple answer: how many distinct Copilot platforms does Microsoft currently operate? The answer, as best as I can tell, is at least three. Microsoft 365 Copilot, Copilot Studio, and GitHub Copilot all exist as separate products with separate configurations, separate interfaces, and separate documentation -- and the lines between them are blurry enough that figuring out which one you're actually supposed to be working in is itself a non-trivial task. For a developer trying to do something as specific as MCP integration, this fragmentation is a genuine obstacle.
This is what Microsoft looks like right now from the inside -- a company sitting on an enormous pile of products that don't quite talk to each other, held together by inertia and enterprise lock-in rather than coherent design. The AI wrapper is new; the organisational chaos underneath it is not.
On a brighter note: I also got a new work machine today -- a 64GB RAM workstation. First order of business: setting up a proper Linux environment for future development work. Some things, at least, are still built with the developer in mind.
March 30, 2026
I Passed My Driving Test -- and I Have Something to Say
I passed my driving test today. Finally. Long sigh.
Goodbye to the UK learner system, with all its quirks and frustrations. Goodbye to the overpriced lessons, the examiner theatre, and the months of waiting for a slot. It's done.
But since I'm in a reflective mood, let me leave with one parting observation -- because I genuinely could talk about UK roundabout design for an entire day without repeating myself.
The roundabout, as a concept, was designed for light, manageable traffic. The logic is elegant in theory: no signals, drivers yield naturally, traffic flows continuously. It works beautifully in a quiet market town. It does not work in a city of tens of millions of people and millions of cars -- and the infrastructure itself quietly admits this. When a roundabout is functioning as intended, you don't need traffic lights on it. The moment you start bolting signals onto a roundabout, you're essentially acknowledging that the original design has been overwhelmed.
Take the two major roundabouts near Mill Hill test centre, where I passed my test. Apex Corner and Mill Hill Circus -- both traffic-light-controlled. Mill Hill Circus goes further: six "keep clear" boxes painted across the roundabout itself. Six. That's not a roundabout anymore; it's a signalised junction that happens to be circular. The keep clear boxes exist precisely because without them, the roundabout gridlocks. Drivers from one arm block the path of drivers from another, and the whole thing seizes up.
The deeper problem is that roundabouts depend entirely on every driver behaving correctly. In low-traffic environments, that's a reasonable assumption. In a dense urban area, it only takes one confused driver, one hesitation, one mistake -- and the whole system backs up. There's no mechanism to absorb the error. Traffic lights, for all their inefficiency, at least impose order. A roundabout just hopes for the best.
If nothing structurally changes, driving tests in the UK -- particularly in London -- are only going to get harder. The roads are more congested, the junctions more patched-together, and the margin for error on the test shrinks accordingly. I got through it. But the system isn't getting any easier to navigate, for learners or anyone else.