The Secret is in the Struggle

Published on 2025/10/31

Recently, a friend asked me why I read instead of using that time to “upskill” or pursue something that’ll have a more immediate payoff.

I have two things to say to that question.

  1. Asking me why I read is like asking me why I eat food :D
  2. The question assumes a particular model of learning that’s linear and optimized, but that’s not how my brain works. As I get older, I really think it might not be how learning should work in general.

My Learning Loop (aka my event loop)

Here’s what my learning process actually looks like:

  1. It always starts with me seeing something and going “oh shit, that’s really cool”.
  2. Then, I’ll want to do the cool thing, so I start looking into it.
  3. I hyperfixate and lose 2 days researching something tangential to it.
  4. I accidentally learn something tangential to the previous tangent.
  5. I hyperfixate on something else for a little bit.
  6. Eventually, at the end of all this, I’ll know how to do the original cool thing.

There’s no straight line, and there’s friction every step of the way. I’ve had people mention to me that this process feels very inefficient, but here’s the thing: the struggle is doing the epistemic work.

When I hit friction, like when I get slapped by trait bound errors in Rust, or architectural decisions in the P2P chat system I’m building (you’ll hear about this soon, I promise!!), I’m forced to understand why things work the way they do, not just the mechanics of how to make them work.

Someone following a linear curriculum might complete a tutorial without ever internalizing the design philosophy. I hit the wall and now I understand it fundamentally.

The Friction Problem (and why I dislike how people are using AI)

Something I’m seeing a lot of these days that bothers me is how people are using AI to cognitively offload their thinking. They ask a model to explain something, or solve a problem, or generate code, and just like that, the friction disappears. And with it, so does the learning.

Look, I get it, it’s convenient and fast and it feels like you’re learning because you read the explanation and suddenly understand it, but you haven’t. Not really.

I don’t think friction is a bug in learning. Rather, I think it’s the mechanism by which learning actually happens in a meaningful way. When I struggle with a problem, I’m forced to build mental models to resolve it. I have to understand the structure of the problem space, not just the answer to the problem itself. The time I spend stuck, or figuring out ways around a dead end, that’s where the actual learning happens for me. The dopamine hit I get when I finally truly see the solution unravel itself is unmatched.

Using AI to bypass that friction is like asking for a shortcut to the top of a mountain without actually learning how to climb it. Sure, you get to the top faster, but you never develop the muscle memory or the intuition about the terrain. The next mountain you climb will be just as hard because you didn’t build anything durable.

I think the most insidious part of it all is that using AI feels productive. You feel like you’re progressing, but without actually figuring it out yourself, you haven’t internalized anything. You can’t apply it to novel problems and you’ll be blind to any deeper patterns. You can’t build on something if you haven’t spent time laying the foundation for it.

My Hyperfixation Spirals

Let’s take something I’m working on right now as an example. For my master’s dissertation, I’m writing a chat protocol. It’s a federated P2P messaging system that I’m implementing in Rust. (I’ll write a full post about this soon!! I’m close to freezing the spec!!)

I didn’t start working on this by following some predetermined learning path. Instead, I found a problem that I thought needed solving (the not-so-great messaging app landscape) and I put myself to work trying to solve it.

I hit trait bound errors, which led me deep into Rust’s type system, where I had to figure out how traits actually work and how they enable the abstractions I need. I’m not “learning Rust traits” here; I’m learning them because I need to understand them to solve my problem.
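To make that concrete, here’s a minimal sketch (not code from the actual project) of the kind of trait bound error that forces you to understand the type system: a generic function can only use the capabilities its bounds promise, so the compiler rejects anything else until you name the trait you depend on.

```rust
use std::fmt::Display;

// Without the `T: Display` bound, `format!("{}", value)` would fail with
// something like "error[E0277]: `T` doesn't implement `std::fmt::Display`".
// The bound is the contract: the function may only use what it declares.
fn label<T: Display>(value: T) -> String {
    format!("msg: {}", value)
}

fn main() {
    // Works for any type that implements Display.
    println!("{}", label(42));
    println!("{}", label("hello"));
}
```

Hitting that error and fixing it teaches you *why* bounds exist (generic code is checked against the contract, not against each concrete type), which is exactly the kind of understanding a tutorial can skip past.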

Then I’m working on identity management, KeyPackage handling, etc. Each problem spirals outwards. The MLS protocol’s architecture becomes clearer and clearer not because I read the spec linearly but because I’m implementing it and hitting walls that force me to dig deeper. I understand why it’s designed the way it is because I’m feeling the friction of designing it wrong firsthand.

Then I’m thinking about message authentication and about server knowledge minimization, and then I randomly find this really cool thing called Private Information Retrieval, and that’s another tangent I go down because it looks like I can use it to prevent more metadata from leaking out. “Oh, but what about rate limiting and abuse prevention?” I ask myself, and a friend points me to Verifiable Delay Functions, which leads me down yet another spiral because this is all really cool shit and I can’t get enough of it. Each spiral pulls me into adjacent domains, and I’m rapidly accumulating knowledge that doesn’t fit neatly into “Rust knowledge” or “MLS knowledge”. It’s all one integrated learning now.

What’s interesting about this is that none of this would have happened if I’d optimized my learning path. I would’ve learnt things in isolation, never found out about Private Information Retrieval, and never would’ve “seen” how all these problems connect with each other. The friction of implementation is what generated that synthesis.

Why Struggle Matters

The “random knowledge” I accumulate isn’t actually random. It’s organized by relevance to problems I’m trying to solve. That’s why I can correlate things that textbooks present as unrelated. I arrived at them through different doors because I was solving real world problems, not following a pre-set curriculum.

Someone following a linear path might complete a tutorial on, say, cryptographic authentication without ever understanding the threat model that makes it necessary. I’m implementing it because I’m designing a system where adversaries exist. I understand the problem first, and then I understand why the solution exists. That’s backwards from how most learning happens, and I find that it’s infinitely more useful.

The struggle of figuring things out myself builds mental models instead of just information. Information is cheap now. I can ask an AI anything and get a “mostly correct” answer, but the AI won’t be able to make any connections or understand the problem space enough to suggest novel solutions. AIs in their current state don’t have an “understanding” of anything really.

Just so we’re on the same page here, I’m not saying that you shouldn’t reach out to people and ask for help and should instead keep bashing your head against a wall. You absolutely should reach out to other people for help. That’s another avenue for learning. Having an actual human interaction with someone while they explain something to you is incredible.

Horizontal vs Vertical Growth

Most learning advice I see online emphasizes vertical growth: go deep in one direction, stack credentials, and optimize for measurable progress. The allure is obvious: it’s incredibly rewarding in the short term. But vertical growth alone keeps you confined to a single frame of reference.

I prefer building horizontally instead. I love accumulating a wide breadth of knowledge that lets me see patterns across domains. My cryptography work, sysadmin experience, and full-stack engineering aren’t really separate skills that are just stacked vertically on top of each other. They’re interconnected through my drive to understand how things work. That breadth is what lets me see the picture that someone going purely vertical would miss.

Someone optimizing and min-maxing for a vertical build might become an expert at one very specific thing, but they’re vulnerable to missing how that one thing connects to everything else. I’m building resilience through breadth. I see patterns across domains and I can apply insights from one domain to another because I’ve seen how those connections naturally come about during my struggle to understand.

The Social Friction

So here’s where things get awkward. My learning process isn’t legible. Someone watching from the outside sees me spiraling and researching seemingly unrelated things and it looks inefficient to them because it doesn’t produce a clear trajectory. But my output often speaks for itself.

I don’t think the problem is my approach necessarily. I just think people put way too much stock in optimizing for legibility (i.e., the ability to explain progress and show that they’re “on track” to hit some milestone). I’m optimizing for understanding. They’re not necessarily the same thing, and I’ve often found them to be at odds with one another.

Legibility requires linearity. You need to be able to show how one dot connects to the next. Meanwhile, understanding requires friction and spirals. I don’t think I can make this process look clean from the outside because understanding doesn’t emerge cleanly. It emerges through getting lost, finding a potential way out, discovering connections I didn’t expect, spiraling outwards, and then spiraling back in with a newer perspective.

“Just Do It” as a Philosophy

My methodology is fundamentally just starting with something that excites me and spiraling my way into building it. Premature optimization is the enemy. I don’t spend months planning the perfect path. I just throw myself in head first and figure things out along the way.

This approach selects for genuine curiosity over performative upskilling. If something doesn’t pull me, I don’t force it. If it does, I dig deeper. That’s a much better filter than external pressure or the latest productivity framework.

It also means that I’m always solving real problems. The friction I encounter isn’t artificial. It’s the friction of building something that works. This grounds my learning in reality instead of abstract tutorials.

The Wider Picture

Sooo, back to my friend who texted me. I told him that upskilling just for the sake of upskilling is a doomed effort because you’ll end up growing vertically in a single aspect and miss the forest for the trees.

The skills I have aren’t byproducts of me actively trying to acquire skills. They’re byproducts of my pursuit of knowledge itself. That’s a fundamental difference in orientation. One is outcome-focused, and the other is process-focused. One optimizes for credentials, while the other optimizes for understanding.

Right now, as AI makes it easier than ever to bypass the friction of thinking, I feel like that distinction matters more than ever. AI can’t “understand” things for me. I have to build that understanding myself by chucking myself into the deep end.

I predict that the people who will end up thriving in a world saturated with AI are the ones who’ve built deep understanding through friction. They’re the ones who can see what doesn’t make sense and can ask better questions. They’re the ones who’ll be able to recognize when an answer is wrong even if it seems plausible. That comes from having struggled through the problem yourself, not from having been handed the solution.

That’s why I read instead of actively upskilling. That’s why I spiral instead of following a set path.

Because the friction is the point.


Comments

You can comment on this blog post by replying to this post using any ActivityPub/Fediverse account!