A few weeks ago, the “devil” made me do something I never knew I could do: I started building an internal HRMS for my team. Not because we couldn’t afford one. That part is important to highlight. But then, being a legendary cheapskate, maybe I couldn’t? You won’t know 🤐
We had been using Freshteam for a while, and like clockwork, Freshworks did what Google often does when they’re bored: announced they wouldn’t be supporting Freshteam anymore. So, we did what most sensible teams would do in that situation: find a safe harbor. We decided to move to Zoho.
I was sitting around quietly, minding my own business, when my CTO casually mentioned that with the Pro version of Codex from OpenAI, we could pretty much build anything we wanted. That statement stayed with me longer than it should have; I ruminated over it like a hungry cow.
Because once you actually believe that, even for a second, it starts to make your existing decisions look a bit lazy. And then my HR comes in, not particularly patient about these things, and says we should just build our own system. I dragged my feet at first, mostly because building internal tools always sounds easier in theory than it plays out in practice. We eventually did it anyway.
By April 1, we’re launching our own internal HRMS. Not a scrappy prototype, not a “good enough for now” system, but something that is genuinely better than what we were paying for. More aligned with how we work, more polished in the areas that actually matter to us, and without all the unnecessary bulk that comes with off-the-shelf tools trying to serve everyone at once. We are not commercializing it. At best, I might give it to a few friends for free and leave it at that.
But then, I’m broke; maybe if someone gives me some benjamins, I could sell it to them, alongside a few of my annoying employees as extras 🤣.
Jokes apart, at a team lead meeting, someone asked a question that has been sitting at the back of my mind ever since: if we could build something as good as Freshteam, what exactly stops someone else from building something as good as Lendsqr?
That question is uncomfortable in a very precise way – like when your least favorite cousin’s annoying son asks if he could stay over at your place for the summer. Because it forces you to confront something most people would rather avoid. The barrier to entry is thinning out in real time. And if you’re paying attention, you can feel it happening. But most people never pay attention, do they?
Everyone now has the same tools
When AI started becoming genuinely useful for writing and code, I was excited in the way most people were. It felt like my advantage had suddenly multiplied. Things that used to take days could now be done in hours, sometimes minutes.
One of my engineers even told me a story about a team that met a customer (not Lendsqr) and delivered a feature the customer wanted right on the call. It was crazy!
But that excitement didn’t last in its pure form. At some point, a more annoying thought crept in. The same capability I’m enjoying is not exclusive to me. It is available to my customers and even more dangerously, my competitors. It is available to people who want to compete with me but haven’t even started yet. It is available to customers who may decide one day that they no longer need us.
So the question becomes obvious. If everyone has access to the same tools, what actually separates outcomes? It is tempting to assume that equal access leads to equal results. That logic feels smart, but it does not survive even basic scrutiny. Baby dinosaurs like us from the 80s and 90s have seen this play out before when the internet first came to be.
We have always had access
Take writing as an example. Someone like J.K. Rowling did not emerge in a world where storytelling tools were scarce. Writing materials have been widely available for a long time. Today, it is even more extreme. Google Docs is free for anyone who can breathe. That is over two billion people with access to a writing tool that is more powerful than what many professionals used less than a decade ago.
Yet the number of people who actually sit down, stay with an idea, and turn it into a complete, coherent novel remains very small. And the few who do are writing such crap you could suffer from a bad case of nausea. It is not because people lack ideas; ideas are cheap and widely distributed. It is also not because people lack tools; the tools are sitting in their pockets.
The gap comes from something far less glamorous. Most people do not have the discipline to continue once the initial excitement fades. The largest middle stretch of any meaningful project is usually boring, frustrating, and slow. That is where most attempts quietly die, and I guess, if God’s real, he designed it that way.
You see the same pattern everywhere else. People start YouTube channels, record a few videos, share them with friends, and then disappear. Not because the platform stopped working or because the camera failed them. They simply lost the will to continue when it stopped being immediately rewarding.
AI does not fix that problem. If anything, it quite frankly exposes it more clearly.
So what actually matters now
After sitting with all of this for a while, I keep arriving at the same conclusion, and it is one that becomes harder to ignore the more you pay attention to how people actually work. AI tends to amplify people who already move with intent, and in practice, that amplification shows up unevenly because not everyone brings the same level of intent into the process.
From what I have observed, there are a few traits that consistently show up in people who are able to extract real value from these tools, and they are not particularly new or exotic. They have always mattered, but AI has a way of making their absence more obvious.
Agency: the part no one can automate for you
This is the most visible factor, and somehow still the one people sidestep the most. Nothing really progresses without someone deciding to take action and following through on it, and that reality has stayed constant even as the tools around us have improved. What has changed is how little friction now exists between intention and execution, which makes inaction stand out more sharply than it used to.
It is difficult to ignore how often people still operate below even this new baseline. You see CVs that are poorly structured and clearly rushed, even though it takes very little effort to clean them up with the tools available today. You remind someone to submit something important and they still find a way to delay it without any real constraint forcing that delay.
We are operating in an environment where rewriting, refining, and structuring output can happen almost instantly, yet that small initial step still does not happen as often as it should. At that point, the constraint reveals itself quite clearly as a matter of willingness rather than capability.
AI responds to direction, and without that initial push, there is nothing for it to build on. The system does not originate effort on your behalf, so whatever momentum exists still has to come from you.
Taste: knowing when something is actually good
This one is less talked about, but it shows up everywhere once you start paying attention. You don’t need to be wildly creative to have taste, you just need to carry a clear internal standard that pushes you to look at something and say this is not good enough yet, this can still be better. That simple insistence on quality is where a lot of the difference comes from.
You’ll be very surprised how many people don’t have taste. I’ve seen wealthy people, especially in Nigeria, who can afford anything and still end up building and living in complete rubbish. The quality of what comes out at the end does not match the resources that went in, and you see the same thing with clothes where people spend good money with tailors and still end up with something poorly sewn.
So even when the materials are there and the money is there, the outcome still falls short because nobody is really steering it toward something better.
AI behaves in a similar way. It will give you something that works and something that looks acceptable, but if you don’t push it further with a clear sense of what “good” looks like, it will settle there. And when it settles there, you end up with something that feels common, which means it does not stand out in any meaningful way.
By the way, taste isn’t about perfection. Far from it; it’s putting in the extra effort, within your immediate control, to release things that are as good as you could push them, NOW!
Grit: staying long enough for it to get good
There is also the matter of staying with something long enough for it to mature into what you actually had in mind.
Very few outputs land exactly where you want them on the first attempt, especially when you are working with something as iterative as AI. You start with a prompt, get a response that is close but incomplete, and then begin the process of refining, adjusting, and pushing it further. That loop is where most of the real work happens, and it demands a level of patience that many people underestimate.
When that patience is missing, the process gets cut short and the output remains shallow. When it is present, you begin to see the compounding effect of small improvements, each one bringing the result closer to something that feels deliberate and well-formed.
The system itself does not carry that process forward independently. It does not return to your work unprompted or continue refining in the background. The continuity has to come from you, which means the outcome is tightly linked to how long you are willing to stay engaged.
Curiosity: the engine behind improvement
The last piece, which often sits underneath everything else, is curiosity. People who get the most out of AI tend to engage with it in a more exploratory way. They are not just issuing instructions and moving on; they are probing, questioning, and trying to understand why something works the way it does. They push on responses, test variations, and look for ways to improve what they are seeing.
That orientation changes how the tool gets used. Instead of settling for the first acceptable output, they treat it as a starting point and keep working it until it aligns more closely with what they had in mind.
Without that curiosity, usage tends to stay at a surface level, where outputs are generated quickly but rarely developed further. Over time, that produces work that blends into everything else, because it follows the same obvious paths without any real effort to go beyond them. If you never push the envelope, how do you know how far you could go or what you could discover?
The nasty and unfriendly conclusion, and where you and I land on it
AI is going to make the top 1% dramatically better, and the distance between them and everyone else will grow in a way that becomes hard to ignore.
That outcome follows the same pattern we’ve always seen. The tools are now widely available, but agency, taste, grit, and curiosity are not evenly distributed, and those are the things that actually determine what gets built and how far it goes. Some new people will break into that top 1% because they know how to use these tools properly, and some of the people sitting comfortably at the top will fall out because they were there due to structural advantages rather than genuine excellence. The composition will change, but the gap itself will remain.
For example, just this morning, one of my children, a world-class security expert, told me he vibe-coded a Drata/Vanta replacement, got on a call with a CISO, and sold it for $20k 🤯. If I could net $20k every weekend, I’d turn Monday through Friday into weekend days as well.
Just a month ago, my good friend Ngozi Dozie, the co-founder of Carbon, chronicled what he did with just a $20 Claude Code subscription. He was addicted, but in a positive way – he found freedom and tasted the forbidden fruit.
But for me, personally, here’s the sober truth and this is less of an abstract observation and more of a direct challenge I’ve placed in front of myself. If the tools are this good, and the access is this open, and I still cannot produce something that is genuinely world-class, then I have to be honest about what that means. It means the problem was never the tools and it points back to whether I actually have the agency to do the work, the taste to know when it is good, the grit to stay with it, and the curiosity to keep pushing it further.
I intend to find out, and I’m choosing to believe the answer is yes. And I think that choice, made deliberately, held onto stubbornly, and acted on consistently, is exactly what separates the people who will thrive in what’s coming from the people who will spend the next decade wondering why AI didn’t do more for them. May that never be my case.