Burnrazor

Léandre Desmaretz

2026-04-02

Me, ChatGPT, and the others

A personal essay about what happened when AI moved from science fiction into daily life, and why it became the ultimate tool for a generalist who learns by doing.

ai / chatgpt / codex / learning

The first time ChatGPT answered one of my prompts, I think I shouted.

Not because the answer was perfect. It probably was not. But because it was there, appearing in front of me in real time. Not written by another human. Not retrieved from some crude if/else tree. Not copied from a static block somewhere else. Something was actually generating language in front of me.

I remember the feeling more than the exact prompt. My attention left the answer almost immediately and jumped to the consequences. If this was real, then a lot of doors I had assumed were closed were not actually closed. Or at least, they would not stay closed for long.

That was the real shock.

Before it was real

AI did not enter my life as a trend.

Long before ChatGPT, it already existed in the background of my imagination. I grew up obsessed with science fiction, computing, and technology. I remember visiting a home expo in the 2000s and being completely fascinated by the idea of the smart home, the one people kept promising was just around the corner. I was also the kind of kid who spent a lot of time exploring the web, tinkering, downloading too much, clicking too far, trying to understand how things worked.

Science fiction was not just entertainment to me. It was a way of rehearsing the future. Star Wars, Blade Runner, Barjavel, games full of machines, systems, worlds, and artificial beings. The idea of intelligence that was not exactly human had always been there. Jarvis. HAL. That whole register. I had always felt that something like this belonged to the same category as space travel or extreme longevity: possible, maybe inevitable, but probably not concrete enough to matter in my own life.

That is why the arrival of GPT felt so strange.

It was not just that it was useful. It was that something I had mentally stored in the category of fiction had suddenly crossed over into daily life.

The first door it opened

At first, like a lot of people, I tested obvious things.

Writing. Summaries. Idea exploration.

But the real rupture happened when I asked it for code.

I came from a no-code background. Zapier, automation, systems, the usual builder instinct of trying to get things done with the tools available. I had touched technical worlds before, but I was never a real programmer in the traditional sense. I went through the Epitech track briefly and failed it in spectacular fashion. I never built the discipline, the rigor, or the formal relationship to code that would have made me say: yes, I know how to code.

And honestly, for a long time, that felt like a locked door.

Then GPT gave me a piece of code. I tested it. It worked.

That moment mattered more than most people would think. Not because the snippet itself was extraordinary, but because it changed the geometry of what was accessible to me. I did not suddenly become a software engineer. That is not the point. The point is that the distance between intention and implementation started to collapse.

I could ask. I could test. I could iterate. I could understand while doing.

For a generalist who learns by doing, AI became the most powerful and malleable tool I had ever encountered.

GPT-4 was the first serious proof

GPT-3.5 was the shock.

GPT-4 was the first time it stopped feeling like a curiosity and started feeling like real leverage.

That was the model that made me take a genuine step forward. It let me push ideas further, structure them better, and try things that were simply too fragile or too vague with 3.5. One of the first moments where that became undeniable for me was at Shadow, where I built a workflow that combined public signals, scraped discussions, and internal satisfaction data to surface recurring problems and behavioral patterns.

I do not need to go into sensitive detail. The important point is simpler than that: it worked.

Not perfectly. Not magically. But enough to prove something fundamental. Enough to prove that this was not just an impressive chatbot, and that intelligence in this form could already become part of an operating system for work.

That period also came with a strange social feeling. I was talking about AI all the time, in the expressive and slightly evangelist way I tend to have when I truly believe in something. Some people understood. Some saw it early. But many around me, especially technical people, thought it was overhyped, shallow, limited, or about to plateau. They looked at the current version and assumed the trajectory would stay roughly where it was.

I did not.

And I do not spend my time saying "I was right," even if sometimes I am tempted to. Reality has said it loudly enough already. Most people use these tools now. The argument is over. But I remember very clearly what it felt like to stand in that gap between what was already visible to me and what still looked absurd to others.

That gap matters. It is part of this story.

When it entered daily life

The interesting thing is that AI did not stay inside work.

It entered everything.

One of the clearest examples for me was cooking. Cooking is actually a very good intelligence test. Not in an academic sense. In a human sense. It requires synthesis, sequencing, selection, adjustment, and taste without being reducible to one formula. It is not enough to know ingredients. You need to understand what to do with them, in what order, in what proportions, and with what logic.

At some point, my in-laws were talking about a friend of theirs, someone deeply passionate about cooking, someone who supposedly made the best asparagus risotto in the world. I asked GPT for a recipe, then kept refining the prompt, improving the response, adjusting the process, and asking better questions.

It was my first risotto ever.

And it was one of the best risottos of my life.

I had just beaten a sixty-year-old family reference point by following a process generated by a machine with no tongue, no taste buds, no body, no dinner memories, no grandmother, no instinct in the human sense. That stayed with me.

Not because it proved that AI had taste. It does not. But because it proved that it could synthesize enough structure, enough pattern, enough procedural intelligence to help me produce something real and excellent in a domain that is subtle, embodied, and usually treated as deeply human.

That changed something in my head.

From there, AI became less of a tool I consulted and more of a recurring presence in how I operated. Projects, ideas, meals, learning, planning, questions, systems, random curiosities. It became the easiest way I had ever found to extend my reach without pretending to know everything in advance.

Me, ChatGPT, and the others

The title is not accidental.

This piece is not only about me and ChatGPT. It is also about everything around that relationship.

The others are the friends, colleagues, and skeptics who thought I was exaggerating. The people who saw me as the crazy one talking too early, too loudly, too intensely. The ones who looked at the first versions and saw a gimmick. The ones who thought the limits were the story instead of the trajectory.

But the others are not only human.

They are also the other models, the other systems, the other forms this intelligence started taking. I tested Claude at different times, and I can clearly see why people love parts of it. Some of the integrations are impressive. Some features ship fast. There is real quality there. But my relationship to a model is practical before it is ideological. If the limits make the tool disappear fifteen minutes into actual work, it stops being a serious companion for my use case. That is not a moral judgment. It is just operational truth.

This is why I do not experience AI as a brand story. I experience it as a capability story.

The others, in the end, are everyone and everything that helps reveal what this relationship really is by contrast. The skeptics. The believers. The competing models. The colleagues who changed their mind. The friends who thought I was insane until they started using the same systems daily. The social world around a tool often tells you as much as the tool itself.

The agentic turn

If the first rupture was generated language, and the second rupture was usable output, then the third was agency.

There is a real difference between pasting context into a chat window and working with agentic systems that can hold a task, operate over time, inspect files, revise code, move across a project, and come back with something structured.

The integration of Codex into VS Code changed the game for me. The CLI did too. The fact that these systems can now stay on a task long enough to actually build, inspect, revise, and coordinate changes means we are no longer just talking about smart responses. We are talking about operational extension.

That matters because my frustration with coding was never with software itself. It was with the size of the gate in front of it. To reach the level of autonomy I wanted, I thought I would have to spend years acquiring the full stack of technical rigor needed to cross every threshold manually. That path always felt enormous to me. Not impossible, but enormous.

What these systems changed is not the existence of skill. Skill still matters. Judgment still matters. Structure still matters. Security still matters. But the path between vision and implementation is much shorter now, and for someone like me that changes the entire shape of what is possible.

I can work on servers, websites, systems, automation, structure, product logic, and operating layers in a way that used to feel permanently out of reach.

That is not a small shift.

What it changed in me

The biggest change AI made in my life is not that it made me faster.

It is that it made me more able.

I learn by doing. That has always been true. I can learn quickly, but only if I can touch the thing, test the thing, build the thing, break the thing, and remake the thing. Pure abstraction has never been enough for me. I need contact with reality. I need feedback. I need action.

AI fits that pattern almost unnaturally well.

It lets me enter a field by acting inside it. It lets me learn while producing. It lets me move from curiosity to execution faster than any teacher, textbook, tutorial chain, or conventional learning path I have ever known. That is why it spilled into other parts of life too: food, health, sport, budgeting, finance. Not because I suddenly needed advice on everything, but because I had access to a form of adaptive intelligence that could stay in dialogue with my actual questions.

That matters even more because I have a visceral relationship to evidence. I like things when they are empirical, tested, and stress-checked by meta-analyses rather than vibes. I am deeply anti-pseudoscience. One of the most practical things GPT gave me was the ability to read across scientific literature much faster, compare claims, compress studies, and extract something closer to clean signal.

That is especially powerful in health. It is also, to be honest, slightly dangerous for someone with a hypochondriac streak. Give me a system that can condense the literature, compare interventions, explain trade-offs, and turn scattered papers into something usable and personal in fifteen minutes, and I will absolutely use it to interrogate every symptom or optimization angle. But even there, the deeper effect is not panic. It is reassurance through structure. It is the feeling that I can get to a clearer, less bullshit understanding of my own body and choices much faster than before.

For a generalist, that is transformative.

For a generalist who learns by doing, it is almost unfair.

At this point, AI is not some external topic in my life. It is part of how I think about possibility. It is part of how I structure work. It is part of how I approach learning. In a very real sense, it has become part of my identity.

What I believe now

I currently see AI as the most powerful and malleable tool I have ever encountered.

Not the cleanest. Not the most stable. Not the final form. But the most powerful and the most malleable, yes.

I also think it is one of the most extraordinary learning tools humanity has ever produced. I have been saying for years now that my future children will probably have the best teacher imaginable, and that teacher may not be human. Not because humans are obsolete. Not because human intelligence no longer matters. But because a system that knows your strengths, your weaknesses, your pace, your blind spots, and your way of understanding can become an astonishing guide if we build and use it well.

What I still want, though, is more initiative. I do not just want an intelligence that waits behind a prompt box. I want one that knows me well enough to surface the right reminder, ask the right question, remember what matters that day, notice patterns, and adapt in the background. ChatGPT's Pulse is an interesting glimpse. OpenClaw gets closer in spirit, but it is still too complex and too niche to become a product for most people. The shape I am waiting for is easier to describe than to build: I want my own Jarvis.

That idea does not make me feel less attached to humans.

It makes me realize how much access has changed.

This is why I cannot treat AI as a simple software category, or as a productivity story, or as an overhyped trend cycle. It has already become too intimate, too practical, too generative of real capability for that. It has changed how I work, how I learn, what I attempt, what I consider reachable, and how I imagine the next decade.

And the strange part is that I still feel like we are at the beginning.

That is where the vertigo comes from.

Not from fear alone. Not from admiration alone. From the realization that something I had once filed under science fiction is now sitting inside ordinary life, already opening doors, already changing ambition, already reshaping what one person can do.

This is only the start.