TL;DR: AI is genuinely helpful: it speeds up boring tasks, shrinks the distance between ideas and finished work, and gives more people access to information and tools that used to be out of reach. At the same time, it makes many of us nervous, because the same power that helps us can also replace parts of our jobs, blur what is “ours,” and pressure companies to do more with fewer people. The healthiest way forward is to treat AI like a strong tool, not a replacement for your mind: use it to extend your abilities, keep your own judgment in charge, and accept that it is normal to feel both excited and uneasy about it at the same time.


Since it is Valentine’s week, this is a good moment to sit with the contradiction instead of trying to resolve it. This is about the real, messy relationship many of us have with AI, not a love letter and not a warning poster.

The “love” side: AI quietly making everyday life easier

If you strip away the headlines, a lot of AI’s impact is actually pretty gentle and boring, in a good way.

  • It shrinks the distance between idea and execution.
    You think, “I should send this email,” and instead of staring at a blank screen, you get a rough draft in a few seconds. You still edit it, but the hard part is no longer getting started.

  • It makes you feel less alone with the blank page.
    Writers, developers, designers, and students all know that moment when your brain simply refuses to move. A model throwing out a messy first pass or a few options can be enough to get you unstuck.

  • It turns tedious tasks into background noise.
    Transcribing a meeting, cleaning a dataset, rewriting the same boilerplate code, summarizing long documents. These things still happen, but they move from “I need to block time and suffer” to “I’ll set this up and review the result.”

  • It gives people without formal training a kind of borrowed expertise.
    Someone who never studied finance can ask for a simple breakdown of Roth vs traditional retirement accounts. Someone who never studied computer science can get help debugging a script or understanding an error message. Access to explanations is more evenly spread.

This side feels easy to love. It is like suddenly getting a very fast, slightly chaotic assistant who never gets tired of your questions. It is especially powerful for people who used to be shut out by jargon, gatekeeping, or geography.

The “hate” side: the quiet fear behind all the convenience

Underneath the convenience, there is a harder truth: the same things that make AI helpful also make it disruptive.

  • If drafting, summarizing, and basic coding get cheaper, who still gets paid to do those parts?

  • If one person with good tools can do the work of three, what happens to the other two?

  • If a model can imitate style, brand voice, or design patterns, how do we protect real creators and their livelihoods?

These are not abstract questions for “the future.” People already feel it:

  • Junior roles that used to be an entry point are at risk of being hollowed out or reduced.

  • Teams are quietly asked to “do more with less” because automation exists now.

  • Whole categories of work feel less secure, even if they have not changed yet, simply because “AI” sits in the air like a threat.

There is also a more personal sting: identity.

If you have spent years building a craft, it can hurt to watch a tool produce something decent in seconds. Even if you know, rationally, that it is not the same as your best work, it pokes at questions like:

  • “What am I actually good at?”

  • “What part of my skill is uniquely mine, not just pattern matching?”

  • “If a model can do this, what do I bring to the table?”

Love makes your life easier. Hate whispers that you are replaceable. Holding both at once is uncomfortable.

The middle: using AI without letting it use you

Between “AI will save us” and “AI will destroy everything” is a very human middle ground. In this space, the question is less “Is AI good or bad?” and more “How do I relate to it in a way that feels honest?”

A few ideas that help:

1. Treat AI as power tools, not as your identity

If a carpenter starts using power tools, they are still a carpenter. The skill moves from “I can swing a hammer perfectly” toward “I can design, plan, and execute a build that meets real human needs.”

In knowledge work, AI is starting to feel like that:

  • The value shifts from “I can type fast and know lots of trivia” to “I can ask the right questions, judge quality, and design systems humans actually want.”

  • Speed and surface‑level outputs matter less than taste, judgment, and the ability to understand messy constraints.

You are not “cheating” if you use tools. You are cheating yourself if you let your skill shrink to “I press a button and accept whatever comes out.”

2. See where your humanity is still very hard to automate

There are things models can mimic, but not really own:

  • Feeling the emotional temperature of a room or a relationship.

  • Sensing what someone is too ashamed or scared to say directly.

  • Knowing when the “right” answer is not what this particular person needs right now.

  • Carrying responsibility for decisions that affect real lives, not just outputs.

In finance, this might look like helping someone face their anxiety about money, not just optimizing their portfolio. In AI engineering, it might be saying “no” to a feature that would over‑collect data or harm trust, even if it is technically doable.

These are the parts of work that still feel very human, and they might become more valuable as AI takes over the mechanical parts.

3. Use it to stretch yourself, not shrink yourself

There is a version of using AI that makes you smaller: you let it think for you, write for you, decide for you, until you start to forget what your own voice sounds like.

There is another version that makes you bigger:

  • You let it propose options you would never have considered.

  • You use it to simulate different paths and see tradeoffs.

  • You ask it to argue against your idea so you can refine your thinking.

  • You use it as a mirror for your drafts, then decide what you actually believe.

The difference is subtle but important. In one direction, you are outsourcing your mind. In the other, you are sparring with a tool to sharpen your own thinking.

The job question: some honest, uncomfortable truths

We do not do ourselves any favors by pretending AI will not affect jobs. It already is. But the impact is not uniform.

Some realities that can coexist:

  • Many jobs will change rather than vanish.
    Tasks inside those jobs will be reshuffled. Some pieces will be automated, some will expand, some will appear out of nowhere.

  • New roles will appear that are hard to picture now.
    When the internet started, “social media manager” and “DevOps engineer” were not common titles. Agent orchestration, AI safety, oversight, data curation, and human‑in‑the‑loop roles are early hints of what is coming.

  • The transition will be uneven and often unfair.
    Some people will have time, support, and resources to adapt. Others will be hit harder, especially in places where safety nets are thin.

It is okay to feel angry or anxious about this. You can appreciate how much easier your own work gets with AI and still care deeply about people whose work is getting squeezed.

The love and hate are not mutually exclusive. They are both valid responses to a technology that magnifies what was already there in our economy: efficiency pressures, inequality, and the tendency to squeeze the middle.

A small, personal way forward

If there is any practical “Valentine’s week” advice for this love‑hate relationship, it might be this:

  • Get curious, but stay grounded.
    Learn what AI can do in your field. Play with it. Use it. Do not turn your back on it out of fear. But keep your feet in real life: talk to people, notice where things break, pay attention to how it actually feels to use these tools day after day.

  • Invest in your judgment and taste.
    Models will get faster and cheaper. What will stay rare is someone who can say “This is good enough,” “This is wrong,” “This is missing the point,” and “Here is how we make this more humane.”

  • Protect your own voice.
    Let AI help you draft, but edit in a way that sounds like you. Let it write scaffolding, but decide what the final story is. Machines can generate content. Only you can decide what you actually want to say.

  • Hold space for mixed feelings.
    You are allowed to love how much time AI saves you and still feel sad or worried about what it does to certain kinds of work. You are allowed to be excited and cautious at the same time.

Love that ignores the risks is shallow. Hate that ignores the benefits is incomplete. A real relationship with AI in 2026 is somewhere in between: using it fully, questioning it often, and remembering that the point of all this is not to worship the tool, but to build a life and a world we are not embarrassed to hand to the next generation.