When Machines Watch, Do We Look Away?

The real question is not what AI can do, but what we are doing with it.

There is something both awe-inspiring and unsettling about what we have built. Artificial intelligence now sits at the edge of every major decision, from medicine to warfare, from climate prediction to political propaganda. It is not just another tool. It is a force that reflects who we are and what we choose to become. That is what keeps me up at night. The real question is not what AI can do, but what we are doing with it. In the wrong hands, it becomes a weapon of quiet destruction. In the right hands, a means of progress. And somewhere between these hands, the people of Gaza continue to suffer in silence as machines learn to target faster than we learn to care. As is often said, AI is not a conscious agent; it imitates the data it has been fed and generates responses based on that input. It is a simulation of intelligence, not intelligence itself. And yet, it can feel like nothing short of a miracle.

AI is amazing, right? It can diagnose diseases, predict weather patterns, and even help us connect across the globe. But it’s not all sunshine and rainbows. Lately, we’ve seen it turn hostile—not because it’s evil on its own, but because of the hands guiding it. I’m worried about this, and I bet you are too. So, here’s a question that’s been bugging me: when AI messes up, who’s really to blame? The people who made it? The ones using it? Or maybe the AI itself?

Honestly, I lean toward thinking AI is just a tool—like a knife or a car. It doesn’t have a moral compass; we do. It’s our choices that matter. Hegel once wrote, “The spirit that embraces both sides, objectivity and subjectivity, now posits itself firstly in the form of subjectivity, and then it is intelligence; secondly, in the form of objectivity, and then it is will.” In other words, what begins as thought becomes action. AI is just a reflection of our own intelligence and will—made manifest in code. If it goes astray, maybe it’s because our spirit did first.

Take healthcare for a second. AI can spot cancer early and save lives, which is beautiful. But flip the coin, and it’s also used to watch people, track their every move, and strip away privacy. Same tech, different outcomes. It’s all about how we decide to wield it. And that’s where ethics come in. We’ve got to ask: are we using AI in ways that respect human dignity, or are we crossing lines we shouldn’t?

Now, AI doesn’t just sit out there doing its thing—it changes us too. On the good side, it can take boring tasks off our plates, giving us room to be creative or just breathe. But there’s a catch. When we lean on it too much, we might stop thinking for ourselves. It’s like handing over our brains to a machine, and that freaks me out a little. In war, it gets even darker. AI can make split-second calls humans might hesitate over, like picking targets in a fight. That speed can save time, sure, but it can also strip away the human pause that says, “Wait, is this right?” I worry it makes us numb, detached from the real pain our choices cause. If a machine picks who lives or dies, it’s easier to shrug off the guilt.

Speaking of lives, what’s been happening in Gaza breaks my heart. I hope you feel the weight of it too. It’s not just a “conflict”—it’s a tragedy. Reports say the Israeli military used AI tools to pick targets for airstrikes. These weren’t always soldiers; too often, they were civilians—women, kids, and old folks. Helpless people. And big tech, like Microsoft, has been tied to this. They sold AI and cloud services to Israel during the war, saying it was for finding hostages, not hurting Palestinians. But they also admit they don’t fully know how their tech was used once it left their hands. That’s a problem. If AI helped target innocent people, even indirectly, it’s a mess we can’t unsee. Profit’s fine, but not when it’s built on the bodies of the defenseless. It makes my stomach turn.

This isn’t just about Gaza—it’s about what AI could become anywhere. It’s a wake-up call. AI can cure diseases or end lives, depending on who’s steering it. When tech giants hand over powerful tools without enough oversight, they’re rolling the dice with human lives. And we’re the ones who pay. Psychologically, it’s chilling too. If soldiers or leaders rely on AI to make kill calls, they might sleep better at night—but should they? That detachment could make cruelty easier, not harder. Philosophically, it raises the question: where’s the line? If we let AI blur our moral edges, are we still human?

I don’t have all the answers, but I know we can’t just sit back. AI’s potential is too big to waste, but its risks are too real to ignore. We need rules, strong ones. Think about how we limit chemical weapons or landmines. Why not AI in war? It’s not about banning it; it’s about making sure it doesn’t turn into a monster because we weren’t paying attention. And it’s on us too. We’ve got to push for transparency, ask hard questions, and demand that profit doesn’t trump people. AI could be our greatest ally if we guide it right. But if we let it slip, Gaza won’t be the last tragedy we mourn.

So, here’s where I land: AI isn’t the bad guy—we are, if we let it be. It’s a mirror, reflecting our best and worst. I’m concerned, really concerned, about where this could go. But I also believe we can steer it toward good. Let’s not let greed or laziness write AI’s story. Let’s write it together, with care, so it lifts us up instead of tearing us down. What do you think—how can we make sure this incredible tool doesn’t become our biggest regret?

Mohammad Zain
Mohammad Zain is a graduate of English Literature and Linguistics as well as International Relations. He likes to dive deep into the complexities of life and the changes brought about by technology. He is, furthermore, quite critical of the advancements shaping the modern, market-centred world.