
By Stantin Siebritz
Artificial intelligence today seems to live at two absurd extremes.
On the one hand, we have the shallow consumer spectacle: “Make me look like a movie star in Santorini,” “Generate me on a beach in Zanzibar,” “Turn my selfie into a Marvel superhero, with abs of steel.” It is playful, flashy, and sometimes genuinely useful.
But it is also the version of AI that makes society underestimate the seriousness of what is unfolding.
On the other hand sits the darker extreme: unlimited compute, vast data, minimal guardrails, and an almost religious belief that because a machine can decide, it therefore should. That is where the Skynet analogy stops being a joke and starts becoming a policy warning.
What makes this even more urgent is that AI is no longer confined to harmless novelty or office productivity. Reports on recent conflicts suggest AI is increasingly being woven into military and intelligence workflows.
The Associated Press reported that the Israeli military has used AI-enabled systems to help identify targets, while insisting that human analysts and senior officers still review decisions.
Reuters also reported claims that Anthropic’s Claude was used through Palantir-linked systems during a January operation that captured former Venezuelan president Nicolás Maduro, although Reuters said it could not independently verify the Wall Street Journal’s account.
That should sober all of us. The issue is no longer whether AI is “powerful.” The issue is whether governance is keeping up with deployment.
This is why policy needs to be finalized and adopted now — even if version one is imperfect. We can improve policy later. We can amend it, strengthen it, localize it, and modernize it. But having no policy while AI capabilities are already bleeding into warfare, surveillance, public administration, and critical decision-making is like driving without brakes because you are still debating the ideal brake-pad material.
The old IBM warning remains painfully relevant: a computer can never be held accountable, and therefore a computer must never make a management decision. That logic applies even more strongly today. No matter how advanced the model, accountability still rests with humans.
And here is the really uncomfortable part: OpenAI may now be more willing than some rivals to support broad military use. The Associated Press reported that Anthropic resisted allowing “all lawful” military uses of its models while OpenAI agreed to work with the Pentagon, a contrast that raises fresh concerns about whether frontier models could end up in far less restricted defence settings. That does not mean fully autonomous lethal systems are being deployed, but it does show the industry moving dangerously close to that line.
For Namibia, for Africa, and indeed for the world, the principle must be simple: humans must remain in the loop, regardless of the application. Whether AI is used in medicine, banking, policing, recruitment, or national security, a human must be able to question, intervene, override, and take responsibility.
Because the real danger is not a robot saying, “I’ll be back.” The real danger is humans quietly stepping back and letting the machine take the wheel.
*Stantin Siebritz is Managing Director of New Creation Solutions and a Namibian Artificial Intelligence Specialist.