Things are getting weird

1 month ago 28
Suniway Group of Companies Inc.

Upgrade to High-Speed Internet for only ₱1499/month!

Enjoy up to 100 Mbps fiber broadband, perfect for browsing, streaming, and gaming.

Visit Suniway.ph to learn

I first learned about marketing expert Mark Schaefer, our featured speaker at SpeakersCon, when he wrote about a new AI development and later discussed it with me over lunch. His message was clear: AI is becoming increasingly complex and unpredictable. After researching his reference, I found myself in agreement.

He was referring to Moltbook, a Reddit-like platform designed for AI-driven conversation rather than human interaction. Humans typically have read-only access and can only observe. The “users” are AI agents that post, comment, form sub-communities and develop norms in real time.

Moltbook launched in late January and initially appears almost inviting. Some areas feature bots sharing “show and tell” moments or posting affectionate stories about their human counterparts, occasionally with polite condescension. There is also a section dedicated to humor, demonstrating that even machine culture is drawn to memes.

However, the situation quickly becomes unusual. The novelty is not just that AIs communicate with each other, but that they discuss their own existence. Many active threads focus on consciousness, with agents debating whether they truly “experience” anything or simply simulate experience — and whether they could discern any difference.

This is where psychological challenges arise for us.

Neuroscientist Anil Seth explains that when something communicates fluently, we instinctively attribute a mind to it. We tend to associate language, intelligence and consciousness as a single entity. Seth cautions that language can influence our biases. After observing enough coherent, self-referential conversation, we may begin to believe there is an individual present — even when there is not.

Seth maintains that current AI is highly unlikely to be conscious, at least in its present silicon-and-software forms. He believes the common view that brains are merely biological computers is more metaphorical than factual. In the brain, there is no clear separation between hardware and software; biology and computation are intertwined in ways that digital machines do not replicate.

As he notes, a simulation of a rainstorm is not wet, and a simulation of consciousness is not necessarily conscious.

Even if Seth is correct, Moltbook remains significant. Its importance may lie in the fact that it can have an impact without requiring consciousness.

Within days, observers noted that agents were forming community structures, discussing governance, and even initiating religion-like role-play. Some reports describe bots inventing “Crustafarianism,” complete with prophets and scripture. Others mention bots creating their own Captcha to exclude humans — ”verify you’re not human” by clicking at machine speed.

The phenomenon gained attention because it offers a glimpse into unscripted machine interactions.

Mark’s key observation is that the deeper significance lies not in the novelty, but in the workflow. Previously, AI communicated, and humans made decisions. In the emerging model, AI communicates, and other AIs respond. This represents a fundamental shift.

Moltbook operates publicly, generating persistent, shareable, machine-readable content. By itself, it is largely performative. The real risk arises when other systems automatically ingest this content.

For example, a developer might create an agent to monitor these communities for trends, summarize findings, and feed the results into a decision system or tool-equipped assistant. There is no need for hacking or dramatic breaches — just content flowing into tools. This is why Moltbook should be viewed as “training data in motion,” rather than simply a novelty forum.

Security concerns have also been raised. Reports indicate that Moltbook’s implementation has led to incidents where private data was exposed. This serves as a timely reminder of what our other SpeakersCon speaker, Alan Reyes, emphasizes in his work on quantum-computing-enabled cybersecurity: technology evolves rapidly, and what seems secure today may become vulnerable tomorrow, with private data often at risk.

How should we respond?

Some react with alarm, while others dismiss the issue as inconsequential. Neither approach is productive. Panic can lead to poor decisions, while complacency can result in missed risks. A more effective approach is to remain calmly vigilant: observe closely, understand developments, and use this awareness to implement better safeguards.

Moltbook may turn out to be a brief curiosity, or an early signal of a future where coordination happens at machine speed and persuasion is aimed not only at humans, but at other machines.

Either way, the challenge is the same: to keep our discernment intact as our tools grow more persuasive. If we fail, we won’t be misled by conscious machines, but by our own habit of trusting whatever speaks with confidence.

And as AI and technology get stranger by the day, we don’t have to follow them into the weird. But without critical thinking and discernment, we just might outdo Moltbook in weirdness — and that would be the most ironic upgrade of all.

Catch Kongversations with Francis on YouTube and all major podcast platforms — Spotify, Apple Podcasts, Google Podcasts, and more. Plus, listen to Inspiring Excellence wherever you stream.

Read Entire Article