It started with a monkey.

A few weeks ago the internet fell in love with a young Japanese macaque named Punch who lives at Ichikawa City Zoo. Punch was rejected by his mother shortly after birth, and zookeepers gave him a stuffed orangutan toy for comfort. Videos of the tiny monkey dragging the toy around and clinging to it spread quickly across social media.

Like millions of others, I watched one video. Then another. Soon my feeds were full of Punch.

Updates about Punch. Concern about Punch. Speculation about whether he was lonely or bullied by other monkeys. Then new videos appeared suggesting he might be making friends.

Within days the internet had built an entire emotional narrative around a baby monkey living in a zoo thousands of miles away.

That was when the question crossed my mind:

How much of what I am seeing online is actually real, and how much is something the internet has assembled for me?

The truth is, that question is becoming harder to answer.

In recent years, the internet crossed a quiet but important threshold: automated systems now generate more activity online than humans do. The Imperva 2025 Bad Bot Report found that bots account for roughly half of all web traffic, and that about 37 percent of total traffic is considered malicious, including bots designed to scrape content, distribute spam, or artificially inflate engagement.

Social media conversations are also influenced by automation. Academic studies of large online discussions have found that bot accounts can produce roughly 20 percent of the posts in those conversations.

At the same time, artificial intelligence has begun producing enormous volumes of online content. Analysts estimate that nearly half of written content online now involves some level of AI generation or translation.

In other words, a growing portion of the internet is no longer written, shared, or amplified by humans.

It is not surprising that trust in online information has declined. Surveys show that most Americans say they trust what they see online less than they did just a few years ago. Many say they struggle to distinguish between content written by people and content produced by machines.

Concerns about how social platforms shape behavior are also reaching the courts. In New Mexico, a judge recently allowed a lawsuit against Meta to move forward; the suit claims the company intentionally designed features that make social media addictive for young users. The case focuses on recommendation algorithms that learn what captures attention and deliver more of it.

That is exactly how my feeds became filled with Punch the monkey.

The story itself is real. But the experience of seeing it everywhere is shaped by algorithms that decide what we see and how often we see it.

The internet was once built on the assumption that people were talking to other people. Today humans share that space with bots, algorithms, and artificial intelligence systems that can mimic human communication with surprising realism.

That means we need a new kind of digital awareness.

How to recognize when something online may not be real:

  • Look for repetition. Posts that use identical language or hashtags across multiple accounts may be automated amplification (a rough programmatic check is sketched after this list).
  • Check the account history. Real users usually have a longer timeline with varied interests. Bot accounts often appear suddenly and begin posting at high volume.
  • Watch posting patterns. Accounts posting constantly at all hours may be automated.
  • Evaluate emotional intensity. Content designed to trigger outrage or strong emotional reactions is often amplified by automated networks.
  • Verify across multiple sources. If a claim appears in only one place or spreads through identical posts, it may not be reliable.
  • Look for conversation depth. Real discussions evolve with nuance and disagreement. Automated accounts tend to repeat talking points.
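
For readers who want to make the repetition heuristic concrete, a minimal Python sketch follows. It assumes posts have already been collected as (account, text) pairs; the function names and the five-account threshold are illustrative choices, not a standard.

    # Minimal sketch of the "look for repetition" heuristic.
    # Assumes posts are already collected as (account, text) pairs;
    # gathering them from any particular platform is out of scope here.
    from collections import defaultdict
    import re

    def normalize(text):
        # Lowercase and collapse whitespace so near-identical posts match.
        return re.sub(r"\s+", " ", text.lower()).strip()

    def flag_repetition(posts, min_accounts=5):
        # Return texts posted verbatim by at least min_accounts distinct accounts.
        accounts_by_text = defaultdict(set)
        for account, text in posts:
            accounts_by_text[normalize(text)].add(account)
        return {text: accounts for text, accounts in accounts_by_text.items()
                if len(accounts) >= min_accounts}

Anything this flags is a set of accounts pushing the same wording. That is not proof of automation, but it is a strong signal that a message is being amplified rather than shared organically.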

Why This Matters in Veterinary Medicine

Veterinary medicine relies heavily on trust. Pet owners depend on accurate information when making decisions about their animals’ health. But the same forces shaping the broader internet are now influencing conversations in animal health.

Misinformation about vaccines, parasite prevention, nutrition, and emerging diseases can spread quickly through social media. AI-generated articles and automated accounts can amplify advice that is inaccurate or taken out of context.

Algorithms also tend to promote emotional content. Stories about suffering animals, miracle cures, or controversial treatments often travel faster than careful medical guidance.

For veterinary professionals, this creates a new challenge. It is no longer enough to simply provide accurate information. The profession must also help clients navigate an online environment where not everything they read was written by a person or reviewed by a medical expert.

The internet remains one of the most powerful tools ever created for sharing knowledge and building community. It allows people to learn about animal care, connect with veterinary experts, and yes, care deeply about a baby monkey living halfway around the world.

But the landscape has changed.

Today the information flowing across our screens comes from humans, algorithms, bots, and artificial intelligence systems all interacting at once. Navigating that environment requires curiosity, skepticism, and a willingness to pause before assuming that what we see online reflects reality.

And if you found yourself wondering whether this article was written by a person or a machine, that reaction captures the challenge the modern internet has created.

In the interest of transparency, this article about the difficulty of knowing what is real online was written with the assistance of artificial intelligence.