Is an Algorithm Raising Our Kids?
Thirty years ago, if a parent believed the moon landing was faked, their influence on a child’s worldview was constrained. It might have come up in dinner conversation, in a library book, or in a late-night TV or radio segment. That influence, while potentially significant, was limited in reach and repetition. Today, the same belief, expressed once, can be algorithmically amplified across a child’s entire online experience.
This observation raises important questions:
- What mechanisms are at play when online platforms personalize content for children?
- How do algorithms respond when a child watches a conspiracy-related video?
- Is there evidence to support the idea that emotionally charged or conspiratorial content is more likely to be recommended next?
It seems that once a child engages with this kind of content, the platform’s recommendation engine reinforces that interest, feeding the child a constant stream of similar videos, articles, and suggestions. This appears to create a feedback loop. But is that what’s actually happening under the hood? What does current research in algorithmic behavior and engagement-driven design tell us?
I’m seeing this firsthand, and it’s concerning. In split households especially, when one parent sends a child a video link filled with conspiracy-laden content (flat Earth theories, anti-vaccine rhetoric, or deep-state paranoia), the consequences can ripple far beyond that one interaction. Whether the child has an account on that platform or is simply viewing the content in a browser, the algorithm can register that engagement. Does it treat that click, even from a shared link, as a signal of interest? If so, is the platform then sculpting future content around that data point? Anecdotally, it seems so.

Once that doorway is opened, even briefly, the child may be bombarded with increasingly similar content. Platforms like YouTube, TikTok, and Facebook appear optimized to maximize engagement, and emotionally charged, misleading content often wins the algorithm’s attention economy. What starts as a shared link becomes an echo chamber, one that is no longer guided by a parent’s hand but by opaque digital processes whose reach and long-term impacts few of us understand.
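To make the hypothesized loop concrete, here is a minimal sketch in Python. It is not any platform’s actual recommender; the catalog, the "emotional charge" scores, the click model, and the interest-update rule are all assumptions invented for illustration. It only shows, in miniature, how ranking by engagement plus a click-driven interest update could let a single seeded click tilt an entire feed.

```python
# Toy sketch of an engagement-driven recommendation loop.
# Purely illustrative assumptions throughout; this is NOT any platform's real system.

import random

# Hypothetical catalog: each item has a topic and an "emotional charge" score.
# Assumption: higher charge makes a click more likely.
CATALOG = [
    {"title": "How rockets reach orbit",        "topic": "science",    "charge": 0.2},
    {"title": "Moon landing: what they hid",    "topic": "conspiracy", "charge": 0.9},
    {"title": "Baking sourdough at home",       "topic": "hobby",      "charge": 0.1},
    {"title": "Doctors don't want you to know", "topic": "conspiracy", "charge": 0.8},
    {"title": "Intro to photosynthesis",        "topic": "science",    "charge": 0.2},
]

def recommend(interest, catalog):
    """Pick the item with the highest (assumed interest in topic) x (emotional charge)."""
    return max(catalog, key=lambda item: interest.get(item["topic"], 0.1) * item["charge"])

def simulate(clicked_link_topic, steps=10, seed=0):
    random.seed(seed)
    # One click on a shared link seeds the interest profile (the "doorway" described above).
    interest = {clicked_link_topic: 1.0}
    feed = []
    for _ in range(steps):
        item = recommend(interest, CATALOG)
        feed.append(item["topic"])
        # Engagement model (assumption): probability of a click grows with emotional charge.
        if random.random() < item["charge"]:
            # Feedback step: a click raises that topic's weight,
            # which makes similar items more likely to be recommended next.
            interest[item["topic"]] = interest.get(item["topic"], 0.1) + 0.5
    return feed

print(simulate("conspiracy"))
# Under these toy assumptions, the feed converges on the high-charge
# "conspiracy" items: the echo-chamber dynamic in miniature.
```

Real recommenders are vastly more complex and draw on far richer signals than a single click, but the structural concern is the same: if engagement feeds ranking and ranking feeds engagement, one interaction can reshape what a child sees next.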
It feels like a new kind of problem, one that our institutions may not yet be equipped to address. Thirty years ago, a child might have passively accepted or questioned a parent’s claim. Today, that child could be plunged into an immersive digital environment where misinformation isn’t just present; it is persistent and increasingly persuasive. This leads me to wonder:
- How does the design of these platforms influence the development of critical thinking in children?
- Are we seeing measurable changes in children’s trust in experts and institutions due to prolonged exposure to algorithmically curated content?
- What role does cognitive development play in their ability to resist or question digital content?
Even when I attempt to offer counter-information, my voice is just one among many. And increasingly, it feels like I’m competing not with ideas, but with a highly sophisticated attention optimization machine.
So, how should we respond?
- Should we view this as a public health issue? If exposure to misinformation correlates with harmful behavior or beliefs, does it warrant the same kind of societal attention we give to smoking or seatbelt use? What frameworks already exist that might help us evaluate and mitigate these harms?
- How do we teach digital literacy in a way that equips children to navigate these environments? Is there evidence that early education in critical thinking, media literacy, and algorithmic awareness improves outcomes? Are there age-appropriate methods being studied or tested?
- Can regulation meaningfully shift the incentives that drive engagement-based platforms? Many have called for ethical AI design and increased transparency, but is there empirical support that such measures work? What models exist globally that we might learn from?
For those of us in split households, the challenge is especially complicated. When co-parents are not aligned in their understanding of media influence, it becomes a one-sided struggle. Are there best practices or community-based strategies for addressing this within families?
Ultimately, these questions aren’t rhetorical. They demand answers from researchers, psychologists, educators, ethicists, and technologists. The internet is not a neutral space, and algorithms are not passive bystanders. They are active agents in shaping our children’s perceptions.
I’m not claiming to have the answers. But I believe we need to ask the right questions.