
Why the Average Person Finally Understands the Potential Harms Caused by Social Media Platforms

For years, warnings about social media sounded abstract to a lot of adults. Researchers, parents, and mental-health experts kept saying these platforms could damage young people, but many people who were not living inside Instagram, YouTube, TikTok, or Snapchat every day simply could not feel the force of the argument. Then the pandemic arrived. Daily life moved onto screens. Work moved onto screens. School moved onto screens. Isolation moved onto screens.


In its five-year look at how COVID changed Americans’ relationship with technology, Pew found that 48% of Americans say the pandemic changed how they use technology, and by April 2021, 58% said the internet had been essential to them during the outbreak. Once millions of adults were pulled deeper into digital life themselves, the old warnings stopped sounding theoretical. They started sounding familiar.



What the Los Angeles record shows beyond the headline verdict


Judge Carolyn B. Kuhl’s November 2025 summary-judgment order

Some of the most revealing details in the Los Angeles case came not from the verdict alert, but from Judge Carolyn B. Kuhl’s November 2025 summary-judgment order and from the testimony itself. Before trial, the defense argued that Section 230 and the First Amendment should knock out K.G.M.’s claims. The court refused, saying there was evidence that design features themselves, not just third-party content, could have harmed her. The order specifically pointed to evidence that Instagram’s infinite scroll could itself be harmful and that YouTube’s autoplay could itself contribute to compulsive use, anxiety, depression, and insecurity. That matters because it shows exactly how the case was able to get around the usual platform-immunity defenses: the theory was not “bad content hurt me,” but “the product design intensified the harm.”

How young and how deeply immersed

Another detail that deserves more attention is just how young and how deeply immersed the plaintiff was. In her trial testimony, K.G.M. said she started using YouTube at age six, created her own account at eight by lying about her age, and by age ten had uploaded 240 videos, a number that eventually grew to 360. She also said she created nine additional accounts to boost likes and comments on her own videos. On Instagram, she testified that she secretly created an account at age nine, used it every day, and once spent 16 hours on the app in a single day. Those facts make the case feel less like a broad culture-war argument and more like a record of a child growing up inside systems designed to reward constant return, constant checking, and constant self-comparison.

Failure to warn

The record also shows that the failure-to-warn claim was more concrete than many casual readers may realize. The court order says the plaintiff was not arguing that Meta should have tucked some cautionary language into a terms-of-service document no one reads. Instead, the plaintiffs’ warning expert said any warning needed to be “large” and “prominently placed.” The same order notes it was undisputed that neither K.G.M. nor her mother ever saw warnings about Instagram’s safety, and that her mother testified she only learned about the dangers of social media from a “60 Minutes” segment long after her daughter had already been using the apps. She said that if she had known earlier, she “would have never given K.G.M. a phone.” That is the kind of testimony jurors can understand instantly.

Executive decisions

The testimony from executives likely mattered too, especially because it highlighted the gap between how these companies describe themselves in public and how the products are experienced in private life. Adam Mosseri testified that he does not believe people can be clinically addicted to social media, even though plaintiffs confronted him with earlier remarks where he had used the word “addiction” more casually. Mark Zuckerberg, for his part, said he still agreed with his prior position that the science had not proved social media causes mental-health harm. But under questioning, he also acknowledged that Instagram had previously had goals associated with time spent, even though he said Meta later moved away from that framework. Those are the kinds of moments that can shape how a jury hears everything else.


One more underappreciated point is that YouTube’s defense was not just “we didn’t cause this.” It was also “we are not really social media.” In closing, YouTube argued that it was more like television, pointed out that K.G.M. and her family did not use tools like YouTube Kids or safety mode, and even stressed that her original lawsuit paperwork did not initially assert claims against YouTube specifically. That argument reveals where these companies think the legal line now sits: if they can reframe themselves as neutral media channels rather than socially interactive behavioral products, they have a better shot at escaping the new design-liability theory. The jury was plainly not persuaded.

That is part of what makes the recent Los Angeles verdict against Meta and Google over Instagram and YouTube feel like a turning point. A jury found both companies liable for harming a 20-year-old woman through negligent design and failure to warn, and awarded $6 million in damages, with Meta responsible for 70% and Google for 30%.


Reuters also reported that the trial was the first real test of whether big tech companies can be held liable for the design of apps accused of harming young people’s wellbeing. That matters not just legally, but culturally. Jurors do not live in a vacuum. They live in the same country the rest of us do, and that country now has a much more intuitive feel for what these products can do.


The legal theory matters here. These cases are not mainly about blaming Meta or YouTube for every bad post uploaded by a user. They are about product design. Reuters reported that plaintiffs in both the California and New Mexico cases sidestepped the usual Section 230 defense by arguing the harm came from the companies’ own design choices rather than from user-generated content.


In the California case, the jury found Meta negligent in designing or operating Instagram and Google negligent in designing or operating YouTube, and found both companies failed to adequately warn users about the dangers of using the platforms. In other words, the courtroom argument is moving from “you hosted harmful speech” to “you built a machine that predictably intensified harm.”



That shift lands differently now because the public has changed. Before COVID, a lot of adults still treated social media as something frivolous, irritating, or juvenile. It was easy to dismiss concerns if you were not the one trapped in the loop. But lockdowns shoved even reluctant adults into the same ecosystem of feeds, alerts, doomscrolling, algorithmic rabbit holes, status anxiety, outrage bait, and compulsive checking.


Pew’s report puts it plainly: for many Americans, life in the early pandemic was lived on screens. Once that happened, the platforms’ power to hijack attention and distort mood became easier for ordinary people to grasp from experience rather than from expert testimony alone.



And if adults learned the lesson late, children were already living in it. In the Surgeon General’s advisory on social media and youth mental health, the government said social media use by youth is nearly universal: up to 95% of teens ages 13 to 17 use social media, more than a third use it “almost constantly,” and nearly 40% of children ages 8 to 12 use social media too.


The advisory said there are “ample indicators” that social media can pose a profound risk of harm to children and adolescents’ mental health and wellbeing, and noted that more than three hours a day on social media is associated with double the risk of poor mental health outcomes, including symptoms of depression and anxiety. Those were not fringe alarms. They were official ones.


The pandemic did not invent those dangers. It democratized the experience of them. People who once thought the warnings were overblown got their own taste of compulsive use, social comparison, manipulative recommendation loops, and the strange exhaustion that comes from being endlessly connected and somehow less grounded.


That is one reason the cultural mood has changed. The average juror today is more likely to know, either personally or through a child, niece, nephew, sibling, or friend, what it looks like when a platform stops feeling like a communication tool and starts feeling like a behavioral engine. That is an inference, but it fits the broader shift in public awareness reflected in the legal and policy response.


There is also a plainspoken way to describe what many of these platforms have become. They are not merely places where people connect. Increasingly, they feel like marketplaces where the worst ideas, the most corrosive comparisons, the crudest temptations, and the most manipulative attention tactics can all find a buyer instantly.


For many adults, Instagram no longer feels like a digital yearbook or a harmless photo-sharing app. It feels like a storefront for status, beauty, envy, aspiration, monetized intimacy, and algorithmically amplified performance. That is not a legal finding. It is a cultural one. But it helps explain why these companies are losing the benefit of the doubt.


The recent verdicts suggest that public patience is thinning. Reuters reported that Meta, Google, Snap, TikTok, and ByteDance are facing thousands of lawsuits over claims that they knowingly designed platforms with features that addict children and teens, fueling a mental health crisis. More than 2,400 cases have been centralized in federal court in California alone, with thousands more in state court.


Bellwether trials are designed to test how juries respond to a set of facts and legal theories. That is why the Los Angeles result matters so much. It is not the end of the fight, and both Meta and Google plan to appeal, but it is a signal that juries may no longer see these companies as neutral pipes. They may see them as designers of products with foreseeable consequences.


This is why the moment feels different from earlier rounds of criticism. Back then, the platforms could still hide behind novelty, optimism, and the language of connection. Now the public has lived with the products long enough to see what they optimize for.


The Surgeon General’s call for warning labels on social media apps was not just a policy suggestion. It was an acknowledgment that these services are no longer viewed merely as fun tools with a few side effects. They are increasingly being treated as products that can impose mental-health costs, especially on adolescents. Once that framing takes hold, courtroom victories become easier to understand and harder to dismiss.


So yes, this may be a turning point, especially for Meta. Not because every claim has been resolved, and not because appeals will not come. They will. But because the country has finally caught up to what experts were warning about years ago. The average person now has enough lived experience with these products to understand the case against them. That may be the biggest shift of all. When the public did not grasp the harm, tech companies could still look futuristic. Now they look familiar. And familiar is much more dangerous in front of a jury.
