The Courts

Mark Zuckerberg Testifies During Landmark Trial On Social Media Addiction (nbcnews.com)

Mark Zuckerberg is testifying in a landmark Los Angeles trial examining whether Meta and other social media firms can be held liable for designing platforms that allegedly addict and harm children. NBC News reports: It's the first of a consolidated group of cases -- from more than 1,600 plaintiffs, including over 350 families and over 250 school districts -- scheduled to be argued before a jury in Los Angeles County Superior Court. Plaintiffs accuse the owners of Instagram, YouTube, TikTok and Snap of knowingly designing addictive products harmful to young users' mental health. Historically, social media platforms have been largely shielded by Section 230, a provision of the Communications Decency Act of 1996 that says internet companies are not liable for content users post. TikTok and Snap reached settlements with the first plaintiff, a 20-year-old woman identified in court as K.G.M., ahead of the trial. The companies remain defendants in a series of similar lawsuits expected to go to trial this year.

[...] Matt Bergman, founding attorney of the Social Media Victims Law Center -- which is representing about 750 plaintiffs in the California proceeding and about 500 in the federal proceeding -- called Wednesday's testimony "more than a legal milestone -- it is a moment that families across this country have been waiting for." "For the first time, a Meta CEO will have to sit before a jury, under oath, and explain why the company released a product its own safety teams warned was addictive and harmful to children," Bergman said in a statement Tuesday, adding that the moment "carries profound weight" for parents "who have spent years fighting to be heard." "They deserve the truth about what company executives knew," he said. "And they deserve accountability from the people who chose growth and engagement over the safety of their children."

Windows

GameHub Will Give Mac Owners Another Imperfect Way To Play Windows Games (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: For a while now, Mac owners have been able to use tools like CrossOver and Game Porting Toolkit to get many Windows games running on their operating system of choice. Now, GameSir plans to add its own potential solution to the mix, announcing that a version of its existing Windows emulation tool for Android will be coming to macOS. Hong Kong-based GameSir has primarily made a name for itself as a manufacturer of gaming peripherals -- the company's social media profile includes a self-description as "the Anti-Stick Drift Experts." Early last year, though, GameSir rolled out the Android GameHub app, which includes a GameFusion emulator that the company claims "provides complete support for Windows games to run on Android through high-precision compatibility design."

In practice, GameHub and GameFusion for Android haven't quite lived up to that promise. Testers on Reddit and sites like EmuReady report hit-or-miss compatibility for popular Steam titles on various Android-based handhelds. At least one Reddit user suggests that "any Unity, Godot, or Game Maker game tends to just work" through the app, while another reports "terrible compatibility" across a wide range of games. With Sunday's announcement, GameSir promises a similar opportunity to "unlock your entire Steam library" and "run Win games/Steam natively" on Mac will be "coming soon." GameSir is also promising "proprietary AI frame interpolation" for the Mac, following the recent rollout of a "native rendering mode" that improved frame rates on the Android version.

There are some "reasons to worry," though, based on the company's uneven track record. The Android version faced controversy for including invasive tracking components, which were later removed after criticism. There were also questions about the use of open-source code, as GameSir acknowledged referencing and using UI components from Winlator, even while maintaining that its core compatibility layer was developed in-house.

Privacy

Leaked Email Suggests Ring Plans To Expand 'Search Party' Surveillance Beyond Dogs (404media.co)

Ring's AI-powered "Search Party" feature, which links neighborhood cameras into a networked surveillance system to find lost dogs, was never intended to stop at pets, according to an internal email from founder Jamie Siminoff obtained by 404 Media.

Siminoff told employees in early October, shortly after the feature launched, that Search Party was introduced "first for finding dogs" and that the technology would eventually help "zero out crime in neighborhoods." The on-by-default feature faced intense backlash after Ring promoted it during a Super Bowl ad. Ring has since also rolled out "Familiar Faces," a facial recognition tool that identifies friends and family on a user's camera, and "Fire Watch," an AI-based fire alert system.

A Ring spokesperson told the publication Search Party does not process human biometrics or track people.

AI

WordPress Gets AI Assistant That Can Edit Text, Generate Images and Tweak Your Site (techcrunch.com)

WordPress has started rolling out an AI assistant built into its site editor and media library that can edit and translate text, generate and edit images through Google's Nano Banana model, and make structural changes to sites like creating new pages or swapping fonts.

Users can also invoke the assistant by tagging "@ai" in block notes, a commenting feature added to the site editor in December's WordPress 6.9 update. The tool is opt-in -- users need to toggle on "AI tools" in their site settings -- though sites originally created using WordPress's AI website builder, launched last year, will have it enabled by default.

AI

India Tells University To Leave AI Summit After Presenting Chinese Robot as Its Own (reuters.com)

An anonymous reader shares a report: An Indian university has been asked to vacate its stall at the country's flagship AI summit after a staff member was caught presenting a commercially available robotic dog made in China as its own creation, two government sources said.

"You need to meet Orion. This has been developed by the Centre of Excellence at Galgotias University," Neha Singh, a professor of communications, told state-run broadcaster DD News this week in remarks that have since gone viral.

But social media users quickly identified the robot as the Unitree Go2, sold by China's Unitree Robotics for about $2,800 and widely used in research and education globally. The episode has drawn sharp criticism and has cast an uncomfortable spotlight on India's artificial intelligence ambitions.

Social Networks

Discord Rival Maxes Out Hosting Capacity As Players Flee Age-Verification Crackdown (pcgamer.com)

Following backlash over Discord's global rollout of strict age-verification checks, users are flocking to rival platform TeamSpeak and overwhelming its servers. According to PC Gamer, the Discord alternative said its hosting capacity has been maxed out in a number of regions including the U.S. From the report: [A]s I saw for myself while testing out free Discord alternatives, it's hard to deny the appeal of TeamSpeak. It's quick and easy to make an account, join or start a group chat, or join a massive, game-based community voice server, and at no point does TeamSpeak cheekily ask if it can scan your wizened visage.

During my testing, I was able to dive into 18+ group chats without tripping over an age gate. However, there's no guarantee TeamSpeak won't have to deploy its own age verification mechanism in the future. In the UK at least, the Online Safety Act makes those sorts of checks a legal obligation, with Prime Minister Keir Starmer recently stating "No social media platform should get a free pass when it comes to protecting our kids."

Besides all of that, if you'd rather not chat to randoms who also happen to have an unhealthy obsession with Arc Raiders, you'll likely need to pay an admittedly small subscription fee to rent your own ten-person community voice server. By that point, you're handing over card details and essentially fulfilling an age assurance check anyway. If you'd rather limit how much info your chat platform of choice has about you, there are arguably better options out there.

Movies

A YouTuber's $3M Movie Nearly Beat Disney's $40M Thriller at the Box Office (theatlantic.com)

Mark Fischbach, the YouTube creator known as Markiplier who has spent nearly 15 years building an audience of more than 38 million subscribers by playing indie-horror video games on camera, has pulled off something that most independent filmmakers never manage -- a self-financed, self-distributed debut feature that has grossed more than $30 million domestically against a $3 million budget.

Iron Lung, a 127-minute sci-fi adaptation of a video game Fischbach wrote, directed, starred in, and edited himself, opened to $18.3 million in its first weekend and has since doubled that figure worldwide in just two weeks, nearly matching the $19.1 million debut of Send Help, a $40 million thriller from Disney-owned 20th Century Studios. Fischbach declined deals from traditional distributors and instead spent months booking theaters privately, encouraging fans to reserve tickets online; when prospective viewers found the film wasn't screening in their city, they called local cinemas to request it, eventually landing Iron Lung on more than 3,000 screens across North America -- all without a single paid media campaign.

Social Networks

Instagram Boss Says 16 Hours of Daily Use Is Not Addiction (bbc.com)

Instagram head Adam Mosseri told a Los Angeles courtroom last week that a teenager's 16-hour single-day session on the platform was "problematic use" but not an addiction, a distinction he drew repeatedly during testimony in a landmark trial over social media's harm to minors.

Mosseri, who has led Instagram for eight years, is the first high-profile tech executive to take the stand. He agreed the platform should do everything in its power to protect young users but said how much use was too much was "a personal thing." The lead plaintiff, identified as K.G.M., reported bullying on Instagram more than 300 times; Mosseri said he had not known. An internal Meta survey of 269,000 users found 60% had experienced bullying in the previous week.

Social Networks

India's New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically (bbc.com)

Bloomberg reports: India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material...

Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India's digital policy... The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.

The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports. And they add that India's new three-hour deadline is "a sharp tightening of the existing 36-hour deadline." [C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy with more than a billion internet users... According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests...

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.

DW reports that India has also "joined the growing list of countries considering a social media ban for children under 16."

"Young Indians are not happy and are already plotting workarounds."

Social Networks

Social Networks Agree to Be Rated On Their Teen Safety Efforts (yahoo.com)

Meta, TikTok, Snap and other social networks agreed this week to be rated on their teen safety efforts, reports the Los Angeles Times, "amid rising concern about whether the world's largest social media platforms are doing enough to protect the mental health of young people." The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources. TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release.

"These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments," Antigone Davis, vice president and global head of safety at Meta, said in a statement... The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they're not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven't been completed yet.

Social Networks

The EU Moves To Kill Infinite Scrolling

Doom scrolling is doomed, if the EU gets its way. From a report: The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps. Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission's declaration that TikTok's design is addictive to users -- especially children.

The fact that the Commission said TikTok should change the basic design of its service is "ground-breaking for the business model fueled by surveillance and advertising," said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group. That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design.

AI

Anthropic's Claude Got 11% User Boost from Super Bowl Ad Mocking ChatGPT's Advertising (cnbc.com)

Anthropic saw visits to its site jump 6.5% after Sunday's Super Bowl ad mocking ChatGPT's advertising, reports CNBC (citing data analyzed by French financial services company BNP Paribas).

The Claude gain, which took it into the top 10 free apps on the Apple App Store, beat out chatbot and AI competitors OpenAI, Google Gemini and Meta. Daily active users also saw an 11% jump post-game, the most significant within the firm's AI coverage. [Just in the U.S., 125 million people were watching Sunday's Super Bowl.]

OpenAI's ChatGPT had a 2.7% bump in daily active users after the Super Bowl and Gemini added 1.4%. Claude's user base is still much smaller than those of ChatGPT and Gemini...

OpenAI CEO Sam Altman attacked Anthropic's Super Bowl ad campaign. In a post to social media platform X, Altman called the commercials "deceptive" and "clearly dishonest."

OpenAI's Altman admitted in his social media post (February 4) that Anthropic's ads "are funny, and I laughed." But in several paragraphs he made his own OpenAI-Anthropic comparisons:
  • "We believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than total people use Claude in the U.S... Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions."
  • "If you want to pay for ChatGPT Plus or Pro, we don't show you ads."
  • "Anthropic wants to control what people do with AI — they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be."

Businesses

Israeli Soldiers Accused of Using Polymarket To Bet on Strikes (wsj.com)

An anonymous reader shares a report: Israel has arrested several people, including army reservists, for allegedly using classified information to place bets on Israeli military operations on Polymarket. Shin Bet, the country's internal security agency, said Thursday the suspects used information they had come across during their military service to inform their bets.

One of the reservists and a civilian were indicted on charges of serious security offenses, bribery and obstruction of justice, Shin Bet said, without naming the people who were arrested. Polymarket is what is called a prediction market, which lets people place bets to forecast the direction of events. Users wager on everything from the size of any interest-rate cut by the Federal Reserve in March to the winner of League of Legends videogame tournaments to the number of times Elon Musk will tweet in the third week of February.

The arrests followed reports in Israeli media that Shin Bet was investigating a series of Polymarket bets last year related to when Israel would launch an attack on Iran, including which day or month the attack would take place and when Israel would declare the operation over. Last year, a user who went by the name ricosuave666 correctly predicted the timeline around the 12-day war between Israel and Iran. The bets drew attention from other traders who suspected the account holder had access to nonpublic information. The account in question raked in more than $150,000 in winnings before going dormant for six months. It resumed trading last month, betting on when Israel would strike Iran, Polymarket data shows.

AI

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com)

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."

"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...

It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI...

I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...

So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because of a small number of bad actors driving large swarms of agents or a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.

Facebook

Meta's New Patent: an AI That Likes, Comments and Messages For You When You're Dead (businessinsider.com)

Meta was granted a patent in late December that describes how a large language model could be trained on a deceased user's historical activity -- their comments, likes, and posted content -- to keep their social media accounts active after they're gone.

Andrew Bosworth, Meta's CTO, is listed as the primary author of the patent, first filed in 2023. The AI clone could like and comment on posts, respond to DMs, and even simulate video or audio calls on the user's behalf. A Meta spokesperson told Business Insider the company has "no plans to move forward" with the technology.

Privacy

Ring Cancels Its Partnership With Flock Safety After Surveillance Backlash (theverge.com)

Following intense backlash to its partnership with Flock Safety, a surveillance technology company that works with law enforcement agencies, Ring has announced it is canceling the integration. From a report: In a statement published on Ring's blog and provided to The Verge ahead of publication, the company said: "Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated. We therefore made the joint decision to cancel the integration and continue with our current partners ... The integration never launched, so no Ring customer videos were ever sent to Flock Safety."

[...] Over the last few weeks, the company has faced significant public anger over its connection to Flock, with Ring users being encouraged to smash their cameras, and some announcing on social media that they are throwing away their Ring devices. The Flock partnership was announced last October, but following recent unrest across the country related to ICE activities, public pressure against the Amazon-owned Ring's involvement with the company started to mount. Flock has reportedly allowed ICE and other federal agencies to access its network of surveillance cameras, and influencers across social media have been claiming that Ring is providing a direct link to ICE.

United States

CIA Makes New Push To Recruit Chinese Military Officers as Informants (reuters.com)

An anonymous reader shares a report: Just weeks after a dramatic purge of China's top general, the CIA is moving to capitalize on any resulting discord with a new public video targeting potential informants in the Chinese military. The U.S. spy agency on Thursday rolled out the video depicting a disillusioned mid-level Chinese military officer, in the latest U.S. step in a campaign to ramp up human intelligence gathering on Washington's strategic rival.

It follows a similar effort last May, which featured fictional figures within China's ruling Communist Party and provided detailed Chinese-language instructions on how to securely contact U.S. intelligence. CIA Director John Ratcliffe said in a statement that the agency's videos had reached many Chinese citizens and that it would continue offering Chinese government officials an "opportunity to work toward a brighter future together."

United States

Border Officials Are Said To Have Caused El Paso Closure by Firing Anti-Drone Laser (nytimes.com)

An anonymous reader shares a report: The abrupt closure of El Paso's airspace late Tuesday was precipitated when Customs and Border Protection officials deployed an anti-drone laser on loan from the Department of Defense without giving aviation officials enough time to assess the risks to commercial aircraft, according to multiple people briefed on the situation.

The episode led the Federal Aviation Administration to abruptly declare that the nearby airspace would be shut down for 10 days, an extraordinary pause that was quickly lifted Wednesday morning at the direction of the White House. Top administration officials quickly claimed that the closure was in response to a sudden incursion of drones from Mexican drug cartels that required a military response, with Transportation Secretary Sean Duffy declaring in a social media post that "the threat has been neutralized."

But that assertion was undercut by multiple people familiar with the situation, who said that the F.A.A.'s extreme move came after immigration officials earlier this week used an anti-drone laser shared by the Pentagon without coordination with the F.A.A. The people spoke on the condition of anonymity because they were not authorized to speak publicly. C.B.P. officials thought they were firing on a cartel drone, the people said, but it turned out to be a party balloon. Defense Department officials were present during the incident, one person said.

Privacy

With Ring, American Consumers Built a Surveillance Dragnet (404media.co)

Ring's Super Bowl ad on Sunday promoted "Search Party," a feature that lets a user post a photo of a missing dog in the Ring app and triggers outdoor Ring cameras across the neighborhood to use AI to scan for a match. 404 Media argues the cheerful premise obscures what the Amazon-owned company has become: a massive, consumer-deployed surveillance network.

Ring founder Jamie Siminoff, who left in 2023 and returned last year, has since moved to re-establish police partnerships and push more AI into Ring cameras. The company has also partnered with Flock, a surveillance firm used by thousands of police departments, and launched a beta feature called "Familiar Faces" that identifies known people at your door. Chris Gilliard, author of the upcoming book Luxury Surveillance, called the ad "a clumsy attempt by Ring to put a cuddly face on a rather dystopian reality: widespread networked surveillance by a company that has cozy relationships with law enforcement."

Further reading: No One, Including Our Furry Friends, Will Be Safer in Ring's Surveillance Nightmare, EFF Says

United Kingdom

UK Orders Deletion of Country's Largest Court Reporting Archive (thetimes.com)

The UK's Ministry of Justice has ordered the deletion of the country's largest court reporting archive [non-paywalled source], a database built by data analysis company Courtsdesk that more than 1,500 journalists across 39 media organizations have used since the lord chancellor approved the project in 2021.

Courtsdesk's research found that journalists received no advance notice of 1.6 million criminal hearings, that court case listings were accurate on just 4.2% of sitting days, and that half a million weekend cases were heard without any press notification. In November, HM Courts and Tribunal Service issued a cessation notice citing "unauthorized sharing" of court data based on a test feature.

Courtsdesk says it wrote 16 times asking for dialogue and requested a referral to the Information Commissioner's Office; no referral was made. The government issued a final refusal last week, and the archive must now be deleted within days. Chris Philp, the former justice minister who approved the pilot and now shadow home secretary, has written to courts minister Sarah Sackman demanding the decision be reversed.
