AI

Scarlett Johansson Calls For Deepfake Ban After AI Video Goes Viral (people.com) 75

An anonymous reader quotes a report from People: Scarlett Johansson is urging U.S. legislators to place limits on artificial intelligence as an unauthorized, A.I.-generated video of her and other Jewish celebrities opposing Kanye West goes viral. The video, which has been circulating on social media, opens with an A.I. version of Johansson, 40, wearing a white T-shirt featuring a hand with its middle finger extended. In the center of the hand is a Star of David. The name "Kanye" is written underneath the hand.

The video contains A.I.-generated versions of over a dozen other Jewish celebrities, including Drake, Jerry Seinfeld, Steven Spielberg, Mark Zuckerberg, Jack Black, Mila Kunis and Lenny Kravitz. It closes with an A.I. Adam Sandler raising his middle finger at the camera as the Jewish folk song "Hava Nagila" plays, followed by the messages "Enough is Enough" and "Join the Fight Against Antisemitism." In a statement to PEOPLE, Johansson denounced what she called "the misuse of A.I., no matter what its messaging."
Johansson continued: "It has been brought to my attention by family members and friends, that an A.I.-generated video featuring my likeness, in response to an antisemitic view, has been circulating online and gaining traction. I am a Jewish woman who has no tolerance for antisemitism or hate speech of any kind. But I also firmly believe that the potential for hate speech multiplied by A.I. is a far greater threat than any one person who takes accountability for it. We must call out the misuse of A.I., no matter its messaging, or we risk losing a hold on reality."

"I have unfortunately been a very public victim of A.I.," she added, "but the truth is that the threat of A.I. affects each and every one of us. There is a 1000-foot wave coming regarding A.I. that several progressive countries, not including the United States, have responded to in a responsible manner. It is terrifying that the U.S. government is paralyzed when it comes to passing legislation that protects all of its citizens against the imminent dangers of A.I."

The statement concluded, "I urge the U.S. government to make the passing of legislation limiting A.I. use a top priority; it is a bipartisan issue that enormously affects the immediate future of humanity at large."

Johansson has been outspoken about AI technology since its rise in popularity. Last year, she called out OpenAI for using an AI personal assistant voice that the actress claims sounds uncannily similar to her own.
Apple

Apple Now Lets You Move Purchases Between Your 25 Years of Accounts (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Last night, Apple posted a new support document about migrating purchases between accounts, something that Apple users with long online histories have been waiting on for years, if not decades. If you have movies, music, or apps orphaned on various iTools/.Mac/MobileMe/iTunes accounts that preceded what you're using now, you can start the fairly involved process of moving them over.

"You can choose to migrate apps, music, and other content you've purchased from Apple on a secondary Apple Account to a primary Apple Account," the document reads, suggesting that people might have older accounts tied primarily to just certain movies, music, or other purchases that they can now bring forward to their primary, device-linked account. The process takes place on an iPhone or iPad inside the Settings app, in the "Media & Purchases" section in your named account section.

There are a few hitches to note. You can't migrate purchases from or into a child's account that exists inside Family Sharing. You can only migrate purchases to an account once a year. There are some complications if you have music libraries on both accounts, and also if you have never used the primary account for purchases or downloads. And migration is not available in the EU, UK, or India. The process is also one-way, so give some real thought to which account should be your "primary" account going forward, though if you goof it up, you can undo the migration.
"The list of things you need to do on both the primary and secondary accounts to enable this migration is almost comically long and detailed: two-factor authentication must be turned on, there can be no purchases or rentals in the last 15 days, payment methods must be updated, and so on," notes Ars' Kevin Purdy.
Crime

'Serial Swatter' Who Made Nearly 400 Threatening Calls Gets 4 Years In Prison (thehill.com) 98

Alan W. Filion, an 18-year-old from Lancaster, Calif., was sentenced to four years in prison for making nearly 400 false bomb threats and threats of violence (source may be paywalled; alternative source) to religious institutions, schools, universities and homes across the country. The New York Times reports: The threatening calls Mr. Filion made would often cause large deployments of police officers to a targeted location, the Justice Department said in a news release. In some cases, officers would enter people's homes with their weapons drawn and detain those inside. In January 2023, Mr. Filion wrote on social media that his swats had often led the police to "drag the victim and their families out of the house cuff them and search the house for dead bodies."

Investigators linked Mr. Filion to over 375 swatting calls made in several states, including one that he made to the police in Sanford, Fla., saying that he would commit a mass shooting at the Masjid Al Hayy Mosque. During the call, he played audio of gunfire in the background. Mr. Filion was arrested in California in January 2024, and was then extradited to Florida to face state charges for making that threat. Mr. Filion began swatting for recreation in August 2022 before making it into a business, the Justice Department said. The teenager became a "serial swatter" and would make social media posts about his "swatting-for-a-fee" services, according to prosecutors.

In addition to pleading guilty to the false threat against the mosque in Florida, Mr. Filion pleaded guilty in three other swatting cases: a mass shooting threat to a public school in Washington State in October 2022; a bomb threat call to a historically Black college or university in Florida in May 2023; and a July 2023 call in which he claimed to be a federal law enforcement officer in Texas and told dispatchers that he had killed his mother and would kill any responding officers.

AI

Thomson Reuters Wins First Major AI Copyright Case In the US 54

An anonymous reader quotes a report from Wired: Thomson Reuters has won the first major AI copyright case in the United States. In 2020, the media and technology conglomerate filed an unprecedented AI copyright lawsuit against the legal AI startup Ross Intelligence. In the complaint, Thomson Reuters claimed the AI firm reproduced materials from its legal research firm Westlaw. Today, a judge ruled (PDF) in Thomson Reuters' favor, finding that the company's copyright was indeed infringed by Ross Intelligence's actions. "None of Ross's possible defenses holds water. I reject them all," wrote US District Court of Delaware judge Stephanos Bibas, in a summary judgment. [...] Notably, Judge Bibas ruled in Thomson Reuters' favor on the question of fair use.

The fair use doctrine is a key component of how AI companies are seeking to defend themselves against claims that they used copyrighted materials illegally. The idea underpinning fair use is that sometimes it's legally permissible to use copyrighted works without permission -- for example, to create parody works, or in noncommercial research or news production. When determining whether fair use applies, courts use a four-factor test, looking at the purpose and character of the use, the nature of the copyrighted work (whether it's poetry, nonfiction, private letters, et cetera), the amount of copyrighted work used, and how the use impacts the market value of the original. Thomson Reuters prevailed on two of the four factors, but Bibas described the fourth as the most important, and ruled that Ross "meant to compete with Westlaw by developing a market substitute."
"If this decision is followed elsewhere, it's really bad for the generative AI companies," says James Grimmelmann, Cornell University professor of digital and internet law.

Chris Mammen, a partner at Womble Bond Dickinson who focuses on intellectual property law, adds: "It puts a finger on the scale towards holding that fair use doesn't apply."
Social Networks

US-Funded 'Social Network' Attacking Pesticide Critics Shuts Down (theguardian.com) 64

The US company v-Fluence secretly compiled profiles on over 500 food and environmental health advocates, scientists, and politicians in a private web portal to discredit critics of pesticides and GM crops. Following public backlash and corporate cancellations after its actions were revealed by the Guardian, the company announced it was shutting down the profiling service. The Guardian reports: The profiles -- part of an effort that was financed, in part, by US taxpayer dollars -- often provided derogatory information about the industry opponents and included home addresses and phone numbers and details about family members, including children. They were provided to members of an invite-only web portal where v-Fluence also offered a range of other information to its roster of more than 1,000 members. The membership included staffers of US regulatory and policy agencies, executives from the world's largest agrochemical companies and their lobbyists, academics and others.

The profiling was one element of a push to downplay pesticide dangers, discredit opponents and undermine international policymaking, according to court records, emails and other documents obtained by the non-profit newsroom Lighthouse Reports. Lighthouse collaborated with the Guardian, the New Lede, Le Monde, Africa Uncensored, the Australian Broadcasting Corporation and other international media partners on the September 2024 publication of the investigation. News of the profiling and the private web portal sparked outrage and threats of litigation by some of the people and organizations profiled. [...]

v-Fluence says it not only has eliminated the profiling, but also has made "significant staff cuts" after the public exposure, according to Jay Byrne, the former Monsanto public relations executive who founded and heads the company. Byrne blamed the company's struggles on "rising costs from continued litigator and activist harassment of our staff, partners, and clients with threats and misrepresentations." He said the articles published about the company's profiling and private web portal were part of a "smear campaign" which was based on "false and misleading misrepresentations" that were "not supported by any facts or evidence." Adding to the company's troubles, several corporate backers and industry organizations have cancelled contracts with v-Fluence, according to a post in a publication for agriculture professionals.

Technology

Microchip Company Ceases Operations, Pet Owners Urged To Re-Register (cbsnews.com) 37

An anonymous reader quotes a report from CBS News: Animal shelters, rescues, and veterinarian clinics around the U.S. are posting on social media telling pet owners to check their four-legged friends' microchips after learning a major microchip company [called Save This Life] is no longer providing services. [...] If you're unsure which company your cats' or dogs' chips are registered with, check them. "You can go to your local veterinarian office, a local police station, or even a local animal shelter like HARP, and we can help check that for you and scan your animal. And then you take that number that's on there and there's a tool online where you can go look it up," [said Dan Cody, Executive Director of Humane Animal Rescue of Pittsburgh].

He said you can check the number using the AAHA Universal Microchip Lookup Tool. If you discover your pet's microchip was registered to the company that's ceased operations, you'll need to register with a different company. "So, if you find that you are affected by this, you're going to want to go to one of these other websites that do the registrations. So, things like AKC Reunite, PetLink, and 24PetWatch. These are all large companies who've been around for a long time and have good reputations," said Cody.

The American Kennel Club shared a post from its AKC Reunite Facebook page, encouraging people to enroll in microchips with AKC Reunite. The post said in part, "If your dog or cat has a microchip number that starts with 991 or 900164 then it could be a Save This Life microchip. Save This Life suddenly closed, and your pet may not be protected." Cody said if your furry best friend isn't microchipped, take them to a vet or shelter like HARP to get one implanted under their skin so they have a permanent ID. Microchipping can be done at HARP's East Side and North Side Veterinary Medical Center by appointment.
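For the curious, the prefix screen described in the AKC Reunite post boils down to a simple string check. Below is a minimal, hypothetical Python sketch (not an official registry tool; only the 991 and 900164 prefixes come from the post, and a match is just a hint, so the AAHA lookup remains the authoritative way to confirm a chip's registration):

# Hypothetical sketch of the prefix screen described in the AKC Reunite post.
# Only the 991 and 900164 prefixes come from the post; everything else is illustrative.
SAVE_THIS_LIFE_PREFIXES = ("991", "900164")

def may_be_save_this_life(chip_number: str) -> bool:
    """Return True if a microchip number matches a Save This Life prefix."""
    digits = chip_number.replace(" ", "").replace("-", "")
    return digits.startswith(SAVE_THIS_LIFE_PREFIXES)

if __name__ == "__main__":
    for chip in ("991001234567890", "985112345678903"):  # made-up example numbers
        if may_be_save_this_life(chip):
            print(f"{chip}: matches a Save This Life prefix -- verify its registration")
        else:
            print(f"{chip}: does not match the flagged prefixes")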

Crime

California Tech Founder Admits to Defrauding $4M For His Luxury Lifestyle (sfgate.com) 47

The tech startup "purported to make smart home and business products," writes America's Justice Department — products that were "meant to stop package theft, prevent weather damage to packages, and make it easier for emergency responders and delivery services to find homes and businesses." Royce Newcomb "developed prototypes of his products and received local and national media attention for them. For example, Time Magazine included his eLiT Address Box & Security System, which used mobile networks to pinpoint home and business locations, on its Best Inventions of 2021 list."

But then he told investors he'd also received a grant from the National Science Foundation — one of "several false representations to his investors to deceive and cheat them out of their money... Newcomb used the money to pay for gambling, a Mercedes and Jaguar, and a mansion." He also used the money to pay for refunds to other investors who wanted out, and to pay for new, unrelated projects without the investors' authorization. During this period, Newcomb also received a fraudulent COVID-19 loan for more than $70,000 from the Small Business Administration and fraudulent loans for more than $190,000 from private lenders. He lied about Strategic Innovations having hundreds of thousands and even millions in revenue to get these loans.

Newcomb was previously convicted federally in 2011 for running a real estate fraud scheme in Sacramento. He was sentenced to more than five years in prison for that offense, and he was on federal supervised release for that offense when he committed the offenses charged in this case... Newcomb faces maximum statutory penalties of 20 years in prison and a $250,000 fine for the wire fraud charge, and 10 years in prison and a $250,000 fine for the money laundering charge...

This effort is part of a California COVID-19 Fraud Enforcement Strike Force operation, one of five interagency COVID-19 fraud strike force teams established by the U.S. Department of Justice.

SFGate writes that "Despite receiving significant funding, his startup, Strategic Innovations, never made a dime or released any products to market, according to legal documents." The owner of a California tech startup has pleaded guilty to stealing over $4 million from investors, private lenders and the U.S. government in order to live a luxurious lifestyle, the United States Attorney's Office announced Monday... When investors asked about product delays and when they'd be paid back, Newcomb made excuses and provided conflicting info, telling them that there were supply chain issues or software problems, according to the indictment. In reality, federal prosecutors said, he was using the money to travel and continue to make these lavish personal expenses.
AI

Creators Demand Tech Giants Fess Up, Pay For All That AI Training Data 55

The Register highlights concerns raised at a recent UK parliamentary committee regarding AI companies' exploitation of copyrighted content without permission or payment. From the report: The Culture, Media and Sport Committee and Science, Innovation and Technology Committee asked composer Max Richter how he would know if "bad-faith actors" were using his material to train AI models. "There's really nothing I can do," he told MPs. "There are a couple of music AI models, and it's perfectly easy to make them generate a piece of music that sounds uncannily like me. That wouldn't be possible unless it had hoovered up my stuff without asking me and without paying for it. That's happening on a huge scale. It's obviously happened to basically every artist whose work is on the internet."

Richter, whose work has been used in a number of major film and television scores, said the consequences for creative musicians and composers would be dire. "You're going to get a vanilla-ization of music culture as automated material starts to edge out human creators, and you're also going to get an impoverishing of human creators," he said. "It's worth remembering that the music business in the UK is a real success story. It's 7.6 billion-pound income last year, with over 200,000 people employed. That is a big impact. If we allow the erosion of copyright, which is really how value is created in the music sector, then we're going to be in a position where there won't be artists in the future."

Speaking earlier, former Google staffer James Smith said much of the damage from text and data mining had likely already been done. "The original sin, if you like, has happened," said Smith, co-founder and chief executive of Human Native AI. "The question is, how do we move forward? I would like to see the government put more effort into supporting licensing as a viable alternative monetization model for the internet in the age of these new AI agents."

Matt Rogerson, director of global public policy and platform strategy at the Financial Times, said: "We can only deal with what we see in front of us and [that is] people taking our content, using it for the training, using it in substitutional ways. So from our perspective, we'll prosecute the same argument in every country where we operate, where we see our content being stolen." The risk, if the situation continued, was a hollowing out of creative and information industries, he said. [...] "The problem is we can't see who's stolen our content. We're just at this stage where these very large companies, which usually make margins of 90 percent, might have to take some smaller margin, and that's clearly going to be upsetting for their investors. But that doesn't mean they shouldn't. It's just a question of right and wrong and where we pitch this debate. Unfortunately, the government has pitched it in thinking that you can't reduce the margin of these big tech companies; otherwise, they won't build a datacenter."
Patents

Amazon Says German Customers Won't Lose Amazon Prime As a Result of Nokia Patent Win 12

A German court has ruled that Amazon's Prime Video service violates a Nokia-owned patent, ordering Amazon to stop streaming in its current form or face fines of 250,000 euros per violation. However, Amazon assured customers in a statement on Friday that there is no risk of losing access to Prime Video because the decision affects only a limited functionality related to casting videos between devices.

"Prime Video will comply with this local judgement and is currently considering next steps. However, there is absolutely no risk at all for customers losing access to Prime Video," Amazon's Prime Video spokesperson told Reuters. Meanwhile, Nokia's chief licensing officer, Arvin Patel, said: "...the innovation ecosystem breaks down if patent holders are not fairly compensated for the use of their technologies, as it becomes much harder for innovators to fund the development of next generation technologies."
Security

'Zombie Devices' Raise Cybersecurity Alarm as Consumers Ignore Smart Tech Expiry Dates 54

A survey of 2,130 Americans has revealed widespread vulnerability to cyber attacks through unsupported smart devices, with 43% unaware their devices might lose software support. The security threat was underscored in December 2023 when U.S. authorities disrupted a Chinese state-sponsored botnet targeting home routers and cameras that had stopped receiving security updates. Cloudflare separately reported a record-breaking DDoS attack in late 2023, primarily originating from compromised smart TVs and set-top boxes.

The survey, conducted by Consumer Reports, found that only 39% of consumers learned about lost software support from manufacturers, with most discovering issues when devices stopped working (40%) or through media reports (15%). Most consumers expect their smart devices to retain functionality after losing software support, particularly for large appliances (70%). However, Consumer Reports' research found only 14% of 21 smart appliance brands specify support timeframes, while an FTC study of 184 devices showed just 11% disclose support duration.
Linux

Asahi Linux Lead Developer Hector Martin Resigns From Linux Kernel (theregister.com) 86

Asahi lead developer Hector Martin, writing in an email: I no longer have any faith left in the kernel development process or community management approach.

Apple/ARM platform development will continue downstream. If I feel like sending some patches upstream in the future myself for whatever subtree I may, or I may not. Anyone who feels like fighting the upstreaming fight themselves is welcome to do so.

The Register points out that the action follows this interaction with Linus Torvalds.

Hector Martin: If shaming on social media does not work, then tell me what does, because I'm out of ideas.

Linus Torvalds: How about you accept the fact that maybe the problem is you. You think you know better. But the current process works. It has problems, but problems are a fact of life. There is no perfect. However, I will say that the social media brigading just makes me not want to have anything at all to do with your approach. Because if we have issues in the kernel development model, then social media sure as hell isn't the solution.
Security

Ransomware Payments Dropped 35% In 2024 (therecord.media) 44

An anonymous reader quotes a report from CyberScoop: Ransomware payments saw a dramatic 35% drop last year compared to 2023, even as the overall frequency of ransomware attacks increased, according to a new report released by blockchain analysis firm Chainalysis. The considerable decline in extortion payments is somewhat surprising, given that other cybersecurity firms have claimed that 2024 saw the most ransomware activity to date. Chainalysis itself warned in its mid-year report that 2024's activity was on pace to reach new heights, but attacks in the second half of the year tailed off. The total amount in payments that Chainalysis tracked in 2024 was $812.55 million, down from 2023's mark of $1.25 billion.
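As a quick back-of-the-envelope check (not part of the Chainalysis report), the two dollar totals are consistent with the roughly 35% decline in the headline:

# Back-of-the-envelope check that the cited totals match the ~35% decline.
payments_2023 = 1.25e9     # $1.25 billion tracked in 2023
payments_2024 = 812.55e6   # $812.55 million tracked in 2024

decline = (payments_2023 - payments_2024) / payments_2023
print(f"Year-over-year decline: {decline:.1%}")  # -> Year-over-year decline: 35.0%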

The disruption of major ransomware groups, such as LockBit and ALPHV/BlackCat, was key to the reduction in ransomware payments. Operations spearheaded by agencies like the United Kingdom's National Crime Agency (NCA) and the Federal Bureau of Investigation (FBI) caused significant declines in LockBit activity, while ALPHV/BlackCat essentially rug-pulled its affiliates and disappeared after its attack on Change Healthcare. [...] Additionally, [Chainalysis] says more organizations have become more resilient to attacks, with many choosing not to pay a ransom and instead using better cybersecurity practices and backups to recover from these incidents. [...]
Chainalysis also says ransomware operators are letting funds sit in wallets, refraining from moving any money out of fear they are being watched by law enforcement.

You can read the full report here.
Government

Bill Banning Social Media For Youngsters Advances (politico.com) 86

The Senate Commerce Committee approved the Kids Off Social Media Act, banning children under 13 from social media and requiring federally funded schools to restrict access on networks and devices. Politico reports: The panel approved the Kids Off Social Media Act -- sponsored by the panel's chair, Texas Republican Ted Cruz, and a senior Democrat on the panel, Hawaii's Brian Schatz -- by voice vote, clearing the way for consideration by the full Senate. Only Ed Markey (D-Mass.) asked to be recorded as a no on the bill. "When you've got Ted Cruz and myself in agreement on something, you've pretty much captured the ideological spectrum of the whole Congress," Sen. Schatz told POLITICO's Gabby Miller.

[...] "KOSMA comes from very good intentions of lawmakers, and establishing national screen time standards for schools is sensible. However, the bill's in-effect requirements on access to protected information jeopardize all Americans' digital privacy and endanger free speech online," said Amy Bos, NetChoice director of state and federal affairs. The trade association represents big tech firms including Meta and Google. Netchoice has been aggressive in combating social media legislation by arguing that these laws illegally restrict -- and in some cases compel -- speech. [...] A Commerce Committee aide told POLITICO that because social media platforms already voluntarily require users to be at least 13 years old, the bill does not restrict speech currently available to kids.

E3

ESA Wants To Replace E3 With a Bunch of Buzzwords (engadget.com) 30

The Entertainment Software Association is launching a new gaming event to replace E3, which was permanently canceled in 2023. According to Engadget, the new event is called iicon (short for "interactive innovation conference") and will feature many of the same major gaming companies that once participated in E3. "Sony, Nintendo, Microsoft, Disney, EA, Epic Games, Ubisoft, Square Enix, Take Two Interactive, Amazon Games and Warner Bros. Games are all named as participants." From the report: [T]he announcements on social media promote iicon as being for "visionaries," "changemakers" and "innovators," so our best guess is that this event will swing more toward the corporate side of gaming where people might use that language unironically. If that's the case, this won't really be a replacement for the heyday of E3, when studios big and small would showcase their upcoming projects and drop internet-breaking surprises. Instead, the inaugural event in April 2026 sounds like it will focus more on moving the needle, brand alignments and synergy.
The Internet

The Enshittification Hall of Shame 249

In 2022, writer and activist Cory Doctorow coined the term "enshittification" to describe the gradual deterioration of a service or product. The term's prevalence has increased to the point that it was the National Dictionary of Australia's word of the year last year. The editors at Ars Technica, having "covered a lot of things that have been enshittified," decided to highlight some of the worst examples they've come across. Here's a summary of each thing mentioned in their report: Smart TVs: Evolved into data-collecting billboards, prioritizing advertising and user tracking over user experience and privacy. Features like convenient input buttons are sacrificed for pushing ads and webOS apps. "This is all likely to get worse as TV companies target software, tracking, and ad sales as ways to monetize customers after their TV purchases -- even at the cost of customer convenience and privacy," writes Scharon Harding. "When budget brands like Roku are selling TV sets at a loss, you know something's up."

Google's Voice Assistant (e.g., Nest Hubs): Functionality has degraded over time, with previously working features becoming unreliable. Users report frequent misunderstandings and unresponsiveness. "I'm fine just saying it now: Google Assistant is worse now than it was soon after it started," writes Kevin Purdy. "Even if Google is turning its entire supertanker toward AI now, it's not clear why 'Start my morning routine,' 'Turn on the garage lights,' and 'Set an alarm for 8 pm' had to suffer."

Portable Document Format (PDF): While initially useful for cross-platform document sharing and preserving formatting, PDFs have become bloated and problematic. Copying text, especially from academic journals, is often garbled or impossible. "Apple, which had given the PDF a reprieve, has now killed its main selling point," writes John Timmer. "Because Apple has added OCR to the MacOS image display system, I can get more reliable results by screenshotting the PDF and then copying the text out of that. This is the true mark of its enshittification: I now wish the journals would just give me a giant PNG."

Televised Sports (specifically cycling and Formula 1): Streaming services have consolidated, leading to significantly increased costs for viewers. Previously affordable and comprehensive options have been replaced by expensive bundles across multiple platforms. "Formula 1 racing has largely gone behind paywalls, and viewership is down significantly over the last 15 years," writes Eric Berger. "Major US sports such as professional and college football had largely been exempt, but even that is now changing, with NFL games being shown on Peacock, Amazon Prime, and Netflix. None of this helps viewers. It enshittifies the experience for us in the name of corporate greed."

Google Search: AI overviews often bury relevant search results under lengthy, sometimes inaccurate AI-generated content. This makes finding specific information, especially primary source documents, more difficult. "Google, like many big tech companies, expects AI to revolutionize search and is seemingly intent on ignoring any criticism of that idea," writes Ashley Belanger.

Email AI Tools (e.g., Gemini in Gmail): Intrusive and difficult to disable, these tools offer questionable value due to their potential for factual inaccuracies. Users report being unable to fully opt-out. "Gmail won't take no for an answer," writes Dan Goodin. "It keeps asking me if I want to use Google's Gemini AI tool to summarize emails or draft responses. As the disclaimer at the bottom of the Gemini tool indicates, I can't count on the output being factual, so no, I definitely don't want it."

Windows: While many complaints about Windows 11 originated with Windows 10, the newer version continues the trend of unwanted features, forced updates, and telemetry data collection. Bugs and performance issues also plague the operating system. "... it sure is easy to resent Windows 11 these days, between the well-documented annoyances, the constant drumbeat of AI stuff (some of it gated to pricey new PCs), and a batch of weird bugs that mostly seem to be related to the under-the-hood overhauls in October's Windows 11 24H2 update," writes Andrew Cunningham. "That list includes broken updates for some users, inoperable scanners, and a few unplayable games. With every release, the list of things you need to do to get rid of and turn off the most annoying stuff gets a little longer."

Web Discourse: The rapid spread of memes, trends, and corporate jargon on social media has led to a homogenization of online communication, making it difficult to distinguish original content and creating a sense of constant noise. "[T]he enshittification of social media, particularly due to its speed and virality, has led to millions vying for their moment in the sun, and all I see is a constant glare that makes everything look indistinguishable," writes Jacob May. "No wonder some companies think AI is the future."
Cellphones

Mobile Ban In Schools Not Improving Grades or Behavior, Study Suggests (bbc.com) 94

Longtime Slashdot reader AmiMoJo shares a report from the BBC: Banning phones in schools is not linked to pupils getting higher grades or having better mental wellbeing, the first study of its kind suggests. Students' sleep, classroom behavior, exercise or how long they spend on their phones overall also seems to be no different for schools with phone bans and schools without, the academics found. But they did find that spending longer on smartphones and social media in general was linked with worse results for all of those measures.

The first study in the world to look at school phone rules alongside measures of pupil health and education feeds into a fierce debate that has played out in homes and schools in recent years. [...] The University of Birmingham's findings, peer-reviewed and published by the Lancet's journal for European health policy, compared 1,227 students and the rules their 30 different secondary schools had for smartphone use at break and lunchtimes. The schools were chosen from a sample of 1,341 mainstream state schools in England.

The paper says schools restricting smartphone use did not seem to be seeing their intended improvements on health, wellbeing and focus in lessons. However, the research did find a link between more time on phones and social media, and worse mental wellbeing and mental health, less physical activity, poorer sleep, lower grades and more disruptive classroom behavior. The study used the internationally recognized Warwick-Edinburgh Mental Wellbeing Scales to determine participants' wellbeing. It also looked at students' anxiety and depression levels.
Dr Victoria Goodyear, the study's lead author, told the BBC the findings were not "against" smartphone bans in schools, but "what we're suggesting is that those bans in isolation are not enough to tackle the negative impacts."

She said the "focus" now needed to be on reducing how much time students spent on their phones, adding: "We need to do more than just ban phones in schools."
AI

'AI Granny' Driving Scammers Up the Wall 82

Since November, British telecom O2 has deployed an AI chatbot masquerading as a 78-year-old grandmother to waste scammers' time. The bot, named Daisy, engages fraudsters by discussing knitting patterns, recipes, and asking about tea preferences while feigning computer illiteracy. The Guardian has an update this week: In tests over several weeks, Daisy has kept individual scammers occupied for up to 40 minutes, with one case showing her being passed between four different callers. An excerpt from the story: "When a third scammer tries to get her to download the Google Play Store, she replies: 'Dear, did you say pastry? I'm not really on the right page.' She then complains that her screen has gone blank, saying it has 'gone black like the night sky'."
Books

AI-Generated Slop Is Already In Your Public Library 20

An anonymous reader writes: Low quality books that appear to be AI generated are making their way into public libraries via their digital catalogs, forcing librarians who are already understaffed to either sort through a functionally infinite number of books to determine what is written by humans and what is generated by AI, or to spend taxpayer dollars to provide patrons with information they don't realize is AI-generated.

Public libraries primarily use two companies to manage and lend ebooks: Hoopla and OverDrive, the latter of which people may know from its borrowing app, Libby. Both companies have a variety of payment options for libraries, but generally libraries get access to the companies' catalog of books and pay for customers to be able to borrow that book, with different books having different licenses and prices. A key difference is that with OverDrive, librarians can pick and choose which books in OverDrive's catalog they want to give their customers the option of borrowing. With Hoopla, librarians have to opt into Hoopla's entire catalog, then pay for whatever their customers choose to borrow from that catalog. The only way librarians can limit what Hoopla books their customers can borrow is by setting a limit on the price of books. For example, a library can use Hoopla but make it so their customers can only borrow books that cost the library $5 per use.
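To make that limitation concrete, here is a hypothetical Python sketch (the $5-per-use cap is the only detail taken from the report; the titles and prices are invented) showing that a price cap filters on cost alone and says nothing about whether a title is AI-generated:

# Hypothetical sketch of a Hoopla-style price-cap filter.
# The $5-per-use cap is the example from the report; titles and prices are invented.
from dataclasses import dataclass

@dataclass
class Title:
    name: str
    cost_per_use: float  # what the library pays each time a patron borrows it

catalog = [
    Title("Human-written liver health guide", 4.99),
    Title("Suspected AI-generated cookbook", 2.99),
    Title("Premium audiobook", 7.50),
]

PRICE_CAP = 5.00  # the only lever the library has over the opt-in catalog

# The cap filters by cost alone, so a cheap AI-generated title passes
# just as easily as a human-written one.
borrowable = [t for t in catalog if t.cost_per_use <= PRICE_CAP]
for t in borrowable:
    print(f"Patrons can borrow: {t.name} (${t.cost_per_use:.2f}/use)")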

On one hand, Hoopla's gigantic catalog, which includes ebooks, audio books, and movies, is a selling point because it gives librarians access to more content at a lower price. On the other hand, making librarians buy into the entire catalog means that a customer looking for a book about how to diet for a healthier liver might end up borrowing Fatty Liver Diet Cookbook: 2000 Days of Simple and Flavorful Recipes for a Revitalized Liver. The book was authored by Magda Tangy, who has no online footprint, and who has an AI-generated profile picture on Amazon, where her books are also for sale. Note the earring that is only on one ear and seems slightly deformed. A spokesperson for deepfake detection company Reality Defender said that according to their platform, the headshot is 85 percent likely to be AI-generated. [...] It is impossible to say exactly how many AI-generated books are included in Hoopla's catalog, but books that appeared to be AI-generated were not hard to find for most of the search terms I tried on the platform.
"This type of low quality, AI generated content, is what we at 404 Media and others have come to call AI slop," writes Emanuel Maiberg. "Librarians, whose job it is in part to curate what books their community can access, have been dealing with similar problems in the publishing industry for years, and have a different name for it: vendor slurry."

"None of the librarians I talked to suggested the AI-generated content needed to be banned from Hoopla and libraries only because it is AI-generated. It might have its place, but it needs to be clearly labeled, and more importantly, provide borrowers with quality information."

Sarah Lamdan, deputy director of the American Library Association, told 404 Media: "Platforms like Hoopla should offer libraries the option to select or omit materials, including AI materials, in their collections. AI books should be well-identified in library catalogs, so it is clear to readers that the books were not written by human authors. If library visitors choose to read AI eBooks, they should do so with the knowledge that the books are AI-generated."
News

Chris Anderson Is Giving TED Away To Whoever Has the Best Idea for Its Future (wired.com) 41

Chris Anderson, who transformed TED from a small conference into a global platform for sharing ideas, announced today he's stepping down after 25 years at the helm. The nonprofit's leader is seeking new ownership through an unusual open call for proposals. Anderson told WIRED he wants potential buyers -- whether universities, philanthropic organizations, media companies or tech firms -- to demonstrate both vision and financial capacity.

The organization, which charges $12,500 for its flagship conference seats, maintains $25 million in cash reserves and reports a $100 million break-even balance sheet. The future owner must commit to keeping the conference running and maintaining TED's practice of sharing talks for free.
Crime

Senator Hawley Proposes Jail Time For People Who Download DeepSeek 226

Senator Josh Hawley has introduced a bill that would criminalize the import, export, and collaboration on AI technology with China. What this means is that "someone who knowingly downloads a Chinese developed AI model like the now immensely popular DeepSeek could face up to 20 years in jail, a million dollar fine, or both, should such a law pass," reports 404 Media. From the report: Hawley introduced the legislation, titled the Decoupling America's Artificial Intelligence Capabilities from China Act, on Wednesday of last week. "Every dollar and gig of data that flows into Chinese AI are dollars and data that will ultimately be used against the United States," Senator Hawley said in a statement. "America cannot afford to empower our greatest adversary at the expense of our own strength. Ensuring American economic superiority means cutting China off from American ingenuity and halting the subsidization of CCP innovation."

Hawley's statement explicitly says that he introduced the legislation because of the release of DeepSeek, an advanced AI model that's competitive with its American counterparts, and which its developers claimed was made for a fraction of the cost and without access to as many or as advanced chips, though these claims are unverified. Hawley's statement called DeepSeek "a data-harvesting, low-cost AI model that sparked international concern and sent American technology stocks plummeting." Hawley's statement says the goal of the bill is to "prohibit the import from or export to China of artificial intelligence technology," "prohibit American companies from conducting AI research in China or in cooperation with Chinese companies," and "prohibit U.S. companies from investing money in Chinese AI development."
