AI

Even Linus Torvalds Is Vibe Coding Now 54

Linus Torvalds has started experimenting with vibe coding, using Google's Antigravity AI to generate parts of a small hobby project called AudioNoise. "In doing so, he has become the highest-profile programmer yet to adopt this rapidly spreading, and often mocked, AI-driven programming," writes ZDNet's Steven Vaughan-Nichols. From the report: [I]t's a trivial program called AudioNoise -- a recent side project focused on digital audio effects and signal processing. He started it after building physical guitar pedals, GuitarPedal, to learn about audio circuits. He now gives them as gifts to kernel developers and, recently, to Bill Gates.

While Torvalds hand-coded the C components, he turned to Antigravity for a Python-based audio sample visualizer. He openly acknowledges that he leans on online snippets when working in languages he knows less well. Who doesn't? [...] In the project's README file, Torvalds wrote that "the Python visualizer tool has been basically written by vibe-coding," describing how he "cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualiser." The remark underlines that the AI-generated code met his expectations well enough that he did not feel the need to manually re-implement it.
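For a sense of what such a tool involves, here is a minimal waveform-plotting sketch in Python -- purely illustrative, not Torvalds' code, and it assumes a 16-bit PCM WAV file named "sample.wav":

    # Minimal audio-sample visualizer sketch (illustrative only, not the
    # AudioNoise tool). Assumes a 16-bit PCM WAV file named "sample.wav".
    import wave
    import matplotlib.pyplot as plt
    import numpy as np

    with wave.open("sample.wav", "rb") as wav:
        n_channels = wav.getnchannels()
        rate = wav.getframerate()
        frames = wav.readframes(wav.getnframes())

    # Interpret the raw bytes as signed 16-bit samples; keep channel 0 if stereo.
    samples = np.frombuffer(frames, dtype=np.int16)[::n_channels]
    t = np.arange(len(samples)) / rate  # time axis in seconds

    plt.plot(t, samples, linewidth=0.5)
    plt.xlabel("Time (s)")
    plt.ylabel("Amplitude")
    plt.title("Waveform")
    plt.show()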
Further reading: Linus Torvalds Says Vibe Coding is Fine For Getting Started, 'Horrible Idea' For Maintenance
AI

Should AI Agents Be Classified As People? (hbr.org) 80

New submitter sziring writes: Harvard Business Review's IdeaCast podcast interviewed McKinsey CEO Bob Sternfels, who classified AI agents as people. "I often get asked, 'How big is McKinsey? How many people do you employ?' I now update this almost every month, but my latest answer to you would be 60,000, but it's 40,000 humans and 20,000 agents."

This statement looks to be the opening shot in how we as a society will need to classify AI agents and whether they will replace human jobs. Did those agents take roles that previously would have been filled by a full-time human? By classifying them as people, did the company break protocols or laws by not interviewing candidates for those jobs, not providing benefits or breaks, and so on?

Yes, it all sounds silly, but words matter. What happens when a job report comes out claiming we just added 20,000 jobs in Q1? That line of thinking leads directly to Bill Gates' point that agents taking on human roles might need to be taxed.

The Internet

How Markdown Took Over the World 60

22 years ago, developer and columnist John Gruber released Markdown, a simple plain-text formatting system designed to spare writers the headache of memorizing arcane HTML tags. As technologist Anil Dash writes in a long piece, Markdown has since embedded itself into nearly every corner of modern computing.

Aaron Swartz, then seventeen years old, served as the beta tester before its quiet March 2004 debut. Google eventually added Markdown support to Docs after more than a decade of user requests; Microsoft put it in Notepad; Slack, WhatsApp, Discord, and Apple Notes all support it now. Dash writes: The part about not doing this stuff solely for money matters, because even the most advanced LLM systems today, what the big AI companies call their "frontier" models, require complex orchestration that's carefully scripted by people who've tuned their prompts for these systems through countless rounds of trial and error. They've iterated and tested and watched for the results as these systems hallucinated or failed or ran amok, chewing up countless resources along the way. And sometimes, they generated genuinely astonishing outputs, things that are truly amazing to consider that modern technology can achieve. The rate of progress and evolution, even factoring in the mind-boggling amounts of investment that are going into these systems, is rivaled only by the initial development of the personal computer or the Internet, or the early space race.

And all of it -- all of it -- is controlled through Markdown files. When you see the brilliant work shown off from somebody who's bragging about what they made ChatGPT generate for them, or someone is understandably proud about the code that they got Claude to create, all of the most advanced work has been prompted in Markdown. Though where the logic of Markdown was originally a very simple version of "use human language to tell the machine what to do", the implications have gotten far more dire when they use a format designed to help express "make this **bold**" to tell the computer itself "make this imaginary girlfriend more compliant".
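For anyone who has never looked under the hood, Markdown's whole job is a small, readable mapping from plain text to HTML. A two-line sketch using the third-party Python "markdown" package (one of many implementations of Gruber's syntax, installed with pip install markdown):

    # Markdown-to-HTML with the third-party "markdown" package -- one of
    # many implementations of Gruber's original syntax.
    import markdown

    print(markdown.markdown("make this **bold**"))
    # prints: <p>make this <strong>bold</strong></p>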
Google

Apple Partners With Google on Siri Upgrade, Declares Gemini 'Most Capable Foundation' (theverge.com) 26

Apple has struck a multi-year partnership with Google to power a more capable version of Siri using Gemini AI models, ending months of speculation about which company would help the iPhone maker catch up in the generative AI race. In a statement, Apple said it had determined after "careful evaluation" that "Google's technology provides the most capable foundation for Apple Foundation Models."

The deal comes after Apple delayed its planned Siri AI upgrade last March, acknowledging that the project was taking "longer than we thought." Bloomberg had reported in August that Apple was in early talks with Google about using a custom Gemini model. Apple also explored potential partnerships with OpenAI, Anthropic and Perplexity, and CEO Tim Cook has said the company plans to integrate with more AI companies over time. The upgraded Siri is expected to perform actions on users' behalf and understand personal context.
Bug

How Long Does It Take to Fix Linux Kernel Bugs? (itsfoss.com) 36

An anonymous reader shared this report from It's FOSS: Jenny Guanni Qu, a researcher at [VC fund] Pebblebed, analyzed 125,183 bugs from 20 years of Linux kernel development history (on Git). The findings show that the average bug takes 2.1 years to find. [Though the median is 0.7 years, with the average possibly skewed by "outliers" discovered after years of hiding.] The longest-lived bug, a buffer overflow in networking code, went unnoticed for 20.7 years! [But 86.5% of bugs are found within five years.]

The research was carried out by relying on the Fixes: tag that is used in kernel development. Basically, when a commit fixes a bug, it includes a tag pointing to the commit that introduced the bug. Jenny wrote a tool that extracted these tags from the kernel's git history going back to 2005. The tool finds all fixing commits, extracts the referenced commit hash, pulls dates from both commits, and calculates the time frame. As for the dataset, it includes over 125k records from Linux 6.19-rc3, covering bugs from April 2005 to January 2026. Out of these, 119,449 were unique fixing commits from 9,159 different authors, and only 158 bugs had CVE IDs assigned.
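The approach is straightforward to approximate with plain git. Here is a rough Python sketch of the idea -- not Qu's actual tool, and assuming it runs inside a kernel checkout (one git show per fix, so slow but simple):

    import re
    import subprocess

    # Every commit whose message carries a Fixes: tag, NUL-separated so commit
    # bodies (which contain newlines) can be split apart safely.
    log = subprocess.run(
        ["git", "log", "--grep=^Fixes:", "--format=%H %at %B%x00"],
        capture_output=True, text=True, check=True,
    ).stdout

    FIXES = re.compile(r"^Fixes:\s+([0-9a-f]{8,40})", re.MULTILINE)

    def commit_time(sha):
        """Author timestamp of a commit, or None if the hash can't be resolved."""
        out = subprocess.run(["git", "show", "-s", "--format=%at", sha],
                             capture_output=True, text=True)
        if out.returncode != 0 or not out.stdout.strip():
            return None
        return int(out.stdout.split()[0])

    lifetimes = []
    for entry in log.split("\x00"):
        if not entry.strip():
            continue
        sha, fixed_at, body = entry.split(maxsplit=2)
        for buggy_sha in FIXES.findall(body):
            introduced_at = commit_time(buggy_sha)
            if introduced_at is not None:
                # Seconds between introducing and fixing commits, in years.
                lifetimes.append((int(fixed_at) - introduced_at) / 31_557_600)

    print(f"{len(lifetimes)} bug fixes, mean lifetime "
          f"{sum(lifetimes) / len(lifetimes):.1f} years")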

It took six hours to assemble the dataset, according to the blog post, which concludes that the percentage of bugs found within one year has improved dramatically, from 0% in 2010 to 69% by 2022. The blog post says this can likely be attributed to:
  • The Syzkaller fuzzer (released in 2015)
  • Dynamic memory error detectors like KASAN, KMSAN, KCSAN sanitizers
  • Better static analysis
  • More contributors reviewing code

But "We're simultaneously catching new bugs faster AND slowly working through ~5,400 ancient bugs that have been hiding for over 5 years."

They've also developed an AI model called VulnBERT that predicts whether a commit introduces a vulnerability, claiming that of all actual bug-introducing commits, it catches 92.2%. "The goal isn't to replace human reviewers but to point them at the 10% of commits most likely to be problematic, so they can focus attention where it matters..."
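The post doesn't spell out VulnBERT's internals, but the triage idea itself is simple: score every commit, then surface only the riskiest decile. A hand-wavy Python sketch with a placeholder scoring function (the heuristics below are stand-ins, not details from the research):

    def risk_score(diff: str) -> float:
        """Placeholder: a real system would run a fine-tuned BERT-style
        classifier over the diff and return P(commit introduces a bug)."""
        risky_markers = ("memcpy", "strcpy", "kmalloc", "unchecked")
        return sum(marker in diff for marker in risky_markers) / len(risky_markers)

    def triage(commits: dict, fraction: float = 0.10) -> list:
        """Return the top `fraction` of commit hashes by predicted risk --
        the slice of commits human reviewers should look at first."""
        ranked = sorted(commits, key=lambda sha: risk_score(commits[sha]),
                        reverse=True)
        return ranked[: max(1, int(len(ranked) * fraction))]

    # Usage: map commit hash -> diff text, get back the riskiest hashes.
    print(triage({"abc123": "memcpy(dst, src, user_len);",
                  "def456": "update docs"}))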


AI

Amazon's AI Tool Listed Products from Small Businesses Without Their Knowledge (msn.com) 40

Bloomberg reports on Amazon listings "automatically generated by an experimental AI tool" for stores that don't sell on Amazon.

Bloomberg notes that the listings "didn't always correspond to the correct product", leaving the stores to handle the complaints from angry customers: Between the Christmas and New Year holidays, small shop owners and artisans who had found their products listed on Amazon took to social media to compare notes and warn their peers... In interviews, six small shop owners said they found themselves unwittingly selling their products on Amazon's digital marketplace. Some, especially those who deliberately avoided Amazon, said they should have been asked for their consent. Others said it was ironic that Amazon was scouring the web for products with AI tools despite suing Perplexity AI Inc. for using similar technology to buy products on Amazon... Some retailers say the listings displayed the wrong product image or mistakenly showed wholesale pricing. Users of Shopify Inc.'s e-commerce tools said the system flagged Amazon's automated purchases as potentially fraudulent...

In a statement, Amazon spokesperson Maxine Tagay said sellers are free to opt out. Two Amazon initiatives — Shop Direct, which links out to make purchases on other retailers' sites, and Buy For Me, which duplicates listings and handles purchases without leaving Amazon — "are programs we're testing that help customers discover brands and products not currently sold in Amazon's store, while helping businesses reach new customers and drive incremental sales," she said in an emailed statement. "We have received positive feedback on these programs." Tagay didn't say why the sellers were enrolled without notifying them. She added that the Buy For Me selection features more than 500,000 items, up from about 65,000 at launch in April.

The article includes quotes from the owners of affected businesses.
  • The owner of a one-person company complained: "If suddenly there were 100 orders, I couldn't necessarily manage. When someone takes your proprietary, copyrighted works, I should be asked about that. This is my business. It's not their business."
  • One business owner said "I just don't want my products on there... It's like if Airbnb showed up and tried to put your house on the market without your permission."
  • One business owner complained "When things started to go wrong, there was no system set up by Amazon to resolve it. It's just 'We set this up for you, you should be grateful, you fix it.'" One Amazon representative even suggested they try opening a $39-a-month Amazon seller account.

AI

Nvidia CEO Jensen Huang Says AI Doomerism Has 'Done a Lot of Damage' (businessinsider.com) 105

Nvidia CEO Jensen Huang "said one of his biggest takeaways from 2025 was 'the battle of narratives' over the future of AI development between those who see doom on the horizon and the optimists," reports Business Insider.

Huang did acknowledge that "it's too simplistic" to entirely dismiss either side (on a recent episode of the "No Priors" podcast). But "I think we've done a lot of damage with very well-respected people who have painted a doomer narrative, end of the world narrative, science fiction narrative." "It's not helpful to people. It's not helpful to the industry. It's not helpful to society. It's not helpful to the governments..." [H]e cited concerns about "regulatory capture," arguing that no company should approach governments to request more regulation. "Their intentions are clearly deeply conflicted, and their intentions are clearly not completely in the best interest of society," he said. "I mean, they're obviously CEOs, they're obviously companies, and obviously they're advocating for themselves..."

"When 90% of the messaging is all around the end of the world and the pessimism, and I think we're scaring people from making the investments in AI that makes it safer, more functional, more productive, and more useful to society," he said.

Elsewhere in the podcast, Huang argues that the AI bubble is a myth. Business Insider adds that "a spokesperson for Nvidia declined to elaborate on Huang's remarks."

Thanks to Slashdot reader joshuark for sharing the article.
AI

Walmart Announces Drone Delivery, Integration with Google's AI Chatbot Gemini (nerds.xyz) 20

Alphabet-owned Wing "is expanding its drone delivery service to an additional 150 Walmart stores across the U.S.," reports Axios: [T]he future is already here if you live in Dallas — where some Walmart customers order delivery by Wing three times a week. By the end of 2026, some 40 million Americans, or about 12 percent of the U.S. population, will be able to take advantage of the convenience, the companies claim... Once the items are picked and packed in a small cardboard basket, they are loaded onto a drone inside a fenced area in the Walmart parking lot. Drones fly autonomously to the designated address, with human pilots monitoring each flight from a central operations hub....

For now, Wing deliveries are free. "The goal is to expose folks to the wonders of drone delivery," explains Wing's chief business officer, Heather Rivera... Over time, she said Wing expects delivery fees to be comparable to other delivery options, but faster and more convenient.
Service began recently in Atlanta and Charlotte, and it's coming soon to Los Angeles, Houston, Cincinnati, St. Louis, Miami and other major U.S. cities to be announced later, according to the article. "By 2027, Walmart and Wing say they'll have a network of more than 270 drone delivery locations nationwide."

Walmart also announced a new deal today with Google's Gemini, allowing customers to purchase Walmart products from within Gemini. (Walmart announced a similar deal for ChatGPT in October.)

Slashdot reader BrianFagioli calls this "a defensive angle that Walmart does not quite say out loud." As AI models answer more questions directly, retailers risk losing customers before they ever hit a website. If Gemini recommends a product from someone else first, Walmart loses the sale before it starts. By planting itself inside the AI, Walmart keeps a seat at the table while the internet shifts under everyone's feet.

Google clearly benefits too. Gemini gets a more functional purpose than just telling you how to boil pasta or summarize recipes. Now it can carry someone from the moment they wonder what they need to the moment the order is placed. That makes the assistant stickier and a bit more practical than generic chat. Walmart's incoming CEO John Furner says the company wants to shape this new pattern instead of being dragged into it later. Sundar Pichai calls Walmart an early partner in what he sees as a broader wave of agent style commerce, where AI starts doing the errands people used to handle themselves.

The article concludes "This partnership serves as a snapshot of where retail seems to be heading..."
Social Networks

Elon Musk: X's New Algorithm Will Be Made Open Source in Seven Days (msn.com) 90

"We will make the new ð algorithm...open source in 7 days," Elon Musk posted Saturday on X.com. Musk says this is "including all code used to determine what organic and advertising posts are recommended to users," and "This will be repeated every 4 weeks, with comprehensive developer notes, to help you understand what changed."

Some context from Engadget: Musk has been making promises of open-sourcing the algorithm since his takeover of Twitter, and in 2023 published the code for the site's "For You" feed on GitHub. But the code wasn't all that revealing, leaving out key details, according to analyses at the time. And it hasn't been kept up to date.
Bloomberg also reported on Saturday's announcement: The billionaire didn't say why X was making its algorithm open source. He and the company have clashed several times with regulators over content being shown to users.

Some X users had previously complained that they were receiving fewer posts on the social media platform from people they follow. In October, Musk confirmed in a post on X that the company had found a "significant bug" in the platform's "For You" algorithm and pledged a fix. The company has also been working to incorporate more artificial intelligence into its recommendation algorithm for X, using Grok, Musk's artificial intelligence chatbot...

In September, Musk wrote that the goal was for X's recommendation engine to "be purely AI" and that the company would share its open source algorithm about every two weeks. "To the degree that people are seeing improvements in their feed, it is not due to the actions of specific individuals changing heuristics, but rather increasing use of Grok and other AI tools," Musk wrote in October. The company was working to have all of the more than 100 million daily posts published to X evaluated by Grok, which would then offer individual users the posts most likely to interest them, Musk wrote. "This will profoundly improve the quality of your feed." He added that the company was planning to roll out the new features by November.

Social Networks

AI-Powered Social Media App Hopes To Build More Purposeful Lives (msn.com) 32

A founder of Twitter and a founder of Pinterest are now working on "social media for people who hate social media," writes a Washington Post columnist.

"When I heard that this platform would harness AI to help us live more meaningful lives, I wanted to know more..." Their bid for redemption is West Co. — the Workshop for Emotional and Spiritual Technology Corporation — and the platform they're testing is called Tangle, a "purpose discovery tool" that uses AI to help users define their life purposes, then encourages them to set intentions toward achieving those purposes, reminds them periodically and builds a community of supporters to encourage steps toward meeting those intentions. "A lot of people, myself included, have been on autopilot," Stone said. "If all goes well, we'll introduce a lot of people to the concept of turning off autopilot."

But will all go well? The entrepreneurs have been at it for two years, and they've scrapped three iterations before even testing them. They still don't have a revenue model. "This is a really hard thing to do," Stone admitted. "If we were a traditional start-up, we would have probably been folded by now." But the two men, with a combined net worth of at least hundreds of millions, and possibly billions, had the luxury of self-funding for a year, and now they have $29 million in seed funding led by Spark Capital...

[T]he project revolves around training existing AI models in "what good intentions and helpful purposes look like," explained Long Cheng, the founding designer. When you join Tangle, which is invitation-only until this spring at the earliest, the AI peruses your calendar, examines your photos, asks you questions and then produces "threads," or categories that define your life purpose. You're free to accept, reject or change the suggestions. It then encourages you to make "intentions" toward achieving your threads, and to add "reflections" when you experience something meaningful in your life. Users then receive encouragement from friends, or "supporters." A few of the "threads" on Tangle are about personal satisfaction (traveler, connoisseur), but the vast majority involve causes greater than self: family (partner, parent, sibling), community (caregiver, connector, guardian), service (volunteer, advocate, healer) and spirituality (seeker, believer). Even the work-related threads (mentor, leader) suggest a higher purpose.

The column includes this caveat. "I have no idea whether they will succeed. But as a columnist writing about how to keep our humanity in the 21st century, I believe it's important to focus on people who are at least trying..."

"Quite possibly, West Co. and the various other enterprises trying to nudge technology in a more humane direction will find that it doesn't work socially or economically — they don't yet have a viable product, after all — but it would be a noble failure."
AI

AI Fails at Most Remote Work, Researchers Find (msn.com) 39

A new study "compared how well top AI systems and human workers did at hundreds of real work assignments," reports the Washington Post.

They add that at least one example "illustrates a disconnect three years after the release of ChatGPT that has implications for the whole economy." AI can accomplish many impressive tasks involving computer code, documents or images. That has prompted predictions that human work of many kinds could soon be done by computers alone. Bentley University and Gallup found in a survey [PDF] last year that about three-quarters of Americans expect AI to reduce the number of U.S. jobs over the next decade. But economic data shows the technology largely has not replaced workers.

To understand what work AI can do on its own today, researchers collected hundreds of examples of projects posted on freelancing platforms that humans had been paid to complete. They included tasks such as making 3D product animations, transcribing music, coding web video games and formatting research papers for publication. The research team then gave each task to AI systems such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. The best-performing AI system successfully completed only 2.5 percent of the projects, according to the research team from Scale AI, a start-up that provides data to AI developers, and the Center for AI Safety, a nonprofit that works to understand risks from AI. "Current models are not close to being able to automate real jobs in the economy," said Jason Hausenloy, one of the researchers on the Remote Labor Index study...

The results, which show how AI systems fall short, challenge predictions that the technology is poised to soon replace large portions of the workforce... The AI systems failed on nearly half of the Remote Labor Index projects by producing poor-quality work, and they left more than a third incomplete. Nearly 1 in 5 had basic technical problems such as producing corrupt files, the researchers found.

One test involved creating an interactive dashboard for data from the World Happiness Report, according to the article. "At first glance, the AI results look adequate. But closer examination reveals errors, such as countries inexplicably missing data, overlapping text and legends that use the wrong colors — or no colors at all."

The researchers say AI systems are hobbled by a lack of memory, and are also weak on "visual" understanding.
AI

Meta Announces New Smartglasses Features, Delays International Rollout Claiming 'Unprecedented' Demand (cnbc.com) 30

This week Meta announced several new features for "Meta Ray-Ban Display" smartglasses:

- A new teleprompter feature for the smart glasses (arriving in a phased rollout)

- The ability to send messages on WhatsApp and Messenger by writing with your finger on any surface. (Available for those who sign up for an "early access" program).

- "Pedestrian navigation" for 32 cities. ("The 28 cities we launched Meta Ray-Ban Display with, plus Denver, Las Vegas, Portland, and Salt Lake City," and with more cities coming soon.)


But they also warned Meta Ray-Ban Display "is a first-of-its-kind product with extremely limited inventory," saying they're delaying international expansion of sales due to inventory constraints — and also due to "unprecedented" demand in the U.S. CNBC reports: "Since launching last fall, we've seen an overwhelming amount of interest, and as a result, product waitlists now extend well into 2026," Meta wrote in a blog post. Due to "limited" inventory, the company said it will pause plans to launch in the U.K., France, Italy and Canada early this year and concentrate on U.S. orders as it reassesses international availability...

Meta is one of several technology companies moving into the smart glasses market. Alphabet announced a $150 million partnership with Warby Parker in May, and ChatGPT maker OpenAI is reportedly working on AI glasses with former Apple design chief Jony Ive.

Government

More US States Are Preparing Age-Verification Laws for App Stores (politico.com) 57

Yes, a federal judge blocked Texas' attempt at an app store age-verification law. But this year Silicon Valley giants including Google and Apple "are expected to fight hard against similar legislation," reports Politico, "because of the vast legal liability it imposes on app stores and developers." In Texas, Utah and Louisiana, parent advocates have linked up with conservative "pro-family" groups to pass laws forcing mobile app stores to verify user ages and require parental sign-off. If those rules hold up in court, companies like Google and Apple, which run the two largest app stores, would face massive legal liability... California has taken a different approach, passing its own age-verification law last year that puts liability on device manufacturers instead of app stores. That model has been better received by the tech lobby, and is now competing with the app-based approach in states like Ohio. In Washington D.C., a GOP-led bill modeled on Texas' law is wending its way through Capitol Hill. And more states are expected to join the fray, including Michigan and South Carolina.

Joel Thayer, president of the conservative Digital Progress Institute and a key architect of the Texas law, said states are only accelerating their push. He explicitly linked the age-verification debate to AI, arguing it's "terrifying" to think companies could build new AI products by scraping data from children's apps. Thayer also pointed to the Trump administration's recent executive order aimed at curbing state regulation of AI, saying it has galvanized lawmakers. "We're gonna see more states pushing this stuff," Thayer said. "What really put fuel in the fire is the AI moratorium for states. I think states have been reinvigorated to fight back on this."

He told Politico that the issue will likely be decided by America's Supreme Court, which in June upheld Texas legislation requiring age verification for online content. Thayer said states need a ruling from America's highest court to "triangulate exactly what the eff is going on with the First Amendment in the tech world.

"They're going to have to resolve the question at some point."
AI

Meta Signs Deals With Three Nuclear Companies For 6+ GW of Power 28

Meta has signed long-term nuclear power deals totaling more than 6 gigawatts to fuel its data centers: "one from a startup, one from a smaller energy company, and one from a larger company that already operates several nuclear reactors in the U.S.," reports TechCrunch. From the report: Oklo and TerraPower, two companies developing small modular reactors (SMR), each signed agreements with Meta to build multiple reactors, while Vistra is selling capacity from its existing power plants. [...] The deals are the result of a request for proposals that Meta issued in December 2024, in which Meta sought partners that could add between 1 and 4 gigawatts of generating capacity by the early 2030s. Much of the new power will flow through the PJM Interconnection, a grid which covers 13 Mid-Atlantic and Midwestern states and has become saturated with data centers.

The 20-year agreement with Vistra will have the most immediate impact on Meta's energy needs. The tech company will buy a total of 2.1 gigawatts from two existing nuclear power plants, Perry and Davis-Besse in Ohio. As part of the deal, Vistra will also add capacity to those power plants and to its Beaver Valley power plant in Pennsylvania. Together, the upgrades will generate an additional 433 MW and are scheduled to come online in the early 2030s.

Meta is also buying 1.2 gigawatts from young provider Oklo. Under its deal with Meta, Oklo is hoping to start supplying power to the grid as early as 2030. The SMR company went public via SPAC in 2023, and while Oklo has landed a large deal with data center operator Switch, it has struggled to get its reactor design approved by the Nuclear Regulatory Commission. If Oklo can deliver on its timeline, the new reactors would be built in Pike County, Ohio. The startup's Aurora Powerhouse reactors each produce 75 megawatts of electricity, so it will need to build more than a dozen (1.2 GW ÷ 75 MW = 16) to fulfill Meta's order. TerraPower is a startup co-founded by Bill Gates, and it is aiming to start sending electricity to Meta as early as 2032.
AI

AI Models Are Starting To Learn By Asking Themselves Questions (wired.com) 82

An anonymous reader quotes a report from Wired: [P]erhaps AI can, in fact, learn in a more human way -- by figuring out interesting questions to ask itself and attempting to find the right answer. A project from Tsinghua University, the Beijing Institute for General Artificial Intelligence (BIGAI), and Pennsylvania State University shows that AI can learn to reason in this way by playing with computer code. The researchers devised a system called Absolute Zero Reasoner (AZR) that first uses a large language model to generate challenging but solvable Python coding problems. It then uses the same model to solve those problems before checking its work by trying to run the code. And finally, the AZR system uses successes and failures as a signal to refine the original model, augmenting its ability to both pose better problems and solve them.
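As a caricature of that loop, here is a runnable Python toy -- not the AZR code; the "model" is a crude stub -- showing the propose/solve/verify-by-execution cycle that drives the reinforcement signal:

    # Toy version of the Absolute Zero Reasoner loop (illustrative only):
    # one "model" both proposes tasks and solves them, and the reward
    # comes from actually executing the code.
    import random

    class TinyModel:
        """Stand-in for the LLM: proposes arithmetic tasks, answers noisily."""
        def __init__(self):
            self.skill = 0.5  # current probability of solving correctly

        def propose_task(self):
            # Proposer role: invent a small, checkable Python expression.
            a, b = random.randint(1, 9), random.randint(1, 9)
            return f"{a} * {b} + {a}"

        def solve(self, task):
            # Solver role: sometimes right, sometimes wrong, per skill level.
            truth = eval(task)
            return truth if random.random() < self.skill else truth + 1

        def update(self, reward):
            # Crude stand-in for the RL step: reinforce rewarded behavior.
            self.skill = min(1.0, max(0.0, self.skill + 0.01 * (reward - 0.5)))

    model = TinyModel()
    for _ in range(1000):
        task = model.propose_task()
        answer = model.solve(task)
        reward = 1.0 if answer == eval(task) else 0.0  # verify by execution
        model.update(reward)

    print(f"solve rate after training ~ {model.skill:.2f}")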

The team found that their approach significantly improved the coding and reasoning skills of both 7 billion and 14 billion parameter versions of the open source language model Qwen. Impressively, the model even outperformed some models that had received human-curated data. [...] A key challenge is that for now the system only works on problems that can easily be checked, like those that involve math or coding. As the project progresses, it might be possible to use it on agentic AI tasks like browsing the web or doing office chores. This might involve having the AI model try to judge whether an agent's actions are correct. One fascinating possibility of an approach like Absolute Zero is that it could, in theory, allow models to go beyond human teaching. "Once we have that it's kind of a way to reach superintelligence," [said Zilong Zheng, a researcher at BIGAI who worked on the project].

AI

AI Is Intensifying a 'Collapse' of Trust Online, Experts Say (nbcnews.com) 60

Experts interviewed by NBC News warn that the rapid spread of AI-generated images and videos is accelerating an online trust breakdown, especially during fast-moving news events where context is scarce. From the report: President Donald Trump's Venezuela operation almost immediately spurred the spread of AI-generated images, old videos and altered photos across social media. On Wednesday, after an Immigration and Customs Enforcement officer fatally shot a woman in her car, many online circulated a fake, most likely AI-edited image of the scene that appears to be based on real video. Others used AI in attempts to digitally remove the mask of the ICE officer who shot her.

The confusion around AI content comes as many social media platforms, which pay creators for engagement, have given users incentives to recycle old photos and videos to ramp up emotion around viral news moments. The amalgam of misinformation, experts say, is creating a heightened erosion of trust online -- especially when it mixes with authentic evidence. "As we start to worry about AI, it will likely, at least in the short term, undermine our trust default -- that is, that we believe communication until we have some reason to disbelieve," said Jeff Hancock, founding director of the Stanford Social Media Lab. "That's going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces."

Though AI is the latest technology to spark concern about surging misinformation, similar trust breakdowns have cycled through history, from election misinformation in 2016 to the mass production of propaganda after the printing press was invented in the 1400s. Before AI, there was Photoshop, and before Photoshop, there were analog image manipulation techniques. Fast-moving news events are where manipulated media have the biggest effect, because they fill in for the broad lack of information, Hancock said.
"In terms of just looking at an image or a video, it will essentially become impossible to detect if it's fake. I think that we're getting close to that point, if we're not already there," said Hancock. "The old sort of AI literacy ideas of 'let's just look at the number of fingers' and things like that are likely to go away."

Renee Hobbs, a professor of communication studies at the University of Rhode Island, added: "If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response. It's a coping mechanism. And then when people stop caring about whether something's true or not, then the danger is not just deception, but actually it's worse than that. It's the whole collapse of even being motivated to seek truth."
Microsoft

Microsoft May Soon Allow IT Admins To Uninstall Copilot (bleepingcomputer.com) 41

Microsoft is testing a new Windows policy that lets IT administrators uninstall Microsoft Copilot from managed devices. The change rolls out via Windows Insider builds and works through standard management tools like Intune and SCCM. BleepingComputer reports: The new policy applies to devices where both the Microsoft 365 Copilot and Microsoft Copilot apps are installed, the Microsoft Copilot app was not installed by the user, and the Microsoft Copilot app has not been launched in the last 28 days. "Admins can now uninstall Microsoft Copilot for a user in a targeted way by enabling a new policy titled RemoveMicrosoftCopilotApp," the Windows Insider team said.

"If this policy is enabled, the Microsoft Copilot app will be uninstalled, once. Users can still re-install if they choose to. This policy is available on Enterprise, Pro, and EDU SKUs. To enable this policy, open the Group policy editor and go to: User Configuration -> Administrative Templates -> Windows AI -> Remove Microsoft Copilot App."

The Internet

Google: Don't Make 'Bite-Sized' Content For LLMs If You Care About Search Rank (arstechnica.com) 22

An anonymous reader quotes a report from Ars Technica: Search engine optimization, or SEO, is a big business. While some SEO practices are useful, much of the day-to-day SEO wisdom you see online amounts to superstition. An increasingly popular approach geared toward LLMs called "content chunking" may fall into that category. In the latest installment of Google's Search Off the Record podcast, John Mueller and Danny Sullivan say that breaking content down into bite-sized chunks for LLMs like Gemini is a bad idea.

You've probably seen websites engaging in content chunking and scratched your head, and for good reason -- this content isn't made for you. The idea is that if you split information into smaller paragraphs and sections, it is more likely to be ingested and cited by gen AI bots like Gemini. So you end up with short paragraphs, sometimes with just one or two sentences, and lots of subheads formatted like questions one might ask a chatbot.

According to Google's Danny Sullivan, this is a misconception, and Google doesn't use such signals to improve ranking. "One of the things I keep seeing over and over in some of the advice and guidance and people are trying to figure out what do we do with the LLMs or whatever, is that turn your content into bite-sized chunks, because LLMs like things that are really bite size, right?" said Sullivan. "So... we don't want you to do that."

The conversation, which begins around the podcast's 18-minute mark, goes on to illustrate the folly of jumping on the latest SEO trend. Sullivan notes that he has consulted engineers at Google before making this proclamation. Apparently, the best way to rank on Google continues to be creating content for humans rather than machines. That ensures long-term search exposure, because the behavior of human beings -- what they choose to click on -- is an important signal for Google.

Technology

CES Worst In Show Awards Call Out the Tech Making Things Worse (ifixit.com) 41

Longtime Slashdot reader chicksdaddy writes: CES, the Consumer Electronics Show, isn't just about shiny new gadgets. As AP reports, this year brought back the fifth annual Worst in Show anti-awards, calling out the most harmful, wasteful, invasive, and unfixable tech at the Las Vegas show. The coalition behind the awards -- including Repair.org, iFixit, EFF, PIRG, Secure Repairs, and others -- put the spotlight on products that miss the point of innovation and make life worse for users.

2026 Worst in Show winners include:

Overall (and Repairability): Samsung's AI-packed Family Hub Fridge -- over-engineered, hard to fix, and trying to do everything but keep food cold.
Privacy: Amazon Ring AI -- expanding surveillance with features like facial recognition and mobile towers.
Security: Merach UltraTread treadmill -- an AI fitness coach that also hoovers up sensitive data with weak security guarantees, including a privacy policy that declares the company "cannot guarantee the security of your personal information" (!!).
Environmental Impact: Lollipop Star -- a single-use, music-playing electronic lollipop that epitomizes needless e-waste.
Enshittification: Bosch eBike Flow App -- pushing lock-in and digital restrictions that make gear worse over time.
"Who Asked For This?": Bosch Personal AI Barista -- a voice-assistant coffee maker that nobody really wanted.
People's Choice: Lepro Ami AI Companion -- an overhyped "soulmate" cam that creeps more than it comforts.

The message? Not all tech is progress. Some products add needless complexity, threaten privacy, or throw sustainability out the window -- and the industry's watchdogs are calling them out.

IT

Torvalds Tells Kernel Devs To Stop Debating AI Slop - Bad Actors Won't Follow the Rules Anyway (theregister.com) 53

Linus Torvalds has weighed in on an ongoing debate within the Linux kernel development community about whether documentation should explicitly address AI-generated code contributions, and his position is characteristically blunt: stop making it an issue. The Linux creator was responding to Oracle-affiliated kernel developer Lorenzo Stoakes, who had argued that treating LLMs as "just another tool" ignores the threat they pose to kernel quality. "Thinking LLMs are 'just another tool' is to say effectively that the kernel is immune from this," Stoakes wrote.

Torvalds disagreed sharply. "There is zero point in talking about AI slop," he wrote. "Because the AI slop people aren't going to document their patches as such." He called such discussions "pointless posturing" and said that kernel documentation is "for good actors." The exchange comes as a team led by Intel's Dave Hansen works on guidelines for tool-generated contributions. Stoakes had pushed for language letting maintainers reject suspected AI slop outright, arguing the current draft "tries very hard to say 'NOP.'" Torvalds made clear he doesn't want kernel documentation to become a political statement on AI. "I strongly want this to be that 'just a tool' statement," he wrote.

Slashdot Top Deals