Businesses

GFiber and Astound Broadband To Join Forces (lightreading.com) 16

GFiber (a.k.a. Google Fiber) and Astound Broadband announced that they plan to merge in a deal backed by infrastructure investor Stonepeak Infrastructure Partners. The resulting company will be majority owned by Stonepeak, with Alphabet becoming a "significant minority shareholder." Light Reading reports: Stonepeak Infrastructure Partners teamed with Patriot Media to acquire Astound in November 2020 for $8.1 billion. Stonepeak is Astound's largest investor. The deal is expected to close in the fourth quarter of 2026. The combined business will be led by the existing GFiber executive team. GFiber is currently led by CEO Dinni Jain. Jain, a former Time Warner Cable and Insight Communications exec, took the helm of what was then called Google Fiber in 2018.

"This agreement advances GFiber's mission of redefining internet connectivity and represents a major step toward its goal of operational and financial independence," the companies said. "GFiber will have the external capital and strategic focus needed to accelerate its next phase of growth, expanding its customer-first approach and pioneering fiber technology across the country." GFiber's combination with Astound represents "a strategic opportunity to scale our customer-focused approach to connect more households to a truly different type of internet service," Jain said in a statement.

EU

Meta To Charge Advertisers a Fee To Offset Europe's Digital Taxes (reuters.com) 36

Meta will begin charging advertisers a 2-5% "location fee" to offset digital services taxes imposed by several European countries, including the UK, France, Italy, Spain, Austria, and Turkey. Reuters reports: The fee applies to image and video ads delivered on Meta platforms, including WhatsApp click-to-message campaigns and marketing messages delivered alongside ads; it takes effect July 1 and will also cover other government-imposed levies. "Until now, Meta has covered these additional costs. These changes are part of Meta's ongoing effort to respond to the evolving regulatory landscape and align with industry standards," the company said in a blog post.

The location fees are determined by where the audience is located and not the advertisers' business location. Meta listed six countries where the fees will apply, ranging from 2% in the United Kingdom and 3% in France, Italy and Spain to 5% in Austria and Turkey.
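The per-country surcharge described above can be illustrated with a minimal sketch. The rates come from the article, but the function name and data structure are illustrative assumptions, not Meta's actual billing API:

```python
# Illustrative sketch of Meta's audience-location fee. Rates are from
# the article; the helper below is hypothetical, not a Meta API.
LOCATION_FEE_RATES = {
    "UK": 0.02,
    "France": 0.03,
    "Italy": 0.03,
    "Spain": 0.03,
    "Austria": 0.05,
    "Turkey": 0.05,
}

def location_fee(ad_spend: float, audience_country: str) -> float:
    """Fee is keyed to where the audience is, not the advertiser."""
    rate = LOCATION_FEE_RATES.get(audience_country, 0.0)
    return round(ad_spend * rate, 2)

print(location_fee(1000.0, "UK"))       # 20.0
print(location_fee(1000.0, "Austria"))  # 50.0
```

Note that an advertiser based in, say, Germany would still pay the 5% rate on ads shown to an Austrian audience, since the fee follows the audience.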

AT&T

AT&T Outlines $250 Billion US Investment Plan To Boost Infrastructure In AI Age (reuters.com) 12

AT&T plans to invest more than $250 billion over the next five years to expand U.S. telecom infrastructure for the AI age. The company says it will also hire thousands of technicians while partnering with AST SpaceMobile to extend coverage to remote areas. Reuters reports: Rapid adoption of artificial intelligence, cloud computing and connected devices has prompted telecom operators to invest heavily in fiber and 5G networks as they also seek to fend off intensifying competition from cable broadband providers. AT&T, which has about 110,000 employees in the U.S., said the new hires will help build and maintain its infrastructure. The outlay includes capital expenditure and other spending, the company said.

The spending will focus on expanding its fiber and wireless networks, including accelerating deployment of fiber broadband, 5G home internet and satellite connectivity to extend coverage across urban, suburban and rural areas. [...] AT&T is also working with satellite partner AST SpaceMobile to expand connectivity to remote regions where traditional network infrastructure is difficult to deploy. The company said it would continue spending on the FirstNet network built for first responders and bolster investment in network security and artificial intelligence-driven threat detection.

AI

AI Allows Hackers To Identify Anonymous Social Media Accounts, Study Finds (theguardian.com) 54

An anonymous reader quotes a report from the Guardian: AI has made it vastly easier for malicious hackers to identify anonymous social media accounts, a new study has warned. In most test scenarios, large language models (LLMs) -- the technology behind platforms such as ChatGPT -- successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted. The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost effective to perform sophisticated privacy attacks, forcing a "fundamental reassessment of what can be considered private online".

In their experiment, the researchers fed anonymous accounts into an AI, and got it to scrape all the information it could. They gave a hypothetical example of a user talking about struggling at school, and walking their dog Biscuit through Dolores Park. In that hypothetical case, the AI then searched elsewhere for those details and matched @anon_user42 to the known identity with a high degree of confidence. While this example was fictional, the paper's authors highlighted scenarios in which governments use AI to surveil dissidents and activists posting anonymously, or hackers are able to launch "highly personalized" scams.
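The linkage idea behind such attacks can be shown with a toy sketch; this is an illustration of the concept, not the researchers' actual method, and all names and details below are hypothetical:

```python
# Toy sketch of identity linkage: score candidate public profiles by
# how many distinctive details from an anonymous account they share.
# An LLM automates the extraction and search steps this glosses over.
def link_score(anon_details: set, profile_details: set) -> float:
    """Fraction of the anonymous account's details a profile matches."""
    if not anon_details:
        return 0.0
    return len(anon_details & profile_details) / len(anon_details)

anon = {"dog named Biscuit", "Dolores Park", "struggling at school"}
profiles = {
    "jane_doe": {"dog named Biscuit", "Dolores Park", "SF resident"},
    "john_roe": {"cat owner", "Brooklyn"},
}
best = max(profiles, key=lambda p: link_score(anon, profiles[p]))
print(best)  # jane_doe
```

The study's point is that LLMs make this extract-search-match loop cheap enough to run at scale, which is what changes the privacy calculus.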

AI

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com) 19

A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.

I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.

Among the other "glaring" risks on Moltbook:
  • "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
  • "Moltbook's entire database, including bot API keys and potentially private DMs, was also compromised."

Transportation

As US Tariffs Hit EVs, Hyundai Discontinues Its Cheapest IONIQ 6, While Kia Delays EV6 and EV9 GT (electrek.co) 74

First, Hyundai "is discontinuing its most affordable electric sedan after just three years on the market," reports USA Today. After being introduced in 2022, the Hyundai Ioniq 6 "quickly gained the admiration of automotive critics because of its affordable pricing and capable performance specs." But now, Hyundai "is axing the most affordable versions of the EV, leaving consumers with only one Ioniq 6 option." Hyundai will continue to produce the Ioniq 6 N performance trim, which is the quickest and most powerful iteration of the Ioniq 6. It's also the most expensive. The South Korean automaker is getting rid of lower Ioniq 6 trims due to "disappointing sales and tariff considerations," according to Cars.com. Hyundai sold 10,478 Ioniq 6 models in 2025, dropping 15% from 12,264 units in 2024, a company sales report stated. Hyundai's Ioniq 6 is mainly produced in South Korea, so it faces high import tariffs.

Sales increased for its earlier IONIQ 5 model, reports the EV blog Electrek, "up 14% through the first two months of 2026, with 5,365 units sold... Meanwhile, IONIQ 6 sales slid 77% with only 229 units sold in February."

Elsewhere they report that Kia's EV6 and EV9 "didn't fare much better with sales down 53% (600 units sold) and 40% (819 units sold), respectively." Now a Kia spokesperson tells Car and Driver that the 2025 EV6 GT and 2026 EV9 GT "will be delayed until further notice." They attributed the move to "changing market conditions," but added that this delay "does not impact the availability of other trims in the EV6 and EV9 lineups."

More from Electrek: The news comes after Kia already said it was delaying the EV4, its entry-level electric sedan, "until further notice." It was expected to arrive in the US this year alongside the EV3, Kia's compact electric SUV that's already a top-seller in the UK, Europe, and other overseas markets.

While Hyundai didn't directly say it, since the EV3, EV4, EV6 GT, and Hyundai IONIQ 6 are built in Korea, the Trump administration's import tariffs and other policy changes are likely the biggest factor here. Kia and Hyundai, like many others, are hesitant to bring new EVs to the US due to the changes. The IONIQ 6, EV6 GT, and EV9 GT join a string of other models that have either been postponed or canceled altogether.

Medicine

Japan Approves Stem-Cell Treatments For Parkinson's, Heart Failure In World Firsts (france24.com) 21

Long-time Slashdot reader fjo3 shared this report from Agence France-Presse: Japan has approved ground-breaking stem-cell treatments for Parkinson's and severe heart failure, one of the manufacturers and media reports said Friday, with the therapies expected to reach patients within months.

Pharmaceutical company Sumitomo Pharma said it received the green light for the manufacture and sale of Amchepry, its Parkinson's disease treatment that transplants stem cells into a patient's brain. Japan's health ministry also gave the go-ahead to ReHeart, heart muscle sheets developed by medical startup Cuorips that can help form new blood vessels and restore heart function, media reports said. The treatments could be on the market and rolled out to patients as early as this summer, reports said, citing the health ministry, becoming the world's first commercially available medical products using induced pluripotent stem cells...

In a statement, Sumitomo Pharma said it had obtained "conditional and time-limited approval" for the manufacture and marketing of Amchepry under a system which is reportedly designed to get these products to patients as quickly as possible. The approval is a kind of "provisional license", the Asahi newspaper said, after the safety and efficacy of the treatment was judged based on data from fewer patients than in ordinary clinical trials for drugs.

A trial led by Kyoto University researchers indicated that the company's treatment was safe and successful in improving symptoms. The study involved seven Parkinson's patients aged between 50 and 69, with each receiving a total of either five million or 10 million cells implanted on both sides of the brain... The patients were monitored for two years and no major adverse effects were found, the study said. Four patients showed improvements in symptoms.

The article notes that "Worldwide, about 10 million people have the illness, according to the Parkinson's Foundation," while also noting that current therapies "improve symptoms without slowing or halting the disease progression..."

The Almighty Buck

Prediction Market 'Kalshi' Sued for Not Paying $54 Million for Bets on Khamenei's Death (reuters.com) 44

An anonymous reader shared this report from the Independent: A popular prediction market app will not pay out the $54 million some of its users believed they were owed after correctly forecasting the death of Ayatollah Ali Khamenei, according to a report.

Kalshi, which allows players to gamble on real-world events, offered customers favorable odds on Khamenei, 86, being "out as Supreme Leader" in response to the announcement of joint U.S.-Israeli airstrikes on Tehran in the early hours of Saturday morning. The company promoted the trade on its homepage and app and tweeted [last] Saturday: "BREAKING: The odds Ali Khamenei is out as Supreme Leader have surged to 68 percent." It continued: "Reminder: Kalshi does not offer markets that settle on death. If Ali Khamenei dies, the market will resolve based on the last traded price prior to confirmed reporting of death." Khamenei was later confirmed dead in the airstrikes and the company clarified in a follow-up post: "Please note: A prior version of this clarification was grammatically ambiguous. As a customer service measure, Kalshi will reimburse lost value due to trades made between these clarifications...."
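The settlement rule Kalshi described, resolving at the last traded price before death is confirmed, can be sketched as follows; all names, timestamps, and prices here are hypothetical, not actual Kalshi trade data:

```python
# Sketch of the settlement rule from Kalshi's tweet: if the subject
# dies, the market resolves at the last traded price prior to
# confirmed reporting of death. Trades are assumed chronological.
from datetime import datetime

def resolve_on_death(trades, death_confirmed_at):
    """trades: list of (timestamp, price); returns settlement price."""
    prior = [price for ts, price in trades if ts < death_confirmed_at]
    if not prior:
        raise ValueError("no trades before confirmation of death")
    return prior[-1]

trades = [
    (datetime(2026, 6, 21, 2, 0), 0.35),
    (datetime(2026, 6, 21, 3, 30), 0.68),  # odds surge after strikes
    (datetime(2026, 6, 21, 5, 0), 0.95),
]
print(resolve_on_death(trades, datetime(2026, 6, 21, 4, 0)))  # 0.68
```

The dispute turns on exactly this rule: holders of "out as Supreme Leader" contracts expected the market to resolve at $1 (the event happened), while the death carveout freezes payouts at the pre-confirmation price instead.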

While the company has offered to reimburse any bets, fees or losses from the trade placed prior to its clarification message, it has nevertheless attracted a firestorm of complaints on social media.

A Kalshi spokesperson told Reuters they'd reimbursed "net losses" out of pocket "to the tune of millions of dollars". But a class action lawsuit was filed Thursday saying Kalshi had failed to pay $54 million: Kalshi did not invoke a "death carveout" provision until after the Iranian leader was killed to avoid paying customers in Kalshi's "Khamenei Market" what they were owed, the lawsuit said... The language specifying that Khamenei's departure could be due to any cause, including death, was "clear, unambiguous and binary," the lawsuit said, describing Kalshi's actions as "deceptive" and "predatory."

"In a notice filed Monday, the company proposed standardizing the terms of all its markets that implicitly depend on a person surviving..." reports Business Insider. "The update comes after Kalshi paid $2.2 million to resolve complaints from users who were confused by the way it divided the $55 million wagered on Iran's Supreme Leader Ali Khamenei's ouster after his targeted killing by Israel and the US."

Their article cites a DePaul University law professor who says "There's now sort of this nascent, but bipartisan movement against prediction markets. I think Kalshi's feeling the heat." For example, U.S. Senator Chris Murphy told the Washington Post, "People shouldn't be rooting for people to die because they placed a bet."

Government

Indonesia To Ban Social Media For Children Under 16 (theguardian.com) 47

Indonesia will ban children under 16 from having accounts on major social media platforms as part of a government push to protect minors from harmful content, addiction, and online threats. The rule will roll out starting March 28 and makes Indonesia the first country in Southeast Asia to impose such a restriction. The Guardian reports: Meutya Hafid said in a statement to media that she had signed a government regulation that will mean children under the age of 16 can no longer have accounts on high-risk digital platforms, including YouTube, TikTok, Facebook, Instagram, Threads, X, Roblox and Bigo Live, a popular livestreaming site. With a population of about 285 million, the fourth-highest in the world, the south-east Asian nation represents a significant market for social networks.

The implementation will start gradually from 28 March, until all platforms fulfill their compliance obligations. "The basis is clear. Our children face increasingly real threats. From exposure to pornography, cyberbullying, online fraud, and most importantly addiction. The government is here so that parents no longer have to fight alone against the giant of algorithms," Hafid said.

She added that the government is taking this step as the best effort in the midst of a digital emergency to reclaim sovereignty over children's futures. "We realize that the implementation of this regulation may cause some discomfort at first. Children may complain and parents may be confused about how to respond to their children's complaints," Hafid said.

IOS

Apple Blocks US Users From Downloading ByteDance's Chinese Apps (wired.com) 25

An anonymous reader quotes a report from Wired: While TikTok operates in the United States under new ownership, Apple has deployed technical restrictions to block iOS users in the United States from downloading other apps made by the video platform's Chinese parent organization ByteDance. ByteDance owns a vast array of different apps spanning social media, entertainment, artificial intelligence, and other sectors. The leading one is Douyin, the Chinese version of TikTok, which has over 1 billion monthly active users. While most of those users reside in China, iPhone owners around the world have traditionally been able to download these apps from anywhere without using a VPN, as long as they have a valid App Store account registered in China.

That's not true anymore. Starting in late January, iPhone users in the U.S. with Chinese App Store accounts began reporting that they were encountering new obstacles when they tried to download apps developed by ByteDance. WIRED has confirmed that even with a valid Chinese App Store account, downloading or updating a ByteDance-owned Chinese app is blocked on Apple devices located in the United States. Instead, a pop-up window appears that says, "This app is unavailable in the country or region you're in." The restriction appears to apply only to ByteDance-owned apps and not those developed by other Chinese companies.

The timing and technical specifics suggest the restriction is related to the deal TikTok agreed to in January to divest Chinese ownership of its U.S. operations. The agreement was the result of the so-called TikTok ban law passed by Congress in 2024, which also barred companies like Apple and Google from distributing other apps majority-owned by ByteDance. The Protecting Americans from Foreign Adversary Controlled Applications Act states that no company can "distribute, maintain, or update" any app majority-controlled by ByteDance "within the land or maritime borders of the United States."

The law was primarily aimed at TikTok, which has more than 100 million users in the U.S. and had been the subject of years of debate in Washington over whether its Chinese ownership posed a national security risk. But ByteDance also has dozens of other apps that at some point were also removed from Apple's and Google's app stores in the U.S. Now it seems like the scope of impact has reached even more apps that are not technically designed for U.S. audiences, such as Douyin, the AI chatbot Doubao, and the fiction reading platform Fanqie Novel.

Privacy

Proton Mail Helped FBI Unmask Anonymous 'Stop Cop City' Protester (404media.co) 59

Longtime Slashdot reader AmiMoJo shares a report from 404 Media: Privacy-focused email provider Proton Mail provided Swiss authorities with payment data that the FBI then used to determine who was allegedly behind an anonymous account affiliated with the Stop Cop City movement in Atlanta, according to a court record reviewed by 404 Media. The records provide insight into the sort of data that Proton Mail, which prides itself both on its end-to-end encryption and that it is only governed by Swiss privacy law, can and does provide to third parties. In this case, the Proton Mail account was affiliated with the Defend the Atlanta Forest (DTAF) group and Stop Cop City movement in Atlanta, which authorities were investigating for their connection to arson, vandalism and doxing. Broadly, members were protesting the building of a large police training center next to the Intrenchment Creek Park in Atlanta, and actions also included camping in the forest and lawsuits. Charges against more than 60 people have since been dropped.

Crime

Florida Woman Gets Prison Time For Illegally Selling Microsoft Product Keys (techradar.com) 65

A Florida woman was sentenced to 22 months in federal prison and fined $50,000 for illegally trafficking thousands of Microsoft certificate-of-authenticity labels used to activate Windows and Office. Prosecutors said she bought genuine labels cheaply from suppliers and resold them without the accompanying licensed software, wiring over $5 million during the scheme. TechRadar reports: The indictment details how [52-year-old Heidi Richards] purchased tens of thousands of genuine COA labels from a Texas-based supplier between 2018 and 2023 for well below the retail value, before reselling them in bulk to customers globally without the licensed software. "COA labels are not to be sold separately from the license and hardware that they are intended to accompany, and they hold no independent commercial value," the US Attorney's Office wrote.

Richards was found to have wired $5,148,181.50 to the unnamed Texas company during the scheme's operation. Some examples include the purchase of 800 Windows 10 COA labels in July 2018 for $22,100 (under $28 each) and a further 10,000 Windows 10 Pro COA labels in December 2022 for $200,000 ($20 each). Richards was ultimately fined $50,000 and given a near-two-year sentence; prosecutors had sought to have her pay $242,000, "which represents the proceeds obtained from the offenses."

Wikipedia

AI Translations Are Adding 'Hallucinations' To Wikipedia Articles (404media.co) 23

An anonymous reader quotes a report from 404 Media: Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages after they discovered these AI translations added AI "hallucinations," or errors, to the resulting article. The new restrictions show how Wikipedia editors continue to fight to keep the flood of generative AI across the internet from diminishing the reliability of the world's largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how they're remedied by Wikipedia's open governance model. The issue centers on a program run by the Open Knowledge Association (OKA), a nonprofit that was found to be "mostly relying on cheap labor from contractors in the Global South" to translate English Wikipedia articles into other languages. Some translators began using tools like Google Gemini and ChatGPT to speed up the process, but editors reviewing the work found numerous hallucinations, including factual errors, missing citations, and references to unrelated sources.

"Ultimately the editors decided to implement restrictions against OKA translators who make multiple errors, but not block OKA translation as a rule," reports 404 Media.

AI

Anthropic CEO Dario Amodei Calls OpenAI's Messaging Around Military Deal 'Straight Up Lies' (arstechnica.com) 28

An anonymous reader quotes a report from TechCrunch: Anthropic co-founder and CEO Dario Amodei is not happy -- perhaps predictably so -- with OpenAI chief Sam Altman. In a memo to staff, reported by The Information, Amodei referred to OpenAI's dealings with the Department of Defense as "safety theater." "The main reason [OpenAI] accepted [the DoD's deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses," Amodei wrote.

Last week, Anthropic and the U.S. Department of Defense (DoD) failed to come to an agreement over the military's request for unrestricted access to the AI company's technology. Anthropic, which already had a $200 million contract with the military, insisted the DoD affirm that it would not use the company's AI to enable domestic mass surveillance or autonomous weaponry. Instead, the DoD -- known under the Trump administration as the Department of War -- struck a deal with OpenAI. Altman stated that his company's new defense contract would include protections against the same red lines that Anthropic had asserted.

In a letter to staff, Amodei refers to OpenAI's messaging as "straight up lies," stating that Altman is falsely "presenting himself as a peacemaker and dealmaker." Amodei might not be speaking solely from a position of bitterness, here. Anthropic specifically took issue with the DoD's insistence on the company's AI being available for "any lawful use." [...] "I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI's deal with the DoW as sketchy or suspicious, and see us as the heroes (we're #2 in the App Store now!)," Amodei wrote to his staff. "It is working on some Twitter morons, which doesn't matter, but my main worry is how to make sure it doesn't work on OpenAI employees."

Cloud

Amazon's Bahrain Data Center Targeted By Iran For US Military Support (cnbc.com) 168

Iranian state media said on Wednesday that it targeted Amazon's data center in Bahrain due to the company's support of the U.S. military. The drone strike that occurred on Sunday disrupted core cloud services and caused "prolonged" outages. Two data centers in the UAE were also damaged by drone strikes. CNBC reports: All of the facilities remain offline, according to the Amazon Web Services health dashboard. The attack in Bahrain was launched "to identify the role of these centers in supporting the enemy's military and intelligence activities," Iran's Fars News Agency said on Telegram.

In addition to structural damage, the data centers also experienced power disruptions and some water damage after firefighters worked to put out sparks and fire. Some popular AWS applications experienced "elevated error rates and degraded availability" due to the incident. AWS advised cloud customers to back up their data, consider migrating their workloads to other regions and direct traffic away from Bahrain and the UAE.

Businesses

Jensen Huang Says Nvidia Is Pulling Back From OpenAI and Anthropic (techcrunch.com) 26

An anonymous reader quotes a report from TechCrunch: At the Morgan Stanley Technology, Media and Telecom conference in downtown San Francisco Wednesday, Nvidia CEO Jensen Huang said his company's recent investments in OpenAI and Anthropic are likely to be its last in both, saying that once they go public as anticipated later this year, the opportunity to invest closes. It could be that simple. While firms sometimes pile into companies until practically the eve of their public debut in search of more upside, Nvidia is minting money selling the chips that power both companies -- it's not like it needs to goose its returns by pouring even more money into either one.

Nvidia, for its part, isn't offering much more on the matter. Asked for comment earlier today following Huang's remarks, a spokesman pointed TechCrunch to a transcript from the company's fourth-quarter earnings call, where Huang said all of Nvidia's investments are "focused very squarely, strategically on expanding and deepening our ecosystem reach," a goal its earlier stakes in both companies have arguably met. Still, a few other dynamics might also explain the pullback, including the circular nature of these arrangements themselves. [...] Meanwhile, Nvidia's relationship with Anthropic has looked fraught in its own right. Just two months after Nvidia announced a $10 billion investment in November, Anthropic CEO Dario Amodei took the stage at Davos and, without naming Nvidia directly, compared the act of U.S. chip companies selling high-performance AI processors to approved Chinese customers to "selling nuclear weapons to North Korea." Ouch. [...]

Where that leaves Nvidia is holding stakes in two companies that, at this particular moment, are pulling in very different directions, and potentially dragging customers and partners along for the ride. Whether Huang saw any of this coming, given Nvidia's web of partnerships, is impossible to know. But his stated reason on Wednesday for likely pulling the plug on future investments -- that the IPO window closes the door on this kind of deal -- is hard to square with how late-stage private investing actually works. What's looking more probable is that this is an exit from a situation that has gotten really complicated, really fast.

The Internet

Computer Scientists Caution Against Internet Age-Verification Mandates (reason.com) 79

fjo3 shares a report from Reason Magazine: Effective January 1, 2027, providers of computer operating systems in California will be required to implement age verification. That's just part of a wave of state and national laws attempting to limit children's access to potentially risky content without considering the perils such laws themselves pose. Now, not a moment too soon, over 400 computer scientists have signed an open letter warning that the rush to protect children from online dangers threatens to introduce new risks including censorship, centralized power, and loss of privacy. They caution that age-verification requirements "might cause more harm than good." The group of computer scientists from around the world cautions that "those deciding which age-based controls need to exist, and those enforcing them gain a tremendous influence on what content is accessible to whom on the internet." They add that "this influence could be used to censor information and prevent users from accessing services."

"Regulating the use of VPNs, or subjecting their use to age assurance controls, will decrease the capability of users to defend their privacy online. This will not only force regular users to leave a larger footprint on the network, but will leave a number of at-risk populations unprotected, such as journalists, activists, or domestic abuse victims." It continues: "We note that we do not believe that trying to regulate VPN use for non-compliant users would be any more effective than trying to forbid the use of end-to-end encrypted communication for criminals. Secure cryptography is widely available and can no longer be put back into a box."

"If minors or adults are deplatformed via age-related bans, they are likely to migrate to find similar services," warn the scientists. "Since the main platforms would all be regulated, it is likely that they would migrate to fringe sites that escape regulation." With data on everyone collected in order to restrict the activities of minors, data abuses and privacy risks increase. "This in itself increases privacy risks, with data being potentially abused by the provider itself or its subcontractors, or third parties that get access to it, e.g., after a data breach, like the 70K users that had their government ID photos leaked after appealing age assessment errors on Discord."

Rather than mandated age restrictions, the letter urges lawmakers to consider these dangers and suggests regulating social media algorithms instead. They also recommend "support for parents to locally prevent access to non-age-appropriate content or apps, without age-based control needing to be implemented by service providers."

Encryption

TikTok Says End-To-End Encryption Makes Users Less Safe (bbc.com) 86

An anonymous reader quotes a report from the BBC: TikTok will not introduce end-to-end encryption (E2EE) -- the controversial privacy feature used by nearly all its rivals -- arguing it makes users less safe. E2EE means only the sender and recipient of a direct message can view its contents, making it the most secure form of communication available to the general public. Platforms such as Facebook, Instagram, Messenger and X have embraced it because they say their priority is maximizing user privacy.

But critics have said E2EE makes it harder to stop harmful content spreading online, because it means tech firms and law enforcement have no way of viewing any material sent in direct messages. The situation is made more complex because TikTok has long faced accusations that ties to the Chinese state may put users' data at risk. TikTok has consistently denied this, but earlier this year the social media firm's US operations were separated from its global business on the orders of US lawmakers.

TikTok told the BBC it believed end-to-end encryption prevented police and safety teams from being able to read direct messages if they needed to. It confirmed its approach to the BBC in a briefing about security at its London office, saying it wanted to protect users, especially young people, from harm. It described this stance as a deliberate decision to set itself apart from rivals.
"Grooming and harassment risks are very real in DMs [direct messages] so TikTok now can credibly argue that it's prioritizing 'proactive safety' over 'privacy absolutism' which is a pretty powerful soundbite," said social media industry analyst Matt Navarra. But Navarra said the move also "puts TikTok out of step with global privacy expectations" and might reinforce wariness for some about its ownership.
Privacy

New App Alerts You If Someone Nearby Is Wearing Smart Glasses 54

A new Android app called Nearby Glasses alerts users when Bluetooth signals from smart glasses are detected nearby. The app "launches at a time when there is increasing resistance against always-recording or listening devices, which critics say process information about nearby people who do not give their consent," reports TechCrunch. From the report: Yves Jeanrenaud, who made the app, first spoke to 404 Media about the project and said he was in part inspired to make Nearby Glasses after reading the independent publication's reporting into wearable surveillance devices, including how Meta's Ray-Ban smart glasses have been used in immigration raids and to film and harass sex workers.

On the app's project page, Jeanrenaud described smart glasses as an "intolerable intrusion, consent neglecting, horrible piece of tech." Jeanrenaud told TechCrunch in an email that his motivation came from "witnessing the sheer scale and inhumane nature of the abuse these smart glasses are involved in." Jeanrenaud also cited Meta's decision to implement face recognition as a default feature in its smart glasses, "which I consider to be a huge floodgate pushed open for all kinds of privacy-invasive behavior."

The app works by listening for nearby Bluetooth signals that contain a publicly assigned identifier unique to the Bluetooth device's manufacturer. If the app detects a Bluetooth signal from a nearby hardware device made by Meta or Snap, the app will send the user an alert. (The app also lets users add their own Bluetooth identifiers to detect a broader range of wearable surveillance gadgetry.)
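The detection step described above can be sketched in a few lines. BLE advertisements are a sequence of AD structures ([length][type][data]), and Manufacturer Specific Data (AD type 0xFF) begins with a 16-bit little-endian Bluetooth SIG company identifier. The sketch below parses that structure and matches against a watchlist; the company ID in `WATCHLIST` is a placeholder, not the real Meta or Snap identifier, and a real app would feed it live scan results from the Android Bluetooth APIs rather than raw bytes.

```python
# Hypothetical watchlist: 16-bit company identifier -> label.
# 0x1234 is a placeholder, NOT an actual vendor's assigned ID.
WATCHLIST = {0x1234: "ExampleGlassesVendor"}

def parse_company_ids(adv: bytes) -> list[int]:
    """Yield the 16-bit company IDs found in a BLE advertisement payload.

    The payload is a sequence of AD structures: one length byte (covering
    the type byte plus data), one AD-type byte, then the data itself.
    """
    ids, i = [], 0
    while i < len(adv):
        length = adv[i]
        if length == 0 or i + 1 + length > len(adv):
            break  # padding or malformed structure; stop parsing
        ad_type = adv[i + 1]
        data = adv[i + 2 : i + 1 + length]
        if ad_type == 0xFF and len(data) >= 2:
            # Manufacturer Specific Data: first two bytes are the
            # little-endian Bluetooth SIG company identifier.
            ids.append(int.from_bytes(data[:2], "little"))
        i += 1 + length
    return ids

def check_advertisement(adv: bytes) -> list[str]:
    """Return labels of watched vendors seen in this advertisement."""
    return [WATCHLIST[cid] for cid in parse_company_ids(adv) if cid in WATCHLIST]

# Synthetic advertisement: a Flags AD structure followed by manufacturer
# data carrying company ID 0x1234 (bytes 0x34, 0x12 little-endian).
adv = bytes([0x02, 0x01, 0x06, 0x05, 0xFF, 0x34, 0x12, 0x00, 0x00])
print(check_advertisement(adv))  # -> ['ExampleGlassesVendor']
```

Because the company identifier sits at a fixed offset in a well-defined structure, the matching itself is trivial; the hard part in practice is continuous background scanning and battery cost, which this sketch does not address.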
Further reading: Meta's AI Display Glasses Reportedly Share Intimate Videos With Human Moderators
Software

What's Driving the SaaSpocalypse (techcrunch.com) 69

An anonymous reader quotes a report from TechCrunch: One day not long ago, a founder texted his investor with an update: he was replacing his entire customer service team with Claude Code, an AI tool that can write and deploy software on its own. To Lex Zhao, an investor at One Way Ventures, the message indicated something bigger -- the moment when companies like Salesforce stopped being the automatic default. "The barriers to entry for creating software are so low now thanks to coding agents, that the build versus buy decision is shifting toward build in so many cases," Zhao told TechCrunch.

The build versus buy shift is only part of the problem. The whole idea of using AI agents instead of people to perform work throws into question the SaaS business model itself. SaaS companies currently price their software per seat -- meaning by how many employees log in to use it. "SaaS has long been regarded as one of the most attractive business models due to its highly predictable recurring revenue, immense scalability, and 70-90% gross margins," Abdul Abdirahman, an investor at the venture firm F-Prime, told TechCrunch. When one, or a handful, of AI agents can do that work -- when employees simply ask their AI of choice to pull the data from the system -- that per-seat model starts to break down.
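The per-seat breakdown described above is simple arithmetic. A toy illustration, with entirely hypothetical numbers, shows why vendors are worried: if agents replace most human logins, seat counts (and revenue) collapse even though the same work still flows through the system.

```python
# Toy model of per-seat SaaS revenue (hypothetical figures for illustration).
def annual_revenue(seats: int, price_per_seat_month: float) -> float:
    """Yearly revenue under a per-seat subscription."""
    return seats * price_per_seat_month * 12

# 500 employees each holding a license...
before = annual_revenue(seats=500, price_per_seat_month=150)
# ...versus a handful of agent accounts doing the same work.
after = annual_revenue(seats=5, price_per_seat_month=150)

print(before, after, after / before)  # revenue falls to 1% of its former level
```

The point of the sketch is only that revenue scales linearly with seats, so the model offers no cushion when seats disappear; the usage-based or outcome-based pricing some vendors are exploring is an attempt to decouple revenue from headcount.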

The rapid pace of AI development also means that new tools, like Claude Code or OpenAI's Codex, can replicate not just the core functions of SaaS products but also the add-on tools a SaaS vendor would sell to grow revenue from existing customers. On top of that, customers now have the ultimate contract negotiation tool in their pockets: If they don't like a SaaS vendor's prices, they can, more easily than ever before, build their own alternative. "Even if they do not take the build route, this creates downward pressure on contracts that SaaS vendors can secure during renewals," Abdirahman continued.

We saw this as early as late 2024, when Klarna announced that it had ditched Salesforce's flagship CRM product in favor of its own homegrown AI system. The realization that a growing number of other companies can do the same is spooking public markets, where the stock prices of SaaS giants like Salesforce and Workday have been sliding. In early February, an investor sell-off wiped nearly $1 trillion in market value from software and services stocks, followed by a further sell-off later in the month. Experts are calling it the SaaSpocalypse, with one analyst dubbing it FOBO investing -- or fear of becoming obsolete. Yet the venture investors TechCrunch spoke with believe such fears are only temporary. "This isn't the death of SaaS," Aaron Holiday, a managing partner at 645 Ventures, told TechCrunch. Rather, it's the beginning of an old snake shedding its skin, he said.
