Social Networks

Are Employers Using Your Data To Figure Out the Lowest Salary You'll Accept? (marketwatch.com) 96

MarketWatch looks at "surveillance wages," pay rates "based not on an employee's performance or seniority, but on formulas that use their personal data, often collected without employees' knowledge." According to Nina DiSalvo, policy director at labor advocacy group Towards Justice, some systems use signals associated with financial vulnerability — including data on whether a prospective employee has taken out a payday loan or has a high credit-card balance — to infer the lowest pay a candidate might accept. Companies can also scrape candidates' public personal social-media pages, she said...

A first-of-its-kind audit of 500 labor-management artificial-intelligence companies by Veena Dubal, a law professor at University of California, Irvine, and Wilneida Negrón, a tech strategist, found that employers in the healthcare, customer service, logistics and retail industries are customers of vendors whose tools are designed to enable this practice. Published by the Washington Center for Equitable Growth, a progressive economic think tank, the August 2025 report... does not claim that all employers using these systems engage in algorithmic wage surveillance. Instead, it warns that the growing use of algorithmic tools to analyze workers' personal data can enable pay practices that prioritize cost-cutting over transparency or fairness...

Surveillance wages don't stop at the hiring stage — they follow workers onto the job, too. The vendors that provide such services also offer tools that are built to set bonus or incentive compensation, according to the report. These tools track workers' productivity, customer interactions and real-time behavior — including, in some cases, audio and video surveillance on the job. Nearly 70% of companies with more than 500 employees were already using employee-monitoring systems in 2022, such as software that monitors computer activity, according to a survey from the International Data Corporation. "The data that they have about you may allow an algorithmic decision system to make assumptions about how much, how big of an incentive, they need to give to a particular worker to generate the behavioral response they seek," DiSalvo said.

The article notes that Colorado introduced the "Prohibit Surveillance Data to Set Prices and Wages Act" to ban companies from using algorithms that draw on payday-loan history, location data or Google search behavior to set pay rates.

Thanks to long-time Slashdot reader sinij for sharing the article.
Microsoft

Microsoft To Invest $10 Billion In Japan For AI, Cyber Defense Expansion (reuters.com) 10

Microsoft plans to invest $10 billion in Japan from 2026 to 2029 to expand AI infrastructure, boost local cloud capacity, train 1 million engineers and developers, and deepen cybersecurity cooperation with the Japanese government. Reuters reports: The investment includes the training of 1 million engineers and developers by 2030, Microsoft said. The plan was unveiled during a visit to Tokyo by Vice Chair and President Brad Smith. In a statement, the company said the plan aligns with Prime Minister Sanae Takaichi's goal to boost growth through advanced, strategic technologies while safeguarding national security.

Microsoft will work with domestic firms including SoftBank and Sakura Internet to expand Japan-based AI computing capacity, allowing companies and government agencies to keep sensitive data within the country while accessing Microsoft Azure services, it said. It will also deepen cooperation with Japanese authorities on sharing intelligence related to cyber threats and crime prevention.

Businesses

OpenAI Acquires Popular Tech-Industry Talk Show TBPN (cnbc.com) 25

OpenAI is acquiring tech news podcast TBPN, a fast-growing daily show hosted by John Coogan and Jordi Hays. OpenAI says TBPN will keep its editorial independence, even though the acquisition is widely viewed as part of a broader effort to influence public discourse around AI. CNBC reports: In the announcement, OpenAI CEO of AGI Deployment Fidji Simo wrote that the company's mission of bringing about artificial general intelligence comes with a responsibility to have a space for "constructive conversation about the changes AI creates." Altman has appeared on TBPN multiple times and is a frequent presence across media and podcasts, even hitting NBC's "Tonight Show Starring Jimmy Fallon" in December.

The announcement says TBPN will maintain editorial independence and continue to choose its own guests. "TBPN is my favorite tech show. We want them to keep that going and for them to do what they do so well," Altman wrote in a post on X. "I don't expect them to go any easier on us, am sure I'll do my part to help enable that with occasional stupid decisions." OpenAI did not disclose the terms of the deal but said TBPN will be housed within its strategy organization.
"While we've been critical of the industry at times, after getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right," wrote Hays in a statement. "Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us."
NASA

Artemis II Astronauts Have 'Two Microsoft Outlooks' and Neither Work (404media.co) 140

Even on NASA's Artemis II mission around the moon, astronauts apparently still have to deal with broken Microsoft Outlook. One of the crew members, Reid Wiseman, jokingly reported that he had "two Microsoft Outlooks" and neither worked. 404 Media reports: On April 1, four astronauts from the U.S. and Canada embarked on a 10-day flight to loop around the moon. In a moment spotted by VGBees podcast host Niki Grayson on the NASA livestream around 2 a.m. ET, mission control acknowledges an issue with a process control system and offers to remote in -- yes, like how your office IT guy would pause his CoD campaign to log into Okta for you because you used the wrong password too many times.

One of the astronauts, Reid Wiseman, says that's chill, but while they're in there: "I also see that I have two Microsoft Outlooks, and neither one of those are working." Astronauts are trained for decades in some of the most physically and mentally grueling environments of any career. They're some of the smartest people on the planet, and they have to be, before we strap them to 3.2 million pounds of jet fuel and make them do complex experiments and high-stakes decisions for days on end. And yet, once they get up there, fucking Outlook is borked.

AI

Group Pushing Age Verification Requirements For AI Sneakily Backed By OpenAI 54

An anonymous reader quotes a report from Gizmodo: OpenAI hasn't been shy about spending money lobbying for favorable laws and regulations. But when it comes to its involvement with child safety advocacy groups, the company has apparently decided it's best to stay in the shadows -- even if it means hiding from the people actually pushing for policy changes. According to a report from the San Francisco Standard, a number of people involved in the California-based Parents and Kids Safe AI Coalition were blindsided to learn their efforts were secretly being funded by OpenAI. Per the Standard, the Parents and Kids Safe AI Coalition was a group formed to push the Parents and Kids Safe AI Act, a piece of California legislation proposed earlier this year that would require AI firms to implement age verification and additional safeguards for users under the age of 18. That bill was backed by OpenAI in partnership with Common Sense Media, which proposed the legislation as a compromise after the two groups had pushed dueling ballot initiatives last year.

But when the coalition started to reach out to child safety groups and other advocacy organizations to try to get them to lend support to the bill, OpenAI was apparently conveniently left off the messaging. The AI giant was also left out of the marketing on the coalition's website, according to the Standard. That reportedly led to a number of groups and individuals lending their support to the Parents and Kids Safe AI Coalition without realizing that they were aligning themselves with OpenAI. As it turns out, OpenAI isn't just one of the members of the coalition; it is the group's biggest funder. In fact, the Standard characterized the Parents and Kids Safe AI Coalition as being "entirely funded" by OpenAI. While it's not clear exactly how much the company has funneled to this particular group, a Wall Street Journal report from January said OpenAI pledged $10 million to push the Parents and Kids Safe AI Act.
Gizmodo notes that OpenAI's backing of the Parents and Kids Safe AI Act "could be self-serving for CEO Sam Altman," who just so happens to head a company called World that provides age verification services.
AI

Anthropic Issues Copyright Takedown Requests To Remove 8,000+ Copies of Claude Code Source Code 69

Anthropic is using copyright takedown notices to try to contain an accidental leak of the underlying instructions for its Claude Code AI agent. According to the Wall Street Journal, "Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions ... that developers had shared on programming platform GitHub." From the report: Programmers combing through the source code so far have marveled on social media at some of Anthropic's tricks for getting its Claude AI models to operate as Claude Code. One feature asks the models to go back periodically through tasks and consolidate their memories -- a process it calls dreaming. Another appears to instruct Claude Code in some cases to go "undercover" and not reveal that it is an AI when publishing code to platforms like GitHub. Others found tags in the code that appeared pointed at future product releases. The code even included a Tamagotchi-style pet called "Buddy" that users could interact with.

After Anthropic requested that GitHub remove copies of its proprietary code, another programmer used other AI tools to rewrite the Claude Code functionality in other programming languages. Writing on GitHub, the programmer said the effort was aimed at keeping the information available without risking a takedown. That new version has itself become popular on the programming platform.
Transportation

Robotaxi Outage In China Leaves Passengers Stranded On Highways (wired.com) 31

An anonymous reader quotes a report from Wired: An unknown technical problem caused a number of robotaxis owned by the Chinese tech giant Baidu to freeze on Tuesday in the middle of traffic, trapping some passengers in the vehicles for more than an hour. In Wuhan, a city in central China where Baidu has deployed hundreds of its Apollo Go self-driving taxis, people on Chinese social media reported witnessing the cars suddenly malfunction and stop operating. Photos and videos shared online show the Baidu cars halted on busy highways, often in the fast lane.

[...] Local police in Wuhan issued a statement around midnight in China that said the situation was "likely caused by a system malfunction," but the incident is still under investigation. No one was injured, and all passengers have exited the vehicles, the police added. It's unclear how many of Baidu's robotaxis may have been impacted. [...] There were at least two other collisions on the same day, according to photos and videos posted on Chinese social media. A RedNote user in Wuhan confirmed to WIRED that she drove past a white minivan that had gotten into a rear-end collision with a parked robotaxi. The back of the Baidu car was badly damaged, but the two people standing beside the scene looked unharmed, she says. She estimates she also saw at least a dozen more parked robotaxis.

Businesses

Oracle Cuts Thousands of Jobs Across Sales, Engineering, Security (theregister.com) 46

bobthesungeek76036 shares a report from the Register: Oracle laid off thousands of employees on Tuesday as it ramps spending on AI infrastructure projects internally and with major technology partners. The layoffs were carried out via email, according to copies of the message viewed by Business Insider. The email told affected workers they would be terminated immediately and to provide a personal email for follow-up.

The cuts echo a TD Cowen forecast earlier this year, when the investment bank questioned how Oracle would finance its expanding AI datacenter buildout and suggested headcount reductions could reach 20,000 to 30,000. It is not clear how many employees were notified on Tuesday, but one screenshot that purports to show the number of internal Slack users showed a drop of 10,000 overnight.

[...] Oracle employs about 162,000 people, with 58,000 of those in the US and approximately 104,000 internationally. If the rumored cuts of 30,000 are correct, it would amount to 18 percent of the company's workforce. According to posts from Oracle workers on LinkedIn, the cuts were spread through multiple departments around the country, with employees in Kansas, Tennessee, and Texas taking to social media to say they were among those chopped.
"This news didn't seem to affect stock price," adds bobthesungeek76036. "ORCL is up 6% for the day."
Social Networks

Australia Readies Social Media Court Action Citing Teen Ban Breaches (reuters.com) 27

Australia is preparing possible court action against major social media platforms that are failing to enforce the country's social media ban on under-16s. "Three months after the ban came into effect, the eSafety Commissioner said it was probing Meta's Instagram and Facebook, Google's YouTube, Snapchat and TikTok for possible breaches of the law," reports Reuters. From the report: Communications Minister Anika Wells said the government was gathering evidence "so that the eSafety Commissioner can go to the Federal Court and win." "We have spent the summer building that evidence base of all the stories that no doubt you have all heard ... about how kids are getting around that," Wells told reporters in Canberra. The legal threat is a striking change of tone from a government which had hailed tech giants' shows of cooperation when the ban went live in December.

Under the Australian law, platforms must show they are taking reasonable steps to keep out underage users or face fines of up to $34 million per breach, something eSafety would need to pursue in a civil court. The regulator previously said it would only take enforcement action in cases of systemic noncompliance. But in its first comprehensive compliance report since the ban took effect, eSafety said measures taken by the platforms were substandard and it would make a decision about next steps by mid-year. "We are now moving into an enforcement stance," said commissioner Julie Inman Grant in a statement.

The regulator reported major compliance gaps, including platforms prompting children who had previously declared ages under 16 to do fresh age checks, allowing repeated attempts at age-assurance tests until a child got a result over 16 and poor pathways for people to report underage accounts. Some platforms did not use age-inference, which estimates age based on someone's online activity, and some only used age-assurance measures like photo-based checks after a user tried to change their age, rather than at sign-up. That made it "likely many Australian children aged under 16 have been able to create accounts on age-restricted social media platforms by simply declaring they are 16 or older", the regulator said. Nearly one-third of parents reported their under-16 child had at least one social media account after the ban took effect, of which two-thirds said the platform had not asked the child's age, it added.

Data Storage

Sony Shuts Down Nearly Its Entire Memory Card Business Due To SSD Shortage (petapixel.com) 50

For the "foreseeable future," Sony says it has stopped accepting new orders for most of its CFexpress and SD memory card lines due to an ongoing memory supply shortage. "Due to the global shortage of semiconductors (memory) and other factors, it is anticipated that supply will not be able to meet demand for CFexpress memory cards and SD memory cards for the foreseeable future," the company said in a notice. "Therefore, we have decided to temporarily suspend the acceptance of orders from our authorized dealers and from customers at the Sony Store from March 27, 2026 onwards." PetaPixel reports: The suspension includes all of Sony's memory card lines, including CFexpress Type A, CFexpress Type B, and SD cards. The 240GB, 480GB, 960GB, and 1920GB capacity Type A cards have been suspended, as have the 480GB and 240GB Type B cards. The full gamut of Sony's high-end SD cards has also been suspended, including the 256GB, 128GB, and 64GB TOUGH-branded cards and the lower-end 512GB, 256GB, 128GB, and 256GB plainly-branded Sony cards, which cap out at V60 speeds. Even Sony's lower-end, V30 128GB and 64GB SD cards have been suspended, showcasing that the memory shortage affects all types of solid-state storage, not just the high-end ones.

It appears that only the 960GB CFexpress Type B card and the lowest-end SF-UZ series SD cards remain in production. However, those UHS-I SD cards are discontinued in the United States outside of a scant few retailers and resellers. "We sincerely apologize for any inconvenience this may cause our customers," Sony concludes.

Social Networks

Will Social Media Change After YouTube and Meta's Court Defeat? (theverge.com) 54

Yes, this week YouTube and Meta were found negligent in a landmark case about social media addiction.

But "it's still far from certain what this defeat will change," argues The Verge's senior tech and policy editor, "and what the collateral damage could be." If these decisions survive appeal — which isn't certain — the direct outcome would be multimillion-dollar penalties. Depending on the outcome of several more "bellwether" cases in Los Angeles, a much larger group settlement could be reached down the road... For many activists, the overall goal is to make clear that lawsuits will keep piling up if companies don't change their business practices...

The best-case outcome of all this has been laid out by people like Julie Angwin, who wrote in The New York Times that companies should be pushed to change "toxic" features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize "shocking and crude" content. The worst-case scenario falls along the lines of a piece from Mike Masnick at Techdirt, who argued the rulings spell disaster for smaller social networks that could be sued for letting users post and see First Amendment-protected speech under a vague standard of harm. He noted that the New Mexico case hinged partly on arguing that Meta had harmed kids by providing end-to-end encryption in private messaging, creating an incentive to discontinue a feature that protects users' privacy — and indeed, Meta discontinued end-to-end encryption on Instagram earlier this month.

Blake Reid, a professor at Colorado Law, is more circumspect. "It's hard right now to forecast what's going to happen," Reid told The Verge in an interview. On Bluesky, he noted that companies will likely look for "cold, calculated" ways to avoid legal liability with the minimum possible disruption, not fundamentally rethink their business models. "There are obviously harms here and it's pretty important that the tort system clocked those harms" in the recent cases, he told The Verge. "It's just that what comes in the wake of them is less clear to me."

The article also includes this prediction from legal blogger/Section 230 expert Eric Goldman: "There will be even stronger pushes to restrict or ban children from social media." Goldman argues, "This hurts many subpopulations of minors, ranging from LGBTQ teens who will be isolated from communities that can help them navigate their identities to minors on the autism spectrum who can express themselves better online than they can in face-to-face conversations."
Data Storage

World's Smallest QR Code - Smaller Than Bacteria - Could Store Data for Centuries (sciencedaily.com) 40

"Scientists have created a microscopic QR code so tiny it can only be seen with an electron microscope," reports Science Daily. It's "smaller than most bacteria and now officially a world record."

"But this isn't just about size; it's about durability. By engraving data into ultra-stable ceramic materials, the team has opened the door to storing information that could last for centuries or even millennia without needing power or maintenance." Scientists at TU Wien, working with data storage company Cerabyte, produced a QR code measuring just 1.98 square micrometers... officially confirmed and recorded in the Guinness Book of Records...

Each pixel measures just 49 nanometers, which is about ten times smaller than the wavelength of visible light. As a result, the pattern is completely invisible under normal conditions and cannot be resolved using visible light. However, when viewed with an electron microscope, the QR code can be clearly and reliably read. The storage capacity is also impressive. More than 2 terabytes of data could fit within the area of a single A4 sheet of paper using this approach...
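The density claim above can be sanity-checked with quick back-of-envelope arithmetic, assuming one bit per pixel at the reported 49 nm pitch:

```python
# Rough check of the storage-density claim: one bit per 49 nm pixel,
# packed across an A4 sheet (210 mm x 297 mm).

PIXEL_NM = 49            # reported pixel size in nanometres
A4_M2 = 0.210 * 0.297    # area of an A4 sheet in square metres

pixel_area_m2 = (PIXEL_NM * 1e-9) ** 2
bits = A4_M2 / pixel_area_m2
terabytes = bits / 8 / 1e12

print(f"{terabytes:.1f} TB per A4 sheet")  # roughly 3 TB
```

That lands at about 3 TB, consistent with the article's "more than 2 terabytes" figure (real-world encoding overhead would reduce it somewhat).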

This work points toward a more sustainable future for data storage, where information can be preserved securely for the long term with minimal energy use.

"We live in the information age, yet we store our knowledge in media that are astonishingly short-lived," says Alexander Kirnbaue (from the thin film materials science division at TU Wien in Vienna). "With ceramic storage media, we are pursuing a similar approach to that of ancient cultures, whose inscriptions we can still read today..."

"We now aim to use other materials, increase writing speeds, and develop scalable manufacturing processes so that ceramic data storage can be used not only in laboratories but also in industrial applications."
Social Networks

Bluesky's Newest Product: an AI Tool That Gives You Custom Feeds (attie.ai) 39

"What happens when you can describe the social experience you want and have it built for you...?" asks Bluesky. "We've just started experimenting, but we're sharing it now because we want you to build alongside us."

Called "Attie" — because it's built with Bluesky's decentralized publishing framework, AT Protocol (which is open source) — the new assistant turns natural language prompts into social feeds, without users having to know how to code. (It's part of Bluesky's mission to "develop and drive large-scale adoption of technologies for open and decentralized public conversation.")
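For a rough sense of what a "custom feed" means in AT Protocol terms: a feed generator is essentially a service that returns a list of post URIs (a "feed skeleton") in response to the protocol's app.bsky.feed.getFeedSkeleton method. The sketch below is a toy keyword filter with invented post data, not Attie's actual implementation:

```python
# Minimal sketch of a custom-feed selection step, assuming the shape of
# the getFeedSkeleton response ({"feed": [{"post": <at:// URI>}, ...]}).
# The posts and keywords here are invented for illustration; a real feed
# generator would index posts from the network's event stream.

def feed_skeleton(posts, keywords, limit=50):
    """Select posts mentioning any keyword and return a feed skeleton."""
    matches = [
        {"post": p["uri"]}
        for p in posts
        if any(k.lower() in p["text"].lower() for k in keywords)
    ]
    return {"feed": matches[:limit]}

# Hypothetical indexed posts (at:// URIs identify records on the network).
posts = [
    {"uri": "at://did:plc:alice/app.bsky.feed.post/1",
     "text": "New modular synth patch tonight"},
    {"uri": "at://did:plc:bob/app.bsky.feed.post/2",
     "text": "Thoughts on agent infrastructure"},
]

print(feed_skeleton(posts, ["synth", "electronic"]))
```

Attie's pitch is essentially that a coding agent writes this selection logic for you from a natural-language description.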

Engadget reports: On the Attie website, examples include prompts like, "Show me electronic music and experimental sound from people in my network" or "Builders working on agent infrastructure and open protocol design."

"It feels more like having a conversation than configuring software," [writes Bluesky's former CEO/current chief innovation officer, Jay Graber, in a blog post]. "You describe the sort of posts you want to see, and the coding agent builds the feed you described."

Graber added that Attie is a separate app from Bluesky and users don't have to use the new AI assistant if they don't want to. However, since Attie and Bluesky were built on the same framework, it could mean there will be some cross-app implementation between the two or any other app built on the AT Protocol.

"Attie is open for beta signups today, and we'll be sharing what we learn along the way," Graber writes in the blog post. "To learn more about Attie, visit: Attie.AI. Come help us find out what this can be."

The blog post warns that "Right now, AI is undermining human agency at the same time it's enhancing it," since "The proliferation of low-quality AI-generated content is making public social networks noisier and less trustworthy..." And in a world where "signal is getting harder to find... The major platforms aren't trying to fix this problem." They're using AI to increase the time users spend on-platform, to harvest training data, and to shape what users see and believe through systems they can't inspect and didn't choose. We think AI should serve people, not platforms...

An open protocol puts this power directly in users' hands. You can use it to build your own feeds, create software that works the way you want it to, and find signal in the noise. We built the AT Protocol so anyone could build any app they imagine on top of it, but until recently "anyone" really meant "anyone who can code." Agentic coding tools change that. For the first time, an open protocol can be genuinely open to everyone...

The Atmosphere [Bluesky's interoperable ecosystem] is an open data layer with a clearly defined schema for applications, which makes it uniquely well-suited for coding agents to build on... Bluesky will continue to evolve as a social app millions of people rely on. Attie will be where we experiment with agentic social.

AI is an accelerant on whatever it's applied to. I want it to accelerate decentralizing social and putting power back in users' hands. But I don't think the most interesting things built on AT Protocol will come from us. They're going to come from everyone who picks up these tools and starts building.

United Kingdom

Apple Now Requires Device-Level Age Verification in the UK. Could the US Be Next? (gizmodo.com) 121

Apple unveiled new device-level age restrictions in the UK on Wednesday. "After downloading a new update, users will now have to confirm that they are 18 or older to access unrestricted features," reports Gizmodo.

"Users will be able to confirm their age with a credit card or by scanning an ID." For those underage or who have not confirmed their age, Apple will turn on Web Content Filter and Communication Safety, which will not only restrict access to certain apps or websites, but will also monitor messages, shared photo albums, AirDrop, and FaceTime calls for nudity. Apple didn't specify exactly which services and features are banned for under-18 users, but it will likely be in compliance with UK legislation...

The British government does not require Apple and other OS providers to institute device-level age checks, but it does restrict minor access to online pornography under the Online Safety Act, which passed in 2023. So far, that restriction has only been implemented at the website level, but UK officials have been worried about easy loopholes to evade the age restrictions, like VPNs.

The broader tech industry has been campaigning for some time to use device-level age checks instead in response to the rising tide of under-16 social media and internet bans around the world. Last month, in a landmark social media trial in California, Meta CEO Mark Zuckerberg also supported this idea, saying that conducting age verification "at the level of the phone is just a lot clearer than having every single app out there have to do this separately." Pornhub-operator Aylo had advocated for device-level restrictions in the UK as well, and even sent out letters to Apple, Google, and Microsoft in November asking for OS-level age verification...

The most obvious question: Could this be brought stateside?

Advertising

'Ads Are Popping Up On the Fridge and It Isn't Going Over Well' (msn.com) 122

The Wall Street Journal reports: Walking into his kitchen, Tim Yoder recoiled at a message on his refrigerator door: "Shop Samsung water filters." Yoder, a supply-chain manager in Chicago, owns a Samsung Electronics Family Hub fridge. He paid $1,400 for an appliance that came with a 32-inch screen on the door that allows him to control other Samsung gadgets, pull up recipes or stream music. But since last fall, it's been intermittently serving up ads, part of a pilot program being tested on some of Samsung's smart fridges sold in the U.S. The response? Not warm. "I guess this is another place for somebody to shove an ad in your face," said the 47-year-old Yoder, recalling the first time he noticed one...

The ads are only on certain Family Hub fridges that have screens and internet connectivity. They run as a rectangular banner at the bottom — part of a widget that also shows news, the weather and a calendar. Samsung declined to say how long the pilot might last or whether it would end. The firm recently unveiled a "Screens Everywhere" initiative that also includes washers, dryers and ovens.... Samsung launched the banner-type fridge ads that come as part of the widget via an October software update. In a footnote of a news release at the time, Samsung pledged to "serve contextual or non-personal ads" and respect data privacy. The banner ads can be turned off in settings.

Samsung said the purpose of the pilot is to explore whether ads relevant to home chores can be useful to owners, and that overall pushback has been negligible. The "turn-off" rate for the pilot ad program remains in the bottom single-digit range, it said... While owners can turn off the banner ads, doing so eliminates the widget altogether, a bummer for Brian Bosworth, a media-industry engineer who liked the feature. Bosworth thinks it's wrong to take away the new feature as a condition. Wanting to keep the widget but not the ads, the 49-year-old in Edgewater, Md., made sure his home router's ad-blocking software extended to his fridge. He hasn't seen another since.
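For readers curious how a router's ad blocker reaches a fridge: most such blockers work at the DNS level, so any device that uses the router for name resolution is covered automatically. A minimal sketch of the idea, with invented domain names (not Samsung's actual ad servers):

```python
# DNS-sinkhole sketch: answer known ad domains with a dead address so
# the device's ad widget has nothing to fetch. Domains are made up.

BLOCKLIST = {"ads.example-appliance.com", "banner.example-cdn.net"}

def resolve(domain, upstream):
    """Return a sinkhole address for blocked domains, else ask upstream."""
    if domain in BLOCKLIST:
        return "0.0.0.0"          # sinkhole: the request goes nowhere
    return upstream.get(domain, "NXDOMAIN")

# Toy upstream resolver table.
upstream = {"weather.example.com": "203.0.113.7"}
print(resolve("ads.example-appliance.com", upstream))  # -> 0.0.0.0
print(resolve("weather.example.com", upstream))        # -> 203.0.113.7
```

This is why the widget itself kept working for him: only the ad domains are blackholed, while legitimate lookups like the weather service resolve normally.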

One 27-year-old plans to return his refrigerator after the entire display "lit up with a full-screen ad for Apple TV's sci-fi show Pluribus," according to the article. The all-caps ad beckoned him "with an oft-used refrain directed at protagonist Carol Sturka: 'We're Sorry We Upset You, Carol.'"

Thanks to Slashdot reader fjo3 for sharing the article.
AI

People are Using AI-Powered Services to Find Lost Pets (yahoo.com) 35

A dog missing for two months was found at an animal shelter — and its owner received an email from an artificial intelligence service that identified it, according to the Washington Post.

"As controversial as AI is right now, this is one of those areas where it's a real win," according to the chief executive at the nonprofit animal welfare organization Best Friends Animal Society. And while it shouldn't replace microchipping pets, AI does offer another tool to help desperate pet owners (and overcrowded animal shelters) — and might even be "game-changing"... People send photos of their lost pets to a database, and AI compares the pets' features — including facial structure, coat pattern and ear shape — to photos of stray pets that have been spotted elsewhere. Many of the stray pets have already been taken to shelters... Doorbell cameras have recently implemented facial recognition for dogs, and perhaps the largest AI database for pet reunification is Petco Love Lost, which says it has reunited more than 200,000 pets and owners since 2021... After owners upload photos of their lost pets, AI scans thousands of photos of lost animals from social media and from about 3,000 animal shelters and rescues that use the software, according to Petco Love, an animal welfare nonprofit that's affiliated with the pet store Petco. It notifies owners if two photos match.
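As a toy illustration of the matching approach described above (not Petco Love Lost's actual system): if each photo is reduced to a numeric feature vector, finding a candidate match is a nearest-neighbor search by similarity. The vectors and IDs below are invented:

```python
# Toy pet-matching sketch: compare a lost pet's feature vector (in a real
# system, produced by a neural network from a photo) against shelter
# entries using cosine similarity. All data here is made up.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(lost_vec, shelter):
    """Return the shelter entry most similar to the lost pet."""
    return max(shelter, key=lambda entry: cosine(lost_vec, entry["features"]))

shelter = [
    {"id": "shelter-dog-17", "features": [0.9, 0.1, 0.4]},
    {"id": "shelter-dog-42", "features": [0.2, 0.8, 0.7]},
]
lost = [0.88, 0.15, 0.35]  # e.g. encoding coat pattern, ear shape, face

print(best_match(lost, shelter)["id"])  # -> shelter-dog-17
```

A production service would compare against thousands of shelter photos and notify the owner only when similarity crosses a confidence threshold.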
The article notes that one in three pets go missing during their lifetime, according to figures from the Animal Humane Society. "But as technology has progressed, so have resources for finding lost pets" — including GPS collars — and now, apparently, AI-powered pet identification.
AI

OpenAI's US Ad Pilot Exceeds $100 Million In Annualized Revenue In Six Weeks (reuters.com) 53

An anonymous reader quotes a report from Reuters: OpenAI's ChatGPT ads pilot in the United States has crossed the $100 million annualized revenue mark within six weeks of launch, a company spokesperson said on Thursday, pointing to robust early demand for the AI startup's nascent advertising business. [...] While roughly 85% of users are currently eligible to see ads, fewer than 20% are shown ads daily, with considerable room to grow ad monetization within the existing user pool, the spokesperson said.

"We're seeing no impact on consumer trust metrics, low dismissal rates of ads, and ongoing improvements in the relevance of ads as we learn from feedback," OpenAI said. The company plans to expand the test to additional countries in the coming weeks, including Australia, New Zealand, and Canada. OpenAI has now expanded to over 600 advertisers, with nearly 80% of small- and medium-sized businesses signaling interest in ChatGPT ads, the spokesperson said. The ChatGPT maker is set to launch self-serve advertiser capabilities in April to broaden access and drive further growth.
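"Annualized revenue" here is a run rate: revenue from a short observation window scaled to a full year, not cash actually booked. A minimal sketch of the arithmetic (the six-week dollar figure below is illustrative, back-derived from the reported $100 million mark; OpenAI did not disclose the raw number):

```python
def annualized_run_rate(period_revenue, period_days):
    """Scale revenue observed over period_days up to a 365-day year."""
    return period_revenue / period_days * 365

# Hypothetical: roughly $11.5M over the six-week (42-day) pilot
# would imply a run rate just shy of $100M.
rate = annualized_run_rate(11_500_000, 42)
```

The caveat with run rates is built into the formula: six weeks of launch demand is extrapolated as if it held steady for a year.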
CEO Sam Altman announced plans to begin testing ads on ChatGPT back in January after previously rejecting the idea. "I kind of think of ads as like a last resort for us as a business model," Altman said in 2024.

Further reading: OpenAI CFO Says Annualized Revenue Crosses $20 Billion In 2025
Media

AV1's Open, Royalty-Free Promise In Question As Dolby Sues Snapchat Over Codec (arstechnica.com) 44

An anonymous reader quotes a report from Ars Technica: AOMedia Video 1 (AV1) was invented by a group of technology companies to be an open, royalty-free alternative to other video codecs, like HEVC/H.265. But a lawsuit that Dolby Laboratories Inc. filed this week against Snap Inc. calls all that into question with claims of patent infringement. Numerous lawsuits are currently open in the US regarding the use of HEVC. Relevant patent holders, such as Nokia and InterDigital, have sued numerous hardware vendors and streaming service providers in pursuit of licensing fees for the use of patented technologies deemed essential to HEVC.

It's a touch rarer to see a lawsuit filed over the implementation of AV1. The Alliance for Open Media (AOMedia), whose members include Amazon, Apple, Google, Microsoft, Mozilla, and Netflix, says it developed AV1 "under a royalty-free patent policy (Alliance for Open Media Patent License 1.0)" and that the standard is "supported by high-quality reference implementations under a simple, permissive license (BSD 3-Clause Clear License)."

Yet, Dolby's lawsuit filed in the US District Court for the District of Delaware [PDF] alleges that AV1 leverages technologies that Dolby has patented and has not agreed to license for free and without receiving royalties. The filing reads: "[AOMedia] does not own all patents practiced by implementations of the AV1 codec. Rather, the AV1 specification was developed after many foundational video coding patents had already been filed, and AV1 incorporates technologies that are also present in HEVC. Those technologies are subject to existing third-party patent rights and associated licensing obligations." Dolby is seeking a jury trial, a declaration that Dolby isn't obligated to license the patents in question under FRAND (fair, reasonable, and non-discriminatory) licensing obligations, and for the court to enjoin Snap from further "infringement."

Social Networks

Austria Plans Social Media Ban For Under-14s (bbc.com) 11

Austria plans to restrict under-14s from using social media platforms over concerns about addictive algorithms and harmful content. The government says draft legislation should be ready by the end of June, though details around enforcement and age verification have yet to be finalized. The BBC reports: Announcing the plans, Vice-Chancellor Andreas Babler of the Social Democrats said the government could not stand by and watch as social media made children "addicted and also often ill." He said it was the responsibility of politicians to protect children and argued that the issue should be treated no differently from alcohol or tobacco: "There must be clear rules in the digital world too." In future, said Babler, children under 14 would be protected from addictive algorithms. "Other information providers have clear rules to protect young people from harmful content." These, he said, should now be implemented in the digital space. Yesterday, juries in two separate cases found social media giants liable for harming young people's mental health. The verdicts are being hailed as social media's Big Tobacco moment.

Further reading: California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media
Social Networks

California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media (latimes.com) 46

A California bill would let adults demand the removal of social media posts about them that were created by paid family content creators when they were minors. Supporters say Senate Bill 1247 addresses privacy, dignity, and safety harms caused when parents monetize their children's lives online. The Los Angeles Times reports: The legislation would require the parent or other relative to delete or edit the content within 10 business days of receiving the notification. Petitioners could take civil action against those who fail to comply, and statutory damages would be set at $3,000 for each day the content remained online. Sen. Steve Padilla (D-San Diego), who introduced the bill last month, said it would help protect the dignity and mental health of those who had their childhood shared on social media. The measure was referred to the Senate Privacy, Digital Technologies and Consumer Protection Committee and is slated for a hearing on April 6.

"The evolution of these applications and technology is incredible," Padilla said. "But it's changing our social dynamic and it's creating situations that, while very productive for some folks, also need some guardrails." The bill would build upon previous legislation from Padilla that was signed into law two years ago and requires content creators that feature minors in at least 30% of their material to place some of their earnings into a trust the children can access when they turn 18.

Slashdot Top Deals