AI

Police Department Apologizes for Sharing AI-Doctored Evidence Photo on Social Media (boston.com)

A Maine police department has now acknowledged "it inadvertently shared an AI-altered photo of drug evidence on social media," reports Boston.com: The image from the Westbrook Police Department showed a collection of drug paraphernalia purportedly seized during a recent drug bust on Brackett Street, including a scale and white powder in plastic bags. According to Westbrook police, an officer involved in the arrests snapped the evidence photo and used a photo editing app to insert the department's patch. "The patch was added, and the photograph with the patch was sent to one of our Facebook administrators, who posted it," the department explained in a post. "Unbeknownst to anyone, when the app added the patch, it altered the packaging and some of the other attributes on the photograph. None of us caught it or realized it."

It wasn't long before the edited image's gibberish text and hazy edges drew criticism from social media users. According to the Portland Press Herald, Westbrook police initially denied AI had been used to generate the photo before eventually confirming its use of the AI chatbot ChatGPT. The department issued a public apology Tuesday, sharing a side-by-side comparison of the original and edited images.

"It was never our intent to alter the image of the evidence," the department's post read. "We never realized that using a photoshop app to add our logo would alter a photograph so substantially."

Microsoft

Microsoft Shuts Down Operations in Pakistan After 25 Years (pakistantoday.com.pk) 38

Newspaper Pakistan Today: In a significant moment for Pakistan's technology sector, Microsoft has officially shut down its operations in the country, concluding a 25-year journey that began with high hopes for digital transformation and global partnership.

The move, confirmed by employees and media sources, marks the quiet departure of the software giant, which had launched its Pakistan presence in June 2000. The last remaining employees were formally informed of the closure in recent days, signalling the end of an era that saw Microsoft play a key role in developing local talent, building enterprise partnerships, and promoting digital literacy across sectors.

The Internet

Websites Hosting Major US Climate Reports Taken Down (apnews.com) 75

An anonymous reader quotes a report from the Associated Press: Websites that displayed legally mandated U.S. national climate assessments seem to have disappeared, making it harder for state and local governments and the public to learn what to expect in their backyards from a warming world. Scientists said the peer-reviewed authoritative reports save money and lives. Websites for the national assessments and the U.S. Global Change Research Program were down Monday and Tuesday with no links, notes or referrals elsewhere. The White House, which was responsible for the assessments, said the information will be housed within NASA to comply with the law, but gave no further details. Searches for the assessments on NASA websites did not turn them up.

"It's critical for decision makers across the country to know what the science in the National Climate Assessment is. That is the most reliable and well-reviewed source of information about climate that exists for the United States," said University of Arizona climate scientist Kathy Jacobs, who coordinated the 2014 version of the report. "It's a sad day for the United States if it is true that the National Climate Assessment is no longer available," Jacobs said. "This is evidence of serious tampering with the facts and with people's access to information, and it actually may increase the risk of people being harmed by climate-related impacts."

"This is a government resource paid for by the taxpayer to provide the information that really is the primary source of information for any city, state or federal agency who's trying to prepare for the impacts of a changing climate," said Texas Tech climate scientist Katharine Hayhoe, who has been a volunteer author for several editions of the report. Copies of past reports are still squirreled away in NOAA's library. NASA's open science data repository includes dead links to the assessment site. [...] Additionally, NOAA's main climate.gov website was recently forwarded to a different NOAA website. Social media and blogs at NOAA and NASA about climate impacts for the general public were cut or eliminated. "It's part of a horrifying big picture," [said Harvard climate scientist John Holdren, who was President Obama's science advisor and whose office directed the assessments]. "It's just an appalling whole demolition of science infrastructure."
National climate assessments are more detailed and locally relevant than UN reports and undergo rigorous peer review and validation by scientific and federal institutions, Hayhoe and Jacobs said. Suppressing these reports would be censoring science, Jacobs said.

The Almighty Buck

Wells Fargo Scandal Pushed Customers Toward Fintech, Says UC Davis Study (nerds.xyz) 18

BrianFagioli shares a report from NERDS.xyz: A new academic study has found that the 2016 Wells Fargo scandal pushed many consumers toward fintech lenders instead of traditional banks. The research, published in the Journal of Financial Economics, suggests that it was a lack of trust rather than interest rates or fees that drove this behavioral shift. Conducted by Keer Yang, an assistant professor at the UC Davis Graduate School of Management, the study looked closely at what happened after the Wells Fargo fraud erupted into national headlines. Bank employees were caught creating millions of unauthorized accounts to meet unrealistic sales goals. The company faced $3 billion in penalties and a massive public backlash.

Yang analyzed Google Trends data, Gallup polls, media coverage, and financial transaction datasets to draw a clear conclusion. In geographic areas with a strong Wells Fargo presence, consumers became measurably more likely to take out mortgages through fintech lenders. This change occurred even though loan costs were nearly identical between traditional banks and digital lenders. In other words, it was not about money. It was about trust. That simple fact hits hard. When big institutions lose public confidence, people do not just complain. They start moving their money elsewhere.

According to the study, fintech mortgage use increased from just 2 percent of the market in 2010 to 8 percent in 2016. In regions more heavily exposed to the Wells Fargo brand, fintech adoption rose an additional 4 percent compared to areas with less exposure. Yang writes, "Therefore it is trust, not the interest rate, that affects the borrower's probability of choosing a fintech lender." [...] Notably, while customers may have been more willing to switch mortgage providers, they were less likely to move their deposits. Yang attributes that to FDIC insurance, which gives consumers a sense of security regardless of the bank's reputation. This study also gives weight to something many of us already suspected. People are not necessarily drawn to fintech because it is cheaper. They are drawn to it because they feel burned by the traditional system and want a fresh start with something that seems more modern and less manipulative.

Education

Hacker With 'Political Agenda' Stole Data From Columbia, University Says (therecord.media) 28

A politically motivated hacker breached Columbia University's IT systems, stealing vast amounts of sensitive student and employee data -- including admissions decisions and Social Security numbers. The Record reports: The hacker reportedly provided Bloomberg News with 1.6 gigabytes of data they claimed to have stolen from the university, including information from 2.5 million applications going back decades. The stolen data the outlet reviewed reportedly contains details on whether applicants were rejected or accepted, their citizenship status, their university ID numbers and which academic programs they sought admission to. While the hacker's claims have not been independently verified, Bloomberg said it compared data provided by the hacker to that belonging to eight Columbia applicants seeking admission between 2019 and 2024 and found it matched.

The threat actor reportedly told Bloomberg he was seeking information that would indicate whether the university continues to use affirmative action in admissions despite a 2023 Supreme Court decision prohibiting the practice. The hacker told Bloomberg he obtained 460 gigabytes of data in total -- after spending two months targeting and penetrating increasingly privileged layers of the university's servers -- and said he harvested information about financial aid packages, employee pay and at least 1.8 million Social Security numbers belonging to employees, applicants, students and their family members.

China

China's Giant New Gamble With Digital IDs (economist.com) 73

China will launch digital IDs for internet use on July 15th, transferring online verification from private companies to government control. Users obtain digital IDs by submitting personal information including facial scans to police via an app. A pilot program launched one year ago enrolled 6 million people.

The system currently remains voluntary, though officials and state media are pushing citizens to register for "information security." Companies will see only anonymized character strings when users log in, while police retain exclusive access to personal details. The program replaces China's existing system requiring citizens to register with companies using real names before posting comments, gaming, or making purchases.

Police say they punished 47,000 people last year for spreading "rumours" online. The digital ID serves a broader government strategy to centralize data control. State planners classify data as a production factor alongside labor and capital, aiming to extract information from private companies for trading through government-operated data exchanges.
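
For illustration only (the design of China's actual system is not public beyond the description above), a standard way to build per-service "anonymized character strings" that companies can see but cannot correlate with each other is a keyed pseudonym, where only the issuing authority holds the key. A minimal sketch, with invented names:

```python
import hmac
import hashlib

def service_pseudonym(issuer_key: bytes, national_id: str, service: str) -> str:
    """Derive an opaque, per-service identifier.

    Each service sees only this string; because the pseudonym depends on
    the service name, two services cannot link the same user, while the
    issuer (holding issuer_key) can always reproduce the mapping.
    Hypothetical sketch -- not a description of the real Chinese system.
    """
    msg = f"{national_id}|{service}".encode()
    return hmac.new(issuer_key, msg, hashlib.sha256).hexdigest()

key = b"issuer-secret"  # held only by the verification authority

a = service_pseudonym(key, "id-12345", "service-a")
b = service_pseudonym(key, "id-12345", "service-b")

assert a != b  # different services see unlinkable strings
assert a == service_pseudonym(key, "id-12345", "service-a")  # stable per service
```

The design choice this illustrates is the one the article describes: verification is centralized with the key holder, while relying parties learn nothing but an opaque token.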

China

China Successfully Tests Hypersonic Aircraft, Maybe At Mach 12 (theregister.com) 156

China's Northwestern Polytechnical University successfully tested a hypersonic aircraft called Feitian-2, claiming it reached Mach 12 and achieved a world-first by autonomously switching between rocket and ramjet propulsion mid-flight. The Register reports: The University named the craft "Feitian-2" and according to Chinese media the test flight saw it reach Mach 12 (14,800 km/h or 9,200 mph) -- handily faster than the Mach 5 speeds considered to represent hypersonic flight. Chinese media have not detailed the size of Feitian-2, or its capabilities other than to repeat the University's claim that it combined a rocket and a ramjet into a single unit. [...] The University and Chinese media claim the Feitian-2 flew autonomously while changing from rocket to ramjet while handling the hellish stresses that come with high speed flight.

This test matters because, as the US Congressional Budget Office found in 2023, hypothetical hypersonic missiles "have the potential to create uncertainty about what their ultimate target is. Their low flight profile puts them below the horizon for long-range radar and makes them difficult to track, and their ability to maneuver while gliding makes their path unpredictable." "Hypersonic weapons can also maneuver unpredictably at high speeds to counter short-range defenses near a target, making it harder to track and intercept them," the Office found.

Washington is so worried about Beijing developing hypersonic weapons that the Trump administration cited the possibility as one reason for banning another 27 Chinese organizations from doing business with US suppliers of AI and advanced computing tech. The flight of Feitian-2 was therefore a further demonstration of China's ability to develop advanced technologies despite US bans.

The Internet

WordPress CEO Regrets 'Belongs to Me' Comment Amid Ongoing WP Engine Legal Battle (theverge.com) 6

Automattic CEO Matt Mullenweg said he regrets telling the media that "WordPress.org just belongs to me personally" during a new interview about his company's legal dispute with hosting provider WP Engine. The comment has been "taken out of context so many times" and represents "the worst thing ever," Mullenweg said in a new podcast interview with The Verge.

The dispute began when Mullenweg accused WP Engine of "free-riding" on WordPress's open-source ecosystem without contributing adequate resources back to the project. Mullenweg filed a lawsuit against WP Engine while cutting off the company's access to core WordPress technologies. WP Engine countersued, and Automattic was forced to reverse some retaliatory measures.

The controversy triggered significant internal upheaval at Automattic. The company offered "alignment" buyouts to employees who disagreed with the direction, reducing headcount from a peak of 2,100 to approximately 1,500 people. Mullenweg said this was "probably the fourth big time" WordPress has faced such community controversy, though the first in the current media landscape. WordPress powers 43% of websites globally. Mullenweg said he wants to return to "the most collaborative version of WordPress possible" but noted the legal proceedings continue with both sides spending "millions of dollars a month on lawyers."

Security

New NSA/CISA Report Again Urges the Use of Memory-Safe Programming Language (theregister.com) 66

An anonymous reader shared this report from the tech news site The Register: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) this week published guidance urging software developers to adopt memory-safe programming languages. "The importance of memory safety cannot be overstated," the inter-agency report says...

The CISA/NSA report revisits the rationale for greater memory safety and the government's calls to adopt memory-safe languages (MSLs) while also acknowledging the reality that not every agency can change horses mid-stream. "A balanced approach acknowledges that MSLs are not a panacea and that transitioning involves significant challenges, particularly for organizations with large existing codebases or mission-critical systems," the report says. "However, several benefits, such as increased reliability, reduced attack surface, and decreased long-term costs, make a strong case for MSL adoption."

The report cites how Google by 2024 managed to reduce memory safety vulnerabilities in Android to 24 percent of the total. It goes on to provide an overview of the various benefits of adopting MSLs and discusses adoption challenges. And it urges the tech industry to promote memory safety by, for example, advertising jobs that require MSL expertise.
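
To make the vulnerability class concrete, here is a minimal sketch, using Python (itself a memory-safe language) as a stand-in: in an MSL, an out-of-bounds access is a deterministic, checkable error rather than the undefined behavior C exhibits, where the same access may silently read or corrupt adjacent memory.

```python
# Illustration only: Python stands in for a memory-safe language.
# In C, the equivalent out-of-bounds access is undefined behavior --
# the root of the buffer-overflow bugs the CISA/NSA report targets.
def read_item(buf: list, i: int):
    """Return buf[i], or None if i is out of bounds."""
    if not 0 <= i < len(buf):
        return None  # the bug is contained and observable, not exploitable
    return buf[i]

assert read_item([10, 20, 30], 1) == 20     # in-bounds access works normally
assert read_item([10, 20, 30], 99) is None  # out-of-bounds is a checked error
```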

It also cites various government projects to accelerate the transition to MSLs, such as the Defense Advanced Research Projects Agency (DARPA) Translating All C to Rust (TRACTOR) program, which aspires to develop an automated method to translate C code to Rust. A recent effort along these lines, dubbed Omniglot, has been proposed by researchers at Princeton, UC Berkeley, and UC San Diego. It provides a safe way for unsafe libraries to communicate with Rust code through a Foreign Function Interface....

"Memory vulnerabilities pose serious risks to national security and critical infrastructure," the report concludes. "MSLs offer the most comprehensive mitigation against this pervasive and dangerous class of vulnerability."

"Adopting memory-safe languages can accelerate modern software development and enhance security by eliminating these vulnerabilities at their root," the report concludes, calling the idea "an investment in a secure software future."

"By defining memory safety roadmaps and leading the adoption of best practices, organizations can significantly improve software resilience and help ensure a safer digital landscape."

AI

Has an AI Backlash Begun? (wired.com) 132

"The potential threat of bosses attempting to replace human workers with AI agents is just one of many compounding reasons people are critical of generative AI..." writes Wired, arguing that there's an AI backlash that "keeps growing strong."

"The pushback from the creative community ramped up during the 2023 Hollywood writers' strike, and continued to accelerate through the current wave of copyright lawsuits brought by publishers, creatives, and Hollywood studios." And "Right now, the general vibe aligns even more with the side of impacted workers." "I think there is a new sort of ambient animosity towards the AI systems," says Brian Merchant, former WIRED contributor and author of Blood in the Machine, a book about the Luddites rebelling against worker-replacing technology. "AI companies have speedrun the Silicon Valley trajectory." Before ChatGPT's release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since...

[F]rustration over AI's steady creep has breached the container of social media and started manifesting more in the real world. Parents I talk to are concerned about AI use impacting their child's mental health. Couples are worried about chatbot addictions driving a wedge in their relationships. Rural communities are incensed that the newly built data centers required to power these AI tools are kept humming by generators that burn fossil fuels, polluting their air, water, and soil. As a whole, the benefits of AI seem esoteric and underwhelming while the harms feel transformative and immediate.

Unlike the dawn of the internet where democratized access to information empowered everyday people in unique, surprising ways, the generative AI era has been defined by half-baked software releases and threats of AI replacing human workers, especially for recent college graduates looking to find entry-level work. "Our innovation ecosystem in the 20th century was about making opportunities for human flourishing more accessible," says Shannon Vallor, a technology philosopher at the Edinburgh Futures Institute and author of The AI Mirror, a book about reclaiming human agency from algorithms. "Now, we have an era of innovation where the greatest opportunities the technology creates are for those already enjoying a disproportionate share of strengths and resources."

The impacts of generative AI on the workforce are another core issue that critics are organizing around. "Workers are more intuitive than a lot of the pundit class gives them credit for," says Merchant. "They know this has been a naked attempt to get rid of people."

The article suggests "the next major shift in public opinion" is likely "when broad swaths of workers feel further threatened," and organize in response...

Social Networks

To Spam AI Chatbots, Companies Spam Reddit with AI-Generated Posts (9to5mac.com) 38

The problem? "Companies want their products and brands to appear in chatbot results," reports 9to5Mac. And "Since Reddit forms a key part of the training material for Google's AI, then one effective way to make that happen is to spam Reddit." Reddit CEO Steve Huffman has confirmed to the Financial Times that this is happening, with companies using AI bots to create fake posts in the hope that the content will be regurgitated by chatbots:

"For 20 years, we've been fighting people who have wanted to be popular on Reddit," Huffman said... "If you want to show up in the search engines, you try to do well on Reddit, and now the LLMs, it's the same thing. If you want to be in the LLMs, you can do it through Reddit."

Multiple ad agency execs confirmed to the FT that they are indeed "posting content on Reddit to boost the likelihood of their ads appearing in the responses of generative AI chatbots." Huffman says that AI bots are increasingly being used to make spam posts, and Reddit is trying to block them: For Huffman, success comes down to making sure that posts are "written by humans and voted on by humans [...] It's an arms race, it's a never ending battle." The company is exploring a number of new ways to do this, including the World ID eyeball-scanning device being touted by OpenAI's Sam Altman.

It's Reddit's 20th anniversary, notes CNBC. And while "MySpace, Digg and Flickr have faded into oblivion," Reddit "has refused to die, chugging along and gaining an audience of over 108 million daily users..."

But now Reddit "faces a gargantuan challenge gaining new users, particularly if Google's search floodgates dry up." [I]n the age of AI, many users simply "go the easiest possible way," said Ann Smarty, a marketing and reputation management consultant who helps brands monitor consumer perception on Reddit. And there may be no simpler way of finding answers on the internet than simply asking ChatGPT a question, Smarty said. "People do not want to click," she said. "They just want those quick answers."

But in response, CNBC's headline argues that Reddit "is fighting AI with AI." It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others' web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week.

IT

Duolingo Stock Plummets After Slowing User Growth, Possibly Caused By 'AI-First' Backlash (fool.com) 24

"Duolingo stock fell for the fourth straight trading day on Wednesday," reported Investor's Business Daily, "as data shows user growth slowing for the language-learning software provider."

Jefferies analyst John Colantuoni said he was "concerned" by this drop — saying it "may be the result of Duolingo's poorly received AI-driven hiring announcement in late April (later clarified in late May)." Also Wednesday, DA Davidson analyst Wyatt Swanson slashed his price target on Duolingo stock to 500 from 600, but kept his buy rating. He noted that the "'AI-first' backlash" on social media is hurting Duolingo's brand sentiment. However, he expects the impact to be temporary.

Colantuoni also maintained a "hold" rating on Duolingo stock — though by Monday Duolingo fell below its 50-day moving average line (which Investor's Business Daily calls "a key sell signal.")

And Thursday afternoon (2:30 p.m. EST) Duolingo's stock had dropped 14% for the week, notes The Motley Fool: While 30 days' worth of disappointing daily active user (DAU) data isn't bad in and of itself, it extends a worrying trend. Over the last five months, the company's DAU growth declined from 56% in February to 53% in March, 41% in April, 40% in May [the month after the "AI-first" announcement], and finally 37% in June.

This deceleration is far from a death knell for Duolingo's stock. But the market may be justified in lowering the company's valuation until it sees improving data. Even after this drop, the company trades at 106 times free cash flow, including stock-based compensation.

Maybe everyone's just practicing their language skills with ChatGPT?

AI

Call Center Workers Are Tired of Being Mistaken for AI (bloomberg.com) 83

Bloomberg reports: By the time Jessica Lindsey's customers accuse her of being an AI, they are often already shouting. For the past two years, her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human. Sometimes they ask her straight, 'Are you an AI?' Other times they just start yelling commands: 'Speak to a representative! Speak to a representative...!' Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. "They just end up yelling at me and hanging up," she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears. "Like, I can't believe I just got cut down at 9:30 in the morning because they had to deal with the AI before they got to me...."

In Australia, Canada, Greece and the US, call center agents say they've been repeatedly mistaken for AI. These people, who spend hours talking to strangers, are experiencing surreal conversations, where customers ask them to prove they are not machines... [Seth, a US-based Concentrix worker] said he is asked if he's AI roughly once a week. In April, one customer quizzed him for around 20 minutes about whether he was a machine. The caller asked about his hobbies, about how he liked to go fishing when not at work, and what kind of fishing rod he used. "[It was as if she wanted] to see if I glitched," he said. "At one point, I felt like she was an AI trying to learn how to be human...."

Sarah, who works in benefits fraud-prevention for the US government — and asked to use a pseudonym for fear of being reprimanded for talking to the media — said she is mistaken for AI between three or four times every month... Sarah tries to change her inflections and tone of voice to sound more human. But she's also discovered another point of differentiation with the machines. "Whenever I run into the AI, it just lets you talk, it doesn't cut you off," said Sarah, who is based in Texas. So when customers start to shout, she now tries to interrupt them. "I say: 'Ma'am (or Sir). I am a real person. I'm sitting in an office in the southern US. I was born.'"

Desktops (Apple)

After 27 Years, Engineer Discovers How To Display Secret Photo In Power Mac ROM (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: On Tuesday, software engineer Doug Brown published his discovery of how to trigger a long-known but previously inaccessible Easter egg in the Power Mac G3's ROM: a hidden photo of the development team that nobody could figure out how to display for 27 years. While Pierre Dandumont first documented the JPEG image itself in 2014, the method to view it on the computer remained a mystery until Brown's reverse engineering work revealed that users must format a RAM disk with the text "secret ROM image."

Brown stumbled upon the image while using a hex editor tool called Hex Fiend with Eric Harmon's Mac ROM template to explore the resources stored in the beige Power Mac G3's ROM. The ROM appeared in desktop, minitower, and all-in-one G3 models from 1997 through 1999. "While I was browsing through the ROM, two things caught my eye," Brown wrote. He found both the HPOE resource containing the JPEG image of team members and a suspicious set of Pascal strings in the PowerPC-native SCSI Manager 4.3 code that included ".Edisk," "secret ROM image," and "The Team."

The strings provided the crucial clue Brown needed. After extracting and disassembling the code using Ghidra, he discovered that the SCSI Manager was checking for a RAM disk volume named "secret ROM image." When found, the code would create a file called "The Team" containing the hidden JPEG data. Brown initially shared his findings on the #mac68k IRC channel, where a user named Alex quickly figured out the activation method. The trick requires users to enable the RAM Disk in the Memory control panel, restart, select the RAM Disk icon, choose "Erase Disk" from the Special menu, and type "secret ROM image" into the format dialog. "If you double-click the file, SimpleText will open it," Brown explains on his blog just before displaying the hidden team photo that emerges after following the steps.
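
The activation logic Brown describes can be sketched roughly as follows. This is a hypothetical Python rendering for clarity: the real check lives in the PowerPC SCSI Manager 4.3 code, and `maybe_reveal_easter_egg` is an invented name; only the volume name, file name, and HPOE resource come from the article.

```python
SECRET_VOLUME_NAME = "secret ROM image"  # typed into the Erase Disk dialog
EGG_FILE_NAME = "The Team"               # file the driver creates on the RAM disk

def maybe_reveal_easter_egg(volume_name: str, hpoe_jpeg: bytes) -> dict:
    """Simulate the check in SCSI Manager 4.3: when a freshly formatted
    RAM disk carries the magic name, drop a file containing the hidden
    team photo (the JPEG stored in the ROM's HPOE resource).
    Returns the files present on the new volume."""
    files = {}
    if volume_name == SECRET_VOLUME_NAME:
        files[EGG_FILE_NAME] = hpoe_jpeg
    return files

assert maybe_reveal_easter_egg("Untitled", b"\xff\xd8...") == {}
assert EGG_FILE_NAME in maybe_reveal_easter_egg(SECRET_VOLUME_NAME, b"\xff\xd8...")
```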

Graphics

Graphics Artists In China Push Back On AI and Its Averaging Effect (theverge.com) 33

Graphic artists in China are pushing back against AI image generators, which they say "profoundly shifts clients' perception of their work, specifically in terms of how much that work costs and how much time it takes to produce," reports The Verge. "Freelance artists or designers working in industries with clients that invest in stylized, eye-catching graphics, like advertising, are particularly at risk." From the report: Long before AI image generators became popular, graphic designers at major tech companies and in-house designers for large corporate clients were often instructed by managers to crib aesthetics from competitors or from social media, according to one employee at a major online shopping platform in China, who asked to remain anonymous for fear of retaliation from their employer. Where a human would need to understand and reverse engineer a distinctive style to recreate it, AI image generators simply create randomized mutations of it. Often, the results will look like obvious copies and include errors, but other graphic designers can then edit them into a final product.

"I think it'd be easier to replace me if I didn't embrace [AI]," the shopping platform employee says. Early on, as tools like Stable Diffusion and Midjourney became more popular, their colleagues who spoke English well were selected to study AI image generators to increase in-house expertise on how to write successful prompts and identify what types of tasks AI was useful for. Ultimately, it was useful for copying styles from popular artists that, in the past, would take more time to study. "I think it forces both designers and clients to rethink the value of designers," Jia says. "Is it just about producing a design? Or is it about consultation, creativity, strategy, direction, and aesthetic?" [...]

Across the board, though, artists and designers say that AI hype has negatively impacted clients' view of their work's value. Now, clients expect a graphic designer to produce work on a shorter timeframe and for less money, which also has its own averaging impact, lowering the ceiling for what designers can deliver. As clients lower budgets and squish timelines, the quality of the designers' output decreases. "There is now a significant misperception about the workload of designers," [says Erbing, a graphic designer in Beijing who has worked with several ad agencies and asked to be called by his nickname]. "Some clients think that since AI must have improved efficiency, they can halve their budget." But this perception runs contrary to what designers spend the majority of their time doing, which is not necessarily just making any image, Erbing says.

Privacy

Facebook Is Asking To Use Meta AI On Photos In Your Camera Roll You Haven't Yet Shared (techcrunch.com) 19

Facebook is prompting users to opt into a feature that uploads photos from their camera roll -- even those not shared on the platform -- to Meta's servers for AI-driven suggestions like collages and stylized edits. While Meta claims the content is private and not used for ads, opting in allows the company to analyze facial features and retain personal data under its broad AI terms, raising privacy concerns. TechCrunch reports: The feature is being suggested to Facebook users when they're creating a new Story on the social networking app. Here, a screen pops up and asks if the user will opt into "cloud processing" to allow creative suggestions. As the pop-up message explains, by clicking "Allow," you'll let Facebook generate new ideas from your camera roll, like collages, recaps, AI restylings, or photo themes. To work, Facebook says it will upload media from your camera roll to its cloud (meaning its servers) on an "ongoing basis," based on information like time, location, or themes.

The message also notes that only you can see the suggestions, and the media isn't used for ad targeting. However, by tapping "Allow," you are agreeing to Meta's AI Terms. This allows your media and facial features to be analyzed by AI, it says. The company will additionally use the date and presence of people or objects in your photos to craft its creative ideas. [...] According to Meta's AI Terms around image processing, "once shared, you agree that Meta will analyze those images, including facial features, using AI. This processing allows us to offer innovative new features, including the ability to summarize image contents, modify images, and generate new content based on the image," the text states.

The same AI terms also give Meta's AIs the right to "retain and use" any personal information you've shared in order to personalize its AI outputs. The company notes that it can review your interactions with its AIs, including conversations, and those reviews may be conducted by humans. The terms don't define what Meta considers personal information, beyond saying it includes "information you submit as Prompts, Feedback, or other Content." We have to wonder whether the photos you've shared for "cloud processing" also count here.

Earth

Renewables Soar, But Fossil Fuels Continue To Rise as Global Electricity Demand Hits Record Levels (energyinst.org) 48

In a year when average air temperatures consistently breached the 1.5C warming threshold, global CO₂-equivalent emissions from energy rose by 1%, marking yet another record, the fourth in as many years. From a report: Wind and solar energy alone expanded by an impressive 16% in 2024, nine times faster than total energy demand. Yet this growth did not fully counterbalance rising demand elsewhere, with total fossil fuel use growing by just over 1%, highlighting a transition defined as much by disorder as by progress.

Crude oil demand in OECD countries remained flat, following a slight decline in the previous year. In contrast, non-OECD countries, where much of the world's energy demand growth is concentrated and fossil fuels continue to play a dominant role, saw oil demand rise by 1%. Notably, Chinese crude oil demand fell in 2024 by 1.2%, indicating that 2023 may have reached a peak. Elsewhere, global natural gas demand rebounded, rising by 2.5% as gas markets rebalanced after the 2023 slump. India's demand for coal rose 4% in 2024 and now equals that of the CIS, Southern and Central America, North America, and Europe combined.

Social Networks

Brazil Supreme Court Rules Digital Platforms Are Liable For Users' Posts (ft.com) 41

Brazil's supreme court has ruled that social media platforms can be held legally responsible for their users' posts. From a report: Companies such as Facebook, TikTok and X will have to act immediately to remove material such as hate speech, incitement to violence or "anti-democratic acts," even without a prior judicial takedown order, as a result of the decision in Latin America's largest nation late on Thursday.

Advertising

As AI Kills Search Traffic, Google Launches Offerwall To Boost Publisher Revenue (techcrunch.com) 37

An anonymous reader quotes a report from TechCrunch: Google's AI search features are killing traffic to publishers, so now the company is proposing a possible solution. On Thursday, the tech giant officially launched Offerwall, a new tool that allows publishers to generate revenue beyond the more traffic-dependent options, like ads.

Offerwall lets publishers give their sites' readers a variety of ways to access their content, including through options like micropayments, taking surveys, watching ads, and more. In addition, Google says that publishers can add their own options to the Offerwall, like signing up for newsletters. The new feature is available for free in Google Ad Manager after earlier tests with 1,000 publishers that spanned over a year.
While no broad case studies were shared, India's Sakal Media Group implemented Google Ad Manager's Offerwall feature and saw a 20% revenue boost and up to 2 million more impressions in three months. Overall, publishers testing Offerwall experienced an average 9% revenue lift, with some seeing between 5% and 15%.

Australia

Australia Regulator and YouTube Spar Over Under-16s Social Media Ban 26

Australia's eSafety Commissioner has urged the government to deny YouTube an exemption from upcoming child safety regulations, citing research showing it exposes more children to harmful content than any other platform. YouTube pushed back, calling the commissioner's stance inconsistent with government data and parental feedback. "The quarrel adds an element of uncertainty to the December rollout of a law being watched by governments and tech leaders around the world as Australia seeks to become the first country to fine social media firms if they fail to block users aged under 16," reports Reuters. From the report: The centre-left Labor government of Anthony Albanese has previously said it would give YouTube a waiver, citing the platform's use for education and health. Other social media companies such as Meta's Facebook and Instagram, Snapchat, and TikTok have argued such an exemption would be unfair. eSafety Commissioner Julie Inman Grant said she wrote to the government last week to say there should be no exemptions when the law takes effect. She added that the regulator's research found 37% of children aged 10 to 15 reported seeing harmful content on YouTube -- the most of any social media site. [...]

YouTube, in a blog post, accused Inman Grant of giving inconsistent and contradictory advice, which discounted the government's own research which found 69% of parents considered the video platform suitable for people under 15. "The eSafety commissioner chose to ignore this data, the decision of the Australian Government and other clear evidence from teachers and parents that YouTube is suitable for younger users," wrote Rachel Lord, YouTube's public policy manager for Australia and New Zealand.

Inman Grant, asked about surveys supporting a YouTube exemption, said she was more concerned "about the safety of children and that's always going to surpass any concerns I have about politics or being liked or bringing the public onside". A spokesperson for Communications Minister Anika Wells said the minister was considering the online regulator's advice and her "top priority is making sure the draft rules fulfil the objective of the Act and protect children from the harms of social media."
