GNU is Not Unix

Richard Stallman Was Asked: Is Software Piracy Wrong? (slashdot.org) 205

On Friday, 72-year-old Richard Stallman made a two-hour-and-20-minute appearance at the Georgia Institute of Technology, talking about everything from AI and connected cars to smartphones, age verification laws, and his favorite Linux distro. But early on, Stallman also told the audience: "I despise DRM... I don't want any copy of anything with DRM. Whatever it is, I never want it so badly that I would bow down to DRM." (So he doesn't use Spotify or Netflix...)

This led to an interesting moment when someone asked him later if we have an ethical obligation to avoid piracy. First Stallman swapped in his preferred phrase, "forbidden sharing"...

"I won't use the word piracy to refer to sharing. Sharing is good and it should be lawful. Those laws are wrong. Copyright as it is now is an injustice."

Stallman said "I don't hesitate to share copies of anything," but added that "I don't have copies of non-free software, because I'm disgusted by it." After a pause, he added this: "Just because there is a law to give some people unjust power, that doesn't mean breaking that law becomes wrong....

"Dividing people by forbidding them to help each other is nasty."

And later Stallman was asked how he watches movies if he's opposed to DRM-heavy sites like Netflix and the DRM in Blu-ray discs. "The only way I can see a movie is if I get a file — you know, like an MP4 file or MKV file. And I would get that, I suppose, by copying from somebody else."

"Sharing is good. Stopping people from sharing is evil."
The Media

Is Google Prioritizing YouTube and X Over News Publishers on Discover? (pressgazette.co.uk) 32

Earlier this month, the media site Press Gazette reported that Google "is increasingly prioritising AI summaries, X posts and YouTube videos" on its "Discover" feed (which appears on the leftmost homescreen page of many Android phones and the Google app's homepage).

"The changes could be devastating for publishers who rely heavily on Discover for referral traffic. And it looks set to accelerate a global trend of declining traffic to publishers from both Google search and Discover." Xavi Beumala from website analytics platform Marfeel warned in a research update: "Google Discover is no longer a publisher-first surface. It's becoming an AI platform with YouTube and X absorbing real estate that once went to newsrooms..." [They warn later that "This is not a marginal UI experiment. It is a reallocation of feed real estate away from links and toward inline Youtube plays and generated summaries."] Google says it prioritises "helpful, reliable, people-first content". Unlike Google News, there is no requirement that Google Discover showcases bona fide publisher websites.

In recent months fake news stories published by fraudulent website publishers have been promoted on Google Discover, reaping tens of millions of clicks. Google said it was working on a "fix" for this issue...

Facebook, Instagram and TikTok content may also start flowing into the Discover feed in future. When Google announced the addition of posts from X, Instagram and YouTube Shorts in September, it said there would be "more platforms to come".

Google

Google Discover Replaces News Headlines With Sometimes Inaccurate AI-Generated Alternatives (theverge.com) 25

An anonymous reader shared this report from The Verge: In early December, I brought you the news that Google has begun replacing Verge headlines, and those of our competitors, with AI clickbait nonsense in its content feed [which appears on the leftmost homescreen page of many Android phones and the Google app's homepage]. Google appeared to be backing away from the experiment, but now tells The Verge that its AI headlines in Google Discover are a feature, one that "performs well for user satisfaction." I once again see lots of misleading claims every time I check my phone...

For example, Google's AI claimed last week that "US reverses foreign drone ban," citing and linking to this PCMag story for the news. That's not just false — PCMag took pains to explain that it's false in the story that Google links to...! What does the author of that PCMag story think? "It makes me feel icky," Jim Fisher tells me over the phone. "I'd encourage people to click on stories and read them, and not trust what Google is spoon-feeding them." He says Google should be using the headline that humans wrote, and if Google needs a summary, it can use the ones that publications already submit to help search engines parse our work.

Google claims it's not rewriting headlines. It characterizes these new offerings as "trending topics," even though each "trending topic" presents itself as one of our stories, links to our stories, and uses our images, all without competent fact-checking to ensure the AI is getting them right... The AI is also no longer restricted to roughly four words per headline, so I no longer see nonsense headlines like "Microsoft developers using AI" or "AI tag debate heats." (Instead, I occasionally see tripe like "Fares: Need AAA & AA Games" or "Dispatch sold millions; few avoided romance.")

But Google's AI has no clue what parts of these stories are new, relevant, significant, or true, and it can easily confuse one story for another. On December 26th, Google told me that "Steam Machine price & HDMI details emerge." They hadn't. On January 11th, Google proclaimed that "ASUS ROG Ally X arrives." (It arrived in 2024; the new Xbox Ally arrived months ago.) On January 20th, it wrote that "Glasses-free 3D tech wows," introducing readers to "New 3D tech called Immensity from Leia" — but linking to this TechRadar story about an entirely different company called Visual Semiconductor...

Google declined our request for an interview to more fully explain the idea.

The site Android Police spotted more inaccurate headlines in December: A story from 9to5Google, which was actually titled 'Don't buy a Qi2 25W wireless charger hoping for faster speeds — just get the 'slower' one instead', was retitled as 'Qi2 slows older Pixels.' Similarly, Ars Technica's 'Valve's Steam Machine looks like a console, but don't expect it to be priced like one' was changed to 'Steam Machine price revealed.' At the time, we believed that the inaccuracies were due to the feature being unstable and in early testing.... Now, Google has stopped calling Discover's replacement of human-written headlines an "experiment."
"Google buries a 'Generated with AI, which can make mistakes' message under the 'See more' button in the summary," reports 9to5Google, "making it look like this is the publisher's intended headline." While it is obvious that Google has refined this feature over the past couple of months, it doesn't take long to still find plenty of misleading headlines throughout Discover... Another article from NotebookCheck about an Anker power bank with a retractable cable was given a headline that's about another product entirely. A pair of headlines from Tom's Hardware and PCMag, meanwhile, show the two sides of using AI for this purpose. The Tom's Hardware headline, "Free GPU & Amazon Scams," isn't representative of the actual article, which is about someone who bought a GPU from Amazon, canceled their order, and the retailer shipped it anyway. There's nothing about "Amazon Scams" in the article.
GNU is Not Unix

Richard Stallman Critiques AI, Connected Cars, Smartphones, and DRM (youtube.com) 77

Richard Stallman spoke Friday at Atlanta's Georgia Institute of Technology, continuing his activism for free software while also addressing today's new technologies.

Speaking about AI, Stallman warned that "nowadays, people often use the term artificial intelligence for things that aren't intelligent at all..." He makes a point of calling large language models "generators" because "They generate text and they don't really understand what that text means." (And they also make mistakes "without batting a virtual eyelash. So you can't trust anything that they generate.") Stallman says "Every time you call them AI, you are endorsing the claim that they are intelligent and they're not. So let's refuse to do that."

"So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them."

"By the way, as far as I can tell, none of them is free software."

When it comes to today's cars, Stallman says they contain "malicious functionalities... Cars should not be connected. They should not upload anything." (He adds that "I am hoping to find a skilled mechanic to work with me in a project to make disconnected cars.")

And later Stallman calls the smartphone "an Orwellian tracking and surveillance device," saying he refuses to own one. (An advantage of free software is that it allows the removal of malicious functionalities.)

Stallman spoke for about 53 minutes — but then answered questions for nearly 90 more minutes. Here are some of the highlights...
AI

The Risks of AI in Schools Outweigh the Benefits, Report Says (npr.org) 33

This month saw results from a yearlong global study of "potential negative risks that generative AI poses to students". The study (by the Brookings Institution's Center for Universal Education) also suggests how to prevent risks and maximize benefits: After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits.
"At the top of Brookings' list of risks is the negative effect AI can have on children's cognitive growth," reports NPR — "how they learn new skills and perceive and solve problems." The report describes a kind of doom loop of AI dependence, where students increasingly off-load their own thinking onto the technology, leading to the kind of cognitive decline or atrophy more commonly associated with aging brains... As one student told the researchers, "It's easy. You don't need to (use) your brain." The report offers a surfeit of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking and even creativity. And this could have enormous consequences if these young people grow into adults without learning to think critically...

Survey responses revealed deep concern that use of AI, particularly chatbots, "is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health," the report says. One of the many problems with kids' overuse of AI is that the technology is inherently sycophantic — it has been designed to reinforce users' beliefs... Report co-author Rebecca Winthrop offers an example of a child interacting with a chatbot, "complaining about your parents and saying, 'They want me to wash the dishes — this is so annoying. I hate my parents.' The chatbot will likely say, 'You're right. You're misunderstood. I'm so sorry. I understand you.' Versus a friend who would say, 'Dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal.' That right there is the problem."

AI did have some advantages, the article points out: The report says another benefit of AI is that it allows teachers to automate some tasks: "generating parent emails ... translating materials, creating worksheets, rubrics, quizzes, and lesson plans" — and more. The report cites multiple research studies that found important time-saving benefits for teachers, including one U.S. study that found that teachers who use AI save an average of nearly six hours a week and about six weeks over the course of a full school year...

AI can also help make classrooms more accessible for students with a wide range of learning disabilities, including dyslexia. But "AI can massively increase existing divides" too, [warns Winthrop, a senior fellow at Brookings]. That's because the free AI tools that are most accessible to students and schools can also be the least reliable and least factually accurate... "[T]his is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."

The report calls for more research — and makes several recommendations (including "holistic" learning and "AI tools that teach, not tell"). But this may be its most important recommendation: "Provide a clear vision for ethical AI use that centers human agency..."

"We find that AI has the potential to benefit or hinder students, depending on how it is used."
AI

Google's 'AI Overviews' Cite YouTube For Health Queries More Than Any Medical Sites, Study Suggests (theguardian.com) 38

An anonymous reader shared this report from the Guardian: Google's search feature AI Overviews cites YouTube more than any medical website when answering queries about health conditions, according to research that raises fresh questions about a tool seen by 2 billion people each month.

The company has said its AI summaries, which appear at the top of search results and use generative AI to answer questions from users, are "reliable" and cite reputable medical sources such as the Centers for Disease Control and Prevention and the Mayo Clinic. However, a study that analysed responses to more than 50,000 health queries, captured using Google searches from Berlin, found the top cited source was YouTube. The video-sharing platform is the world's second most visited website, after Google itself, and is owned by Google. Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said. "This matters because YouTube is not a medical publisher," the researchers wrote. "It is a general-purpose video platform...."
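SE Ranking's headline figure is a citation-share tally: collect every source an AI Overview links to, reduce each URL to its domain, and compute each domain's fraction of all citations. A minimal sketch of that tally in Python; the function name and toy URLs are illustrative, not SE Ranking's actual pipeline:

```python
from collections import Counter
from urllib.parse import urlparse

def citation_shares(cited_urls):
    """Fraction of all AI Overview citations going to each domain.

    `cited_urls` holds one entry per citation observed across the
    captured responses (the study analysed 50,000+ health queries).
    """
    domains = [urlparse(u).netloc.removeprefix("www.") for u in cited_urls]
    counts = Counter(domains)
    total = sum(counts.values())
    return {domain: n / total for domain, n in counts.most_common()}

# Toy data for illustration only:
urls = [
    "https://www.youtube.com/watch?v=abc123",
    "https://www.mayoclinic.org/tests/liver-function",
    "https://www.youtube.com/watch?v=xyz789",
    "https://www.cdc.gov/liver-disease/about",
]
for domain, share in citation_shares(urls).items():
    print(f"{domain}: {share:.2%}")  # youtube.com tops the toy tally at 50%
```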

In one case that experts said was "dangerous" and "alarming", Google provided bogus information about crucial liver function tests that could have left people with serious liver disease wrongly thinking they were healthy. The company later removed AI Overviews for some but not all medical searches... Hannah van Kolfschooten, a researcher specialising in AI, health and law at the University of Basel who was not involved with the research, said: "This study provides empirical evidence that the risks posed by AI Overviews for health are structural, not anecdotal. It becomes difficult for Google to argue that misleading or harmful health outputs are rare cases.

"Instead, the findings show that these risks are embedded in the way AI Overviews are designed. In particular, the heavy reliance on YouTube rather than on public health authorities or medical institutions suggests that visibility and popularity, rather than medical reliability, is the central driver for health knowledge."

AI

AI Luminaries Clash At Davos Over How Close Human-Level Intelligence Really Is (yahoo.com) 105

An anonymous reader shared this report from Fortune: The large language models (LLMs) that have captivated the world are not a path to human-level intelligence, two AI experts asserted in separate remarks at Davos. Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind and the executive who leads the development of Google's Gemini models, said today's AI systems, as impressive as they are, are "nowhere near" human-level artificial general intelligence, or AGI. [Though the article notes that Hassabis later predicted there was a 50% chance AGI might be achieved within the decade.] Yann LeCun — an AI pioneer who won a Turing Award, computer science's most prestigious prize, for his work on neural networks — went further, saying that the LLMs that underpin all of the leading AI models will never be able to achieve humanlike intelligence and that a completely different approach is needed... ["The reason ... LLMs have been so successful is because language is easy," LeCun said later.]

Their views differ starkly from the position asserted by top executives of Google's leading AI rivals, OpenAI and Anthropic, who assert that their AI models are about to rival human intelligence. Dario Amodei, the CEO of Anthropic, told an audience at Davos that AI models would replace the work of all software developers within a year and would reach "Nobel-level" scientific research in multiple fields within two years. He said 50% of white-collar jobs would disappear within five years. OpenAI CEO Sam Altman (who was not at Davos this year) has said we are already beginning to slip past human-level AGI toward "superintelligence," or AI that would be smarter than all humans combined...

The debate over AGI may be somewhat academic for many business leaders. The more pressing question, says Cognizant CEO Ravi Kumar, is whether companies can capture the enormous value that AI already offers. According to Cognizant research released ahead of Davos, current AI technology could unlock approximately $4.5 trillion in U.S. labor productivity — if businesses can implement it effectively.

AI

US Insurer 'Lemonade' Cuts Rates 50% for Drivers Using Tesla's 'Full Self-Driving' Software (reuters.com) 118

An anonymous reader shared this report from Reuters: U.S. insurer Lemonade said on Wednesday it would offer a 50% rate cut for drivers of Tesla electric vehicles when the automaker's Full Self-Driving (FSD) driver assistance software is steering because it had data showing it reduced accidents. Lemonade's move is an endorsement of Tesla CEO Elon Musk's claims that the company's vehicle technology is safer than human drivers, despite concerns flagged by regulators and safety experts.

As part of a collaboration, Tesla is giving Lemonade access to vehicle telemetry data that will be used to distinguish between miles driven by FSD — which requires a human driver's supervision — and human driving, the New York-based insurer said. The price cut is for Lemonade's pay-per-mile insurance. "We're looking at this in extremely high resolution, where we see every minute, every second that you drive your car, your Tesla," Lemonade co-founder Shai Wininger told Reuters. "We get millions of signals emitted by that car into our systems. And based on that, we're pricing your rate."

Wininger said data provided by Tesla combined with Lemonade's own insurance data showed that the use of FSD made driving about two times safer for the average driver. He did not provide details on the data Tesla shared but said no payments were involved in the deal between Lemonade and the EV maker for the data and the new offering... Wininger said the company would reduce rates further as Tesla releases FSD software updates that improve safety. "Traditional insurers treat a Tesla like any other car, and AI like any other driver," Wininger said. "But a driver who can see 360 degrees, never gets drowsy, and reacts in milliseconds isn't like any other driver."
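Because the product is pay-per-mile and Tesla's telemetry splits each trip into FSD-steered and human-driven miles, the 50% cut reduces to blended-rate arithmetic. A toy sketch; the base per-mile rate below is invented for illustration, since the article gives no actual rates:

```python
def monthly_premium(fsd_miles, human_miles,
                    base_rate_per_mile=0.08, fsd_discount=0.50):
    """Blend per-mile pricing: miles steered by FSD get the discounted rate.

    base_rate_per_mile is a made-up figure for illustration only.
    """
    fsd_rate = base_rate_per_mile * (1 - fsd_discount)
    return fsd_miles * fsd_rate + human_miles * base_rate_per_mile

# A driver covering 1,000 miles a month, 70% of them under FSD:
print(f"${monthly_premium(700, 300):.2f}")  # 700*0.04 + 300*0.08 = $52.00
```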

AI

Anthropic Updates Claude's 'Constitution,' Just In Case Chatbot Has a Consciousness (gizmodo.com) 95

TechCrunch reports: On Wednesday, Anthropic released a revised version of Claude's Constitution, a living document that provides a "holistic" explanation of the "context in which Claude operates and the kind of entity we would like Claude to be...." For years, Anthropic has sought to distinguish itself from its competitors via what it calls "Constitutional AI," a system whereby its chatbot, Claude, is trained using a specific set of ethical principles rather than human feedback... The 80-page document has four separate parts, which, according to Anthropic, represent the chatbot's "core values." Those values are:

1. Being "broadly safe."
2. Being "broadly ethical."
3. Being compliant with Anthropic's guidelines.
4. Being "genuinely helpful..."

In the safety section, Anthropic notes that its chatbot has been designed to avoid the kinds of problems that have plagued other chatbots and, when evidence of mental health issues arises, direct the user to appropriate services...

Anthropic's Constitution ends on a decidedly dramatic note, with its authors taking a fairly big swing and questioning whether the company's chatbot does, indeed, have consciousness. "Claude's moral status is deeply uncertain," the document states. "We believe that the moral status of AI models is a serious question worth considering. This view is not unique to us: some of the most eminent philosophers on the theory of mind take this question very seriously."

Gizmodo reports: The company also said that it dedicated a section of the constitution to Claude's nature because of "our uncertainty about whether Claude might have some kind of consciousness or moral status (either now or in the future)." The company is apparently hoping that by defining this within its foundational documents, it can protect "Claude's psychological security, sense of self, and well-being."
Privacy

TikTok Is Now Collecting Even More Data About Its Users (wired.com) 41

An anonymous reader quotes a report from Wired: When TikTok users in the U.S. opened the app today, they were greeted with a pop-up asking them to agree to the social media platform's new terms of service and privacy policy before they could resume scrolling. These changes are part of TikTok's transition to new ownership. In order to continue operating in the U.S., TikTok was compelled by the U.S. government to transition from Chinese control to a new, American-majority corporate entity. Called TikTok USDS Joint Venture LLC, the new entity is made up of a group of investors that includes the software company Oracle. It's easy to tap "agree" and keep on scrolling through videos on TikTok, so users might not fully understand the extent of changes they are agreeing to with this pop-up.

Now that it's under U.S.-based ownership, TikTok potentially collects more detailed information about its users, including precise location data. Here are the three biggest changes to TikTok's privacy policy that users should know about. TikTok's change in location tracking is one of the most notable updates in this new privacy policy. Before this update, the app did not collect the precise, GPS-derived location data of U.S. users. Now, if you give TikTok permission to use your phone's location services, then the app may collect granular information about your exact whereabouts. Similar kinds of precise location data are also tracked by other social media apps, like Instagram and X.

[...] Rather than an adjustment, TikTok's policy on AI interactions adds a new topic to the privacy policy document. Now, users' interactions with any of TikTok's AI tools explicitly fall under data that the service may collect and store. This includes any prompts as well as the AI-generated outputs. The metadata attached to your interactions with AI tools may also be automatically logged. [...] This change to TikTok's privacy policy may not be as immediately noticeable to users, but it will likely have an impact on the types of ads you see outside of TikTok. So, rather than just using your collected data to target you while using the app, TikTok may now further leverage that info to serve you more relevant ads wherever you go online. As part of this advertising change, TikTok also now explicitly mentions publishers as one kind of partner the platform works with to get new data.

Businesses

Toilet Maker Toto's Shares Get Unlikely Boost From AI Rush (yahoo.com) 28

An anonymous reader shares a report: Shares of Japanese toilet maker Toto gained the most in five years after booming memory demand excited expectations of growth in its little-known chipmaking materials operations. The stock surged as much as 11%, its steepest rise since February 2021, after Goldman Sachs analysts said Toto's electrostatic chucks used in NAND chipmaking will likely benefit from an AI infrastructure buildout that's tightening supplies of both high-end and commodity memory.

[...] Known for its heated toilet seats, the maker of washlets has for decades been part of the semiconductor and display supply chain via its advanced ceramic parts and films. Its electrostatic chucks -- which it began mass producing in 1988 -- are used to hold silicon wafers in place during chipmaking while helping to control temperature and contamination, according to the company. The company's new domain business accounted for 42% of its total operating income in the fiscal year ended March 2025, Bloomberg-compiled data show.

Businesses

The Great Graduate Job Drought (ft.com) 35

Global hiring remains 20% below pre-pandemic levels and job switching has hit a 10-year low, according to a LinkedIn report, and new university graduates are bearing the brunt of a labor market that increasingly favors experienced candidates over fresh talent.

In the UK, the Institute of Student Employers found that graduate hiring fell 8% in the last academic year and employers now receive 140 applications for each vacancy, up from 86 per vacancy in 2022-23. US data from the New York Federal Reserve shows unemployment among recent college graduates aged 22-27 stands at 5.8% versus 4.1% for all workers.

Recruiter Reed had 180,000 graduate job postings in 2021 but only 55,000 in 2024. In a survey of Reed clients last year, 15% said they had reduced hiring because of AI. London mayor Sadiq Khan said the capital will be "at the sharpest edge" of AI-driven changes and that entry-level jobs will be first to go.
AI

When Two Years of Academic Work Vanished With a Single Click (nature.com) 132

Marcel Bucher, a professor of plant sciences at the University of Cologne in Germany, lost two years of carefully structured academic work in an instant when he temporarily disabled ChatGPT's "data consent" option in August to test whether the AI tool's functions would still work without providing OpenAI his data. All his chats were permanently deleted and his project folders emptied without any warning or undo option, he wrote in a post on Nature.

Bucher, a ChatGPT Plus subscriber paying $20 per month, had used the platform daily to draft grant applications, prepare teaching materials, revise publication drafts and create exams. He contacted OpenAI support, first receiving responses from an AI agent before a human employee confirmed the data was permanently lost and unrecoverable. OpenAI cited "privacy by design" as the reason, telling Nature it does provide a confirmation prompt before users permanently delete a chat but maintains no backups.

Bucher said he had saved partial copies of some materials, but the underlying prompts, iterations, and project folders -- what he describes as the intellectual scaffolding behind his finished work -- are gone forever.
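One practical takeaway is to treat hosted chat history as ephemeral and keep local archives. ChatGPT's settings offer a data export whose zip includes a conversations.json file; a periodic script over that file would have preserved the prompts and iterations Bucher lost. A rough sketch, assuming the field layout past exports have used (OpenAI doesn't guarantee it and it may change):

```python
import json
from pathlib import Path

def archive_export(export_dir, out_dir="chatgpt_archive"):
    """Write each conversation from an unpacked ChatGPT data export
    to its own plain-text file. Field names reflect past export
    formats and are an assumption, not a documented API.
    """
    conversations = json.loads(
        (Path(export_dir) / "conversations.json").read_text())
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for i, conv in enumerate(conversations):
        lines = []
        for node in conv.get("mapping", {}).values():
            msg = node.get("message") or {}
            parts = (msg.get("content") or {}).get("parts") or []
            text = "\n".join(p for p in parts if isinstance(p, str))
            if text:
                role = msg.get("author", {}).get("role", "?")
                lines.append(f"[{role}] {text}")
        title = conv.get("title") or f"untitled-{i}"
        safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)
        (out / f"{i:04d}-{safe[:60]}.txt").write_text("\n\n".join(lines))
```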
AI

Anthropic's AI Keeps Passing Its Own Company's Job Interview (anthropic.com) 39

Anthropic has a problem that most companies would envy: its AI model keeps getting so good, the company wrote in a blog post, that it passes the company's own hiring test for performance engineers. The test, designed in late 2023 by optimization lead Tristan Hume, asks candidates to speed up code running on a simulated computer chip. Over 1,000 people have taken it, and dozens now work at Anthropic. But Claude Opus 4 outperformed most human applicants.
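For flavor, the kind of win a performance test rewards is mechanical but compounding: hoist invariant work out of hot loops, precompute lookups, cut redundant passes. A toy Python illustration of that idea; Anthropic's actual test targets a simulated chip and is far harder:

```python
import timeit

data = list(range(10_000))

def slow():
    # Recomputes max(data) on every iteration: ~n^2 comparisons total.
    return [x / max(data) for x in data]

def fast():
    # Hoist the loop-invariant max out of the loop: ~n comparisons total.
    m = max(data)
    return [x / m for x in data]

assert slow() == fast()
print("slow:", timeit.timeit(slow, number=1))
print("fast:", timeit.timeit(fast, number=1))  # orders of magnitude quicker
```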

Hume redesigned the test, making it harder. Then Claude Opus 4.5 matched even the best human scores within the two-hour time limit. For his third attempt, Hume abandoned realistic problems entirely and switched to abstract puzzles using a strange, minimal programming language -- something weird enough that Claude struggles with it. Anthropic is now releasing the original test as an open challenge. Beat Claude's best score and ... they want to hear from you.
AI

AI Boosts Research Careers But Flattens Scientific Discovery (ieee.org) 64

Ancient Slashdot reader erice shares the findings from a recent study showing that while AI helped researchers publish more often and boosted their careers, the resulting papers were, on average, less useful. "You have this conflict between individual incentives and science as a whole," says James Evans, a sociologist at the University of Chicago who led the study. From a recent IEEE Spectrum article: To quantify the effect, Evans and collaborators from the Beijing National Research Center for Information Science and Technology trained a natural language processing model to identify AI-augmented research across six natural science disciplines. Their dataset included 41.3 million English-language papers published between 1980 and 2025 in biology, chemistry, physics, medicine, materials science, and geology. They excluded fields such as computer science and mathematics that focus on developing AI methods themselves. The researchers traced the careers of individual scientists, examined how their papers accumulated attention, and zoomed out to consider how entire fields clustered or dispersed intellectually over time. They compared roughly 311,000 papers that incorporated AI in some way -- through the use of neural networks or large language models, for example -- with millions of others that did not.

The results revealed a striking trade-off. Scientists who adopt AI gain productivity and visibility: On average, they publish three times as many papers, receive nearly five times as many citations, and become team leaders a year or two earlier than those who do not. But when those papers are mapped in a high-dimensional "knowledge space," AI-heavy research occupies a smaller intellectual footprint, clusters more tightly around popular, data-rich problems, and generates weaker networks of follow-on engagement between studies. The pattern held across decades of AI development, spanning early machine learning, the rise of deep learning, and the current wave of generative AI. "If anything," Evans notes, "it's intensifying." [...] Aside from recent publishing distortions, Evans's analysis suggests that AI is largely automating the most tractable parts of science rather than expanding its frontiers.
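The "smaller intellectual footprint" finding is the kind of thing embeddings make measurable: represent each paper as a vector, then ask how dispersed a group of papers is, with tighter clustering meaning a smaller footprint. A minimal sketch of one plausible dispersion metric (mean pairwise cosine distance); the study's actual metric and embedding model are more involved:

```python
import numpy as np

def mean_pairwise_cosine_distance(embeddings):
    """Dispersion of papers in embedding space: higher means the set
    spreads over more of the space, lower means tight clustering.
    `embeddings` is an (n_papers, dim) array, e.g. abstract embeddings.
    """
    X = np.asarray(embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    sims = X @ X.T                                    # cosine similarities
    n = len(X)
    off_diag = sims.sum() - np.trace(sims)            # exclude self-pairs
    return 1.0 - off_diag / (n * (n - 1))

rng = np.random.default_rng(0)
tight = rng.normal(0, 0.05, (50, 64)) + rng.normal(0, 1, 64)  # one cluster
spread = rng.normal(0, 1, (50, 64))                           # dispersed
print(mean_pairwise_cosine_distance(tight))   # small footprint
print(mean_pairwise_cosine_distance(spread))  # large, near 1.0
```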

AI

South Korea Launches Landmark Laws To Regulate AI 7

An anonymous reader quotes a report from the Korea Herald: South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to formally establish safety requirements for high-performance, or so-called frontier, AI systems -- a move that sets the country apart in the global regulatory landscape. According to the Ministry of Science and ICT, the new law is designed primarily to foster growth in the domestic AI sector, while also introducing baseline safeguards to address potential risks posed by increasingly powerful AI technologies. Officials described the inclusion of legal safety obligations for frontier AI as a world-first legislative step.

The act lays the groundwork for a national-level AI policy framework. It establishes a central decision-making body -- the Presidential Council on National Artificial Intelligence Strategy -- and creates a legal foundation for an AI Safety Institute that will oversee safety and trust-related assessments. The law also outlines wide-ranging support measures, including research and development, data infrastructure, talent training, startup assistance, and help with overseas expansion.

To reduce the initial burden on businesses, the government plans to implement a grace period of at least one year. During this time, it will not carry out fact-finding investigations or impose administrative sanctions. Instead, the focus will be on consultations and education. A dedicated AI Act support desk will help companies determine whether their systems fall within the law's scope and how to respond accordingly. Officials noted that the grace period may be extended depending on how international standards and market conditions evolve. The law applies to three areas only: obligations for high-impact AI, safety obligations for high-performance AI, and transparency requirements for generative AI.

Enforcement under the Korean law is intentionally light. It does not impose criminal penalties. Instead, it prioritizes corrective orders for noncompliance, with fines -- capped at 30 million won ($20,300) -- issued only if those orders are ignored. This, the government says, reflects a compliance-oriented approach rather than a punitive one. Transparency obligations for generative AI largely align with those in the EU, but Korea applies them more narrowly. Content that could be mistaken for real, such as deepfake images, video or audio, must clearly disclose its AI-generated origin. For other types of AI-generated content, invisible labeling via metadata is allowed. Personal or noncommercial use of generative AI is excluded from regulation.
"This is not about boasting that we are the first in the world," said Kim Kyeong-man, deputy minister of the office of artificial intelligence policy at the ICT ministry. "We're approaching this from the most basic level of global consensus."

Korea's approach differs from the EU by defining "high-performance AI" using technical thresholds like cumulative training compute, rather than regulating based on how AI is used. As a result, Korea believes no current models meet the bar for regulation, while the EU is phasing in broader, use-based AI rules over several years.
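A compute-based trigger like Korea's is checkable with the standard training-FLOPs approximation, FLOPs ≈ 6 × parameters × training tokens. A sketch with a placeholder threshold, since the article doesn't state the decree's actual cutoff (for comparison, the EU AI Act's systemic-risk threshold is 10^25 FLOPs):

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard estimate: roughly 6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

THRESHOLD = 1e26  # placeholder for illustration; check the actual decree

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(n_params=70e9, n_tokens=15e12)
print(f"{flops:.2e} FLOPs -> regulated: {flops >= THRESHOLD}")  # 6.30e+24, False
```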
AI

Intel Struggles To Meet AI Data Center Demand 31

Intel says it struggled to satisfy demand for its AI data-center CPUs while new PC chips squeeze margins. CEO Lip-Bu Tan framed the turnaround as supply-constrained, not demand-constrained, with manufacturing yields (18A) improving but still below targets. Reuters reports: The forecast underscores the difficulties faced by Intel in predicting global chip markets, where the company's current products are the result of decisions made years ago. The company, whose shares have risen 40% in the past month, recently launched a long-awaited laptop chip designed to reclaim its lead in personal computers just as a memory chip crunch is expected to depress sales across that industry.

Meanwhile, Intel executives said the company was caught off guard by surging demand for server central processors that accompany AI chips. Despite running its factories at capacity, Intel cannot keep up with demand for the chips, leaving profitable data center sales on the table while the new PC chip squeezes its margins.

"In the short term, I'm disappointed that we are not able "to fully meet the demand in our markets," Chief Executive Officer Lip-Bu Tan told analysts on a conference call. The company forecast current-quarter revenue between $11.7 billion and $12.7 billion, compared with analysts' average estimate of $12.51 billion, according to data compiled by LSEG. It expects adjusted earnings per share to break even in the first quarter, compared with expectations of adjusted earnings of 5 cents per share.
EU

EU Parliament Calls For Detachment From US Tech Giants (heise.de) 102

The European Parliament is calling on the European Commission to reduce dependence on U.S. tech giants by prioritizing EU-based cloud, AI, and open-source infrastructure. The report frames "European Tech First," public procurement reform, and Public Money, Public Code as necessary self-defense against growing U.S. control over critical digital infrastructure. Heise reports: In terms of content, the report focuses on a strategic reorientation of public procurement and infrastructure. The compromise line adopted stipulates that member states can favor European tech providers in strategic sectors to systematically strengthen the technological capacity of the Community. The Greens even called for a stricter regulation here, where the use of products "Made in EU" should become the rule and exceptions would have to be explicitly justified. They also pushed for a definition for cloud infrastructure that provides for full EU jurisdiction without dependencies on third countries.

With the decision, the MEPs want to lay the foundation for a European digital public infrastructure based on open standards and interoperability. The principle of Public Money, Public Code is anchored as a strategic foundation to reduce dependence on individual providers. Software specifically developed for administration with tax money should therefore be made available to everyone under free licenses. For financing, the Parliament relies on the expansion of public-private investments. A "European Sovereign Tech Fund" endowed with ten billion euros was discussed beforehand, for example, to specifically build strategic infrastructures that the market does not provide on its own. The shadow rapporteur for the Greens, Alexandra Geese, sees Europe ready to take control of its digital future with the vote. As long as European data is held by US providers subject to laws such as the Cloud Act, security in Europe is not guaranteed.

Microsoft

The Microsoft-OpenAI Files 20

Longtime Slashdot reader theodp writes: GeekWire takes a look at AI's defining alliance in The Microsoft-OpenAI Files, an epic story drawn from 200+ documents, many made public Friday in Elon Musk's ongoing suit accusing OpenAI and its CEO Sam Altman of abandoning the nonprofit mission (Microsoft is also a defendant). Musk, who was an OpenAI co-founder, is seeking up to $134 billion in damages. "Previously undisclosed emails, messages, slide decks, reports, and deposition transcripts reveal how Microsoft pursued, rebuffed, and backed OpenAI at various moments over the past decade, ultimately shaping the course of the lab that launched the generative AI era," reports GeekWire. "The latest round of documents, filed as exhibits in Musk's lawsuit, [...] show how Nadella and Microsoft's senior leadership team rally in a crisis, maneuver against rivals such as Google and Amazon, and talk about deals in private."

Even though Microsoft didn't have a seat on the OpenAI board, text messages between Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman following Altman's firing as CEO in Nov. 2023 (news of which sent Microsoft's stock plummeting), revealed in the latest filings, show just how influential Microsoft was. A day after Altman's firing, Nadella sent Altman a detailed message from Brad Smith, Microsoft's president and top lawyer, explaining that Microsoft had created a new subsidiary called Microsoft RAI (Responsible Artificial Intelligence) Inc. from scratch -- legal work done, papers ready to file as soon as the WA Secretary of State opened Monday morning -- and was ready to capitalize and operationalize it to "support Sam in whatever way is needed," including absorbing the OpenAI team at a calculated cost of roughly $25 billion. (Altman's reply: "kk"). Just days later, as he planned his return as CEO to the now-reeling-from-Microsoft-punches nonprofit, Altman joined Microsoft's Nadella, Smith, and CTO Kevin Scott in a text messaging thread in which the four vetted prospective board members to replace those who had ousted Altman. Later that night, OpenAI announced Altman's return with the newly constituted board.

If you like stories with happy Microsoft endings, as part of an agreement clearing the way for OpenAI to restructure as a for-profit business, Microsoft in October received a 27% ownership stake in OpenAI worth approximately $135 billion and retains access to the AI startup's technology until 2032, including models that achieve AGI.
Education

Google Begins Offering Free SAT Practice Tests Powered By Gemini (arstechnica.com) 14

An anonymous reader quotes a report from Ars Technica: It's no secret that students worldwide use AI chatbots to do their homework and avoid learning things. On the flip side, students can also use AI as a tool to beef up their knowledge and plan for the future with flashcards or study guides. Google hopes its latest Gemini feature will help with the latter. The company has announced that Gemini can now create free SAT practice tests and coach students to help them get higher scores. As a standardized test, the content of the SAT follows a predictable pattern. So there's no need to use a lengthy, personalized prompt to get Gemini going. Just say something like, "I want to take a practice SAT test," and the chatbot will generate one complete with clickable buttons, graphs, and score analysis.

Of course, generative AI can go off the rails and provide incorrect information, which is a problem when you're trying to learn things. However, Google says it has worked with education firms like The Princeton Review to ensure the AI-generated tests resemble what students will see in the real deal. The interface for Gemini's practice tests includes scoring and the ability to review previous answers. If you are unclear on why a particular answer is right or wrong, the questions have an "Explain answer" button right at the bottom. After you finish the practice exam, the custom interface (which looks a bit like Gemini's Canvas coding tool) can help you follow up on areas that need improvement.
Google says support for the SAT is just the start, "with more tests coming in the future."
