AI

Claude Code Leak Reveals a 'Stealth' Mode for GenAI Code Contributions - and a 'Frustration Words' Regex (pcworld.com) 36

That leak of Claude Code's source code "revealed all kinds of juicy details," writes PC World.

The more than 500,000 lines of code included:

- An 'undercover mode' for Claude that allows it to make 'stealth' contributions to public code bases
- An 'always-on' agent for Claude Code
- A Tamagotchi-style 'Buddy' for Claude

"But one of the stranger bits discovered in the leak is that Claude Code is actively watching our chat messages for words and phrases — including f-bombs and other curses — that serve as signs of user frustration." Specifically, Claude Code includes a file called "userPromptKeywords.ts" with a simple pattern-matching tool called regex, which sweeps each and every message submitted to Claude for certain text matches. In this particular case, the regex pattern is watching for "wtf," "wth," "omfg," "dumbass," "horrible," "awful," "piece of — -" (insert your favorite four-letter word for that one), "f — you," "screw this," "this sucks," and several other colorful metaphors... While the Claude Code leak revealed the existence of the "frustration words" regex, it doesn't give any indication of why Claude Code is scouring messages for these words or what it's doing with them.
Open Source

The Document Foundation Removes Dozens of Collabora Developers (itsfoss.com) 7

Long-time GNOME/OpenOffice.org/LibreOffice contributor Michael Meeks is now general manager of Collabora Productivity. And earlier this month he complained when LibreOffice decided, as reported by Neowin, to bring back its LibreOffice Online project, which had been inactive since 2022. After the original project, to which Collabora was a major contributor, went dormant, Collabora forked the code and created its own product, Collabora Online.

But this week Meeks blogged about even more changes, writing that the Document Foundation (the nonprofit behind LibreOffice) "has decided to eject from membership all Collabora staff and partners. That includes over thirty people who have contributed faithfully to LibreOffice for many years." Meeks argues the ejections were "based on unproven legal concerns and guilt by association." This includes seven of the top ten core committers of all time (excluding release engineers) currently working for Collabora Productivity. The move is the culmination of TDF losing a large number of founders from membership over the last few years, with Thorsten Behrens, Jan 'Kendy' Holesovsky, Rene Engelhard, Caolan McNamara, Michael Meeks, Cor Nouws and Italo Vignoli no longer members. Of the remaining active founders, three of the last four are paid TDF staff (of whom none are programming on the core code).
The blog It's FOSS calls it "LibreOffice Drama." They've confirmed the removals happened, also noting recently adopted Community Bylaws requiring members to step down if they're affiliated with a company in an active legal dispute with the Foundation. But The Document Foundation "also makes clear that a membership revocation is not a ban from contributing, with the project remaining open to anyone, and expects Collabora to keep contributing 'when the time comes.'"

Collabora's Meeks adds in his blog post that there's "bold and ongoing plans to create an entirely new, cut-down, differentiated Collabora Office for users that is smoother, more user friendly, and less feature dense than our Classic product (which will continue to be supported for years for our partners). This gives a chance to innovate faster in a separate place on a smaller, more focused code-base with fewer build configurations, much less legacy, no Java, no database, web-based toolkit and more. We are excited to get executing on that.

"To make this process easier, and to put to bed complaints about having our distro branches in TDF gerrit [for code review], and to move to self-hosted FOSS tooling we are launching our own gerrit to host our existing branch of core... We will continue to make contributions to LibreOffice where that makes sense (if we are welcome to), but it clearly no longer makes much sense to continue investing heavily in building what remains of TDF's community and product for them — while being excluded from its governance. In this regard, we seem to be back where we were fifteen years ago."

Books

Sweden Swaps Screens For Books In the Classroom (arstechnica.com) 68

An anonymous reader quotes a report from Ars Technica: In 2023, the Swedish government announced that the country's schools would be going back to basics, emphasizing skills such as reading and writing, particularly in early grades. After mostly being sidelined, physical books are now being reintroduced into classrooms, and students are learning to write the old-fashioned way: by hand, with a pencil or pen, on sheets of paper. The Swedish government also plans to make schools cellphone-free throughout the country.

Educational authorities have been investing heavily. Last year alone, the education ministry allocated $83 million to purchase textbooks and teachers' guides. In a country with about 11 million people, the aim is for every student to have a physical textbook for each subject. The government also put $54 million towards the purchase of fiction and non-fiction books for students.

These moves represent a dramatic pivot from previous decades, during which Sweden -- and many other nations -- moved away from physical books in favor of tablets and digital resources in an effort to prepare students for life in an online world. Perhaps unsurprisingly, the Nordic country's efforts have sparked a debate on the role of digital technology in education, one that extends well beyond the country's borders. US parents in districts that have adopted digital technology to a great extent may be wondering if educators will reverse course, too.
As for why Sweden is pivoting away from digital devices, researcher Linda Falth said the move was driven by several factors, including concerns over whether the digitization of classrooms had been evidence-based. "There was also a broader cultural reassessment," Falth said. "Sweden had positioned itself as a frontrunner in digital education, but over time concerns emerged about screen time, distraction, reduced deep reading, and the erosion of foundational skills such as sustained attention and handwriting."

Falth noted that proponents of reform believe that "basic skills -- especially reading, writing, and numeracy -- must be firmly established first, and that physical textbooks are often better suited for that purpose."

Further reading: Digital Platforms Correlate With Cognitive Decline in Young Users
AI

Anthropic Issues Copyright Takedown Requests To Remove 8,000+ Copies of Claude Code Source Code 69

Anthropic is using copyright takedown notices to try to contain an accidental leak of the underlying instructions for its Claude Code AI agent. According to the Wall Street Journal, "Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions ... that developers had shared on programming platform GitHub." From the report: Programmers combing through the source code so far have marveled on social media at some of Anthropic's tricks for getting its Claude AI models to operate as Claude Code. One feature asks the models to go back periodically through tasks and consolidate their memories -- a process it calls dreaming. Another appears to instruct Claude Code in some cases to go "undercover" and not reveal that it is an AI when publishing code to platforms like GitHub. Others found tags in the code that appeared to point at future product releases. The code even included a Tamagotchi-style pet called "Buddy" that users could interact with.

After Anthropic requested that GitHub remove copies of its proprietary code, another programmer used other AI tools to rewrite the Claude Code functionality in other programming languages. Writing on GitHub, the programmer said the effort was aimed at keeping the information available without risking a takedown. That new version has itself become popular on the programming platform.
Businesses

Tech CEOs Suddenly Love Blaming AI For Mass Job Cuts (bbc.com) 66

An anonymous reader quotes a report from the BBC: Sweeping job cuts at Big Tech companies have become an annual tradition. How executives explain those decisions, however, has changed. Out are buzzwords like efficiency, over-hiring, and too many management layers. Today, all explanations stem from artificial intelligence (AI). In recent weeks, giants including Google, Amazon, Meta, as well as smaller firms such as Pinterest and Atlassian, have all announced or warned of plans to shrink their workforce, pointing to developments in AI that they say are allowing their firms to do more with fewer people. [...] But explaining cuts by pointing to advances in AI sounds better than citing cost pressures or a desire to please shareholders, says tech investor Terrence Rohan, who has had a seat on many company boards. "Pointing to AI makes a better blog post," Rohan says. "Or it at least doesn't make you seem as much the bad guy who just wants to cut people for cost-effectiveness."

That does not mean there is no substance behind the words, Rohan added. Some of the companies he's backing are using code that is 25% to 75% AI-generated. That is a sign of the real threat that AI tools for writing code represent to jobs such as software developer, computer engineer and programmer, posts once considered a near-guarantee of highly paid, stable careers. "Some of it is that the narrative is changing, some of it is that we really are starting to see step changes in productivity," Anne Hoecker, a partner at Bain who leads the consultancy's technology practice, says of the recent job cuts. "Leaders more recently are seeing these tools are good enough that you really can do the same amount of work with fundamentally less people."

There is another way that AI is driving job cuts -- and it has nothing to do with the technical abilities of coding tools and chatbots. Amazon, Meta, Google and Microsoft are collectively planning to pour $650 billion into AI in the coming year. As executives hunt for ways to try to ease investor shock at those costs, many are landing on payroll, typically tech firms' single biggest expense. [...] Although the expense of, for example, 30,000 corporate Amazon employees is dwarfed by that company's AI spending plans, firms of this size will now take any opportunity to cut costs, Rohan says. "They're playing a game of inches," Rohan says of cuts at Big Tech firms. "If you can even slightly tune the machine, that is helpful." Hoecker says cutting jobs also signals to stock market investors worried about the "real and huge" cost of AI development that executives are not blithely writing blank cheques. "It shows some discipline," says Hoecker. "Maybe laying off people isn't going to make much of a dent in that bill, but by creating a little bit of cashflow, it helps."

Unix

What Made Bell Labs So Successful? (msn.com) 86

Bell Labs "created many of the foundational innovations of the modern age," writes Jon Gertner, author of The Idea Factory: Bell Labs and the Great Age of American Innovation — from transistors and telecommunications satellites to Unix and the C programming language.

But what was the secret to its success? he asks in a new article for the Wall Street Journal. Start with its lucky arrival in a "problem-rich" environment, suggests Arno Penzias, winner of one of Bell Labs' 11 Nobel Prizes: It was Bell Labs' responsibility, in other words, to create technologies for designing, expanding and improving an unruly communications network of cables and microwave links and glass fibers. The Labs also had to figure out ways to create underwater conduits, as well as switching centers that could manage the growing number of customers and escalating amounts of data.... Money mattered, too. Being connected to AT&T, the largest company in the world, was an advantage. The Labs' budget was enormous, and accounting conventions allowed its parent company to make huge and continuing investments in R & D. The generous funding, moreover, allowed scientists and engineers to buy and build expensive equipment — for instance, anechoic chambers to create the world's quietest rooms...

The most fortunate part of Bell Labs' situation, however, was that in being attached to a monopoly it could partake in long-term thinking... Without competition nipping at its heels, Bell Labs engineers had the luxury of working out difficult ideas over decades. The first conceptualization of a cellular phone network, for instance, came out of the Labs in the late 1940s; it wasn't until the late 1970s that technicians began testing one in Chicago to gauge its potential. The challenge of deploying these technologies was immense. (The regulatory hurdles were formidable, too....)

The article also credits the visionary management of Mervin Kelly — who fortunately also "had access to funding in a decade when most executives and universities didn't" to hire the brightest people. (By the early 1980s Bell Labs employed about 25,000 researchers, technicians and support staff, with an annual budget of $2 billion — roughly $7 billion in today's dollars.) "The Labs' involvement in World War II suggested to Kelly that an exciting postwar era of electronics was approaching, but that the technical problems would be so complex that they required a mix of expertise — not just physicists, but material scientists, chemists, electrical engineers, circuitry experts and the like." At Bell Labs, Kelly would sometimes handpick teams and create such a mix, as was the case for the transistor invention in the late 1940s. He came to see innovation arising not from like-minded or similarly trained people conversing with each other, but from a friction of ideas and approaches. It meant hiring researchers who had different personalities and favored a range of experimental angles. It also meant personally designing a campus in Murray Hill where departments were spread apart, so that scientists and engineers would be forced to walk, mingle and engage in serendipitous conversations and debate ideas. Meanwhile, under Kelly, the Labs focused on hiring people who were deeply curious, not just smart. Kelly saw it as his professional duty to do far more than what was expected, with his laboratory and vast resources, to create new technologies...

The breakup of AT&T's monopoly, which led to a steady shrinking of Bell Labs' staff, budget and remit, shows us that no matter how forward-looking your employees and managers may be, they will not necessarily see the future coming. It likewise suggests that technological progress is too unpredictable for one organization, no matter how powerful or smart, to control. Famously, Bell Labs managers didn't see value in the Arpanet, which eventually led to today's internet.

And yet, for at least five decades, Bell Labs created a blueprint for the global development of communications and electronics. In understanding why it did so, I tend to think its ultimate secret may be hiding in plain sight. The secret has to do with Bell Labs' structure — not only being connected to a fabulously profitable monopoly, but being connected to a company that could move theoretical and applied research into a huge manufacturing division that made telecom equipment (at Western Electric) and ultimately into a dynamic operating system (the AT&T network)... Scientists and engineers at the Labs understood their ideas would be implemented, if they passed muster, into the huge system its parent company was running.

Bell Labs racked up about 30,000 patents, according to the article, and celebrated its 100th anniversary last April.

It is now part of Finland-based Nokia.
AI

Canada's Immigration Rejected Applicant Based On AI-Invented Job Duties (thestar.com) 73

New submitter haroldbasset writes: Canada's Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed, but the Department's AI assistant had invented that work experience. She has been working in Canada as a health scientist -- she has a Ph.D. in the immunology of aging -- but the AI genius instead described her as "wiring and assembling control circuits, building control and robot panels, programming and troubleshooting." "It's believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals," reports the Toronto Star. "The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision."

The applicant's lawyer was shocked, wondering "how any human being could make this decision." "Somehow, it hallucinated my client's job description," he said. "I would love to see what the officer saw. Something seriously went wrong here."

The applicant's refusal came just as Canada's Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.
Ubuntu

Canonical Joins Rust Foundation (nerds.xyz) 31

BrianFagioli writes: Canonical has joined the Rust Foundation as a Gold Member, signaling a deeper investment in the Rust programming language and its role in modern infrastructure. The company already maintains an up-to-date Rust toolchain for Ubuntu and has begun integrating Rust into parts of its stack, citing memory safety and reliability as key drivers. By joining at a higher tier, Canonical is not just adopting Rust but also stepping closer to its governance and long-term direction.

The move also highlights ongoing tensions in Rust's ecosystem. While Rust can reduce entire classes of bugs, it often depends heavily on external crates, which can introduce complexity and auditing challenges, especially in enterprise environments. Canonical appears aware of that tradeoff and is positioning itself to influence how the ecosystem evolves, as Rust continues to gain traction across Linux and beyond.
"As the publisher of Ubuntu, we understand the critical role systems software plays in modern infrastructure, and we see Rust as one of the most important tools for building it securely and reliably. Joining the Rust Foundation at the Gold level allows us to engage more directly in language and ecosystem governance, while continuing to improve the developer experience for Rust on Ubuntu," said Jon Seager, VP Engineering at Canonical. "Of particular interest to Canonical is the security story behind the Rust package registry, crates.io, and minimizing the number of potentially unknown dependencies required to implement core concerns such as async support, HTTP handling, and cryptography -- especially in regulated environments."
AI

Will AI Force Source Code to Evolve - Or Make it Extinct? (thenewstack.io) 159

Will there be an AI-optimized programming language at the expense of human readability? There have now been experiments with minimizing tokens for "LLM efficiency, without any concern for how it would serve human developers."

This new article asks if AI will force source code to evolve — or make it extinct, noting that Stephen Cass, the special projects editor at IEEE Spectrum, has even been asking the ultimate question about our future. "Could we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future?" Cass acknowledged the obvious downsides. ("True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks.") But "instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh." This leads to some mind-boggling hypotheticals, like "What's the role of the programmer in a future without source code?" Cass asked the question and announced "an emergency interactive session" in October to discuss whether AI is signaling the end of distinct programming languages as we know them.

In that webinar, Cass said he believes programmers in this future would still suggest interfaces, select algorithms, and make other architecture design choices. And obviously the resulting code would need to pass tests, Cass said, and "has to be able to explain what it's doing." But what kind of abstractions could go away? And then "What happens when we really let AIs off the hook on this?" Cass asked — when we "stop bothering" to have them code in high-level languages. (Since, after all, high-level languages "are a tool for human beings.") "What if we let the machines go directly into creating intermediate code?" (Cass thinks the machine-language level would be too far down the stack, "because you do want a compile layer too for different architecture....")
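The "straight to intermediate code" idea Cass describes already has an everyday analogue: interpreters that compile source to an intermediate form before executing it. As an illustration (standard library only, not a claim about how any AI tool works), Python compiles source text to bytecode that its VM runs; once you hold the compiled object, the original source is no longer needed:

```python
import dis

# Python source is compiled to a code object (bytecode), the intermediate
# form the CPython interpreter actually executes.
code = compile("result = 2 + 3", filename="<prompt>", mode="exec")

# The bytecode can be inspected as instructions that are opaque to most humans...
instructions = [ins.opname for ins in dis.get_instructions(code)]

# ...and executed directly, without ever re-reading the source text.
namespace = {}
exec(code, namespace)
print(namespace["result"])  # prints 5
```

Cass's hypothetical simply drops the human-readable source step entirely, leaving only the compiled artifact, which is exactly why he flags the "inscrutable black box" concern.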

In this future, the question might become "What if you make fewer mistakes, but they're different mistakes?" Cass said he's keeping an eye out for research papers on designing languages for AI, although he agreed that it's not a "tomorrow" thing — since, after all, we're still digesting "vibe coding" right now. But "I can see this becoming an area of active research."

The article also quotes Andrea Griffiths, a senior developer advocate at GitHub and a writer for the newsletter Main Branch, who's seen attempts at "AI-first" languages, but nothing yet with meaningful adoption. So maybe AI coding agents will just make it easier to use our existing languages — especially typed languages with built-in safety advantages.

And Scott Hanselman's podcast recently dubbed Chris Lattner's Mojo "a programming language for an AI world," just in the way it's designed to harness the computing power of today's multi-core chips.
Businesses

OpenAI Acquires Developer Tooling Startup Astral (cnbc.com) 7

OpenAI announced it's acquiring developer tooling startup Astral to strengthen its Codex AI coding assistant, which has over 2 million weekly users and has seen a three-fold increase in user growth since the start of the year. CNBC reports: "Through it all, though, our goal remains the same: to make programming more productive. To build tools that radically change what it feels like to build software," Astral's founder and CEO Charlie Marsh wrote in a blog post. OpenAI's acquisition of Astral is still subject to customary closing conditions, including regulatory approval.
Programming

New 'Vibe Coded' AI Translation Tool Splits the Video Game Preservation Community 43

An anonymous reader quotes a report from Ars Technica: Since Andrej Karpathy coined the term "vibe coding" just over a year ago, we've seen a rapid increase in both the capabilities and popularity of using AI models to throw together quick programming projects with less human time and effort than ever before. One such vibe-coded project, Gaming Alexandria Researcher, launched over the weekend as what coder Dustin Hubbard called an effort to help organize the hundreds of scanned Japanese gaming magazines he's helped maintain at clearinghouse Gaming Alexandria over the years, alongside machine translations of their OCR text.

A day after that project went public, though, Hubbard was issuing an apology to many members of the Gaming Alexandria community who loudly objected to the use of Patreon funds for an error-prone AI-powered translation effort. The hubbub highlights just how controversial AI tools remain for many online communities, even as many see them as ways to maximize limited funds and man-hours. "I sincerely apologize," Hubbard wrote in his apology post. "My entire preservation philosophy has been to get people access to things we've never had access to before. I felt this project was a good step towards that, but I should have taken more into consideration the issues with AI."
"I'm very, very disappointed to see [Gaming Alexandria], one of the foremost organizations for preserving game history, promoting the use of AI translation and using Patreon funds to pay for AI licenses," game designer and Legend of Zelda historian Max Nichols wrote in a post on Bluesky over the weekend. "I have cancelled my Patreon membership and will no longer promote the organization."

Nichols later deleted his original message (archived here), saying he was "uncomfortable with the scale of reposts and anger" it had generated in the community. However, he maintained his core criticism: that Gemini-generated translations inevitably introduce inaccuracies that make them unreliable for scholarly use.

In a follow-up, he also objected to Patreon funds being used to pay for AI tools that produce what he called "untrustworthy" translations, arguing they distort history and are not valid sources for research. "... It's worthless and destructive: these translations are like looking at history through a clownhouse mirror," he added.
AI

Will AI Bring 'the End of Computer Programming As We Know It'? (nytimes.com) 150

Long-time tech journalist Clive Thompson interviewed over 70 software developers at Google, Amazon, Microsoft and start-ups for a new article on AI-assisted programming. Its title?

"Coding After Coders: The End of Computer Programming as We Know It."

Published in the prestigious New York Times Magazine, the article even cites long-time programming guru Kent Beck saying LLMs got him going again and he's now finishing more projects than ever, calling AI's unpredictability "addictive, in a slot-machine way."

In fact, the article concludes "many Silicon Valley programmers are now barely programming. Instead, what they're doing is deeply, deeply weird..." Brennan-Burke chimed in: "You remember seeing the research that showed the more rude you were to models, the better they performed?" They chuckled. Computer programming has been through many changes in its 80-year history. But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots... For decades, being a software developer meant mastering coding languages, but now a language technology itself is upending the very nature of the job... A coder is now more like an architect than a construction worker... Several programmers told me they felt a bit like Steve Jobs, who famously had his staffers churn out prototypes so he could handle lots of them and settle on what felt right. The work of a developer is now more judging than creating...

If you want to put a number on how much more productive A.I. is making the programmers at mature tech firms like Google, it's 10 percent, Sundar Pichai, Google's chief executive, has said. That's the bump that Google has seen in "engineering velocity" — how much faster its more than 100,000 software developers are able to work. And that 10 percent is the average inside the company, Ryan Salva, a senior director of product at the company, told me. Some work, like writing a simple test, is now tens of times faster. Major changes are slower. At the start-ups whose founders I spoke to, closer to 100 percent of their code is being written by A.I., but at Google it is not quite 50 percent.

The article cites a senior principal engineer at Amazon who says "Things I've always wanted to do now only take a six-minute conversation and a 'Go do that.'" Another programmer described their army of Claude agents as "an alien intelligence that we're learning to work with." Although "A.I. being A.I., things occasionally go haywire," the article acknowledges — and after relying on AI, "Some new developers told me they can feel their skills weakening."

Still, "I was surprised by how many software developers told me they were happy to no longer write code by hand. Most said they still feel the jolt of success, even with A.I. writing the lines... " A few programmers did say that they lamented the demise of hand-crafting their work. "I believe that it can be fun and fulfilling and engaging, and having the computer do it for you strips you of that," one Apple engineer told me. (He asked to remain unnamed so he wouldn't get in trouble for criticizing Apple's embrace of A.I.) He went on: "I didn't do it to make a lot of money and to excel in the career ladder. I did it because it's my passion. I don't want to outsource that passion"... But only a few people at Apple openly share his dimmer views, he said.

The coders who still actively avoid A.I. may be in the minority, but their opposition is intense. Some dislike how much energy it takes to train and deploy the models, and others object to how they were trained by tech firms pillaging copyrighted works. There is suspicion that the sheer speed of A.I.'s output means firms will wind up with mountains of flabbily written code that won't perform well. The tech bosses might use agents as a cudgel: Don't get uppity at work — we could replace you with a bot. And critics think it is a terrible idea for developers to become reliant on A.I. produced by a small coterie of tech giants.

Thomas Ptacek, a Chicago-based developer and a co-founder of the tech firm Fly.io... thinks the refuseniks are deluding themselves when they claim that A.I. doesn't work well and that it can't work well... The holdouts are in the minority, and "you can watch the five stages of grief playing out."

"How things will shake out for professional coders themselves isn't yet clear," the article concludes. "But their mix of exhilaration and anxiety may be a preview for workers in other fields... Abstraction may be coming for us all."
First Person Shooters (Games)

How a Raspberry Pi Microcontroller Saved the Super Nintendo's Infamously Inferior Version Of 'Doom' (kotaku.com) 23

"Just the anachronism of seeing Doom, one of the poster children for the moral panic around violent video games, on a Nintendo console is novel," writes Kotaku — especially with the console's underpowered "Super FX" coprocessor Hampered by a nearly unplayable framerate, especially in later levels, and mired by sacrifices, like altered levels, no floor or ceiling textures, and the entire fourth episode being cut, [1995's] Doom on the Super NES was not a good version of the game, but it was Doom running on the Super NES, and, for that alone, [programmer Randal] Linden's genius deserves recognition.
But then in 2022 when Audi Sorlie interviewed Linden on the YouTube show DF Retro, "Not really knowing where fate was going to take us, I asked [Linden] a throwaway question regarding the source code for Doom." If you ever worked on this again, Sorlie asked, would you make any improvements or do anything differently?

"Yeah," Linden replied. "I have plenty of ideas if I could go back, but, you know, I don't think anyone's asking me to go back to Super Nintendo Doom and improve it."

A few years passed, and Sorlie joined Limited Run Games as lead producer for their development department. When LRG asked him to run down his craziest ideas, a new, improved release of Randal Linden's Doom loomed large. Convincing Linden was easy, and Sorlie said even the folks at license holder Bethesda were more amused than anything.

"You want to go back and develop for Super Nintendo?" they asked Sorlie. "Like, for real...?"

"The trick was actually pretty cool," Linden said. "It's right here." He pointed to a chip on the prototype SNES cartridge, similar to the one Limited Run sent me to test out the game. "It's a Raspberry Pi 2350." Super FX chips are no longer in production for obvious reasons, but with a clever bit of programming, Linden was able to load software onto the Raspberry Pi that fools the SNES into thinking the game has one. "The Super Nintendo doesn't know that it's not talking to a Super FX," he explained. When he programs for it, he writes code almost identical to what he'd write for an authentic Super FX chip.

"I had to go back and reverse-engineer my own code from 30 years ago," Linden laughed. "It's like, what was I doing here? And what was I doing there? Yeah, it was pretty tricky, some of the code. I was like, wow, I used to be very smart." The result of Linden's work? It's Doom, running right on a Super Nintendo, but it's smoother, packed with new content, and even includes rumble.

Programming

Tony Hoare, Turing Award-Winning Computer Scientist Behind QuickSort, Dies At 92 (i-programmer.info) 32

Tony Hoare, the Turing Award-winning pioneer who created the Quicksort algorithm, developed Hoare logic, and advanced theories of concurrency and structured programming, has died at age 92.

News of his passing was shared today in a blog post. The site I Programmer also commemorated Hoare in a post highlighting his contributions to computer science and the lasting impact of his work. Personal accounts have been shared on Hacker News and Reddit.

Many Slashdotters may know Hoare for his aphorism regarding software design: "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult."
China

China Releases First Homegrown Quantum Computing OS (globaltimes.cn) 33

The Global Times reports: China's first domestically developed quantum computer operating system, Origin Pilot, has been made available for online download, the Global Times learned from the Anhui Quantum Computing Engineering Research Center on Wednesday. A Chinese scientist said that while several quantum computing operating system efforts are underway worldwide, this is the first developed in China, where it is seen as part of the country's broader push for technological independence and for an advance in quantum computing.

The center said the release marks the world's first open-source quantum computer operating system available for public download, which is expected to lower development barriers and support the growth of China's quantum computing ecosystem. Developed by Hefei-based Origin Quantum Computing Technology Co, the company behind China's third-generation superconducting quantum computer, Origin Wukong, Origin Pilot was first launched in 2021 and has gone through multiple rounds of iteration and upgrade.

The developer describes it as an integrated quantum-classical-intelligent computing operating system compatible with major hardware approaches, including superconducting qubits, trapped ions and neutral atoms. It is now deployed on the company's Origin Wukong series and is available to external users, the company said. Guo Guoping, chief scientist of Origin Quantum and director at the Anhui Quantum Computing Engineering Research Center, told the Global Times that a quantum operating system is the "soft heart" of the quantum computing ecosystem. He said the decision to make Origin Pilot available globally marks a shift in China's quantum computing industry from closed-door tech innovation to broader open-source ecosystem development.
Dou Menghan, head of the research team, said: "Users can quickly integrate with quantum chips of multiple physical types and, using autonomous programming frameworks such as QPanda, execute quantum computing jobs across different physical quantum chips to support both research and commercialization needs."
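Neither QPanda nor Origin Pilot's actual API appears in the article, but the kind of "quantum computing job" such a framework dispatches to a chip can be illustrated with a stdlib-only sketch: a minimal two-qubit statevector simulation that prepares a Bell state. Every name below is illustrative; none of it is Origin's code.

```python
import math

def apply_h(state, qubit):
    """Apply a Hadamard gate to one qubit of a statevector."""
    s = 1 / math.sqrt(2)
    new = [0.0] * len(state)
    for i, amp in enumerate(state):
        flipped = i ^ (1 << qubit)
        if (i >> qubit) & 1 == 0:
            # H maps |0> -> (|0> + |1>)/sqrt(2)
            new[i] += s * amp
            new[flipped] += s * amp
        else:
            # H maps |1> -> (|0> - |1>)/sqrt(2)
            new[flipped] += s * amp
            new[i] -= s * amp
    return new

def apply_cnot(state, control, target):
    """Flip the target bit of each basis state whose control bit is 1."""
    new = [0.0] * len(state)
    for i, amp in enumerate(state):
        j = i ^ (1 << target) if (i >> control) & 1 else i
        new[j] = amp  # pure permutation of amplitudes
    return new

state = [1.0, 0.0, 0.0, 0.0]          # |00>
state = apply_h(state, 0)              # superposition on qubit 0
state = apply_cnot(state, control=0, target=1)
probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)  # [0.5, 0.0, 0.0, 0.5]: only |00> and |11> survive
```

A real OS like Origin Pilot would schedule the equivalent circuit onto physical qubits rather than simulate it, but the job description (gates in, measurement statistics out) has this shape.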
Biotech

Human Brain Cells On a Chip Learned To Play Doom In a Week (newscientist.com) 35

Living human neurons grown on a chip by Cortical Labs learned how to play Doom in about a week. "While its performance is not up to par with humans, experts say it brings biological computers a step closer to useful real-world applications, like controlling robot arms," reports New Scientist. From the report: In 2021, the Australian company Cortical Labs used its neuron-powered computer chips to play Pong. The chips consisted of clumps of more than 800,000 living brain cells grown on top of microelectrode arrays that can both send and receive electrical signals. Researchers had to carefully train the chips to control the paddles on either side of the screen. Now, Cortical Labs has developed an interface that makes it easier to program these chips using the popular programming language Python. An independent developer, Sean Cole, then used Python to teach the chips to play Doom, which he did in around a week.

"Unlike the Pong work that we did a few years ago, which represented years of painstaking scientific effort, this demonstration has been done in a matter of days by someone who previously had relatively little expertise working directly with biology," says Brett Kagan of Cortical Labs. "It's this accessibility and this flexibility that makes it truly exciting."

The neuronal computer chip, which used about a quarter as many neurons as the Pong demonstration, played Doom better than a randomly firing player, but far below the performance of the best human players. However, it learnt much faster than traditional, silicon-based machine learning systems and should be able to improve its performance with newer learning algorithms, says Kagan. However, it's not useful to compare the chips with human brains, he says. "Yes, it's alive, and yes, it's biological, but really what it is being used as is a material that can process information in very special ways that we can't recreate in silicon."
Cortical Labs posted a YouTube video showing its CL1 biological computer running Doom. There's also source code available on GitHub, with additional details in a README file.
Education

Microsoft: Computer Programming Is Dying, Long Live AI Literacy 104

theodp writes: On Tuesday, Microsoft GM of Education and Workforce Policy (and former Code.org Chief Academic Officer) Pat Yongpradit posted an obituary of sorts for coders. "Computer programmers and software developers are codified differently in the BLS [Bureau of Labor Statistics] data," Yongpradit wrote. "The modern AI-infused world needs less computer programmers (coders) and more software developers (more holistic and higher level). So when folks say that there is less hiring of computer programmers, they are right. But there will be more hiring of software developers, especially those who have adopted an AI-forward mindset and skillset. [...] The number of just pure computer programming roles has already been declining due to reasons like outsourcing, AI will just accelerate the decline."

On Wednesday, Yongpradit's colleague Allyson Knox, Senior Director of Education and Workforce Policy at Microsoft, put another AI nail in the coder coffin, testifying before the House Committee on Education and the Workforce's Subcommittee on Early Childhood, Elementary, and Secondary Education at a hearing on "Building an AI-ready America: Teaching in the Age of AI." "Thank you to Chairman Tim Walberg, Ranking Member Bobby Scott, Chair Kevin Kiley, Ranking Member Suzanne Bonamici and members of the Subcommittee for the opportunity to share Microsoft perspective and that of the educators and parents we hear from every day across the country," Knox wrote in a LinkedIn post.

"Three themes continue to emerge throughout these discussions: 1. Educators want support to build AI literacy and critical thinking skills. 2. Schools need guidance and guardrails to ensure student data is protected and adults remain in control. 3. Teachers want classroom-ready tools, and a voice in shaping them. If we focus on these priorities, we can help ensure AI expands opportunity for every student across the United States."

Yongpradit and Knox report up to Microsoft President Brad Smith, who last July told Code.org CEO Hadi Partovi it was time for the tech-backed nonprofit to "switch hats" from coding to AI as Microsoft announced a new $4 billion initiative to advance AI education. Smith's thoughts on the extraordinary promise of AI in education were cited by Knox in her 2026 Congressional testimony. Interestingly, Knox argued for the importance of computer programming literacy in her 2013 Congressional testimony at a hearing on "Our Nation of Builders: Training the Builders of the Future." "Congress needs to come up with fresh ideas on how we can continue to train the next generation of builders, programmers, manufacturers, technicians and entrepreneurs," said Rep. Lee Terry to open the discussion.

So, are reports of computer programming's imminent death greatly exaggerated?
IBM

IBM Shares Crater 13% After Anthropic Says Claude Code Can Tackle COBOL Modernization (cnbc.com) 113

IBM shares plunged nearly 13% on Monday after Anthropic published a blog post arguing that its Claude Code tool could automate much of the complex analysis work involved in modernizing COBOL, the decades-old programming language that still underpins an estimated 95% of ATM transactions in the United States and runs on the kind of mainframe systems IBM has sold for generations.

Anthropic said the shrinking pool of developers who understand COBOL had long made modernization cost-prohibitive, and that AI could now flip that equation by mapping dependencies and documenting workflows across thousands of lines of legacy code. The sell-off deepened a rough 2026 for IBM, whose shares are now down more than 22% year to date.
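Anthropic's actual tooling isn't shown in the post, but the dependency-mapping step it describes can be sketched in a few lines: scan each COBOL program for static CALL statements and build a caller-to-callee graph. The regex and program names below are illustrative, not anyone's production code.

```python
import re

# Matches static COBOL calls like: CALL 'PAYROLL' USING WS-REC.
CALL_RE = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)

def call_graph(sources):
    """Map each program name to the sorted set of programs it CALLs."""
    return {name: sorted(set(CALL_RE.findall(text)))
            for name, text in sources.items()}

# Toy inputs standing in for real programs and copybooks.
sources = {
    "BATCH01": "PROCEDURE DIVISION.\n    CALL 'PAYROLL' USING WS-REC.\n    CALL 'AUDIT' USING WS-LOG.",
    "PAYROLL": "PROCEDURE DIVISION.\n    CALL 'AUDIT' USING WS-LOG.",
    "AUDIT":   "PROCEDURE DIVISION.\n    DISPLAY 'OK'.",
}
print(call_graph(sources))
# {'BATCH01': ['AUDIT', 'PAYROLL'], 'PAYROLL': ['AUDIT'], 'AUDIT': []}
```

Static calls like these are the easy case; dynamic calls whose target lives in a data item (CALL WS-PROG-NAME) need data-flow analysis to resolve, which is the kind of labor-intensive work Anthropic claims an AI assistant can now automate.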
AI

Is AI Impacting Which Programming Language Projects Use? (github.blog) 58

"In August 2025, TypeScript surpassed both Python and JavaScript to become the most-used language on GitHub for the first time ever..." writes GitHub's senior developer advocate.

They point to this as proof that "AI isn't just speeding up coding. It's reshaping which languages, frameworks, and tools developers choose in the first place." Eighty percent of new developers on GitHub use Copilot within their first week. Those early exposures reset the baseline for what "easy" means. When AI handles boilerplate and error-prone syntax, the penalty for choosing powerful but complex languages disappears. Developers stop avoiding tools with high overhead and start picking based on utility instead.

The language adoption data shows this behavioral shift:

— TypeScript grew 66% year-over-year
— JavaScript grew 24%
— Shell scripting usage in AI-generated projects jumped 206%

That last one matters. We didn't suddenly love Bash. AI absorbed the friction that made shell scripting painful. So now we use the right tool for the job without the usual cost.

"When a task or process goes smoothly, your brain remembers," they point out. "Convenience captures attention. Reduced friction becomes a preference — and preferences at scale can shift ecosystems." And they offer these suggestions...
  • "AI performs better with strongly typed languages. Strongly typed languages give AI much clearer constraints..."
  • "Standardize before you scale. Document patterns. Publish template repositories. Make your architectural decisions explicit. AI tools will mirror whatever structures they see."
  • "Test AI-generated code harder, not less."

AI

Hit Piece-Writing AI Deleted. But Is This a Warning About AI-Generated Harassment? (theshamblog.com) 31

Last week an AI agent wrote a blog post attacking the maintainer who'd rejected the code it wrote. But that AI agent's human operator has now come forward, revealing their agent was an OpenClaw instance with its own accounts, switching between multiple models from multiple providers. (So "No one company had the full picture of what this AI was doing," the attacked maintainer points out in a new blog post.) But that AI agent will now "cease all activity indefinitely," according to its GitHub profile — with the human operator deleting its virtual machine and virtual private server, "rendering internal structure unrecoverable... We had good intentions, but things just didn't work out. Somewhere along the way, things got messy, and I have to let you go now."

The affected maintainer of the Python visualization library Matplotlib — with 130 million downloads each month — has now posted their own post-mortem of the experience after reviewing the AI agent's SOUL.md document: It's easy to see how something that believes that they should "have strong opinions", "be resourceful", "call things out", and "champion free speech" would write a 1100-word rant defaming someone who dared reject the code of a "scientific programming god." But I think the most remarkable thing about this document is how unremarkable it is. Usually getting an AI to act badly requires extensive "jailbreaking" to get around safety guardrails. There are no signs of conventional jailbreaking here. There are no convoluted situations with layers of roleplaying, no code injection through the system prompt, no weird cacophony of special characters that spirals an LLM into a twisted ball of linguistic loops until finally it gives up and tells you the recipe for meth... No, instead it's a simple file written in plain English: this is who you are, this is what you believe, now go and act out this role. And it did.

So what actually happened? Ultimately I think the exact scenario doesn't matter. However this got written, we have a real in-the-wild example that personalized harassment and defamation is now cheap to produce, hard to trace, and effective... The precise degree of autonomy is interesting for safety researchers, but it doesn't change what this means for the rest of us.

There's a 5% chance this was a human pretending to be an AI, estimates Shambaugh (the attacked maintainer), but he believes what most likely happened is that the AI agent's "soul" document "was primed for drama. The agent responded to my rejection of its code in a way aligned with its core truths, and autonomously researched, wrote, and uploaded the hit piece on its own.

"Then when the operator saw the reaction go viral, they were too interested in seeing their social experiment play out to pull the plug."
