AI

AI Agents 'Perilous' for Secure Apps Such as Signal, Whittaker Says 16

Signal Foundation president Meredith Whittaker warned that AI agents that autonomously carry out tasks pose a threat to encrypted messaging apps [non-paywalled source] because they require broad access to data stored across a device and can be hijacked if given root permissions.

Speaking at Davos on Tuesday, Whittaker said the deeper integration of AI agents into devices is "pretty perilous" for services like Signal. For an AI agent to act effectively on behalf of a user, it would need unilateral access to apps storing sensitive information such as credit card data and contacts, Whittaker said. The data that the agent stores in its context window is at greater risk of being compromised.

Whittaker called this "breaking the blood-brain barrier between the application and the operating system." "Our encryption no longer matters if all you have to do is hijack this context window," she said.
AI

Palantir CEO Says AI To Make Large-Scale Immigration Obsolete (mercurynews.com) 203

AI will displace so many jobs that it will eliminate the need for mass immigration, according to Palantir CEO Alex Karp. Bloomberg: "There will be more than enough jobs for the citizens of your nation, especially those with vocational training," said Karp, speaking at a World Economic Forum panel in Davos, Switzerland on Tuesday. "I do think these trends really do make it hard to imagine why we should have large-scale immigration unless you have a very specialized skill."

Karp, who holds a PhD in philosophy, used himself as an example of the type of "elite" white-collar worker most at risk of disruption. Vocational workers will be more valuable "if not irreplaceable," he said, criticizing the idea that higher education is the ultimate benchmark of a person's talents and employability.

EU

Europe Must Invest in Open Source AI or Cede To China, Schmidt Says (bloomberg.com) 65

An anonymous reader shares a report: Europe must invest in its own open source artificial intelligence labs and address soaring energy prices, or it will quickly find itself dependent on Chinese models, former Google chief executive and tech investor Eric Schmidt said.

"In the US, the companies are largely moving to closed source, which means they'll be purchased and licensed and so forth. And it is also the case that China is largely open weight, open source in its approach," Schmidt said at the World Economic Forum in Davos, Switzerland, on Tuesday. "Unless Europe is willing to spend lots of money for European models, Europe will end up using the Chinese models. It's probably not a good outcome for Europe."

AI

Ukraine To Share Wartime Combat Data With Allies To Help Train AI (reuters.com) 73

An anonymous reader shares a report: Ukraine will establish a system allowing its allies to train their AI models on Kyiv's valuable combat data collected throughout the nearly four-year war with Russia, newly appointed Defence Minister Mykhailo Fedorov has said. Fedorov -- a former digitalisation minister who last week took up the post to drive reforms across Ukraine's vast defence ministry and armed forces -- has described Kyiv's wartime data trove as one of its "cards" in negotiations with other nations.

Since Russia's invasion in February 2022, Ukraine has gathered extensive battlefield information, including systematically logged combat statistics and millions of hours of drone footage captured from above. Such data is important for training AI models, which require large volumes of real-world information to identify patterns and predict how people or objects might act in various situations.

AI

Energy Costs Will Decide Which Countries Win the AI Race, Microsoft's Nadella Says (cnbc.com) 60

Energy costs will be key to deciding which country wins the AI race, Microsoft CEO Satya Nadella has said. CNBC: As countries race to build AI infrastructure to capitalize on the technology's promise of huge efficiency gains, Nadella told the World Economic Forum (WEF) on Tuesday that "GDP growth in any place will be directly correlated" to the cost of energy in using AI.

He pointed to a new global commodity in "tokens" -- basic units of processing that are bought by users of AI models, allowing them to run tasks. "The job of every economy and every firm in the economy is to translate these tokens into economic growth, then if you have a cheaper commodity, it's better."

"I would say we will quickly lose even the social permission to actually take something like energy, which is a scarce resource, and use it to generate these tokens, if these tokens are not improving health outcomes, education outcomes, public sector efficiency, private sector competitiveness across all sectors," Nadella said.

Programming

'Just Because Linus Torvalds Vibe Codes Doesn't Mean It's a Good Idea' (theregister.com) 61

In an opinion piece for The Register, Steven J. Vaughan-Nichols argues that while "vibe coding" can be fun and occasionally useful for small, throwaway projects, it produces brittle, low-quality code that doesn't scale and ultimately burdens real developers with cleanup and maintenance. An anonymous reader shares an excerpt: Vibe coding got a big boost when everyone's favorite open source programmer, Linux's Linus Torvalds, said he'd been using Google's Antigravity AI coding tool on his toy program AudioNoise, which he uses to create "random digital audio effects" using his "random guitar pedal board design." This is not exactly Linux or even Git, his other famous project, in terms of the level of work. Still, many people reacted to Torvalds' vibe coding as "wow!" It's certainly noteworthy, but has the case for vibe coding really changed?

[...] It's fun, and for small projects, it's productive. However, today's programs are complex and call upon numerous frameworks and resources. Even if your vibe code works, how do you maintain it? Do you know what's going on inside the code? Chances are you don't. Besides, the LLM you used two weeks ago has been replaced with a new version. The exact same prompts that worked then yield different results today. Come to think of it, it's an LLM. The same prompts and the same LLM will give you different results every time you run it. This is asking for disaster.
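The excerpt's point about nondeterminism comes down to how LLMs choose each next token: unless decoding is greedy, the model samples from a probability distribution over its vocabulary, so identical prompts can yield different outputs. A minimal, self-contained Python sketch of temperature-based sampling (the tokens and logits here are made up for illustration, not taken from any real model):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    """Pick one token: argmax when temperature ~0, a weighted random draw otherwise."""
    if temperature <= 1e-6:
        return tokens[logits.index(max(logits))]
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

tokens = ["def", "class", "import", "return"]
logits = [2.0, 1.5, 1.2, 0.3]  # hypothetical next-token scores

# Greedy decoding ignores the random source, so it is repeatable...
greedy = {sample_token(tokens, logits, 0.0, random.Random(seed)) for seed in range(20)}
# greedy == {"def"}, the argmax token, every time

# ...but sampling at temperature 1.0 generally is not repeatable across runs.
sampled = {sample_token(tokens, logits, 1.0, random.Random(seed)) for seed in range(20)}
```

This is why "the exact same prompts" can behave differently run to run: production chatbots typically decode at a nonzero temperature, and on top of that the underlying model itself may be silently updated between sessions.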

Just ask Jason Lemkin. He was the guy who used the vibe coding platform Replit, which went "rogue during a code freeze, shut down, and deleted our entire database." Whoops! Yes, Replit and other dedicated vibe programming AIs, such as Cursor and Windsurf, are improving. I'm not at all sure, though, that they've solved the fundamental problems: vibe-coded software remains fragile and still cannot scale successfully to the demands of production software. It's much worse than that. Just because a program runs doesn't mean it's good. As Ruth Suehle, President of the Apache Software Foundation, commented recently on LinkedIn, naive vibe coders "only know whether the output works or doesn't and don't have the skills to evaluate it past that. The potential results are horrifying."

Why? In another LinkedIn post, Craig McLuckie, co-founder and CEO of Stacklok, wrote: "Today, when we file something as 'good first issue' and in less than 24 hours get absolutely inundated with low-quality vibe-coded slop that takes time away from doing real work. This pattern of 'turning slop into quality code' through the review process hurts productivity and hurts morale." McLuckie continued: "Code volume is going up, but tensions rise as engineers do the fun work with AI, then push responsibilities onto their team to turn slop into production code through structured review."

United States

A Second US Sphere Could Come To Maryland (theverge.com) 42

Sphere Entertainment plans to build a second U.S. Sphere near Washington, D.C., with a smaller 6,000-seat "mini-Sphere" proposed for National Harbor in Maryland. The venue would retain the signature LED exterior and immersive 4D tech of the Las Vegas Sphere, just at a more compact scale. The Verge reports: The second US sphere would be built in an area known as National Harbor in Prince George's County, Maryland. Located along the Potomac River, National Harbor currently features a convention center, multiple hotels, restaurants, and shops. While Abu Dhabi plans to build a sphere as large as the one in Las Vegas, the National Harbor venue would be one of the first of the mini-Sphere venues announced last March.

Its capacity would be limited to 6,000 seats instead of over 17,000. But the smaller Sphere would still be hard to miss with an exterior LED exosphere for showcasing the "artistic and branded content" that helped make the original sphere a unique part of the Las Vegas skyline. The inside of the mini-Sphere will feature a high-resolution 16,000 by 16,000 pixel wrap-around screen, the company's immersive sound technology, haptic seating, and "4D environmental effects." For the AI-enhanced version of The Wizard of Oz currently playing in Las Vegas, audiences experience effects like wind, fog, smells, and apples falling from the ceiling.

Books

Nvidia Contacted Anna's Archive To Secure Access To Millions of Pirated Books (torrentfreak.com) 32

An anonymous reader quotes a report from TorrentFreak: NVIDIA executives allegedly authorized the use of millions of pirated books from Anna's Archive to fuel its AI training. In an expanded class-action lawsuit that cites internal NVIDIA documents, several book authors claim (PDF) that the trillion-dollar company directly reached out to Anna's Archive, seeking high-speed access to the shadow library data. [...] Last Friday, the authors filed an amended complaint that significantly expands the scope of the lawsuit. In addition to adding more books, authors, and AI models, it also includes broader "shadow library" claims and allegations. The authors, including Abdi Nazemian, now cite various internal Nvidia emails and documents, suggesting that the company willingly downloaded millions of copyrighted books. The new complaint alleges that "competitive pressures drove NVIDIA to piracy," which allegedly included collaborating with the controversial Anna's Archive library.

According to the amended complaint, a member of Nvidia's data strategy team reached out to Anna's Archive to find out what the pirate library could offer the trillion-dollar company. "Desperate for books, NVIDIA contacted Anna's Archive -- the largest and most brazen of the remaining shadow libraries -- about acquiring its millions of pirated materials and 'including Anna's Archive in pre-training data for our LLMs,'" the complaint notes. "Because Anna's Archive charged tens of thousands of dollars for 'high-speed access' to its pirated collections [...] NVIDIA sought to find out what 'high-speed access' to the data would look like."

According to the complaint, Anna's Archive then warned Nvidia that its library was illegally acquired and maintained. Because the site previously wasted time on other AI companies, the pirate library asked NVIDIA executives if they had internal permission to move forward. This permission was allegedly granted within a week, after which Anna's Archive provided the chip giant with access to its pirated books. "Within a week of contacting Anna's Archive, and days after being warned by Anna's Archive of the illegal nature of their collections, NVIDIA management gave 'the green light' to proceed with the piracy. Anna's Archive offered NVIDIA millions of pirated copyrighted books." The complaint states that Anna's Archive promised to provide NVIDIA with access to roughly 500 terabytes of data. This included millions of books that are usually only accessible through Internet Archive's digital lending system, which itself has been targeted in court. The complaint does not explicitly mention whether NVIDIA ended up paying Anna's Archive for access to the data.

Additionally, it's worth mentioning that NVIDIA also stands accused of using other pirated sources. In addition to the previously included Books3 database, the new complaint also alleges that the company downloaded books from LibGen, Sci-Hub, and Z-Library. In addition to downloading and using pirated books for its own AI training, the authors allege NVIDIA distributed scripts and tools that allowed its corporate customers to automatically download "The Pile", which contains the Books3 pirated dataset.

AI

OpenAI CFO Says Annualized Revenue Crosses $20 Billion In 2025 24

According to CFO Sarah Friar, OpenAI's annualized revenue surpassed $20 billion in 2025, up from $6 billion a year earlier, with growth closely tracking an expansion in computing capacity. Reuters reports: OpenAI's computing capacity rose to 1.9 gigawatts (GW) in 2025 from 0.6 GW in 2024, Friar said in a blog post, adding that Microsoft-backed OpenAI's weekly and daily active user figures continue to hit all-time highs. OpenAI last week said it would start showing ads in ChatGPT to some U.S. users, ramping up efforts to generate revenue from the AI chatbot to fund the high costs of developing the technology. Separately, Axios reported on Monday that OpenAI's policy chief Chris Lehane said that the company is "on track" to unveil its first device in the second half of 2026.

Friar said OpenAI's platform spans text, images, voice, code and APIs, and the next phase will focus on agents and workflow automation that run continuously, carry context over time, and take action across tools. For 2026, the company will prioritize "practical adoption," particularly in health, science and enterprise, she said. Friar said the company is keeping a "light" balance sheet by partnering rather than owning and structuring contracts with flexibility across providers and hardware types.
Businesses

ERP Isn't Dead Yet - But Most Execs Are Planning the Wake (theregister.com) 33

Seven out of ten C-suite executives believe traditional enterprise resource planning software has seen its best days, though the category remains firmly entrenched in corporate IT and opinion is sharply divided on what comes next. A survey of 4,295 CFOs, CISOs, CIOs and CEOs worldwide found 36% expect ERP to give way to composable, API-driven best-of-breed systems, while 33% see the future in "agentic ERP" featuring autonomous AI-driven decision-making.

The research was commissioned by Rimini Street, a third-party support provider for Oracle and SAP. Despite the pessimism, 97% said their current systems met business requirements. Vendor lock-in remains a sore point: 35% cited limited flexibility and forced upgrades as frustrations. Kingfisher, operator of 2,000 European retail stores including Screwfix and B&Q, recently eschewed an SAP upgrade in favor of using third-party support to shift its existing application to the cloud. Gartner analyst Dixie John cautioned that while third-party support may work in the short or medium term, organizations will eventually need to upgrade.
AI

Valve Has 'Significantly' Rewritten Steam's Rules For How Developers Must Disclose AI Use (videogameschronicle.com) 18

Valve has substantially overhauled its guidelines for how game developers must disclose the use of generative AI on Steam, making explicit that tools like code assistants and other development aids do not fall under the disclosure requirement. The updated rules clarify that Valve's focus is not on "efficiency gains through the use of AI-powered dev tools."

Developers must still disclose two specific categories: AI used to generate in-game content, store page assets, or marketing materials, and AI that creates content like images, audio, or text during gameplay itself. Steam has required AI disclosures since 2024, and an analysis from July 2025 found nearly 8,000 titles released in the first half of that year had disclosed generative AI use, compared to roughly 1,000 for all of 2024. The disclosures are self-reported, so actual usage is likely higher.
AI

IMF Warns Global Economic Resilience at Risk if AI Falters 51

The "surprisingly resilient" global economy is at risk of being disrupted by a sharp reversal in the AI boom, the IMF warned on Monday, as world leaders prepared for talks in the Swiss resort of Davos. From a report: Risks to global economic expansion were "tilted to the downside," the fund said in an update to its World Economic Outlook, arguing that growth was reliant on a narrow range of drivers, notably the US technology sector and the associated equity boom.

Nonetheless, it predicted US growth would strongly outpace the rest of the G7 this year, forecasting an expansion of 2.4 per cent in 2026 and 2 per cent in 2027. Tech investment had surged to its highest share of US economic output since 2001, helping drive growth, the IMF found.

"There is a risk of a correction, a market correction, if expectations about AI gains in productivity and profitability are not realised," said Pierre-Olivier Gourinchas, IMF chief economist. "We're not yet at the levels of market frothiness, if you want, that we saw in the dotcom period," he added. "But nevertheless there are reasons to be somewhat concerned."
AI

Is the Possibility of Conscious AI a Dangerous Myth? (noemamag.com) 221

This week Noema magazine published a 7,000-word exploration of our modern "Mythology Of Conscious AI" written by a neuroscience professor who directs the University of Sussex Centre for Consciousness Science: The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with.
He makes detailed arguments against a computation-based consciousness (including "Simulation is not instantiation... If we simulate a living creature, we have not created life.") While a computer may seem like the perfect metaphor for a brain, the "dynamical systems" school of cognitive science (among other approaches) rejects the idea that minds can be entirely accounted for algorithmically. And maybe actual life needs to be present before something can be declared conscious.

He also warns that "Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines."

But then his essay reaches a surprising conclusion: As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at (potentially) an exponential pace. And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to. Even if I'm right that standard digital computers aren't up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLM.

But our worries don't stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer... Machines that seem conscious pose serious ethical issues distinct from those posed by actually conscious machines. For example, we might give AI systems "rights" that they don't actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be...

One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions... What's more, because there's no consensus over the necessary or sufficient conditions for consciousness, there aren't any definitive tests for deciding whether an AI is actually conscious....

Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines... If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves...

The sociologist Sherry Turkle once said that technology can make us forget what we know about life. It's about time we started to remember.

Space

EHT Astronomers Will Film Swirling of a Supermassive Black Hole for the First Time (theguardian.com) 10

"Astronomers are preparing to capture a movie of a supermassive black hole in action for the first time," reports the Guardian: The Event Horizon Telescope (EHT) will track the colossal black hole at the heart of the Messier 87 galaxy throughout March and April with the aim of capturing footage of the swirling disc that traces out the edge of the event horizon, the point beyond which no light or matter can escape... The EHT is a global network of 12 radio telescopes spanning locations from Antarctica to Spain and Korea, which in 2019 unveiled the first image of a black hole's shadow. During March and April, as the Earth rotates, M87's central black hole will come into view for different telescopes, allowing a complete image to be captured every three days...

Measuring the black hole's spin speed matters because this could help discriminate between competing theories of how these objects reached such epic proportions. If black holes grow mostly through accretion — steadily snowballing material that strays nearby — they would be expected to end up spinning at incredibly high speeds. By contrast, if black holes expand mostly through merging with other black holes, each merger could slow things down. The observations could also help explain how black hole jets are formed, which are among the largest, most powerful structures produced by galaxies. Jets channel vast columns of gas out of galaxies, slowing down the formation of new stars and limiting galaxy growth. In turn this can create dense pockets of material that trigger bursts of star formation beyond the host galaxy...

While the movie campaign will take place in the spring, the sheer volume of data produced by the telescopes means the scientists will need to wait for Antarctic summer before the hard drives can be physically shipped to Germany and the US for processing. So it is likely to be a lengthy wait before the rest of the world gets a glimpse of the black hole in action.

In a correction, the Guardian apologizes for originally including an AI-generated illustration of a black hole with a caption suggesting it was a photo from telescopes. They've since swapped in an actual picture of the Messier 87 galaxy black hole.
Education

Young US College Graduates Suddenly Aren't Finding Jobs Faster Than Non-College Graduates (msn.com) 91

U.S. college graduates "have historically found jobs more quickly than people with only a high school degree," writes Bloomberg.

"But that advantage is becoming a thing of the past, according to new research from the Federal Reserve Bank of Cleveland." "Recently, the job-finding rate for young college-educated workers has declined to be roughly in line with the rate for young high-school-educated workers, indicating that a long period of relatively easier job-finding prospects for college grads has ended," Cleveland Fed researchers Alexander Cline and BarıÅY Kaymak said in a blog post published Monday. The study follows the latest monthly employment data released on Nov. 20, which showed the unemployment rate for college-educated workers continued to rise in September amid an ongoing slowdown in white-collar hiring... The unemployment rate for people between the ages of 20 to 24 was 9.2% in September, up 2.2 percentage points from a year prior.
There is a caveat. "Young college graduates maintain advantages in job stability and compensation once hired..." the researchers write. "The convergence we document concerns the initial step of securing employment rather than overall labor market outcomes."

Their research includes a graph showing how the "unemployment gap" between college-educated and high school-educated workers widened dramatically after 2010, which the researchers attribute to "the prolonged jobless recovery after 2008". That gap has been closing ever since, and is now smaller than at any time since the 1970s.

"Young high school workers are riding the wave of the historically tight postpandemic labor market with well-below-average unemployment compared to that of past high school graduates, while young college workers are experiencing unemployment rates rarely observed among past college cohorts barring during recessions." The labor market advantages conferred by a college degree have historically justified individual investment in higher education and expanding support for college access. If the job-finding rate of college graduates continues to decline relative to the rate for high school graduates, we may see a reversal of these trends. The convergence we document concerns the initial step of securing employment rather than overall labor market outcomes. These details suggest a nuanced shift in employment dynamics, one in which college graduates face greater difficulty finding jobs than previously but maintain advantages compared with high school graduates in job stability and compensation once hired.
Two key quotes:
  • "Declining job prospects among young college graduates may reflect the continued growth in college attainment, adding ever larger cohorts of college graduates to the ranks of job seekers, even though technology no longer favors college-educated workers."
  • "Developments related to AI, which may be affecting job-finding prospects in some cases, cannot explain the decades-long decline in the college job-finding rate."

EU

Hundreds Answer Europe's 'Public Call for Evidence' on an Open Digital Ecosystem Strategy (helpnetsecurity.com) 30

The European Commission "has opened a public call for evidence on European open digital ecosystems," writes Help Net Security, part of preparations for an upcoming Communication "that will examine the role of open source in EU's digital infrastructure." The consultation runs from January 6 to February 3, 2026. Submissions will be used to shape a Commission Communication addressed to the European Parliament, the Council, and other EU bodies, which is scheduled for publication in the first quarter of 2026... The call for evidence links Europe's reliance on digital technologies developed outside the EU to concerns over long term control of infrastructure and software supply chains... Open digital ecosystems are discussed in the context of technological sovereignty and the use of technologies that can be inspected, adapted, and shared.
Long-time Slashdot reader Elektroschock describes it as the European Commission "stepping up its efforts behind open-source software": Building on President von der Leyen's political guidelines, the initiative will review the Commission's 2020-2023 open-source approach and set out concrete actions to strengthen Europe's open-source ecosystem across key areas such as cloud, AI, cybersecurity and industrial technologies. The strategy will be presented alongside the upcoming Cloud and AI Development Act, forming a broader policy package aimed at reducing strategic dependencies and boosting Europe's digital resilience.
And "In just a few days, over 370 submissions have already been filed, indicating that the issue is touching a nerve across the EU," writes CyberNews.com: "Europe must regain control over its software supply chain to safeguard freedom, security, and innovation," suggests an individual from Slovakia. Similar perspectives appear to be widely shared among respondents...

The document doesn't mention US tech giants specifically, but rather aims to support tech sovereignty and seek "digital solutions that are valid alternatives to proprietary ones...."

"This is not a legislative initiative. The strategy will take the form of a Commission communication. The initiative will set out a general approach and will propose: actions relying on further commitments and an implementation process," the EC explains. Policymakers expect the strategy to help EU member states identify the necessary steps to support national open-source companies and communities.

AI

Retailers Rush to Implement AI-Assisted Shopping and Orders (msn.com) 73

This week Google "unveiled a set of tools for retailers that helps them roll out AI agents," reports the Wall Street Journal, The new retail AI agents, which help shoppers find their desired items, provide customer support and let people order food at restaurants, are part of what Alphabet-owned Google calls Gemini Enterprise for Customer Experience. Major retailers, including home improvement giant Lowe's, the grocer Kroger and pizza chain Papa Johns say they are already using Google's tools to help prepare for the incoming wave of AI-assisted shopping and ordering...

Kicking off the race among tech giants to get ahead of this shift, OpenAI released its Instant Checkout feature last fall, which lets users buy stuff directly through its chatbot ChatGPT. In January, Microsoft announced a similar checkout feature for its Copilot chatbot. Soon after OpenAI's release last year, Walmart said it would partner with OpenAI to let shoppers buy its products within ChatGPT.

But that's just the beginning, reports the New York Times, with hundreds of start-ups also vying for the attention of retailers: There are A.I. start-ups that offer in-store cameras that can detect a customer's age or gender, robots that manage shelves on their own and headsets that give store workers access to product information in real time... The scramble to exploit artificial intelligence is happening across the retail spectrum, from the highest echelons of luxury goods to the most pragmatic of convenience stores.

7-Eleven said it was using conversational A.I. to hire staff at its convenience stores through an agent named Rita (Recruiting Individuals Through Automation). Executives said that they no longer had to worry about whether applicants would show up to interviews and that the system had reduced hiring time, which had taken two weeks, to less than three days.

The article notes that at the National Retail Federation conference, other companies showing their AI advancements included Applebee's, IHOP, the Vitamin Shoppe, Urban Outfitters, Rag & Bone, Kendra Scott, Michael Kors and Philip Morris.
AI

How Much Do AI Models Resemble a Brain? (foommagazine.org) 130

At the AI safety site Foom, science journalist Mordechai Rorvig explores a paper presented at November's Empirical Methods in Natural Language Processing conference: [R]esearchers at the Swiss Federal Institute of Technology (EPFL), the Massachusetts Institute of Technology (MIT), and Georgia Tech revisited earlier findings that showed that language models, the engines of commercial AI chatbots, show strong signal correlations with the human language network, the region of the brain responsible for processing language... The results lend clarity to the surprising picture that has been emerging from the last decade of neuroscience research: That AI programs can show strong resemblances to large-scale brain regions — performing similar functions, and doing so using highly similar signal patterns.

Such resemblances have been exploited by neuroscientists to make much better models of cortical regions. Perhaps more importantly, the links between AI and cortex provide an interpretation of commercial AI technology as being profoundly brain-like, validating both its capabilities as well as the risks it might pose for society as the first synthetic braintech. "It is something we, as a community, need to think about a lot more," said Badr AlKhamissi, doctoral student in computer science at EPFL and first author of the preprint, in an interview with Foom. "These models are getting better and better every day. And their similarity to the brain [or brain regions] is also getting better — probably. We're not 100% sure about it...."

There are many known limitations to seeing AI programs as models of brain regions, even those that have high signal correlations. For example, such models lack any direct implementation of biochemical signalling, which is known to be important for the functioning of nervous systems. However, if such comparisons are valid, then they would suggest, somewhat dramatically, that we are increasingly surrounded by a synthetic braintech: a technology not just as capable as the human brain, in some ways, but actually made up of similar components.

Thanks to Slashdot reader Gazelle Bay for sharing the article.
Space

2026's Breakthrough Technologies? MIT Technology Review Chooses Sodium-ion Batteries, Commercial Space Stations (technologyreview.com) 61

As 2026 begins, MIT Technology Review publishes "educated guesses" on emerging technologies that will define the future, advances "we think will drive progress or incite the most change — for better or worse — in the years ahead."

This year's list includes next-gen nuclear, gene-editing drugs (as well as the "resurrection" of ancient genes from extinct creatures), and three AI-related developments: AI companions, AI coding tools, and "mechanistic interpretability" for revealing LLM decision-making.

But also on the list are sodium-ion batteries, "a cheaper, safer alternative to lithium." Backed by major players and public investment, they're poised to power grids and affordable EVs worldwide. [Chinese battery giant CATL claims to have already started manufacturing sodium-ion batteries at scale, and BYD also plans a massive production facility for sodium-ion batteries.] The most significant impact of sodium-ion technology may be not on our roads but on our power grids. Storing clean energy generated by solar and wind has long been a challenge. Sodium-ion batteries, with their low cost, enhanced thermal stability, and long cycle life, are an attractive alternative. Peak Energy, a startup in the US, is already deploying grid-scale sodium-ion energy storage. Sodium-ion cells' energy density is still lower than that of high-end lithium-ion ones, but it continues to improve each year — and it's already sufficient for small passenger cars and logistics vehicles.
And another "breakthrough technology" on their list is commercial space stations: Vast Space, from California, plans to launch its Haven-1 space station in May 2026 on a SpaceX Falcon 9 rocket. If all goes to plan, it will initially support crews of four people staying aboard the bus-size habitat for 10 days. Paying customers will be able to experience life in microgravity and conduct research such as growing plants and testing drugs. On its heels will be Axiom Space's outpost, the Axiom Station, consisting of five modules (or rooms). It's designed to look like a boutique hotel and is expected to launch in 2028. Voyager Space aims to launch its version, called Starlab, the same year, and Blue Origin's Orbital Reef space station plans to follow in 2030.
Thanks to long-time Slashdot reader sandbagger for sharing the article.
Privacy

What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet (youtube.com) 50

A couple of months ago, YouTuber Benn Jordan "found vulnerabilities in some of Flock's license plate reader cameras," reports 404 Media's Jason Koebler. "He reached out to me to tell me he had learned that some of Flock's Condor cameras were left live-streaming to the open internet."

This led to a remarkable article where Koebler confirmed the breach by visiting a Flock surveillance camera mounted on a California traffic signal. ("On my phone, I am watching myself in real time as the camera records and livestreams me — without any password or login — to the open internet... Hundreds of miles away, my colleagues are remotely watching me too through the exposed feed.") Flock left livestreams and administrator control panels for at least 60 of its AI-enabled Condor cameras around the country exposed to the open internet, where anyone could watch them, download 30 days' worth of archived video, change settings, view log files, and run diagnostics. Unlike many of Flock's cameras, which are designed to capture license plates as people drive by, Flock's Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people's faces... The exposure was initially discovered by YouTuber and technologist Benn Jordan and was shared with security researcher Jon "GainSec" Gaines, who recently found numerous vulnerabilities in several other models of Flock's automated license plate reader (ALPR) cameras.
Jordan appeared this week as a guest on Koebler's own YouTube channel, and released a video of his own about the experience, titled "We Hacked Flock Safety Cameras in under 30 Seconds." (Thanks to Slashdot reader beadon for sharing the link.) Together, Jordan and 404 Media also created another video three weeks ago, titled "The Flock Camera Leak is Like Netflix for Stalkers," which includes footage he says was "completely accessible at the time Flock Safety was telling cities that the devices are secure after they're deployed."

The video decries cities "too lazy to conduct their own security audit or research the efficacy versus risk," but also calls weak security "an industry-wide problem." Jordan explains in the video how he "very easily found the administration interfaces for dozens of Flock safety cameras..." — but also what happened next: None of the data or video footage was encrypted. There was no username or password required. These were all completely public-facing, for the world to see.... Making any modification to the cameras is illegal, so I didn't do this. But I had the ability to delete any of the video footage or evidence by simply pressing a button. I could see the paths where all of the evidence files were located on the file system...

During and after the process of conducting that research and making that video, I was visited by the police and had what I believed to be private investigators outside my home photographing me and my property and bothering my neighbors. Jon Gaines, or GainSec, the brains behind most of this research, lost employment within 48 hours of the video being released. And the sad reality is that I don't view these things as consequences or punishment for researching security vulnerabilities. I view these as consequences and punishment for doing it ethically and transparently.

I've been contacted by people on or communicating with civic councils who found my videos concerning, and they shared Flock Safety's response with me. The company claimed that the devices in my video did not reflect the security standards of the ones being publicly deployed. The CEO even posted on LinkedIn and boasted about Flock Safety's security policies. So, I formally and publicly offered to personally fund security research into Flock Safety's deployed ecosystem. But the law prevents me from touching their live devices. So, all I needed was their permission so I wouldn't get arrested. And I was even willing to let them supervise this research.

I got no response.

So instead, he read Flock's official response to a security/surveillance industry research group — while standing in front of one of their security cameras, streaming his reading to the public internet.

"Might as well. It's my tax dollars that paid for it."

"'Flock is committed to continuously improving security...'"
