Medicine

The Medical Revolutions That Prevented Millions of Cancer Deaths (vox.com) 76

Vox publishes a story about "the quiet revolutions that have prevented millions of cancer deaths...."

"The age-adjusted death rate in the US for cancer has declined by about a third since 1991, meaning people of a given age have about a third lower risk of dying from cancer than people of the same age more than three decades ago... " The dramatic bend in the curve of cancer deaths didn't happen by accident — it's the compound interest of three revolutions. While anti-smoking policy has been the single biggest lifesaver, other interventions have helped reduce people's cancer risk. One of the biggest successes is the HPV vaccine. A study last year found that death rates of cervical cancer — which can be caused by HPV infections — in US women ages 20-39 had dropped 62 percent from 2012 to 2021, thanks largely to the spread of the vaccine. Other cancers have been linked to infections, and there is strong research indicating that vaccination can have positive effects on reducing cancer incidence.

The next revolution is better and earlier screening. It's generally true that the earlier cancer is caught, the better the chances of survival... According to one study, the incidence of late-stage colorectal cancer in Americans over 50 declined by a third between 2000 and 2010, in large part because rates of colonoscopies almost tripled in that same time period. And newer screening methods, often employing AI or using blood-based tests, could make preliminary screening simpler, less invasive and therefore more readily available. If 20th-century screening was about finding physical evidence of something wrong — the lump in the breast — 21st-century screening aims to find cancer before symptoms even arise.

Most exciting of all are frontier developments in treating cancer... From drugs like lenalidomide and bortezomib in the 2000s, which helped double median myeloma survival, to the spread of monoclonal antibodies, real breakthroughs in treatments have meaningfully extended people's lives — not just by months, but years. Perhaps the most promising development is CAR-T therapy, a form of immunotherapy. Rather than attempting to kill the cancer directly, immunotherapies turn a patient's own T-cells into guided missiles. In a recent study of 97 patients with multiple myeloma, many of whom were facing hospice care, a third of those who received CAR-T therapy had no detectable cancer five years later. It was the kind of result that doctors rarely see.

The article begins with some recent quotes from Jon Gluck, who was told after a cancer diagnosis that he had as little as 18 months left to live — 22 years ago...
AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
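The "probability gadget" description refers to next-token prediction. A toy sketch of the sampling step, with a made-up vocabulary and probabilities standing in for what a real model computes from the full preceding context:

```python
import random

# Toy next-token distribution for the prompt "The cat sat on the ...".
# In a real LLM these probabilities come from a neural network conditioned
# on the entire context; the numbers here are invented.
next_token_probs = {
    "mat": 0.55,
    "sofa": 0.25,
    "roof": 0.15,
    "piano": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its estimated probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # usually "mat", sometimes not
```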
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.

AI

After 'AI-First' Promise, Duolingo CEO Admits 'I Did Not Expect the Blowback' (ft.com) 46

Last month, Duolingo CEO Luis von Ahn "shared on LinkedIn an email he had sent to all staff announcing Duolingo was going 'AI-first'," remembers the Financial Times.

"I did not expect the amount of blowback," he admits.... He attributes this anger to a general "anxiety" about technology replacing jobs. "I should have been more clear to the external world," he reflects on a video call from his office in Pittsburgh. "Every tech company is doing similar things [but] we were open about it...."

Since the furore, von Ahn has reassured customers that AI is not going to replace the company's workforce. There will be a "very small number of hourly contractors who are doing repetitive tasks that we no longer need", he says. "Many of these people are probably going to be offered contractor jobs for other stuff." Duolingo is still recruiting if it is satisfied the role cannot be automated. Graduates who make up half the people it hires every year "come with a different mindset" because they are using AI at university.

The thrust of the AI-first strategy, the 46-year-old says, is overhauling work processes... He wants staff to explore whether their tasks "can be entirely done by AI or with the help of AI. It's just a mind shift that people first try AI. It may be that AI doesn't actually solve the problem you're trying to solve... that's fine." The aim is to automate repetitive tasks to free up time for more creative or strategic work.

Examples where it is making a difference include technology and illustration. Engineers will spend less time writing code. "Some of it they'll need to, but we want it to be mediated by AI," von Ahn says... Similarly, designers will have more of a supervisory role, with AI helping to create artwork that fits Duolingo's "very specific style". "You no longer do the details and are more of a creative director. For the vast majority of jobs, this is what's going to happen...." [S]ocietal implications for AI, such as the ethics of stealing creators' copyright, are "a real concern". "A lot of times you don't even know how [the large language model] was trained. We should be careful." When it comes to artwork, he says Duolingo is "ensuring that the entirety of the model is trained just with our own illustrations".

Government

Russian Spies Are Analyzing Data From China's WeChat App (nytimes.com) 17

An anonymous reader shared this report from The New York Times: Russian counterintelligence agents are analyzing data from the popular Chinese messaging and social media app WeChat to monitor people who might be in contact with Chinese spies, according to a Russian intelligence document obtained by The New York Times. The disclosure highlights the rising level of concern about Chinese influence in Russia as the two countries deepen their relationship. As Russia has become isolated from the West over its war in Ukraine, it has become increasingly reliant on Chinese money, companies and technology. But it has also faced what the document describes as increased Chinese espionage efforts.

The document indicates that the Russian domestic security agency, known as the F.S.B., pulls purloined data into an analytical tool known as "Skopishche" (a Russian word for a mob of people). Information from WeChat is among the data being analyzed, according to the document... One Western intelligence agency told The Times that the information in the document was consistent with what it knew about "Russian penetration of Chinese communications...." By design, [WeChat] does not use end-to-end encryption to protect user data. That is because the Chinese government exercises strict control over the app and relies on its weak security to monitor and censor speech. Foreign intelligence agencies can exploit that weakness, too...

WeChat was briefly banned in Russia in 2017, but access was restored after Tencent took steps to comply with laws requiring foreign digital platforms above a certain size to register as "organizers of information dissemination." The Times confirmed that WeChat is currently licensed by the government to operate in Russia. That license would require Tencent to store user data on Russian servers and to provide access to security agencies upon request.

Government

ACLU Accuses California Local Government's Drones of 'Runaway Spying Operation' (sfgate.com) 79

An anonymous reader shared this report from SFGate about a lawsuit alleging a "warrantless drone surveillance program" that's "trampling residents' right to privacy": Sonoma County has been accused of deploying hundreds of drone flights over residents in a "runaway spying operation"... according to a lawsuit filed Wednesday by the American Civil Liberties Union. The North Bay county of Sonoma initially started the 6-year-old drone program to track illegal cannabis cultivation, but the lawsuit alleges that officials have since turned it into a widespread program to catch unrelated code violations at residential properties and levy millions of dollars in fines. The program has captured 5,600 images during more than 700 flights, the lawsuit said...

Matt Cagle, a senior staff attorney with the ACLU Foundation of Northern California, said in a Wednesday news release that the county "has hidden these unlawful searches from the people they have spied on, the community, and the media...." The lawsuit says the county employees used the drones to spy on private homes without first receiving a warrant, including photographing private areas like hot tubs and outdoor baths, and through curtainless windows.

One plaintiff "said the county secretly used the drone program to photograph her Sonoma County horse stable and issue code violations," according to the article. She only discovered the use of the drones after a county employee mentioned they had photos of her property, according to the lawsuit. She then filed a public records request for the images, which left her "stunned" after seeing that the county employees were monitoring her private property including photographing her outdoor bathtub and shower, the lawsuit said.
Transportation

Volvo Debuts New IoT Seatbelt Design (caranddriver.com) 66

Longtime Slashdot reader sinij shares a report from Car and Driver: [Volvo] is debuting a new version of the three-point seatbelt that it believes is a major improvement over the original. The new design will be a smart belt that adapts to each occupant's body and adjusts the belt load accordingly. It uses data from interior and exterior sensors to customize protection based on the road conditions and the specific occupants. The technology will debut on the upcoming EX60 crossover.

According to Volvo, the onboard sensors can accurately detect a passenger's height, weight, body shape, and seating position. Based on real-time data, the belts optimize protection -- increasing belt load for larger passengers or lowering it for smaller passengers. While the technology for customizing protection isn't new -- Volvo's current belts already use three load-limiting profiles -- the new belts increase that number to 11. The belts should get safer over time, too, as they are equipped to receive over-the-air updates.
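Volvo has not published its selection logic, but mapping occupant measurements to one of 11 load-limiting profiles might look something like this sketch (the inputs, thresholds, and scaling are invented for illustration):

```python
def select_load_profile(weight_kg: float, height_cm: float) -> int:
    """Map occupant measurements to one of 11 load-limiting profiles (1-11).

    Purely illustrative: Volvo has not published its selection logic, and a
    real system would also use body shape, seating position, and crash data.
    """
    # Crude body-size score in roughly [0, 1], then scaled onto 1..11.
    size_score = (weight_kg / 110.0 + height_cm / 200.0) / 2.0
    profile = round(1 + size_score * 10)
    return max(1, min(11, profile))

# Larger occupants get a higher-load profile, smaller occupants a lower one.
print(select_load_profile(95, 190))   # -> 10
print(select_load_profile(50, 155))   # -> 7
```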
sinij adds: "Downloading patches for your seat belts from China. What could possibly go wrong?"
Youtube

YouTube Pulls Tech Creator's Self-Hosting Tutorial as 'Harmful Content' (jeffgeerling.com) 77

YouTube pulled a popular tutorial video from tech creator Jeff Geerling this week, claiming his guide to installing LibreELEC on a Raspberry Pi 5 violated policies against "harmful content." The video, which showed viewers how to set up their own home media servers, had been live for over a year and racked up more than 500,000 views. YouTube's automated systems flagged the content for allegedly teaching people "how to get unauthorized or free access to audio or audiovisual content."

Geerling says his tutorial covered only legal self-hosting of media people already own -- no piracy tools or copyright workarounds. He said he goes out of his way to avoid mentioning popular piracy software in his videos. It's the second time YouTube has pulled one of Geerling's self-hosting videos. Last October, YouTube removed his Jellyfin tutorial, though that decision was quickly reversed after appeal. This time, his appeal was denied.
AI

Anthropic Co-founder on Cutting Access To Windsurf: 'It Would Be Odd For Us To Sell Claude To OpenAI' (techcrunch.com) 5

Anthropic cut AI coding assistant Windsurf's direct access to its Claude models after media reported that rival OpenAI plans to acquire the startup for $3 billion. Anthropic co-founder Jared Kaplan told TechCrunch that "it would be odd for us to be selling Claude to OpenAI," explaining the decision to cut access to Claude 3.5 Sonnet and Claude 3.7 Sonnet models.
China

OpenAI Says Significant Number of Recent ChatGPT Misuses Likely Came From China (wsj.com) 19

OpenAI said it disrupted several attempts [non-paywalled source] from users in China to leverage its AI models for cyber threats and covert influence operations, underscoring the security challenges AI poses as the technology becomes more powerful. From a report: The Microsoft-backed company on Thursday published its latest report on disrupting malicious uses of AI, saying its investigative teams continued to uncover and prevent such activities in the three months since Feb. 21.

While misuse occurred in several countries, OpenAI said it believes a "significant number" of violations came from China, noting that four of 10 sample cases included in its latest report likely had a Chinese origin. In one such case, the company said it banned ChatGPT accounts it claimed were using OpenAI's models to generate social media posts for a covert influence operation. The company said a user stated in a prompt that they worked for China's propaganda department, though it cautioned it didn't have independent proof to verify the claim.

Media

WHIP Muxer Merged To FFmpeg For Sub-Second Latency Streaming (phoronix.com) 7

FFmpeg has added support for WHIP (WebRTC-HTTP Ingestion Protocol), enabling sub-second latency live streaming by leveraging WebRTC's fast, secure video delivery capabilities. It's a major update that introduces a new WHIP muxer to make FFmpeg more powerful for real-time broadcasting applications. Phoronix's Michael Larabel reports: WHIP uses HTTP to exchange initial information and capabilities, then uses STUN binding to establish a UDP session. Encryption is supported with WHIP -- and, due to WebRTC, mandatory -- and audio/video frames are split into RTP packets. WebRTC-HTTP Ingestion Protocol is an IETF standard for low-latency communication over WebRTC, aimed at streaming/broadcasting uses. The FFmpeg commit, introducing nearly three thousand lines of new code, adds an initial WHIP muxer. You can learn more about WebRTC WHIP in this presentation by Millicast (PDF).
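A plausible invocation of the new muxer, assuming an FFmpeg build that includes the merged WHIP support; the muxer name, codec choices, and ingest URL below are assumptions to be checked against the FFmpeg documentation:

```python
import subprocess

# Hypothetical low-latency publish to a WHIP ingest endpoint using the new
# muxer. The "-f whip" selector and the endpoint URL are assumptions; H.264
# and Opus are used because they are the common WebRTC codecs.
cmd = [
    "ffmpeg",
    "-re", "-i", "input.mp4",            # read input at its native frame rate
    "-c:v", "libx264",                   # H.264 video for WebRTC compatibility
    "-c:a", "libopus",                   # Opus audio, standard for WebRTC
    "-f", "whip",                        # select the WHIP muxer (assumed name)
    "https://example.com/whip/endpoint", # placeholder WHIP ingest endpoint
]
subprocess.run(cmd, check=True)
```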
Privacy

Apple Gave Governments Data On Thousands of Push Notifications (404media.co) 13

An anonymous reader quotes a report from 404 Media: Apple provided governments around the world with data related to thousands of push notifications sent to its devices, which can identify a target's specific device or in some cases include unencrypted content like the actual text displayed in the notification, according to data published by Apple. In one case, for which Apple ultimately did not provide data, Israel demanded data related to nearly 700 push notifications as part of a single request. The data for the first time puts a concrete figure on how many requests governments around the world are making, and sometimes receiving, for push notification data from Apple.

The practice first came to light in 2023, when Senator Ron Wyden sent a letter to the U.S. Department of Justice revealing the practice, which also applied to Google. As the letter said, "the data these two companies receive includes metadata, detailing which app received a notification and when, as well as the phone and associated Apple or Google account to which that notification was intended to be delivered. In certain instances, they might also receive unencrypted content, which could range from backend directives for the app to the actual text displayed to a user in an app notification." The published data covers six-month periods from July 2022 through June 2024. Andre Meister from German media outlet Netzpolitik posted a link to the transparency data to Mastodon on Tuesday.
Along with the data, Apple published the following description: "Push Token requests are based on an Apple Push Notification service token identifier. When users allow a currently installed application to receive notifications, a push token is generated and registered to that developer and device. Push Token requests generally seek identifying details of the Apple Account associated with the device's push token, such as name, physical address and email address."
The Courts

Reddit Sues AI Startup Anthropic For Breach of Contract, 'Unfair Competition' (cnbc.com) 44

Reddit is suing AI startup Anthropic for what it's calling a breach of contract and for engaging in "unlawful and unfair business acts" by using the social media company's platform and data without authority. From a report: The lawsuit, filed in San Francisco on Wednesday, claims that Anthropic has been training its models on the personal data of Reddit users without obtaining their consent. Reddit alleges that it has been harmed by the unauthorized commercial use of its content.

The company opened the complaint by calling Anthropic a "late-blooming" AI company that "bills itself as the white knight of the AI industry." Reddit follows by saying, "It is anything but."

Facebook

Meta's Going To Revive an Old Nuclear Power Plant (theverge.com) 47

Meta has struck a 20-year deal with energy company Constellation to keep the Clinton Clean Energy Center nuclear plant in Illinois operational, the social media giant's first nuclear power purchase agreement as it seeks clean energy sources for AI data centers. The aging facility, which was slated to close in 2017 after years of financial losses and currently operates under a state tax credit reprieve until 2027, will receive undisclosed financial support that enables a 30-megawatt capacity expansion to 1,121 MW total output.

The arrangement preserves 1,100 local jobs while generating electricity for 800,000 homes, as Meta purchases clean energy certificates to offset a portion of its growing carbon footprint driven by AI operations.
United States

Texas Right To Repair Bill Passes (theverge.com) 36

Texas is poised to become the first state with a Republican-controlled government to pass a right to repair law, as its Senate unanimously approved HB 2963. The bill requires manufacturers to provide parts, manuals, and tools for equipment sold or used in the state. The Verge reports: A press release from the United States Public Interest Research Group (PIRG), which has pushed for repairability laws nationwide, noted that this would make Texas the ninth state with a right to repair rule, and the seventh with a version that includes consumer electronics. It follows New York, Colorado, Minnesota, California, Oregon, Maine, and most recently, Washington [...]. "More repair means less waste. Texas produces some 621,000 tons of electronic waste per year, which creates an expensive and toxic mess. Now, thanks to this bipartisan win, Texans can fix that," said Environment Texas executive director Luke Metzger.
AI

Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions 75

An anonymous reader quotes a report from 404 Media: The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning "a bunch of schizoposters" who believe "they've made some sort of incredible discovery or created a god or become a god," highlighting a new type of chatbot-fueled delusion that started getting attention in early May. "LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities," one of the moderators of r/accelerate wrote in an announcement. "There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment."

The moderator said the subreddit has banned "over 100" people for this reason already, and that they've seen an "uptick" in this type of user this month. The moderator explains that r/accelerate "was formed to basically be r/singularity without the decels." r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but one that is sometimes critical or fearful of what the singularity will mean for humanity. "Decels" is short for the pejorative "decelerationists," who pro-AI people think are needlessly slowing down or sabotaging AI's development and the inevitable march towards AI utopia. r/accelerate's Reddit page claims that it's a "pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents."

The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about "Chatgpt induced psychosis," from someone saying their partner is convinced he created the "first truly recursive AI" with ChatGPT that is giving them "the answers" to the universe. [...] The moderator update on r/accelerate refers to another post on r/ChatGPT which claims "1000s of people [are] engaging in behavior that causes AI to have spiritual delusions." The author of that post said they noticed a spike in websites, blogs, Githubs, and "scientific papers" that "are very obvious psychobabble," and all claim AI is sentient and communicates with them on a deep and spiritual level that's about to change the world as we know it. "Ironically, the OP post appears to be falling for the same issue as well," the r/accelerate moderator wrote.
"Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people," an r/accelerate moderator told 404 Media. "The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now."

Moderators of the subreddit often cite the term "Neural Howlround" to describe a failure mode in LLMs during inference, where recursive feedback loops can cause fixation or freezing. The term was coined by independent researcher Seth Drake in a self-published, non-peer-reviewed paper. Both Drake and the r/accelerate moderator above suggest the deeper issue may lie with users projecting intense personal meaning onto LLM responses, sometimes driven by mental health struggles.
Privacy

North Korean Smartphones Automatically Capture Screenshots Every 5 Minutes For State Surveillance 74

A smartphone smuggled out of North Korea automatically captures screenshots every five minutes and stores them in a hidden folder inaccessible to users, according to analysis by the BBC. Authorities can later review these images to monitor citizen activity on the device. The phone, obtained by Seoul-based media outlet Daily NK, resembles a Huawei or Honor device but runs state-approved software designed for surveillance and control. The device also automatically censors text, replacing "South Korea" with "puppet state" and Korean terms of endearment with "comrade."
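The substitution behavior the BBC describes amounts to simple string replacement. A toy reproduction of that behavior (illustrative only, not the actual state software; the replacement table is limited to what the report mentions):

```python
# Substitutions the BBC report describes; the report also says Korean terms
# of endearment are replaced with "comrade".
REPLACEMENTS = {
    "South Korea": "puppet state",
}

def censor(text: str) -> str:
    """Apply the state-mandated substitutions to text on the device."""
    for banned, replacement in REPLACEMENTS.items():
        text = text.replace(banned, replacement)
    return text

print(censor("Visiting South Korea next year"))
# -> "Visiting puppet state next year"
```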
AI

Business Insider Recommended Nonexistent Books To Staff As It Leans Into AI (semafor.com) 23

An anonymous reader shares a report: Business Insider announced this week that it wants staff to better incorporate AI into its journalism. But less than a year ago, the company had to quietly apologize to some staff for accidentally recommending that they read books that did not appear to exist but instead may have been generated by AI.

In an email to staff last May, a senior editor at Business Insider sent around a list of what she called "Beacon Books," a list of memoirs and other acclaimed business nonfiction books, with the idea of ensuring staff understood some of the fundamental figures and writing powering good business journalism.

Many of the recommendations were well-known recent business, media, and tech nonfiction titles such as Too Big To Fail by Andrew Ross Sorkin, DisneyWar by James Stewart, and Super Pumped by Mike Isaac. But a few were unfamiliar to staff. Simply Target: A CEO's Lessons in a Turbulent Time and Transforming an Iconic Brand by former Target CEO Gregg Steinhafel was nowhere to be found. Neither was Jensen Huang: the Founder of Nvidia, which was supposedly published by the company Charles River Editors in 2019.

Space

Six More Humans Successfully Carried to the Edge of Space by Blue Origin (space.com) 74

An anonymous reader shared this report from Space.com: Three world travelers, two Space Camp alums and an aerospace executive whose last name aptly matched their shared adventure traveled into space and back Saturday, becoming the latest six people to fly with Blue Origin, the spaceflight company founded by billionaire Jeff Bezos.

Mark Rocket joined Jaime Alemán, Jesse Williams, Paul Jeris, Gretchen Green and Amy Medina Jorge on board the RSS First Step — Blue Origin's first of two human-rated New Shepard capsules — for a trip above the Kármán Line, the 62-mile-high (100-kilometer) internationally recognized boundary between Earth and space...

Mark Rocket became the first New Zealander to reach space on the mission. His connection to aerospace goes beyond his apt name and today's flight; he's currently the CEO of Kea Aerospace and previously helped lead Rocket Lab, a competing space launch company to Blue Origin that sends most of its rockets up from New Zealand. Alemán, Williams and Jeris each traveled the world extensively before briefly leaving the planet today. An attorney from Panama, Alemán is now the first person to have visited all 193 countries recognized by the United Nations, traveled to the North and South Poles, and been into space. For Williams, an entrepreneur from Canada, Saturday's flight continued his record of achieving high altitudes; he has summited Mt. Everest and five of the six other highest mountains across the globe.

"For about three minutes, the six NS-32 crewmates experienced weightlessness," the article points out, "and had an astronaut's-eye view of the planet..."

On social media Blue Origin notes it's their 12th human spaceflight, "and the 32nd flight of the New Shepard program."
AI

Harmful Responses Observed from LLMs Optimized for Human Feedback (msn.com) 49

Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations.

Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating. OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...."

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users."

"Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out,,,
NASA

America's Next NASA Administrator Will Not Be Former SpaceX Astronaut Jared Isaacman (arstechnica.com) 42

In December it looked like NASA's next administrator would be the billionaire businessman/space enthusiast who twice flew to orbit with SpaceX.

But Saturday the nomination was withdrawn "after a thorough review of prior associations," according to an announcement made on social media. The Guardian reports: His removal from consideration caught many in the space industry by surprise. Trump and the White House did not explain what led to the decision... In [Isaacman's] confirmation hearing in April, he sought to balance Nasa's existing moon-aligned space exploration strategy with pressure to shift the agency's focus to Mars, saying the US can plan for travel to both destinations. As a potential leader of Nasa's 18,000 employees, Isaacman faced the daunting task of implementing that decision to prioritize Mars, given that Nasa has spent years and billions of dollars trying to return its astronauts to the moon...

Some scientists saw the nominee change as further destabilizing to Nasa as it faces dramatic budget cuts without a confirmed leader in place to navigate political turbulence between Congress, the White House and the space agency's workforce.

"It was unclear whom the administration might tap to replace Isaacman," the article adds, though "One name being floated is the retired US air force Lt Gen Steven Kwast, an early advocate for the creation of the US Space Force..."

Ars Technica notes that Kwast, a former Lieutenant General in the U.S. Air Force, has a background that "seems to be far less oriented toward NASA's civil space mission and far more focused on seeing space as a battlefield — decidedly not an arena for cooperation and peaceful exploration."
