China

China's Secretive Sunway Pro CPU Quadruples Performance Over Its Predecessor (tomshardware.com) 73

An anonymous reader shares a report: Earlier this year, the National Supercomputing Center in Wuxi (an entity blacklisted in the U.S.) launched its new supercomputer based on the enhanced China-designed Sunway SW26010 Pro processors with 384 cores. Sunway's SW26010 Pro CPU not only packs more cores than its non-Pro SW26010 predecessor, but it also more than quadruples FP64 compute throughput thanks to microarchitectural and system architecture improvements, according to Chips and Cheese. However, while the manycore CPU looks good on paper, it has several performance bottlenecks.

The first details of the manycore Sunway SW26010 Pro CPU and the supercomputers that use it emerged back in 2021. Now, at SC23, the company has showcased actual processors and disclosed more details about their architecture and design, which represent a significant leap in performance. The new CPU is expected to enable China to build high-performance supercomputers based entirely on domestically developed processors. Each Sunway SW26010 Pro has a maximum FP64 throughput of 13.8 TFLOPS, which is massive. For comparison, AMD's 96-core EPYC 9654 has a peak FP64 performance of around 5.4 TFLOPS.

The SW26010 Pro is an evolution of the original SW26010, so it maintains the foundational architecture of its predecessor but introduces several key enhancements. The new SW26010 Pro processor is based on an all-new proprietary 64-bit RISC architecture and packs six core groups (CG) and a protocol processing unit (PPU). Each CG integrates 64 2-wide compute processing elements (CPEs) featuring a 512-bit vector engine as well as 256 KB of fast local store (scratchpad cache) for data and 16 KB for instructions; one management processing element (MPE), which is a superscalar out-of-order core with a vector engine, 32 KB/32 KB L1 instruction/data cache, 256 KB L2 cache; and a 128-bit DDR4-3200 memory interface.
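The quoted 13.8 TFLOPS figure is consistent with the configuration described above. A back-of-envelope sketch, under two assumptions not stated in the article (each CPE retires one 512-bit FP64 FMA per cycle, and the clock runs at roughly 2.25 GHz):

```python
# Back-of-envelope peak FP64 estimate for the SW26010 Pro, using the figures
# from the article. Assumptions (NOT from the article): one 512-bit FMA per
# CPE per cycle, and a ~2.25 GHz clock.
CORE_GROUPS = 6
CPES_PER_CG = 64
FP64_LANES = 512 // 64      # FP64 lanes in a 512-bit vector
FLOPS_PER_FMA = 2           # fused multiply-add counts as two flops
CLOCK_GHZ = 2.25            # assumed

cores = CORE_GROUPS * CPES_PER_CG                   # 384 compute cores
flops_per_cycle = cores * FP64_LANES * FLOPS_PER_FMA
peak_tflops = flops_per_cycle * CLOCK_GHZ / 1000

print(cores, flops_per_cycle, round(peak_tflops, 1))  # 384 6144 13.8
```

Under those assumptions the math lands almost exactly on the quoted number, which suggests the 13.8 TFLOPS figure is a theoretical peak rather than a measured result.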

Businesses

Nvidia's Revenue Triples As AI Chip Boom Continues 30

Nvidia's fiscal third-quarter results surpassed Wall Street's predictions, with revenue growing 206% year over year. However, Nvidia shares are down after the company called for a negative impact in the next quarter due to export restrictions affecting sales in China and other countries. CNBC reports: Nvidia's revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago. The company's data center revenue totaled $14.51 billion, up 279% and more than the StreetAccount consensus of $12.97 billion. Half of the data center revenue came from cloud infrastructure providers such as Amazon, and the other half from consumer internet companies and large enterprises, Nvidia said. Healthy uptake came from clouds that specialize in renting out GPUs to clients, finance chief Colette Kress said on the call.

The gaming segment contributed $2.86 billion, up 81% and higher than the $2.68 billion StreetAccount consensus. With respect to guidance, Nvidia called for $20 billion in revenue for the fiscal fourth quarter. That implies nearly 231% revenue growth. [...] Nvidia faces obstacles, including competition from AMD and lower revenue because of export restrictions that can limit sales of its GPUs in China. But ahead of Tuesday's report, some analysts were nevertheless optimistic.
Microsoft

Microsoft Celebrates 20th Anniversary of 'Patch Tuesday' (microsoft.com) 17

This week the Microsoft Security Response Center celebrated the 20th anniversary of Patch Tuesday updates.

In a blog post they call the updates "an initiative that has become a cornerstone of the IT world's approach to cybersecurity." Originating from the Trustworthy Computing memo by Bill Gates in 2002, our unwavering commitment to protecting customers continues to this day and is reflected in Microsoft's Secure Future Initiative announced this month. Each month, we deliver security updates on the second Tuesday, underscoring our pledge to cyber defense. As we commemorate this milestone, it's worth exploring the inception of Patch Tuesday and its evolution through the years, demonstrating our adaptability to new technology and emerging cyber threats...
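The "second Tuesday of each month" cadence described above is simple to compute programmatically, which is one reason a fixed schedule made patch planning so much easier for IT departments. A minimal sketch:

```python
# Compute Patch Tuesday (the second Tuesday of a month) for any year/month.
import datetime

def patch_tuesday(year: int, month: int) -> datetime.date:
    """Return the second Tuesday of the given month."""
    first = datetime.date(year, month, 1)
    # weekday(): Monday == 0 ... Sunday == 6; Tuesday == 1
    days_until_tuesday = (1 - first.weekday()) % 7
    first_tuesday = first + datetime.timedelta(days=days_until_tuesday)
    return first_tuesday + datetime.timedelta(days=7)

print(patch_tuesday(2023, 10))  # -> 2023-10-10
```

Because the date is fully determined by the calendar, administrators can schedule maintenance windows months in advance, which was the whole point of moving away from the "ship when ready" model.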

Before this unified approach, our security updates were sporadic, posing significant challenges for IT professionals and organizations in deploying critical patches in a timely manner. Senior leaders of the Microsoft Security Response Center (MSRC) at the time spearheaded the idea of a predictable schedule for patch releases, shifting from a "ship when ready" model to a regular weekly, and eventually, monthly cadence...

This led to a shift from a "ship when ready" model to a regular weekly, and eventually, monthly cadence. In addition to consolidating patch releases into a monthly schedule, we also organized the security update release notes into a consolidated location. Prior to this change, customers had to navigate through various Knowledge Base articles, making it difficult to find the information they needed to secure themselves. Recognizing the need for clarity and convenience, we provided a comprehensive overview of monthly releases. This change was pivotal at a time when not all updates were delivered through Windows Update, and customers needed a reliable source to find essential updates for various products.

Patch Tuesday has also influenced other vendors in the software and hardware spaces, leading to a broader industry-wide practice of synchronized security updates. This collaborative approach, especially with hardware vendors such as AMD and Intel, aims to provide a united front against vulnerabilities, enhancing the overall security posture of our ecosystems. While the volume and complexity of updates have increased, so has the collaboration with the security community. Patch Tuesday has fostered better relationships with security researchers, leading to more responsible vulnerability disclosures and quicker responses to emerging threats...

As the landscape of security threats evolves, so does our strategy, but our core mission of safeguarding our customers remains unchanged.

AMD

AMD-Powered Frontier Remains Fastest Supercomputer in the World (tomshardware.com) 25

The Top500 organization released its semi-annual list of the fastest supercomputers in the world, with the AMD-powered Frontier supercomputer retaining its spot at the top of the list with 1.194 Exaflop/s (EFlop/s) of performance, fending off a half-scale 585.34 Petaflop/s (PFlop/s) submission from the Argonne National Laboratory's Intel-powered Aurora supercomputer. From a report: Argonne's submission, which only employs half of the Aurora system, lands at the second spot on the Top500, unseating Japan's Fugaku as the second-fastest supercomputer in the world. Intel also made inroads with 20 new supercomputers based on its Sapphire Rapids CPUs entering the list, but AMD's EPYC continues to take over the Top500 as it now powers 140 systems on the list -- a 39% year-over-year increase.

Intel and Argonne are currently still working to bring Aurora fully online for users in 2024. As such, the Aurora submission represented 10,624 Intel CPUs and 31,874 Intel GPUs working in concert to deliver 585.34 PFlop/s at a total of 24.69 megawatts (MW) of power. In contrast, AMD's Frontier holds the performance title at 1.194 EFlop/s, which is more than twice the performance of Aurora, while consuming a comparably miserly 22.70 MW of power (yes, that's less power for the full Frontier supercomputer than half of the Aurora system). Aurora did not land on the Green500, a list of the most power-efficient supercomputers, with this submission, but Frontier continues to hold eighth place on that list. However, Aurora is expected to eventually reach up to 2 EFlop/s of performance when it comes fully online. When complete, Aurora will have 21,248 Xeon Max CPUs and 63,744 Max Series 'Ponte Vecchio' GPUs spread across 166 racks and 10,624 compute blades, making it the largest known single deployment of GPUs in the world. The system leverages HPE Cray EX - Intel Exascale Compute Blades and uses HPE's Slingshot-11 networking interconnect.
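The efficiency gap implied by those figures is easy to quantify: dividing each submission's Rmax by its reported power draw gives GFlop/s per watt, the metric the Green500 ranks on.

```python
# Power efficiency implied by the Top500 submissions quoted above
# (Rmax divided by reported power draw).
def gflops_per_watt(pflops: float, megawatts: float) -> float:
    return (pflops * 1e6) / (megawatts * 1e6)  # PFlop/s -> GFlop/s, MW -> W

frontier = gflops_per_watt(1194.0, 22.70)   # full system
aurora = gflops_per_watt(585.34, 24.69)     # half system

print(round(frontier, 1), round(aurora, 1))  # 52.6 23.7
```

By this measure, the half-scale Aurora submission delivered well under half the efficiency of Frontier, which is why it missed the Green500 while Frontier stayed in eighth place.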

AI

Nvidia Upgrades Processor as Rivals Challenge Its AI Dominance (bloomberg.com) 39

Nvidia, the world's most valuable chipmaker, is updating its H100 artificial intelligence processor, adding more capabilities to a product that has fueled its dominance in the AI computing market. From a report: The new model, called the H200, will get the ability to use high-bandwidth memory, or HBM3e, allowing it to better cope with the large data sets needed for developing and implementing AI, Nvidia said Monday. Amazon's AWS, Alphabet's Google Cloud and Oracle's Cloud Infrastructure have all committed to using the new chip starting next year.

The current version of the Nvidia processor -- known as an AI accelerator -- is already in famously high demand. It's a prized commodity among technology heavyweights like Larry Ellison and Elon Musk, who boast about their ability to get their hands on the chip. But the product is facing more competition: AMD is bringing its rival MI300 chip to market in the fourth quarter, and Intel claims that its Gaudi 2 model is faster than the H100. With the new product, Nvidia is trying to keep up with the size of data sets used to create AI models and services, it said. Adding the enhanced memory capability will make the H200 much faster at bombarding software with data -- a process that trains AI to perform tasks such as recognizing images and speech.

AMD

Gaining on Intel? AMD Increases CPU Market Share In Desktops, Laptops, and Servers (techspot.com) 21

A report from TechSpot says AMD has recently increased its market share in the CPU sector for desktops, laptops, and servers: According to Mercury Research (via Tom's Hardware), AMD gained 5.8% unit share in desktops, 3.8% in laptops, and 5.8% in servers. In terms of revenue share, Team Red gained 4.1% in desktops, 5.1% in laptops, and 1.7% in servers. The report does not mention competitors by name, but the global PC industry only has one other major CPU supplier, Intel, which has a major stake in all the market segments.

While Intel and AMD make x86 processors for PCs, Qualcomm offers Arm-based SoCs for Windows notebooks, but its market share is minuscule by comparison. So, while the report doesn't say anything about the market share of Intel or Qualcomm, it is fair to assume that most of AMD's gains came at Intel's expense.

Thanks to Slashdot reader jjslash for sharing the news.
AMD

AMD Begins Polaris and Vega GPU Retirement Process, Reduces Ongoing Driver Support (anandtech.com) 19

As AMD is now well into their third generation of RDNA architecture GPUs, the sun has been slowly setting on AMD's remaining Graphics Core Next (GCN) designs, better known by the architecture names of Polaris and Vega. From a report: In recent weeks the company dropped support for those GPU architectures in their open source Vulkan Linux driver, AMDVLK, and now we have confirmation that the company is slowly winding down support for these architectures in their Windows drivers as well. Under AMD's extended driver support schedule for Polaris and Vega, the drivers for these architectures will no longer be kept at feature parity with the RDNA architectures. And while AMD will continue to support Polaris and Vega for some time to come, that support is being reduced to security updates and "functionality updates as available."

For AMD users keeping a close eye on their driver releases, they'll likely recognize that AMD already began this process back in September -- though AMD hasn't officially documented the change until now. As of AMD's September Adrenaline 23.9 driver series, AMD split up the RDNA and GCN driver packages, and with that they have also split the driver branches between the two architectures. As a result, only RDNA cards are receiving new features and updates as part of AMD's mainline driver branch (currently 23.20), while the GCN cards have been parked on a maintenance driver branch - 23.19.

Intel

Intel's Failed 64-bit Itanium CPUs Die Another Death as Linux Support Ends (arstechnica.com) 78

Officially, Intel's Itanium chips and their IA-64 architecture died back in 2021, when the company shipped its last processors. But failed technology often dies a million little deaths. From a report: To name just a few: Itanium also died in 2013, when Intel effectively decided to stop improving it; in 2017, when the last new Itanium CPUs shipped; in 2020, when the last Itanium-compatible version of Windows Server stopped getting updates; and in 2003, when AMD introduced a 64-bit processor lineup that didn't break compatibility with existing 32-bit x86 operating systems and applications.

Itanium is dying another death in the next version of the Linux kernel. According to Phoronix, all code related to Itanium support is being removed from the kernel in the upcoming 6.7 release after several months of deliberation. Linus Torvalds removed some 65,219 lines of Itanium-supporting code in a commit earlier this week, giving the architecture a "well-earned retirement as planned."

Security

Hackers Can Force iOS and macOS Browsers To Divulge Passwords (arstechnica.com) 29

Researchers have devised an attack that forces Apple's Safari browser to divulge passwords, Gmail message content, and other secrets by exploiting a side channel vulnerability in the A- and M-series CPUs running modern iOS and macOS devices. From a report: iLeakage, as the academic researchers have named the attack, is practical and requires minimal resources to carry out. It does, however, require extensive reverse-engineering of Apple hardware and significant expertise in exploiting a class of vulnerability known as a side channel, which leaks secrets based on clues left in electromagnetic emanations, data caches, or other manifestations of a targeted system. The side channel in this case is speculative execution, a performance enhancement feature found in modern CPUs that has formed the basis of a wide corpus of attacks in recent years. The nearly endless stream of exploit variants has left chip makers -- primarily Intel and, to a lesser extent, AMD -- scrambling to devise mitigations.

The researchers implement iLeakage as a website. When visited by a vulnerable macOS or iOS device, the website uses JavaScript to surreptitiously open a separate website of the attacker's choice and recover site content rendered in a pop-up window. The researchers have successfully leveraged iLeakage to recover YouTube viewing history, the content of a Gmail inbox -- when a target is logged in -- and a password as it's being autofilled by a credential manager. Once visited, the iLeakage site requires about five minutes to profile the target machine and, on average, roughly another 30 seconds to extract a 512-bit secret, such as a 64-character string.

Technology

Qualcomm's Snapdragon X Elite Chips Promise Major PC Performance (pcworld.com) 9

On Tuesday, Qualcomm unveiled a new laptop processor designed to outperform rival products from Intel and Apple, stepping up its long-running effort to break into the personal computer market. From a report: Qualcomm formally launched the Snapdragon X Elite, the flagship platform of its Snapdragon X family that leverages its Oryon CPU core, and promises to double -- yes, double -- the performance of some of Intel's most popular 13th-gen Core chips. Sound familiar? It should. Qualcomm promised the same with its earlier Snapdragon 8-series chips, and really didn't deliver. But after buying chip designer Nuvia in 2021, Qualcomm is trying again, hoping that its superpowered Arm chips can once again make Windows on Arm PCs a competitor to conventional X86 PCs when they launch in mid-2024. And they're talking some big numbers to prove it.

Qualcomm sees Oryon first going into PCs (as the engine of the Snapdragon X Elite platform) but then moving into smartphones, cars, "extended reality" devices, and more, Qualcomm chief executive Cristiano Amon is expected to say today. [...] To begin with, Qualcomm's Snapdragon X Elite is manufactured on a 4nm process node, versus the Intel 4 process node of Intel's Meteor Lake. (The two process technologies aren't directly comparable, though they're close enough for most purposes.) Oryon is a tri-cluster design. Historically, that has meant prime, performance, and efficiency cores, with each type of core taking on its own role depending upon the task. However, it appears that Qualcomm and its X86 rivals may have swapped strategies; as Intel adopts performance and efficiency cores, Qualcomm has chosen AMD's path. There are twelve cores within the Snapdragon X Elite, all running at 3.8GHz. Well, most of the time. If needed, one or two of the cores can boost to 4.3GHz, the turbo boost strategy that's become common on the PC. The 64-bit Oryon CPU will be paired with 42Mbytes of total cache, and a memory controller that can access eight channels of LPDDR5x memory (64GB in total) with 130GBps memory bandwidth, executives said. It will be a single die, not a chiplet design.
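The quoted 130GBps figure can be sanity-checked against the eight-channel claim. A small sketch, assuming (the article does not say) that each LPDDR5x channel is 16 bits wide, as is typical for that memory type:

```python
# Sanity check on the quoted 130GB/s bandwidth over eight LPDDR5x channels.
# Assumption (NOT from the article): 16-bit-wide channels, typical of LPDDR5x.
CHANNELS = 8
CHANNEL_WIDTH_BYTES = 16 // 8   # a 16-bit channel moves 2 bytes per transfer
BANDWIDTH_GB_PER_S = 130.0

implied_gt_per_s = BANDWIDTH_GB_PER_S / (CHANNELS * CHANNEL_WIDTH_BYTES)
print(implied_gt_per_s)  # ~8.1 GT/s per pin
```

An implied per-pin rate a little above 8 GT/s is in line with the fastest LPDDR5x parts shipping at the time, so the three numbers (channel count, memory type, bandwidth) hang together.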

AMD

Nvidia To Make Arm-Based PC Chips (reuters.com) 42

According to Reuters, Nvidia is designing Arm-based processors that would run Microsoft's Windows operating system. While they're not expected to be ready until 2025, the move poses a major new challenge to Intel, which has long dominated the PC industry. From the report: The AI chip giant's new pursuit is part of Microsoft's effort to help chip companies build Arm-based processors for Windows PCs. Microsoft's plans take aim at Apple, which has nearly doubled its market share in the three years since releasing its own Arm-based chips in-house for its Mac computers, according to preliminary third-quarter data from research firm IDC. Advanced Micro Devices also plans to make chips for PCs with Arm technology, according to two people familiar with the matter. Nvidia and AMD could sell PC chips as soon as 2025, one of the people familiar with the matter said. Nvidia and AMD would join Qualcomm, which has been making Arm-based chips for laptops since 2016. At an event on Tuesday that will be attended by Microsoft executives, including vice president of Windows and Devices Pavan Davuluri, Qualcomm plans to reveal more details about a flagship chip that a team of ex-Apple engineers designed, according to a person familiar with the matter.

Nvidia, AMD and Qualcomm's efforts could shake up a PC industry that Intel long dominated but which is under increasing pressure from Apple. Apple's custom chips have given Mac computers better battery life and speedy performance that rivals chips that use more energy. Executives at Microsoft have observed how efficient Apple's Arm-based chips are, including with AI processing, and desire to attain similar performance, one of the sources said. Microsoft has been encouraging the involved chipmakers to build advanced AI features into the CPUs they are designing. The company envisions AI-enhanced software such as its Copilot to become an increasingly important part of using Windows. To make that a reality, forthcoming chips from Nvidia, AMD and others will need to devote the on-chip resources to do so.
"Microsoft learned from the 90s that they don't want to be dependent on Intel again, they don't want to be dependent on a single vendor," said Jay Goldberg, chief executive of D2D Advisory, a finance and strategy consulting firm. "If Arm really took off in PC (chips), they were never going to let Qualcomm be the sole supplier."
Open Source

OpenBSD 7.4 Released (phoronix.com) 8

Long-time Slashdot reader Noryungi writes: OpenBSD 7.4 has been officially released. The 55th release of this BSD operating system, known for being security oriented, brings a lot of new things, including a dynamic tracer, pfsync improvements, loads of security goodies, and virtualization improvements. Grab your copy today! As mentioned by Phoronix's Michael Larabel, some of the key highlights include:

- Dynamic Tracer (DT) and Utrace support on AMD64 and i386 OpenBSD
- Power savings for those running OpenBSD 7.4 on Apple Silicon M1/M2 CPUs by allowing deep idle states when available for the idle loop and suspend
- Support for the PCIe controller found on Apple M2 Pro/Max SoCs
- Support for updating AMD CPU microcode when a newer patch is available
- A workaround for the AMD Zenbleed CPU bug
- Various SMP improvements
- Updating the Direct Rendering Manager (DRM) graphics driver support against the upstream Linux 6.1.55 state
- New drivers for supporting various Qualcomm SoC features
- Support for soft RAID disks was improved for the OpenBSD installer
- Enabling of Indirect Branch Tracking (IBT) on x86_64 and Branch Target Identifier (BTI) on ARM64 for capable processors

You can download and view all the new changes via OpenBSD.org.
AMD

AMD's Monstrous Threadripper 7000 CPUs Aim For Desktop PC Dominance (pcworld.com) 62

AMD's powerhouse Threadripper chips are back for desktop PCs. Despite declaring the end of consumer Threadripper chips last generation, AMD announced three new Ryzen Threadripper 7000-series chips on Thursday, with up to 64 cores and 128 threads -- and the option of installing a "Pro"-class Threadripper 7000 WX-series for a massive 96 cores and 192 threads, too. PCWorld: Take a deep breath, though. The underlying message is the same as when AMD released the Threadripper 3970X back in 2019: these chips are for those who live and breathe video editing and content creation, and are optimized for such. Nevertheless, they almost certainly represent the most powerful CPU you can buy on a desktop, for whatever purpose.

The key differences between the older workstation-class Threadripper 5000 series and these new 7000-class processors are simple: AMD has brought forward its Zen 4 architecture into Threadripper alongside a higher core count, faster boost frequencies, and a generational leap ahead to PCI Express 5.0. Consumers will need new motherboards, though, as the new "TRX50" consumer Threadripper platform uses the new AMD TRX50 HEDT (high-end desktop) chipset and sTR5 socket. And did we mention they consume (gulp) 350W of power? In some ways, though, the new Threadripper 7980X, 7970X, and 7960X consumer Threadripper offerings are familiar. They top out at AMD's tried-and-true 64-core configuration, the same as the Threadripper 5000 series, with options moving down to 24 cores. The 12- and 16-core configurations of the prior generation have been trimmed off.

Open Source

AlmaLinux Stays Red Hat Enterprise Linux Compatible Without Red Hat Code (zdnet.com) 34

AlmaLinux is creating a Red Hat Enterprise Linux (RHEL) without any Red Hat code. Instead, AlmaLinux OS will aim to be Application Binary Interface (ABI) compatible and use the CentOS Stream source code that Red Hat continues to offer. Additional code is pulled from Red Hat Universal Base Images, and upstream Linux code. Benny Vasquez, chairperson of the AlmaLinux OS Foundation, explained how all this works at the open-source community convention All Things Open. ZDNet's Steven Vaughan-Nichols reports: The hardest part is Red Hat's Linux kernel updates because, added Vasquez, "you can't get those kernel updates without violating Red Hat's licensing agreements." Therefore, she continued, "What we do is we pull the security patches from various other sources, and, if nothing else, we can find them when Oracle releases them." Vasquez did note one blessing from this change in production: AlmaLinux, no longer bound to Red Hat's releases, has been able to release upstream security fixes faster than Red Hat. "For example, the AMD microcode exploits were patched before Red Hat because they took a little bit of extra time to get out the door. We then pulled in, tested, and out the door about a week ahead of them." The overall goal remains to maintain RHEL compatibility. "Any breaking changes between RHEL and AlmaLinux, any application that stops working, is a bug and must be fixed."

That's not to say AlmaLinux will be simply an excellent RHEL clone going forward. It plans to add features of its own. For instance, Red Hat users who want programs not bundled in RHEL often turn to Extra Packages for Enterprise Linux (EPEL). These typically are programs included in Fedora Linux. Besides supporting EPEL software, AlmaLinux has its own extra software package -- called Synergy -- which holds programs that the AlmaLinux community wants but are not available in either EPEL or RHEL. If one such program is subsequently added to EPEL or RHEL, AlmaLinux drops it from Synergy to prevent confusion and duplication of effort.

This has not been an easy road for AlmaLinux. Even a 1% code difference is a lot to write and maintain. For example, when AlmaLinux tried to patch CentOS Stream code to fix a problem, Red Hat was downright grumpy about AlmaLinux's attempt to fix a security hole. Vasquez acknowledged it was tough sledding at first, but noted: "The good news is that they have been improving the process, and things will look a little bit smoother." AlmaLinux, she noted, is also not so much worried as aware that Red Hat may throw a monkey wrench into their efforts. Vasquez added: "Internally, we're working on stopgap things we'd need to do to anticipate Red Hat changing everything terribly." She doesn't think Red Hat will do it, but "we want to be as prepared as possible."

AMD

AMD Pulls Graphics Driver After 'Anti-Lag+' Triggers Counter-Strike 2 Bans (arstechnica.com) 93

AMD has taken down the latest version of its AMD Adrenalin Edition graphics driver after Counter-Strike 2 maker Valve warned that players using its Anti-Lag+ technology would be banned under Valve's anti-cheat rules. From a report: AMD first introduced regular Anti-Lag mitigation in its drivers back in 2019, limiting input lag by reducing the amount of queued CPU work when the processor was getting too far ahead of the GPU frame processing. But the newer Anti-Lag+ system -- which was first rolled out for a handful of games last month -- updates this system by "applying frame alignment within the game code itself," according to AMD. That method leads to additional lag reduction of up to 10 ms, according to AMD's data. That additional lag reduction could offer players a bit of a competitive advantage in these games (with the usual arguments about whether that advantage is "unfair" or not). But it's Anti-Lag+'s particular method of altering the "game code itself" that sets off warning bells for the Valve Anti-Cheat (VAC) system. After AMD added Anti-Lag+ support for Counter-Strike 2 in a version 23.10.1 update last week, VAC started issuing bans to unsuspecting AMD users that activated the feature.

"AMD's latest driver has made their 'Anti-Lag/+' feature available for CS2, which is implemented by detouring engine dll functions," Valve wrote on social media Friday. "If you are an AMD customer and play CS2, DO NOT ENABLE ANTI-LAG/+; any tampering with CS code will result in a VAC ban." Beyond Valve, there are also widespread reports of Anti-Lag+ triggering crashes or account bans in competitive online games like Modern Warfare 2 and Apex Legends. But Nvidia users haven't reported any similar problems with the company's Reflex system, which uses SDK-level code adjustments to further reduce input lag in games including Counter-Strike 2.

AMD

T2 Linux Discovers (Now Patched) AMD Zen 4 Invalid Opcode Speculation Bug (youtube.com) 13

T2 SDE is not just a Linux distribution, but "a flexible Open Source System Development Environment or Distribution Build Kit," according to a 2022 announcement of its support for 25 CPU architectures, variants, and C libraries. ("Others might even name it Meta Distribution. T2 allows the creation of custom distributions with state of the art technology, up-to-date packages and integrated support for cross compilation.")

And while working on it, Berlin-based T2 Linux developer René Rebe (long-time Slashdot reader ReneR) discovered random illegal instruction speculation on AMD Ryzen 7000-series and Epyc Zen 4 CPUs.

ReneR writes: Merged to Linux 6.6 Git is a fix for the bug now known at AMD as Erratum 1485.

The discovery was possible through continued high CPU load cross-compiling the T2 Linux distribution with support for all CPU architectures from ARM, MIPS, PowerPC, RISC-V to x86 (and more) for 33 build variants. With sustained high CPU load and various instruction sequences being compiled, pseudo random illegal instruction errors were observed and subsequently analyzed.

ExactCODE Research GmbH CTO René Rebe is thrilled that working with AMD engineers led to a timely mitigation, increasing the system stability of the still-new and highest-performance Zen 4 platform.

"I found real-world code that might be similar or actually trigger the same bugs in the CPU that are also used for all the Spectre Meltdown and other side-channel security vulnerability mitigations," Rebe says in a video announcement on YouTube.

It took Rebe a tremendous amount of research, and he says now that "all the excessive work changed my mind. Mitigations equals considered harmful... If you want stable, reliable computational results — no, you can't do this. Because as Spectre Meltdown and all the other security issues have proven, the CPUs are nowadays as complex as complex software systems..."
Graphics

Higher Quality AV1 Video Encoding Now Available For Radeon Graphics On Linux (phoronix.com) 3

Michael Larabel reports via Phoronix: For those making use of GPU-accelerated AV1 video encoding with the latest AMD Radeon graphics hardware on Linux, the upcoming Mesa 23.3 release will support the high-quality AV1 preset for offering higher quality encodes. Merged this week to Mesa 23.3 are the RadeonSI Video Core Next (VCN) changes for supporting the high quality AV1 encoding mode preset.

Mesa 23.3 will be out as stable later this quarter for those after slightly higher quality AV1 encode support for Radeon graphics on this open-source driver stack alongside many other recent Mesa driver improvements especially on the Vulkan side with Radeon RADV and Intel ANV.

IT

Qualcomm Will Try To Have Its Apple Silicon Moment in PCs With 'Snapdragon X' (arstechnica.com) 32

Qualcomm's annual "Snapdragon Summit" is coming up later this month, and the company appears ready to share more about its long-planned next-generation Arm processor for PCs. ArsTechnica: The company hasn't shared many specifics yet, but yesterday we finally got a name: "Snapdragon X," which is coming in 2024, and it may finally do for Arm-powered Windows PCs what Apple Silicon chips did for Macs a few years ago (though it's coming a bit later than Qualcomm had initially hoped). Qualcomm has been making chips for PCs for years, most recently the Snapdragon 8cx Gen 3 (you might also know it as the Microsoft SQ3, which is what the chip is called in Surface devices). But those chips have never quite been fast enough to challenge Intel's Core or AMD's Ryzen CPUs in mainstream laptops. Any performance deficit is especially noticeable because many people will run at least a few apps designed for the x86 version of Windows, code that needs to be translated on the fly for Arm processors.

So why will Snapdragon X be any different? It's because these will be the first chips born of Qualcomm's acquisition of Nuvia in 2021. Nuvia was founded and staffed by quite a few key personnel from Apple's chipmaking operation, the team that had already upended a small corner of the x86 PC market by designing the Apple M1 and its offshoots. Apple had sued Nuvia co-founder and current Qualcomm engineering SVP Gerard Williams for poaching Apple employees, though the company dropped the suit without comment earlier this year. The most significant change from current Qualcomm chips will be a CPU architecture called Oryon, Qualcomm's first fully custom Arm CPU design since the original Kryo cores back in 2015. All subsequent versions of Kryo, from 2016 to now, have been tweaked versions of off-the-shelf Arm Cortex processors rather than fully custom designs. As we've seen in the M1 and M2, using a custom design with the same Arm instruction set gives chip designers the opportunity to boost performance for everyday workloads while still maintaining impressive power usage and battery life.

Data Storage

Reviewer Tests $3 SATA SSD, Gets Exactly What They Paid For 51

An anonymous reader shares a report: StorageReview went through the remarkable journey of testing a $3 SSD from AliExpress. The Goldenfir-brand SSD was reportedly given to the storage site by one of its Discord users for testing. The good news is that Goldenfir is actually using an SSD controller for its NAND drive. The controller is a Yeestor YS9083XT, which the Chinese company announced as a SATA3.2 controller in 2019. [...] StorageReview tested the drive by putting it into a Lenovo SR635 1U server with an AMD Epyc 7742 processor and 512GB of DDR4-3200 RAM. StorageReview also decided to, admittedly "unfairly," put it up against Kingston's DC600M entry-level enterprise SATA drive. You can guess what happens next. With a 64GB file and the CrystalDiskMark benchmark, StorageReview reported that the "Kingston drive finished the entire test before this piece of turd [the $3 drive] could even build its test." With the VDBench workload benchmark filling up the entire drive, the $3 drive hit a wall at around 15,500 IOPS when running the 4K random read test, compared to the Kingston drive's approximately 80,000. The cheap SSD ultimately finished the test at 13,000 IOPS and 10,225 ms, compared to the Kingston's 78,000 IOPS and 1,630 ms.
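The IOPS and latency figures above are linked by Little's Law: the average number of requests in flight equals throughput times latency. Taken at face value (the article doesn't state the benchmark's queue depth), both drives imply a similarly large outstanding-request count, suggesting the two numbers come from the same heavily queued workload:

```python
# Little's Law sanity check on the VDBench figures quoted above:
# in-flight requests = throughput (IOPS) x latency (seconds).
def in_flight(iops: float, latency_ms: float) -> float:
    return iops * (latency_ms / 1000.0)

cheap_ssd = in_flight(13_000, 10_225)   # the $3 Goldenfir drive
kingston = in_flight(78_000, 1_630)     # the Kingston DC600M

print(round(cheap_ssd), round(kingston))
```

Both work out to roughly 130,000 outstanding requests, consistent with a fixed, saturating queue depth: the cheap drive isn't serving a lighter load, it's serving the same load six times slower.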
Supercomputing

Europe's First Exascale Supercomputer Will Run On ARM Instead of X86 (extremetech.com) 40

An anonymous reader quotes a report from ExtremeTech: One of the world's most powerful supercomputers will soon be online in Europe, but it's not just the raw speed that will make the Jupiter supercomputer special. Unlike most of the Top 500 list, the exascale Jupiter system will rely on ARM cores instead of x86 parts. Intel and AMD might be disappointed, but Nvidia will get a piece of the Jupiter action. [...] Jupiter is a project of the European High-Performance Computing Joint Undertaking (EuroHPC JU), which is working with computing firms Eviden and ParTec to assemble the machine. Europe's first exascale computer will be installed at the Jülich Supercomputing Centre in Jülich, Germany, and assembly could start as soon as early 2024.

EuroHPC has opted to go with SiPearl's Rhea processor, which is based on ARM architecture. Most of the top 10 supercomputers in the world are running x86 chips, and only one is running on ARM. While ARM designs were initially popular in mobile devices, the compact, efficient cores have found use in more powerful systems. Apple has recently finished moving all its desktop and laptop computers to the ARM platform, and Qualcomm has new desktop-class chips on its roadmap. Rhea is based on ARM's Neoverse V1 CPU design, which was developed specifically for high-performance computing (HPC) applications; the chip packs 72 cores. It supports HBM2e high-bandwidth memory, as well as DDR5, and the cache tops out at an impressive 160MB.
The report says the Jupiter system "will have Nvidia's Booster Module, which includes GPUs and Mellanox ultra-high bandwidth interconnects," and will likely include the current-gen H100 chips. "When complete, Jupiter will be near the very top of the supercomputer list."
