crookedvulture writes "AMD's recently introduced Radeon R9 290X is one of the fastest graphics cards around. However, the cards sent to reviewers differ somewhat from the retail units available for purchase. The press samples run at higher clock speeds and deliver better performance as a result. There's some variance in clock speeds between different press and retail cards, too. Part of the problem appears to be AMD's PowerTune mechanism, which dynamically adjusts GPU frequencies in response to temperature and power limits. AMD doesn't guarantee a base clock speed, saying only that the 290X runs at 'up to 1GHz.' Real-world clock speeds are a fair bit lower than that, and the retail cards suffer more than the press samples. Cooling seems to be a contributing factor. AMD issued a driver update that raises fan speeds, and that helps the performance of some retail cards. Retail units remain slower than the cards seeded to the press, though. Flashing retail cards with the press firmware raises clock speeds slightly, but it doesn't entirely close the gap, either. AMD hasn't explained why the retail cards are slower than expected, and it's possible the company cherry-picked the samples sent to the press. At the very least, it's clear that the 290X exhibits more card-to-card variance than we're used to seeing in a PC graphics product."
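AMD hasn't published PowerTune's exact algorithm, but its throttling behavior can be illustrated with a toy clock governor. Every threshold and step size below is invented for illustration; only the 1GHz ceiling comes from AMD's "up to" figure:

```python
# Toy model of a PowerTune-style dynamic clock governor.
# All limits and the step size are illustrative guesses, not AMD's
# actual values; only the 1GHz ceiling is an advertised number.

MAX_CLOCK_MHZ = 1000   # the advertised "up to 1GHz"
MIN_CLOCK_MHZ = 727    # hypothetical floor
TEMP_LIMIT_C = 95      # hypothetical thermal limit
POWER_LIMIT_W = 290    # hypothetical board power limit
STEP_MHZ = 13          # hypothetical adjustment step

def next_clock(clock_mhz, temp_c, power_w):
    """Step the clock down when over either limit, back up when under both."""
    if temp_c >= TEMP_LIMIT_C or power_w >= POWER_LIMIT_W:
        return max(MIN_CLOCK_MHZ, clock_mhz - STEP_MHZ)
    return min(MAX_CLOCK_MHZ, clock_mhz + STEP_MHZ)

# A card that keeps hitting its thermal limit never sustains 1GHz:
clock = 1000
for _ in range(30):
    clock = next_clock(clock, temp_c=96, power_w=250)
print(clock)   # 727 -- pinned at the floor, well below "up to 1GHz"
```

This also suggests why the fan-speed driver update helps: higher fan speeds keep temperature under the limit, so the governor spends more time in its clock-raising branch.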
MojoKid writes "There's a great deal riding on the launch of AMD's next-generation Kaveri APU. The new chip will be the first processor from AMD to incorporate significant architectural changes to the Bulldozer core AMD launched two years ago, and the first chip to use a graphics core derived from AMD's GCN (Graphics Core Next) architecture. A strong Kaveri launch could give AMD back some momentum in the enthusiast business. Details are emerging that point to a Kaveri APU that's coming in hot — possibly a little hotter than some of us anticipated. Kaveri's Steamroller CPU core separates some of the core functions that Bulldozer unified and should substantially improve the chip's front-end execution. Unlike Piledriver, which could only decode four instructions per module per cycle (and topped out at eight instructions for a quad-core APU), Steamroller can decode four instructions per core, or 16 instructions per cycle for a quad-core APU. The A10-7850K will offer a 512-core GPU while the A10-7700K will be a 384-core part. GPU clock speeds have come down, from 844MHz on the A10-6800K to 720MHz on the new A10-7850K, but the deficit should be offset by the gains from moving to AMD's GCN architecture."
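The decode-width comparison reduces to simple arithmetic, using the module and core counts given above (a quad-core APU is two modules of two cores each):

```python
# Front-end decode throughput per cycle for a quad-core (two-module) APU.
# Piledriver shares one 4-wide decoder per module; Steamroller gives
# each core its own 4-wide decoder.
MODULES = 2
CORES_PER_MODULE = 2
DECODE_WIDTH = 4

piledriver_total = MODULES * DECODE_WIDTH                      # one decoder per module
steamroller_total = MODULES * CORES_PER_MODULE * DECODE_WIDTH  # one decoder per core
print(piledriver_total, steamroller_total)  # 8 16
```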
An anonymous reader writes "Intel's open-source Linux graphics driver is now running neck-and-neck with Intel's Windows 8.1 driver in OpenGL performance, when using the latest drivers on each platform. The NVIDIA driver has long been able to run at similar speeds on Windows and Linux given its common code-base, but the Intel Linux driver is completely separate from the Windows driver, since it is open-source and complies with the Linux DRM and Mesa infrastructure. The Intel Linux driver still trails the Windows driver in OpenGL 4 support."
An anonymous reader writes "DragonFlyBSD 3.6 was released [Monday] with the big new features being dports, Intel and AMD Radeon KMS kernel graphics drivers, major SMP improvements, and improved language support. Dports is the new package management system based upon the FreeBSD Ports collection and replaces pkgsrc as the default; over 20k packages are available via dports. Major SMP scaling improvements come via reducing lock contention within the kernel and other multi-core enhancements. The Intel and Radeon graphics drivers on DragonFlyBSD were ported from the FreeBSD kernel, which in turn were ported from the upstream Linux kernel."
MojoKid writes "Benchmarks are serious business. Buying decisions are often made based on how well a product scores, which is why the press and analysts spend so much time putting new gadgets through their paces. However, benchmarks are only meaningful when there's a level playing field, and when companies try to 'game' the business of benchmarking, it's not only a form of cheating, it also bamboozles potential buyers who (rightfully) assume the numbers are supposed to mean something. 3D graphics benchmark software developer Futuremark just 'delisted' a bunch of devices from its 3DMark benchmark results database because it suspects foul play. Of the devices listed, Samsung and HTC in particular are indirectly being accused of cheating at 3DMark for mobile devices. Delisted devices are stripped of their rank and scores. Futuremark didn't elaborate on which specific rule(s) these devices broke, but a look at the company's benchmarking policies reveals that hardware makers aren't allowed to make optimizations specific to 3DMark, nor are platforms allowed to detect the launch of the benchmark executable unless it's needed to enable multi-GPU support and/or there's a known conflict that would prevent it from running."
MojoKid writes "When Intel debuted Haswell this year, it launched its first mobile processor with a massive 128MB L4 cache. Dubbed 'Crystal Well,' this on-package (not on-die) pool of memory wasn't just a graphics frame buffer, but a giant pool of RAM for the entire chip to utilize. The performance impact is significant, though the Haswell processors that utilize the L4 cache don't appear to account for much of Intel's total CPU volume. Right now, the L4 cache pool is only available on mobile parts, but apparently Broadwell-K will change that. The 14nm desktop chips aren't due until the tail end of next year, but we should see a desktop refresh in the spring with a second-generation Haswell part. Still, it's a sign that Intel intends to make the large L4 standard on a wider range of parts. Using eDRAM instead of SRAM allows Intel's architecture to dedicate just one transistor per cell instead of the 6T configuration commonly used for L1 or L2 cache. That means the memory isn't quite as fast, but it saves an enormous amount of die space. At 1.6GHz, L4 latencies are 50-60ns, significantly higher than the L3 but roughly half the latency of main memory."
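The density and latency claims work out as straightforward arithmetic:

```python
# Transistor budget for a 128MB cache: 1T eDRAM cells vs. 6T SRAM cells.
CACHE_BYTES = 128 * 1024 * 1024
bits = CACHE_BYTES * 8

edram_transistors = bits * 1   # one transistor per eDRAM cell
sram_transistors = bits * 6    # classic six-transistor SRAM cell
print(edram_transistors / 1e9, sram_transistors / 1e9)  # ~1.07 vs ~6.44 billion

# A 50-60ns L4 access at a 1.6GHz clock spans roughly 80-96 cycles.
print(50e-9 * 1.6e9, 60e-9 * 1.6e9)
```

The six-fold transistor saving is what makes a 128MB on-package cache economical at all; the same capacity in 6T SRAM would cost over six billion transistors of die area.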
alphadogg writes "A network researcher at the U.S. Department of Energy's Fermi National Accelerator Laboratory has found a potential new use for graphics processing units — capturing data about network traffic in real time. GPU-based network monitors could be uniquely qualified to keep pace with all the traffic flowing through networks running at 10Gbps or more, said Fermilab's Wenji Wu. Wenji presented his work as part of a poster series of new research at the SC 2013 supercomputing conference this week in Denver."
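To see why 10Gbps line-rate capture is demanding, consider the worst case of minimum-size Ethernet frames; on the wire each 64-byte frame also carries a fixed 20 bytes of preamble, start-of-frame delimiter, and inter-frame gap:

```python
# Worst-case packet rate on a 10Gbps Ethernet link.
LINK_BPS = 10e9
FRAME_BYTES = 64                  # minimum Ethernet frame
WIRE_OVERHEAD_BYTES = 7 + 1 + 12  # preamble + SFD + inter-frame gap

pps = LINK_BPS / ((FRAME_BYTES + WIRE_OVERHEAD_BYTES) * 8)
print(round(pps / 1e6, 2))   # ~14.88 million packets per second
print(round(1e9 / pps, 1))   # ~67.2ns processing budget per packet
```

At roughly 67ns per packet, a single CPU core has little headroom for analysis, which is the motivation for batching captured packets and fanning the work out across a GPU's many cores.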
Nerval's Lobster writes "Scientists have been using everything from supercomputing clusters to 3D printers to virtually recreate dinosaur bones. Now another expert is trying to do something similar with the ancient imperial villa built for Roman emperor Hadrian, who ruled from 117 A.D. to 138 A.D. Hadrian's Villa is already one of the best-preserved Roman imperial sites, but that wasn't quite good enough for Indiana University Professor of Informatics Bernie Frischer, who trained as a classical philologist and archaeologist before being seduced by computers into what evolved into the academic discipline of digital analysis and reproduction of archaeological and historical works. The five-year effort to recreate Hadrian's Villa is based on information from academic studies of the buildings and grounds, as well as analyses of how the buildings, grounds and artifacts were used; the team behind it decided to go with gaming platform Unity 3D as a key part of the simulation."
jones_supa writes "As can be recalled, Mir didn't make it into the Ubuntu 13.10 release as the replacement for X.org as the display server. Back then it suffered from problems with multi-monitor support, along with other issues. Now it turns out that Canonical's product will not even make it into the next LTS version (14.04) of the Ubuntu desktop. Mir itself would be ready in time, but there are problems with XMir, the X11 compatibility layer that ensures Mir can work with applications built for X. The comments came at the Ubuntu Developer Summit, an online event at which Mark Shuttleworth stressed that the 14.04 desktop has to be rock-solid for customers with large-scale deployments, such as educational institutions. In the meantime, you can already try out Mir on your Ubuntu system."
MojoKid writes "The supercomputing conference SC13 kicks off this week and Nvidia is kicking off their own event with the launch of a new GPU and a strategic partnership with IBM. Just as the GTX 780 Ti was the full consumer implementation of the GK110 GPU, the new K40 Tesla card is the supercomputing / HPC variant of the same core architecture. The K40 picks up additional clock headroom and implements the same variable clock speed threshold that has characterized Nvidia's consumer cards for the past year, for a significant overall boost in performance. The other major shift between Nvidia's previous gen K20X and the new K40 is the amount of on-board RAM. K40 packs a full 12GB and clocks it modestly higher to boot. That's important because datasets are typically limited to on-board GPU memory (at least, if you want to work with any kind of speed). Finally, IBM and Nvidia announced a partnership to combine Tesla GPUs and Power CPUs for OpenPOWER solutions. The goal is to push the new Tesla cards as workload accelerators for specific datacenter tasks. According to Nvidia's release, Tesla GPUs will ship alongside Power8 CPUs, which are currently scheduled for a mid-2014 release date. IBM's venerable architecture is expected to target a 4GHz clock speed and offer up to 12 cores with 96MB of shared L3 cache. A 12-core implementation would be capable of handling up to 96 simultaneous threads. The two should make for a potent combination."
An anonymous reader writes "The FreeBSD Foundation's annual year-end fundraising drive is currently running. Their goal this year is US$1M, and they're currently at US$427K. In 2013, the efforts funded by the last drive were: a native iSCSI kernel stack, updated Intel graphics chipset support, integration of Newcons, UTF-8 console support, superpages for the ARM architecture, and layer 2 networking updates. Funds also went toward various conference and summit sponsorships, as well as hardware purchases for the Project. The Foundation is a US 501(c)3 non-profit, so your donations (if in the US) are tax-deductible. Some of the larger 2013 (corporate?) sponsors so far are NetApp, LineRate, WhatsApp, and Tarsnap."
An anonymous reader writes "There are many improvements due in the Linux 3.13 kernel, which just entered development. On the matter of new hardware support, there's open-source driver support for Intel Broadwell and AMD Radeon R9 290 'Hawaii' graphics. nftables, which will eventually replace iptables, is included; the multi-queue block layer is supposed to make disk access much faster on Linux; HDMI audio has improved; stereo/3D HDMI support arrives for Intel hardware; and file-system improvements are on the way, along with support for limiting the power consumption of individual PC components."
An anonymous reader writes "Starting with version 3.7, POV-Ray is released under the AGPLv3 (or later) license and thus is Free Software according to the FSF definition. 'Free software' means software that respects users' freedom and community. Roughly, the users have the freedom to run, copy, distribute, study, change and improve the software. With these freedoms, the users (both individually and collectively) control the program and what it does for them. Full source code is available, allowing users to build their own versions and for developers to incorporate portions or all of the POV-Ray source into their own software provided it is distributed under a compatible license (for example, the AGPL3 or — at their option — any later version). The POV-Ray developers also provide officially-supported binaries for selected platforms (currently only Microsoft Windows, but expected to include OS X shortly)." Update: 11/14 21:57 GMT by U L : The previous distribution terms and source modification license.
MojoKid writes "The seemingly never-ending onslaught of new graphics cards continues today with the official release of the AMD Radeon R9 270. This mainstream graphics card actually leverages the same GPU that powered last year's Radeon HD 7870 GHz Edition. AMD, however, has tweaked the clocks and, through software and board-level revisions, updated the card to allow for more flexible use of its display outputs (using Eyefinity no longer requires a DisplayPort connection). Versus the 1GHz (GPU) and 4.8Gbps (memory) clocks of the Radeon HD 7870 GHz Edition, the Radeon R9 270 offers slightly lower compute performance (2.37 TFLOPS vs. 2.56 TFLOPS) but much more memory bandwidth--179.2GB/s vs. 153.6GB/s, to be exact. AMD and its add-in board partners are launching the Radeon R9 270 today, with prices starting at $179. The Radeon R9 270's starting price is somewhat aggressive and once again puts pressure on NVIDIA: GeForce GTX 660 cards, which typically perform below the Radeon R9 270, are priced right around the $190 mark. Along with this card, AMD is also announcing an update to its game bundle; beginning November 13, Radeon R9 270 through R9 290X cards will include a free copy of Battlefield 4. NVIDIA, on the other hand, is offering Splinter Cell: Blacklist and Assassin's Creed IV: Black Flag, plus $50 off a SHIELD portable gaming device, with GTX 660 and 760 cards."
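The quoted throughput and bandwidth figures can be reproduced from the cards' public spec sheets; the shared 1280-stream-processor count, the R9 270's 925MHz boost clock, and the 256-bit memory bus are taken from those specs rather than the summary above:

```python
# GCN peak compute: cores x 2 FLOPs/cycle (fused multiply-add) x clock (GHz).
def tflops(cores, clock_ghz):
    return cores * 2 * clock_ghz / 1000

print(tflops(1280, 0.925))  # Radeon R9 270 at its 925MHz boost -> ~2.37
print(tflops(1280, 1.0))    # HD 7870 GHz Edition at 1GHz -> 2.56

# Memory bandwidth: bus width (bits) / 8 x per-pin data rate (Gbps).
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gbs(256, 5.6))  # R9 270 -> 179.2 GB/s
print(bandwidth_gbs(256, 4.8))  # HD 7870 GHz Edition -> 153.6 GB/s
```

So the R9 270's extra bandwidth comes entirely from the faster 5.6Gbps memory, while the lower GPU clock accounts for the small compute deficit.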
MojoKid writes "At APU13 today, AMD announced a full suite of new products and development tools as part of its push to improve HSA development. One of the most significant announcements to come out of the sessions today, albeit in a tacit, indirect fashion, is that Kaveri is going to pack a full 512 GPU cores. There's not much new to see on the CPU side of things — like Richland/Trinity, Steamroller is a pair of CPU modules with two cores per module. AMD also isn't talking about clock speeds yet, but the estimated 862 GFLOPS that the company is claiming for Kaveri points to GPU clock speeds between 700 and 800MHz. With 512 cores, Kaveri picks up a 33% boost over its predecessors, but memory bandwidth will be essential for the GPU to reach peak performance. For performance, AMD showed Kaveri up against an Intel Core i7-4770K running a low-end GeForce GT 630. In the intro scene to BF4's single-player campaign (1920x1080, Medium Details), the AMD Kaveri system (with no discrete GPU) consistently pushed frame rates in the 28-40 FPS range. The Intel system, in contrast, couldn't manage 15 FPS. Performance on that system was solidly in the 12-14 FPS range — meaning AMD is pulling 2x the frame rate, if not more."
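Working backward from the 862 GFLOPS figure illustrates where the clock-speed estimate comes from. The CPU contribution subtracted here (four Steamroller cores at 8 FLOPs/cycle and roughly 3.7GHz) is an assumption for illustration, not an AMD figure:

```python
# Back-of-envelope GPU clock estimate from AMD's claimed 862 GFLOPS,
# which covers the whole APU (CPU + GPU).
TOTAL_GFLOPS = 862.0
GPU_CORES = 512

# Assumed CPU share: 4 cores x 8 FLOPs/cycle x ~3.7GHz (hypothetical).
cpu_gflops = 4 * 8 * 3.7
gpu_gflops = TOTAL_GFLOPS - cpu_gflops

# GCN: 2 FLOPs per core per cycle (fused multiply-add).
gpu_clock_mhz = gpu_gflops / (GPU_CORES * 2) * 1000
print(round(gpu_clock_mhz))  # ~726MHz, inside the 700-800MHz window
```

Different CPU-clock assumptions shift the result by a few tens of MHz either way, which is why the summary quotes a 700-800MHz range rather than a single number.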
An anonymous reader writes "Project Flare, the new server-side gaming technology from Square Enix, turned heads when it was announced last week. The first tech demos do little more than show the vast number of calculations it can handle, with hundreds of boxes tumbling down in Deus Ex, but the potential is there to do much more than just picture-in-picture feeds in MMOs. As a new article points out, what's most interesting is the potential to use the technology for games that span more than one system. OnLive may have used similar tech before, but only to stream games you could buy on disc in the shops anyway; the future is in games that need the equivalent of dozens of PS4s or Xbox Ones to power them. Ubisoft has already partnered with Square on the project."
An anonymous reader writes "Nvidia lifted the veil on its latest high-end graphics board, the GeForce GTX 780 Ti. With a total of 2,880 CUDA cores and 240 texture units, the GK110 GPU inside the GTX 780 Ti is fully unlocked. This means the new card has one more SMX block (192 more shader cores and 16 more texture units) than the $1,000 GTX Titan launched back in February! Offered at just $700, the GTX 780 Ti promises to improve gaming performance over the Titan, yet the card has been artificially limited in GPGPU performance — no doubt to make sure the pricier card remains relevant to those unable or unwilling to spring for a Quadro. The benchmark results bear out the GTX 780 Ti's on-paper specs. The card was able to beat AMD's just-released flagship, the Radeon R9 290X, by margins ranging from single-digit percentages to more than 30%, depending on the variability of AMD's press and retail samples."
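The unit counts follow directly from GK110's SMX organization: 15 SMX blocks, each with 192 CUDA cores and 16 texture units, per Nvidia's published specs:

```python
# GK110 unit counts: fully enabled (GTX 780 Ti) vs. one SMX disabled (GTX Titan).
SMX_TOTAL = 15
CORES_PER_SMX = 192
TMUS_PER_SMX = 16

print(SMX_TOTAL * CORES_PER_SMX, SMX_TOTAL * TMUS_PER_SMX)              # 2880 240
print((SMX_TOTAL - 1) * CORES_PER_SMX, (SMX_TOTAL - 1) * TMUS_PER_SMX)  # 2688 224
```

The difference between the two rows is exactly the one SMX block (192 cores, 16 TMUs) the summary cites.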
MojoKid writes "The hallmark feature of Google's Nexus 5 flagship smartphone, built by LG, isn't its bodaciously big 5-inch HD display, its 8MP camera, or its 'OK Google' voice commands. That has all been done before. What does stand out about the Nexus 5 is Google's new Android 4.4 KitKat OS and the SoC (System on Chip) of choice, Qualcomm's quad-core Snapdragon 800. Qualcomm is known for licensing ARM core technology and making it its own, and the latest Krait 400 quad-core CPU and Adreno 330 GPU that comprise the Snapdragon 800 make for a powerful beast. Google has also taken the scalpel to KitKat in all the right places, whittling down the overall footprint of the OS so it's more efficient on lower-end devices and offers faster multitasking. Specifically, memory usage has been optimized in a number of areas. Couple these OS tweaks with Qualcomm's Snapdragon 800 and you end up with a smartphone that hugs the corners and lights 'em up on the straights. Putting the Nexus 5 through its paces, preliminary figures are promising. In fact, the Nexus 5 was able to surpass the iPhone 5s, with Apple's 64-bit A7 processor, in a few tests, and goes toe-to-toe with it in gaming and graphics." Ars Technica has a similarly positive view of the hardware aspects of the phone, dinging it slightly for its camera but otherwise finding little to fault.
Dangerous_Minds writes "GIMP, a free and open source alternative to image manipulation software like Photoshop, recently announced that it will no longer be distributing their program through SourceForge. Citing some of the ads as reasons, they say that the tipping point was 'the introduction of their own SourceForge Installer software, which bundles third-party offers with Free Software packages. We do not want to support this kind of behavior, and have thus decided to abandon SourceForge.' The policy changes were reported back in August by Gluster. GIMP is now distributing their software via their own FTP page instead." Note: SourceForge and Slashdot share a corporate parent.