Gigabyte's Aorus Z390 PRO Wifi is on sale in the UK with 25% off

The Gigabyte Z390 PRO Wifi is the perfect pairing for any K-series processor

A high-spec Z390 Intel board for £151

If you’re going to invest in a high performance Intel CPU (like this awesome Intel Core i5-9600K deal), you’ll really want a high quality motherboard to go with it. For Intel’s mainstream desktop platform, that means a board with the Z390 chipset. Handily, Gigabyte’s excellent Aorus Z390 PRO WIFI has just hit its lowest ever price on Amazon UK.

A cut-down variant of Gigabyte’s Z390 Aorus Master, the Gigabyte Aorus Z390 PRO WIFI is aimed pretty squarely at gamers. Some of the fancier features include 12+1 power delivery via an Intersil ISL69138 PWM 7-phase controller, extensive cooling for critical components such as your M.2 PCIe SSDs, and of course a plethora of LED lighting options too.

The latter, in fact, is divided into four customizable zones – the audio circuit separation line, the rear panel cover, the RAM slots and the main Z390 chipset heatsink. So, this board doesn’t just perform, it looks great doing it.
 

● The Gigabyte Aorus Z390 PRO WIFI motherboard is available now from amazon.co.uk for £151.37 (25% off RRP)
 

Other highlights include a total of three full-length PCIe 3.0 slots with support for two-way Nvidia SLI graphics and three-way AMD Crossfire. The top two slots sport steel armour reinforcement and are hooked directly into the CPU, while the bottom full-length slot is wired up to the Z390 chipset and limited to x4 speeds. As for storage, there are six SATA ports plus a pair of quad-lane M.2 PCIe slots to satisfy all your storage needs.

Specifications

Chipset: Intel Z390
Socket: LGA1151
CPU Support: Intel 8th and 9th Gen Core
Power Delivery: 12+1 phase
Memory Support: 4x DDR4, up to 128GB, up to 4,266 MHz
Networking: Intel 9560 802.11ac Wave 2 Wi-Fi, Intel I219-V Gigabit Ethernet
Expansion: 2x PCIe 3.0 x16, 1x PCIe 3.0 x4, 3x PCIe 3.0 x1
Storage: 6x SATA 6Gbps, 2x M.2 PCIe x4
Rear I/O: 5x USB 3.1 Type-A, 1x USB 3.1 Type-C, 4x USB 2.0 Type-A, HDMI 1.4, Gigabit LAN, 5.1 audio, optical out

The Z390 PRO Wifi has more RGB than you could throw thermal paste at

Anything else…?

It may not be Christmas, Black Friday or even Amazon Prime Day, yet there are still more goodies to be had with this beauty. As the name suggests, the Gigabyte Aorus Z390 PRO WIFI comes complete with a high performance Intel 9560 802.11ac Wave 2 Wi-Fi module, plus an Intel I219-V gigabit ethernet controller for speedy network performance. As for built-in ports, you get two USB 3.1 Gen2 Type-A, one USB 3.1 Gen2 Type-C, three USB 3.1 Gen1 Type-A and four USB 2.0 ports.

For the record, the Gigabyte Aorus Z390 PRO WIFI’s 12+1 power delivery makes for good overclocking headroom too, so it’s a sensible pairing for any unlocked K-series Intel processor. It also happens to be one of the fastest-booting Z390 boards on the market. If you like a PC that gets up and running quickly, the Gigabyte Aorus Z390 PRO WIFI is a great choice.

Image Credits: Tom’s Hardware / Gigabyte

Our Favorite Portable SSD Is Up to 61% Off

Credit: SanDisk

If you’re a storage addict like us at Tom’s Hardware, you can never have enough storage drives. If it’s a solid-state drive, even better. And if it’s on sale, it’s time to celebrate. Today, we’re in luck. The SanDisk Extreme Portable SSD, our favorite external SSD, is now on sale for up to 61% off, depending on how much space you need.

The biggest discount is on the 2TB version, which is selling for around $270, a full $430 off its original $700 MSRP.

A portable SSD is a great pick over an external hard drive. SSDs are making hard drives obsolete, and if you don’t need a massive amount of storage space, you can get one at an affordable price.

Case in point: the SanDisk 2TB Extreme Portable. With USB Type-A and Type-C connectivity and strong performance thanks to its use of the USB 3.1 Gen 2 transfer protocol, it’s a solid piece of storage. On top of that, its stylish, compact design will make you excited to take it with you on trips or even between different systems. You’ll have to supply your own traveling pouch, though, and keep track of the detachable cable and adapter.

This portable SSD sold for over $300 as recently as this month. Now its price tag is the smallest yet, meaning there’s never been a better time to snag a reliable form of storage that moves with you.

Need Less Storage?

Smaller versions of this SSD are also on sale right now, although at lower discounts than the 2TB model. The 500GB version has been sitting at its lowest price, $89.99, since around Amazon Prime Day in June, and the 1TB version is also on sale, down from its $349.99 debut price last year (although we’ve seen it for as little as $135).

But if the SanDisk is not for you, you can find more recommendations on our Best External Hard Drives and Portable SSDs page.

Nvidia Announces Ray Tracing Lineup at Gamescom

Photo Source: Nvidia

The gaming industry has descended on Cologne, Germany for Gamescom 2019, and Nvidia kicked off the event by announcing that nine upcoming games will support real-time ray tracing in at least some capacity. Implementation will vary between titles (it’s not as simple as flipping a switch) but the support from major publishers in some of their biggest titles adds to the slowly growing number of games supporting the feature.

Three of the most notable titles Nvidia announced today are Minecraft, the next entry in the Call of Duty: Modern Warfare series, and Cyberpunk 2077. Some mods have already made ray tracing available in Minecraft, but Nvidia said that it worked with Mojang and Microsoft to bring official support for the tech into the incredibly popular game by way of path tracing. (Just like modder “Sonic Ether” demonstrated with a fan patch in April.)

Call of Duty: Modern Warfare is also set to include ray traced shadows, Nvidia said, as well as the Nvidia Adaptive Shading technology. The company didn’t say how Cyberpunk 2077–which basically pulled a Kim Kardashian and “broke the internet” at E3 2019 when it released a cinematic trailer featuring Keanu Reeves–would implement ray tracing. We suspect that won’t dampen excitement for CD Projekt Red’s follow-up to The Witcher series.

Call of Duty: Modern Warfare | Official GeForce RTX Ray Tracing Reveal Trailer

Here are the six other titles Nvidia heralded at Gamescom. Some were already confirmed to have ray tracing, but Nvidia is promising a more in-depth look at how the rendering technology will be used in the titles for Gamescom attendees:

  • Synced: Off Planet
  • Dying Light 2
  • Vampire: The Masquerade — Bloodlines 2
  • Watch Dogs: Legion
  • Control
  • Wolfenstein: Youngblood

Most of these titles have noteworthy developers and/or publishers. Minecraft is one of the most popular games in the world; Call of Duty: Modern Warfare, Watch Dogs: Legion, and Wolfenstein: Youngblood are the latest entries in big-league franchises; and Synced: Off Planet is being developed by Tencent Next Studios. That studio’s parent company, Tencent, is a gaming juggernaut that owns large chunks of Riot Games and Epic Games.

The other titles are either critical darlings in waiting (Control won numerous awards at E3 2019) or follow-ups to well-received series. Support from all of these games will help bring ray tracing from a novelty that’s only supported by a few titles to a nigh-ubiquitous presence in AAA releases. Nvidia invited some Gamescom attendees to experience the games for themselves; the rest of us can learn more via the company’s website.

Latest Windows 10 Update Leads to High CPU Usage, Microsoft Responds (Update)

Credit: ymgerman / Shutterstock

Updated, 9/6/19, 8:45 a.m. PT: Microsoft acknowledged the CPU spikes caused by the KB4512941 update to Windows 10. The company said the problems listed here–abnormal CPU usage from the SearchUI.exe process and the absence of search results from the Taskbar–are only affecting “a small number of users.” (Which doesn’t mean a whole lot when there are more than 825 million active Windows 10 devices around the world.)

According to Microsoft, this problem “is only encountered on devices in which searching the web from Windows Desktop Search has been disabled,” which has become increasingly difficult after the Windows 10 May 2019 Update. The company said it’s “working on a resolution and estimate a solution will be available in mid-September.” It’s not clear if re-enabling web search via Windows Desktop Search can address the issue.

Original article, 9/1/19, 9:22 a.m. PT:

Here’s an evergreen lede for you: a cumulative update to Windows 10 has caused performance issues for some users. Windows Latest reported Saturday that problems with Windows 10 Build 18362.329, which debuted with the KB4512941 cumulative update, have caused high CPU usage issues.

KB4512941 was released as an optional update on August 30. Microsoft said in a knowledge base article that the update was supposed to address numerous issues affecting Remote Desktop, Windows Sandbox and other aspects of Windows 10. (The company also said that it would be phasing out the Edge browser’s support for ePUB files, which are typically used for e-books, “over the next several months.” More information can be found here.)

Windows Latest said that numerous people have complained about high CPU usage after installing the update, purportedly because of an issue with Cortana. The virtual assistant’s SearchUI.exe process has reportedly used 30-40% of the CPU, as well as 150-200MB of memory, since the installation of the KB4512941 update. And those resources are being totally wasted: the Cortana window opened from the Taskbar fails to load.
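
Task Manager is enough to confirm whether your machine is affected, but a small script makes the check repeatable. Here is a minimal sketch using Python with the third-party psutil package (our illustration, not something Windows Latest or Microsoft provides):

```python
import psutil  # third-party: pip install psutil

# Look for Cortana's search process and sample its resource usage.
for proc in psutil.process_iter(['name']):
    if proc.info['name'] == 'SearchUI.exe':
        cpu = proc.cpu_percent(interval=1.0)   # % of one CPU over 1 second
        mem = proc.memory_info().rss / 2**20   # resident set size in MB
        print(f"SearchUI.exe: {cpu:.0f}% CPU, {mem:.0f} MB RAM")
```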

It’s not clear why Cortana became a processor hog with the KB4512941 update. What is clear, however, is that members of the Windows Insider Program reported these problems to Microsoft via the Feedback Hub before the update was made available to the public. Those reports appear to have slipped through the cracks; now people who install the optional update are sacrificing a significant portion of their CPU to Cortana.

Issues like these have become something of a trend with Windows 10 cumulative updates. An update in April led to performance issues, a May update broke Windows Sandbox, and an August update caused networking problems on Surface devices. Other cumulative updates have simply failed to resolve the problems they were made to address. At this point it’s harder to explain why people should install these updates than why they might want to wait.

Windows Latest said that some users resolved the problem with Cortana by deleting a registry key, but unless one of the update’s improvements outweighs the performance hit, it’s probably easier to simply uninstall the KB4512941 update until Microsoft addresses the issue. 
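
For reference, Windows ships a standalone installer, wusa.exe, that can remove a cumulative update by KB number. A minimal sketch driving it from Python (run from an elevated session; the /quiet and /norestart switches are optional):

```python
import subprocess

# Uninstall the KB4512941 cumulative update via the built-in
# Windows Update Standalone Installer (requires admin rights).
subprocess.run(
    ["wusa.exe", "/uninstall", "/kb:4512941", "/quiet", "/norestart"],
    check=True,
)
```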

Developer Releases Unpatchable Jailbreak Exploit For Older iOS Devices

Credit: Shutterstock

The developer of ipwndfu, an open source jailbreaking application for older iOS devices, announced a “permanent unpatchable bootrom exploit” that works against hundreds of millions of iOS devices. All iPhones starting with the 4S generation and ending with the iPhone X are vulnerable. Similarly, all iPads that use chips from A5 to A11 are affected by the exploit.

According to the ipwndfu developer, who goes by the handle @axi0mX on Twitter, there hasn’t been a public bootrom exploit for iOS since the iPhone 4 came out in 2010. The developer also noted that this is the biggest news in the iOS jailbreaking community in years.

The reason for that has to do not just with the fact that the exploit affects multiple iOS device generations, but also because Apple can’t fix an exploit in the bootrom without a new hardware revision. 

The bootrom, called SecureROM by Apple, is read-only precisely so that it can’t suffer third-party modifications. However, that also means Apple can’t fix it via an iOS update. And even though a read-only bootrom shouldn’t allow any kind of exploitation, no piece of software is ever perfect, so eventually someone finds a bug in it and exploits it.

The developer noted that what he’s releasing today is not a full jailbreak, but only the exploit for the bootrom. He said you would still need additional hardware and software to jailbreak the devices, but he’s hopeful someone else will discover a method that doesn’t require the extra hardware and software.

The developer was able to find the vulnerability in the bootrom after Apple patched a critical use-after-free bug in an iOS 12 beta in the summer of 2018. He believes that others likely saw what bugs that patch exposed, too, but may not have made their findings public.

Exploits that enable jailbreaking of iOS devices take advantage of security vulnerabilities that malicious parties could use, too, similar to how rooting works on Android devices. This is why both Apple and Google play a constant game of cat and mouse with the jailbreaking/rooting communities. In this case, however, Apple should not be able to update existing devices to fix the flaw, so we may see the jailbreaking community grow in numbers over the next few years.

Lightning Deal: 16GB of DDR4 RAM For Just $50

Credit: Amazon

The Adata XPG Z1 DDR4-2400 16GB (2x8GB) memory kit is at a bargain price of $49.99. The deal is available for the next two hours, and stock is extremely limited, so don’t think twice!

Adata’s XPG Z1 memory modules are built on a 10-layer, 2oz copper black PCB to offer superior stability and cooling performance. They feature a unique red heatsink that handles passive cooling. The heatsink measures 44mm at its tallest point, so you might want to verify that you have enough clearance, especially if you’re running an oversized CPU air cooler.

The Adata XPG Z1 DDR4-2400 16GB (AX4U240038G16-DRZ) memory kit comes with two 8GB DDR4 memory modules, making it a natural fit for Intel and AMD platforms that run dual-channel memory. The sticks clock in at 2,400 MHz with CL16-16-16 timings and need only 1.20V to run at the advertised speed, so there’s definitely overclocking headroom if you want to push them further.

Installation and setup are a breeze thanks to the onboard XMP 2.0 profile. You just need to enable XMP in your motherboard’s BIOS, and you’re pretty much good to go.

Adata backs the XPG Z1 DDR4-2400 memory kit with a limited lifetime warranty.

Intel Reveals Three New Cutting-Edge Packaging Technologies

Credit: @david_schor WikiChip

Intel revealed three new packaging technologies at SEMICON West: Co-EMIB, Omni-Directional Interconnect (ODI) and Multi-Die I/O (MDIO). These technologies enable massive designs by stitching together multiple dies into one processor. Building upon Intel’s 2.5D EMIB and 3D Foveros tech, they aim to bring near-monolithic power and performance to heterogeneous packages. For the data center, that could enable platforms whose scope far exceeds the size limits of a single die.

The focus of semiconductors is usually on the process node itself, but packaging is one of the oft-unsung enablers of modern semiconductors. Ultimately, a silicon chip is just part of a bigger system that requires power and data interconnection. Packaging, in that view, provides the physical interface between the processor and the motherboard – the board acts as a landing zone for the chip’s electrical signals and power supply. Intel stated a few years ago that its assembly and test R&D is bigger than that of the top two OSATs (outsourced assembly and test firms) combined.
Credit: Intel

Packaging innovation could lead to smaller packages that enable bigger batteries, as we saw with Broadwell-Y. Similar board size reductions have been realized by using interposers to integrate high-bandwidth memory (HBM). With the industry trending toward a heterogeneous design paradigm built from chiplet building blocks, the platform-level interconnect has greatly gained in importance.

EMIB

Intel has been shipping its EMIB (Embedded Multi-die Interconnect Bridge), a low-cost alternative to interposers, since 2017, and it also plans to bring that chiplet strategy to its mainstream chips. In short, EMIB is a silicon bridge that enables a high-speed pathway between two chips. The bridge is embedded inside the package between two adjacent dies.

Compared to interposers, which can be reticle-sized (832mm²) or even larger, EMIB is just a small (hence, cheap) piece of silicon. It provides the same bandwidth and energy-per-bit advantages of an interposer compared to standard package traces, which are traditionally used for multi-chip packages (MCPs), such as AMD’s Infinity Fabric. (To some extent, because the PCH is a separate die, chiplets have actually been around for a very long time.)

Another advantage of EMIB is the ability to build each function or IP block of a chip on its most suitable process technology, which reduces costs and improves yield by using smaller dies. EMIB has several other advantages, such as decoupling IP development and integration by allowing designers to build chips from a library of chiplets, using the best chiplet available at each point in time. Intel currently uses EMIB in the Stratix 10 and Agilex FPGAs and in Kaby Lake-G, and the company has more extensive plans for the technology on its roadmap.

Foveros

At its Architecture Day last year, Intel went a step further, describing its upcoming 3D Foveros technology that it will use in Lakefield. To recap, it is an active interposer that uses through-silicon vias (TSVs) to stack multiple layers of silicon atop each other. It has even lower power and higher bandwidth than EMIB, although Intel hasn’t discussed their relative cost.

In Lakefield, Foveros is used to connect the base die (which provides power delivery and PCH functionality) on 22FFL to the 10nm compute die with four Tremont cores and one Sunny Cove core. In May, the company teased its vision of an advanced concept product that uses both EMIB and Foveros together to put an enormous number of chips on a single package.

On Tuesday at SEMICON West, Intel unveiled three more advanced packaging technologies it is working on.

Co-EMIB

Co-EMIB is the technology that will largely make the above heterogeneous data-centric product a reality. In essence, it allows Intel to connect multiple 3D-stacked Foveros chips together to create even bigger systems.

Intel showed off a concept product that contains four Foveros stacks, with each stack having eight small compute chiplets that are connected via TSVs to the base die. (So the role of Foveros there is to connect the chiplets as if it were a monolithic die.) Each Foveros stack is then interconnected via two (Co-)EMIB links with its two adjacent Foveros stacks. Co-EMIB is further used to connect the HBM and transceivers to the compute stacks.

Evidently, the cost of such a product would be enormous, as it essentially contains multiple traditional monolithic-class products in a single package. That’s likely why Intel categorized it as a data-centric concept product, aimed mainly at the cloud players that are more than happy to absorb those costs in exchange for the extra performance. 

The attraction is that the whole package provides near-monolithic performance and interconnect power. Additionally, the advantage of Co-EMIB over a monolithic die is that the heterogeneous package can far exceed monolithic die-size constraints, with each IP on its most suitable process node. At its Investor Meeting in May, engineering chief Murthy Renduchintala said Foveros would allow the company to intercept new process technologies up to two years earlier by using smaller chiplets.

Credit: Intel

Of course, since EMIB is a bridge inside the package, it is inserted at the start of the assembly process, followed by the Foveros stacks. WikiChip has provided a diagram of Co-EMIB used to connect two Foveros stacks.

ODI


Omni-Directional Interconnect (ODI) is yet another type of multi-chip interconnect, alongside standard MCP traces, EMIB and Foveros. As the name implies, it permits both horizontal and vertical transmission. Its bandwidth is higher than that of traditional TSVs because the ODI TSVs are much larger, which also allows current to be conducted directly from the package substrate; resistance and latency are lower, too. ODI also needs far fewer vertical channels in the base die than traditional TSVs, which minimizes die area and frees up room for active transistors.

MDIO

Lastly, Multi-Die I/O (MDIO) is an evolution of the Advanced Interconnect Bus (AIB), which provided a standardized SiP PHY-level interface for EMIB chiplet-to-chiplet communication. Last year, Intel donated AIB to DARPA as a royalty-free interconnect standard for chiplets. MDIO bumps the pin speed from 2Gbps to 5.4Gbps. The areal bandwidth density has increased somewhat, but it’s mainly the linear bandwidth density that has increased by a large factor. Intel also reduced the I/O voltage swing from 0.9V to 0.5V, improving energy efficiency, and provided a comparison to TSMC’s recently announced LIPINCON.

Credit: Intel

One word of caution, though: while it would seem that a higher pin speed is better, that is not necessarily the case, since higher speeds tend to result in higher power consumption. It’s best to look at this as a whole spectrum of interconnect options. At one end of the spectrum sit protocols with high lane speeds (and hence few lanes), such as PCIe 4.0’s 16GT/s per lane. At the other end, technologies such as EMIB and HBM have a lower per-pin data rate but typically many more interconnections. EMIB’s roadmap consists of shrinking the bump pitch, which will provide ever more connections, so a high lane rate isn’t a priority.
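
To put rough numbers on that spectrum, compare a narrow, fast link with a wide, slow one. The narrow link’s figures match a PCIe 4.0 x16 slot; the wide link’s pin count is an illustrative assumption, not an Intel figure:

```python
# Aggregate link bandwidth = pins x per-pin rate (divide by 8 for GB/s).
narrow_gbps = 16 * 16      # PCIe 4.0-style: 16 lanes at 16 Gbps each
wide_gbps = 1024 * 2       # EMIB-style: 1,024 pins (assumed) at 2 Gbps

print(narrow_gbps / 8)     # 32.0 GB/s from the narrow, fast link
print(wide_gbps / 8)       # 256.0 GB/s from the wide, slow link
```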

Further Discussion

When they are ready, these technologies will provide Intel with powerful capabilities for the heterogeneous and data-centric era. On the client side, the benefits of advanced packaging include smaller package sizes and lower power consumption (for Lakefield, Intel claims a 10x SoC standby power improvement, at 2.6mW). In the data center, advanced packaging will help build very large and powerful platforms on a single package, with performance, latency and power characteristics close to what a monolithic die would yield. The yield advantage of small chiplets and the establishment of a chiplet ecosystem are major drivers, too.

As an Integrated Device Manufacturer (IDM), Intel says it can extensively co-develop its IP and packaging in a way that no other company could possibly do, from silicon to architecture and platform. As Babak Sabi, CVP of Intel’s Assembly and Test Technology Development, put it: “Our vision is to develop leadership technology to connect chips and chiplets in a package to match the functionality of a monolithic system-on-chip. A heterogeneous approach gives our chip architects unprecedented flexibility to mix and match IP blocks and process technologies with various memory and I/O elements in new device form factors. Intel’s vertically integrated structure provides an advantage in the era of heterogeneous integration, giving us an unmatched ability to co-optimize architecture, process, and packaging to deliver leadership products.”

MDIO is slated for 2020 availability. Rumor has it that Intel is going to use Foveros, and hence possibly Co-EMIB, with Granite Rapids in early 2022. Intel has not specified a timeframe for ODI.

Tobii Spotlight Technology Uses Eye Tracking to Lighten the Load for VR Headsets

Tobii, famous for its eye tracking technology used in security features like Windows Hello, is using its powers to lighten the workload of VR headsets and make the overall experience smoother. Tobii Spotlight Technology, announced today, uses eye tracking for dynamic foveated rendering (DFR), so VR headsets can focus on images in the center of the user’s focus, rather than what’s in their peripheral vision.

Foveated rendering is inspired by the fovea, the small part of your retina that sees clearly while your peripheral vision remains blurred. Tobii Spotlight Technology combines accurate, low-latency tracking of where your eyes are looking in real time with DFR, for better computational efficiency.
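
To illustrate the concept (this is a generic sketch of dynamic foveated rendering, not Tobii’s or Nvidia’s actual implementation), a renderer might pick a coarser shading rate the farther a screen tile sits from the tracked gaze point:

```python
import math

def shading_rate(tile_x, tile_y, gaze_x, gaze_y, fovea_radius=0.15):
    """Pick a VRS-style shading rate for a screen tile.

    Coordinates are normalized to [0, 1]. fovea_radius is a made-up
    tuning constant; 1.0 means full-rate shading, 0.25 quarter-rate.
    """
    dist = math.hypot(tile_x - gaze_x, tile_y - gaze_y)
    if dist < fovea_radius:         # inside the fovea: full detail
        return 1.0
    if dist < 2 * fovea_radius:     # near periphery: half rate
        return 0.5
    return 0.25                     # far periphery: quarter rate

# With the gaze at screen center, a corner tile shades at quarter rate.
print(shading_rate(0.95, 0.95, 0.5, 0.5))  # -> 0.25
```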

The technology may even be able to help people who get nauseous while in VR.

“Part of the nausea effect in VR can be due to an insufficient graphics refresh rate,” a Tobii spokesperson told Tom’s Hardware. “With DFR, it is possible to create graphics with higher and smoother frame rates that match display refresh rate and, thus, mitigate that part of the problem. Additionally, Tobii eye tracking is designed to help make VR devices better generally, including helping to better align the device for each user’s unique eye position (and thus provide a better visual experience).”

Tobii Spotlight Technology only works with graphics cards that support VRS (variable rate shading), which is currently any card based on Nvidia’s Turing architecture. No AMD graphics cards currently support VRS, but a patent discovered in February suggests that some may do so eventually.

Tobii’s Benchmarks

Tobii Spotlight Technology is already available in HTC Vive Pro Eye, an enterprise-focused VR headset with built-in eye tracking. Tobii is bullish on its impact on VR headsets, based on benchmarks it shared using a Vive Pro Eye connected to a PC running an RTX 2070 graphics card and playing the game ShowdownVR with DFR enabled by Nvidia VRS and Tobii Spotlight Technology. Of course, we’ll have to take these results with a grain of salt, since it’s a vendor benchmarking its own technology.

According to Tobii’s testing, the average GPU rendering load decreased by 57% with an average shading rate of 16%.

With a lower GPU load, VR headsets, even high resolution ones, should have more headroom to maintain desirable frame rates.
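
Some rough arithmetic on what that headroom means (our extrapolation from Tobii’s figure, and an upper bound rather than a typical result):

```python
# If per-frame GPU work drops by 57%, each frame takes 43% as long to
# render, so a fully GPU-bound title could reach up to ~2.3x the frame
# rate. Real games are rarely 100% GPU-bound, so expect less in practice.
load_reduction = 0.57
print(1 / (1 - load_reduction))  # ~2.33
```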

Tobii also claims that this advanced eye tracking technique will be helpful as next-generation VR HMDs introduce even higher display resolutions. In fact, the greater the resolution, the more impact DFR has, according to Tobii’s benchmarks.

VR headsets using Tobii Spotlight Technology will also be able to use more advanced shading techniques without increasing the GPU load. This will purportedly allow developers to implement better lighting, colors, textures and fragment shaders.

What’s Next?

A company spokesperson told Tom’s Hardware that Tobii Spotlight Technology is “intended to support a variety of headsets, including both tethered and standalone headsets”.

The company believes that the next generation of foveation will enable things like foveated streaming (such as live content over low-latency 5G networks) and foveated transport, which uses data compression to optimize graphics transfer based on where the user is looking.

Photo Credits: Tobii

Nvidia Releases New Gamescom Game Ready Driver

Photo Source: Nvidia

Nvidia didn’t just bring a bunch of games that will support ray tracing to Gamescom 2019. The company also released the new Gamescom Game Ready Driver with “big software optimizations” for several popular titles, new beta features, and support for three new G-Sync Compatible gaming monitors.

Those optimizations were said to offer performance improvements in Apex Legends, Battlefield V, Forza Horizon 4, Strange Brigade and World War Z that could lead to frame rate increases of up to 23%. The actual improvement will vary between titles, of course, and will depend on other factors as well. Nvidia broke down the frame rate increases seen on the RTX 2060 Super, RTX 2070 Super, RTX 2080 Super and RTX 2080 Ti in the images below.

The company also introduced a new Ultra-Low Latency Mode said to reduce latency by up to 33% by implementing “just-in-time” frame scheduling and “submitting frames to be rendered just before the GPU needs them.” Nvidia said this feature is currently in beta. There are some restrictions worth noting: the company said it works best in GPU-bound games running at 60-100 frames per second, and it only supports DirectX 9 and DirectX 11 titles.

Ultra-Low Latency Mode can be accessed by opening the Nvidia Control Panel, clicking “Manage 3D Settings” and selecting Low Latency Mode. There are three options to choose from–“Off” lets games automatically queue 1-3 frames, “On” restricts games to queuing one frame and “Ultra” doesn’t allow games to queue any frames. Nvidia said the new Ultra-Low Latency Mode is available for all GPUs running the Gamescom Game Ready Driver.
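
The latency arithmetic behind those options is simple: each queued frame adds roughly one frame-time of delay. A quick sketch using the queue depths above:

```python
def queue_latency_ms(fps, queued_frames):
    """Extra input-to-display latency: one frame-time per queued frame."""
    return queued_frames * 1000.0 / fps

# At 60 fps, one frame-time is ~16.7 ms.
print(queue_latency_ms(60, 3))  # "Off" worst case: ~50 ms of queue delay
print(queue_latency_ms(60, 1))  # "On": ~16.7 ms
print(queue_latency_ms(60, 0))  # "Ultra": no render-queue delay
```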

This new driver introduces several other features as well. GPU Integer Scaling makes pixel art look better on high-resolution displays for people using graphics cards based on the Turing architecture. A new Sharpen filter for Freestyle in GeForce Experience improves image quality while mitigating the performance impact of using Freestyle. And 30-bit color support allows pixels to be built from over 1 billion shades of color.

Nvidia also included new Optimal Playable Settings for games such as Bloodstained: Ritual of the Night, Battalion 1944 and others; the full list of supported games is available on the company’s website. The Gamescom Game Ready Driver also adds support for three new G-Sync Compatible monitors (the Asus VG27A, Acer CP3271 and Acer XB273K GP) and improves launch day support for Remnant: From the Ashes.

The GeForce Game Ready Driver–also known as GeForce Game Ready 436.02 WHQL–is available now from Nvidia’s website and GeForce Experience.

Intel Gen12 Graphics Linux Patches Reveal New Display Feature for Tiger Lake (Update)

Update, 9/7/19, 4:38 a.m. PT: Some extra information about Gen12 has come out. According to a GitHub merge request, Gen12 will bring one of the biggest ISA overhauls in the history of the Gen architecture, including the removal of data coherency between register reads and writes: “Gen12 is planned to include one of the most in-depth reworks of the Intel EU ISA since the original i965. The encoding of almost every instruction field, hardware opcode and register type needs to be updated in this merge request. But probably the most invasive change is the removal of the register scoreboard logic from the hardware, which means that the EU will no longer guarantee data coherency between register reads and writes, and will require the compiler to synchronize dependent instructions anytime there is a potential data hazard.”

Twitter user @miktdt also noted that Gen12 will double the number of EUs per subslice from 8 to 16, which likely helps with scaling up the architecture.

Original Article, 9/6/19, 9:34 a.m. PT:

Some information about the upcoming Gen12 (aka Xe) graphics architecture from Intel has surfaced via recent Linux kernel patches. In particular, Gen12 will have a new display feature called the Display State Buffer. This engine would improve Gen12 context switching.

Phoronix reported on the patches on Thursday. The patches provide clues about the new Display State Buffer (DSB) feature of the Gen12 graphics architecture, which will find its way to Tiger Lake (and possibly Rocket Lake) and the Xe discrete graphics cards in 2020. In the patches, DSB is generically described as a hardware capability that will be introduced in the Gen12 display controller. This engine will only be used for some specific scenarios for which it will deliver performance improvements, and after completion of its work, it will be disabled again.

Some additional (technical) documentation of the feature is available, but the benefits of the DSB are described as follows: “[It] helps to reduce loading time and CPU activity, thereby making the context switch faster.” In other words, it is a new engine that offloads some work from the CPU and helps to improve context switching times.

Of course, the bigger picture here is the enablement for Gen12 that has been going on in the Linux kernel (similar to Gen11), which is especially of interest given that it will mark the first graphics architecture from Intel to be released as a discrete GPU. To that end, Phoronix reported in June that the first Tiger Lake graphics driver support was added to the kernel, with more patches following in August.

Tiger Lake and Gen12 Graphics: What we know so far

With the first 10th Gen (10nm) Ice Lake laptops only now getting into customers’ hands after almost a year of disclosures, Intel has already provided some initial information about what to expect from next year’s 11th Gen processors, codenamed Tiger Lake (with Rocket Lake, on 14nm, still in the rumor mill). Ice Lake focused on integration and a strong CPU and GPU update; with its ‘mobility redefined’ tag line, Tiger Lake looks to be another 10nm product aimed solely at the mobile market.

Credit: Intel

On the CPU side, Tiger Lake will incorporate the latest Willow Cove architecture. Intel has said that it will feature a redesigned cache, transistor optimizations for higher frequency (possibly 10nm++), and further security features.

While the company has been teasing its Xe discrete graphics cards for even longer than it has talked about Ice Lake, details remain scarce. Intel said it had split the Gen12 (aka Xe) architecture in two microarchitectures, one that is client optimized, and another one that is data center optimized, intending to scale from teraflops to petaflops. From a couple of leaks from 2018, it is rumored that the Arctic Sound GPU would consist of a multi-chip package (MCP) with 2-4 dies (likely using EMIB for packaging), and was targeted for qualification in the first half of next year. The leak also stated that Tiger Lake would incorporate power management from Lakefield.

The MCP rumor is also corroborated by some recent information from an Intel graphics driver, with the DG2 (Discrete Graphics) family coming in variants of what is presumably 128, 256 and 512 execution units (EUs). This could indicate one-, two- and four-chiplet configurations of a 128EU die. Ice Lake’s integrated graphics (IGP) has 64EUs, and the small print from Intel’s Tiger Lake performance numbers revealed that it would have 96EUs.
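
For a rough sense of how those EU counts translate into compute: Gen11’s EUs execute 16 FP32 operations per clock (two SIMD-4 FPUs, with an FMA counting as two ops). Assuming Gen12 keeps that rate, and picking a hypothetical 1.2 GHz clock, the arithmetic looks like this:

```python
def gen_tflops(eus, ghz, flops_per_eu_per_clock=16):
    """Peak FP32 TFLOPS = EUs x FLOPs per EU per clock x clock (GHz)."""
    return eus * flops_per_eu_per_clock * ghz / 1000

print(gen_tflops(64, 1.1))   # Ice Lake's 64EU Gen11 IGP: ~1.1 TFLOPS
print(gen_tflops(512, 1.2))  # hypothetical 512EU Xe part: ~9.8 TFLOPS
```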

A GPU with 512EUs would land in the neighborhood of 10 TFLOPS, which does not look sufficient to compete with 2020 high-end GPU offerings from AMD and Nvidia. However, not all the gaps are filled yet. A summary chart posted by @KOMACHI_ENSAKA talks about three variants of Gen12:

  • Gen12 (LP) in DG1, Lakefield-R, Ryefield, Tiger Lake, Rocket Lake and Alder Lake (the successor of Tiger Lake)
  • Gen12.5 (HP) in Arctic Sound
  • Gen12.7 (HP) in DG2

How those differ is still unclear. To speculate: the regular Gen12 probably refers simply to the integrated graphics in Tiger Lake and other products. However, the existence of DG1 and the information about Rocket Lake could indicate that Intel has also put this IP in a discrete chiplet, which could then serve as the graphics for Rocket Lake by being packaged alongside it via EMIB. If we assume Arctic Sound is the mainstream GPU, then Gen12.5 would refer to the client-optimized version and Gen12.7 to the data-center-optimized version of Xe. In that case, the number of EUs Intel intends to offer the gaming community remains unknown.

Moving to the display, it remains to be seen if the Display State Buffer is what Intel referred to with the ‘latest display technology’ bullet point, or if the DSB is just one of multiple new display improvements. Tiger Lake will also feature next-gen I/O, likely referring to PCIe 4.0.

Given the timing of Ice Lake and Comet Lake, Tiger Lake is likely set for launch in the second half of next year.

Display Improvements in Gen11 Graphics Engine

With display being one of the key pillars of Tiger Lake, it is worth recapping the big changes in Gen11’s display block (we covered the graphics side of Gen11 previously).

Credit: Intel

As the name implies, the display controller controls what is displayed on the screen. In Ice Lake’s Gen11, it is part of the system agent, and it received some hefty improvements. The Gen11 display engine introduced support for Adaptive-Sync (variable refresh rate technology) as well as HDR and a wider color gamut. The Gen11 platform also integrated the USB Type-C subsystem; the display controller has dedicated outputs for Type-C and can also target the Thunderbolt controller.

Intel also introduced some features for power management, most notably Panel Self Refresh (PSR), a technology first introduced in the smartphone realm. With PSR, a copy of the last frame is stored in a small frame buffer on the display. When the screen is static, the panel refreshes out of the locally stored image, which allows the display controller to go into a low power state. As another power-saving feature, Intel added a buffer to the display controller into which pixels destined for the screen are fetched. This allows the display engine to concentrate its memory accesses into a burst and shut down the rest of the display controller in the meantime. This is effectively a form of race to halt, reminiscent of the duty cycle feature Intel introduced in Broadwell’s Gen8 graphics engine.

Lastly, on the performance side, in response to the increasing monitor resolutions, the display controller now has a two-pixel-per-clock pipeline (instead of one). This reduces the required clock rate of the display controller by 50%, effectively trading die area for power efficiency (since transistors are more efficient at lower clocks as voltage is reduced). Additionally, the pipeline has also gained in precision in response to HDR and wider color gamut displays. The Gen11 controller now also supports a compressed memory format generated by the graphics engine to reduce bandwidth.
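
To put numbers on the two-pixel-per-clock change: the pixel rate a display controller must sustain is resolution × refresh rate, and doubling the pixels per clock halves the required frequency. A quick sketch, ignoring blanking overhead:

```python
def required_clock_mhz(width, height, hz, pixels_per_clock):
    """Display controller clock needed to feed a panel (blanking ignored)."""
    return width * height * hz / pixels_per_clock / 1e6

print(required_clock_mhz(3840, 2160, 60, 1))  # 4K60 at 1 pixel/clock: ~498 MHz
print(required_clock_mhz(3840, 2160, 60, 2))  # 4K60 at 2 pixels/clock: ~249 MHz
```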