Lightning Deal: 16GB of DDR4 RAM For Just $50

Credit: Amazon

The Adata XPG Z1 DDR4-2400 16GB (2x8GB) memory kit is at a bargain price of $49.99. The deal is available for the next two hours, and stock is extremely limited, so don’t think twice!

Adata’s XPG Z1 memory modules are built on a 10-layer, 2oz-copper black PCB for superior stability and cooling performance. They feature a distinctive red heatsink that handles passive cooling. The heatsink measures 44mm at its tallest point, so you might want to verify that you have enough clearance, especially if you’re running an oversized CPU air cooler.

The Adata XPG Z1 DDR4-2400 16GB (AX4U240038G16-DRZ) memory kit comes with two 8GB DDR4 memory modules, so it’s compatible with Intel and AMD platforms that run dual-channel memory. The sticks clock in at 2,400 MT/s with CL16-16-16 timings, and they need only 1.20V to run at the advertised speed, so there should be some overclocking headroom if you want to push the modules further.
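For context, here’s the back-of-envelope peak-bandwidth math for a dual-channel DDR4-2400 kit; it’s a theoretical ceiling, and real-world throughput will come in lower:

```python
# Rough peak-bandwidth math for a dual-channel DDR4-2400 kit.
# DDR4 transfers 8 bytes (64 bits) per channel per transfer.
transfers_per_second = 2400e6   # DDR4-2400 = 2,400 MT/s
bytes_per_transfer = 8          # 64-bit channel
channels = 2                    # one module per channel

peak_bandwidth = transfers_per_second * bytes_per_transfer * channels
print(f"Theoretical peak: {peak_bandwidth / 1e9:.1f} GB/s")  # ~38.4 GB/s
```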

Installation and setup are a breeze thanks to the onboard XMP 2.0 profile. Just enable XMP in your motherboard’s BIOS, and you’re pretty much good to go.

Adata backs the XPG Z1 DDR4-2400 memory kit with a limited lifetime warranty.

Intel Reveals Three New Cutting-Edge Packaging Technologies

Credit: @david_schor, WikiChip
Intel revealed three new packaging technologies at SEMICON West: Co-EMIB, Omni-Directional Interconnect (ODI) and Multi-Die I/O (MDIO). These technologies enable massive designs by stitching together multiple dies into one processor. Building on Intel’s 2.5D EMIB and 3D Foveros tech, they aim to bring near-monolithic power and performance to heterogeneous packages. For the data center, that could enable platforms whose scope far exceeds the die-size limits of a single die.

The focus in semiconductors is usually on the process node itself, but packaging is one of the oft-unsung enablers of modern chips. Ultimately, a silicon die is just part of a bigger system that requires power and data interconnection. Packaging, in that view, provides the physical interface between the processor and the motherboard – the board acts as a landing zone for the chip’s electrical signals and power supply. Intel stated a few years ago that its assembly and test R&D operation is bigger than those of the top two OSATs (outsourced assembly and test providers) combined.
Credit: Intel

Packaging innovation can lead to smaller packages that enable bigger batteries, as we saw with Broadwell-Y. Similar board-size reductions have been realized with the use of interposers to integrate high-bandwidth memory (HBM). With the industry trending toward a heterogeneous design paradigm built on chiplet building blocks, the platform-level interconnect has greatly gained in importance.

EMIB

Intel has been shipping its EMIB (Embedded Multi-die Interconnect Bridge), a low-cost alternative to interposers, since 2017, and it also plans to bring that chiplet strategy to its mainstream chips. In short, EMIB is a silicon bridge that enables a high-speed pathway between two chips. The bridge is embedded inside the package between two adjacent dies.

Compared to interposers, which can be reticle-sized (832mm²) or even larger, EMIB is just a small (hence, cheap) piece of silicon. It provides the same bandwidth and energy-per-bit advantages as an interposer compared to standard package traces, which are traditionally used for multi-chip packages (MCPs) such as AMD’s Infinity Fabric. (To some extent, because the PCH is a separate die, chiplets have actually been around for a very long time.)

Another advantage of EMIB is the ability to build each function or IP block of a chip on its own most-suitable process technology, which reduces costs and improves yield by using smaller dies. EMIB has several other advantages, such as decoupling IP development and integration by allowing designers to build chips from a library of chiplets, using the best chiplet available at each point in time. Intel currently uses EMIB in the Stratix 10, Agilex FPGAs, and in Kaby Lake-G, and the company has more extensive plans for the technology on its roadmap.

Foveros

At its Architecture Day last year, Intel went a step further, describing its upcoming 3D Foveros technology that it will use in Lakefield. To recap, it is an active interposer that uses through-silicon vias (TSVs) to stack multiple layers of silicon atop each other. It has even lower power and higher bandwidth than EMIB, although Intel hasn’t discussed their relative cost.

In Lakefield, Foveros is used to connect the base die (which provides power delivery and PCH functionality) on 22FFL to the 10nm compute die with four Tremont cores and one Sunny Cove core. In May, the company teased its vision of an advanced concept product using both EMIB and Foveros together to create an enormous design with many chips, all on a single package.

On Tuesday at SEMICON West, Intel unveiled three more advanced packaging technologies it is working on.

Co-EMIB

Co-EMIB is the technology that will largely make the above heterogeneous data-centric product a reality. In essence, it allows Intel to connect multiple 3D-stacked Foveros chips together to create even bigger systems.

Intel showed off a concept product that contains four Foveros stacks, with each stack having eight small compute chiplets that are connected via TSVs to the base die. (So the role of Foveros there is to connect the chiplets as if it were a monolithic die.) Each Foveros stack is then interconnected via two (Co-)EMIB links with its two adjacent Foveros stacks. Co-EMIB is further used to connect the HBM and transceivers to the compute stacks.

Evidently, the cost of such a product would be enormous, as it essentially contains multiple traditional monolithic-class products in a single package. That’s likely why Intel categorized it as a data-centric concept product, aimed mainly at the cloud players that are more than happy to absorb those costs in exchange for the extra performance. 

The attraction is that the whole package provides near-monolithic performance and interconnect power. Additionally, the advantage of Co-EMIB over a monolithic die is that the heterogeneous package can far exceed monolithic die-size constraints, with each IP on its own most suitable process node. At its Investor Meeting in May, chief engineering officer Murthy Renduchintala said Foveros would allow the company to intercept new process technologies up to two years earlier by using smaller chiplets.

Credit: Intel

Of course, since EMIB is a bridge inside the package, it is inserted at the start of the assembly process, followed by the Foveros stacks. WikiChip has provided a diagram of Co-EMIB used to connect two Foveros stacks.

ODI

Omni-Directional Interconnect (ODI) is yet another type of multi-chip interconnect, alongside the standard MCP, EMIB and Foveros. As the name implies, it permits both horizontal and vertical transmission. Its bandwidth is higher than that of traditional TSVs because the ODI vias are much larger, which also allows current to be conducted directly from the package substrate, with lower resistance and latency. In addition, ODI needs far fewer vertical channels in the base die than traditional TSVs, which minimizes die area and frees up room for active transistors.

MDIO

Lastly, Multi-Die I/O (MDIO) is an evolution of the Advanced Interconnect Bus (AIB), which provided a standardized PHY-level interface for chiplet-to-chiplet communication within a system-in-package, including over EMIB. Last year, Intel donated AIB to DARPA as a royalty-free interconnect standard for chiplets. MDIO bumps the pin speed from 2Gbps to 5.4Gbps. The areal bandwidth density has increased somewhat, but the linear bandwidth density has increased by a much larger factor. Intel also reduced the I/O voltage swing from 0.9V to 0.5V, improving energy efficiency, and provided a comparison to TSMC’s recently announced LIPINCON.

Credit: Intel

One word of caution, though: while it would seem that a higher pin speed is better, that isn’t necessarily the case, as higher speeds tend to result in higher power consumption. It’s best to look at it as a whole spectrum of interconnect options. At one end of the spectrum, there are protocols with high lane speeds (and hence, few lanes), such as PCIe 4.0’s 16GT/s. At the other end, technologies such as EMIB and HBM have a lower per-pin data rate, but typically many more interconnections. EMIB’s roadmap consists of shrinking the bump pitch, which will provide ever more connections, so a high lane rate isn’t a priority.

Further Discussion

When they are ready, these technologies will provide Intel with powerful capabilities for the heterogeneous and data-centric era. On the client side, the benefits of advanced packaging include smaller package size and lower power consumption (for Lakefield, Intel claims a 10x SoC standby power improvement, at 2.6mW). In the data center, advanced packaging will help build very large and powerful platforms on a single package, with performance, latency and power characteristics close to what a monolithic die would yield. The yield advantage of small chiplets and the establishment of a chiplet ecosystem are major drivers, too.

As an Integrated Device Manufacturer (IDM), Intel says it can extensively co-develop its IP and packaging in a way that no other company could possibly do, from silicon to architecture and platform. As Babak Sabi, CVP of Intel’s Assembly and Test Technology Development, put it: “Our vision is to develop leadership technology to connect chips and chiplets in a package to match the functionality of a monolithic system-on-chip. A heterogeneous approach gives our chip architects unprecedented flexibility to mix and match IP blocks and process technologies with various memory and I/O elements in new device form factors. Intel’s vertically integrated structure provides an advantage in the era of heterogeneous integration, giving us an unmatched ability to co-optimize architecture, process, and packaging to deliver leadership products.”

MDIO is slated for 2020 availability. Rumor has it that Intel is going to use Foveros, and hence possibly Co-EMIB, with Granite Rapids in early 2022. Intel has not specified a timeframe for ODI.

Tobii Spotlight Technology Uses Eye Tracking to Lighten the Load for VR Headsets

Tobii, famous for its eye tracking technology used in security features like Windows Hello, is using its powers to lighten the workload of VR headsets and make the overall experience smoother. Tobii Spotlight Technology, announced today, uses eye tracking for dynamic foveated rendering (DFR), so VR headsets can focus on images in the center of the user’s focus, rather than what’s in their peripheral vision.

Foveated rendering takes its name from the fovea, the small part of the retina that sees in sharp detail, while peripheral vision remains blurrier. Tobii Spotlight Technology pairs accurate, low-latency tracking of where your eyes are looking in real time with DFR to improve computational efficiency.
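Neither Tobii nor Nvidia publishes the exact zone layout, but the core idea of DFR can be sketched in a few lines: shade at full rate near the gaze point and progressively coarser toward the periphery. The thresholds and rates below are invented purely for illustration:

```python
import math

def shading_rate(pixel_xy, gaze_xy):
    """Pick a coarse VRS-style shading rate based on distance from the
    user's gaze point. The zones and rates here are invented for
    illustration; real implementations tune them per headset and optics."""
    dist = math.dist(pixel_xy, gaze_xy)  # distance in normalized screen units
    if dist < 0.10:
        return "1x1"   # foveal region: full-rate shading
    elif dist < 0.30:
        return "2x2"   # near periphery: one shade per 2x2 pixel block
    else:
        return "4x4"   # far periphery: one shade per 4x4 block

# Example: gaze at screen center, sample a peripheral pixel.
print(shading_rate((0.9, 0.5), gaze_xy=(0.5, 0.5)))  # -> "4x4"
```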

The technology may even be able to help people who get nauseous while in VR.

“Part of the nausea effect in VR can be due to an insufficient graphics refresh rate,” a Tobii spokesperson told Tom’s Hardware. “With DFR, it is possible to create graphics with higher and smoother frame rates that match display refresh rate and, thus, mitigate that part of the problem. Additionally, Tobii eye tracking is designed to help make VR devices better generally, including helping to better align the device for each user’s unique eye position (and thus provide a better visual experience).”

Tobii Spotlight Technology only works with graphics cards that support VRS (variable rate shading), which is currently any card based on Nvidia’s Turing architecture. No AMD graphics cards currently support VRS, but a patent discovered in February suggests that some may do so eventually.

Tobii’s Benchmarks

Tobii Spotlight Technology is already available in HTC Vive Pro Eye, an enterprise-focused VR headset with built-in eye tracking. Tobii is bullish on its impact on VR headsets, based on benchmarks it shared using a Vive Pro Eye connected to a PC running an RTX 2070 graphics card and playing the game ShowdownVR with DFR enabled by Nvidia VRS and Tobii Spotlight Technology. Of course, we’ll have to take these results with a grain of salt, since it’s a vendor benchmarking its own technology.

According to Tobii’s testing, the average GPU rendering load decreased by 57% with an average shading rate of 16%.

With a lower GPU load, VR headsets, even high resolution ones, should have more headroom to maintain desirable frame rates.

Tobii also claims that this advanced eye tracking technique will be helpful as next-generation VR HMDs introduce even higher display resolutions. In fact, the greater the resolution, the more impact DFR has, according to Tobii’s benchmarks.

VR headsets using Tobii Spotlight Technology will also be able to use more advanced shading techniques without increasing the GPU load. This will purportedly allow developers to implement better lighting, colors, textures and fragment shaders.

What’s Next?

A company spokesperson told Tom’s Hardware that Tobii Spotlight Technology is “intended to support a variety of headsets, including both tethered and standalone headsets”.

The company believes that the next generation of foveating will enable things like foveated streaming (such as live content in low-latency 5G networks) and foveated transport, which uses data compression to optimize graphics transfer based on where the user is looking.

Photo Credits: Tobii

Nvidia Releases New Gamescom Game Ready Driver

Photo Source: Nvidia

Nvidia didn’t just bring a bunch of games that will support ray tracing to Gamescom 2019. The company also released the new Gamescom Game Ready Driver with “big software optimizations” for several popular titles, new beta features, and support for three new G-Sync Compatible gaming monitors.

Those optimizations were said to offer performance improvements in Apex Legends, Battlefield V, Forza Horizon 4, Strange Brigade and World War Z that could lead to frame rate increases up to 23%. The actual improvement will vary between titles, of course, and will depend on other factors as well. Nvidia broke down the frame rate increases seen on the RTX 2060 Super, RTX 2070 Super, RTX 2080 Super and RTX 2080 Ti in the images below.

The company also introduced a new Ultra-Low Latency Mode said to reduce latency by up to 33% by implementing “just-in-time” frame scheduling and “submitting frames to be rendered just before the GPU needs them.” Nvidia said this feature is currently in beta. There are some restrictions worth noting: the company said it works best in GPU-bound games running between 60 and 100 frames per second, and it only supports DirectX 9 and DirectX 11.

Ultra-Low Latency Mode can be accessed by opening the Nvidia Control Panel, clicking “Manage 3D Settings” and selecting Low Latency Mode. There are three options to choose from: “Off” lets games automatically queue one to three frames, “On” restricts games to queuing one frame, and “Ultra” doesn’t allow games to queue any frames. Nvidia said the new Ultra-Low Latency Mode is available for all GPUs running the Gamescom Game Ready Driver.
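The latency cost of that queue is simple arithmetic: each pre-rendered frame waits roughly one frame time before the GPU picks it up. A quick sketch (frame-time math only; it ignores the rest of the render pipeline):

```python
def queue_latency_ms(fps, queued_frames):
    """Approximate extra latency from pre-rendered frames:
    each queued frame waits about one frame time (1000/fps ms)."""
    return queued_frames * 1000.0 / fps

for mode, frames in [("Off (up to 3 queued)", 3),
                     ("On (1 queued)", 1),
                     ("Ultra (0 queued)", 0)]:
    print(f"{mode}: ~{queue_latency_ms(60, frames):.1f} ms extra at 60 fps")
# Off: ~50 ms, On: ~16.7 ms, Ultra: ~0 ms
```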

This new driver introduced several other new features:

  • GPU Integer Scaling makes pixel art look better on high-resolution displays for people using graphics cards based on the Turing architecture (see the sketch below).
  • A new Sharpen Freestyle filter in GeForce Experience improves image quality while mitigating the performance impact of using Freestyle.
  • 30-bit color support allows pixels to be built from over 1 billion shades of color.
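Integer scaling itself is easy to picture: each source pixel is duplicated into an NxN block with no interpolation, so pixel-art edges stay crisp instead of being blurred. A minimal sketch of the idea (standard nearest-neighbor upscaling with NumPy, not Nvidia’s driver code):

```python
import numpy as np

def integer_scale(image, factor):
    """Nearest-neighbor integer scaling: repeat each pixel `factor`
    times along both axes, preserving hard pixel-art edges."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

# A 2x2 test "image" scaled 3x becomes 6x6, with crisp pixel blocks.
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
print(integer_scale(img, 3).shape)  # (6, 6, 3)
```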

Nvidia also included new Optimal Playable Settings for games such as Bloodstained: Ritual of the Night, Battalion 1944 and others; the full list of supported games is available on the company’s website. The Gamescom Game Ready Driver also adds support for three new G-Sync Compatible monitors (the Asus VG27A, Acer CP3271 and Acer XB273K GP) and improves launch day support for Remnant: From the Ashes.

The Gamescom Game Ready Driver, also known as GeForce Game Ready 436.02 WHQL, is available now from Nvidia’s website and GeForce Experience.

Intel Gen12 Graphics Linux Patches Reveal New Display Feature for Tiger Lake (Update)

Update, 9/7/19, 4:38 a.m. PT: Some extra information about Gen12 has come out. According to a GitHub merge request, Gen12 will bring one of the biggest ISA updates in the history of the Gen architecture, including the removal of guaranteed data coherency between register reads and writes: “Gen12 is planned to include one of the most in-depth reworks of the Intel EU ISA since the original i965. The encoding of almost every instruction field, hardware opcode and register type needs to be updated in this merge request. But probably the most invasive change is the removal of the register scoreboard logic from the hardware, which means that the EU will no longer guarantee data coherency between register reads and writes, and will require the compiler to synchronize dependent instructions anytime there is a potential data hazard.”

Twitter user @miktdt also noted that Gen12 will double the number of EUs per subslice from 8 to 16, which likely helps with scaling up the architecture.

Original Article, 9/6/19, 9:34 a.m. PT:

Some information about the upcoming Gen12 (aka Xe) graphics architecture from Intel has surfaced via recent Linux kernel patches. In particular, Gen12 will have a new display feature called the Display State Buffer. This engine would improve Gen12 context switching.

Phoronix reported on the patches on Thursday. The patches provide clues about the new Display State Buffer (DSB) feature of the Gen12 graphics architecture, which will find its way to Tiger Lake (and possibly Rocket Lake) and the Xe discrete graphics cards in 2020. In the patches, DSB is generically described as a hardware capability that will be introduced in the Gen12 display controller. This engine will only be used for some specific scenarios for which it will deliver performance improvements, and after completion of its work, it will be disabled again.

Some additional (technical) documentation of the feature is available, but the benefits of the DSB are described as follows: “[It] helps to reduce loading time and CPU activity, thereby making the context switch faster.” In other words, it is a new engine that offloads some work from the CPU and helps to improve context-switching time.

Of course, the bigger picture here is the enablement for Gen12 that has been going on in the Linux kernel (similar to Gen11), which is especially of interest given that it will mark the first graphics architecture from Intel to be released as a discrete GPU. To that end, Phoronix reported in June that the first Tiger Lake graphics driver support was added to the kernel, with more patches following in August.

Tiger Lake and Gen12 Graphics: What we know so far

With the first 10th Gen (10nm) Ice Lake laptops finally about to reach customers’ hands after almost a year of disclosures, Intel has already provided some initial information about what to expect from the 11th-Gen processors next year, codenamed Tiger Lake (with Rocket Lake on 14nm still in the rumor mill). Ice Lake focused on integration and a strong CPU and GPU update, and with its ‘mobility redefined’ tag line, Tiger Lake looks to be another 10nm product aimed solely at the mobile market.

Credit: Intel

On the CPU side, Tiger Lake will incorporate the latest Willow Cove architecture. Intel has said that it will feature a redesigned cache, transistor optimizations for higher frequency (possibly 10nm++), and further security features.

While the company has been teasing its Xe discrete graphics cards for even longer than it has talked about Ice Lake, details remain scarce. Intel has said it split the Gen12 (aka Xe) architecture into two microarchitectures, one optimized for the client and the other for the data center, intending to scale from teraflops to petaflops. A couple of leaks from 2018 suggested that the Arctic Sound GPU would consist of a multi-chip package (MCP) with 2-4 dies (likely using EMIB for packaging) and was targeted for qualification in the first half of next year. The leaks also stated that Tiger Lake would incorporate power management from Lakefield.

The MCP rumor is also corroborated by some recent information from an Intel graphics driver, with the DG2 (Discrete Graphics 2) family coming in variants of what is presumably 128, 256 and 512 execution units (EUs). This could indicate one-, two- and four-chiplet configurations of a 128EU die. Ice Lake’s integrated graphics (IGP) has 64EUs, and the small print from Intel’s Tiger Lake performance numbers revealed that it will have 96EUs.
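For rough context on how EU counts translate into compute throughput, here is the back-of-envelope math behind estimates like the ~10 TFLOPS figure in the next paragraph. It’s only a sketch: 16 FLOPs per EU per clock matches Gen11’s FP32 rate, and the 1.2 GHz clock is purely an assumption, since Intel has confirmed neither for Gen12:

```python
def peak_tflops(eus, clock_ghz, flops_per_eu_per_clock=16):
    """Peak FP32 throughput: EUs x FLOPs/EU/clock x clock.
    16 FLOPs/EU/clock matches Gen11; the 1.2 GHz used below is
    an assumption for illustration only."""
    return eus * flops_per_eu_per_clock * clock_ghz / 1000.0

for eus in (128, 256, 512):
    print(f"{eus} EUs @ 1.2 GHz: ~{peak_tflops(eus, 1.2):.1f} TFLOPS")
# 512 EUs lands near the ~10 TFLOPS figure cited below.
```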

A GPU with 512EUs would have in the neighborhood of 10 TFLOPS, which does not look sufficient to compete with 2020 GPU offerings from AMD and Nvidia in the high-end space. However, not all the gaps are filled yet. A summary chart posted by @KOMACHI_ENSAKA talks about three variants of Gen12:

  • Gen12 (LP) in DG1, Lakefield-R, Ryefield, Tiger Lake, Rocket Lake and Alder Lake (the successor of Tiger Lake)
  • Gen12.5 (HP) in Arctic Sound
  • Gen12.7 (HP) in DG2

How those differ is still unclear, but to speculate: the regular Gen12 probably refers simply to the integrated graphics in Tiger Lake and other products. However, the existence of DG1 and the information about Rocket Lake could indicate that Intel has also put this IP in a discrete chiplet. This chiplet could then serve as the graphics for Rocket Lake by being packaged alongside it via EMIB. If we assume Arctic Sound is the mainstream GPU, then Gen12.5 would refer to the client-optimized version and Gen12.7 to the data-center-optimized version of Xe. In that case, the number of EUs Intel intends to offer the gaming community remains unknown.

Moving to the display, it remains to be seen if the Display State Buffer is what Intel referred to with the ‘latest display technology’ bullet point, or if the DSB is just one of multiple new display improvements. Tiger Lake will also feature next-gen I/O, likely referring to PCIe 4.0.

Given the timing of Ice Lake and Comet Lake, Tiger Lake is likely set for launch in the second half of next year.

Display Improvements in Gen11 Graphics Engine

With display being one of the key pillars of Tiger Lake, it is worth recapping the big changes in Gen11’s display block (we covered the graphics side of Gen11 previously).

Credit: Intel

As the name implies, the display controller controls what is displayed on the screen. In Ice Lake, the Gen11 display controller is part of the system agent, and it received some hefty improvements. The Gen11 display engine introduced support for Adaptive-Sync (variable refresh rate technology) as well as HDR and a wider color gamut. The Gen11 platform also integrated the USB Type-C subsystem; the display controller has dedicated outputs for Type-C and can also target the Thunderbolt controller.

Intel also introduced some features for power management, most notably Panel Self Refresh (PSR), a technology first introduced in the smartphone realm. With PSR, a copy of the last frame is stored in a small frame buffer on the display. When the screen is static, the panel refreshes from the locally stored image, which allows the display controller to drop into a low-power state. As another power-saving feature, Intel added a buffer to the display controller into which it prefetches pixels for the screen. This allows the display engine to concentrate its memory accesses into a burst while shutting down the rest of the display controller. It is effectively a form of race to halt, reminiscent of the duty cycling Intel introduced in Broadwell’s Gen8 graphics engine.

Lastly, on the performance side, in response to the increasing monitor resolutions, the display controller now has a two-pixel-per-clock pipeline (instead of one). This reduces the required clock rate of the display controller by 50%, effectively trading die area for power efficiency (since transistors are more efficient at lower clocks as voltage is reduced). Additionally, the pipeline has also gained in precision in response to HDR and wider color gamut displays. The Gen11 controller now also supports a compressed memory format generated by the graphics engine to reduce bandwidth.
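That 50% figure is straightforward to check: a 4K panel at 60 Hz needs roughly half a gigapixel per second, so doubling the pixels per clock halves the required controller clock. A quick sketch (ignoring blanking intervals, which add some overhead in practice):

```python
def required_clock_mhz(width, height, refresh_hz, pixels_per_clock):
    """Display controller clock needed to feed the panel, ignoring
    blanking-interval overhead for simplicity."""
    pixel_rate = width * height * refresh_hz
    return pixel_rate / pixels_per_clock / 1e6

for ppc in (1, 2):
    print(f"4K60 at {ppc} px/clock: "
          f"~{required_clock_mhz(3840, 2160, 60, ppc):.0f} MHz")
# 1 px/clock: ~498 MHz; 2 px/clock: ~249 MHz
```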

AMD Ryzen 9 3950X Might Be Available Without Stock Cooler

A new OPN (Ordering Part Number) for the Ryzen 9 3950X processor, which has been delayed until November, suggests that AMD will offer the flagship chip with and without the AMD Wraith Prism cooler.

Credit: AMD

Intel, for instance, hasn’t included stock coolers with its unlocked Core i5, i7 and i9 processors for a while now. The general reasoning is that if you have the budget for a high-end chip, you most likely won’t be using it with a stock cooler. However, a stock cooler still has its uses, such as standing in when your aftermarket cooling solution fails and you need to RMA it. Unlike Intel, AMD might give future Ryzen 9 3950X customers the choice of picking up the chip with or without the stock cooler.

Credit: Jumbo Computer Supplies

The regular Ryzen 9 3950X, which is listed with the 100-100000051BOX OPN on AMD’s website, comes with the fancy AMD Wraith Prism RGB cooler. The 16-core, 32-thread part is rated with a 105W TDP (thermal design power), so the Wraith Prism should be more than capable of keeping its operating temperatures under control.

If you don’t plan on using AMD’s stock cooler, the Ryzen 9 3950X that comes without the Wraith Prism cooler carries the 100-100000051WOF OPN, where the “WOF” suffix means without a fan.

AMD has already revealed that the Ryzen 9 3950X will have a $749 MSRP. It’ll be interesting to see how the chipmaker prices the Ryzen 9 3950X without the Wraith Prism cooler; if the price difference is significant enough, many consumers will likely opt for the cooler-less version. Meanwhile, a Hong Kong retailer has listed the Ryzen 9 3950X without the cooler for 6,380 HKD (Hong Kong dollars), which roughly translates to $814.

As per usual for these types of listings, these could just be placeholder values, so take the pricing with a grain of salt.

Windows Terminal Is Now Available via the Microsoft Store

Photo Source: Microsoft

Microsoft announced during the Build developer conference in May that it was rethinking Windows 10’s command line tool. The new utility, which the company unimaginatively dubbed the Windows Terminal, was today released as a “very early preview” on the Microsoft Store.

The new app features an updated interface with support for custom themes, multiple tabs, and numerous other personalization options. Because text is such a core part of the app, Microsoft also included GPU-accelerated text rendering as well as support for multiple fonts and emoji. (Because if there’s anything command line users need in a utility it’s the ability to render the “100” emoji in all its crimson glory.)

Windows Terminal won’t immediately replace Command Prompt. Microsoft told us at Build that it was considering options for making Windows Terminal the new default command line tool, but for now, the company is maintaining the status quo with Command Prompt. That’s partly for compatibility reasons, but it likely stems from the fact that Windows Terminal isn’t exactly ready to make a grand debut.

Microsoft said in the Store description: “This very early preview release includes many usability issues, most notably the lack of support for assistive technology. Much of the internal work to support this is complete and it’s our top priority to support assistive technology very soon.” We suspect that Windows Terminal might not have been released yet at all if it hadn’t been given a mid-June launch date at Build.

Windows Terminal is being developed as an open source project. Interested users can follow the app’s progress on GitHub or, if they like, contribute to its development themselves. Microsoft hasn’t yet revealed when it plans to release a non-preview version of the program.

Steam Labs Lets You Test Valve Experiments Like Machine Learning

Valve has been hard at work on a variety of new apps and programs behind the scenes, and with the unveiling of the new Steam Labs online hub today, we can have a glimpse at what the developer has been experimenting with.

That’s the idea behind Steam Labs at least. It’s an online home for all works in progress Valve is tinkering with, such as features with codenames like The Peabody Recommender or Organize Your Steam Library Using Morse Code. Users can evaluate these different modules and share feedback. Should Valve continue development? That’s your opinion to share, as well as thoughts on how each feature should change and evolve, if at all.

The first three Labs experiments are Micro Trailers, the Interactive Recommender and the Automated Show. Micro Trailers are a series of six-second game trailers arranged on a page for you to view all at once. You can peruse these new micro trailer collections for builder games, RPGs, adventure games and plenty of other genres.

Credit: Valve

The Automated Show is akin to in-store programming at GameStop, where a half-hour video showcases some of the latest Steam game launches. It’s meant for you to take in and check out a few hundred games at a time, or leave on as background noise.

Finally, the Interactive Recommender uses machine learning to suggest new titles based on the games you’re currently into. Using a neural network trained on users’ playtime histories and other data, it analyzes play patterns, preferences and a wide variety of additional information about the games you gravitate toward.

The Interactive Recommender doesn’t require developer optimization. Instead, it works with information gleaned from the Steam community itself: the model learns about games on its own during training, and the only metadata it receives during setup is each game’s release date. It’s a hefty undertaking by Valve that’s unlike anything the developer has tackled yet. You can give it a try here.
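Valve hasn’t published the model, so the sketch below is only a toy illustration of the general family it describes: learning user and game representations from playtime alone, with no developer-supplied metadata. It uses invented data and plain matrix factorization with NumPy rather than a real neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy playtime matrix: rows = users, cols = games, values = hours played.
# Zeros mean "never played" (implicit feedback), not "disliked".
playtime = np.array([[40.0, 12.0, 0.0,  0.0],
                     [35.0,  0.0, 2.0,  0.0],
                     [ 0.0,  0.0, 9.0, 30.0],
                     [ 0.0,  1.0, 8.0, 25.0]])
signal = np.log1p(playtime)  # squash the long tail of playtime hours

k = 2  # embedding size
users = rng.normal(scale=0.1, size=(signal.shape[0], k))
games = rng.normal(scale=0.1, size=(signal.shape[1], k))

lr = 0.02
for _ in range(3000):  # plain gradient descent on squared error
    err = signal - users @ games.T
    users += lr * err @ games
    games += lr * err.T @ users

scores = users @ games.T
scores[playtime > 0] = -np.inf  # don't re-recommend games already played
print(scores.argmax(axis=1))    # top new pick per user
```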

For additional Steam Labs experiments coming down the pipeline, you can join the Steam Labs Community Group and keep an eye on when new additions join the hub. For now, you can try out these three intriguing new experiments and share what you think with Valve. Meanwhile, we’ll be watching to see where this all goes.

Steam Proposes Linux Kernel Changes To Improve Multi-Threaded Games

Steam’s CPU test. Credit: Steam

Steam announced this week that it released the first build of Proton 4.11, which is based on WINE 4.11, the compatibility layer that allows thousands of Windows games to run on Linux. The new version includes many bug fixes, as well as a new Vulkan-based implementation of Direct3D 9. Additionally, the new release includes functionality that could reduce the CPU overhead of multi-threaded games if Linux kernel developers adopt Steam’s proposed changes to the kernel.

The Steam developers said they forced “a CPU-bound scenario on a high-end machine by reducing graphics details to a minimum” to see the difference between the existing version of Proton and one that included the multi-threading improvement. We can see in the image above that the CPU load decreased by at least 10% in the Tomb Raider game. The developers expect the results to be reproducible on lower-end machines, too.

The new release also includes an experimental replacement for esync, an older WINE feature that could increase multi-threaded performance for some games. However, according to the Steam developers, this feature comes with some major trade-offs, such as relying on the Linux kernel’s eventfd() functionality. The use of eventfd() can cause file descriptor exhaustion in event-hungry applications and can result in extraneous spinning in the kernel.
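The appeal and the drawback of eventfd() are both visible in a few lines: every synchronization object costs one kernel file descriptor. Here’s a small sketch of the semantics using Python’s os.eventfd wrapper (Linux-only, Python 3.10+); a game spawning thousands of such objects is what exhausts descriptors:

```python
import os
import threading

# One eventfd per sync object: this is the cost esync pays.
# EFD_SEMAPHORE makes each read consume exactly one count.
fd = os.eventfd(0, os.EFD_SEMAPHORE)

def waiter():
    os.eventfd_read(fd)      # blocks until the counter is nonzero
    print("signaled")

t = threading.Thread(target=waiter)
t.start()
os.eventfd_write(fd, 1)      # signal: increment the counter by one
t.join()
os.close(fd)
```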

The Steam team then came up with some changes to the Linux kernel to extend the futex() system call to expose additional core functionality that could be used to support optimal thread pool synchronization. 

Proton 4.11 already includes the fsync patches that will replace the older esync and take advantage of the new functionality once the changes to the Linux kernel are made. In the meantime, the Steam team will continue to test its solution on Ubuntu and Arch Linux distributions with custom kernels that contain the above-mentioned patches.

MSI’s Content Creation Laptops Get Comet Lake

Ahead of IFA in Berlin, MSI is introducing a new suite of laptops focused on content creation with Intel’s new tenth generation “Comet Lake” processors. The company didn’t reveal precise specs or release dates, but the new laptops are available for pre-order today through Newegg.

The new laptops are the MSI Modern 14 and the MSI Prestige 14 and 15. The biggest difference is that the Modern will go up to Nvidia GeForce MX250 graphics, while the Prestige laptops will go up to GTX 1650 Max-Q. While the Prestige laptops will have 4K options, the Modern will only go up to FHD.

| | MSI Prestige 14 | MSI Prestige 15 | MSI Modern 14 |
| --- | --- | --- | --- |
| CPU | Up to 10th Gen Intel Core i7 “Comet Lake” U-Series | Up to 10th Gen Intel Core i7 “Comet Lake” U-Series | Up to 10th Gen Intel Core i7 “Comet Lake” U-Series |
| GPU | Up to Nvidia GeForce GTX 1650 Max-Q (4GB GDDR5) | Up to Nvidia GeForce GTX 1650 Max-Q (4GB GDDR5) | Up to Nvidia GeForce MX250 (2GB GDDR5) |
| RAM | Up to 16GB LPDDR3-2133 | 2x DDR4 SODIMMs (up to 64GB) | 1x DDR4 SODIMM (up to 32GB) |
| Display | 14-inch, up to 4K UHD | 15.6-inch, up to 4K UHD | 14-inch FHD (1920 x 1080) |
| Storage | 1x M.2 SSD slot (PCIe NVMe or SATA) | 1x M.2 SSD (NVMe or SATA), 1x M.2 (NVMe only) | 1x M.2 SSD slot (PCIe NVMe or SATA) |
| Size | 12.6 x 8.5 x 0.6 inches / 320 x 215.9 x 15.2 mm | 14 x 9.2 x 0.6 inches / 355.6 x 233.7 x 15.2 mm | 12.7 x 8.7 x 0.6 inches / 322.6 x 221 x 15.2 mm |

A number of specs weren’t fully detailed at press time, including exact storage sizes and amounts of RAM.

The Prestige laptops are being touted as mobile workstations with what MSI calls a “True Pixel” display; that means a 4K panel that covers 100% of the sRGB color gamut with a Delta-E value of less than 2. MSI is claiming 16 hours of battery life for the 15-inch notebook and 14 hours for the 14-inch model.

All three will come in a “carbon gray” color with an aluminum chassis. I found the Modern felt a bit more solid than the Prestige options. Additionally, the content creation laptops are getting a new keyboard font to separate them from prior models, which shared a font with the gaming machines.