Tag Archives: Computer Chips

AMD Ryzen World Record

[Image by TeX9.net]
New World Record: AMD Ryzen 9 9950X Outshines the Competition

Before even hitting the market, the AMD Ryzen 9 9950X has shattered records, setting a new world record and solidifying its position as a powerhouse among processors worldwide. Chinese overclocker Tony Yu pushed the 16-core giant to new heights in Cinebench R23, achieving an astonishing 55,327 multicore points and surpassing the previous record held by its predecessor, the Ryzen 9 7950X.

The AMD Ryzen 9 9950X has already claimed its first title, even though it’s not yet officially on sale. Chinese overclocker Tony Yu set new records for a 16-core processor in Cinebench R23 with this chip.

After its predecessor, the Ryzen 9 7950X, led the charts for nearly two years, the AMD Ryzen 9 9950X now tops the Cinebench benchmark rankings. Tech influencer Tony Yu achieved an impressive 55,327 multicore points with the processor. AMD’s internal overclocking team had previously reached similar, albeit slightly lower, scores.

The AMD Ryzen 9 9950X typically operates at 4.3 GHz. For the record attempt, Tony Yu raised the frequency, first to 5 GHz, then to 6 GHz, and finally to 6.5 GHz. He managed to break the predecessor’s record already at the middle setting.

Such performance is not achievable by standard means. To push the CPU to this level, he had to cool it with liquid nitrogen. This method is common in extreme overclocking attempts and temporarily lowers the temperature to around -165 degrees Celsius.

The new Ryzen processors will hit the market soon. Sales start in August with the Ryzen 5 9600X and the Ryzen 7 9700X. A week later, the Ryzen 9 9900X and the aforementioned Ryzen 9 9950X will follow. The computer processors were originally scheduled for an earlier release but had to be delayed due to a typographical error.

nVidia & Snowflake cooperation for AI models from the cloud

nVidia and Snowflake announce cooperation for generative AI models from the cloud.

The US graphics card manufacturer nVidia and the US software company Snowflake have announced a cooperation. They want to offer companies generative AI from the cloud.

Image: Artificial Intelligence, AI (Inteligencia artificial), via: pixabay, by: geralt.

nVidia Corp. and Snowflake Inc. announced at the Snowflake Summit 2023 that they have entered into a partnership.

The goal of the agreement: enabling companies to build generative AI applications using their own data within the Snowflake Data Cloud. Together with nVidia, Snowflake lets enterprises use the data in their Snowflake accounts to build advanced generative AI services, including chatbots.

“Data is the foundation for building generative AI applications that understand the complexities and unique voice of each business,” said Jensen Huang, founder and CEO of nVidia.

“nVidia and Snowflake will create an AI factory that will help companies transform their own valuable data into custom generative AI models to power breakthrough new applications, right from the cloud platform on which they run their businesses,” Jensen Huang added.

Expanding the AI capabilities in the Data Cloud enables these customers to build generative AI applications right where the data they manage already resides, which reduces costs and latency.

The Snowflake Data Cloud has more than 8,000 customers. In the future it will offer these companies the ability to unify, integrate, analyze and share data across their organizations, customers, partners and suppliers.

Snowflake’s unified platform offers industry-specific data clouds, including advertising, media and entertainment, financial services, healthcare and life sciences, manufacturing, retail and consumer goods, and technology and telecom.

Snowflake also recently launched the Government and Education Data Cloud to enable data-driven decision-making for the public sector.

Snowflake plans to host and operate NeMo, nVidia’s cloud-native enterprise platform for building, customizing and deploying generative AI models, on the Data Cloud.
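
To make the pattern described above concrete, here is a minimal sketch of what “generative AI on your own Snowflake data” can look like: query the data that already lives in a Snowflake account and hand it to a generative model as context. The Snowflake access uses the standard snowflake-connector-python package; the connection parameters, table names and the generate_answer() model call are hypothetical placeholders, since the announcement does not describe the concrete NeMo-on-Snowflake interface.

    # Minimal sketch: company data from Snowflake as context for a generative chatbot.
    # Assumptions: snowflake-connector-python is installed; credentials, table names
    # and generate_answer() are hypothetical placeholders, not a published API.
    import snowflake.connector


    def fetch_context(keyword: str) -> list[str]:
        """Pull a few matching support tickets from the company's Snowflake account."""
        conn = snowflake.connector.connect(
            user="MY_USER", password="MY_PASSWORD", account="MY_ACCOUNT",  # placeholders
            warehouse="MY_WH", database="SUPPORT", schema="PUBLIC",
        )
        try:
            cur = conn.cursor()
            cur.execute(
                "SELECT ticket_text FROM tickets WHERE CONTAINS(ticket_text, %s) LIMIT 5",
                (keyword,),
            )
            return [row[0] for row in cur.fetchall()]
        finally:
            conn.close()


    def generate_answer(question: str, context: list[str]) -> str:
        """Placeholder for a call to a generative model, e.g. one customized with NeMo."""
        prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}\nAnswer:"
        return f"[model response for a prompt of {len(prompt)} characters]"


    if __name__ == "__main__":
        tickets = fetch_context("refund")
        print(generate_answer("How do I request a refund?", tickets))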

Competition for SSD

Competition for SSD: 1.4 petabytes of data on future magnetic tape

Tape storage technology seemed to be falling short of expectations recently. Now it looks like it’s catching up again – thanks to a new material.

Linear Tape Open (LTO) first appeared on the market in 2000. At that time, the storage media, which recorded data as encoded tracks on magnetic tape, had a capacity of just 200 gigabytes.

The tapes have now reached their ninth generation and are currently able to record data volumes of 18 terabytes (uncompressed). The technology is a cost-effective way, especially for companies, to preserve critical data reliably and durably.

Image: Data Storage, Free Stock Picture, MorgueFile.com.

LTO09 initially fell short of expectations

But what sounds like an incredible amount of storage space was initially a disappointment. LTO09 was originally supposed to offer 24 terabytes of space, but could not meet these expectations.

According to a new roadmap presented by the developers IBM, HPE and Quantum at the beginning of September, it should be possible to store a whopping 1,440 terabytes (1.4 petabytes) from the 14th generation, which is expected to appear in 2033/34.
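
As a quick plausibility check of the roadmap figures quoted above: going from 18 terabytes in generation 9 to 1,440 terabytes in generation 14 is an 80-fold increase over five generational steps, or roughly a 2.4-fold capacity jump per generation. The short calculation below is only an illustrative sketch based on those two numbers, not part of the roadmap itself.

    # Implied per-generation growth between LTO-9 (18 TB) and the planned LTO-14 (1,440 TB).
    lto9_tb = 18
    lto14_tb = 1440
    steps = 14 - 9  # five generational steps

    overall_factor = lto14_tb / lto9_tb              # 80x overall
    per_generation = overall_factor ** (1 / steps)   # ~2.4x per generation

    print(f"Overall increase: {overall_factor:.0f}x")
    print(f"Implied growth per generation: {per_generation:.2f}x")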

Magnetic storage tapes: New material creates new possibilities

According to Heise Online, this is made possible by coating the magnetic tapes with strontium ferrite (SrFe) instead of the barium ferrite (BaFe) that has been used up until now. The first prototypes have already been developed and tested by Fujifilm. The new material is to be used from LTO13 onward.

In contrast to SSDs, whose maximum capacity is currently around 100 terabytes, LTO is also significantly cheaper. While the most expensive SSD medium, priced at 40,000 US dollars, works out to about 2.5 dollars per gigabyte, LTO costs only around 0.01 dollars per gigabyte.
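
To put the two quoted per-gigabyte prices into perspective, the sketch below works out what archiving one petabyte would cost at each rate. The per-gigabyte figures are taken from the text above; the one-petabyte workload is just an illustrative assumption.

    # Cost of archiving 1 PB at the per-gigabyte prices quoted above.
    petabyte_gb = 1_000_000        # 1 PB expressed in gigabytes (decimal units)
    ssd_usd_per_gb = 2.50          # figure quoted for the most expensive SSD medium
    lto_usd_per_gb = 0.01          # figure quoted for LTO tape

    ssd_cost = petabyte_gb * ssd_usd_per_gb   # 2,500,000 USD
    lto_cost = petabyte_gb * lto_usd_per_gb   # 10,000 USD

    print(f"SSD: ${ssd_cost:,.0f}   LTO: ${lto_cost:,.0f}   ratio: {ssd_cost / lto_cost:.0f}x")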

According to Sam Werner, IBM vice president of storage product management, LTO “provides organizations with a sustainable, reliable and cost-effective solution to protect and store their critical business data.”

Samsung has a new battery issue that can affect any Galaxy phone

Samsung has a new battery problem with its Galaxy smartphones. After the disaster with the Galaxy Note 7, which exploded and had to be recalled worldwide, older smartphones are now bloating and can become an unnoticed danger.

Image: Circuit Board Chip, Free Stock Picture, MorgueFile.com.

Samsung smartphones are bloating

Samsung promised that after the catastrophic incidents involving the Galaxy Note 7, its batteries would be safe and there would be no more problems. As it turns out, Samsung has an even bigger battery problem than just one model. YouTuber Mrwhosetheboss has found many Samsung phones in his smartphone collection with swollen batteries. A reaction with gas formation probably occurs inside, so that the battery swells up and the back bursts open. In these cases there is no explosion or fire, so the batteries are fundamentally safe.

It becomes critical if the battery keeps bloating and you don’t notice it immediately. In fact, the problem tends to affect older Samsung phones that are stored with an empty battery. Our Galaxy S6 edge has also ballooned, as you can see in the cover photo above. But Mrwhosetheboss has also noticed early signs of a swelling battery on his Galaxy S20 FE and Galaxy Z Fold 2. And that is when it gets dangerous: if gases have formed and the battery has swollen, charging the smartphone could trigger a further reaction and excessive heat.

How should you store a Samsung phone?

Mrwhosetheboss gives an important tip on how to store Samsung phones you are no longer using: charge the battery to about 50 percent before putting the phone away. This should reduce the risk of the battery bloating. If you are still using an older Samsung phone, you should regularly check whether the battery has already started to swell slightly. If it has, you should stop charging the phone and contact Samsung. We will seek a statement from Samsung on the matter.

iPhone 14 Pro chip bigger despite smaller transistors

Closeup of old circuit board. Image: Circuit Board, Free Stock Picture, MorgueFile.com.

Small changes to caches and processor cores: that is how a preliminary analysis of the A16 silicon chip from Angstronomics can be summarized. Although there is still no high-resolution image of the die, there is a video in which some details can already be seen. Since components such as caches, processor cores and the GPU form distinctive patterns on the die, they can be identified and at least roughly measured.

The operator of Angstronomics, who publishes under the pseudonym Skyjuice, comes to the conclusion that Apple has reduced the L3 or system level cache (SLC) in the A16 compared to its predecessor, the A15. Compared to the 4 MB L2 cache of the efficiency CPU cluster made up of Sawtooth cores, each of the two SLC blocks occupies about three times the area, so it should hold 12 MB – SRAM memory cells need roughly the same amount of space regardless of where they sit in the cache hierarchy.

This means that the SLC of the A16, at 24 MB, is a quarter smaller than that of the A15, which has 32 MB. However, Apple has given the performance cores named Everest a third more L2 cache: the area here suggests that each of the two blocks holds 8 MB, while the A15 had a total of 12 MB.
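
These sizes are inferred from relative die area: SRAM cells take up roughly the same space wherever they sit in the cache hierarchy, so a block covering about three times the area of the known 4 MB L2 cache should hold about 12 MB. The short sketch below reproduces that estimate and the A15/A16 comparison using only the figures quoted in the text.

    # Estimate cache capacities from relative die area (figures as quoted above).
    e_cluster_l2_mb = 4          # known reference: 4 MB L2 cache of the E-core cluster
    slc_block_area_ratio = 3     # each SLC block covers ~3x the area of that L2 cache

    slc_block_mb = e_cluster_l2_mb * slc_block_area_ratio   # ~12 MB per block
    a16_slc_mb = 2 * slc_block_mb                           # two blocks -> 24 MB
    a15_slc_mb = 32
    print(f"A16 SLC: {a16_slc_mb} MB, {1 - a16_slc_mb / a15_slc_mb:.0%} smaller than the A15")

    a16_pcore_l2_mb = 2 * 8      # two Everest L2 blocks of ~8 MB each
    a15_pcore_l2_mb = 12
    print(f"P-core L2: {a16_pcore_l2_mb} MB, {a16_pcore_l2_mb / a15_pcore_l2_mb - 1:.0%} more than the A15")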

One can only speculate about the reason for the reduction in the size of the SLC. Angstronomics brings the higher data rate of the memory into play as a possible explanation: LPDDR5-6400 is used for the first time in the A16. Optimizations are also conceivable, since the L2 cache of the P-cores was enlarged at the same time. Many factors play a role in the dimensioning of caches, including the microarchitecture of the processors – it is very likely that there was no single decisive argument for the redistribution.

Changes to processor cores

There are also small changes in the processor cores: they are arranged differently on the die, and Apple has also revised their structure. Both the Everest and Sawtooth (P/E) cores appear to be slightly larger than their Avalanche and Blizzard predecessors. The neural and graphics processing units (NPU and GPU), on the other hand, appear largely unchanged. However, they are hard to make out in the Angstronomics image.

However, the NPU is only eight percent faster than in the A15. This fits with the switch at supplier TSMC from the N5 to the N4 process and the roughly ten percent increase in speed expected as a result; major changes are therefore unlikely. The higher switching speed of the transistors in N4 should also play a role in the GPU, which additionally benefits from the larger memory bandwidth. Together, both could almost explain the measured 28 percent increase in speed.

Bigger chip despite (slightly) smaller transistors

With N4, TSMC refers to a further development of the N5 manufacturing process, in which Apple’s A15 is manufactured. According to TSMC, this increases the integration density by six percent, and the number of transistors also grows by about six percent, from 15 billion in the A15 to 16 billion in the A16. Theoretically, the dies of the A15 and A16 could therefore be the same size.
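
That last point can be checked with the numbers given: if the transistor count grows by about as much as the transistor density, the die area stays roughly constant. A back-of-the-envelope sketch using only the figures above:

    # Rough die-area comparison from transistor count and density (figures as quoted above).
    a15_transistors = 15e9
    a16_transistors = 16e9
    density_gain = 1.06                              # N4 packs ~6 % more transistors per area

    count_gain = a16_transistors / a15_transistors   # ~1.07, i.e. ~6-7 % more transistors
    relative_die_area = count_gain / density_gain    # ~1.01 -> essentially the same size

    print(f"Transistor count: +{count_gain - 1:.1%}")
    print(f"Relative die area (A16 vs. A15): {relative_die_area:.2f}")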