Allen
Hotshot
Joined: Oct 1999
Posts: 8,916
Ohio USA
Quote
TSMC's 3nm Process Becomes The "Holy Grail" For The Markets: Entire Supply Reserved Until 2026 While Tech Giants NVIDIA, Apple, Qualcomm and AMD Considering Price Hikes
The firm's N3 process has seen massive demand from the AI segment, with its integration into current-gen AI accelerators; alongside that, TSMC's N3P node will see widespread adoption from the mobile segment.
While it isn't conclusive that consumer products will see a price increase with the 3nm supply disruptions, the indicators hint at it happening.
Allen
Hotshot
Joined: Oct 1999
Posts: 8,916
Ohio USA
More AI information. As we know, AI continues to be the main thrust of Nvidia and AMD.
Quote
AMD Instinct MI300X 192 GB GPU Takes The OpenCL Benchmark Crown, [19%] Faster Than NVIDIA's Flagship RTX 4090
The AMD Instinct MI300X is primarily a Data Center GPU designed to meet today's AI requirements. Its main competition is the Hopper AI GPU family from NVIDIA but it looks like someone tested the GPU in the Geekbench OpenCL benchmark and it completely obliterates the entire GPU ladder, taking the top spot with ease.
Based on the CDNA 3 architecture, the MI300X is an engineering marvel that goes heavy into the chiplet packaging design with 153 billion transistors. There are a total of 28 dies on the chip itself, which include eight HBM3 packages for up to 192 GB of memory capacity, the highest of all Data Center GPUs available today.
On the GPU side, the Instinct MI300X packs a total of 304 compute units with a total of 19,456 cores.
As for power consumption, the MI300X has a rated TDP of 750W, almost twice the figure of NVIDIA's RTX 4090, which is rated at 450W.
[A] single RTX 4090 sells for around $1500 to $2000 US while a single [AMD] MI300X chip sells for around $15,000 US.
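To put the quoted numbers side by side, here is a quick back-of-the-envelope comparison (a sketch only; taking the 4090 price as the midpoint of the article's $1,500-$2,000 range is my assumption):

```python
# Back-of-the-envelope comparison of the quoted MI300X and RTX 4090 figures.
# The 4090 price is the midpoint of the article's $1,500-$2,000 range (an assumption).
mi300x_tdp_w, rtx4090_tdp_w = 750, 450
mi300x_price_usd, rtx4090_price_usd = 15_000, 1_750

tdp_ratio = mi300x_tdp_w / rtx4090_tdp_w            # ~1.67x, i.e. "almost twice"
price_ratio = mi300x_price_usd / rtx4090_price_usd  # ~8.6x at the midpoint price

print(f"TDP ratio:   {tdp_ratio:.2f}x")
print(f"Price ratio: {price_ratio:.1f}x")
```

So the MI300X draws roughly 1.7x the power of a 4090 while costing roughly 8-10x as much, depending on where in the range the 4090 actually sells.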
Allen
Hotshot
Joined: Oct 1999
Posts: 8,916
Ohio USA
So far, just a serious discussion -- not actually being built (yet).
Quote
AMD talks 1.2 million GPU AI supercomputer to compete with Nvidia — 30X more GPUs than world's fastest supercomputer
The best supercomputers in the world have less than 50,000 GPUs, how in the world is someone going to make an AI cluster with 1.2 million GPUs?
AMD's admission comes from ... Forrest Norrod, AMD's EVP and GM of the Datacenter Solutions Group, about the future of AMD in the data center. ... the biggest AI training cluster that someone is seriously considering.
1.2 million GPUs is an absurd number (mind-boggling, as Forrest quips later in the interview). AI-training clusters are often built with a few thousand GPUs ... creating an AI cluster with 1.2 million GPUs seems virtually impossible.
The goal of million-GPU clusters speaks to the seriousness of the AI race that is molding the 2020s. If it is in the realm of possibility, someone will try to do it if it means greater AI processing power. Forrest didn't say which organization is considering building a system of this scale but did mention that "very sober people" are contemplating spending tens to hundreds of billions of dollars on AI training clusters (which is why million-GPU clusters are being considered at all).
Allen
Hotshot
Joined: Oct 1999
Posts: 8,916
Ohio USA
Quote
Intel Denies RMA Requests For 14th & 13th Gen Desktop CPUs Plagued With Instability Issues, System Providers Switch To AMD Ryzen
Intel and its 14th & 13th Gen Desktop CPUs have caused consumers a lot of grief, and the problem has now gone too far. The fiasco has dragged on for quite a while, with Intel initially being completely clueless about what was causing the issue.
In another development, data from Level1Techs shows that Intel's 13th and 14th Gen CPUs account for a major portion of the error logs in Oodle game telemetry data. Team Blue accounted for 1,431 decompression errors, while AMD, on the other hand, had only four such errors. A breakdown shows that more than 70% of Intel's CPUs were prone to errors compared to 30% of AMD's, an alarming situation for Intel.
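For what it's worth, the raw error counts quoted above work out to an extremely lopsided split (a quick sketch of the arithmetic; the 70%/30% figure in the article is a separate breakdown of the CPUs themselves, not of these counts):

```python
# Vendor split of the Oodle decompression errors quoted from Level1Techs' data.
intel_errors, amd_errors = 1431, 4
total_errors = intel_errors + amd_errors

intel_share = intel_errors / total_errors
amd_share = amd_errors / total_errors

print(f"Intel: {intel_share:.1%} of logged errors")  # ~99.7%
print(f"AMD:   {amd_share:.1%} of logged errors")    # ~0.3%
```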
Allen
Hotshot
Joined: Oct 1999
Posts: 8,916
Ohio USA
Quote
AMD Surpasses Intel In Brand Recognition, NVIDIA Bags 6th Spot With The Single Largest Brand Value Growth
Well, it won't be wrong to say that amid the AI hype, AMD emerged as the clear winner compared to the likes of Intel, especially in the professional sectors. The company's Instinct lineup of AI accelerators was massively popular among clients compared to Intel's Gaudi series, simply because Team Red did a better job of capitalizing on the market hype and gaining the trust of its clients. Apart from that, AMD's new AI-focused consumer offerings, such as the Strix Point APUs, saw huge market adoption, and based on such factors, Kantar has put AMD above Intel when it comes to overall brand value.
AMD's brand value increased by 53%, while Intel's increased by 23% since 2023, which shows that AMD made huge strides in the markets. AMD currently stands at 41st position while Intel stands at 48th position. [NVIDIA is among the "top ten", i.e. higher than 10th place.]
AMD was included in the list of "Top 10 Risers" this year, which included NVIDIA, which grew by a whopping 178%, along with Instagram and Facebook, which, too, showed double-digit gains.
Allen
Hotshot
Joined: Oct 1999
Posts: 8,916
Ohio USA
Quote
Game publisher claims 100% crash rate with Intel CPUs – Alderon Games says company sells defective 13th and 14th gen chips
No amount of BIOS or firmware updates fixes the problem.
Frustration is mounting over Intel's inability to resolve its Raptor Lake instability problems. Game development studio Alderon Games revealed on its website that the company has had nonstop problems with Intel's 13th and 14th-generation processors in its servers, development systems, and customers' gaming PCs. The issues have become so prevalent that Alderon Games publicly stated that Intel sells defective 13th and 14th-generation CPUs.
Allen
Hotshot
Joined: Oct 1999
Posts: 8,916
Ohio USA
Leaked data -- may not be entirely accurate.
Quote
Nvidia RTX 50 graphics card family TDPs 'leaked' by Seasonic
RTX 5090 buyers may need 500W just to cover GPU power consumption.
It is interesting to see that the largest generational uplift in TDP is going to be with the '60-series GPU. Hopefully, this means Nvidia is addressing the common complaints it faced due to the large performance and price gap between the RTX 4070 and 4060. Importantly, we should see the RTX 5060 return to a 12GB VRAM standard, too. [see attached chart]
We expect Nvidia's rollout of its Blackwell architecture consumer graphics cards to begin around October, starting with the high-end RTX 5090 and RTX 5080 cards. The enthusiast RTX 5070 should follow in January 2025, with the mainstream RTX 5060 launching in Q3 or Q4.