Yeah, clearly the PRO part with just 8GB of RAM is the problem.
Come on, that’s what I have on my Surface Go 1 from 2019. It runs Linux perfectly and is okay for my needs, but I wouldn’t put such specs on a PRO thing😅
I’m pretty brutal on my machines, and my 8 gig M1 really only starts to beach ball when multiple accounts are open and those accounts all have bloated multimedia software running.
My 16 gig machines can handle that use case fine, but the 8 gig machine will occasionally beach ball.
Personally, I won’t buy an 8 gig config again. But I’m a fucking monster that leaves a million bloated things open across multiple active user sessions.
"Unified" only means there's not a discrete block for the CPU and a discrete block for the GPU to use. But it's still RAM: specifically, LPDDR4X (for M1), LPDDR5 (for M2), or LPDDR5X (for M3).
Besides, low-end PCs with integrated graphics have been using unified memory for decades; no one ever said "They don't have RAM, they have UM!"
Yes, that’s true, but it’s still an indicator of an uninformed reporter.
Apple Silicon chips pass data from one dedicated core directly to another without needing to pass through memory, hence the smaller processor cache. There are between 18 and 58 cores in the M3 (model dependent). The architecture works very differently from the conventional CPU/GPU/RAM model.
I can run FCP and Logic Pro and have memory to spare with 16GB of UM. The only thing that pushes me into swap is Chrome. lol
Maybe you’re not familiar with the apps I’m referring to. Final Cut Pro and Logic Pro are professional video and audio workstations.
If I tried to master an export from Adobe Premiere Pro in Pro Tools on PC, I’d need 32GB of RAM to prevent stutter. I only use ~12GB of 16GB doing the same on Apple Silicon.
8GB of UM is not for someone running two pro apps at once. It’s for grandma to do her online banking and check her email and Facebook.
it’s still an indicator of an uninformed reporter.
My dude, you're literally in here arguing that because Apple has a blob for both CPU memory and GPU memory that somehow makes that blob "not RAM." Apple's design might give fantastic performance, but that's irrelevant to the fact that the memory on the chip is RAM of known and established standards.
Each power-intensive process is given its own dedicated core. The OS is designed specifically to send dedicated processes to the associated core. For example, your CPU isn’t bogged down decrypting data while loading an application.
You can’t compare it to anything else out at this time. Just learn about it, or don’t. Guessing is just a waste of time.
It’s different. The GPU is broken into several parts and integrated into the SoC along with the CPU’s dedicated processors. Data is passed within the SoC without entering UM, which is used exclusively as a storage liaison.
You should check out Apple Silicon M-series. Specs don’t translate to performance the way they do in conventional PC architecture. I guarantee you’ll see PC manufacturers going to 2nm SoC configurations soon enough. The performance is undeniable.
Those are only two of the 18-52 cores (model dependent) of Apple M chips. The OS is designed around this for maximum efficiency. Most Macs don’t even have a fan anymore.
Dude, it’s just GDDR#, the same stuff consoles use.
PCs have had this ability for over a decade there, mate. Apple is just good at marketing.
What's next? When VRAM overflows it gets dumped into regular RAM? Oh wait, PCs can do that too...
According to benchmarks, the 8700G is on average 22% slower than the M3 single-core and 31% faster multi-core; its FP32 is 41% higher than the M3's, its AI score is 54% lower, and it also uses 54% more energy.
What about those stats says AMD can't compete? The 8700G is an APU, just as the M3 is.
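Treating those percentages as multipliers makes the trade-off easier to see. A minimal back-of-envelope sketch, pure arithmetic on the figures quoted above (assuming the benchmark numbers are as stated; no new data):

```python
# Multipliers for the 8700G relative to the M3, from the percentages above.
single_core = 1 - 0.22   # 22% slower single-core -> 0.78x
multi_core  = 1 + 0.31   # 31% faster multi-core  -> 1.31x
fp32        = 1 + 0.41   # 41% higher FP32        -> 1.41x
ai          = 1 - 0.54   # 54% lower AI score     -> 0.46x
energy      = 1 + 0.54   # 54% more energy        -> 1.54x

# Multi-core performance per watt of the 8700G relative to the M3:
perf_per_watt = multi_core / energy
print(f"8700G multi-core perf/W vs M3: {perf_per_watt:.2f}x")  # -> 0.85x
```

By that rough math, the 8700G wins raw multi-core throughput while the M3 comes out ahead on efficiency, which fits what both sides of this thread are saying.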
I’m talking about practical use performance. I understand your world; you don’t understand mine. I’ve been taking apart and upgrading PCs since the 286. I understand benchmarks. What you don’t understand is how macOS uses the SoC in a way where benchmarks =/= real-world performance. I’ve used pro apps on powerful PCs and powerful Macs, and I’m speaking from experience. We can agree to disagree.
I grew up with a Tandy 1000 and was always getting yelled at for taking it apart, along with just about every PC we owned after that, too.
Benchmarks are indicative of real-world performance for the most part. If they were useless we wouldn't use them, kinda like userbenchmark.
The one benefit Apple does have is owning its own ecosystem, where they can modify the silicon/OS/software to work with each other better.
That does not mean the M3 is the best there is and can't be touched; that is just misleading.
The 8700G is gonna stomp the M3 using Maxon's software suite, just as the M3 will stomp the 8700G using Apple's software suite.
Then on top of that, the process node for manufacturing said silicon is different (3nm vs 4nm); that alone allows for a 20% (give or take) performance difference, just like every process-node change in the past decade or so.
I'll take the loss on the experience part, as the only Apple product I own is an Apple TV 4K, but there are many nuances you've obviously glossed over.
Is the M3 a good piece of silicon? Yes
Is it the best at EVERYTHING? Of course not
Should Apple give up because they are not the best? Fuck no.
Man, you’re kinda off the point. This is about how much UM is appropriate for a base model. I’m simply saying the architecture of an SoC utilizes UM as a storage liaison exclusively, since the CPU and GPU are cores of the same chip. It simply does not mean the same thing as 8GB of RAM in standard architecture. As a pro app user, 16GB is enough. 8GB is plenty for grandma to check her Facebook and do her online banking.
Notice the soldered RAM and lack of video card? Kinda like what the M series does.
And when all is said and done, 8GB is not nearly enough, and Apple should be chastised for it, just like Nvidia when they first decided to make 5 different variations of the 1060, ensuring 4 of those variations would become e-waste in a few short years, and again with the 3050 6GB vs 3050 8GB.
They both have independent CPU and GPU. UM is not used to pass data from CPU to GPU on an SoC system; it’s exclusively a storage liaison. Therefore it’s used far less than in non-SoC applications.
The CPU and GPU are one chip. Learn about Apple Silicon SoC rather than trying to find a comparison. You won’t find one anywhere yet.