• 0 Posts
  • 9 Comments
Joined 11 months ago
Cake day: October 25th, 2023

  • If you listen to the investor calls and roadmaps, they state when 20A/18A will be ready and which products will use those manufacturing processes.

    My guess is that IFS will be coming online and Intel is saving that capacity for its customers. The other reason is margin: Intel gets better margins by reserving its own manufacturing for its higher-priced products.

    Lunar Lake, being a consumer product, will mean lower margins, while saving 20A/18A for server and server GPU products could mean better margins.

    Performance is likely secondary. All new silicon performs so closely that you really can’t tell the difference.

    The only exception is NVIDIA, with software scaling like DLSS and Frame Generation.


  • Since when did we care so much about the lawman? Especially the old and white international law man?

    Like, I would totally download a car. I’ve downloaded many things I shouldn’t have on the internet just because I can. People speed daily because nobody is looking. The whole reason border patrol has a job is that they stop and check vehicles for contraband.

    US citizens make money buying drugs in one state and driving them to another to cut up and sell at a higher profit.

    People overcharge the normal US consumer and we blindly accept it. Heck, that extra virgin olive oil you bought at your market likely isn’t even from Italy, or it’s full of filler.

    So why, I ask, does anyone care if NVIDIA can make more money selling equipment that everyone else is already using? And for what? So people can use Bing more for search and have some fun AI chat on a Friday after work? Or use AI to generate some Warhammer 40K ideas for new colors and inspiration?


  • While I do like new tech, and certainly Apple’s M line of CPU+GPU chips on a single SoC is powerful, it needs to be compared to the right technology. Comparing an SoC with a single heat pipe against a desktop-class GPU with 2,000 grams of heat sink is just not a good comparison. They are built for different things.

    The consumer-class RTX 4000 series is not a workstation-class GPU. It is designed for intensive grunt work for gamers, with latency optimized for gaming. That means high clock frequencies: the higher the clocks, the better the latency.

    Apple’s M3 is geared toward the professional content-creator class of user, who does not like to be bothered by heavy GPU and fan noise. It won’t have the best latency, but it has good access to the tools that help get professional work done.

    Until Apple develops their own “GPU” with high frequencies and heavy cooling requirements, they will just be dumbing games down to Candy Crush-style average-latency gaming.




  • What is weird about this? When you run Cinebench you want two things to happen.

    1. Get the highest score (most important).

    2. Use the most power (it is a benchmark tool after all, useful for troubleshooting and other things).

    The power values you get from software aren’t accurate.

    Under an all-core load, you may not see the peak single-core burst frequency. The CPU itself knows when it needs to hit peak frequency, but because your load doesn’t require it, it won’t.

    Instead, it knows you demand all-core loading at high power, so it shifts to the optimal frequency and power usage for your power-virus load.

    Run a lighter, higher-frequency load like gaming with unlocked FPS and good memory, and you should see it hit its advertised frequency.
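    The all-core vs. single-core trade-off can be sketched with a toy power-budget model. All numbers here are made up for illustration; real boost algorithms also weigh temperature, current limits, and per-workload heuristics:

    ```python
    # Toy model (illustrative only, not real Intel firmware logic):
    # per-core power grows roughly with the cube of frequency
    # (P ~ C * V^2 * f, and voltage scales with frequency),
    # while the package must stay under a fixed power limit (PL1/TDP).

    def sustained_freq_ghz(active_cores, package_limit_w,
                           base_freq_ghz=3.0, base_core_power_w=6.0,
                           max_boost_ghz=5.5):
        """Highest frequency all active cores can share under the power limit."""
        budget_per_core = package_limit_w / active_cores
        # invert P = base_power * (f / base_freq)^3 to solve for f
        f = base_freq_ghz * (budget_per_core / base_core_power_w) ** (1 / 3)
        # single cores are also capped by voltage/fmax, not just power
        return min(f, max_boost_ghz)

    # One core can take the whole budget, so it hits the boost cap;
    # sixteen cores must share it, so the all-core clock lands much lower.
    print(sustained_freq_ghz(1, 125))
    print(sustained_freq_ghz(16, 125))
    ```

    One core under a 125 W limit is capped by the 5.5 GHz fmax, while sixteen cores sharing that same budget settle around the low-3 GHz range, which is roughly the pattern an all-core Cinebench run shows.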


  • Raja oversaw the launch of an entirely new product into an existing market segment. And they did it during the toughest time out there: a pandemic with a poor economic outlook. It wasn’t that the market didn’t take to the product; people are just skittish about opening their wallets today.

    Intel was essentially a year too late with their launch. If they had timed it earlier, it would have been a home run during the pandemic-fueled PC purchasing.

    Arc launched with ray tracing, XeSS upscaling, and good performance per dollar. Their driver team is still working hard and landing home runs with each release. They only lack a high-end GPU capable of 400 watts and frame-generation technology.

    Which, to be absolutely fair to anybody, they aren’t expected to get right on their first product launch. NVIDIA is on something like its twentieth-plus generation of this thing called a discrete graphics unit.


  • It doesn’t need to beat AMD. Especially not on desktop. Desktop is like Lamborghini vs. Ferrari vs. Porsche. They are all winners.

    The real fight is in mobile laptops. And that is where Meteor Lake has two edges over AMD.

    An AI accelerator and a big.LITTLE-style hybrid design. And one could argue that they have volume as well. The laptop manufacturers want volume and consistency.

    AMD has to compete for TSMC nodes against Apple, NVIDIA, Qualcomm, Intel, and even AMD’s own Sony and Xbox platforms. Intel arguably has an edge on this front.

    Apple does have to use its volume from TSMC for iPad, iPhone, watches and then MacBooks.

    Intel has the luxury of supply and maybe even oversupply if they have Foundry customers lining up.


  • Give them more time. Honestly, Intel Arc’s launch is amazing: affordable price, XeSS, driver support, and ray tracing on launch day. Just give these things to the kids for their Minecraft-with-RT and CS:GO gaming machines already!

    It kicks ass!

    Save the $1,199 4080 GPU for the college students and professionals out there with the scratch to afford this kind of stuff. They don’t play games anyway. If they are anything like me, they just like to turn it on after dinner and run a few benchmarks, just to be satisfied and say, yup, I’ve made it.

    Intel Arc is seriously amazing. Intel needs to do now what NVIDIA and AMD do all the time. Bundle it with the next Roblox or Minecraft launch.