
Commenters here seem dubious. I’ll take the contra-position. This feels to me like it’s going to be great; a big win for consumers and developers.

Current A12Z chips are highly performant; Apple is roughly one chip cycle ahead of any other manufacturer on performance per watt. I presume their consumer hardware will launch with an A13Z, or maybe an A14-type chip.

Apple has consistently shipped new chip designs on time; Intel’s thrashing has cost them at least two significant update cycles on the MacBook line in the last six years. Search this fine site for complaints about how new Mac laptops don’t have real performance benefits over old ones; those complaints are 100% down to being saddled with Intel.

Apple has a functional corporate culture that ships; adding complete control of the hardware stack is going to make for better products, full stop.

Apple has to pay Intel and AMD profit margins for their Mac systems. They are going to be able to put this margin back into some combination of profit and tech budget as they choose. In the early days they are likely to plow all of it back into performance, a win for consumers.

So, I’m predicting an MBP 13-16 range with an extra three-plus hours of battery life, and 20-30% faster. Alternately, a MacBook Air type with 16 hours plus strong 4K performance. You’re not going to want an Intel Mac even as of January 2021, unless you have a very unusual set of requirements.

I think they may also start making a real push on the ML side in the next year, which will be very interesting; it’s exciting to imagine what Apple’s fully vertically integrated company could do controlling hardware, OS and ML stack.

One interesting question I think is outstanding: from parsing the video carefully, it seems to me that devs are going to want ARM Linux virtualized, vs. AMD64. I’m not highly conversant with ARM Linux, but in my mind I imagine it’s still largely a second-class citizen. I wonder if systems developers will get on board, deal with slower / higher-battery-draw Intel virtualization, or move on from Apple.

Languages like Go, with supremely simple cross-architecture support, might get a boost here. Rust seems behind on ARM, for instance; I bet that will change in the next year or two. I don’t imagine that developing Intel server binaries on an ARM laptop with Rust will be pleasant.
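For a concrete sense of what "supremely simple" means here, this is a sketch of cross-compiling a Go package from an ARM Mac for either server architecture (the output binary names are made up; this assumes the standard Go toolchain is installed):

```shell
# Cross-compile the package in the current directory.
# GOOS/GOARCH select the target platform; the Go toolchain ships
# support for every target, so no separate cross-toolchain is needed.
GOOS=linux GOARCH=amd64 go build -o server-amd64 .   # Intel Linux server
GOOS=linux GOARCH=arm64 go build -o server-arm64 .   # ARM Linux server
```

The same two environment variables cover every supported OS/architecture pair, which is why the developer-laptop architecture matters so little for Go.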



> So, I’m predicting an MBP 13-16 range with an extra three-plus hours of battery life, and 20-30% faster.

I'm predicting the opposite: you won't actually see any difference.

Once you look closely at power profiles on modern machines you'll see that most energy is going into display and GPU. CPUs mostly run idle. Even if you had a theoretical CPU using zero energy, most people are not going to get 30% battery life gains [1]. Not one thing that they demoed requires any meaningful CPU power.

Similarly, while ARM parts are more efficient than x86 per compute cycle, it's not a dramatic change.

The big changes, I think, are more mundane:

- Apple is going to save $200-$800 cost per Mac shipped

- Apple can start leaning on their specialized ML cores and accelerators. They will probably put that stuff in T2 for Intel Macs. If they're already shipping T2 on every machine, with a bunch of CPU cores, why not just make those CPU cores big enough for the main workload?

Doubling CPU perf is meaningless if you can ship the right accelerators that'll do 100x energy/perf for video compression, crypto and graphics.

[1] for a regular web browsing type user; obviously if you're compiling stuff this may not apply; if that is true you're almost certainly better off just getting a Linux desktop for the heavy lifting


    Apple can start leaning on their specialized ML 
    cores and accelerators
Thank you for mentioning this. I feel like many have missed it.

I think Apple sees this sort of thing as the future, and their true competitive advantage.

Most are focusing on Apple's potential edge over Intel when it comes to general compute performance/watt. Eventually Apple's likely to hit a wall there too though, like Intel.

Where Apple can really pull away is by leaning into custom compute units for specialized tasks. With its full vertical integration, Apple will stand alone in the world here. Rather than hoping Intel's chips are good at the things it wants to do, it can specialize the silicon hardcore for the tasks it wants macOS to do in the future. It will potentially be a throwback to the Amiga days: a system with performance years ahead of competitors because of tight integration with custom hardware.

The questions are:

1. Will anybody notice? The initial ARM Macs may be underwhelming. I'm not sure the initial Mac ARM silicon will necessarily have a lot of special custom Mac-oriented compute goodies. And even if it does, I don't know that Mac software will be taking full advantage of it from day 1. It will take a few product cycles (i.e., years) for this to really bear fruit.

2. Will developers bother to exploit these capabilities as Apple surfaces them? Aside from some flagship content-creation apps, native Mac apps are not exactly flourishing.


This could mean the following things.

1. If done correctly, non-Apple laptops may become significantly less attractive. Just like Android phones.

2. Intel may be in for a tough time, especially with AMD winning big on the console and laptop fronts recently.

3. AMD and Intel may have to compete for survival and to save the non-Apple ecosystem in general. If AMD/Intel can consistently and significantly beat Apple here, it may mean that the non-Apple ecosystem survives and even thrives. It may even mean that Apple looks at Intel/AMD as an option for Pro MacBooks in the future. However, this does seem a little less likely.

4. This could also herald the entry of Qualcomm and the like into laptop territory.

Looks like a very interesting and possibly industry changing move. This could potentially severely affect Intel/AMD and Microsoft. And all these players will have to play this new scenario very carefully.


> 1. If done correctly, non-Apple laptops may become significantly less attractive. Just like Android phones.

What are you talking about? Android has about 72% of worldwide market share, so clearly Android phones are not significantly less attractive.

And I am not an Android fanboy, my first two phones were iPhone 1 and iPhone 3GS, and I still consider them very good phones.


> Android has about 72% of worldwide market share, so clearly Android phones are not significantly less attractive.

More precisely, Android has the BOTTOM 72% of the market, mostly cheap smartphones with thin profit margins. Almost all actual profits go to Apple.


This is the case; it really shouldn't be downvoted.

> Apple dominates the global handset market by capturing 66% of industry profits and 32% of the overall handset revenue.

Samsung and Huawei are second and third with about 20% and 10%, respectively. The three companies combine for about 95% of the profits.

https://www.counterpointresearch.com/apple-continues-lead-gl...

In the same quarter, Apple had 12% of the global market sales, against 21% by Samsung and 18% by Huawei. They combine for 51% of the sales.

https://www.counterpointresearch.com/global-smartphone-share...

So, that quarter, the companies representing the other half of worldwide cellphone sales combined for 5% of the profit.

Apple sold 12% of the phones and captured 32% of the revenue but 66% of the profit.
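A quick back-of-the-envelope check of the figures quoted above (the numbers are the approximate, rounded percentages from the two Counterpoint reports):

```python
# Approximate shares for the quarter, as percent of worldwide totals.
sales_share  = {"Apple": 12, "Samsung": 21, "Huawei": 18}
profit_share = {"Apple": 66, "Samsung": 20, "Huawei": 10}

top3_sales  = sum(sales_share.values())    # 51% of handset sales
top3_profit = sum(profit_share.values())   # 96% (the report rounds to ~95%)

# Everyone else: the other half of sales, a sliver of the profit.
print(f"Top 3: {top3_sales}% of sales, ~{top3_profit}% of profit")
print(f"Rest:  {100 - top3_sales}% of sales, ~{100 - top3_profit}% of profit")
```

The exact split depends on rounding in the source reports, but the shape of the result is robust: roughly half of unit sales produce almost none of the profit.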

Apple is clearly able to sell its phones at a unique premium; I am not sure of a better way to measure "attractive".


But isn't it just a matter of time till the novelty of smartphones wears off, they stop being très chic, and the cheap ones become 'good enough'? It might have taken decades, but eventually GM bought Cadillac, Fiat bought Ferrari, VW bought Porsche (and Bugatti and a few more).


Big difference is Ford, VW, et al had local dealer networks that not only fixed the cars, but turned the lessons and data learned in the fixing back into engineering improvements upstream. The net result of this is over a span of years Ford and VW buyers would see the product get better each time they bought a new one.

Android will always be a low budget product as a market, because it's run by Google. Google doesn't care about its customers at all, but for the data they generate and its impact on ad sales.

Every time a user opens the Google app store, they can expect it to be worse than the time they opened it previously. Every time an Android user buys a new device, it's a crap shoot what sort of hardware issues it will have, even if it's Google or Samsung branded.


Market share and attractiveness aren’t necessarily related. A Kia isn’t as attractive to its target customer as a Mercedes but outsells it because of price.


> Market share and attractiveness aren’t necessarily related.

Well, they are. You're confusing niche market segments with overall preferences. Veblen goods don't have traction in general markets.


Speak for yourself. If I'm going to buy a boring car, I certainly would take a Kia over a Mercedes, even if both were free.


Mercedes are completely outdated. Nice leather, that's it.


Mercedes would like to have a word with you.[0]

[0] - Mercedes to debut Formula 1 MGU-H technology in AMG road cars - https://www.formula1.com/en/latest/article.mercedes-to-debut...


Much more interesting would be the CVS gearbox which is THE Mercedes advantage, the TCU, the shifter or the ECU. 100x better but also 100x more expensive. Will not happen. Worked in F1.


A hack on the turbo of an ICE car sounds pretty outdated to me.


What would they have to do to get you really excited? Fusion drive?


Warp drive.

ICE is dead. It's a welcome piece of fun at weekends, at the track, and when we drive our classic cars. But, I'm afraid, that's it.


I do hope AMD does well here, as Apple's chips with all their custom silicon, T3 etc., mean the death of booting anything but Apple-signed OS images on that hardware; forget Linux.

And that's not the future I am willing to buy into.


Thank you for expressing this. As much as I like Apple and the wonderful approach they have to design, something felt amiss. This is what I wanted to express.


See also: Chromebooks and Surfaces.


The presentations today specifically mentioned booting other operating systems.


No it didn't. It mentioned running Linux in a VM, which isn't the same thing.


Do you complain if you’re running your OS on a CPU microarchitecture VM abstraction?


I'm somewhat confused by this rhetorical question, since the microcode of a processor is vastly different from the userspace and kernel of macOS. Running an OS bare metal versus in a VM on top of macOS differs across a wide array of things. At a minimum, performance is lower and less predictable in the VM; you now have two different OSes' updates to worry about breaking on top of their mutual interface (ask anyone who's done serious Linux dev work on a Mac); you have two different sets of security policies to worry about; the low-level tools to debug performance in a VM don't have the level of access they do on bare metal; and if you're working with hardware for Linux servers and devices in a VM, you are going to have to go bare metal sooner or later.

The abstractions are leaky, the VM is not a pristine environment floating on top of some vaguely well-defined architecture. The software in one has two extra layers (VM software & OS) between it and the actual platform and all this is before you start hitting weird corner cases with cpu architecture differences in the layers.


Hmm, really? Since Windows 10 your desktop runs in a guest domain, while the kernel running the drivers is isolated.

Apple has provided this kit for about five years: https://developer.apple.com/documentation/hypervisor Yeah, you've got to use the hardware drivers from Apple unless it also supports PCI passthrough; not sure, but with the current user base I guess nobody would do that anyway.

I expect Apple to eventually run their ring -1 off the T chip, with everything else from a VM abstraction. It’s just the natural evolution of the UEFI approach, and, Apple being themselves, they’re doing it “their way” without waiting for the crossfire-infested industry committees to play along.


Nope, there was no mention of booting another OS. Craig talked about native virtualization that can be used to run Docker containers and other OS runtimes.


Yes there was in the State of the Union. Also mentioned was booting another OS from an external drive.


Watch the other keynote from WWDC, they do mention it lol


> 1. If done correctly, non-Apple laptops may become significantly less attractive. Just like Android phones.

What makes Android phones less attractive, in your opinion?


The low end iPhone SE compares favorably with the highest end Samsung Galaxy.

https://www.androidauthority.com/iphone-se-vs-most-powerful-...


Do many people care about phone CPU performance? Sure, it needs to be good enough, but after that it's really far down on the list of things that matter.

What matters to everyone I know is screen size, camera quality and that a really small selection of apps (messaging, maps, email, browser, bank app) work well. Raw CPU performance is only a very abstract concept.


Raw CPU performance, perhaps not. But people definitely do care about a specific set of user-facing, ML-driven functionality - think speech recognition, speech synthesis, realtime video filtering, and so on.

Many of these are only barely possible on "pre-neural" mobile ARM CPUs, and at a significant cost to power consumption. Developing for newer devices is like night and day.


Not sure that's true to be honest. Speech recognition on my old Pixel 2 is miles ahead of anything I've seen on any iPhone, which are 2-4x faster.


Google's speech recognition is damn impressive, but I'm talking performance/power consumption, not "quality". Sticking a 2080 into an iPhone won't give you better speech recognition results, but it will give you bad results faster.


Because speech recognition quality is a product of data harvesting, the one thing Google does well?


> > Many of these are only barely possible on "pre-neural" mobile ARM CPUs

> Speech recognition on my old Pixel 2

I don't think the Pixel 2 can be called "pre-neural". "[...] The PVC is a fully programmable image, vision and AI multi-core domain-specific architecture (DSA) for mobile devices and in future for IoT.[2] It first appeared in the Google Pixel 2 and 2 XL [...]" https://en.wikipedia.org/wiki/Pixel_Visual_Core


When speech recognition starts understanding European Portuguese without me playing stupid accent exercises, and mixed language sentences as well, then I will care about it.


I suspect access to a vast trove of user data is more important for ML than raw CPU power on the client.


I'm just talking real-time performance and power consumption, not accuracy.


It's nice that the iPhone SE performs so well, but there's more to a phone than just the CPU.


> only one camera, just a 4.7-inch display, and less than Full HD screen resolution

CPU selection is likely coming from industrialization concerns (fewer production lines to maintain, lower price per unit at volume, etc.), but they're going to beat that drum loud and proud for all it's worth; meanwhile the phone is cheap in areas that in 2020 _do_ matter.


I know a couple people trying to port an ML app to iOS. It sounds like the interfaces are a bit of a nightmare, and the support for realllly basic stuff in the audio domain is lacking.

I don't know the dev ecosystem for apple broadly, but this doesn't bode well for people "bothering to exploit" the hardware.


#1, who can say. #2 might be sidestepped by the compatibility with iOS apps they will gain (making it so all those iPhone/iPad developers can ship their apps to Macs, too).


I fully expect any reduction in costs for Apple will get sent to their shareholders, not the consumers.


> I fully expect any reduction in costs for Apple will get sent to their shareholders, not the consumers.

Apple's margins are consistent; if their costs go down significantly, pricing comes down or features increase. The iPad is a perfect example: for years it was $500, and they just kept increasing the feature set until eventually they could deliver the base product for significantly less.

Shareholders benefit from increased market share just as much as they do from increasing margins, arguably more. The base iPad and the iPhone SE both "cannibalize" their higher end products, but significantly expand their base. I wouldn't be surprised at all to see a $800 MacBook enter their lineup shipping with the same CPU as the iPad.


Considering they're selling a device with a 10.5" touchscreen and an A12 SoC for $500 today, I think they can go even lower than $800 for a device with only a slightly larger LCD and no digitizer.

While they won't be competing with Chromebooks for general education use cases, I could very well see Apple trying to upsell schools on a $599 alternative that happens to run GarageBand, iMovie, and even Xcode.


Eh I don't see Apple selling their cheapest education MacBook for $600 instead of $900 simply because one of many components suddenly got significantly cheaper.


I can see them doing that for big volume buys for education. I don't see why they wouldn't just pass on the entire Intel margin to them, getting students using Apple products young has value.


Chromebooks are doing well in education at the moment. If Apple launched a product in that space, they could easily claw half of that back overnight. The ability to run real software is huge, especially for subjects like graphic arts and engineering.


> Considering they're selling a device with a 10.5" touchscreen and an A12 SoC for $500 today, I think they can go even lower than $800 for a device with only a slightly larger LCD and no digitizer.

While there is no digitizer, there is a keyboard and a touchpad. Also, I expect Apple is going to try to keep a gap between the base Mac and the iPad price-wise so they would add to the base storage and maybe RAM.

Then again, considering the pricing on the base iPad, maybe they will bring it down to $600.


They actually demoed the performance of an ARM Mac with a low amount of RAM.


It didn't have stock iPad RAM, it had 16GB of RAM. More details on it here: https://appleinsider.com/articles/20/06/22/apple-developer-t...


Maybe if they take a bet on (or force) the App Store to be the primary method of obtaining software. I’d expect Apple forecasts some amount of yearly revenue per iPhone/iPad and a lower amount per MacBook.


Why do we need to buy so many devices anyway? Why can't I just plug my iPad or iPhone into a dumb dock with a laptop screen and its own storage and battery, and use the CPU and GPU from the phone/ipad?


I don't need VSCode, Docker, or node.js on my phone. I don't want all the clones of the various repositories I'm working on on my phone. Even the best phones lack the RAM, high capacity drives, and video card my computer has. Nor does it have a keyboard or trackpad.

If your phone is good enough to take care of your day to day computing, you can probably get by with an inexpensive all-in-one computer and save the headache of docking.


You'd be surprised how many people would like exactly this, interestingly. There are certainly enough to quite literally pay real money for a somewhat lousy facsimile of the real thing; I know from experience.


The dock can have the extra storage.

(And the GPU, and maybe even more RAM etc).


> The dock can have the extra storage.

Then what is the point in docking at all? Now you have to keep track of what's on the dock and what's on the phone. Plus, by the time you integrate all this into a dock, you basically have something that costs as much as an inexpensive PC, so why bother?


You'll need something to connect all those dock components together so you don't have to run several cables to the phone. Something like a motherboard. So you'll have a full computer sans a cpu.


More like a Thunderbolt dock with a screen.


The Surface Book is exactly this: A (x64 Windows) tablet with a laptop dock that contains a stronger GPU and battery.

One problem is that people expect the CPU power of a laptop, which requires much more power and cooling than the typical tablet. As a consequence in tablet mode a Surface Book has about two hours of battery life.


So far: different architectures. But with this announcement it would make running macOS on a future (or even current) iPad quite feasible, so your kind of dock might become true soon. Apple's new magic iPad keyboards use a lot of weight to balance the heavy screen - might as well make that a battery.


I asked that myself and my answer is: software.

When looking for IDEs or tooling on iOS, I still have not found anything remotely professionally usable... (I mean Visual Studio + ReSharper-like, not VS Code...) but perhaps somebody could enlighten me...


Because a general purpose device is not good business sense for a company that sells devices. The more they can artificially specialize each thing, the more things you need to buy, and the more money they make. This is a much larger phenomenon than just Apple, or even computers.


An iPhone is a general purpose device compared to an iPod. But maybe Apple has lost the willingness to cannibalise its own sales for the sake of creating stunning new product categories.


You can plug in a USB dock into a lot of Android phones, and if you get a DisplayLink dock, you can add 2-3 monitors. Keyboard, mouse, sound, Ethernet all work with it too.


Everyone else's answers are excellent, but I would note that with this move, Apple is certainly getting themselves closer to that potential future.


You can already do this with a USB dock.


Have you looked at the price of non-apple laptops over the years?


“Sure you can get a hamburger for $1 ... but then you’d have to eat it.” - favorite ad


Unfortunately for high-priced premium products, the increasing quality of basic products forces them to be better or fail.

Related to your example: $1 burgers are increasingly better than you would expect. The difference between McDonald's midrange line and, say, a burger at a restaurant for $18 is negligible in flavor. I can no longer justify going to a restaurant and paying $18 plus tip for a burger.


>The difference between McDonald's midrange line and, say, a burger at a restaurant for $18 is negligible in flavor.

Oh come on. I get that you're trying to make a point but this is ridiculous.


Both provide expected caloric and nutritional value, let's say that. Even at a flavor baseline.


Spoken like a true bean counter.


If that’s my goal, I’ll have a Soylent drink or GreenBelly bar.

https://Soylent.com

https://GreenBelly.co

I’ll eat an $18 hamburger because it tastes really good; yes, about 18x better than a $1 burger.


Fun fact: I never compared the quality of a $1 and an $18 burger.


Sure, to you. There's a whole lot of not you out there for whom the distinction is worth the price differential. That's true in both hamburgers and hardware. Needs, goals, and use cases differ significantly among people.

I'd argue that the functional difference between a Honda Fit and a Tesla is less than the difference between the best McDonald's hamburger and an $18 hamburger. That's why I drive a Honda Fit. In the face of Tesla's increasing sales it would be pretty strange to assert that my taste was somehow universal.


Honda Fit is awesome.

But an $18 burger is not drastically better than McDonald's $7 burger.

Try doing an actual blind test, with a control... because the simple fact of perception will make you think one is better.


I would argue that many, perhaps most, people who won't eat a McDonald's hamburger because of some perceived lack of quality probably haven't had one in many years, and are instead working off public perceptions and status indicators about what they think it represents and must be like.

And then we've come full circle to Apple products.


I'm a classically trained chef who tends to specialize in bar food. I know more about the marketing, creation, and perception of food than you do; you're wrong.

McDonald's has very high quality preparation standards. Their ingredients and techniques were constructed to facilitate their high-speed, high-consistency process, but they prevent them from incorporating things that the overwhelming majority of burger consumers prefer.

For example, the extremely fine grind on the meat, the thin patty, the sweet bread, the singular cheese selection, the inability to get the patty cooked to specification, the lack of hard sear or crust and the maillardization that accompanies it, etc. etc. etc. At a minimum, people prefer juicier burgers with coarser, more loosely-packed texture, usually cooked to lower temperatures (though this depends on what part of the country you're in,) and the flavor and texture differential from a hard sear, be it on a flat top or grill, and toasted bread.

For consumers who, at least at that moment, have a use case that requires their food be cheap, fast, and available, well we know who the clear winner is.

In my new career as a software developer and designer, I use Apple products. I am willing to pay for the reliable UNIXy system that can also natively run industry-standard graphics tools without futzing around with VMs and things, and do all that on great hardware. There will always be people who aren't going to compare bits pushed to dollars spent and are going to be willing to spend the extra few hundred bucks on a device they spend many hours a day interacting with.

This isn't about perception at all— Apple products meet my goals in a way that other products don't. If your goals involve saving a few hundred bucks on a laptop, then don't buy one. I really don't understand why people get so mad at Apple for selling the products that they sell.


> I know more about the marketing, creation, and perception of food than you do— you're wrong.

I don't doubt you know more about food. If you applied that knowledge to my actual point instead of what it appears you assumed my point was, this assertion might have been correct.

That's not entirely your fault; I was making a slightly different point than the existing conversation was arguing, so it's easy to bring the context of that into what I was trying to say and assume they were more related than they were.

The belittling way in which you responded though, that's all on you.

> This isn't about perception at all— Apple products meet my goals in a way that other products don't. If your goals involve saving a few hundred bucks on a laptop, then don't buy one. I really don't understand why people get so mad at Apple for selling the products that they sell.

My point, applied to this, would be to question what other products you've tried? My assertion is that people perceive other products to be maybe 50%-70% as good, when in reality they are probably closer to 85%-95% as good (if not better, in rare instances). That is a gap between perception and reality.

As applied to burgers, I was saying that people that refuse to eat at McDonald's because of quality probably have a very skewed perception of the actual differences in quality in a restaurant burger compared to a McDonald's burger.

I'm fully prepared to be wrong. I'm wrong all the time. I also don't see how anything you said really applies to my point, so I don't think you've really proven I'm wrong yet.


So you're creating metaphors that don't make sense using things that you have a limited understanding of to describe something you think you might be wrong about and getting annoyed that everybody else isn't following along with your deep conversational chess. Right then. I'm going to go ahead and opt out of this conversation.


Feel free. I simply made an observation that was loosely connected to the existing conversation and noted how it seemed to parallel something else.

I wasn't annoyed by you misunderstanding, I was annoyed by you misunderstanding, assuming you understood my position completely because it would more conveniently fit with your existing knowledge, and then using that assumed position to proclaim your superiority and my foolishness.

It's not about deep conversational chess on my part, it's about common decency and not assuming uncharitable positions of others by default on your part. A problem, I'll note, that you repeated in the last comment.


Oh please!

Just the mere perception of quality will increase your satisfaction levels. The perception of lack of quality will reduce your satisfaction levels.

Thus I still maintain that your "perfect" $18 burger is only marginally better than McDonald's midrange burger. The fact that you actually have to spend time on making that burger more appetising is proof that low-cost foods are getting better and better.

While focusing on my analogy, you literally prove my overall point.

30 years ago you weren't necessary, as low cost food wasn't nearly as good as today. Now - you have to exist to justify that premium.


> working off public perceptions and status indicators about what they think it represents and must be like.

I eat at McDonald's all the time, and I also get pricey burgers ($13-18) from a local place that makes the best I've ever had.

You can't be serious. If you are, I've gotta say if anyone has a perception issue about their respective quality it's you.


I think you're reading more into my comment than what I actually said, possibly because of someone else's prior comment in this thread.

I was making a point less about McDonald's being equivalent to a restaurant burger and more about people's perceptions of McDonald's and how bad it is. That is, there's probably a lot less difference in the taste of those burgers than a lot of people want to admit.

The other aspect to consider is consistency. I had a $14 burger at a restaurant on Saturday that I would have happily swapped for any single burger I've ordered from McDonald's in the last 12 months. You may not consider McDonald's high quality, but you have a pretty good idea what you're going to get.

All I'm really doing is making a point that there's a bit of fetishism about luxury items going on these days. Are Apple devices generally higher quality than many competitors? Yes. Is the difference in quality in line with most people's perception of the difference in quality? I don't think so.


I haven't had a McDonald's hamburger for many years. You are partly correct that it is because of my perception that it is trash. But when I walk by a McDonald's it doesn't smell like food to me anymore and smells more akin to garbage on a warm day.


I eat at McDonald's regularly. It is not a high-quality burger. At all.


There's like 10 different types of burgers at McDonald's, excluding specials


> The difference between McDonald's midrange line and, say, a burger at a restaurant for $18 is negligible in flavor.

This may be the single worst analogy I've ever seen.

There is no amount of money you can pay at McDonald's to get a good-quality burger.

I don't spend $18 for burgers, since there are a million places where you can pay $5-8 and get a damned good piece of beef. But not at McDonald's.


You haven't been to McDonald's in a while it seems.

$5-8? At a food truck? The ones that make burgers of an extremely varied quality?


If the employees are doing it right, it’s not “that bad” of a burger. So, just pay the employees enough to actually care about the burger and it comes out decent.

I’ve eaten at McDonald’s around the world, it really depends but they do have good burgers when they’re cooked right.


It's not the employees. In different countries the entire recipe and production system is different. In many non-US countries, McDonald's is a more upscale "foreign" restaurant and far more expensive than in the US.


It's not an employee thing, the source material is junk. The best employee in the world can't turn mediocre frozen patties into a good burger.


If you look at their increasing focus on services then it makes sense to pass on cost savings.


They have to pay back the investments in R+D first though, no?


90% of these Mac silicon investments would directly benefit their iPhone cash cow—perhaps not this cycle, but certainly in the chips they'll put in future iPhones.

And the remaining 10% would indirectly benefit their iPhone cash cow in the form of keeping people inside the ecosystem.


You have this backwards.

The Mac silicon is inheriting the investments Apple made in the iPhone CPUs. This will continue. The bits which Apple invests to make their existing hardware scale to desktops and high-end laptops won't benefit the iPhone much at all. On future generation chips, Apple will spread the development costs over a few more units, but since iPhone + iPad ship several times more units than the Mac, the bulk of the costs will be borne by them.


This is the big gotcha. A lot of people see the incremental cost of the CPU as the cost, but the actual cost is:

`(development cost + (units sold * incremental cost) ) / units sold`

But a lot of Macs have higher-end Intel CPUs, so the per-unit cost of Intel CPUs is pretty damned high.
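The amortization formula above can be sketched numerically. All figures below are hypothetical, purely to illustrate how volume drives the true per-unit cost:

```python
def per_unit_cost(development_cost, incremental_cost, units_sold):
    """True per-unit cost: amortized development plus incremental part cost."""
    return (development_cost + units_sold * incremental_cost) / units_sold

# Hypothetical figures: $500M development cost, $50 incremental cost per chip.
# At 20M units/year, amortized development adds $25 on top of each $50 chip;
# at 200M units (iPhone-scale volume) it adds only $2.50.
low_volume = per_unit_cost(500e6, 50, 20e6)    # 75.0
high_volume = per_unit_cost(500e6, 50, 200e6)  # 52.5
```

This is why sharing one chip family across iPhone, iPad, and Mac changes the economics: the development cost gets divided by a much bigger denominator.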


[flagged]


Apple's financials are public knowledge. You might consider reading them before making trite comments based on blind emotions.


Indeed, Apple's G4 Cube debuted at $1,800 base in 2000. That's the same ballpark as their iMac now, and their Mac mini starts at about half that. Meanwhile inflation would have made that G4 ~$2,700 today.


> Meanwhile inflation would have made that G4 ~$2,700 today.

Which of the inputs going into making that device would have applied any inflationary pressure?

Most if not all of the parts would probably have gotten cheaper over that time, and wage pressure is low because of where these devices are made.


Silver's gone up. Gold's gone up. Probably various fixed costs had to be further invested, in the form of contracts with fabs or new fabs built. Etc. But really the Mac mini is more of a modern likeness to the G4 Cube, and it now starts at $800, less than half the G4 Cube's starting price.

Edit: they also went from being a company with around 8,500 employees in 2000 to 137,000 today. Surely every part of their organization chart has contributed pressure to push up their prices to maintain revenue.


Since silver and gold are priced in USD, their price is influenced by the actions of the US government. One of those driving forces is current US monetary policy (i.e., a growing budget deficit).

Another factor is perceived risk. Since the markets are always worrying about the current US China trade talks, that uncertainty helps gold and silver as they are seen as safe havens.


Engineering costs, both in rate and quantity.


> pricing comes down or features increase.

That's unsupportable


This is exactly what Apple has done with all of their products over the past 30+ years. The iPad is a perfect example of Apple doing both over the past 10 years. Likewise the iPhone SE and the Apple Watch. They've done it with every product in their portfolio.


I'm actually not so sure about this. Apple's gross margin target is in the ballpark of 38-40% and a savings of $200-800 per MBP would have a substantial upwards impact on that gross margin number. Apple carefully sets pricing to achieve a target gross margin without impacting sales too much (higher price = higher gross margin but likely lower net revenue because they're priced out of the market).

One of two scenarios (or perhaps a mixture of both) is likely, and I lean towards #1:

1. Apple decreases the price of the Mac to stay aligned with gross margin targets. This likely has a significant upwards impact on revenue, because a drop in price like this opens new markets who can now afford a Mac, increasing market share, driving new customers to their services, and adding a layer of lock-in for iPhone-but-not-Mac customers.

2. Apple uses the additional budget per device for more expensive parts/technology. They are struggling to incorporate technologies like OLED/mini-LED because of the significant cost of these displays and this would help open up those opportunities.


The high price of MacBooks is treated as a status symbol and the marketing department clearly knows as much, so I don't think they will be willing to give that up, so I lean towards your second option.


How do you explain the iPhone SE then?

Why not go the same route with a MacBook SE? I even think this will be the first product out of the pipeline.

Mac Pro buyers usually don't want to be beta testers and will probably be the last to transition, once the horsepower is clearly there with measurable gains.


The iPhone SE is less a "cheap iPhone" and more "an expensive basic smartphone". Far more people upgrade to it from a low-cost Android device than downgrade from a different iPhone.


And that is exactly why they can expect market-share growth. Who is going to buy a $600-range plastic laptop PC if Apple ships a $700-range MacBook SE?

Of course you'd still get $350-range crappy products; Apple can't and doesn't want to compete at those levels.


Plastic PC laptops cost $300 now, not $600.


Please do read my second sentence before reacting to the first one...


The iPhone SE also picks-up the market segment that prefers smaller phones.


the SE is a parts bin phone, not an entirely new design.


Bingo. Estimates vary wildly, but I've seen figures saying that Axx CPUs cost Apple about $50 each. Even if it's more like $100, that's still an insane amount of additional profit per unit to be extracted. They don't need to deal with single-supplier hassles and they get much more control over what cores go into their SoC.

This is sort-of-OK for consumers but amazing for Apple and its shareholders.


I suspect the big motivation for Apple is less about squeezing a few dollars more profit per system and more about shipping systems which just aren't possible on Intel's roadmap. Just putting the A12Z into the previous 12" MacBook would be a massively better computer with better battery life, better performance, and significantly less expensive. All while Apple maintains their margins.

This isn't a zero-sum game. Being able to ship less expensive computers which perform better is a win for consumers and Apple shareholders at the same time.

The only loser here is Intel.


> Just putting the A12Z into the previous 12" MacBook would be a massively better computer with better battery life, better performance, and significantly less expensive. All while Apple maintains their margins.

Don't we basically know that from the Surface X?


> Don't we basically know that from the Surface X?

Don't have to venture that far, we know it from the iPad Pro.


Microsoft doesn't control the hardware very much and definitely doesn't control the software developers whereas Apple completely controls the former and has a lot of leverage with the latter.


You can see that with the latest MacBook Pro 13", if you buy one of the cheaper devices it comes with last years processors. Intel are clearly having problems meeting customer demand.


But customers are going towards an entirely closed everything. iOS is Apple languages, Apple signature required to run code, Apple processors. Desktop machines are the last bit of freedom in the Apple ecosystem.

This isn't "sort-of-ok", it's "bad-for-customers" and "bad-for-developers".


Why are you implying that they're going to lock down the Mac and make it some kind of iPad Pro? You'll still have complete control to run anything you want on the system. Running unsigned binaries is as simple as a right click on the app to open it on Mac. Or launch it from the command line with no prompt at all.


If you've ever tried to reinstall the OS on a machine with a T2 chip, I think you'd have a better idea how this is going.


Nit: you're not required to use "Apple languages" to develop for iOS.


It looks like from the freedom end of things, the only thing that changes with ARM Macs is they're requiring notarization for kexts, and the fact that other OSes won't boot on the hardware since they don't have support for it. Unless anything changed, the T2 chip already killed linux support before?


This is just my opinion but I think it's great for consumers and a good restriction for developers.

As a consumer you shouldn't be running unsigned software because you're putting not only your data at risk but any data you have access to.

And as a developer on Mac you can still run anything reasonably well in a VM. If you're using node, you should be running that in a virtualized environment in the first place, though I'm too lazy myself to always set all that up.

Actually it's pretty amazing that now we'll be able to run an entire x86 OS environment on an ARM chip and get very usable performance too.


> If you're using node, you should be running that in a virtualized environment in the first place

Just curious: why should node be run in a virtualised environment for development? Is it a security concern? Does that apply to languages like Python too? Would you be happy running it in a Docker container from macOS?

Thanks!


Headsuphigh, Debian at least has been signing packages since 2005


How do we know it's "very useable" performance wise?

I'd say that we've moved away from virtualisation completely, we now use containers, so developers will expect native performance, as we get on other platforms.


docker on macOS runs in a VM.

If they're already on macOS, that's a thing.


As someone on linux I've never run signed software ever in my life. Guess I've lost all my data and haven't even noticed!


Are you sure? Packages are signed for most of the mainstream distributions.


Linux signs its software.


I mean signed by Apple or any big corporation.


You could also argue that significant cost cuts to already-profitable Mac computers could lead to significantly higher sales volumes.

Greater marketshare also provides more value to shareholders meaning that shareholders still win, as do consumers.

More people with macs (and probably iPads/iPhones) would also increase other profit centers for Apple such as services (their highest profit center), warranties, and accessories. The profits and loyalty from these could easily far outweigh the $100-$300 of extra margin they might gain from keeping Mac prices the same.

Meaning that price cuts to macs might actually be more strategically beneficial (to EVERYONE) than hoarding higher margins.


Apple's services already make more money than their Mac business. There aren't enough macs sold for this to have a big impact


Even on the hardware side, they sell more than ten times as many iPhones as Macs.


Cost of a CPU is unit cost plus all other costs, including R&D, divided by units. We also don't arrive at a reasonable estimate of unit costs by taking a range of estimates and picking the estimate most favorable to our position.

I also don't believe it's reasonable to assume that switching to arm is as simple as putting an iPad cpu into a laptop shell.

Here is an estimate that their 2018 model costs $72 just to make, not to design and make.

https://www.techinsights.com/blog/apple-iphone-xs-max-teardo...

The A14 that will power a MacBook is likely going to be more expensive, not less. Especially with 15B transistors on the A14 vs less than 7B on the A12.

The average selling price of an Intel CPU looks like around $126. This includes a lot of low-end CPUs, which is exactly the kind of CPU Apple fans like to compare against.

Apple may realize greater control and better battery life with the switch, but they won't save a pile of money, and thoughts about increasing performance are fanciful speculation that Apple, the people with the expertise, are too smart to engage in.


Indeed. Apple is going to have to eat R&D costs that were previously bundled in Intel's pricing. And Mac sales are relatively small compared to the Windows market, so economies of scale are going to be less significant.

Which means the actual per-CPU fab cost is going to become a smaller part of the complete development and production cost of a run. And that total cost is the only one that matters.

I expect savings can still be made, because Apple will stop contributing to Intel's profits. On the other hand I'm sure Apple was already buying CPUs at a sizeable discount.

Either way it's an open question if Apple's margins are going to look much healthier.

IMO an important motivation is low power/TDP for AR/VR.

Ax will also eventually give Apple the option of a single unified development model, which will allow OS-specific optimisations and improvements inside the CPU/GPU.

Ax has the potential to become MacOS/iOS/A(R)OS on a chip in a way that Intel CPUs never could.


> amazing for Apple and its shareholders

This only makes sense if you know nothing about Apple's business.

You really think they're doing this to save $50 from ~5m Macs? You really think all this upheaval is for a mere $250m a year in savings? It'll cost them 10x that in pain alone to migrate to a new platform.

Come on now....$250m is nothing at Apple scale. Think bigger. Even if you hate Apple, think bigger about their nefariousness (if your view is that they have bad intentions - one I don't agree with).


I'm not sure how you calculated that, but they sell about 20m Macs per year, not 5m. I also doubt the chips cost them $50 per unit. The savings may be worth a few billion, so it's not really nothing. And they would save this every year. Will this change cost them 10x in pain alone? I doubt it. They already make the chips.


> I'm not sure how you calculated that but they sell about 20m macs per year not 5m

Quarterly numbers come in between 4.5-5m units these days but point taken - I recalled numbers for the wrong timeframe.

> I also doubt the chips cost them 50$ per unit. The savings may worth few billions so it's not really like nothing.

The true cost of this move is reflected in more than the R&D. This is a long multi-year effort involving several parties with competing interests. People are talking here as if they just flipped a switch to save costs.

Let me make this clear. In my view, this is an offensive/strategic move to drive differentiation, not a defensive move to save costs (though if this works, that could be a big benefit down the road). Apple has a long history of these kinds of moves (that don't just involve chips). This is the same response I have to people peddling conspiracy theories that Apple loves making money off of selling dongles as a core strategy (dongles aren't the point, wireless is; focusing on dongles is missing the forest for the trees).


You aren't making anything clear, just straw-man arguments. Apple switches architecture when it suits them. You think the switch from PowerPC to Intel was for differentiation? Nope, it was cost and performance, aka value.


> Apple switches architecture when it suits them

The question isn't whether it suits them. The question is: "Why did they choose to take on the level of risk in this portion of their business and what is the core benefit they expect?"

If the main reason was cost savings, this would be a horrible way to go about it.

There's a better answer: Intel can't deliver the parts they need at the performance and efficiency levels Apple needs to build the products the way they want to build them. This is not a secret. There is a ton of reporting and discussion around this spanning a decade about Intel's pitfalls, disappointments, and delays. Apple might also want much closer alignment between iOS and MacOS. Their chip team has demonstrated an ability to bring chipsets in-house, delivering performance orders of magnitude better than smartphone competition on almost every metric, and doing it consistently on Apple's timelines. It only seems natural to drive a similar advantage on the Mac side while having even tighter integration with their overall Apple ecosystem.


I think you are spot on. Any kind of cost savings here is going to be gravy and won’t come for a long time. This is going to let Apple reuse so much from phones in the future Mac line - all their R&D on hardware, the developer community, etc. It will be very interesting to see what the actual products are like, and whether the x86 emulation is any good.


Oh, so we are talking about value now? Please stick to an argument after you fail to defend it. You already used your dongle argument no one asked for.


> Oh, so we are talking about value now?

If you want to boil this conversation into one dimension, I'm not your guy - you'd be better suited by finding someone else to talk to. Cheers!


Then don't go on a tangent when the point the parent was making was about potential savings, and big oof when you get your numbers wrong, then try to straw-man points no one is arguing against. No one was arguing about the vertical integration bonuses Apple gets from their own SoC. You wanted to boil it into one dimension by dismissing the value Apple can provide with their own chip.


1. I stated quarterly numbers off the top of my head instead of yearly numbers. This mistake doesn't change my point at all at Apple scale - it's a negligible amount of savings relative to the risk. Companies of this scale don't make ecosystem level shifts without a reason far far better than "we can _maybe_ increase yearly profits by 1% (1/100 * 100) sometime in the future". It's just not relevant to bring that up as a primary motivation given what we're talking about.

2. I think you actually missed the point of the conversation. OP said "that's still an insane amount of additional profit per unit to be extracted" and followed that up with "amazing for Apple and its shareholders."

It is not insane at all. And not amazing. It just comes off as naive to anyone who's worked in these kinds of organizations and been involved in similar decisions.

I think it's hard for some people to comprehend that trying to save $1b a year for its own sake at the scale of an org like Apple can in many cases be a terrible decision.


You came with your straw man that it was for its own sake; they just stated it was a profitable move and "amazing for Apple and its shareholders", which is hard to refute. OP even said "They don't need to deal with single-supplier hassles and they get much more control over what cores go into their SoC." It seems you are now arguing with your own points.


> It seems you are now arguing with your own points.

Half the fun is writing down your own thoughts!

> You came with your strawman that it was for its own sake

That's possible. I saw the emphasis placed differently than you did even though we read the same words. Probably describes the nature of many internet arguments. Happy Monday - I appreciate you pushing me to explain myself. Seems like others were able to get value out of our back and forth.


The fact that they are saving $1 billion per year is what makes the transition possible, it's not actually the cause of the transition. They could have done the transition a long time ago if it was just about the money.


It saves them much more over the long term if it lets them get away from having two different processor architectures. It paves the way for more convergence between their OSes. Eventually a macbook will be just an ipad with a keyboard attached, and vice versa.

Yes, they're a big company. But they're also a mature company. A lot of their efforts are going to be boring cost-cutting measures, because that's how mature companies stay profitable.


They’ll pass the savings onto the consumers though right?? I mean, a fully specced MacBook costs over £3000 which seems expensive...

Here’s hoping anyway!!


Sure, they'll pass the $50 savings on.

You are overestimating how much a CPU costs...


It's more than just a CPU though. This will make the components of a Mac much more similar to an iPad, and probably save money on many other components.

It also removes any need for a dedicated GPU in their high-end laptops, which is probably $200 alone.

I have no idea how they justify the prices for their lower-end laptops as-is, as they have worse screens and performance than recent iPads in pretty much all cases.


Think of it this way:

1. This is risky for consumers. Whereas the PPC->x86 move was clearly a benefit to consumers given how PPC was lagging Intel at the time, x86 had proven performance and a massive install base. It was low risk to consumers. This? Less so. Sure iOS devices run on ARM but now you lose x86 compatibility. Consumers need to be "compensated" for this risk. This means lower prices and/or better performance, at least in the short-to-medium term; and

2. This move is a risk for Apple. They could lose market share doing this if consumers reject the transition. They wouldn't undertake it if the rewards didn't justify the risk. They will ultimately capture more profit from this I'm sure but because of (1) I think they may well subsidize this move in the short term with more performance per $.

But I fully agree with an earlier comment here: Apple has a proven track record with chip shipments and schedules here so more vertically integrated laptop hardware is going to be a win, ultimately.


> This is risky for consumers.

EXACTLY.

If you are a photographer, a developer, a graphics designer, a musician, a teacher, or whatever, and you are looking at buying a new Mac, what is going to get you to buy the new Apple Silicon powered Mac which is almost certain to impact your workflow in some way? If you are making purchase decisions for classrooms, what makes you buy 200 Macs with a new, unknown architecture?

The first generation of Macs on Apple silicon absolutely needs to have a significantly better price/performance point versus the current generation or they won't sell to anything more than the most loyal fans. If the new Macs come out and pricing is not good, I could seriously see a sort-of anti-Osborne effect where people gravitate towards Intel-based Macs (or away from Macs entirely) to avoid the risk of moving to a new architecture.

If anything, I expect margins on the first couple generations of Macs to go DOWN as margins on the first couple generations on all Apple products are lower (also public record).


> If you are making purchase decisions for classrooms, what makes you buy 200 Macs with a new, unknown architecture?

Yes, the "unknown" architecture powering the highest performing phones and tablets.

Apple has plenty of problems selling to schools for classroom use because other platforms have invested more in that use case. But ISA being the reason? No. Simply no.


Have you ever been behind the purchase choice for dozens of computers? Hundreds?

IT managers are conservative, if they make a bad call, they have to support crap equipment for the next 5+ years or so. Yes, I'm aware Apple's CPUs are in the iPhone and iPad, but it's a huge change for the Mac and it's a big risk for people making those purchase decisions.


As for this, I have, and I certainly would not buy for the first two-three (if not more) hardware revisions after such a major architecture change until I could evaluate how that hardware has been working out for the early adopter guinea pigs. I'd also need to see where everything stood concerning software, especially the educational software that has been getting written almost entirely for x86 systems or specifically targeting Chromebooks for the last 5+ years. Even then I am not sure the Technology Director is going to be anything but skeptical about running everything in VMs or Docker containers. Chromebooks are cheap, reasonably functional, easy to replace, and already run all district educational software.


As a S&P500 owner, I am Ok with this.


Buy shares. That’s what I’m doing.


Undoubtedly, that's capitalism! But they may also introduce some price cuts. These would probably increase units sold, so they could better take advantage of their increased margin.


I'm also predicting there will be no difference in battery life.

If you check the technical specifications of past MBPs, you'll notice one thing: watt-hour battery capacity is always decreasing while battery life remains constant (e.g., 10 hours of web scrolling).

Gains in power efficiency allow reduced component space, which allows further slimmer designs.


Linus Tech Tips recently published a video where they did all kinds of cooling hacks to a Macbook Air, including milling out the CPU heat sink, adding thermal pads to dissipate heat into the chassis (instead of insulating the chassis from the heat), and using a water block to cool the chassis with ice water.

They got pretty dramatic results from the first few options, but it topped out at the thermal pads and nothing else made any difference at all. Their conclusion was that the way the system was built, there was an upper limit on the power the system could consistently provide to the CPU, and no amount of cooling would make any difference after that point.

The obvious conclusion for me was that Apple made decisions based on battery life and worked backwards from there, choosing a chip that fell within the desired range, designing a cooling system that was good enough for that ballpark, and providing just enough power to the CPU/GPU package to hit the upper end of the range.


It could just as well have been: choose a perf level and ensure it will run for 10 hours...

It's actually good engineering to have all the components balanced. If you overbuilt the VRMs for a CPU that would never utilize the current, it's just wasted cost.

OTOH, maybe they were downsizing the batteries to keep it at 10H so they could be like "look we extended the battery to 16 hours with our new chips" while also bumping the battery capacity.

We shall see...


Per 'Reason077:

> The 16" MacBook Pro, for example, has a 100 Wh battery, which is the largest that Apple has ever shipped in a laptop. This is the largest battery size permitted in cabin baggage on flights.


I agree battery life for casual workloads will probably stay the same. However, if CPU power consumption decreases relative to other components, battery life on heavy workloads should go up.


My new 16" MBP is good for 2-2.5h max when used for working on big projects in Xcode. I expect to almost double that with the new CPUs. The people who have exactly this problem are also those who buy the most expensive hardware from Apple.


> My new 16" MBP is good for 2-2.5h max when used for working on big projects in Xcode.

That's…pretty bad. Do you have anything else open?


Slack, Chrome, Safari... but Xcode and the simulator is enough to bring it to its knees


> "Watt/hour battery is always decreasing"

This isn't always true. The 16" MacBook Pro, for example, has a 100 Wh battery, which is the largest that Apple has ever shipped in a laptop. This is the largest battery size permitted in cabin baggage on flights.


2015 15" Macbook Pro also had that (99.5 Wh).

https://support.apple.com/kb/sp719?locale=en_US


Great, they can make the laptops even slimmer. They're going to make them so thin they won't be able to fit a USB-C port, and they'll use wireless charging. You'll soon learn that you don't actually need to plug anything into your device. Apple knows best.


>> You'll soon learn that you don't actually need to plug anything into your device. Apple knows best.

A wireless solution is long-awaited.


> Once you look closely at power profiles on modern machines you'll see that most energy is going into display and GPU. CPUs mostly run idle. Even if you had a theoretical CPU using zero energy, most people are not going to get 30% battery life gains

This doesn't really seem to match my experience; at least on a 2015 MBP, the CPU is always consuming at least 0.5-1W, even with nothing running. If I open a webpage (or leave a site with a bunch of ads open), the CPU alone can easily start consuming 6-7 watts for a single core.

Apple claims 10 hours of battery life with a 70-something Wh battery, which would indicate they expect total average power consumption to be around 7W; even the idle number is a decent percentage of that.

(Also, has anyone been able to measure the actual power consumption of the A-series CPUs?)
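The back-of-envelope arithmetic in this comment can be checked with simple division (numbers taken from the comment, rounded):

```python
battery_wh = 70.0      # approximate MBP battery capacity, per the comment (Wh)
rated_hours = 10.0     # Apple's claimed battery life

avg_draw_w = battery_wh / rated_hours  # implied average system draw: 7.0 W

idle_cpu_w = 1.0                       # observed idle CPU draw from the comment
idle_share = idle_cpu_w / avg_draw_w   # idle CPU alone is ~14% of the budget
```

So even an "idle" CPU is a non-trivial slice of the whole-system power budget, which is why CPU efficiency gains can still move overall battery life.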


A typical laptop display can consume around 10W all the time so the 1W from the idle CPU is negligible in comparison.

If anything, you should install an adblocker. A single website filled with ads (and they're all filled with tons of ads) can spin the CPU to tens of watts forever, significantly draining the battery.


10W is on the high end of this; my 1080p screen on my Precision 5520 sucks down a paltry 1.5W at mid brightness. The big killer is the wifi chip, which takes between 1.5-5W.

The CPU tends to be quite lean until something needs to be done, then steps up very quickly to consuming 45W.


I usually consider 5 to 15W for laptop display consumption. Depends on the display, size and brightness.

It's quite variable; the highest brightness can consume double the lowest brightness, for example. One interesting test, if one has a battery app showing instantaneous consumption (I know Lenovo laptops used to), is to adjust brightness and see the impact.


Yeah, this is probably harder to do on a macbook, but intels 'powertop' program on Linux has quite high fidelity, matches the system discharge rate reported by the kernels battery monitor too.


Anecdotal evidence: On my work notebook (Lenovo X1 Carbon, Windows 10), the fan starts spinning when Slack is on a channel with animated emoji reactions.


I looked up the numbers out of curiosity. The X1 Carbon has a i7-8650U processor which does about 26 GFlops. The Cray-1, the classic 1976 supercomputer did 130 MFlops. The Cray-1 weighed 5.5 tons, used 115 kW of power, and cost $8 million. The Cray-1 was used for nuclear weapon design, seismic analysis, high-energy physics, weather analysis and so forth. The X1 Carbon is roughly equivalent to 200 Crays and (according to the previous comment) displays animated emojis with some effort. I think there's something wrong with software.
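The ratio quoted above checks out, using the figures as stated in the comment:

```python
x1_carbon_gflops = 26.0   # i7-8650U throughput, per the comment
cray_1_gflops = 0.130     # Cray-1: 130 MFlops

ratio = x1_carbon_gflops / cray_1_gflops  # roughly 200 Cray-1s in a laptop
```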


> I think there's something wrong with software.

Amen


Well yes, it's quite noticeably sluggish and bloated on the whole, with even UIs seemingly getting worse over time. Probably doesn't help that everything these days wants to push and pull from multiple networked sources instead of being more self-contained.


That’s because Slack runs on top of basically Chrome, which is a horrible battery hog.

If you run the web versions of Electron "apps" in Safari you'll get substantially better battery life. (Of course, still not perfect; irrespective of browser, all of these types of apps are incredibly poorly optimized from a client-side performance perspective.)

If large companies making tools like slack had any respect for their users they would ship a dedicated desktop app, and it would support more OS features while using a small fraction of the computing resources.

(Large-company-sponsored web apps seem to be generally getting worse over time. Gmail for example uses several times more CPU/memory/bandwidth than it used to a few years ago, while simultaneously being much glitchier and laggier.)


Yes, Electron is a bit of a battery hog. But the Slack app itself is horrendous. If you read through their API docs and then try to figure out how to recreate the app, you'll see why. The architecture of the API simply does not match the functionality of the app, so there is constant network communication, constant work being done in the background, etc.


I'll turn your anecdote into an anecdatum and say the same; for all devices I've owned. (Linux on a Precision 5520 w/ Xeon CPU, Macbook pro 15" 2019 model, Mac Pro 2013)

Turn off animated gifs and emojis in slack.


You can do that? /me looks it up... OMG you can do that. Thank you so much!


Why does a CPU running 500+ gigaflops struggle to animate a gif? Is the software stack really that bad?


On my laptop, scrolling through Discord's GIF list can cause Chrome and Discord to hard-lock until I kill the GPU process. Possibly because of a bug in AMD's GPU drivers on Windows.


My anecdote mirrors yours, plus an extra nod towards animated 'gif' wars.


never leave slack in the foreground.


Seems very likely to me that Apple's graphics silicon is much more performant and power-efficient than Intel's integrated GPUs. The fact that CPUs idle most of the time seems to point to the advantage of a big.LITTLE-style design, which Apple have been using for iPads etc. for a while. So maybe not 30%, but not negligible either.

They demoed Lightroom and Photoshop, which are surely using meaningful CPU resources?

Agreed on the accelerators and the cost savings. All together probably a compelling case for switching.


Try browsing the web on a semi-decent laptop from, say, 2008. It's a frustrating experience. It is obnoxious how much CPU power modern websites require.


Honestly, back when my PSU died I just did that. Beyond the lack of video decoding support for modern codecs it was perfectly acceptable as a backup machine.


Join the trend! Deploy websites that mine bitcoin in the background in your viewers' web browsers.


It's worse than that. At least someone would profit off of those bitcoins being mined. Instead we use all of that power to make the dozens of dependencies play nice with one another.


> Dozens

Oh my sweet summer child.


You know that Apple is going to be making the GPU with the same technology as the CPU right?

And those accelerators don't need to be discrete, Apple can add them to their CPUs.

So, it looks like your point is: Sure, Apple is going to jump a couple process nodes from where Intel is, but everything is somehow going to remain the same?


> Once you look closely at power profiles on modern machines you'll see that most energy is going into display and GPU.

Hard to square this with the simple fact that my 2018 MacBook Pro 13" battery life goes from 8 hours of internet surfing to 1.5 hours of iOS development with frequent recompilations.


I'm predicting a future where the OS is completely locked down and all software for Macs must be purchased from the App Store. Great revenue model for Apple.


That's basically what Microsoft did with ARM on Windows and I'm sure Apple will take advantage of this opportunity.


The first time (Windows RT), yes, and it was a complete market failure. The new Windows on Arm is not like this.


And it didn’t help that the Windows Store back then was a store for UWP/Metro apps.

It also took a long time for Microsoft to actually tackle the issues that UWP/Metro and WinUI/XAML faced. It took so long, it doesn’t even matter anymore and even Microsoft has moved on. But there’s quite a bit of hypocrisy, with Microsoft telling others to use WinUI while not using it everywhere themselves while refusing to update the styles of other design frameworks.


People have been making this claim for the last ten years, it hasn't happened yet.


Their pros/developers (and their influence) are too important to Apple to do that.


Don't worry, they'll be able to 'unlock' the OS for an exorbitant price via an Apple Pay purchase. :)


> Apple is going to save $200-$800 cost per Mac shipped

Whom is Apple going to sell binned A14s to?

Where does everyone think margin comes from in the chip business?


Apple will simply use different bins in different products. The A12X is arguably a "binned" A12Z, after all. Higher bins for pro lines, lower bins for consumer lines.
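As a sketch of the scheme (the thresholds, core counts, and bin names here are invented; Apple's actual binning criteria aren't public):

```python
# Hypothetical binning: route each tested die to a product tier.
# Thresholds and names are made up for illustration.
def assign_bin(working_gpu_cores: int) -> str:
    if working_gpu_cores >= 8:
        return "pro"       # fully enabled die ("A12Z"-style)
    if working_gpu_cores == 7:
        return "consumer"  # one GPU core fused off ("A12X"-style)
    return "reject"

print(assign_bin(8), assign_bin(7), assign_bin(5))
```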


Apple doesn't have the lineup for that. The CPU in the Mac Pro isn't the same silicon as the CPU in the Mini. It has more cores, bigger caches, more memory channels. It's not just the same chip binned differently.

In theory they could offer the Mini with eighteen different CPU options, but that's not really their style.


One question is whether they'll go down the chiplet route for higher-end CPUs; then they could share a single die, binned differently, across more of their range, and just bundle them into different MCMs.


That's what AMD is doing, but it does weird things to your lineup. For example, here's the 3700X vs. the 3990X:

https://www.anandtech.com/bench/product/2520?vs=2584

The 3990X costs more than ten times as much as the 3700X. It has eight times more cores. On anything threaded it smashes the 3700X. On anything not threaded it... doesn't. In many cases it loses slightly because the turbo clock is lower.

It basically means that the processor with the best single thread performance is somewhere in the lower half of your lineup and everything above it is just more chiplets with more cores. That's perfectly reasonable for servers and high end workstations that scale with threads. I'm not sure how interesting it is for laptops. Notice that AMD's laptop processors don't use chiplets.


Even the highest core count Threadrippers have decent single thread performance. The Epyc lineup has much lower single core performance and that may make it less useful for desktop workloads.


AFAIK the AMD distinction is currently that APUs (mobile or desktop) don't use chiplets.

On the whole my guess would be that we have the iPad Pro and MacBook Air using the same SoC, the MacBook Pro doing… something (it'll still need integrated graphics, but do they really sell enough to justify a new die? OTOH they do make a die specifically for the iPad Pro, and I'd guess it's lowest-selling iOS device v. highest-selling macOS device, and idk how numbers compare!), and the iMac (Pro)/Mac Pro using chiplets.


Don't worry, Apple already tiers most of its hardware by soldering in the RAM/storage and charging an offensive, obviously price-gouging amount to upgrade, even though the maximum spec has a base cost to them of 1/4 to 1/6 of what they charge FOR AN UPGRADE.


The Mac line will start to look like the iOS line very quickly. Binning will be important and you'll likely see processor generations synchronized across the entire product base.


I've been thinking about this. I can't see Apple realistically being able to produce multiple variants (phone, tablet, laptop, speaker, tv) of multiple elements (cpu, gpu, neural accelerator, wireless/network, etc) packaged up on an annual cadence.

The silicon team is going to be very busy: they've got the A-series, S-series, T-series, H-series, W-series, and U-series chips to pump out on a regular roadmap.

The A-series (CPU / GPU / Neural accelerator) is the major work. It gets an annual revision, which probably means at least two teams in parallel?

The A-series X and Z variants seem to be kicked out roughly every second A-series generation, and power the iPads. The S-series seems to get a roughly annual revision, but it's a much smaller change than the main A-series.

I could see the Mac chips on a 2-year cycle, perhaps alternating with the iPad, or perhaps even trailing the iPads by 6 months?


The iOS line looks like the low end device using last year's chip. How does binning help with that? Are they going to stockpile all the low quality chips for two years before you start putting them in the low end devices? Wouldn't that make the performance unusually bad, because it's the older chip and the lower quality silicon?


Why would they make the ios line worse? Surely they still want to prioritize phones?


If you think the bins are determined by yield rather than by fitting a supply/demand curve, I have a bridge to sell you.

Of course, yield is still a physical constraint, but apple sells a wide range of products and shouldn't have any trouble finding homes for defect-ridden chips.


Chiplets and better interposers make yield/binning less of a concern than before.


The non Pro iPhones, non Pro iPads, Apple TVs, HomePods, maybe the Mac Minis if external GPUs are pushed as an alternative to integrated GPUs.


> CPUs mostly run idle. Even if you had a theoretical CPU using zero energy, most people are not going to get 30% battery life gains

I don't agree. Simply disabling Turbo Boost on MBP16 nets me around 10-15% more battery life. Underclocking a CPU can even result in twice to thrice the battery life on a gaming laptop under same workload.

Here are more details: https://www.extremetech.com/computing/304884-disabling-intel...


I actually think total battery life will go up a fair bit and compile times will be much faster, 20-30%, while giving everyone full power when not on the mains. The amount my MacBook throttles when on battery is startling and stopping that while still giving huge battery life, say 6h at 80% CPU will be a huge win. Apple wouldn’t bother unless they knew the benefits they can bring over the next 10 years will be huge.

All of this is complete speculation of course, but I don't believe this one will be a financial decision; it'll be about creating better products.


Multi-core performance is not a strong suit of Apple's ARM architecture, I suspect you're going to see a mild to moderate performance hit for things like compilation.


The rumours are that they're doubling the number of high-performance cores for the laptop chips (so 8 high performance cores and 4 low-power cores). That + better cooling ought to boost the multi-core performance quite significantly.


That's mostly because of thermals, which are drastically different on a Mac.


Is their multi-core performance poor, or have they just made size/power trade-offs against increasing the number of cores? The iPad Pro SoCs are literally the only parts they've made so far with four big cores.


Don’t iPads get significantly better battery life than similar casual workloads on a similar form factor Intel PC (like a MacBook Air)?


That’s mostly because desktop systems are built with more background services and traditional multitasking in mind. iOS has a different set of principles.


But luckily for Apple, they control the OS as well.


If that was enough, though, they'd just do that now. They control the OS without making laptop silicon.


Right, but an iPad is already an example of what they can do when they control both.


I was looking at the benchmarks of the latest MacBook Air here [1]. In GPU performance it's not competitive with the iPad Pro, and that's quite an understatement. For me the most obvious win of this migration to "Apple Silicon" will be that entry-level MacBook/iMac will have decent GPU performance, at long last...

[1] https://arstechnica.com/gadgets/2020/03/macbook-air-2020-rev...


> Apple can start leaning on their specialized ML cores and accelerators

Can you elaborate a bit more on this?

I was wondering how this would unfold and looks like things are moving in that direction https://blog.tensorflow.org/2020/04/tensorflow-lite-core-ml-...

If TF models can interoperate with CoreML - boy that'll literally be a home run for Apple cos eventually all ML frameworks will follow suit.


You can convert TF models to CoreML and have been able to for quite a while: https://developer.apple.com/documentation/coreml/converting_...


“Apple can start leaning on their specialized ML cores and accelerators“

I think that hits the nail on the head. Since I only cursory listened to both the keynote and the state of the union I may have missed it, but I heard them neither mention “CPU” nor “ARM”. The term they use is “Apple Silicon”, for the whole package.

I think they are, at the core, but from what they said, these things need not even be ARM CPUs.


This doesn't match my experience. Web browsing kills my battery life, and my assumption is that it's driven by JS.


JS/ads and the wifi chipset seems to be the big culprit across laptops in general in this scenario. Even Netflix doesn't drain my battery as fast as hitting an ad heavy site with lots of JS and analytics and I can watch the power usage for my wifi chipset crank up accordingly. This happens across every laptop, iPad, Chromebook etc that I own.


I think it's largely just the JS. I adblock pretty aggressively.


Also heat is greatly reduced with ARM, no?


If power goes down, heat will too; it's almost 1 to 1. Almost all power going into a CPU ends up as waste heat eventually.


To distill your post-

-the CPU will be a lot more powerful and faster, but it isn't really faster because it's like an accelerator or something.

-if you actually use your computer get some vague "Linux desktop" or something (which is farcical and borders on parody, completely detached from actual reality). Because in the real world people actually doing stuff know that their CPU, and its work, is a significant contributor to power consumption, but if we just dismiss all of those people we can easily argue its irrelevance.

My standards for comments on HN regarding Apple events are very low, but today's posts really punch below that standard. It's armies of ignorant malcontents pissing in the wind. All meaningless, and they're spraying themselves with piss, but it always happens.

In the end this noise doesn't matter whatsoever.


CPUs do not mostly run idle, if my CPU usage indicator is accurate.


I was going to follow up with an anecdote about how my computer has used less than 15 minutes of CPU time in the last 2 hours but then again I forgot to stop a docker container that automatically ran an ffmpeg command in the background consuming 70 min of CPU time.


I don't know about MBP but the iPad Pro has a faster GPU than an MBA for most of the shaders on Shadertoy.com


> most energy is going into display and GPU

You know Apple Silicon is going to handle this too, right?


> Apple is going to save $200-$800 cost per Mac shipped

Does Apple actually have its own silicon fab now or are they outsourcing manufacture? If the former, those are /expensive/ and they'll still be paying it off.


No. Recent A-series chips are fabbed by TSMC.


This seems very inaccurate to me. Most laptops do not have discrete GPUs, so tasks like rendering a youtube video do require CPU cycles. Zoom is very CPU intensive on basically any mac laptop, and people always have a ton of tabs open, which can be fairly CPU intensive.

In other words, there are definitely gains to be had. My ipad pro offers a generally more smooth and satisfying experience with silent and much cooler running CPU versus my MBP, and they offer similar battery life. Scale up to MBP battery size and I suspect we will be seeing a few hours battery life gain.


Here's an analogy to help explain the skepticism: ants have amazing efficiency - they can lift multiples of their own body weight. So why can't an ant lift my car? Well, because it's too small. So let's just take the same ant design and scale it up? Unfortunately, it doesn't work like that. A creature capable of lifting my car wouldn't be much like an ant.

There is no guarantee that a phone-scale CPU can just become 4x faster by 4x'ing the power/TDP/die area. If it were that easy, Intel would already have done it (and no, the x86 architecture isn't so terribly inefficient that they are leaving triple-digit percentage improvements on the table).

What I expect we'll see are ARM chips that are power and performance competitive with x86 chips only for specific curated use cases. Apple will extract an advantage by putting custom hardware acceleration into them, to cater for those specific tasks. They will not be able to achieve general purpose performance improvements wildly beyond what Intel can already do.

This is how the current iDevices achieve their excellent performance and battery life. Not through raw general-purpose CPU horsepower, but by a finely tuned synergy between hardware and software. Apple are taking their desktop down the same route. This will be the ultimate competitive advantage for their own software - they will be able to move key software components into hardware, and make it look like magic. But as a developer, you won't be able to participate in this unless you target Apple-blessed hardware instructions/APIs. Your Python script isn't going to start running 4x faster unless you can convince Apple to implement its inner loop in custom silicon.

I have no doubt that Apple will be leaning hard into ASIC territory as they build out their new CPUs. The endgame? Every software function you need, baked into perfectly optimised silicon by the monovendor.


> What I expect we'll see are ARM chips that are power and performance competitive with x86 chips only for specific curated use cases.

Sorry but there is no justification for this. With the same thermal constraints there is every expectation that an Apple / Arm CPU would be more performant and efficient than a comparable x86. Why? Because aarch64 doesn't have the historical legacy that x86 has and Apple has already shown what they can do in the iPad etc. Sure they won't be triple digits but it will enough to be noticeable.

And, as you say, they will have the advantage of Apple's custom silicon for specific use cases. So best of both worlds.


The comparison is a bit unfair. x86 is like a decade older than ARM. Not that much in retrospect. aarch64 is as "free of historical legacy" as x86_64 is (that is: not at all free). There is lot of cruft and even multiple ISAs in aarch64 (e.g. T32/Thumb).

And the CISC vs RISC arguments are questionable, seeing that Apple has done the migration in both directions by now.


I noticed that Apple made absolutely no mention of ARM in their keynote. Seems like they're trying to whitelabel it for brand benefits as well as to divorce themselves from any expectations around standards?


That was interesting! Surely not an accident. Possibly to:

- Emphasise the breadth of their silicon expertise across CPU / GPU / Neural Engines etc.
- Because Arm has little or no brand recognition (Apple > Intel > Arm in branding terms).
- Distinguish from any me-too moves to Arm by competitors.


There you go! Someone finally figured it out. Apple is moving to Apple Silicon, not ARM. Try to get LG to announce they’re offering a PC with Apple Silicon tomorrow.

The absence of any ARM mention is marketing, nothing more.


Which competitors? Windows and ChromeOS have already been sold on ARM hardware.


They scarcely ever have with regards to iOS either; architecture has never been a talking point for their CPUs. How long was it from iPhone announcement to knowing it was ARM? How long from the Apple A4 announcement to knowing it was ARM?


Yep, most of that old "cruft" is essentially unused and turned off. A lot of critics of x86 don't really know what they're talking about. x86 is inefficient because its vendors don't really have much incentive to end the status quo where performance is more important than power to most customers. People are happy with 3-4 hours out of their laptops, so Intel and AMD aim for that and sacrifice power for performance; quite often that is the tradeoff in the design.


> People are happy with 3-4 hours out of their laptops so Intel and AMD aim for that and sacrifice power for performance

Heck, I'm happy with 1 hour. I leave my laptop plugged in nearly 100% of the time. The point of the laptop is that it's easy to move, not that I want to use it while I'm in transit.


Intel missed the whole move of personal computing to mobile devices. Not missing out on that should have been incentive enough one might think.


Some is turned off but some still has to be dealt with (variable instruction lengths for example).

Intel tried to compete in mobile for a long time and failed even with a better manufacturing process.


> Some is turned off but some still has to be dealt with (variable instruction lengths for example).

Modern x86_64 processors don't actually natively execute x86 instructions, they translate them into the instructions the hardware actually uses. The percentage of the die required to do that translation is small and immaterial.

> Intel tried to compete in mobile for a long time and failed even with a better manufacturing process.

Intel didn't understand the market.

I recently bought a new phone. On paper it's twice as fast as my old phone. I imagine that's true but I can't tell any difference. Everything was sufficiently fast before and it still is. I never use my phone to do anything that needs an actually-fast CPU. I have no reason to pay additional money for a faster phone CPU. But I do notice how often I have to charge the battery.

These are not atypical purchasing criteria for mobile devices, but that's not the market Intel was chasing with their designs and pricing, so they failed. It's not because they couldn't make an x86 CPU for that market, it's because they didn't want to, because it's a lower margin commodity market.


Faster CPUs become more power-efficient CPUs because they can race to sleep. So you really do want to pay more for that CPU, not for the compute performance but for the battery life.

https://en.wikichip.org/wiki/race-to-sleep
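The arithmetic behind race-to-sleep, with invented numbers: a faster core burns more power while active, but finishes sooner and idles at near-zero power for the rest of the window.

```python
# Race-to-sleep, illustrated with made-up numbers.
def energy_joules(active_w, sleep_w, work_s, window_s):
    """Energy over a fixed window: an active burst, then sleep."""
    return active_w * work_s + sleep_w * (window_s - work_s)

# "Fast" core: 10 W active, job done in 1 s, then 0.1 W sleep.
fast = energy_joules(10, 0.1, 1, 10)
# "Slow" core: 4 W active, the same job takes 4 s.
slow = energy_joules(4, 0.1, 4, 10)
print(fast, slow)  # ~10.9 J vs ~16.6 J: the faster core wins on total energy
```

Whether the fast core actually wins depends on how much extra active power the higher clock costs; the next comment makes exactly that point.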


That's assuming the faster CPUs use the same amount of power. It's possible for a slower CPU to have better performance per watt. This is often exactly what happens when you limit clock speed -- performance goes down, performance per watt goes up.
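Roughly why: dynamic power scales with frequency times voltage squared, and voltage has to rise with frequency, so power grows roughly with the cube of frequency while throughput grows only linearly. A sketch of that commonly-cited approximation (illustrative model only, not measured data):

```python
# Toy model: power ~ f^3 (f * V^2 with V ~ f), throughput ~ f.
def perf_per_watt(freq_ratio):
    return freq_ratio / freq_ratio ** 3   # = 1 / f^2

# Underclocking to 80% of max frequency:
gain = perf_per_watt(0.8) / perf_per_watt(1.0)
print(f"{gain:.2f}x better perf/watt")   # ~1.56x in this model
```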


> Intel tried to compete in mobile for a long time and failed even with a better manufacturing process.

They didn't fail because of performance, though, they failed because of app support & lack of a quality radio. The CPU performance & efficiency itself was otherwise fine. It wasn't always chart-topping good, but it wasn't bad either.


Agreed - CPUs (at the end at least) were fine. Also they were probably looking for bigger margins than were available.

General point is that I think that Arm has a small architectural advantage due to lack of cruft but that other factors are usually more important - e.g. the resources and quality of team behind implementation.


Sorry, I meant A64 rather than aarch64, as I'm pretty sure Apple hasn't supported 32-bit for a while now (so no T32 or Thumb). The instruction set was announced in 2010 and is definitely cleaner than x86.

Agreed that CISC vs RISC is very questionable by now.


Provided that the software is correctly written, ARM's weaker memory model allows for more flexible instruction and I/O scheduling.

It seems most people feel that the DEC Alpha went too far in weakening the memory model to improve performance, but A64 seems to at least be near the sweet spot.

Decoding x86 instructions in parallel doesn't throw away a huge amount of work, but there's non-zero overhead introduced by having the start location of the next instruction depend on what the current instruction is.
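That dependency is easy to see in miniature: fixed-width instructions let you compute every boundary up front, while variable-width encodings force a sequential walk. A toy model (the length-in-first-byte encoding is invented; x86's real encoding is far messier):

```python
# Toy model of instruction-boundary finding.

# Fixed-width (aarch64-style, 4 bytes each): every boundary is known
# without decoding anything, so a wide decoder can attack them in parallel.
def fixed_boundaries(code: bytes, width: int = 4):
    return list(range(0, len(code), width))

# Variable-width (x86-style): each instruction's length is only known
# after (partially) decoding it, so boundaries are found sequentially.
# Invented scheme: the first byte of each instruction is its length.
def variable_boundaries(code: bytes):
    boundaries, pc = [], 0
    while pc < len(code):
        boundaries.append(pc)
        pc += code[pc]   # must look inside the instruction to find the next
    return boundaries

print(fixed_boundaries(bytes(12)))                     # [0, 4, 8]
print(variable_boundaries(bytes([2, 0, 3, 0, 0, 1])))  # [0, 2, 5]
```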


The weaker memory model also uncovers synchronization bugs that have been papered over by x86's stronger semantics ;)


Apple's recent aarch64 implementations don't support any of the 32 bit ARM instruction sets, and aarch64 is a significant departure from armv7


Your wording makes it sound like ARM is still being used just for smaller devices and controllers with very well defined and limited uses. General purpose computing is already possible with iPads and iPhones. They're just artificially limited by the OS.

iDevices weren't really made with games in mind, but they can push out performance that beats handheld gaming devices. Artists (including myself) use iPads extensively and the response time with the Apple Pencil beats out just about anything else on the market. The only limiting factor is the tiny memory that limits the file size and layer limit on some programs. They're just fine for watching video, and even multitasking with a video playing while working on something else. This is on tiny device with no active cooling and long battery life, beating out most laptops in the same price range.

I don't believe there is any curated use case. They're already more than capable of being general purpose computers. I mean, Apple is already openly advertising that they're making iPad OS more desktop-like and operable with mice and keyboards. Literally the only things holding them back are the OS and Apple's refusal to put some decent memory inside.


The OS is the curated use case. Multitasking is an afterthought. Once the OS is no longer "holding them back" the Apple chip will run into similar problems that Intel CPUs run into.


Here's a counterexample (stretching your analogy a bit): Ants lifting the heaviest car in the world.

[1]https://www.top500.org/news/japan-captures-top500-crown-arm-... [2]https://news.ycombinator.com/item?id=23601098


They already have a mobile chip that is as fast as an active-thermally cooled notebook chip.


Isn't that for specific benchmarks though such as some geekbench/specint or web browsing benchmarks? I worry about non-gpu floating point for example. There is so much hand-optimized AVX/SSE code out there in big apps.


There’s a fair bit of AVX/SSE code out there, but these days the vast bulk of AVX/SSE code is generated by the autovectorizer and that’s mostly going to work on NEON without a hitch. Clang enables the autovectorizer at -O2 by default.

I’d be interested in estimates of how much hand-written AVX/SSE your computer actually runs. The apps I’ve seen usually have a fairly small core of AVX/SSE code.


They're admittedly not applications most users run every day, but much multimedia work (audio processing, encoding, decoding) is mostly done with hand-crafted intrinsics, and the same goes for video.

In an even more niche area (high-end VFX apps, like compositors, renderers) SSE/AVX intrinsics are used quite a bit in performance-critical parts of the code, and auto-vectorisers can't yet do as good a job (they're pretty useless at conditionals and masking).


Even less esoteric: your libc likely has at least a half dozen vectorized functions for the mem* and str* functions.


But is the bulk of AVX code by time spent running, code that was generated by autovectorizer? The SIMD in openssl and ffmpeg is written by hand. I bet the code that spends a lot of time on the CPU, especially the code that runs a lot while humans are waiting, is written by hand.


Those should have AArch64 versions written. AArch64 is old now, it's not some niche architecture.


Desktop productivity content creation apps have never before needed ARM versions, so many probably don't have ARM specific optimizations, and some probably have x86 specific code that is just enabled by default.

The memory model differences are going to be painful to debug, I think ("all-the-world's-a-VAX syndrome" is now "all the world's a Pentium/x86-64").


"As fast" on specific curated use cases. Show me an Apple chip that beats any laptop on 7zip.


I don’t know about 7zip specifically, but the iPad Pro seems to beat even some MacBook Pros on some benchmarks.

https://www.macrumors.com/2020/05/12/ipad-pro-vs-macbook-air...


Given that Amazon was able to get there, what makes you think Apple can't? I would struggle to believe that Annapurna Labs has any significant advantage over PA Semi given the track record PA has had since joining Apple, and the fact they had nearly a decade head-start.

https://www.anandtech.com/show/15578/cloud-clash-amazon-grav...


Not sure what they are measuring, though - a random Xeon from spec.org: https://www.spec.org/cpu2006/results/res2017q3/cpu2006-20170... ; at most 30% higher frequency, yet twice as fast. Well...


Is this a joke, what kind of usage benchmark is 7zipping large numbers of files?


>>> They already have a mobile chip that is as fast as an active-thermally cooled notebook chip.

>> "As fast" on specific curated use cases. Show me an Apple chip that beats any laptop on 7zip.

> Is this a joke, what kind of usage benchmark is 7zipping large numbers of files?

A benchmark that Apple is unlikely to have implemented specific optimizations for, which therefore is a better test of the general purpose performance of the chip.

The situation being claimed here is sort of like if someone cited a DES benchmark to claim that Deep Crack's DES cracking chips (https://en.wikipedia.org/wiki/EFF_DES_cracker) were faster than a contemporary 1998 Pentium II.


A benchmark that probably relies as much on disk access speeds as CPU?


Nope, 7zip is using LZMA algorithm for compression, which is around a few MB/s on the fastest CPU. It's heavily CPU bound.

edit: Just tried compressing a large file, ultra setting on my desktop i5 CPU, it's running at 3 MB/s on 1 core.
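This is easy to reproduce with Python's built-in `lzma` module (the same LZMA algorithm 7zip uses): with the data entirely in memory there is no disk I/O at all, so whatever throughput you measure is purely a function of the CPU.

```python
import lzma
import os
import time

data = os.urandom(4 * 1024 * 1024)  # 4 MB of incompressible data, in RAM

start = time.perf_counter()
compressed = lzma.compress(data, preset=9)
elapsed = time.perf_counter() - start

# No disk involved: this throughput is entirely CPU-bound.
print(f"{len(data) / elapsed / 1e6:.1f} MB/s")
roundtrip_ok = lzma.decompress(compressed) == data
```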


Apple ships a framework for doing lzma and other compression algorithms. I doubt they will be taken by surprise


One large file could be CPU bound, many small files (which is partly why you zip/jar things up) is disk bound.


Not true; the bottleneck is going to be compression, not disk access.


> A benchmark that probably relies as much on disk access speeds as CPU?

If true, that's just a nitpick that doesn't affect the overall point of the GGGP, though.


I think the point is that it won’t be as fast in applications that Apple didn’t anticipate.


I believe it is what the poster you are replying to would call "a specific curated use case."

(Semi-seriously, I don't know anyone who uses a Unix(-like) system who uses 7zip, although I'm sure they're out there. For the record, I just unzipped a 120M archive on both my 2020 Core i7 MacBook Air and my 2018 (last-gen) iPad Pro and as near as I can tell the iPad was faster actually extracting the files, but had an extra second or so of overhead from the UI.)


> I don't know anyone who uses a Unix(-like) system who uses 7zip

7zip is an implementation of LZMA, like xz. So, different names and file format details, but essentially the same algorithm.
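Python's `lzma` module makes the container-vs-algorithm distinction concrete: it can emit either the modern `.xz` container or the legacy `.lzma` one, with the same compression algorithm underneath.

```python
import lzma

data = b"the same LZMA algorithm underneath " * 100

xz = lzma.compress(data, format=lzma.FORMAT_XZ)        # .xz container
alone = lzma.compress(data, format=lzma.FORMAT_ALONE)  # legacy .lzma container

# Different file formats, same algorithm: both round-trip the data.
ok = lzma.decompress(xz) == data == lzma.decompress(alone)
print(ok)
```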


Correct. 7zip is a LZMA compressor. The common equivalent command line tool on Linux is xz.

Linux distributions have been using xz compression for all packages (replacing gzip), so to the question of how relevant xz/lzma/7zip performance is to day-to-day tasks: very.

The successor will probably be zstd in the coming years. https://www.phoronix.com/scan.php?page=news_item&px=Fedora-3...


You can find a 7zip for UNIX here [1].

[1] http://p7zip.sourceforge.net/


Actually the zip algorithm is a perfect candidate for dedicated silicon or a dedicated instruction.

I read about a chip that had that feature yesterday but I can’t find the link unfortunately.


This is something that is being moved to storage controllers on motherboards, e.g. in the PS5/Xbox consoles, so that compressed data can be streamed directly to the GPU. Hopefully we'll start to get this type of tech after it's been proven in the console space.


Intel QAT supports gzip compression.



so could you provide some actual, open-source basic benchmarks. And not strange, opaque geekbench results...

I guess AMD is fine for me (as is my old Intel-notebook) and I'll just wait for POVray, GROMACS and Co.

EDIT: And well, I noticed, supposedly Anandtech ran SPECint2006 on the A13 (and numerous other chips) - they ran it with WSL for x86 (because running on a dozen Android things is easier than running a standard benchmark on Linux/Windows ofc). You find the results here: https://images.anandtech.com/doci/14892/spec2006-global-over... - I guess (not sure, because it's for some reason not clearly marked and mentioned...) these are SPECint2006 results. So, let's check them for validity (because WSL is no problem and it matches Linux ofc); just looking at an i7-6700K (which is a little bit behind the i9-9900K they supposedly ran on): https://www.spec.org/cpu2006/results/res2016q1/cpu2006-20160... - marginally worse performance than @anandtech in some benchmarks, but that's with an older CPU and an older arch! And then there are the 3 or 4 benchmarks which are just way off. Makes one wonder what they really did (because of course, installing CentOS and running SPEC on native Linux is too much of a hassle, when running and compiling on 8 ARM platforms!?)

EDITEDIT: it's even worse for SPECfp2006: https://www.spec.org/cpu2006/results/res2016q1/cpu2006-20160... [well, here the old 6700K is suddenly sometimes twice as fast as the 9900K and 3x as fast as the A13 (and yeah, the story of the 2.8GHz low-power chip running circles around a 4.5GHz high-frequency part just didn't sound convincing in the first place...)]


The official results from spec.org have a bunch of cheating, eg. exploiting undefined behaviour to run a benchmark improperly. AnandTech uses a consistent compiler (Clang, not ICC) without settings to exploit this, hence the divergence.

Andrei mentions this here: https://www.realworldtech.com/forum/?threadid=187314&curpost...


When I started at Google, I sat next to a guy who used to write compilers for DEC and Intel. I asked him, given the huge amount Google spends on hardware and electricity, if he thought that switching to ICC was worth while. His answer was basically that ICC is tuned to maximally exploit undefined behavior for marketing purposes and he wouldn't want to use it in production, at least without heavily tweaking flags to disable some optimizations. ICC gets most of its speed advantages by enabling optimizations that are present in GCC/Clang, but deemed too dangerous to turn on by default.


Clang with "-Ofast"; I really wonder which other options there are?! And IMHO this just doesn't explain the 200% difference.

And some other things which are bugging me:

- can you only optimize for A53 on Android with the big.LITTLE-configurations (they do!)?

- so they cross-compiled an AMD64-producing gcc 3.2 with Xcode 10 on MacOS-X for ARM? impressive.


Yeah, the last time they did that with Nvidia graphics cards, just as Adobe released its new rendering engine, everybody was really thrilled to learn they could also buy Apple video editing software that would not sht itself instead of using the Adobe tools, because of that inherent Apple advantage...

E.g., a nice article from 2010: https://nofilmschool.com/2010/07/apple-snubs-adobe-again-wit...


Intel can't shrink the die size to what TSMC/Global Foundries/Samsung can and they will never let them manufacture due to IP/national security/etc reasons.


Are you saying that Amazon AWS's Graviton 2 EC2 servers can't handle server-level performance?


I wonder how that will pan out for the people running MS Office all day on their Macs...


> Apple is roughly one chip cycle ahead on perfomance/watt from any other manufacturer.

Eh? This is a flimsy claim. AMD's performance/watt is extremely impressive right now. Apple is ahead of Intel for sure, but Intel isn't the only other player here.

> So, I’m predicting an MBP 13 - 16 range with an extra three hours of battery life+, and 20-30% faster. Alternately a Macbook Air type with 16 hours plus strong 4k performance.

A slightly more efficient CPU doesn't get you this. You need significant efficiency improvements across a variety of aspects, including those Apple has already been optimizing for years like the display.


I'd say that you should take a look at a comparison of the power efficiency of Apple's "little" core in the A13 to a stock ARM little core.

>In the face-off against a Cortex-A55 implementation such as on the Snapdragon 855, the new Thunder cores represent a 2.5-3x performance lead while at the same time using less than half the energy.

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...

Apple has been hitting it out of the park lately.


Yeah, the other ARMs are anemic, but the AMD cores are knocking it out of the park recently.


They’re not really though. Still slower IPC.


AMD is tiny compared to Intel; the fact that they are besting them goes to show how stuck Intel has been for ~5 years.

The real problem, though, is that Apple is actually designing a core 100% focused on the target market - unlike Intel, for whatever reason, and AMD which didn't have the funds to run a dedicated design team for laptop/desktops.

So, I would expect the engineering tradeoffs for said laptop/desktop processor to show. E.g., things like hyperthreading are quite a win for servers, but at best a wash for a desktop use case focused on extremely high single-thread perf at the expense of throughput.


> AMD which didn't have the funds to run a dedicated design team for laptop/desktops.

Given the extremely impressive performance of the 4800H notebook cpus, I'd assume that might be a thing of the past.

> AKA, things like hyperthreading are quite a win for servers, but at best are a wash for a desktop use case focused on extremely high single thread perf at the expense of throughput.

This might be true for devices like the MacBook Air, which are designed for relatively light usage like Office, but I don't see that argument working for their "Pro" lineup, including the iMac Pro and the MacBook Pro. These are devices specifically targeted at a professional audience like graphic designers, 3D artists, software developers or video editors. All of those tasks can be done with decent single-threaded performance, but lots of those tasks also benefit from multithreading. I haven't owned a single MacBook so far and I doubt that'll change anytime soon. Nevertheless, it's exciting to see Apple make this move, and it'll be interesting to see how well their CPUs compare to mobile processors by Intel and AMD.


> Given the extremely impressive performance of the 4800H notebook cpus, I'd assume that might be a thing of the past.

The TDP on those is, what, 3-4x the A12Z?


TDP is whatever you want it to be. The big cores in an A12Z will pull around 4w each. That means an "unchained" A12Z is a ~16W+ CPU. The 4800H is a 45W TDP, but also has 2x the fast CPU cores. And the binned 4800HS is a 35W TDP, still for 8 cores / 16 threads.

So ~2-3x the TDP for 2x the core count and 4x the thread count. Pretty interesting head to head when the Apple dev kits actually show up in people's hands, don't you think?


I'm unconvinced this comparison is very meaningful at all.

First of all, TDP is not the same thing as power consumption - it is a specification for the required performance of the heatsink/fan cooling solution.

For example: a Ryzen 3900X is a 105W TDP chip. Running at full speed on all 12 cores it consumes 146W; about 10W per core and the remainder for the rest of the package.

Secondly, it is entirely typical to run a single-threaded workload at a higher clock frequency (because if that's all you have to do, why not?), and chasing higher clock speeds is disproportionately expensive since it requires higher voltages, and dynamic power in a switching system increases with the square of voltage.

Again, taking the Ryzen 3900X: that's a nominal 3.8GHz processor. Running a single-threaded workload, it will typically boost up to 4.45GHz in testing. At that frequency, that single core is drawing nearly 18W - i.e. 80% more than at the nominal frequency achieved when all cores are busy and no boost headroom is available.

From what I've read about the A12/A13, the voltage/clock curves are particularly skewed at maximum clock speeds - something like 1.1V at 2.49GHz on the A12 and well under 0.8V at 2.3GHz - basically half the power to run at 93% of the clock speed.
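
That claim is easy to sanity-check against the standard dynamic-power relation mentioned above. A minimal sketch using the quoted A12 figures (treating switched capacitance as constant, which is an assumption):

```python
# Dynamic switching power scales roughly as P ∝ C·V²·f.
# Plugging in the quoted A12 figures (capacitance assumed constant):
v_hi, f_hi = 1.1, 2.49   # volts, GHz near peak clock
v_lo, f_lo = 0.8, 2.30   # volts, GHz one step down the curve

rel_power = (v_lo / v_hi) ** 2 * (f_lo / f_hi)
rel_clock = f_lo / f_hi

print(f"relative power: {rel_power:.2f}")   # roughly half the power...
print(f"relative clock: {rel_clock:.2f}")   # ...at ~92% of the clock
```

Which comes out to ~0.49x the power at ~0.92x the clock - "basically half the power at 93% of the clock speed" checks out.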

There are a lot of unknowns here, but I think there are more reasons for optimism than your analysis suggests.


Not 4W each. Stop saying this.


SPECint2006 is a single-threaded test: https://images.anandtech.com/doci/14892/specint2006-a13.png

That's either 4-5W per core or the uncore in an A13 is hugely power hungry. I'm rather positive it's not an extremely bad uncore, so the only other option here is a 4-5w per core power figure. Which also lines up with the voltage/frequency curve numbers: https://images.anandtech.com/doci/14892/a12-fvcurve.png

If you have data to support a different number I'm all ears, finding power draw figures in this space is rather difficult, but 4-5W per-core aligns with expectations here. A 1W consumption would be unheard of levels of good.


It’s a meaningless comparison. The 4800H could be a 200W chip if it was “unchained”. Peak burst performance is dynamic in modern CPUs, it’s what you can measure in the real world that matters.

Intel TDP doesn’t include the power usage of DRAM and other IO, or the screen, or WiFi or modem (which may have been disabled tbf).

Geekbench 5 multi core scores are roughly 7400 vs 3300. Let’s say for example that the Thunder cores are half the perf of the Lightning ones. So that 3300 score might be roughly the perf you could get from 4 x Lightning instead of 2 x Lightning and 4 x Thunder. 4800H has 8 cores. Getting a bit over 2x the performance.

But that’s at a TDP of 45W (let’s call it 40W to be more generous). 5W for A13 (well, A13 entire device) vs 40W 4800H. That’s 8x the power draw for 2x performance. Am I wrong?
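
For what it's worth, here's the arithmetic spelled out (all inputs are the rough figures above, not measurements):

```python
# Back-of-envelope perf/watt comparison from the rough figures above:
a13_score, a13_watts = 3300, 5        # Geekbench 5 multi-core, whole-device power
h4800_score, h4800_watts = 7400, 40   # 4800H, TDP rounded down per the post

power_ratio = h4800_watts / a13_watts
perf_ratio = h4800_score / a13_score
ppw_advantage = (a13_score / a13_watts) / (h4800_score / h4800_watts)

print(f"power ratio: {power_ratio:.0f}x")   # 8x the power draw
print(f"perf ratio:  {perf_ratio:.1f}x")    # for ~2.2x the performance
print(f"A13 perf/watt advantage: ~{ppw_advantage:.1f}x")
```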


The linked Anandtech data shows it using 4-5W in a single-thread test. That doesn't mean it will use 4-5W/core in a multithreaded test, but that's almost certainly only due to limitations in power delivery and thermals.


With twice the cores at a higher per-core speed, yes. And also with a much more powerful GPU.


Yeah probably 3-4x the TDP (35-54W) with double the threads, 1.8GHz faster per core clock speed and a relatively powerful on-board graphics unit.


> All of those tasks can be done with decent single-threaded performance, but lots of those tasks also benefit from multithreading

Most of those tasks benefit from multiple processors. Multithreading is less clear-cut because you're trading the win for under-optimized code against increased pressure on shared resources (which is one of the reasons why it's opened some windows for security attacks). It's not hard to find pro workloads which perform better without multithreading enabled, and considering that Apple will own the entire stack up to some of the most demanding apps, they're well positioned both to have solid data on the tradeoffs and to make architectural changes.


Note for the downvoters who might be confused: I’m using multithreading in the hardware sense of simultaneous multithreading (SMT), which Intel refers to as HyperThreading:

https://en.wikipedia.org/wiki/Simultaneous_multithreading


Hyperthreading is indeed meant for languages like Python or Javascript that use pointers everywhere. Once you have an optimized workload with little pointer chasing, the only other meaningful benefit of SMT comes from the fact that you can run floating-point workloads alongside integer workloads. That's a pretty rare situation, but it does happen sometimes.


That was basically my thought: there are plenty of programs which it can help (almost all business apps) but not all of those are limiting anyone’s work and the feature isn’t free. Having multiple true cores has been common for multiple decades now and I’d be really curious whether a modern chip design team would feel it’s worth investing in if they didn’t already have it. My understanding is that SMT has a power cost comparable to extra cores and given how well Apple’s CPU team has been executing I’d assume there’s been careful analysis behind not implementing it yet.


> AMD is tiny compared to intel, the fact that they are besting them goes to show how they have been stuck for ~5 years.

I'm having trouble following the reasoning.

> AMD which didn't have the funds to run a dedicated design team for laptop/desktops.

They have plenty of funds for R&D. The problem is, processor manufacturing goes far beyond the processors themselves. You have to design the entire manufacturing chain, and spend billions on new foundries which will become obsolete in a few years.


AMD is fabless, they just didn't have any money for spinning up several different chips for different markets. They even had a single chip from desktop to high end servers (with all the tradeoffs that entails).


AMD right now has a core 100% focused on low power devices with amazing performance, very likely ahead of Apple. So no, your premise is not accurate.


Why is it very likely ahead of Apple? Apple has been doing the dedicated low power core approach for a while.


Because AMD has a 4 core low power processor with multiple times the memory bandwidth and multiple times the I/O and greater performance per core at the same power draw as Apple's 2 core processor with about a tenth of the I/O and a third the memory bandwidth.


You're correct that AMD’s offerings are impressive, but that‘s vs an uncooled A12 chip. Add active cooling and a few more watts there's no reason why they couldn't blow the doors off.


> You're

> AMD’s

> that‘s

What is your method of input where you use three different characters for apostrophes?


When someone asks what "attention to detail" is, I want to point them to this post...


wow, that was amazing. I was on my iPad at the time. No idea how that came out.


I think you're either underestimating how much power an A12 consumes or overestimating what actively cooled CPUs consume per-core.

The A12 will pull around 4w on a single-core workload to come close to 9900K in performance. That's a good number, but it's not unheard of. The 4800HS is also a 4w per core CPU, and also comes close to the 9900K in performance.

The problem is increasing single-core performance becomes non-linear. It's not just a few more watts to bump from 3ghz to 4ghz. It's a lot of watts.

Simply having 4 big cores on an A12 would push it into 10w actively cooled territory as well, same as the i5 in a macbook air (a CPU that's also 10w). Add a few watts to bump the single core performance while you're at it and suddenly it's a 20w chip. Make it 8-cores to compete with the current macbook pro CPUs and suddenly it's a 40w chip.


> "Simply having 4 big cores on an A12 would push it into 10w actively cooled territory as well"

The A12X/A12Z already has 4 big cores (and 4 little cores, and 7-8 GPU cores). I imagine the A14X will follow this pattern, but the cores will be 2 generations newer, with 2 generations of performance-per-watt improvements.


Rumors indicate all high-power cores.


Heterogenous cores are a pretty huge advantage.


Power-performance is a very nonlinear curve, so it doesn't make sense to compare single-core TDPs with all-core TDPs. The iPhone XS is 2500 MHz when one core, but drops to 2380 MHz when both primary cores are in use, a 5% performance drop... but this drop lowers power from 3.85 W to 2.12 W, 45% less!

Hence it makes much more sense to treat the A series processors as 2 watt chips when looking at multi-core scores. They're targeting efficiently hitting these lower frequencies. You'll get a similar result for the 4800HS; it'll use a lot more than 4 W single-core.


I think you mean 2w cores not 2w chips? The A12X also cuts frequency when multiple are in use (as of course AMD and Intel CPUs do as well), but at 2w/core you're still talking 8w for a quad core, comparable in power to the quad core i5 in the current MacBook Air.

The problem here is a severe lack of quality data. The best we have right now is SPEC2006 which is unfortunately only single core. You're absolutely right that it makes more sense to compare like for like in workloads, but we don't have any good multithreaded cross-platform benchmarks. There's geekbench, but it's somewhere between mediocre and shitty. And then nothing else? There's then no multithreaded benchmarks that also have measured power draw on an A12/A13.

A 4800HS at its 4W/core all-core load is also still clocking higher than an A12X. When Apple has the thermal budget to spend as well, they'd almost certainly do the same thing?


Yes, 2 W/core. The difference is that at this power level, the A series chip will be a lot closer to peak performance than the Intel chip.

> A 4800HS at it's 4w/core all core load is also still clocking higher than an A12X.

This doesn't mean that much, since the range of efficient clock speeds depends on design choices, so they aren't always 1:1 comparable between architectures. Apple might well increase clock speed on their desktop chips, but unless it's only a few percent, it won't be as simple as pumping more power into the same dies; they have to actually redesign the core to operate efficiently at those higher speeds.


The A12 draws 4W when all four big cores are maxed out. Not 4W per big core. That's 4x the performance per watt of your comparable Intel or AMD parts.


No, it pulls 4W under the single-threaded spec2006. And the A13 then actually regresses on perf/watt and hits 6W in a few tests https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...

This isn't unique to Apple, either. The "big" cores in ARM CPUs have been pulling 2-4W for years and years. That's why thermal throttling is such a major issue in mobile, especially mobile games.


Apple SOCs are going to be a big part of the performance/watt gains.


>Eh? This is a flimsy claim.

Anandtech would like to disagree with your word choice.

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...


Did you read your own link?

> In virtually all of the SPECint2006 tests, Apple has gone and increased the peak power draw of the A13 SoC; and so in many cases we’re almost 1W above the A12. Here at peak performance it seems the power increase was greater than the performance increase, and that’s why in almost all workloads the A13 ends up as less efficient than the A12.

> The total power use is quite alarming here, as we’re exceeding 5W for many workloads. In 470.lbm the chip went even higher, averaging 6.27W. If I had not been actively cooling the phone and purposefully attempting it not to throttle, it would be impossible for the chip to maintain this performance for prolonged periods.

In other words, to get those good specint numbers, power was sacrificed to do it. 5W per-core power draw is right in line with a typical x86 laptop CPU, too. 4800HS sits at 35w, or 4.3w per core.


Yes. I saw that there were already other responses to your comment, but I'd like to add my own, quoted from the conclusion of the article:

"But the biggest surprises and largest performance increases were to be found in the A13's GPU. Where the new chip really shines and exceeds Apple’s own marketing claims is in the sustained performance and efficiency of the new GPU. Particularly the iPhone 11 Pro models were able to showcase much improved long-term performance results, all while keeping thermals in check. The short version of it is that Apple has been able to knock it out of the park, delivering performance increases that we hadn’t expected in what's essentially a mid-generation refresh on the chip manufacturing side of matters."

One of the key phrases is "...Apple has been able to knock it out of the park..."

The rest of the article is pretty clear - Apple gets it, and beats their competitors pretty soundly.


Apple is definitely competent at chip design, but the end result won't be leagues ahead. They might be something like 10% ahead in terms of IPC and another 10-20% just because they get early access to 5nm compared to whatever "ancient" process Intel is using.


Everyone seems to think about cost and speed all the time.

I think that's only part of the story, which also needs to include the ability to add features and control the entire feature set across chips on all of their devices.

Encryption, ML, graphics, power management, security, etc. are all things that Apple can now add or remove as needed.

The level of optimization they can do is now well beyond just speed and price.


Right now absolutely everyone else is ahead of Intel in the nm race, with some currently shipping chips two process nodes ahead, and several announced chips going so far as half the feature size of Intel's current process node. This is partially why AMD has been able to offer laptop CPUs that rival Intel's desktop offerings for less cost.


You do realize that's 5W total, not 5W per core? Apple is currently around 4-5x the performance per watt of Intel's most efficient parts.


It's a single-threaded test. That's 5W on a single core.

Here's the A12's power/frequency curve: https://images.anandtech.com/doci/14892/a12-fvcurve.png

That's the curve per core (that's how those charts work). 3.85W per core @ 2.5ghz is what a big core A12 is spec'd at.


That's fair. However, according to the same chart, it draws 1W at 2000 MHz, which should give 80% of the single-core performance. This is how Apple is getting 4x the perf/watt of Intel and AMD competitors in Geekbench and similar workloads. Apple is able to achieve very high multithreaded performance within a 5-6W TDP by running all the cores at around 80% of peak performance.


> Those curves also apply to Intel & AMD. As in, you can drop frequency on AMD to also achieve significant improvements in perf/watt. That's not a unique aspect of the A12. That curve is more "this is how TSMC's 7nm transistors behave" type of thing.

> Geekbench only measures perf, not perf/watt. It does not try to achieve maximum perf/watt, nor has Apple tuned the A12/A13 to achieve maximum perf/watt in Geekbench either. Geekbench's single thread numbers where it "competes with Intel & AMD" are also these ~5W per core power figures.

While similar curves also apply to Intel and AMD, their mobile parts are drawing 10-20 watts per core to achieve the very top results. When you're using all the cores together under a 6W TDP (as Apple is doing), Apple is able to achieve a much higher Geekbench result than Intel or AMD parts set to a comparable TDP, or even 3x the TDP. Compare multi-core Geekbench scores of Apple's parts running at a 6W TDP to Intel or AMD's most efficient parts running at a 15W TDP, and you'll see that Apple outperforms them while drawing 1/3 the power. Similar curves apply, but Apple can achieve far more at 1W per core than any x86 competitor.


Those curves also apply to Intel & AMD. As in, you can drop frequency on AMD to also achieve significant improvements in perf/watt. That's not a unique aspect of the A12. That curve is more "this is how TSMC's 7nm transistors behave" type of thing.

> 4x the perf/watt of Intel and AMD competitors in Geekbench

Geekbench only measures perf, not perf/watt. It does not try to achieve maximum perf/watt, nor has Apple tuned the A12/A13 to achieve maximum perf/watt in Geekbench either. Geekbench's single thread numbers where it "competes with Intel & AMD" are also these ~5W per core power figures.

I'm not sure where you're getting this random 4x better number from anyway?


Looking at the most efficient parts from Intel and AMD today, 3x the perf/watt would be more accurate. Operating at a 5-6W TDP, the 2018 Apple A12X gets a multicore Geekbench score of 4730. Operating at a TDP of 15W, the i7-1065G7 (Ice Lake) gets a multi-core Geekbench score of 4865. This is on Intel's 10 nm process that's comparable to TSMC 7 nm. Near equal performance for 3x the power.

I'd expect a 2020 A14X or whatever it's called to comfortably beat what they could achieve in 2018, so getting 4-5x the perf/watt of Intel and AMD's best is what I'd expect when operating at similar points in the frequency/power curve. The A12X was around 4-5x the perf/watt of what Intel and AMD had out in late 2018.

https://browser.geekbench.com/v5/cpu/2639065 https://browser.geekbench.com/v5/cpu/2638528
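
The ~3x figure follows directly from those two runs; a quick check (both TDPs are nominal, and the A12X's real draw is an estimate):

```python
# Perf-per-watt from the two cited Geekbench 5 multi-core runs:
a12x_score, a12x_watts = 4730, 5.5   # Apple A12X at a ~5-6W TDP
i7_score, i7_watts = 4865, 15        # Intel i7-1065G7 at its 15W TDP

advantage = (a12x_score / a12x_watts) / (i7_score / i7_watts)
print(f"A12X perf/watt advantage: ~{advantage:.1f}x")
```

That works out to roughly 2.7x, consistent with "near equal performance for 3x the power."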


Unfortunately I could only find this old chart [0] showing how power draw scales with frequency on Intel. However for the sake of demonstration it should be more than enough.

The chip needs around 25W at 2.5GHz and 200W at 4.7GHz. 8x more power for 1.88 times the performance. In other words Intel chips running at 2.5GHz are 4.25 times more efficient than Intel chips running at 4.7Ghz. No magic. Once Apple has chips that go this far they will suffer from the same problems.

Here is a slightly newer chart [1] that demonstrates a 57% increase in power consumption for a 500MHz frequency gain (a 12% performance gain).

[0] https://www.extremetech.com/wp-content/uploads/2018/09/Clock...

[1] https://www.extremetech.com/wp-content/uploads/2016/02/power...
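
The 4.25x number can be reproduced from the chart's endpoints (assuming performance scales linearly with clock, which if anything flatters the high-clock case):

```python
# Efficiency at the two endpoints of the power/frequency chart:
p_lo, f_lo = 25, 2.5    # watts, GHz
p_hi, f_hi = 200, 4.7

perf_gain = f_hi / f_lo                     # 1.88x the clock...
power_cost = p_hi / p_lo                    # ...for 8x the power
efficiency = (f_lo / p_lo) / (f_hi / p_hi)  # perf-per-watt ratio, low vs high clock

print(f"perf gain:  {perf_gain:.2f}x")
print(f"power cost: {power_cost:.0f}x")
print(f"2.5GHz is {efficiency:.2f}x more efficient than 4.7GHz")
```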


If it was that easy then everyone would do it. I call this the curse of the single number. There is this complicated machinery with lots of parts with different shapes, some are bigger some are smaller. However, the customer is not aware of the complexity and only sees a single number like 5W and maybe another number that showcases the performance score of the chip. Surely, since that is the only information we have about power consumption and performance it must be true in all situations. The reality is that those two numbers were measured during different situations and combining them into a meaningful calculation might actually not be possible.

For example. Geekbench measures peak performance of all cores at the same time and the power draw may go above 5W.

The 5W TDP may refer to normal day to day use where one or two cores are active at the same time for the duration of the user interaction (play a game for 5 min or something) and once the user stops using the phone it will quickly go back to a lower TDP.


You're comparing peak single-core with all-core TDP/cores. Zen 2 cores peak at about 10 watts per core... they probably have at least 30% frequency-increase headroom... plus whatever bump they get when they move to 5 nm. I expect at least a 50% per-core performance increase when they refresh their laptop/desktop lines, and probably twice the cores. And they'll also be saving a few hundred dollars per laptop...


Look at the A12's power/frequency chart: https://images.anandtech.com/doci/14892/a12-fvcurve.png

Increase frequency by 30% and the A12 will also be hitting 10 watts per core - the end of that graph is going real vertical real fast.

Since Zen 2 and A12/A13 are all on the same TSMC process this shouldn't be that surprising...

> I expect at least a 50% per core performance increase

Based off of what evidence? That'd be an unheard-of improvement. TSMC isn't claiming anything close to that in pure transistor switching frequency for 5nm. They are predicting 15% frequency gain (at the same complexity and power) or a 20% power reduction (at the same frequency and complexity) over their 7nm process.


I meant cumulatively. I'm assuming a "A14x+" on 5 nm. Comparison to A13.

So a A14x+ (15 W+)

30% frequency increase vs A13, because of higher TDP +

10% IPC increase (larger caches, design tweaks) +

15% Frequency increase, due to 5 nm.

An i7-1065G7 costs Apple something like $400. The A13 costs something like $60 to manufacture. Apple is highly motivated.


> They are predicting 15% frequency gain (at the same complexity and power)

My intuition is that 50% might be overoptimistic. But going from iPad to Laptop thermal constraints, you'd expect a big increase in frequency just from clocking the thing higher, no?


Apple would lose their edge over Intel because most of the efficiency gains come from the lower frequency.


> Apple would lose their edge over Intel because most of the efficiency gains come from the lower frequency.

Presumably that would only apply when the cores are actually running at full-throttle though. For casual use there could still be considerable gains if the processors are better at power management (which they most likely are, as they've had to hyper-optimised for this for phones).


AMD is small though. I have no data to back up my gut but anecdotally I feel like they don't have the manufacturing capacity to keep up with Apple's demands right now.


AMD is the provider of APUs/CPUs and Graphics of both PlayStation 5 and Xbox Series X.

AMD represents about 2/9 of processors on Windows and 3/10 on Linux in Steam's monthly hardware survey, and rising; in the same survey, Windows represents 95% and macOS 4% of computers. https://store.steampowered.com/hwsurvey/processormfg/

I think they can manage the production to provide for all Apple CPU needs.


The chips are going to be manufactured by TSMC whether they're AMD or Apple chips.

And the game consoles show that AMD can put together a secure, x86 chip at high volumes at leading nodes.


Agreed on all points.

I saw speculation elsewhere that this change, along with AWS's addition of Graviton-based (their own ARM processors) instances at much more competitive price points relative to x86, are bound to spearhead the change to "ARM by default."

If your devs are already using ARM, and ARM's notably cheaper in the cloud, that's a compelling case. If you're already using Kubernetes / Docker heavily, you're probably already 80% of the way there. Linuxes that aren't supporting ARM as a "first class citizen" will soon, and undoubtedly that will be a speed bump at worst.

I'm interested to see the specs relative to the x86 Macs, but the only open question to me was whether or not we'd see the x86 emulation layer. Well, we did, and it may not be perfect but it certainly looks like they put a lot of effort into it. If it works as well as it looks, I think this transition is borderline inevitable. I think I've bought my last x86 hardware.


This claim doesn't really hold up. The problem here is the vast majority of non-Apple laptops & desktops that are in use. THOSE will still all be x86 for the foreseeable future as ARM CPUs not made by Apple all have terrible per-core performance. Graviton2 compensates by just throwing 64 cores at the problem, but that's not going to do anything for your Electron-based text editor that struggles to use 2 CPU cores in the first place. Or for a typical webpage, which struggles to use more than a single CPU core.

But otherwise as a developer-focused example the 32-core Epyc Rome compiles Build2 faster than the 64-core Graviton 2: https://openbenchmarking.org/embed.php?i=2006047-PTS-EPYC2EC...

That's going to matter when a company is spec'ing out workstations to buy, which are unlikely to have an Apple option on the table at all in the first place, and Amazon isn't going to sell you Graviton2 CPUs to put under your desk, either.

This _could_ be the start of a bigger focus on ARM, definitely, but to really make inroads into what devs use you'll need someone other than Apple to step up to the plate. Or for Apple to become vastly larger than they are in the desktop space. Otherwise we'll all just keep cross-compiling like we have been for the last decade of mobile app development.


I can't agree with the characterisation here that only Apple can make decent Arm cores. Graviton is apparently pretty closely based on an Arm Neoverse N1 CPU and the 64 vs 32 core point is comparing a hyperthreaded part vs one that isn't. Plus Graviton seems to be materially more cost effective.

However, there is a real challenge here and that's who has the capability and incentives to make laptop and desktop Arm cores. Microsoft probably, but hard to see many other firms doing so.

So a scenario where Apple gains a material lead in desktop and laptop performance over everyone else and grows market share as a result seems quite credible.


> Graviton is apparently pretty closely based on an Arm Neoverse N1 CPU and the 64 vs 32 core point is comparing a hyperthreaded part vs one that isn't.

How does hyperthreading change the story here? The 32-core CPU is the one that has hyperthreading, while the 64-core one doesn't. Hyperthreading is widely regarded as being worth around +20% performance for multithreading-friendly workloads. Either way, the per-core performance of the 32-core x86 CPU is nearly 2x that of the 64-core ARM one. That's not a good look for being desktop-viable.

Especially when the 32-core x86 cpu also comes in a 64-core variant. And then a 2P 64-core variant even. You can have double the CPU cores that are each 2x faster than the Graviton 2 CPU cores.

Which gets back to: only Apple has managed to get ARM to good per-core performance so far.

> Plus Graviton seems to be materially more cost effective.

The c5a.16xlarge is the same price as the m6g.16xlarge. No cost effective difference in that head-to-head.


Disclosure: I work at AWS building cloud infrastructure

> The c5a.16xlarge is the same price as the m6g.16xlarge. No cost effective difference in that head-to-head.

c6g.16xlarge is more than 10% cheaper than m6g.16xlarge (and c5a.16xlarge). It also provides more EBS and network bandwidth, and provides 64 cores versus 32 cores with SMT.

https://ec2instances.info/?region=us-west-2&compare_on=true&...


I actually agree that x86 will dominate the desktop for quite a while yet. I also agree that EPYC has materially better performance than the Graviton - Rome is very impressive.

Just can't agree though that only Apple has the ability to make desktop / server Arm parts that don't have 'terrible' per core performance. The real issue is who has the economic incentive to build competitive desktop parts - I don't see anyone who would see it as worthwhile.


That's the fundamental problem with the Apple monopoly. I would be perfectly happy if I could use a non-Apple laptop with an outdated Apple SoC. However, since only Apple gets access to their SoCs, everyone is worse off.


c5a and m6g instances are the same price, but m6gs have twice as much memory. c6g instances are a better point of comparison for c5a – same vCPU count still, same memory, marginally better network at 8xlarge and up, and about 88% of the price.


> Graviton2 compensates by just throwing 64 cores at the problem, but that's not going to do anything for your Electron-based text editor that struggles to use 2 CPU cores in the first place. Or for a typical webpage, which struggles to use more than a single CPU core.

More cores will help your typical developer who's running 8+ apps at once, along with several browser tabs that are all running in separate processes.


Webapps are kinda like mobile phone apps. Only one tab is in the foreground, and therefore only one process is actually running latency-sensitive code. It's very unusual for a web app to use significant resources in the background, since no rendering is taking place. Of course there are exceptions to the rule. One or two powerful cores are often all that's necessary.


Perhaps. But it's not at all uncommon for me to be running Chrome with devtools + Firefox + webpack + Sublime Text + xcode + a second webpack for react-native + an iphone simulator + Android Studio + flipper for debugging + Slack...

This definitely makes my computer run slowish (esp. Android Studio!). Of course I can shut things down and run fewer things at once, but it would definitely provide value to me not to have to.


I'm no expert on ARM vs x86 performance, and AWS's own language is careful to specify it's only significantly more cost-effective for certain workloads.

It'll be interesting to see how fast improvements are made in both Apple's and AWS's processors. That's another factor I see contributing to this: if Apple's pace of processor improvements continues as it has for iPhone and iPad, it'll be tougher year after year for competitors to stick with the status quo.


Does Docker work on ARM or does hypervisor have any Intel specific features?


Docker works on ARM, using ARM images.

Current hypervisor.framework is a very thin wrapper over Intel's virtualization extensions, so it'll have to change pretty heavily to accommodate ARM.


Apple chips are fast mostly because they have a lot of cache to spare.

Take the A12Z, for example. It has 8MB of L2 cache (not L3, it's L2!). The Intel Core i7-1068NG7 present in the latest MacBook Pros (which performs akin to the A12Z according to Geekbench) has only 2MB of L2 cache.

No other ARM CPU has this much L2 cache. Apple chips are not "magical"; Apple can simply afford to pack in lots and lots of cache because they are not in the silicon "race to the bottom" that Qualcomm and Intel, for example, are. L2 cache is very expensive, and Apple is just hacking its way up by packing in as much L2 cache as they can.

Don't get me wrong, it's not that Apple is "right" or "wrong" to do this. They just can, and did. However, their CPUs are not so different from other ARM ones; they just happen to have a budget and a business model that lets them ignore the price/performance ratio when designing chips in order to achieve maximum performance.


> Take for example the A12Z. It has 8MB of L2 (not L3, it's L2!)

It's not really that clear cut. You could also argue that the A12Z has 8MB of L3 and 0MB of L2. The L2 in the A12Z is shared while the L2 in the Intel CPU is not.

Similarly the latency of the A12Z's L2 is a lot higher than the latency of Intel's L2, but also then still lower than Intel's L3. https://images.anandtech.com/doci/13661/A12X-lat.png & https://images.anandtech.com/doci/14664/ICLlat.png

So it's not "traditional" L2 as you're familiar with it, it's more like an L2.5 or something. Although still accurately called L2 as it is the second level of cache, it's just that Apple went with a rather different cache hierarchy & latency structure than Intel did. But it's really not at all accurate to compare the 8MB of L2 on the A12Z to the 2MB of L2 on Ice Lake. Those are very different caches.


Latency increases as cache size increases. That's a well-known fact.


In college my comp. architecture prof. used to say 'cache is king'.


Why do you put "magical" in quotes, as if someone said that?

Further, your point is that Apple chips are faster for easily replicated reasons. Intel is charging hundreds of dollars for their chips -- you should charge them some consulting fees and tell them how easy it is to boost their performance! And that's before considering that your whole analysis is flawed to begin with and you're comparing apples and oranges.


I think you misunderstood the comment. Intel CPUs are already performing well and it is Apple that is using the same strategy to reach Intel level performance (or even go slightly above it). What you missed is the fact that other SoC vendors like Qualcomm don't follow this strategy and therefore end up with cheaper but also lower performance SoCs. Since Intel doesn't manufacture ARM chips, you are now forced to go with Apple if you want good performance from an ARM chip.


The comment was very literal in comparing the cache on the Apple chip to the Intel chip.

And just to be clear, cache is "expensive" in die size. They aren't putting an order in for L2 cache to Samsung or something.

That Apple chip has a die size less than half that of the Intel chips it outperforms. So the whole "expensive" claim is debunked before it even gets started.

Further we are very explicitly comparing Apple silicon to Intel because that is exactly the transition that's happening here.


The Apple chip doesn't have all big cores, and the A12 has only 6 cores total; Intel and AMD (per CCX) have 8 big cores with SMT. Apple's multithreaded performance is accordingly slower. Once you account for these two big omissions, you'll likely find Apple takes as much or more die area than AMD's Zen 2 CCX for similar MT performance.

It'll probably be more die area for equivalent performance, which for Apple might not be an issue given its margins. Of all the ARM designs we've seen, cache is by far the unique factor in Apple's design, so comparing die size with equivalent cores+features makes complete sense.

Like others have mentioned, maybe Apple will just focus on implementing new instructions, but at that point, they will likely diverge enough from the ARM ecosystem that developers and users should be somewhat worried.


Amazing how quickly all of the goalposts are moving so people can desperately try to diminish whatever Apple does. Now it's die size? Or, odder still, die percentage.

Firstly, the A12Z is 8 full cores. The "small" cores aren't limited to a subset of instructions or something; they're built with a more efficient, but lower-headroom, design. That is a 120mm2 die, versus 197mm2 for the Ryzen 7 3700X (8 cores).

Oh but wait, the 3700x has no integrated graphics, no video encoder/decoder, no 5TFlop neural network, no secure enclave... It's absolutely huge comparatively, and has a tiny fraction of the features.

This whole die size nonsense really isn't panning out, is it?

The 3700X is of course a faster chip (not in single threads, but when all cores are engaged), but that's with active cooling and a 65W+ TDP, versus about 6W for the A12Z. Oh, and it's even a year newer than the Apple chip which is just relevant for a development kit.

Maybe we can prioritize based upon how many "Zen" codenames exist in the product. There the A12Z clearly falters!


The 3700X CCX die has 8 cores + 36MB of L2+L3 cache and is just 74mm2; the IO die, with PCIe 4, DDR4 and other IO, is 125mm2 on 12nm. For a total of 199mm2.

If you want to compare CPU, graphics, video and NN, then the AMD 4800U die is 156mm2; that chip has 4MB more L2+L3 cache (12MB total), a much better GPU with FP16 for 4TFlop NN, and full AVX2+SMT cores, which the A12Z lacks. The little A12 cores might be full-ISA, but they're 1/3 the die area and lower performance. NEON is half the size of AVX2, and the GPU difference alone would likely push the A12Z past 156mm2. And there are 15W/45W versions of this chip going as low as 10W. The A12Z is likely around 10W+ too in the iPad Pro and the devkit, but I can't find sources on this.

Looking a lot more competitive now isn't it?

The Qualcomm 855 is 73mm2, and the A12 is 83mm2, and the performance gains here are impressive. Beyond this, A12Z 120mm2 vs AMD APU 156mm2 and it's starting to look like a much closer fight, and by no means a perf/watt or perf/$ advantage for Apple until we see real systems.

Die size is _the_ trade off Apple is making with their ARM/RISC+loads of L2 cache design. It's a trade off every chip makes, but it's especially important here with large cache sizes. I don't doubt in a couple of generations Apple can compete with an AMD 4800U CPU+GPU on real world multi-threaded tasks at 10W (assuming 15% increases/gen), but the 4800U is already a few months old now. Apple fanboys never learn. Sigh. Also, Apple fanboys are the new Intel fanboys when stressing single thread performance.

Sources: http://www.hw-museum.cz/cpu/414/amd-ryzen-7-3700x https://www.techpowerup.com/264801/amd-renoir-die-shot-pictu... https://www.cpu-monkey.com/en/igpu-amd_radeon_8_graphics_ren... https://en.wikichip.org/wiki/qualcomm/snapdragon_800/855 https://en.wikipedia.org/wiki/Apple_A12


> Apple fanboys never learn. Sigh. Also, Apple fanboys are the new Intel fanboys

Please edit flamebait out of your posts here. It's against the rules for good reason, and it evokes worse from others.

https://news.ycombinator.com/newsguidelines.html


"Apple fanboys never learn. Sigh."

Just to be clear, you (and several others running the same playbook) are attacking Apple's entrant from every possible dimension, cherry picking specific micro-traits from various other systems (even if they aren't SoCs and have a tiny fraction of the functionality -- hey, if you can tease a dumb argument out of it...) and turning that into some sort of Voltron combined creation to claim..."victory"? And people impressed with Apple's progress based upon actual reality are the "fanboys"?

Again about cache. To repeat what has already been said, the A12Z doesn't have an L3 cache. Its L2 cache effectively acts as an L3, given that it isn't per-core.

The A12Z has 8MB of this L2+L3 cache. The 855 has 7.8MB of L2+L3 cache. The 4800U has 12MB of L2+L3 cache. The 3700X has 36MB of L2+L3 cache. So tell me again how the A12Z is somehow hacking the system or cheating? This is an outrageously dumb argument that the, I guess, "AMD fanboys" have all fed each other to run around trying to shit on Apple, and it betrays a complete lack of knowledge -- just copy/pasting some bullshit.

Enough about the stupid cache nonsense because it has no basis in reality.

"Also, Apple fanboys are the new Intel fanboys when stressing single thread performance."

It is the single most important facet of a single-user performance system, or we'd all be using shitty MediaTek NNN-core designs.

And, I mean, the A12Z annihilates the 4800U at single thread performance, and equals it at multithread performance...for a little tablet chip, and despite that 4800U having that mega, super, giant hack of die size cache, and despite it boosting that single core to 4Ghz, versus "just" 2.49Ghz for the A12Z.

Oh, and that Apple core has a 5TFlop neural engine aside from the GPU. Separate hardware encoders/decoders (not as a facet of the GPU). Camera controllers. And on and fucking on.

What Apple has done is very impressive, and I imagine on their desktop/laptop chips they'll be a lot less conservative, likely with all "Big" cores. Maybe they'll even put dedicated L2 cache!

Sidenote: you talked about the AMD chip being a "couple of months" old. The A12Z we are talking about is over two years old. You understand that we don't know what Apple is going to drop in their actual production designs; we are talking about the A12Z because they happened to be confident enough to demo their systems on it.


Time for more corrections, I don't keep up with Apple stuff. The 855 has ~5-6MB of L1+L2+L3. The A12X/Z has ~18MB of L1+L2+System cache. That's ~2x the performance and ~3x the cache against the 855, and 10% worse performance than the 4800U where AMD has 30% less cache at 12.5MB (L1+L2+L3). The 6 core A13 has 28MB of L2+System cache and is maybe 10% faster on single thread than the 15W 4800U with just 12.5MB! of cache.

Here's a I can haz cache meme for you: https://i.imgflip.com/468v8g.jpg

You want to compare desktop systems against a mobile chip, but get blown out completely on multi-thread performance; then, comparing against a laptop chip, when people point out the cache amounts you say "but look at the single-thread performance". Who is the fanboy here? Apple can spend the money on die size/cache if it wants for single-thread performance, but the rest of us care about a complete multi-core CPU+GPU system. More cache means somewhat lower clocks and power use too, big surprise.

AMD 4800U FP16 4TFlop is 8TFlop for FP8 which is what Apple has, so enough of that. The 8 AVX2 units in the 8 core 4800U will do another ~1TFlop of FP32 if needed in 15W. The A13's AMX seems to have about 1TFlop more of FP8, which is like dual core AVX2 and not 8 cores of AVX2.

Audio/Camera and Video decoders/encoders all do the same stuff anywhere and are basically a commodity for any number of standards, so enough of that too.

Just to be clear, you and other Apple fanboys just can't handle that what Apple currently has in CPU terms is in no real way better than a 4800U. Single-thread performance (with loads of cache!) is important to JS in the web browser, but by now even most AAA games will do better with more cores, and most real-world tasks also do better with more cores. I'm just comparing reality, and you and other fanboys are the ones that aren't.

The 4800U is being generous for multicore CPU+GPU; the A12Z is about equal to the 12-watt, 4-core/4-thread Ryzen 2300U in multithreaded+GPU tasks, and that's a 2-year-old, cheaper processor, while Apple is currently selling the same performance in a $1000+ iPad. I guess this is only possible because of fanboys. Even this is impressive to me, given it's an ARM processor + in-house GPU and Apple has been making chips for all of a decade now, but I lost all respect for people touting single-threaded performance (with loads of cache!) 15 years ago when consumer dual cores first came out. The 2300U will run Shadow of the Tomb Raider at ~30FPS, for reference.

Sources: https://www.anandtech.com/show/14892/the-apple-iphone-11-pro... https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-re... https://en.wikichip.org/wiki/qualcomm/snapdragon_800/855


Corrections? LOL.

"The A12X/Z has ~18MB of L1+L2+System cache."

The A12X/Z has 256KB of L1 cache per core, 6MB of L2/3 cache shared by all cores.

(256*8)+6144 = 8,192KB, i.e. 8MB of L1+L2 cache.

It has no L3 cache. I don't know where you invented this so-called "system" cache, but are we now ridiculously adding GPU core caches or something absurd? Knowing this argument, probably.

The 855 has 512KB of L1 cache, 1,768KB of L2 cache, 5,120KB of L3 cache.

512+1768+5120 = 7.4MB of cache

You seem to be pulling numbers out of your ass, so refuting the rest of the bullshit you're inventing is a rather futile exercise. But keep on talking about "fanboys". LOL. You came straight from some sad AMD-rationalization website.


If you continue to break the site guidelines we will ban you.

https://news.ycombinator.com/newsguidelines.html


I'll bite.

  > Current A12z chips are highly performant; Apple is
  > roughly one chip cycle ahead on perfomance/watt from 
  > any other manufacturer.
We haven't been able to compare them. Micro-benchmarks do not count because mobile versions of Apple chips haven't been designed for desktop requirements. People love comparing CPU cores with micro-benchmarks, but the hardest thing for a modern desktop/server chip is to feed data to many cores while maintaining cache coherence.

  > So, I’m predicting an MBP 13 - 16 range with an 
  > extra three hours of battery life+, and 20-30% faster.
Before agreeing with your estimates, I want to play with a true 8-core Apple CPU with a large multi-level cache first. Building the Linux kernel with -j16 will be a fun exercise. Look, AMD is not stupid, they're on the same node Apple will be using, and they're not 30% faster than even a 5-year-old Skylake.

  > Apple has to pay Intel and AMD profit margins for 
  > their mac systems. They are going to be able to put 
  > this margin back into a combination of profit and 
  > tech budget as they choose.
I wonder how their conversations with TSMC go. With Intel, at least they had AMD to use as a bargaining chip. With TSMC there's no alternative.

  > One interesting question I think is outstanding - 
  > from parsing the video carefully, it seems to me 
  > that devs are going to want ARM linux virtualized
  > vs AMD64.
That's the big one. The world's software is built for and runs in data centers, not laptops. Our machines are increasingly becoming nothing but thin clients, remote displays that happen to run Javascript. CPUs do not matter. And I suspect that's the real reason they're switching.

But from the developer's perspective, it's incredibly convenient to use the same platform (OS + instruction set) as the machine they're targeting, even for interpreted languages. Linus Torvalds wrote a well-articulated email about this a while ago; IIRC he was commenting on POWER, but I think his points are valid. At my company, devs keep struggling with Docker on a Mac. Add the ARM pain to that, and I wonder how many will finally get a ThinkPad. Developers will switch to ARM when the majority of AWS instance types go ARM.

P.S. I love how "old-tech" HN is, but for the love of god, give us a decent way to "reply with quote".


> P.S. I love how "old-tech" HN is, but for the love of god, give us a decent way to "reply with quote".

You have to copy and paste, but that's not too hard, even on mobile.

The main thing is to not use code formatting, and not break up a quoted sentence or paragraph into multiple lines.

Instead, do it the way I quoted your comment above, like this:

  > *Entire quoted paragraph.*
That will render nicely on all devices regardless of the length of the paragraph. If you quote multiple paragraphs, add a blank line between each paragraph so they don't run together.


Wasn't Linus' email implying that if you were to run something like Docker natively on ARM, then the images you build would be ARM-specific? You are not going to spend the time and effort of running the build on an x86 machine just to then deploy on another x86 machine. You will just deploy your Docker images straight to an ARM server.


> Look, AMD is not stupid, and they're on the same node Apple will be using

Could you elaborate? What do you mean with the "same node"?


They're both making 7nm chips at the same TSMC fabs. Essentially, Apple will not have a "process advantage" over AMD. They may release their desktop chips on the latest 5nm TSMC process to get that "wow" product and make a good initial impression, but AMD will be right behind them with equivalent desktop x64 chips. The current rumor is that late 2021 or early 2022 is when AMD will have a "Zen 4" on 5nm, which will almost certainly blow the pants off everything else on the market at the time.


> The current rumor is that late 2021 or early 2022 is when AMD will have a "Zen 4" on 5nm, which will almost certainly blow the pants off everything else on the market at the time.

You mean in terms of max performance, perf per watt, or... ?


Yes.


All of the above, and it will cost less to boot, and certainly will cost at minimum half of Apple's ridiculous markup.


I am pretty sure they had those TSMC conversations BEFORE the announcement.


I think it will work out fine.

Apple has an absolutely top-shelf team, designing chips.

By hand (Qualifier: Not sure if they still do, but they did, while everyone else was using automation).

They also have a great deal of experience in repaving the highway while traffic is running at capacity. They mentioned it in their keynote. They've done it three times. I have been there, for each of those times.

I was also there for the one time they completely pooched it (Can anyone say "Copeland"? Drop and give me twenty!).

It will be moderately painful. Not too bad. Quite manageable, and it will take at least a couple of years to transition.

I am in no hurry for one of those dev kits, though. They will be quite rough, and I have no compelling reason to use them.

I am looking forward to an entirely new Xcode. The current one is getting crashier every day. I'd also like to have one that can run on my iPad.


> By hand

Not using automation isn’t “badass”, it’s a sign of a deeply screwed up engineering culture. It’s on par with forcing software developers to program exclusively in machine code.

Luckily, Apple uses the standard EDA tools pretty extensively so I don’t really think this applies to them. I also agree that Apple hardware engineers are generally extremely good.


Toe-May-Toe, Toe-Mah-Toe. Some site that does tear downs (not iFixit) tore open one of their chips, once, and defecated masonry. They said the chip design was obviously hand-designed, and stood head and shoulders above other ARM architectures.

They have done OK.

Sorry if I offended you. None was meant. I’ll edit that out.


The article likely meant they did a custom implementation of the architecture, not that they didn’t use automation during the design process. At least that’s what I’d assume without reading it (I’d be interested to read if you have link). It’s basically the difference between optimizing your application by rewriting the performance critical parts (good idea) and never using a compiler (bad idea).

Also, I don’t think what you wrote is offensive in any way - no need to edit unless you feel compelled to.


This was where I saw the article, but I think the original one is gone. It was a while ago: https://m.slashdot.org/story/175407

Here's Wayback: https://web.archive.org/web/20121014135435/http://www.chipwo...

The money shot:

"So this is the first Apple core we’ve seen done with custom digital layout. In fact, with the exception of Intel CPUs, it’s one of the first custom laid out digital cores we’ve seen in years! This must have taken a large team of layout engineers quite a long time. The obvious question is, why? This is a more expensive and time-consuming method of layout. However it usually results in a faster maximum clock rate, and sometimes results in higher density. Certainly one possibility is that Apple could not meet timing on a automatically laid out block, and chose to go with a custom laid out block. Was this a decision at the architecture stage, or did timing fail late in the design cycle and a SWAT team of layout engineers brought in to save the day? We’ll probably never know, but it is fascinating, and also 2X faster (according to the below image)"

So layout; not design. My brother is the one that does this kind of thing; not me.


Hand layout is certainly impressive but also a very far cry from no automation. I'd also like to point out this was probably the result of a design problem that had to be fixed through sheer brute force rather than something to aspire to. The quoted part of the article sort of says as much, albeit in an oblique way.


Fair 'nuff. I used improper superlatives.

There was another article that I read back then, by a more "reputable" site, maybe an IEEE pub. Can't remember; it was a ways back.


Why do you feel that Rust is behind on ARM? I can't comment on the performance, but everything I used that was purely written in Rust compiled and ran perfectly on my PineBook Pro (with the exception of alacritty, but that's because the PBP doesn't support OpenGL 3.x).

Go does have the (platform) advantage (?) of preferring the "rewrite everything in Go" approach, so those projects just transition when the tooling supports a new architecture. Rust is intentionally going with an interop design, rather than telling people the only answer is to rewrite all their favorite libraries.


Rust and ARM is just fine. For example, Cloudflare famously keeps their entire stack cross-compilable to ARM, and even ships Rust on iPhones.

There are two areas where I believe you could call it second-class at the moment, though:

1. There are no ARM targets in Rust's "Tier 1" platform support.

2. std::arch doesn't have ARM intrinsics in stable.

For 1, ARM targets are in a weird space; they aren't Tier 1, but they're closer than most of the other Tier 2 targets. Several ARM targets are Tier 1 for Firefox, for example, so they get a bunch of work done there.

For 2, well, there hasn't been as much demand before. I expect that to change because of this.

... we'll see what the future holds :)
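To make point 2 concrete, here's the usual shape of arch-gated SIMD in Rust: NEON intrinsics behind a cfg(target_arch = "aarch64") gate, with a scalar fallback everywhere else. The function sum4 is invented for illustration; the intrinsics themselves (vld1q_f32, vaddq_f32, vst1q_f32) are real std::arch::aarch64 names, which at the time of this thread were nightly-only — that's exactly the second-class-citizen point.

```rust
// Sketch of arch-conditional SIMD with a portable fallback, the pattern
// std::arch is built around. The NEON path only compiles on aarch64.
#[cfg(target_arch = "aarch64")]
fn sum4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    use std::arch::aarch64::*;
    // Safe to call on aarch64 targets, where NEON is a baseline feature.
    unsafe {
        let va = vld1q_f32(a.as_ptr());
        let vb = vld1q_f32(b.as_ptr());
        let mut out = [0.0f32; 4];
        vst1q_f32(out.as_mut_ptr(), vaddq_f32(va, vb));
        out
    }
}

#[cfg(not(target_arch = "aarch64"))]
fn sum4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    // Scalar fallback for every other architecture.
    [a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]]
}

fn main() {
    let s = sum4([1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]);
    assert_eq!(s, [5.0, 5.0, 5.0, 5.0]);
    println!("{:?}", s);
}
```

Either path produces the same result, so callers never need to care which architecture they're on.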


Sounds like this will cost Mozilla some time and money, but they will have to support it.


I’m not sure why. As I said, they’ve been using Rust on ARM (as part of Firefox) for a long time; I’m not aware of them being unsatisfied with the current state.

I expect that the Rust project will end up benefiting from the extra interest though, and improving on the things above.


There has been very good progress on filling in the gaps to officially get Rust's AArch64 Linux toolchain triple to Tier-1 this year. We have CI for the Rust compiler test suites on native AArch64 silicon in a joint collaboration between Arm and the Rust lang core team and are converging on zero compiler test failures. Overall, we are pretty close to attaining Tier-1!


> a joint collaboration between Arm

Is this public knowledge yet?


We haven't been very vocal about it just yet, but all the bits to enable CI etc. and the new t/compiler-arm Zulip stream are pretty much happening in the open.

I'll be pinging all the relevant folks soon-ish (we're pushing out fixes to the last remaining compiler test suite failures this week).

I mean to keep you on CC as I did the last time!


Cool, I was dancing around it in this thread and recently because I wasn't sure how public it was :) I appreciate it!


Anything that runs Firefox also supports Rust as Rust is a core part of Firefox.


I've been working on an OS in Rust that has an aarch64 port, and I've for sure seen some... questionable output. It's all been valid code for the input, but not nearly as optimized as I've come to expect out of LLVM based compilers. I'm sure there's some low hanging fruit that needs attention is all.


> I’m not highly conversant with ARM linux, but in my mind I imagine it’s still largely a second class citizen

IMO ARM linux is great, the real thing lacking is good hardware to run it.


Yeah, I tried out a Raspberry Pi 4 as a desktop replacement and pretty much everything is supported except for proprietary stuff like games.


Eh... that was not my experience: https://www.jeffgeerling.com/blog/2020/i-replaced-my-macbook...

(Unless you're speaking almost entirely of webapps, which mostly run fine even on the measly 1.5 GHz 4-core Pi ARM processor.)


I think most of the gripes described are really issues that come from moving from a macOS desktop to a Linux desktop; if you were moving from an x86 Linux desktop to the Pi, the experience would have been much less painful.


You are stating in step #2 that the Pi4 cannot output 4k at 60Hz. Could you please insert a footnote that it actually can?

https://www.raspberrypi.org/documentation/configuration/hdmi...


I meant everything that runs on x86 Linux. Of course iMovie and Adobe stuff doesn't work, lol.


I had an ARM Chromebook for a while (around 2016) that I customized with a 256GB SD card and Linux Mint. The software all worked well, but the WiFi card died after a year and effectively bricked the damn thing. Cross-compiling might be an issue, but that's not a primary use case.


You mean all the Android handsets? The history of Linux on ARM is colorful, full of corporate missteps and giant brands (namely HTC and, to a lesser extent, Samsung) that were created from it.


None of them come close to the performance Apple achieves on their chips.


> I’m not highly conversant with ARM linux, but in my mind I imagine it’s still largely a second class citizen

In terms of distros maybe yes. Most distros are targeted at laptops, desktops or servers, and few of those have ARM processors.

In terms of architectural support by the kernel and low-level infra, I see no reason for that to be true at all. Open source kernels and (at least lower-level) userspace have for decades paid more attention to compatibility with various hardware architectures than proprietary operating systems.

Of course you'll have fewer drivers for hardware associated with a particular architecture if there's less interest for the hardware, or if the hardware is less available in form factors that most developers are interested in. But that applies at least equally to non-open source platforms. If MS or Apple don't have a commercial interest in maintaining support for a particular platform (and they usually have only one or two in mind), nobody's going to do it.


> Most distros are targeted at laptops, desktops or servers, and few of those have ARM processors.

I wouldn't bet my farm on that without a bunch of research and cross-checking, because embedded Linux is really common. Consider that manufacturers are producing very large numbers of embedded ARM micros with external memory interfaces. I think the majority of those are running Linux.


That's a fair point. I was referring to "most" distros in terms of the plain number of distros with reasonable mainstream visibility (and therefore possibly the majority focus from mainstream userspace developers), as a kind of a generous argument. The number of embedded Linux deployments is undoubtedly huge.


I am predicting the opposite. Apple isn't extending the Mac for the sake of performance or battery life.

They are going after market share.

There are close to 1 billion iPhone users, and most of them have never used a Mac. Many of them will need a second device for some tasks, and that will be either an iPad or a Mac. Out of the 1.5B total PC market, Apple has 100M Mac users. I would say it is not too far-fetched that Apple wants to double the number of Mac users to 200M.

For every $100 going to Intel, Apple could knock $200 off the retail price while keeping the same margin.
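That 1:2 ratio holds exactly when gross margin is 50%, which is an assumption about Mac margins rather than a disclosed figure. A quick sketch of the arithmetic with hypothetical numbers:

```rust
// Hypothetical numbers: a laptop at a 50% gross margin (an assumption,
// not a disclosed Apple figure).
fn gross_margin(retail: f64, cost: f64) -> f64 {
    (retail - cost) / retail
}

fn main() {
    let (retail, cost) = (999.0, 499.5);
    let before = gross_margin(retail, cost);
    // Drop $100 of component cost and $200 of retail price:
    let after = gross_margin(retail - 200.0, cost - 100.0);
    assert!((before - after).abs() < 1e-9); // margin unchanged at 50%
    println!("margin: {:.1}% -> {:.1}%", before * 100.0, after * 100.0);
    // prints "margin: 50.0% -> 50.0%"
}
```

At other margins the sustainable price cut scales accordingly: cut = saving / (1 - margin), so 50% margin is the point where $100 of saving funds exactly $200 off retail.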

A $799 MacBook (the same starting price as the iPad Pro) would be disruptive; the premium would then be small enough over the ~$500 PC notebook price.

In the longer term, I think Apple is trying to reach 2 billion active devices, and it certainly can't do that with the iPhone alone. There is plenty of market space for the Mac to disrupt.


> For every $100 dollar going to Intel, Apple could knock $200 off its Retail price while getting the same margin.

They're not going to change their retail price. It'll be the same, but "better"/"different" ... that's how they'll market it.


I suspect it'll allow them to put OLEDs on the Macs.

That would be a gigantic, unique differentiator for the Macs. Once you see it, you won't be able to tolerate a non-OLED display ever again.

I'm just thinking about my development environment: background, Terminal, etc. in pure black, with those pixels simply off... Wow.


I haven't felt particularly constrained by the CPU in a long time. My main issues have been with RAM (thankfully the new MacBooks finally started supporting 32GB) and the GPU, which has been miserable ever since Apple got into a fight with NVIDIA. It's not just that Apple doesn't use NVIDIA; it's that they won't allow NVIDIA to ship their own drivers for it.

I just want to plug in an eGPU with an RTX 2080 card. Instead, you have this incredibly limited set of officially supported cards that are also hyper-expensive. Blackmagic stopped making their eGPU Pro, so even if money is no object you can't get a great laptop GPU extension that supports their XDR displays.

Now, you might be saying: if you want a great GPU, why are you buying a laptop? Well, 1) even if I were to get the one Mac model that lets me do something interesting with GPUs (the Mac Pro), I still can't install the NVIDIA cards I want. And 2) a laptop plus a great eGPU is a well-supported setup in the non-Mac space, so it is not a bizarre request.

All of this to say: the ARM stuff is fine, but it won't really move the needle for me. It doesn't address any of my performance issues, nor, I'd argue, the performance issues a lot of people actually have (especially graphics artists).


Rust isn't behind on ARM, except for SIMD intrinsics on stable, which are not things most languages (e.g. Java, Go) have at all for any architecture.


Apple doesn’t control their machine learning stack. The models they ship are likely created and trained on PCs running Linux and NVIDIA GPUs. It’s entirely possible they’ll extend the Neural Engine to be useful for training but they’d still need to contribute or convince others to contribute to the existing tooling.


The enduser doesn't care about training. iOS devs just care about APIs that Apple has built for them.


What about developers developing those models?

I think some worries are warranted.


You can do some refinement right now already (from WWDC 2019).

But yes, most deep learning models are trained on NVIDIA GPUs. All of the other model types can already be trained on regular CPUs.

https://machinethink.net/blog/coreml-training-part1/


> those complaints are 100% down to being saddled with Intel.

And yet the new MacBook Pro base models feature an 8th-gen Core i5. That is two generations behind the bleeding edge. I think some people might be experiencing the speed problems you described because their machine has an old-gen processor in a shiny new box.


Not to mention that Apple's practice of throttling at 100°C while not giving the machines adequate cooling affects performance in a non-trivial fashion.


> One interesting question I think is outstanding - from parsing the video carefully, it seems to me that devs are going to want ARM linux virtualized, vs AMD64. I’m not highly conversant with ARM linux, but in my mind I imagine it’s still largely a second class citizen — I wonder if systems developers will get on board, deal with slower / higher battery draw intel virtualization, or move on from Apple.

It's in fairly good shape, and has an active community. With the current proliferation of IoT devices, both the kernel and userland are well-maintained, and plenty of distros are available. The kernel also benefits from much of the work done for Android and Chromebooks as well.

All of the usual FOSS software is ported and runs well. As of this moment, you could easily take your pick from Debian, Ubuntu, Fedora, Arch, Manjaro, Slackware, or Alpine just for starters, plus a whole mess of specialty distros. Many of those offer both 32- and 64-bit ARM versions.

Also bear in mind that recent ARM CPUs do support hardware-assisted virtualization as well. KVM and Xen are both available for ARM today, and I'd be shocked if Apple's Hypervisor.framework doesn't roll out with ARM support in the new macOS version as well.
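If you're curious whether a given ARM Linux box actually exposes hardware-assisted virtualization, a minimal check (a sketch; `/dev/kvm` is the standard device node the kernel creates when KVM is usable, on ARM and x86 alike) looks like this:

```shell
# KVM appears as a device node when the kernel has hardware-assisted
# virtualization support loaded and the CPU provides it.
kvm_node="/dev/kvm"

if [ -e "$kvm_node" ]; then
    echo "KVM available"
else
    echo "KVM not available (missing module or no hardware support)"
fi
```

Tools like QEMU and libvirt perform essentially this same check before deciding whether to accelerate a guest or fall back to pure emulation.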

(I'm writing this from Firefox in Manjaro ARM on a PineBook Pro that I use as my daily driver.)


> it’s exciting to imagine what Apple’s fully vertically integrated company could do controlling hardware, OS and ML stack

So true! But from a market dominance perspective, it's also a bit terrifying.


Replace Apple with Google or Microsoft and it would get shit on around here. Apple gets a pass


Apple's market share is still low enough for this to be a relative non-issue.


Apple's market cap, however ($1.5 trillion), has me worried.


I agree with you, but I think we are going to see at least one A12Z consumer product (other than the iPad), probably a rereleased MacBook with a better keyboard.


> One interesting question I think is outstanding - from parsing the video carefully, it seems to me that devs are going to want ARM linux virtualized, vs AMD64. I’m not highly conversant with ARM linux, but in my mind I imagine it’s still largely a second class citizen — I wonder if systems developers will get on board, deal with slower / higher battery draw intel virtualization, or move on from Apple.

Somewhat ironically, I think it's mostly the languages trying to be safer alternatives to C that are most behind on supporting ARM.

I've done a little bit of Lisp development on my Raspberry Pi (with SBCL and Emacs/Slime), and in most cases I don't have to change anything moving between my AMD64/Linux desktop, Intel/OSX MBP, and ARM64/Linux Raspberry Pi. And that's even when using CFFI bindings to C libraries.

I'm not sure SBCL's ARM backend is at the same level as the x86 backends, but it works well, and there's ongoing work on it.


I'm not sure I agree with you on most of this, but I think more competition in the cpu biz is always a good thing.


I'm excited. My #1 wish is a 16" MacBook Pro that weighs 3 lbs or less. Take an iPad Pro, make it 16" instead of 12", add a keyboard, run macOS. LG already makes 15.6" Intel notebooks that weigh 3 lbs. Apple can do it too!


> One interesting question I think is outstanding - from parsing the video carefully, it seems to me that devs are going to want ARM linux virtualized, vs AMD64.

Hahaha, look for the user-agent at 1:44:26 :) They used an old Intel Mac for the virtualization demo.


The main thing Apple has done to improve their A-series chips has been massive L2 caches.

I still see major advantages to putting an A-series chip into a MacBook Pro:

1) There will be a much larger thermal and power-draw envelope available to a new A-series chip. I suspect we will see insane “boosting” clock speeds.

2) Incredible “at idle” performance well beyond what x86 can provide with on-die GPU cores, which means a bit better battery life for that screen.

3) More opportunity for tightly integrated acceleration blocks on die for codecs, ML, and other hardware acceleration methods for Apple-only software libraries.

4) Easy porting between iOS, macOS, and tvOS.

#3 will be the most significant.


I don't want to repeat myself so I'll just link to my previous comment. https://news.ycombinator.com/item?id=23612245


> Rust seems behind on ARM, for instance; I bet that will change in the next year or two. I don’t imagine that developing Intel server binaries on an ARM laptop with Rust will be pleasant.

Umm what? Rust supports ARM as a first-class citizen, as does LLVM. They only list ARM at a lower tier because it's not generally a desktop platform. https://forge.rust-lang.org/release/platform-support.html
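As a quick illustration of how little friction there is (a sketch, assuming a `rustup`-managed toolchain; the target triple is the standard 64-bit ARM Linux/glibc one, and the linker package name mentioned is the common Debian/Ubuntu one), cross-compiling a crate for ARM is two commands:

```shell
# Standard target triple for 64-bit ARM Linux with glibc; see
# `rustup target list` for the full set of supported triples.
target="aarch64-unknown-linux-gnu"

# Guarded so this is a no-op on machines without a Rust toolchain.
if command -v rustup >/dev/null 2>&1 && command -v cargo >/dev/null 2>&1; then
    # Fetch the precompiled standard library for the ARM target.
    rustup target add "$target"
    # Build for ARM; this needs a cross-linker on the host,
    # e.g. the gcc-aarch64-linux-gnu package on Debian/Ubuntu.
    cargo build --release --target "$target"
fi
```

The resulting binary lands under `target/aarch64-unknown-linux-gnu/release/` and runs on any ARM64 Linux box.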


Agree with you on some points; I'm really excited to see what's next. I'm also betting on a new, faster MacBook, maybe with a discounted price to incentivize the migration?

As for virtualization, they will probably make it more efficient, resource-wise? Some cloud providers are also offering ARM, so...

Anxious to check the GPU performance too!

To add: Control Center on macOS and some other UI improvements hint at a Mac with a touchscreen?

Could we finally see a true BYOD, like Dex or using the improvements in Handoff?


I'm not sure virtualising ARM on Intel platforms will ever be performant enough to be usable. They will probably have to ship an emulator, and even then there will be issues as it'll be very difficult to emulate the strictness of ARM CPUs on non-ARM architectures, for things like unaligned memory accesses and replicating the memory model.


> Languages like Go with supremely simple cross architecture support might get a boost here.

Go's crypto libraries are heavily optimized on x86, not so on ARM (see the Phoronix Graviton2 benchmarks).

We will have to wait and see how much of Apple's halo effect will contribute positively to optimized arm code.
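The "supremely simple" part of the quote is real, though: Go selects the target OS and architecture purely via two environment variables, with no separate cross-toolchain to install (a sketch, assuming a Go module exists in the current directory; the output names are arbitrary):

```shell
# Go cross-compiles out of the box: GOOS/GOARCH pick the target
# at build time, and the same host toolchain emits both binaries.
targets="linux/arm64 darwin/amd64"

# Guarded so this is a no-op without a Go toolchain and module present.
if command -v go >/dev/null 2>&1 && [ -f go.mod ]; then
    GOOS=linux  GOARCH=arm64 go build -o app-linux-arm64 .
    GOOS=darwin GOARCH=amd64 go build -o app-darwin-amd64 .
fi
```

Whether the ARM binaries are as *fast* as the x86 ones is exactly the open question above: correctness ports easily, hand-tuned assembly in hot paths does not.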


Won't this move result in more software-compatibility issues on the developer side, though? Why would you buy the update if the developers don't want to move to the new platform, or decide it's too big a change?


You've missed possibly the biggest advantage of ditching Intel: a perfectly usable machine without jet-engine fans and a scalded lap. I don't use Macs, but I see this as good for the industry.


I think MacBooks can run heavy ML processes, but why not run those on separate devices with hardware specific to the task? I'm thinking of any kind of job you'd want to run on a GPU.


> Apple has a functional corporate culture that ships; adding complete control of the hardware stack in is going to make for better products, full stop.

One word, Catalina.


It's easy to redesign iPhone models every year; it's not as easy to increase chip performance every year. There's a lot of R&D involved. I think in the long run it'll be better, but Apple will have to devote more resources to it. You can't just wish for specs; the manufacturers actually get the hard job of trying to make them.


Not a fan of Apple, but x86 is a mess and I'd love to see companies pushing for its replacement.


What's the issue with x86? Genuinely asking


It has a lot of historical baggage, most notably implicit interdependencies that make it hard/impossible to optimize and reorder stuff. We're entering an era of many-core processors, and IMHO we'll absolutely have to move to a RISC architecture, and the sooner we get that box checked, the better.


It's compatible with all the software I own. Using proton is already a hassle. Adding emulation will just make it worse.


Software compatibility will obviously be the main problem to resolve, but I'm an optimist.


> Apple has a functional corporate culture that ships;

Well, we've already seen things get delayed and even canceled (AirPower).

Cook's Apple is more about the supply chain (where he excels), so if the silicon design does indeed work, they will be able to 'ship'.


> So, I’m predicting an MBP 13 - 16 range with an extra three hours of battery life+

I predict they'll have the same battery life. Any savings will be used to reduce the battery size.


I think without Ive they'll take the battery life and market it as the incredible win of moving away from Intel.


Agreed. I think this is especially likely given their decision to increase the battery size in the iPhone 11 lineup last year.


Paid for by Apple?


“Apple’s own pro apps will be updated to support the company’s new silicon in macOS Big Sur, and the company is hoping developers will update their apps. ‘The vast majority of developers can get their apps up and running in a matter of days,’ claims Craig Federighi, Apple’s senior vice president of software engineering. [...]

Microsoft is working on Office updates for the new Mac silicon, and Word and Excel are already running natively on the new Mac processors, with PowerPoint even using Apple’s Metal tech for rendering. Apple has also been working with Adobe to get these pro apps up and running on these new chips.”

So the bottom line is: “your previous tools won’t work, will have to be rewritten, the burden is on the developers so we can rake in more cash”

Great, customer-focused, and completely altruistic, just like back in the days when they killed NVIDIA cards on high-performance rendering and simulation machines (and everywhere else).

So, Apple has performance libraries that are better than what Intel has to offer? So cross-platform applications are now passé again?

I don’t use my iPhone for work. Why would I want iOS apps on my computer? So I can install Apple Mail instead of Outlook?


> So the bottom line is: “your previous tools won’t work, will have to be rewritten, the burden is on the developers so we can rake in more cash”

Did you miss the part where there’s going to be a “Rosetta 2” AOT/JIT translation layer?


You are aware that you will (most likely) lose all those pretty amazing optimisations you tend to rely on if you develop software that is a bit more “sophisticated” (e.g., parallel programming leveraging IPP). Can’t wait to see how fast that translation layer will be for my FFTs that have been super-optimised for Intel chips.

I wonder: I observed a 20+% performance loss on compute-intensive scientific tasks running on AMD vs. Intel because I relied on IPP. How much performance loss will we see going from x86 to ARM?

For your average text application you may not care about the performance loss. But I bet that for video, image editing, science, etc. you are easily 20+% worse off than before.

So what exactly is the benefit for the customer or the developer community?

So I can play Angry birds on my Mac?


I mostly worry about software that will never receive an update. I'm sure the tools you listed will be ported over to ARM simply because it means vendors can sell them again to the same customer.


Customers do not tend to be willing to pay for updates that “merely ensure compatibility with their new OS” ;-)


Especially not corporations :)



