
An open, high end CPU design is really going to change the cloud market. An ISA like this is a first step in that direction.

Facebook and Google already have their own compute projects and, like Amazon, have access to custom versions of silicon from a variety of vendors.

With a properly open CPU design we'll start to see the first tightly integrated, vertical "cloud" products that maybe still have a "commodity" API on the top (or maybe not?) but are custom all the way down from there.

With the end of Dennard scaling, if not Moore's Law, open ISAs and open CPU designs will radically change both the hardware and compute markets and ecosystems over the next 5 to 15 years, similar to what we saw with open source in the 1990s.

Of course, it's not clear that POWER will be the one to do that, and RISC-V isn't going to be making a grab for Intel's crown any time soon, but this looks like IBM's bid to lead in that area.

When the cloud vendors start building systems like this, they won't look too different from mainframes, and IBM wants to continue to own that market.



It's a far, far cry from an open ISA to having multiple competing vendors, let alone open CPU designs.

It was much earlier, but OpenSPARC's impact was limited-- and that was full RTL.

If POWER is open, does anyone really want to make competing high-performance designs-- let alone open them? Better to take something like RISC-V and come up with the first high performance design.

This is especially true when you consider IBM's vertical integration: IBM is the only real POWER OEM and the only real POWER semiconductor vendor.

(If we really assume a reduction of innovation in processors, and a 15 year time horizon... expiration of IP becomes a significant factor, too. Why not just make generic ARM?)


"Better to take something like RISC-V and come up with the first high performance design."

The problem is that RISC-V's mnemonics and programming model are so retarded (compared to the MC68000 or UltraSPARC) that one needs a compiler to abstract and hide that mess away. The other problem is that in the several years in which RISC-V has been hyped, nobody has come up with a 19" rack server design, let alone sold one priced competitively with a 1U PC tin-bucket server. RISC-V is all hype, and without serious hardware its impact remains questionable at best.


People have made really fast implementations of RISC-V, and it has been widely praised as being very nice.

And the fact that an ISA this new doesn't have off-the-shelf servers says nothing about problems with the ISA; bringing mass-market products to a new ISA is simply incredibly difficult.

RISC-V has barely been out of the lab for a couple of years, and the growth of software and hardware has been impressive so far. Saying it is 'all hype' is serious nonsense and speaks more about your expectations than about RISC-V.


I should hope it speaks of my expectations: you can't run server workloads on it, and it's worse to program for than OpenSPARC or the M68000. I actually want a nice processor, and server hardware to use it in, to do work. The RISC-V ISA and the hardware around it provide neither, and yet here we are: it's constantly being paraded as the ne plus ultra of central processing units.


> mnemonics and programming model is so retarded

Could you provide some examples instead of a slur?


First, it's not like the objection even matters: how nice the assembly interface is doesn't really matter for adoption at all.

And it's not too bad; it's basically very close to a modernized MIPS. There are legitimate complaints, though.

Probably the most controversial is that integer divide by zero can't be made to raise an exception.
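For reference, the RISC-V M extension defines division by zero to return fixed values rather than trap: the quotient comes back as all ones and the remainder comes back as the dividend. A minimal Python sketch of that spec-defined behavior (function names are my own, not anything from a real toolchain):

```python
# Sketch of RISC-V M-extension signed division semantics (RV64).
# Per the spec, dividing by zero does not raise an exception:
# DIV returns -1 (all bits set) and REM returns the dividend unchanged.
# The MIN_INT / -1 overflow case is likewise defined rather than trapping.

XLEN = 64
MIN_INT = -(1 << (XLEN - 1))

def _trunc_div(a: int, b: int) -> int:
    """Truncate-toward-zero division, as the hardware does."""
    q = abs(a) // abs(b)
    return -q if (a < 0) != (b < 0) else q

def riscv_div(dividend: int, divisor: int) -> int:
    if divisor == 0:
        return -1                    # all ones, no exception raised
    if dividend == MIN_INT and divisor == -1:
        return MIN_INT               # overflow is defined, not trapped
    return _trunc_div(dividend, divisor)

def riscv_rem(dividend: int, divisor: int) -> int:
    if divisor == 0:
        return dividend              # remainder is the dividend itself
    if dividend == MIN_INT and divisor == -1:
        return 0
    return dividend - riscv_div(dividend, divisor) * divisor
```

Software that wants a trap has to add an explicit branch after the divide, which is exactly the complaint.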

Similarly, omitting condition codes is something that will be distasteful to many.

Also, there are so many combinations of legal instruction subsets that compatibility may suffer. Most everything is in a large set of optional extensions (and some important optional extensions aren't really finished yet).


move dst, src, src -- I could stop right here, but wait, there is more!

lui, auipc -- because two instructions are better than a simple move.b or move.w. Really, what nonsense.
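For context on the two-instruction pattern being complained about: a RISC-V assembler materializes a 32-bit constant as a lui (upper 20 bits) plus an addi (lower 12 bits), and because addi sign-extends its immediate, the upper part must be bumped by one whenever the low 12 bits read as negative. A rough Python sketch of that split (names are mine, not assembler API):

```python
# Sketch of how an assembler splits a 32-bit constant into the
# lui + addi pair behind the "li" pseudo-instruction. addi sign-extends
# its 12-bit immediate, so when the low 12 bits are >= 0x800 they act
# as a negative offset and the lui part must compensate.

def split_imm32(value: int) -> tuple[int, int]:
    value &= 0xFFFFFFFF
    lo = value & 0xFFF
    if lo >= 0x800:                  # low part sign-extends to negative
        lo -= 0x1000
    hi = (value - lo) & 0xFFFFFFFF   # what lui must materialize (already << 12)
    return hi >> 12, lo              # (lui immediate, addi immediate)

def rebuild(hi20: int, lo12: int) -> int:
    # What the hardware computes: lui shifts left 12, addi adds the
    # sign-extended low part.
    return ((hi20 << 12) + lo12) & 0xFFFFFFFF
```

Whether you see this as an elegant fixed-width encoding trade-off or pointless busywork is, evidently, the whole argument.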

sx, ux -- I'm speechless at that nonsense.

bltu, bgeu -- because blt and bge just weren't enough -- who designs a processor like this?

lb, lh, lhu, lbu, sltiu instead of move.b -- why? I challenge the sales pitch that more nonsensical instructions amount to a simpler processor design! (Boy, does this make me mad.)
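For what it's worth, the lb/lbu split (and likewise lh/lhu) encodes a real distinction: a narrow load has to be either sign-extended or zero-extended to fill the full-width register, and the same byte gives two different values. A minimal Python sketch (function names are my own illustration):

```python
# Why RISC-V has both lb (load byte, sign-extend) and lbu (load byte
# unsigned, zero-extend): the same memory byte yields different register
# values depending on whether bit 7 is treated as a sign bit.

def load_byte_signed(mem: bytes, addr: int) -> int:
    b = mem[addr]
    return b - 0x100 if b >= 0x80 else b   # lb: sign-extend bit 7

def load_byte_unsigned(mem: bytes, addr: int) -> int:
    return mem[addr]                       # lbu: zero-extend
```

The 68000's move.b folds the extension choice into separate ext instructions instead; RISC-V folds it into the load opcode.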

It's not a slur, it really is utterly retarded, especially if one is used to programming an elegant microprocessor like the UltraSPARC or the Motorola 68000; even the MOS 6502 is more elegant.

But to each his own, live and let live, right? Well, why then must this botched processor constantly be sold and paraded as the greatest thing since sliced bread, the ne plus ultra of processors, when it isn't?


Plenty of HN readers have children with severe learning disability. Using the word "retard"[1] is likely to attract downvotes.

[1] Unless you're talking about progress or watch mechanisms.


That's exactly what I'm writing about: progress. RISC-V is not an advancement. What is the opposite of advancement? In a system, it's either regression or retardation.

And expecting people outside of the Puritan U.S. to abide by the same political-correctness norms is extremely rude, inconsiderate and exclusionist -- by those very same politically correct norms, no less -- which is to say, the U.S. should ban political correctness, and should have done so yesterday, for the benefit of everyone.


I don't care what words you use. I'm just telling you that when you describe people as retards you're going to get downvotes, and I'm telling you why that is.

I'm not American and I don't live in the US.


I didn't describe people as retarded, but their work. Even very smart people often do dumb things.


When you say things like this...

> mnemonics and programming model is so retarded

...you are going to get downvoted. This is because people who speak English as a first language understand you to mean "this is stupid, like a retard". They don't understand you to mean "this is delayed, like a watch mechanism would be adjusted".

You can keep arguing that you didn't mean what you said, but at least two people are telling you how your words are being interpreted.


...you are going to get downvoted.

I would be a sad excuse for a being if I feared what some people on a random forum think of me, or whether they "downvote" me in some arbitrary, imaginary system. The entire thing is a delusion.

Not singling out anyone in particular, but I'm a fully formed adult and have been for several decades, and I do not require upbringing, id est, anyone telling me how to behave or what not to write.

I will write how I want, and I shall not fear arbitrary decisions based on some arbitrary policies someone somewhere thought up. If that gets me downvoted or even banned, I will not let it bother me; life does not revolve around arbitrary websites trying to tell one how to behave and think, and I will damn myself into oblivion before I allow someone to impose such a thing on me. Lest we forget: I'm the only one who decides that, and I'm not allowing anyone to control my thinking or writing.


"Tightly vertically integrated" and "open" are somewhat at odds with each other.

I think far too many people seem to think that the instruction set is something you can just drop in to a chip and start stamping it out, without any appreciation for the amount of device-specific engineering that has to happen. The reason things like a "true open source" Raspberry Pi haven't happened is the $5m - $10m of work required. And for high end devices that would be required to be competitive in the cloud, that number goes up a lot.

I've not heard of Facebook, Google or Amazon doing significant custom silicon projects themselves, as opposed to just working with vendors on some customisation. The only FAANGM in that space is Apple.

IBM are like the pastoralists living in the ruins of Rome in ~1000 AD. They're a consulting firm with a grand name and history.


I'm not sure about this -- there are many open processor designs in academia if FB/Google wanted to pick them up; the difficulty is integration and software. They could more easily just work with ARM; the reference designs are available if you are FB or Google.

I guess what I'm saying is: even if a relatively modern 2-issue, OoO processor with SMT and 256-bit vectors came out open source, would anybody really bother to integrate it and fab it?

From what I see, FB and Google work with silicon vendors because they don't want to be silicon vendors.


Google have been experimenting with POWER in their datacenters for a while now: https://www.forbes.com/sites/patrickmoorhead/2018/03/19/head...

More historically, Google have been building their own networking gear for some time https://www.wired.com/2015/06/google-reveals-secret-gear-con...

I'm focussing on Google in particular because they have always had a strong preference for open components wherever possible, and they've traditionally taken advantage of that openness wherever they think they need to, even if that goes against common practice. (There's a story I can't find the link to where, in the very early days, they wrote their own patches to Linux to work around some bad RAM chips that they'd scavenged from somewhere.)

If Google can get an advantage then they will take it. They will also invest heavily, over years, to research these advantages and opportunities.

Their published position on things like ARM is still fairly accurate at the scale of their datacenters: https://research.google.com/pubs/archive/36448.pdf


The patents have expired on the i486. Does that mean x86 qualifies as a free/open ISA? Patents on 64-bit x86 will expire soon.


> An open, high end CPU design is really going to change the cloud market.

I agree. It's just that POWER does not appear to be very high end to me. At best it performs acceptably for the energy it consumes. Lowering energy consumption is what drives the margins. As a cloud vendor I would stay as far away from POWER as possible.


As a cloud services consumer, what guarantee (financial, legal, indemnification) will you grant me that your systems will not leak or otherwise tamper with my data, given that you use machines that I know for a fact you have no control over and have not audited prior to the handoff from UEFI to the hypervisor/OS? For that matter, how have you mitigated the persistent x86 rogue-DMA problem?

POWER9 still has two advantages -- security and speed. Yes, speed -- the core is quite weak on some tasks and very strong on others. If you're buying this primarily to run an AVX-intensive type of workload, don't (unless you need the security aspects). Those massively wide, vector-dependent workloads aren't exactly common in the multitenant cloud, though, unless you're using GPU offload, where POWER again beats even the newest AMD chips for pure GPU-offload performance.

So much for the good... the ugly is that POWER9 was fundamentally late and not at the performance levels we wanted, but that's a transient state. Every CPU vendor puts out a chip like that from time to time, and IBM is acutely aware of the problems here. I see no reason to go to even more problematic architectures (the x86 duopoly with master vendor keys, or RISC-V with fragmentation and weak cores / immature toolchains) when we now have a better option available.



