A little Golang way (aerofs.com)
201 points by vonsnowman on Aug 5, 2015 | hide | past | favorite | 186 comments


There have been several blog posts and conference presentations with a similar theme -- "Go is efficient, especially compared to X."

I know it seems like hype, but I encourage others to try Golang out, maybe even slap a web app together with it. What you'll find is that your Go app will be extremely efficient and performant. It's kind of unfair to compare Go to many of the other languages of the web because it is compiled, which is a huge reason why it is so performant. So rather than compare Go to other languages, I encourage you all to give it a try. If you already program, working on a small Go side project will get you up to speed quickly and you'll learn about some of the awesome packages individuals from the Go community have put together for us.

Disclaimer: I do not work for Google. I write Go code and enjoy it. I think others will too.


> What you'll find is that your Go app will be extremely efficient and performant.

This is true, for sure. But my experience--and I've written plenty of Go, though I find it unpleasant to write and think about for all the usual reasons that make Go partisans roll their eyes, and so would rather deal with it as artifacts than actually write it--has led to watching lots of developers make mudball codebases in the process (while blogging very heavily about how great it is three weeks in, and less a year-plus in). A pursuit of faux-simplicity has led to what I see as reinventing a lot of Java-1.4-era (because the language itself is essentially that) design patterns--and we should be reminded that design patterns exist to address defects in tooling--that more expressive languages have managed to avoid; the general desire for smaller applications helps to some extent, but software has this tendency to grow that I don't think Go gives you the tools to effectively manage and maintain. Reasonable minds can differ, of course.

Past that, while I think it's an alright choice for company-internal deployable products--web servers, worker nodes, etc.--I have straight-up problems with it being the new devops tool of choice. Statically linking your SSL library makes you an asshole when it inevitably fails and now an application has to be regression-tested so that new features and new bugs don't hose you, just because that's the only way to patch Heartbleed 2.0. (Go also discourages program extensibility through components, favoring recompilation instead; the stuff Packer and Terraform do to provide "plugins" that the same company's Vagrant just did with a `require` is gross and, to my mind, completely foolish.)

So I wouldn't say it's hype, but I would say it's not all true, either.


> But my experience--and I've written plenty of Go, though I find it unpleasant to write and think about for all the usual reasons that make Go partisans roll their eyes, and so would rather deal with it as artifacts than actually write it--has led to watching lots of developers make mudball codebases in the process (while blogging very heavily about how great it is three weeks in, and less a year-plus in).

I get what you're saying, and I agree with some of the criticism around the tooling, although I do think much of that owes to the age of the language. There are some specific things in the (sometimes rather cranky) replies to you that I think are incorrect but I'm not going to debate the merits of the language. Use it if you like, don't use it if you don't.

I will say that we've been using Go for nearly all back end services for nearly 3 years now, and we all still think it's pretty great. Our code base has also remained pretty clean, and in fact we're investing more heavily in Go going forward.


> reinventing a lot of Java-1.4-era (because the language itself is essentially that) design patterns

Really? Java 1.4 had an extensive and consistent standard library, implicit interfaces, composition instead of inheritance, static builds, and built-in concurrency?

Go is not early Java, it's more like C 2.0, and I don't really see anything faux about the simplicity, it really is pretty simple, perhaps too simple for some tastes, but it's not pretending to be simple, nor is it simplistic. Which specific design patterns did you have in mind?


> extensive and consistent standard library

yes

> implicit interfaces

no

> composition over inheritance

yes

> static builds

no

> built-in concurrency

sure, with a 1:1 thread model as opposed to an M:N thread model.

M:N is not always a clear win over 1:1:

https://mail.mozilla.org/pipermail/rust-dev/2013-November/00...

http://xiao-feng.blogspot.com/2008/08/thread-mapping-11-vs-m...

Also, to be very clear, Java's threads ("green threads") were _originally_ M:N but they switched back to 1:1.


The comparison offered by the OP was Java 1.4 to Go 1.4.

I think the standard library is debatable (see the infamous early Java Date class), Java encourages using inheritance (so not composition instead of inheritance) - something Gosling later remarked as his biggest regret, and concurrency had improved tools in Java 2, not sure the story was as good in early Java.


> Also, to be very clear, Java's threads ("green threads") were _originally_ M:N but they switched back to 1:1.

Java's "green threads" were N:1 threads, not M:N threads. While both are sometimes referred to as "green threads", they are very different things. (N:1 tends to give cheap concurrency but no parallelism, M:N tends to give cheap concurrency with as much parallelism as the hardware can handle, but with more overhead than 1:1 threads.)


I don't know if the programming model of threads and channels that's used by haskell and golang strictly depends on the existence of M:N threading, but it sure is nicer to use than the model in java/rust.


Go's channels/threading model already exists in Java:

https://github.com/puniverse/quasar


It wasn't immediately obvious from scanning that link. Does it provide channel select? Because that's the hard part. Threadsafe queues are easy, select on them is hard.


Of course it does.

Java: http://docs.paralleluniverse.co/quasar/javadoc/co/parallelun...

Clojure: http://docs.paralleluniverse.co/pulsar/api/co.paralleluniver...

We'll have a nice select API for Kotlin, too, very soon.


It doesn't strictly depend on it, but it is strongly supported by M:N threading since that threading model decouples the supportable degree of concurrency from the supportable degree of parallelism (unlike 1:1), without abandoning parallelism (unlike N:1).

You can do Erlang-style concurrency with 1:1 or N:1 threading models (and there are libraries for languages whose many implementations are N:1 or 1:1 rather than M:N that do that), but it makes most sense when you have M:N.


> It doesn't strictly depend on it, but it is strongly supported by M:N threading since that threading model decouples the supportable degree of concurrency from the supportable degree of parallelism (unlike 1:1), without abandoning parallelism (unlike N:1).

A modern Linux kernel has excellent thread scalability, and you can customize thread stack sizes to improve memory usage. (Thread memory use is kind of independent of 1:1 vs. M:N, honestly; the thing that can improve scalability is relocatable stacks, which is really not the same thing.)

It's instructive to look at the history of M:N versus 1:1 in the days of NPTL and LinuxThreads and compare that to the history of programming languages. In that world it was received wisdom that M:N would be superior for the reasons always cited today, but at the end of the day 1:1 was found to be better in practice, because the Linux kernel is pretty darn good at scheduling and pretty darn fast at syscalls. Nobody advocates M:N anymore in the Linux world; it's universally agreed to be a dead end. Now, to be fair to Golang, relocatable stacks do change the equation somewhat, but (a) as argued above, I think that's really independent of 1:1 vs. M:N; (b) any sort of userspace threading won't get you to the performance of, say, nginx, as once you have any sort of stack per "thread" you've already lost.


> A modern Linux kernel has excellent thread scalability, and you can customize thread stack sizes to improve memory usage.

Cross-platform code can't count on always running on a modern Linux kernel, though.

But, sure, I'd think that M:N (which is never free) is going to be less likely to be worth the cost if you are specifically targeting an underlying platform that reduces the cost of high numbers of native threads and of native thread switching, so that the user-level threading in the runtime isn't buying you improvements in those areas.


Why is it nicer to use than that of Rust?


Well, if what you're saying about not needing M:N to get thread scalability is true, I'm probably wrong. I really like being able to run tons of threads and write synchronous code, letting the Haskell RTS swap threads when I block.

My mental model, which probably comes from everyone complaining about Apache's thread-per-request model a few years ago, is that you basically either consume lots of resources with lots of threads, or you write ugly CPS-style code. I view Haskell/Go as giving the best of both worlds, but perhaps they aren't separate worlds after all.


I feel the same about Erlang processes. It's so much easier on my brain to write synchronous code that's preemptively executed and communicating by message passing. Wish Rust gave me that option, speed hit and all, but I understand why it doesn't.


How does Rust not give you that option?

The choice of 1:1 and M:N does not affect the programming model at all. It is simply an implementation detail. M:N scheduling is no more a requirement for CSP than Unix is for TCP.


Maybe I'm using terms incorrectly -- is there a good green thread library for Rust which supports the kind of Erlangy experience I've described? I looked around a couple of months ago but didn't find anything that seemed suitable.


Why do you need the threads to be "green"? That's what I'm asking. What's wrong with kernel threads, specifically?


In my experience, kernel threads have a cost that is sufficiently far from that of, for example, Erlang processes (400+ bytes each) that it changes the way you program with them.

In Erlang you don't think twice about spawning a million threads if that models your problem nicely. The same isn't true when you're dealing with kernel threads. (Right?) So I'm interested in green threads because when I have to start thinking about the cost of the threads I'm using, then I'm thinking less about how to most elegantly solve the problem and more about how to satisfy the architecture I'm programming on.

Now, if kernel threads are massively cheap these days and there's no problem spawning a million of them, then I need to take another look at that model.


So since you're talking about scalability into the millions of threads, I think what you actually want is stackless coroutines rather than M:N threading with separate user-level stacks. If you have 1M threads, even one page of stack for each will result in 4G of memory use. That's assuming no fragmentation or delayed reclamation from GC. Stacks, even when relocating, are too heavyweight for that kind of extreme concurrent load. With a stackless coroutine model, it's easier to reason about how much memory you're using per request; with a stack model, it's extremely dynamic, and compilers will readily sacrifice stack space for optimization behind your back (consider e.g. LICM).

Stackless coroutines are great--you can get to nginx levels of performance with them--but they aren't M:N threading as seen in Golang. Once you have a stack, as Erlang and Go do, you've already paid a large portion of the cost of 1:1 threading.


Thanks for the tip, stackless coroutines are new to me. Any of that on the Rust roadmap?


Am I correct that coroutines are intrinsically non-preemptive? If so, I'll need to keep looking.


> Am I correct that coroutines are intrinsically non-preemptive? If so, I'll need to keep looking.

That's usually the case. Coroutines have their uses, but having used goroutines, that is my current preference.


Coroutines are preemptible at I/O boundaries or manual synchronization points. Those synchronization points could be inserted by the compiler, but if you do that you're back into goroutine land, which typically isn't better than 1:1. In particular, it seems quite difficult to achieve scalability to millions of threads with "true" preemption, which requires either stacks or aggressive CPS transformation.


A higher cost for context switching. Lots of people work on apps with lots of I/O, and there is a long history of coroutine/callback/green-thread architectures beating the pants off of thread-per-request architectures.


No, the context switching overhead tends to be minimal if you're using a well-tuned kernel. You're doing a context switch to the kernel for I/O in the first place, and in a green thread model you have to do a userspace context switch in addition to the context switch the kernel imposes to get back into your scheduler.


Are there benchmarks to support that? JVM greenthreads vs. linux user threads, for example?


Both the JVM and Linux pthreads are 1:1.


Go is not very much like C. Pointers are very close to reified addresses, but much of their use in C comes from manual memory management. If you have to use GC, it's not clear what you gain from the semantic similarities with C. Furthermore, it is a prescriptive ecosystem that is quite restrictive. The runtime is not very friendly to FFI. It is not a suitable systems language due to the GC and associated complexities (interfaces should not mandate GC roots).


> Go is not early Java, it's more like C 2.0

Or perhaps Go is C+


Technically, that'd be "float C2 = C + 0.5"? (And remember to treat C2 as a float, not an int!)


;)


It's actually more like a combination of ALGOL, Oberon, and practical solutions to problems. It's supposed to be simple, effective, easy to compile, and efficient at run-time. Like Wirth's languages. But with extras to help it go mass market, esp standard library. So, although I often poke that it's not-novel, I at least give credit that designers chose wisely in what to copy for their simple-to-use language.

Pretty much the opposite of the mess that was and is Java.


> Go also discourages program extensibility through components

Can you elaborate on this? Because in my experience with Go, I found that I had to make _far_ more components (assuming this means libraries?) than other languages.

> Statically linking your SSL library makes you an asshole when it inevitably fails and now an application has to be regression-tested so that new features and new bugs don't hose you, just because that's the only way to patch Heartbleed 2.0

Not sure I understand that last bit. I get that statically linking could be bad(ish) because you would need to know to recompile with a fixed library, but what was that about new features?


With regards to your first question 'cause the sibling gets the second--I'm talking about a plugin architecture. Having to recompile an app to add a third-party feature is, in my world of "the user is more important than the developer," bogus; it means that I can't just use my OS packages if I need anything even remotely out of the ordinary. (nginx is the only thing I regularly use and might want to extend that has that misfeature, but it escapes that annoyance only because I've never needed to add a plugin Ubuntu doesn't ship by default.) Packer and Terraform attempt to get around this by shipping plugins as separate binaries. It is not the worst solution in the world, but I think it's an unpleasant, unsatisfying experience even when I do my best to separate that from just finding Go fugly in the first place.

It's an unrelated field to my day job of devops/server software, but I wrote a plugin system in .NET and it's super trivial to just suck down an assembly and expose its types to the core logic. I've written the same in Java, and Ruby basically makes it a breeze with a Gemfile and a 'require'. It can be slower (though for the overwhelming majority of tasks not at all too slow for a tool, rather than a high-throughput server or whatever) than Go can in some cases be, but it doesn't suck to use, and it's for that reason that while I can understand Go for server stuff I have a real beef with it intruding on my systems when I need to use it as a user.


I think the parent was saying that new releases would bring new features the end user may not want, in addition to something like a security fix for an included library.

With shared libs, you can keep using an old version if it works for you, while still updating ssl to a fixed version (assuming api compat).


This, precisely. If I have to compile YourApp 1.5 with YourTLSLib instead of just `apt-get upgrade`, I'm going to be sending flaming karmic poop to your karmic doorstep.

And not just new features--though SemVer is very often honored in the breach more than the observance--but breaking, sometimes undocumented changes (two Go projects, Packer and Terraform, both come to mind).


`apt-get upgrade` is not magic. Someone had to package the app for apt. Someone had to decide when to make a new release.

The same can be done for a Go app and it'll be available via apt just as any other app.

A fair comparison would be compiling a C/C++ app from sources vs. compiling Go app from sources and Go wins that contest easily.


You are missing the point. "apt-get upgrade" would just get the latest SSL lib and fix the vulnerability. You wouldn't be required to upgrade the core app in question.


It is, to my mind, much less likely that somebody goes through and recompiles and publishes every statically linked application in the Ubuntu repos the day that the inevitable critical bug is unearthed than somebody recompiling and publishing libimportantthing3.

It is also much more likely that a "minor version bump" that happens to contain the dependency with the bug has regressions or new, untested-in-my-environment features that I must accept as the price of a Go application's upgrade unless I want to start playing with said application's vendored dependencies. Which I don't, which is why libimportantthing3 is a vastly superior choice for software I must use but do not want to adopt and care for.


> design patterns--and we should be reminded that design patterns exist to address defects in tooling

This is a really important point, and I think it's why a number of best-practices in Go are actually not best practices in other languages, and vice versa.

One of the explicit, top-level design goals of Go was to focus on creating top-notch tooling as part of the language. While it is not the only language that has tried to do this from the get-go, it's one of a very small number[0].

Because the tooling was a first-class design goal, a number of the problems that traditional design patterns were created to address are less problematic in Go code[1].

[0] Case-in-point: gofmt, which other languages are now adopting due to its success.

[1] Again, gofmt: there are a number of design patterns and style bikesheds around code format, but really, the most important thing is that there exist a uniform standard. gofmt provides that reliably and a way to enforce that as a pre-commit hook, which has done wonders for eliminating minor style variations that IMHO cause more problems than they solve.


Top notch tooling? The debugger is nigh unusable. `go get` is a community joke. The compiler is fast mostly because it doesn't check for things that better compilers do. The race detector is a symptom that not all is well in the CSP house. Other static analysis tools are third party and limited, if they exist at all. Go really does remind me of JDK 1.4.


I'm sad that your post is downvoted, because (while I might have been a little more pleasant about it) I think you're generally on-point. The debugger is pretty poor, the compiler is pretty poor, there isn't much static analysis (and while the language's youth is an excuse, the profusion of tools for more semantically rich languages like Scala make me skeptical of the excuse).

I didn't mention Java 1.4 just for a lark; it's been long enough since I used it that I don't remember the ecosystem well but I do remember the style of coding being so brutally centered around type assertions and blind casts that Go really does remind me a lot of it.


Perhaps his post was downvoted because he made a strong assertion - "the compiler is fast because it doesn't check for things that other compilers do" without elaborating or providing any evidence. This is the first time that I've seen someone complain about deficiencies in the compiler too, so that adds a little skepticism.

Could you elaborate on the static analysis that you find lacking?


Actually JDK 1.4 was much, much better than Go for tooling.

The debugger was usable. You had realtime memory/CPU profiling tools like JProfiler. There was a workaround to get JMX working (JMXRI), which is great for production monitoring. Code formatting tools were significantly better than gofmt. You had bug-finding static analysis tools, e.g. FindBugs, which integrated well into build tools.

And of course you had great IDEs like Eclipse (which was actually great back then), NetBeans, JDeveloper which allowed for refactoring, autocompletion and code assistance.


You're right of course, but what I meant was more what it's like to actually type the code in and read other code, rather than the rest of the rant, which is about tooling. In many ways saying "Go is like JDK 1.4" is a compliment, as JDK 1.4 was older than Golang is now.

I'm working with Golang every day now and the ONLY reason I'm using it is its CSP concurrency model, which is still way behind what Clojure's core.async can do.


> Because the tooling was a first-class design goal, a number of the problems that traditional design patterns were created to address are less problematic in Go code[1].

> [1] Again: gofmt…

What does gofmt have to do with design patterns?

gofmt is about code formatting. Design patterns are about abstraction and expressiveness. A code formatting tool does nothing to address abstraction and expressiveness of the language.


> gofmt is about code formatting. Design patterns are about abstraction and expressiveness.

First, gofmt can do more than code formatting, such as applying simplifying code transformations that are semantically equivalent. Second, code formatting and design patterns are indeed related, because the way code is laid out in text is the way that design patterns are expressed. The layout of code affects how abstractions are presented, which affects which abstractions are easy to reason about and work with.

Finally, I picked gofmt because it's a pretty uncontroversial tool, and one that is so successful that even languages like Rust have adopted it or are working on something similar. I'm really not interested in starting another flamewar about why Go lacks $FEATURE and therefore $OTHER_LANG is better, because we have had enough of those on HN, don't you think?


> First, gofmt can do more than code formatting, such as applying simplifying code transformations that are semantically equivalent

Which also has nothing to do with design patterns. (At least not if you're limited to the extremely basic -r gofmt rewrite rules.)

> Second, code formatting and design patterns are indeed related, because the way code is laid out in text is the way that design patterns are expressed. The layout of code affects how abstractions are presented, which affects which abstractions are easy to reason about and work with.

No, I don't buy that a code formatting tool obviates the need for design patterns. How does gofmt replace the Visitor pattern (just to pick one at random from the GoF)?


> No, I don't buy that a code formatting tool obviates the need for design patterns. How does gofmt replace the Visitor pattern (just to pick one at random from the GoF)?

Asking if a code formatting tool "obviates the need for design patterns" is the wrong question, because it assumes that there is a need for design patterns in the first place.

I do agree with you that a code formatting tool is not capable of somehow fixing software design and architecture decisions. However, "design patterns" by their GoF meaning exist to address shortcomings of the languages and tools used: most of their advice does not make sense as soon as you move away from Java into less object-oriented or less procedural languages. To talk about "the need for design patterns" as if they are some sort of mathematical truth is misleading and dangerous.

So, yes, I would argue that Go does not need the Visitor pattern (and so would Rob Pike [https://groups.google.com/forum/#!msg/golang-nuts/3fOIZ1VLn1...] ), code formatting tool notwithstanding.


You do need design patterns in Go. A prime example: the interface that Sort() requires is basically the Strategy pattern. You need it in Go because the language is missing generics. There are many other examples.

In that message, Rob Pike misunderstands what the visitor pattern is for. Go's "type switch" is just chained Java instanceof. Java still benefits from the visitor pattern for (e.g.) compiler transformations, even though it has instanceof.


This is one of the more troubling aspects of the parts of the Go community that I am exposed to--aggressive insularity. It's perhaps cyclical--in the past I've made similar criticisms of Node, and did of Ruby before the hype died down and the more aggressively enthusiastic people calmed down or moved on--but it's kind of a pain in the rear at present. That said, Go's deification of a single individual is unique to me; of Matz, Guido, Bjarne, Gosling, or Rasmus, I don't know any who are quoted as if it's by itself persuasive. What Rob Pike says about things is not necessarily correct--and sometimes it's not even accurate--but seems strangely, to Go advocates (as separate from "Go users"), to immediately enter a sort of canon to be trotted out at every opportunity.


All languages need design patterns; they are about designing how to implement certain functionality.

Languages differ in which design patterns have a native expression in the language, which can be reduced to simple reusable library code, and which require type-it-in-each-time fill-in-the-blanks code recipes.

The fact that Design Patterns were popularized in software development by the GoF book which, among other things, included code recipes to illustrate its patterns, and that the patterns in it tended to be ones for which the popular languages of the day required code recipes (lacking native implementations or the ability to provide general implementations as library code), has unfortunately associated the term with the code recipes, which aren't really the central point of understanding and using patterns.


> How does gofmt replace the Visitor pattern (just to pick one at random from the GoF)?

This would not be the first time that we have had a discussion on HN about this exact design pattern in Go, so it's hardly a random choice. And given past precedent, I think it's best if I end my half of the conversation here and propose to agree to disagree. To quote my previous post,

> I'm really not interested in starting another flamewar about why Go lacks $X and therefore $Y is better, because we have had enough of those on HN, don't you think?


You picked gofmt because go fans can't get over their amazement that a code formatting tool exists.

Did you all write your code in notepad before?


> Statically linking your SSL library makes you an asshole...Heartbleed 2.0.

Go has its own SSL library, crypto/tls, which is not linked to any C libraries and wasn't affected by Heartbleed 1.0. You haven't written much Go if you don't know that.

The argument is specious anyway, there's nothing difficult about building a binary from an old version of your code or just upgrading in most cases. Deploying a new Go binary is always trivial as compared with upgrading, testing, and deploying Python, Ruby, or Java applications with their associated libraries and interpreters.


> Go has its own SSL library, crypto/tls, which is not linked to any C libraries and wasn't affected by Heartbleed 1.0. You haven't written much Go if you don't know that.

The parent poster obviously wasn't saying that crypto/tls was affected by Heartbleed specifically. It was a statement about the security implications of static linking.


His point was not that golang was susceptible to heartbleed. It's about static vs dynamic linking.


"Go has its own SSL library, crypto/tls, which is not linked to any C libraries and wasn't affected by Heartbleed 1.0."

Is it going to be affected by Heartbleet 2.0? How about whatever the next exploit is?

"The argument is specious anyway, there's nothing difficult about building a binary from an old version of your code or just upgrading in most cases."

Not in every case, and not for customers who don't have access to the code.


I'm not even particularly worried about applications to which I don't have code access. I'm worried about getting off my OS's upgrade track because the minor version of the application I've verified to be usable and correct in my environment is no longer the one I'm going to have because a vendored dependency was upgraded during a release of the application itself rather than as an independent, dynamically linked library.


No, it will just have its own vulnerabilities instead.


I wrote a small/medium size, non-trivial tcp-server in golang two years ago, in just about 2-3 weeks. It pretty much worked first try! So far 2 minor bugs.

Originally it was meant to be just a quick kick-golang-tires prototype, to be replaced with C++ later, but it turned out to work really well.

My impression of golang is that it's very useful getting things done. The code has also been readable to other people pretty much immediately, and they've been able to contribute features into it. None of us used golang previously.

Had I written same in C++, I bet there'd been a lot more bugs to fix.


I think Go invites comparison to dynamically typed languages because its compiler is so damn fast.

Yes, Go is technically compiled, but the development cycle is closer to that of dynamic languages. In a similar vein, I have a lot of trouble taking criticism of Go's type system seriously. Is it as robust as Rust's? Probably not (I've never used Rust), but that's way beside the point. Go's type system gives you a great deal of benefits while keeping the language very dynamic-y.

tl;dr: haters gonna hate. I like Go very much for certain things.


That's definitely an advantage. People used to rag on me because I used a BASIC dialect for most of my development. One of the main reasons: tens to hundreds of thousands of lines of code compiled and linked effortlessly. I later moved to LISP, with its incremental, function-by-function compilation, to get pause times down to a tenth of a second.

Either way, the elimination of long compile cycles and other interruptions maintains the mental flow of the programmer. This results in a significant productivity boost. Also, the job feels better as interruptions and backtracking can be stressful.


One of my colleagues (a Paris Polytechnique guy, so very much a mathematical purist) was recently going on and on about the beauty of Lisp, and how much better it was than C (on a formal level, of course).

I think Go sits on the opposite end of the spectrum. It's a hacker's language, not a mathematician's language. It's ugly and very useful.


" It's a hacker's language, not a mathematician's language. It's ugly and very useful."

Lmao. That's exactly what hackers said about LISP. That people can often emulate a language or even a paradigm (e.g. OOP) within the original language using macros shows it. Strangely enough, LISP can modify itself to do whatever modern languages are doing, with similar productivity. It doesn't work vice versa. On top of that, LISP compilers make pretty fast code for such a flexible language, and past engineers even made dedicated hardware for it.

That's why, although far from perfect, LISP is still the ultimate, hackers' language.


The language was a bit hyperbolic, but the point was rather that you shouldn't expect mathematical purity from Go.

Lisp is a great language. This wasn't meant to be a jab at lisp.


Oh, yeah, I'd never put Go into any mathematical category. It's more about practical purity and simplicity, like the languages that inspired it. My gripe with Go was that it was a clean-slate, modern language that didn't adopt any of the awesome things the PL community has invented since the '80s.

Julia seems to be the best example of a clean-slate language that tries to combine all the best features and attributes of various languages, to good effect. It also beats Go and many other languages on various benchmarks despite dynamic typing. I'd like to see some application-server or RDBMS type of benchmarks, though, as Julia was designed for the mathematical stuff. It might not perform as well but should still be good.

http://julialang.org/


Me too. Some folks prefer to think instead of messing with the code and retesting. I hate it.


I honestly prefer to do both. Picking the right tool for the job implies that you've thought a great deal about your requirements.

Plus thinking is an enjoyable and rewarding experience.


I prototyped a cache in Go and the ref. impl. in C. I'm a long time Go coder but did not expect the Go prototype to run so damn close to the C (and for certain ops actually faster than C).

The runtime is shaping up quite nicely. With dynamic linking, this language can pretty much pull its weight in the backend for almost anything.


Go is only a massive improvement over interpreted languages. Anyone who was already writing performance-sensitive software will have a much harder time justifying the switch: many low- and high-level features are simply not in the language (e.g. parametric types, macros, a runtime using a C-like stack, avoiding the GC).


> slap a web app together with it

Web apps are on the decline, being gradually replaced by Android/iOS apps. When people can use Go to slap up a performant Android app, it may find more use.


So slap up an HTTP API?

Building native apps is hard unless you're specialized, but puking up some HTML and JSON is easy, hence building a blog is usually a good introduction.


Nobody cares about Android apps. And they ain't gonna make you rich. Neither will iOS apps for that matter. That's a market raced to the bottom, that only makes sense for a few top players.

The web is not going away anytime soon.


What are these mobile apps that don't have Web backends?


"Code size was reduced by almost half, from 175 lines down to 96."

Hm, how can we take a 175 LOC project as something relevant in any way?


The idea is that a server this small should probably not consume oodles of memory. They reduced RAM footprint by 100x. Apparently, their Java implementation of this toy service was wildly out of hand.

I wish they provided the JVM startup info and stats to compare. Maybe JVM's resource usage could be shrunk, too.


As much as I am a fan of go, this sounds like it's a bad build infrastructure more than any fault of Java. JAR shading exists, their imports were probably out of hand (and significantly unused), and while they were microservices, they probably could have had groups of them share a JVM (and benefited from the shared permgen space).

Still, that's time and effort, and in Go you get all of those things for free. There are far fewer janky edges. (I spent hours figuring out the problems with signed jars, shaded jars, and symlinks when trying to deploy.)


> The idea is that a server this small should probably not consume oodles of memory.

The number of LOC has no bearing on how much memory something will consume. For all you know the service could just be poorly implemented.


This is true regarding heap space (you can allocate a couple gigs in one line), but perm gen space and JIT space can also be significant in the JVM's case.


Permgen space doesn't exist in the JVM any longer.

And I'm not sure what "JIT space" is.


I think the OP meant "Code Cache" when they said "JIT space".


The relevance is that even a 175 LOC project in Java takes up enormous memory and disk footprint -- that the JVM is this super-heavyweight thing that's really just inherently inappropriate for a lot of applications.


> The relevance is that even a 175 LOC project in Java takes up enormous memory and disk footprint

_This particular_ 175 LOC Java project takes up an enormous amount of memory. For all you know it was just coded poorly and the Go one is a bit more reasonable.

> the JVM is this super-heavyweight thing that's really just inherently inappropriate for a lot of applications.

The JVM introduces overhead but not so much that you can make these sorts of extrapolations.


The footprint of this particular service could have been optimized in Java but:

  1. the JVM itself imposes a high floor (hotspot, many shared libs loaded, ...)

  2. the Java language is full of overhead at every level (boxed types are a pet peeve of mine)

  3. the Java ecosystem has a tendency to regard memory as an inexhaustible resource, which leads to a lot of waste in many 3rd-party libraries
The core point is that optimizing this particular Java program (and the others that followed) would have been more time-consuming than a Golang rewrite and would have probably increased the complexity whereas a Golang rewrite reduced it.

Optimization was the original goal, increased maintainability was a pleasant result.


Point 2... yeah, that's one of my major problems with Java. Unnecessary boxing, and an unreasonable amount of complexity if you work around it. In any kind of Object collection it consumes memory, stresses the garbage collector unnecessarily, and causes a lot of CPU cache misses.

Value types would help so much with this issue. I know they're coming one day. I hope Java/the JVM can replicate the memory efficiency and cache locality of C++'s std::vector for small objects.


34MB for Hello World, according to this SO post: http://stackoverflow.com/questions/13692206/high-java-memory...

I'm not going to install the JDK just to verify, but feel free to report back if you get different results.


For comparison... I did a similar Hello World test around the timeframe of .Net 2.0, and it was around 11mb to load IIRC. The latest iojs on windows seems to be just under 9mb.

Not sure what the golang overhead is, by comparison.


I can't say what an Hello World uses, but I have a Tumblr API → RSS gateway written in Go running for a few months, and it uses ~2MB.


Until you have enough microservices and each microservice carries its own Go runtime. Then having one JVM suddenly isn't so bad. The problem (as someone else said) is that using Docker may not be the right solution here.

There are enough solutions (e.g. servlet containers) where you can run multiple services in one VM, with isolation, and security hardening (using security manager).

With regards to memory footprint - the difference here is that Java uses a minimum and maximum heap size that can be tuned with parameters. This has downsides (typically more memory use) and upsides (upper bound on the maximum memory use of a process).


Your remark about isolation is not entirely true: when you run multiple services in one VM, there are still a number of shared resources.

Most notably, all services will share the same heap, so one ill-behaved service can bring down all the other services. The only way to prevent this, is to run each service in a separate VM.

And at that point, you are once again comparing one Go runtime per service with one JVM per service.


> With regards to memory footprint - the difference here is that Java uses a minimum and maximum heap size that can be tuned with parameters. This has ... upsides (upper bound on the maximum memory use of a process).

If you're running a Go app in a container, you can use also tune the upper bound on memory use by restricting the available memory for the container.
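For example, with plain Docker (the image name here is a placeholder, and the limit is just illustrative):

```shell
# Cap the container at 64 MB; the kernel (cgroups), not the language
# runtime, enforces the bound.
docker run -m 64m my-go-service
```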


Yes, it's larger than a C or Go process. Does it usually matter? No.

The whole of Netflix is based on the JVM and Java. They are enormously big, they have high CPU and I/O requirements in many cases, and yet they manage well on Java.

I'm not a big fan of Java as language (to say the least), but the JVM as a runtime is very-very sophisticated.


The disk footprint was measuring the size of the docker container, who knows what was in it. Something would have to be wrong for 175 lines to require 650MB of dependencies... it could if it were loading all of Spring/Hibernate/etc., but then I'd challenge it wasn't a "microservice" at all.


Basically Ubuntu 15.04 docker image + openjdk8 + mysqlclient + the service itself


JDK... 167MB tarball + 330MB uncompressed


That's the JDK.

You just need the Server JRE which is 57MB tarball + 157MB uncompressed.


As mentioned in the blog post, this was a hackathon project originally, hence the motivation to start with a very small server.

The other servers that went through a rewrite also ended up being significantly smaller in go but that's a story for another day.


Of course, neither program was actually 175 or 96 LOC -- they included thousands of lines of code from the standard library. Which is why it's possible to "do useful things" in ~200 LOC either in java or go (or python or...).

But just because you wrote just 200 LOC, that doesn't free you from bugs and regressions in the remaining thousands of lines of code you rely on.

As an anecdote, at a previous workplace we had a program that parsed CSV files for import into an SQL database. This particular program was written by a researcher (meaning: someone with great domain knowledge, but perhaps not a very strong software engineer) in Visual Basic.

The program worked great, but we had to upgrade the server that ran it to newer versions of Windows, and at some point needed to recompile the code (I don't think this was related to a bugfix; IIRC it was simply a run-time/linking issue).

As it turned out, the code compiled, but the program didn't work -- MS had changed the API for CSV handling (probably fixing a bug). I don't recall exactly what it was; it might have been how floating-point numbers were parsed/handled, possibly an edge case with Norwegian/English localization or something.

Net result: we had a pretty tough debugging job on our hands, for a very small program...

(I feel a bit bad for singling out MS for this -- as I understand it, they are generally very good about maintaining backwards compatibility -- even to the extent that that becomes a problem. I guess we just hit on a corner case with our use-case and this particular old VB code).


I'm not sure what you'd expect a microservice with a straightforward bit of logic and a single HTTP route to actually cost in terms of lines?

I suspect the same (or greater) line-count gains could have been gotten by using an even more pithy language. How much you wanna bet the equivalent Clojure or Scala version would be half again as many lines?


> How much you wanna bet the equivalent Clojure or Scala version would be half again as many lines?

263 lines?


Sorry, perhaps poorly worded. Reduced by another 50%, I mean.

Clojure is quite terse, and Scala can be when you're using the right toolsets.


It's merely a positive (to most) data point. Do with it what you wish. Dismissing it outright seems imprudent.


I guess nobody will be saying that Java is lightweight, but the actual results of porting from Java to Go will vary wildly from use case to use case.

Both examples mentioned in the article (the 175LoC program and the CA) sound like very simple programs. E.g. I once wrote a C program which watched some directories with inotify and compressed new files using zlib. The memory footprint was 350KB. Obviously an empty JVM alone would use 100x more RAM. This static ~30MB overhead might be important in some cases and not relevant at all in others. The incremental (per-object) overhead is probably more relevant to almost all real-world use cases, which are a bit more complex than the ones mentioned above.

Also, care should be taken not to compare apples (no TM) to oranges: e.g. if you use a huge ORM in Java (which among other things also caches the results of each query) and then do a simple SQL query in the new implementation, it would be strange to expect those two to perform similarly. This happens quite often with rewrites -- they almost always aim for simplicity, take shortcuts, get rid of "unused stuff". Basically, a rewrite from Java to Java will also usually improve performance.

Must-read about rewrites: http://www.joelonsoftware.com/articles/fog0000000069.html . Though, I guess, everybody has already read it.


At my organization we have several Java-based services (some of them can even qualify as a form of microservices, if you squint your eyes). We have found that when you have very good developers and you write almost to the letter of the spec, Java can easily provide a stable base (which is probably true of any language/runtime/library).

However, we've been eager to try Go in several places. Believe it or not, what has held us back is the lack of a solid LDAP library. We could/should scratch our own itch and be done with it, but we lack the time... still so many things to do! In the meantime, for us, Java support for LDAP is nothing short of stellar, and has been for years.


The more I hear about Go, the more convinced I am that it would be worth giving it a shot.

Can you share what the microservices are doing? What are they for?


At this point we have 6 microservices written in go in the appliance:

  - team-server probe: already mentioned in the blog post.
    Determines if any installed Team Server is down.

  - ca: as mentioned in the blog post, as simple Certificate Authority

  - charlie: a checkin service. Desktop clients periodically post to it
    to signify they are up. This data is used in each user's device list
    to show if the device is up and which ip it was last seen from.

  - auditor: takes audit event in an HTTP endpoint and forwards
    them to a raw TCP connection as expected by splunk and co

  - valkyrie: a relay server used for data transfers when desktop
    clients cannot establish direct TCP connection (more about that
    in a future blog post)

  - lipwig: a messaging/pubsub server used for peer discovery
    and notifications (more about that in a future blog post)


I've been writing Go for some time. I just finished writing a small CDN for the company I currently work for. It's a fast language that performs well in systems programming (what it was made for). But boy is it ugly. Not verbose like Java, but ugly to write and read. It does force a good coding convention, because otherwise you end up with a pile of ugly code. People seem to be bent on writing it like 80s C, full of single-character variable names and odd function naming. Dunno if it's just my experience. I do like the fact that it resembles Python in how it feels. Overall I'd say it's a nice language that is not for everybody. I am playing with Elixir these days. Go is good enough, but I'm not happy with it being just that.


Something troubles me about this article. I hope I'm misunderstanding it.

It stems from the following quotes; knowing this information, why does AeroFS still think Docker is a good fit for their use case?

> However, after our move to Docker, we noticed a sharp increase of the appliance's memory footprint.

> ...

> We identified several major factors behind these symptoms:

> 1. an increase in the number of running JVMs, as each tomcat servlet was placed into a separate container

> 2. reduced opportunity for the many JVMs to share read-only memory: the JVM itself, all the shared libraries it depends on, and of course the many JARs used by multiple services

> 3. memory isolation could in some cases confuse some sizing heuristics, which lead to larger caches being allocated by some services


Presumably Docker offers some benefit. It's weird that you and several others have jumped to the conclusion that it's obvious they should get rid of Docker. I would say: knowing this information, why do you still think java is a good fit for their use-case?


Because:

a) Their service was already written in it

b) Their service was already performing adequately before they brought Docker into play

But those aren't answers to your question. Java may have been a terrible fit for them, but their decision to move away from it was not motivated by that -- it was motivated by their already-written, already-performing Java code no longer performing as well, because of Docker.

The only question is, does Docker bring them enough benefits to justify reimplementing large amounts of code in a new and unfamiliar language? Maybe it does, but somehow I doubt it.


I find this article confusing, in the first part the writer talks about Tomcat and servlets, in the second part he talks about reducing the LOC from 175 lines to 96 by using Go.

To me it seems the Java solution was wildly over engineered and probably could have been refactored to not even need Tomcat.


I am not sure I understand where the issue is Java pure and simple, and where the issue is that the Java services were impaired by being wrapped in Docker.

"Everything was fine until we switched to Docker" makes me think the trouble may not be all Java. Anyone have educated thoughts on this?


Wow that pun in the title is painful.


Hehe, yes, I'm afraid I have a proclivity for terrible puns. I do think they're somewhat less cringeworthy than the link-bait titles that are all the rage these days (or at least endearingly cringeworthy).


Enjoyably so.


There's a pun in the title?


It's from a saying... a little goes a long way.


Thank you; I didn't understand it either.


Can anyone shed some light on how/why running the same Java apps in a docker container significantly increased the memory footprint?

Is JVM overhead shared when multiple Java apps are being run on the same machine?


A lot of it is, yes. The JVM loads a large number of largeish class files when it starts; these contain implementations of the standard library and whatever else is on your classpath that you've imported. These have the nice property that they're read-only, though, so multiple JVM processes can safely share them. When you move to sandboxing each JVM off by itself, you lose the ability for them to share memory (which is, in a sense, a _feature_ of sandboxing), so now each of them has to take the 50-100MB hit of those formerly-shared memory regions.

(Note that the huge size of the classes is also the big reason why JVM startup time is so crap; another reason that multitenant JVM systems are great is that every process after the first starts much faster)


Assuming this is referring to the fact that separate containers will link in separate copies of all the binaries (executable + shared libraries), instead of sharing the pages across all instances of the JVM. There's no way for the kernel to know that they're all the same files, so a lot of code is duplicated.


Anecdotes like this makes me wonder how much funding money and electricity could be saved if people migrated en masse from Ruby/Python/Something else to Go.

(Edit: not meant to be a political statement, more of a practical observation. I only recently started using Go.)


That may be one of the bigger chances for Go: mobile. After all, being more efficient on mobile translates into longer battery life, and Go potentially has a huge advantage over the current Java (ok, not Java) based environment on Android. For Google this would be a triple win: get out of the Oracle mess entirely, give their mobile developers a super fast toolchain, and give end users better battery life.


I love Golang, but it ain't very expressive, so it cannot be a good match for writing UIs. And Swift is much closer to Rust than to Go, IIUC...


Also, the more complex a system is, the less the runtime overhead matters by comparison. Hello World examples in .NET, node.js, etc. have 20-35% of the memory overhead of Java's.

It's not like golang doesn't have some overhead of its own... There are also Rust and D to consider.


These guys went from 30 servers to 2 by switching to Go.

http://www.iron.io/blog/2013/03/how-we-went-from-30-servers-...


The best part was, they had 2 only for redundancy; they could have gotten away with only having one.


I appreciate you publishing this experiment as it will show Java's problems for what they are. However, a real test of Go's safety and efficiency would be a comparison to a similar language such as an optimized Wirth language, a subset of Free Pascal (Delphi-style), typed LISP (eg Racket), Julia, or Ada. People keep forgetting about the last one in safe, efficient, systems programming despite it doing for a long time what Go and Rust hope to do eventually. Its long-time use in embedded systems indicates it can be quite efficient.

Regardless, Go certainly improves on the reliability of applications vs C++ while being much more efficient than .NET or Java.


I've got a little Swift Language "search engine" that I built with Go on App Engine. I haven't crawled any sites (yet). I simply created a Go data structure to do in memory searches because I didn't feel like using Google's data store.

Here's the site: http://www.h4labs.com/dev/ios/swift.html

Here's the data: https://github.com/melling/SwiftResources/blob/master/swift_...

I'm approaching 1400 URLs and it's still snappy. I was hoping Go's claimed "efficiency" would keep it freely hosted a bit longer than Python.


>I'm approaching 1400 URLs and it's still snappy. I was hoping Go's claimed "efficiency" would keep it freely hosted a bit longer than Python.

That is exactly why I picked Go for my latest side project.

I just hope the framework I'm using isn't the bottleneck, because I really hate writing pure Go servers.


Java's main feature is compile once - run (almost) anywhere. From embedded to mainframe as long as a JVM is available.

How does Go stack up for cross-platform development? Does every application and library have to be (re)compiled for the target platform?

What about support for alternative architectures (ARM, PowerPC, etc)?


Go does require recompilation for every platform, but I'll give them this, it's very painless. Apparently in 1.5 you can just set an environment variable for your architecture and one for your OS and you're good to go.
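Concretely, if I understand the 1.5 workflow correctly, cross-compiling is just (target and output name here are illustrative):

```shell
# Build a Linux/ARM binary from any host; no separate cross-toolchain
# needs to be installed, just the two environment variables.
GOOS=linux GOARCH=arm go build -o myservice-linux-arm .
```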


AFAIK Go supports cross-compilation for multiple targets... as long as one isn't linking C libraries.


> Resident memory usage dropped from 87MB down to a mere 3MB, a 29x reduction!

This isn't so much Java vs. Go as it is JIT/interpreted vs. AOT-compiled. The numbers are entirely typical across a wide range of such comparisons.

With an interpreter or JIT, you need to load all your code at startup and process it. Generally, _all_ of your dependencies need to be loaded and parsed upfront, either converted to some internal in-memory representation or JIT'd directly to machine code. This will allocate a bunch of heap structures during the processing, and the end result is a bunch of data that needs to stay resident and can't easily be shared with other processes.

With AOT, you mmap() in a file. The OS only pages in the code you execute, and can page it back out as needed. Pages are shared between all processes running the same executable.

At sandstorm.io our rule of thumb is that an app written in Node, Ruby, Python, PHP, etc. will take 100MB of RAM while an app written in C++, Rust, or Go will take 2MB. Since Sandstorm runs per-user (and even per-document) app instances, this is a pretty big deal.

The good news is that https://github.com/google/snappy-start should fix this problem: by checkpointing the process after it finishes its parsing/JITing but before it starts handling requests, we can get an mmap-able starting state that is very much like an AOT-compiled binary. At least, in theory -- there's still a bunch of work to do for this to actually work in practice.

> The resulting docker image shrunk from 668MB to 4.3MB

While I would expect the Go image to be smaller (since Go builds static binaries, so literally all you need in the image is the binary), I suspect that the 668MB Java image was at least 90% unnecessary garbage that was not actually needed at runtime. Unfortunately the package managers we all use are not optimized for containers; instead they evolved targeting systems with dedicated disks that can easily absorb gigabytes of bloat.

In Debian, for example, every package implicitly depends on coreutils. That's perfectly reasonable when installing an OS on a machine: you almost certainly need a working shell to boot and administer your machine. But a container can get by just fine without coreutils, and a typical web server probably (hopefully) doesn't need to call out to a shell. Even if a shell is needed, busybox/toybox is probably sufficient and will take a lot less space.

Packages also often contain things like documentation, unit tests, etc. which obviously aren't needed in a container.

For Sandstorm.io we deal with this problem by running the app in a mode where we trace all the files it actually uses, and then we build a package containing only those. It mostly works and manages to keep packages reasonably-sized, but it does lead to bugs of the form: "I forgot to test this feature while tracing, so the assets it requires didn't make it into the package." We're looking for better options.


> This isn't so much Java vs. Go as it is JIT/interpreted vs. AOT-compiled. The numbers are entirely typical across a wide range of such comparisons.

> [...] while an app written in C++, Rust, or Go will take 2MB.

Agreed. As mentioned in the blog post, I considered Rust but decided against it because I found it much less mature than Go. I did not consider C++ because, as mentioned in the blog post, part of the point was to experiment with new languages/tools, and even though I consider myself proficient with it, I learned the hard way that the lack of memory safety is rarely worth it.

> I suspect that the 668MB Java image was at least 90% unnecessary garbage that was not actually needed at runtime. Unfortunately the package managers we all use are not optimized for containers

Exactly. That was part of the point of this blog post, which I may not have been successful at getting across. Switching to Go was, if not the path of least resistance to solve this issue, at least one of a few relatively easy routes. It also happened to be a great deal of fun.


I assume you didn't use the normal Dockerfile-based build system to build your super-small Docker images for the Go-based services. So how did you do it?


We're not doing anything too fancy. Basically, we spawn a container to build a statically linked binary and do a regular Dockerfile-based build inside that container. The result is an image which contains only a single binary (and maybe some static assets like config files or images).

We're planning to open source our build script shortly.
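In the meantime, the general shape is roughly this (file and image names are illustrative, not our actual script):

```shell
# 1. Build a statically linked binary (CGO disabled, so no libc dependency):
CGO_ENABLED=0 GOOS=linux go build -a -o myservice .

# 2. Package it with a Dockerfile containing little more than the binary:
#      FROM scratch
#      COPY myservice /myservice
#      ENTRYPOINT ["/myservice"]
docker build -t myservice .
```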


Why snappy-start as opposed to any other, far more sophisticated checkpointing mechanism like CRIU or DMTCP? The idea is ancient.


CRIU actually doesn't solve the right problem.

We need to run the app up until the point when it diverges -- i.e. when it first observes input that will be different across different runs of the app. For that, we need to be watching the syscalls and evaluating each one for potential divergence. As long as we are doing that, we might as well at the same time record a log of those syscalls, which we can replay later. Then once a divergent syscall happens, we dump the state of memory. Later, we can restore the memory and replay the syscalls to reproduce an identical starting process.

CRIU has no concept of divergence. CRIU takes an already-running process with arbitrary state and snapshots it whole.

CRIU's problem is actually orders of magnitude more complicated than snappy-start's: it needs to understand every possible file descriptor type that the process could have open, every aspect of process state, etc. snappy-start only needs to understand the specific syscalls that we care to implement; it can simply consider any call it doesn't recognize as divergent, and stop there. Adding support for more syscalls is then merely an optimization.

CRIU also requires special kernel features to support, which means more attack surface. Sandstorm wants to block everything except the most common kernel APIs for security reasons. snappy-start requires no new kernel features; it uses the well-understood APIs debuggers use, and we know we can still prohibit apps themselves from using those APIs.

Meanwhile, CRIU is much harder to customize. How would we decide when to do the snapshot? We'd have to re-implement much of snappy-start just for that purpose. And how do we teach CRIU about the specific assumptions that are safe and useful to make given our particular environment?

None of this is to say that CRIU is bad -- it's actually pretty amazing. But it's not the best fit for this specific problem.


I'm all for replacing java bloat with a little bit of go, but it seems like the cause of the bloat becoming a real problem was hopping on the docker bandwagon, for a collection of services that always run together in an appliance and really benefited from being hosted in a shared JVM?

Isn't the JVM + servlet container thing supposed to be able to isolate the different services? They get their own bunch of threads, and their faults probably shouldn't crash other servlets?


You should read Sun's J2EE specs. There is little semantic distinction between a stateless session bean and a container-hosted micro-service. The specs were never fully grasped (imo) by the Java community, and those who did get it (per gossip I heard) -- appserver vendors, e.g. IBM, JBoss, etc. -- effectively crippled the spec by resisting the completion of the APIs that would have commoditized their containers.

Sun was really ahead of the game in various fronts.


So true. I had to read the entirety of the JSP spec (and a few others) when I was an intern, to help the company do some static analysis things. I also read a good chunk of Tomcat. Sun had this grand vision of app servers which you could just push code to and it would run completely isolated. The app servers were supposed to be spec-compliant, meaning your app could run on any implementation. But, and this is what bit the company I was working for, all of the implementations add lots of non-standard bits and break standard bits. This meant that if you had written your app for WebSphere there was essentially no way it was going to run on Tomcat without modification.

If Sun had more tightly controlled the marketplace for these things I think the whole ecosystem would have been more robust.


Most of our problems are self-inflicted.


Traditionally you'd deploy a bunch of apps to one container (Tomcat, or whatever) and they'd all share the same JVM. I'm not sure if jars are shared in this way or not. I think OSGi was supposed to let deployed apps reuse the same jars and resources, but I don't know how commonly that is deployed.

Now it is more common to run Tomcat or Jetty embedded for your app so that they are isolated.


How tied is Go into Google? What if Google drops Go?


You're probably being downvoted for off-topicness, FYI, since this comment is equally germane to any mention of Go. That said, I feel it is a comment in good faith and deserves an answer.

Question #1: How invested should you model Google as being in Go's future? Answer: Extraordinarily invested. They have a whole lot of code written in it, most of which the world will never hear about, and which powers services that they will continue running until the sun goes nova. Google will likely continue maintaining and extending Go programs (as well as C++, Java, and Python) for the foreseeable future.

Question #2: Can Go survive without Google's corporate backing? Answer: Go is an OSS project. Like many OSS projects, it receives a substantial amount of support from corporate interests. Go is not uniquely a Google priority -- many organizations like having a systems programming language which has its feature set. In the absence of meaningful support from Google, Go code existing as of August 2015 would continue to function. Further versions of the language/toolchain/etc would have substantial question marks about them, but the community feels large enough that this would be a Significant Event rather than a death knell.

Does that help?


I'm not sure if this will help me. I guess I'm in a programmer midlife crisis. Every new thing is potentially a waste of time because you can't predict if it will still be around in 2 years. And with Google you know that they invest millions of dollars in projects and then abandon them.

Would I be better off maintaining decade-old COBOL projects? Sometimes I think so. Other times I'm glad I can choose the technology I want to solve the problems at work.

August 2015. 31 years since my first computer.


You can never stop learning. Go is my current 'thing', at least for personal projects. But who knows what the landscape will be like in 5 years or 10.

For a while I thought Haskell was the most awesome thing. Good Haskell code has a timeless quality to it (in a couple senses of the word). But I found adapting my thinking to it difficult, and it wasn't likely to be something I'd use at work then or now.


Go is a programming language born out of Google for sure, but that does not mean it depends on Google. It's a community project after all.


> It's a community project after all.

It is absolutely not a community project. Go governance is 100% at Google. There is no Go committee or Go working group outside Google. It's backed by one company that has full control over it. Sure, it's open source, but good luck with a fork. How can anybody be so misleading about that fact? What made you think Go is a "community project"?

> but that does not mean it depends on Google.

There is a top-down, vertical relationship between the Go team at Google and the rest of Go's users; Go's main goal is to fulfill the Go team's needs, period. If you find it useful, that's a bonus. That's exactly how the Go team speaks and acts; in fact, the Go team makes it really hard to contribute to the core.

Please, stop saying what you say; it is completely untrue and a total mis-characterization of how the Go project works.

I still want to know what made you think Go is a "community project".


> I still want to know what made you think Go is a "community project".

Well, the main developers do listen to the community.

Any project can fork, if there are enough people who are very dissatisfied with the current management.

I'm not aware of any serious grumblings about such a fork though. Why?

The core team is very good, and very narrowly focused. They've communicated clearly on nearly every issue as to what they're doing, and why. It seems clear that they are intent on making the best possible tool that fits with their particular vision. There are people who agree with this vision, and broadly support their efforts. And then there are people who really don't like their vision, and wander off to use Rust or something else.

The core team has enormous respect from the existing golang community. If Google suddenly laid off the developers (or just switched them to something else) or otherwise dramatically changed direction in their support for the project, the community would move in quickly to help out the situation.

I could see people getting together to form a non-profit foundation that could at least pay for a few guys to continue to work on Go full-time. But currently, there is no apparent need, so it hasn't been done. Google is willing to pay substantially for the development, and I don't see gophers complaining about that.


> I could see people getting together to form a non-profit foundation that could at least pay for a few guys to continue to work on Go full-time.

It's not going to happen, for the same reason the biggest startups in Silicon Valley never came together to write their own language before. Let's get things straight: the only people who control Go are the Go team, period. You are not talking about facts, so I'm not sure what the point of your comment is.


The point I was trying to make is that the golang community has a lot of respect for the current core team. If the community didn't have that respect, then I speculate that there would be a fork (or takeover), and it would then look more like the kind of community-controlled project that you seem to prefer.

This kind of thing has happened in the past: XFree86, OpenOffice, gcc, etc.


What do you mean good luck with a fork?

I know of one fork that will provide some lucky grad students with a degree. Can't find the link right now.


If you're concerned about Go, you might also find better results with Rust or Nim. Both are up-and-coming in similar spaces to Go.


Go isn't going away anytime soon. Neither is Rust. You shouldn't choose Rust over Go because of fear of the language being abandoned.

(I'm not too familiar with Nim's community so I can't speak to it.)


Some people have concerns, and it's just good to shop around the ecosystem. Competition is healthy.

For the record, I've chosen to use Go over alternatives, although Nim sings to my very soul.


I have a question about Go's performance that I'd like someone here with experience writing programs in it to shed some light on.

1) Go does not have a runtime. This means that there is no JIT to do any optimizations based on runtime profiling.

2) Go is also designed to compile fast. Since it compiles fast, the compiler's time budget to do compile-time optimizations is small and it probably can't do the best job possible.

Are these two points accurate? If so, how does Go perform as well as it is claimed to do? Where is the catch?


1) Go has a runtime that's included in the binary you get when you run `go build`. It takes care of garbage collection, goroutines, etc. It does not do any JIT compilation, AFAIK.

2) Go compiles fast because it gives up a lot of modern features (namely generics).


I always think Golang is a Java killer, but I think Java itself is more a Java killer than Golang.


I hope that there is a compiled version of Scala.


The only thing that's preventing me from jumping on the Go bandwagon is the lack of a nice collections library (a la lodash).


That's due to a lack of generics. There are some workarounds like gonerics [1] -- a clever abuse of import declarations.

[1] http://bouk.co/blog/idiomatic-generics-in-go/


I'll bite.

While it is absolutely true that Go servers use less memory than classic servlet deployments, the question is: what were you using in the first place? Were you using a big framework, with this or that big IoC container, with a bloated ORM? Or were you using bare-bones JDBC and writing servlets without any framework? Because essentially that's what you're doing with Go. Go has no big frameworks (and no, Beego and Revel are not complex), no big ORMs, no IoC containers. And while the Java culture is about heavy decoupling and reusability, the Go code culture is basically about writing correct code without thinking about re-usability at all, because often you just can't. So I'm curious. My point is: didn't you reduce memory usage because Go forced you to write code a certain way?


> the Go code culture is basically about writing correct code without thinking about re-usability at all, because often you just can't. So I'm curious.

That's a very strong statement. Do you have evidence to support this? I don't code in Go right now, but have been keeping an eye on it as it continues to gather momentum.


I'll jump on this, but I'm going to change it up a bit.

> the Go code culture is basically about writing correct code without thinking about USABILITY at all

The reality is Go is YOUNG and it shows. Let's take two good examples that are part of what is going on in golang today.

Vendoring: The core team's approach and what the community is doing have diverged pretty rapidly. Recently there have been some efforts to bring vendoring into alignment, but what is being used isn't the best approach/solution; it's good, but on the whole could be better.

http: The core library is great if you want something small, and it's fairly easy to put together a tool chain that will remain "idiomatic". However, if you're going to go out and build something "large", you have an interesting issue, because OOTB the core http package lacks any concept of context. There are some interesting tools and frameworks out there to make up for this (gorilla/context, codegangsta/negroni), but if you adopt them you're no longer "idiomatic": your libraries are now less reusable because they are tightly coupled. It looks like a new method signature is going to be required in the http package, one that uses golang.org/x/net/context and returns errors, so we can have a sane http stack OOTB in Go.

Log vs syslog: Logging in Go is fairly messy right now. There are a bunch of packages that try to make up for this, but really, better logging has to be built into the core. Syslog has levels/features; log is just dry and basic (maybe too much so). The core not only needs a unified solution, but one that is going to be context-aware.

Will these get fixed? Probably! The core team isn't deaf. However if they don't start moving on a path to 2.0 soon and address some of the real issues, I fear that go will end up in the same boat that python did in 2.x vs 3.x.

P.S. in spite of all this I have been writing a LOT of go lately and enjoying it, but it really does need to grow and soon!


Lack of generics can make it harder to write cleanly reusable modules in Go. I've noticed gophers are often less allergic to copy pasting code with tweaks (as long as it isn't too many lines) and don't mind rewriting similar code if it is obviously correct. It was a bit of a culture shock for me at least.


You confuse parametric polymorphism with reusability. The former is merely one of several ways to achieve the latter. The Go way for reusability usually consists of interfaces.


Interfaces and parametric polymorphism are not alternative approaches to solving the same problem, which is why many languages that feature parametric polymorphism also feature interfaces or similar constructs [0] (and why some languages that had interfaces for a long time later added parametric polymorphism [1].)

[0] e.g., Haskell typeclasses

[1] e.g., Java


Code reuse is totally possible in Go. I never said otherwise. However, there may be unnecessary ergonomic and safety issues due to lack of generics. In particular, you may end up casting / type asserting when you really shouldn't have to.


The parent comment is incorrect. Orthogonality is a really big deal in Go, and is highly stressed in the culture. Libraries are strongly encouraged to present methods that work with interfaces that are defined by the standard library to ensure the libraries are composable.

If your library works with streams of bytes, its methods had better use io.Reader and io.Writer. If you're writing an ORM, it had better be built on database/sql.


The problem got exacerbated when containerized deployment stopped letting the OS serve shared memory efficiently. There are lots of tricks and wisdom built into platforms like Tomcat to reduce memory use, too; they just get confused when the JVM is alone in what appears to be its own universe.

This is why there's such a strong pressure to move to more svelte language implementations and "microservices" as you start to see containerization take off.


> the Go code culture is basically about writing correct code without thinking about re-usability at all, because often you just can't.

What a ridiculous statement. Go encourages elegant and simple interfaces to make code reuse easier. The tooling provides an incredibly easy way of sharing code. The "culture" is all about sharing code. Go's much better at facilitating code re-use than (say) Java, both at a language level and at a cultural level.


This makes absolutely no sense whatsoever.

The JVM has the largest library collection of any platform (currently 1,026,516 libraries just in the Maven Central repository). Sure, a lot of it is to do with the age of the platform, but this idea that Java is somehow impeding code reuse for language/cultural reasons is ridiculous.

One of the biggest complaints from people coming to Java is actually just how many transitive dependencies libraries bring in.


You know, as fun as it sounds like this was, wouldn't the natural solution to the problem be to roll back the Docker rollout?


And the cycle continues, from one crappy enterprise language to the next. I don't know who said it, but whoever said "Go is a bold step backwards" is spot on.


There are times when a step back is the mandatory step to get a new perspective on things. Just sayin'.


But this is a step back to the cold, dead, static systems of UNIX. How about stepping back and taking a look at the dynamic, hackable environments of Lisp?


You should rather ask yourself why Lisp hasn't been able to attract the same large and active community in 5 decades that Go has been able to attract in 5 years.


Argument ad populum.


I'm not saying either of them is better because of the amount of people they attracted. I merely ask why Go managed to interest more people than Lisp in an order of magnitude less time.


I don't understand why Unix is equated with "cold, dead and static". Sure, that's the case for V7 Unix and descendants. Attempted Unix++ systems like Amoeba, Spring and Sprite were quite the opposite.

Further, if I wanted maximum dynamism and runtime hackability I'd go for a Smalltalk.



