While it's true that some find it liberating, a large number don't — personally, I've written a fair amount of production Go code and found it unnecessarily verbose and repetitive in ways that generics would've helped. I imagine some of this is based on problem domain; if you're writing a web application for example, maybe you don't really need generics much. After all, how often do you need a function that logs in a user to also log in... a book you're selling? Not very often. And any advanced data structures probably live in your database: how much do you really need a B-tree in a webapp when you've already got one in MySQL?
That being said, for a lot of other uses you really do want high-quality data structures beyond "array" and "dictionary."
Sure, that's true, and it's fine. There are people who find s-expressions both liberating and clarifying, and others who get lost in a sea of parentheses. And there are problems that are especially amenable to s-expressions, or generics, or fine-grained control of memory allocation, or interwoven code and markup, object hierarchies, and those problems sort of "ask" for particular languages. But for the most part, this stuff is subjective.
There are certain things that just can’t be done without generics, though. Type safe higher order functions, type safe custom collections, etc. Of course, perhaps these are all just subjective to you, because you can still write any program you need without them. But not having this feature does constrain the set of type-safe programs you can write quite a bit.
I feel like it pretty much always turns out that the things you can't do without generics are, like, second-order things. Higher order functions, type safe custom collections, those are tools. What we care about mostly is what we actually build, and people build pretty much everything in every language, generics or not.
It depends what you’re building. If you’re writing a library to do those things, they may well be first order. Certain extremely useful patterns like parser combinators rely on them. And of course, the current answer is just “do something else instead.” But I don’t really see why this has to be the answer. Surely the decision to leave them out is just as subjective as a decision to add them.
By the same token, functions in general are second-order things - you can build things just fine with GOTO, and plenty of real-world software was built like that back in the day. But we don't do that anymore.
Yeah, but programming is the business of building little tools to help you build what you actually want to build, over and over again. E.g., you might have some API calls in your code and need to implement an error handler on some of them. You could repeat the error handler code on all of them, or you could extract it out to a generic function and just use it as a wrapper for the others:
let call1 arg = ...
let call2 arg = ...
let error_handler call arg = ...
let call1 = error_handler call1
let call2 = error_handler call2
Python made this kind of technique famous, but languages with generics can do it pretty well too.
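To make that concrete, here's roughly what such a wrapper could look like with Go's type parameters (the names here are invented, so treat it as a sketch rather than a real API):

package main

import "fmt"

// withErrorHandler wraps any one-argument call that can fail, adding shared
// error handling around it. Because Arg and Res are type parameters, the
// wrapper stays type-safe for every call it wraps.
func withErrorHandler[Arg, Res any](call func(Arg) (Res, error)) func(Arg) (Res, error) {
    return func(arg Arg) (Res, error) {
        res, err := call(arg)
        if err != nil {
            // shared handling: logging, retries, metrics, ...
            return res, fmt.Errorf("api call failed: %w", err)
        }
        return res, nil
    }
}

func main() {
    getUser := func(id int) (string, error) { return "alice", nil }
    getBook := func(isbn string) (int, error) { return 0, fmt.Errorf("not found") }

    safeGetUser := withErrorHandler(getUser) // func(int) (string, error)
    safeGetBook := withErrorHandler(getBook) // func(string) (int, error)

    fmt.Println(safeGetUser(1))
    fmt.Println(safeGetBook("978-0"))
}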
One example of a thing you want generics for is image processing procedures that operate on many different image formats with different color spaces. If you use the built-in image libraries (which is perhaps already a mistake), then you have an image that owns its channels, and you pass it to some resizing method or whatever, and then that resizing method's innermost loop chases pointers to find out which kind of image it was *for every pixel*. For performance reasons, you may not want to chase pointers in your innermost loop. Without generics, you need to copy and paste a bunch of code once per color space.
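As a sketch of what that might look like with type parameters (the pixel types and the Halve function are invented for illustration):

// Hypothetical pixel types; a real library defines one per color space.
type Gray8 struct{ Y uint8 }
type RGBA8 struct{ R, G, B, A uint8 }

// Halve downsamples an image by keeping every other pixel in each direction.
// Because P is a concrete type parameter, the inner loop copies plain values
// and never dispatches through an interface per pixel.
func Halve[P any](src []P, w, h int) []P {
    dst := make([]P, 0, (w/2)*(h/2))
    for y := 0; y < h; y += 2 {
        for x := 0; x < w; x += 2 {
            dst = append(dst, src[y*w+x])
        }
    }
    return dst
}

One Halve covers Gray8, RGBA8, and any other pixel layout, instead of one copy-pasted loop per color space.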
To add to your point, not using generics in Go is a choice too, even if it’s now an option.
But some people like offloading that to the language: not having it in the language at all means you don't have to police it across your team, your project's contributors, and third-party libraries.
I felt Go filled this minimalist category well. It’s always nice having a modern mainstream option doing so, not just a niche one on the fringes (ala LISP), even if I’m not personally a fan.
I feel like I should probably admit that I am just generally a fan of functional programming and I do think there are big benefits to it, most of the time (of course, some algorithms are just more elegant with loops and mutation). But my day job is writing services with Elixir, so ultimately complaining about generics in Go is a little rich coming from me.
At the end of the day, whatever gets the product built is what matters. I just fail to see how generics could be a hindrance.
You can absolutely write type-safe higher order functions without generics. You can't write generic higher order functions, but that's a tautology.
Also, higher-order functions are a moot point. Higher-order functions give you convenience, but no increase in expressive power (defunctionalization is a homomorphism).
Of course, generics give you a true increase in power.
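To make the first point concrete, in Go you can already write a higher-order function that's fully type-checked; it's just pinned to specific types (a toy example):

// apply is a type-safe higher-order function, but only for the concrete
// types spelled out in its signature.
func apply(f func(int) int, x int) int {
    return f(x)
}

What you can't write without type parameters is a single apply that also works for func(string) string, func(float64) float64, and so on, short of falling back to interface{}.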
I think I see what you're doing. Right now, each result type implements NextPager, which returns information about how to fetch the next page. Your client can implement a utility like FetchNextPage:
type NextPager interface {
    NextPage() PageSpec
}

func (c *Client) FetchNextPage(ctx context.Context, current NextPager) (interface{}, error) {
    ...
}
Then for each type of paged object, you write:
func (c *FooClient) FetchNextFoo(ctx context.Context, current Foo) (Foo, error) {
    next, err := c.client.FetchNextPage(ctx, current)
    ...
    if n, ok := next.(Foo); ok {
        return n, nil
    }
    return Foo{}, fmt.Errorf("unexpected type: got %T, want Foo", next)
}
That's annoying. But, this problem has come up before with `sql`, which has rows.Next() and rows.Scan() to iterate over arbitrary row types, and you could use that as a model:
pages := client.Query(...)
defer pages.Close()
for pages.Next() {
    var foo Foo
    if err := pages.Scan(&foo); err != nil { ... }
    // do something with the page
}
Generics would let you enforce the type of `foo` at compile time, but it wouldn't save you many lines of code. I think you still have to write (or generate) a function like `func (c *Client) ListFoos(ctx context.Context, req ListFooRequest) (Paged[Foo], error) { ... }`. We hand-wave over that in the above example with a "..." passed to Query (potentially possible if you retrieve objects with a stringified query, like SQL or GraphQL), but that sounds like the hard and tedious part.
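For comparison, a sketch of what the type-parameter version of that utility might look like (same hypothetical Client and NextPager as above, body elided the same way; note that Go methods can't take type parameters, so this becomes a free function):

// FetchNextPage returns the same concrete type it was given, so per-type
// wrappers like FetchNextFoo and their type assertions go away.
func FetchNextPage[T NextPager](ctx context.Context, c *Client, current T) (T, error) {
    ...
}

// next, err := FetchNextPage(ctx, client, currentFoo) // next is a Foo

It removes the assertion boilerplate, but as noted, the per-resource request-building is still on you.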
Let me conclude with a recommendation for gRPC and gRPC-gateway as a bridge to clients that don't want to speak gRPC. Then you can just return a "stream Foo", and the hard work is done for you. You call stream.Next() and get a Foo object ;)
I think readability has multiple dimensions, and it really depends what you are looking for.
For example, here's code in Go that checks whether a number is prime:
func IsPrime(n int) bool {
    if n < 0 {
        n = -n
    }
    switch {
    case n < 2:
        return false
    default:
        for i := 2; i < n; i++ {
            if n%i == 0 {
                return false
            }
        }
    }
    return true
}
It's readable as it is simple to understand what each line does.
Here, for example, is code that does the same thing in Rust:
fn is_prime(n: u64) -> bool {
    match n {
        0..=1 => false,
        _ => !(2..n).any(|d| n % d == 0),
    }
}
It might seem more complex at first (what does match do, what does `0..=1` mean, what are `!(2..n)` and `any()` doing?). But if you understand the language it's actually much simpler, and you can quickly glance at it and know exactly what it is doing. And because it is less verbose, it is easier to grasp the bigger codebase.
I also noticed that while individual functions in Go are simple to understand and follow, you can still create complex, hard to follow and understand programs in Go.
Anyway, in Go (ironically, because of the lack of generics), if you use any numeric type other than int, int64, or float64 you will be in a world of hurt. Rust doesn't have that issue.
So in practice you will likely use int, and I suppose you can add an assertion.
BTW: I only see that it would remove 3 lines though, where are the other 3?
I don't follow. I use unsigned ints in Go all the time. I've never been in a world of hurt with them. Mandatory explicit integer conversions (and the way Go consts work) are something Go gets right.
Ok, so you like that. I myself really hated when I used float32 and had to do this when I was doing calculations:
result = float32(math.Max(float64(a), float64(b)))
I ended up switching the type to float64, and wonder why they even offer float32 if it's practically unusable. I had a similar experience when I needed to use int8 or int16, etc.
An alternative was to make my own versions of Max/Min and other math functions, but this is what generics would solve.
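For illustration, the generic version would be something like this (a sketch with a made-up Number constraint, using Go 1.18+ type parameters; note it ignores the NaN and signed-zero handling that math.Max does):

// Number is a hypothetical constraint covering the numeric types in play.
type Number interface {
    ~int | ~int8 | ~int16 | ~int32 | ~int64 | ~float32 | ~float64
}

// Max works for float32, float64, and integers alike, so no more converting
// float32 values up to float64 and back just to compare them.
// Unlike math.Max, this naive version does not special-case NaN.
func Max[T Number](a, b T) T {
    if a > b {
        return a
    }
    return b
}

// result := Max(a, b) // a, b float32: no conversions needed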
I think what they mean is that utility functions (e.g. min/max here) tend to only be implemented for one type, so if you're using another you keep casting back and forth.
Rust is very fussy about types and some operations are bound to a single one (e.g. stdlib sequences only index with usize), but utility functions tend to either be generic or be mass-implemented using macros (or by hand, probably, for f32::min and f64::min, though I did not check).
I don't have a problem with strong typing, I like that as well. The problem is that the stdlib doesn't really support other types, and that's mostly due to the lack of generics, so using uncommon types becomes quite annoying.
TBF the float Min/Max issue has nothing to do with generics; there isn't a "floating-point" generic class. The entire reason why Go has an f64 Min/Max in the first place is that floats are not fully ordered[0], so you "can't" use regular comparisons (which is what you'd be told to do for integers), and thus you could not have a generic min/max relying on them even if generics existed (e.g. in Rust, `f32` and `f64` are not `Ord`, which `std::cmp::{min, max}` require, hence `f32` and `f64` each having their own inherent min/max methods).
So what your issue comes down to is Go's designers couldn't be arsed to duplicate the entire `math` package to also work on `float32`. Some members of the community did rise to the challenge[1] tho.
[0] Well, recent revisions of IEEE-754 have a total ordering predicate, but I wouldn't say it's really useful for such a situation, as it positions NaNs on the "outside" of the numbers, so a negative NaN is smaller than any negative number and a positive NaN larger than any positive number.
I'm aware of that; in the example given I used float32. Anyway, Max/Min was just an example; I used other functions from math as well.
Anyway, my point was that with generics they wouldn't need to copy anything: the math package would work with both float32 and float64, and many functions would likely also work on all the integer types.
The difference here is that I can hand the first version off to any random freshly hired CS grad, or the cheapest outsourced coder, and they can grok the code quickly. This is the advantage Go has over all other languages.
The Rust code needs maintenance coders of way higher caliber, not something you'd usually find. It's super fun for the top-tier developers who love to be expressive and concise with their code, but all code is pushed down to maintenance mode eventually when the hotshots move on to the new shiny project.
Go has removed pretty much every footgun by sticking to the basics. You have one way to do a loop, one way to do comparisons etc. There are very few ways to hide non-obvious functionality.
It _is_ possible to create complex programs that are hard to follow, but that's a larger design problem. Not something the language can force on developers.
Thing is, be careful what you wish for when "easy to outsource" is a goal; it's a feature mostly welcomed by IT managers who don't care about the final quality of the delivery, nor about the consequences for the home job market.
Personally I find code with generics just as easy to read as (largely duplicated) implementations for each type. I might consider Golang again when this drops; there are things to like about how low-level it is…
Error handling in Rust isn't always small or straightforward. I miss both options and match expressions when I switch back to Go from Rust, but there's also tangles of or_else's and maps and the fact that everyone uses third-party libraries to work out the types for errors. There's tradeoffs everywhere you look.
In Rust you have to swallow an error quite explicitly. In Go it's extremely easy to swallow one `err` by assigning to a previous one which was already checked.
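For example, something like this compiles without complaint (step1, step2, and use are made up, but the shape is the common one):

func doBoth() error {
    val, err := step1()
    if err != nil {
        return err
    }
    val2, err := step2(val) // err is quietly reassigned here...
    use(val2)               // ...and nothing forces you to check it again
    return nil              // step2's error, if any, is swallowed
}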