
If this indeed evolved into a stand-alone, curated app store like pre-2016 Steam, with an 88/12 revenue split, it could shake up Android game distribution. Mobile in general, and Android in particular, is such a brutal market for small indie developers. Epic could pull a Valve here.


From the article:

"The chips that produce an electric field for the Solar Probe Cup are made from tungsten, a metal with the highest known melting point of 6,192 F (3,422 C). Normally lasers are used to etch the gridlines in these chips—however due to the high melting point acid had to be used instead."

I had no idea tungsten was used to make ICs. What kind of density is achieved by laser etching (on tungsten or otherwise)?


I'm beating my own drum here, but the gameplay code for my game Galactology is 90%+ written in a Lisp dialect (l2l with LuaJIT underneath), clocking in at 51K lines currently. It has been a huge productivity boost since I switched from C++ two years ago, but I do miss some level of typing and static checking. http://thespatials.com/


In a lisp dialect? I thought you went with s7 as an embedded scripting language for C++!


I originally did. That was fine when the scripting was just doing data setup in advance and wasn't being run per-frame as game logic. When I started The Spatials V3 (aka Galactology) I also started doing real-time game logic in s7. After enough code it became clear that even the fastest purely interpreted Scheme wasn't fast enough to be put into a soft real-time path (the game grew to hundreds of systems, in the ECS sense).

So I did a six-month-long gameplay development freeze last year and converted all the s7-flavored Scheme code into the l2l Lisp dialect with a custom transpiler (which had to be aware of all the s7-isms plus my own s7 reader macros), and replaced the C++ binding layer with sol2. This resulted in a ten-fold increase in performance when the game is limited by logic perf (it's not visible with an empty new game). LuaJIT is alien technology as far as I'm concerned.

I have to do a proper write up some day.


Thanks man. I wish you the best with your game. Sounds like the total dream job to me. I am eager to read that write up!


I think the article is wrong to call the showcased devices "netbooks". Netbooks were a laptop-lite, with a slightly smaller size. The Dragonbox Pyra is more like a micro laptop, the size of two big smartphones stacked on top of each other. It's a different size category and that's what makes it interesting.


LuaJIT is, by far, the best option for scripting in games/RT, thanks to the incredible JIT performance and its incremental GC with deadlines.

But there's something when you start playing with Lisp and then you want to keep using it more and more. Suddenly the classic Ruby/JS/Python/Lua language design feels boring and stale (ironically, given the age of Lisp).

After getting my feet wet in v2, I'm doubling down on Scheme for The Spatials v3, this time using S7 Scheme, a true gem of an interpreter (first-class environments, hooks on symbol access for reactive programming, super extensive C FFI support, etc.).


Going from Python to Lisp to Racket was an absolutely mind-blowing experience.

It literally changed my life, my perspective on programming, even what jobs I sought out.


Could anyone comment on the Lisp --> Racket mind-blowing experience? I have worked with Lisp and Clojure, and most of the big concepts are shared. What has racket on top of (common, I assume?) lisp?


* macro/module system with phases ( https://www.cs.utah.edu/plt/publications/macromod.pdf )

* submodules ( http://www.cs.utah.edu/plt/publications/gpce13-f-color.pdf )

* Languages as modules (i.e. the #lang mechanism ) ( http://www.ccs.neu.edu/racket/pubs/pldi11-thacff.pdf )

* Documentation system that links identifiers to documentation (respecting the scope)

  ( http://docs.racket-lang.org/ )


It is simply that I found Racket to be the most pleasant and inviting to use: an all-in-one toolset, with a large and thoroughly documented standard library, and some of the best books on programming there are.

As a sometimes-aspiring language designer who'd already done one esolang before coming to Lisp, the language dev tools that Racket provides were also far too enticing to stay away from.


> What has racket on top of (common, I assume?) lisp?

Racket is from the Scheme family rather than the Common Lisp family. (Racket used to be PLT Scheme before the name was changed, because Racket isn't strictly an implementation of the Scheme standard, though it includes such an implementation among its bundled languages.)


Started coding in the mid-80s and went through a plethora of languages, OSes, frameworks, you name it. Learned ML when Caml Light was new and got third prize in a logic programming contest between universities.

Nowadays I care mostly about JVM, .NET and first class languages from OS vendors.

Why? Not all of us have the luxury of moving around just to work with the languages, or the skilled teams, we would like to.


This is primarily why I'm focusing on strengthening my JavaScript skills... There are tons of start-ups looking for back- and front-end devs. Your comment also makes me want to get back into Java and C#... I've taken the time to learn Haskell but I know realistically I'll never get a Haskell job (at least here in Quebec...).


> It literally changed ... what jobs I sought out.

Why?


Same reason I still hate doing dishes at home after using a professional-grade dish machine at work for a decade.


Sir, this comment just made my day. I am going to screenshot it and put it up in our office for when anybody asks why I use mostly Clojure for my work.


I never understood why people consider lisps superior, mind explaining exactly what changed your mind?


It's not that there is anything particularly special about what you can do in Lisp compared with what you can do in, say, C#. It's that Lisp provides facilities for metaprogramming that most other languages lack. Most other languages make metaprogramming hard enough, or require it all to be done at runtime, that the concept gets discouraged in general. Lisp creates a culture around metaprogramming that fundamentally changes how you approach programming forever after.

You stop writing programs. You start writing programs that write programs. It's a lever that multiplies the power of the programmer. That can be both good and bad. For a hacker coding in earnest on her own, it can be extremely good.
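A minimal sketch of what "programs that write programs" looks like in Common Lisp (the macro name here is invented, purely for illustration):

    ;; A tiny macro: code that generates the timing boilerplate for you.
    ;; `with-timing` is a made-up name, not a standard operator.
    (defmacro with-timing (label &body body)
      (let ((start (gensym "START")))
        `(let ((,start (get-internal-real-time)))
           (prog1 (progn ,@body)
             (format t "~a took ~f seconds~%" ,label
                     (/ (- (get-internal-real-time) ,start)
                        internal-time-units-per-second))))))

    ;; Usage: the expansion is written for you at compile time.
    (with-timing "sorting"
      (sort (copy-seq #(3 1 2)) #'<))

The point isn't the timing itself; it's that the abstraction costs one definition instead of repeated boilerplate at every call site.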

Yes, if you are the type of beginner or intermediate programmer that still struggles to just write programs, advanced metaprogramming in any language is not for you. If you're an intermediate programmer looking to become an expert programmer, learn Lisp, where metaprogramming is easy, and take the lessons on to whatever other languages you eventually end up using.

That's what people mean.


I've never had to change my mind about Lisp -- but to me, Common Lisp's condition system kicks the crap out of every other language because it allows you to handle conditions without unwinding the stack.

And whenever you get dropped into the debugger, you can edit and reevaluate the broken code or values or whatever -- and then continue on with the task.

In other words, you can begin a task with imperfect code -- and refine it as you go along -- figuring out how to handle corner cases as you go. All without starting the task over from the beginning.

I know of only one other language that allows you to explore a problem space like that.

Besides, the parentheses make the code less jagged-looking, and jagged-looking code, IMO, is more tiring to look at. Lisp is simply prettier.


Smalltalk also drops you into a debugger on an error, letting you inspect the state of the system, correct your code, and resume. Check out Pharo Smalltalk.


TIL.

I have edited my comment to account for smalltalk.


Smalltalk probably supports this as well, but what's cool about Common Lisp's condition system is that it doesn't depend on user interaction or a debugger to work. Conditions let you separate the code making the decision about how to handle an error from the actual error handling. The decision doesn't have to involve the interactive debugger; you can have your code make the choice about which restart (unit of error-handling code) to invoke. The common use case is that you install various restarts as you go down the call stack; then, when an exception happens, control goes up the stack to the appropriate decision code, which selects a restart, and control goes back down the call stack to the selected restart block.

It's an incredibly flexible system that makes exception handling much more useful.
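A rough sketch of that flow in Common Lisp (the function and restart names are made up for illustration):

    ;; Lower-level code establishes restarts on the way down the stack.
    (defun parse-record (line)
      (restart-case (parse-integer line)
        (skip-record () nil)))          ; one possible unit of recovery

    ;; Higher-level code decides which restart to invoke, without
    ;; unwinding the stack or entering the interactive debugger.
    (defun parse-all (lines)
      (handler-bind ((parse-error
                       (lambda (c)
                         (declare (ignore c))
                         (invoke-restart 'skip-record))))
        (remove nil (mapcar #'parse-record lines))))

    ;; (parse-all '("1" "oops" "3")) => (1 3)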


Not a lisp user myself, so I can't speak firsthand, but Paul Graham has several essays about it. http://www.paulgraham.com/lisp.html


Can languages with other syntax have the same properties?


Tcl, I've heard, has string-based homoiconicity. Otherwise, I've yet to see proof that you can get an actual, usable, and not incredibly annoying macro system working without turning your language into a Lisp. People keep trying, but it's always some inelegant subset bolted onto the core syntax.


Julia is homoiconic and, of course, has lisp-style macros. It has a syntax similar to something like array fortran.

http://julialang.org/


I love using Julia, but for me writing Julia macros came with a steeper learning curve than in Lisp / Scheme, due in part to the syntax. I kept wishing Julia was a layer on top of a Lisp, and that I can just write Lisp macros. (I know the parser is currently in Scheme, but you can't exploit that easily.)


I tend to end up writing `Expr` objects directly when I'm building larger macros as I find them much easier to reason about. It's clearly not as convenient/clean as Lisp though. (David Moon actually sent a PR last year to make hygiene much easier.. unfortunately it got stuck for a while, but the PR was recently rebased and hopefully will be merged eventually).

Regarding the learning curve: we rewrote the first half of the Metaprogramming section of the manual a few months ago to try to provide a more gradual introduction, especially for people without a Lisp background. Objectively I don't know if the change has helped, but I tell myself that the number of macro-related questions has gone down :)

We would really like to make these tools accessible to people both with and without a formal CS background (probably the majority of Julia users will fall in the latter category). So, if you have any suggestions for doc/pedagogy improvements in this area, I would be very happy to hear them!


The obvious solution is to write a julia macro that implements lisp macros as a DSL ;)


No need, Julia comes with a builtin lisp.


Dylan? Then again it is a Lisp with Algol syntax.


I believe Dylan used to be a Lisp with Lisp syntax.



Thanks for the link. The page on that site that gives the history including the original lispy syntax is here:

http://opendylan.org/history/index.html


We talk sometimes about having an additional reader to support another syntax ... but so far, it isn't something where someone's actually volunteered to step up and help. It'd be interesting to play with some ideas from David Moon's PLOT as well ...


Take a look at Converge, Nemerle, TH and many other similar languages - they're doing just fine without any homoiconicity. All you need is a decent quasiquotation.


Thanks for the pointers; I'd never heard of those languages before.

I am actually looking at the Converge documentation on macros now, and I found the perfect quote to highlight the problem I see with those approaches:

Quasi-quotes allow ITree's to be built using Converge's normal concrete syntax. Essentially a quasi-quoted expression evaluates to the ITree which represents the expression inside it. For example, whilst the raw Converge expression 4 + 2 prints 6 when evaluated, [| 4 + 2 |] evaluates to an ITree which prints out as 4 + 2. Thus the quasi-quote mechanism constructs an ITree directly from the users' input - the exact nature of the ITree is immaterial to the casual ITree user, who need not know that the resulting ITree is structured along the lines of add(int(4), int(2)).

That is, quasi-quotes take some code and turn it into a tree object. This tree object can then somehow get compiled into actual code later. Compare that with Lisp approach, in which code is data. You don't think about the code as something separate from the tree that represents it. The code is the tree. There is no ITree object.

It may seem like just "semantics", but I find this to be a significant cognitive difference.
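To make that concrete, in a Lisp the "tree" for 4 + 2 is just an ordinary list you pull apart with the same functions you use on any other data (a trivial sketch):

    (defvar *expr* '(+ 4 2))

    (first *expr*)   ; => +       (the operator)
    (rest *expr*)    ; => (4 2)   (the operands)
    (eval *expr*)    ; => 6       (it is also runnable code)

    ;; Building new code is just building a new list.
    (eval (list '* (second *expr*) 10))   ; => 40

There is no separate ITree type to learn or convert through.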


> Compare that with Lisp approach, in which code is data.

It's not any different. It does not matter if your `(,a ,b) is compiled into (list a b), or your ` \a\ = \b\` is compiled into make_binary('=',a,b) - both are constructors, essentially.

Take a look at what I'm doing with just a quasiquotation here: https://github.com/combinatorylogic/clike


Other languages can technically have homoiconicity, but there's something to be said for the simplicity of Lisp forms.


The property of homoiconicity is shared by all Lisps, Forths, and Prolog amongst others: https://en.wikipedia.org/wiki/Homoiconicity


Yes. See Smalltalk for a language with object-oriented homoiconicity.


That's not really homoiconicity though, is it?

Smalltalk source code does not appear to be expressed in terms of a Smalltalk data structure. Whereas Lisp source code is expressed as lists of lists.


Sure it is. All parts of a Smalltalk program are present as objects that can be manipulated by sending them messages. Classes, methods, variable bindings, closures, stack frames, even individual messages are objects.

As a practical matter, methods also have a textual representation, but that's not strictly necessary. Ten or fifteen years ago, the Squeak method browser had a mode where you could write Smalltalk code using tiles, kind of like in Etoys, no parsing necessary. That was a hassle to use, though, so people stuck with writing methods using text.

By the way, Lisp has the same sort of "impurity". S-expressions are very simple, but they are syntax. It's surely possible to create a tool that would let one construct cons cells in memory directly, but that would be a hassle. Instead we use parentheses.


The 'iconicity' part of homoiconicity refers to the textual representation.


Maybe not the same, but I feel like Nim (http://nim-lang.org/docs/manual.html#templates) can have comparable power of redefining itself. But I'm absolutely not an expert in any lisp or in Nim, so I can't really tell.


Yes, take a look at Dylan. But they almost never do.


Dylan's approach works because it has lisp at its core.


Any language with a compile-time metaprogramming would do.


After learning about Elixir (http://elixir-lang.org/) and playing with it some, I now want an Elixir job. (I'm skilled in Ruby, and Ruby already back in the day wanted me to get a Ruby job... which I finally did)

One nice thing about Elixir for Lisp lovers is that you get "real" macros (full AST access within a quoting context) AND actual syntax. I am not sure there are any other languages which feature that combination currently.


You get full AST access in Erlang as well. I don't know how it is in Elixir, but in Erlang you don't want to touch that feature with a ten-foot pole. It's hairy and incredibly annoying to work with. I'm having a hard time imagining how it could be otherwise without homoiconicity and with that "actual syntax". I've been reading about macros in Scala recently, and they seem to suffer from the same problem - they just don't fit well with the rest of the language.


> but in Erlang you don't want to touch that feature with a ten foot pole. It's hairy and incredibly annoying to work with

It's pretty much the exact opposite in Elixir. The only thing you really have to wrap your head around is the switch between the quoting context and the unquoting context... which is pretty much no different from understanding macros period. The definition syntax looks just like a method definition, except it's "defmacro" instead of "def", and the macro body receives an AST instead of (or in addition to) your usual arguments. But I'm probably not doing it justice...

http://elixir-lang.org/getting-started/meta/macros.html

https://pragprog.com/book/cmelixir/metaprogramming-elixir

Here Dave Thomas creates a simple "returning" macro, you can just watch the screencast if you're feeling lazy: http://pragdave.me/blog/2014/11/05/a-simple-elixir-macro/


> AND actual syntax

What does "actual syntax" mean here?


Yes, I was riffing on Lisp not really having a "syntax."

The fact that Erlang's syntax is (disclaimer: subjective) awful just adds further grist to the Elixir mill, assuming you think Elixir's syntax is (disclaimer: subjective) sweet.

A lot of people seem to be coming to the same conclusion.


That's their loss, then.


I don't see a loss. I see a win.

In Elixir, I have the full power of Lisp-level macros, in a functional immutable pattern-matching language, with a ridiculous level of concurrency (you can spawn a million or so processes in a second on your typical laptop), hot software upgradability (https://www.youtube.com/watch?v=96UzSHyp0F8 for a cool demo), access to the entire repository of already-existing battle-tested Erlang code...

... AND it's readable. :P

Lisp, with its powerful macro facility, has had literally dozens of years to find acceptance and it still struggles (argumentum ad populum notwithstanding, a user base with critical mass brings a plethora of other advantages). Ruby found enough of a niche that I think there is something to be said for Ruby's style of syntax. Elixir gives you both, and then some.


I was talking about Erlang and Elixir, not Lisp and Elixir. I don't need an introduction to Erlang/OTP, as I've used it for quite a while. You sound a lot like you're in the hyper-enthusiastic newbie phase, though.


I... guess I am. Is that OK? :) I like Erlang too... but I was one of the folks for whom the syntax turned me off originally. I can't explain why, especially if you're one of those developers (bless their pure-engineering hearts) who thinks syntax is irrelevant once you grok it. The best I can explain it is that some brains interpret computer code as a very abstract form of writing and some don't (or don't need to), and I may be one of the former, and that causes some syntaxes to "feel better" or "feel worse". It's... not very rational, sigh.


Presumably a riff on "Lisp has no syntax"


I actually thought it was a riff on Erlang's syntax, which is just as annoying a complaint in 2015.


It does have a syntax. It's just hard to find in amongst all the parenthetical shrubbery.


the thing i remember most from comp.lang.lisp was someone defining a truly great language as one that could get you to take a job simply to work with it, irrespective of the actual problem being worked on.


Can confirm.

I once took a job working on a product I had a moral objection to -- on a platform I despise (windoze embedded) -- simply because I'd be working in Common Lisp.



Funny to see that latter link on this message board, of all places.


Can you explain why? I have no experience with Lisp/Racket and I'd love to hear about your experience.


Not OP, but I find this text a good introduction to one of the most significant things you can get enlightened by when learning Lisp: http://www.defmacro.org/ramblings/lisp.html.


Learning lisp is enlightening, but to claim that it's that much more productive than some of the other well designed languages is a stretch.

- Lisp macros are powerful, so the core of the language can be kept simple. However, many languages take an alternate approach and codify the most common programming patterns into their specification. The result is a compromise that works for the majority of cases and is better understood than custom macros.

- Homoiconicity is elegant, but somewhat overrated in practice. Declaratively expressing intent does not require homoiconicity; you can do that in JSON or XML if you agree on a format. Now, with Lisp, code being data, you can be infinitely more expressive; but at the same time, inferring intent out of complex code-as-data is essentially writing a very intelligent macro/parser. There's a cost to it.

- if you're not really gathering the intent from code-as-data, there are ways to do eval() in other languages as well.

- Lisp has not succeeded on a relative scale. Let's not discount that.

- Compilers can optimize well known constructs better than equivalent macros in lisp.

So again, learning lisp is a great idea. But there isn't a one programming paradigm that's universally better than others.


> but to claim that it's that much more productive than some of the other well designed languages is a stretch

Given that you can turn Lisp into any of those "other well designed languages", it's not a stretch at all.

> and better understood than custom macros.

What can be easier than the macros?

> but somewhat over rated in practice

True. You can build a decent meta-language without homoiconicity, all you need is a quasiquotation.

> there are ways to do eval() in other languages as well.

Eval is runtime, macros are compile-time. Huge difference.

> Compilers can optimize well known constructs better than equivalent macros in lisp.

No. Macros can optimise any way you fancy. There are no limits.

> But there isn't a one programming paradigm that's universally better than others.

A paradigm which contains all the others is exactly this.


> No. Macros can optimise any way you fancy. There are no limits.

For example, the Racket compiler does not need to know about the type specializing optimizations that Typed Racket makes possible.


> - Lisp has not succeeded on a relative scale. Let's not discount that.

Is there a clear reason for this? I've only ever heard good things about lisp.

My impression, as a hobbyist programmer, is that Lisp appeals to people who have a deep intellectual curiosity about the way programs work. It doesn't seem to appeal to the larger pool of programmers who are looking for a language they can pick up in a straightforward way so they can either get a job or build a project they've been thinking of.


I fell in love with Lisps and FP precisely because they were an easier, more straightforward way of just getting the job done than the alternative.

How many times have you written a dozen lines of for-loop that could've been one map/reduce? How many times have you written a whole page of Object { this.foo = ... } just to add the simplest of new features?
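As a throwaway illustration of the first point (Common Lisp, toy data):

    ;; Sum of the squares of the even numbers, in one expression
    ;; instead of a counter, an accumulator, and a loop body.
    (reduce #'+ (mapcar (lambda (n) (* n n))
                        (remove-if-not #'evenp '(1 2 3 4 5 6))))
    ;; => 56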

Literally the reason I got out of programming after high school almost 15 years ago and wrote it off as 'not for me' was that kind of tedium, and learning Lisp and FP were the point in my return when I said 'Oh, wait, actually this is pretty great; where the hell was this when I was a kid?'

Lisp didn't take off because 1) home-computer-ready implementations were largely out of reach for three decades, and 2) Lisp and FP both were embracing the importance of expressive power during an era in which most programming still worshiped doing things the hardest way possible. Shit, when I was a kid, you weren't a 'real programmer' unless you did everything in assembly. Then it was C above all, to be followed by EnterpriseFactoryJavaFactoryFactories.

By the standards of most of the programming world, where there are still real bosses who grade coder performance in KLOC, Lisp is 'wrong'. But pumping out thousands of lines of repetitive boilerplate is not equal to efficiency, it just looks like it to a work culture that only understands output of work rather than quality of work. If programmer A takes 1 hour to solve the problem with 100 LOC, and programmer B thinks for 45 minutes and then solves the same problem with 4, who's the most efficient in that scenario?

And more to the point, which of those two work environments do you want to sign on for?


Same here. I never understood Java interfaces, abstract classes, and a ton of other "features", but picking up Clojure was a breeze. I don't understand how complicating things that are supposed to be simple helps in any way. On top of that, I have seen several cases where Java programmers tripped over their own code because of complexity they thought they understood, until a particular case came along where it did something other than what they expected.

Reasoning about Clojure (LISP) code is always easy because if you follow best practices you have small functions with very little context to understand.

On top of that, I see a LOC ratio of 50:1 (worst case even higher) for Java : Clojure code that does the same thing. Usually people get triggered and ask why it matters, but in reality less code is better for everybody: easier to understand, less chance of errors, etc. Correctness was lost long ago for the majority of Java developers; just put out a survey and you can see it for yourself.

It is also pretty common practice not to handle exceptions well and just let a DNS-not-found error explode as an IOException, and good luck tracking down what caused it (this literally happened to me).

I understand that the average Java dev does not see any value in LISP (Clojure), but it is silly to expect that the average of any group is going to lead the scientific advancement of any field, including computer science.

One tendency you can see, if you walk around with open eyes, is that people who have spent significant time developing procedural code in an imperative language understand the importance of functional language features and the power of LISP. One can pretend it does not matter; see you in 5-10 years and we'll see how much this changes.

https://twitter.com/id_aa_carmack/status/577877590070919168

https://www.youtube.com/watch?v=8X69_42Mj-g

https://www.youtube.com/watch?v=P76Vbsk_3J0


The closest I got to Xerox PARC environments was Smalltalk VisualWorks and Oberon (Wirth made it based on his Cedar experience).

Then thanks to my curiosity I delved into the Xerox's and Genera documentation.

It is sad that a PDP-11 and VMS descendant won the mainstream.

How much better could computing be if those behind those systems hadn't failed to bring them to the masses?

However, environments like the JVM, .NET and their tooling bring us somewhat close to it.

Also Swift with its Playground is helping new generations to rediscover this experience.

So maybe not all is lost.


I take it you're probably not a huge fan of Golang? :)


I would imagine not, though the parent can speak for himself. I agree with what has been said before, that Go is a bold step backwards, just another language to replace large amounts of Legacy Enterprise Code with (sometimes) slightly less large amounts of (Soon To Be Legacy) Enterprise Code. Go has going for it corporate backing and a good community, but on the technical merits alone, there is a better language for every task.


I wish I was. That little gopher is adorable.

Rust, on the other hand, gives me palpitations.


Is golang very verbose?


I don't know about "very", but it's pretty verbose and very imperative. You have to do a lot of repetition, often straight up code copy+pasting, in some cases.


In a way it's similar to frameworks. Frameworks which are more popular try to make choices for the user (like Rails). As an end-user I clearly wanna focus on my tasks, rather than choosing a toolset or perhaps building one myself.

Lisp is minimal and abstract. That's appealing to a different set of people, who aren't satisfied with off-the-shelf abstraction levels. It's also fun and challenging to work at that level, though IMO it's not always going to translate to better productivity.

For me, learning assembly and going through the 80386 reference manuals were more rewarding in terms of understanding how programs work. Sorry I have no specific insight to offer on the question you asked.


Lisp was the hot new thing in the mid-to-late 80s, when the AI Winter hit.

https://en.wikipedia.org/wiki/AI_winter

When AI couldn't live up to the hype, funding dried up. A lot of that funding was driving the companies and research projects that were doing major Lisp development. After the AI Winter, Lisp was strongly associated with the unmet promises of AI and thus gained a reputation as a poor choice for serious projects.

It's never really recovered.


Around 2000 a new generation rediscovered Lisp. SBCL was released in December 1999: CMUCL rethought, with a simplified implementation and build process. From then on various implementations were improved. Clozure CL was released on multiple platforms. The commercial Lisps were moving to the then-important operating systems and architectures.

The hype moved to Clojure, Racket, Julia and other Lisp-like languages. The core Lisp may not have the same depth of commercial activity as in the 80s, but generally the implementations are in their best shape in two decades. There are still lots of open issues, but from the Lisp programmer's standpoint, it's an incredible time.


It’s the Lisp Curse: http://www.winestockwebdesign.com/Essays/Lisp_Curse.html

Lisp is so powerful that problems which are technical issues in other programming languages are social issues in Lisp.


lisp makes you an asshole programmer. you're encouraged and enabled to write your own language for each problem, thus isolating you in a world of your own views and ideas. it's a babelian tar pit, luring programmers to their doom.

being your own tin pot dictator is quite alluring. you get to go to great feats and neat hacks to get code working. to control and manipulate the code to allow you to write what you want. every new macro and construct shapes the product in your own image and ideals, subsequently alienating other programmers.

it's like these language revisionist cranks who want to replace english with their own little concoction that's just ever so perfect and logical. a complete ignorance of social factors.

anecdotally, I know of large scale codebases and products in simpler, less elegant languages, meanwhile lisp seems to be popular with the lone hacker aesthetic.

eventually, with enough practice, you get to the smug lisp asshole stage.

this is where you wonder why lisp is unpopular, or fragmented, but assume that it's simply too good for the populace. Classics like 'worse is better' struggle with the notion that lisp maybe isn't that good. Sometimes you get a naggum complex and trot out saphir-whorf. Other people are terrible and that is why they don't use lisp.

it can't be that lisp isn't a great idea. or macros aren't a great tradeoff. at least the ruby community is coming to terms with monkey patching coming at the expense of library conflicts.

lisp is a strange beast. a simple tool that encourages complexity. purity over utility. a perfect goo waiting for the next hero to shape it and return to the mortal plain with their new, perfect macros.

http://forums.somethingawful.com/showthread.php?threadid=348...


> a complete ignorance of social factors.

Maybe that's why I like Lisp so much. Because "social factors" are so frikkin' annoying and irrelevant, and I feel the world would be so much better for everyone if we stopped paying as much attention to them as we do now.


To formulate this a little more nicely, I might say instead that there is a real need for "intimate" languages, just as there is a need for "collaborative" languages.

As an example, the shells on my personal machines are often customized beyond the comprehension of anyone who isn't me, with tons of two-letter aliases and bash functions, cryptic environment variables, and a $PATH that's several terminal lines long, filled with labyrinthine symlinks and one-liners that I've accumulated over the years. Many people have similarly elaborate and impenetrable emacs configurations.

That's fine, since this is my personal environment, but at work (I'm a sysadmin, more or less) I'm still able to use more-or-less bash, and even write portable shell scripts that eschew bash-isms. Similarly, all that horrible e-lisp powering super-personalized workflows doesn't prevent someone from writing an emacs mode that they can share with others, the point being that a language that enables customization is great, because you can always just not do that and write code that others will find comprehensible.

Conversely, if your language forces you to write in a collaborative style, you can't gain efficiencies in your private use of it.


> To formulate this a little more nicely, I might say instead that there is a real need for "intimate" languages, just as there is a need for "collaborative" languages.

That's a... way of putting it I've never seen before. I'll remember the concept of "intimate" vs. "collaborative" language for the future.

Personally, even though I write a lot of Lisp and live inside Emacs, my environment seems to be quite... standard. The "collaborative" mindset is emphasized in pretty much every programming book out there, and I must have acquired this kind of weird fear of overcustomizing my environment thanks to it.


I'm not a lisp user, but I've used xml + xslt to generate xslt that processes xml to xhtml and I liked it ;)


"own languages" are much more approachable for the others than the "own libraries". For very obvious reasons.


My naive reasoning for why is that a lot of people start by learning C-like languages and don't see the need to learn something as different as Lisp. As Lisp and its descendants were never really the dominant language, it was never the first type of language most people learned. Now that many mainstream languages have progressed to incorporate more and more Lisp features, it's becoming less foreign to many devs, and the popularity is increasing and is now higher than I think it ever was.


>> Lisp has not succeeded on a relative scale. Let's not discount that.

> Is there a clear reason for this?

Yes. In the 1980s AI was teh new shiny and at that time, Lisp was almost synonymous with AI.

A bunch of people over-promised when it came to AI and expert systems, and failed to deliver. And people conflated the failure of the promise of AI with the failure of Lisp. Essentially, guilt by association -- people can be dumb that way.

Amusingly, once something in the realm of AI actually works -- we stop calling it AI. But one thing is for certain: the scruffies have been right more often than the neats.

Still, Common Lisp is pretty effin' awesome.


> Lisp macros are powerful, so the core of the language can be kept simple. However, many languages take an alternate approach and codify the most common programming patterns into their specification. The result is a compromise that works for the majority of cases, and better understood than custom macros.

The semantic core is kept simple. That doesn't mean Lisps don't provide constructs for common patterns, e.g. loop, with-open-file, defclass, or Racket's for/ constructs.

Plus, a well-written macro is easy to understand, and I'd argue most macros are well written. In fact, I'd like to see a macro whose behavior is unclear in a widely used Lisp library.

> - Compilers can optimize well known constructs better than equivalent macros in lisp.

Look up compiler macros. Macros can help the compiler optimize the code.
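A hedged sketch of the idea with Common Lisp's define-compiler-macro (the function here is invented for illustration):

    ;; An ordinary function...
    (defun square (x) (* x x))

    ;; ...plus a compiler macro that tells the compiler how to rewrite
    ;; calls it can prove something about, leaving the rest untouched.
    (define-compiler-macro square (&whole form x)
      (if (numberp x)
          (* x x)        ; literal argument: fold to a constant
          form))         ; anything else: keep the original call

    ;; (square 12)  can compile down to the literal 144
    ;; (square foo) still compiles as a normal call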


99% of programming works like this: you pass parameters to a function/macro. If you screw up, the compiler spits an error. The details of the error message are rarely relevant, you mostly know what's wrong even before you read that message anyway.

This whole talk about macros being crazy dangerous and difficult is very misguided. Most of the time, if a Lisp compiler spits a weird message at you, you know what you've screwed up. In the 1% of cases where you don't, you apply macroexpand-1 (or equivalent), see why the expansion doesn't make sense, and fix it. In the 1% of those cases where that doesn't help, you keep reading the source until you understand what's wrong. It's no different from debugging functions. Same rules apply.
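That workflow, sketched at the REPL (using the standard `when` macro as a stand-in for whatever macro is misbehaving):

    ;; Step one: look at what the call actually expands into.
    (macroexpand-1 '(when (plusp x) (print x) (incf x)))
    ;; A typical (implementation-dependent) expansion:
    ;; => (IF (PLUSP X) (PROGN (PRINT X) (INCF X)))

    ;; If one level isn't enough, keep going.
    (macroexpand '(when (plusp x) (print x) (incf x)))

Once you see the expansion, the bug is usually obvious, exactly as with a misbehaving function.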


> But there isn't a one programming paradigm that's universally better than others.

Maybe that's why Racket supports functional, imperative, declarative, and object oriented programming. I'm sure I'm even missing a few.


Don't forget relational and logic programming. :)

http://minikanren.org


> there isn't a one programming paradigm that's universally better than others.

That's so true. I wish someone would come up with a language where a wide variety of programming paradigms, including imperative, functional, and message passing styles, could find convenient expression.

EDIT: This comment is almost an exact quote from somewhere. Bonus points to whoever can identify where.


The parent's humorous wish is granted by the wizards of MIT Scheme (and, I daresay, even more fulfilled by Racket):

"Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele Jr. and Gerald Jay Sussman. It was designed to have an exceptionally clear and simple semantics and few different ways to form expressions. A wide variety of programming paradigms, including imperative, functional, and message passing styles, find convenient expression in Scheme."

http://groups.csail.mit.edu/mac/projects/scheme/


I got it from R4RS. I imagine MIT Scheme also got it from there.


Maybe. Looks like R4RS got it from R3RS. :-)


I think you should give Elixir a look: http://elixir-lang.org

Bonus points: Elixir has compile time macros and an AST with a representation that is similar to Lisp.


> and an AST with a representation that is similar to Lisp

That's actually cheating, you know :). Lisp pretty much is AST. It's a fun fact about Erlang that it's not directly compiled to BEAM, but is first translated into a very strange, almost lovecraftian Lisp. I've worked with Erlang commercially for some time and ever since learning about parse transforms I kept wondering why they didn't just clean up the "intermediate language" syntax; they'd have a decent Lisp instead of a parallel Prolog.


There is, of course, Lisp Flavoured Erlang: http://lfe.io/


Indeed there is :). I used to sneak up some code in it on the job ;). I'm happy to see it being actively developed to this very day.


Oz/Mozart does a good job of that; it's a shame it never really caught on as a non-research language.

http://mozart.github.io/


You have Scala. They even try to shoehorn macros into it. The result is a powerful but IMO messy language.


What for? With any meta-language you can build and mix any paradigms you like.


Although Lisp itself may not have succeeded on a relative scale, Clojure (a Lisp dialect for the JVM) seems to have a fairly good foothold.


Also: writing good macros is (even) more difficult than writing good functions or modules.

Writing macros is doing language design and compiler implementation at the same time.

If you've ever cursed the error messages from a C++ compiler, think how much worse it would be if that compiler was written by your co-worker part-time as a side effect of writing rest of the code.

(I wrote clojure in anger for three years with brilliant co-workers)


> If you've ever cursed the error messages from a C++ compiler, think how much worse it would be if that compiler was written by your co-worker part-time as a side effect of writing rest of the code.

This is why you use proper facilities to write macros, that will enable you to forge better error messages than most C compilers. This is also why you don't simply use something like `defmacro` that will not let you supply syntactic information (syntax objects) with location and context information.

A good macro has a contract that constrains its use and informs you when you're breaking it, so that you don't have to rely on your coworkers to explain their macro to you.

See this[0][1] for a very detailed view on how you can provide the programmer with the tools they need to create abstractions that stretch well into the world of macros and still be able to make them usable.

You can set enforceable conditions for your macros to guide people, just like any language can statically check their syntax.

This is not an unsolved problem. What is unsolved, like many people have mentioned, is when the culture of a language (and the facilities of lesser languages) don't emphasize managing macros like other abstractions.

Using Lisps that don't emphasize more than "Macros are functions that modify lists of syntax at read time" will lead people to believe that's all there is to it. You won't have to be angry about macros if you use a language that gives people the tools to help you use them and then fosters that idea.

0 - http://docs.racket-lang.org/syntax/Parsing_Syntax.html?q=syn...

1 - http://docs.racket-lang.org/syntax/stxparse-specifying.html?...


No. Writing macros is less difficult than pretty much anything else. You just have to follow the proper methods.

Macros are, essentially, compilers. It is a very well studied area, writing compilers is a totally mechanical thing which does not require any creative thinking.

Compilers are best implemented if split into long sequences of very trivial tree rewrites (see the Nanopass framework for an example of this approach). Such tree rewrites should be declarative, and you don't really need a Turing-complete language to do this. This approach is inherently modular and highly self-documenting, so you're ending up with nicely organised, readable, maintainable code, much better than whatever you'd do with plain functions.
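A toy version of one such pass in Common Lisp (not the Nanopass framework itself, just the shape of the idea): a single rewrite that turns a made-up (incr e) form into (+ e 1) everywhere in a tree.

    ;; One trivial pass; real pass frameworks generate the tree
    ;; walking for you and let you state only the rewrite.
    (defun expand-incr (form)
      (cond ((atom form) form)
            ((eq (first form) 'incr)
             `(+ ,(expand-incr (second form)) 1))
            (t (mapcar #'expand-incr form))))

    ;; (expand-incr '(let ((y (incr x))) (incr (incr y))))
    ;; => (LET ((Y (+ X 1))) (+ (+ Y 1) 1))

Chain a few dozen passes like this and you have a compiler whose every step is individually easy to read and test.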


> However, many languages take an alternate approach and codify the most common programming patterns into their specification. The result is a compromise that works for the majority of cases, and better understood than custom macros.

That's not the case in my experience. It's very easy to look up the definition of a macro, or see what the macro-expansion of an expression looks like. Good luck trying to figure out what your interpreter/compiler/JIT is actually doing when the language documentation is lacking.

For Common Lisp at least the quality of documentation is also another huge advantage over any other language I've used - the Hyperspec (http://clhs.lisp.se/) is extremely detailed, concise, and easy to access (you can pull up the documentation page for a particular symbol with a keystroke, or browse through the index or by TOC when you aren't sure what you are looking for).

> Homoiconicity is elegant, but somewhat over rated in practice. Declaratively expressing intent does not require homoiconicity, you can do that in JSON or XML if you agree on a format. Now with lisp, code being data, you can be infinitely more expressive; but at the same time inferring intent out of complex code-as-data is essentially writing a very intelligent macro/parser. There's a cost to it.

Except that doing it with JSON or XML is usually two orders of magnitude more code than a macro solution would be. Now simple problems that could have been debugged with a macroexpand involve tens of thousands of lines of parser libraries (it's 2015, why are encoding errors still an issue for anyone?), ConfigurationFactories and class hierarchies for the internal representation of whatever it is you are trying to encode in JSON, and glue code to connect it to your current controller and model classes.

> Compilers can optimize well known constructs better than equivalent macros in lisp.

Compilers are very limited in the partial evaluation and domain-specific optimizations that they can do. This is not the case for macros.

> But there isn't a one programming paradigm that's universally better than others.

That's kind of the point of macros - to allow you to add new programming paradigms without having to change the base language or compiler.


> LuaJIT is, by far, the best option for scripting in games/RT

Much as we want to like it, we've been frustrated by Lua for a single reason: no preemptive threading library. Its lack of true prototype-style OO is more than a bit annoying too.


Threading I'll grant you, but prototype-based OO is practically Lua's defining feature. Could you expand?


Well, I guess that's not really fair. Obviously you can implement prototype-style OO, more or less, using the metatable and metamethod facilities. But as an old NewtonScript developer, I rather find

    foo = { x = 1, y = 2 }
    mt = { __index = bar }
    setmetatable(foo, mt)
to be crazy compared to

    foo := { _proto: bar, x: 1, y: 2 }
The uncleanliness of function declarations in Lua compared to NewtonScript also gets to me in this way. At any rate, to me, the metatable facility feels like a hack. That's not to denigrate it: it's a very flexible and clever gizmo which can be used for a variety of cool things. But to me proto-style OO languages are notable for their elegance and simplicity, and metatables expose a fair bit of wiring and boilerplate. [It also wasn't a good sign that the whole OO section in "Programming in Lua" was about "classes" :-( ]


There are Lisp-to-Lua solutions like l2l [1] or moonlisp [2].

(Shameless plug: I also have a WIP hy[3] inspired lisp->lua compiler hua[4])

[1]: https://github.com/meric/l2l [2]: https://github.com/leafo/moonlisp [3]: http://hylang.org [4]: https://github.com/larme/hua


I've looked into both l2l and moonlisp and they were too incomplete to be usable in their current state, and they haven't been worked on much recently. The most current and interesting effort in this area, in my opinion, is Conspire [1], which is actually based on Terra [2], another amazing piece of software.

The decision not to go with LuaJIT was complicated, and depending on how it goes with S7 I probably won't make it again for a new game. For starters I doubt I will delegate any per-frame, non-evented computation to the S7 side, while I could easily do it in LuaJIT. But I'm already seeing an at least 10:1 ratio of lines of code for things like building GUIs, and it's only going down even further now that I'm implementing event callbacks with lambdas and adding a reactive-like system for rendering updates.

Good work on hua! I read somewhere that once you learn a Lisp you are destined to write one. I've been tempted too :)

[1]: http://blog.duangle.com/2015/01/conspire-programming-environ... [2]: http://terralang.org/


[deleted]


Millions of developers got no clue about the power of meta-languages.


That 1mm single-fiber endoscope is just amazing, straight out of a spy/sci-fi novel.


Oh, it's that guy. Awesome. That's piqued my interest more than Stephenson joining them.


This is notable not just because of Neal Stephenson, but because his blog post contains what I think is the most detailed public description of the tech so far:

--- Here’s where you’re probably expecting the sales pitch about how mind-blowingly awesome the demo was. But it’s a little more interesting than that. Yes, I saw something on that optical table I had never seen before--something that only Magic Leap, as far as I know, is capable of doing. And it was pretty cool. But what fascinated me wasn’t what Magic Leap had done but rather what it was about to start doing.

Magic Leap is mustering an arsenal of techniques--some tried and true, others unbelievably advanced--to produce a synthesized light field that falls upon the retina in the same way as light reflected from real objects in your environment. Depth perception, in this system, isn’t just a trick played on the brain by showing it two slightly different images.

Most of the work to be done is in applied physics, with a sizable dollop of biology--for there’s no way to make this happen without an intimate understanding of how the eye sees, and the brain assembles a three-dimensional model of reality. I’m fascinated by the science, but not qualified to work on it. Where I hope I can be of use is in thinking about what to do with this tech once it is available to the general public. "Chief Futurist" runs the risk of being a disembodied brain on a stick. I took the job on the understanding that I would have the opportunity to get a few things done. ---


One of the things that's interesting to me about that quote is the phrase "optical table."

I think that there's a presumption (that I've shared) that Magic Leap is trying to make AR goggles or glasses. But, as has been extensively commented upon in the past, that's just crazy. We're just barely at the point of doing semi-decent VR (Oculus Rift) and just barely at the point of doing semi-decent wearable heads-up displays (Google Glass). The idea that Magic Leap could in any foreseeable timeframe create a device that has all the virtues of the Rift + Glass + A huge dose of additional technology on top of both is just laughable.

But if they're trying for something much heavier-weight, like the ability to create convincing illusions not in the form-factor of "some goggles," but rather, "a specially prepared room and table," then that's maybe a little more realistic -- and of course less obviously revolutionary.


I think you should read this as "what they are doing in the lab."

An "optical table" is to optical technologies as a solderless breadboard is to electronics.

Basically, it's a big, stable platform with lots of threaded holes of a standard size and pitch for attaching lasers, mirrors, etc. Most have some kind of pneumatic isolation or damping to keep vibrations from being transmitted from the floor. Things like interference phenomena are sensitive to displacements of a few nanometers, so you really don't want things like passing trucks to ruin your experiments!

Examples here: http://www.newport.com/Optical-Table-Selection-Guide/140219/...


This got lost between my brain and my previous message, but yes, I agree, that's the most likely explanation.

But I mention the "they're trying to build something like the equivalent of the early table-based Microsoft Surface (before that meant a tablet), but pseudo-holographic instead" idea just because it's so unbelievable to me that stand-alone goggles can possibly deliver what they're claiming.


Stand-alone goggles make more sense than a table, because they're closer to the eye and can more directly manipulate what the user sees.

If you read through the depths of Magic Leap's site combined with their patent applications, it becomes clear that they are trying to develop a set of goggles which combines some form of projection onto the retina [0] with some form of selective blocking [1] (to give contrast, and prevent the projected images from appearing as hazy mirages over the light otherwise reaching the retina).

Especially the blocking would be impossible to achieve with anything but goggles.

[0]: https://www.google.com/patents/US20140071539 [1]: https://www.google.com/patents/US20130128230


I think the idea is not that they wouldn't have goggles—these lab prototypes have goggles—but that they might be much more constrained than a free, walk-around head mount. Think: Imagineering-like controlled experiences down to arcades rather than a personal walk-around device like Glass^n or Rift AR. Controlling both the environment and background they augment and the head movement (and not worrying as much about miniaturization) could make the problem easier enough to be manageable.

Still unclear. They do seem to imply they want it to be an unconstrained mobile AR device, but that is indeed ambitious enough to warrant skepticism. Walk-around tracking for home/anywhere is still an unsolved problem for Oculus, and overlay AR is at least several times more demanding.


I think if you're tracking the user's position, then it may be practical to beam different images to each eye for a good 3D experience.

That's actually how (kinda) Stephenson's VR system he describes in Snow Crash works.

This kind of gets away from many of the issues you have with the Oculus Rift where you have such a tiny window of time (20ms or so) to react to how the person is moving his head. You still have to change the image based on the user's position, but not as much on head rotation which is the really hard part. Just each eye's location in space matters.

Multiple users would likely require multiple projectors.


It could also be far more revolutionary. They make it sound like they invented a small-scale holodeck. If that is the case, no matter how it works, or how limited it is, the potential is huge. Realistic holograms would be a major step forwards in connecting computers to humans.


I think the major issue is going to be power. The battery requirements for processing a small-scale holodeck will be huge. Making something portable that lasts for a sufficient time is going to require an energy breakthrough.


>I think that there's a presumption (that I've shared) that Magic Leap is trying to make AR goggles or glasses.

Probably because that's explicitly what they've claimed they're working on in their recent funding announcement. The CEO described their product as a "lightweight wearable".


I think there are very concrete ways to accomplish this, for example an arbitrarily fast spinning mirror with a 3d scanner and a projector with an arbitrarily high refresh rate. If this system were precise enough, you could have strong stereoscopy with a table.


It sounds more to me like direct projection onto the retina. Has this ever been attempted? Would that work with lasers?


There have been head-mounted "virtual retinal displays" that focus a laser onto the retina.

I think they're mostly in patent hell.

http://en.wikipedia.org/wiki/Virtual_retinal_display


> Manufacturers and commercial uses ... Consumer versions are expected to ship fall of 2015

It would be interesting if this was progressing because some major patent roadblocks were about to expire.


Interesting. Thanks!


Stupid question: Who is Neal Stephenson, and why is he famous?

Wikipedia suggests that he's a sci-fi author, is there a particular reason for his fame, could you recommend some of his works?


Snow Crash for old school howling-metal cyberpunk.

The Baroque Cycle for a semi-fictional view of the beginnings of science - the Newton and Hooke era.

Anathem for an interesting take on the philosophy of science disguised as a sci-fi epic.


Have you not read 'The Diamond Age'?

IMO it is one of his best works. Nanotech/Networks/Crypto for the masses to understand. I read, loved and was caught up in the VR fever of the 90's via Snow Crash, and love his other books, but TDA is the one I'll never get rid of.


I loved The Diamond Age as well, although I remember mostly bits and pieces now:

* cult sex scenes

* a forced-participation theater that humiliates you using Oculus Rift technology

* the Kill Bill-esque ending

* ... and, most of all, the idea of continuous education using an immersive Minecraft-like book/world that expands in complexity as your education grows.

Too bad we (the hackers) never completed even a crude version of the Primer (the book in the last bullet point) for the young geeks out there.


TDA is an interesting take on the issues a post-scarcity world might face. Unfortunately my immersion in the book was broken in several places by cringe-worthy "computers will never be able to do X" tropes. For example, the Primer must get a human to read its text aloud because "computers will never be able to reproduce a human voice"; this is in a world where atomically-precise, molecular diamond nanocomputers can be essentially 3D printed for free.


* > computers will never be able to do X*

I like your point; it complements my previous one nicely. Soon enough, computers may be able to do many things very well -- however, hackers are not catching up to corporations.

What I mean: if I remember correctly, in the DA world TV and big companies have as big a grip on the general populace as they have today, but hackers are able to create alternatives, like the mentioned Primer for children's education.

In the real world, "hackers" (or those with the technical know-how to be one) love Apple and Google as much as the rest of the populace does, and leave the big things (OS, main APIs, maps, voice assistant, their personal data, ebook stores, videos available to young children) to them with very little opposition.


I think things will get more interesting when Oculus, Meta, Magic Leap and others get a little bit more entrenched (ass-u-me'ing it happens!).. give it another 10 years or so.. ;)


* They built everything with DIAMONDS... because it was cheap.


Someone endowed with a clue should get going with a Kickstarter for at least a film version of TDA, if not a lot of what was described therein :)

Damn, I'm going to have to read it again now. Still, it's about time for a refresh...


IMHO, it would do better with the Studio Ghibli treatment, than a live action movie.


Read the book in college, recently re-read as an audiobook - still relevant, still tantalizing.


I think I've read all his books. But his work is pretty diverse, so you get to pick and choose based on taste and preference.

There was a time when I thought Snow Crash was the best. There was a time when Cryptonomicon was a lot of fun (still is). Nowadays I incline slightly more towards the 'philosophical opus' type of vibe that Anathem gives off.

Anyway, all his books are pretty good representatives of one sub-genre or another. He's a very good author, and he wrote in a lot of different keys through his career so far.


All are fantastic. I was put off by the beginning of Snow Crash initially, as there are some tongue-in-cheek bits that struck me as too campy. But I might not say that now, having read it a few times.

The Baroque Cycle is a massive piece of work spanning 3 volumes, comprised of 8 nominally independent books. If it seems intimidating, just try the first one and see if you're not hooked. I'd love it if there was twice as much material.

Anathem is by far my favorite. Its hooks take longer to set, but for me they set much deeper. There is a lot going on in this book, and it will truly blow your mind if you let it.


I'm reading Snow Crash at the moment for the first time.

The beginning is actually really tough going as a completely new reader today. It's just so ridiculous. I can see where he was coming from, as I grew up in that era, but it's actually pretty bizarre now, given that in reality nation states, religion and banks turned out to be so much more powerful than corporations.

Which is one of the perils of predictions in ageing sci-fi.

I've been on a sci-fi kick recently of all the classics I never read (William Gibson, Ender's Game, The Mars Trilogy, Forever War, Starship Troopers, A Canticle For Leibowitz, Philip K. Dick, Hyperion Cantos, Ringworld) and re-reading some I've not read for a long time (Foundation Series).

I personally found that Snow Crash is by far the most dated book. Even Ringworld and the Foundation series were better.


I dunno, I find Snow Crash dates much better than most cyberpunk - William Gibson included - precisely because the ridiculousness was intentional. Neal no more believed we'd actually be living in an anarcho-capitalist dystopia with samurai-sword-wielding hipster-heroes delivering pizzas for the Mafia than Aldous Huxley believed we'd actually be letter-graded and programmed into praising his Fordship from birth.

Some of the space opera, on the other hand, was so earnest and certain that we'd be flying around at light speed by now that you feel almost disappointed for the authors.


Vernor Vinge is another great hard sci-fi author. "A Fire Upon the Deep" is a space opera epic if I've ever read one. Currently finishing up "Rainbows End", and wasn't sucked totally in until maybe 1/3 through, but now I'm hooked :)


Rainbows End is one of my favorite books. I have probably read it 4 or 5 times since 2007. I find the ideas in it have gotten more accurate as time goes on.


> Snow Crash is by far the most dated book

All cyberpunk is like that, for reasons that are pretty obvious.

> Even Ringworld and the foundation series were better.

Of course. Physical reality changes much more slowly.


I mentioned other cyberpunk that's not as dated: I've read 2 or 3 of the Neuromancer series and that hasn't fared anywhere near as badly; the only glaring plot point I noticed in it is that no one had mobile phones.

And when I referred to Foundation & Ringworld I meant that they are from the 60s and so have some weird cultural ideals, as well as some (unintentional) misogyny & racism in the Foundation series.


It was bizarre then. I was very skeptical the first few pages. I did not understand that the adolescent cheese was tongue in cheek until the second chapter.


Check out the Old Man's War series by John Scalzi. It's kind of Starship Troopers meets The Hitchhiker's Guide to the Galaxy (a bit less absurd).


Have you read R.A. Lafferty?


> Anathem is by far my favorite. Its hooks take longer to set, but for me they set much deeper. There is a lot going on in this book, and it will truly blow your mind if you let it.

I've a major in Physics and it took me a couple of readings to figure out all (well, most of) the connections therein. My favorite game to play while reading Anathem was figuring out where the borders lie between historical fact (translated into the fictional world of Arbre, of course), current hypotheses within present-day science, and just downright fiction. Quick quiz: is "geometrodynamics" Stephenson's invention, or a term used in the real world? You get puzzles like that at every step, some easier, some harder.

A few examples that stand out:

Actual history of science - well, Thelenes, Adrakhones, Saunt Tredegarh, Saunt Muncoster, etc. (again, real people disguised under the mask of Arbran characters)

Current hypotheses - the whole Multiverse thing, the Fraa Paphlagon / Hugh Everett parallel.

Out-and-out fiction - eh... this is harder. The Wick, maybe?

And then there's Fraa Jad, all alone in a category of his own. :) I daresay one of the most striking, memorable characters in all sci-fi - if you get the point of the whole book.


Yes, I enjoyed it too. [1] is a good helper

[1] http://anathem.wikia.com/wiki/Earth%E2%80%93Arbre_Correlatio...


Don't forget Cryptonomicon!


I would also highly recommend:

The Mongoliad: a semi-fictional view of the mid-thirteenth-century Mongol invasion of Europe

Reamde: MMO gold farming, social networking, the criminal methods of the Russian mafia, Islamic terrorists


ugh. Reamde.

Reamde starts off with a lot of interesting ideas, and then morphs into quite possibly the worst watered-down, airport-paperback, fourth-rate-Tom-Clancy-thriller nonsense I have ever read. Avoid it at all costs. Unbelievable plot and character motives. Ick.


I really liked Reamde. It's a close second to Anathem as far as Neal Stephenson's books go for me.

If you can suspend disbelief at the sheer ridiculousness of the situation, it's pretty awesome.


Seconded. It's really awful. Read Snow Crash, Cryptonomicon, and Anathem.


The Russians were fun, if a bit cartoonish.


Man, I am almost done with Reamde and would not recommend it to anyone. I have a massive vocabulary and I was still looking up words every few pages. Every time I suspected it was a synonym for a simpler word, and every time I was right.

Writing aside, I've found the plot pretty slow, with not much of interest happening for most of the Zula portions (i.e. the middle half of the book). Maybe Stephenson's level of detail just isn't for me, but it really wants editing.


Tastes differ. I thought it was a fun story and a quick easy read. I'd recommend it to pretty much anyone. My wife, mother, and sister, all of whom are big readers but none of whom are really in the target nerd demographic, all liked it.

Anathem and Diamond Age were harder for me because of the depth of ideas. I had to slow down and think to get through them.

Baroque Cycle was harder because of the sheer number of characters with multiple and/or similar names, which is realistic but annoying. There's a list of characters in the back of the first book, which helps, but it's annoying to have to keep the first book handy when you're reading the others.

Stephenson does have a reputation for starting great books and not knowing how to finish them. But I think I've gotten more than my money's worth out of all of them.


I loved the Baroque Cycle, but I am also a history fan so a lot of the names were already familiar to me.


If you're familiar with his style, and you take Reamde for what it really is - stuff he played with while taking a break from "real work" after the massive Baroque trilogy - then it makes sense and it's quite enjoyable.


The Mongoliad is team-written, and it shows. It's a sprawling, uneven work, with interesting parts, but also tedious ones. It is in no small part a vehicle for the authors' interest in the technical aspects of fighting with medieval weaponry.


In the parts of the Mongoliad I managed to make it through, there were characters and plots and swordfights the way you find characters and plots and sex in a porn flick. If that's your thing, you'll really like the book, but I'm just not that into swordfights.


In addition to what everyone else said, Stephenson's really long essay In the Beginning was the Command Line is worth a read, if one has the time:

http://www.cryptonomicon.com/beginning.html

Warning: written fifteen years ago, also really long.


As others have mentioned, Stephenson is known for his fiction. However, his nonfiction piece from 1996 for Wired magazine is my personal favorite piece. It recounts his trip around the world investigating undersea fiber-optic cables and speculating on what it all means for the future:

http://archive.wired.com/wired/archive/4.12/ffglass_pr.html


This quote: "Cable layers, like hackers, scorn credentials, etiquette, and nice clothes. Anyone who can do the work is part of the club. Nothing else matters. Suits are a bizarre intrusion from an irrational world. They have undeniable authority, but heaven only knows how they acquired it." Love it!


I would recommend Cryptonomicon.


He's a sci-fi author with a particular talent for believable, well-reasoned near-future fiction. I recommend reading Snow Crash without looking at the publication date until after you finish the book.


Outside of sci-fi novels, he is known for his insights into tech/Internet propagation, particularly during the 90s/00s.

I'd recommend his "Mother Earth Mother Board" article (more like a small book, be warned) as the definitive tome on the physical reality of how the Internet is held together across the world: http://archive.wired.com/wired/archive/4.12/ffglass.html


Stephenson's post mentions Snow Crash. Start there.


He's relevant here for many reasons (his tech writing, consulting, other projects, etc.), but particularly it's for his concept of the "Metaverse" which appeared in Snow Crash: http://en.wikipedia.org/wiki/Metaverse


He's an author, but he has a bit of a history of 'product vision'. This might be interesting background: https://medium.com/message/the-kindle-wink-4f61cd5c84c5


Don't forget to tell Neal how much you love "The Big U" http://www.amazon.com/Big-U-Neal-Stephenson/dp/0380816032


{EDIT: snip}

Google Earth was heavily influenced by ideas in his books.

The Kindle was codenamed "Fiona", after Fiona Hackworth, a character in one of his books who uses a super-duper e-book.


> his blog post contains what I think is the most detailed public description of the tech so far

Gizmodo would beg to differ: http://gizmodo.com/how-magic-leap-is-secretly-creating-a-new...


On "how... the brain assembles a three-dimensional model of reality":

I was recently pulled into the wormhole of present-day speculation (science??) on this. And wow, it sounds like what they are working on may yield real, new understanding of cognition and consciousness.

One of the entertaining reads: "Space, self, and the theater of consciousness" by Trehub http://people.umass.edu/trehub/YCCOG828%20copy.pdf


Now all we need is Class V Star Cruiser to put it in.


~150 USD/month from http://thespatials.com/, with a jump to 1000 USD in October after we announced our successful Greenlight. Launching on Steam in early 2015 and expecting to shake things up.


This reminded me of ATS - http://www.ats-lang.org/ Check also Chris Double blog - http://bluishcoder.co.nz/tags/ats/

ATS combines ML-style types, linear types, dependent types and theorem-proving. Plus it compiles down to C, has pointers including (safe!) pointer arithmetic, has trivial C interop, and doesn't have GC (alloc/free safety is provided by the linear types, like with Rust lifetimes). It's a really interesting beast and I hope I have more time in the future to dive into it. Idris wins in the syntax department though.
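
To make that analogy concrete, here is a minimal sketch in Rust (not ATS, and only an analogy -- ATS's linear types are more expressive than Rust's ownership model); the Buffer type and function names are made up for illustration. The point is that the allocation is tracked like a linear value: it is freed exactly once, any use after it has been given away fails to compile, and no GC is involved.

    // Hypothetical example: a heap allocation handled as a linear resource.
    struct Buffer {
        data: Vec<u8>,
    }

    fn consume(buf: Buffer) -> usize {
        buf.data.len() // `buf` and its heap allocation are dropped here
    }

    fn main() {
        let buf = Buffer { data: vec![0u8; 16] };
        let n = consume(buf); // ownership moves into `consume`
        println!("{} bytes, freed exactly once, no GC", n);
        // println!("{}", buf.data.len()); // compile error: use after move
    }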


> alloc/free safety is provided by the linear types, like with Rust lifetimes

Rust's move/ownership semantics are based on linear types, whereas its lifetimes are based on regions. ATS has linear types, but not regions.
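
A tiny Rust sketch of that distinction, hedged as an analogy rather than a statement about ATS (the function names are invented for the example): the lifetime parameter plays the role of a region, confining the returned reference to the borrow it came from, while passing the String by value is the linear/ownership side, where the caller gives the value up.

    // Region-like: the returned reference is confined to region 'a of the
    // input borrow and may not outlive it.
    fn first_word<'a>(s: &'a str) -> &'a str {
        s.split_whitespace().next().unwrap_or("")
    }

    // Linear-like: taking the String by value consumes it; the caller
    // cannot use it again afterwards.
    fn consume(s: String) -> usize {
        s.len()
    }

    fn main() {
        let owned = String::from("hello world");
        let w = first_word(&owned); // borrow, tied to `owned`'s region
        println!("first word: {}", w);
        let n = consume(owned);     // move; `owned` is consumed here
        println!("length: {}", n);
        // println!("{}", owned);   // compile error: value moved above
    }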


Please don't name it just "Meta"; it will make it a lot more difficult to find resources for it on the web. Imagine googling for "meta", "meta lang", "meta language", "metalang". Maybe something to go with "meta", like MetaCaml.


Agreed, just meta probably isn't a very good name (neither is standard meta-language for that matter, but at least the abbreviation sml isn't entirely hopeless). Just "meta" would be rather close to OMeta[1] as well.

I tried to see if there might be a more generic animal name to use for inspiration, and so learned that: "The even-toed ungulates (Artiodactyla) are ungulates (hoofed animals) whose weight is borne approximately equally by the third and fourth toes, rather than mostly or entirely by the third as in odd-toed ungulates (perissodactyls), such as horses.". Which while somewhat interesting doesn't really reveal an immediately usable name. But at least Artiodac should be unique...

[1] http://tinlizzie.org/ometa/


The hyrax. A very cute animal that can scale sheer rock faces.


Does seem to be rather distantly related to camels, though?


I didn't realize something camel-like was wanted.


I thought it was rather obvious in context that camels are even-toed ungulates, and so that term names the group to which camels belong (but more generically); and the language was a kind of meta-(o)-caml, so looking for something related to camels, but more generic, might be fruitful in order to find a name that was both unique and also had a flavour of being "meta-caml" (or meta camel). Using myself as the universal yardstick, I can say with certainty that not all people familiar with (o)caml and camels are aware camels are ungulates, let alone even-toed ungulates... hence that particular avenue was a bit of a dead end...

Anyway, the Hyrax looks almost as interesting as the honey badger -- certainly a worthy code name -- but perhaps not for meta(o)caml IMNHO.

[edit: re-reading the language author's comment that "It will be for OCaml what Elixir is for Erlang." -- I suppose something that is to camels as Elixir is ... to an abstract concept? Options might include: Water, Bedouin, Desert?

I also note that: "The earliest known camel, called Protylopus, lived in North America 40 to 50 million years ago (during the Eocene). It was about the size of a rabbit and lived in the open woodlands of what is now South Dakota."

I don't think protylopus is a very good name either, but probably better than ungulate. Perhaps "Dakota Caml" or Dakota Meta Language (dml) might work :-) ]

