
I used to talk to him on one of those IRC networks for iOS hacking (I forget the name); a good old friend of mine was involved in the scene, so I dropped in. He is definitely a skilled individual, but he lacks the key thing I think all true senior engineers need: humility. Without humility you're going to find yourself bikeshedding and wasting time and effort on things that may not actually be attainable within your lifetime with your current resources.

I personally don't see AI getting anywhere any time soon. Just look at some of the really funny images that even the best available AI generates[0]. Subtle failures like this show you that AI is really a UX layer over a piece of software that attempts to understand what you're asking for with far less input than it would take to have enough buttons to specify EXACTLY what you want (I'm thinking of Photoshop in the case of text -> image).

Some things in AI do impress me, but the fact it's not far more pervasive in our everyday lives (there are people who use voice assistants, sure, but it's not EVERY household doing it) tells me we still have a long way to go.

[0]: https://www.reddit.com/r/technicallythetruth/comments/y26t6x...



> but the fact it's not far more pervasive in our everyday lives

The joke I heard was that AI only refers to technology in the future, and not the past. We use plenty of machine learning models every day:

- Every time you use dictation

- Every time you use biometrics, like face scanning, to unlock your phone

- Every time you use any sort of search engine, news feed, or see an ad

- It is an integral part of the field of computer vision now, so anything to do with scanning documents, using AR, taking photos, etc. uses AI

It’s only getting better and more integral to more fields over time.


> The joke I heard was that AI only refers to technology in the future, and not the past.

This is actually known as the AI effect.

https://en.m.wikipedia.org/wiki/AI_effect


This is a fair callout. I guess I'm talking more about advanced things further in the future, though, like the robot maids in The Jetsons.


Yeah, exactly what the AI Effect is about :)

> We don't use AI anywhere today

> But here in X places we do use AI

> Yeah, but not for Y

Once Y is using AI:

> We don't use AI anywhere today

> But here we're using AI for Y

> Yeah, but not for Z

And so on :)

From Wikipedia:

> Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"

> "The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet.

> Some people think that as soon as AI successfully solves a problem, the problem is no longer a part of AI.


Fair, though in my case (which is insignificant and out of the norm) I've always wanted AI robots; as a kid I loved watching Dexter's Lab and seeing him make robots to do things. I'm also okay with a really powerful AI assistant beyond what we have today. I've tried using AI assistants a few times, but I always end up shutting them off, either over privacy concerns or because I get tired of figuring out the word soup to make them work. I really want to see an AI assistant that is insanely capable and fully offline first, so maybe something like Mycroft. I'm not sure how capable it is without reaching out to the internet, but I assume it has some capabilities without being online.

In my eyes, AI is something that could pass the Turing test without fail, ever, because it knows how to communicate the way we do and to think the way we do.


I've seen it unfold in real time with the famous Lee Sedol / AlphaGo match. On the day before the first game, people were saying the match would be a joke, as machines would never understand the deep philosophy of go. A few days later, people were saying it was just a matter of calculating things very fast, and that the match was clearly rigged because Lee only had one brain while AlphaGo had thousands of GPUs. (Yes, someone actually said that. Facepalm.)


Some would call that "machine learning". I know it's hard to draw the distinction, but the term "AI" is too ill-defined to argue about what constitutes "real AI" and what is mere machine learning. Are those examples "thinking machines" (~= "artificial _intelligence_")? I'd say no; for the most part they are very good statistical pattern matchers without any understanding of the subject matter.

GPT-3 and image generators that have somewhat of a world model are, imo, closer.


Intelligence is a heap paradox, and trying to define where the boundary between artificial "intelligence" and "really good pattern matching" lies is a fool's errand. Intelligence is a continuum, from the simplest bang-bang thermostat all the way up to the human brain.
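
To make the low end of that continuum concrete, here's a minimal sketch of a bang-bang thermostat in Python (the setpoint and hysteresis values are made up for illustration):

    # Bang-bang control: the simplest point on the intelligence continuum.
    def thermostat_step(temp_c: float, heater_on: bool,
                        setpoint: float = 20.0, hysteresis: float = 0.5) -> bool:
        """Return the new heater state given the current temperature."""
        if temp_c < setpoint - hysteresis:
            return True   # too cold: switch the heater on
        if temp_c > setpoint + hysteresis:
            return False  # too warm: switch the heater off
        return heater_on  # inside the deadband: keep the current state

Everything "smarter" than this is, arguably, just more state and more elaborate decision rules.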


I think I have to disagree with humility in engineering. There's a certain arrogance intrinsic to engineering, in that you have to be capable of believing you can find a better solution to a problem than literally every other person in history. Obviously it needs to be tempered in interactions with the outside world, but without that core idea, an engineer can't do their job effectively, like a surgeon who loses their nerve when it comes to taking lives in their hands.


> There's a certain arrogance intrinsic to engineering...

I don't agree. You can take pride in what you've accomplished, and you should, but you can be humble at the same time. Accepting that there will be individuals better than you, or solutions better than yours, is both sobering and a motivation to be and do better.

When you accept that you're very smart, but not the smartest, you unlock the potential to be one of the 10x-100x engineers others aspire to be.

I'm a very motivated individual; I did some pretty impressive things back in the day, and I'm still capable of doing them. However, I'm not the best, or at least won't be the best for long. This is how the world works.

Being able to say thanks and to say sorry goes a long way, and takes you beyond where unabashed arrogance can.

Lastly, arrogance makes you fragile, and fragility is not good under stress or real-world conditions.


That isn't the best example, because surgical error due to the stereotypical cowboy surgeon was common, and measurably improved with better process, i.e. surgical checklists: https://pubmed.ncbi.nlm.nih.gov/24973186/

Lack of humility can lead to the wrong limbs being amputated. The parallel to understanding your personal limits and knowing when to trust other people directly applies here. Confidence != arrogance.


> There's a certain arrogance intrinsic to engineering, in that you have to be capable of believing you can find a better solution to a problem than literally every other person in history.

As a programmer, I have never once believed that. I just believe that the problems I am being presented with are novel enough (usually because of the context and constraints surrounding them) not to have been encountered and solved before.

Just like you never cross the same river twice, I don't think you ever solve the same problem twice.

I think the core skill engineering requires is persistence: the belief that you can find a workable solution even when the problem itself gives you absolutely no positive feedback during the process.


That applies to problems that can be solved by 1 person.

But the huge majority of problems require whole teams of people to solve, especially in the commercial world. A senior engineer who can 2x a team of 10 people is better than a single engineer who is 10x by himself.


I agree that arrogance is common, but I'm not sure I'd call it necessary. I know a lot of good engineers who are very confident that they are good enough to build a great solution, but not arrogant.


I think there needs to be a mix of humility and arrogance. Humility because sometimes there’s an issue that’s unsolvable and a good engineer needs to know to pull back. Arrogance because it takes at least some arrogance to believe there’s a solution where others haven’t found one.

There are plenty of times in my career where I wish I were more arrogant. I’d probably be in a completely different place altogether if I were even slightly more arrogant and that lack of it has held me back in some ways.


Wouldn't "confidence" be a better word than "arrogance"?


Sorta. Confidence is nice, but arrogance is the difference between "I can do that" and "people have tried and failed, but I know I can do it because I'm better." There's a particular example where I was arrogant about something (one of the few times): I said yes to a project that someone else had already architected, and I re-designed the architecture because I believed I was a better engineer. I need a little more of that. Particularly now, when I have an idea for a company I want to start but internally I'm scared. I'm confident in my ability, but I'm lacking the arrogance to just fucking go for it. I spend more time beating myself up over it than just doing the work. And I wish I had that dab of arrogance to go for it, as opposed to just confidence.


Right, but humility is crucial in machine learning (and particularly in the area of self driving cars) where you're constantly calibrating uncertainty and risk with lives on the line.


You are onto something.

For hard tasks like engineering, you need a balance of ego (I can do this) and humility (this is hard, I need to work as a team and study the problem). Too much of either results in negative outcomes.


Just a note on humility: some people are taught it from childhood. Enough failures, mixed with the steady hand of support, usually produce humility. If you have too many failures where you're the only one picking yourself back up, it can produce a kind of non-humility that gets you where you need to go but isn't very useful in the long term; colloquially, I believe this is called having a chip on your shoulder. There's also lots of early success, where failures are simply forgotten, smoothed over, or not learned from, that can produce another form of non-humility. People in the latter situation can learn humility in adulthood, but it probably takes other, very patient people to get them there. I have at times worn the chip on my shoulder despite no longer needing it, and can attest it's difficult, but worthwhile, to shake.


I feel as if there's a fallacy in your analysis. AI models are, in essence, just decision trees, and the same goes for human inference. That there is no magic, and that it seems mundane, is the point. Our reality is surprisingly simple, and these new models demonstrate as much for artists and programmers alike.

It's simply moving the goalposts, the same as with Google's chatbot that "fooled" their own engineer. Thus it becomes a chicken-and-egg situation as to when AI truly passes the Turing test. It's like putting on a VR headset and being fooled into thinking you're someplace else the first few times; once you're used to it, it becomes like changing your reality the way putting on new glasses does.

Human beings have a hard time with exponentials. My guess is that by next year we'll have AI-generated video that is indistinguishable from a human production. It's just software, sure, but so are we.
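
A toy illustration of what I mean about exponentials (the yearly doubling is purely hypothetical, just to show how fast linear intuition falls behind):

    # Hypothetical capability that doubles every year, for illustration only.
    level = 1.0
    for year in range(1, 11):
        level *= 2
        print(f"year {year:2d}: {level:5.0f}x the starting capability")

Ten doublings is a 1024x jump; intuition tends to extrapolate the first year or two linearly and miss the rest.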


> I personally don't see AI getting anywhere any time soon

Totally, like there's no way an AI company could hit $80M revenue in <2 years [0], or power the recommendation systems behind billions of dollars worth of online commerce.

Also, there's no way it could generate audio/images/video/text that would trick the average person into thinking it wasn't generated by an AI.

[0] https://twitter.com/tszzl/status/1583357703337885697


These are all examples of scaling and miniaturization. It's impressive, to be sure, but nothing that hadn't already been done. Fundamentally we're still running mostly the same algorithms, and have spent the last however many years optimizing and scaling everything from hardware to labelling and automation.

That's why "lol it's just spreadsheets and if-statements" is kind of funny: there's a grain of truth to it.
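
To show that grain of truth: a one-hidden-layer network's forward pass really is just multiply-accumulates (spreadsheet math) plus a branch per unit. A minimal sketch with random placeholder weights, not a trained model:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # hidden layer
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # output layer

    def forward(x):
        h = W1 @ x + b1
        h = np.where(h > 0, h, 0.0)  # ReLU: literally an if-statement per unit
        return W2 @ h + b2

    print(forward(np.array([1.0, 2.0, 3.0])))

The magic is in the learned weights and the scale, not in the operations themselves.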


Your example is perhaps misleading. I tried to reproduce it using Stable Diffusion 1.5, and it's possible using negative weights and some creative keywords, but I doubt the title of the post is the actual prompt.


I can get something like this by adding context distractions. For example, "bear eating salmon in river" shows whole fish, but "bear wearing tophat eating salmon in river" gives a bear (without a tophat, usually) eating supermarket-sliced salmon in a river.
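
For anyone who wants to poke at this themselves, here's a rough sketch using the diffusers library with Stable Diffusion 1.5 (the prompts are from my example above; the negative prompt is my guess at how to push it toward sliced fish, not anything from the original post):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe(
        "bear wearing tophat eating salmon in river",   # context distraction
        negative_prompt="whole fish, live fish",        # nudges toward fillets
        num_inference_steps=30,
    ).images[0]
    image.save("bear.png")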



