Absolutely not a chance. You see, in the past there was nothing to train it on. And that's sort of the point: the only reason this AI image generation works at all is that it is riding on the hard work of the people who had the skills and put in the time and effort.
Sure, but the practical form of this attack is limited.
You can't maliciously embed it in a site you control to steal map usage or run up their bill, because other people's web browsers will send the correct Host header.
That means you'd have to request it with a botnet or a similar scripted setup that spoofs the header. But if you are botnetting, Google will detect you very quickly.
Is there a way to use the Google Maps APIs on the web without exposing the key?
Re: the Host header, that seems an odd way for Google to do it; surely they would have fixed that by now? I guess it's not a huge problem, since attackers would have to proxy traffic or something to obscure the Host headers sent by real clients? Any links on how people exploit this?
Something that can be abused is if the key also has other Maps APIs enabled, like the Places API, Routes API, or the Static APIs. That's especially true for scraping, because those produce valuable info beyond just embedding a map.
The only suggestions I have are:
- If you want to totally hide the key, proxy all the requests through some server.
- Restrict the key to your website.
- Don't enable any API that you don't use. If you only use the Maps JavaScript API to embed a map, then don't enable any other Maps API for that key.
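The first suggestion (proxying everything) can be sketched roughly like this. All of it is illustrative, not Google's API: the environment variable name, the path, and the helper are assumptions; the only point is that the key lives server-side and never reaches the browser.

```python
import os
from urllib.parse import urlencode

# Hypothetical sketch of the "proxy all requests" suggestion: the key is
# held only on the server, and clients call this proxy instead of Google
# directly. Names (MAPS_API_KEY, paths) are made up for illustration.
GOOGLE_MAPS_BASE = "https://maps.googleapis.com"
API_KEY = os.environ.get("MAPS_API_KEY", "server-side-secret")

def build_upstream_url(path: str, params: dict) -> str:
    """Merge the client's query params with the server-held key."""
    merged = {**params, "key": API_KEY}
    return f"{GOOGLE_MAPS_BASE}{path}?{urlencode(merged)}"
```

The actual forwarding (fetching `build_upstream_url(...)` and relaying the response body) is a few more lines in any web framework; the key just never appears in client-side code.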
It would be helpful if you answered the question about web API usage; most of that is not relevant.
The only suggestion I see there from a quick skim that would avoid the above is for customers to set up a Google Maps proxy server for every usage, which adds security and hides the key. That is a completely impractical suggestion for the majority of users of embedded Google Maps.
Today's swap also isn't preallocated by the user; it is handled entirely by the OS. If it needs swap space to hibernate, it will go ahead and allocate it itself.
It does? Last I checked, Linux doesn't do dynamic swap sizes, and while Windows has dynamic swap sizes, it has a separate big non-dynamic file for hibernation. I have no idea what macOS does.
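For what it's worth, on Linux you can see that swap areas have fixed sizes by reading `/proc/swaps`. A quick parsing sketch (Linux-specific; the sample output in the test is made up, and real output varies):

```python
def parse_proc_swaps(text: str):
    """Parse /proc/swaps-style output into (name, type, size_kib) tuples.

    /proc/swaps lists each swap device or file with a fixed Size column
    in KiB; the kernel does not grow these on demand.
    """
    rows = []
    for line in text.strip().splitlines()[1:]:  # skip the header row
        name, kind, size, *_ = line.split()
        rows.append((name, kind, int(size)))
    return rows
```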
I can, because I have used similar arguments myself. There are people who say that you should use a real artist instead of AI due to AI's water use. Yet in actuality, asking a human to draw something will require more water. There are people who think AI uses more resources than humans, which is why it must be said.
> There are people who say that you should use a real artist instead of AI due to AI's water use.
Nobody I know says this. In fact, I've never heard of this before, and I read artist and hobby communities that are pretty hostile to AI, but I have never once seen this nice strawman you've built.
People say you should use a real artist instead of AI for a multitude of reasons:
- Because they want to enjoy art created by humans.
- Because it provides a living to artists, even artists for minor work like advertising or lesser commercial illustrations.
- Because AI "art" is built by stealing from human artists, and while human art has a history of copying and cloning, never before has tech allowed this at such a massive, soulless scale.
Sam Altman gave a deranged, completely out-of-touch reply, and he should be taken to task for it, not defended. A human being is not some number on a spreadsheet, built over 20 years in order to achieve some "smartness" goal. That's a very stupid thing to say.
> A human being is not some number on a spreadsheet, built over 20 years in order to achieve some "smartness" goal
But from the perspective of the business and capitalism that's exactly what a human is. A tool that consumes resources and hopefully produces more value for the business than it consumes.
Sure we can dance around this and you can pretend your employer gives a shit about you and your family and your childhood stories but they don't.
You said that a CEO was out of bounds for framing employees as numbers on a spreadsheet. To me this suggests that you believe company owners should care about the humanity of their workers. And I'm saying they don't.
I get the general point you're making. Indeed, Altman's take is capitalism turned up to 11. There was a lot of that going on over the past few decades, before AI, but I don't think it was as extreme, or true of every company. There's definitely a conversation to be had about modern capitalism (and plenty of people studying it, too). However, not everything is a FAANG or a tech startup. Some owners do care about their employees beyond the numbers on a spreadsheet (not going into the whole "we're a family" bullshit speech; I mean the genuine stuff).
Imagine thinking of people as "resource-hogs before they reach peak smartness"!
What's new here, in my opinion, is people like Sam Altman behaving as if they didn't understand normal human behavior. You cannot simply compare an LLM to a growing human. You cannot say things like "grow a human over 20 years before they achieve smartness". What? That's not how human beings think about human beings, and Altman is detached from real human behavior here. He's saying out loud the thoughts he should keep to himself, a bit like a person with coprolalia. And it's ok for us to dislike him for this, even if he's just voicing the opinions of extreme techno-capitalism.
Sam Altman once joked (?) he wouldn't know how to raise his child without ChatGPT. Maybe he should ask ChatGPT how to behave more like a human? Or at least fake it?
> Sam Altman once joked (?) he wouldn't know how to raise his child without ChatGPT. Maybe he should ask ChatGPT how to behave more like a human? Or at least fake it?
Not to mention that was at a time when all kinds of wild suggestions, like glue on pizza, were coming out of ChatGPT's sloppy outputs. There are so many little things that quickly become big things with kids, and exhausted parents should absolutely not use LLMs for sussing those things out.
I could easily see well-meaning parents looking up healthy snacks to make for their kids and accidentally feeding their baby honey, for instance. Or asking how much water to give their infant and not realizing the answer is absolutely none, unless they are severely dehydrated from an illness or something.
There are a lot of hazards for kids under 1 in particular that make me incredibly nervous to ever suggest exhausted parents use LLMs to answer kid-related questions. Recommendations also change relatively frequently, so who knows if it's even pulling from the most recent best practices.
It's that second point. We live in an age of artificial scarcity created by a system of social organization that we've mostly not argued about since the 50s, and that's now showing its stretch marks.
If it weren't for the need to "earn" a living, I'd say to the other two points: ¿por qué no los dos? (why not both?). Setting aside the capital argument (which is valid, I'm not saying it isn't; you will starve if you don't make money), why is it necessarily true that the two (AI and people) are in competition?
In fact, I think "actual" artists would benefit incredibly from the use of AI, which they could do if it weren't a shibboleth (like I said, for good reason). You'd no longer need an army of underpaid animators from Vietnam to bring your OC to life; you could just use your own art and make it move and sing. We'd no longer need huge lumbering organizations full of people who, let's be honest, work there making other people's dreams come to life in large part because it's a better bet than taking a joe job at the local Denny's (after all, you're doing the thing you love, even if it isn't truly "yours").
I've had this discussion with younger folks, who are legitimately shook by the state of things. They're worried that all the work they've done to this point is going to be moot, because they've correctly assessed that the whole capital system isn't going anywhere any time soon, and they've been prepping to try and get a job at Netflix, or Disney, or Paramount, because that's the world we've handed them. They see those positions drying up, and what else are you going to do? Those companies have the power financially and politically, and without them you're doing "not art" for work, which sucks because you need to work.
I say: eat the rich. General wildcat strikes until UBI. Tax the everloving shit out of capital gains and peel back personal income taxes. We (the millennials) were handed a steaming pile of shit for a world, so at least we know what would constitute not-an-absolute-disaster for the Zeds, Alphas, etc. Have I gone totally off the rails for a conversation about AI? Actually, I don't believe so. The cultural pushback is a function of a busted system. After all, it's the economy, stupid.
I’ve also seen people assert that the Earth is flat in YouTube comment sections so I wouldn’t let that color your view of what is considered “popular opinion.”
Ok, let me reframe in a less assertive way: it's not common to say that you should hire human artists "because of water". It's so uncommon that I've never seen it held as a widespread belief in artist or hobbyist communities, and therefore using it to justify Altman's deranged remark seems weird to me.
Over a year ago, yeah, I occasionally heard that argument or some light variation of it, though not nearly to the ridiculous extent you're portraying now. Now? It's basically a strawman. Most people's objections revolve around the theft and reckless scraping that has literally taken down public infrastructure in order to train these models, as well as the ridiculous expectation that all of us implement it in literally every aspect of our lives even where it doesn't fit, especially professionally.
> a human to draw something will require more water.
That human would require the same amount of water whether you ask them to draw or not, and would exist anyway, because they are not born for productivity reasons. The "creation" of humans isn't driven by the amount of work to be done.
You are not causing more water to be used by asking a human to work on something.
Same for energy consumption.
This argument doesn't work at all.
What you do for humans to use fewer resources is to work on making us produce less garbage, and produce things using techniques that are less resource-intensive.
I'm not making any moral judgment at all, and I'm also not predicting the future.
I'm responding to "You are not causing more water to be used by asking a human to work on something", because that statement is false. (Mental) work has an effect on human metabolism.
I'm not judging, I'm telling you (a bit snarkily, true) that your brain activity won't stop just because something else is doing (a part of) your work. And this is the subtlety that undermines the statement. Said otherwise: sure, you consume marginally less at rest, but you won't be at rest, which makes the remark pointless.
Even if it were false, the difference in energy consumption is not significant, taking into account what the AI uses, and also all the energy you use to live (housing, heating, products and food you buy whose production uses energy, etc.).
And for water it's even worse, even disregarding the AI: at rest, maybe you'll drink, I don't know, 1 L less (and that's a wild overestimate!), compared to the hundreds of litres you use to cook, wash, clean, etc., not even counting the water used to produce the stuff you buy.
But again, you won't do nothing. We are commenting on a post by a guy who was fired and couldn't help creating something. That's how we are. We hate boredom.
Worse: the way our societies are set up means that AI, if it helps at all, likely won't free us from work; it will likely just make us collectively produce more garbage. That's more energy consumption, not less.
That "but don't forget humans consume a lot of energy too" argument is at best disconnected from reality, and more likely a Sam Altman lie; you shouldn't take it seriously.
This one is literally matching "innerHTML = X" and setting "setHTML(X)" instead. Not some complex data-format transformation.
But I can see what you mean, even if it would still be better for it to print the code that does what you want (using a few Wh) than to do the actual transformation itself (prone to mistakes and injection attacks, and using however many tokens your input data is).
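For context, the rewrite described above could be approximated with a simple substitution. This is a sketch, not the actual tool: a real codemod would work on the parsed AST, and a bare regex like this misses edge cases such as `+=` or multi-line right-hand sides.

```python
import re

# Minimal sketch of the described rewrite: turn `el.innerHTML = expr;`
# into `el.setHTML(expr);`. Illustrative only; a production codemod
# would parse the JavaScript instead of pattern-matching text.
INNERHTML_ASSIGN = re.compile(r"(\w+(?:\.\w+)*)\.innerHTML\s*=\s*(.+?);")

def rewrite_inner_html(source: str) -> str:
    """Replace innerHTML assignments with setHTML() calls."""
    return INNERHTML_ASSIGN.sub(r"\1.setHTML(\2);", source)
```

The point in the comment stands either way: printing a one-off script like this costs far less than having the model chew through the data itself.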
It's not other operating systems' fault that they failed to invest in security. They should try to catch up instead of blaming "regulatory capture" for people not trusting their security.
Buddy, you're on HN. No one is going to buy that bullshit here. Thanks for the laugh, but seriously, don't insult us like that again. We may be dumb, but not that dumb
Which is exactly why I have to advocate for it here. There are literally people on this website who think their operating system is secure, but in actuality they are one curl | bash or npm install away from having all of their login credentials stolen. No matter how smart they think they are at avoiding malware, that strategy does not scale.
Your argument is not sensible, as the usage of curl | bash doesn't scale. Your argument is that people should stay locked up so as not to be endangered by their freedom. There is no intelligence to be found here.
It is the easiest cross platform distribution method between macOS and Linux. It actually does scale in that regard which is why it is so popular.
People are not locked up. Apps and their secrets are. The idea that any app should be able to read the secrets of any other is not essential for user freedom.
> Your argument is not sensible as usage of curl | bash doesn't scale. Your argument is people should stay locked up to not be endangered through freedom.
And if a competitor's locks were unpickable, it wouldn't be regulatory capture to require unpickable locks for people who store valuables in a home. Just because people got away with bad locks for many years doesn't mean we have to accept that level of security.