What's nice is that sometimes you can just write down what you want to say very roughly, like a scenario or a few badly written sentences, and ask the LLM to reformulate it into properly written text.
But in that case, there's a good chance the stylistic issues described in the article will be present despite you having carefully crafted the content.
The best is when you use a speech-to-text app like Whispr Flow and just ramble to the AI about an idea or an experience: you get your thoughts out, and it returns the silhouette of an insight or an article.
So when people say they never get a good output, it's because they're trying to go from
thought > article
instead of
thought > exploration > direction > structure > outline > article
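A minimal sketch of that staged pipeline in Python, assuming a hypothetical ask() helper standing in for whatever LLM API you use (the stage prompts are illustrative, not prescriptive):

    # Hypothetical staged pipeline: each LLM call refines the previous output.
    # ask() is a stand-in; wire it to your actual LLM client.
    def ask(prompt: str) -> str:
        raise NotImplementedError("connect this to your LLM provider")

    def thought_to_article(raw_thought: str) -> str:
        exploration = ask("Explore this rough idea; list possible angles:\n" + raw_thought)
        direction = ask("Pick the strongest angle and explain why:\n" + exploration)
        structure = ask("Propose a structure for a piece arguing this:\n" + direction)
        outline = ask("Expand that structure into a detailed outline:\n" + structure)
        return ask("Write the article from this outline:\n" + outline)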
Sadly it shows that there really is a kind of "world order" conspiracy, and that conspiracy theorists are not as crazy as we always assumed.
Just have a look at the Epstein files: a lot of "Western" political and business leaders were connected to him one way or another. You can see that we don't have equal rights; some people get privileged information about big financial changes that helps them grow their wealth.
Look at how hard we had to fight Microsoft contracts creeping into public institutions and schools everywhere, when in the background Microsoft was spending billions on events, gifts, and so on to "legally" bribe officials.
Now we can all see a big dictatorial shift among the political leaders of Western countries that used to be lands of freedom and rights.
Politicians see that they are unpopular and that the population is clamoring for change, but they also see that plenty of dictatorial countries manage to control their populations and stay in power through liberty-restricting regulations (China, North Korea, Russia, Arab countries, ...). So they are going to do the same.
And the best way for them to get away with such a reduction in freedom is to band together and do it together:
- One "Western" country leader who instigated censorship and curtailed freedom and privacy rights on their own would too clearly be seen as abusive and on a dictatorial trend.
- But if several of them push for it at the same time, it can more easily be sold as "for the greater good", because they are the "free countries"...
So sad that the Second World War is so far in the past: almost everyone has forgotten what happened in the years before the Nazis took control of Germany and started the obvious horrors. But we are on the same path.
Putin's Russia is also a good example of how a country that was on the verge of freedom for its population slowly but surely shifted into the current dictatorial state, all without a clear "revolution" or sudden break. And officially it is still a "democratic" country...
Well said. Freedom only ratchets one way: we keep losing it until something monumental happens to reset things, something on the scale of a world war or a revolution. The bureaucrats cooking up these laws never loosen the cuffs.
I'm so pissed off by websites like Messenger or Google Meet that try to force you to install their app on your phone when you just want to send a message or make a call in the web app.
Strangely, it all works very well in the browser, but there they can't spy on you as easily, so they don't like it.
I won't say whether Grok has a real problem or not, but the CCDH, which did the study, looks like a "scam". I don't know who funds them, but they clearly have an agenda and would "manufacture" data however they can to support it.
The title of the study and the article says that Grok "Generated" these images, but in fact:
> The CCDH then extrapolated
Basically they invent numbers.
They took a sample of 20k generated images, and it is assumed (though I don't know if the source is reliable) that Grok generated 4.6 million images over the same period. So the sample is about 0.4%.
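Running the numbers given above (where the 4.6 million total is the unverified assumption):

    # Arithmetic implied by the figures above; the 4.6M daily total is the
    # unverified assumption, the 101 figure comes from the 20k sample.
    sample_size = 20_000
    assumed_total = 4_600_000
    likely_children_found = 101

    print(sample_size / assumed_total)  # ~0.0043, i.e. the ~0.4% sample
    print(round(likely_children_found / sample_size * assumed_total))  # ~23230 once extrapolated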
If you look at the CCDH's webpage, the study is a joke.
First:
- Images were defined as sexualized if they contain [...] a person in underwear, swimwear or similarly revealing clothing.
- Sexualized Images (Adults & Children): 12,995 found
- Sexualized Images (Likely Children): 101 found
First they invent their own definition, then they conveniently mix likely "adult" pictures in with the rest to produce scary numbers.
Not the person you're asking, but I would require a better analyzer. It must be able to recognize children in sexual poses, children with exposed genitalia, children performing oral copulation, or children being penetrated. If an AI can be told to create a thing, it should be able to identify that same thing. If Grok cannot identify what it was told to create, that is potentially a bigger issue, as someone may have nerfed that ability on purpose.
There are psychological books on identifying signs of prepubescence from facial and genital features that one can search for if one is in that line of work. Some of the former Facebook mods with PTSD know what I am referring to.
Leave everything else to manual flagging, assuming Grok has a flag or report button that is easy to find. If not, send links to these people [1] if you're in the USA.
1) Zero is basically never the best error rate; effort isn't infinite, and spending too much of it on one defect means spending less on other issues.
2) Look at what he's saying. This is a classic pattern for providing a fake proof of evil.
a) Point to evil. For example, CSAM
b) Expand the definition of that evil in ways that are often not even evil. Here, include "scantily clad" in the definition of "sexualized". Note that swimsuits qualify.
c) Point to examples of evil in your expanded pool.
d) Claim this is evidence of the evil under the original definition. Note that nothing about their claims precludes their "CSAM" being nothing more than ordinary beach or pool scenes. Their claim includes the null, and when the null is a possible answer it should be assumed.
I've asked how much lower the error rate should be in order to be acceptable, and you've then replied with a rebuttal to the message of the posted article.
I agree that a zero error rate is generally not possible, although I think a company like Xitter can manage better than 101 in 20k.
Has this been studied? I'm not following the topic, but without any evidence one could also say that availability of fake imagery might decrease demand for real imagery and therefore decrease the amount of abuse. But I'm not implying anything, just asking.
Honestly this looks highly suspicious to me. OK, they might need big storage, petabytes of it. But how can that be in proportion to the capacity already needed for everything that is hard-drive hungry: every cloud service, every storage service, all the private photo/video/media storage for everything produced every day, all the consumer hardware like computers...
GPUs I understand, but hard drives look excessive. It's as if there were suddenly a shortage of computer cabling because AI datacenters need some.
If you're building for future training needs and not just present ones, it makes more sense. Scaling laws say that the more data you have, the smarter and more knowledgeable your AI model ends up. So that extra storage can be quite valuable.
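For what "scaling laws" means concretely, here is a sketch using the Chinchilla fit (Hoffmann et al., 2022); the constants are the published estimates, and the example figures are just an illustration:

    # Chinchilla scaling law: predicted pretraining loss falls as a power law
    # in both parameter count N and training tokens D, so more data (and the
    # storage to hold it) keeps paying off.
    def chinchilla_loss(n_params: float, n_tokens: float) -> float:
        E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28  # published fits
        return E + A / n_params**alpha + B / n_tokens**beta

    print(chinchilla_loss(70e9, 1.4e12))  # roughly Chinchilla's own training point
    print(chinchilla_loss(70e9, 2.8e12))  # double the tokens -> lower predicted loss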
That's the electronics industry in general, though. The shortages are real and a normal part of the growing pains of any industry that's this capital-intensive and capacity-constrained.
An additional factor missing from the post, I think, is AI.
Before, projects were more often carefully crafted by humans.
But nowadays we expect such projects to be "vibe coded" in a day, so we don't feel motivated to invest mental energy in something we expect to be crap underneath, probably a nice show-off with no future.
Even if the result is not the best in the world, I think what interests us is seeing the effort.