I've found some of my interactions with Gemini 2.5 Pro to be extremely surreal.
I asked it to help me turn a six-page wall of acronyms into a CV tailored to a specific job I'd seen. Gemini's response was that I was overqualified, the job was underpaid, and that really, I was letting myself down. It was surprisingly brutal about it.
I found a different job that I really wanted but felt underqualified for. I only threw it at Gemini in a moment of 3am spite, thinking it'd give me another reality check, this time in the opposite direction. Instead it hyped me up, helped me write my CV to highlight how their wants overlapped with my experience, and I'm now employed in what's turning out to be the most interesting job of my career, with exciting tech and lovely people.
I found the whole experience extremely odd, and I never expected it to actually argue with me or give me a reality check. Very glad it did, though.
Anecdotal, but I really like using Gemini for architecture design. It often gives very opinionated feedback, and unlike ChatGPT or Claude it doesn't always just agree with you.
Part of this is that I tend to prompt it to react negatively ("why won't this work?" / "why is this suboptimal?") and then argue with it until I can convince myself that the approach is correct.
Often Gemini comes up with completely different architecture designs that are much better overall.
Agreed, I get better design and architecture solutions from it. Part of my system prompt tells it to be an "aggressive critic" of everything, which is great -- sometimes its "critic's corner" piece of the response is more helpful/valuable than the 'normal' part of the response!
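A minimal sketch of what such a system-prompt line might look like (illustrative wording only, not the exact prompt):

> You are an aggressive critic of every design I propose. End each response with a "critic's corner" section listing the strongest arguments against my approach: hidden costs, failure modes, and simpler alternatives.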
I think this has the potential to nudge people in different directions, especially people who are desperately looking for external input. An AI with knowledge of a lot of topics and their nuances can effectively weight the pros and cons to push unsuspecting people in different directions.
Open source will keep good AI out there... but I'm not looking forward to political arguments about which AI is actually lying propaganda and which is telling the truth.
Well, when you consider what it actually is (statistics and weights), it makes total sense that it can inform a decision. The decision is yours, though; a machine cannot be held responsible.
LLMs are stochastic (as opposed to deterministic) systems, which can make them better at tasks that are by nature difficult to express formally but still require a degree of certainty ("how can I make this CV better?").
I believe it's slightly more nuanced than a dice roll.
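The stochastic-not-dice-roll point can be made concrete with temperature sampling, one common way a decoder turns deterministic scores into a weighted random choice. This is a minimal illustrative sketch, not any particular model's implementation; all names are made up:

```python
import math
import random

def sample(logits, temperature=1.0, rng=None):
    """Pick an index from raw scores ("logits") via softmax at a given temperature.

    Low temperature concentrates probability on the top score (near-deterministic);
    high temperature flattens the distribution (more random) -- but it is never a
    uniform dice roll: likelier options stay likelier.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the computed probabilities.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]
print(sample(logits, temperature=0.01))  # almost always 0 (near-argmax)
print(sample(logits, temperature=5.0))   # any of 0, 1, 2, weighted by score
```

The same scores can yield different outputs run to run, yet the distribution is shaped entirely by the weights, which is why the behaviour sits between "deterministic" and "dice roll".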
> can you help me with my CV please. It's awful and I think the whole thing needs reappraising. It's also too long and ideally needs tailoring to a specific job i've found.
Then it asked me for the job role. I gave it an Indeed URL, and it came back with the details of an entirely different job (a barista role rather than a technical one, but weirdly in the right city). After correcting this by pasting in the job description and my CV, we chatted about it and it produced a significantly better CV than I'd managed, with or without friends' help, in the previous two years.
Honestly, the whole thing is both amazing and entirely depressing. I can _talk_ walls of semi-formed thoughts at it ("here's 7 overlapping/contradictory/half-had thoughts, and here's my question in the context of the above") and 9 times out of 10 it understands what I'm actually trying to ask better than, sadly, nearly any human I've interacted with in the last 40 years. The 1 time in 10 it fails has nearly always been because the demo gods got involved.
It was correct, since he managed to get a better job: one he thought he couldn't get, but Gemini told him he could. Basically, he underestimated the value of his experience.
The trouble with hiring is that you generally have to assume the worker is growing in their abilities. If there is an upward trajectory in their past experience, putting them in the same role is likely to be an underutilization, so you take a chance on offering them the next step.
But at the same time, people tend to peter out eventually, some sooner than others, unable to grow any further. The next step may turn out to be a step too far. Getting the job is not indicative of where one's ability lies.
> Basically he underestimated the value of his experiences.
How can anyone here confirm that's true, though?
This reads to me like just another AI story where the user is already lost in sycophancy-induced psychosis and actually believes they're getting relevant feedback out of it.
For all I know, the AI was just overly confirming as usual.
In this case, yes, absolutely. It would have basically meant going back to doing what I was doing 20 years ago, and I've grown a lot since then. Through a mix of impostor syndrome, desperation, depression and the medical reasons that stopped me making a complete career change after redundancy, I'd settled for something I would have quickly hated.
Most humans involved were just glad I was doing something though...