
macOS 12 is EOL and is no longer receiving security updates.

There’s a strong chance it’s vulnerable, too.


Interestingly, the model hallucinated the ability to use a search tool when I was playing around with it.


I thought I was crazy/imagining things, but I had a similar experience. Maybe I should try Suntheanine again.


The M3 Ultra is the only configuration that supports 512GB and it has memory bandwidth of 819GB/s.
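
For a back-of-the-envelope sense of what that bandwidth means for local LLM inference (my own illustrative numbers, not a benchmark): token generation is usually memory-bandwidth-bound, so throughput is roughly bandwidth divided by the bytes of weights streamed per token.

    # Rough upper bound for decode speed on a memory-bandwidth-bound model.
    # The model size below is a hypothetical example, not a measurement.
    bandwidth_gb_s = 819   # M3 Ultra memory bandwidth
    weights_gb = 400       # e.g. a quantized model filling most of the 512GB
    tokens_per_s = bandwidth_gb_s / weights_gb  # each token streams all weights once
    print(f"~{tokens_per_s:.1f} tokens/s")      # ~2.0 tokens/s

Real throughput will be lower once compute and KV-cache reads are accounted for.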


I wonder if they could fit a simple flip-out stand for some tilt without too many compromises.


OpenAI is currently in talks to raise at a $340B valuation.

https://techcrunch.com/2025/01/30/openai-said-to-be-in-talks...


> Overall, we saw a slight lean towards augmentation, with 57% of tasks being augmented and 43% of tasks being automated.

I'd like to see a comparison with the data from 6 months ago, before Sonnet 3.5. I suspect the automation rate will trend upward over time, but much of that shift may be captured by API usage, which isn't in the dataset.


The title here doesn't seem to match. The paper is called "TopoNets: High Performing Vision and Language Models with Brain-Like Topography".

Even with their new method, models with topography seem to perform worse than models without.


Submitted title was "Inducing brain-like structure in GPT's weights makes them parameter efficient". We've reverted it now in keeping with the site guidelines (https://news.ycombinator.com/newsguidelines.html).

Since the submitter appears to be one of the authors, maybe they can explain the connection between the two titles? (Or maybe they already have! I haven't read the entire thread)


Thanks for clarifying your reason for renaming the title.

The explanation for the original title is this plot from our ICLR 2025 paper: https://toponets.github.io/webpage_assets/FigureEfficiencyNa...

You can find more details on the website: https://toponets.github.io (see section: "Toponets deliver sparse, parameter-efficient language models")

We found that inducing topographic structure in the weights of GPTs makes them compressible at inference time without losing performance.
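
As a toy illustration of what "compressible" can mean here (this is generic low-rank truncation, not our exact procedure; see the paper for the specifics):

    # Illustrative only: low-rank truncation is one standard post-hoc way to
    # compress a weight matrix. Smoother, topographically organized weights
    # tend to tolerate more aggressive truncation.
    import numpy as np

    def lowrank_compress(W, rank):
        U, S, Vt = np.linalg.svd(W, full_matrices=False)
        # Keep only the top `rank` singular components.
        return (U[:, :rank] * S[:rank]) @ Vt[:rank, :]

    W = np.random.randn(1024, 1024)
    W_hat = lowrank_compress(W, rank=64)  # stores ~2*1024*64 values vs 1024*1024
    print(np.linalg.norm(W - W_hat) / np.linalg.norm(W))  # relative error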

I encourage you to restore the submitted title if you find it justified after reviewing the evidence I've shown here. Thanks.


My understanding is that enterprise purchasing teams are often evaluated on their ability to secure discounts off the software's initial sticker price, so a firm sticker price might make them less incentivized to purchase your SaaS. I suspect many companies don't put pricing up front so that the sales email can say, "Normally we charge X per seat, but we'll give you a special volume offer of Y."


It's part of the enterprise dance, sure, but I wouldn't say they become disincentivized to purchase if you say no to discounts or negotiations, at least up to the p99 case.


The two categories of enterprise I've seen most often react differently. There are staid, predictable, well-understood businesses that highly value discounts, some to the point of absurdity, and there are more dynamic enterprises heading in new directions that highly value flexibility. Most fall into one of those camps, and some fall into both.


It is listed on their security advisories page, which you can navigate to from that link:

https://trust.okta.com/security-advisories/

