
Biases are just weights on an always-on input.

There isn't much difference between weights of a linear sum and coefficients of a spline.



> Biases are just weights on an always-on input.

Granted; however, this approach does not require that constant-one input either.
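The bias-as-weight identity is easy to check directly. A minimal sketch (the shapes and values are illustrative, not from the thread): appending a constant 1 to the input and the bias column to the weight matrix reproduces the explicitly biased output.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # weights
b = rng.normal(size=3)        # biases
x = rng.normal(size=4)        # input

y_bias = W @ x + b            # explicit bias term

# Fold the bias in as a weight on an always-on (constant-one) input.
W_aug = np.hstack([W, b[:, None]])
x_aug = np.append(x, 1.0)
y_fold = W_aug @ x_aug

assert np.allclose(y_bias, y_fold)
```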

> There isn't much difference between weights of a linear sum and coefficients of a spline.

Yes, the trained spline coefficients in this approach are equivalent to the trained weights of an MLP. Still, this approach does not require the MLP's globally uniform activation function.


At this point this is a distinction without a difference.

The only question is whether splines are more efficient than lines at describing general functions at the billion-to-trillion-parameter scale.
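A back-of-envelope count makes the efficiency question concrete (the layer sizes and per-edge grid size below are hypothetical): putting a spline on every edge multiplies the parameter count by the grid size, so each spline edge must carry roughly that many times the expressive power of a plain weight just to break even.

```python
# Hypothetical dense layer: n_in inputs, n_out outputs.
n_in, n_out = 4096, 4096
grid_size = 8                              # assumed spline coefficients per edge

linear_params = n_in * n_out + n_out       # weights plus biases
spline_params = n_in * n_out * grid_size   # one 1-D spline per edge

ratio = spline_params / linear_params      # roughly grid_size for large layers
```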



