TCP keepalive defaults to an idle time of about 2 hours before the first probe. Debugging can live with that.
And the debugger ruins everything: if the other side sends something larger than the local receive buffer, it will usually disconnect after a while, as it will sense no one on the other end.
All the things that a debugger can "ruin" should just be parametric - increase buffer sizes / keepalive times when debugging.
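To make "parametric" concrete, here's a rough sketch using the standard Linux socket options (SO_RCVBUF for the receive buffer, SO_KEEPALIVE plus TCP_KEEPIDLE/TCP_KEEPINTVL for keepalive timing); the debug_mode flag and the specific sizes and timeouts are made up for illustration:

```c
/* Sketch: bump the receive buffer and stretch keepalive timers when a
 * debug flag is set. Linux socket options; the values are illustrative. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int tune_for_debugging(int fd, int debug_mode)
{
    /* Larger receive buffer so the peer can keep sending for a while
     * even though the stopped process isn't reading. */
    int rcvbuf = debug_mode ? 4 * 1024 * 1024 : 128 * 1024;
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf) < 0)
        return -1;

    /* Kernel keepalive: Linux defaults to 7200 s (2 h) of idle time before
     * the first probe; stretch it further while a debugger may be attached. */
    int on    = 1;
    int idle  = debug_mode ? 6 * 3600 : 7200;  /* seconds before first probe */
    int intvl = 75;                            /* seconds between probes */
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl) < 0)
        return -1;
    return 0;
}
```

(On Linux, SO_RCVBUF is capped by net.core.rmem_max, so a real setup would raise that sysctl as well.)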
Also, there are more options than "kernel" and "each process for itself" - you could have a "network/TCP daemon". QNX successfully does that for disk drivers and file system drivers - and likely for networking too. So does Minix. It's just that historically, unix/linux/NT don't.
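As a minimal sketch of the daemon idea (not QNX's or Minix's actual interface - the /run/tcpd.sock path and the tcpd_request format are hypothetical): the application never touches TCP itself; it hands requests to a separate process over local IPC, and that daemon owns the real connections and can keep them alive even if the application is stopped in a debugger.

```c
/* Sketch: client side of a hypothetical user-space "TCP daemon".
 * The socket path and message format are invented for illustration. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

struct tcpd_request {            /* hypothetical request to the daemon */
    char           op[8];        /* "CONNECT", "SEND", "CLOSE", ... */
    char           host[64];
    unsigned short port;
};

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/run/tcpd.sock", sizeof addr.sun_path - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("no tcp daemon running");  /* expected without such a daemon */
        return 1;
    }

    /* Ask the daemon to open the TCP connection on our behalf. */
    struct tcpd_request req = { .op = "CONNECT", .host = "example.com", .port = 80 };
    if (write(fd, &req, sizeof req) < 0)
        perror("write");
    close(fd);
    return 0;
}
```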
> And the debugger ruins everything: if the other side sends something larger than the local receive buffer, it will usually disconnect after a while, as it will sense no one on the other end.
"Everything"? Well when data isn't being sent then the connection could live on as long as it's kept alive by the kernel, right? Whereas with a userland implementation even that possibility becomes difficult.
You seem really intent on making unfounded blanket claims to rebut my point... but I feel like there's some validity to the point I'm making? It'd be more helpful to see if you can instead find the parts of it that might have some truth to them.
> It's just that historically, unix/linux/NT don't.
Yeah, hence why this approach seems problematic...