kroc's comments | Hacker News

What I’m proposing is that we replace PHP with a web browser so that the code we run on the server can be the same code we run on the client, so that developers only have to learn one language and one set of technologies.

That will eventually lead to decentralised applications, but in the meantime it will simply lay the groundwork whilst people continue with the centralised model.

There will always be a need for centralised systems; the world is not simply going to change its entire toolset and paradigm overnight. We must first make centralised apps use pretty much the same technologies as decentralised ones, then coax developers over.
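A minimal sketch of what "same code on both sides" could look like, assuming a Node.js-style server. The function, element id and data shape here are made up purely for illustration, not part of the proposal itself:

    // render.js -- the same file is loaded by the Node.js server and,
    // via a <script> tag, by the browser.
    function renderStories(stories) {
      return stories.map(function (s) {
        return '<li><a href="' + s.url + '">' + s.title + '</a></li>';
      }).join('');
    }

    if (typeof module !== 'undefined' && module.exports) {
      module.exports = renderStories;   // server-side: render the first page load
    }
    // client-side: render subsequent updates with the exact same function, e.g.
    //   document.getElementById('stories').innerHTML = renderStories(data);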


With desktop applications you can use the same language/code on both the client and the server side. That did not lead to decentralization. "Well, that isn't cross-platform," you might say. However, Java mostly covers that, and still it did not happen. There are also numerous tools (Java with GWT, some Lisp one I cannot name) that let you write server-side code in one language and automatically generate equivalent JavaScript for the client.

There is no reason (and you provide no argument) why using the same code on the client and server results in decentralization. Furthermore, we have the fact that this already exists and still did not lead to decentralization.


Why? Why are we doing this? Why do people go and write this stuff when there is already the web? It doesn’t require an app store. It can use the same code as the actual website (imagine that).

Why are we running so fast in the wrong direction?


The web still sucks for phones. The formatting is better in apps and they run faster than the browser. I don't have to double-tap text to bring it to a human-readable size or deal with lag when scrolling. Until phone browsers and/or web sites catch up, apps such as this provide a lot of relief.


HN is unreadable even in a desktop browser; that's not the client's fault. I read it on ihackernews.com.


I half agree, half disagree with you. The web doesn't suck for phones; the websites suck for phones. If websites use a fluid layout or a mobile layout, nobody has to write an app for them. Posterous is great to read on mobile thanks to its theme.

I hope more companies start using the tricks that let websites resize gracefully on any size of screen, so you don't even need a mobile theme; the layout just adjusts to the screen.


Sorry, by "web" I meant websites. Mobile layouts are nice when they are implemented correctly. The worst case is when a mobile version routes your request to a default landing page rather than the requested article, or strips out functionality that exists in the regular version.

I expect that things will only improve as more people access the web via mobile devices. I think I use my phone more than a standard computer these days.


Yes, I think we're agreeing, but to clarify: I meant that it's not websites in general (as in, the technology) that suck, but particular websites (most of them) that haven't correctly implemented mobile-friendly versions.

We are agreeing, though, yes.


By far my favourite are websites which detect you are on a mobile device and then (automatically, no questions asked, no override available) "redirect" you to what is supposedly the same content on their mobile site.

And then the mobile site crashes for whatever reason? Malformed URL? Heck if I know. I just know I can't get to the website.

It also seems a lot of websites have a "standard" mobile version (which is basically some iPhone-app-like stock theme) which also always crashes when visited. Might be a WordPress plugin, but I'm honestly not sure. 90% of the time, though, it crashes and won't let you go to the original web page either.

Really. Until the web gets better at dealing with mobile devices, you will see applications like this being popular.


One reason is that if you load the web page, you have to download the UI on every single page load. With an app, the downloaded data should be quite a bit more stripped down... but to be honest I couldn't say how the app gets its data. It might be scraping downloaded pages. In that case, I'd tend to agree with you about an HN app, as it doesn't contain anything that requires a fluid interface.


"download the UI every single page load" - Client side XSLT resolves this. I'll admit that client side XSLT has other problems though. Compare these two urls:

https://grepular.com/

https://grepular.com/?response_type=xml

Then look at the source code of each.
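For anyone unfamiliar with the technique, the point is that the stylesheet (the "UI") is downloaded and cached once, and every page after that only ships the raw XML data. A rough sketch of triggering the transform from script rather than declaratively with an <?xml-stylesheet?> processing instruction; the URLs, element id and synchronous requests below are purely illustrative:

    function loadPage(xmlUrl, xslUrl) {
      var get = function (url) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, false);   // synchronous for brevity; a real page would use async
        xhr.send(null);
        return xhr.responseXML;
      };
      var processor = new XSLTProcessor();        // Gecko/WebKit API
      processor.importStylesheet(get(xslUrl));    // the "UI": fetched once, then cached
      var fragment = processor.transformToFragment(get(xmlUrl), document);
      document.getElementById('content').appendChild(fragment);
    }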


Cool, thanks.


No. Any well-written web page can cache itself for offline use, and use AJAX to load all data after the first page. This is how iUI works, and therefore how most iPhone-specific web apps work.
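A rough sketch of the pattern (not iUI's actual code; the endpoint and element id are invented): the first load delivers the HTML/CSS/JS shell, which the HTML5 application cache can keep offline, and everything after that is just data.

    // Fetch only the data; the UI already lives in the (cached) page.
    function refreshStories() {
      var xhr = new XMLHttpRequest();
      xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4 || xhr.status !== 200) return;
        var stories = JSON.parse(xhr.responseText);   // data only, no markup
        var list = document.getElementById('stories');
        list.innerHTML = '';
        for (var i = 0; i < stories.length; i++) {
          var li = document.createElement('li');
          li.appendChild(document.createTextNode(stories[i].title));
          list.appendChild(li);
        }
      };
      xhr.open('GET', '/stories.json', true);         // hypothetical endpoint
      xhr.send(null);
    }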


You're basically right except I don't think HN does this at present. Good luck convincing PG within the next year... :)


I couldn't disagree more. The way we've always used the web is to have some content and a web server, and let clients use an app to connect to our site and use the services. The difference is, we used a generic app (the web browser) and had to encode all the presentation in with the content.

In my opinion this is going in exactly the right direction. Let clients worry about how to display the data and let the web servers/services provide the content. Then your "web apps" can do anything: 3D, OpenGL, anything the device can support.

If you make all apps just be browser apps then you have to put all the code for all possible clients on the server side. Doing it this way means I can write a really great web service and hire people to make the best possible client for each platform. I don't need to worry about those clients at all, nor maintain them.
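A toy illustration of that separation, assuming a Node.js service (everything here is invented): the server only ever speaks JSON, and a web page, an iPhone app and an Android app are all just clients of the same endpoint.

    var http = require('http');

    // The service knows nothing about presentation; clients decide how to render.
    var stories = [
      { id: 1, title: 'Example story', url: 'http://example.com/' }
    ];

    http.createServer(function (req, res) {
      if (req.url === '/stories') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(stories));
      } else {
        res.writeHead(404, { 'Content-Type': 'text/plain' });
        res.end('Not found');
      }
    }).listen(8080);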


While I agree with you, I also use these kinds of apps. Typically they are faster and offer better integration with the native features of the phone than a browser experience does. Perhaps you should take up the challenge, make an HTML5 equivalent and see how well you can do?


I agree. I never use applications written only for one website unless they provide additional features (e.g. the Facebook application includes hooks into the whole system for uploading pictures and such). I find it unnecessarily tedious to think beyond needing a web browser for web browsing.


Site’s currently down (8:24 PM GMT, 10th) due to massive bandwidth usage. It is just a shared host, and it suffices for 99% of the site’s uptime. Google cache here: https://webcache.googleusercontent.com/search?q=cache%3Ahttp...


How can there be demand if nobody is making it easy to use?


It seems like the incentive is missing. Browser developers have to prioritize what users use and what they know how to use. The author made the point that very few people even know what RSS is, and I think it's the content deliverers' responsibility to make users aware of that feature; there's only so much a browser can do to help with that.


Well put; what I believe is that user agents should do their absolute best to make sense of whatever information is available to them, in the context of their own UI paradigm. For example, there is not enough screen space on phones for every website to fit an RSS icon; the browser should do whatever best befits its UI and minimise the effort the user has to spend to get the information they want. RSS can massively help with that.

Imagine, for example, that on the Chrome home page, where the sites you visit often appear, Chrome were also following the RSS feeds of those sites in the background and listing new items for them on the home page, all without you having to do anything.

There is infinite possibility here for browser vendors to make browsing quicker, easier and more intelligent and RSS is a key part of that. The browser vendors are not interested in exploring this avenue and as such everybody is stuck doing the same stupid routine every single day. This is dumb! Our computers should be smarter than this!
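A toy sketch of that idea (the feed URL handling and storage key are invented; a real browser would discover the feed from the page's <link rel="alternate"> tag): fetch the feed in the background and count items newer than the last visit.

    function countNewItems(feedUrl, callback) {
      var xhr = new XMLHttpRequest();
      xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4 || xhr.status !== 200) return;
        var doc = new DOMParser().parseFromString(xhr.responseText, 'text/xml');
        var items = doc.getElementsByTagName('item');     // RSS 2.0 <item> elements
        var lastVisit = new Date(localStorage.getItem('lastVisit:' + feedUrl) || 0);
        var fresh = 0;
        for (var i = 0; i < items.length; i++) {
          var pubDate = items[i].getElementsByTagName('pubDate')[0];
          if (pubDate && new Date(pubDate.textContent) > lastVisit) fresh++;
        }
        callback(fresh);                                   // e.g. show a badge count
      };
      xhr.open('GET', feedUrl, true);
      xhr.send(null);
    }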


> Imagine, for example, that on the Chrome home page, where the sites you visit often appear, Chrome were also following the RSS feeds of those sites in the background and listing new items for them on the home page, all without you having to do anything.

That's a hell of a good idea actually. Having a count of the items that have appeared on a website since your last visit would be really useful information.


RSS aggregators are used by an elite few. When websites start deciding to use Twitter and Facebook instead of RSS because they’re faster, give them better features, and regular users understand them better, then there will be complaints.


> because the browser is supposed to provide the button.

Why do you think I’m worried? I don’t want to have to clutter my site with a button to an XML feed that nobody understands. I want the browser auto-discovery to do the right thing and present the right interface to make RSS worthwhile.

Turning the key in a car shouldn’t present you with a diagram of how to connect the battery. Browsers shouldn’t sit there dumbfounded when presented with a piece of RSS.


Your analogy doesn't work for me. A car is built to run when you turn it on. That is basic functionality. The reverse analogy for a browser would be "making an HTTP request shouldn't present you with a diagram of how to do DNS resolution," and it doesn't.


If I copy the URL of your site/blog into my RSS reader's Add Subscription dialog, I don't know of any RSS readers that wouldn't scrape your site for the link tags I assume you put in and discover the feed for me.
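For illustration, roughly what that discovery step looks like; the function below is invented and only shows the scraping part (a real reader does the fetching server-side):

    // Given the HTML of the page you pasted in, find the advertised feed URL.
    function discoverFeed(html) {
      var doc = new DOMParser().parseFromString(html, 'text/html');
      var link = doc.querySelector(
        'link[rel="alternate"][type="application/rss+xml"], ' +
        'link[rel="alternate"][type="application/atom+xml"]');
      return link ? link.getAttribute('href') : null;   // may be relative to the site
    }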


> Why do you think I’m worried?

Because auto-discovery isn't working on his site either? (At least in Safari)


It is working. Safari changed the way it displays RSS feeds. You have to click and hold on the “Reader” button to display any RSS feeds.


An alternative view of Captain Crunch: http://www.osnews.com/story/20606/_Captain_Crunch_on_Apple (see the snippet from the Recollection article). Read the full Recollection article; it is an important read re: phreaking.


Bear in mind that an HTTP request is significantly slower than the bytes in the page. A request could take anything from 0.25s to 2 whole seconds if the bandwidth is saturated, not to mention the problems of blocking other parallel downloads and delaying the initial paint. HTTP requests that are not for images or the CSS are to be absolutely avoided!


Since this isn't used until the menu is pinned, and doesn't show on the page, the download cost should only be paid when the user actually requests that it be pinned. This shouldn't hold up the page being rendered, since it is not page content.

Edit: Additionally, how does what's in the pinned menu get updated? Does it need to make a full page request to get the new items? That would needlessly inflate pageview numbers (I'm sure the "fix" will be that Microsoft changes the User-Agent to say it's not the browser but rather the pinned menu making the request). As for implementation, does the browser create the menu from this, and it's static? Or does the URL of the page get passed to the menu bar where it is pinned, with the menu bar then responsible for making the request? Or does the browser actually run the menu bar? That seems like the wrong kind of integration.


I don’t think you understand what “CSS hack” means.


By hack I meant a workaround that overloads the original function of the <menu> tag to replicate the function of the <meta> tag. The <meta> tag is explicitly meant to represent metadata about the page. The <menu> tag is explicitly meant to represent a menu in the page.

