I'm in the group described halfway down: was on the Internet during PHP 5, lost interest in it, and moved on [to Go]. I haven't written anything in PHP newer than version 5. Even transitioning from 4 to 5 was quite a big deal, I definitely noticed improvements.
But it wasn't enough.
I couldn't fit the data set in memory with PHP. But I could do it with Go.
I couldn't do parallel computations in PHP in order to respond to an HTTP request quickly enough. But I could do it with Go.
I couldn't reliably and easily deploy to different systems with PHP. But I could do it with Go.
Eventually, I couldn't write a web server with PHP. But I could do it with Go.
A lot of my early websites were written in PHP and I was able to build them quickly and routinely. I didn't really have a problem with PHP as a paradigm, or even its security and consistency posture. I'm glad they've since made an actual language spec and fixed a lot of issues with it. And I don't judge or look down on PHP programmers. I just don't think it was the right tool for my jobs.
I agree, the main problem with PHP in my experience so far has been that it's very memory-hungry and slow (even PHP7/8), especially when coupled with frameworks/ORM magic.
I remember after spending some time with Go, I got used to being able to process tens of thousands of objects in memory in milliseconds. When I proposed doing the same in PHP during an architecture review, the PHP devs thought I was out of my mind, because that would take something like a gig of RAM (which would compete with other PHP processes on the server) and a considerable amount of time. You have to use a lot of hacks to make it all fit in memory and be fast.
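For a sense of scale, here's a minimal Go sketch of that kind of in-memory pass. The `Order` record is made up for illustration; the point is that iterating over 100,000 flat structs is a sub-millisecond operation on ordinary hardware:

```go
package main

import (
	"fmt"
	"time"
)

// Order is a hypothetical record type standing in for "an object".
type Order struct {
	ID     int
	Amount float64
}

// sumOrders does a simple per-object computation over the whole set.
func sumOrders(orders []Order) float64 {
	var total float64
	for _, o := range orders {
		total += o.Amount
	}
	return total
}

func main() {
	// 100,000 objects in one contiguous slice; Go lays the structs
	// out flat, so this is only a few megabytes of RAM.
	orders := make([]Order, 100_000)
	for i := range orders {
		orders[i] = Order{ID: i, Amount: float64(i) * 1.5}
	}

	start := time.Now()
	total := sumOrders(orders)
	fmt.Printf("summed %d objects in %v (total=%.0f)\n",
		len(orders), time.Since(start), total)
}
```

No ORM hydration, no per-request teardown: the slice just sits in the process and gets scanned.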
Our Symfony framework also takes around 300 ms to initialize on each request, while in Go it's below 10 ms. Since every PHP process dies after serving a request, you have to reinitialize the whole dependency container from scratch on each request, and in large enterprise applications that's a lot of dependencies.
FWIW: There are a bunch of ways to "do" php execution, and a lot of them are wrong. That's not exactly PHP's fault, just that there's been a lot of blind-leading-the-blind.
Assuming that you're not spinning up and tearing down a container for every request, you want to be sure you're running php with a php-fpm configuration (preferably talking over a unix socket) -- this is the fastcgi process manager, which maintains a pool of "hot" php interpreters for you that are immediately ready to execute code for an inbound request. This is usually good enough for most applications without going into the weeds on things like opcode caching, but that's all available as options too.
I'd be happy to help troubleshoot this with you if you're interested. I've also got a fully automated build script that works pretty well. You can find my contact info via the link in my profile. I promise it doesn't have to take anywhere near 300ms for php to reply to a request.
My experience is that Symfony, in its fat, batteries-included form, does take quite a long time to initialize on cold caches. The first request can take hundreds of milliseconds, but usually the very next request is in the normal 10ms range. This is especially noticeable on (cheap) shared hosting, which has always been a common place to run PHP.
I've never found it to be a problem on a VPS but if you're developing on something slow like a Raspberry Pi I can see this happening regularly. If you're used to deploying stuff in containers and constantly throw out the cache the initialisation problem can happen very easily as well.
IIRC back in Symfony 1, part of it was because Symfony used the first request to write optimized versions of the code into a folder whose name I can't remember.
I think there was a way to do this up front in Symfony 1, and there might be some way to do it as part of a build-and-deploy step. Of course, on applications with less traffic and less strict requirements, just hooking something up to fire a request immediately after deploy might work just as well. But if the app runs across n small pods, it might be a much better idea to do it on a beefy build machine instead of on n resource-constrained pods, causing slow loading for n customers.
That 300ms initialization time sounds like a cache-free execution (aka dev mode). I have Symfony projects light on ORM usage (I dislike Doctrine from the bottom of my heart, but what can you do, it's the blessed Symfony ORM), and after a cache warmup they handle requests in under 100ms.
Yes, the frameworks have obscene (Java-like) class dependency graphs that build up during initialization (I'd prefer a lighter, function-based framework nowadays). However, for someone who knows how to manage PHP on the server, there are OPcache tweaks and preloading facilities which help improve request initialization performance (these steps are also expanded on in the Symfony docs).
The share-nothing architecture of each request is either a benefit or a downside, depending on who you ask.
It does run with caches enabled. My memory is murky, but I remember Go processing requests an order of magnitude (10x) faster. Maybe it's the dependencies reconnecting on each request, or the large number of classes in the monolith? I remember that marking dependencies as "lazy load" helps.
>there are OPCache tweaks, preloading facilities which help improve the request initialization performance
I remember we disabled some of the settings for preload because we hit a bug which manifested as a segfault due to PHP's shared memory space for preload getting corrupted under a high load.
In any case, everything is fast and usually doesn't require a lot of tuning in Go out of the box.
However, I do love PHP's shared-nothing architecture for its memory isolation: we have thousands of B2B tenants, and with PHP, I'm confident we won't accidentally spill one company's data into another company's account. Think of when ChatGPT exposed users' conversations to random people because, due to a race condition, a bug in the Redis connection pool inside Python's shared memory space returned a connection belonging to a different user.
Well, that's the problem with all PHP frameworks: they're written in PHP! Yes, PHP has gotten faster thanks to OPCache and other tricks, but still, the less time your application spends executing PHP code, the faster it will be. And frameworks like Laravel just pile on additional PHP code to execute like there's no tomorrow. I mean, just look at the callstack when an exception happens...
I was running tens of thousands of jobs an hour with an asynchronous AWS SQS job dispatcher written in php that would launch php sub processes on the CLI. Super fast. Was able to get by with a very modest ec2 instance ($20/month) handling jobs for 300k users. Was auto scaling too.
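The same dispatcher shape is easy to sketch in Go: a fixed pool of workers pulling job IDs off a channel, each shelling out to a subprocess per job. This is a hypothetical reconstruction, not the original dispatcher, and it uses `echo` as a placeholder for the real `php worker.php <jobID>` command so it runs anywhere:

```go
package main

import (
	"fmt"
	"os/exec"
	"sync"
)

// dispatch fans jobIDs out to `workers` goroutines, each launching
// one subprocess per job, and returns how many jobs completed.
func dispatch(jobIDs []string, workers int) int {
	jobs := make(chan string)
	var wg sync.WaitGroup
	var mu sync.Mutex
	completed := 0

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range jobs {
				// Swap "echo" for the real worker binary/script,
				// e.g. exec.Command("php", "worker.php", id).
				if err := exec.Command("echo", "job", id).Run(); err == nil {
					mu.Lock()
					completed++
					mu.Unlock()
				}
			}
		}()
	}

	for _, id := range jobIDs {
		jobs <- id // blocks until a worker is free
	}
	close(jobs)
	wg.Wait()
	return completed
}

func main() {
	ids := []string{"1", "2", "3", "4", "5", "6", "7", "8"}
	fmt.Println("completed:", dispatch(ids, 4))
}
```

In the real thing, the slice of job IDs would come from polling SQS rather than a hardcoded list.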
PHP 8.x has a ton of performance improvements that make what you're saying sort of not relevant anymore. I prefer PHP over Python these days on the Linux CLI. The code is cleaner.
We do a lot of work with large datasets. PHP 7&8 are so much better than 5 in terms of memory usage for large datasets.
It's not magic though, and I'm not surprised a compiled executable is many times faster, especially for math-heavy stuff. Slow is often "good enough" though, and deploying is quite straightforward.
I’ve written PHP apps that process millions of entities without issue at my old job. We didn’t use an ORM or anything magic. At my current job, using an ORM, my machine has twice as much memory, yet I can only load a few thousand entities before OOMing.
If you’re willing to give up magic, it’s worth it.
What you are describing only happens in a dev environment.
The Symfony container does not rebuild in production, and requests should easily be served within 5-10ms as well, so you might want to check your deployment pipeline and that you correctly ran `composer dump-env prod`.
The way developers are encouraged to structure PHP projects (mainly due to autoloader semantics) always felt more like Java or .NET to me than it did like python. The resource consumption comparison between a Symfony web app vs the same thing written in ASP.NET or Spring Boot has a pretty clear winner, and it's not PHP.
You're doing something terribly wrong. Sounds like caching isn't set up and your http -> php handler is cold-booting the interpreter for every single request.
Check these Go vs. PHP benchmarks. PHP is quite fast and stands up nicely to the performance you get out of Go.
All I see is that the php-fpm based frameworks (e.g. Laravel) are 100x-300x slower. And this is the best-case scenario. When projects grow, php-fpm gets slower and slower, which is not the case for Go. I'm not a Go fanboy; my entire career has been in PHP. I'm just saying PHP is one of the most terrible languages for web servers, and it's all php-fpm's fault. On top of that, the PHP community seems to promote OOP and SOLID, which are the last design principles you want to combine with php-fpm. There's a reason why Facebook created their own PHP transpiler.
Aren't those synthetic benchmarks for toy projects? The amount of code that runs on every request grows with the size of the project in PHP/Symfony (which can take quite some time in a huge monolith), while in Go, everything is usually initialized once at startup.
In terms of parallelization and reinterpreting every request, there is Swoole (integrated into Laravel via Octane) that fixes both of those issues. Most Laravel projects can handle 2x as many requests with a simple modification.
PHP was the right tool at the time, especially when everything on the web was somewhat WordPress-first. PHP felt like a blast with WAMP/LAMP.
Since then I have moved on, and honestly it never occurred to me that PHP could still be an option in my tech stacks; nor has any of my devs recommended it.
No one hates or dislikes PHP; there are simply other options.
Looking back, recommending PHP today feels like "You can do this with jQuery, too" in the Frontend domain. Yes, you can, but maybe you shouldn't or only if you have the right people. And PHP is a rare skill now.
I'd say PHP is great to get started, Go is for when you need more control, more mechanical sympathy.
I did write a REST/JSON API in PHP 5.2 (two years ago; I'm aware there are newer versions out there, but they weren't easily available in the RHEL 6/7 our customers used at the time, as it was a slow-moving industry). It's doable, and using best practices learned from other languages makes it look maintainable at least.
Did run into some issues with large datasets though, but that was an implementation problem: the original author would read a CSV, convert it to XML using string concatenation, then parse the XML to convert it into JSON (because at some point a decade ago he found out that the X in XHR was no longer, and never really was, the norm), all in memory. That broke when there were more than a few thousand rows in the CSV.
>I couldn't fit the data set in memory with PHP. But I could do it with Go.
I guess this one is self-explanatory.
>I couldn't do parallel computations in PHP in order to respond to an HTTP request quickly enough. But I could do it with Go.
Consider the following (covers both statements above): you need to get some data from a few sources (databases, etc.), do some computation on each set, and then do some sort of mapping to get the resulting set. You may want those computations to run in parallel, and ideally you'd like to start mapping as soon as each computation function starts producing results.
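That scenario maps almost directly onto goroutines and a channel. A minimal sketch (sources and row counts are made up; `fetch` simulates a DB or API query streaming results as they're ready):

```go
package main

import (
	"fmt"
	"sync"
)

// fetch simulates pulling n rows from one data source (a DB, an
// API, ...), streaming each result onto out as soon as it's ready.
func fetch(source string, n int, out chan<- string, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < n; i++ {
		out <- fmt.Sprintf("%s-row%d", source, i)
	}
}

// collect fans out one goroutine per source, then consumes the
// merged stream, mapping each item the moment it arrives.
func collect(sources []string, perSource int) int {
	out := make(chan string)
	var wg sync.WaitGroup

	// Fan out: query every source in parallel.
	for _, src := range sources {
		wg.Add(1)
		go fetch(src, perSource, out, &wg)
	}
	// Close the channel once all producers are done.
	go func() { wg.Wait(); close(out) }()

	// Fan in: mapping starts with the first result, without
	// waiting for any source to finish.
	mapped := 0
	for row := range out {
		_ = row // apply the per-item computation / mapping here
		mapped++
	}
	return mapped
}

func main() {
	fmt.Println("mapped", collect([]string{"db1", "db2", "api"}, 3), "rows")
}
```

The mapping loop and the fetches overlap in time, which is exactly the part that's awkward in a one-request-one-process PHP model.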
>I couldn't reliably and easily deploy to different systems with PHP. But I could do it with Go.
I haven't been using PHP since version 5, but I assume it's still much easier to just push your Go binary to a destination.
Though with Docker and company, deploying PHP code is not a big issue these days, I assume.
> >I couldn't fit the data set in memory with PHP. But I could do it with Go.
when would you ever have a website serve a request, and have to use gigabytes of memory to do so?
> Consider the following (covers both statements above): you need to get some data from a few sources (databases, etc.) do some computation on each set and then do some sort of mapping to get the resulting set. You may want those computations to run in parallel and ideally you'd like to start mapping as soon as each computation function starts producing results.
Again, why would you ever have an HTTP server do so much work in order to serve a request?
>when would you ever have a website serve a request, and have to use gigabytes of memory to do so?
I'm pretty sure that PHP is not only used for websites; otherwise, comparing it to Go is simply meaningless. There is little to no point in using Go to build something with a relatively low load.
>Again, why would you ever have an HTTP server do so much work in order to serve a request?
http server != website. You can have two services somewhere down the infrastructure that communicate via http(s).
> you need to get some data from a few sources (databases etc) do some computation on each set and then do some sort of mapping to get the resulting set. You may want those computations to run in parallel and ideally you'd like to start mapping as soon as each computation function starts producing results.
This definitely sounds like a typical situation for handling the job in a queue (plus with an async library like spatie/async to retrieve data in parallel), though I see the advantages and convenience of using a natively-async language here.
An example (maybe not the best one, but I'm not the right guy to come up with good examples immediately) from a year ago:
We had to integrate company M and company W (one is a marketplace, the other is a big store; think Walmart or something).
Company W mostly uses software similar to SAP-whatever and has a small team responsible for building 'helper' services where SAP can't do the job.
Company W can't communicate with M in any way except through the generic HTTP API they provide.
So every now and then, M has to send a request to W, and W has to prepare a pretty large XML response. (Obviously the data is split in some way, and we are not talking about tens of GBs, but even so, W has to fetch the data from more than one data source, process it, and send it.)
In some cases you can simply have a cache / a view / whatever for this (so you only have to fetch prepared data), but in some cases you cannot.
PS: this is if we are talking about HTTP communication, or maybe some real-time communication where you can't really respond with "hey, we are getting your data prepared, so just wait for a bit and re-request it with this nice jobID at a later time".
Well, for that use case, it would be typical in my industry for party A to send a callback URL to party B, so that B can POST the required information back to A after doing the multi-step processing. It's not really a done thing to make a synchronous HTTP request and wait say a minute or more for the response. Maybe that's just different expectations in different industries, though.
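A rough sketch of that callback pattern in Go, with both parties simulated in-process via `httptest` (the endpoints, payloads, and the "big result" string are all invented for illustration): party B accepts the job, answers 202 immediately, and POSTs the finished result to party A's callback URL when the slow work is done.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// runCallbackFlow wires up both parties and returns the status code
// B gave on job submission plus the payload A received on its callback.
func runCallbackFlow() (int, string) {
	received := make(chan string, 1)

	// Party A: exposes a callback endpoint for the finished result.
	a := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		received <- string(body)
	}))
	defer a.Close()

	// Party B: accepts the job, answers 202 Accepted right away,
	// and POSTs the result back once the multi-step work finishes.
	b := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		callback := r.URL.Query().Get("callback")
		w.WriteHeader(http.StatusAccepted)
		go func() {
			// ... multi-step processing would happen here ...
			http.Post(callback, "text/plain", strings.NewReader("big result"))
		}()
	}))
	defer b.Close()

	// A submits the job with its callback URL and doesn't block on
	// the processing, only on the (fast) acknowledgement.
	resp, err := http.Get(b.URL + "?callback=" + a.URL)
	if err != nil {
		panic(err)
	}
	return resp.StatusCode, <-received
}

func main() {
	code, payload := runCallbackFlow()
	fmt.Println("job accepted with status", code)
	fmt.Println("callback delivered:", payload)
}
```

The same shape works regardless of language; the point is that neither side holds a synchronous HTTP connection open for the duration of the processing.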