seriously, with an opportunity like that it ABSOLUTELY does not matter if your acronym matches. They must now suffer publicly, forever, for we know where their unpoetic hearts lie.
I've seen the exact opposite - many projects are going back towards monolithic structures and then decoupling a minority of components that need to horizontally scale.
I feel this is the ideal approach in many cases. Only separate out a component when the added cost of service communication (once it moves out of the monolith) is less than the benefits of independent scaling, a separate repo, etc.
I'm personally a fan of the mono-microlith (or whatever people call it), where you compartmentalize the monolith so that it's just a flag that controls which "micro-services" are enabled on each instance. Easy to scale down on a dev machine (one process) and easy to scale out: the local procedure calls just become remote procedure calls (over HTTPS or whatever).
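A minimal sketch of that enable-flags idea (all names here are illustrative, not from any real framework): components register under a service name, an instance runs the services it has enabled in-process, and calls to everything else become HTTP calls to the instance that does run them.

```python
import json
import urllib.request

REGISTRY = {}  # service name -> callable entry point


def service(name):
    """Register a component's entry point under a service name."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap


@service("billing")  # hypothetical example component
def charge(user_id, cents):
    return {"user": user_id, "charged": cents}


def call(name, enabled, remote_base="http://services.internal", **kwargs):
    """Local procedure call if `name` is enabled on this instance,
    otherwise the same call goes out as an RPC over HTTP."""
    if name in enabled:
        # Monolith path: a plain in-process function call.
        return REGISTRY[name](**kwargs)
    # Scaled-out path: identical call shape, different transport.
    req = urllib.request.Request(
        f"{remote_base}/{name}",
        data=json.dumps(kwargs).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Dev machine: everything enabled, one process, no network involved.
print(call("billing", enabled={"billing"}, user_id=42, cents=500))
```

The point of the sketch is that the caller never changes; only the `enabled` set (and some routing config) differs between a laptop and a production fleet.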
Yeah, I think there are fundamental challenges with microservices that are very hard to address unless you're someone like Google and can invent BigTable and Zanzibar. It used to be that you needed to move processing into app code and out of the database because databases were hard and expensive to scale, but that's not true anymore.
The nice part is that we're no longer at the beginning of this journey. BigTable and Zanzibar already have open source equivalents: HBase/Cassandra-likes (hard to pin BigTable to just one thing) and SpiceDB. As these projects mature, they only become easier for folks to adopt into their architectures and get all the same benefits.
Yeah but if you choose something like BigQuery you really don't need microservices, and thus you probably don't need a distributed auth solution either.
Sounds like a correction from going all-in on microservices back to a more normal, on-demand scalable architecture. Honestly, a lot of architectures are selected based on hype instead of whether they're the right solution. A big reason for that is hiring and artificially creating excitement for developers, because let's face it, most software development is boring.
Speaking of boring development, I took a programming aptitude test at the first company I worked for many years ago. If you got a good score you could stop doing boring tasks like data entry or tape backups and learn to write programs instead. The exam room was pretty crowded as that sounded much more fun.
The guy giving the test saw it as his duty to tamp down our excessive enthusiasm. After explaining how the test worked he then told us how a recent study showed that programming was the 10th most boring job in the US. He went on at some length. This was back in the days of COBOL running on mainframes so there was probably something in what he said. At the time I supposed he was trying to provide consolation if you got a bad score. Be that as it may, the "stat" stuck with me for the rest of my life.
I mean, I'm talking from a b2b, on-prem world of many differently capable clouds, so I'm most likely a dinosaur anyway. We have established code bases for domains.
We've been looking at lambdas or FaaS for quite a few things and... we pretty much always end up weighing loads of operational effort against just sticking a method into a throwaway controller on the existing systems. Is it somewhat ugly? Probably. Does it reuse the code people already maintain for that domain anyway? Very much so. Would you need a dependency or duplicate maintenance of that model code between the FaaS code and the monolith anyway? Most likely.
All in all, we mostly have teams using Pactbroker to work out their APIs, deploying decently chunky systems in containers, and being happy with it.
The one thing we really use on-demand compute for is some model refinement, because there a tiny service can trigger the consumption of rather awesome resources for a short while.
>> Almost nobody will also train their own models ...
If the value is in better training data then everyone will train their own models. We should have a goal where we have domain-specific models that make us more efficient, but also constantly train a context-aware model based on individuals. Model composition is the future.
Individuals and companies that cannot afford their own workflows will fine-tune models with heavy preexisting biases. The upper class of the market will train their own models, producing advantaged outcomes. Data is the moat.
Thus there is a market for creating open, auditable models of every media type, with sectoral and national variants.
Models are like talented grads that occasionally go off their meds: proprietary black box models are like hiring them from McKinsey, while fine-tuning your own ones that you control is like hiring them directly.
All governments, regulated industries, etc. will use open, auditable models, so there's a much bigger TAM.
Considering proprietary black box models are going to be present in some form, is the difference between having AI lead to huge societal benefit or big brother down to whether data is open and provided by governments or closed in big tech?
The content in the article doesn't even satisfy the headline/title. The quotes in the article specifically state what the accounts are for, but the author hand-waves about what "could" be if many hypotheticals are reality. Do better, Reuters.
> content in article doesn't even satisfy the headline/title
How does “the news agency reviewed a bank record showing that on Feb. 10, 2021, Binance mixed $20 million from a corporate account with $15 million from an account that received customer money” not satisfy the headline?
> According to the sources and the February 2021 bank record seen by Reuters, Binance mixed customer money and company revenues in a third Silvergate account, belonging to a Zhao-controlled Cayman firm. Binance converted money from this third account into the dollar-linked token BUSD, according to the person with knowledge of Binance’s group finances and company messages
> “These accounts were not used to accept user deposits; they were used to facilitate user purchases” of crypto, said spokesperson Brad Jaffe. “There was no commingling at any time because these are 100% corporate funds.” When users sent money to the account, he said, they were not depositing funds but buying the exchange’s bespoke dollar-linked crypto-token, BUSD. This process was “exactly the same thing as buying a product from Amazon,”
The article has a lot of details and infographics, but ends with no conclusion, just guesswork. I appreciate the work and time that went into an article like this; it would be nice if it were more factual and less blind-sourced, extrapolated hypothesis.
Adobe already had XD, a direct competitor to Figma. Adobe purchased Figma for many reasons, but if you want a single reason, it's because Figma was successful: all the things that go into executing a vision at scale.