The article literally cites a statement put out by a respected medical journal on Feb. 19, 2020, and signed by 27 scientists, "roundly rejecting the lab-leak hypothesis, effectively casting it as a xenophobic cousin to climate change denialism and anti-vaxxism."
Whatever your recollection of the attitudes of "rank-and-file scientists" may be, the narrative on record is to the contrary.
Rank-and-file scientists tend to stay out of that kind of political statement-making IME, just as a statement from the student union of XYZ University condemning whatever tells you very little about what rank-and-file students of that university are thinking. So I don't see a contradiction here.
iOS apps are more profitable on a per-install basis, so the discrepancy cannot be explained by there being multiple Android stores. Your hypothesis in brackets is the more likely explanation.
Source: I worked on an app with millions of paid subscribers.
The iPhone had first-mover advantage and comes from an American company.
As a result, iPhones have 50% of the smartphone market in the US, and Americans have the highest disposable income per capita of any large country (>20 million people).
iPhones tend to have much smaller market shares outside of the US. In some places they're maybe around 30%, but generally they're much, much lower.
The iPhone has a smaller market share in poor countries. In rich European countries the market share is much higher than in the US. In Scandinavia, seeing an Android phone is a rare occurrence; if your friend has one, you're surprised and ask how come.
It's a function of wealth, not first-mover advantage. Well-off people choose iPhones. That should tell you something.
> When writing SPIR-V, you can’t have two integer types of the same width. Or two texture types that match. All non-composite types have to be unique for some reason. We don’t have this restriction in Naga IR, it just doesn’t seem to make any sense. For example, what if I’m writing a shader, and I basically want to use the same “int32” type for both indices and lengths. I may want to name these types differently
This doesn't really make sense for IR. IR is not meant to be human-writable; it's meant to be generated by a compiler. So having a one-to-one mapping between concept and name in the IR is a feature, not a bug.
Honestly, WGSL just repeats the JavaScript mistake: we should have started with something like WASM instead of JavaScript, and JS could have been just one of the many languages that targeted WASM.
We were this close to not repeating this mistake by adopting an IR language (SPIR-V) for WebGPU, but then that got abandoned mostly for political reasons. Too bad. Now we get to write transpilers and hacks for decades to come, just like web people have been trying to paper over JS problems for decades.
> This doesn't really make sense for IR. IR is not meant to be human-writable; it's meant to be generated by a compiler. So having a one-to-one mapping between concept and name in the IR is a feature, not a bug.
Do you consider the name for a type to be a part of the concept? SPIR-V doesn't.
Overall, if you wanted a one-to-one mapping, at least that would be consistent. But SPIR-V requires this only for simple types, while you can still have duplicate composite types.
> Honestly, WGSL just repeats the JavaScript mistake: we should have started with something like WASM instead of JavaScript, and JS could have been just one of the many languages that targeted WASM.
We don't know if the Web would have been nearly as successful if it had started with WASM instead of JS. The ability to just open a text editor and make a web page served it well in the early days.
> We were this close to not repeating this mistake by adopting an IR language (SPIR-V) for WebGPU, but then that got abandoned mostly for political reasons.
Look at the situation today: the very same people who were rooting for SPIR-V are currently introducing features that diverge WGSL further and further away from SPIR-V. It's quite telling, I think. The moral of the story is: regardless of the reasons (which we can argue about endlessly), the WG members admit that what we ideally need is not SPIR-V, and the way WGSL is shaping up supports that.
> We don't know if the Web would have been nearly as successful if it had started with WASM instead of JS. The ability to just open a text editor and make a web page served it well in the early days.
And you would be able to do that just fine. All you'd need to do is include the JS compiler (helpfully hosted at http://cdn.google.com/ecmascript-2015.wasm) via a script tag in the <head> of the HTML file, open a text editor, and type away. Or, you know, include a more sane language like TypeScript, or Python. Heck, even Lisp if you're so inclined. Or, and this is like totally crazy, but say you need performance and want tight control over memory layout and allocations, then go for C or Rust! Oh, and no need to minify your JS. Just ship the WASM precompiled. Save on browser compilation time and network bytes at the same time. So long as you generate WASM that the browser understands, use whatever lang makes you happy. Type it right into a text editor and include the compiler in the head of the HTML. Easy peasy.
It's beyond question that JS has held back web development for decades. Of course, people didn't know any better back then, and JS was really added almost as an afterthought, so we can't really blame them for getting it wrong.
We do know better now though. And still we get WGSL. Looking forward to 2041, when we finally get WGASM. The hottest new thing in web GPU technologies.
I remember reading that too. I think Ben Horowitz might have talked about this in The Hard Thing About Hard Things, though I might be misremembering the source.
Agreed. Besides Geometric Algebra, dual numbers also play a huge role in automatic differentiation -- the core building block of modern machine learning frameworks.
I really like the analogy in this talk about how Al-Khwarizmi's six quadratic equations simplify to just one, once we learn about negative numbers and zero.
In a lot of ways, geometric algebra (and dual numbers) are our discovery of "negative numbers and zero", but for the 21st century.
In the video they present it as an algebraic framework where you can "add" to a number system an element x such that x^2 = -1, x^2 = 0, or x^2 = 1. Adding an element with x^2 = -1 to the real numbers gives you the complex numbers, with x = i. Adding an element with x^2 = 0 to the real numbers gives you the dual numbers, with x = epsilon, which is what can be used for automatic differentiation. The case of x^2 = 1 is more complicated.
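To make the dual-number/AD connection concrete, here's a minimal forward-mode sketch in Python (the `Dual` class and all names are just made up for illustration): because epsilon^2 = 0, the epsilon coefficient of f(x + epsilon) is exactly f'(x).

```python
# Toy forward-mode AD with dual numbers a + b*eps, where eps**2 == 0.
class Dual:
    def __init__(self, val, der):
        self.val, self.der = val, der  # value and derivative coefficient

    def __add__(self, other):
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        # (a + a'eps)(b + b'eps) = ab + (a b' + a' b)eps, since eps**2 == 0
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)

x = Dual(2.0, 1.0)                # seed: dx/dx = 1
f = x * x + Dual(3.0, 0.0) * x    # f(x) = x^2 + 3x
print(f.val, f.der)               # 10.0 7.0, i.e. f(2) = 10 and f'(2) = 7
```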
Newer ML frameworks do source-to-source transformations, which allow calculating the derivative without changing the function signature, but the underlying concepts remain the same.
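For what it's worth, JAX illustrates that "derivative without changing the function signature" point (strictly speaking it traces the function rather than rewriting source, but the user-facing effect is similar):

```python
import jax

def f(x):
    return x ** 2 + 3 * x   # ordinary Python code, no Dual types needed

df = jax.grad(f)            # df takes the same argument as f and returns df/dx
print(f(2.0), df(2.0))      # 10.0 and 7.0
```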
No, because $10M generates a steady return in a growing economy. So the steady-state consumption of someone with a net worth of $10M is much greater than that of someone with $0, all other things (e.g., income) being equal.
I really like how type checking is implemented for parameter lists. I think there's a more generalized extension of this.
Specifically, I think that there exists a Lisp with a set of axioms that splits program execution into a "compile-time" pass (facts known about the program that are invariant to its input) and a second "runtime" pass (facts that depend on dynamic input).
For example, multiplying a 2d array that's defined to be MxN by an array that's defined to be NxO should yield a type that's known to be MxO (even if the values of the array are not yet known). Or if the first parameter is known to be an upper-triangular matrix, then we can optimize the multiplication operation by culling the multiplication AST at "compile-time". This compile-time optimized AST could then be lowered to machine code and executed by inputting "runtime" known facts.
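A toy sketch of the shape-propagation part of that idea, in Python (all names invented, purely to illustrate the compile-time pass):

```python
# "Compile-time" pass: propagate shape facts without knowing any values yet.
class MatType:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols   # facts invariant to the input data

    def __repr__(self):
        return f"Mat[{self.rows}x{self.cols}]"

def matmul_type(a, b):
    # The inner dimensions must agree; the result is MxO.
    assert a.cols == b.rows, "shape mismatch caught before any values exist"
    return MatType(a.rows, b.cols)

print(matmul_type(MatType(2, 3), MatType(3, 5)))   # Mat[2x5], values still unknown
```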
I think that this is what's needed to create the most optimally efficient "compiled" language. Type systems in e.g. Haskell and Rust help with optimization when spitting out machine code, but they're often incomplete (i.e., we often know more at compile time than is captured in the type system).
I've put "compilation" in quotes, because compilation here just means program execution with run-time invariant values in order to build an AST that can then be executed with run-time dependent values. Is anyone aware of a language that takes this approach?
Idris is not a Lisp and I've never used it, but it has dependent types (types incorporating values) and encodes matrix dimensions into the type system (I think only matrix multiplications which can be proven to have matching dimensions can compile). I think the dimension parameters are erased, and generic at runtime (whereas C++ template int parameters are hard-coded at compile time). IDK if it uses dependent types for optimization.
I'm not sure, but this seems a bit like how Julia specializes functions based on the types of their arguments? Or maybe it's the inverse, as Julia creates the specialized functions for you (e.g., add can take any numbers, but will be specialized for both Int32 and Int64 and execute via the appropriate machine instructions).
In fact, I think Julia is a great example of taking some good parts of Scheme and building a more conventional (in terms of syntax, anyway) language on top.
Yes, Julia has some of it. But you're still required to specify the template parameters of a type (unless I'm mistaken). Whereas what I'm talking about is that any value of a data type could be known at compile time. For example, some or all of the dimensions of an nd-array, as well as some or all values of said nd-array.
Julia has explicit parameterization, but it will also interprocedurally propagate field values at compile time if they're known (which happens a lot more because our compile time is later), even if they weren't explicitly parameterized. Since this is so useful (e.g., as you say, for dimensions of nd-arrays, particularly in machine learning), there's been some talk of adding explicit mechanisms to control the implicit specialization as well.
I'm not sure, but I think it's different. Specifically, I think you would do macro evaluation first, then fully evaluate the resulting program on run-time independent values, and only then evaluate the resulting program on run-time dependent values.
Edit: Also, run-time-independent evaluation would need to handle branching differently. For example, take this expression: (if a b c). If `a` is not known at "compile time", then the expression remains in the AST, and run-time-independent value propagation continues into `b` and `c`. If `a` is known at "compile time", then only `b` or `c` remains in the AST, depending on whether `a` is true or false.
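Roughly this, as a toy Python sketch (tuples standing in for Lisp forms, everything made up purely for illustration):

```python
# Partial evaluation of (if a b c): UNKNOWN marks values only available at runtime.
UNKNOWN = object()

def peval(node):
    if isinstance(node, tuple) and node[0] == "if":
        a = peval(node[1])
        if a is UNKNOWN:
            # Condition unknown at "compile time": keep the node,
            # but keep propagating into both branches.
            return ("if", a, peval(node[2]), peval(node[3]))
        return peval(node[2]) if a else peval(node[3])
    return node   # constants and unknowns pass through unchanged

print(peval(("if", True, 1, 2)))       # 1, branch resolved at compile time
print(peval(("if", UNKNOWN, 1, 2)))    # ('if', UNKNOWN, 1, 2), left for runtime
```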
Scooters could be using NFC/RFID payments today (e.g., Apple Pay) instead of QR codes, so you could just walk up, tap your phone, and go. UWB offers no real usability advantage here. Scooter companies don't use these methods because they want you to install their app, for user-retention reasons.
So what's missing in your scooter use case is a loyalty mechanism, not payment technology.
Also, you want your scooters to work for the large portion of people who haven't upgraded to an NFC-enabled phone yet.
QR codes are highly backwards compatible.
That doesn't preclude an NFC improvement, but my experience with scan-QR-to-unlock has been very good: point the phone at the bike, the bike unlocks, not much to improve.
I suppose that would only work if whatever keys the pirate needed to publish to allow the data to be downloaded wouldn't _also_ give the keyholder the ability to delete that data. Otherwise the copyright holder could just issue the command to delete the file themselves. I'm not sure if Sia works that way or not; would be interesting to see.
Since they say their intent is to compete on price with S3 / CDNs, it seems it must be possible to download a file without having permission to delete it. If that were not the case, then Sia would be limited to personal backup only.
It's confusing, because they refer to themselves as a potential competitor to S3 several times in the linked article, but I thought I read somewhere that conceptually what they're building is actually just the data persistence layer of a service like S3?
A complete S3-like service would require a third-party tool on top of Sia. Goobox[1], for example, uses Sia as a storage backend and provides an S3-compatible API[2].
In other words, right now I think that if you are interacting with Sia directly, you can do whatever you want with the files you have access to. Not 100% sure about that.