Most of the apps for learning Japanese/Chinese seem to focus on the reading part, where they present a word and ask the user if they remember the meaning/pronunciation.
I find that I learn much faster (and remember a word for longer) when I focus on writing, instead of just... looking. And writing is something most apps just skip. Some apps do show an animation of the stroke order, but I think the user needs to proactively write it down somewhere to remember it better.
Not Asahi but I recently revived a 10-year-old MacBook Pro (MBP 2015) that had been sitting in my closet for many years by installing Fedora on it. To my surprise, it's fast and sleek, just like a brand new computer. All of the drivers worked!
The laptop now serves as a desktop when I'm at home and as an SSH server when I'm at work. And my 5-year-old M1 MacBook is now sitting in the closet, waiting for its turn in the next 10 years.
Except for all the issues running ARM, 16k page sizes (cuts out some flatpaks!), and the stuff that's not yet implemented, like display out over Thunderbolt.
Amazing! On a fun note, I believe if a human kid were cleaning up the spill and threw the sponge into the sink like that, the kid would be in trouble. XD
At first, it was unsure, but mentioned that there are a lot of riverside cafés in Southeast Asia with this view. Then I said it was in Vietnam, and it immediately concluded that the photo was taken at the Han River in Da Nang city, which was correct.
I can see that there is some actual analysis skill here. I'm not 100% convinced, but I'm still impressed.
I took a screenshot of your image and this prompt "play the game geoguesser and guess where this image was taken"
Putting those pieces together, the most likely spot is one of the cafés on the east bank just north of Dragon Bridge. A popular candidate with a very similar railing/table setup is Bridgespan Café (also called Bridge Cafe) at ≈ 16.0645 N, 108.2292 E.
Location guess: A second‑floor riverside café on Trần Hưng Đạo street, east bank of the Hàn River, Đà Nẵng, Vietnam (looking southwest toward Dragon Bridge).
Approx. coordinates: 16.064 °N, 108.229 °E
Confidence level: 70 %
The bridge‑light pattern and cruise‑boat LEDs strongly suggest Đà Nẵng, but several cafés share almost identical views, so the exact establishment is harder to pin down.
I built the same thing a few years back [0], using the YouTube API for searching. The building part was fun.
For hosting, though, I picked Heroku, and they kept removing my deployment because I had downloaded yt-dlp on it! I ended up deploying it on my own server to make it work.
I know it was kind of a norm in game dev. I was never brave (or talented) enough to compete in a game jam, let alone finish anything like that though :D
My advice would be to give it a try! There are literally no downsides to crashing and burning if you do decide to show up. And you'll be surprised how quickly you can improve with very little experience under your belt.
I'd recommend trying to attend an in-person event, and try to join a team, or attend with friends.
I think the biggest revelation will be just how much work goes into making a semi-polished game, and how many different skills are required.
Looking at the R1 paper, if the benchmarks are correct, even the 1.5B and 7B models are outperforming Claude 3.5 Sonnet, and you can run these models on an 8-16GB MacBook. That's insane...
I think because they are trained on Claude/O1 output, they tend to have comparable performance. The small models quickly fail on complex reasoning; the larger the model, the better the reasoning. I wonder, however, if you can hit a sweet spot with 100GB of RAM. That's enough for most professionals to run it on an M4 laptop, and it would be a death sentence for OpenAI and Anthropic.
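For a rough sense of what "fits in 100GB" means, here is a back-of-envelope sketch of a quantized model's memory footprint: weight count times bits per weight, plus an assumed ~20% overhead for KV cache and activations (the overhead figure is my assumption, not from the paper).

```python
# Back-of-envelope RAM footprint of a quantized model.
# overhead covers KV cache / activations; 20% is an assumed ballpark.
def model_ram_gb(params_b, bits_per_weight, overhead=0.2):
    weights_gb = params_b * bits_per_weight / 8  # billions of params -> GB
    return weights_gb * (1 + overhead)

# A 70B model at 4-bit needs roughly 42 GB; ~200B params at 4-bit
# lands right around that 100GB sweet spot.
print(round(model_ram_gb(70, 4), 1))
```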
Because the valley is burning money and GPUs training these, and somebody else comes out with another model for a tiny fraction of the cost, it's an easy assumption that it was trained on synthetic data.
It is a laptop. The memory is also shared which means if you are looking for a non-gaming workload, you can use it. If you have laptop equivalents in the same memory range, feel free to share.
I have laptop equivalents in the same memory range, and they're at least $2,500 cheaper.
Unfortunately, it does not have "unified memory", a somewhat "powerful GPU", and of course no local LLM hype behind it.
Instead, I've decided to purchase a laptop with 128GB of RAM for $2,500, and then spend another $2,160 on 10 years of a Claude subscription, so I can actually use my 128GB of RAM at the same time as using an LLM.
I see this comment all the time. But realistically, if you want more than 1 token/s you're going to need GeForces, and that would cost quite a lot as well for 100 GB.
GB10, or DIGITS, is $3,000 for 1 PFLOP (@4-bit) and 128GB unified memory. Storage configurable up to 4TB.
Can be paired to run 405B (4-bit), probably not very fast though (memory bandwidth is slower than a typical GPU's, and is the main bottleneck for LLM inference).
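To see why bandwidth is the bottleneck, here is a quick sketch: decoding one token streams every model weight from memory once, so throughput is capped at roughly bandwidth divided by model size. The bandwidth figure used below is an assumed ballpark, not an official spec.

```python
# Upper bound on decode speed: tokens/s <= bandwidth / model_size,
# since each generated token reads all weights from memory once.
def max_tokens_per_sec(params_b, bits_per_weight, bandwidth_gb_s):
    model_size_gb = params_b * bits_per_weight / 8  # 405B at 4-bit ~ 202.5 GB
    return bandwidth_gb_s / model_size_gb

# Assuming ~273 GB/s of memory bandwidth (an assumption for illustration):
print(round(max_tokens_per_sec(405, 4, 273), 2))  # ~1.35 tokens/s, best case
```

This is why a big unified-memory box can *hold* a 405B model yet still only decode it at a couple of tokens per second.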
I usually use LLMs to help brainstorm ideas or find issues/concerns in my code. Sometimes when I start a new project, it can help provide a starting point for the architecture [0] too (spoiler: the link below is a blog post about my product; I built it to solve my own need, but I thought it might be useful to share).