1 million token context window. Multimodal inputs. This must be costing Google a fortune to offer free inference with a window of this size. ~$200M USD in training costs alone, by recent estimates from Stanford (if memory serves).
I'd recommend throwing some thick documentation at it. Images must be uploaded separately. If you use the full window, expect lengthy inference times. I've been highly impressed so far; it greatly expands what I can do in my daily use cases. They say they've stretched it to 10M tokens in research.
It says you should be able to access it through Vertex AI.
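For anyone wanting to try the "throw thick documentation at it" approach, here's a minimal sketch using the Vertex AI Python SDK (google-cloud-aiplatform), assuming your project has access to the model; the project ID, model ID string, and file path are placeholders, so check the Vertex AI docs for the exact model name before running this.

```python
# Minimal sketch, assuming the Vertex AI Python SDK (pip install google-cloud-aiplatform)
# and that your GCP project has access to the Gemini 1.5 Pro preview.
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholder project/location; use your own.
vertexai.init(project="your-gcp-project", location="us-central1")

# Model ID is an assumption; confirm the current ID in the Vertex AI Model Garden.
model = GenerativeModel("gemini-1.5-pro-preview-0409")

# Read a large document into the prompt to exercise the long context window.
with open("docs/big_manual.txt", "r", encoding="utf-8") as f:
    manual = f.read()

response = model.generate_content(
    [
        "Summarize the configuration options described in this manual:",
        manual,
    ]
)
print(response.text)
```

With a prompt anywhere near the full window, expect the call to take a while to return, as noted above.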
This might be a consumer protection issue. EU regulations mainly protect consumer privacy and rights, so a product aimed at 'retail' consumers faces stricter rules than one intended for developers.
Perhaps someone knowledgeable could add context as to why they wouldn't include the EU. I know the EU has passed more regulation, and earlier... but in technical/legal terms:
Not enough time to ensure compliance?
Capability or privacy issues that exceed the regulatory framework?
I haven't been following the regulatory side too closely as of late.