
I don't think it was meant to be taken literally (we didn't write the article). We'd actually love to do more marketing, but we barely have time for it. We don't have a storefront website, just a basic site with outdated product info, but we dedicate all our efforts to the support section. We post on LinkedIn a couple of times a year to reassure everyone that we're still alive, but that's hardly a real marketing strategy. Currently our sales come from word of mouth and industry connections, not much from marketing. Hopefully we'll find the time to step it up in the future!


Yeah, reflecting on it, the article was obviously just being hyperbolic - I think I'm just on a hair trigger for anything bordering on falsehood because of the current state of my country (USA). Also, "storefront" was a poor word choice - I was originally going to say "professional," but decided against it for some reason.

Regardless, just keep making quality software that sells itself!


I admit to hyperbole.

The interesting part is that the main marketing and sales channel is word of mouth and the quality of the product. All the hardware isn't even on the website, which was quite confusing when I was writing the article. It makes sense given the resource constraints.


No, it deals with metadata: control and status, as explained in a previous reply: https://news.ycombinator.com/item?id=43479094#43482362

Elixir does some computations as well, but when we had to compute 3D LUTs based on video processing algorithms, Ghislain had to write them in C to be fast enough for our needs on embedded hardware.
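
For anyone curious what that split looks like in practice, here's a minimal sketch of the usual Elixir-to-C pattern (a NIF). The module, function, and file names are only illustrative, not our actual code:

    defmodule Lut3D do
      # Hypothetical wrapper around a C NIF that does the heavy 3D LUT math.
      # The shared library (priv/lut3d_nif.so) would be built from the C sources.
      @on_load :load_nif

      def load_nif do
        :erlang.load_nif(~c"priv/lut3d_nif", 0)
      end

      # Stub used when the native library isn't loaded; the C function with
      # the same name and arity replaces it at load time.
      def compute_lut(_corrections), do: :erlang.nif_error(:nif_not_loaded)
    end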


They used around 150 cameras for the last Super Bowl. Most of them were Sony studio cameras, controlled with Sony remotes to ensure perfect alignment. But now, they’ve added a lot of specialty cameras: probably 4 or 8 pylons, each equipped with 2 to 4 cameras, plus drones, handheld mirrorless cameras, mini high-speed cameras, and a few other mini-cams for PoV (Point of View) shots. Last year, they even had a mini-cam inside the cars driving from the Bellagio to the stadium, controlled remotely over cellular. An Elixir process ran on a RIO in the car to manage the camera and connect to a cloud server, while the remote panel was linked to the same server to complete the connection. All three ran Elixir code, with the cloud server acting as a simple data relay.

If you want the green of the grass on all the pylon cameras to match your main production cameras, adjustments are a must. And with outdoor stadiums, this is a constant task—lighting conditions change throughout the day, even when a cloud moves across the sky. When night falls, video engineers are working non-stop to keep everything perfectly aligned with the main cameras.


We're still a rather small team of mainly tech people; we don't have a single salesperson in the traditional sense (not yet). Ghislain was able to develop an architecture that we could count on to be reliable while still letting us quickly experiment in all directions and build on top of what was started. We were never really afraid of major failures, as the system proved to be robust after the first 2 years (everything was started from scratch, including hardware).

As we were able to respond very quickly to customer demands for anything special they needed, they ended up being our main sales channel by recommending the solution further. And nearly 10 years later, we're still pretty much on the same model: trying to keep up with developments, delivering products, and supporting our customers. The website is outdated and we've been trying for years to make progress there; eventually we'll succeed.


Congrats on an incredibly impressive and technically complex product.

Operating at such high-visibility events as the Olympics sounds pretty nerve-wracking. How much of an issue is security for you? Do you experience any attacks?


Security has been a hot topic for the past few years, but it's getting even more attention now. Fortunately, it’s mostly a concern for production facilities, and the most effective solution is often complete isolation—most production networks don’t have internet access at all.

With the rise of remote production (where the control room is located at headquarters while cameras and microphones are on-site at stadiums), broadcasters are implementing VPNs, private fiber connections, and other methods to stay largely separate from the public internet.

In our case, the only part that uses the public internet is the relay server, which is necessary when working over cellular networks. Security is one of the main reasons we haven’t expanded this service into a full cloud portal yet—it’s much easier to secure a lightweight data relay with no database, running on a single port, than to lock down a larger, more complex system.


I want to add that the relay server never handles any customer secrets (so it's a low-value target), and we have techniques in place to reduce the probability of DoS (by increasing the cost to the attacker).

So even if someone were able to break into the server through its small attack surface, they would not be able to change any setting on any of our customers' devices, or even read any status. Of course, if someone can break into our server, a DoS is inevitable, but so far this has never happened.
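
To make that concrete, the relay boils down to something like this: a single listening port, no storage, just pairing connections and copying opaque bytes between them. This is a simplified sketch only (real pairing and authentication are elided), not our actual server:

    defmodule Relay do
      # Toy version: accept connections on one port and pipe bytes between a
      # pair of peers. There is nothing to steal here: no database, no
      # credentials, just data passing through.
      def start(port \\ 4040) do
        {:ok, listener} =
          :gen_tcp.listen(port, [:binary, active: false, reuseaddr: true])
        accept_loop(listener)
      end

      defp accept_loop(listener) do
        {:ok, a} = :gen_tcp.accept(listener)
        {:ok, b} = :gen_tcp.accept(listener)
        # One lightweight process per direction; a crash only drops this pair.
        p1 = spawn(fn -> pipe(a, b) end)
        p2 = spawn(fn -> pipe(b, a) end)
        :gen_tcp.controlling_process(a, p1)
        :gen_tcp.controlling_process(b, p2)
        accept_loop(listener)
      end

      defp pipe(from, to) do
        case :gen_tcp.recv(from, 0) do
          {:ok, data} ->
            :gen_tcp.send(to, data)
            pipe(from, to)
          {:error, _} ->
            :gen_tcp.close(to)
        end
      end
    end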


Yes, this was one of our initial considerations when we first started, and the telecom analogy of the original Erlang development application was one of the main reasons we took this approach. Now, we only "stream" metadata, control data, and status. Even though we manage video pipelines and color correctors, the video stream itself is always handled separately.

For anyone interested in the video stream itself, here's a summary. On-site, everything is still SDI (HD-SDI, 3G-SDI, or 12G-SDI), which is a serial stream ranging from 1.5Gbps (HD) to 12Gbps (UHD) over coax or fiber, with no delay. Wireless transmission is typically managed via COFDM with ultra-low latency H.264/H.265 encoders/decoders, achieving less than 20ms glass-to-glass latency and converting from/to SDI at both ends, making it seamless.

SMPTE 2110 is gaining traction as a new standard for transmitting SDI data over IP, uncompressed, with timing comparable to SDI, except that video and audio are transmitted as separate independent streams. To work with HD, you need at least 10G network ports, and for UHD, 25G is required. Currently, only a few companies can handle this using off-the-shelf IT servers.
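
As a back-of-the-envelope check on those port sizes (rough numbers for 10-bit 4:2:2 video, ignoring blanking and protocol overhead):

    HD  (1920 x 1080 @ 60 fps): 1920 * 1080 * 2 samples * 10 bits * 60 ~= 2.5 Gbps
    UHD (3840 x 2160 @ 60 fps): 4x the HD figure                       ~= 10 Gbps

So a single uncompressed HD stream already rules out 1G ports, and a UHD stream barely fits a 10G link once overhead is added, which is why 25G becomes the practical minimum there.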

Anything streamed over the public internet is compressed below 10 Mbps and comes with multiple seconds of latency. Most cameras output SDI, though some now offer direct streaming. However, SDI is still widely used at the end of the chain for integration with video mixers, replay servers, and other production equipment.


I was tempted to go into the fact that the video streams wouldn't pass through BEAM, because that would be crazy, but I cut it out.

AIUI, technically, the old phone switches worked the same way. BEAM handled all the metadata and directed the hardware that handled the phone call data itself, rather than the call data passing directly through BEAM. In 2025 it would be perfectly reasonable to push the amount of data those switches dealt with in 2000 through BEAM, but even in 2025, and even with voice data, if you want to maximize performance you'd still want the actual voice data handled similarly to how you handle your video streams, for latency and reliability reasons. Through great effort and the work of tons of smart people, the latency sensitivity of speech data is somewhat less than it used to be, but one still doesn't want to "spend" the latency budget carelessly, and BEAM itself is only best-effort soft realtime.


There's a notable difference between shading and grading. Shading is for the TV industry, where you adjust all cameras to match perfectly in exposure, tone curve, and colors, so when switching between camera angles you don't notice any difference in skin tone or detail, and the green of the grass and the blue of the sky are all the same. Another very important point is getting the color of the sponsor logos right; sometimes that's where you start... There's less creativity here; you mainly have to follow standards like ITU-R BT.709, or HLG and ITU-R BT.2020 for HDR.

Grading is the creative process of adding a look to your production. It is usually handled in post-production, but there are now ways to do it live, using tools similar to the post-production software. And they still re-do it in post-production. This is used live for concerts and fashion shows.

There is a significant distinction between shading and grading.

Shading is essential in the TV industry, where the goal is to ensure all cameras are perfectly matched in exposure, tone curve, and colors. This ensures seamless transitions between camera angles, maintaining consistency in skin tones, fine details, and the color of grass and sky. A crucial aspect of shading is accurately reproducing sponsor logos' colors, which can sometimes be the starting point as that's where the money comes from. Creativity plays a lesser role here, as the focus is on following industry standards such as ITU-R BT.709 for SDR or ITU-R BT.2020 and HLG for HDR.

Grading, on the other hand, is a creative process meant to give a distinctive look to a production. Traditionally done in post-production, it can now also be applied in real time using tools similar to those found in post-production software. Despite this, it is often still refined further in post. Live grading is commonly used for events such as concerts and fashion shows, where you want to look different from TV productions.


TIL about shading, and I'm surprised how little I've seen this term in grading tutorials. While they're different, I feel like shading is something that should be learned before grading.

PS: You might have pasted two different answer drafts above. Paras 1,4 and 2,5 deliver similar information.


MQTT is used for messaging between processes on the embedded device itself, which can be the remote control panel or a camera node. The panel itself is driven by a microcontroller, which gets all the parameters to display, and requests changes, through MQTT. If the camera is controlled locally, like on a LAN, then another process picks up the action and handles the communication with the camera. If the camera is remote (over cellular, for example), we don't rely on the bridging functionality that some MQTT brokers provide but rather use Elixir sockets to send the data over. Typically, parameter changes are sent towards the camera and new status is populated back to everyone. In most cases it's been a single control room, sometimes two at different locations, and one camera site, so the need for a widely distributed architecture hasn't been felt yet.
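
Roughly, the remote path is just an Elixir process that takes what would normally stay on the local broker and pushes it over its own socket. A stripped-down sketch (the message shapes and names here are made up, not our real code):

    defmodule RemoteLink do
      # Sketch of the "camera is remote" path: parameter changes arriving from
      # the local MQTT side are forwarded over a plain TCP socket instead of
      # relying on broker bridging.
      use GenServer

      def start_link(opts),
        do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

      def init(opts) do
        host = opts |> Keyword.fetch!(:host) |> String.to_charlist()
        port = Keyword.get(opts, :port, 5000)
        {:ok, sock} = :gen_tcp.connect(host, port, [:binary, active: true, packet: 4])
        {:ok, %{sock: sock}}
      end

      # A parameter change published locally (panel -> camera direction).
      def handle_info({:mqtt_publish, topic, payload}, state) do
        :ok = :gen_tcp.send(state.sock, :erlang.term_to_binary({topic, payload}))
        {:noreply, state}
      end

      # Status coming back from the camera site; republishing it to the local
      # broker so every panel sees the new value is elided here.
      def handle_info({:tcp, _sock, data}, state) do
        {_topic, _status} = :erlang.binary_to_term(data)
        {:noreply, state}
      end
    end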

One of the next steps would be a real cloud portal where we could remotely access cameras, and manage and shade them from the portal itself. In this context we've been advised to look at NATS. Remote production, or REMI, is now getting more traction, and some of our clients handle 60+ games at the same time from a central location. That definitely creates new challenges: centralizing control is needed, but keeping processes and hardware distributed is key to keeping the whole system up if one part fails.


This gives an idea of the parameters we cover for the roughly 200 different models of broadcast cameras we support so far. These are only for tweaking image quality, which is the job of the video engineer (vision engineer in the UK). We usually don't cover all the other functions a camera has, which are more intended for the camera operator. The difficulty is bringing some consistency to so many different cameras and protocols.

https://pastebin.com/cgeG2r0k


Do you "normalize" the parameters to some intermediate config so that everything behind that just needs to work with that uniform intermediate config? What about settings that are unique to a given device?


That was the idea—we started by normalizing all the standard parameters found in most cameras. The challenge came when we had to incorporate brand-specific parameters, many of which are only used by a single manufacturer. Operators also weren’t keen on having values changed from what the camera itself provided, as some settings serve as familiar reference points. For example, they know the right detail enhancement values to use for football or studio work. So, we kept normalization for the key functions where it made sense, but for other parameters, we now try to stay as close as possible to the camera’s native values.
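
As a toy illustration of what "normalizing where it makes sense" means (the ranges and names below are invented, not real camera values):

    defmodule ParamMap do
      # Standard functions get a normalized scale; the native range per brand
      # is looked up and converted on the way to the camera.
      @black_range %{brand_a: {-99, 99}, brand_b: {0, 4095}}

      def to_native(:master_black, value, brand) when value in 0..100 do
        {min, max} = Map.fetch!(@black_range, brand)
        round(min + value / 100 * (max - min))
      end

      # Brand-specific parameters are passed through with the camera's own
      # native values, so operators keep their familiar reference points.
      def to_native(_param, value, _brand), do: value
    end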

As for the topics on MQTT, they function as a kind of universal API—at least internally. Some partners and customers are already using them to automate certain functions. However, we haven’t officially released anything yet, as we wouldn’t be able to guarantee stability or prevent changes at this stage.


I have noticed that you and your team's answers are detailed and insightful - much appreciated.


Major events use it for all kinds of specialty cameras, as they already have the technology for the main studio cameras. So we had to develop solutions for everything that was not working. And major productions have budgets for all kinds of new toys: mini-cams, drones, cable cams, now the cinematic look from small mirrorless cameras, slow motion, etc. That opened up a whole lot of possibilities to be creative, but you have to be as reliable as the main cameras and aim for the best image quality.

Now the same products are used for very small productions that don't have the budget for any studio camera (typically 50k+ for a camera without a lens). In that case we try to provide a similar user experience and functions, but with much more affordable cameras.

Finally, more and more live productions are now handled using cine-style cameras, which don't have the standard broadcast remote panels, and that's another area we cover by combining camera control with control of many external boxes, like external motors to drive manual lenses or 3D LUT video processors. Applications include fashion shows, concerts, theater, churches, studio shows, even corporate events.

In the end, Elixir is used for a lot of small processes which handle very low-level control protocols, and then, on top of that, a high level of communication between devices, either on local networks or over the cloud.
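
In case it helps picture it, the "lots of small processes" part is essentially one supervised process per device, so one flaky link never takes the rest down. A simplified sketch (CameraLink and the other names are placeholders, not our actual modules):

    defmodule CameraSupervisor do
      # One child process per camera; :one_for_one means a crashing camera
      # link is restarted on its own without touching the other cameras.
      use DynamicSupervisor

      def start_link(_opts),
        do: DynamicSupervisor.start_link(__MODULE__, :ok, name: __MODULE__)

      def init(:ok), do: DynamicSupervisor.init(strategy: :one_for_one)

      def add_camera(id, opts) do
        spec = {CameraLink, Keyword.put(opts, :id, id)}
        DynamicSupervisor.start_child(__MODULE__, spec)
      end
    end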


> Now the same products are used for very small productions that don't have the budget for any studio camera

Just out of curiosity, what would be examples of very small productions here? Would an independent YouTube channel with great production quality be using this?


Typically 4-camera setups where a single remote can control all of the cameras. For a classical concert, they would use 2 PTZ robotic cameras and 2 mini-cams on some artists and instruments. There is no camera operator at the camera side (for cost reasons), so a single operator has to do it all.

One important point: if you are not live, then there's usually the possibility to adjust everything manually on the camera and finish in post-production, so our remotes are almost never used outside the constraints of live productions.

In the opposite direction, I heard that they had around 250 cameras on Love Island, but you can pretty much control everything from one or two remotes, as there isn't a need for a lot of changes at any single time. The action only happens in front of a few of them. That said, we still have 250 processes running and controlling these cameras continuously.


The extreme upper range of YouTube channels sometimes uses a RED camera. I've not seen a lot of ARRI in YouTuber behind-the-scenes footage. Usually they go with a high-end prosumer full-frame mirrorless Sony, Canon, or equivalent. Those are probably below what Cyanview's stuff is intended for, or just on the edge of what gets used.

I suppose the FX30, FX3 and FX6 are in Sony's cinema line and may have all the color controls that these systems want to tweak, but I'm not sure. These cameras do get a fair bit of play on YouTube.

