Nice project! I've used CrossLink-NX and the Cypress FX3 on various MIPI camera projects as well. Did you notice that Radiant does not constrain the timing of the data from the Mixel hardmacro to your FPGA logic? (Click on one of the pins in the physical viewer to see this.) I ended up adding a ring of flip-flops and physically locking them near the top edge of the chip to get consistent timing.
Looking at the code: why are you not using the byte aligner built into the hardmacro?
The FPGA ISP I am using in this project is an improved version of the ISP I made for the Lattice MachXO3 FPGA; that FPGA, like most FPGAs, does not have a MIPI hard PHY.
If you wanted to port this ISP to Xilinx, for example, you would not find a hard PHY in many of their FPGAs either, and you would need a byte aligner.
That is why the byte aligner was implemented and left enabled: for the sake of portability to other FPGAs. It does not hurt (except maybe a very small performance cost in some edge cases, and some FPGA resource consumption).
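For anyone curious what the aligner actually does: it scans the deserialized bit stream for the D-PHY start-of-transmission sync byte (0xB8) to find the byte boundary. Here is a minimal software model of the idea in Python; the LSB-first bit convention and the flat bit-list input are my assumptions for illustration, not the actual RTL:

    # Software model of a MIPI D-PHY byte aligner (illustrative only).
    # Assumption: the deserializer delivers raw bits LSB-first, and each
    # HS burst begins with the D-PHY sync byte 0xB8.

    SYNC = 0xB8  # D-PHY HS start-of-transmission sync byte

    def find_alignment(bits):
        """Return the bit offset where the sync byte appears, or None."""
        for off in range(len(bits) - 7):
            # Reassemble 8 bits starting at `off`, LSB first, into a byte.
            if sum(bits[off + i] << i for i in range(8)) == SYNC:
                return off
        return None

    def extract_bytes(bits, offset):
        """Slice the stream into bytes starting at the found alignment."""
        return [sum(bits[pos + i] << i for i in range(8))
                for pos in range(offset, len(bits) - 7, 8)]

    # e.g. 3 junk bits, then the sync byte, then one payload byte (0x02):
    stream = [1, 0, 1] + [0, 0, 0, 1, 1, 1, 0, 1] + [0, 1, 0, 0, 0, 0, 0, 0]
    off = find_alignment(stream)        # -> 3
    print(extract_bytes(stream, off))   # -> [184, 2], i.e. [0xB8, 0x02]

In the FPGA this happens on parallel words per lane, but the search for the sync pattern is the same idea.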
I had many issues with the CrossLink-NX part, but I never specifically hit (or noticed) the issue you mentioned.
How can we get USB-C (or USB3) connectivity with 720p@240fps? The IMX477 can theoretically do this, but due to the 2-lane limitation on the Jetson and the RPi it is infeasible (plus write speeds saturate the bandwidth, though you can dump frames to DDR RAM first).
Have had a ton of problems trying to figure this out.
720p@240fps would be a little hard because of limitations on the USB controller side. Because of the 100 MHz, 32-bit limit you can get at most 400 MB/s, and with a few percent of overhead you can maybe get around 200 fps of color UYVY images. If you are OK without color, then you can get to 400 fps at 720p.
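As a quick sanity check on those numbers (the exact ~5% overhead figure is my assumption):

    # Back-of-the-envelope 720p throughput over a 32-bit, 100 MHz interface.
    bus_bytes_per_sec = 100e6 * 32 / 8   # 400 MB/s raw bus bandwidth
    usable = bus_bytes_per_sec * 0.95    # after ~5% protocol overhead (assumed)

    w, h = 1280, 720
    uyvy_frame = w * h * 2               # UYVY color: 2 bytes/pixel
    mono_frame = w * h * 1               # 8-bit monochrome: 1 byte/pixel

    print(f"UYVY color: {usable / uyvy_frame:.0f} fps")  # ~206 fps
    print(f"8-bit mono: {usable / mono_frame:.0f} fps")  # ~412 fps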
B&W is fine by me, color is not necessary. Also fine with PoE and 10GbE if required. It seems crazy that the IMX477 is capable of so much more than what people are currently drawing out of it.
Any pointers on where to start? I've done raspiraw frame dumping with the RPi HQ camera but it's just going to be a non-starter given all the issues it has.
There are only a few options:
1. Use a USB controller chip that supports 10 Gbit or more.
2. Use 10 Gbit or faster Ethernet, optical or copper.
3. Use PCIe.
4. Use onboard storage.
5. Use HDMI or something custom with another receiver (they are called frame grabbers, and everybody hates them).
Right now there are not many controllers on the market that can do more than 5 Gbit.
The most usable solution is Ethernet; I would say optical.
Typically, what cameras like the Edgertronic do is have 8-16 GB of onboard DDR3/DDR4 RAM and write raw frames directly to memory on a trigger or a loop/trap. Then, once you have the X frames you want, process them with ffmpeg/gstreamer using onboard hardware acceleration and write them to an SD card, network drive, attached SSD, or similar. Simply bind X GB of RAM to a memdisk and write to that path in Linux, for example.
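For a rough sense of scale (RAW10 packing and an 8 GB usable buffer are illustrative assumptions, not Edgertronic's actual numbers):

    # How much 720p@240fps footage fits in an onboard DDR buffer?
    w, h, fps = 1280, 720, 240
    frame_bytes = w * h * 10 // 8      # RAW10 packed: 1,152,000 bytes/frame
    buffer_bytes = 8 * 1024**3         # 8 GiB of DDR (assumed usable)

    frames = buffer_bytes // frame_bytes
    print(f"{frames} frames = {frames / fps:.1f} s")   # ~7456 frames, ~31.1 s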
I am fine doing this and have done it using raspiraw and know the Edgertronic platform / source code very well. The problem is that sensors and MIPI lanes on commercially-available products are complete trash with even worse documentation.
Thank you.