Soldering the BGA is the easiest part; you just need good hot air.
Soldering the connectors is the hardest part: they melt, don't self-align, and have many other problems.
Nice project! I've used Crosslink-NX and Cypress FX-3 on various MIPI camera projects as well. Did you notice that Radiant does not control the timing of the data from the Mixel hard macro to your FPGA logic? (Click on one of the pins in the physical viewer to see this.) I ended up adding a ring of flip-flops and physically locking them near the top edge of the chip to get consistent timing.
Looking at code: why are you not using the byte aligner built into the hardmacro?
The FPGA ISP I am using with this project is an improved version of an ISP I made for the Lattice MachXO3 FPGA; those FPGAs, like most FPGAs, do not have any MIPI hard PHY.
If you want to port this ISP to Xilinx, you will not find a hard PHY in many FPGAs, so you would need a byte aligner.
That is why the byte aligner was implemented and left enabled: for the sake of portability to other FPGAs. It does not hurt, except for maybe a very small performance cost in very rare edge cases, or some FPGA resource consumption.
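To make the byte aligner's job concrete, here is a toy Python model (not the project's RTL, where Verilog would be the natural fit): the receiver scans the raw serial bitstream for the D-PHY HS sync byte 0xB8, sent LSB-first, and uses its position to slice every following byte on that boundary. The bit values below are illustrative.

```python
SYNC = 0xB8  # MIPI D-PHY HS sync byte, transmitted LSB-first

def lsb_bits(value):
    """Byte -> list of 8 bits in wire (LSB-first) order."""
    return [(value >> k) & 1 for k in range(8)]

def byte_align(bits):
    """Find the sync byte in a 0/1 stream, return the aligned payload bytes."""
    window = 0
    for i, b in enumerate(bits):
        window = ((window >> 1) | (b << 7)) & 0xFF  # shift in LSB-first
        if window == SYNC:
            payload = bits[i + 1:]
            return [sum(payload[j + k] << k for k in range(8))
                    for j in range(0, len(payload) - 7, 8)]
    return []  # no sync found

# Example: 3 junk bits of skew, then the sync byte, then one payload byte
stream = [1, 0, 1] + lsb_bits(SYNC) + lsb_bits(0x12)
print(byte_align(stream))  # -> [18], i.e. 0x12 recovered despite the skew
```

A hard PHY does this alignment for you; without one, this search-and-slice step is the first thing the fabric logic has to do on each lane.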
I had many issues with the Crosslink-NX part, but I never specifically hit, or noticed, the issue you mentioned.
How can we get USB-C (or USB3) connectivity with 720p@240fps? The IMX477 can theoretically do this, but due to the 2-lane limitation on the Jetson and the RPi it's infeasible (plus write speeds saturate the bandwidth, though you can dump frames to DDR RAM first).
I've had a ton of problems trying to figure this out.
720p@240FPS would be a little hard because of limitations on the USB controller side. Because of the 100 MHz, 32-bit limit you have at most 400 Mbytes per second, and after a few percent of overhead you can maybe get around 200 FPS of color UYVY images. If you are OK without color, then you can get to 400 FPS @ 720p.
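The arithmetic behind those figures can be sanity-checked in a few lines. A sketch, assuming a 100 MHz x 32-bit controller bus, ~5% protocol overhead, UYVY 4:2:2 at 2 bytes/pixel and 8-bit grayscale at 1 byte/pixel (the overhead percentage is my guess, not a measured number):

```python
# 100 MHz x 32-bit bus = 400 MB/s raw; assume ~5% is lost to overhead.
BUS_BYTES_PER_SEC = 100_000_000 * 32 // 8
OVERHEAD = 0.95  # assumed effective fraction after protocol overhead

def max_fps(width, height, bytes_per_pixel):
    frame_bytes = width * height * bytes_per_pixel
    return BUS_BYTES_PER_SEC * OVERHEAD / frame_bytes

print(f"color (UYVY): {max_fps(1280, 720, 2):.0f} fps")  # ~206 fps
print(f"mono (8-bit): {max_fps(1280, 720, 1):.0f} fps")  # ~412 fps
```

That lines up with the "around 200 FPS color, 400 FPS mono" ceiling quoted above.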
B&W is fine by me; color is not necessary. Also fine with PoE and 10GbE if required. It seems crazy that the IMX477 is capable of so much more than what people are currently drawing out of it.
Any pointers on where to start? I've done raspiraw frame dumping with the RPi HQ camera, but it's going to be a non-starter given all the issues it has.
There are only a few options:
1. Use a USB controller chip that supports 10 Gbit or more.
2. Use 10 Gbit or faster Ethernet, optical or copper.
3. Use PCIe.
4. Use onboard storage.
5. Use HDMI or something custom with a separate receiver (these are called frame grabbers, and everybody hates them).
Right now there are not many controllers on the market that can do more than 5 Gbit.
The most usable solution is Ethernet; I would say optical.
Typically what cameras like the Edgertronic do is have 8-16 GB of onboard DDR3/DDR4 RAM and write raw frames directly to memory on a trigger or in a loop/trap. Then, when you have the X frames you want, process them using ffmpeg/gstreamer with onboard hardware acceleration and write to an SD card / network drive / attached SSD or similar. Simply bind X GB of RAM to a memdisk and write to that path in Linux, for example.
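A quick way to size that RAM buffer is to work out how many seconds of footage it holds. A back-of-the-envelope sketch, assuming 1280x720 frames at 240 fps in 10-bit packed raw (1.25 bytes/pixel; the actual IMX477 readout format may differ):

```python
# How many seconds of raw footage fit in an N-GB capture buffer?
def seconds_of_footage(ram_gb, width=1280, height=720, fps=240,
                       bytes_per_pixel=1.25):
    frame_bytes = width * height * bytes_per_pixel  # 10-bit packed raw
    return ram_gb * 1024**3 / (frame_bytes * fps)

for gb in (8, 16):
    print(f"{gb} GB buffer -> {seconds_of_footage(gb):.1f} s at 720p240")
# 8 GB  -> ~31 s of burst capture
# 16 GB -> ~62 s of burst capture
```

So even the larger buffer gives only about a minute of burst capture before you have to drain it to storage, which is why the trigger/loop workflow exists.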
I am fine doing this and have done it using raspiraw and know the Edgertronic platform / source code very well. The problem is that sensors and MIPI lanes on commercially-available products are complete trash with even worse documentation.
I am the Developer behind this camera.
If you buy from Mouser and try to make just one, the parts alone would be around 200 USD.
If you make a hundred in a batch of the new, improved hardware, you can look at a BOM cost under 100 USD.
I am working on a new version of this camera, with a totally new enclosure and PCBs as well.
You will be able to buy this soon enough.
I am the Developer behind this camera.
I have assembled cameras exactly the way you have described. If you buy from Mouser and try to make just one, the parts alone would be around 200 USD.
I was wondering about this comment on the article:
> IMX335 is also MIPI CSI sensor, So it can be added, I just did a quick search and found sensor itself is not available for purchase but you can buy IMX335 based cameras pretty cheap and get camera sensor out of it.
Do you desolder the sensor (and how would you do that)? Or does the sensor still need to be on its original PCB?
I am the developer behind this camera.
It's the second revision of the hardware and also the second revision of the firmware. I have been doing MIPI CSI FPGA work for quite some time.
The current state of the firmware is stable. Where work needs to be done is image quality: color correction, white balance, noise reduction, and other similar things.