The interfacing with devices over SPI/I2C etc. is interesting. There are actually some other interesting developments in this area: 10Base-T1S and 10Base-T1L are recent standards designed to carry Ethernet packets over much lighter-weight single-pair links (multi-drop, in 10Base-T1S's case) - basically allowing a small network of interconnected devices to talk Ethernet to each other (and the outside world, via a bridge) without the requirement for bulky and expensive traditional Ethernet transformers.
I don't think there's wide silicon support yet - but the prospect of a small network of sensors which communicate over ethernet, which can be effortlessly bridged to a standard network is quite nice.
Not just sensors — entire devices and ecosystems of devices could use this, internally and for expandability. Anything that currently uses an RS-485-based protocol or even CAN could potentially switch.
IP over CAN is fine, but the useful throughput is limited. Overhead is very high because the frames are so short--64 useful bits in every 108-bit frame--so you end up repeating the headers many times. That limits effective throughput to about 600 kbps on a 1 Mbps shared bus. (And the nature of the shared bus means that's a single throughput budget for all attached devices.)
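As a sanity check on those figures (a rough sketch; the 108-bit count is for a standard-ID classic CAN data frame with a full 8-byte payload, ignoring bit stuffing, which only makes things worse):

```python
# Rough effective-throughput estimate for IP over classic CAN.
# A standard-ID CAN data frame with 8 data bytes is ~108 bits on the wire
# (SOF, arbitration, control, CRC, ACK, EOF, interframe space), of which
# only 64 bits are payload.

PAYLOAD_BITS = 64
FRAME_BITS = 108
BUS_BPS = 1_000_000  # classic CAN tops out at 1 Mbps

efficiency = PAYLOAD_BITS / FRAME_BITS
effective_bps = BUS_BPS * efficiency
print(f"efficiency = {efficiency:.1%}, effective ~{effective_bps / 1000:.0f} kbps")
```

That lands right around the ~600 kbps ceiling mentioned above, before any IP/UDP headers eat into the 64 payload bits.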
Except ... that they always wind up there eventually.
CAN is primarily useful if you have a broadcast node that only has a small amount of data to send at periodic intervals.
The problem is that as soon as you start adding round trips, CAN starts not looking so good. Which message are you referring to? How are you validating that you aren't getting duplicate messages? etc.
Finally, when you need more than 8 bytes, everything falls into a heap. You have to start putting in sequence numbers (which take up parts of your 8 byte packets), holding things for reassembly, etc. Once you reach that point, you might as well just use Ethernet.
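The bookkeeping described above can be sketched as follows. This is a hypothetical scheme for illustration, not any real standard (ISO-TP is the usual real-world answer, and it likewise spends payload bytes on sequencing):

```python
# Hypothetical fragmentation of a large message over 8-byte CAN frames.
# One payload byte goes to a sequence number, leaving 7 for data (and
# capping a message at 256 fragments) -- exactly the overhead the
# comment above complains about.

def fragment(message: bytes, chunk: int = 7) -> list[bytes]:
    """Split a message into CAN-sized frames: [seq][up to 7 data bytes]."""
    return [bytes([seq]) + message[i:i + chunk]
            for seq, i in enumerate(range(0, len(message), chunk))]

def reassemble(frames: list[bytes]) -> bytes:
    """Reorder by sequence number and strip the header byte."""
    return b"".join(f[1:] for f in sorted(frames, key=lambda f: f[0]))

msg = b"a message longer than eight bytes"
assert reassemble(fragment(msg)) == msg
```

Even this toy version has to hold every fragment in memory and tolerate reordering; duplicate detection and timeouts would be on top of that.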
The main issue is historical: CAN was way cheaper than Ethernet when it was released in 1986. The problem is that Moore's Law is a thing, and there have been roughly 25 iterations of it since then, eroding the cost differential. CAN MACs are not really any cheaper than Ethernet MACs, and Ethernet MACs get WAY more testing. CAN PHYs are only really cheaper than the 10Base-T1S Ethernet PHYs (which are faster), and are comparable in cost to standard Ethernet PHYs (which are WAY faster).
At this point, you're just better off using Ethernet. It's about the same price, significantly faster, and far better tested.
I guess when we get demand we will also get microcontrollers that integrate more and more of it. Still, it feels like replacing an outdated solution with an ill-fitting one.
>Finally, when you need more than 8 bytes, everything falls into a heap. You have to start putting in sequence numbers (which take up parts of your 8 byte packets), holding things for reassembly, etc. Once you reach that point, you might as well just use Ethernet.
But on the other side, Ethernet overhead on bare sensor data is huge. The minimum payload is 46 bytes (in a 64-byte frame), so sending, say, just a 2-byte temperature reading is 32x overhead vs. CAN's 4x (IIRC an 8-byte packet for a 2-byte value). For 8 bytes, CAN has only ~1.75x overhead while the Ethernet frame still has 8x.
So in a sensor-filled device like a car you'd be getting far lower real throughput than the 10 Mbit/s. And the sensors will want to broadcast to whoever listens anyway, so a good part of the frame is wasted. Some kind of Ethernet-light where, say:
* frames can be as short as few bytes
* destination address can be skipped and receiving devices treat it as broadcast
would double or triple the throughput.
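Plugging in the numbers from the comparison above (a back-of-envelope sketch: the 64-byte minimum Ethernet frame against the full ~108-bit classic CAN frame, so the CAN ratios here count the whole frame, not just its 8-byte data field):

```python
# Overhead ratio = bytes on the wire / useful payload bytes.
ETH_MIN_FRAME = 64        # minimum Ethernet frame, incl. FCS
CAN_FRAME = 108 / 8       # ~13.5 bytes on the wire for an 8-byte data field

for payload in (2, 8):
    print(f"{payload}-byte reading: "
          f"Ethernet {ETH_MIN_FRAME / payload:.0f}x, "
          f"CAN ~{CAN_FRAME / payload:.2f}x")
```

For a 2-byte reading this gives Ethernet 32x vs. CAN ~6.75x (the 4x in the comment above counts only CAN's 8-byte data field); at 8 bytes it's 8x vs. ~1.7x, so either way CAN wins on tiny payloads and Ethernet's advantage is purely bandwidth.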
> CAN MACs are not really more expensive than Ethernet MACs, and Ethernet MACs get WAY more testing.
The average car has dozens; I'm pretty sure CAN MACs are tested well enough lmao, what a non-argument
> Some kind of ethernet light where say: frames can be as short as few bytes...
On that note: to keep overhead low for sensors, SatCat5 does not impose a minimum frame length on SPI or UART interfaces; i.e., the minimum payload length is zero bytes. You still have 18 bytes of overhead (destination, source, EtherType, FCS) per Ethernet frame, but that is still a huge win for the low-speed interfaces that need it most.
For compatibility's sake, runt-frames are zero-padded (and FCS updated) before transmission on conventional Ethernet interfaces (*MII). But typically those are running at 100+ Mbps so the impact is acceptable.
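The pad-and-update-FCS step can be sketched like this (an illustration of the general mechanism rather than SatCat5's actual implementation; Ethernet's FCS is the standard CRC-32, transmitted least-significant byte first):

```python
import struct
import zlib

ETH_MIN_NO_FCS = 60  # 64-byte minimum frame less the 4-byte FCS

def pad_runt(frame_no_fcs: bytes) -> bytes:
    """Zero-pad a runt frame to the Ethernet minimum and append a fresh FCS."""
    padded = frame_no_fcs.ljust(ETH_MIN_NO_FCS, b"\x00")
    fcs = zlib.crc32(padded)                # Ethernet FCS is CRC-32
    return padded + struct.pack("<I", fcs)  # on-wire byte order: LSB first

runt = bytes(14)  # header-only frame with a zero-byte payload
assert len(pad_runt(runt)) == 64
```

The FCS has to be recomputed because the pad bytes are part of the frame the receiver checksums; forwarding the original FCS after padding would make every runt look corrupted.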
It's worth noting that you can use Ethernet without a full IP stack. Protocols like CoAP are specifically designed to run on a UDP-only stack (ARP/ICMP/IP/UDP) to save on code-space for tiny microcontrollers. If you only need local connectivity, raw-Ethernet messages have even less overhead.
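To make the raw-Ethernet point concrete: a local-only message is just a 14-byte header plus payload. A minimal sketch of assembling one (the 0x88B5 EtherType is the IEEE "local experimental" value; actually transmitting it on Linux would need an AF_PACKET socket and elevated privileges):

```python
import struct

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble a raw Ethernet II frame: dst MAC, src MAC, EtherType, payload."""
    return struct.pack("!6s6sH", dst, src, ethertype) + payload

frame = build_frame(b"\xff" * 6,                    # broadcast destination
                    bytes.fromhex("02005e000001"),  # locally-administered src
                    0x88B5,                         # IEEE local experimental
                    b"\x01\x02")                    # 2-byte sensor reading
assert len(frame) == 16  # 14-byte header + 2-byte payload, before padding
```

No ARP, no IP, no UDP: the receiver just filters on the EtherType. That's the "even less overhead" case for traffic that never leaves the local segment.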
This is really intriguing. I was recently pondering something related:
10/100 ethernet only uses two pairs, one in each direction. Of course the jack has four pairs, and the cable has four pairs and gigabit uses all four. But, if you're only using 10/100, it's trivial to split the jack on either end of a run, and carry two "ports" worth over a single cat5 cable.
However, this requires a second splitter at the switch end, like so:
Would it be possible for a suitably-bastardized GigE PHY to drive the four pairs as two independent "ports" of 10/100, assuming such a splitter was just on the far end, but the cable on the near end was plugged directly into the switch?
Don't ask about practical applications, I really have none in mind. It just struck me as a perverted and wrong thing to do, therefore it must be done.
(And on that note, since GigE really runs the four pairs as four nearly-independent bidirectional channels and then merges their capacity afterward.... could you get four 250Mbps "ports" out of such a thing? By routing all four pairs to four separate devices, each of which is also running this bonkers implementation? Essentially reinventing single-pair ethernet (yet again yet again yet again yet again) using all stock PHYs. This would require cooperating devices on both ends of the link, but the prior idea could be used with stock 10/100 devices.)
that's hilarious. it's essentially a physical vlan. it could be of use for anything that doesn't support vlans directly, and it'd be a lot cheaper than running a switch to do the tagging/untagging.
I've done exactly that, when I couldn't be arsed to figure out QinQ on a wrt54g which already uses VLAN tagging internally between the SOC and the switch ASIC. Just used two physical ports and a splitting scheme just like the above.
The trick would be doing it with _one_ physical port on a suitably malleable GigE-equipped device.
A lot of modern Ethernet speeds are the result of what you mention at the end. Doing it with a gigabit port was never standardized to my knowledge, it only really got popular after 10G and later hit the scene, but there shouldn't be anything preventing a custom implementation.
While I am always down with Ethernet All The Things(tm), this seems like an odd duck. I'm thinking that they wanted to monetize this but 10Base-T1L and 10Base-T1S sidelined them.
SPI Ethernet chips have existed for quite a while. Existing COTS Ethernet switch chips are going to be far more robust and quite a bit smaller. And, now that you can actually buy 10Base-T1L and 10Base-T1S transceivers--you can get even lower power by reducing the signalling levels (or maybe not for space applications). T1L gets you solid industrial point-to-point robustness, and T1S finally gets you multi-drop on single pair again (And the angels sang "Hallelujah!" for networking like it's 1988). Both of which can have genuine transformer or transformerless galvanic isolation.
All this for far cheaper, far less complexity and WAY less power than that FPGA.
I guess the main advantage is UART-interface Ethernet? A lot of cellular-modem-like things have this really bastardized networking over UART that does have the advantage of a seemingly "standardized" AT command set, so it's possible this is important in satellite stuff, which tends to piggyback on other industries as much as possible due to low volume.
Lead developer here. The intent from the start was to make an open-source design for the explicit purpose of encouraging adoption. The cubesat and smallsat industry as a whole could really use more standardization.
In short: Writing a white-paper saying "plz use Ethernet in your cubesat" is one thing. Writing one that says "plz use Ethernet, we're using it for all of our new cubesats, and by the way here's the source code for the Ethernet switch" is quite another.
For us, the main advantage is having a mixed-media switch. The Slingshot-1 cubesat launched July 2022 literally has UARTs and gigabit Ethernet (SGMII) on the same LAN.
Interesting that it's both patented and under LGPLv3/GPLv3 (unclear which applies). It seems like a way to head off non-FOSS/proprietary implementations, although I would think releasing it as regular GPLv3 would already serve as prior art under patent law. IANAL, but would love to know.
OSH licenses for FPGA code are a total mess. I am working on some hardware that I plan to open source very soon, but figuring out what license to use is extremely difficult. My options seem to be:
* Start with CC-BY-NC that pretty clearly disallows commercial use in any way and punt on the question (or allow commercial users to pay a nominal fee for a commercial license)
* GPL or LGPL - both of which are unclear how they apply in the case of ASIC and FPGA designs, particularly when you consider weird corner cases like partial reconfiguration (FPGA and ASIC tools allow you to sequester the open source block in its own "compilation unit" and compiling that unit separately from the main system, only connecting the wires at the last steps)
* CERN OHLv2 license of some kind, which seems to give up its copyleft provisions on things that don't involve giving stuff to a customer
* MIT/BSD
A lot of folks in hardware are not particularly interested in going for the MIT/BSD route because of the commercial nature of IP cores. It essentially allows a random firm to repackage your IP and sell it as their own (usually for a shit load of money - hardware licenses are often 5-6 figures).
I addressed my concerns about the CERN license already. In particular the definition of "Convey" doesn't seem to apply unless you provide a product of some sort to a customer of some sort or release your source code as open-source. Hosted products and commercial use for internal purposes are arguably not covered under the copyleft parts of the license.
It does seem to be the least bad option, though, for most things.
> (FPGA and ASIC tools allow you to sequester the open source block in its own "compilation unit" and compiling that unit separately from the main system, only connecting the wires at the last steps)
The whole "compilation unit" stuff kind of reminds of the whole debacle with Nvidia (and many other linux driver vendors) creating a "generic" closed source driver, and a thin, useless, open source "shim" layer that connected the closed source driver to the GPL symbols of the kernel.
Needless to say, nobody was amused by this. It never went to court, but the linux kernel put various technical restrictions in place to prevent those shims from working. See these notes from kernel 5.9[0].
> GPL or LGPL - both of which are unclear how they apply in the case of ASIC and FPGA designs, particularly when you consider weird corner cases like partial reconfiguration (FPGA and ASIC tools allow you to sequester the open source block in its own "compilation unit" and compiling that unit separately from the main system, only connecting the wires at the last steps)
If the goal is to write a "pure" license (that only grants rights, and doesn't restrict any you already have without the license) and not a contract (eg EULA) it isn't really possible for there to be much clarity here. Because then the license only applies to derivative works (and exact copies), and the boundaries of that is entirely up to the courts to decide, not the license. Even for software, most of the common knowledge about where those boundaries occur (say when linking) hasn't been tested in court. All we know for certain are the things the license explicitly allows (like using system libraries).
The courts already threw us a curve ball by saying that APIs were eligible for copyright, and implementing them was a derivative work (albeit, usually one covered by fair use). I wouldn't be surprised if they threw some more.
I agree that hardware should have stronger copyleft requirements than code.
MIT/BSD has a place where you have a consortium or an org encouraging the ecosystem to maintain compatibility. The gain is that the ecosystem is compatible with a spec, so the whole is more important than each individual entity.
I just read the MPL, and if you look at the CERN license, you can see some of the problems with licenses like GPL and MPL.
In the software-oriented licenses, the lawyers who wrote them are careful to express things in precise, software-specific terms, but stop short of actually defining those terms. That keeps the text shorter and lets those definitions shift over time. For the GPL, this means taking the definition of "linking" for granted, and in the MPL, the definition of "larger work" pretty clearly seems to apply only to software:
A "Larger Work" is a work that "...combines Covered Software with other material, in a separate file or files, that is not Covered Software." In software land, this is fine because you distribute software in the form of files, but if you are distributing the Covered software as a piece of silicon, it's unclear whether that is a "larger work" or not, because the other material is not in a separate file at the point of distribution.
Here's the CERN "strongly reciprocal" license for reference, showing the difference between the software- and hardware-specific language:
do you need to provide a license? who’s your target audience (end users and contributors) and how do you imagine their workflows looking?
you hint that maybe you don’t want commercial use of this thing: if you expect the person who `git clone`s your repository to be the same person that builds/assembles the product, you might also consider the option of just not providing any license. “code/build plans are freely accessible at some authoritative website/repo” might be license enough for anyone directly consuming the product, and a lack of license may be enough to scare away commercial use by any business which would adhere to your license in the first place.
nailing down a license would be most relevant if you want to facilitate some kind of cottage industry, or want to invite collaborators who don’t trust you and might be wary that you’ll turn around and commercialize their contributions against their desires (i find working in low-trust environments to be exhausting, but sometimes the benefits do outweigh the costs there).
That's a good point. I don't mind transformative commercialization, personally, but I do mind direct resale. The current application space is wide enough that there are a lot of forms of that, and none of the off-the-shelf licenses give that to me.
By the way, the application is a custom trusted enclave for the cloud inside an FPGA, which essentially means a bunch of processors with some custom cryptographic accelerators and application-specific programming.
I think I'm going to only end up releasing a few packageable subsets of it that do fit well with one of these licenses, like the processor core, for now. I probably won't be accepting contributions either way.
Lead developer here. The license is LGPLv3; sorry for any confusion.
The Aerospace Corporation specifically wanted a weak copyleft license for SatCat5. i.e., If you use it as-is in a larger design, great; if you modify it, please make those improvements available to the wider community. A full GPL license would have imposed copyleft obligations on the rest of the end-user design, and we felt that was too much of a barrier for industry adoption. Our intent is to capture improvements to SatCat5 itself and only those improvements.
Others here have mentioned that terms like "compilation unit" are ambiguous at best in an FPGA or ASIC design, and I fully agree. Sadly, there just weren't a lot of widely-known, weak-copyleft options when we were making that decision in mid-2019.
The good news is this isn't set in stone and it can be changed. CERN-OHL-W 2.0 wasn't around when we first published this in 2019, but I'll take a close look and see if that's a better fit.
Wow, this looks really cool and useful! I had a thought though: if the concept here is to create a local LAN for devices on an isolated system (a satellite) to send data to each other, I wonder if there is some model other than a network that might be better. I am thinking of something like a shared memory store with transactional operations support, key/value and pub/sub notifications, etc. Sort of like a hardware version of Redis? It seems like dealing with network stuff is just unnecessary overhead.
As an embodiment of hardware abstraction, the present invention relates to a key value interface in an Ethernet switch. The invention provides a Redis interface on the switch, enabling the interfacing with I2C devices to be akin to a read operation from a key/value store over the Redis protocol. The invention contemplates the publication of RISC-V or Wasm code to a key/value endpoint, followed by awaiting a response on a value. The present invention constitutes prior art in the patent categories of network switches, key-value stores, and hardware abstraction.
Models other than networking already exist for the embedded space (some sorta analogous to what you propose, but not exactly thought about in those terms). They're the more common option than going all the way to a communication protocol as complex as ethernet.
But you eventually have to bridge between that and typical IP networking. It's usually the responsibility of the designer to bring their own hardware for the transition between (e.g.) SPI and twisted pair Ethernet. That's why this project to put it in the switch is novel.
I think it depends on the application. It's very unlikely someone will MITM the connection between components on a set of circuit boards in a satellite (the use-case they designed this for).
This exactly. The radio has the encryption; the LAN doesn't really need it. If you send a robot to wiretap the satellite then we're in trouble, but that's true regardless.
> SatCat5 also includes software libraries targeting both baremetal and POSIX systems for:
So with the Windows Subsystem for Linux, it might just about run anywhere, which could then introduce its own security situation.
It will be interesting to see whether the UAC prompt overrides I've seen when using Kali Linux and Wireshark as a VMware guest on Win10 will also occur with this code. The UAC prompt override is where the VMware guest network for Kali Linux is put into promiscuous mode in order to capture all packets.
Every Windows reboot switches off promiscuous mode, and someone on here suggested it's VMware doing this, not Windows.
Maybe this is the code/tool to see whether it is Windows or VMware overriding UAC-protected settings.
https://www.microchip.com/en-us/solutions/ethernet-technolog...