WIZnet makers

viktor

Published March 18, 2026


How to Build an FPGA-Centric Slow Control Interlock with W5500 on GateMate M1A1?


COMPONENTS (Hardware components)

  • WIZnet W5500 x 1


PROJECT DESCRIPTION

Summary

This project implements a slow-control and interlock pipeline on a Cologne Chip GateMate M1A1 FPGA, using WIZnet W5500 devices as the Ethernet front end for UDP metric ingress and telemetry egress. The W5500’s hardwired networking model lets the FPGA stay focused on packet adaptation, threshold evaluation, and interlock generation instead of carrying a full software network stack inside the control path.

SCIS Infrastructure
Source: https://github.com/HTI-OVGU/FPGA-centric-SCIS

What the Project Does

FPGA-centric-SCIS is a supervisory monitoring unit that accepts metric packets, evaluates them in FPGA fabric, and raises alerts when thresholds or buffer conditions require action. In the repository, metric packets use a compact format with a V01 protocol code, a device identifier, and a signed Q22.10 fixed-point value. Those packets arrive as UDP traffic through W5500-based Ethernet interfaces, pass through a UDP packet adapter, and feed a data concentrator that prioritizes incoming streams and drives interlock behavior. On the software side, a Python metric packet server exposes the data to Prometheus, with Grafana used as the visualization layer.
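That compact format can be sketched in a few lines of Python. The b'V01' tag and the signed Q22.10 fixed-point value come from the repo; the one-byte device identifier and big-endian byte order below are illustrative assumptions, not the repo's documented wire layout:

```python
import struct

SCALE = 1 << 10  # Q22.10: 10 fractional bits, so 1 LSB = 1/1024

def encode_metric(device_id: int, value: float) -> bytes:
    """Pack a metric packet: b'V01' tag, device id, signed Q22.10 value.
    The one-byte id and big-endian order are assumptions for illustration."""
    fixed = int(round(value * SCALE))  # quantize to 1/1024 steps
    return b'V01' + struct.pack('>Bi', device_id, fixed)

def decode_metric(packet: bytes) -> tuple[int, float]:
    """Reverse of encode_metric: check the version tag, unscale the value."""
    assert packet[:3] == b'V01', "unexpected protocol version"
    device_id, fixed = struct.unpack('>Bi', packet[3:8])
    return device_id, fixed / SCALE

print(decode_metric(encode_metric(7, -3.25)))  # (7, -3.25)
```

Values that are not exact multiples of 1/1024 get quantized on encode, which is the usual trade-off of fixed-point telemetry formats.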

The repository is not just a protocol demo. It includes the FPGA HDL, testbenches, build flow for the GateMate M1A1 board, a Prometheus/Grafana-oriented software stack, and timing scripts for UDP round-trip and alert measurement. The hardware README explicitly positions the design as a deterministic, AXI-stream-based data concentrator with low-latency interlock assertion and priority-aware telemetry.

GateMate M1A1 FPGA Evaluation Board V3.2, two W5500 Ethernet Modules and one ESP32 for testing external interlock assertion
Source: https://github.com/HTI-OVGU/FPGA-centric-SCIS

Where WIZnet Fits

The exact WIZnet part used here is the W5500. In this design it is not a generic network accessory; it is the transport boundary between Ethernet and the FPGA’s internal control fabric. The repo uses two SPI-connected W5500 modules on the GateMate evaluation board, one dedicated to receive traffic and one dedicated to transmit traffic, which is a strong architectural choice for a system that cares about predictable monitoring and alert delivery.

That makes technical sense for this class of project. The W5500 integrates MAC, PHY, and a hardwired TCP/IP stack, exposes communication through SPI, and supports eight hardware sockets with 32 KB of internal buffer memory. For an FPGA-centric supervisory unit, that means the design can keep threshold logic, prioritization, and interlock timing in HDL while offloading the Ethernet session mechanics to a dedicated chip instead of building or hosting a larger networking stack in fabric or on a soft processor.
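That 32 KB figure is worth unpacking: the W5500 splits it into a 16 KB RX pool and a 16 KB TX pool, each dividable among the eight sockets in power-of-two chunks (0, 1, 2, 4, 8, or 16 KB), with 2 KB per socket as the default. A small sketch for sanity-checking an RX allocation plan:

```python
# Valid per-socket buffer chunk sizes on the W5500 (in KB),
# per the Sn_RXBUF_SIZE register options.
VALID_SIZES_KB = {0, 1, 2, 4, 8, 16}

def check_rx_plan(sizes_kb: list) -> list:
    """Validate a per-socket RX buffer plan against the 16 KB RX pool."""
    assert len(sizes_kb) == 8, "the W5500 has eight sockets"
    assert all(s in VALID_SIZES_KB for s in sizes_kb), "invalid chunk size"
    assert sum(sizes_kb) <= 16, "RX pool is 16 KB in total"
    return sizes_kb

check_rx_plan([2] * 8)                    # the default: 2 KB per socket
check_rx_plan([8, 4, 2, 1, 1, 0, 0, 0])  # uneven splits also fit the pool
```

The 2 KB default is the per-socket budget the repo's receive-side corner case refers to; giving the busiest socket a larger chunk is one possible mitigation, at the cost of headroom on the others.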

The practical fit is visible in the repo itself: the system runs at a 40 MHz FPGA system clock, opens all eight sockets, assigns UDP source ports per socket, and currently notes a receive-side corner case when UDP traffic exceeds a 2 KB RX buffer budget. That is the kind of constraint where a W5500-based, socket-oriented architecture is attractive: the network edge is explicit, the SPI boundary is simple, and the remaining determinism problem is mostly inside the FPGA datapath.
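A back-of-envelope check makes that 2 KB limit concrete. In UDP mode the W5500 stores an 8-byte header (peer IP, peer port, payload length) ahead of each datagram in the socket RX buffer; assuming an 8-byte metric payload (an assumption for illustration, not the repo's measured size), the queue depth before overrun works out to:

```python
RX_BUFFER = 2 * 1024     # per-socket RX budget noted in the repo
W5500_UDP_HEADER = 8     # IP (4) + port (2) + length (2) per datagram
METRIC_PAYLOAD = 8       # assumed size of one V01 metric packet

# How many datagrams can queue in one socket before the budget overruns.
backlog = RX_BUFFER // (W5500_UDP_HEADER + METRIC_PAYLOAD)
print(backlog)  # 128
```

In other words, a slow SPI drain or a burst of more than a hundred small metrics on one socket is enough to hit the documented corner case.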

Implementation Notes

In hardware/hdl/top.vhd, the design makes the split-RX/split-TX intent explicit:


tx_w5500_fsm : w5500_state_machine
  generic map(socket_amount => 8, DEFAULT_ROUTINE => "send_first")

rx_w5500_fsm : w5500_state_machine
  generic map(socket_amount => 8, DEFAULT_ROUTINE => "receive_first")

This matters because the project is not time-sharing one Ethernet controller for all duties. It assigns one W5500 state machine to stay biased toward transmission and another to stay biased toward reception, which matches the stated “dual W5500, one for RX and one for TX” architecture in the README and reduces contention between monitoring traffic and outbound alerts.

The same top-level file also shows how received UDP payload is pushed into the control pipeline:

unit_udp_packet_adapter : udp_packet_adapter
data_concentrator_input_vector(0) <= post_udp_adapter_axis;

That connection is important because it shows the W5500 is not being used as an end in itself. The RX-side payload is converted into the project’s AXI-stream metric format, then handed directly to the data concentrator where prioritization and interlock decisions happen in FPGA logic.

On the host side, software_infrastructure/MetricPacketServer/metric_packet_server.py preserves the same framing contract:

UDP_PORTS = range(9217, 9225)
PROTOCOL_VERSION = b'V01'

This matters because the software infrastructure is aligned with the hardware socket model rather than hiding it. The Python server expects the same versioned packet format described in the repo README, listens across the eight-port range, and publishes decoded values to Prometheus, which is exactly what makes the HDL-to-dashboard path coherent instead of ad hoc.
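A minimal client that honors the same contract might look like this. The port range and b'V01' tag come from the repo; the device-id-to-port mapping and the payload layout after the tag are illustrative assumptions:

```python
import socket
import struct

# Eight UDP ports, one per W5500 hardware socket, as in the repo's server.
UDP_PORTS = range(9217, 9225)
PROTOCOL_VERSION = b'V01'

def port_for_device(device_id: int) -> int:
    """Spread device ids across the eight sockets round-robin (an
    illustrative mapping, not one prescribed by the repo)."""
    return UDP_PORTS[device_id % len(UDP_PORTS)]

def build_metric(device_id: int, q22_10: int) -> bytes:
    """Assemble a packet; the one-byte id and big-endian signed Q22.10
    field are assumptions for illustration."""
    return PROTOCOL_VERSION + struct.pack('>Bi', device_id, q22_10)

def send_metric(host: str, device_id: int, q22_10: int) -> None:
    """Fire one metric datagram at the socket assigned to this device."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(build_metric(device_id, q22_10),
                 (host, port_for_device(device_id)))
```

Keeping one port per logical source mirrors the hardware's socket partitioning, so a noisy device can only exhaust its own socket's RX budget.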

Practical Tips / Pitfalls

  • Keep the dual-W5500 partition if you want the repo’s current behavior. The top-level HDL is written around one TX-oriented controller and one RX-oriented controller, not a single shared Ethernet device.
  • Treat receive-buffer pressure as a real design limit. The repo already documents a case where the RX W5500 can close a socket when UDP traffic overruns a 2 KB RX buffer budget.
  • Preserve clean SPI routing and chip-select separation. This implementation uses two distinct SPI interfaces, so wiring shortcuts will change the architecture, not just the pinout.
  • For latency measurements, copy the project’s host-side test discipline: disable NIC coalescing and pin the test process to a dedicated CPU core with real-time priority.
  • Do not casually change packet framing. The hardware and software both assume V01 packets carrying an identifier and Q22.10 value, so format drift breaks the full chain.
  • Stay with UDP unless you plan extra HDL work. The README lists TCP handling as backlog rather than finished functionality.

FAQ

Q: Why use the W5500 for this project instead of building Ethernet more directly in the FPGA?
A: Because this design is FPGA-centric for supervision and interlock logic, not for implementing a complete network stack. The W5500 provides hardware sockets, integrated MAC/PHY, and hardwired TCP/IP over SPI, so the HDL can concentrate on packet adaptation, prioritization, and threshold decisions while the Ethernet edge stays offloaded.

Q: How does the W5500 connect to the GateMate platform here?
A: Over SPI. The top-level HDL exposes separate mosi, miso, sclk, and cs signals for two W5500 instances, and the hardware README states that the GateMate M1A1 build targets two W5500s connected to PMOD pins.

Q: What role does the W5500 play in this specific SCIS design?
A: The RX-side W5500 accepts UDP metric packets from external sources and feeds them into the UDP adapter and data concentrator. The TX-side W5500 sends processed telemetry and alert traffic back out, including explicit interlock notifications generated in HDL.

Q: Can beginners follow this project?
A: It is accessible to an intermediate FPGA developer, but it is not a beginner-first Ethernet tutorial. You need to be comfortable with VHDL, SPI peripherals, UDP packet structure, and the GateMate build/simulation flow; the repo does help by including make targets, GHDL/GTKWave simulation steps, and Python timing scripts.

Q: How does this compare with using a soft Ethernet MAC plus lwIP or another software-heavy network path?
A: A soft-MAC-plus-lwIP approach can be more flexible, but it usually pulls more protocol work into a CPU or softcore environment. This repo is organized around keeping the supervisory path in HDL and using the W5500 as a socket-oriented Ethernet boundary, which is a cleaner fit for deterministic interlock and metric concentration than asking the FPGA to host both control logic and a broader software network stack.
