
Sending IPv4/UDP packets through RGMII interface


Zhargal

Question

Hello, everyone!

I want to send some data from an ADC over Ethernet using the UDP protocol on a Genesys 2 board, but I don't see anything in Wireshark, even though I tried to send the example packet from here:

https://www.fpga4fun.com/10BASE-T2.html

And even the checksum at the end is calculated properly. I tried 125 MHz at first, then switched to 25 MHz, and I still don't know how to handle this. What should I do, and how can I find my mistake? The LED on the board is even blinking, and the firewall is off in Windows.


Recommended Posts

The Genesys2 board Ethernet PHY comes out of reset all ready to rock at 1 GbE mode. You'll need a 125 MHz reference clock. If you want to do 10BASE-T rates, you'll need to change some of the PHY registers through the serial management interface and use a 25 MHz reference clock. If you want to support multiple data rates, then you need a clock switch to work with 10/100/1000 Mbps.

Windows is not the easiest OS to work with when connecting external devices over Ethernet.

1 hour ago, zygot said:

> The Genesys2 board Ethernet PHY comes out of reset all ready to rock at 1 GbE mode. […]

Thank you very much for the answer! Unfortunately, even with 125 MHz and a 2 ns data-to-clock skew, I don't see anything in Wireshark. The last thing I can try is changing register values so I can use a 25 MHz clock. Why 125 MHz doesn't work is still a mystery to me.

ila2.png


If your FPGA application can't communicate with the Ethernet PHY then trying to see packets on an external device is a waste of time.

The Genesys2 Ethernet PHY uses RGMII. You'll need to convert DDR data to 8-bit GMII SDR data before any application in the FPGA can talk to the PHY. You'll need to provide good timing constraints to get RGMII working, especially at 1000 Mbps.

The first thing to do is verify that your Ethernet PHY interface is working properly. The best way to do this is to set the PHY register bit that puts the PHY into loopback mode. This will echo back any data that your FPGA application writes to the interface. An ILA capturing the GMII data might help.
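
As an illustration, a minimal sketch of a Clause 22 MDIO write (the module and signal names here are only examples, not code from this thread). The loopback control is bit 14 of register 0, the BMCR, so writing 0x4000 to register 0 puts the PHY into loopback:

module mdio_write #(
    parameter [4:0] PHYAD = 5'h01      // PHY address: check your board's strapping
) (
    input  wire        mdc,            // management clock, 2.5 MHz or slower
    input  wire        rst,
    input  wire        start,          // pulse to launch one write
    input  wire [4:0]  regad,          // register address (0 = BMCR)
    input  wire [15:0] wdata,          // e.g. 16'h4000 to set the loopback bit
    output reg         mdio_o,         // drive the MDIO pad through a tristate buffer
    output reg         mdio_oe,
    output reg         busy
);
    // Clause 22 write frame: 32 preamble ones, ST=01, OP=01 (write),
    // 5-bit PHY address, 5-bit register address, TA=10, then 16 data bits.
    // For simplicity MDIO changes on the rising MDC edge here; many designs
    // update it on the falling edge so the PHY samples a stable value.
    reg [63:0] frame;
    reg [6:0]  bitcnt;

    always @(posedge mdc) begin
        if (rst) begin
            busy    <= 1'b0;
            mdio_oe <= 1'b0;
        end else if (start && !busy) begin
            frame   <= {32'hFFFF_FFFF, 2'b01, 2'b01, PHYAD, regad, 2'b10, wdata};
            bitcnt  <= 7'd64;
            busy    <= 1'b1;
            mdio_oe <= 1'b1;
        end else if (busy) begin
            if (bitcnt != 0) begin
                mdio_o <= frame[63];             // shift out, MSB first
                frame  <= {frame[62:0], 1'b0};
                bitcnt <= bitcnt - 1'b1;
            end else begin
                busy    <= 1'b0;
                mdio_oe <= 1'b0;                 // release the bus after the last bit
            end
        end
    end
endmodule

Reading back the ID registers is the usual sanity check that the management interface itself works before you trust any loopback result.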

On 11/21/2022 at 4:45 PM, zygot said:

> If your FPGA application can't communicate with the Ethernet PHY then trying to see packets on an external device is a waste of time. […]

Well, I changed the register values for loopback mode and changed the speed to 100 Mbps. I still don't see anything. The interesting thing is that I can read the correct PHYID1 and PHYID2 values and catch data from the laptop, but I still don't know why I can't see anything on the laptop side.

ila_mdio.png


Well, for one, if the PHY is in loopback mode, nothing will be happening on the cable attached to the RJ45 connector.

While you are in loopback mode you can verify that the data being sent to the PHY is the same as what's being returned. If that's the case, then you are on to a different level of things to check.

I don't know anything about the tutorial that you are following or what you implemented. What I do know is that Ethernet doesn't work unless you packetize your data and every packet starts with the correct preamble. You should see this in loopback mode using your ILA.

If everything checks out in loopback mode there still might be hurdles. PCs and switches keep a list of active IP addresses for connected Ethernet devices. If they periodically send an ARP packet to discover who's on the other end of the Ethernet cable and get no response, this could be a problem. Your FPGA application might have to provide ARP packets to advertise your IP address and reply to ARP requests. An Ethernet switch between your FPGA and your PC will simply not forward any traffic if you don't do this. Connecting your FPGA directly to your PC running Wireshark or other diagnostic software should work, as long as you properly calculate the CRC and populate the packet headers. Of course, you need to set up your Ethernet port properly in any modern OS or you will have problems communicating with, or even seeing, the FPGA packets.
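
To make the ARP part concrete, here is the field-by-field layout of a minimal ARP reply. This is an illustrative sketch only (the MAC and IP are the example values used later in this thread, and the module name is made up); the preamble/SFD, padding to the 60-byte minimum, and the FCS still have to be wrapped around it:

module arp_reply_pkt (
    input  wire [47:0]     req_sha,  // sender MAC from the received ARP request
    input  wire [31:0]     req_spa,  // sender IP from the received ARP request
    output wire [8*42-1:0] pkt       // 42 bytes, transmitted left to right
);
    localparam [47:0] OUR_MAC = 48'h001234567890;               // example station MAC
    localparam [31:0] OUR_IP  = {8'd192, 8'd168, 8'd0, 8'd44};  // 192.168.0.44

    assign pkt = {
        req_sha,      // Ethernet destination = the requester
        OUR_MAC,      // Ethernet source
        16'h0806,     // EtherType: ARP
        16'h0001,     // HTYPE: Ethernet
        16'h0800,     // PTYPE: IPv4
        8'd6, 8'd4,   // HLEN = 6, PLEN = 4
        16'h0002,     // OPER: 2 = reply
        OUR_MAC,      // SHA: our MAC
        OUR_IP,       // SPA: our IP
        req_sha,      // THA: the requester's MAC
        req_spa       // TPA: the requester's IP
    };
endmodule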

On 11/24/2022 at 2:19 PM, zygot said:

> Well, for one, if the PHY is in loopback mode, nothing will be happening on the cable attached to the RJ45 connector. […]

That's the point: I don't see anything coming back. I generate the same sequence as in the example here:

https://www.fpga4fun.com/10BASE-T2.html

and I still don't see anything coming back in loopback mode. I also manually put the IP address and MAC address into the ARP table, and the checksum at the end is calculated properly.


It's hard to tell what's going on from your ILA screenshot. The first problem is that you are capturing 4-bit DDR data on the RGMII interface. You need to be converting that to GMII 8-bit SDR data, as I mentioned before. So your Ethernet PHY data interface isn't working. Don't even try doing anything until you are capturing 8-bit data out and 8-bit data in that matches what's going to the PHY.

There were some older FPGA boards with GMII Ethernet PHY interfaces, like the ATLYS and original Genesys. These are better platforms for getting started. RGMII and SGMII are good projects if you've already mastered the GMII SDR data interface.

I did look at your project. The 8-bit data sequence makes for an easy way to get started. But that's your problem. You need to do conversions between 8-bit SDR and 4-bit DDR. Unfortunately, this is a pretty old tutorial. I didn't bother to see if it had any advice on how to create a PHY interface; but this is the first step. You're trying to start swimming before leaping into the pool.

So, stop what you are doing and start doing some research. If you can find older tool versions of Quartus or ISE from the ATLYS and Genesys era, you can find some guidance on how to do GMII-RGMII data conversion.

19 minutes ago, zygot said:

> It's hard to tell what's going on from your ILA screenshot. […]

Thank you for trying to help. But I don't have the time or the opportunity to buy another board, and I don't see any sense in doing a GMII-RGMII conversion, since I can already see what the signal looks like in the testbench and the ILA.



Hi @Zhargal

Getting RGMII to work is a bit of a rite of passage: any of a dozen small mistakes can mean you don't see anything.

Here's a quick checklist. I didn't read your entire exchange so far in detail, so some of these may have been covered already:

* Make sure your PHY chip is in a mode that allows auto-negotiation of a 1-gigabit link. To verify this, the practical thing to do is hook up 1-to-1 to a computer with an Ethernet port (without a switch in between) and use some tool to inspect the status of the Ethernet transceiver; I recommend Linux and "ethtool". Using your tool, make sure both sides see each other and establish a gigabit link. If yes, then at least the PHYs on both sides like each other. If not, you will first need to do some MDIO programming.
* Make sure you properly lead both your 125 MHz clock and your data bits through the appropriate DDR transmitters (see the sketch after this list).
* Make sure you have the appropriate timing offset between the CLK and DATA bits. RGMII is very picky about this and specifies a mandatory offset between the CLK and DATA bits. I personally have had success by generating two 125 MHz phase-shifted clocks, using one for the four DATA signals and ENABLE, and the other for the CLK signal. This step may take some trial and error.
* Make sure ENABLE is set from the first preamble octet all the way to the last FCS octet.
* Make sure you have the bit order right, both in terms of time and in terms of endianness, as you push octets into the DATA DDR blocks. It's easy to make a mistake! My approach in cases like these is to simply try all possible combinations until I find the one that works, and then retroactively convince myself that it was the correct choice ... :)
* Make sure you send a valid packet. I hope the fpga4fun packet generator gets that right, including the FCS. Also, make sure you honor the standard Ethernet payload lengths (min 46, max 1500 payload octets, if memory serves).
* For your first experiments, I would recommend using both the Ethernet and IPv4 broadcast addresses, as this increases the probability that you will at least see something.
* When sending out packets, honor the mandatory inter-frame gap (at least 12 cycles at 125 MHz between successive packets).
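
To make the two DDR bullets concrete, here is a minimal sketch of an RGMII transmit output stage for a 7-series part, using the unisim ODDR primitive (module and clock names are only examples). It assumes an MMCM already produces clk125 and a copy shifted by 90 degrees, which at 125 MHz gives the ~2 ns clock-to-data offset:

module rgmii_tx_oddr (
    input  wire       clk125,      // GMII-side clock
    input  wire       clk125_90,   // 90-degree shifted copy, for the TXC pin only
    input  wire [7:0] gmii_txd,    // one octet per clk125 cycle
    input  wire       gmii_tx_en,
    input  wire       gmii_tx_er,
    output wire [3:0] rgmii_txd,
    output wire       rgmii_tx_ctl,
    output wire       rgmii_txc
);
    genvar i;
    generate
        for (i = 0; i < 4; i = i + 1) begin : g_txd
            // Low nibble on the rising edge, high nibble on the falling edge.
            ODDR #(.DDR_CLK_EDGE("SAME_EDGE")) u_txd (
                .Q(rgmii_txd[i]), .C(clk125), .CE(1'b1),
                .D1(gmii_txd[i]), .D2(gmii_txd[i+4]), .R(1'b0), .S(1'b0));
        end
    endgenerate

    // TX_CTL carries TX_EN on the rising edge and TX_EN xor TX_ER on the falling edge.
    ODDR #(.DDR_CLK_EDGE("SAME_EDGE")) u_ctl (
        .Q(rgmii_tx_ctl), .C(clk125), .CE(1'b1),
        .D1(gmii_tx_en), .D2(gmii_tx_en ^ gmii_tx_er), .R(1'b0), .S(1'b0));

    // Forward the clock through an ODDR on the shifted phase; never drive
    // an output pad directly from a clock net.
    ODDR #(.DDR_CLK_EDGE("SAME_EDGE")) u_txc (
        .Q(rgmii_txc), .C(clk125_90), .CE(1'b1),
        .D1(1'b1), .D2(1'b0), .R(1'b0), .S(1'b0));
endmodule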

I'm not trying to sell you anything; just pointing out that starting with a complicated hardware interface is more difficult and requires more preparation.

Thinking that you understand what your design is doing by looking at the ILA is part of your problem. I don't think that you understand the terms that I've been using, like RGMII, GMII, DDR and SDR. Start there. Logic is almost never DDR, it's too hard to conceptualize and meet timing. The ILA doesn't support DDR clocking.

Since you need to start off with RGMII, things would be simpler just doing 1 GbE, as that's what the Genesys2 PHY wants to do out of reset; it does all of the things necessary for 1 GbE without any register setup changes.

Your statement that you don't see a need for RGMII-GMII conversion makes me think you aren't ready to tackle this project yet.

Look over this: https://forum.digilent.com/topic/16802-ethernet-phy-test-tool/

It has source for the Nexys Video Ethernet PHY, which is also RGMII. This might help, but I'm thinking that you need to do more preparation work before you're ready for that as well.

[edit] I've been using Ethernet PHYs on FPGA boards for as long as 1 GbE PHYs have been on such platforms. Whenever I do a new design for a new platform or device I use my ATLYS or GENESYS as a tester to verify that the PHY interfaces are working. It doesn't always go as smoothly as I anticipate. That's the story behind the project link that I just posted. I can run many hundreds of GB through 2 Ethernet PHY devices connected by a cable in a couple of hours at very near 125 MiB/s. It's the only test that I'm comfortable with before going on to use a new design.
Edited by zygot

It's been quite a while since I posted the Ethernet PHY test tool project. There's been no response, so I'd kind of forgotten about it. I downloaded it to refresh my memory and discovered a silly error with the receiver ILA. But this brings up a subject that I didn't mention before. The Ethernet PHY has 2 clock domains: one is the transmit reference clock and the other is the receiver clock. If you want to capture data and control states, you need 2 different ILA instances, one for transmit and one for receive. In ETH_DUT I have a receive-side ILA but use the wrong clock... it should be rxclk, not clk125 as shown. The fact that they are the "same" frequency doesn't change the fact that rxclk and clk125 are two different clock domains.
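
For instance, something like this (a sketch only; it assumes two ILA cores generated in Vivado with matching probe widths, and the names here are placeholders):

// One ILA per clock domain: never probe receive-side signals with the
// transmit clock. Probe widths must match the generated core configuration.
ila_tx u_ila_tx (
    .clk   (clk125),      // transmit-side GMII clock
    .probe0(gmii_txd),    // 8-bit SDR transmit data
    .probe1(gmii_tx_en)
);

ila_rx u_ila_rx (
    .clk   (rxclk),       // the PHY's receive clock, not clk125
    .probe0(gmii_rxd),
    .probe1(gmii_rx_dv)
);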

The project code posted for the Nexys Video DUT was modified from what I actually used to test the board because I didn't want to publish my UART source at the time. In fact all references to the UART should have been removed, as all they do is confuse the reader. The printout of an actual test session output is real.

For a number of reasons, a 1 GbE RGMII Ethernet PHY involves concepts complicated enough to put it beyond beginner HDL skill levels. DDR and clock domains are two of those concepts.

Edited by zygot

On 11/25/2022 at 7:09 PM, reddish said:

> Getting RGMII to work is a bit of a rite of passage: any of a dozen small mistakes can mean you don't see anything. […]

Sorry for the late response. Thank you very much for the help! I found the first mistake, at least: it was the inter-frame gap. I made it longer, and now I can at least see the example frame. But now I have another problem. I used the frame from this example: https://www.fpga4fun.com/10BASE-T2.html, but when I try to change the payload inside it, I again don't see anything. All the checksums are correct (the FCS octets at the end, the IP header checksum, and the UDP checksum). The funny thing is that I see the LED blinking, but Wireshark is silent. So now I am confused again and don't know what to do or where to find my next mistake.

rgmii.png



Hi @Zhargal

The IP header and UDP checksums will not prevent the frames from reaching Wireshark when they are incorrect; if that happens, Wireshark will just show the packet with a "bad checksum" warning. In fact, the UDP checksum is optional (at least in IPv4); you can put the value 0x0000 there, which means "no checksum". That can make life a lot easier, since you don't need to make a checksum-calculating pass over the data.
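
The IP header checksum, which is not optional, is cheap to compute in logic. A minimal sketch, assuming a 20-byte header without options held MSB-first in a flat vector (the module name is only an example):

module ipv4_hdr_cksum (
    input  wire [159:0] header,   // 20-byte IPv4 header, byte 0 in bits [159:152]
    output wire [15:0]  cksum     // value to insert into the checksum field
);
    function [15:0] csum(input [159:0] h);
        reg [31:0] sum;
        integer i;
        begin
            sum = 0;
            for (i = 0; i < 10; i = i + 1)
                if (i != 5)                   // word 5 is the checksum field itself
                    sum = sum + h[159 - 16*i -: 16];
            sum = sum[15:0] + sum[31:16];     // fold the carries back in
            sum = sum[15:0] + sum[31:16];     // a second fold catches the last carry
            csum = ~sum[15:0];
        end
    endfunction

    assign cksum = csum(header);
endmodule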

The FCS is another matter. If that isn't correct, the frame will not reach Wireshark at all. It could be that the frame gets discarded by an intermediary switch (do you have any?), or by the receiving OS. The node that rejects such frames will sometimes offer a way to figure out whether a lot of "bad FCS" packets are being discarded, by keeping a counter that increments each time it happens. Depending on your precise network connections, that may be a good place to check. On modern hardware and with decent cables, true FCS errors happen very, very rarely (less than once every 1e8 packets, for sure). So if you see a counter increasing somewhere along the way, that's your culprit.

Think long and hard about whether you are really only changing a few payload byte values (and the end-of-frame FCS to match). If that truly makes the difference between not seeing packets and seeing packets (so there are no other changes to the packets, like header fields or the packet length), the only explanation I see is that there must be something wrong with the FCS.

 

 


The FCS is unique for each packet and is computed from the contents of the packet. This value needs to be presented in a format that has the same "endianness" as the CPU on either end of the cable. This can be a problem when converting a std_logic_vector into a multi-byte number that's machine-readable on a computer.

Edited by zygot


@zygot

> This value needs to be presented in a format that has the same "endianness" as the CPU on either end of the cable.

The FCS octet order is defined by the Ethernet standard. It is independent of the endianness of the computers on either side of the cable. It obviously has to be, since otherwise it would be hard or impossible for computers of unknown endianness to talk to each other.

The least-significant byte of the CRC goes first; the most-significant byte goes last. Thus, with regard to the FCS, Ethernet is little-endian.
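
In logic this convention falls out naturally from the reflected bit-serial implementation. A minimal sketch (generic textbook code, not the CRC_gen.v attached later in this thread):

module crc32_serial (
    input  wire        clk,
    input  wire        init,    // assert before the first (destination MAC) octet
    input  wire        bit_en,  // one data bit per cycle, LSB of each octet first
    input  wire        d,       // the data bit, in Ethernet wire order
    output wire [31:0] fcs      // after the last data bit: byte 0 = fcs[7:0]
);
    localparam [31:0] POLY = 32'hEDB88320;   // reflected form of 0x04C11DB7

    reg [31:0] crc = 32'hFFFFFFFF;

    always @(posedge clk) begin
        if (init)
            crc <= 32'hFFFFFFFF;             // note: all ones, not zero
        else if (bit_en)
            crc <= (crc >> 1) ^ ((d ^ crc[0]) ? POLY : 32'b0);
    end

    assign fcs = ~crc;   // final inversion; send fcs[7:0] first, fcs[31:24] last
endmodule

The FCS octets then go out LSB-first like every other octet on the wire, so no special bit reversal is needed at transmit time.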

 

Edited by reddish

23 hours ago, reddish said:

> The IP header and UDP checksums will not prevent the frames from reaching Wireshark when they are incorrect […]

Hi @reddish and @zygot!

The problem with the FCS would make sense if I couldn't see absolutely anything, or if I had put in the fixed value from the example. But the FCS is recalculated every time over all the bytes (except the preamble at the beginning, of course), and I still don't know what the problem is. And no, I don't have any intermediate switch. It looks like this is the end for me, because it makes absolutely no sense to me and there is no light at the end of the tunnel.



Hi @Zhargal

If you saw packets earlier with the fixed packet that you got from the website, you are very close to a working system. Don't throw in the towel just yet. One of the things you really need when doing FPGA work is grit and a penchant for debugging.

You need to get back to the condition where you see incoming packets in Wireshark and make small, incremental changes. First, without making any other changes to your code, replace the fixed, pre-calculated CRC bytes given by the website with the CRC calculated in the FPGA. If that still works, your CRC calculation and ordering are good. If not, either the calculation is bad or the order in which you append its octets to your packet is wrong. Also, depending on your CRC implementation, you may need to initialize the CRC register with 0xffffffff rather than zero at the start of the calculation, and you may need to invert the bits that you append to your packet. It depends on the details of your particular implementation.

If the CRC calculation is good (packets come through with an FPGA-calculated CRC), replace a single payload byte and verify that the packet still comes through.

Is your receiving PC a Linux system or a Windows system? On the former, it is easier to see whether the "bad CRC" condition happens.

 


On 12/14/2022 at 7:26 PM, reddish said:

> If you saw packets earlier with the fixed packet that you got from the website, you are very close to a working system. […]

Hi @reddish and @zygot!

As I've already said, I don't use a fixed FCS value; I am using the module attached to this message. In fcs1.png you can see the end of the frame for this sequence:

ffffffffffff00123456789008004500002eb3fe000080110540c0a8002cc0a8000404000400001a2de8000102030405060708090a0b0c0d0e0f1011

The result from https://crccalc.com/ is the same, so I consider this the right result. Now I want to replace the third and fourth payload bytes (0x02 and 0x03) with 0xffff, giving the sequence:

ffffffffffff00123456789008004500002eb3fe000080110540c0a8002cc0a8000404000400001a2de80001ffff0405060708090a0b0c0d0e0f1011

The result can be seen in fcs2.png, and it matches what https://crccalc.com/ says it should be, but in this case I don't see anything in Wireshark. The result is captured from simulation, but it's the same in the ILA. This makes absolutely no sense to me; I just don't know why it's not working.

PS: I am using Windows as the operating system.

fcs1.png

fcs2.png

CRC_gen.v

Edited by Zhargal


In your last ILA picture:

  • Where did the differential r_clk_in come from?
  • What clock is connected to your ILA?
  • On your toplevel module, how are you driving phy_txc_gtxclk?
  • Are you trying to transmit and receive 4-bit data?

If you don't know what you are doing, pictures can be deceptive in any tool, even the ILA, even a logic simulator.


13 minutes ago, zygot said:

> In your last ILA picture: […]

  • r_clk_in is a differential 200 MHz clock from the oscillator on the board.
  • For the ILA I generate a 500 MHz clock to be able to catch the 2 ns gap between TX_CLK and TX[3:0].
  • w_TX_clk is connected to phy_txc_gtxclk. It's 125 MHz, as it should be, and shifted 2 ns forward.
  • Only transmit; receiving is working well.


When you use an Ethernet PHY in an FPGA design it adds two new clock domains. For the Genesys2 they are phy_rxclk and phy_txc_gtxclk (plus phy_txc, if you are using it). Neither of these is related to the 200 MHz external clock module on the Genesys2. Your assumption that oversampling the phy_txd signals lets you look at an ILA picture and make any sort of assessment about what is going on at the PHY is a poor one. It's easy to misinterpret simulation results from how they are rendered as well. Pictures lie, unless you understand what they represent. As I previously mentioned, converting the DDR data into 8-bit SDR data and using an ILA to capture the SDR data with SDR versions of the correct clocks would be more informative. It's also more compatible with whatever is connected to your Ethernet PHY in your design, because that logic uses an SDR clock. You can't just use signals generated in one clock domain in a different clock domain without proper design techniques.

Previously, I pointed you to an example of an RGMII PHY interface that works. Perhaps, instead of pursuing your current course, it might be time to see if you can get something working that uses the DDR capability of the Series 7 IO, and then try out your own alternative theory. You might be a bit less frustrated.

Show me the actual Verilog that connects your design to the PHY pins.

[edit] The Ethernet PHY is an 8-bit interface. For 1 Gbps speeds, RGMII is a 4-bit DDR interface. It appears that you want to pretend that it's a 4-bit 250 MHz SDR interface. I would recommend against this, as it will cause you all kinds of problems. Series 7 devices have DDR and LVDS capabilities built into the IOB. If you are connecting your FPGA design to a DDR interface, then you should use this facility. Digilent has always done a good job making the Ethernet PHYs on their boards easy to use (well, except for the NetFPGA_1G_CML). The Kintex is fast enough that you might get something half-working while ignoring the DDR interface; if you try this with slower programmable devices you will get into trouble, as 250 MHz is close to their internal clocking limits. Also, timing closure on any application using your Ethernet interface will be a big problem. Working with 8-bit data in your design will avoid a lot of confusion.
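
As an illustrative sketch of using that IOB facility on the receive side (names are examples; IDELAY and skew handling are deliberately left out), converting the 4-bit DDR RGMII input to 8-bit SDR GMII looks roughly like this:

module rgmii_rx_iddr (
    input  wire       rxclk,        // from the PHY, via a BUFG or BUFR
    input  wire [3:0] rgmii_rxd,
    input  wire       rgmii_rx_ctl,
    output wire [7:0] gmii_rxd,     // one octet per rxclk cycle
    output wire       gmii_rx_dv,
    output wire       gmii_rx_er
);
    wire dv_rise, dverr_fall;

    genvar i;
    generate
        for (i = 0; i < 4; i = i + 1) begin : g_rxd
            // SAME_EDGE_PIPELINED presents both nibbles of an octet on the
            // same rising edge of rxclk.
            IDDR #(.DDR_CLK_EDGE("SAME_EDGE_PIPELINED")) u_rxd (
                .Q1(gmii_rxd[i]),     // rising-edge bit  -> low nibble
                .Q2(gmii_rxd[i+4]),   // falling-edge bit -> high nibble
                .C(rxclk), .CE(1'b1), .D(rgmii_rxd[i]), .R(1'b0), .S(1'b0));
        end
    endgenerate

    IDDR #(.DDR_CLK_EDGE("SAME_EDGE_PIPELINED")) u_ctl (
        .Q1(dv_rise), .Q2(dverr_fall),
        .C(rxclk), .CE(1'b1), .D(rgmii_rx_ctl), .R(1'b0), .S(1'b0));

    assign gmii_rx_dv = dv_rise;
    assign gmii_rx_er = dv_rise ^ dverr_fall;   // falling edge of RX_CTL is DV xor ER
endmodule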

Edited by zygot

On 12/16/2022 at 3:19 PM, zygot said:

> When you use an Ethernet PHY in an FPGA design it adds two new clock domains. […]

Hello, @zygot! You are probably right about the ILA; I overestimated what it can do at higher frequencies. I am sending you the Verilog code that I was trying to use. First udp_to_rgmii_v1.sv, which works only with a certain payload (sorry for the lack of comments). Then I decided to rewrite everything and make a kind of converter from a byte to a 4-bit bus (rgmii.sv). It worked well with any random payload. But then I got another problem. When I add the rest of the code, which reads data from the ADC (running at 40 MHz) and synchronizes it through fifo_generator (as my friend recommended for passing data between 2 different clock domains), I see that the RGMII output is completely wrong compared to simulation. For example, it transfers only half of the preamble at the beginning (the data from each of the other states is also only half sent). The rest of the code I can't provide; it is private information. You are probably also right about 250 MHz, but I don't know how to design this using only 125 MHz and still provide the 2 ns time shift as it should be.

This is probably my last message and you can close this topic, because I need to find another solution to this problem. Thanks for all the help you've given me, and happy holidays!

udp_to_rgmii_v1.sv rgmii.sv


Your last post makes me a bit worried about what your application might be.

You still might be missing the essential point of my previous advice. 4-bit DDR clocked at 125 MHz is not equivalent to 8-bit SDR at 250 MHz. I don't see anything in your code that will make the synthesis tool infer DDR in the IOBs. Again, sometimes it's better to get a concept working using IP provided by the FPGA vendor before trying out your own unique approach; fewer problems to resolve simultaneously.

Your friend is correct about using a dual-clock FIFO to pass data across clock domains. FIFOs in Vivado have their own peculiar behavior to consider, depending on how you use them and what version of the tools you are using (Vivado changed how FIFOs work quite a few versions ago). If you are doing that correctly, it shouldn't result in the behavior you describe. It certainly shouldn't create problems with the preamble or header parts of your packets.
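
For reference, this is the concept such a FIFO implements internally: a minimal textbook-style dual-clock FIFO with Gray-coded pointers (a sketch only; in a real design you would normally just configure the Xilinx IP rather than hand-roll this):

module async_fifo #(
    parameter DW = 8,   // data width
    parameter AW = 4    // 2**AW entries
) (
    input  wire          wclk,
    input  wire          wrst,
    input  wire          wr_en,
    input  wire [DW-1:0] din,
    output wire          full,
    input  wire          rclk,
    input  wire          rrst,
    input  wire          rd_en,
    output reg  [DW-1:0] dout,
    output wire          empty
);
    reg [DW-1:0] mem [0:(1<<AW)-1];

    // One extra pointer bit disambiguates full from empty.
    reg [AW:0] wptr_bin = 0, wptr_gray = 0;
    reg [AW:0] rptr_bin = 0, rptr_gray = 0;
    reg [AW:0] rq1 = 0, rq2 = 0;   // read pointer synchronized into wclk
    reg [AW:0] wq1 = 0, wq2 = 0;   // write pointer synchronized into rclk

    // Write side: binary pointer addresses the RAM, Gray pointer crosses domains.
    always @(posedge wclk) begin
        if (wrst) begin
            wptr_bin <= 0; wptr_gray <= 0;
        end else if (wr_en && !full) begin
            mem[wptr_bin[AW-1:0]] <= din;
            wptr_bin  <= wptr_bin + 1'b1;
            wptr_gray <= (wptr_bin + 1'b1) ^ ((wptr_bin + 1'b1) >> 1);
        end
        {rq2, rq1} <= {rq1, rptr_gray};   // two-flop synchronizer
    end

    // Read side: dout appears one cycle after rd_en (standard, not fall-through).
    always @(posedge rclk) begin
        if (rrst) begin
            rptr_bin <= 0; rptr_gray <= 0;
        end else if (rd_en && !empty) begin
            dout      <= mem[rptr_bin[AW-1:0]];
            rptr_bin  <= rptr_bin + 1'b1;
            rptr_gray <= (rptr_bin + 1'b1) ^ ((rptr_bin + 1'b1) >> 1);
        end
        {wq2, wq1} <= {wq1, wptr_gray};   // two-flop synchronizer
    end

    // Full: write Gray pointer equals the synchronized read pointer with the
    // top two bits inverted. Empty: the Gray pointers match exactly.
    assign full  = (wptr_gray == {~rq2[AW:AW-1], rq2[AW-2:0]});
    assign empty = (rptr_gray == wq2);
endmodule

Note that dout here is valid one cycle after rd_en; a first-word-fall-through FIFO behaves differently, and a state machine that assumes the wrong read latency will mangle the start of every frame it sends.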

Sometimes, when a logic design meets hardware, this FPGA development stuff can be difficult, right? Trying to implement complicated things beyond one's development skill level is frustrating. It's the same for me as it is for you. Trying to design something that you aren't prepared to do rarely results in success.

I should note that the design at the link that I provided doesn't use a MAC or a processor, which might be confusing to you. This should be irrelevant, as any design that doesn't get the basic communication between the PHY and the logic design right will be a failure.

Edited by zygot
