08-30-2017 01:51 PM
I encountered this error message after I added an ILA to probe some signals. Prior to adding the ILA, without any code or XDC changes, I was able to successfully build and generate a bit file. All help appreciated. This is a design with PCIe, and I am driving my entire design with the PCIe user clock from the IP core.
[DRC 23-20] Rule violation (REQP-1847) IBUFDS_GTE3_O_may_only_drive_GTxE3 - The IBUFDS_GTE3 user_interface_inst/clocks_interface_inst/clocks_tope_intf_inst/clock_pin_gen_ku.refclk_bb0_loop_ku.refclk_bb0_inst O pin may only be connected to the GTREFCLK pin of a GTHE3_COMMON, GTHE3_CHANNEL, GTYE3_COMMON, or GTYE3_CHANNEL component. The IBUFDS_GTE3 O pin cannot drive .
08-30-2017 02:22 PM
As the error message states, that clock cannot be used by the ILA. There is a receive or transmit GT clock for clocking data into or out of the GT that may be used instead. The reference clock never appears in the fabric (it exists only inside the GT).
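For reference, the usual way to get a fabric-usable copy of a GT reference clock on UltraScale is to take the ODIV2 output of the IBUFDS_GTE3 through a BUFG_GT, while the O output goes only to GT*E3 GTREFCLK pins. A hedged sketch (signal names are illustrative; check the instantiation templates in the UltraScale libraries guide for the exact ports and attributes of your device/tool version):

```verilog
wire refclk_gt;      // drives GT*E3 GTREFCLK pins only
wire refclk_odiv2;
wire refclk_fabric;  // legal for fabric loads such as an ILA

IBUFDS_GTE3 #(
  .REFCLK_HROW_CK_SEL (2'b00)   // ODIV2 = O (no divide), per UG576
) refclk_ibuf (
  .I     (pcie_refclk_p),
  .IB    (pcie_refclk_n),
  .CEB   (1'b0),
  .O     (refclk_gt),           // never route this into the fabric
  .ODIV2 (refclk_odiv2)
);

BUFG_GT refclk_bufg (
  .I       (refclk_odiv2),
  .O       (refclk_fabric),     // fabric copy of the reference clock
  .CE      (1'b1),
  .CEMASK  (1'b0),
  .CLR     (1'b0),
  .CLRMASK (1'b0),
  .DIV     (3'b000)             // divide-by-1
);
```

For PCIe designs, though, the IP's user clock (as the original poster is using) is normally the right choice for probing user logic; the fabric refclk copy is only needed if you must observe something in that domain.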
08-30-2017 02:41 PM
Glad to see familiar names from back when comp.arch.fpga was a thing. I understand what you are saying about the reference clock; however, I don't think I am selecting the wrong clock for chipscope (screen grab attached below).
If I am correct, the tool automatically picked the PCIe "user clock" for me. But somehow, somewhere, the tool thinks the reference clock was used. And maybe the report is correct about that too, but if so, it was not of my own choosing. To give a bit more background: I can build the design from beginning to end, resulting in a good bit file that meets timing. Then I wanted to chipscope some signals, so I went back into the code, added mark_debug to the signals of interest, re-synthesized, launched [Set Up Debug] via the GUI, added my probes, and then went through the rest of the implementation stages. I can get past implementation with a strategy that meets timing, but when I try to run bitgen I get that error.
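For anyone following the same flow: marking signals for the Set Up Debug wizard can be done either with an HDL attribute or in the XDC. A minimal sketch (signal names are hypothetical, not from this design):

```verilog
// HDL attribute: keeps the net through synthesis and lists it
// as a debug candidate in the Set Up Debug wizard.
(* mark_debug = "true" *) wire [63:0] m_axis_tdata;
(* mark_debug = "true" *) wire        m_axis_tvalid;

// XDC alternative (applied after synthesis), e.g.:
//   set_property MARK_DEBUG true [get_nets {m_axis_tdata[*]}]
```

The wizard then asks for a sampling clock per probe; the DRC in question fires only later, at write_bitstream, which is why the flow appears clean until the final step.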
Is it possible that the automatic clock domain selection from the tool can be wrong?
08-31-2017 07:39 AM
Possible, but unlikely. Perhaps some of the signals you marked cannot be probed with the clock you selected? I would go back and change the signal list to see what happens (binary search: cut half out, see if the problem is still there, cut half again, and so on).
If you can identify the offending signal, that might give some insight into the problem. If not, then we may think of it as a bug. In any event, I look forward to your results here.
09-07-2017 09:40 AM
After trying out your suggestions, I found that I can chipscope anything on the write side of an AXI4-S Packet FIFO, but not on the read side. A bit more background: Vivado 2016.4 on CentOS 7.
From a high-level perspective, I have a packet generator that feeds an AXI4-S Packet FIFO (FIFO Generator 13.1), which feeds some DMA logic, which then drives the PCIe hard core on a Kintex UltraScale device. The PCIe is configured as x4, so the local user clock runs at 250 MHz. All the logic runs in the same clock domain. When I chipscope anything on the write side of the FIFO, I encounter no issues. But looking at any port signals on the read side of the FIFO causes the DRC error in my original post.
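One workaround worth trying when only certain nets trip this DRC: re-register the offending read-side signals in the user-clock domain and probe the registered copies, so the ILA connects to fabric flip-flop outputs rather than nets whose clock the tool traces back to the GT reference. A hedged sketch (signal and clock names are illustrative, not from this design):

```verilog
// Register the FIFO read-side outputs one cycle in the 250 MHz
// user_clk domain, then mark only the registered copies for debug.
(* mark_debug = "true" *) reg [63:0] rd_tdata_dbg;
(* mark_debug = "true" *) reg        rd_tvalid_dbg;
(* mark_debug = "true" *) reg        rd_tlast_dbg;

always @(posedge user_clk) begin
  rd_tdata_dbg  <= fifo_m_axis_tdata;
  rd_tvalid_dbg <= fifo_m_axis_tvalid;
  rd_tlast_dbg  <= fifo_m_axis_tlast;
end
```

This costs one cycle of skew in the waveform but keeps the probes entirely in the fabric clock domain, which can sidestep a mis-inferred clock association on the original nets.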