As the subject says: our designs typically have 3-5 clock domains, and using a ChipScope instance for each one starts to be felt in either BRAM or logic usage (design dependent, but mostly BRAM).
Each of our designs almost always has a 200 MHz and a 100 MHz clock for the AXI / AXI-Lite business, and higher-tier designs also run around 291 MHz to support the high-speed data path. On top of that, there are some mandatory interface clocks, which can be as slow as 56 MHz or as fast as 175 MHz (mostly ISERDES/OSERDES clocking).
Now, our current instrumentation IP has a separate interface per clock domain, followed by a MUX and optional pipeline stages, and finally a ChipScope ICON/ILA pair per instance (we are still using pre-generated NGC netlists, even though we are on Vivado 2015.2).
I can think of a few more or less ugly alternatives to collapse all the ChipScope cores into one, with or without clock muxing, with or without creating a gapped clock based on some FIFO's fill level, but I wonder whether any of you has ever implemented something like this. Are there any gotchas in feeding a gapped clock into the ILA (still a clean clock, i.e. full pulses, though the spacing between them can vary if the gating comes from a FF)?
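For reference, the gapped-clock variant I have in mind is roughly the following sketch (signal names like clk_300 / fifo_empty / ila_clk are made up; BUFGCE is the 7-series glitch-free gated clock buffer, so the output only ever carries complete pulses):

```verilog
// Hypothetical sketch: enable the capture clock only while the
// crossing FIFO has data, so one ILA can time-share several domains.
reg ce_ff = 1'b0;

always @(posedge clk_300)
    ce_ff <= ~fifo_empty;   // gate level driven from a plain FF

// BUFGCE latches CE while the clock is low, so ila_clk is a
// glitch-free gapped clock: pulses are skipped, never truncated.
BUFGCE u_ila_clk (
    .I  (clk_300),
    .CE (ce_ff),
    .O  (ila_clk)
);
```

The FF-on-CE part is what makes me ask about pulse spacing: the ILA would see bursts of clean 300 MHz pulses separated by arbitrary gaps.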