Everything posted by zygot

  1. Sorry, I misunderstood the reason for mentioning the stuff about the pulses... guess I misunderstood your original question as well. Just make sure that your stepper drive signals are compatible with 3.3V logic using whatever IOSTANDARD you select. I suggest that you select a SLOW slew rate on your outputs if you are connecting a cable or wires between the stepper drive and your FPGA GPIO header.
  2. Xilinx FPGA devices ( except for ones with a hard processor like ZYNQ ) don't have an Ethernet port, but perhaps your FPGA board has an Ethernet PHY. You certainly can use an FPGA board with an Ethernet PHY to communicate with a PC. The PHY interface to the FPGA can be complicated and requires advanced FPGA skills. Regardless, you will need to write some software on the PC side and possibly on the FPGA development side, depending on your design flow. Most people would take what appears to be the easy path and use a MicroBlaze centric design ( I didn't ask but assume that you don't have a ZYNQ based board ). I can't help you with MicroBlaze. If you do have a ZYNQ FPGA board then the Ethernet MAC is built into the FPGA PS complex and the board's PHY is typically wired directly to it. For ZYNQ boards there's no logic design needed and you can try your luck with any of the OSs supported by the tools. The tools support the Xilinx standalone OS, FreeRTOS, and Linux. As far as I know, Linux development still isn't integrated into the Xilinx FPGA tools. So, you need to install Petalinux or, if you are brave, pick a flavor of your favorite Linux distribution and try doing development that way. Now that I've said all of that, I suspect that your question is a homework assignment...
  3. You can certainly do that with logic clocked at 100 MHz, 50 MHz, or even 12 MHz. Usually stepper motors involve quadrature signals, though this depends on the driver design ( the classic pattern is sketched below ). You don't want to try driving a stepper motor directly from FPGA IO pins. Somehow, I think that I'm still not understanding the comments about pulses with respect to your original question about external clocking.
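     Since 'quadrature' trips people up: the classic two-phase full-step drive is just two square waves 90 degrees apart, i.e. a 2-bit Gray code sequence. Here's a minimal sketch of the idea in C ( how the two bits actually reach the motor depends entirely on your driver, so treat the table as the point, not the I/O ):

         #include <stdio.h>

         /* Two-phase full-step sequence: phases A,B walk the 2-bit Gray code
            00 -> 01 -> 11 -> 10, i.e. two square waves 90 degrees apart.
            Traverse the table in reverse to reverse direction. */
         static const unsigned char phase_ab[4] = { 0x0, 0x1, 0x3, 0x2 };

         int main(void)
         {
             for (int step = 0; step < 8; step++) {
                 unsigned char ab = phase_ab[step % 4];
                 printf("step %d: A=%u B=%u\n", step, (ab >> 1) & 1u, ab & 1u);
             }
             return 0;
         }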
  4. Nothing about what I've written implies anything about your intelligence or mine, same for experience, same for skill, etc. etc. Frankly, if either of us has been abusive, it's been you. You are the only one slinging personal attacks. But you can relax because you won't be hearing from me in the future. I'm sure that you can figure this out on your own, but I do hope that you get another reply to your original question. Believe what you want to, but my only intention is to help in the best way that I can. I have no power to stop people from inferring things that aren't implied or explicitly written.
  5. Perhaps I don't understand what you are implying. Do you want to create a clock less than 12 MHz? BTW, the CMOD-A7 has a number of clock-capable GPIO pins, so really you can use any external clock frequency your heart desires by connecting a 3.3V IOSTANDARD compatible clock source to one of the appropriate pins. As for measuring a pulse width, the higher the clock rate that the logic runs at, the higher the resolution of the measurement that you can make. The same is true for generating pulses.
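     To put a rough number on the resolution point, assuming the measurement is just a free-running counter clocked at f_clk:

         % one count = one clock period, so the quantization step is
         \Delta t = \frac{1}{f_{\text{clk}}}
         % e.g. at 100 MHz that's 10 ns per count; a pulse of width T is
         % measured as roughly
         N \approx T \cdot f_{\text{clk}} \ \text{counts}, \quad \pm 1 \ \text{count of uncertainty}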
  6. I'm really hoping that the schematics presented to customers aren't documentation recreations of the actual schematic. But the schematic clearly shows the memory device A5 pin as unconnected to anything. If the schematic that I'm looking at is the one used to lay out the board then we [ the users ] have a problem. I don't understand how that wouldn't alter the device functionality. Suspiciously, the open source repository has nothing supporting the QDRII+ device, so this isn't encouraging. I've read the device documentation and the Xilinx IP documentation, and nothing in any of those documents suggests doing something like this, even for the MIG core generator.
  7. If you are going to spend any significant time using the Xilinx tools, especially the SDK or Vitis, it's worth the effort to learn how to use the GUI interface to dig down into the support libraries and code. While not always up to date or helpful for some issues, there's plenty of assistance and documentation on how to use all of the standalone libraries. If you don't want to use the standalone libraries directly you can write programs just as if it were any old micro-controller. The IDE is pretty good at locating source code and getting information on how to use the libraries. But really? You don't get the ironic humor in your posts?
  8. Well, I was really just reading your code snippets and missed the fact that you were using MicroBlaze. I've seen issues with ZYNQ and some AXI logic causing bus faults. The ARM cores are really fast... but that doesn't mean that a MicroBlaze couldn't have a similar problem. Again, I don't have any recent ( like 20 years ) experience with MicroBlaze. As for caching, you might be reading stale values in the cache instead of where you think the data is coming from. You can disable/enable it and flush the cache with the ARM cores; I don't know how that works on MicroBlaze, but I assume that it's similar. But now that we're on the same page, the post place and route timing score from Vivado might be of interest. If it's 0 then more than likely that's because there weren't any timing constraints to work with. I assume that, for board design flow projects, Vivado inserts its own timing constraints into your project for you, so I wouldn't expect that to be a problem. If you aren't using an MMCM or PLL for clocking then you at least need 1 timing constraint for your external clock. I'd think that the MicroBlaze IP would use some sort of clock management and take care of that for you... I'm not a MicroBlaze guy. If Vivado is reporting failing timing paths then you need to embark on some sort of timing closure process. If Vivado is handling everything for you I don't know how that would work. Vivado gets testy if it thinks that you are over-riding its constraints, with results that aren't always obvious to understand or deal with. So, timing and AXI are the two things that I'd worry about. You seem to have shown that your MicroBlaze isn't causing a bus fault. I don't see anything wrong with the code itself.
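     If you want to rule the cache in or out explicitly, the standalone BSP exposes cache control in xil_cache.h for both the ARM cores and MicroBlaze. A minimal sketch, where the address is a hypothetical placeholder for wherever your data actually lives:

         #include "xil_cache.h"
         #include "xil_io.h"
         #include "xil_printf.h"

         #define DATA_ADDR 0x44A00000u   /* hypothetical address; use your own */

         void cache_check(void)
         {
             /* Option 1: take the data cache out of the picture entirely */
             Xil_DCacheDisable();
             u32 v = Xil_In32(DATA_ADDR);   /* reads now go out on the bus */

             /* Option 2: keep the cache but discard possibly stale lines
                covering the region before reading it */
             Xil_DCacheEnable();
             Xil_DCacheInvalidateRange(DATA_ADDR, 4);
             v = Xil_In32(DATA_ADDR);

             xil_printf("read 0x%08x\n\r", v);
         }

     If disabling the cache makes the failure go away, you've found your suspect.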
  9. The CMODs are the wrong platform for designs that need input from USB keyboards and mice. Digilent does sell a number of FPGA boards with USB HID or OTG controllers. For USB Host applications you need to write software to connect to the endpoint devices. A possible alternative for a CMOD might be to use a micro-controller and use SPI or some other interface to provide the functionality that you want. There's always the UART. I suggest using a 3.3V TTL USB UART cable or breakout board.
  10. This is the Xilinx standalone OS we're talking about. Even print statements are discouraged. But the SDK ( I haven't used Vitis ) has a number of example applications for just about any interface attached to your HW system. Just read through the code carefully to avoid nasty surprises, like having to have a hardware loopback connection. If you say it's easy, that's fine. Call me in an hour and let me know how you did it. p.s. What should we think of someone who asks for help, and oh by the way if this is complicated can I have a step by step tutorial, because my time is too important to find the answers for myself? And then the first person willing to give up his time to reply gets a 'how hard can it be?' retort? You wanna provide the punchline or should I? Last thought: I've done simple menu driven programs for Windows and Linux, and even when the std input device is a keyboard, trying to detect user input in a non-blocking way isn't straightforward ( on Linux at least ). But we're talking about a UART. For an embedded ucontroller you can always poll the UART registers for an incoming character, as sketched below.
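     For the ZYNQ PS UART under the standalone OS that polling looks something like this ( a sketch using the same low-level calls as the snippet in the next post; it assumes UART0 ):

         #include "xparameters.h"   /* XPAR_PS7_UART_0_BASEADDR */
         #include "xuartps_hw.h"    /* XUartPs_IsReceiveData, XUartPs_ReadReg */

         /* Returns the next received character, or -1 if the RX FIFO is
            empty. Never blocks, so the main loop keeps running. */
         static int uart_poll_char(void)
         {
             if (XUartPs_IsReceiveData(XPAR_PS7_UART_0_BASEADDR)) {
                 return (int)XUartPs_ReadReg(XPAR_PS7_UART_0_BASEADDR,
                                             XUARTPS_FIFO_OFFSET);
             }
             return -1;
         }

     The main loop calls uart_poll_char() once per pass and only acts when it returns something other than -1; everything else keeps running.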
  11. Writing messages to a standard output using xil_printf is quite a bit different than reading user input. For one thing, you have to parse the user input to make sense of it. It's the same with PC applications programming: it's a big deal. What you probably want is some sort of menu loop to accept user commands. So, I seriously doubt that 'tweaking' the code that you have is going to work. Here are a few snippets of what I'm talking about from an old project for the Eclypse-Z7:

     /* Snippets from an Eclypse-Z7 test application. They need the usual
        standalone BSP headers ( xil_printf.h, xuartps_hw.h, xparameters.h,
        sleep.h ) plus <ctype.h> and <stdlib.h> for toupper()/strtoul().
        cfgbuff, datain, RecvBuffer, CmdValues, pcontrol, ptrBRAM0, mode,
        and the loop variables are globals declared elsewhere. */

     while (1) {
         ucmd = 0;
         ucmd = GetUserCmd();
         xil_printf("Command(%c) Mode(%d) Num(%d)\n\r", ucmd, mode, CmdValues);
         switch (ucmd) {
             case 'C' :
                 *(pcontrol) = 0x00000380;   // Quiescent mode
                 usleep(1);
                 ConfigRegs(&cfgbuff[1], (u32) 6);
                 mode = cfgbuff[0];
                 DisplayCtrlRegs();
                 if (CmdValues > 7) {
                     // Copy Message to BRAM0
                     j = CmdValues - 7;
                     if (j > 511) j = 511;
                     for (i = 0; i < j; i++) {
                         *(ptrBRAM0 + i) = cfgbuff[i+7];
                     }
                     // Initiate HDL DMA from BRAM0 to SIG_GEN freq_lut BRAM
                     LoadMsg();
                 }
                 break;
             case 'T' :
                 mode = cfgbuff[0];
                 ConfigRegs(&cfgbuff[1], (u32) 6);
                 DisplayCtrlRegs();
                 break;
             case 'R' :
                 DisplayCtrlRegs();
                 break;
             case 'B' :
                 x = cfgbuff[0];
                 DisplayBRAM(x);
                 break;
             default :
                 xil_printf("Bad Command...\n\r");
                 break;
         }
         d = 0x00000380 | mode;
         *(pcontrol) = d;
         switch (mode) {
             case 2  : xil_printf("Running User Tone Demo\n\r");   break;
             case 4  : xil_printf("Running User Chirp Demo\n\r");  break;
             case 6  : xil_printf("Running User FSK Demo\n\r");    break;
             case 8  : xil_printf("Running User PSK Demo\n\r");    break;
             case 10 : xil_printf("Running User 4-QPSK Demo\n\r"); break;
             case 12 : xil_printf("Running User AM Demo\n\r");     break;
             default : xil_printf("Error...\n\r");
         }
     }

     char GetUserCmd()
     {
         long d;
         unsigned int ReceivedCount;
         int i, j;
         char userInput;
         char cmd;
         int cmdend;

         CmdValues = 0;
         ReceivedCount = 0;
         userInput = 0;
         cmd = 0;
         cmdend = 0;
         i = 0;
         while (cmdend != 1) {
             /* Wait for data on UART */
             while (!XUartPs_IsReceiveData(XPAR_PS7_UART_0_BASEADDR)) {}
             /* Read the next character from the UART receive FIFO */
             userInput = XUartPs_ReadReg(XPAR_PS7_UART_0_BASEADDR,
                                         XUARTPS_FIFO_OFFSET);
             if (userInput == ';') {
                 // ';' terminates the command line
                 CmdValues = ReceivedCount;
                 cmdend = 1;
             }
             else if ((cmd == 0) && (i == 0)) {
                 // first char has to be a valid command
                 cmd = toupper(userInput);
             }
             else if ((cmd != 0) && (i > 0) && (userInput == ' ')) {
                 // a space ends a numeric argument; convert and store it
                 d = strtoul(datain, NULL, 10);
                 cfgbuff[ReceivedCount] = d;
                 ReceivedCount += 1;
                 i = 0;
                 for (j = 0; j < sizeof(RecvBuffer); j++) {
                     RecvBuffer[j] = 0;
                 }
             }
             else if ((cmd != 0) && (userInput != ' ')) {
                 // accumulate the next space-delimited argument
                 *(datain + i) = userInput;
                 i += 1;
             }
         }
         return cmd;
     }

This might be helpful as a general idea. It's a bit rough, as it was part of a preliminary experiment to test out some ideas, but the general form is useful. Note that GetUserCmd() blocks execution of the program until the user enters a command via the UART. That is not always what anyone wants. If you want to get more daring you will be into strings and conversions, and who knows what...
  12. Well not really. I've spent the past 2 months mostly using ISE 14.7 on WIN10. Chipscope doesn't work on Win10, ISIM is wonky and if you catch it in a good mood you might get some use out of it, but eventually it will frustrate you to the point of giving up. You don't need IMPACT at all. Just use the excellent Digilent Adept Utility for Windows to program your board. I've programmed the ATLYS (Spartan 6) and Genesys (Virtex 5) from Win10 using the Adept Utility more than a few times, using the HS3 or HS1 since the USB programming connector broke off a while ago. &^$*!^*( SMT USB micro connector! You might have better luck using ISE on Linux. As I recall, I was using ISE on Centos6 before support for that OS was dropped. You can go crazy trying to match Xilinx tool version to the exact OS version that they 'officially' support. I suspect that the OS version that any Vivado release says is supported really should be interpreted as "OS version xxxx was the one we tested this release on and we think that you should be able to install it and use it, but we don't want to hear from you if you install the tools on something else...". That's just an impression I've gotten from many many installs on 'something else'.
  13. Why is A5 of the CY7C2263KV18-450BZXI SRAM disconnected and A[5:18] from the FPGA connected to the device A[6:19] ?
  14. No... schematic signal labels can be deceiving. Just because a power rail is called Vadj doesn't mean that the user can adjust it. I really am fond of the Mimas A7. Oddly, all of the GPIO on the board are connected to the same power supply output, which is 3.3V. As they spent a lot of time making well matched differential pairs in the PCB layout to the GPIO headers, I wondered why they would do this. The answer is that the vendor is happy to take orders on special builds where all of the GPIO are some other voltage like 2.5V, 1.8V, 1.5V etc. Fortunately, a user only has to replace one resistor to change the Vccio on the GPIO banks. It's not an easy change but I managed to do it and now have 2.5V differential capability on the board. I suggested to someone at Numato Labs that a small change allowing the user to select Vccio would be a nice change for the next board spin. I didn't get an encouraging response. If they made just a few alterations in the design the Mimas A7 could be a perfect platform. Sigh.... The biggest problem with the Mimas A7 might be finding one to purchase, as COVID is a global problem especially for product vendors. As the Arty only has 16 low speed ( < 10 MHz ) and 16 high speed IO, I'm guessing that your interface has fewer than 16 wires. One way to go would be to make a custom board with dual power supply level converters, or 0 delay digital switches. Be careful of level converters that have complicated descriptions of how to use them, as they tend to be low data rate and problematic for some applications.
  15. Hmmm. I tossed the two darts that came to mind... currently out of darts. I won't be able to let go of this until you report a resolution, so maybe I might add to the thread later. I don't use MicroBlaze so perhaps my ARM experience might not be all that helpful. If misery loves company, I will report that I've just fixed a nasty bug, that's been plaguing me for over a week, caused by a typo where my Linux application was stomping over the wrong register and causing what I was convinced was an HDL error... Centos7, PCIe, FPGA... lots of suspects when things don't work. Hate tripping over my own feet...
  16. What happens if you put a 'wait for 1 us' before each of the regx updates? Use something like usleep(), or you could just print the value of the updated register every time you update it; that should slow down the AXI access rate enough. Something like the sketch below. When you say the code fails, what exactly does this mean? Have you disabled data caching?
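     Here's what I have in mind ( a sketch; the register pointers and addresses are hypothetical stand-ins for your AXI registers ):

         #include "sleep.h"        /* usleep() from the standalone BSP */
         #include "xil_printf.h"
         #include "xil_types.h"

         /* hypothetical AXI register addresses; substitute your own */
         volatile u32 *reg0 = (volatile u32 *)0x44A00000;
         volatile u32 *reg1 = (volatile u32 *)0x44A00004;

         void update_regs(u32 a, u32 b)
         {
             *reg0 = a;
             usleep(1);                               /* throttle back-to-back AXI writes */
             xil_printf("reg0 = 0x%08x\n\r", *reg0);  /* read back; also adds delay */

             *reg1 = b;
             usleep(1);
             xil_printf("reg1 = 0x%08x\n\r", *reg1);
         }

     If the failure disappears with the delays in place, that points at the AXI access rate rather than the code.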
  17. The AD9648 has 2 speed grades as you noted from the datasheet, with 125 MHz Fs being the highest sample rate. There's no sampling at 200 MHz. The device has one clock input that can be up to 1 GHz. It has an internal clock divider to provide a suitable Fs from any input clock within the specified range. The SYNC signal allows for synchronizing the Fs sample clock across multiple AD9648 devices. It also allows for phase compensation. With the Eclypse-Z7 you have the potential for up to 4 channels of ADC with a synchronized Fs clock. The device has a lot of programmable options and the default low level SPI controller automatically configures the device for one of many operating states. The writeup for the ZMOD1410 design is pretty comprehensive. This pod doesn't provide for an external clock input, nor does the Eclypse-Z7, so users should stick with the clocking scheme provided for in the released interface IP. Since you can't use an external system clock source, there's no point in changing the clocking scheme that the pod was designed around. An exception might be if an Fs of 100 MHz is inappropriate for some reason and you need to select a slightly different sample rate. As far as the other modes are concerned, you need to understand the control register bit functionality. It might very well be useful to change the output code or use the test waveform function. If you want to work with data samples that are less than 100 MHz you can decimate and/or interpolate to get to a desired sample rate without changing the clocking scheme.
  18. Well the ILA idea is better but still might be misleading. Sometimes even very precise time measurements turn out not to be repeatable under all operating conditions. But the whole purpose of experiments is to test your assumptions, and perhaps expose concepts that we believe are factual but are really poor assumptions. There's nothing wrong with assumptions unless they aren't properly tested. One thing about DDR is that the controllers generally work on cache-lines of multiple bytes or words because of the high data rates and a desire to keep logic clocked at a reasonable rate. The implementation of external dynamic memory controllers as a whole seriously complicates any mathematical analysis of performance. This is doubly so when a processor is executing opcodes out of DDR, whether or not your soft-processor uses instruction and/or data cache. Perhaps a better test would be measuring large blocks of data transfer and working out an average rate in bytes or words per second. And perhaps not. It depends on what you are looking for. Sometimes what we are looking for isn't what we should be looking for, as far as getting answers to questions is concerned. Usually, particularly in the beginning of an educational journey, the initial questions need improvement. You correctly understand that the time to do a measurement, or get a timestamp, can become the predominant part of a measurement. So, go with that thought, understanding that there might be other factors that you haven't taken into account that might affect the quality of your measurements and conclusions about those measurements. Don't forget about latency, that is the time from when your processor requests data to when it gets it. The ILA approach is good at measuring clocks between signal states, but not necessarily at measuring latency, which might be as important or even more important than delay between signals. That is, what's happening on the logic side might not be as important as what's happening on the software (opcode) side. Of course, if you can break into your processor you can tweak the measurements to be more accurate for what you want to measure. Again, even if you get a pretty accurate measurement of the minimum possible time to read or write data, this might not be all that helpful for real-world applications once all of the levels of software processes are taken into account. This is one reason why DMAing data directly from logic into memory is so attractive. If your data memory is the same as your instruction memory then even DMA analysis gets pretty complicated. I'm sure that you've read CPU and GPU performance reports by various testing websites comparing, for instance, AMD and INTEL products and the supporting devices on motherboards and memory. Even with standardized synthetic and 'actual application' test suites, making sense of performance numbers as they pertain to a different application that you are interested in is fraught with danger. So, with all of that in mind, perhaps you can think of a way to construct your experiment and measurements to be a bit more comprehensive and take into account the concepts discussed so far. [edit] Also, for dynamic memories there are periods during which the memory controller performs refresh and the application doesn't have access. So, be suspicious of very consistent measurements. Usually, it's better to track minimum and maximum times as well as average; something like the sketch below.
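     As a concrete illustration of tracking minimum, maximum, and average instead of trusting a single number, here's a sketch. It assumes a ZYNQ-style timestamp via XTime_GetTime(); on a MicroBlaze you'd read an AXI timer instead, and buf is a hypothetical pointer into the DDR region under test:

         #include "xtime_l.h"       /* XTime, XTime_GetTime(): ZYNQ global timer */
         #include "xil_printf.h"
         #include "xil_types.h"

         #define N_READS 10000

         void profile_reads(volatile u32 *buf)
         {
             XTime t0, t1;
             u64 dt, tmin = (u64)-1, tmax = 0, sum = 0;
             u32 sink = 0;

             for (int i = 0; i < N_READS; i++) {
                 XTime_GetTime(&t0);
                 sink += buf[i];               /* one 32-bit read */
                 XTime_GetTime(&t1);

                 dt = t1 - t0;                 /* includes the timestamping overhead */
                 if (dt < tmin) tmin = dt;
                 if (dt > tmax) tmax = dt;
                 sum += dt;
             }
             /* refresh cycles and cache-line fills show up as outliers in tmax */
             xil_printf("min %u  max %u  avg %u timer ticks\n\r",
                        (u32)tmin, (u32)tmax, (u32)(sum / N_READS));
             (void)sink;   /* keep the compiler from optimizing the reads away */
         }

     If min, max, and average all come out identical, be suspicious of the measurement itself.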
  19. I like the idea of creating some experiments to get a feel for performance. You ask good questions and perhaps your questions ought to suggest to you some additional experiments. That's how you learn stuff. Of course understanding how DDR and BRAM and your controller work helps. Indeed, the danger of asking questions and doing experiments is that you usually end up with more and ever more complex questions. So more questions: Is the test above really just measuring how long it takes to read a 32-bit value from memory, or something else? Looking at the 3 lines of code above, how could you improve on the accuracy of your measurement? What would happen if your soft processor ran the code out of BRAM and accessed data in both BRAM and DDR ( or vice versa )? I frequently like to get a sense of performance and latency for my projects and it's common for me to instrument my sources and create experimental side projects to answer questions that come up during design phases... and there are always questions. Of course being able to do the HDL development flow helps considerably. It's amazing what a simple counter, some logic, and a few trusty UART debugging modules can do. Be aware that for more complicated systems, and I believe that yours applies here, simple tests can be misleading when applied to real-world performance. There can be lots of little details and behaviors that aren't at all obvious, complicating things if you want to extrapolate simple performance numbers into more general expectations. Sometimes just thought experiments are productive. What's the difference between moving blocks of data and true random R/W access? Does block size matter? Is random R/W operation different than serial operation?
  20. No, it's static ram, there's no refresh. It doesn't matter how slowly you change the address, the [edited] memory contents will stay the same. The SRAM will maintain its contents until commanded ( intentionally or not ) to write new data. I suggest that you look into your SRAM controller logic. Get the data sheet and make sure that you are properly observing all timing specifications. My first guess is that you are encountering bus contention, which is very bad. My second guess is that you are reading the true contents of memory, and not writing what you think that you are. You did simulate your design, didn't you? You might try 8-bit transactions instead of 32-bit ones, as that is the native word size for your SRAM. Adding pull-up or pull-down resistors to fix your problem is the wrong band-aid.
  21. You've probably overshot the TMI threshold but the basic idea is pretty clear. Of course the devil is in the details and those should be confidential. The Kintex, especially if it's a faster speed grade, is a pretty good device for products that aren't in the 'price is no object' category. I haven't had the interest, incentive or time to do anything with Vitis, mostly because it doesn't support any of the platforms that I have to work with directly. Vivado 2019.1 and the SDK support FreeRTOS natively, but don't underestimate issues learning the Xilinx version of the Eclipse IDE. You will no doubt have to make adjustments to the default settings. It might take some time even for a seasoned SW engineer to get comfortable with. By comfortable, I mean familiar with its idiosyncrasies and bugs. So don't underestimate the learning curve. You should be able to build a ZYNQ HW system using the board presets ( assuming that you are using the correct board version ) and create a FreeRTOS BSP and applications without many issues. At least the second time should be pretty simple. There are a lot of ways to do FPGA logic development so who knows if you can achieve your goals... on time. The Kintex family is more capable than the Artix family, which your loaner board's PL is akin to. I'd expect that you can run a MicroBlaze quite a bit faster than in the Zedboard PL. Don't ask how much because I wouldn't want to hazard a guess, which would be hazardous to anyone reading it. If I were in your position, I'd concentrate on trying to get competent with the tools as a first step. This means having problems and being able to quickly find answers from the extensive Xilinx documentation, which is frequently out of date. BTW, I really haven't a clue as to what a MicroBlaze in a XC7Z020 PL debugging session would look like... I doubt that the tools are going to automatically download code to the PL buffers... better spend some time seeing if anyone has done that.
  22. There's nothing wrong about what you want to do. It's a shame that you have to work with a ZYNQ based board though, because this will likely double the work and learning curve. But one has to work with what one has. If you look around I'm sure that you can find application notes or project sources for implementing an ARM/Soft-Processor design. You can tie off the ARM and just proceed as if you were working with an Artix 75 device, because that's roughly the equivalent of what the Zedboard PL has in terms of resources. There are application notes for that. My suspicion is that you will largely have to work from scratch because you won't find a tutorial that has everything worked out for you. This isn't a bad thing as you are correctly trying to get up to speed before deadline craziness becomes an impediment to progress. "The MicroBlaze will also interface with some other hardware that is time critical and requires deterministic response times" I don't associate soft-processors with concepts like 'time critical' or 'deterministic'. Logic and state machines perhaps. But again you have to work with the requirements, and time budget, that you have been given. On the bright side perhaps you can compartmentalize your objectives. If you aren't familiar with working with FreeRTOS and Xilinx tools then the Zedboard can provide that for your platform using ARM instead of MicroBlaze. The working theory here is that whoever designed the system architecture and decided to save time by using MicroBlaze has reason to believe that it was up to the task(s) in terms of latency and throughput. I'm not sure that MicroBlaze saves time or complexity over standard logic development, but this depends on a lot of factors. As for PCIe, I'm not sure that your platform can help much in terms of getting a 'feel' for how well the MicroBlaze will work. You can certainly do experiments with DMAing data from the PL to the Zedboard memory. The maximum possible is about 1200 MB/s ( see the arithmetic below ), which is likely way lower than any multi-lane PCIe you will have in the final hardware. A big consideration is whether or not your Kintex PCIe <--> MicroBlaze uses external memory for data buffer(s). On the Zedboard, like most ZYNQ platforms, the only DDR is connected to the PS memory controller. The biggest problem for you to solve, at least from my perspective, is to figure out how many pounds of learning effort you can stuff into the time sack between now and when things get serious. Personally, I'd want to minimize the number of extraneous unknowns and try to concentrate on a few specific areas of investigation and learning.
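     That 1200 MB/s figure is just bus width times clock, back of the envelope: the ZYNQ AXI HP ports are 64 bits wide, and 150 MHz is a typical ( assumed, not guaranteed ) clock to run them at:

         8 \,\text{bytes} \times 150 \times 10^{6}\,\text{transfers/s}
           = 1.2 \times 10^{9}\,\text{B/s} \approx 1200\,\text{MB/s}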
  23. Well, OK, there have been some demo projects that have been upgraded to support popular hardware, like the PCAM board. I didn't mean to suggest that none of your demos would work, as is, on the latest tools. But what percentage of the demos pointed to by product support pages can the latest version of Vivado or Vitis open and create HW/SW that runs the demo? A quick perusal through Digilent's main pages and github pages suggests that the percentage is not very high. And since almost all of Digilent's demos use either MicroBlaze or ZYNQ, that's a lot of demos tied to old versions of the tools e.g. Vivado 2018.x... I read the questions posted to the Digilent Forums almost daily, so while I didn't intend to offend anyone working to support Digilent's products, I'm not quite ready to walk back my comments above either. Mainly, those comments were to provide a different viewpoint to the statements about ISE and Vivado. What I did notice was how sparse the list of FPGA boards on the Digilent sales page has gotten.
  24. I beg to differ. In fact Vivado is no good for doing PCIe based designs for the NetFPGA-1G-CML board that Digilent sells. I know, because I've tried. The Vivado support for PCIe simply doesn't work ( at least from Vivado 2019.1 forward ) for the XC7K325T-1FFG676 part on this board because it only has 4 transceiver lanes, because of the lanes that the board designers chose to use, and because Vivado refuses to let you select the correct GTX bank. If anyone has discovered how to do this I'd like to know the secret. So, I've been using ISE on Win10 for over a year for devices that aren't supported by Vivado and for the aforementioned platform. I didn't use the latest archived version of ISE 14.7 from Xilinx to install it, but used an old DVD from Xilinx... from back when the tools installer fit on one DVD. Installation wasn't straightforward for sure, but that version of ISE 14.7 supports Spartan 3, unlike the archived version available from the Xilinx download site. But the tools needed to create a bitstream do work on Win10. I haven't tried ChipScope or IMPACT, and ISIM has issues on Win10, but on occasion ISE is the only option for some projects. The ISE installer is less than 4 GB. Compare that to Vivado. ISE has bugs, but not to the extent that every new version of Vivado has. I would suggest that an earlier version of Vivado, say 2018.4, might be the best option for the Zedboard. I've done Vivado development for that board on Win10. One thing you need to do is figure out if the version of your board is supported by Vivado. I have an older one and had to spend some time trying to figure out what board version to choose and what the revision changes were, because the Vivado installations that I have on Win10 don't support my board explicitly. In some ways ISE is easier to work with than Vivado. I haven't seen any dramatic improvement in synthesis or P&R with Vivado. IMPACT can be a pain in the neck but I use the very useful Adept Utility for Windows to configure the FPGA. The one thing that Vivado does improve on is the integration of the debug tools into the GUI. ChipScope is clunky and you need a license to use it. Can you do development work for the Zedboard using Vitis? I don't know because I haven't had the time to work my way through the maze. Unless you need Vitis experience, I suggest that using a version of the tools that was new about the time that your platform was designed is the easiest way to get to a working project development cycle. You can spend weeks trying to convert old demo projects to the newer Vivado releases. Who has the time for that? Certainly not Digilent, though they still sell the board. That should tell you something. Understand that for ZYNQ development you have 2 toolchains to debug: HW and SW. Vitis simply wasn't designed for the Zedboard. Last thought on the subject: I've had to eschew using flags and word counts for Vivado FIFO IP because they just aren't reliable. Vivado has messed up the FIFO IP for quite a few versions now. ISE didn't have this problem, so it's the tool software that's the issue. If I spent some time thinking about it I'm sure that I could come up with other examples of how Vivado is frustrating to work with. BTW, though Quartus for Intel FPGA devices is riddled with bugs, and finding the version that works with a particular device can take hours of research, it does install relatively painlessly on just about any OS and is under 8 GB.
  25. Have you read the ug585 ZYNQ-7000 TRM GPIO section, specifically 14.2.4? The ZYNQ interrupt complex is quite complicated and the documentation isn't always very clear. But really, the point of the PL is to let the designer create the external interface that best suits the designer's needs for just about any hardware, so I don't think that there is a 'proper' way to handle your issue. Personally, if it doesn't consume too many resources, I'd rather simplify my software development at the expense of a few lines of HDL code. There are a lot of ways to do most anything. Usually, there's only one optimal way, a few easy ways, a lot of hard and complicated ways, and an almost infinite number of wrong ways to solve a problem... and of course it all depends on your project requirements and constraints.