Everything posted by RyanW

  1. This complexity may be problematic for me. I need to create my own simple FMC card to connect to a custom 16-channel LVDS interface, and I really need something that can get to this without extra steps on a devboard. The ZedBoard could do this for me, and I have experience using basic Zynq devices, but I really need the Video Codec Unit eventually. The UltraScale+ MPSoC devices do scare me a bit, because it seems it's always different and more complicated whenever I look at boards for it, and I am worried I will run into roadblocks that take too much time to solve. The goal is eventually to develop a daughter board for the Kria, whether by contracting it out or by me taking a stab at it. It seems that the XCZU5EV-SFVC784-1-E chip on the Genesys is very close to the Kria chip, so I am hoping to use this board to get developing without daughter-board roadblocks on the Kria devkits (can't get the right voltages/IO on the AI or Robot starter kit). I'm trying not to run into that problem again. A 1.8V HP bank (or even a 2.5V HR) would be perfect for my use cases.
  2. So is this to say that the Platform MCU can reassign bank voltages after the FPGA has been programmed? I thought that the bank voltages would need to be powered during the bitstream process. So if I can force the Platform MCU to reassign the bank voltages, does that mean I should be setting that in the PL, and that the bank voltages on Zynq can be swapped after programming?
  3. Hello, I am a little confused on how to set VADJ on the Genesys ZU-5EV. According to the reference manual, external VADJ_LEVEL1 and VADJ_LEVEL0 signals control the state of VADJ. How can I ensure that these two signals are both driven to 1 and 1 upon boot? I suppose I will need to ensure these signals reach their levels before the platform management starts its booting process. Any tips on this? I really need the capability of easily getting 1.8V on the LPC FMC along with use of the video codec unit; otherwise I would consider the ZedBoard option with jumper headers for VADJ. I do not yet have the board to play around with, so some of this is still just conceptually abstract to me.
  4. Thanks for this. I figured there was probably some way to do this, but for the time being I just had to work with my cobbled together solution.
  5. I had this same misunderstanding starting out. The voltage constraints are there to tell Vivado what voltage you have externally supplied; the only way to change the voltage on those IO banks is to physically change the incoming voltage source to the pins on the FPGA that supply those IO banks. Just glancing at the Zybo Z7, it doesn't seem to have a pin header switch or any other way to switch the bank voltages, and considering this is a dev board, there's not really a good way to go in and hack away at the voltage supply to those pins. I know on the ZedBoard there are some bank voltages you can control with a pin header jumper, and there are probably some other boards that have that capability as well. If you really need to use this board for this application at 1.8V, it seems Digilent has a Pmod connector that could probably help you out. I don't know what your target data rate is, but you can check to see if the chip can handle it by viewing the datasheet. (It seems to say the max data rate for translating to 1.8V is 75 Mbps on the features page, but their charts are confusing, as I'm calculating slightly higher.)
Digilent Pmod Level Shifter: https://digilent.com/shop/pmod-lvlshft-logic-level-shifter/
Digilent Pmod Level Shifter References: https://digilent.com/reference/pmod/pmodlvlshft/reference-manual?redirect=1
TI SN74LVC1T45 Datasheet: https://www.ti.com/lit/ds/symlink/sn74lvc1t45.pdf
  6. Hello, I have a Cora Z7-10 and it appears that the USB Type-A functionality of this board is not working. When measured, I get 0 volts across the power rails on the USB 2.0 pins. I also checked this on the single core version of the board I have and I got the same results. I can't access any peripheral USB devices I am trying to hook up to the board due to this issue. I tried to power the board with an external power supply thinking that maybe the USB was shutting down or not turning on due to some circuitry things, but after powering it in both USB and external modes, the USB Type-A port still seems to be inactive/has zero volts on its power rails. Everything else on the board appears to work well, and I have been using them for around a year now (just never tried anything with USB up till now). Is there something I need to do to enable the functionality of the USB host port for the Cora Z7-10/7s?
  7. Cora Z7-10 Discontinued

    Thank you for the response; the reasoning makes sense, although I liked having the better chip on it.
  8. Hi, I had a similar problem using the Cora Z7-07S. To make sure we're on the same page, I'm changing this in the file $PETALINUX_PROJECT/components/plnx_workspace/device-tree/device-tree/zynq-7000.dtsi. I just commented out the reference to CPU1:

/*cpu1: cpu@1 {
    compatible = "arm,cortex-a9";
    device_type = "cpu";
    reg = <1>;
    clocks = <&clkc 3>;
};*/

And then replaced the reference to cpu1 with cpu0 in the PTM node:

ptm@f889d000 {
    compatible = "arm,coresight-etm3x", "arm,primecell";
    reg = <0xf889d000 0x1000>;
    clocks = <&clkc 27>, <&clkc 46>, <&clkc 47>;
    clock-names = "apb_pclk", "dbg_trc", "dbg_apb";
    //cpu = <&cpu1>;
    cpu = <&cpu0>;
    out-ports {
        port {
            ptm1_out_port: endpoint {
                remote-endpoint = <&funnel0_in_port1>;
            };
        };
    };
};

I'm not sure whether that second PTM physically exists in this chip or not, but doing just this worked for me on my system. The ptm1 is referenced in only one other spot in my device tree, so I'm considering taking out the chain of references stemming from cpu1 to ptm1. From what I can tell, a Program Trace Macrocell (PTM) is "a real-time trace module providing instruction tracing of a processor," so it would seem it has something to do with tracking instructions in a debug mode inside the processor. It would stand to reason there would be one per core, so the second one seems extraneous on the single-core chip.
  9. Cora Z7-10 Discontinued

    Hello, this post mostly goes out to Digilent staff, but I'd like to hear others' input on this as well. I noticed a couple months ago that the Cora Z7-10 was discontinued and only the single-core option is now available. Is this due to chip shortages/supply chain issues? Is there any plan to bring this version of the board back? I have both the single and dual core, and unfortunately the single-core model does not play nice with the Xilinx toolchain for petalinux. I was eventually able to get Linux working on the single core, but I had to modify the zynq-7000.dtsi to exclude references to the second core. All the bare-metal stuff works nicely. I just thought it seemed odd to discontinue the product when I thought the pin-outs on both chips were the same (I could be entirely wrong on this). https://digilent.com/shop/cora-z7-zynq-7000-single-core-and-dual-core-options-for-arm-fpga-soc-development/
  10. Hello everyone, I am having a great deal of trouble figuring out how to set up drivers for the Xilinx DMA IP core in a Linux/Petalinux project. I currently have a bare-metal application that works well at 400MB/s from the PL to the PS and streams data from an AXI stream generator I made to simulate a video stream. That works well, but I need to be able to transfer data from the PS side to a PC via Ethernet, and I was hoping to use Linux to handle that interaction. I essentially want a proxy driver that I can control from user space to initiate the transfer upon interrupt from the DMA engine. I was doing direct register programming before, and reprogramming upon interrupt, in the bare-metal application. Is there some way to do this from user space with provided drivers somewhere, or does it take a different approach? If there are some provided drivers that take a different approach, that is fine; I just need the data in the PS from the PL. Is there anything someone can point me to for the right direction?

I am even considering doing a direct register programming mode of the AXI DMA IP core handled entirely in the PL portion along with the interrupts. This would allow me to poll some arbitrary AXI-Lite interface to see whether I have received an entire image frame yet, and read the data, most likely out of an mmap'ed portion of memory, which maybe I could define in the device tree, although I've had problems setting that up as well (a rough sketch of this polling idea is below). I would want to avoid this, however, as it seems like it's just a workaround, and it seems better to do things the right way. How/where do I find/set up the Xilinx DMA kernel drivers and tie them to the Linux DMA framework? I then imagine I would need to create my own proxy module to tie the syscalls together at that point. I'm still not sure how it would allow me to program a destination address and transfer length to the AXI DMA IP core, or handle the interrupts to clear the interrupt and reprogram the engine. Can anyone help weigh in on this?
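To make that fallback concrete, here is roughly what I mean by polling from user space; just a minimal sketch, where the AXI-Lite base address and the FRAME_DONE register offset are made-up placeholders (the real values would come from the Vivado address editor and my PL core's register map):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define AXI_LITE_BASE  0x43C00000u  /* hypothetical placeholder base address */
#define FRAME_DONE_REG 0x0u         /* hypothetical status register offset */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    /* Map one page of the PL register space into user space. */
    volatile uint32_t *regs = (volatile uint32_t *)mmap(NULL, 4096,
            PROT_READ | PROT_WRITE, MAP_SHARED, fd, AXI_LITE_BASE);
    if ((void *)regs == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Poll until the PL core flags that a full image frame has landed. */
    while ((regs[FRAME_DONE_REG / 4] & 0x1u) == 0)
        usleep(100);

    printf("frame complete\n");
    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}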
  11. I made the sysroot by running these two commands after I had set up and built the base project:

petalinux-build --sdk
petalinux-package --sysroot

When running this command, I see that the file does exist; it just seems to exist in a sub-folder right under where the path specified in the error message pointed to. The output is shown below.

./images/linux/sdk/sysroots/cortexa9t2hf-neon-xilinx-linux-gnueabi/usr/lib/arm-xilinx-linux-gnueabi/11.2.0/crtbeginS.o

EDIT: Slight mistake in the section above: this is being run inside the project folder, not the /tools/Xilinx/Vitis/2021.2 directory as specified in the error message. However, similar results were obtained, as shown below.

./gnu/aarch32/lin/gcc-arm-linux-gnueabi/cortexa9t2hf-neon-xilinx-linux-gnueabi/usr/lib/arm-xilinx-linux-gnueabi/10.2.0/crtbeginS.o

There were a couple of other things I should note/have questions about. When I packaged the sysroot, the message below popped up. How can I run this inside of Vitis? It seems important, but when I source this file from the terminal and then subsequently launch Vitis from that environment, it still fails to compile.

SDK has been successfully set up and is ready to be used.
Each time you wish to use the SDK in a new shell session, you need to source the environment setup script e.g.
$ . /home/bryan/Vivado/Testing/LinuxTest/pLinux/images/linux/sdk/environment-setup-cortexa9t2hf-neon-xilinx-linux-gnueabi

There is also this message I see when I compile the project platform in Vitis. I can't figure out why it does not want to copy; when I check the file, I see that it is owned by me.

Copying the sysroot data, this may take few minutes...
WARNING: Failed to copy boost::filesystem::copy_file: Permission denied: "/home/bryan/Vivado/Testing/LinuxTest/pLinux/images/linux/sdk/sysroots/cortexa9t2hf-neon-xilinx-linux-gnueabi/usr/bin/sudo", "/home/bryan/Vivado/Testing/LinuxTest/Vitis/LinuxPlat/export/LinuxPlat/sw/LinuxPlat/linux_domain/sysroot/cortexa9t2hf-neon-xilinx-linux-gnueabi/usr/bin/sudo"

Note: the project path is slightly different than in the original question because I eventually deleted that whole project directory, but this one is set up just the same; it only has slightly different names.
  12. A nice quick tip is that you can also boot over JTAG, which is a bit less of a hassle. Just have the board connected to some JTAG, typically over the USB/serial connection/cable on dev boards. From the petalinux project root:

petalinux-build
petalinux-package --boot --fsbl images/linux/zynq_fsbl.elf --u-boot --fpga images/linux/system.bit --force
petalinux-package --prebuilt --force
petalinux-boot --jtag --prebuilt 3

And connect to it normally over the serial cable. The base image is something like 16MB, so it's not too bad to load over JTAG, but if the image starts to get larger from packages and whatnot it might take a while, and depending on the RAM, I'm not sure if it will load at all. I still can't get TFTP boot to work, so JTAG will have to do for me for now.
  13. Hello everyone, I'm having some difficulty getting applications to compile in Vitis for deployment on embedded Linux. I thought the way I set up the platform project was correct, because the included header files were found by Vitis; however, when I compile the program, it fails. The weird thing is that it actually did compile and run on my Cora Z7-10 when I hadn't populated all the fields for the platform project, so I believe I may be doing something incorrectly here. How am I supposed to set up the Vitis project properly to get Linux applications to compile? Above is how I set up my platform project in Vitis with (what I think is) the relevant petalinux directory information.

Building target: HelloWorldApp.elf
Invoking: ARM v7 Linux gcc linker
arm-linux-gnueabihf-gcc -L/home/bryan/Vivado/Tutorials/Linux_UIO/Vitis/LinuxUIOPlat/export/LinuxUIOPlat/sw/LinuxUIOPlat/linux_domain/sysroot/cortexa9t2hf-neon-xilinx-linux-gnueabi/lib -L/home/bryan/Vivado/Tutorials/Linux_UIO/Vitis/LinuxUIOPlat/export/LinuxUIOPlat/sw/LinuxUIOPlat/linux_domain/sysroot/cortexa9t2hf-neon-xilinx-linux-gnueabi/usr/lib -o "HelloWorldApp.elf" ./src/helloworld.o --sysroot=/home/bryan/Vivado/Tutorials/Linux_UIO/Vitis/LinuxUIOPlat/export/LinuxUIOPlat/sw/LinuxUIOPlat/linux_domain/sysroot/cortexa9t2hf-neon-xilinx-linux-gnueabi -Wl,-rpath-link=/home/bryan/Vivado/Tutorials/Linux_UIO/Vitis/LinuxUIOPlat/export/LinuxUIOPlat/sw/LinuxUIOPlat/linux_domain/sysroot/cortexa9t2hf-neon-xilinx-linux-gnueabi/lib -Wl,-rpath-link=/home/bryan/Vivado/Tutorials/Linux_UIO/Vitis/LinuxUIOPlat/export/LinuxUIOPlat/sw/LinuxUIOPlat/linux_domain/sysroot/cortexa9t2hf-neon-xilinx-linux-gnueabi/usr/lib
/tools/Xilinx/Vitis/2021.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/x86_64-petalinux-linux/usr/bin/arm-xilinx-linux-gnueabi/arm-xilinx-linux-gnueabi-ld.real: cannot find crtbeginS.o: No such file or directory
/tools/Xilinx/Vitis/2021.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/x86_64-petalinux-linux/usr/bin/arm-xilinx-linux-gnueabi/arm-xilinx-linux-gnueabi-ld.real: cannot find -lgcc
/tools/Xilinx/Vitis/2021.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/x86_64-petalinux-linux/usr/bin/arm-xilinx-linux-gnueabi/arm-xilinx-linux-gnueabi-ld.real: cannot find -lgcc
collect2.real: error: ld returned 1 exit status
make: *** [makefile:38: HelloWorldApp.elf] Error 1

Above is the error I'm getting when attempting to compile in Vitis. It seems like maybe I'm setting up the compiler settings or the sysroot wrong?
  14. Zygot, I've been thinking about this response this week, and I think the best option here is like you said, so I'll be trying out a couple of different solutions. I still like direct LVDS input the best out of the options, so I will be getting a board which can more cleanly do this.
  15. Thank you for the response Zygot, and yes, I have a bit of a pickle here. I understand that there are tons of problems to be solved in this design, and I may be hitting this problem with the wrong hammer. The overall goal is to deserialize data from the image sensor I have and collect the data into the onboard DDR memory. I have 4 LVDS channels of data running at 462 Mbps each and an LVDS pixel clock that runs at 66 MHz. It's a 1:7 deserialization (66 MHz x 7 bits per clock = 462 Mbps per lane). The initial plan was to use the ISERDES module on the inputs with the LVDS lines, but because the bank voltage is 3.3V, I would need to terminate with a 100 Ohm resistor across the lines, which I can't do with the dev board.

I had looked into XAPP585 (LVDS Source Synchronous 7:1 Serialization and Deserialization Using Clock Multiplication Application Note) a while back, and for Artix-7 SDR designs it seems as if I'd be able to drive it at 464 Mbps, which I believe is due to the global clock buffer only being able to handle 464 MHz. It also seemed like it could potentially go higher, to around 600 Mbps, if I didn't drive it through a global clock buffer. The PMODs are supposedly differentially routed with 100 Ohm impedance (+/- 10%), as stated in the reference manual Digilent provides. I kind of assumed they were all trace matched to each other, but it could be they're only trace matched within the individual pair. I was hoping they could potentially reach high enough speeds to accommodate what I need. They also have a single MRCC pair that I was planning on driving the clock line into. I would have preferred the direct LVDS implementation to help mitigate some of the signal integrity issues single-ended would face on the board. But yeah, I would need a termination resistor for this bank voltage. For LVDS inputs, this FPGA should be able to handle LVDS on a 3.3V bank, but it does require external termination. I found this in a chart Xilinx put out on their forums here (https://support.xilinx.com/s/article/43989?language=en_US).

So the next option I wanted to try, and I liked this one the best, was using a TI LVDS-to-LVTTL IC like the DS90CR288A, but they are all out of stock until 5/2023. Otherwise it would have let me deserialize right at the chip, 4 channels with a common pixel clock, into 28 single-ended bits + a 66 MHz single-ended clock, and I felt that would be way more manageable. There were some other chips I could get, like the DS90CR218A, which actually has plenty of stock, but it only has 3 LVDS input channels + a clock, and I wasn't sure splitting the clock signal between two chips for a data stream like this was a good idea. I recently found some literature talking about bifurcated termination, where once the line splits, each branch impedance goes to 2Z and you terminate with 2Z at both chips. But I am not sure that's such a good idea anyhow, especially considering all the data needs to stay aligned.

So this is how I got here. The current plan is to use something like an SN65LVDT352PW, which just shifts the level from LVDS to single-ended LVTTL with a max switching rate somewhere around 500 Mbps. I was making an interim board that plugs directly into the sensor's outputs and then directly into the PMODs, with just the level shifter chips on board, so I could keep the lines short enough from sensor to chips and from chips to FPGA. I've been having fun learning about making controlled-impedance lines, for what it's worth. This was not my favorite option, but it's the option I have right now.
I suppose getting a different FPGA dev board might be what I really need to do, but this is the hand I'm playing currently. I've checked for a lot of leaks, but I'm still fearful my boat is going to sink anyway. This is all pretty new territory for me; the fastest signals I've ever dealt with before were definitely less than 10 MHz, so I've just multiplied that by 50. Writing this out is at least a good recap to justify things to myself. I still think it might be possible, but what scares me the most is that I will have no way to verify what the signals even look like, like I've always been able to do in the past. My oscilloscope is a measly 1 GSa/s, 200 MHz bandwidth scope, so it's hard to even see the pixel clock on it.

edit: I thought I would include the board I was making to kind of show the idea. LVDS goes in the left and LVTTL goes out the right. It just plugs right into both boards without the need for cables. It's a bit whack right now and nowhere near done, but that's the concept anyway. The impedance is supposedly right for a JLCPCB 4-layer stackup. If I went with the LVDS-to-28-bits-out option, I would make a shield-like implementation for the Cora Z7-10 that also plugs directly into the sensor board.
  16. Hi, I'm trying to use the PMODs on the Cora Z7-10 to accept high-speed single-ended signals from another board I am making. The reference manual for this device has a note about using the differential pairs for single-ended signals. So my question is in regards to proper grounding here. If I have a pair JA1_P and JA1_N and want to use JA1_P as the single-ended input, it says I should drive JA1_N low in the FPGA fabric, but do I also need to connect the output of that to ground (it kind of seems like I should)? I'm perhaps a little worried about ground spikes, so if I do, should I put a series resistor in with it? Will this affect the single-ended speed potential? My data rate is ~500 Mbps on each line.
  17. AXI DMA Help on Cora Z7-10

    Thank you. I took the advice and wrote my own drivers for this kind of thing. Perhaps I just didn't understand how to use the Xilinx-provided ones, but direct transfer mode is fairly simple when you lay it out like that. I know I had read the programming sequence in the docs, but I figured it was all handled inside the simple transfer function, which wouldn't allow consecutive transfers since it checks whether the DMA has already been started. I had thought that the DMA would de-assert back to a halted state, but it seems this is not the case. Thank you everyone for helping me clear this up. I would like to select everyone as best answer, but I can't, so I'll just go with the last one in the progression.
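In case it helps anyone who lands here later, the heart of what I ended up with looks roughly like this; a minimal sketch of the direct-register S2MM sequence, with register offsets taken from Xilinx PG021, and assuming the DMA instance shows up as XPAR_AXI_DMA_0_BASEADDR in xparameters.h:

#include "xil_cache.h"
#include "xil_io.h"
#include "xparameters.h"

/* AXI DMA S2MM register offsets, per PG021. */
#define S2MM_DMACR  0x30  /* control register */
#define S2MM_DMASR  0x34  /* status register */
#define S2MM_DA     0x48  /* destination address */
#define S2MM_LENGTH 0x58  /* length in bytes; writing this starts the transfer */

#define DMA_BASE XPAR_AXI_DMA_0_BASEADDR  /* assumes a single AXI DMA instance */

/* Kick off one S2MM (stream-to-memory) transfer by direct register programming. */
static void s2mm_transfer(u32 dest, u32 nbytes)
{
    Xil_DCacheInvalidateRange(dest, nbytes);           /* DMA writes land behind the cache */
    Xil_Out32(DMA_BASE + S2MM_DMACR,
              Xil_In32(DMA_BASE + S2MM_DMACR) | 0x1);  /* set run/stop bit */
    Xil_Out32(DMA_BASE + S2MM_DA, dest);               /* where the stream lands in DDR */
    Xil_Out32(DMA_BASE + S2MM_LENGTH, nbytes);         /* writing length starts the DMA */
}

/* After TLAST arrives, IOC_Irq (DMASR bit 12) sets; write a 1 to that bit to
   clear it, and then the engine can be reprogrammed with a new address/length. */
static int s2mm_done(void)
{
    return (Xil_In32(DMA_BASE + S2MM_DMASR) & (1u << 12)) != 0;
}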
  18. AXI DMA Help on Cora Z7-10

    The data_gen is something that is under development. I just needed to test how to get arbitrary data out of the PL faster than with AXI GPIO. I thought I had come across this problem a week ago, when I found out that my valid and data outputs didn't line up well if ready was de-asserted. I thought I had fixed that issue, and the simulation seemed to show so, but I could be largely misinterpreting how the AXI Stream interface works. I am very new to AXI, and it has been giving me lots of trouble ever since I got into it. I have now created a new data_gen that I think adheres to the AXI Stream rules much better, with an extensive testbench covering a lot of cases, including broken-up data beats.

This makes a lot of sense now that you say it. I changed my code to reflect this, and I also invalidated the cache before the transfer (would having the buffer be volatile, as I had it, not already do this? I figured that's what volatile did to some degree, but I went ahead and flushed the cache anyway). With better transfer parameters, cache invalidation, and a new data_gen, I was actually able to populate all 32 words from the PL into DDR RAM; however, the DMA engine would still hang and never generate an interrupt, assert high on the halted bit, or de-assert low on the idle bit in the S2MM_DMASR. I tried using the ILA to capture what was going on, and this is what I found. One thing I also found peculiar is that I called XAxiDma_Busy even before the transfer and both directions were still registered as busy; however, the actual status register shows it as halted with run/stop = 0. When the transfer starts, these registers flip to indicate that it is still running, but it goes on forever. The first 4 words come extremely early, so I couldn't capture them in the same waveform and had to re-run the program for the last part. It seems to play out correctly, the same as how I simulated it within the new testbench I made for the data_gen.

Thank you both for the great help already. I feel like I actually made some progress on this for once. What can I do to get the DMA engine to stop hanging on this transfer? I'm guessing this might have to do with tlast again, but it seems like it's signaling at the right time.
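For reference, the cache handling I added is just the Xilinx bare-metal cache call. My understanding (worth double-checking) is that volatile only constrains compiler optimizations and does nothing about the data cache, which is why the explicit invalidate still matters; a sketch using the same buffer as in my full example below:

#include "xil_cache.h"
#include "xil_types.h"

volatile u32 StreamBuffer[256];  /* DMA destination buffer */

/* volatile keeps the compiler from caching reads in registers, but it does not
   make the L1/L2 data caches coherent with DMA writes to DDR, so the buffer
   range still has to be invalidated around the transfer. */
static void invalidate_stream_buffer(void)
{
    Xil_DCacheInvalidateRange((UINTPTR)StreamBuffer, sizeof(StreamBuffer));
}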
  19. Hello anyone, I have been trying to get a simple AXI DMA transfer from the PL to the PS on my Cora Z7-10 working for a while now. I have followed many tutorials and guides, and for some reason I'm just not getting any results. I'm really hoping someone here can help me out with this, as I have been stuck trying to get this to work for a long time now. The C program seems to get stuck waiting on XAxiDma_Busy after I call XAxiDma_SimpleTransfer(&AxiDma, (UINTPTR) StreamBuffer, 4, XAXIDMA_DEVICE_TO_DMA). All other calls setting up the PL DMA engine seem to return as successes. I have an arbitrary stream of data being generated by an AXI stream module that just counts up by 1 from 0 every transfer. I'm going to include a lot of pictures here in hopes that it might help anyone who wants to take a stab at helping me.

My data generator has a 32-bit output and counts up to 31 from 0. I have wondered if there was a problem with tlast, in how the DMA engine considers packets, so I tried using tlast at the end of the 32-word stream and I also tried tying it high. Above is my block diagram for this system: the data generator streams to a FIFO, which then streams to the AXI DMA, and that's about it. I have the sys_clock coming in at 125 MHz, which enters the clock wizard and comes out at 100 MHz. Here is the data_gen sim with tlast (this module only, not connected in the block diagram; however, I have simulated both of these designs hooked up to a stream data FIFO and they passed the data through just fine, so I don't think it's my handshaking, but I'm not ruling out the possibility that I screwed up another part of the streaming protocol). I also tried tying tlast high for the whole stream in the full implementation. The configuration I have for the DMA is fairly stripped down; here is the way I configured it in Vivado.

The code I have is fairly straightforward. I look up the config, which returns success, as do all the other cases. It gets stuck during the loop checking if the DMA is still busy, and from the debugger I can see that no data was ever transferred into the DMA. I also used to have a print statement in the wait loop to see if any of the values changed in the StreamBuffer array.
#include <stdio.h>
#include "platform.h"
#include "xil_printf.h"
#include "xaxidma.h"

#define DMA_DEV_ID XPAR_AXIDMA_0_DEVICE_ID

int main()
{
    init_platform();

    xil_printf("\n\r");
    xil_printf("AXI DMA Self Test\n\r");

    XAxiDma AxiDma;
    XAxiDma_Config *CfgPtr;
    int Status = XST_SUCCESS;

    /* Case 1: look up the DMA hardware configuration. */
    CfgPtr = XAxiDma_LookupConfig(DMA_DEV_ID);
    if (!CfgPtr) {
        xil_printf("Case 1: Failure\n\r");
    } else {
        xil_printf("Case 1: Success\n\r");
    }

    /* Case 2: initialize the driver instance. */
    Status = XAxiDma_CfgInitialize(&AxiDma, CfgPtr);
    if (Status != XST_SUCCESS) {
        xil_printf("Case 2: Failure\n\r");
    } else {
        xil_printf("Case 2: Success\n\r");
    }

    /* Case 3: run the driver self test. */
    Status = XAxiDma_Selftest(&AxiDma);
    if (Status != XST_SUCCESS) {
        xil_printf("Case 3: Failure\n\r");
    } else {
        xil_printf("Case 3: Success\n\r");
    }

    /* Polling mode: disable interrupts in both directions. */
    XAxiDma_IntrDisable(&AxiDma, XAXIDMA_IRQ_ALL_MASK, XAXIDMA_DEVICE_TO_DMA);
    XAxiDma_IntrDisable(&AxiDma, XAXIDMA_IRQ_ALL_MASK, XAXIDMA_DMA_TO_DEVICE);

    xil_printf(
        "HasStsCntrlStrm : %u\n\r"
        "HasMm2S         : %u\n\r"
        "HasMm2SDRE      : %u\n\r"
        "Mm2SDataWidth   : %u\n\r"
        "HasS2Mm         : %u\n\r"
        "HasS2MmDRE      : %u\n\r"
        "S2MmDataWidth   : %u\n\r"
        "HasSg           : %u\n\r"
        "Mm2sNumChannels : %u\n\r"
        "S2MmNumChannels : %u\n\r"
        "Mm2SBurstSize   : %u\n\r"
        "S2MmBurstSize   : %u\n\r"
        "MicroDmaMode    : %u\n\r"
        "AddrWidth       : %u\n\r"
        "SgLengthWidth   : %u\n\r",
        CfgPtr->HasStsCntrlStrm, CfgPtr->HasMm2S, CfgPtr->HasMm2SDRE,
        CfgPtr->Mm2SDataWidth, CfgPtr->HasS2Mm, CfgPtr->HasS2MmDRE,
        CfgPtr->S2MmDataWidth, CfgPtr->HasSg, CfgPtr->Mm2sNumChannels,
        CfgPtr->S2MmNumChannels, CfgPtr->Mm2SBurstSize, CfgPtr->S2MmBurstSize,
        CfgPtr->MicroDmaMode, CfgPtr->AddrWidth, CfgPtr->SgLengthWidth);

    xil_printf("AXIDMA HasSg: 0x%08x\n\r", AxiDma.HasSg);

    //--------------------------------------------------------
    volatile u32 StreamBuffer[256];
    for (int i = 0; i < 256; i++) {
        StreamBuffer[i] = 0;
    }

    while (!XAxiDma_ResetIsDone(&AxiDma)) {}

    /* Case 4: start a simple S2MM transfer into StreamBuffer. */
    Status = XAxiDma_SimpleTransfer(&AxiDma, (UINTPTR) StreamBuffer, 4, XAXIDMA_DEVICE_TO_DMA);
    if (Status != XST_SUCCESS) {
        xil_printf("Case 4: Failure\n\r");
    } else {
        xil_printf("Case 4: Success\n\r");
    }

    /* Spin until both channels report not busy. */
    int DMA_Busy_DevToDMA = 1;
    int DMA_Busy_DMAToDev = 1;
    while (DMA_Busy_DevToDMA || DMA_Busy_DMAToDev) {
        //xil_printf("Waiting\n\r");
        DMA_Busy_DevToDMA = XAxiDma_Busy(&AxiDma, XAXIDMA_DEVICE_TO_DMA);
        DMA_Busy_DMAToDev = XAxiDma_Busy(&AxiDma, XAXIDMA_DMA_TO_DEVICE);
    }

    for (int i = 0; i < 100000; i++) {}

    xil_printf("DMA StreamBuffer Test Data\n\r");
    for (int i = 0; i < 16; i++) {
        xil_printf("0x%08x: %d\n\r", &StreamBuffer[i], StreamBuffer[i]);
    }

    xil_printf("Successfully ran AxiDMASelfTest Example\r\n");
    cleanup_platform();
    return 0;
}

Here is the serial output I get, showing that it gets stuck waiting forever for data to transfer and never transfers anything.

Case 1 is the success return of CfgPtr = XAxiDma_LookupConfig(DMA_DEV_ID);
Case 2 is the success return of Status = XAxiDma_CfgInitialize(&AxiDma, CfgPtr);
Case 3 is the success return of Status = XAxiDma_Selftest(&AxiDma);
Case 4 is the success return of Status = XAxiDma_SimpleTransfer(&AxiDma, (UINTPTR) StreamBuffer, 4, XAXIDMA_DEVICE_TO_DMA);

I also print out the configuration pointer data here. I have determined that the value on the S_AXIS_S2MM_tdata bus has gotten to 31, so it makes me think there is some form of transfer going on there, but I can't figure out why I still don't see any values in the stream buffer. I have tried directly using Xilinx's examples from their website and followed multiple tutorials in the same way the presenter did them.
I also imported the examples from the drivers in Vitis and changed the DDR base address in them to fit my board, using the correct address as defined in xparameters.h. One of the more recent tutorials I did was with the video below, and the same configuration on my end with the same code still gets stuck (this time I can't even tell where, as the debugger jumps around in a fashion that makes no sense). No matter what avenue I take, it seems like I just can't get the DMA to work, which seems crazy to me. Is there anyone out there who has experienced these difficulties with the AXI DMA engine before? I just can't seem to figure out what's going wrong here despite a couple months of trying many, many different things. For anyone who has bothered to read this far down in the post: you're a hero.
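One debugging tactic I've since leaned on: reading the raw S2MM status register instead of trusting XAxiDma_Busy. A minimal sketch, assuming the 0x34 status offset from Xilinx PG021 and a single DMA instance in xparameters.h:

#include "xil_io.h"
#include "xil_printf.h"
#include "xparameters.h"

/* S2MM_DMASR lives at base + 0x34, per PG021. */
#define S2MM_DMASR_ADDR (XPAR_AXI_DMA_0_BASEADDR + 0x34)

static void dump_s2mm_status(void)
{
    u32 sr = Xil_In32(S2MM_DMASR_ADDR);
    /* Decode the bits that matter for a hung simple transfer. */
    xil_printf("S2MM_DMASR=0x%08x Halted=%d Idle=%d IntErr=%d SlvErr=%d DecErr=%d IOC=%d\n\r",
               sr, sr & 1, (sr >> 1) & 1, (sr >> 4) & 1,
               (sr >> 5) & 1, (sr >> 6) & 1, (sr >> 12) & 1);
}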
  20. That would explain quite a lot. I thought that VCC was able to be changed from internal electronics using the Xilinx constraints; I'm new to the ins and outs of FPGAs, so I assumed wrongly. So each bank needs to be externally supplied at the desired VCC? Does Digilent offer any Zynq-based boards with VCC2V5, or a way to reconfigure the voltage supplied to individual banks via board selection jumpers/programmable power ICs? Thank you so much for your response as well; I've nearly gone crazy trying to figure this one out.
  21. Hello, I am having some difficulties getting LVDS output on my Cora Z7-10. I've tried boiling it down to its most basic form of just trying to output a 10 MHz clock on one of the differentially routed PMOD ports (JA), but I'm still not seeing any output on my oscilloscope. Can anyone help me understand what's going wrong here? Has anyone else gotten LVDS to work correctly on this board's PMOD ports? I tried looking at some other designs, like the HDMI TMDS33-through-PMOD that was posted on this forum before, and I was able to get TMDS33 to work with 3.3V 50 Ohm pullup resistors. Any help on this would be greatly appreciated. This is my full constraints file:

set_property -dict {PACKAGE_PIN D20 IOSTANDARD LVCMOS33} [get_ports RESET]; #IO_L4N_T0_35 Sch=btn[0]
set_property -dict {PACKAGE_PIN U18 IOSTANDARD LVDS_25} [get_ports {CLKT_clk_p[0]}]; #IO_L12P_T1_MRCC_34 Sch=ja_p[3]
set_property -dict {PACKAGE_PIN U19 IOSTANDARD LVDS_25} [get_ports {CLKT_clk_n[0]}]; #IO_L12N_T1_MRCC_34 Sch=ja_n[3]

This image shows all that I have in this design. I am trying to use the Vivado block diagram IP designer to instantiate the buffers, and the schematic and device views from implementation show that the buffer is connected in the design, so I don't think the router/synthesizer is throwing it out. CLKT is using the port interface "xilinx.com:interface:diff_clock_rtl:1.0". Reset is hooked up to button 0 on this board; sys_clock is the 125 MHz system clock on H16. The utility buffer IP here is configured to use an OBUFDS. Considering I was able to get TMDS working with the pullup resistors, it could be an issue with termination. I've tried a few different termination arrangements without really seeing any output; here are the two I've tried. My initial goal was to serialize parallel data from the PL, output it over LVDS on some wires back into the PL with SERDES primitives, and see how fast I could get reliable transmission, but I'm not able to get any LVDS out, so I need to solve that bit first. I do have two of these devices, so it would be fun to transmit from one to the other as well.