Posts posted by zzzhhh

  1. On 5/22/2023 at 2:51 PM, zygot said:

    This refers to the Artix 35T version, which has been discontinued. The DDR clocking capabilities of the L part are slower than those of the regular speed and power grades. Asmi is correct that there is a lower limit on how fast you can run your DDR3 memory CLK. It's how DDR3 works. It wouldn't be a bad idea for anyone wanting to use DDR in a design to educate themselves about such things. For some Artix devices you are limited to a 2:1 MIG controller design, which is what the reference provides for.

    For the reference design that you are using, the 6000 ps input clock is the source clock for the MIG controller clocking. So, if you provide a 166.7 MHz input clock and have a 2:1 memory controller design, your DDR CLK will be 333 MHz and you will have a 667 Mbps DQ data rate. That's as good as it gets for the XC7A35TICSG324-1L part.

    If you've purchased your board recently, check the part number on the FPGA device, as it's likely a larger part that isn't low power.

    I'm fairly certain that the "Max. clock period" in the reference manual is a poor choice of words because not many customers want to know how slow they can run the DDR PHY clock. For the XC7A35TICSG324-1L I suppose that max clock period and min clock period are pretty much the same as this is the lower end for the DDR3 chip and the upper end for the FPGA device.

    Feel free to ignore all of the posts to your question so far and wait for someone from Digilent to tell you the same things.

    Of course I will ignore your garbage. Also, since you know I will ignore it, don't put your ignorant garbage in my question in the future.

  2. In section 5.1 DDR3L of the Arty A7 Reference Manual (link), Table 2 says the "Max. clock period" is 3000 ps. But I think 3000 ps should be the "Min. clock period", because the clock period can be increased further (i.e., the clock frequency reduced), for example to the 6000 ps given as the Recommended Input Clock Period in the same table. If you are an employee of Digilent, can you please double-check this? Thanks.
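
    For reference, here is the arithmetic I am going by, as a small worked example; the 2:1 controller ratio and the 667 Mbps figure come from the reply quoted in the post above, and the numbers are only meant to illustrate the period-to-frequency conversion:

    #include <stdio.h>

    /* Period/frequency arithmetic for the Arty A7-35T (-1L) DDR3 discussion above. */
    int main(void) {
        double input_period_ps = 6000.0;            /* "Recommended Input Clock Period"     */
        double ddr_period_ps   = 3000.0;            /* the table entry under discussion     */
        double input_mhz = 1e6 / input_period_ps;   /* ~166.7 MHz MIG input clock           */
        double ddr_mhz   = 1e6 / ddr_period_ps;     /* ~333.3 MHz DDR3 CK (2:1 controller)  */
        printf("input %.1f MHz, DDR3 CK %.1f MHz, DQ rate %.0f Mbps/pin\n",
               input_mhz, ddr_mhz, 2.0 * ddr_mhz);  /* double data rate: 2 transfers per CK */
        return 0;
    }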

  3. On 3/18/2019 at 6:25 PM, jpeyron said:

    Hi @newkid_old,

    Here is a verified Vivado 2018.2 Arty-A7-35T GPIO interrupt project using your SDK code. Please download and run this project to see whether you get the expected results. If not, please attach screenshots of your serial terminal output.

    thank you,

    Jon

    Thank you, and thanks to the others discussing interrupts on MicroBlaze. I was also stuck at first with my own AXI4 peripheral IP, which outputs an interrupt signal. Your SDK code helped me find out what I was missing.

    PS: in my Vivado/Vitis 2022.2.2 project, calls to XIntc_MasterEnable and microblaze_enable_interrupts are not needed, and I call XIntc_LookupConfig before XIntc_Initialize. Finally, the second argument of Xil_ExceptionRegisterHandler should be the ISR function, because INTC_HANDLER is undefined in the current version of Vitis. I'm pretty surprised that we still need to call the exception-related APIs: I thought I had done all the initialization and configuration work for the interrupt, so why bother setting up exceptions? That is what I was missing. (A sketch of the sequence I ended up with is at the end of this post.)

    It's so sad to see you go, Jon. I can't imagine current Digilent staff carefully reading a customer's question, let alone creating a working project to answer it.
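
    For anyone who lands here later, this is roughly the sequence that ended up working for me; treat it as a minimal sketch rather than a reference, since the device-ID and interrupt-ID macro names below are placeholders standing in for whatever your xparameters.h actually generates:

    #include "xparameters.h"
    #include "xintc.h"
    #include "xil_exception.h"
    #include "xstatus.h"

    #define INTC_DEVICE_ID   XPAR_INTC_0_DEVICE_ID  /* placeholder macro name */
    #define PERIPH_INTR_ID   0                      /* interrupt input index of the custom AXI peripheral */

    static XIntc Intc;

    static void PeriphIsr(void *CallbackRef)
    {
        /* acknowledge/clear the custom peripheral's interrupt here */
    }

    int SetupInterrupts(void)
    {
        /* LookupConfig is not strictly required before XIntc_Initialize (which looks the
           configuration up itself); it is shown here only because that is the order I used. */
        if (XIntc_LookupConfig(INTC_DEVICE_ID) == NULL)
            return XST_FAILURE;
        if (XIntc_Initialize(&Intc, INTC_DEVICE_ID) != XST_SUCCESS)
            return XST_FAILURE;

        if (XIntc_Connect(&Intc, PERIPH_INTR_ID, (XInterruptHandler)PeriphIsr, NULL) != XST_SUCCESS)
            return XST_FAILURE;
        if (XIntc_Start(&Intc, XIN_REAL_MODE) != XST_SUCCESS)
            return XST_FAILURE;
        XIntc_Enable(&Intc, PERIPH_INTR_ID);

        /* The exception calls are still needed on MicroBlaze: they hook the interrupt
           controller into the processor's interrupt vector and unmask it. INTC_HANDLER in
           the older examples is just a macro for XIntc_InterruptHandler; registering the
           ISR directly also works when there is only a single interrupt source. */
        Xil_ExceptionInit();
        Xil_ExceptionRegisterHandler(XIL_EXCEPTION_ID_INT,
                                     (Xil_ExceptionHandler)XIntc_InterruptHandler, &Intc);
        Xil_ExceptionEnable();

        return XST_SUCCESS;
    }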

  4. I embedded a MicroBlaze core in the FPGA of an Arty A7-100T board. To measure the time spent by a piece of code run by the MicroBlaze core in Vitis, I tried

    #include <time.h>
    
    clock_t start, end;
    start=clock();
    // code to be measured
    end=clock();

    But I received this error:

    Quote

    (.text+0x8): undefined reference to `times'

    So, is there any API in the Vitis SDK that I can use to measure time on the MicroBlaze core for the Arty A7-100T? If the answer is no, is that a limitation of MicroBlaze? If so, and I replace MicroBlaze with some other processor core, say RISC-V, can I measure time in Vitis? Or is there no way at all to measure time from C code in Vitis on an Artix-7 FPGA?
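
    One workaround I am considering, in case there is no ready-made API: add an AXI Timer IP to the block design and read its free-running counter around the code under test. A minimal sketch, assuming such a timer exists in the design; the device-ID and clock-frequency macro names are placeholders for whatever xparameters.h generates:

    #include "xparameters.h"
    #include "xtmrctr.h"
    #include "xil_printf.h"

    #define TMR_DEVICE_ID  XPAR_TMRCTR_0_DEVICE_ID        /* placeholder macro name */
    #define TMR_CLK_HZ     XPAR_TMRCTR_0_CLOCK_FREQ_HZ    /* placeholder macro name */

    static XTmrCtr Timer;

    void measure(void)
    {
        u32 t0, t1;

        XTmrCtr_Initialize(&Timer, TMR_DEVICE_ID);
        XTmrCtr_SetOptions(&Timer, 0, XTC_AUTO_RELOAD_OPTION);  /* free-running, counting up */
        XTmrCtr_Reset(&Timer, 0);
        XTmrCtr_Start(&Timer, 0);

        t0 = XTmrCtr_GetValue(&Timer, 0);
        /* code to be measured */
        t1 = XTmrCtr_GetValue(&Timer, 0);

        xil_printf("elapsed: %u cycles (~%u us)\r\n",
                   t1 - t0, (t1 - t0) / (TMR_CLK_HZ / 1000000));
    }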

     

  5. 5 hours ago, zygot said:

    Large data file transfers via a UART may expose issues on either end; transferring small amounts of data may not. The first thing to confirm is that you can receive and transmit small messages. This is where trying to echo characters as they are received can help debug your MicroBlaze UART code. One way to test only the FPGA side would be to connect the Tx signal to the Rx signal in your board schematic instead of to the USB UART pins. This is one form of loopback. You could send messages and see if you receive them. If there are no problems doing that, then go back to using the USB UART, change your MicroBlaze application to echo received characters to the transmitter, and type into Tera Term or your serial terminal application on the Windows host. If that works, most likely there is an issue with the MicroBlaze application getting characters out of the UART quickly enough. Make a plan of short steps that will provide a clue as to what's working.

    Yes, if the polling interval isn't short enough. Interrupts add complexity but there should be example software projects to help.

     

    You need to understand that there are a lot of levels of hardware going on here. In the MicroBlaze UART you have small Rx and Tx FIFOs. If your code writes to the UART Tx FIFO faster than the bytes can be sent out over the Tx pin, then you will lose data. For a USB UART, the bridge device has Tx and Rx FIFOs. PC host drivers take care of making sure that receive data is being emptied fast enough. The default driver for FTDI COM devices is the VCP driver. It does have limitations and might have problems with really large amounts of continuous data at higher baud rates. That's why UARTs have flow control. Data going the other way is similar. Serial terminal applications like PuTTY or Tera Term, in concert with the driver, should handle either hardware or XON/XOFF flow control if you set up the serial port to use it. On the MicroBlaze side you need to handle that in your software.

    You can get a fairly cheap USB UART cable or breakout board as an alternative to the UART/JTAG FTDI device found on most Digilent boards. I am always using a few of these. They support hardware flow control. I find them invaluable for debugging FPGA designs.

    9600 baud is a pretty low data rate: 960 characters/sec with 1 start bit, 1 stop bit, and no parity. For short messages it would seem that this should present no problem, even for a soft processor implemented in logic. When there's a constant stream of characters over a long time period, it's easy to overlook operating conditions that might cause data loss. The safest way to avoid problems is with hardware flow control.

    I was not asking you.

  6. Thank you all for the replies.

    @artvvb: May I follow up with a few more questions based on your reply?

    First, judging from your surprise, both the UARTLite IP and the USB-to-UART bridge (a Cypress chip, I don't remember exactly) should be able to transfer a large file from the host PC to the Arty A7-100T board, right? If so, I'll double-check my code; maybe there is a bug in my code, which uses the low-level driver APIs.

    Second, I am using the default polling mode to get the development started quickly. Do you mean that interrupt mode can avoid the possible data loss?

    Third, for UART data transfer from the host PC to the board using Tera Term, do you happen to know whether Tera Term will pause and hold the data in the file being sent when it detects that the UART FIFO is full? Conversely, for data transfer from the board to the host PC, will MicroBlaze, the UARTLite IP, or the USB-to-UART bridge pause and hold the data, or just discard it, when it detects that the UART FIFO is full? I have no control over Tera Term, but I can control the program transferring data from the board to the host. If MicroBlaze doesn't hold the data when sending, I can add some sleep in the code to make sure there is always space in the FIFO, or retry the send in a loop as in the sketch at the end of this post. Do you have a suggestion for how long I should sleep after sending each byte?

    Finally, somewhat off-topic: I hope Digilent can release some working and correct tutorials, for example on UART transmission, which are easier to follow and learn from than the documentation. Users have their own work to do and don't have the time or the need to learn the technicalities and specifics of the UART, the Ethernet port, or the DDR3 memory controller. Just give us working, correct steps; no tons of theory and explanation. For now the only working and correct tutorial is the baremetal one I referred to in my question. All the others, like the Ethernet echo server, are either simply not working or unavailable. Can you at Digilent please update and add these tutorials so that we can focus on our own work after paying so many dollars? I don't think that's extremely hard. Thanks a lot.
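
    On the sending side specifically, instead of sleeping a fixed time after each byte, it seems safer to loop on the driver's send call, which reports how many bytes were actually accepted into the Tx FIFO. A minimal sketch, assuming the UARTLite instance has already been initialized; the function and buffer names here are placeholders of mine:

    #include "xuartlite.h"

    /* Send len bytes, resubmitting whatever did not fit into the Tx FIFO yet. */
    void send_all(XUartLite *uart, u8 *buf, u32 len)
    {
        u32 sent = 0;
        while (sent < len) {
            /* XUartLite_Send returns the number of bytes it queued (possibly 0). */
            sent += XUartLite_Send(uart, buf + sent, len - sent);
        }
    }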

     

  7. I am using a Digilent Arty A7-100T board and Vivado/Vitis 2022.2.2 on Windows. First I followed this tutorial to build a baremetal system with a MicroBlaze processor and external DDR support. Then I dragged and dropped a USB UART from the Board tab into the block design diagram and applied the connection automation wizard to complete the hardware design. After synthesis and bitstream generation succeeded, I exported the hardware and created an application in Vitis in order to transmit a large file between the host PC and the board through the USB UART. The Vitis program is as follows:
     

    #include <stdio.h>
    #include <stdlib.h>     /* for malloc/free */
    #include "xaxidma.h"
    #include "xuartlite.h"
    #include "xparameters.h"
    #include "xil_printf.h"
    
    #define fileSize 3220
    
    int main(){
        print("Hello World\n\r");
    
        u8 * imageData;
        imageData = malloc(fileSize);
    
        for (int i = 0; i < fileSize; i++) {
            scanf("%c", (char *)(imageData + i));   /* read one byte from the UART (stdin) */
            xil_printf("%c", imageData[i]);
        }
    
        free(imageData);
        return 0;
    }

    To test, I use Tera Term as the serial terminal because it supports sending a file. I created a large text file and ran the program, but there is data loss, judging from the output.

    I also tried low-level UARTLite code for the same purpose:

    u8 *imageData;
    u32 receivedBytes, totalReceivedBytes = 0;
    XUartLite_Config *myUartConfig;
    XUartLite myUart;
    u32 status;

    imageData = malloc(imageSize);   /* imageSize: number of bytes expected from the host */

    myUartConfig = XUartLite_LookupConfig(XPAR_AXI_UARTLITE_0_DEVICE_ID);
    status = XUartLite_CfgInitialize(&myUart, myUartConfig, myUartConfig->RegBaseAddr);
    if (status == XST_SUCCESS)
        print("UART initialization succeeds\n\r");

    /* Poll one byte at a time until the whole file has been received. */
    while (totalReceivedBytes < imageSize) {
        receivedBytes = XUartLite_Recv(&myUart, imageData + totalReceivedBytes, 1);
        totalReceivedBytes += receivedBytes;
    }

    But I encountered the same data loss. I set the baud rate to 9600 and I cannot see anything wrong in the other parts of the hardware/software design. So, if you have done the same thing successfully on an Arty A7-100T board using Vivado/Vitis 2022.2.2 on Windows, how did you transmit a large file between the host PC and the board through the USB UART? Is this due to a limitation or bug of the UARTLite IP in Vivado? Or is the USB UART interface of this board simply not designed to transmit large files?
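
    To narrow this down, one test I may try next is a plain polled echo loop, so the PC side can compare what it sent with what comes back. A minimal sketch, assuming the same myUart instance initialized as in the fragment above:

    u8 byte;
    while (1) {
        if (XUartLite_Recv(&myUart, &byte, 1) == 1) {
            while (XUartLite_Send(&myUart, &byte, 1) == 0)
                ;   /* retry until the Tx FIFO accepts the byte */
        }
    }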

  8. Aside from Digilent staff, some people do not reply in order to answer a question; they only come here to show off. Even after finishing their reply, they still do not know what the question is. Their reply does nothing but confuse the asker by complicating simple things, with a mild threat thrown in. I hope askers are smart enough to identify such people.

  9. I think a low-end Arty A7-100T is more than enough. The book "Architecting High-Performance Embedded Systems: Design and build high-performance real-time digital systems based on FPGAs and custom circuits" by Jim Ledin uses this board to implement a digital oscilloscope sampling at 100 MHz with 14 bits of resolution. Digilent is offering a 15% discount right now.

    Note, however, that the step-by-step tutorial for the baseline Vivado project in this book (the same as this Digilent tutorial) does not work under the current version of the Xilinx Vivado/Vitis software (2022.2.2). I still don't know how to use the Ethernet port of this board, and you can get no technical support in this regard from Digilent or Xilinx.

    If you somehow get through this tutorial, I would greatly appreciate it if you could share with us how you managed it.
     

  10. Thanks for being on it. I can get through the Vivado part, but I just can't get through the Vitis part, even though I followed the steps exactly. The errors I encountered are: 1) DHCP does not work, and 2) telnet cannot connect to the server. The messages I get in the Vitis Serial Terminal are:

    Quote

     


    -----lwIP Socket Mode Echo server Demo Application ------
    link speed: 100
    ERROR: DHCP request timed out
    Configuring default IP of 192.168.1.10
    Board IP: 192.168.1.10

    Netmask : 255.255.255.0

    Gateway : 192.168.1.1


                  Server   Port Connect With..
    -------------------- ------ --------------------
             echo server      7 $ telnet <board_ip> 7

     

    Do you have some quick suggestions? I'm using Vivado/Vitis 2022.2.2 on Windows 10.

  11. The tutorial is https://digilent.com/reference/learn/programmable-logic/tutorials/arty-getting-started-with-microblaze-servers/start.

    If you follow it, you'll see that this tutorial is erroneous under the current version of Vivado and Vitis (2022.2.2). So, is there any chance that Digilent could update it so that your customers can follow it without errors? Thanks.

    Meanwhile, can you please add a note in red at the beginning of the webpage: "This tutorial is erroneous. Please don't follow it in Vivado and Vitis 2022.2.2"? That would at least avoid wasting a lot of your customers' time. Thanks.

     

  12. Thank you for the reply and the recommendations therein. I have an additional question about the role of the "Olimex ARM-USB-TINY-H USB Programmer". If we can use a micro-B USB cable to program the FPGA and flash on the development board through the USB JTAG interface, why can't we use the same cable and interface for RISC-V programming? What makes the Olimex ARM-USB-TINY-H USB Programmer absolutely necessary, instead of the USB JTAG, for RISC-V programming on the FPGA? Thank you.

     

     

  13. artvvb's reply reveals the root cause. The original design, in which I got the same BACKBONE error, sends the clock signal from a Clocking Wizard to the MIG, not from the MIG to the Clocking Wizard as required by recent versions of Vivado (such as the 2022.2.2 I am using). After I reversed the clock path, the BACKBONE error disappeared.

    artvvb, can you please explain, though, why we should send the clock signal from the MIG to the other downstream components if we don't want to see the BACKBONE error? Also, I have another question about picking the clock connection here: why "Do not select the system clock input to the MIG"? Can you please answer? I'm a verified buyer of a Digilent product. Thanks a lot.

  14. There is a tutorial for "Running a RISC-V Processor on the Arty A7" on the Digilent webpage for the Arty A7 board. But it runs on Linux and requires the Arduino development environment. To make matters worse, an "Olimex ARM-USB-TINY-H USB Programmer" cable is needed. Since there is a "Getting Started with Vivado and Vitis for Baremetal Software Projects" tutorial alongside this RISC-V tutorial, which also uses a soft core, runs on Windows, uses Vivado and Vitis, and does not need additional hardware, I think we should also be able to run a RISC-V processor on the Arty A7 with Vivado and Vitis on Windows, without an Olimex ARM-USB-TINY-H USB Programmer cable. Why not? RISC-V is no more special than Arm or MicroBlaze in terms of SoC development workflow. So, do you happen to know of a step-by-step RISC-V tutorial (preferably a video) similar to "Getting Started with Vivado and Vitis for Baremetal Software Projects", only with Arm or MicroBlaze replaced by RISC-V? Any RISC-V IP would be OK, and it can be either baremetal or on Linux, as long as I can follow it. Thanks for your recommendation.

  15. OK, I think I have found an answer to what I want. The FPGA is a Xilinx Virtex UltraScale+ HBM, and the board is called the VCU128. Of course, the price of the board is prohibitive for me, but I can at least simulate it on the host PC and get some idea of the characteristics of multi-channel memory. With each layer of HBM having two channels, think how many channels there are if two 8-layer HBM stacks are installed, and how many with four HBM chips. Enough for my application scenario.

    Here I have some words for zygot. First, thank you for the detailed reply. I don't know whether you are an employee of Digilent or Xilinx. If not, I have nothing to say. If you are, then as technical support your job here is not to show off how much you know. Instead, your job here is to assist (possibly potential) customers. To that end, when faced with a question, you should read it, then try to understand it, then think, before answering. But you just skim through the question, pick up a few words that you happen to know about, assume (instead of understand) that this is what the question is asking, and then dump out what you know about it, totally ignorant of what the asker really wanted to know in the original question.

    Take my original question as an example. What I want is an FPGA development environment in which there are multiple memory channels. I did NOT imply that I would simulate a PC and restrict the environment "along the lines of what your PC has, with its CPU and IO chipset and all". I know I mentioned the DDR3 installed on the Arty board; that's because DDR was the only type of external memory I knew of at the time of asking. But you got excited by this and assumed that I was asking about DDR multi-channel memory, instead of thinking one step further about what I really want. So, to repeat: when faced with a question, I hope you can read it from beginning to end, make sure you understand it from the asker's side, think, and only then answer. I don't mean to assail you; I just hope you can do your job well.

    Maybe your technical background in DDR really is very strong, so many people have to agree with you because they don't even know what you are talking about. But at the same time you will develop the illusion that once you answer, your answer is correct, that other people must agree with you and follow your exact line of thinking, and that otherwise they are simply wrong; you may never have considered the possibility that your answer is correct but not to the point. You are just reciting your knowledge, not answering. I think this is what you need to be aware of and try to improve. In closing, thanks again for your replies.

  16. 9 hours ago, zygot said:

    OK... so I guessed wrong. Your definition of dual-channel is along the lines of what your PC has, with its CPU and IO chipset and all.

    FPGA devices have a limited number of IO pins so you aren't going to find many FPGA development boards with multiple banks of external memory, each with their own address, data and control lines. The only exception that I know of is the Cyclone V GT Development board. This one has a 40-bit (DQ) DDR3 bank connected to the Cyclone hard controller and a completely separate 64-bit DDR bank connected to a soft-memory controller implemented in programmable logic. But, and this is a very nice FPGA development board for a lot of reasons, even this board won't suit your purposes. It's kind of like going to the Porsche dealer and asking about their fastest street legal car and, oh by the way I'd like to be able to haul my 20 tons of rock with it, and I need a hydraulic dumper. Two goals requiring separate purpose-designed vehicles...

    Your definition of dual-channel is not similar to the one that I used in my previous post. No, it's extremely unlikely that you are ever going to do an FPGA project where you can test the performance of external memory comparing one versus two channels. Now, if you could build your own board that has add-on DDR cards with a transceiver interface to the FPGA device... you might have a chance. 4 lanes of mDP transceivers can definitely transfer more data than 2 lanes of mDP transceivers. But you still have a lot of issues, like coherency, to resolve. This really isn't a problem that anyone I know of wants to solve.

    So what's your definition of dual-channel mode? Wikipedia states it clearly, so I assume it's a commonly accepted concept even in the FPGA community. At least I entered the Porsche store and talked about a 20-ton hydraulic dumper from the very beginning. It is the dealer who knows only street-legal cars and thought I wanted to buy one.

  17. 14 hours ago, zygot said:

    It's not clear what you mean by "dual_channel mode"

    Let me assume that you are referring to the kind of external memory controller that the Spartan 6 series sported; this hard block could support multiple channels. Using the concept of mode is probably confusing.

    The short answer is yes.

    Since AMD/Xilinx decided to drop hard DDR controller blocks in its devices, they also decided, for some reason, not to provide a soft multi-channel DDR controller. The only external memory controller IP for Series 7 FPGA devices that Vivado provides is the MIG. This IP actually isn't a complete solution, as the user has to create logic to control the MIG controller. So, yes, you can create a multi-channel controller that runs the MIG controller. You need to decide whether your channels get access to the external memory using a priority or round-robin scheme. BTW, the Spartan 6 hard controller wasn't a complete solution either; the user had to provide logic to control the controller.

    If you want to see a simplistic multi-channel priority driven MIG you can look at the source in this tutorial: https://forum.digilent.com/topic/25315-using-ddr-as-a-video-frame-buffer/

    I asked the same question on the EE Stack Exchange here. I agree with the convincing but discouraging answer, so I think the answer is no. After all, the hardware configuration of the Arty A7 board is fixed -- there is only a single DDR3 memory chip installed. My purpose is to implement a dual-channel architecture and verify that it is really faster than a single channel. To that end, an FPGA development board should have multiple DDR memory chips installed. Then we could embed multiple memory controller IP cores that support a dual-channel architecture into the FPGA (so it is internal). Maybe Digilent has such a board, but I guess it must be very expensive. Even if I could afford one, putting two MIG memory controllers into the FPGA to implement dual-channel operation may be more challenging than simply embedding one existing, mature, and complete dual-channel memory controller IP core.

    Many thanks for your link. From the filename it seems to be a dual-channel memory controller IP core aimed at the Nexys A7. But the Nexys A7 also has only one DDR chip, just like the Arty A7. So, how did you realize dual-channel operation? Did you add any additional circuitry to the Nexys A7? I'm still new to this area, so maybe I have missed some important point and underestimated the difficulty. Perhaps I will have a deeper understanding in the future, after I have studied enough background material. PS: I would like to use a round-robin scheme.
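
    To be concrete about the round-robin idea: after a grant, the arbiter starts its next search from the requester just after the one it served. A minimal sketch of that selection rule in C (the eventual design would of course be RTL; this is only to pin down the intended behavior, and the function name is mine):

    /* Round-robin pick: given a bitmask of pending requests and the index of the
       channel granted last, return the next channel to grant, or -1 if none request. */
    int rr_pick(unsigned requests, int last_grant, int num_channels)
    {
        for (int offset = 1; offset <= num_channels; offset++) {
            int ch = (last_grant + offset) % num_channels;
            if (requests & (1u << ch))
                return ch;
        }
        return -1;
    }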

  18. Hi, JColvin,

    Thank you for your reply. External DDR memory is really a problem; thank you for mentioning that. Let's forget about Linux for the moment. This is what I am thinking: if we can embed a RISC-V CPU into the Artix-7 FPGA (model: XC7A35T) on the Basys 3, is there any chance to also embed some memory into it in place of DDR? I don't need a very large memory space for code -- 10 KB is enough for my design. Since it is on-chip memory, its access speed should be no slower than actual external DDR. As for data, that relates to another question of mine: is there any chance to transfer data from the host PC to the FPGA through the micro-USB cable? I can allocate all the remaining FPGA hardware resources available on the Basys 3 board to this memory.

    Thanks again for your reply.
