
Speech Processing using Zybo Z7-10 board


donald

Question

10 answers to this question

Recommended Posts


Hi @donald,

If you want to blink LEDs on the board, you can do so by following Digilent's guide here: https://digilent.com/reference/programmable-logic/guides/getting-started-with-ipi.
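In case it's useful, here is a minimal bare-metal sketch of the kind of LED blinker that guide walks you through, assuming an AXI GPIO block wired to the LEDs in your block design. XPAR_AXI_GPIO_0_DEVICE_ID and the channel number come from your own xparameters.h, so treat them as placeholders:

```cpp
// Minimal bare-metal LED blink through AXI GPIO (Vitis/SDK standalone app).
// Device ID and channel depend on your block design; adjust as needed.
#include "xgpio.h"
#include "xparameters.h"
#include "sleep.h"

int main() {
    XGpio gpio;
    XGpio_Initialize(&gpio, XPAR_AXI_GPIO_0_DEVICE_ID);
    XGpio_SetDataDirection(&gpio, 1, 0x0);      // channel 1: all pins outputs
    while (1) {
        XGpio_DiscreteWrite(&gpio, 1, 0xF);     // four LEDs on
        usleep(500000);
        XGpio_DiscreteWrite(&gpio, 1, 0x0);     // four LEDs off
        usleep(500000);
    }
    return 0;
}
```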

If you want a reference project that more generically detects sound input, there is a demo using the on-board audio for the Zybo Z7 available in its Resource Center here: https://digilent.com/reference/programmable-logic/zybo-z7/start#example_projects.

Speech processing/recognition, on the other hand, is not easy, nor does Digilent have any reference material for it. I did find a couple of articles that discuss some variety of speech processing on a Xilinx SoC system here and here, though.

Thanks,
JColvin



Thanks for your reply.

I am trying to implement audio recognition algorithms on an FPGA, and I got this error related to resource utilization.

I am referring to this project:

GitHub - shivarajagopal/ece5775-final: Voice Recognition using FPGA-Based Neural Networks

error.png



Hello @donald,
HLS may overestimate the resource usage during C synthesis. What you are seeing is not necessarily an error, just HLS warning you that it thinks the design may not fit in the FPGA. 
Try to implement the project and see if the resource usage improves. 
 



Hello @donald,
Building a neural network for an FPGA can be challenging, even with HLS.

In your first post you mentioned that you are new to FPGA design. If you have not done so yet, you might want to start with a simple audio processing project to make sure you can acquire data successfully.

If you want to reuse the project you mentioned, one way of lowering the resource usage is to replace the floating-point representation with a fixed-point one, reducing the number of bits per value. This might lower the accuracy of the model, but by negligible amounts; a sketch of what this looks like in HLS follows below. If you are not constrained to this specific project, you may want to generate a neural network implementation from an existing Python model with an external tool such as hls4ml. Note that even if such a tool meets your expectations in terms of resources, you will still have to connect the result to an audio source and make the connections necessary to create the entire application, so there is no running away from FPGA work.
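To make the fixed-point suggestion concrete, here is a minimal Vivado HLS sketch of a dense-layer multiply-accumulate using ap_fixed. The 16-bit width, 6 integer bits, and 64-element size are assumptions for illustration, not values from the linked project; profile your model's dynamic range before picking widths:

```cpp
#include "ap_fixed.h"

// 16 bits total, 6 integer bits (including sign), 10 fractional bits.
typedef ap_fixed<16, 6> nn_t;

// Fixed-point dot product, the core of a dense neural-network layer.
// Replacing float with nn_t avoids instantiating floating-point cores,
// which is usually where the LUT/DSP savings come from.
nn_t dense_dot(const nn_t w[64], const nn_t x[64]) {
    nn_t acc = 0;
    for (int i = 0; i < 64; i++) {
#pragma HLS PIPELINE II=1
        acc += w[i] * x[i];
    }
    return acc;
}
```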



Hi @Niță Eduard,

I am not building a neural network on the FPGA right now. I have already run the examples given in Chapter 5 of the Zynq tutorial book and learned a little about audio processing. I am just building a simple voice recorder IP in Vivado HLS and interfacing it with the same method explained in Chapter 5. But I am getting an error about a lack of resources (LUTs).

design.png

lack_resources.png



I have some thoughts that you can feel free to ignore if you want.

The first thing that anyone embarking on a project should do is decide on an appropriate platform for meeting the goals of the project. I have no reason to believe that you can't succeed in implementing some portion of "speech processing", however you might care to define that in real terms. Whether one of the least powerful ZYNQ devices, in terms of FPGA resources and ARM processing power, is the right choice is something to evaluate. A lot of this depends on the audio sample rate, sample storage requirements, whether you want to do real-time signal processing or post-capture processing, etc. Most of the projects that I do involve some calculation and preliminary experimental projects to make a basic assessment. Of course, available hardware interfaces like an audio CODEC make a big difference. Fortunately, you don't actually have to possess hardware in order to do this level of preparation. The current tools allow you to target just about any ZYNQ device to test out investigatory projects designed to give a sense of a particular platform's capabilities.

The next important question for AMD/Xilinx development is which tool is best to use. I don't have any real-world experience with HLS, so I can't comment. A lot of the calculus in deciding between HLS and the regular version of the tools is how experienced you are with logic development, and what kinds of resources you want to bootstrap off of to get started. All I want to say about that is that assuming a similar project implemented on one platform can be ported to any other is probably not a good idea.

Here are a few thoughts to consider, or not, if they don't make sense to you:

  • start off with one or two narrowly defined processing algorithms to implement, and prototype them on a PC using whatever tools you are comfortable with and its audio interfaces.
  • if you don't have extensive FPGA development experience, consider alternate approaches. For instance, a cheap FPGA board without ARM processors can implement what programmable logic is really good at, such as custom interfaces. You can connect an FPGA to a Raspberry Pi 3 or 4 through one or two SPI interfaces using DMA; 3-4 MB/s per SPI interface is a reasonable goal. The idea is to leverage the software tools and libraries available on the RPi to simplify achieving the initial project goals. You can always build on small successes to get to a final goal. Often, the shortest route to a goal that involves a lot of complexity is not a straight line.
  • it might be easier to get started by skipping most of the FPGA development flow if you are just learning it. Learning how to do programmable logic design while trying to implement a complex design is not for most people. If you want to use an FPGA board to capture 9600 Hz 16-bit audio from a CODEC, a UART at 921600 baud might be sufficient, even encoding each sample as ASCII hex characters (see the back-of-envelope check after this list). There are likely better alternatives, but I'm just throwing out ideas. My point is that it would be better to implement the project in your favorite language on a PC first and then try to port it to an embedded platform later. Too many variables spoil the soup... to mangle more than one phrase of wisdom.
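To sanity-check that last bullet, here is the arithmetic as a compile-time check. The encoding (one ASCII hex character per nibble, 10 UART bits per character) is an assumption about the scheme sketched above:

```cpp
// Back-of-envelope UART throughput check for 9600 Hz, 16-bit audio sent as hex.
constexpr long sample_rate_hz   = 9600;  // audio samples per second
constexpr long chars_per_sample = 4;     // 16 bits -> 4 hex nibbles -> 4 chars
constexpr long bits_per_char    = 10;    // start bit + 8 data bits + stop bit
constexpr long required_baud    = sample_rate_hz * chars_per_sample * bits_per_char;
static_assert(required_baud == 384000, "comfortably under a 921600 baud UART");
```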

It's hard to say from your post whether you have well-defined way-points and goals in mind or just want to dive into something that isn't well defined. Hoping to replicate something that you believe has been done, without knowing the details of the hardware and software development flow and the specific pieces involved, is hard to pull off.



Hi @zygot

Thank you for your long explanation.

However, your last sentence implies that I am using Vivado HLS and the FPGA without knowing the hardware and software side. That is an unrealistic comment and it does not make sense.

Yes, implementing signal processing on an FPGA platform is my first experience, although I have understood the audio examples using the FPGA.

Thank you once again.



Hello @donald,
Most of your LUTs are used in instances. You can analyze further where these resources are used in the table found at Utilization Estimates > Detail > Instance inside the C synthesis report; I've highlighted it in one of your screenshots. This may give you hints on where you can optimize your design. If the table shows one sub-function dominating the count, the sketch below shows one pragma-based way to cap its footprint.

[screenshot: C synthesis report with Utilization Estimates > Detail > Instance highlighted]
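For illustration, here is a Vivado HLS sketch of limiting how many hardware copies of a sub-function get generated; process_window is a hypothetical stand-in for whatever dominates your instance table, and the ALLOCATION pragma trades throughput for area by sharing one instance across all calls:

```cpp
// Placeholder sub-function standing in for the dominant instance.
void process_window(const short in[256], short out[256]) {
    for (int i = 0; i < 256; i++)
        out[i] = in[i] >> 1;    // dummy processing
}

void top(const short in[1024], short out[1024]) {
// Generate at most one hardware instance of process_window and share it.
#pragma HLS ALLOCATION instances=process_window limit=1 function
    for (int i = 0; i < 1024; i += 256)
        process_window(&in[i], &out[i]);
}
```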


 

