
FPGA on-board microphone band pass filter with audio output


ncondor

Question

Essentially, what I have been trying to do is connect a microphone to the J10 "Mic In" port, convert the analogue signal to digital, and then implement some sort of high-pass or low-pass filter (ideally both, for a band-pass), and finally output the result through the headphone jack or play it through the computer speakers (it doesn't matter which) to confirm how the signal is affected.

I currently have the following board:
https://digilent.com/reference/dsdb/dsdb

DSDB Board

It is attached to an NI ELVIS II+, and I have the following cables connected to the PC: USB-B (to the ELVIS) and micro-USB (connected to the UART bridge in the bottom-right corner of the DSDB).

I have the following programmes downloaded, but I am no expert in them. I am not expecting full answers, just pointers on what needs to be done and why:

Vivado 2019
LabVIEW

If there is any other programme that would be useful or better suited, please let me know!

I have read the manual, and the board does have an onboard ADC and DAC; however, I believe they are connected to the serial bus (I may be mistaken, please correct me).

I have attached some photos. In order to have a master clock of 12.288 MHz, am I supposed to use the Clocking Wizard in Vivado and modify it? Please let me know if this is correct. (The default clock is 125 MHz.)

I am not well versed in codecs: do any changes need to be made to the codec?

I can use any programme, so please suggest alternatives if they are more convenient. I am also aware that there is an alternative method of connecting the microphone's (+ve) terminal to AI0 and the sleeve to (-ve) or ground.
Would this method be more convenient, or would I run into other issues?

I do, however, have some good experience in LabVIEW.

TL;DR

Just to reiterate: I want to receive a signal, process it by applying some sort of filter, and then output it to the HPH OUT port, or play it back on the computer's speakers if possible.

Anything at all would be greatly appreciated. If I should post this somewhere else, please let me know and I will redirect it.



3 answers to this question



Hi @ncondor

First off, for some other potential places to look for help, you might try the NI support forum: https://forums.ni.com/ (though I would not expect a response...), or, if you have a service contract with NI, their email support channel.

Second, and this may or may not make the rest of my answer redundant: what's the purpose of your project, and why did you pick this particular hardware to do it? If you're looking to learn about FPGAs and design a system that captures audio, filters it, and outputs it again, there are better-supported products that might be better suited. Any Digilent FPGA board plus a Pmod I2S2 could work, for example.

To try to answer your question about how to approach this project with the hardware on hand:

If you really need to use the DSDB: Digilent may not be able to provide much support on the ELVIS II+ end of things. The engineers who worked on the DSDB directly are no longer with the company, so I'm also flying a little bit blind. That said, the DSDB itself is a Zynq-based development board similar to some of the others listed on the Digilent website. Looking through the manual and schematics, the audio codec uses the same part in nearly the same configuration as the audio codec on the Zybo Z7. There's a DMA Audio demo for the Zybo Z7 that uses the audio codec to record audio data from the line in or microphone jack into DDR memory and then play it back out through HPH OUT. The project is fairly complicated, but your best bet might be to try porting it to the DSDB, which would involve at least swapping the Zynq configuration used in the Vivado project for the DSDB preset, updating the location constraints, and potentially making substantial other hardware or software changes. The demo is also unfortunately architected to save and then send data on separate button pushes, meaning that it can't forward signals from an input to an output in real time without modifications.

In general, the DMA audio demo has a couple of subcomponents that are particularly useful - 1. it shows how you might control the codec over I2C, and 2. it has an IP that handles I2S communication with the codec and interfaces data coming from and going to the codec to the rest of the Zynq design through AXI stream interfaces.

If porting the demo works, I would look to implement a filtering algorithm first in Zynq PS software, then potentially move that implementation to the PL while modifying the project to pass data from input to output in real time. I'd also strip some audio cables and wire the audio output from the DSDB directly to the ELVIS analog inputs, or output directly to some headphones or a speaker (avoiding the use of the ELVIS and LabVIEW entirely).
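As a sketch of what that first PS-software filtering pass could look like, here is a minimal biquad band-pass filter in plain Python, using the widely cited RBJ "Audio EQ Cookbook" coefficient formulas. The 48 kHz sample rate, 1 kHz center frequency, and Q value are placeholder choices; a C port running on the Zynq PS would follow the same structure on fixed-point samples.

```python
import math

def bandpass_coeffs(fs, f0, q):
    """RBJ 'Audio EQ Cookbook' band-pass (constant 0 dB peak gain)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [alpha, 0.0, -alpha]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    # Normalize so that a[0] == 1
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad(samples, b, a):
    """Direct Form I biquad; filters one channel of samples."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Example: 48 kHz sample rate, 1 kHz center, Q = 1 (placeholder values).
b, a = bandpass_coeffs(48000, 1000, 1.0)
dc = [1.0] * 4096                 # a band-pass should reject DC
filtered = biquad(dc, b, a)
print(abs(filtered[-1]))          # settles toward zero
```

A quick sanity check is that a DC input decays to zero while a tone at the center frequency passes through at roughly unity gain; once that behaves, the same inner loop maps naturally onto streaming samples.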

On 7/25/2024 at 9:01 AM, ncondor said:

I have read the manual, and the board does have an onboard ADC and DAC; however, I believe they are connected to the serial bus (I may be mistaken, please correct me).

The ADC and DAC referred to here appear to be connected to the myRIO Extension Port connector, which would be intended for use with other external hardware. It's still potentially possible to use the ADC and DAC to help interface with the Elvis, but it would require additional controllers for both components written in an HDL.

Hope this helps, and my apologies that the proposed solution is much more complicated than it seems at first glance it needs to be...

Arthur



Okay, I'm not Digilent, but I'll step in.  I run a blog/website which many here have found valuable.

If you are a beginner, get comfortable with your hardware first.  Know how to build a design and load it onto your board.  Know how to control the LEDs.  Be able to issue a command from your PC that will adjust the LEDs on your board and read back the status of your buttons.  Learn how to peek inside your FPGA board to know what it's doing.  While a project in itself, you'll need this background for your next steps.  I might argue you should learn how to do the above via the Zynq chip interacting with your hardware as well, but that's beyond most of what I've done to date.  (I'm personally still stuck in the pre-Zynq world, and still loving every minute of it ...)

The next order of business is to look up the interface you'll need to use for your project.  It is I2S.  I2S is a fairly easy standard to work with, and it's been around for some time.  Indeed, I'm putting together a demo of the features on my Nexys Video board and I could be convinced to add my I2S work to that demo.  That said, most of my experience is with a prior chip (the ADAU1761) on the Nexys Video board, so your configuration experience is likely to be very different from mine.  You should download a copy of the SSM2603 specification, as you are likely to need to reference it many times over.

I should also point out that you aren't likely to be able to get any Xilinx PLLs to produce the 12.288 MHz clock rate you need.  It's just ... not a nice ratio to get to from any input clocks.  Don't despair, there are other approaches.  In particular, I've used this technique very successfully in the past--particularly to generate this exact frequency.
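To illustrate why the ratio is awkward, and to sketch the accumulator-style technique being alluded to (a phase accumulator whose top bit toggles at the target rate on average), here is a quick Python check of the numbers. The 125 MHz input clock and the 32-bit accumulator width are assumptions for illustration:

```python
from fractions import Fraction

F_IN = 125_000_000      # assumed board clock
F_OUT = 12_288_000      # codec master clock target
ACC_BITS = 32

# Exact ratio: 12.288/125 reduces to 1536/15625, and 15625 = 5**6.
# A single MMCM's multiply/divide settings (integer dividers plus
# 1/8-step fractional multiply/divide) can't realize that denominator
# exactly from a 125 MHz input.
ratio = Fraction(F_OUT, F_IN)
print(ratio)            # 1536/15625

# Phase-accumulator approach: add `step` to an ACC_BITS-wide counter
# every input clock; the accumulator MSB then toggles at an average
# rate of (step / 2**ACC_BITS) * F_IN, with up to one input-clock
# period of jitter on each edge.
step = round(F_OUT / F_IN * 2**ACC_BITS)
f_avg = step / 2**ACC_BITS * F_IN
print(step, f_avg)      # average frequency error well under 0.1 Hz
```

The jitter makes this unsuitable as a low-noise sampling clock by itself, but for generating a codec MCLK-style reference (or as a clock-enable strobe inside the fabric) the averaged frequency is extremely close.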

Also, while I2S is "serial" in nature, in that it sends one bit at a time, I'd caution against calling this a "serial" port or protocol.  Most times you read about "serial ports" you'll be reading about a completely different protocol: UART.  If you need to google anything for inspiration, google I2S.

You will need to configure the SSM2603 via an I2C port.  From the schematic, it looks like you can do this either via the FPGA or the Zynq.  If the SSM2603 is anything like the ADAU1761, you have a lot of reading to do and a lot of options to consider.  Perhaps the first/best/easiest step on your road to success will be to learn to read and write registers within the SSM2603 via the I2C port.  Convince yourself that you can do this successfully.
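On the wire, the SSM2603 (like the related WM8731) packs a 7-bit register address and a 9-bit data value into the two payload bytes of each I2C control write. A small Python sketch of that packing follows; the software-reset example uses register R15 from the datasheet's register map, but verify all register addresses and values against the SSM2603 datasheet before relying on them:

```python
def ssm2603_i2c_bytes(reg, value):
    """Pack a 7-bit register address and a 9-bit value into the two
    payload bytes of an SSM2603/WM8731-style I2C control write."""
    assert 0 <= reg < 0x80 and 0 <= value < 0x200
    hi = ((reg & 0x7F) << 1) | ((value >> 8) & 0x01)  # addr + data MSB
    lo = value & 0xFF                                  # data bits 7:0
    return bytes([hi, lo])

# Example: writing 0 to register R15 (0x0F) performs a software reset
# per the datasheet register map -- double-check values there first.
print(ssm2603_i2c_bytes(0x0F, 0x000).hex())  # '1e00'
```

Getting this two-byte framing right (and reading it back where the part allows) is a good way to convince yourself the I2C link works before touching any audio settings.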

Once you can configure the SSM2603, configure the headphones first (not the microphone; one thing at a time) and simply output a tone.  Play that through a headset, speakers, whatever.  Change the frequency of the tone.  Make sure it's not a mistake that you can hear this on your speakers.  Then turn it off before it drives you and your family crazy.
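One common way to produce that test tone is to precompute a one-period sine lookup table (in block RAM, or in PS software) and step through it at a rate set by the desired frequency. A quick Python sketch of building a 16-bit table and computing the step size; the table length and sample rate here are placeholder choices:

```python
import math

TABLE_LEN = 256
FULL_SCALE = 32767           # 16-bit signed audio samples

# One full sine period, quantized to 16 bits.
table = [round(FULL_SCALE * math.sin(2 * math.pi * i / TABLE_LEN))
         for i in range(TABLE_LEN)]

def phase_step(tone_hz, sample_rate_hz, table_len=TABLE_LEN):
    """Table entries to advance per output sample for a given tone."""
    return tone_hz * table_len / sample_rate_hz

# Example: a 440 Hz tone at a 48 kHz sample rate (placeholder values).
step = phase_step(440, 48000)
print(step)                  # about 2.35 table entries per sample
```

In hardware this becomes a fixed-point phase accumulator indexing the table, so changing the tone frequency is just changing the step value.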

I think you'll find the AXI Stream protocol to be a very valuable tool when moving audio around within your design.  You might wish to look it up early and get familiar with it.  This knowledge will take you a long way.
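The core of AXI Stream is the TVALID/TREADY handshake: a sample moves only in a cycle where both are high, which is what gives you clean backpressure between stages. A tiny Python model of just that rule (purely illustrative, not tied to any actual IP):

```python
import random

def stream_transfer(samples, ready_pattern):
    """Model one AXI-Stream link: the source holds TVALID (and the
    same data) until the sink asserts TREADY in the same cycle."""
    received = []
    idx = 0
    for ready in ready_pattern:
        valid = idx < len(samples)   # source still has data to offer
        if valid and ready:          # a transfer happens this cycle
            received.append(samples[idx])
            idx += 1
    return received

# A sink that randomly stalls still receives every sample, in order.
random.seed(0)
data = list(range(8))
ready = [random.random() < 0.5 for _ in range(100)]
print(stream_transfer(data, ready))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The key discipline the model captures is that the source must not drop or change data while waiting for TREADY; every stage in an audio pipeline that obeys this composes safely with the others.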

Let me also caution you, before you dive into your ultimate plan, that you will need to be wary of feedback through your system.  A microphone that can hear itself can be a dangerous thing, and the feedback can be really nasty.  Make sure you have a mute button on hand to spare your ears.

Dan


On 7/26/2024 at 11:30 PM, artvvb said:


Hello, thank you for your response. I am currently using this board because it is the only thing I have available right now.

I am attempting this project to better understand how codecs work and to learn how the VHDL language is used in a more practical sense.

I understand that you have said this is a fairly complicated project; however, I would really like to learn how to do it.

I really do appreciate your link to the DMA audio demo as I had no idea how to even start.

 

I have actually confirmed that, through the ELVIS analog inputs, I am able to easily process and output the signal through MATLAB and LabVIEW; however, I will need to learn more about how to output directly to the speakers without this software.

 

I would like to reiterate my appreciation for this reply. Some of the lecturers I asked for help told me this would be quite difficult to do.

Thank you very much for putting me a step forward in the right direction.

