
ncondor

Members
  • Posts: 2
  • Joined
  • Last visited

Reputation Activity

  1. Like
    ncondor reacted to D@n in FPGA on-board microphone band pass filter with audio output   
    Okay, I'm not Digilent, but I'll step in.  I run a blog/website which many here have found valuable.
    If you are a beginner, get comfortable with your hardware first.  Know how to build a design and load it on your board.  Know how to control the LEDs.  Be able to issue a command from your PC that will adjust the LEDs on your board and read back the status of your buttons.  Learn how to peek inside your FPGA board to know what it's doing.  While a project in itself, you'll need this background for your next steps.  I might argue you should learn how to do the above via the Zynq chip interacting with your hardware as well, but that's beyond most of what I've done to date.  (I'm personally still stuck in the pre-Zynq world, and still loving every minute of it ...)
    The next order of business is to look up the interface you'll need to use for your project.  It is I2S.  I2S is a fairly easy standard to work with, and it's been around for some time.  Indeed, I'm putting together a demo of the features on my Nexys Video board and I could be convinced to add my I2S work to that demo.  That said, most of my experience is with a prior chip (the ADAU1761) on the Nexys Video board, so your configuration experience is likely to be very different from mine.  You should download a copy of the SSM2603 specification, as you are likely to need to reference it many times over.
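    To make the framing concrete, here's a rough C model of one frame in the standard I2S mode: data goes out MSB first, delayed one BCLK after each LRCLK edge, with LRCLK low selecting the left channel.  The 24-bit samples and 32 BCLKs per channel below are assumptions for illustration only; check the SSM2603 datasheet for the word lengths and modes it actually supports.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch only: one 64-BCLK frame of standard I2S with 24-bit samples
     * padded to 32 BCLKs per channel.  Data is sent MSB first, one BCLK
     * after the LRCLK edge; LRCLK low selects the left channel. */
    static void send_frame(int32_t left24, int32_t right24)
    {
        for (int bclk = 0; bclk < 64; bclk++) {
            int     lrclk = (bclk < 32) ? 0 : 1;   /* 0 = left, 1 = right */
            int     slot  = bclk % 32;             /* position within the channel */
            int32_t word  = lrclk ? right24 : left24;
            int     bit;

            if (slot == 0)
                bit = 0;                           /* the one-BCLK delay after the edge */
            else if (slot <= 24)
                bit = (word >> (24 - slot)) & 1;   /* bit 23 first, down to bit 0 */
            else
                bit = 0;                           /* padding out to 32 BCLKs */

            printf("%d", bit);                     /* in hardware this is SDATA */
        }
        printf("\n");
    }

    int main(void)
    {
        send_frame(0x123456, 0x654321);            /* arbitrary test samples */
        return 0;
    }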
    I should also point out that you aren't likely to be able to get any Xilinx PLLs to produce the 12.288 MHz clock rate you need.  It's just ... not a nice ratio to get to from any input clocks.  Don't despair, there are other approaches.  In particular, I've used this technique very successfully in the past, particularly to generate this exact frequency.
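    To illustrate one common way around the problem (a phase accumulator whose overflow marks each output edge, giving the right frequency on average with one system-clock period of jitter), here's a small C sketch of the arithmetic.  The 100 MHz system clock and 32-bit accumulator width are assumptions for illustration, and whether the codec tolerates the jitter is something you'd need to verify.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch only: a phase accumulator that wraps, on average, 12.288
     * million times per second when stepped at an assumed 100 MHz system
     * clock.  Each wrap marks one edge/enable of the generated "clock";
     * the edges jitter by one system-clock period. */
    int main(void)
    {
        const double f_sys = 100.0e6;      /* assumed system clock            */
        const double f_out = 12.288e6;     /* desired average output rate     */

        /* Increment so the 32-bit accumulator wraps f_out times for every
         * f_sys additions: inc = 2^32 * f_out / f_sys. */
        uint32_t inc = (uint32_t)((f_out / f_sys) * 4294967296.0 + 0.5);
        printf("phase increment = 0x%08X\n", (unsigned)inc);

        /* Simulate one millisecond of system clocks and count the wraps. */
        uint32_t acc = 0;
        unsigned long long edges = 0;
        for (long i = 0; i < (long)(f_sys / 1000.0); i++) {
            uint32_t next = acc + inc;
            if (next < acc)                /* carry out == one output edge   */
                edges++;
            acc = next;
        }
        printf("edges in 1 ms: %llu (expect about 12288)\n", edges);
        return 0;
    }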
    Also, while I2S is "serial" in nature, in that it sends one bit at a time, I'd caution against calling this a "serial" port or protocol.  Most times you read about "serial ports" you'll be reading about a completely different protocol: UART.  If you need to google anything for inspiration, google I2S.
    You will need to configure the SSM2603 via an I2C port.  From the schematic, it looks like you can do this either via the FPGA or the Zynq.  If the SSM2603 is anything like the ADAU1761, you have a lot of reading to do and a lot of options to consider.  Perhaps the first/best/easiest step on your road to success will be to learn to read and write registers within the SSM2603 via the I2C port.  Convince yourself that you can do this successfully.
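    As a concrete starting point for those register writes, here's a C sketch of how the two payload bytes are typically packed for this codec family: a 7-bit register address followed by a 9-bit value.  The 0x1A device address (CSB tied low) and the register number in the example are assumptions to verify against the SSM2603 datasheet, not a tested init sequence.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch only: byte packing for an SSM2603-style register write over
     * I2C.  Assumptions to check against the datasheet: 7-bit device
     * address 0x1A (CSB pin low), 7-bit register address, 9-bit data. */
    #define CODEC_I2C_ADDR  0x1A           /* assumed; 0x1B if CSB is high */

    static void pack_reg_write(uint8_t reg, uint16_t val9, uint8_t out[2])
    {
        out[0] = (uint8_t)((reg << 1) | ((val9 >> 8) & 0x1)); /* reg[6:0], data[8] */
        out[1] = (uint8_t)(val9 & 0xFF);                       /* data[7:0]        */
    }

    int main(void)
    {
        uint8_t bytes[2];

        /* Illustrative only: register 9 is the "active" control in this
         * codec family, but confirm the number and value before using it. */
        pack_reg_write(9, 0x001, bytes);
        printf("I2C write to 0x%02X: 0x%02X 0x%02X\n",
               CODEC_I2C_ADDR, bytes[0], bytes[1]);
        return 0;
    }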
    Once you can configure the SSM2603, configure the headphones (not the microphone; one thing at a time) at first and simply output a tone.  Play that through a headset, speakers, whatever.  Change the frequency of the tone.  Make sure it's not a mistake that you can hear this on your speakers.  Then turn it off before it drives you and your family crazy.
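    If it helps, a tone generator is really just a phase accumulator indexing a sine table, which maps directly onto an accumulator plus a block RAM in the fabric.  Here's a C sketch of that structure; the 48 kHz sample rate and table size are assumptions for illustration.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch only: generate 16-bit samples of a test tone with a phase
     * accumulator and a 256-entry sine table.  In hardware, the table
     * becomes a block RAM and the samples feed the I2S transmitter. */
    #define FS        48000.0
    #define TABLE_LEN 256
    #define TWO_PI    6.283185307179586

    int main(void)
    {
        int16_t table[TABLE_LEN];
        for (int i = 0; i < TABLE_LEN; i++)
            table[i] = (int16_t)(30000.0 * sin(TWO_PI * i / TABLE_LEN));

        double   f_tone = 440.0;                 /* change this to move the pitch */
        uint32_t inc    = (uint32_t)(f_tone / FS * 4294967296.0 + 0.5);
        uint32_t phase  = 0;

        /* Print one millisecond of samples; a real design streams them out. */
        for (int n = 0; n < (int)(FS / 1000.0); n++) {
            int16_t sample = table[phase >> 24]; /* top 8 bits index the table */
            printf("%d\n", sample);
            phase += inc;
        }
        return 0;
    }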
    I think you'll find the AXI Stream protocol to be a very valuable tool when moving audio around within your design.  You might wish to look it up early and get familiar with it.  This knowledge will take you a long way.
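    The core rule is simple: a word moves only on a clock cycle where TVALID and TREADY are both high, and the producer must hold its data steady until that happens.  Here's a tiny behavioral C sketch of that handshake; the names are only for illustration, and this is a model for reasoning, not RTL and not a Xilinx API.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Behavioral sketch of one AXI-Stream link: a transfer happens only on
     * a cycle where both TVALID and TREADY are high. */
    typedef struct {
        bool    tvalid;
        bool    tready;
        int32_t tdata;
    } axis_t;

    static bool axis_step(const axis_t *bus, int32_t *sink)
    {
        if (bus->tvalid && bus->tready) {
            *sink = bus->tdata;          /* the one and only transfer condition */
            return true;
        }
        return false;                    /* otherwise the word just waits */
    }

    int main(void)
    {
        axis_t  bus = { .tvalid = true, .tready = false, .tdata = 1234 };
        int32_t got = 0;

        /* Consumer stalls: nothing moves; the producer must hold tdata. */
        printf("cycle 0: transferred = %d\n", (int)axis_step(&bus, &got));

        /* Consumer ready: the word moves on this cycle. */
        bus.tready = true;
        printf("cycle 1: transferred = %d, data = %d\n",
               (int)axis_step(&bus, &got), (int)got);
        return 0;
    }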
    Let me also caution you, before you dive into your ultimate plan, that you will need to beware of feedback through your system.  A microphone that can hear itself can be a dangerous thing, and the feedback can be really nasty.  Make sure you have a mute button on hand to spare your ears.
    Dan
  2. Like
    ncondor reacted to artvvb in FPGA on-board microphone band pass filter with audio output   
    Hi @ncondor
    First off, for some other potential places to look for help, you might try the NI support forum: https://forums.ni.com/ (though I would not expect a response...), or, if you have a service contract with NI, their email support channel.
    Second, and this may or may not make the rest of my answer redundant, but: What's the purpose of your project, and why did you pick this particular hardware to do it? There are other, better-supported products that might be a better fit if you're looking to learn about FPGAs and design a system that captures audio and filters it before outputting it again... Any Digilent FPGA board and a Pmod I2S2 could be suitable, for example.
    To try to answer your question about how to approach this project with the hardware on hand:
    If you really need to use the DSDB: Digilent may not be able to provide much support on the Elvis II end of things. The engineers who worked on the DSDB directly are no longer with the company, so I'm also flying a little bit blind. That said, the DSDB itself is a Zynq-based development board similar to some of the others listed on the Digilent website. Looking through the manual and schematics, its audio codec uses the same part in nearly the same configuration as the audio codec on the Zybo Z7. There's a DMA Audio demo for the Zybo Z7 that uses the audio codec to record audio data from the line in or microphone jack into DDR memory and then play it back out through HPH OUT. The project is fairly complicated, but the best bet might be to try porting it to the DSDB, which would involve at least swapping the Zynq configuration used in the Vivado project for the DSDB preset, updating the location constraints, and potentially making substantial other hardware or software changes. The demo is also unfortunately architected to save and then send data on separate button pushes, meaning that it isn't able to forward signals from an input to an output in real time without modifications.
    In general, the DMA audio demo has a couple of subcomponents that are particularly useful: 1. it shows how you might control the codec over I2C, and 2. it has an IP that handles I2S communication with the codec and connects the data flowing to and from the codec to the rest of the Zynq design through AXI Stream interfaces.
    If porting the demo like this works, I would look to implement a filtering algorithm first in Zynq PS software, then potentially move that implementation to the PL while modifying the project to pass data from input to output in real time. I'd also strip some audio cables and wire the audio output from the DSDB directly to the Elvis analog inputs, or output directly to some headphones or a speaker (avoiding the use of Elvis and LabVIEW entirely).
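    To give a flavour of what that first PS-side pass could look like, here's a minimal C sketch of a sample-by-sample FIR filter in Q15 fixed point. The coefficients are placeholders, not a designed band-pass response; a real design would come from a filter-design tool (MATLAB/Octave fir1, scipy.signal.firwin, etc.) and likely use many more taps.

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch only: a direct-form FIR run one sample at a time, the sort of
     * thing that could sit in the PS loop between the DMA receive and send
     * buffers.  Coefficients below are placeholders in Q15 fixed point. */
    #define NTAPS 8

    static const int16_t coeff_q15[NTAPS] = {
        /* placeholder taps, NOT a designed band-pass response */
        -1024, 2048, 6144, 9216, 9216, 6144, 2048, -1024
    };

    static int16_t history[NTAPS];

    int16_t fir_filter(int16_t in)
    {
        /* Shift the delay line and insert the newest sample. */
        for (size_t i = NTAPS - 1; i > 0; i--)
            history[i] = history[i - 1];
        history[0] = in;

        /* Multiply-accumulate in 32 bits, then scale back down from Q15. */
        int32_t acc = 0;
        for (size_t i = 0; i < NTAPS; i++)
            acc += (int32_t)coeff_q15[i] * history[i];

        return (int16_t)(acc >> 15);
    }

    The same structure also ports to the PL later, since each tap is one multiply-accumulate that a DSP slice handles naturally.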
    The ADC and DAC referred to here appear to be connected to the myRIO Extension Port connector, which is intended for use with other external hardware. It's still potentially possible to use the ADC and DAC to help interface with the Elvis, but it would require additional controllers for both components written in an HDL.
    Hope this helps, and my apologies that the proposed solution is much more complicated than it seems like it should need to be at first glance...
    Arthur