peepo Posted February 6, 2015

Lara, is the Zybo Image Processing from Sources tutorial ready? I loved the Quick Start Test Demo and the GitHub archive. You say: "We'll go into building these into a later tutorial." I'd particularly welcome a barebones git repository, as the current one is rather bulky!

Many thanks,
Jonathan
sbobrowicz Posted March 3, 2015

The source for that project was built using ISE/EDK, which has since been succeeded by Vivado. We are currently designing a video input/output demo in Vivado that also incorporates additional features. Rather than teach people to use the old tools, we will fully document the source for the new Vivado project instead. I describe the upcoming video project in a bit more detail in this post:

The new project will not include the linear filtering and colorspace conversion cores used in the GoPro project, but I've added a to-do item to port those two cores from EDK to Vivado, document them, and add them to this library: https://github.com/DigilentInc/vivado-library. That repository is where we will post all of our custom Vivado cores in the future, so that people can add them to their own designs as needed. After that, if there is enough demand, we will consider creating a tutorial that recreates the GoPro application by adding these cores to the Vivado video input/output project.

Timelines on these items are still pretty hazy. We will do our best to get them out as soon as possible; for updates on expected dates, you can contact support@digilentinc.com.
peepo Posted March 3, 2015 (Author)

Okay, good to know. Could you please keep it small, or publish it in parts? Feature creep seems particularly prevalent in VHDL, i.e. monolithic projects published without comments, which naturally makes them rather harder to use.
Evocati Posted September 21, 2015

Hi Sam,

Sorry to bother you here, since I have posted so many questions on this forum recently. But I do find that what you mentioned here is the most urgent thing for newbies like me who have very limited experience with FPGA design. I found tons of documentation and answers related to the GoPro project. Currently I'm trying to pass a signal through HDMI -> FPGA -> VGA, and the docs at https://github.com/DigilentInc/vivado-library and the discussion at https://forum.digilentinc.com/topic/560-help-with-a-zybo-video-design/#comment-1832 are extremely useful. However, none of the information is complete and solid, and when different IPs are put together it is hard to figure out where a problem lies. I am wondering if there is any tutorial that goes through a simple example like HDMI -> FPGA -> VGA? If there is, I think it would be the best foundation for people starting to do video processing on the Zybo. Thank you very much!

Hao
Abdelkader Posted October 31, 2016

Hello my friends, can anyone help me with designing a Moravec corner detector on the HDMI input? Thank you!
D@n Posted October 31, 2016

Is this the sort of thing you are looking for?

Dan
jpeyron Posted October 31, 2016

Hi Abdelkader,

I'm not aware of any IP cores that would facilitate an edge detector in this type of design. We found this, which works with Simulink, if that is an option. If you are not set on using bare-metal designs or IP cores, another approach would be installing Linux and using the OpenCV library, as described here, with a tutorial here. The tutorial is in Spanish, but my browser translated it to English.

Hope this helps!

cheers,
Jon
nattaponj Posted November 13, 2017

Hi @sbobrowicz, I'm learning about image processing on the Zybo board, from the link. I am wondering about the source of the grayscale conversion calculation:

gray = (r * 76 + g * 150 + b * 29 + 128) >> 8;

Can you explain it, or point me to the relevant documents? Thank you.
jpeyron Posted November 14, 2017

Hi @nattaponj,

Here is an article that explains the weighted method for converting RGB to grayscale. Are you asking why we chose those specific weights, or just about the algorithm in general?

cheers,
Jon
nattaponj Posted November 15, 2017

Hi @jpeyron, I am wondering where the numbers 76, 150, 29, and 128 came from. Thank you.
sbobrowicz Posted November 27, 2017

Yes, I used the weighted method for converting to grayscale. I did it using integer math, which makes it a bit more complicated, but basically I multiply each color channel by its weight scaled by 256 (W * 256), then divide the result by 256 (the >> 8). This avoids floating-point arithmetic.
Archived
This topic is now archived and is closed to further replies.