
Could not receive more than 16 KB of data

Hello everyone,

As I discussed previously regarding the same issue, I tried to read the data in one burst of around 80000 bytes (128*625), but was unable to do so.

I checked the reference software as well, and yes, the maximum transfer size is 512 MB, as shown in:

#define DDR3_V2_MAX_SIZE (512 * 1024 * 1024) /*!< A DDR3 v2 bank is 512MB wide. */

But unfortunately I cannot read more than 16 KB.

Please note that I am able to read all 80000 bytes using a loop of 625 iterations, so the design works well and I get the desired result. But this makes the process very slow, hence I want to fetch the whole burst in one go.
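For clarity, this is roughly the working chunked approach described above: 625 reads of 128 bytes each. This is only a sketch; `fake_readdata` is a hypothetical stand-in for `sipif_readdata` (copying from a memory buffer instead of the hardware FIFO) so the loop structure can be shown and run without the board.

```c
#include <string.h>
#include <stddef.h>

#define CHUNK_BYTES 128   /* one 128-byte chunk per transfer */
#define NUM_CHUNKS  625   /* 625 * 128 = 80000 bytes total */

/* Simulated data source standing in for the hardware FIFO. */
static const unsigned char *g_src;
static size_t g_pos;

/* Hypothetical stand-in for sipif_readdata(): copies 'size' bytes
 * from the simulated source and returns 0 on success, mirroring
 * the rc convention used in the thread. */
static int fake_readdata(void *buf, unsigned long size)
{
    memcpy(buf, g_src + g_pos, size);
    g_pos += size;
    return 0;
}

/* Read 80000 bytes in 625 chunks of 128 bytes, aborting on the
 * first failed return code. */
static int read_in_chunks(unsigned char *dst)
{
    for (int i = 0; i < NUM_CHUNKS; i++) {
        int rc = fake_readdata(dst + (size_t)i * CHUNK_BYTES, CHUNK_BYTES);
        if (rc != 0)
            return rc;
    }
    return 0;
}
```

The important point is that each call's return code is checked before the next chunk is requested, which is also why this approach is slow compared to one large burst.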

Do you have any idea what might be going wrong? Please note that I have verified in multiple ways that all 80000 bytes are present: through a counter on the FIFO, and by implementing a pulse indicating completion of the transfer (only after which do I fetch the data, which is how I am able to read it in 625 chunks of 128 bytes...).

Please also note that I tried increasing the timeout as well (although I understand its function is not related to the issue we are discussing).

I hope this helps explain the situation. Please find the attached file: after 8192 bytes (in the 41st row) you will find 0xCDCD values, indicating that those memory locations were never initialized... Please also see the following related code:

char *pInDMA = (char *)_aligned_malloc(sizeof(char) * 80000, 4096); //640000 (multiple of 128)

rc = sipif_readdata(pInDMA, 40000 * 2); //128 bytes multiple


Dear Jaffry,

So, to summarize the situation: the reference firmware/software delivered as source code is able to transfer 512 MB in bursts of 8 MB, while in your design you cannot transfer more than 16 kB. The first thing that comes to mind is that something is wrong with your design.

It is hard for us to debug your design; I hope you understand that. One thing you don't tell us is whether the function returns failure or success. If it returns failure, then you should not even look at the data.

You are right: in debug builds, Visual Studio initializes heap buffers with 0xCD bytes, so if the buffer still contains 0xCDCDCDCD, the DMA engine has not placed data there.
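The 0xCD debug-fill pattern can be used as a quick diagnostic: scan the receive buffer for the first long run of 0xCD bytes to find where the DMA stopped writing. A minimal sketch (the run-length threshold guards against legitimate data bytes that happen to be 0xCD; the function name is mine, not part of the sipif API):

```c
#include <stddef.h>

/* Return the offset of the first run of 'run' consecutive 0xCD bytes
 * in buf (the Visual Studio debug-heap fill pattern), or -1 if no such
 * run exists. A sufficiently long run strongly suggests the DMA engine
 * never wrote past that offset. */
static long first_uninitialized(const unsigned char *buf, size_t len, size_t run)
{
    size_t count = 0;
    for (size_t i = 0; i < len; i++) {
        count = (buf[i] == 0xCD) ? count + 1 : 0;
        if (count == run)
            return (long)(i - run + 1);
    }
    return -1;
}
```

Applied to the attachment described earlier, such a scan would report an offset of 8192, matching the 16 kB of 0xCDCD words observed from the 41st row onward.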

Inserting a ChipScope core in your design, or simulating the design, should surely help you understand where the problem is; analyzing the FIFO signals will show where it lies.

Best Regards,


Hi Arnaud,

That was really a great result... Actually, what I did wrong was that I was issuing a command enabling rd_en from my DMAout FIFO, so once the FIFO in the FPGA slave got full, rd_en went LOW, since my logic was something like:

if (rd_en_bit = '1' and ... and dout_stop = '0') then
    DMA_rd_en <= '1';
else
    DMA_rd_en <= '0';
end if;

Hence this was causing the transfer to stop after 8192 bytes...

Later I changed my logic to something similar to the 4DSP design, and the result is the best I have obtained: my movie generation time is reduced by more than half...

Thanks, that is a great help. You can close the topic now.

Dear Shan,

Thanks for the feedback!

This topic is being closed because the issue is considered resolved by 4DSP. Feel free to create a new topic for any further inquiries.