UVC Cortex M-7 wrong resolution | Cypress Semiconductor
I need help: after almost 4 months of debugging I still can't find my issue, and I'm at the point where I would swear I've tried everything. I've read a few documents from Cypress and have the impression that someone here has surely experienced this and might be able to help. I'm hoping for the best.
My UVC implementation on a Cortex-M7 works to the point that I can connect a camera via USB, stream its video data into RAM, and send it out via UDP to my computer for debugging.
The issue is this: my camera supports several formats and frame descriptors. I'm interested in uncompressed YUV, where I can choose between 5 resolutions, ranging from 160x120 to 1280x1024. During UVC negotiation, after I probe with a request such as frame index 02, format index 03, the GET reply always(!) confirms that the device will indeed send the desired frame and format (e.g. 02/03). This part works as it should.
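For reference, the negotiation I'm describing exchanges the 26-byte UVC 1.1 Video Probe/Commit control block (field names are from the UVC spec; the struct below is a sketch, not my actual driver code). One thing worth double-checking in any UVC host: after the PROBE round-trip, the same block must also be written to VS_COMMIT_CONTROL and a non-zero alternate setting selected, otherwise some devices silently stream their default format.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* UVC 1.1 Video Probe/Commit control block (26 bytes, little-endian,
 * packed).  UVC 1.5 extends this to 34/48 bytes, so the length matters.
 *
 * Typical request sequence (class-specific control transfers):
 *   1. SET_CUR  VS_PROBE_CONTROL   (bFormatIndex/bFrameIndex = wish)
 *   2. GET_CUR  VS_PROBE_CONTROL   (device returns what it will do)
 *   3. SET_CUR  VS_COMMIT_CONTROL  (lock in the negotiated block)
 *   4. SET_INTERFACE on the streaming interface to a non-zero alt
 *      setting whose endpoint fits dwMaxPayloadTransferSize.          */
typedef struct __attribute__((packed)) {
    uint16_t bmHint;
    uint8_t  bFormatIndex;            /* e.g. 0x03 = uncompressed YUV  */
    uint8_t  bFrameIndex;             /* e.g. 0x02 = chosen resolution */
    uint32_t dwFrameInterval;         /* in 100 ns units               */
    uint16_t wKeyFrameRate;
    uint16_t wPFrameRate;
    uint16_t wCompQuality;
    uint16_t wCompWindowSize;
    uint16_t wDelay;
    uint32_t dwMaxVideoFrameSize;     /* expected bytes per frame      */
    uint32_t dwMaxPayloadTransferSize;
} uvc_stream_ctrl_t;
```

If the GET_CUR after step 2 echoes frame/format 02/03 but the stream still has the wrong size, comparing dwMaxVideoFrameSize from the reply against the bytes actually received is a cheap sanity check.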
Now when the stream starts and I keep polling, I check the FID bit, since the end-of-frame bit is not reliable on my camera. Delimiting frames by the FID toggle, I receive CONSTANT frame sizes of 342160 bytes, i.e. 171080 pixels at 2 bytes per pixel.
The problem: this is neither the resolution I requested, NOR does the frame size change when I choose, e.g., a much higher resolution. The camera just keeps sending me 342160 bytes.
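The FID-based delimiting boils down to watching bit 0 of bmHeaderInfo (the second byte of every UVC payload header) and declaring a new frame whenever it toggles. A minimal sketch of that logic (uvc_new_frame is a hypothetical helper, not from my code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* UVC payload header starts every transfer:
 *   byte 0: bHeaderLength, byte 1: bmHeaderInfo
 *   bmHeaderInfo bit 0 = FID (toggles once per frame), bit 1 = EOF. */
#define UVC_HDR_FID 0x01u

/* Feed each received payload in order.  Returns true when the FID bit
 * differs from the previous payload, i.e. a new frame has started.
 * Initialise *prev_fid to -1 ("no payload seen yet"). */
bool uvc_new_frame(const uint8_t *payload, size_t len, int *prev_fid)
{
    if (len < 2 || payload[0] < 2 || payload[0] > len)
        return false;                      /* malformed header: skip */
    int fid = payload[1] & UVC_HDR_FID;
    bool toggled = (*prev_fid >= 0) && (fid != *prev_fid);
    *prev_fid = fid;
    return toggled;
}
```

Everything after the first payload[0] header bytes of each transfer is image data; summing those slices between FID toggles is what gives me the constant 342160 bytes.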
Note: the camera works flawlessly with a standard UVC webcam viewer.
I examined those 171080 pixels and found this: if I wrap lines after 182 pixels, I get a reasonable horizontal alignment of the image (otherwise there's a saw-tooth displacement between consecutive rows). That gives a height of 171080 / 182 = 940 rows.
Furthermore, when I draw the RGB-converted image, I can in fact see TWO images stacked vertically: the first is 182x470 and the second 181x470. Unusual resolutions. The images are also distorted, stretched vertically!
Therefore I visualise the images (after reception) like this:
I either keep only every 8th row (dropping the 7 rows in between), giving a 182x118 image that shows both images squeezed.
Or I keep only every 4th row (dropping 3 rows) and discard the second half of the array, giving a 182x118 image that is displayed correctly.
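The row decimation itself is just a strided copy of a tightly packed image; roughly like this (decimate_rows is an illustrative helper, not my actual code; row_bytes would be 182 pixels * 2 bytes = 364 here):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Copy every `step`-th row of a tightly packed image from src to dst.
 * src holds in_rows rows of row_bytes bytes each; dst must have room
 * for ((in_rows + step - 1) / step) rows.  Returns rows written. */
size_t decimate_rows(const uint8_t *src, uint8_t *dst,
                     size_t row_bytes, size_t in_rows, size_t step)
{
    size_t out = 0;
    for (size_t r = 0; r < in_rows; r += step) {
        for (size_t b = 0; b < row_bytes; b++)
            dst[out * row_bytes + b] = src[r * row_bytes + b];
        out++;
    }
    return out;
}
```

With 940 input rows, step 8 yields 118 rows (the "both images squeezed" case); with the first 470 rows and step 4 it also yields 118 rows (the correct-looking case).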
I've attached images (this is a fluorescence microscope, so I can't easily present a simpler image; you'll need to look closely, they're dark):
- Original (taken by a computer software)
- Embedded, considering every 4th row (looking similar)
- Embedded, considering every 4th row (Y value of the YUV doubled, to make the image a little brighter)
- Embedded, considering every 8th row
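For the RGB conversion and the Y-doubling mentioned above, I use the standard integer BT.601 approximation for YUY2 (YUYV) data, along these lines (yuyv_to_rgb and y_gain are illustrative names; y_gain = 2 is the brightened variant):

```c
#include <assert.h>
#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
}

/* Convert one YUY2 macropixel (4 bytes: Y0 U Y1 V) into 2 RGB pixels
 * (6 bytes) using the integer BT.601 approximation.  y_gain scales Y
 * before conversion to brighten dark fluorescence frames (1 = none). */
void yuyv_to_rgb(const uint8_t yuyv[4], uint8_t rgb[6], int y_gain)
{
    int u = yuyv[1] - 128;
    int v = yuyv[3] - 128;
    for (int i = 0; i < 2; i++) {
        int y = clamp_u8(yuyv[2 * i] * y_gain) - 16;
        int c = 298 * y;
        rgb[3 * i + 0] = clamp_u8((c + 409 * v + 128) >> 8);
        rgb[3 * i + 1] = clamp_u8((c - 100 * u - 208 * v + 128) >> 8);
        rgb[3 * i + 2] = clamp_u8((c + 516 * u + 128) >> 8);
    }
}
```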
I'm happy that after 4 months I can at least SEE an image, but it's more or less cheating that it looks like the original.
=> WHY does the camera send such a format even though it confirms my requested frame and format? What can I do?
I really hope someone can take the time to help me here, I really do.
Thank you very much for your efforts!