Controlling SRAM Usage in PSoC Applications | Cypress Semiconductor
A number of people have commented to me recently that PSoC applications appear to use more SRAM than they expect. There is a very simple reason for this, and it is easily changed. In an attempt to make application development really easy for new users, PSoC Creator allocates large amounts of SRAM to the stack and heap. This prevents people from getting frustrated with memory overflow problems while they are learning about PSoC and PSoC Creator. Of course, if you do not need that much SRAM allocated that way, it is trivially simple to change the values in PSoC Creator and rebuild the application.
For example, a new PSoC 4 application, targeting the top-of-the-line CY8C4245AXI-483 part, allocates 1024 bytes to the stack and 256 bytes to the heap. That is over 31% of the total RAM available in the device!
What are these memory spaces for? The heap is used by certain run-time library functions, most notably sprintf(), and also by applications that grab chunks of memory using malloc() and its related APIs. If you are not using dynamic memory allocation then this heap is just wasted space and you can remove it. In the System tab of the resources editor, just change the heap size to zero bytes.
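As a sketch of why the heap can often be eliminated, the two functions below format the same reading in two ways: one needs malloc() and therefore a heap, the other uses a static buffer and works with the heap size set to zero. The function names and buffer sizes here are invented for illustration.

```c
#include <stdio.h>
#include <stdlib.h>

/* Heap version: malloc() pulls in the allocator, and the call fails
 * (returns NULL) if the heap is too small. */
char *format_reading_heap(int adc_counts)
{
    char *buf = malloc(32);
    if (buf != NULL)
    {
        snprintf(buf, 32, "ADC=%d", adc_counts);
    }
    return buf;  /* caller must free() this */
}

/* Static version: no dynamic allocation, so the heap size can be zero. */
const char *format_reading_static(int adc_counts)
{
    static char buf[32];
    snprintf(buf, sizeof buf, "ADC=%d", adc_counts);
    return buf;
}
```

The static version trades heap for a permanently reserved 32 bytes of SRAM, which is usually the right trade on a small part because the cost is fixed and visible at link time.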
The stack is, of course, used for function calling. When a function is called, the stack is used for the return address, for arguments passed to the called function, for the function's local variables, and for return values. The amount of stack you need depends upon how deeply nested your application is and the number (and type) of parameters passed at each level. Just like the heap, the stack size is set in the System tab of the resources file.
There are two complementary ways of determining the right stack size for an application. The first method is to review your source code. Determine the functions that use a lot of stack space and where the deepest nesting occurs. From that point you can calculate the maximum stack usage. Use the compiler listing files to get the stack usage data for each function. I would always recommend adding a little extra to your computed maximum so that you have space to add a little code to the application without having to change the stack size. Let's face it, this is math, and the less we have to do it the better!
The second method is more empirical: use the debugger to monitor the memory used. First you fill the stack with "unusual" data. A silly number like 0xFEEDBEEF works well because it sticks out visually and the chance of your application writing a lot of that number onto the stack is low. Then you simply run the application and check the Memory window to see how much of the stack has not been overwritten. It is important to exercise the application thoroughly, of course, to make sure all the paths have been followed and the maximum stack usage has occurred.
An alternative to this method is to place a read/write (access) breakpoint at the bottom of the stack and see if it is ever reached. If it is, you need more stack space. If not, simply move it up the stack until it does get hit, and you have found your peak usage.
Lastly, some third-party tools and IDEs offer built-in stack-checking functions, which can be especially useful when you are using a real-time operating system (RTOS). In those environments, each task typically has its own dedicated stack and you want to be sure to optimize each one to minimize waste.