flash/nor/core: Fix chunk size calculation in default_flash_mem_blank_check
The slow version of the blank-check procedure reads target memory
sector-by-sector in 1 KB chunks. Due to a bug in the chunk size
calculation, the actual chunk size is always 1 KB even when the sector
size is smaller. This causes an out-of-bounds read of the last sector.

Steps to reproduce:
1) Use a target with small sectors (e.g. psoc6 with 512-byte sectors)
2) set WORKAREASIZE_CM0 0
3) flash erase_check 1

Resulting output:
Running slow fallback erase check - add working memory
Info : SWD DPIDR 0x6ba02477
Error: Failed to read memory at 0x14008000
unknown error when checking erase state of flash bank #1 at 0x14000000
Bank is erased

Change-Id: I03d0d5fb3a1950ae6aac425f5e24c7fd94b38325
Signed-off-by: Bohdan Tymkiv <bhdt@cypress.com>
Reviewed-on: http://openocd.zylin.com/4785
Reviewed-by: Tomas Vanek <vanekt@fbl.cz>
Tested-by: jenkins
Reviewed-by: Antonio Borneo <borneo.antonio@gmail.com>
parent f197483f57
commit 02279e2f5e
@@ -322,8 +322,8 @@ static int default_flash_mem_blank_check(struct flash_bank *bank)
 		for (j = 0; j < bank->sectors[i].size; j += buffer_size) {
 			uint32_t chunk;
 			chunk = buffer_size;
-			if (chunk > (j - bank->sectors[i].size))
-				chunk = (j - bank->sectors[i].size);
+			if (chunk > (bank->sectors[i].size - j))
+				chunk = (bank->sectors[i].size - j);
 
 			retval = target_read_memory(target,
 					bank->base + bank->sectors[i].offset + j,