target/cortex_a: use aligned accesses for read/write cpu memory slow

Armv7-A can read and write memory at unaligned addresses, but only
when bit SCTLR.A (Alignment check enable) is zero and the address
belongs to a memory space with the "Normal" attribute (see [1]
chapter A3.2.1 "Unaligned data access"). In all other cases the
memory access triggers an alignment fault data abort exception.
Memory attributes are explained in [1] chapter A3.5 "Memory types
and attributes and the memory order model".
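
In pseudo-C the architectural rule boils down to the check below.
This is only a minimal sketch for illustration; the helper name and
parameters are made up and are not OpenOCD code:

  #include <stdbool.h>
  #include <stdint.h>

  /* Sketch of the rule in [1] A3.2.1: an unaligned access only works
   * when SCTLR.A is zero and the address is mapped as "Normal" memory;
   * any other combination raises an alignment fault data abort. */
  static bool unaligned_access_ok(uint32_t sctlr, bool addr_is_normal)
  {
      const bool align_check = sctlr & (1u << 1); /* SCTLR.A is bit [1] */
      return !align_check && addr_is_normal;
  }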

Disabling the MMU causes a change in the memory attributes, as
explained in [1] chapter B3.2 "The effects of disabling MMUs on
VMSA behavior".
This can cause several issues; e.g. a SW breakpoint on an unaligned
4-byte Thumb instruction, set while the MMU is on, can be impossible
to remove after the MMU is turned off.

While it is possible to check all the conditions before each
unaligned memory access, it is clearly more maintainable to skip
such complexity and only perform aligned accesses.

Check the alignment and, when necessary, adjust the data size before
calling cortex_a_{read,write}_cpu_memory_slow(). Update the comment
in the two functions above to match the new behaviour.
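
The adjustment amounts to splitting the transfer into naturally
aligned units. Below is a standalone sketch of the same size/count
fix-up added by this patch; the helper name is illustrative only:

  #include <stdint.h>

  /* Fall back to halfword or byte units so that every single access
   * performed by the slow path is naturally aligned. */
  static void adjust_for_alignment(uint32_t address, uint32_t *size,
          uint32_t *count)
  {
      switch (address % 4) {
      case 1:
      case 3:
          /* Odd address: only byte accesses are always aligned. */
          *count *= *size;
          *size = 1;
          break;
      case 2:
          /* Halfword aligned: split each word into two halfwords. */
          if (*size == 4) {
              *count *= 2;
              *size = 2;
          }
          break;
      case 0:
      default:
          /* Word aligned, or size already small enough: no change. */
          break;
      }
  }

For example, a 2-word access at address 0x1002 becomes a 4-halfword
access, and a 6-halfword access at 0x1003 becomes a 12-byte access.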

[1] ARM DDI 0406C.d - "ARM Architecture Reference Manual, ARMv7-A
    and ARMv7-R edition"

Change-Id: I57b4c11e7fa7e78aaaaee4406a5734b48db740ae
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Reviewed-on: http://openocd.zylin.com/5138
Tested-by: jenkins
Reviewed-by: Matthias Welwarsky <matthias@welwarsky.de>
Antonio Borneo 2019-04-27 15:52:52 +02:00 committed by Andreas Fritiofson
parent 5f42124a40
commit 5dc5ed5714
1 changed file with 37 additions and 4 deletions


@@ -1893,7 +1893,8 @@ static int cortex_a_write_cpu_memory_slow(struct target *target,
 {
 	/* Writes count objects of size size from *buffer. Old value of DSCR must
 	 * be in *dscr; updated to new value. This is slow because it works for
-	 * non-word-sized objects and (maybe) unaligned accesses. If size == 4 and
+	 * non-word-sized objects. Avoid unaligned accesses as they do not work
+	 * on memory address space without "Normal" attribute. If size == 4 and
 	 * the address is aligned, cortex_a_write_cpu_memory_fast should be
 	 * preferred.
 	 * Preconditions:
@@ -2050,7 +2051,22 @@ static int cortex_a_write_cpu_memory(struct target *target,
 		/* We are doing a word-aligned transfer, so use fast mode. */
 		retval = cortex_a_write_cpu_memory_fast(target, count, buffer, &dscr);
 	} else {
-		/* Use slow path. */
+		/* Use slow path. Adjust size for aligned accesses */
+		switch (address % 4) {
+			case 1:
+			case 3:
+				count *= size;
+				size = 1;
+				break;
+			case 2:
+				if (size == 4) {
+					count *= 2;
+					size = 2;
+				}
+			case 0:
+			default:
+				break;
+		}
 		retval = cortex_a_write_cpu_memory_slow(target, size, count, buffer, &dscr);
 	}
@@ -2136,7 +2152,8 @@ static int cortex_a_read_cpu_memory_slow(struct target *target,
 {
 	/* Reads count objects of size size into *buffer. Old value of DSCR must be
 	 * in *dscr; updated to new value. This is slow because it works for
-	 * non-word-sized objects and (maybe) unaligned accesses. If size == 4 and
+	 * non-word-sized objects. Avoid unaligned accesses as they do not work
+	 * on memory address space without "Normal" attribute. If size == 4 and
 	 * the address is aligned, cortex_a_read_cpu_memory_fast should be
 	 * preferred.
 	 * Preconditions:
@@ -2352,7 +2369,23 @@ static int cortex_a_read_cpu_memory(struct target *target,
 		/* We are doing a word-aligned transfer, so use fast mode. */
 		retval = cortex_a_read_cpu_memory_fast(target, count, buffer, &dscr);
 	} else {
-		/* Use slow path. */
+		/* Use slow path. Adjust size for aligned accesses */
+		switch (address % 4) {
+			case 1:
+			case 3:
+				count *= size;
+				size = 1;
+				break;
+			case 2:
+				if (size == 4) {
+					count *= 2;
+					size = 2;
+				}
+				break;
+			case 0:
+			default:
+				break;
+		}
 		retval = cortex_a_read_cpu_memory_slow(target, size, count, buffer, &dscr);
 	}