### Building the test framework

Two platforms are currently supported:
   * FVP models
   * Juno board

To build the software for one of these two platforms, follow these steps:

1.   Specify the cross-compiler prefix, the targeted platform and build:

         CROSS_COMPILE=<path-to-aarch64-gcc>/bin/aarch64-none-elf- \
         make PLAT=<platform>

     ... where `<platform>` is either `fvp` or `juno`.

     By default this produces a release version of the build. To produce a
     debug version instead, use:

         CROSS_COMPILE=<path-to-aarch64-gcc>/bin/aarch64-none-elf- \
         make PLAT=<platform> DEBUG=1

     To make the build verbose, use:

         CROSS_COMPILE=<path-to-aarch64-gcc>/bin/aarch64-none-elf- \
         make PLAT=<platform> V=1

2.   The build process creates products in a `build` directory tree.
     The resulting binary is in `build/<platform>/<build_type>/tftf.bin`
     where `<build_type>` is either `debug` or `release`.
     The resulting ELF file is in `build/<platform>/<build_type>/tftf/tftf.elf`.


### Overview of TFTF behaviour

Tests are listed in the `tftf/tests/tests.xml` file. They are grouped into
testsuites, and each testsuite consists of a number of test cases.

[NOT IMPLEMENTED YET: needs watchdog support]
If a test hangs or crashes badly, the platform will reset and TFTF will try to
resume the test session where it left off.

Once all tests have completed, a report is generated. TFTF currently
supports two report formats:
  * Raw output [default] (i.e. text messages on the serial console)
  * JUnit output

The report format is configurable at build time via the `TEST_REPORT_FORMAT`
build variable:

     CROSS_COMPILE=<path-to-aarch64-gcc>/bin/aarch64-none-elf- \
     make PLAT=<platform> TEST_REPORT_FORMAT=raw

     CROSS_COMPILE=<path-to-aarch64-gcc>/bin/aarch64-none-elf- \
     make PLAT=<platform> TEST_REPORT_FORMAT=junit

If the chosen report format is JUnit, TFTF will produce a file called
`tftf_report_junit.xml`.
Note that the JUnit output requires semihosting support.


### How to write a test

A test is effectively a function pointer of type `TESTCASE_FUNC`.
You are expected to implement the function in
`tftf/tests/<testsuite_directory>/<testcase>.c`.

TFTF provides a set of helper functions to dispatch the test execution
on one or several cores.
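
As an illustration, a test case might look like the minimal sketch below. The
`TEST_RESULT` type, the `TEST_RESULT_SUCCESS`/`TEST_RESULT_FAIL` values and
`tftf_testcase_printf()` are assumptions made for this example; check the
framework headers for the actual `TESTCASE_FUNC` signature and reporting
helpers.

    /*
     * Hypothetical test case skeleton: the result type, the result values
     * and tftf_testcase_printf() are assumptions and may be named
     * differently in the framework headers.
     */
    #include <tftf.h>   /* assumed to provide the test types and helpers */

    TEST_RESULT test_example(void)
    {
        unsigned int value = 42;    /* stand-in for a real check */

        if (value != 42) {
            tftf_testcase_printf("unexpected value: %u\n", value);
            return TEST_RESULT_FAIL;
        }

        return TEST_RESULT_SUCCESS;
    }

The new function then typically needs to be referenced from
`tftf/tests/tests.xml` so that the framework knows about it.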


### Structure of the code

The C entrypoint function for the primary core is `tftf_cold_boot_main()` (in
the `framework/main.c` file). Secondary cores are brought up by the primary core
during TFTF initialisation using the PSCI CPU_ON interface. Their entrypoint
into the TFTF is `tftf_hotplug_entry()`.
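
Conceptually, the hand-off to a secondary core looks like the sketch below.
The `tftf_cpu_on()` wrapper name and its signature are assumptions made for
this illustration; the framework's actual PSCI helper may differ, but the
PSCI CPU_ON call itself takes the target core's MPIDR, an entry point address
and a context ID.

    #include <stdint.h>

    /*
     * Conceptual sketch of the primary core bringing a secondary core
     * online through PSCI CPU_ON. tftf_cpu_on() stands for whatever
     * wrapper the framework provides around the PSCI call; its name and
     * signature here are assumptions for illustration only.
     */
    extern int tftf_cpu_on(unsigned long target_mpidr,
                           uintptr_t entry_point,
                           unsigned long context_id);
    extern void tftf_hotplug_entry(void);

    static void bring_up_secondary(unsigned long target_mpidr)
    {
        /* PSCI CPU_ON takes the target MPIDR, an entry point and a context ID. */
        int ret = tftf_cpu_on(target_mpidr, (uintptr_t)tftf_hotplug_entry, 0);

        if (ret != 0) {    /* 0 denotes PSCI success */
            /* Report the failure and skip this core. */
        }
    }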

After some initialisation, all CPUs end up in the `main_test_loop()` function.
They decide which of them will be the dispatcher for the next test. The
dispatcher role involves coordinating all the CPUs throughout the test and
collecting the test results. The other CPUs (the so-called "slaves") wait for
some work to do. The dispatcher submits tasks via the `mp_task_entries` array.
Each core can only execute one task at a time; in other words, the
`mp_task_entries` array contains one entry per core, which corresponds to the
core's current task (it is NULL when the core has nothing to do).
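
Conceptually, the dispatch mechanism boils down to something like the sketch
below. The types, the array size and the polling loop are simplifications made
for this illustration; the real implementation also has to deal with
synchronisation, power management and result collection.

    #include <stddef.h>

    /*
     * Simplified illustration of the per-core task array. The real
     * mp_task_entries in the framework carries more state and relies on
     * proper synchronisation; this sketch only shows the basic idea.
     */
    #define NR_CORES 8    /* assumed core count for the example */

    typedef void (*task_fn_t)(void);    /* assumed task prototype */

    /* One slot per core: the task it should run, or NULL when idle. */
    static volatile task_fn_t mp_task_entries[NR_CORES];

    /* Dispatcher side: hand a task to a given core. */
    static void submit_task(unsigned int core, task_fn_t task)
    {
        mp_task_entries[core] = task;
    }

    /* Slave side: wait for work, run it, then mark the slot free again. */
    static void slave_loop(unsigned int core)
    {
        for (;;) {
            task_fn_t task = mp_task_entries[core];

            if (task != NULL) {
                task();
                mp_task_entries[core] = NULL;
            }
        }
    }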

Test results are written into NVM as the test session progresses. The following
data is saved (see struct `TEST_NVM` in `include/tftf.h`); an illustrative
sketch of the structure follows the list:

    * current_testcase

      Contains the function pointer of the current test. It is set up just
      before starting the execution of the test and reset after the test has
      completed. This is used to detect whether the previous test session
      crashed: if current_testcase is not empty when the platform is brought
      up, it means that a test crashed or timed out during the last run.

    * next_testcase

      Contains the function pointer of the next test to run. It is used
      to allow a test session to be interrupted and resumed later:
      if next_testcase is not empty when the platform is brought up,
      it means that the last test session is not over and TFTF will
      try to resume test execution where it left off.

    * testcase_buffer

      A buffer that the test can use as a scratch area for whatever it is
      doing.

    * testcase_results

    * result_buffer_size

    * result_buffer

      Buffer holding the tests' output. The output of each test is
      concatenated into this buffer.
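
The sketch below gives a rough picture of how these fields fit together; the
types, array sizes and field layout are assumptions, and the authoritative
definition remains `struct TEST_NVM` in `include/tftf.h`.

    /*
     * Illustrative sketch only: the field names follow the list above, but
     * the types and sizes are assumptions. The real definition is
     * struct TEST_NVM in include/tftf.h.
     */
    #define TESTCASE_BUFFER_SIZE  256     /* assumed scratch area size */
    #define RESULT_BUFFER_SIZE    4096    /* assumed output buffer size */
    #define MAX_TESTCASES         128     /* assumed maximum number of tests */

    typedef unsigned int (*TESTCASE_FUNC)(void);    /* assumed signature */

    typedef struct {
        TESTCASE_FUNC current_testcase;   /* test in progress, reset when it completes */
        TESTCASE_FUNC next_testcase;      /* next test to run, used to resume a session */
        unsigned char testcase_buffer[TESTCASE_BUFFER_SIZE]; /* scratch area for the running test */
        unsigned int testcase_results[MAX_TESTCASES];  /* per-test result codes */
        unsigned int result_buffer_size;               /* amount of data in result_buffer */
        char result_buffer[RESULT_BUFFER_SIZE];        /* concatenated test output */
    } TEST_NVM;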

Note: On both FVP and Juno platforms, NVM support is not implemented yet, so we
use DRAM to store test results as a workaround. This has obvious limitations,
as DRAM contents are volatile and will not survive a power cycle.