EL3 Firmware Test Framework User Guide
======================================

Contents:

1.  Introduction
2.  Host machine requirements
3.  Tools
4.  Building the Test Framework

- - - - - - - - - - - - - - - - - -

1.  Introduction
----------------

This document describes how to build the EL3 Firmware Test Framework for
the Juno ARM development platform and ARM Fixed Virtual Platform (FVP)
models.


2.  Host machine requirements
-----------------------------

The minimum recommended machine specification for building the software and
running the FVP models is a dual-core processor running at 2GHz with 12GB of
RAM. For best performance, use a machine with a quad-core processor running at
2.6GHz with 16GB of RAM.

The software has been tested on Ubuntu 12.04.4 (64-bit). Packages used
for building the software were installed from that distribution unless
otherwise specified.


3.  Tools
---------

The following tools are required to use the EL3 Firmware Test Framework:

*   Baremetal GNU GCC tools. Verified packages can be downloaded from
    [Linaro][Linaro Toolchain]. The rest of this document assumes that the
    `gcc-linaro-aarch64-none-elf-4.9-2014.07_linux.tar.xz` tools are used.

        wget http://releases.linaro.org/14.07/components/toolchain/binaries/gcc-linaro-aarch64-none-elf-4.9-2014.07_linux.tar.xz
        tar -xf gcc-linaro-aarch64-none-elf-4.9-2014.07_linux.tar.xz

*   (Optional) For debugging, ARM [Development Studio 5 (DS-5)][DS-5] v5.19.


4.  Building the Test Framework
-------------------------------

Two platforms are currently supported:

*   FVP models: Foundation, Base AEM, Base Cortex
*   Juno board

To build the software for one of these two platforms, follow these steps:

1.   Specify the cross-compiler prefix and the target platform, then build:

         CROSS_COMPILE=<path-to-aarch64-gcc>/bin/aarch64-none-elf- \
         make PLAT=<platform> all

     ... where `<platform>` is either `fvp` or `juno`.

     By default this produces a release build. To produce a debug build
     instead:

         CROSS_COMPILE=<path-to-aarch64-gcc>/bin/aarch64-none-elf- \
         make PLAT=<platform> DEBUG=1 all

     To make the build verbose, use:

         CROSS_COMPILE=<path-to-aarch64-gcc>/bin/aarch64-none-elf- \
         make PLAT=<platform> V=1 all

2.   The build process creates products in a `build` directory tree.
     The resulting binary is `build/<platform>/<build_type>/tftf.bin`,
     where `<build_type>` is either `debug` or `release`.
     The corresponding ELF file is `build/<platform>/<build_type>/tftf/tftf.elf`.

### Summary of build options

The Test Framework build system supports the following build options. Unless
mentioned otherwise, these options are expected to be specified on the build
command line and are not to be modified in any component makefile. Note that
the build system does not track dependencies on build options; therefore, if
any build option is changed from a previous build, a clean build must be
performed.

*   `CROSS_COMPILE`: Prefix to toolchain binaries. Please refer to examples in
    this document for usage.

*   `DEBUG`: Choose between a debug and release build. It can take either 0
    (release) or 1 (debug) as values. 0 is the default.

*   `PLAT`: Choose a platform to build the Test Framework for. The chosen
    platform name must be the name of one of the directories under the `plat/`
    directory other than `common`.

*   `TEST_REPORT_FORMAT`: Format of the test report. It can take either `raw`
    (text output on the console) or `junit` (XML JUnit format). The default is
    `raw`.

*   `V`: Verbose build. If assigned anything other than 0, the build commands
    are printed. Default is 0.

- - - - - - - - - - - - - - - - - - - - - - - - - -

_Copyright (c) 2014, ARM Limited and Contributors. All rights reserved._


[Linaro Toolchain]:        http://releases.linaro.org/14.07/components/toolchain/binaries/
[DS-5]:                    http://www.arm.com/products/tools/software-tools/ds-5/index.php


---------------------------------------------------------
Old documentation

### Overview of TFTF behaviour

Tests are listed in the `tftf/tests/tests.xml` file. They are grouped into
testsuites. Each testsuite consists of a number of test cases.

[NOT IMPLEMENTED YET: needs watchdog support]
If a test hangs or crashes badly, the platform will reset and TFTF will try to
resume the test session where it left off.

Once all tests have completed, a report is generated. TFTF currently
supports two report formats:

*   Raw output (the default): text messages on the serial console
*   Junit output

The report format is configurable at build time via the
`TEST_REPORT_FORMAT` build option:

     CROSS_COMPILE=<path-to-aarch64-gcc>/bin/aarch64-none-elf- \
     make PLAT=<platform> TEST_REPORT_FORMAT=raw

     CROSS_COMPILE=<path-to-aarch64-gcc>/bin/aarch64-none-elf- \
     make PLAT=<platform> TEST_REPORT_FORMAT=junit

If the chosen report format is Junit, TFTF will produce a file called
`tftf_report_junit.xml`. Note that the Junit output requires semihosting
support.


### How to write a test

A test is effectively a function pointer of type `TESTCASE_FUNC`.
You are expected to implement the function in
`tftf/tests/<testsuite_directory>/<testcase>.c`.

TFTF provides a set of helper functions to dispatch the test execution
on one or several cores.


### Structure of the code

The C entrypoint function for the primary core is `tftf_cold_boot_main()` (in
the `framework/main.c` file). Secondary cores are brought up by the primary
core during TFTF initialisation using the PSCI `CPU_ON` interface. Their
entrypoint in TFTF is `tftf_hotplug_entry()`.

After some initialisation, all CPUs end up in the `main_test_loop()` function.
They decide which of them will be the dispatcher for the next test. The
dispatcher role involves coordinating all the CPUs throughout the test and
collecting the test results. The other CPUs (the so-called "slaves") wait for
work to do. The dispatcher submits tasks via the `mp_task_entries` array. Each
core can only execute one task at a time; in other words, the
`mp_task_entries` array contains one entry per core, which corresponds to the
core's current task (it is NULL when the core has nothing to do).

Test results are written into NVM as the test session progresses. The
following data is saved (see struct `TEST_NVM` in `include/tftf.h`):

*   `current_testcase`

    Contains the function pointer of the current test. It is set up just
    before the test starts executing and reset after the test has completed.
    This is used to detect when the previous test session crashed: if
    `current_testcase` is not empty when the platform is brought up, it means
    that a test crashed or timed out during the last run.

*   `next_testcase`

    Contains the function pointer of the next test to run. It is used to
    allow a test session to be interrupted and resumed later: if
    `next_testcase` is not empty when the platform is brought up, it means
    that the last test session is not over and TFTF will try to resume test
    execution where it left off.

*   `testcase_buffer`

    A buffer that the test can use as a scratch area for whatever it is
    doing.

*   `testcase_results`

*   `result_buffer_size`

*   `result_buffer`

    Buffer holding the tests' output. Test outputs are concatenated.

Note: On both the FVP and Juno platforms, NVM support is not implemented yet,
so DRAM is used to store test results as a workaround. This has obvious
limitations.