How to implement tests
======================

Contents:

1.  Overview
2.  Life cycle of a test
3.  Template test code
4.  Tests API

- - - - - - - - - - - - - - - - - -

1.  Overview
------------

Implementing a new test is a 4-step process:

1.   Choose where to implement the test. It can be in an existing test file
     or you can create a new test file. The location of test files should
     respect the directory layout described in the `EL3 Firmware Test framework
     Design`.

2.   Implement the test using the APIs provided by the Test Framework (more
     details on that in the next sections of this document). The test function
     must have the following prototype (a minimal skeleton is sketched after
     this list):

         test_result_t test_function(void);

     Please keep in mind the framework's limitations when implementing a test.
     Refer to the `EL3 Firmware Test framework Design` for a list of these
     limitations.

3.   Open the file `tests/tests.xml` and create a new `testcase` XML node.
     You must provide a unique name for the test as well as the entry function
     of the test. Optionally, a description can also be provided.

     Example: To create a test case named "Foo test case", whose entry function
     is `foo()`, add the following line in the file `tests/tests.xml`:

         <testcase name="Foo test case" function="foo" />

     A testcase must be part of a testsuite. The `testcase` XML node above must
     be put inside a `testsuite` XML node. If necessary, a new testsuite can be
     created. Again, you need to provide a unique name for the testsuite and
     also a (mandatory) description.

     Example: To create a test suite named "Foo test suite", whose description
     is `An example test suite`, add the following 2 lines:

         <testsuite name="Foo test suite" description="An example test suite">
         </testsuite>

4.   Open the makefile `tests/tests.mk` and add the path to the new test file to
     the `TEST_SOURCES` variable.
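
For reference, here is a minimal sketch of what the body of such a test
function might look like. The header name `tftf.h` and the `TEST_RESULT_*`
enumeration values are assumptions about the framework; refer to the template
code in `tests/template_tests/` (see section 3) for the exact names.

    #include <tftf.h>   /* Assumed framework header providing test_result_t. */

    /*
     * Entry point of the "Foo test case" declared in tests/tests.xml.
     * Only the structure matters here: check the platform requirements,
     * run the test, then return a result code.
     */
    test_result_t foo(void)
    {
            /*
             * Check the platform requirements here and return
             * TEST_RESULT_SKIPPED if they are not met.
             */

            /* Actual test logic goes here. */

            return TEST_RESULT_SUCCESS;
    }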


2.  Life cycle of a test
------------------------

Before implementing a test, it is useful to understand its life cycle first.

A test has a main entry point function. Only the lead CPU enters this main
entry point function, while other CPUs are powered down.

The lead CPU should start by checking that the test can run on the current
platform. Some tests might require specific platform features to work properly.
A typical example is a topology requirement: a test might need a minimum
number of CPUs and/or clusters. The test should be skipped if these
requirements are not met.

The lead CPU can power on other CPUs by calling the function `tftf_cpu_on()`.
When doing so, it provides a test entry point function for the non-lead CPU.
The non-lead CPU is bootstrapped by the test framework and then enters its
test entry point.

The function `tftf_cpu_on()` just ensures that the power-on request has been
issued successfully; it doesn't exercise any further control over the targeted
CPU. This means that the CPUs execute independently and their execution paths
are not related in any way, hence the need for synchronisation points. In most
cases, tests will need to introduce synchronisation points that all/some CPUs
need to reach before test execution continues. The events API is provided to
this end.

Any CPU that enters the test must return from it. The test framework expects
each CPU involved in the test to return a status code indicating whether the
test:
*   succeeded;
*   failed;
*   was skipped.

Each CPU speaks for itself, i.e. the status code returned by a given CPU
indicates its own view of the test result. E.g. for a test involving 2 CPUs,
it is possible that CPU 0 declares the test as passed whereas CPU 1 declares
it as failed. The test framework is responsible for aggregating the individual
CPUs' test results and deducing the overall test result from them.

The test has some responsibilities regarding the state in which it leaves the
platform. It should ensure that it leaves the system in a clean state prior to
going back to the framework. Any change to the system configuration (e.g. MMU
setup, GIC configuration, system registers, ...) must be undone and the original
configuration must be restored. This guarantees that a test is not affected by
the previous one.

One exception to this rule is that CPUs powered on as part of a test must not
be powered down; they must stay powered on. As already stated above, as soon as
a CPU enters the test, the framework expects it to return from the test.
Obviously it can't do that if it is powered down. If a CPU never returns from
the test, the framework will wait for this CPU forever.
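
Putting the life cycle together, a multi-core test might be structured as in
the sketch below. `tftf_cpu_on()` is described above; the other names (the
`event_t` type, `tftf_init_event()`, `tftf_send_event()`,
`tftf_wait_for_event()`, `tftf_find_any_cpu_other_than()`, `read_mpidr_el1()`
and the `TEST_RESULT_*` values) are assumptions used for illustration only.
Refer to `include/lib/events.h`, `include/lib/power_management.h` and
`include/plat/` (see section 4) for the actual API.

    #include <arch_helpers.h>       /* Assumed AArch64 helpers header. */
    #include <events.h>             /* Events API (assumed to define event_t). */
    #include <plat_topology.h>      /* Assumed topology header name. */
    #include <power_management.h>   /* tftf_cpu_on() */
    #include <stdint.h>
    #include <tftf.h>               /* Assumed framework header. */

    static event_t cpu_ready;       /* Synchronisation point. */

    /* Test entry point executed by the non-lead CPU once bootstrapped. */
    static test_result_t non_lead_cpu_fn(void)
    {
            /* Signal the lead CPU that this CPU has entered the test. */
            tftf_send_event(&cpu_ready);

            /* Do not power this CPU down; just report its own result. */
            return TEST_RESULT_SUCCESS;
    }

    /* Main entry point, executed by the lead CPU only. */
    test_result_t multi_core_test_main(void)
    {
            /* Assumed helpers: pick any CPU other than the calling (lead) one. */
            unsigned long target_mpid =
                    tftf_find_any_cpu_other_than(read_mpidr_el1());

            tftf_init_event(&cpu_ready);

            /* Power on the non-lead CPU and hand it its test entry point. */
            if (tftf_cpu_on(target_mpid, (uintptr_t)non_lead_cpu_fn, 0) != 0)
                    return TEST_RESULT_FAIL;    /* 0 is the PSCI success code. */

            /* Synchronisation point: wait for the non-lead CPU to arrive. */
            tftf_wait_for_event(&cpu_ready);

            /* The lead CPU reports its own view of the test result. */
            return TEST_RESULT_SUCCESS;
    }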


3.  Template test code
----------------------

Some template test code is provided in the `tests/template_tests/` directory.
It can be used as a starting point for developing new tests. You'll find
template code for single-core and multi-core tests.


4.  Tests API
-------------

The APIs provided by the Test Framework to develop tests are documented in the
header files in the `include/` directory. This section aims at giving an
overview of the range of features provided and pointers to the header files
where the documentation can be found.

`include/drivers/`
  * Generic GIC driver. `arm_gic.h` contains the public API that tests can
    use. The TFTF supports both GIC architecture versions 2 and 3.

  * PL011 UART driver

  * Intel P30 flash memory controller driver. This is the flash controller
    modelled on FVP and present on the Juno board.
    NOTE: In most cases, tests shouldn't need to use this driver directly. Tests
    are expected to use the `tftf_nvm_read()` and `tftf_nvm_write()` APIs
    instead. See definitions in `framework/include/nvm.h`. See also the NVM
    validation test case (i.e. `tests/framework_validation_tests/test_validation_nvm.c`)
    for an example of usage of these functions.
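
    As an illustration, here is a hedged sketch of how a test might use these
    APIs; the exact signatures and the status code they return
    (`STATUS_SUCCESS` below is an assumption) should be checked against
    `framework/include/nvm.h`.

        #include <nvm.h>        /* tftf_nvm_read()/tftf_nvm_write() */
        #include <tftf.h>       /* Assumed framework header. */

        test_result_t nvm_example(void)
        {
                unsigned int value = 42;
                unsigned int readback = 0;

                /*
                 * The offset (0) is for illustration only; a real test must
                 * pick an offset that does not clash with the framework's
                 * own data (see the NVM validation test mentioned above).
                 */
                if (tftf_nvm_write(0, &value, sizeof(value)) != STATUS_SUCCESS)
                        return TEST_RESULT_FAIL;

                /* Read the word back to check it was stored correctly. */
                if (tftf_nvm_read(0, &readback, sizeof(readback)) != STATUS_SUCCESS)
                        return TEST_RESULT_FAIL;

                return (readback == value) ? TEST_RESULT_SUCCESS
                                           : TEST_RESULT_FAIL;
        }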

`include/lib/`
  * `aarch64/`: Architecture helper functions, e.g. for system register access,
    cache maintenance operations, MMU configuration, ...

  * `events.h`: Events API.
    Used to create synchronisation points between CPUs in tests.

  * `irq.h`: IRQ handling support.
    Used to configure IRQs and register/unregister handlers called upon
    reception of a specific IRQ.

  * `power_management.h`: Power management operations.
    i.e. CPU on, CPU off, CPU suspend.

  * `sgi.h`: Software Generated Interrupt support.
    Used as an inter-CPU communication mechanism.

  * `spinlock.h`: Lightweight implementation of synchronisation locks.
    Used to prevent concurrent accesses to shared data structures.

  * `timer.h`: Support for programming the always-on timer.
    Any timer in the 'always-on' power domain (such as the system timer) can
    be used to wake CPUs from the suspend state.

  * `tftf_lib.h`: Miscellaneous helper functions/macros.
    MP-safe printf(), low-level PSCI wrappers, insertion of delays, raw SMC
    interface, support for writing a string into the test report, macros to
    skip tests on platforms that do not meet topology requirements, and so on.

  * `semihosting.h`: Semihosting support.

  * `io_storage.h`: Low-level IO operations. Tests are not expected to use these
    APIs directly. If they need to write data to non-volatile memory (e.g.
    flash), they are expected to use the higher-level `tftf_nvm_read()` and
    `tftf_nvm_write()` APIs provided by the test framework instead.

`include/plat/`
  APIs to discover the platform topology at runtime, i.e. how many CPUs and
  clusters there are.

`include/runtime_services/`
  APIs to call runtime services provided by the EL3 firmware, e.g. PSCI,
  Standard Service queries, Trusted OS calls. Runtime services are invoked
  through the SMC interface. Refer to the [SMC Calling Convention PDD][SMCCC]
  for more details.
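
  As an illustration, a raw SMC call querying the PSCI version might look like
  the sketch below. The helper and structure names (`tftf_smc`, `smc_args`,
  `smc_ret_values`, `tftf_testcase_printf`) are assumptions used for
  illustration; check `include/lib/tftf_lib.h` and the PSCI definitions under
  `include/runtime_services/` for the actual interface. The PSCI_VERSION
  function identifier (0x84000000) comes from the PSCI specification.

      #include <tftf.h>       /* Assumed framework header. */
      #include <tftf_lib.h>   /* Assumed to provide the raw SMC interface. */

      test_result_t psci_version_example(void)
      {
              /* PSCI_VERSION function identifier, per the PSCI specification. */
              smc_args args = { 0x84000000 };

              /* Issue the SMC and collect the return values. */
              smc_ret_values ret = tftf_smc(&args);

              /* Record the reported version in the test report. */
              tftf_testcase_printf("PSCI version: 0x%lx\n",
                                   (unsigned long)ret.ret0);

              return TEST_RESULT_SUCCESS;
      }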

`include/stdlib/`
  * Local standard C library (memcpy(), printf(), and so on).


- - - - - - - - - - - - - - - - - - - - - - - - - -

_Copyright (c) 2014, ARM Limited and Contributors. All rights reserved._

[SMCCC]:            http://infocenter.arm.com/help/topic/com.arm.doc.den0028a/index.html "SMC Calling Convention PDD (ARM DEN 0028A)"