<chapter id="chapter-clocks">
  <title>Clocks and synchronization in &GStreamer;</title>

  <para>
    When playing complex media, each sound and video sample must be played in a
    specific order at a specific time. For this purpose, &GStreamer; provides a
    synchronization mechanism.
  </para>
  <para>
    &GStreamer; provides support for the following use cases:
    <itemizedlist>
      <listitem>
        <para>
          Non-live sources with access faster than playback rate. This is
          the case where one is reading media from a file and playing it
          back in a synchronized fashion. In this case, multiple streams need
          to be synchronized, like audio, video and subtitles.
        </para>
      </listitem>
      <listitem>
        <para>
          Capture and synchronized muxing/mixing of media from multiple live
          sources. This is a typical use case where you record audio and
          video from a microphone/camera and mux them into a file for
          storage.
        </para>
      </listitem>
      <listitem>
        <para>
          Streaming from (slow) network streams with buffering. This is the
          typical web streaming case where you access content from a streaming
          server with http.
        </para>
      </listitem>
      <listitem>
        <para>
          Capture from a live source and playback to a live sink with
          configurable latency. This is used when, for example, you capture
          from a camera, apply an effect and display the result. It is also
          used when streaming low-latency content over a network with UDP.
        </para>
      </listitem>
      <listitem>
        <para>
          Simultaneous live capture and playback of prerecorded content.
          This is used in audio recording, where you play previously
          recorded audio while recording new samples; the purpose is to
          have the new audio perfectly in sync with the previously recorded
          data.
        </para>
      </listitem>
    </itemizedlist>
  </para>
  <para>
    &GStreamer; uses a <classname>GstClock</classname> object, buffer
    timestamps and a SEGMENT event to synchronize streams in a pipeline,
    as we will see in the next sections.
  </para>

  <sect1 id="section-clock-time-types" xreflabel="Clock running-time">
    <title>Clock running-time</title>
    <para>
      In a typical computer, there are many sources that can be used as a
      time source, e.g., the system time, soundcards, CPU performance
      counters, etc. For this reason, there are many
      <classname>GstClock</classname> implementations available in &GStreamer;.
      The clock time doesn't always start from 0 or from some known value.
      Some clocks start counting from a known start date, others start
      counting from the last reboot, etc.
    </para>
    <para>
      A <classname>GstClock</classname> returns the
      <emphasis role="strong">absolute-time</emphasis>
      according to that clock with <function>gst_clock_get_time ()</function>.
      The absolute-time (or clock time) of a clock is monotonically increasing.
      From the absolute-time, a <emphasis role="strong">running-time</emphasis>
      is calculated, which is simply the difference between the absolute-time
      and a previous snapshot of the absolute-time called the
      <emphasis role="strong">base-time</emphasis>. So:
    </para>
    <para>
      running-time = absolute-time - base-time
    </para>
    <para>
      A &GStreamer; <classname>GstPipeline</classname> object maintains a
      <classname>GstClock</classname> object and a base-time when it goes
      to the PLAYING state. The pipeline gives a handle to the selected
      <classname>GstClock</classname> to each element in the pipeline, along
      with the selected base-time. The pipeline will select a base-time in
      such a way that the running-time reflects the total time spent in the
      PLAYING state. As a result, when the pipeline is PAUSED, the
      running-time stands still.
    </para>
    <para>
      Because all objects in the pipeline have the same clock and base-time,
      they can thus all calculate the running-time according to the pipeline
      clock.
    </para>
  </sect1>

  <sect1 id="section-buffer-running-time" xreflabel="Buffer running-time">
    <title>Buffer running-time</title>
    <para>
      To calculate a buffer running-time, we need a buffer timestamp and
      the SEGMENT event that preceded the buffer. First we can convert
      the SEGMENT event into a <classname>GstSegment</classname> object
      and then we can use the
      <function>gst_segment_to_running_time ()</function> function to
      perform the calculation of the buffer running-time.
    </para>
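    <para>
      A minimal sketch of that calculation, as a hypothetical helper that
      receives the most recent SEGMENT event and a buffer:
    </para>
    <programlisting>
static GstClockTime
calculate_running_time (GstEvent * event, GstBuffer * buffer)
{
  GstSegment segment;

  /* convert the SEGMENT event into a GstSegment object */
  gst_event_copy_segment (event, &amp;segment);

  /* convert the buffer timestamp into a running-time */
  return gst_segment_to_running_time (&amp;segment, GST_FORMAT_TIME,
      GST_BUFFER_PTS (buffer));
}
    </programlisting>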
    <para>
      Synchronization is now a matter of making sure that a buffer with a
      certain running-time is played when the clock reaches the same
      running-time. Usually this task is done by sink elements. Sinks also
      have to take into account the latency configured in the pipeline and
      add this to the buffer running-time before synchronizing to the
      pipeline clock.
    </para>
    <para>
      Non-live sources timestamp buffers with a running-time starting
      from 0. After a flushing seek, they will produce buffers again
      from a running-time of 0.
    </para>
    <para>
      Live sources need to timestamp buffers with a running-time matching
      the pipeline running-time when the first byte of the buffer was
      captured.
    </para>
  </sect1>

  <sect1 id="section-buffer-stream-time" xreflabel="Buffer stream-time">
    <title>Buffer stream-time</title>
    <para>
      The buffer stream-time, also known as the position in the stream,
      is calculated from the buffer timestamps and the preceding SEGMENT
      event. It represents the time inside the media as a value between
      0 and the total duration of the media.
    </para>
    <para>
      The stream-time is used for:
      <itemizedlist>
        <listitem>
          <para>
            Reporting the current position in the stream with the POSITION
            query (see the sketch after this list).
          </para>
        </listitem>
        <listitem>
          <para>
            The position used in seek events and queries.
          </para>
        </listitem>
        <listitem>
          <para>
            The position used to synchronize controlled values.
          </para>
        </listitem>
      </itemizedlist>
    </para>
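    <para>
      As an example of the first case, an application can retrieve the
      current stream-time with the POSITION query (assuming a
      <varname>pipeline</varname> variable):
    </para>
    <programlisting>
gint64 pos;

/* ask the pipeline for the current position in GST_FORMAT_TIME */
if (gst_element_query_position (pipeline, GST_FORMAT_TIME, &amp;pos))
  g_print ("stream-time: %" GST_TIME_FORMAT "\n", GST_TIME_ARGS (pos));
    </programlisting>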
    <para>
      The stream-time is never used to synchronize streams; this is only
      done with the running-time.
    </para>
  </sect1>

  <sect1 id="section-time-overview" xreflabel="Time overview">
    <title>Time overview</title>
    <para>
      Here is an overview of the various timelines used in &GStreamer;.
    </para>
    <para>
      The image below represents the different times in the pipeline when
      playing a 100ms sample and repeating the part between 50ms and
      100ms. 
    </para>

    <figure float="1" id="chapter-clock-img">
      <title>&GStreamer; clock and various times</title>
      <mediaobject>
        <imageobject>
          <imagedata scale="75" fileref="images/clocks.&image;" format="&IMAGE;" />
        </imageobject>
      </mediaobject>  
    </figure>

    <para>
      You can see how the running-time of a buffer always increments
      monotonically along with the clock-time. Buffers are played when their
      running-time is equal to clock-time minus base-time. The stream-time
      represents the position in the stream and jumps backwards when
      repeating.
    </para>
  </sect1>

  <sect1 id="section-clocks-providers">
    <title>Clock providers</title>
    <para>
      A clock provider is an element in the pipeline that can provide
      a <classname>GstClock</classname> object. The clock object needs to
      report an absolute-time that is monotonically increasing when the
      element is in the PLAYING state. It is allowed to pause the clock
      while the element is PAUSED.
    </para>
    <para>
      Clock providers exist because they play back media at some rate, and
      this rate is not necessarily the same as the system clock rate. For
      example, a soundcard may play back at 44.1 kHz, but that doesn't mean
      that after <emphasis>exactly</emphasis> 1 second <emphasis>according
      to the system clock</emphasis>, the soundcard has played back 44100
      samples. This is only true by approximation. In fact, the audio
      device has an internal clock based on the number of samples played
      that we can expose.
    </para>
    <para>
      If an element with an internal clock needs to synchronize, it needs
      to estimate when a time according to the pipeline clock will take
      place according to the internal clock. To estimate this, it needs
      to slave its clock to the pipeline clock.
    </para>
    <para>
      If the pipeline clock is exactly the internal clock of an element,
      the element can skip the slaving step and directly use the pipeline
      clock to schedule playback. This can be both faster and more
      accurate.
      Therefore, generally, elements with an internal clock like audio
      input or output devices will be a clock provider for the pipeline.
    </para>
    <para>
      When the pipeline goes to the PLAYING state, it will go over all
      elements in the pipeline from sink to source and ask each element
      if it can provide a clock. The last element that can provide a
      clock will be used as the clock provider in the pipeline.
      This algorithm prefers a clock from an audio sink in a typical
      playback pipeline and a clock from source elements in a typical
      capture pipeline.
    </para>
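    <para>
      An application can also bypass the automatic selection and force a
      specific clock on the pipeline. A minimal sketch, here forcing the
      system clock:
    </para>
    <programlisting>
GstClock *clock;

/* obtain the system clock and force the pipeline to use it */
clock = gst_system_clock_obtain ();
gst_pipeline_use_clock (GST_PIPELINE (pipeline), clock);
gst_object_unref (clock);
    </programlisting>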
    <para>
      Several bus messages let you know about the clock and clock
      providers in the pipeline. You can see which clock is selected in
      the pipeline by looking at the NEW_CLOCK message on the bus.
      When a clock provider is removed from the pipeline, a CLOCK_LOST
      message is posted and the application should go to PAUSED and back
      to PLAYING to select a new clock, as the sketch below shows.
    </para>
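    <para>
      A minimal sketch of such a bus handler (again assuming a
      <varname>pipeline</varname> variable, here passed as user data):
    </para>
    <programlisting>
static void
message_cb (GstBus * bus, GstMessage * message, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);

  switch (GST_MESSAGE_TYPE (message)) {
    case GST_MESSAGE_CLOCK_LOST:
      /* cycle through PAUSED so that a new clock gets selected */
      gst_element_set_state (pipeline, GST_STATE_PAUSED);
      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      break;
    default:
      break;
  }
}
    </programlisting>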
  </sect1>

  <sect1 id="section-clocks-latency">
    <title>Latency</title>
    <para>
      The latency is the time it takes for a sample captured at timestamp X
      to reach the sink. This time is measured against the clock in the
      pipeline. For pipelines where the only elements that synchronize against
      the clock are the sinks, the latency is always 0 since no other element
      is delaying the buffer.
    </para>
    <para>
      For pipelines with live sources, a latency is introduced, mostly because
      of the way a live source works. Consider an audio source: it will start
      capturing the first sample at time 0. If the source pushes buffers with
      44100 samples at a time at 44100 Hz, it will have collected the first
      buffer at second 1. Since the timestamp of the buffer is 0 and the time
      of the clock is now >= 1 second, the sink will drop this buffer because
      it is too late. Without any latency compensation in the sink, all
      buffers will be dropped.
    </para>

    <sect2 id="section-latency-compensation">
      <title>Latency compensation</title>
      <para>
        Before the pipeline goes to the PLAYING state, it will, in addition to
        selecting a clock and calculating a base-time, calculate the latency
        in the pipeline. It does this by performing a LATENCY query on all the
        sinks in the pipeline. The pipeline then selects the maximum reported
        latency and configures it with a LATENCY event.
      </para>
      <para>
        All sink elements will delay playback by the value in the LATENCY
        event. Since all sinks delay by the same amount of time, they will
        be in sync relative to each other.
      </para>
    </sect2>

    <sect2 id="section-latency-dynamic">
      <title>Dynamic Latency</title>
      <para>
        Adding/removing elements to/from a pipeline or changing element
        properties can change the latency in a pipeline. An element can
        request a latency change in the pipeline by posting a LATENCY
        message on the bus. The application can then decide whether or not
        to query and redistribute a new latency, as the sketch below shows.
        Changing the latency in a pipeline might cause visual or audible
        glitches and should therefore only be done by the application when
        it is allowed.
      </para>
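      <para>
        A minimal sketch of redistributing the latency, as an extra branch
        in the bus handler sketched earlier:
      </para>
      <programlisting>
/* in the bus message handler */
if (GST_MESSAGE_TYPE (message) == GST_MESSAGE_LATENCY) {
  /* query the new latency from the sinks and redistribute it */
  gst_bin_recalculate_latency (GST_BIN (pipeline));
}
      </programlisting>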
    </sect2>
  </sect1>
</chapter>