imquic architecture

The imquic library is structured to try and keep its different features mostly separated in different files, for better understanding of the internals and a cleaner separation of responsibilities. Taking into account that support for the QUIC protocol itself is provided by the picoquic library, at the moment the code is mainly structured like this:

  • an endpoint abstraction, to represent the entry point to a QUIC client or server, and deal with networking (network.c, network.h and configuration.h); this is mapped to a picoquic_quic_t picoquic context;
  • the QUIC stack, which mostly orchestrates picoquic callbacks and scheduling (quic.c and quic.h);
  • a connection abstraction, that maintains a QUIC connection from the perspective of the imquic endpoint that received or originated it (connection.c and connection.h); this is mapped to a picoquic_cnx_t picoquic connection;
  • a STREAM abstraction, to keep track of streams handled by a connection (stream.c and stream.h);
  • a public API to access those features in a transparent way (see the public API documentation).

All this is tied together by an event loop (loop.c and loop.h), and relies on a few shared utilities.

On top of the raw QUIC stack, other parts of the code deal with a more application level overlay, specifically:

  • WebTransport support (http3.c and http3.h); notice that, despite the name of the source files, we don't support the whole HTTP/3 protocol, but only the limited set of functionality that allows for the establishment of WebTransport connections (CONNECT request), by leveraging a custom QPACK stack (qpack.c, qpack.h and huffman.h);
  • native RTP Over QUIC (RoQ) support (roq.c and internal/roq.h);
  • native Media Over QUIC (MoQ) support (moq.c and internal/moq.h).

Library initialization

The library needs a one-time initialization to set up the different parts of its stack. In order to do that, it relies on an atomic initialized property, which refers to values defined in imquic_init_state to figure out what the initialization state is.

After initializing the QUIC stack via imquic_quic_init (which simply takes note of the SSLKEYLOGFILE to use, if provided, and then calls a few picoquic methods, e.g., to register congestion control algorithms), the initialization method simply calls, in sequence, imquic_network_init for networking, and imquic_moq_init and imquic_roq_init to initialize the native support for MoQ and RoQ. To conclude, it uses imquic_loop_init to initialize the event loop.

Logging

The library can log messages with different levels of debugging. Specifically, when a debugging level is configured in the application, only messages that have an associated level lower than (or equal to) the configured level will be displayed on the logs. By default, this value is IMQUIC_LOG_VERB, but a different level can be configured at any given time via imquic_set_log_level.

By default the library writes its logs on the standard output. To change this behaviour (e.g., to customize the format of the logs, log to file, or integrate imquic logging in the logs already generated by the container application), a function called imquic_set_log_function is available, which routes each log line (its log level, format string and text) to the application.

Versioning

The versioning information is in a dynamically generated version.c file, that is filled in at compile time from the configuration and build process. It is then exposed to the library via some extern variables defined in version.h.

Locking and reference counting

Most resources that this documentation covers involve mutexes for thread safety, and reference counting to keep track of memory usage, in order to avoid race conditions or use-after-free issues.


Event loop

The event loop is basically a GLib implementation, built on top of GMainContext and GMainLoop. At the moment, it consists of a single loop running on a dedicated thread, launched at initialization time. Plans for the future include refactoring this part, so that we can, e.g., have different loops per endpoint, or possibly a configurable number of loops/threads, each responsible for a higher number of endpoints.

There are mainly two sources that can be attached to this event loop:

  1. imquic_network_source, added with a call to imquic_loop_poll_endpoint, which has the loop monitor a UDP socket for incoming traffic;
  2. imquic_connection_source, added with a call to imquic_loop_poll_connection, where the loop monitors a custom source that just checks whether the connection asked the loop to do something.

There's also a separate source for the timed firing of callbacks, using imquic_loop_add_timer, but at the moment that's only used to schedule picoquic via imquic_quic_next_step, to orchestrate calls to picoquic_get_next_wake_delay (to figure out when to schedule an action next) and picoquic_prepare_next_packet (to implement it).

Each imquic_network_source is associated with a specific endpoint in the library (a client or a server), and so with an imquic_network_endpoint (more on that later). When incoming traffic is detected on such a source, the loop passes it to imquic_quic_incoming_packet, which basically calls picoquic's picoquic_incoming_packet function, so that it can be processed accordingly. As we'll see later, for servers this may get the stack to detect and create a new connection, that will be handled within the context of the library.

When a connection is created by the QUIC stack (more on that later too), it's added to the loop as a source via the above mentioned imquic_connection_source instance, whose purpose is just to reference the connection in the loop. More precisely, any time the application wants to perform a specific activity (e.g., sending data or closing a connection), the associated info is queued so that it can be performed within the context of the loop thread (since picoquic is not thread-safe). In order to ensure the loop sees these queued events in a timely fashion, imquic_loop_wakeup is called too.

Note: As anticipated, the whole event loop mechanism is at the moment a bit flaky, and definitely not the best in terms of performance. This will probably be refactored, especially the part that concerns the integration of connections in the loop and their events.

QUIC stack

As we've seen in the intro to this documentation, while originally imquic shipped its own homemade QUIC stack, this part is now covered by picoquic. Its implementation is then integrated into, and mapped to, imquic fundamentals.

As such, the QUIC stack part of imquic is leaner than it was before, as it's mostly meant to orchestrate calls to picoquic and intercept events, besides ensuring that any interaction with picoquic is done on the same thread that originated a specific context.

A picoquic context (server or client) is created using the imquic_quic_create_context function, which is a helper function always invoked internally when creating a new endpoint (see next section).

Apart from that, there are only two things that the QUIC code does: intercepting events triggered by picoquic (to react to them accordingly), and advancing the scheduling of picoquic actions via the main loop.

For intercepting events, the QUIC stack implements two main callbacks:

  1. imquic_quic_incoming_packet is invoked when a packet reaches one of the endpoints managed by the library; this function internally invokes picoquic_incoming_packet to process the packet (which may result in notifications about new incoming data, connections, etc.);
  2. imquic_quic_queued_event is invoked when there is an event that was queued for a connection (e.g., data to send), that should now be implemented via picoquic in a thread-safe way.

The stack also implements a couple of other callbacks to intercept relevant events: the imquic_quic_stream_callback function, for instance, is a private callback that will intercept events associated to a specific context and/or connection (e.g., a QUIC connection changing state, incoming data, etc.).

Scheduling, instead, is handled via the imquic_quic_next_step function, which checks when picoquic needs to be triggered next via picoquic_get_next_wake_delay and then creates a timer that will fire the imquic_network_send_packet function.

Endpoints

As a QUIC library, imquic obviously starts from the concept of an endpoint, and as such from whether a user wants to create a QUIC server or a QUIC client, which have different requirements in terms of how they're configured and then managed.

The public API documentation explains how these endpoints are created from the perspective of someone using the library. Internally, both clients and servers are represented by the same abstraction, called imquic_network_endpoint, which represents an endpoint in the library capable of sending and receiving messages, independently of whether it's a server or client. This abstraction obviously maintains different pieces of information associated with the role of the endpoint and how it's configured.

Specifically, creating an endpoint starts from an imquic_configuration object, that the public API code fills in according to what was passed by the user. This configuration object is passed to imquic_network_endpoint_create, which returns an imquic_network_endpoint instance. Before this resource becomes "operative", it must be "started": first of all, it must be added to the loop in order to be monitored (which, as we've seen, is done with a call to imquic_loop_poll_endpoint); then, it can actually be started with a call to imquic_network_endpoint_start. Considering the QUIC functionality is provided by picoquic, this function makes a first call to imquic_quic_next_step, which as we've seen in the previous section is what kickstarts the context lifecycle. For servers, that's enough, because a server will wait for incoming connection attempts and react to them. For new clients, we need to initiate the steps to establish a new connection, so we create one via imquic_connection_create (more on that in a minute) and then start it via picoquic using picoquic_start_client_cnx.

The endpoint code is also responsible for actually sending data when it is time to do so. When a timer created by imquic_quic_next_step is fired, it means that picoquic is scheduled to send data, which could be actual application data (e.g., STREAM or DATAGRAM data) or other frames at the QUIC level (e.g., ACK or others). As such, when the imquic_network_send_packet callback is fired by the timer, the function invokes picoquic_prepare_next_packet multiple times, until there is nothing else to send: for each packet that this picoquic function prepares, the data is sent out to the peer address.

Connections

The library uses the imquic_connection structure as an abstraction for a connection: this identifies a specific connection an imquic endpoint is part of. For clients, an imquic_connection instance is automatically created when imquic_network_endpoint_start is called, since that's when an attempt to establish a connection is performed. For servers, an instance is instead created when an endpoint receives a packet, and picoquic detects and handles a new connection: in imquic, this is done when picoquic fires the picoquic_callback_almost_ready event in the application callback.

When a connection is created via a call to imquic_connection_create, it is initialized and mapped to the imquic_network_endpoint that originated it (which is used for sending and receiving messages on that connection). For clients, this is also where we create a picoquic connection via picoquic_create_cnx and initialize the transport parameters. Finally, the connection is added to the loop via the already mentioned imquic_loop_poll_connection.

At this point, all actions on this connection refer to that instance: this includes creating streams, sending and receiving data, handling callbacks and so on. This imquic_connection instance is also what is passed to end users of the library as an opaque pointer: considering this structure is reference counted, the public API provides an interface for end users to add themselves as users of the connection as well.

Sending and receiving data

When not using self-contained messages for delivering data (e.g., using DATAGRAM), QUIC can send and receive data in chunks, e.g., as part of a STREAM. By chunking we mean that, although the overall stream of data in each context is assumed to be in order (as in TCP), different portions of the buffer to send can actually be delivered in any order, by providing offset and length values to let the user know which portion of the overall data each "chunk" should fit in.

In order to provide a streamlined API to end users, sending and receiving STREAM data assumes an always ordered delivery. This means that the application and the library never deal explicitly with offsets, but always append data when sending, and see data in order when receiving. This maps to how picoquic deals with exchanging data as well.

Although the actual management of STREAM frames is performed by picoquic, considering the multistream nature of QUIC, imquic exposes a dedicated structure called imquic_stream, that provides an abstraction of an actual QUIC stream. This is mainly done to keep track of which streams currently exist, and of their life cycle.

This imquic_stream structure contains all the info needed to manage a specific stream, including its ID, who originated it, and whether it's bidirectional or unidirectional. State is also maintained, in order to figure out, e.g., when a stream is complete.

A list/map of such imquic_stream instances is kept in the imquic_connection that is managing them. New imquic_stream instances can be created either because the stack sees an incoming STREAM frame from the peer for a new ID, or because the end user or the QUIC stack locally create one. In both cases, imquic_stream_create is used to create a new stream the connection should be aware of, since any attempt to interact with such a stream (e.g., for the purpose of delivering data) will fail if the stream ID is unknown.

In order to ensure a monotonically increasing allocation of locally created stream IDs to end users (and native protocols, as we'll see later), the internal imquic_connection API provides a helper function called imquic_connection_new_stream_id for the purpose.

Once a stream exists, incoming STREAM data will be notified via internal callbacks on the connection (and from there to the end user or, if mediated, to the native protocol handling them), while data can be sent on a STREAM using imquic_connection_send_on_stream. It's important to point out that this function only queues the data to deliver: considering picoquic is not thread-safe, the actual delivery will only happen once the loop is woken up by the connection.

To conclude, as anticipated, data can be exchanged in self-contained messages in QUIC too, specifically using DATAGRAM if support for that frame was negotiated. In that case, the process is similar, which means internal buffering is performed to mediate between the end user and the actual delivery of the data, which as we explained is always triggered in a scheduled way by the event loop, and not directly when the user calls the function to send it. Incoming DATAGRAM frames will be notified via internal callbacks on the connection (and from there to the end user or, if mediated, to the native protocol handling them), while data can be sent on a DATAGRAM using imquic_connection_send_on_datagram, which will also involve the loop as in the STREAM case.


Native protocols

While imquic itself provides a raw QUIC stack that should be usable for different use cases and applications, it also comes, out of the box, with native support for a few application level protocols, in order to simplify the life of developers interested in some specific use cases.

WebTransport

WebTransport is a "first class citizen" in imquic, meaning that it's exposed as an option as part of the public APIs, independently of the protocols that will be built on top of it. The native support of MoQ, for instance, builds on top of this WebTransport support.

As explained in the intro, this is achieved in the core by implementing the basics of HTTP/3 CONNECT for the sole purpose of establishing a WebTransport connection, when needed. When a user is interested in WebTransport, an imquic_http3_connection instance is created and associated with the imquic_connection instance. This new HTTP/3 specific resource is then used any time data is sent or received over the associated QUIC connection: any time there's incoming STREAM data, for instance, rather than being passed to the application as the stack would normally do, it's passed to the WebTransport stack first, via a call to imquic_http3_process_stream_data. This function checks if it's a Control stream, a stream related to QPACK, or a stream meant for exchanging data. For WebTransport, this means checking the codes that identify the usage of those streams, and handling them accordingly. After that, the WebTransport layer becomes a transparent "proxy" between the connection and the application, with STREAM offsets shifted in order to mask this intermediate layer from the application's perspective.

In order to set up a WebTransport on a QUIC connection, some HTTP/3 messages must be exchanged first. Specifically, both endpoints need to exchange a SETTINGS frame to negotiate some parameters. Parsing a remote SETTINGS is done in imquic_http3_parse_settings, while preparing the local one is done in imquic_http3_prepare_settings. After that, a client is supposed to send a CONNECT request, while the server will (hopefully) send a 200 back. If the imquic endpoint is acting as a client, it will use imquic_http3_check_send_connect to prepare a CONNECT message to send, and then wait for a response. For both clients and servers, parsing HTTP/3 requests/responses is done by imquic_http3_parse_request, which will in turn parse the HEADERS frame using imquic_http3_parse_request_headers. This would conclude the setup for clients, while servers will need to send a response back, which is done in imquic_http3_prepare_headers_response.

The QPACK portion of the exchange is performed via a custom QPACK stack that uses static tables for taking care of Huffman encoding and decoding. Specifically, the HTTP/3 stack creates an imquic_qpack_context that controls two dynamic tables (imquic_qpack_dynamic_table). Incoming QPACK messages are processed either in imquic_qpack_decode (for messages coming from the peer's encoder stream) or in imquic_qpack_process (for actual HTTP/3 requests/responses compressed with QPACK). The first method decodes Huffman codes where needed, and updates the remote dynamic table accordingly; the second one references static and dynamic tables to reconstruct headers to return back to the HTTP/3 stack. Outgoing messages, instead, are passed to the imquic_qpack_encode method, which checks if new additions must be made to the local dynamic table (and, if so, prepares the QPACK encoder stream with Huffman codes to send to the peer), and then references the static and dynamic tables to encode requests/responses via QPACK.

The WebTransport stack also supports the Application Protocol Negotiation functionality, by basically manipulating and monitoring HTTP/3 headers to negotiate which protocol to use on top of WebTransport. This is currently heavily used by the MoQ stack, since that's the mechanism used to negotiate which draft version semantics to use on the wire.

Note: At the time of writing, the stack is a bit naive as it doesn't really ever add anything to the local table, preferring the inline usage of Indexed Field Line, Literal Field Line with Name Reference and Literal Field Line with Literal Name, without any encoder instruction on the QPACK encoder stream. Besides, on the way in it currently assumes a referenced entry will be in the table already, which means it may not work as expected if encoder instructions are delayed or out of order.

RTP Over QUIC (RoQ)

TBD.

Media Over QUIC (MoQ)

TBD.


QLOG support (optional)

The library can optionally be built with QLOG support. In order to do that, the Jansson library must be found and linked, which can be done by passing the --enable-qlog flag to the configure script. The code for such integration is available in the core (see qlog.c and qlog.h for the QLOG foundation, http3.c and internal/http3.h for HTTP/3 events, roq.c and internal/roq.h for RoQ events, and moq.c and internal/moq.h for MoQ events).

It's important to point out that this only impacts the generation of QLOG logs for protocols the library has direct control of, which means WebTransport, RoQ and MoQ. For QLOG logging of QUIC itself, this is delegated to picoquic (which provides this functionality out of the box) and so handled in a different way. This also means that, at the time of writing, tracking a connection via QLOG will result in two files being generated, instead of just one, if the library user is interested in logs related to both QUIC and any of the upper layers.

That said, if compiled, QLOG support can be enabled programmatically and separately per each created endpoint. To enable such a feature, the configuration will need a folder to save QLOG files to: the library will not create folders if they're missing, so it's important that the provided folder does exist already, or QLOG will be silently disabled. Once QLOG support is requested for an endpoint, the library can save events related to QUIC, HTTP/3, RoQ and/or MoQ. For application layers, the output can be written to either contained JSON files, or sequential JSON; QUIC QLOG files, instead (which as we mentioned are originated by picoquic), will always be generated in contained JSON files instead.

Considering that different protocols can be tracked, enabling QLOG specifies what should be stored as part of the imquic_network_endpoint structure: as soon as an imquic_connection is created out of such a network instance, QLOG generation can be configured. For QUIC files, this means enabling it in picoquic; for application layers, an imquic_qlog instance is created contextually via imquic_qlog_create, inheriting the configured properties.

The library then uses different methods made available in qlog.h to add events to the QLOG trace. An event to add to the trace can be prepared with imquic_qlog_event_prepare, which creates an empty event of the provided name, and automatically sets a timestamp as part of the process. An empty data object can be added via a call to imquic_qlog_event_add_data, which returns a reference to the data object to allow the caller to fill it in. Once an event has been filled in with all the relevant details, it can be added to the trace with a call to the imquic_qlog_append_event method: for contained JSON files, this simply adds the object to the array of events; for serialized JSON, this serializes the new event to JSON text, and appends it to the QLOG file prefixed by the RS record separator.

When a connection is closed, the associated QLOG instance is destroyed too, via imquic_qlog_destroy: for QLOG instances saving to contained JSON files, this performs an automatic call to imquic_qlog_save_to_file to regenerate the JSON serialization; this doesn't happen for sequential JSON (as, in that case, events have been written to file already by means of imquic_qlog_append_event).

The library comes with many helpers to generate events specific to HTTP/3, RoQ and MoQ. At the time of writing, not all events defined in those specs have been implemented, and a few of those that have are not complete. Integrating the missing events and information is left to future revisions of the code.