Hi there! Welcome back, it has been some time. This will be a different kind of post, since we will be looking under the hood to learn and understand how Tide works. For this purpose we will examine the life cycle of a request.
Tide is a modular web framework: it is built by composing different modules (crates, to be precise) that cooperate to give users the features they expect from a web framework (e.g. listeners, routing, extraction and more).
So, let’s start digging into Tide’s design by following a request, and to do that we can create a minimal application.
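The original snippet did not survive, so here is a minimal application of the kind described (a sketch: it assumes `tide` and `async-std` with the `attributes` feature as dependencies, and the handler body is illustrative, matching the `Hi there!` response seen later in the post):

```rust
// Minimal tide application (sketch; the exact handler body is an
// assumption, not the post's original snippet).
#[async_std::main]
async fn main() -> tide::Result<()> {
    let mut app = tide::new();
    // A route with a concrete segment ("hello") and a named wildcard (":name").
    app.at("/hello/:name").get(|_req: tide::Request<()>| async move {
        Ok("Hi there!")
    });
    // Bind and start accepting connections on port 8080.
    app.listen("127.0.0.1:8080").await?;
    Ok(())
}
```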
And check the response
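For example, with curl from another terminal (the response body matches the endpoint above):

```shell
$ curl localhost:8080/hello/tide
Hi there!
```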
Great! We have our minimal application working.

> We have a server that is listening for connections on port 8080, accepting HTTP requests and producing responses.
### Expanding the main macro
Let’s now start to examine the building blocks. First, you may notice the `#[async_std::main]` macro, which allows us to write our `main` function as `async`. If we expand the macro we can check how the code looks after expansion.
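A simplified sketch of what the expansion looks like (the exact generated code varies between `async-std` versions, so treat this as an approximation):

```rust
// Simplified sketch of the #[async_std::main] expansion
// (assumption: the real output differs in detail between versions).
fn main() -> tide::Result<()> {
    async fn main() -> tide::Result<()> {
        // ... our original async main body ...
        Ok(())
    }
    // Run the async main inside a task, blocking the current thread on it.
    async_std::task::block_on(async { main().await })
}
```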
We can see that our `main` function is wrapped inside another, non-async `main` function that runs our code inside an async task, blocking the current thread.
### Creating the app
Back to our code: inside our `main` fn we are creating a new tide application. `tide::new` returns what we call `app`, but the actual type is a `Server`.
> Servers are built up as a combination of state, endpoints and middleware.
- `state` is defined by users, and tide makes it available as a shared reference in each request.
- `router`: the server’s routing table, used behind an `Arc`.
- `middleware`: allows users to extend the default behavior in both the input (`request`) and output (`response`) directions. This field in particular holds a vector behind an `Arc`.
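Putting those three fields together, the `Server` struct looks roughly like this (a paraphrased sketch of the tide source; names and layout may differ between versions):

```rust
// Paraphrased sketch of tide's Server struct (assumption: simplified).
pub struct Server<State> {
    router: Arc<Router<State>>, // the routing table
    state: State,               // user-defined state, shared with each request
    middleware: Arc<Vec<Arc<dyn Middleware<State>>>>, // the middleware chain
}
```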
We will talk about middleware in the next post, but later on we will focus on how the routing decision is made based on the routing table.
Our next line does a couple of things. The `at` function allows users to add a new route (at a given `path`) to the router, and returns the created `Route`, allowing the chaining of method calls.
A path (e.g. `/hello/:name`) is composed of zero or more segments; each segment represents a non-empty string separated by `/` in the path. (You can read the official segment definition in the tide server module documentation.) There are two kinds of segments:
- Concrete: matches exactly the corresponding part of the path (e.g. `hello` in `/hello/:name`).
- Wildcard: extracts and parses the respective part of the path of the incoming request and passes it along to the endpoint as an argument. Wildcard segments also have different alternatives:
  - named (e.g. `/:name`): creates an endpoint parameter called `name`.
  - optional (`/*:name`): will match to the end of the given path, no matter how many segments are left, even none.
  - unnamed (e.g. `/:`): the name of the parameter can be omitted to define a path that matches the required structure but where the parameters are not captured; `:` will match a single segment, and `*` will match an entire path.
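To make the segment kinds concrete, here is a hypothetical, much-simplified matcher. This is not Tide’s real routing code (which lives in a separate crate); it only illustrates concrete and `:`-style wildcard segments:

```rust
// Hypothetical sketch of segment matching (not Tide's real code).
use std::collections::HashMap;

/// Match `path` against a pattern such as "/hello/:name".
/// Returns the captured parameters on success, `None` on mismatch.
fn match_route(pattern: &str, path: &str) -> Option<HashMap<String, String>> {
    let pat: Vec<&str> = pattern.split('/').filter(|s| !s.is_empty()).collect();
    let seg: Vec<&str> = path.split('/').filter(|s| !s.is_empty()).collect();
    if pat.len() != seg.len() {
        return None; // this sketch skips optional/rest segments
    }
    let mut params = HashMap::new();
    for (p, s) in pat.iter().zip(seg.iter()) {
        if let Some(name) = p.strip_prefix(':') {
            // Wildcard segment: capture the value (unnamed when `name` is empty).
            if !name.is_empty() {
                params.insert(name.to_string(), (*s).to_string());
            }
        } else if p != s {
            // Concrete segment: must match exactly.
            return None;
        }
    }
    Some(params)
}
```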
As we said before, the `at` method returns a new `Route`, and if we look at the definition of `Route` we can see that a route holds a reference to the router, has a `path` and a vector of middleware to apply. Also, there is a `prefix` flag used to decide whether `strip_prefix` should be applied or not.
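The fields described above look roughly like this (a paraphrased sketch; details may differ between tide versions):

```rust
// Paraphrased sketch of tide's Route struct (assumption: simplified).
pub struct Route<'a, State> {
    router: &'a mut Router<State>,               // reference to the router
    path: String,                                // the route's path
    middleware: Vec<Arc<dyn Middleware<State>>>, // middleware to apply
    prefix: bool,                                // whether strip_prefix applies
}
```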
But in our example we use the `get` method to set the endpoint (in our case, the closure to execute when the request arrives). Let’s check that method.
tide provides methods for each HTTP verb (e.g. `get`, `put`, etc.) that internally call the `method` method with the correct HTTP method type as argument.
Until now we were always looking at the tide source code, but these methods use the `http-types` dependency. This crate provides shared types for common HTTP operations.
Let’s also look at how the `method` function is implemented.
For now let’s focus on the `else` branch, since we don’t need to strip any prefix. This function adds the route definition (an HTTP verb and endpoint) to the router, wrapping the endpoint with the middleware that should be executed. Also, notice that it returns the `Route`, allowing chaining with other methods.
Great! We have already set up our `app`. At this moment we have defined a route that:
- should match the `/hello/:name` path and the HTTP `GET` method;
- should run the defined endpoint, a closure in our case.
But we are not listening for any connections yet, so let’s take a look at how tide allows us to listen.
The next line in our example app is the call to `app.listen("127.0.0.1:8080")`. This line sets up the listener and starts listening for incoming connections when awaited (remember that futures are lazy in Rust). Let’s take a look at the `listen` method.
Tide has the concept of a listener, implemented as an async trait that represents an HTTP transport and built using the `to_listener` implementation. Out of the box tide provides a TCP listener and a Unix socket listener, but you can create your own.
The `listen` fn then calls the `bind` method of the listener, which starts the listening process by opening the necessary network ports. At this point the ports are open but not yet accepting connections; for that, the `listen` method calls the `accept` method of the listener.
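The two-step contract (bind first, then accept) can be sketched with a synchronous, std-only analogue. This is an illustration only; the real tide trait is async and generic over the server state:

```rust
// Synchronous sketch of the bind/accept Listener contract
// (assumption: simplified; tide's trait is async).
use std::io;
use std::net::{TcpListener as StdTcpListener, TcpStream};

trait Listener {
    /// Open the network port(s); after this the port is bound,
    /// but we are not handling connections yet.
    fn bind(&mut self, addr: &str) -> io::Result<()>;
    /// Start accepting connections, handing each one to `handle`.
    fn accept(&mut self, handle: &mut dyn FnMut(TcpStream)) -> io::Result<()>;
}

struct TcpListener {
    inner: Option<StdTcpListener>,
}

impl Listener for TcpListener {
    fn bind(&mut self, addr: &str) -> io::Result<()> {
        self.inner = Some(StdTcpListener::bind(addr)?);
        Ok(())
    }
    fn accept(&mut self, handle: &mut dyn FnMut(TcpStream)) -> io::Result<()> {
        let listener = self.inner.as_ref().expect("bind must be called first");
        for stream in listener.incoming() {
            handle(stream?);
        }
        Ok(())
    }
}
```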
Awesome! Now we are running our app and listening for network connections, which we can easily check from the command line.
### Follow the trace
Now that we have the setup in place and our application running, we can start reviewing the life of a request. Let’s start with a simple test:
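A simple request from another terminal (using the route we defined earlier):

```shell
$ curl localhost:8080/hello/tide
Hi there!
```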
Lots of things happen before we get the `Hi there!` response, so let’s dive in…
First, we want to add the logger middleware and set the log level to debug.
Let’s run the app again and make the test request to see the log output (leaving the async_io and polling entries aside).
So, we can see logs from the middleware and also from `async_h1`, another dependency crate, used to parse HTTP 1.1. And this is something to note: tide currently supports only HTTP 1.1.
Now it is time to examine how the connection is established and follow the path from the listener to the endpoint.
First, going back to our listener (a TCP listener in our case): remember that we need to call `accept` in order to start accepting connections, so let’s take a look there to see the behavior.
`listener.incoming` returns a stream that we can loop over, calling `next` to handle each connection and calling `handle_tcp` with the server and the stream.
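Conceptually, the loop looks like this synchronous, std-only sketch (an assumption for illustration: tide actually uses async-std streams and tasks, not threads, and `async_h1` for parsing):

```rust
// Synchronous sketch of the accept loop (assumption: tide spawns async
// tasks and parses with async_h1; here we fake both with threads and a
// canned HTTP/1.1 response).
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn handle_tcp(mut stream: TcpStream) {
    // In tide, this is where async_h1::accept parses the request and the
    // server's `respond` produces the response.
    let mut buf = [0u8; 1024];
    let _ = stream.read(&mut buf);
    let _ = stream.write_all(b"HTTP/1.1 200 OK\r\ncontent-length: 9\r\n\r\nHi there!");
}

fn accept_loop(listener: TcpListener) {
    // One handler per connection, each in its own task (thread here).
    for stream in listener.incoming() {
        if let Ok(stream) = stream {
            thread::spawn(move || handle_tcp(stream));
        }
    }
}
```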
This spawns a new async task, and inside that task calls `async_h1::accept` (the HTTP parser) with the stream to parse and a closure to execute. So, let’s follow this request to see how the parser handles it.
`async_h1` creates a new instance of `Server` with the `io` stream and the endpoint, then calls the `accept` method of that server and returns its result. The `accept` method just loops while the connection is kept alive, calling `accept_one`. The `accept_one` method is the one that decodes the incoming request, reads the body and parses the headers, passes the request to the endpoint, and writes the response.
Nice! We have followed the whole path: accepting the connection, decoding, calling the endpoint, encoding and writing the response. We can now go deeper and follow the closure…
### One level further
After decoding and parsing the headers, the closure passed to `async_h1` is executed.
Now it is time to go deeper into the `respond` method and see how the request is processed inside tide. `respond` receives a request, and first needs to figure out which endpoint should be called, based on the path and method of the request.
The `route` method tries different strategies to select the endpoint that should be used, and if none matches the request, a `404` endpoint is called to return `NOT FOUND` to the client.
Once we have the best-matching endpoint, the middleware machinery uses the `Next` struct to drive the execution, including the actual endpoint, and calls `run` to start processing.
Notice that we use `handle` to execute the call that runs the endpoint; that is because each middleware also receives `next` as an argument, so it can continue by calling the next middleware or break the chain with a response.
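A hypothetical, synchronous sketch of how a `Next`-style chain drives the middleware and finally the endpoint (tide’s real types are async and generic over the server state):

```rust
// Hypothetical sketch of a Next-style middleware chain
// (assumption: simplified; tide's Next is async).
struct Next {
    endpoint: fn(&str) -> String,
    rest: Vec<fn(&str, Next) -> String>,
}

impl Next {
    fn run(mut self, req: &str) -> String {
        if self.rest.is_empty() {
            // No middleware left: call the actual endpoint.
            (self.endpoint)(req)
        } else {
            // Hand control to the next middleware; it may call
            // `next.run(...)` to continue or return early to break the chain.
            let mw = self.rest.remove(0);
            mw(req, self)
        }
    }
}

// A middleware that lets the chain continue, then decorates the response.
fn logger(req: &str, next: Next) -> String {
    let res = next.run(req);
    format!("[logged] {}", res)
}

// The endpoint at the end of the chain.
fn hello(_req: &str) -> String {
    "Hi there!".to_string()
}
```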
Awesome! We have followed the request all the way to the endpoint call. Now the response is sent to the client!
That’s all for today. We followed the code (and crates) that allow tide to accept connections, decode and parse the request, decide the best endpoint (route) to use, execute the middleware chain and call the endpoint. There are still lots of topics to cover, like body parsing, parameter extraction and middleware execution in both directions (input/output). In the next notes we will start covering some of those topics.
As always, I write this as a learning journal, so there could be errors or misunderstandings; any feedback is welcome.
Author Javier Viola