Over a year ago, I wrote the first draft of my dev proxy launcher in async Python, based on an idea to have something like xinetd for my various in-development projects. Each project is described as a service in a small TOML config:
[service.django]
host = "django.localhost"
port = 8001
cwd = "~/Projects/my-django-project"
command = "uv run manage.py runserver {port}"
delay = 3

[service.hugo]
host = "hugo.localhost"
port = 8002
cwd = "~/Projects/my-hugo-site"
command = "hugo server --buildDrafts --buildFuture --port {port} --liveReloadPort 80"
While the Python version validated the idea and usually worked well, there were a few cases it did not handle well.
Recently, I decided to work on a new version using Rust and hyper to handle the proxied requests, which I have named Norikae (乗り換え, "transfer").
The hyper docs include a proxy example that helped me build a core part of my own server, with a few differences.
Managing subprocesses
When connecting to the upstream server fails, I want to automatically try to launch the corresponding dev server.
// Create my own connect method to handle initial connection error
// and attempt to launch our process
async fn connect(&self) -> Result<TcpStream, std::io::Error> {
    if let Ok(stream) = TcpStream::connect(self.address()).await {
        return Ok(stream);
    }
    log::info!("Attempting to run: {:#?}", self);
    self.launch().await?;
    log::debug!("Waiting for {:#?}", self.delay());
    sleep(self.delay()).await;
    TcpStream::connect(self.address()).await
}
// The connection call then changes from:
let stream = TcpStream::connect((host, port)).await.unwrap();
// to:
let stream = self.connect().await.unwrap();
For now, I am just running processes under tmux, but in the future I will likely do something a bit more robust.
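For completeness, here is a simplified sketch of the helpers connect() relies on, continuing the hypothetical Service struct from earlier: address() and delay() are thin wrappers over the config fields, and launch() just starts the command in a detached tmux session. The real Norikae code may differ.

use std::time::Duration;
use tokio::process::Command;

impl Service {
    // Local address connect() dials, e.g. 127.0.0.1:8001 for the django entry.
    fn address(&self) -> (&str, u16) {
        ("127.0.0.1", self.port)
    }

    // How long connect() waits after launching before retrying the connection.
    fn delay(&self) -> Duration {
        Duration::from_secs(self.delay)
    }

    // Start the dev server in a detached tmux session named after the service,
    // in its configured working directory (tilde expansion of cwd is skipped
    // here for brevity).
    async fn launch(&self) -> Result<(), std::io::Error> {
        // Substitute the configured port into the command template,
        // e.g. "uv run manage.py runserver {port}".
        let command = self.command.replace("{port}", &self.port.to_string());
        Command::new("tmux")
            .arg("new-session")
            .arg("-d")
            .arg("-s")
            .arg(&self.name)
            .arg("-c")
            .arg(&self.cwd)
            .arg(command)
            .status()
            .await?;
        Ok(())
    }
}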
Handling WebSockets
One thing I did not implement in the Python version was proper handling of HTTP 101 and upgrading WebSocket connections.
This stumped me for a while, until I found some examples online about upgrading connections. To make sure the connection was handled properly in both directions, I needed to wrap my proxy request in a bit more logic.
// If our client request contains an upgrade header, then we need to store an OnUpgrade
// object that we can later tunnel with. If not, we can just ignore it.
let maybe_client_upgrade = if request.headers().contains_key(UPGRADE) {
    log::debug!("Upgrade request {}:{}{}", self.host, self.port, request.uri());
    let upgrade = hyper::upgrade::on(&mut request);
    Some(upgrade)
} else {
    None
};

// Proxy the original request to our upstream server
let mut response = sender.send_request(request).await?;

// If the response tells us to switch protocols, then we need to configure the upgrade
// for the response. We will pass the server upgrade along with the client upgrade to
// our tunnel method.
if response.status() == StatusCode::SWITCHING_PROTOCOLS {
    if let Some(client_upgrade) = maybe_client_upgrade {
        let server_upgrade = hyper::upgrade::on(&mut response);
        tokio::task::spawn(tunnel(client_upgrade, server_upgrade));
    }
}
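The tunnel itself just waits for both upgrades to resolve and then copies bytes in each direction until one side closes. A minimal sketch, assuming hyper 1.x and the TokioIo adapter from hyper-util (the actual implementation may differ):

use hyper::upgrade::OnUpgrade;
use hyper_util::rt::TokioIo;

// Minimal sketch: resolve both upgrades, then shuttle bytes in both
// directions until either side closes the connection.
async fn tunnel(client_upgrade: OnUpgrade, server_upgrade: OnUpgrade) -> std::io::Result<()> {
    let client = client_upgrade.await.expect("client upgrade failed");
    let server = server_upgrade.await.expect("server upgrade failed");

    // Wrap hyper's Upgraded streams so they implement tokio's AsyncRead/AsyncWrite.
    let mut client = TokioIo::new(client);
    let mut server = TokioIo::new(server);
    tokio::io::copy_bidirectional(&mut client, &mut server).await?;
    Ok(())
}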
Lastly, I had to make sure both my HTTP builder and my connection handler used .with_upgrades() to ensure a long-running tunnel would not be closed early.
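On the server side, that looks roughly like the following sketch, assuming hyper 1.x, where stream and service are placeholders for however the accepted connection and the proxy service are actually built.

// Sketch: serve an accepted TCP connection with hyper 1.x. `stream` and
// `service` are placeholders for the real connection and proxy service.
let io = hyper_util::rt::TokioIo::new(stream);
tokio::task::spawn(async move {
    if let Err(err) = hyper::server::conn::http1::Builder::new()
        .serve_connection(io, service)
        // Without with_upgrades(), hyper ends the connection after the
        // initial HTTP exchange, which would close the WebSocket tunnel early.
        .with_upgrades()
        .await
    {
        log::error!("Error serving connection: {:?}", err);
    }
});

The client-side connection returned by hyper::client::conn::http1::handshake has a matching with_upgrades() that should be awaited in place of the plain connection future.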
Next Steps
I still have a lot to learn about the best way to structure things in Rust, but I am very happy that the new Rust version works better than the older Python version did. The current version works on macOS using tmux, but in the future I would like to handle Linux, and perhaps other combinations, a bit more gracefully.
Code for Norikae can be found on Codeberg.