exercise bike

so i needed to get in shape (still do): borrowed an exercise bike from a friend. i wanted to see the biggest cities in the world while doing so: google earth. all that was missing was a way to control google earth from the bike: kinect and the speech recognition sdk.

Video

Source

my solution is a modification of this project: Kinect Excercise. i added speech recognition (let me ride… a big city) and some keyboard shortcuts and mouse automation for google earth (using http://www.autoitscript.com), plus OSC for an upcoming project (a small game based on this concept). you can download the hacky c# project.
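just to give an idea of the automation part, here is a rough python sketch (the real project does this in c# with autoit; pyautogui and read_pedal_speed() below are stand-ins i made up): pan google earth forward by holding the up arrow for a time proportional to how fast you pedal.

import time
import pyautogui

def read_pedal_speed():
    # hypothetical sensor reading (0 = stopped); in the real setup this
    # comes from the bike / kinect, not from a stub
    return 1.0

def fly_forward():
    # in google earth the arrow keys pan the view, so "riding" is just
    # holding the up arrow longer when you pedal faster
    speed = read_pedal_speed()
    if speed > 0:
        pyautogui.keyDown('up')
        time.sleep(min(speed, 2.0) * 0.1)
        pyautogui.keyUp('up')

while True:
    fly_forward()
    time.sleep(0.05)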

20 08 2013

guitar neck tracking & gesture recognition

i finally found something useful to do with my kinect: tracking the neck of a guitar and using gesture recognition to control the FX rack of a pure data patch.

Video

Guitar neck tracking

i used the natural interaction middleware hand tracking example (PointViewer) and added open sound control (liblo). latency is 33ms. you can download the source and the executable for linux (64bit).
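the osc part is tiny. here is a minimal python sketch of the idea (the real change is a c++ liblo call added to PointViewer; the port and the /hand path below are placeholders i picked):

import liblo

pd = liblo.Address(9000)  # placeholder port, use whatever your pd patch listens on

def on_hand_update(x, y, z):
    # called once per tracked frame with the hand position
    liblo.send(pd, "/hand", float(x), float(y), float(z))

on_hand_update(0.12, -0.40, 1.85)  # example frame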

Gesture recognition

i am using the neat gesture recognition toolkit by Nick Gillian. using the DTW (Dynamic Time Warping) example (coded in openframeworks), i simply added open sound control to send the predicted gesture to pure data. you can download the source and the executable for linux (64bit).
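if you have never looked at DTW, the core is just a small dynamic-programming table. here is a stripped-down python sketch of the distance computation (the GRT adds normalization, multiple templates and rejection thresholds on top of this; classification is simply "pick the template with the smallest distance"):

import math

def dtw_distance(a, b):
    # a, b: lists of (x, y, z) samples; returns the cost of the best alignment
    n, m = len(a), len(b)
    inf = float('inf')
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

template = [(0.0, 0.0, 0.0), (0.1, 0.2, 0.0), (0.3, 0.4, 0.0)]
live = [(0.0, 0.0, 0.0), (0.2, 0.2, 0.0), (0.3, 0.5, 0.0)]
print(dtw_distance(template, live))  # smaller = closer to the template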

Pure Data

nothing fancy here, just a patch that sends the tracking via osc to the gesture recognition, gets the result back from it, and applies some FX to an incoming signal using X, Y, Z. you can download the patch.

09 07 2013

theremin à crayon

The “not-just-for-sci-fi” electronic instrument that is played without being touched, plus a graphic tablet on top and some very simple electronics in the case (to power the theremin and convert its signals to USB). Both antennas (control voltages for volume and pitch) are routed to PureData.

The patch is really just a bridge (open sound control) to MyPaint (an open-source graphics application for digital painters). Right now the volume is linked to the diameter of the brush and the pitch is linked to the color brightness (this can be changed in the code, see below).
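For the curious, the bridging boils down to something like the following Python sketch (with pyliblo, standing in for the actual Pd patch): scale a normalized volume value to a brush radius and send it to the /mp/radius handler that the MyPaint code below registers on port 9997. The radius range used here is my guess.

import liblo

mypaint = liblo.Address(9997)  # the port opened in document.py below

def volume_to_radius(volume):
    # map 0..1 to mypaint's logarithmic radius, roughly -2 (tiny) to 5 (huge)
    return -2.0 + 7.0 * max(0.0, min(1.0, volume))

def on_volume(volume):
    liblo.send(mypaint, "/mp/radius", volume_to_radius(volume))

on_volume(0.5)  # example value coming from the volume antenna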

BTW this is the beauty of the open source movement: I had the idea in the morning, talked to some people on #mypaint in the afternoon, hacked the source for my needs during the night and went to bed with a working prototype. Ready-made Solutions Require Ready-made Problems; For Everything Else There Is Open Source Software!

Video

Source

MyPaint: share/gui/document.py -> pyliblo server (receive from pd)

import liblo, sys

created = False  # make sure only one OSC server is started, even if several Documents are created

class Document (CanvasController):
    def __init__(self, app, leader=None):
        global created
        if not created:
            self.server = liblo.Server(9997)
            self.server.add_method("/mp/radius", 'f', self.oscradius)
            self.server.add_method("/mp/zoom", 'f', self.osczoom)
            self.server.add_method("/mp/rotate", 'f', self.oscrotate)
            gobject.timeout_add(20, self.pollcheck)
            created = True

    def oscradius(self, path, args):
        # brush diameter, driven by the volume antenna
        adj = self.app.brush_adjustment['radius_logarithmic']
        adj.set_value(args[0])

    def oscv(self, path, args):
        # color brightness, driven by the pitch antenna
        # (hook it up with add_method on an OSC path, like the handlers above)
        h, s, v = self.app.brush.get_color_hsv()
        v = args[0]
        if v < 0.005: v = 0.005
        if v > 1.0: v = 1.0
        self.app.brush.set_color_hsv((h, s, v))

    def osczoom(self, path, args):
        self.tdw.set_zoom(args[0])

    def oscrotate(self, path, args):
        self.tdw.set_rotation(args[0])

    def pollcheck(self):
        # called every 20 ms by the gobject timeout: process pending OSC messages
        self.server.recv(10)
        return True

MyPaint: share/mypaint/lib/stroke.py -> pyliblo client (send pressure, x, y to pd)

import liblo, sys

    # inside the existing Stroke class
    def __init__(self):
        self.finished = False
        Stroke.serial_number += 1
        self.serial_number = Stroke.serial_number
        self.target = liblo.Address(1234)  # pd listens on this port

    def record_event(self, dtime, x, y, pressure, xtilt, ytilt):
        self.tmp_event_list.append((dtime, x, y, pressure, xtilt, ytilt))
        liblo.send(self.target, "/mypaint/pressure", pressure)

PureData patch:
bridgepd

25 03 2013

heavy box

NFAQ

how heavy?
32 lb (14.5 kg), roughly a medium-sized dog

what are you going to do with it?
a centralized place for my digital art projects and my analog output. think of it as an effects unit or dsp in a box, but it’s also a silent computer with good processing power (at the time of this writing). the main project is to remix videos on the fly while playing an instrument.

what os/software are you using?
ubuntu studio with a low-latency kernel; pure data (effect rack, video player); sooperlooper (looping station); control (osc android application). all open source projects.

what kind of wood did you use?
okoumé (marine plywood). it’s a light wood and i was able to cut it with an x-acto!

i want more information
sure! the sources of the project (mainly related to the electronics) are here

13 01 2013

PunBB to NodeBB

Here are the notes I took when I transferred the Pure Data forum from PunBB to NodeBB. NodeBB is coded in Node.js and uses Redis or MongoDB.

Install Node.js & Redis

I’m using Redis (all in memory / faster) but you can use MongoDB.

sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update && sudo apt-get dist-upgrade
sudo apt-get install nodejs
sudo apt-get install redis-server

Preparation

  • MySQL dump the PunBB forum
  • tar.gz the attach folder
  • put the PunBB forum in maintenance
  • import the database and run fixPunBB.sql
  • rename and copy the attachment files using attach.php

attach.php

<?php
// rename the PunBB attachments to web-safe filenames (spaces, '#' and '%20'
// become '_', matching the links added to the posts by the export queries below)
// and move them to the folder served to the new forum
$link = mysql_connect('localhost', 'xxx', 'xxx');
mysql_select_db('xxx');
$result = mysql_query("SELECT punbb_attach_2_files.location, replace(replace(replace(punbb_attach_2_files.filename, ' ', '_'), '#', '_'), '%20', '_') FROM punbb_attach_2_files");
while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
    printf("ID: %s FILE: %s\n" , $row[0], $row[1]);
    rename($row[0], "/var/www/oldpunbbattachment/".$row[1]);
}
mysql_close($link);
?>

fixPunBB.sql

ALTER TABLE punbb_attach_2_files ADD INDEX (post_id);
optimize table punbb_attach_2_files;
drop table if exists punbb_posts_tmp;
drop table if exists punbb_posts_tmp_tmp;
-- punbb_posts_tmp_tmp: id of the first post of every topic (these become the topics)
create table punbb_posts_tmp_tmp as (select
punbb_posts.id as id
from punbb_topics, punbb_posts
where punbb_topics.id = punbb_posts.topic_id AND punbb_posts.id IN(SELECT MIN(punbb_posts.id) FROM punbb_posts where punbb_posts.topic_id = punbb_topics.id));
create index punbb_posts_tmp_tmp_idx on punbb_posts_tmp_tmp (id);
-- punbb_posts_tmp: every other post, i.e. the replies (these become the posts)
create table punbb_posts_tmp as (select * from punbb_posts where id not in (select id from punbb_posts_tmp_tmp));
drop table punbb_posts_tmp_tmp;

Export

Based on the work of https://github.com/akhoury/nodebb-plugin-import-ubb:

git clone https://github.com/patricksebastien/nodebb-plugin-import-punbb.git
cd nodebb-plugin-import-punbb && edit export.config.json (database access)

The repo is for a generic export, but here’s what I actually used when exporting the Pure Data forum. Edit lib/export.js and change the 2 sql queries for topics / posts:

// topics
select
punbb_topics.id as _tid,
punbb_topics.forum_id as _cid,
punbb_posts.id as _pid,
punbb_posts.poster_id as _uid,
punbb_topics.num_views as _viewcount,
punbb_topics.subject as _title,
punbb_topics.posted as _timestamp,
punbb_posts.topic_id as _post_tid,
if(punbb_attach_2_files.filename is null, punbb_posts.message, concat(punbb_posts.message, '\n\n[url]http://www.pdpatchrepo.info/hurleur/', replace(replace(replace(punbb_attach_2_files.filename, ' ', '_'), '#', '_'), '%20', '_'), '[/url]')) as _content
from punbb_topics, punbb_posts
left join punbb_attach_2_files
on punbb_attach_2_files.post_id = punbb_posts.id
where punbb_topics.id = punbb_posts.topic_id AND punbb_posts.id IN(SELECT MIN(punbb_posts.id) FROM punbb_posts where punbb_posts.topic_id = punbb_topics.id)

// posts
select
punbb_attach_2_files.filename,
punbb_posts_tmp.id as _pid,
topic_id as _tid,
posted as _timestamp,
if(punbb_attach_2_files.filename is null, punbb_posts_tmp.message, concat(punbb_posts_tmp.message, '\n\n[url]http://www.pdpatchrepo.info/hurleur/', replace(replace(replace(punbb_attach_2_files.filename, ' ', '_'), '#', '_'), '%20', '_'), '[/url]')) as _content,
poster_id as _uid
from punbb_posts_tmp
left join punbb_attach_2_files
on punbb_attach_2_files.post_id = punbb_posts_tmp.id
order by punbb_posts_tmp.posted

Run the export!

node bin/export.js --storage="storage" --config="./export.config.json" --flush

Install NodeBB

You need to use a specific version of NodeBB:

git clone git://github.com/NodeBB/NodeBB.git nodebb && cd nodebb
git checkout 206acab1bfab4be0f9073f7f885b773c5942ead2
npm install
./nodebb setup
edit /etc/redis/redis.conf (requirepass)

Import

Now it’s time to import from PunBB to NodeBB.

npm install nodebb-plugin-import
cd node_modules/nodebb-plugin-import
nano import.config.json (see below)
node import.js --config="./import.config.json" | tee import.log
cd ../../
git checkout master
npm install
./nodebb setup
./nodebb upgrade

import.config.json

{
    "log": "debug",
    "passwordGen": {
        "enabled": true,
        "chars": "{}.-_=+qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890",
        "len": 8
    },
    "redirectTemplatesStrings": {
        "users": null,
        "categories": null,
        "topics": null,
        "posts": null
    },
    "storageDir": "/path/to/storage",
    "convert": "bbcode-to-md",
    "nbb": {
        "setup": {
            "runFlush": true,
            "adminConfig": {
                "admin:username": "user",
                "admin:password": "pwd",
                "admin:password:confirm": "pwd",
                "admin:email": "admin@dns.com"
            },
            "fileConfig": null
        },
        "autoConfirmEmails": true
    }
}

Send email to users

Now you need to tell your users about the new forum and also that they need to change their password. You can use Mandrill and this excellent web-interface: http://akhoury.github.io/pages/mandrill-blast/

rgrep "user-csv" ./node_modules/nodebb-plugin-import/import.log | cut -d "]" -f 2 | sed -e 's/^[ \t]*//' > user.csv

Start NodeBB on boot

I’m using an Upstart script. Be sure to “chown -R nodebb.nodebb /srv/www/yourforumpath/nodebb”.

start on startup
stop on runlevel [016]

respawn

setuid nodebb
setgid nodebb

script
    cd /srv/www/yourforumpath/nodebb
    ./nodebb start
end script

Finish installation

Install some plugins:
youtube, soundcloud, linkchecks, nodebb-plugin-vimeo, twitter, mandrill
npm install nodebb-plugin-bbcode-to-markdown
npm install nodebb-plugin-spam-be-gone

Here are some themes:
npm install nodebb-theme-ifsta-ui
npm install nodebb-theme-rocket
npm install nodebb-theme-cerulean
npm install nodebb-theme-blacknred
npm install nodebb-theme-purplish
npm install nodebb-theme-dark-rectangles

Apache proxy

If you can, use Nginx or Apache 2.4! But if you’re like me, stuck with Apache and a very old version of it, then you need to patch it. The backported patch targets Apache 2.2, and I modified it to work with Apache 2.2.24:

diff -Naur httpd-2.2.24.orig/modules/proxy/config.m4 httpd-2.2.24/modules/proxy/config.m4
--- httpd-2.2.24.orig/modules/proxy/config.m4   2013-04-25 21:08:56.414107928 +0200
+++ httpd-2.2.24/modules/proxy/config.m4    2013-04-25 21:08:58.474145010 +0200
@@ -18,6 +18,7 @@
 proxy_http_objs="mod_proxy_http.lo"
 proxy_scgi_objs="mod_proxy_scgi.lo"
 proxy_ajp_objs="mod_proxy_ajp.lo ajp_header.lo ajp_link.lo ajp_msg.lo ajp_utils.lo"
+proxy_wstunnel_objs="mod_proxy_wstunnel.lo"
 proxy_balancer_objs="mod_proxy_balancer.lo"
 
 case "$host" in
@@ -29,6 +30,7 @@
     proxy_http_objs="$proxy_http_objs mod_proxy.la"
     proxy_scgi_objs="$proxy_scgi_objs mod_proxy.la"
     proxy_ajp_objs="$proxy_ajp_objs mod_proxy.la"
+    proxy_wstunnel_objs="$proxy_wstunnel_objs mod_proxy.la"
     proxy_balancer_objs="$proxy_balancer_objs mod_proxy.la"
     ;;
 esac
@@ -37,6 +39,7 @@
 APACHE_MODULE(proxy_ftp, Apache proxy FTP module, $proxy_ftp_objs, , $proxy_mods_enable)
 APACHE_MODULE(proxy_http, Apache proxy HTTP module, $proxy_http_objs, , $proxy_mods_enable)
 APACHE_MODULE(proxy_scgi, Apache proxy SCGI module, $proxy_scgi_objs, , $proxy_mods_enable)
+APACHE_MODULE(proxy_wstunnel, Apache proxy Websocket Tunnel module, $proxy_wstunnel_objs, , $proxy_mods_enable)
 APACHE_MODULE(proxy_ajp, Apache proxy AJP module, $proxy_ajp_objs, , $proxy_mods_enable)
 APACHE_MODULE(proxy_balancer, Apache proxy BALANCER module, $proxy_balancer_objs, , $proxy_mods_enable)
 
diff -Naur httpd-2.2.24.orig/modules/proxy/mod_proxy.h httpd-2.2.24/modules/proxy/mod_proxy.h
--- httpd-2.2.24.orig/modules/proxy/mod_proxy.h 2013-04-25 21:08:56.414107928 +0200
+++ httpd-2.2.24/modules/proxy/mod_proxy.h  2013-04-25 21:08:58.477478403 +0200
@@ -770,6 +770,46 @@
 ap_proxy_buckets_lifetime_transform(request_rec *r, apr_bucket_brigade *from,
                                         apr_bucket_brigade *to);
 
+/**
+ * Create a HTTP request header brigade,  old_cl_val and old_te_val as required.
+ * @parama p              pool
+ * @param header_brigade  header brigade to use/fill
+ * @param r               request
+ * @param p_conn          proxy connection rec
+ * @param worker          selected worker
+ * @param conf            per-server proxy config
+ * @param uri             uri
+ * @param url             url
+ * @param server_portstr  port as string
+ * @param old_cl_val      stored old content-len val
+ * @param old_te_val      stored old TE val
+ * @return                OK or HTTP_EXPECTATION_FAILED
+ */
+PROXY_DECLARE(int) ap_proxy_create_hdrbrgd(apr_pool_t *p,
+                                           apr_bucket_brigade *header_brigade,
+                                           request_rec *r,
+                                           proxy_conn_rec *p_conn,
+                                           proxy_worker *worker,
+                                           proxy_server_conf *conf,
+                                           apr_uri_t *uri,
+                                           char *url, char *server_portstr,
+                                           char **old_cl_val,
+                                           char **old_te_val);
+
+/**
+ * @param bucket_alloc  bucket allocator
+ * @param r             request
+ * @param p_conn        proxy connection
+ * @param origin        connection rec of origin
+ * @param  bb           brigade to send to origin
+ * @param  flush        flush
+ * @return              status (OK)
+ */
+PROXY_DECLARE(int) ap_proxy_pass_brigade(apr_bucket_alloc_t *bucket_alloc,
+                                         request_rec *r, proxy_conn_rec *p_conn,
+                                         conn_rec *origin, apr_bucket_brigade *bb,
+                                         int flush);
+
 #define PROXY_LBMETHOD "proxylbmethod"
 
 /* The number of dynamic workers that can be added when reconfiguring.
diff -Naur httpd-2.2.24.orig/modules/proxy/mod_proxy_wstunnel.c httpd-2.2.24/modules/proxy/mod_proxy_wstunnel.c
--- httpd-2.2.24.orig/modules/proxy/mod_proxy_wstunnel.c    1970-01-01 01:00:00.000000000 +0100
+++ httpd-2.2.24/modules/proxy/mod_proxy_wstunnel.c 2013-04-25 21:09:09.871016830 +0200
@@ -0,0 +1,400 @@
+/* Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "mod_proxy.h"
+
+module AP_MODULE_DECLARE_DATA proxy_wstunnel_module;
+
+/*
+ * Canonicalise http-like URLs.
+ * scheme is the scheme for the URL
+ * url is the URL starting with the first '/'
+ * def_port is the default port for this scheme.
+ */
+static int proxy_wstunnel_canon(request_rec *r, char *url)
+{
+    char *host, *path, sport[7];
+    char *search = NULL;
+    const char *err;
+    char *scheme;
+    apr_port_t port, def_port;
+
+    /* ap_port_of_scheme() */
+    if (strncasecmp(url, "ws:", 3) == 0) {
+        url += 3;
+        scheme = "ws:";
+        def_port = apr_uri_port_of_scheme("http");
+    }
+    else if (strncasecmp(url, "wss:", 4) == 0) {
+        url += 4;
+        scheme = "wss:";
+        def_port = apr_uri_port_of_scheme("https");
+    }
+    else {
+        return DECLINED;
+    }
+
+    port = def_port;
+    ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, "canonicalising URL %s", url);
+
+    /*
+     * do syntactic check.
+     * We break the URL into host, port, path, search
+     */
+    err = ap_proxy_canon_netloc(r->pool, &url, NULL, NULL, &host, &port);
+    if (err) {
+        ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, "AH02439: " "error parsing URL %s: %s",
+                      url, err);
+        return HTTP_BAD_REQUEST;
+    }
+
+    /*
+     * now parse path/search args, according to rfc1738:
+     * process the path. With proxy-nocanon set (by
+     * mod_proxy) we use the raw, unparsed uri
+     */
+    if (apr_table_get(r->notes, "proxy-nocanon")) {
+        path = url;   /* this is the raw path */
+    }
+    else {
+        path = ap_proxy_canonenc(r->pool, url, strlen(url), enc_path, 0,
+                                 r->proxyreq);
+        search = r->args;
+    }
+    if (path == NULL)
+        return HTTP_BAD_REQUEST;
+
+    apr_snprintf(sport, sizeof(sport), ":%d", port);
+
+    if (ap_strchr_c(host, ':')) {
+        /* if literal IPv6 address */
+        host = apr_pstrcat(r->pool, "[", host, "]", NULL);
+    }
+    r->filename = apr_pstrcat(r->pool, "proxy:", scheme, "//", host, sport,
+                              "/", path, (search) ? "?" : "",
+                              (search) ? search : "", NULL);
+    return OK;
+}
+
+
+static int proxy_wstunnel_transfer(request_rec *r, conn_rec *c_i, conn_rec *c_o,
+                                     apr_bucket_brigade *bb, char *name)
+{
+    int rv;
+#ifdef DEBUGGING
+    apr_off_t len;
+#endif
+
+    do {
+        apr_brigade_cleanup(bb);
+        rv = ap_get_brigade(c_i->input_filters, bb, AP_MODE_READBYTES,
+                            APR_NONBLOCK_READ, AP_IOBUFSIZE);
+        if (rv == APR_SUCCESS) {
+            if (c_o->aborted)
+                return APR_EPIPE;
+            if (APR_BRIGADE_EMPTY(bb))
+                break;
+#ifdef DEBUGGING
+            len = -1;
+            apr_brigade_length(bb, 0, &len);
+            ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, "AH02440: "
+                          "read %" APR_OFF_T_FMT
+                          " bytes from %s", len, name);
+#endif
+            rv = ap_pass_brigade(c_o->output_filters, bb);
+            if (rv == APR_SUCCESS) {
+                ap_fflush(c_o->output_filters, bb);
+            }
+            else {
+                ap_log_rerror(APLOG_MARK, APLOG_ERR, rv, r, "AH02441: "
+                              "error on %s - ap_pass_brigade",
+                              name);
+            }
+        } else if (!APR_STATUS_IS_EAGAIN(rv) && !APR_STATUS_IS_EOF(rv)) {
+            ap_log_rerror(APLOG_MARK, APLOG_DEBUG, rv, r, "AH02442: "
+                          "error on %s - ap_get_brigade",
+                          name);
+        }
+    } while (rv == APR_SUCCESS);
+
+    if (APR_STATUS_IS_EAGAIN(rv)) {
+        rv = APR_SUCCESS;
+    }
+    return rv;
+}
+
+/* Search thru the input filters and remove the reqtimeout one */
+static void remove_reqtimeout(ap_filter_t *next)
+{
+    ap_filter_t *reqto = NULL;
+    ap_filter_rec_t *filter;
+
+    filter = ap_get_input_filter_handle("reqtimeout");
+    if (!filter) {
+        return;
+    }
+
+    while (next) {
+        if (next->frec == filter) {
+            reqto = next;
+            break;
+        }
+        next = next->next;
+    }
+    if (reqto) {
+        ap_remove_input_filter(reqto);
+    }
+}
+
+/*
+ * process the request and write the response.
+ */
+static int ap_proxy_wstunnel_request(apr_pool_t *p, request_rec *r,
+                                proxy_conn_rec *conn,
+                                proxy_worker *worker,
+                                proxy_server_conf *conf,
+                                apr_uri_t *uri,
+                                char *url, char *server_portstr)
+{
+    apr_status_t rv = APR_SUCCESS;
+    apr_pollset_t *pollset;
+    apr_pollfd_t pollfd;
+    const apr_pollfd_t *signalled;
+    apr_int32_t pollcnt, pi;
+    apr_int16_t pollevent;
+    conn_rec *c = r->connection;
+    apr_socket_t *sock = conn->sock;
+    conn_rec *backconn = conn->connection;
+    int client_error = 0;
+    char *buf;
+    apr_bucket_brigade *header_brigade;
+    apr_bucket *e;
+    char *old_cl_val = NULL;
+    char *old_te_val = NULL;
+    apr_bucket_brigade *bb = apr_brigade_create(p, c->bucket_alloc);
+    apr_socket_t *client_socket = ap_get_module_config(c->conn_config, &core_module);
+
+    header_brigade = apr_brigade_create(p, backconn->bucket_alloc);
+
+    ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, "sending request");
+
+    rv = ap_proxy_create_hdrbrgd(p, header_brigade, r, conn,
+                                 worker, conf, uri, url, server_portstr,
+                                 &old_cl_val, &old_te_val);
+    if (rv != OK) {
+        return rv;
+    }
+
+    buf = apr_pstrcat(p, "Upgrade: WebSocket", CRLF, "Connection: Upgrade", CRLF, CRLF, NULL);
+    ap_xlate_proto_to_ascii(buf, strlen(buf));
+    e = apr_bucket_pool_create(buf, strlen(buf), p, c->bucket_alloc);
+    APR_BRIGADE_INSERT_TAIL(header_brigade, e);
+
+    if ((rv = ap_proxy_pass_brigade(c->bucket_alloc, r, conn, backconn,
+                                    header_brigade, 1)) != OK)
+        return rv;
+
+    ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, "setting up poll()");
+
+    if ((rv = apr_pollset_create(&pollset, 2, p, 0)) != APR_SUCCESS) {
+        ap_log_rerror(APLOG_MARK, APLOG_ERR, rv, r, "AH02443: "
+                      "error apr_pollset_create()");
+        return HTTP_INTERNAL_SERVER_ERROR;
+    }
+
+#if 0
+    apr_socket_opt_set(sock, APR_SO_NONBLOCK, 1);
+    apr_socket_opt_set(sock, APR_SO_KEEPALIVE, 1);
+    apr_socket_opt_set(client_socket, APR_SO_NONBLOCK, 1);
+    apr_socket_opt_set(client_socket, APR_SO_KEEPALIVE, 1);
+#endif
+
+    pollfd.p = p;
+    pollfd.desc_type = APR_POLL_SOCKET;
+    pollfd.reqevents = APR_POLLIN;
+    pollfd.desc.s = sock;
+    pollfd.client_data = NULL;
+    apr_pollset_add(pollset, &pollfd);
+
+    pollfd.desc.s = client_socket;
+    apr_pollset_add(pollset, &pollfd);
+
+
+    r->output_filters = c->output_filters;
+    r->proto_output_filters = c->output_filters;
+    r->input_filters = c->input_filters;
+    r->proto_input_filters = c->input_filters;
+
+    remove_reqtimeout(r->input_filters);
+
+    while (1) { /* Infinite loop until error (one side closes the connection) */
+        if ((rv = apr_pollset_poll(pollset, -1, &pollcnt, &signalled))
+            != APR_SUCCESS) {
+            if (APR_STATUS_IS_EINTR(rv)) {
+                continue;
+            }
+            ap_log_rerror(APLOG_MARK, APLOG_ERR, rv, r, "AH02444: " "error apr_poll()");
+            return HTTP_INTERNAL_SERVER_ERROR;
+        }
+        ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, "AH02445: "
+                      "woke from poll(), i=%d", pollcnt);
+
+        for (pi = 0; pi < pollcnt; pi++) {
+            const apr_pollfd_t *cur = &signalled[pi];
+
+            if (cur->desc.s == sock) {
+                pollevent = cur->rtnevents;
+                if (pollevent & APR_POLLIN) {
+                    ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, "AH02446: "
+                                  "sock was readable");
+                    rv = proxy_wstunnel_transfer(r, backconn, c, bb, "sock");
+                    }
+                else if ((pollevent & APR_POLLERR)
+                         || (pollevent & APR_POLLHUP)) {
+                         rv = APR_EPIPE;
+                         ap_log_rerror(APLOG_MARK, APLOG_NOTICE, 0, r, "AH02447: "
+                                       "err/hup on backconn");
+                }
+                if (rv != APR_SUCCESS)
+                    client_error = 1;
+            }
+            else if (cur->desc.s == client_socket) {
+                pollevent = cur->rtnevents;
+                if (pollevent & APR_POLLIN) {
+                    ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, "AH02448: "
+                                  "client was readable");
+                    rv = proxy_wstunnel_transfer(r, c, backconn, bb, "client");
+                }
+            }
+            else {
+                rv = APR_EBADF;
+                ap_log_rerror(APLOG_MARK, APLOG_INFO, 0, r, "AH02449: "
+                              "unknown socket in pollset");
+            }
+
+        }
+        if (rv != APR_SUCCESS) {
+            break;
+        }
+    }
+
+    ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r,
+                  "finished with poll() - cleaning up");
+
+    if (client_error) {
+        return HTTP_INTERNAL_SERVER_ERROR;
+    }
+    return OK;
+}
+
+/*
+ */
+static int proxy_wstunnel_handler(request_rec *r, proxy_worker *worker,
+                             proxy_server_conf *conf,
+                             char *url, const char *proxyname,
+                             apr_port_t proxyport)
+{
+    int status;
+    char server_portstr[32];
+    proxy_conn_rec *backend = NULL;
+    char *scheme;
+    int retry;
+    conn_rec *c = r->connection;
+    apr_pool_t *p = r->pool;
+    apr_uri_t *uri;
+
+    if (strncasecmp(url, "wss:", 4) == 0) {
+        scheme = "WSS";
+    }
+    else if (strncasecmp(url, "ws:", 3) == 0) {
+        scheme = "WS";
+    }
+    else {
+        ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, "AH02450: " "declining URL %s", url);
+        return DECLINED;
+    }
+
+    uri = apr_palloc(p, sizeof(*uri));
+    ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, "AH02451: " "serving URL %s", url);
+
+    /* create space for state information */
+    status = ap_proxy_acquire_connection(scheme, &backend, worker,
+                                         r->server);
+    if (status != OK) {
+        if (backend) {
+            backend->close = 1;
+            ap_proxy_release_connection(scheme, backend, r->server);
+        }
+        return status;
+    }
+
+    backend->is_ssl = 0;
+    backend->close = 0;
+
+    retry = 0;
+    while (retry < 2) {
+        char *locurl = url;
+        /* Step One: Determine Who To Connect To */
+        status = ap_proxy_determine_connection(p, r, conf, worker, backend,
+                                               uri, &locurl, proxyname, proxyport,
+                                               server_portstr,
+                                               sizeof(server_portstr));
+
+        if (status != OK)
+            break;
+
+        /* Step Two: Make the Connection */
+        if (ap_proxy_connect_backend(scheme, backend, worker, r->server)) {
+            ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, "AH02452: "
+                          "failed to make connection to backend: %s",
+                          backend->hostname);
+            status = HTTP_SERVICE_UNAVAILABLE;
+            break;
+        }
+        /* Step Three: Create conn_rec */
+        if (!backend->connection) {
+            if ((status = ap_proxy_connection_create(scheme, backend,
+                                                     c, r->server)) != OK)
+                break;
+         }
+
+        /* Step Three: Process the Request */
+        status = ap_proxy_wstunnel_request(p, r, backend, worker, conf, uri, locurl,
+                                      server_portstr);
+        break;
+    }
+
+    /* Do not close the socket */
+    ap_proxy_release_connection(scheme, backend, r->server);
+    return status;
+}
+
+static void ap_proxy_http_register_hook(apr_pool_t *p)
+{
+    proxy_hook_scheme_handler(proxy_wstunnel_handler, NULL, NULL, APR_HOOK_FIRST);
+    proxy_hook_canon_handler(proxy_wstunnel_canon, NULL, NULL, APR_HOOK_FIRST);
+}
+
+APLOG_USE_MODULE(proxy_wstunnel);
+module AP_MODULE_DECLARE_DATA proxy_wstunnel_module = {
+    STANDARD20_MODULE_STUFF,
+    NULL,                       /* create per-directory config structure */
+    NULL,                       /* merge per-directory config structures */
+    NULL,                       /* create per-server config structure */
+    NULL,                       /* merge per-server config structures */
+    NULL,                       /* command apr_table_t */
+    ap_proxy_http_register_hook /* register hooks */
+};
diff -Naur httpd-2.2.24.orig/modules/proxy/mod_proxy_wstunnel.dsp httpd-2.2.24/modules/proxy/mod_proxy_wstunnel.dsp
--- httpd-2.2.24.orig/modules/proxy/mod_proxy_wstunnel.dsp  1970-01-01 01:00:00.000000000 +0100
+++ httpd-2.2.24/modules/proxy/mod_proxy_wstunnel.dsp   2013-04-25 21:08:58.480811797 +0200
@@ -0,0 +1,123 @@
+# Microsoft Developer Studio Project File - Name="mod_proxy_wstunnel" - Package Owner=<4>
+# Microsoft Developer Studio Generated Build File, Format Version 6.00
+# ** DO NOT EDIT **
+
+# TARGTYPE "Win32 (x86) Dynamic-Link Library" 0x0102
+
+CFG=mod_proxy_wstunnel - Win32 Release
+!MESSAGE This is not a valid makefile. To build this project using NMAKE,
+!MESSAGE use the Export Makefile command and run
+!MESSAGE
+!MESSAGE NMAKE /f "mod_proxy_wstunnel.mak".
+!MESSAGE
+!MESSAGE You can specify a configuration when running NMAKE
+!MESSAGE by defining the macro CFG on the command line. For example:
+!MESSAGE
+!MESSAGE NMAKE /f "mod_proxy_wstunnel.mak" CFG="mod_proxy_wstunnel - Win32 Release"
+!MESSAGE
+!MESSAGE Possible choices for configuration are:
+!MESSAGE
+!MESSAGE "mod_proxy_wstunnel - Win32 Release" (based on "Win32 (x86) Dynamic-Link Library")
+!MESSAGE "mod_proxy_wstunnel - Win32 Debug" (based on "Win32 (x86) Dynamic-Link Library")
+!MESSAGE
+
+# Begin Project
+# PROP AllowPerConfigDependencies 0
+# PROP Scc_ProjName ""
+# PROP Scc_LocalPath ""
+CPP=cl.exe
+MTL=midl.exe
+RSC=rc.exe
+
+!IF  "$(CFG)" == "mod_proxy_wstunnel - Win32 Release"
+
+# PROP BASE Use_MFC 0
+# PROP BASE Use_Debug_Libraries 0
+# PROP BASE Output_Dir "Release"
+# PROP BASE Intermediate_Dir "Release"
+# PROP BASE Target_Dir ""
+# PROP Use_MFC 0
+# PROP Use_Debug_Libraries 0
+# PROP Output_Dir "Release"
+# PROP Intermediate_Dir "Release"
+# PROP Ignore_Export_Lib 0
+# PROP Target_Dir ""
+# ADD BASE CPP /nologo /MD /W3 /O2 /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /FD /c
+# ADD CPP /nologo /MD /W3 /O2 /Oy- /Zi /I "../../include" /I "../../srclib/apr/include" /I "../../srclib/apr-util/include" /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /Fd"Release\mod_proxy_wstunnel_src" /FD /c
+# ADD BASE MTL /nologo /D "NDEBUG" /win32
+# ADD MTL /nologo /D "NDEBUG" /mktyplib203 /win32
+# ADD BASE RSC /l 0x809 /d "NDEBUG"
+# ADD RSC /l 0x409 /fo"Release/mod_proxy_wstunnel.res" /i "../../include" /i "../../srclib/apr/include" /d "NDEBUG" /d BIN_NAME="mod_proxy_wstunnel.so" /d LONG_NAME="proxy_wstunnel_module for Apache"
+BSC32=bscmake.exe
+# ADD BASE BSC32 /nologo
+# ADD BSC32 /nologo
+LINK32=link.exe
+# ADD BASE LINK32 kernel32.lib ws2_32.lib mswsock.lib /nologo /subsystem:windows /dll /out:".\Release\mod_proxy_wstunnel.so" /base:@..\..\os\win32\BaseAddr.ref,mod_proxy_wstunnel.so
+# ADD LINK32 kernel32.lib ws2_32.lib mswsock.lib /nologo /subsystem:windows /dll /incremental:no /debug /out:".\Release\mod_proxy_wstunnel.so" /base:@..\..\os\win32\BaseAddr.ref,mod_proxy_wstunnel.so /opt:ref
+# Begin Special Build Tool
+TargetPath=.\Release\mod_proxy_wstunnel.so
+SOURCE="$(InputPath)"
+PostBuild_Desc=Embed .manifest
+PostBuild_Cmds=if exist $(TargetPath).manifest mt.exe -manifest $(TargetPath).manifest -outputresource:$(TargetPath);2
+# End Special Build Tool
+
+!ELSEIF  "$(CFG)" == "mod_proxy_wstunnel - Win32 Debug"
+
+# PROP BASE Use_MFC 0
+# PROP BASE Use_Debug_Libraries 1
+# PROP BASE Output_Dir "Debug"
+# PROP BASE Intermediate_Dir "Debug"
+# PROP BASE Target_Dir ""
+# PROP Use_MFC 0
+# PROP Use_Debug_Libraries 1
+# PROP Output_Dir "Debug"
+# PROP Intermediate_Dir "Debug"
+# PROP Ignore_Export_Lib 0
+# PROP Target_Dir ""
+# ADD BASE CPP /nologo /MDd /W3 /EHsc /Zi /Od /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /FD /c
+# ADD CPP /nologo /MDd /W3 /EHsc /Zi /Od /I "../../include" /I "../../srclib/apr/include" /I "../../srclib/apr-util/include" /D "_DEBUG" /D "WIN32" /D "_WINDOWS" /Fd"Debug\mod_proxy_wstunnel_src" /FD /c
+# ADD BASE MTL /nologo /D "_DEBUG" /win32
+# ADD MTL /nologo /D "_DEBUG" /mktyplib203 /win32
+# ADD BASE RSC /l 0x809 /d "_DEBUG"
+# ADD RSC /l 0x409 /fo"Debug/mod_proxy_wstunnel.res" /i "../../include" /i "../../srclib/apr/include" /d "_DEBUG" /d BIN_NAME="mod_proxy_wstunnel.so" /d LONG_NAME="proxy_wstunnel_module for Apache"
+BSC32=bscmake.exe
+# ADD BASE BSC32 /nologo
+# ADD BSC32 /nologo
+LINK32=link.exe
+# ADD BASE LINK32 kernel32.lib ws2_32.lib mswsock.lib /nologo /subsystem:windows /dll /incremental:no /debug /out:".\Debug\mod_proxy_wstunnel.so" /base:@..\..\os\win32\BaseAddr.ref,mod_proxy_wstunnel.so
+# ADD LINK32 kernel32.lib ws2_32.lib mswsock.lib /nologo /subsystem:windows /dll /incremental:no /debug /out:".\Debug\mod_proxy_wstunnel.so" /base:@..\..\os\win32\BaseAddr.ref,mod_proxy_wstunnel.so
+# Begin Special Build Tool
+TargetPath=.\Debug\mod_proxy_wstunnel.so
+SOURCE="$(InputPath)"
+PostBuild_Desc=Embed .manifest
+PostBuild_Cmds=if exist $(TargetPath).manifest mt.exe -manifest $(TargetPath).manifest -outputresource:$(TargetPath);2
+# End Special Build Tool
+
+!ENDIF
+
+# Begin Target
+
+# Name "mod_proxy_wstunnel - Win32 Release"
+# Name "mod_proxy_wstunnel - Win32 Debug"
+# Begin Group "Source Files"
+
+# PROP Default_Filter "cpp;c;cxx;rc;def;r;odl;hpj;bat;for;f90"
+# Begin Source File
+
+SOURCE=.\mod_proxy_wstunnel.c
+# End Source File
+# End Group
+# Begin Group "Header Files"
+
+# PROP Default_Filter ".h"
+# Begin Source File
+
+SOURCE=.\mod_proxy.h
+# End Source File
+# End Group
+# Begin Source File
+
+SOURCE=..\..\build\win32\httpd.rc
+# End Source File
+# End Target
+# End Project
diff -Naur httpd-2.2.24.orig/modules/proxy/NWGNUmakefile httpd-2.2.24/modules/proxy/NWGNUmakefile
--- httpd-2.2.24.orig/modules/proxy/NWGNUmakefile   2013-04-25 21:08:56.417441322 +0200
+++ httpd-2.2.24/modules/proxy/NWGNUmakefile    2013-04-25 21:08:58.480811797 +0200
@@ -159,6 +159,7 @@
    $(OBJDIR)/proxybalancer.nlm \
    $(OBJDIR)/proxyajp.nlm \
    $(OBJDIR)/proxyscgi.nlm \
+   $(OBJDIR)/proxywstunnel.nlm \
    $(EOLIST)
 
 #
diff -Naur httpd-2.2.24.orig/modules/proxy/NWGNUproxywstunnel httpd-2.2.24/modules/proxy/NWGNUproxywstunnel
--- httpd-2.2.24.orig/modules/proxy/NWGNUproxywstunnel  1970-01-01 01:00:00.000000000 +0100
+++ httpd-2.2.24/modules/proxy/NWGNUproxywstunnel   2013-04-25 21:08:58.480811797 +0200
@@ -0,0 +1,250 @@
+#
+# Make sure all needed macro's are defined
+#
+
+#
+# Get the 'head' of the build environment if necessary.  This includes default
+# targets and paths to tools
+#
+
+ifndef EnvironmentDefined
+include $(AP_WORK)/build/NWGNUhead.inc
+endif
+
+#
+# These directories will be at the beginning of the include list, followed by
+# INCDIRS
+#
+XINCDIRS   += \
+           $(APR)/include \
+           $(APRUTIL)/include \
+           $(SRC)/include \
+           $(STDMOD)/http \
+           $(STDMOD)/proxy \
+           $(NWOS) \
+           $(EOLIST)
+
+#
+# These flags will come after CFLAGS
+#
+XCFLAGS        += \
+           $(EOLIST)
+
+#
+# These defines will come after DEFINES
+#
+XDEFINES   += \
+           $(EOLIST)
+
+#
+# These flags will be added to the link.opt file
+#
+XLFLAGS        += \
+           $(EOLIST)
+
+#
+# These values will be appended to the correct variables based on the value of
+# RELEASE
+#
+ifeq "$(RELEASE)" "debug"
+XINCDIRS   += \
+           $(EOLIST)
+
+XCFLAGS        += \
+           $(EOLIST)
+
+XDEFINES   += \
+           $(EOLIST)
+
+XLFLAGS        += \
+           $(EOLIST)
+endif
+
+ifeq "$(RELEASE)" "noopt"
+XINCDIRS   += \
+           $(EOLIST)
+
+XCFLAGS        += \
+           $(EOLIST)
+
+XDEFINES   += \
+           $(EOLIST)
+
+XLFLAGS        += \
+           $(EOLIST)
+endif
+
+ifeq "$(RELEASE)" "release"
+XINCDIRS   += \
+           $(EOLIST)
+
+XCFLAGS        += \
+           $(EOLIST)
+
+XDEFINES   += \
+           $(EOLIST)
+
+XLFLAGS        += \
+           $(EOLIST)
+endif
+
+#
+# These are used by the link target if an NLM is being generated
+# This is used by the link 'name' directive to name the nlm.  If left blank
+# TARGET_nlm (see below) will be used.
+#
+NLM_NAME   = proxywstunnel
+
+#
+# This is used by the link '-desc ' directive.
+# If left blank, NLM_NAME will be used.
+#
+NLM_DESCRIPTION    = Apache $(VERSION_STR) Proxy Web Socket Tunnel Module
+
+#
+# This is used by the '-threadname' directive.  If left blank,
+# NLM_NAME Thread will be used.
+#
+NLM_THREAD_NAME    = Prxy WbSkt Module
+
+#
+# If this is specified, it will override VERSION value in
+# $(AP_WORK)/build/NWGNUenvironment.inc
+#
+NLM_VERSION    =
+
+#
+# If this is specified, it will override the default of 64K
+#
+NLM_STACK_SIZE = 8192
+
+
+#
+# If this is specified it will be used by the link '-entry' directive
+#
+NLM_ENTRY_SYM  =
+
+#
+# If this is specified it will be used by the link '-exit' directive
+#
+NLM_EXIT_SYM   =
+
+#
+# If this is specified it will be used by the link '-check' directive
+#
+NLM_CHECK_SYM  =
+
+#
+# If these are specified it will be used by the link '-flags' directive
+#
+NLM_FLAGS  =
+
+#
+# If this is specified it will be linked in with the XDCData option in the def
+# file instead of the default of $(NWOS)/apache.xdc.  XDCData can be disabled
+# by setting APACHE_UNIPROC in the environment
+#
+XDCDATA        =
+
+#
+# If there is an NLM target, put it here
+#
+TARGET_nlm = $(OBJDIR)/$(NLM_NAME).nlm
+
+#
+# If there is an LIB target, put it here
+#
+TARGET_lib =
+
+#
+# These are the OBJ files needed to create the NLM target above.
+# Paths must all use the '/' character
+#
+FILES_nlm_objs = \
+   $(OBJDIR)/mod_proxy_wstunnel.o \
+   $(EOLIST)
+
+#
+# These are the LIB files needed to create the NLM target above.
+# These will be added as a library command in the link.opt file.
+#
+FILES_nlm_libs = \
+   $(PRELUDE) \
+   $(EOLIST)
+
+#
+# These are the modules that the above NLM target depends on to load.
+# These will be added as a module command in the link.opt file.
+#
+FILES_nlm_modules = \
+   libc \
+   aprlib \
+   proxy \
+   $(EOLIST)
+
+#
+# If the nlm has a msg file, put it's path here
+#
+FILE_nlm_msg =
+
+#
+# If the nlm has a hlp file put it's path here
+#
+FILE_nlm_hlp =
+
+#
+# If this is specified, it will override $(NWOS)\copyright.txt.
+#
+FILE_nlm_copyright =
+
+#
+# Any additional imports go here
+#
+FILES_nlm_Ximports = \
+   @libc.imp \
+   @aprlib.imp \
+   @httpd.imp \
+   @$(OBJDIR)/mod_proxy.imp \
+   $(EOLIST)
+
+#
+# Any symbols exported to here
+#
+FILES_nlm_exports = \
+   proxy_wstunnel_module \
+   $(EOLIST)
+
+#
+# These are the OBJ files needed to create the LIB target above.
+# Paths must all use the '/' character
+#
+FILES_lib_objs = \
+   $(EOLIST)
+
+#
+# implement targets and dependancies (leave this section alone)
+#
+
+libs :: $(OBJDIR) $(TARGET_lib)
+
+nlms :: libs $(TARGET_nlm)
+
+#
+# Updated this target to create necessary directories and copy files to the
+# correct place.  (See $(AP_WORK)/build/NWGNUhead.inc for examples)
+#
+install :: nlms FORCE
+
+#
+# Any specialized rules here
+#
+
+vpath %.c balancers
+#
+# Include the 'tail' makefile that has targets that depend on variables defined
+# in this makefile
+#
+
+include $(APBUILD)/NWGNUtail.inc
+
+
diff -Naur httpd-2.2.24.orig/modules/proxy/proxy_util.c httpd-2.2.24/modules/proxy/proxy_util.c
--- httpd-2.2.24.orig/modules/proxy/proxy_util.c    2013-04-25 21:08:56.417441322 +0200
+++ httpd-2.2.24/modules/proxy/proxy_util.c 2013-04-25 21:08:58.484145190 +0200
@@ -2680,3 +2680,329 @@
     }
     return rv;
 }
+
+/* Clear all connection-based headers from the incoming headers table */
+typedef struct header_dptr {
+    apr_pool_t *pool;
+    apr_table_t *table;
+    apr_time_t time;
+} header_dptr;
+
+static int clear_conn_headers(void *data, const char *key, const char *val)
+{
+    apr_table_t *headers = ((header_dptr*)data)->table;
+    apr_pool_t *pool = ((header_dptr*)data)->pool;
+    const char *name;
+    char *next = apr_pstrdup(pool, val);
+    while (*next) {
+        name = next;
+        while (*next && !apr_isspace(*next) && (*next != ',')) {
+            ++next;
+        }
+        while (*next && (apr_isspace(*next) || (*next == ','))) {
+            *next++ = '\0';
+        }
+        apr_table_unset(headers, name);
+    }
+    return 1;
+}
+
+static void proxy_clear_connection(apr_pool_t *p, apr_table_t *headers)
+{
+    header_dptr x;
+    x.pool = p;
+    x.table = headers;
+    apr_table_unset(headers, "Proxy-Connection");
+    apr_table_do(clear_conn_headers, &x, headers, "Connection", NULL);
+    apr_table_unset(headers, "Connection");
+}
+
+PROXY_DECLARE(int) ap_proxy_create_hdrbrgd(apr_pool_t *p,
+                                            apr_bucket_brigade *header_brigade,
+                                            request_rec *r,
+                                            proxy_conn_rec *p_conn,
+                                            proxy_worker *worker,
+                                            proxy_server_conf *conf,
+                                            apr_uri_t *uri,
+                                            char *url, char *server_portstr,
+                                            char **old_cl_val,
+                                            char **old_te_val)
+{
+    conn_rec *c = r->connection;
+    int counter;
+    char *buf;
+    const apr_array_header_t *headers_in_array;
+    const apr_table_entry_t *headers_in;
+    apr_table_t *headers_in_copy;
+    apr_bucket *e;
+    int do_100_continue;
+    conn_rec *origin = p_conn->connection;
+    proxy_dir_conf *dconf = ap_get_module_config(r->per_dir_config, &proxy_module);
+
+    /*
+     * To be compliant, we only use 100-Continue for requests with bodies.
+     * We also make sure we won't be talking HTTP/1.0 as well.
+     */
+    do_100_continue = (worker->ping_timeout_set
+                       && !r->header_only
+                       && (apr_table_get(r->headers_in, "Content-Length")
+                          || apr_table_get(r->headers_in, "Transfer-Encoding"))
+                       && (PROXYREQ_REVERSE == r->proxyreq)
+                       && !(apr_table_get(r->subprocess_env, "force-proxy-request-1.0")));
+
+    if (apr_table_get(r->subprocess_env, "force-proxy-request-1.0")) {
+        /*
+         * According to RFC 2616 8.2.3 we are not allowed to forward an
+         * Expect: 100-continue to an HTTP/1.0 server. Instead we MUST return
+         * a HTTP_EXPECTATION_FAILED
+         */
+        if (r->expecting_100) {
+            return HTTP_EXPECTATION_FAILED;
+        }
+        buf = apr_pstrcat(p, r->method, " ", url, " HTTP/1.0" CRLF, NULL);
+        p_conn->close = 1;
+    } else {
+        buf = apr_pstrcat(p, r->method, " ", url, " HTTP/1.1" CRLF, NULL);
+    }
+    if (apr_table_get(r->subprocess_env, "proxy-nokeepalive")) {
+        origin->keepalive = AP_CONN_CLOSE;
+        p_conn->close = 1;
+    }
+    ap_xlate_proto_to_ascii(buf, strlen(buf));
+    e = apr_bucket_pool_create(buf, strlen(buf), p, c->bucket_alloc);
+    APR_BRIGADE_INSERT_TAIL(header_brigade, e);
+    if (conf->preserve_host == 0) {
+        if (ap_strchr_c(uri->hostname, ':')) { /* if literal IPv6 address */
+            if (uri->port_str && uri->port != DEFAULT_HTTP_PORT) {
+                buf = apr_pstrcat(p, "Host: [", uri->hostname, "]:",
+                                  uri->port_str, CRLF, NULL);
+            } else {
+                buf = apr_pstrcat(p, "Host: [", uri->hostname, "]", CRLF, NULL);
+            }
+        } else {
+            if (uri->port_str && uri->port != DEFAULT_HTTP_PORT) {
+                buf = apr_pstrcat(p, "Host: ", uri->hostname, ":",
+                                  uri->port_str, CRLF, NULL);
+            } else {
+                buf = apr_pstrcat(p, "Host: ", uri->hostname, CRLF, NULL);
+            }
+        }
+    }
+    else {
+        /* don't want to use r->hostname, as the incoming header might have a
+         * port attached
+         */
+        const char* hostname = apr_table_get(r->headers_in,"Host");
+        if (!hostname) {
+            hostname =  r->server->server_hostname;
+            ap_log_rerror(APLOG_MARK, APLOG_WARNING, 0, r, "AH01092: "
+                          "no HTTP 0.9 request (with no host line) "
+                          "on incoming request and preserve host set "
+                          "forcing hostname to be %s for uri %s",
+                          hostname, r->uri);
+        }
+        buf = apr_pstrcat(p, "Host: ", hostname, CRLF, NULL);
+    }
+    ap_xlate_proto_to_ascii(buf, strlen(buf));
+    e = apr_bucket_pool_create(buf, strlen(buf), p, c->bucket_alloc);
+    APR_BRIGADE_INSERT_TAIL(header_brigade, e);
+
+    /* handle Via */
+    if (conf->viaopt == via_block) {
+        /* Block all outgoing Via: headers */
+        apr_table_unset(r->headers_in, "Via");
+    } else if (conf->viaopt != via_off) {
+        const char *server_name = ap_get_server_name(r);
+        /* If USE_CANONICAL_NAME_OFF was configured for the proxy virtual host,
+         * then the server name returned by ap_get_server_name() is the
+         * origin server name (which does make too much sense with Via: headers)
+         * so we use the proxy vhost's name instead.
+         */
+        if (server_name == r->hostname)
+            server_name = r->server->server_hostname;
+        /* Create a "Via:" request header entry and merge it */
+        /* Generate outgoing Via: header with/without server comment: */
+        apr_table_mergen(r->headers_in, "Via",
+                         (conf->viaopt == via_full)
+                         ? apr_psprintf(p, "%d.%d %s%s (%s)",
+                                        HTTP_VERSION_MAJOR(r->proto_num),
+                                        HTTP_VERSION_MINOR(r->proto_num),
+                                        server_name, server_portstr,
+                                        AP_SERVER_BASEVERSION)
+                         : apr_psprintf(p, "%d.%d %s%s",
+                                        HTTP_VERSION_MAJOR(r->proto_num),
+                                        HTTP_VERSION_MINOR(r->proto_num),
+                                        server_name, server_portstr)
+                         );
+    }
+
+    /* Use HTTP/1.1 100-Continue as quick "HTTP ping" test
+     * to backend
+     */
+    if (do_100_continue) {
+        apr_table_mergen(r->headers_in, "Expect", "100-Continue");
+        r->expecting_100 = 1;
+    }
+
+    /* X-Forwarded-*: handling
+     *
+     * XXX Privacy Note:
+     * -----------------
+     *
+     * These request headers are only really useful when the mod_proxy
+     * is used in a reverse proxy configuration, so that useful info
+     * about the client can be passed through the reverse proxy and on
+     * to the backend server, which may require the information to
+     * function properly.
+     *
+     * In a forward proxy situation, these options are a potential
+     * privacy violation, as information about clients behind the proxy
+     * are revealed to arbitrary servers out there on the internet.
+     *
+     * The HTTP/1.1 Via: header is designed for passing client
+     * information through proxies to a server, and should be used in
+     * a forward proxy configuation instead of X-Forwarded-*. See the
+     * ProxyVia option for details.
+     */
+    if (PROXYREQ_REVERSE == r->proxyreq) {
+        const char *buf;
+
+        /* Add X-Forwarded-For: so that the upstream has a chance to
+         * determine, where the original request came from.
+         */
+        apr_table_mergen(r->headers_in, "X-Forwarded-For",
+                         c->remote_ip);
+
+        /* Add X-Forwarded-Host: so that upstream knows what the
+         * original request hostname was.
+         */
+        if ((buf = apr_table_get(r->headers_in, "Host"))) {
+            apr_table_mergen(r->headers_in, "X-Forwarded-Host", buf);
+        }
+
+        /* Add X-Forwarded-Server: so that upstream knows what the
+         * name of this proxy server is (if there are more than one)
+         * XXX: This duplicates Via: - do we strictly need it?
+         */
+        apr_table_mergen(r->headers_in, "X-Forwarded-Server",
+                         r->server->server_hostname);
+    }
+
+    proxy_run_fixups(r);
+    /*
+     * Make a copy of the headers_in table before clearing the connection
+     * headers as we need the connection headers later in the http output
+     * filter to prepare the correct response headers.
+     *
+     * Note: We need to take r->pool for apr_table_copy as the key / value
+     * pairs in r->headers_in have been created out of r->pool and
+     * p might be (and actually is) a longer living pool.
+     * This would trigger the bad pool ancestry abort in apr_table_copy if
+     * apr is compiled with APR_POOL_DEBUG.
+     */
+    headers_in_copy = apr_table_copy(r->pool, r->headers_in);
+    proxy_clear_connection(p, headers_in_copy);
+    /* send request headers */
+    headers_in_array = apr_table_elts(headers_in_copy);
+    headers_in = (const apr_table_entry_t *) headers_in_array->elts;
+    for (counter = 0; counter < headers_in_array->nelts; counter++) {
+        if (headers_in[counter].key == NULL
+            || headers_in[counter].val == NULL
+
+            /* Already sent */
+            || !strcasecmp(headers_in[counter].key, "Host")
+
+            /* Clear out hop-by-hop request headers not to send
+             * RFC2616 13.5.1 says we should strip these headers
+             */
+            || !strcasecmp(headers_in[counter].key, "Keep-Alive")
+            || !strcasecmp(headers_in[counter].key, "TE")
+            || !strcasecmp(headers_in[counter].key, "Trailer")
+            || !strcasecmp(headers_in[counter].key, "Upgrade")
+
+            ) {
+            continue;
+        }
+        /* Do we want to strip Proxy-Authorization ?
+         * If we haven't used it, then NO
+         * If we have used it then MAYBE: RFC2616 says we MAY propagate it.
+         * So let's make it configurable by env.
+         */
+        if (!strcasecmp(headers_in[counter].key,"Proxy-Authorization")) {
+            if (r->user != NULL) { /* we've authenticated */
+                if (!apr_table_get(r->subprocess_env, "Proxy-Chain-Auth")) {
+                    continue;
+                }
+            }
+        }
+
+        /* Skip Transfer-Encoding and Content-Length for now.
+         */
+        if (!strcasecmp(headers_in[counter].key, "Transfer-Encoding")) {
+            *old_te_val = headers_in[counter].val;
+            continue;
+        }
+        if (!strcasecmp(headers_in[counter].key, "Content-Length")) {
+            *old_cl_val = headers_in[counter].val;
+            continue;
+        }
+
+        /* for sub-requests, ignore freshness/expiry headers */
+        if (r->main) {
+            if (    !strcasecmp(headers_in[counter].key, "If-Match")
+                || !strcasecmp(headers_in[counter].key, "If-Modified-Since")
+                || !strcasecmp(headers_in[counter].key, "If-Range")
+                || !strcasecmp(headers_in[counter].key, "If-Unmodified-Since")
+                || !strcasecmp(headers_in[counter].key, "If-None-Match")) {
+                continue;
+            }
+        }
+
+        buf = apr_pstrcat(p, headers_in[counter].key, ": ",
+                          headers_in[counter].val, CRLF,
+                          NULL);
+        ap_xlate_proto_to_ascii(buf, strlen(buf));
+        e = apr_bucket_pool_create(buf, strlen(buf), p, c->bucket_alloc);
+        APR_BRIGADE_INSERT_TAIL(header_brigade, e);
+    }
+    return OK;
+}
+
+PROXY_DECLARE(int) ap_proxy_pass_brigade(apr_bucket_alloc_t *bucket_alloc,
+                                         request_rec *r, proxy_conn_rec *p_conn,
+                                         conn_rec *origin, apr_bucket_brigade *bb,
+                                         int flush)
+{
+    apr_status_t status;
+    apr_off_t transferred;
+
+    if (flush) {
+        apr_bucket *e = apr_bucket_flush_create(bucket_alloc);
+        APR_BRIGADE_INSERT_TAIL(bb, e);
+    }
+    apr_brigade_length(bb, 0, &transferred);
+    if (transferred != -1)
+        p_conn->worker->s->transferred += transferred;
+    status = ap_pass_brigade(origin->output_filters, bb);
+    if (status != APR_SUCCESS) {
+        ap_log_rerror(APLOG_MARK, APLOG_ERR, status, r, "AH01084: "
+                      "pass request body failed to %pI (%s)",
+                      p_conn->addr, p_conn->hostname);
+        if (origin->aborted) {
+            const char *ssl_note;
+
+            if (((ssl_note = apr_table_get(origin->notes, "SSL_connect_rv"))
+                 != NULL) && (strcmp(ssl_note, "err") == 0)) {
+                return ap_proxyerror(r, HTTP_INTERNAL_SERVER_ERROR,
+                                     "Error during SSL Handshake with"
+                                     " remote server");
+            }
+            return APR_STATUS_IS_TIMEUP(status) ? HTTP_GATEWAY_TIME_OUT : HTTP_BAD_GATEWAY;
+        }
+        else {
+            return HTTP_BAD_REQUEST;
+        }
+    }
+    apr_brigade_cleanup(bb);
+    return OK;
+}

See the documentation for the actual process.

01 01 2013