dedicated to cloud & apache to nginx

When I created 11h11.com in 1999 (yes, in the 20th century), I had specific needs that made co-location hosting not an option. I had to learn how to manage a dedicated server running apache, postgresql, php, postfix, vsftpd, etc.

Fifteen years later, I finally took the time to port this server to the SSD cloud for a fraction of the price. I also opted for nginx instead of apache. Here are my notes; feel free to poke me if you have any questions.

SECURITY FIRST

When you spin up a new server, the first thing to do is secure it. The most basic steps are:

  • add a regular user
  • tweak sshd_config (no root login)
  • install fail2ban
  • create iptables rules (example below)
*filter
#  Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT -d 127.0.0.0/8 -j REJECT
#  Accept all established inbound connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
#  Allow all outbound traffic - you can modify this to only allow certain traffic
-A OUTPUT -j ACCEPT
#  Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
#  Allow SMTP (mail)
-A INPUT -p tcp --dport 25 -j ACCEPT
#  Allow SSH connections
-A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
#  Allow ping
-A INPUT -p icmp -j ACCEPT
#  Log iptables denied calls
#  -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
#  Drop all other inbound - default deny unless explicitly allowed policy
-A INPUT -j DROP
-A FORWARD -j DROP
COMMIT
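
To apply the rules now and have them reloaded at boot, one common approach on Ubuntu is a small if-pre-up hook (the file path below is just an assumption, pick your own):

# save the rules above to a file, then load them
iptables-restore < /etc/iptables.firewall.rules

# /etc/network/if-pre-up.d/firewall (make it executable with chmod +x)
#!/bin/sh
/sbin/iptables-restore < /etc/iptables.firewall.rules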

You should also set /etc/hostname and /etc/hosts, run dpkg-reconfigure tzdata, and configure the reverse DNS of your IP. After installing all your services and before going to production, it is a good idea to scan your server for vulnerabilities; for that you can use a “free PCI scan service”.
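
For reference, a minimal sketch (hostname, domain and IP are placeholders):

echo "myserver" > /etc/hostname
hostname -F /etc/hostname
# /etc/hosts should map your public IP to the FQDN, e.g.:
# 203.0.113.10   myserver.example.com   myserver
dpkg-reconfigure tzdata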

INSTALL NGINX STABLE

I am using the official nginx repository to install the stable version of nginx on Ubuntu 14.04 LTS. You can also compile it from source, but then you will need to keep it updated manually. A quick sanity check after the install is shown below the list.

  • wget http://nginx.org/keys/nginx_signing.key
  • sudo apt-key add nginx_signing.key
  • nano /etc/apt/sources.list
    deb http://nginx.org/packages/ubuntu/ trusty nginx
    deb-src http://nginx.org/packages/ubuntu/ trusty nginx
  • apt-get update && apt-get install nginx
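
A quick sanity check once the package is in (assuming the default init script):

  nginx -v                     # installed version
  service nginx start
  curl -I http://localhost/    # should return nginx response headers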

INSTALL PHP / MYSQL

  • apt-get install php5-cli php5-cgi php5-fpm
  • apt-get install mysql-server php5-mysql
  • apt-get install php5-mcrypt php5-curl php5-gd php5-imagick php5-intl
  • php5enmod mcrypt gd curl imagick intl
  • apt-get install php-apc
  • mysql_secure_installation
  • service php5-fpm restart
  • curl -sS https://getcomposer.org/installer | php
  • mv composer.phar /usr/local/bin/composer
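
Before configuring anything, a quick check that the pieces are wired up (nothing exhaustive):

  php -v                       # CLI version
  php -m | grep -E 'mcrypt|curl|gd|imagick|intl'
  php5-fpm -t                  # test the FPM configuration
  mysql -u root -p -e "SELECT VERSION();"
  composer --version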
CONFIGURE

      Depending on your instance capacity (mostly RAM / CPU), you should tweak the nginx, php and mysql settings. Do not copy the values below blindly; they are only a reference for a single-core, 1 GB RAM instance. You should also set date.timezone in php.ini. Once you are done editing, test and reload the services (see the snippet after this list).

      • nano /etc/nginx/nginx.conf
        user www-data;
        worker_processes  1; # nb of cpu
         
        events {
            worker_connections  1024; # use this ulimit -n to find out
        }
         
        http {
            client_body_buffer_size 10K;
            client_header_buffer_size 1k;
            client_max_body_size 128m; # should also be set in php.ini (upload_max_filesize)
            large_client_header_buffers 2 1k;
         
            gzip on; # more cpu, less bandwidth - test: http://www.gidnetwork.com/tools/gzip-test.php
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";
         
            server {
            ...
                location ~* ^.+\.(css|js|jpg|jpeg|gif|png|ico|gz|svg|svgz|ttf|otf|woff|eot|mp4|ogg|ogv|webm)$ {
                    expires max;
                    access_log off;
                }
            ...
            }
        }
      • nano /etc/php5/fpm/pool.d/www.conf
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660 ; 0666 also works but is not recommended
      • You can use this script to tune Mysql: http://mysqltuner.com
        Or do it manually: sudo nano /etc/mysql/my.cnf

        max_connections = 75
        key_buffer = 32M
        max_allowed_packet = 1M
        thread_stack = 128K
        table_cache = 32
      • sudo nano /etc/php5/fpm/php.ini
        error_log = /var/log/php/error.log
        upload_max_filesize = 32M
        post_max_size = 32M
        memory_limit = 128M
      • Install Alternative PHP Cache
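
      After tweaking the files above, test and reload everything (a minimal sketch):

      nginx -t && service nginx reload
      php5-fpm -t && service php5-fpm restart
      service mysql restart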

      STRESS TEST / SPEEDTEST

      You can now stress test your server using cloud-based services (free for 10k clients) or, better, use Apache JMeter. This will give you an idea of how well your server performs. Also make sure you speed-test your website: http://www.webpagetest.org/
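
      If you go the JMeter route, a headless run looks roughly like this (the .jmx test plan is something you build beforehand in the GUI; the file names are placeholders):

      jmeter -n -t mytestplan.jmx -l results.jtl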

      To measure the bidirectional bandwidth of your server, you can install this command-line client:

      sudo apt-get install python-pip
      sudo pip install speedtest-cli
      speedtest-cli                # picks the best server based on ping
      speedtest-cli --list         # list servers (geographically nearest first)
      speedtest-cli --server 911   # test against a specific server id from the list

      NGINX SERVER{}

      There’s no support for .htaccess in nginx; everything is configured in the server block. You can use this online tool to convert your apache rewrite rules to nginx (a minimal example below).
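
      For example, the classic front-controller rewrite (a generic example, not taken from any particular site) translates like this:

      # Apache (.htaccess)
      RewriteEngine On
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
       
      # nginx (inside the server block)
      location / {
          try_files $uri $uri/ /index.php?q=$uri&$args;
      }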

      • WordPress website
        chown -R www-data:www-data yourwordpresspath/

        server {
            server_name www.yourwordpressdns.com yourwordpressdns.com;
            root /srv/www/yourwordpresspath/public_html;
            index index.php;
         
            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }
         
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                include /etc/nginx/fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

        Be sure to use a cache plug-in.

      • Kohana 2.4
        server {
            server_name www.kohana2_4.com kohana2_4.com;
            root /srv/www/kohana2_4/public_html;
            index index.php;
         
            location / {
                try_files $uri $uri/ @kohana;
            }
            location ~* \.php$ {
                try_files $uri $uri/ @kohana;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
            location ~* \.html$ {
                try_files $uri $uri/ @kohana;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
            location @kohana {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            }
        }
      • Nodebb proxy
        server {
            listen 80;
         
            server_name yourforum.yourwebsite.com;
         
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
         
                proxy_pass http://127.0.0.1:4567/;
                proxy_redirect off;
         
                # Socket.IO Support
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
            }
        }

      MAINTENANCE

      I find that most hosting providers are too pricey for backup solutions. I am using a free service (you name it) that gives me 25 GB of storage; just install their headless client. This solution has a drawback: it will use your bandwidth. Here are two scripts to back up your websites and mysql databases (drop them in /etc/cron.daily; see the note after the scripts):

      Mysql:

      #!/bin/sh
      # Daily MySQL dump; keeps one week of compressed backups
      SAUV=/home/backup/mysql/
      TODAY=$(date +%Y%m%d)
      LASTWEEK=$(date --date '1 week ago' +%Y%m%d)
      /usr/bin/mysqldump --user=YOURUSER --password=YOURPASS --lock-all-tables --all-databases > ${SAUV}${TODAY}_mysql.sql
      gzip ${SAUV}${TODAY}_mysql.sql
      # drop the dump from one week ago, if any
      if [ -f ${SAUV}${LASTWEEK}_mysql.sql.gz ]
      then
          rm -f ${SAUV}${LASTWEEK}_mysql.sql.gz
      fi

      Website

      #!/bin/sh
      # Daily tarball of the site; keeps only the latest archive
      SAUV=/home/bk/
      SITE=mywebsite
      DIR=/srv/www/mywebsite/public_html
      TODAY=$(date +%Y%m%d)
      LASTDAY=$(date --date '1 day ago' +%Y%m%d)
      tar -czf ${SAUV}${SITE}_${TODAY}.tar.gz ${DIR}
      # drop yesterday's archive, if any
      if [ -f ${SAUV}${SITE}_${LASTDAY}.tar.gz ]
      then
          rm -f ${SAUV}${SITE}_${LASTDAY}.tar.gz
      fi
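
      To have cron pick them up, drop the scripts into /etc/cron.daily and make them executable (file names are just examples; note that run-parts ignores names containing a dot, so no .sh extension):

      cp backup-mysql backup-site /etc/cron.daily/
      chmod +x /etc/cron.daily/backup-mysql /etc/cron.daily/backup-site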

      From time to time I run this script (upgrade the server, optimize mysql, quick health check):

      apt-get update
      apt-get upgrade
      apt-get dist-upgrade
      apt-get autoremove
      mysqlcheck --user=YOURUSER --password=YOURPASS --auto-repair --optimize --all-databases
      htop
      df -h
      free -m
      lastlog
      cat /var/log/syslog

      CONTENT DELIVERY NETWORK

      Adding a free CDN might be a good idea:

      • Distribute your content around the world so it’s closer to your visitors
      • Protect your website from a range of online threats (SQL injection, DDoS, etc.)
      • If your server is down, your website can still be served from the CDN cache

      SFTP CHROOT

      /etc/ssh/sshd_config

      UsePAM yes # must appear above the Match block
      Subsystem sftp internal-sftp
      Match Group www-data
        X11Forwarding no
        AllowTcpForwarding no
        ChrootDirectory %h
        ForceCommand internal-sftp

      service ssh restart

      useradd -d /srv/www/theproject -s /bin/false -g www-data theusername
      passwd theusername
      mkdir /srv/www/theproject
      chown root:root /srv/www/theproject # the chroot directory must be owned by root
      mkdir /srv/www/theproject/app
      chown theusername:www-data /srv/www/theproject/app
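
      You can then verify the chroot with a quick connection (user created above):

      sftp theusername@localhost
      # once logged in, "pwd" should report / (the chroot), not the real path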

      POSTFIX

      Just a quick note to disable the mailbox and message size limits, in main.cf:

      mailbox_size_limit = 0
      message_size_limit = 0
      virtual_mailbox_limit = 0

      Also make sure to add an AAAA record in your DNS for your IPv6 address, and set up its reverse DNS, so that gmail accepts your email. Set up OpenDKIM as explained here, and add an SPF record to your DNS (an example is shown below). Finally, if you are using aliases (hello@example.com forwarding to me@gmail.com), you need to install SRS: https://github.com/roehling/postsrsd. Yes, running a mail server is a pain; consider using an external service if you want to avoid 550-5.7.1 rejections from gmail and yahoo DMARC.
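
      An SPF record is just a TXT entry in your zone; something along these lines (domain and IP are placeholders, tighten the policy to your own setup):

      example.com.   IN   TXT   "v=spf1 mx a ip4:203.0.113.10 -all"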

      SYMFONY

      Install symfony

      chown -R www-data:www-data app/cache app/logs
      chmod -R 700 app/cache app/logs
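
      For the record, installing Symfony itself at the time went through Composer, roughly like this (the version pin and project path are assumptions for the 2.x era):

      composer create-project symfony/framework-standard-edition myproject/ "2.5.*"
      cd myproject
      php app/console --version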
