Tuning sysctl.conf file on Ubuntu

sysctl is used to modify kernel parameters at runtime. The parameters available are those listed under /proc/sys/. Procfs is required for sysctl support in Linux. You can use sysctl to both read and write sysctl data.
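For example, each dotted key corresponds one-to-one to a path under /proc/sys. A small sketch (the write commands are shown commented because they need root):

```shell
# A sysctl key maps to a /proc/sys path: dots become slashes.
key="net.ipv4.ip_forward"
path="/proc/sys/$(echo "$key" | tr . /)"
echo "$path"        # prints: /proc/sys/net/ipv4/ip_forward
# Read the current value (no root needed):
[ -r "$path" ] && cat "$path" || true
# Writing requires root, e.g.:
#   sudo sysctl -w net.ipv4.ip_forward=0
#   echo 0 | sudo tee "$path"
```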

/etc/sysctl.conf is the usual place to persist these settings; you can make the modifications below in it.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Ignore all ICMP ECHO and TIMESTAMP requests sent to it via broadcast/multicast
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_echo_ignore_all = 1

# Prevent against the common 'syn flood attack'
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_synack_retries = 3
net.ipv4.tcp_max_syn_backlog = 5120

# Connection tracking (on newer kernels these keys live under net.netfilter.nf_conntrack_*)
net.ipv4.netfilter.ip_conntrack_max = 196608
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv = 45

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1

# Accept redirects? No, this is not a router
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0

To load the settings, enter:
$ sudo sysctl -p

Nginx webserver configuration

The Nginx web server comes with a default website, but there will be business cases where you have to host more than one website or subdomain on your Nginx web server. In many cases, you might want to configure Nginx as a reverse proxy for multiple websites that are hosted on your upstream server, such as Apache. This article will guide you through common configurations that tune your Nginx server's performance and offer a first line of security. It assumes that you have Nginx installed on Ubuntu 16.04 LTS.

nginx.conf

Modify your nginx.conf with the directives below; the main context, the events { } block and the http { } block are all shown.

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
        worker_connections 10000;
        multi_accept on;
}
http {

    # Basic
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    types_hash_max_size 2048;

    # keepalive
    keepalive_requests 500;
    keepalive_timeout 65;
    
    # buffers
    client_body_buffer_size 100K;
    client_header_buffer_size 1k;
    client_max_body_size 25m;
    large_client_header_buffers 4 16k;

    #fastcgi
        fastcgi_buffers 8 16k;
        fastcgi_buffer_size 32k;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;

    # timeouts
    client_body_timeout 10;
    client_header_timeout 10;
    send_timeout 10;
    
    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
     
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    
    # Logging
    access_log /var/www/nginx_logs/access.log; # or off
    # access_log off;
    error_log /var/www/nginx_logs/error.log;

    # purge cache
    map $request_method $purge_method {
        PURGE 1;
        default 0;
        }
    
    # disable bots / restrict methods
    # NOTE: the 'if' directive is only valid in server and location context,
    # so move these blocks into your server { } blocks:
    # if ($http_user_agent ~* LWP::Simple|BBBike|wget) {
    #     return 403;
    # }

    # restrict header types
    add_header Allow "GET, POST, HEAD" always;
    # if ($request_method !~ ^(GET|HEAD|POST)$ ) {
    #     return 444;
    # }

    # security headers
    server_tokens off;
    # add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Content-Security-Policy "default-src 'self';";
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
    
    # Limit requests
    limit_conn_zone $binary_remote_addr zone=global_limit_conn_zone:10m;
    limit_req_zone $binary_remote_addr zone=global_limit_req_zone:10m rate=50r/s;

    # proxy cache
    # NOTE: purger=on requires the commercial NGINX Plus; drop that flag on open-source nginx
    proxy_cache_path /tmp/nginx levels=1:2 keys_zone=global_cache_zone:20m max_size=500m inactive=60m use_temp_path=off purger=on;
    proxy_cache_key "$scheme$request_method$host$proxy_host$request_uri";

    # SSL Settings
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE; prefer TLSv1.2-only where clients allow
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 10m;
      
    # gzip
    gzip             on;
    gzip_disable "msie6";
    gzip_comp_level  6;
    gzip_min_length  1000;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_proxied     expired no-cache no-store private auth;
    gzip_types        text/plain application/x-javascript text/xml text/css application/xml application/json application/javascript application/xml+rss text/javascript;

    # Virtual Hosts
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    # ..
}

Virtual Hosts

You can create individual configuration files for individual websites/subdomain sites in /etc/nginx/sites-available/.

$ sudo nano /etc/nginx/sites-available/www.example.com

Your site configuration file will typically contain only a server { } block. Below is a typical configuration that you can use. The configuration blocks have comments to help you understand what they mean.

server {
    listen 80;

    root /var/www/example.com/www.example.com;
    index index.php index.html index.htm;
    server_name example.com www.example.com;

    location / {
        # try_files $uri $uri/ /index.php;

        # ddos protection
        limit_req zone=global_limit_req_zone burst=20 nodelay;
        limit_req_log_level warn;
        limit_req_status 444;
        limit_conn global_limit_conn_zone 10;

        # deny IPs
        # deny 123.123.123.0/28;

        # proxy cache
        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_cache global_cache_zone;
        proxy_cache_min_uses 5;
        proxy_cache_bypass  $http_cache_control;
        proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;
        proxy_cache_methods GET HEAD POST;
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404      1m;
        proxy_cache_valid any 5m;
        proxy_no_cache $http_pragma $http_authorization;
        proxy_cache_purge $purge_method;  # requires NGINX Plus or the third-party ngx_cache_purge module
      
        # reverse proxy
        include proxy_params;
        proxy_pass http://127.0.0.1:8080;
        
    }

    location ~ \.php$ {
        # Uncomment to hand PHP off to PHP-FPM; left empty, this block
        # would serve .php files as plain text.
        # include snippets/fastcgi-php.conf;
        # fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
    
    # security headers
    server_tokens off;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Content-Security-Policy "default-src 'self';";
    
    # enables server-side protection from BEAST attacks
    ssl_prefer_server_ciphers on;
    ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4:@STRENGTH";
    
    # gzip responses
    gzip on;
    gzip_disable "msie6";
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_min_length 50000;
    gzip_proxied no-cache no-store private expired auth;

    # Expire rules for static content

    # cache.appcache, your document html and data
    location ~* \.(?:manifest|appcache|html?)$ {
      expires -1;
      # access_log logs/static.log; # I don't usually include a static log
    }

    # Feed
    location ~* \.(?:rss|atom)$ {
      expires 1h;
      add_header Cache-Control "public";
    }

    # Media: images, icons, video, audio, HTC
    location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|otf|ttf|eot|woff)$ {
        expires 1M;
        access_log off;
        tcp_nodelay off;
        add_header Vary Accept-Encoding;
        add_header Cache-Control "public";
      
        ## Set the OS file cache.
        open_file_cache max=3000 inactive=120s;
        open_file_cache_valid 45s;
        open_file_cache_min_uses 2;
        open_file_cache_errors off;
    }

    # CSS and Javascript
    location ~* \.(?:css|js)$ {
        expires 1w;
        access_log off;
        add_header Cache-Control "public";
    }
}

Once the virtual host file is added, link it into sites-enabled, test the configuration, and restart the nginx server.

$ sudo ln -s /etc/nginx/sites-available/www.example.com /etc/nginx/sites-enabled/www.example.com
$ sudo nginx -t
$ sudo service nginx restart

Setup ProFTPD on AWS Ubuntu Server

ProFTPD is a popular FTP server that can be configured to use FTPS (FTP over TLS), a secure alternative to plain FTP. This article will show you how to configure ProFTPD to use TLS to avoid the insecurity of plain-text FTP.

We will show you how to configure this on Ubuntu 16.04, but most distributions should work in a similar way.

Installation

The ProFTPd software is in Ubuntu’s default repositories. We can install it by typing:

$ sudo apt update
$ sudo apt install proftpd

ProFTPD can be run either as a service from inetd or as a standalone server. Each choice has its own benefits. With only a few FTP connections per day, it is probably better to run ProFTPD from inetd in order to save resources. On the other hand, with higher traffic, ProFTPD should run as a standalone server to avoid spawning a new process for each incoming connection. Choose "standalone" when prompted during installation.
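This choice maps to the ServerType directive in proftpd.conf; a minimal sketch of the standalone form:

```
# /etc/proftpd/proftpd.conf
ServerType standalone    # or: inetd
```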

Configurations

After the installation is done, you have to configure the server. The configuration is present in proftpd.conf.

$ sudo nano /etc/proftpd/proftpd.conf

Change the following attributes to the values given below.

  • UseIPv6 off
  • ServerName “MyFTPDServer”
  • DefaultRoot /var/www/
  • Port 990
  • PassivePorts 1024 1048
  • MasqueradeAddress xxx.xxx.xxx.xxx <- Your Elastic IP
  • RequireValidShell on
  • AuthOrder mod_auth_pam.c* mod_auth_unix.c
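Written out as a proftpd.conf fragment, the bullet list above becomes (keep MasqueradeAddress set to your own Elastic IP):

```
UseIPv6 off
ServerName "MyFTPDServer"
DefaultRoot /var/www/
Port 990
PassivePorts 1024 1048
MasqueradeAddress xxx.xxx.xxx.xxx
RequireValidShell on
AuthOrder mod_auth_pam.c* mod_auth_unix.c
```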

Save the file and restart the service.

$ sudo service proftpd restart

Creating users

A default proftpd user will be created automatically, but it is better to create another user to share among your developers.

Create a user and assign a password with these commands

$ sudo adduser ftpusername
$ sudo passwd ftpusername

Restrict the access of this user to /var/www/ only:

$ sudo usermod -m -d /var/www/ ftpusername

Add the user to the www-data group, so that it can update the files.

$ sudo usermod -aG www-data ftpusername

If the user can access the folder but cannot make any changes, also hand the web root over to the user and group.

$ sudo chown -R ftpusername:www-data /var/www/

You may use any FTP client, such as FileZilla or WinSCP, to connect.

Enabling TLS in ProFTPD

To run ProFTPD in TLS mode, you have to generate a key and certificate. You can generate them with the command below.

$ sudo openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/ssl/private/proftpdserverkey.pem -out /etc/ssl/certs/proftpdcertificate.pem -days 3650
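As a quick sanity check, you can issue a throwaway pair in a temp directory (no root needed) and inspect what was generated; the CN below is illustrative. For the real server, inspect the files created by the command above instead.

```shell
# Generate a throwaway self-signed key + certificate, then inspect it.
tmp=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -days 3650 -subj "/CN=ftp.example.com" 2>/dev/null
# Show the subject and validity window of the new certificate:
openssl x509 -in "$tmp/cert.pem" -noout -subject -dates
```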

TLS configuration is present in the tls.conf file. You have to enable it in the proftpd.conf file.

  • Include /etc/proftpd/tls.conf

Open up tls.conf to modify some configurations.

$ sudo nano /etc/proftpd/tls.conf

This file should contain the following configurations.

  • TLSRSACertificateFile /etc/ssl/certs/proftpdcertificate.pem
  • TLSRSACertificateKeyFile /etc/ssl/private/proftpdserverkey.pem
  • TLSEngine on
  • TLSLog /var/log/proftpd/tls.log
  • TLSProtocol TLSv1 TLSv1.1 TLSv1.2 (SSLv3 dropped; ref: POODLE)
  • TLSRequired off
  • TLSOptions NoCertRequest EnableDiags NoSessionReuseRequired
  • TLSVerifyClient off
  • TLSRenegotiate none

Save the file and restart the service.

$ sudo service proftpd restart

AWS security group

From your AWS console, add the ports that you have configured to the Inbound rules list of your security group. You may want to restrict the IPs from which these ports are reachable. In this article, we have used port 990 and the passive range 1024-1048; you can use other ports as per your choice/requirement.

Working with AWS CodeCommit on Ubuntu

AWS CodeCommit is a managed, Git-based source-code repository service offered by AWS.

I. Creating CodeCommit Repository

We can create a repository by going to the AWS Services menu and selecting the CodeCommit service. Click the "Create repository" button, enter the name of your repository, and click "Create repository" again. See http://docs.aws.amazon.com/codecommit/latest/userguide/getting-started-cc.html for more detailed info.

II. Installing Git client in local system

Once created, you can access the repository from your Linux system. In this example, we give you an overview of accessing the repository from Ubuntu. We need the Git client:

$ sudo apt-get install git

After Git is installed, we need to create an SSH key on Ubuntu and add the public key back into the CodeCommit repository. This is required to give access to the users to pull and push the code into the repository.

III. Creating IAM user in AWS

The user has to be created in AWS IAM to access the code repository.

  • Go to AWS IAM
  • Click on “Add User”
  • Enter the username.
  • Select Access Type as “Programmatic Access”
  • Click Permissions button and give access to “AmazonRDSFullAccess, AWSCodeCommitPowerUser, AmazonElastiCacheFullAccess, AmazonS3FullAccess” policies
  • Click on Review button
  • Click on Create button

The next screen will show the Access key ID and Secret access key. Make a note of these.

The user will be created and shown in the IAM Users list. Click on the user name to check the details.

IV. Creating SSH keys for AWS CodeCommit

In your Linux Terminal give the below command

$ ssh-keygen

Select a file name and enter a passphrase. Below is an example:

lightracers@lightracers-laptop:~$ ssh-keygen
 Generating public/private rsa key pair.
 Enter file in which to save the key (/home/lightracers/.ssh/id_rsa): /home/lightracers/.ssh/id_codecommit_rsa
 Created directory '/home/lightracers/.ssh'.
 Enter passphrase (empty for no passphrase):
 Enter same passphrase again:
 Your identification has been saved in /home/lightracers/.ssh/id_codecommit_rsa.
 Your public key has been saved in /home/lightracers/.ssh/id_codecommit_rsa.pub.
 The key fingerprint is:
 SHA256:lim0...................mKy7QnDPxR/2pELs lightracers@lightracers-laptop
 The key's randomart image is:
 +---[RSA 2048]----+
 |o.+oo. o |
 |*o .+ = |
 |=o+ = O . |
 |+ @ = + . |
 |o.O * X S o |
 | o + + B . |
 | . . E o |
 | . |
 | |
 +----[SHA256]-----+

The next step is to add this RSA public key to the IAM user's credentials. The public key is stored with a .pub extension next to the private key file you created earlier, e.g. /home/lightracers/.ssh/id_codecommit_rsa.pub:

$ cat /home/lightracers/.ssh/id_codecommit_rsa.pub

The output will be the public key:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDZ3nfWB+........O+ULf lightracers@lightracers-laptop

Go back to the IAM user details screen. Under the Security credentials tab, click the "Upload SSH public key" button, paste the copied public key into the field, and click "Upload SSH public key". This will generate an SSH Key ID. Create an SSH config file if you do not have one yet:

$ nano ~/.ssh/config

Copy the generated SSH Key ID and reference it in the ~/.ssh/config file. The file content will be:

Host git-codecommit.*.amazonaws.com
 User APKAXXXXXX
 IdentityFile ~/.ssh/id_codecommit_rsa

Go back to the terminal and give the following command:

$ ssh -v git-codecommit.ap-south-1.amazonaws.com

If connected successfully, you will get the success message.

You have successfully authenticated over SSH. You can use Git to interact with AWS CodeCommit. You can refer to http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-ssh-unixes.html for more details.

V. Creating the first branch on your repository

Go to your required folder and give the following command in your terminal

$ git clone ssh://git-codecommit.ap-south-1.amazonaws.com/v1/repos/MyDemoRepo my-demo-repo

This will clone your repository.
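Since this section is about your first branch, here is a sketch of creating one. It is shown in a throwaway local repository so it runs anywhere; in practice, run the checkout and push inside my-demo-repo (the branch name develop is just an example):

```shell
# Shown in a throwaway repo so the sketch is self-contained:
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email you@example.com
git config user.name "you"
git commit -q --allow-empty -m "initial commit"
git checkout -q -b develop         # create the branch and switch to it
git rev-parse --abbrev-ref HEAD    # prints: develop
# Publish it to CodeCommit with: git push -u origin develop
```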

VI. Useful Git commands

Below are some useful Git commands.

Adding files

$ git add .

Specific files

$ git add path/to/your/file.xyz

Commit command

$ git commit -m "commit message "

Git push command

$ git push origin master

Git pull command

$ git pull origin master

Stashing (setting aside) uncommitted changes

$ git stash
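Note that git stash sets uncommitted changes aside rather than deleting them; git stash pop brings them back. A sketch in a throwaway repository (paths and names are illustrative):

```shell
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email you@example.com
git config user.name "you"
echo v1 > file.txt && git add . && git commit -qm "first"
echo v2 > file.txt              # an uncommitted change
git stash >/dev/null            # working tree is back at the last commit
cat file.txt                    # prints: v1
# Restore the stashed change later with: git stash pop
```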

LAMP Commands for Ubuntu – Cheatsheet

You might have already installed Ubuntu on your system/server, so you already have L - Linux. Below are the commands you would require to set up the others.

Basic

sudo apt-get update
sudo apt-get upgrade

A-Apache

sudo apt-get install apache2
sudo a2enmod ssl
sudo a2enmod vhost_alias
sudo a2enmod rewrite
sudo service apache2 restart

M-MySQL 5.7

sudo apt-get install mysql-server mysql-client

P-PHP 7.1

sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.1 php7.1-mbstring php7.1-mcrypt php7.1-mysql 
sudo apt-get install php7.1-bcmath php7.1-xml php7.1-curl
sudo apt-get install php7.1-zip php7.1-gd php7.1-intl php7.1-soap php7.1-xmlrpc

Securing Apache

In apache2.conf

ServerSignature Off 
ServerTokens Prod

VHost Setup

We can set up virtual hosts in Apache. In /etc/apache2/sites-available, copy 000-default.conf to xxxhost.conf (rename as per your need, but keep the .conf extension so a2ensite can find it), then enable it with: sudo a2ensite xxxhost.conf

<VirtualHost *:80>
    ServerAdmin root@localhost
    DocumentRoot /var/www/website/www_website_com/
    ServerName www.website.com
    ServerAlias website.com
    ErrorLog /var/www/website/logs/www.website.com.error_log
    CustomLog /var/www/website/logs/www.website.com.access_log common
    <Directory /var/www/website/www_website_com/>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

Certbot (Lets Encrypt)

Note: Certbot does not work for localhost and IP based servers. A domain name is required.

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-apache
sudo certbot --apache

For obtaining the certificate only:

sudo certbot --apache certonly

For renewing SSL (the --dry-run flag tests the renewal without saving certificates):

sudo certbot renew --dry-run

Put certbot renew in cron.
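A sketch of a matching cron entry (edit the root crontab with sudo crontab -e; the schedule shown is illustrative, and certbot renew only replaces certificates that are close to expiry):

```
# min hour day month weekday  command
0 3 * * 1  certbot renew --quiet
```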