Friday 31 May 2013

Configure an alternate JAVA

Get the new JDK, put it in /usr/lib/jvm, and run these commands:

$ update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk-7u21/bin/java" 1
$ update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk-7u21/bin/javac" 1
$ update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk-7u21/bin/javaws" 1

$ chmod a+x /usr/bin/java
$ chmod a+x /usr/bin/javac
$ chmod a+x /usr/bin/javaws
$ chown -R root:root /usr/lib/jvm/jdk-7u21

Read the output of the command below carefully and choose which Java you want.
$ update-alternatives --config java
There are 2 alternatives which provide `java'.

  Selection    Alternative
-----------------------------------------------
 +        1    /usr/lib/jvm/java-6-openjdk/jre/bin/java
*         2    /usr/lib/jvm/jdk-7u21/bin/java

Press enter to keep the default[*], or type selection number: (If you want Java 6, type 1; if you want Java 7, type 2.)

* These commands work fine on Debian/Ubuntu systems. I am not sure about Red Hat/CentOS.
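To confirm which JDK is actually active after switching, you can check the alternatives status and the reported version (a quick sanity check; the jdk-7u21 path is the one from the example above):

$ update-alternatives --display java   (lists all registered alternatives and marks the current one)
$ java -version                        (should report the JDK you selected, e.g. 1.7.0_21)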

Load Balancing using Apache mod_jk

After installing Apache, install the mod_jk module.

1. $ apt-get install libapache2-mod-jk

2. Create a file jk.conf (if not present) in the mods-available directory and add these lines:
JkWorkersFile   /etc/apache2/workers.properties
JkLogFile       /var/log/apache2/mod_jk.log
JkShmFile       /var/log/apache2/mod_jk.shm
JkLogLevel      error

After creating the file, you may need to symlink it into the mods-enabled directory; or simply disabling and re-enabling mod_jk will do:

$ a2dismod jk
$ a2enmod jk
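If you prefer the manual symlink route mentioned above instead of a2enmod, something like this should work (assuming the default /etc/apache2 layout and that the package installed jk.load in mods-available, as it normally does):

$ ln -s /etc/apache2/mods-available/jk.load /etc/apache2/mods-enabled/jk.load
$ ln -s /etc/apache2/mods-available/jk.conf /etc/apache2/mods-enabled/jk.conf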

3. Create a file /etc/apache2/workers.properties
and add the load-balancing configuration: define the workers, create a load balancer, and assemble the workers into it. In the configuration below, server1 will take 60% of the load and server2 will take 40%.
#
worker.list=loadbalancer

# The application server port where you want to forward the request (the AJP port)
worker.server1.port=8009
# The IP address of the backend server the request is forwarded to
worker.server1.host=server1 IP Address
# Protocol setting; with the mod_jk module it is always ajp13
worker.server1.type=ajp13
# How much of the load server1 will be given
worker.server1.lbfactor=60

worker.server2.port=8009
worker.server2.host=server2 IP Address
worker.server2.type=ajp13
worker.server2.lbfactor=40

worker.loadbalancer.type=lb
worker.loadbalancer.sticky_session=true
worker.loadbalancer.balance_workers=server1,server2

4. In the virtual host configuration of Apache
In the example below, all requests starting with /web/static/css, /web/static/js, or /web/static/images are JkUnMounted, so they are served from the Apache document root (which is /var/www); all other requests are JkMounted, so they are forwarded to the load balancer, which passes each request to either server1 or server2. This is done over the AJP protocol, so make sure you have configured an AJP connector in your application server, e.g. Tomcat (a sketch is shown after the virtual host example below). To configure Tomcat, you can also check this URL: Configuring Apache and Tomcat to use the Mod_jk connector for proxy passing


<VirtualHost *:80>
  ServerName example.com
  ServerAlias www.example.com
  DocumentRoot /var/www

  JkMount /* loadbalancer
  JkUnMount /web/static/css/* loadbalancer
  JkUnMount /web/static/js/* loadbalancer
  JkUnMount /web/static/images/* loadbalancer

  ErrorLog /var/log/apache/example-com-error_log
  CustomLog /var/log/apache/example-com-access_log combined
</VirtualHost>


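On the Tomcat side, the AJP endpoint that mod_jk forwards to is just a Connector in conf/server.xml. A minimal sketch (port 8009 matches workers.properties above; the redirectPort value is only illustrative):

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

Place it inside the <Service> element, next to the existing HTTP connector, and restart Tomcat.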

Apache performance tuning and security tuning

MaxKeepAliveRequests

It is the maximum number of requests to serve over a single TCP connection. If you set it to 100, clients with keepalive support will be forced to reconnect after downloading 100 items. The default in Apache is 100; you can increase it if you have enough memory on the system. If you are serving pages that contain a high number of images, keeping it high is better, because the already-open connections are then reused to serve the image requests.

KeepAliveTimeout

KeepAliveTimeout determines how long to wait for the next request. Set this to a low value, perhaps between two and five seconds. If it is set too high, child processes are tied up waiting for the client when they could be used for serving new clients.

MaxRequestsPerChild

The MaxRequestsPerChild directive sets the limit on the number of requests that an individual child server process will handle. After MaxRequestsPerChild requests, the child process will die. If it is set to 0 (the default), the child process never expires. It is appropriate to set this to a value of a few thousand. This can help prevent memory leaks from accumulating, since the process dies after serving a certain number of requests. Don't set it too low, since creating new processes has overhead.
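Putting the three directives together, an illustrative snippet for apache2.conf could look like this (the values follow the advice above and are only examples, not a recommendation for every workload):

# Reuse each keepalive connection for up to 100 requests
MaxKeepAliveRequests 100
# Do not tie up a child waiting long for the next request
KeepAliveTimeout 3
# Recycle each child after a few thousand requests to contain memory leaks
MaxRequestsPerChild 4000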

Proper use of MPM (Multi-Processing Module)

I have already explained this at this URL: Configuring Apache/Tomcat for serving Maximum number of requests
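Just as a quick illustration of where those MPM settings live (prefork MPM, Apache 2.2 style; the values are placeholders, see the linked post for how to size them):

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
</IfModule>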

Security tweaks

1. ServerTokens
This directive configures what you return as the Server HTTP response header. The default is 'Full', which sends information about the OS type and the compiled-in modules.
# Set to one of:  Full | OS | Minimal | Minor | Major | Prod
Full conveys the most information and Prod the least; you can also write it as "ProductOnly" (the long form of Prod), which is the best choice.

ServerTokens ProductOnly

2. ServerSignature
Optionally adds a line containing the server version and virtual host name to server-generated pages.
# Set to one of:  On | Off | EMail
You can set it to "EMail" to also include a mailto: link to the ServerAdmin; it is better to set it to Off.

ServerSignature Off

3. TraceEnable 
This enables or disables the TRACE method.
# Set to one of:  On | Off | extended
Set it to "extended" to also reflect the request body; it is best to set it to Off.

TraceEnable Off
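On Debian/Ubuntu these three directives usually live together in the stock security configuration file (e.g. /etc/apache2/conf.d/security on older releases; the exact path may differ), so the hardened combination discussed above is simply:

ServerTokens ProductOnly
ServerSignature Off
TraceEnable Off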

Monday 27 May 2013

Why does nginx usually throw 403 Forbidden?


A. This problem is mostly because the user running nginx doesn't have access to that resource.

Open the file /etc/nginx/nginx.conf
------------------
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
        worker_connections 768;
        # multi_accept on;
}
http {
    server {
        listen          80;
        server_name     www.healthcaremagic.com;
        access_log      /var/log/nginx/localhost.access.log;
        index           index.html;
        root            /var/www;
    }
}

--------------------
Try these:
1. Open nginx.conf and locate the user directive (change it to whichever user has access; www-data is usually fine).
2. The nginx master process runs as the user who started the nginx service (usually root), but the worker processes that actually serve the content run as the user configured in nginx.conf. Note that this user only needs read access to the directories set with the root directive.
3. Go to the directory you set as root in the server/location context (/var/www in the example above) and check the permissions; ls -al will show them.
4. You can change the ownership of files and directories with the command "chown -R username:groupname directoryName", as in the sketch below.
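For example, to give the www-data user from the config above read access to the document root, something along these lines usually clears the 403 (paths and user are taken from the example; adjust for your setup):

$ ls -al /var/www                        (check the current owner and permissions)
$ chown -R www-data:www-data /var/www    (make the nginx worker user the owner)
$ chmod -R a+rX /var/www                 (or just ensure files are readable and directories traversable)
$ /etc/init.d/nginx reload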

Friday 17 May 2013

Create a user in MySQL


Create an admin user who can access anything from anywhere

mysql> grant all privileges on *.* to 'admin'@'%' identified by 'password';

Create a user who can access any database from a network location

mysql> grant all privileges on *.* to 'admin'@'192.168.%' identified by 'password';

* If somehow this doesn't work, execute this and flush the privileges so the change takes effect:
mysql> update mysql.user set Host='192.168.%' where User='admin';
mysql> flush privileges;

Create a user who can only access from localhost
mysql> grant all privileges on *.* to 'admin'@'localhost' identified by 'password';

Create a user who can access only one specific database
mysql> grant all privileges on dbname.* to 'admin'@'localhost' identified by 'password';

Here is a description of every part of the above command:

grant all privileges - grants the permissions (and creates the user if it doesn't exist)
on dbname.* - the database and table access restriction (*.* means all databases, dbname.* means only one database)
to 'admin'@'localhost' - the first quoted string is the username, the second is the host restriction, i.e. from where the user may connect; in the current example only connections from localhost are allowed
identified by 'password'; - the password required to connect to the MySQL server
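To double-check what a user actually ended up with, you can list its grants (using the admin@localhost user from the example above):

mysql> show grants for 'admin'@'localhost';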

Note :
Do not confuse this with the "bind-address" setting in the MySQL configuration file, which controls the network interface MySQL listens on; click here to read more about "bind-address" and access points.

Wednesday 8 May 2013

Secure your website with SSL - guidelines and experience


1.
First generate the key file
$ openssl genrsa -des3 -out server.key 2048
It will ask for a pass phrase, which will also be needed later to start the web server, so keep it safe.

2.
Now generate the CSR (Certificate Signing Request) file
$ openssl req -new -key server.key -out server.csr

This asks for information such as Location, Company Name, and Common Name. It is better to skip the "challenge password". Be careful when entering the Common Name: it has to be your domain name.

If you serve your users at www.example.com, the Common Name should be "www.example.com". Once a certificate is issued for www.example.com, it won't be valid for example.com. If you want to secure the site both with and without www, there is a specific option you'll have to choose at the time of buying the certificate. If you want to secure all subdomains, there is a different option (a wildcard certificate) as well. Depending on the number of subdomains you want to secure, the cost will also vary. As of today Verisign charges about $400 USD for one domain, $600 for with and without www, and around $1500 USD for securing unlimited subdomains.
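Before submitting the CSR to the CA, it is worth verifying that the details (especially the Common Name) are correct; these standard openssl commands will print them:

$ openssl req -noout -text -in server.csr    (review the subject, Common Name and key size)
$ openssl rsa -noout -check -in server.key   (sanity-check the private key; it will ask for the pass phrase)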

3. Now use this CSR to obtain the certificate (the .crt file) from a CA (certificate authority) such as Verisign (the costliest) or GoDaddy (the cheapest, maybe $10 USD).

4. Once you buy the SSL certificate, the vendor will guide you on how to get the certificates. It's very simple.

5. In the case of Verisign, they take an average of 2 to 4 days for the entire process, as they validate "CSR Verification", "Proof of Organization" and "Proof of Domain Registration".
They may also ask for company registration certificates as part of the process. But if you buy from GoDaddy there is no verification process; based only on the CSR file they will issue you the certificate within a minute.

6. At the time of downloading the certificates, make sure you also download the intermediate certificate. Intermediate certificates connect the certificate chain; in a few browsers, users might see unwanted error messages if the intermediate certificate is not installed.

7. To deploy the certificates, copy these 3 files to the following places and restart Apache:

$ cp server.key /etc/ssl/private/
$ cp example.com.crt /etc/ssl/certs/
$ cp intermediate.crt /etc/ssl/certs/


8. Now make the changes in Apache.
Enable the SSL module; on Debian-based systems (e.g. Ubuntu) you can use the command a2enmod ssl.
Go to the virtual host configuration and add these lines:

SSLEngine on
SSLProtocol -all +TLSv1 +SSLv3

SSLCertificateKeyFile /etc/ssl/private/server.key
SSLCertificateFile /etc/ssl/certs/example.com.crt
SSLCertificateChainFile /etc/ssl/certs/intermediate.crt
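For context, here is a minimal sketch of where those lines sit in a complete HTTPS virtual host (the domain, document root and file names reuse the examples above):

<VirtualHost *:443>
  ServerName example.com
  DocumentRoot /var/www

  SSLEngine on
  SSLProtocol -all +TLSv1 +SSLv3
  SSLCertificateKeyFile /etc/ssl/private/server.key
  SSLCertificateFile /etc/ssl/certs/example.com.crt
  SSLCertificateChainFile /etc/ssl/certs/intermediate.crt
</VirtualHost>

Apache also has to listen on port 443; on Debian the stock ports.conf normally adds "Listen 443" once the SSL module is enabled.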
     
$ /etc/init.d/apache2 restart (It will ask for the pass phrase that you created at step 1)
- and it's done :)
9. To validate that everything was done properly, there are several websites you can check; one is http://www.sslshopper.com/ssl-checker.html