Monday, November 16, 2015

A few memory issues in Hadoop!

In a Hadoop setup (or really any Big Data setup), memory issues are not unexpected!

Here is an update on a couple of issues we have seen of late –

 

1. NameNode process gets stuck:

In this case, you will typically see the following symptoms –

a. The DataNode gives the following timeout error -

WARN ipc.Client: Exception encountered while connecting to the server : java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/<datanode-ip>:<datanode-port> remote=/<namenode-ip>:<namenode-port>]

ls: Failed on local exception: java.io.IOException: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/<datanode-ip>:<datanode-port> remote=/<namenode-ip>:<namenode-port>]; Host Details : local host is: "<datanode-ip>"; destination host is: "<namenode-ip>":<namenode-port>;

What this essentially means is that the DataNode process timed out while trying to connect to the NameNode. So the next step, obviously, is to check why the NameNode did not respond.

b. On checking the NameNode logs, we observed the following warning –

WARN org.apache.hadoop.util.JvmPauseMonitor (org.apache.hadoop.util.JvmPauseMonitor$Monitor@18df3f69): Detected pause in JVM or host machine (eg GC): pause of approximately 74135ms No GCs detected

This indicates that the NameNode paused for roughly 74 seconds, far longer than the expected 60000ms. It also explains why the DataNode did not get a response from the NameNode within the designated 60000ms.

The warning also indicates that the pause was not due to GC. Typically, GC can cause such 'stop the world' pauses, and if that is the case, it calls for memory profiling and GC tuning.
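If you do suspect GC, a quick way to confirm or rule it out is to enable GC logging for the NameNode JVM. Below is a minimal sketch for <HADOOP HOME>/conf/hadoop-env.sh; the log path is a hypothetical example, and the flags assume a pre-Java 9 JVM.

# Append GC logging flags to the NameNode JVM options (log path is illustrative)
export HADOOP_NAMENODE_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/hadoop/namenode-gc.log ${HADOOP_NAMENODE_OPTS}"

Long collections in the resulting log will line up with the JvmPauseMonitor warnings; if the log shows no GC activity during the pause (as in our case), look outside the JVM.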

However, in this case it turned out that CPU activity on the master node was very high due to another cron job. Once we sorted out the cron job, the issue was resolved.

 

2. DataNode process OOM:

Depending on the size of your data and the amount of data activity, you may observe an OOM error in the DataNode process once in a while.

A quick fix is to allocate more memory to the DataNode process. Typically, the following configuration change will help –

Update the value of HADOOP_DATANODE_HEAPSIZE in <HADOOP HOME>/conf/hadoop-env.sh
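For example, a minimal sketch in hadoop-env.sh; the value below is hypothetical (interpreted as MB on the setups we have used) and should be sized according to your block count and workload.

# Raise the DataNode heap (value in MB; 4096 is only an illustrative figure)
export HADOOP_DATANODE_HEAPSIZE=4096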

Also, it is advisable to configure the DataNode to generate a heap dump on an OOM error. That will help with further analysis of the heap if you hit the same error again.

(This is applicable to other processes as well – NN/RM/NM etc.)
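A minimal sketch of the heap-dump configuration in hadoop-env.sh; the dump path is a hypothetical example, and the same JVM flags can be added to the NameNode/ResourceManager/NodeManager options in the same way.

# Dump the heap to disk whenever the DataNode JVM hits an OutOfMemoryError
export HADOOP_DATANODE_OPTS="-XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/log/hadoop/datanode-heapdump.hprof ${HADOOP_DATANODE_OPTS}"

The resulting .hprof file can then be analyzed offline with a tool such as jhat or Eclipse MAT.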

 

Regards,

Sarang

 

Wednesday, June 17, 2015

EMR cluster and selection of EC2 instance type - Cost Optimization!

AWS Elastic MapReduce (EMR) is Amazon’s service providing Hadoop in the cloud.
EMR uses EC2 nodes as the Hadoop nodes. While launching an EMR cluster, we can choose an appropriate instance type based on the resource requirements and the profile of the Hadoop jobs.

In our case, we achieved a direct cost saving of 30% by moving our EMR cluster from m2.4xlarge to r3.2xlarge instances.
On top of that, our Hadoop jobs ran faster thanks to the newer-generation CPUs in the r3 instances and their SSD storage.
SSD is better for HDFS as well as for MapReduce jobs, because a MapReduce job reads/writes its intermediate data to local disk.

Earlier we used m2.4xlarge instances for all transient clusters, but as part of this optimization I observed that some jobs took hardly 15 minutes.
Since EMR billing is done at hourly rates, even if we use a cluster for only 20 minutes we end up paying for the full hour.
For such jobs, I moved to the smaller r3.xlarge instances, which resulted in cost savings of 65%!

We can push this cost saving up to 90% by moving our transient clusters to spot instances.
However, this requires more effort to make our workflows fault-tolerant and to avoid delays in job execution when spot instances are lost to price spikes.

In summary, appropriate instance type selection can significantly reduce your EMR bills!

The following page describes the available EC2 instance types and their pricing - http://aws.amazon.com/elasticmapreduce/pricing/
After comparing various instance types, I decided to move our transient clusters from m2.4xlarge to r3.2xlarge.

For example, let’s compare m2.4xlarge vs r3.2xlarge


             | m2.4xlarge        | r3.2xlarge
vCPU         | 8 cores           | 8 cores (new-generation CPU)
RAM          | 68.4 GB           | 61 GB
SSD          | No (2 x 840 GB)   | Yes (1 x 160 GB) - SSD is better for the performance of Hadoop jobs
EC2 price    | $0.980 per hour   | $0.700 per hour
EMR price    | $0.246 per hour   | $0.180 per hour
Total price  | $1.226 per hour   | $0.880 per hour (~30% cost saving)





Default Memory Allocation:
Another interesting aspect of EMR is that it assigns different default memory sizes to the various YARN containers (mapper, reducer, application master, etc.) depending on the instance type.
That means looking only at the raw resources of an instance type is not sufficient to decide which instance type to use for an EMR cluster.
m2.4xlarge

Configuration Option                 | Default Value
mapreduce.map.java.opts              | -Xmx1280m
mapreduce.reduce.java.opts           | -Xmx2304m
mapreduce.map.memory.mb              | 1536
mapreduce.reduce.memory.mb           | 2560
yarn.scheduler.minimum-allocation-mb | 256
yarn.scheduler.maximum-allocation-mb | 8192
yarn.nodemanager.resource.memory-mb  | 61440

r3.2xlarge

Configuration Option                 | Default Value
mapreduce.map.java.opts              | -Xmx2714m
mapreduce.reduce.java.opts           | -Xmx5428m
mapreduce.map.memory.mb              | 3392
mapreduce.reduce.memory.mb           | 6784
yarn.scheduler.minimum-allocation-mb | 3392
yarn.scheduler.maximum-allocation-mb | 54272
yarn.nodemanager.resource.memory-mb  | 54272


The following pages show the default memory allocations for various instance types:
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/HadoopMemoryDefault_H2.html
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/TaskConfiguration_H2.html
These defaults do not suit all of our Hadoop jobs, so we wrote a script to reduce the memory allocation on r3.2xlarge instances during bootstrap of the cluster.
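A minimal sketch of such a bootstrap script is shown below. The config path assumes the EMR AMI 3.x layout and the property values are hypothetical; it only illustrates the approach, not our exact script.

#!/bin/bash
# Bootstrap-action sketch: shrink the default r3.2xlarge mapper/reducer memory
# before the Hadoop daemons start. Path and values are assumptions.
set -e

MAPRED_SITE=/home/hadoop/conf/mapred-site.xml

add_property() {
  local name=$1 value=$2
  # Insert the property just before the closing </configuration> tag.
  sudo sed -i "s|</configuration>|  <property><name>${name}</name><value>${value}</value></property>\n</configuration>|" "$MAPRED_SITE"
}

# Illustrative overrides - tune these for your own jobs
add_property mapreduce.map.memory.mb    1536
add_property mapreduce.map.java.opts    -Xmx1280m
add_property mapreduce.reduce.memory.mb 3072
add_property mapreduce.reduce.java.opts -Xmx2560m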

Overall, we achieved direct cost savings of 30% to 90% by selecting appropriate EC2 instance types for our EMR clusters.

- Sarang Anajwala

Tuesday, May 26, 2015

Cost optimization through performance improvement of S3DistCp

We reduced the cost of running our production cluster by about 60%, shrinking it from 15 nodes to 6 nodes, through performance tuning of the AWS utility S3DistCp. This post provides some details.

What is S3DistCp

Apache DistCp is an open-source tool you can use to copy large amounts of data. DistCp uses MapReduce to copy in a distributed manner—sharing the copy, error handling, recovery, and reporting tasks across several servers. For more information about the Apache DistCp open source project, go to http://hadoop.apache.org/docs/r1.2.1/distcp.html.
S3DistCp is an extension of DistCp that is optimized to work with AWS, particularly Amazon S3.

Issue

By default, the AWS S3DistCp command can consume a very large share of the cluster's resources to transfer even a small file!
As mentioned above, S3DistCp uses MapReduce to copy files in a distributed manner. By default, a MapReduce job creates approximately as many reducers as there are available container slots in the cluster.
Hive optimizes this behavior by estimating the required number of reducers from the input data size; S3DistCp, however, sticks with the default MapReduce logic and triggers as many reducers as there are available slots.
Because of this, S3DistCp ends up creating a huge number of reducers on large clusters.

For example, in one instance it created 1 mapper and 498 reducers to transfer 694 KB of data!
This behavior eats up a lot of cluster resources and eventually slows down the overall throughput of the cluster.



Specifying the required number of reducers by passing the argument "-D mapreduce.job.reduces=${no_of_reduces}" limits the number of reducers triggered for a given S3DistCp operation.
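For illustration, here is a sketch of an invocation with the reducer count capped at 5. The jar path is a common location on older EMR AMIs and the bucket/paths are hypothetical, so adjust them for your own setup.

# Cap the copy job at 5 reducers instead of one per free container slot
hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar \
  -D mapreduce.job.reduces=5 \
  --src  s3://my-bucket/logs/2015/05/ \
  --dest hdfs:///data/logs/2015/05/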


After restricting the number of reducers to 5, the load on the cluster dropped and throughput increased significantly.

Immediately after deploying the fix

The spikes in cluster CPU utilization and the number of containers running in the cluster reduced significantly immediately after the fix was deployed.



After 10 days

In the 10 days after we deployed the S3DistCp fix, cluster performance improved significantly. (The green block in the graph indicates the improvement from the fix.)
We reduced the total production cluster size from 15 nodes to 6 nodes.
As a result, our average cluster utilization improved. (The yellow line in the graph shows utilization improving as we slowly reduced the node count from 15 to 6.)
We are monitoring cluster utilization and will slowly bring the cluster down to about 3 to 4 nodes in the coming days.

- Sarang Anajwala

Wednesday, April 29, 2015

Impact of NULL values on where-clause/group-by-clause in Hive queries

The following is a check to verify that NULL values do not impact GROUP BY, but they DO impact the WHERE clause.

Query: select count(*) from table1 where (field1 is NULL) AND dth >= '2014-12-01-00' AND dth <= '2014-12-01-23';
Result: 24

Query: select count(*) from table1 where (field1 != 'Value1') AND dth >= '2014-12-01-00' AND dth <= '2014-12-01-23';
Result: 1853517 (The correct count is 1853541; it is under-reported here because the 24 rows with NULL values are silently excluded by the != comparison.)

Query: select count(*) from table1 where (field1 = 'Value1') AND dth >= '2014-12-01-00' AND dth <= '2014-12-01-23';
Result: 142570
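If rows with a NULL field1 should be counted as 'not Value1', the NULL case has to be made explicit in the predicate. A minimal sketch, run via the Hive CLI, with the same table and column names as above:

hive -e "
  SELECT count(*) FROM table1
  WHERE (field1 != 'Value1' OR field1 IS NULL)
    AND dth >= '2014-12-01-00' AND dth <= '2014-12-01-23';
"

With the data above this should return 1853541 (1853517 plus the 24 NULL rows), matching the expected count.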

Query: select field1, count(*) from table1 where dth >= '2014-12-01-00' AND dth <= '2014-12-01-23' GROUP by field1;
Result:

field1                            | _c1 (count)
ValidateAuthorization             | 1196966
GetUserProfile                    | 470557
Authorize                         | 142570
SignIn                            | 86351
                                  | 26101
Register                          | 14726
GetUserEntitlement                | 12056
UpdateUser                        | 11529
GetChildApprovalStatus            | 9813
UserProfile                       | 9362
LogOut                            | 4974
LogOn                             | 3763
GetExtendedProfile                | 3060
GetAvaiableTrials                 | 1934
LinkAccounts                      | 1011
UpdateProfile                     | 387
ChangePassword                    | 337
SignUpChild                       | 206
UploadProfilePicture              | 173
CoppaSentinelValidate             | 80
GetChildrenForModerator           | 38
GetModeratorsForChild             | 37
NULL                              | 24
UpdateUserType                    | 16
LdapSignIn                        | 14
ModeratorApproval                 | 13
UpdateChildModeratorAccountStatus | 12
SignUp                            | 1

Wednesday, March 25, 2015

Hive - S3NativeFileSystem - Insert Overwrite Bug

We store all our data in S3. We create external tables pointing to the data in S3 and run hive queries on these tables.

In one of the use cases, we needed to update a table incrementally with incoming data.
The query was something like this –

INSERT OVERWRITE TABLE main_table1
SELECT DISTINCT my_field
FROM
(
    SELECT my_field
    FROM new_table1
    UNION ALL
    SELECT my_field
    FROM main_table1
) s;

Basically, we union the new data with the existing data and eliminate duplicates from that union.
The expected behavior is that 'main_table1' gets updated on each run with the new data from 'new_table1'.
What we observed instead is that the query overwrote 'main_table1' with only the data of 'new_table1'. In other words, on each run we would lose all the old data and only the new data would remain in the table.

The behavior is due to a bug in EMR's S3 library: the S3NativeFileSystem class deletes the S3 files backing 'main_table1' while the query plan is still being prepared!

Even a simple EXPLAIN statement for this insert-overwrite query deletes the data files in S3! This can result in SERIOUS DATA LOSS!

Not-So-Good-Solution:
Use a staging area (a tmp table) to store the results and then copy them from the tmp table to the main table.

CREATE TABLE tmp_table….

INSERT OVERWRITE TABLE tmp_table
SELECT DISTINCT my_field
FROM
(
    SELECT my_field
    FROM new_table1
    UNION ALL
    SELECT my_field
    FROM main_table1
) s;

INSERT OVERWRITE TABLE main_table1
SELECT my_field FROM tmp_table;

There is one problem with this solution, though. If the last INSERT OVERWRITE statement (INSERT OVERWRITE TABLE main_table1 SELECT my_field FROM tmp_table;) is stopped before it completes successfully, you lose all your data! (Remember - the S3 files are deleted while the query plan is being prepared!)

Stable Solution:
Use 'INSERT INTO' instead of 'INSERT OVERWRITE'.

INSERT INTO TABLE main_table1
SELECT t1.my_field FROM new_table1 t1
WHERE t1.my_field NOT IN (SELECT t2.my_field FROM main_table1 t2);

This ensures that 'main_table1' is updated incrementally, without using INSERT OVERWRITE!

- Sarang

Thursday, March 12, 2015

Tuning Yarn container for Oozie

Oozie is a popular workflow management tool for Big Data applications.
To give a high-level idea, the following is the container allocation for a typical Oozie workflow application with a Hive action.




If you are running a heavy job through Oozie, there is a chance that the YARN container which runs the Oozie launcher job (the 'Oozie workflow' container in the above image) will run out of memory.
The memory allocated to this container can be increased with the properties 'oozie.launcher.mapreduce.map.memory.mb' and 'oozie.launcher.mapreduce.map.java.opts'. The default container size is typically 1536 MB.
These properties can be set in the Oozie workflow definition (workflow.xml) to allocate additional memory to the container, as in the example below.

   <global>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <job-xml>/user/hive/conf/hive-default.xml</job-xml>
        <configuration>
            <property>
                <name>hive.metastore.uris</name>
                <value>${metastore}</value>
            </property>
            <property>
                <name>hive.metastore.client.socket.timeout</name>
                <value>3600</value>
            </property>
            <property>
                <name>mapred.reduce.tasks</name>
                <value>-1</value>
            </property>
            <property>
                <name>oozie.launcher.mapreduce.map.memory.mb</name>
                <value>3072</value>
            </property>
            <property>
                <name>oozie.launcher.mapreduce.map.java.opts</name>
                <value>-Xmx2560m</value>
            </property>          
            <property>
                <name>oozie.launcher.yarn.app.mapreduce.am.resource.mb</name>
                <value>3072</value>
            </property>
            <property>
                <name>oozie.launcher.yarn.app.mapreduce.am.command-opts</name>
                <value>-Xmx2560m</value>
            </property>
        </configuration>
    </global>

- Sarang

Thursday, February 26, 2015

Hive - dynamic partitions performance issue with S3/EMR

Problem:
We use Hive on Amazon EMR to parse our logs. Any query that inserts data into a table with a high number of partitions (8000+ in our case) takes a huge amount of time, because the insert operation loads all partitions every time.
This partition loading makes one call per partition to S3 and one call to the metastore!
For one of our jobs I went through the logs and found that about 95 minutes out of 102 minutes were spent loading partitions - almost 93% of the run time!

The following log lines show the partition loading -
5538245 [main] INFO  hive.ql.metadata.Hive  - New loading path = s3://<path>/dth=2011-09-04-01 with partSpec {dth=2011-09-04-01}
5538697 [main] INFO  hive.ql.metadata.Hive  - New loading path = s3://<path>/dth=2015-01-01-15 with partSpec {dth=2015-01-01-15}
5539151 [main] INFO  hive.ql.metadata.Hive  - New loading path = s3://<path>/dth=2014-11-16-04 with partSpec {dth=2014-11-16-04}
5539661 [main] INFO  hive.ql.metadata.Hive  - New loading path = s3://<path>/dth=2014-08-16-19 with partSpec {dth=2014-08-16-19}
5540109 [main] INFO  hive.ql.metadata.Hive  - New loading path = s3://<path>/dth=2014-12-15-06 with partSpec {dth=2014-12-15-06}
…………………………………
…………………………………
6152836 [main] INFO  org.apache.hadoop.hive.ql.exec.Task  - Loading partition {dth=2014-12-04-18}
6152888 [main] INFO  org.apache.hadoop.hive.ql.exec.Task  - Loading partition {dth=2015-01-13-19}
6152941 [main] INFO  org.apache.hadoop.hive.ql.exec.Task  - Loading partition {dth=2014-12-27-11}
6152994 [main] INFO  org.apache.hadoop.hive.ql.exec.Task  - Loading partition {dth=2014-08-25-14}
6153046 [main] INFO  org.apache.hadoop.hive.ql.exec.Task  - Loading partition {dth=2014-08-28-18}


Solution:
For all insert (/overwrite) queries, the insert should be performed into a new (empty) table pointing to a temporary staging location. Once the insert statement completes, the data files should be copied to the final destination using a file-system copy (hdfs cp / distcp).
This ensures the insert query completes quickly, since the destination table is empty and no partitions have to be loaded.
The file copy also completes quickly because it happens outside Hive, so no partition loading is involved.

Since we copy the data files for new partitions directly on the file system, and not through Hive, we have to recover these partitions so they are added to the Hive metastore; otherwise the new partitions will not be visible to the next job in the workflow that uses this table as input. A sketch of the overall flow is shown below.
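A minimal sketch of the staging flow, assuming a staging table and a final table that share the same partition column; all table names, paths, and columns here are hypothetical.

# 1. Insert into an empty staging table - no partitions to load, so this is fast.
hive -e "
  SET hive.exec.dynamic.partition.mode=nonstrict;
  INSERT OVERWRITE TABLE events_staging PARTITION (dth)
  SELECT user_id, payload, dth FROM raw_events;
"

# 2. Copy the new partition directories to the final table location outside Hive.
hadoop distcp s3://my-bucket/staging/events/ s3://my-bucket/warehouse/events/

# 3. Register the new partitions in the metastore so downstream jobs can see them.
hive -e "MSCK REPAIR TABLE events;"
# (On EMR's Hive, ALTER TABLE events RECOVER PARTITIONS; achieves the same.)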

In our case, we saw the performance of one particular job improve by almost 90%: its run time dropped from over 4 hours to 25-30 minutes.

- Sarang