Bachelor of Engineering, Computer Engineering @
University of Mumbai
Through my graduate curriculum and various other activities, I have realized that my interest lies in Computer Science and the endless possibilities it presents. My flair for the field, my drive to understand its implications for business, and my interest in the principles behind making solutions faster, better, and cheaper are the key motivations behind my desire to pursue a successful career in this field. My career objective is to grow in the software industry, with opportunities to design, implement, and manage distributed, scalable software systems that deliver efficient business solutions.
In my 1.5 years of experience with Deutsche Bank, I have been actively involved in Big Data, Java, and PL/SQL development. I am excited by the evolving, dynamic nature of the Hadoop ecosystem, with particular interest in Hive, Sentry, Shark, and HCatalog. Recent PoCs in these technologies have helped me gain an overall understanding of these components.
Software Engineer @ Deutsche Bank, June 2014 – Present (1 year 7 months)

Analyst @ Deutsche Bank, 2012 – June 2014 (2 years)

App Dev, Team: BigData, June 2013 – Present
Technologies used: Hive, Sentry, HCatalog, Shark, REST, Bash, JSON, Oozie, Python
- Developed a PoC on Sentry to enable fine-grained authorization on the Hive data warehouse
- Developed a PoC on HCatalog to make Hive metadata available to Pig and MapReduce
- Developed a PoC demonstrating the performance gains of using Shark in place of Hive for data analytics
- Automated cluster configuration using bash, a REST API, and JSON, saving 4.5 hours (a 90% reduction in effort) per cluster installation; a minimal sketch of the idea follows this list
- Stress-tested Oozie on CDH4 to ensure it could handle the current CDH3 data loads before upgrading the cluster from CDH3 to CDH4
- Gave clients a demo of HBase replication, snapshots, and CopyTable for replicating/transferring data between their UAT, Prod, and DR environments
- Responsibilities include installing and supporting clusters for customers using the Cloudera Distribution of Hadoop (CDH)
- Carry out performance benchmarks of the cluster to monitor its network, disk, and MapReduce throughput
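The configuration automation above was written in bash against a REST API with JSON payloads; the sketch below illustrates the same idea in Java (the main language elsewhere on this profile) using only the standard library. The host, endpoint path, property name, and credentials are hypothetical placeholders loosely modeled on a cluster-manager-style API, not the actual calls used on the project.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

/**
 * Minimal sketch: push one configuration property to a cluster-management
 * REST endpoint as JSON. Host, path, credentials, and the property are
 * hypothetical placeholders for illustration only.
 */
public class ClusterConfigUpdater {

    public static void main(String[] args) throws Exception {
        // Hypothetical management endpoint for a service's configuration.
        URL url = new URL("http://cm-host.example.com:7180/api/v1/clusters/cluster1/services/hive/config");

        // JSON payload: a single name/value configuration item.
        String payload = "{\"items\":[{\"name\":\"hive_warehouse_directory\","
                       + "\"value\":\"/user/hive/warehouse\"}]}";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");

        // Basic auth with placeholder credentials.
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // Report the HTTP status and response body for troubleshooting.
        System.out.println("HTTP " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```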
App Dev, Team: Dodd-Frank, June 2012 – May 2013
Technologies used: Java, PL/SQL, XML, XSD, JUnit4, utPLSQL
- Implemented an engine to parse and persist trade and risk data (an illustrative XML/XSD validation sketch follows these bullets)
- Designed the performance benchmarking strategy for PL/SQL write APIs
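The profile does not describe the internals of the parsing engine, so the sketch below only illustrates how the listed XML and XSD technologies typically fit together: validating an incoming trade file against its schema with the standard javax.xml.validation API before handing it to the persistence layer. The file names are hypothetical.

```java
import java.io.File;
import java.io.IOException;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

/**
 * Minimal sketch: validate an XML trade file against its XSD before parsing
 * and persisting it. File names are hypothetical placeholders.
 */
public class TradeFileValidator {

    public static boolean isValid(File xmlFile, File xsdFile) {
        try {
            SchemaFactory factory =
                    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(xsdFile);
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(xmlFile));
            return true;                      // document conforms to the schema
        } catch (SAXException e) {
            System.err.println("Validation failed: " + e.getMessage());
            return false;                     // structural or type violation
        } catch (IOException e) {
            System.err.println("Could not read input: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid(new File("trade.xml"), new File("trade.xsd")));
    }
}
```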
Technical Student A @ SAS
Platform Research and Development, Mid-Tier Platform, June 2011 – August 2011
- Helped increase the code coverage of two projects using JUnit4, EclEmma, Cobertura, and Spring Framework mock objects
- Project 1: 1,615 LOC; increased coverage by 24%
- Project 2: 4,962 LOC; increased coverage by 30%
- Implemented mock requests, responses, and filters to unit test the source code (a minimal sketch follows this list)
- Migrated JUnit3 tests to JUnit4
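As an illustration of the mock-based unit testing described above, the sketch below exercises a small servlet filter with JUnit4 and Spring's mock request, response, and filter chain. TokenCheckFilter is a toy filter written for this example; it is not the actual SAS code that was under test.

```java
import static org.junit.Assert.assertEquals;

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.junit.Test;
import org.springframework.mock.web.MockFilterChain;
import org.springframework.mock.web.MockHttpServletRequest;
import org.springframework.mock.web.MockHttpServletResponse;

/**
 * Minimal JUnit4 sketch of unit testing a servlet filter with Spring's mock
 * web objects. TokenCheckFilter is a toy class created for this example.
 */
public class TokenCheckFilterTest {

    /** Toy filter: rejects requests that lack an "X-Auth-Token" header. */
    static class TokenCheckFilter implements Filter {
        @Override public void init(FilterConfig config) { }
        @Override public void destroy() { }
        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest http = (HttpServletRequest) req;
            if (http.getHeader("X-Auth-Token") == null) {
                ((HttpServletResponse) res).setStatus(401);    // block the request
            } else {
                chain.doFilter(req, res);                       // let it through
            }
        }
    }

    @Test
    public void rejectsRequestWithoutToken() throws Exception {
        MockHttpServletRequest request = new MockHttpServletRequest("GET", "/api/data");
        MockHttpServletResponse response = new MockHttpServletResponse();

        new TokenCheckFilter().doFilter(request, response, new MockFilterChain());

        assertEquals(401, response.getStatus());
    }

    @Test
    public void allowsRequestWithToken() throws Exception {
        MockHttpServletRequest request = new MockHttpServletRequest("GET", "/api/data");
        request.addHeader("X-Auth-Token", "secret");
        MockHttpServletResponse response = new MockHttpServletResponse();

        new TokenCheckFilter().doFilter(request, response, new MockFilterChain());

        assertEquals(200, response.getStatus());   // default status: not blocked
    }
}
```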